# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] tags=[]
# # Decision tree based models
#
# This week we will use the <a href='https://archive.ics.uci.edu/ml/machine-learning-databases/00603/in-vehicle-coupon-recommendation.csv'>vehicle coupon recommendation dataset</a>. Our goal is to classify, based on people's driving habits, whether they would accept a vehicle coupon or not.
# -
# ## I. Prepare dataset
#
# 1. Load the `in-vehicle-coupon-recommendation.csv` dataset
# 2. Search for missing values and if needed, handle them!
# 3. Encode the non numeric variables into numeric ones! For the binary
# features simply encode them as ($0$/$1$). Do not create two separate
# columns for them! You'll have to use the description of the dataset
# provided at its download location!
# ## II. Train & visualize decision tree classifier
#
# 1. Train a **decision tree classifier** using the `sklearn` API
# - Use its default parameters
# - Use all the data
# 2. Visualize the decision tree, with the *Gini impurities* also showing on the
# plot. The `plot_tree` function in `sklearn` will be really helpful. You
# may or may not need to tune its arguments to get a reasonable result.
# 3. Manually check for the labels and for an arbitrary feature whether the
# returned *Gini impurities* are correct
# 4. In a few sentences, discuss the results
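For the manual check in task 3, the per-node Gini impurity can be reproduced by hand; a minimal sketch on toy labels (not the coupon data):

```python
import numpy as np

def gini_impurity(labels):
    """Gini impurity of a label array: 1 - sum over classes of p_k**2."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - float(np.sum(p ** 2))

# a perfectly pure node has impurity 0.0; a 50/50 binary node has 0.5
print(gini_impurity([0, 0, 1, 1]))  # 0.5
```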
# ## III. Random forest feature importance
#
# 1. Train a random forest classifier on all the data using the sklearn API
# - Use default values again, but fix the `random_state` to $57$!
# 2. Plot the importance values of the $10$ most important features
# - Create a bar plot where the height of the bar is the feature importance
# - The `feature_importances_` attribute is helpful
# ## IV. Evaluation
#
# 1. Generate prediction probabilities with a **decision tree** and with a
# **random forest model**
# - Use $5$-fold cross validation for both models
# - Use default parameters for both models
# 2. Compare the two models with ROC curves
#     - Why does the shape of the decision tree's ROC curve look different?
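One way to see why the tree's curve looks angular: a default (fully grown) decision tree outputs mostly 0/1 probabilities, so `roc_curve` has very few thresholds to sweep, while a forest averages many trees and produces many distinct scores. A toy sketch with made-up scores (not the coupon data):

```python
from sklearn.metrics import roc_curve

y_true = [0, 0, 1, 1]
tree_scores = [0.0, 1.0, 0.0, 1.0]     # near-binary predict_proba output
forest_scores = [0.1, 0.4, 0.35, 0.8]  # averaged over trees -> many distinct values
fpr_t, tpr_t, thr_tree = roc_curve(y_true, tree_scores)
fpr_f, tpr_f, thr_forest = roc_curve(y_true, forest_scores)
# the forest yields more thresholds, hence a smoother-looking curve
print(len(thr_tree), len(thr_forest))
```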
# ## V. Tuning model
#
# 1. Using $80\%$ - $20\%$ train-test split generate predictions for a **random
# forest model**
# - Set the `random_state` parameter for every run to $57$ for the
# train-test split and for the Random Forest Classifier as well!
# 2. Plot the AUC as a function of the number of trees in the forest for both
# the training and the test data!
# 3. Do we experience overfitting if we use too many trees?
# ### Hints:
#
# - In total you can get $10$ points for fully completing all tasks.
# - Decorate your notebook with questions, explanations, etc., to make it
# self-contained and understandable!
# - Comment your code when necessary
# - Write functions for repetitive tasks!
# - Use the `pandas` package for data loading and handling
# - Use `matplotlib` and `seaborn` for plotting or `bokeh` and `plotly` for
# interactive investigation.
# - Use the `scikit-learn` package for almost everything
# - Use for loops only if it is really necessary!
# - Code sharing is not allowed between students! Sharing code will
# result in zero points.
# - If you use code found on web, it is OK, **but, make its source clear**!
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + colab={} colab_type="code" id="vTs11WubmRhV"
# download data (-q is the quiet mode)
# ! wget -q https://github.com/CISC-372/Notebook/releases/download/a1/test.csv -O test.csv
# ! wget -q https://github.com/CISC-372/Notebook/releases/download/a1/train.csv -O train.csv
# you can tune the model (search for the best hyper-parameter settings) automatically if we have a narrow range of hyper-parameters to search over
# + colab={"base_uri": "https://localhost:8080/", "height": 299} colab_type="code" id="jrsga6qkouO1" outputId="0afd0d67-f590-4ed3-e5d8-77168e3a44bb"
import pandas as pd
# The dataset contains rental house information, where each data sample (data row) represents a rental post
# we can do the data pre-processing with pandas or build it into the pipeline for hyper-parameter tuning
Xy_train = pd.read_csv('train.csv', engine='python')
X_train = Xy_train.drop(columns=['price_rating'])
y_train = Xy_train['price_rating'] # a Series (single brackets), so sklearn's fit() does not warn about a column-vector y
print('training', len(X_train))
#Xy_train.price_rating.hist()
X_test = pd.read_csv('test.csv', engine='python')
testing_ids = X_test.Id
print('testing', len(X_test))
# Note: the pre-processing steps are split up; below is some basic pre-processing done directly through the Pandas dataframes
# In the original training and testing datasets, the attributes 'security_deposit' and 'extra_people' have '$' as a prefix and ',' as a thousands separator, and 'host_response_rate' has '%' as a suffix
# so we remove the '$', ',', '%' and convert those strings into float values
X_train['security_deposit'] = Xy_train['security_deposit'].replace({r'\$': '', ',': ''}, regex=True).astype(float)
X_train['extra_people'] = Xy_train['extra_people'].replace({r'\$': '', ',': ''}, regex=True).astype(float)
X_train['host_response_rate'] = Xy_train['host_response_rate'].replace({'%': ''}, regex=True).astype(float) / 100 # divide by 100 to convert the '%' representation to an ordinary fraction
# The attributes 'security_deposit', 'extra_people', and 'host_response_rate' in X_test must also be converted into float values
# so that our model can be applied to the testing set
X_test['security_deposit'] = X_test['security_deposit'].replace({r'\$': '', ',': ''}, regex=True).astype(float)
X_test['extra_people'] = X_test['extra_people'].replace({r'\$': '', ',': ''}, regex=True).astype(float)
X_test['host_response_rate'] = X_test['host_response_rate'].replace({'%': ''}, regex=True).astype(float) / 100
# For the date attributes 'host_since' and 'last_scraped', we can combine them to create a new numeric feature to add to the feature space
training_days_active = pd.to_datetime(Xy_train['last_scraped']) - pd.to_datetime(Xy_train['host_since']) # a pandas Series holding the number of days each host has been on the platform
testing_days_active = pd.to_datetime(X_test['last_scraped']) - pd.to_datetime(X_test['host_since'])
# Create a new numeric feature, named 'host_days_active', based on the 'host_since' and 'last_scraped'
X_train['host_days_active'] = training_days_active.dt.days # .dt.days extracts whole days; astype('timedelta64[D]') is no longer supported in newer pandas
X_test['host_days_active'] = testing_days_active.dt.days
# For the date attributes 'first_review' and 'last_review', we can combine them to create a new numeric feature to add to the feature space
training_review_active = pd.to_datetime(Xy_train['last_review']) - pd.to_datetime(Xy_train['first_review']) # a pandas Series holding the number of days over which reviews were written for each listing
testing_review_active = pd.to_datetime(X_test['last_review']) - pd.to_datetime(X_test['first_review'])
# Create a new numeric feature, named 'review_active', based on the 'first_review' and 'last_review'
X_train['review_active'] = training_review_active.dt.days
X_test['review_active'] = testing_review_active.dt.days
# -
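As a quick sanity check of the day-difference features above, the same subtraction on toy dates (hypothetical values; `.dt.days` is one way to extract whole days from a timedelta Series):

```python
import pandas as pd

# difference of two datetime Series gives a timedelta Series
delta = pd.to_datetime(pd.Series(['2020-03-10'])) - pd.to_datetime(pd.Series(['2020-03-01']))
print(delta.dt.days.iloc[0])  # 9
```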
# # Manual tuning approach:
# 1. Split the training set into 2 subsets (as the training set has the known target attribute values), one subset for training/building the model, and the other one for evaluating the model as the 1st validation set
# 2. Then, based on the model's performance on the 1st validation set, we can do hyper-parameter tuning to adjust the model
# 3. Once a new model with good hyper-parameter settings is obtained, we retrain it on the entire training set (putting the 2 subsets back together), i.e. we adjust the model again based on its performance on Xy_train (but we should not change the hyper-parameter settings, as we have already optimized them)
# 4. Then, we can apply our model to the 2nd validation set (the testing set on the public leaderboard)
#
# # Semi-auto tuning approach: (using the hold-out method)
# 1. split Xy_train into a training set and a 1st validation set
# 2. pick a range of hyper-parameters (e.g. regularization, learning rate, etc.)
# 3. training set -> build all the models based on the hyper-parameter ranges we chose
# 4. validation set -> evaluate all the models we obtained in step 3 -> adjust the hyper-parameter ranges and change the model (go back to step 2 if needed)
# 5. train a new model using the chosen hyper-parameters on Xy_train, and evaluate on X_test
#
# # Semi-auto tuning approach: (using the cross-validation method)
# 1. pick a range of hyper-parameters (e.g. pre-processing, data selection, regularization, learning rate, etc.)
# 2. train/evaluate models using CV on the training set, Xy_train
# 3. based on the results of the CV, adjust the hyper-parameter ranges or change the model (go back to step 2 if needed)
# 4. train a new model using the chosen/ideal hyper-parameters on Xy_train, and evaluate on X_test
#
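The hold-out approach above can be sketched end to end; a minimal, self-contained example on synthetic data (the model and candidate values are hypothetical, not the actual search used below):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=300, n_classes=3, n_informative=5, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.2, random_state=0)  # step 1

best_C, best_score = None, -1.0
for C in [0.01, 0.1, 1.0, 10.0]:                                    # step 2: candidate values
    model = LogisticRegression(C=C, max_iter=1000).fit(X_tr, y_tr)  # step 3: build each model
    score = model.score(X_val, y_val)                               # step 4: validation accuracy
    if score > best_score:
        best_C, best_score = C, score
final_model = LogisticRegression(C=best_C, max_iter=1000).fit(X, y)  # step 5: refit on all data
```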
# + colab={"base_uri": "https://localhost:8080/", "height": 173} colab_type="code" id="3vFDJ3lNmE6P" outputId="ea3e89a7-c3e7-45cb-d567-3679a8209a6a"
# model training and tuning
import numpy as np
from sklearn.compose import ColumnTransformer
from sklearn.datasets import fetch_openml
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler, Normalizer, OneHotEncoder,OrdinalEncoder
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split, GridSearchCV
from xgboost.sklearn import XGBClassifier # needed by the XGB pipelines further below
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV
from sklearn.experimental import enable_iterative_imputer
from sklearn.impute import IterativeImputer
np.random.seed(0) # set the randomizer to default, so we can get the same sequence of data rows in each fold of CV every time we re-run the code
# so we can better analyze the performance of the model as the impact brought by the randomization in CV is removed
# select needed data attributes for classification purpose (as the target attribute is categorical data [0,1,2])
# and we can select different pre-processing techniques for different types of attributes (numeric vs categorical)
# so we need to treat different types of attributes separately
# increase model performance by selecting more attributes
numeric_features = ['bedrooms', 'review_scores_location','host_total_listings_count', 'availability_60','accommodates', 'beds', 'bathrooms',
'availability_90','guests_included', 'minimum_nights','maximum_nights', 'review_scores_rating',
'reviews_per_month','availability_365','availability_30','review_scores_accuracy','review_scores_value',
'review_scores_cleanliness','security_deposit','extra_people','review_scores_communication',
'review_scores_checkin', 'minimum_nights_avg_ntm', 'maximum_nights_avg_ntm',
'number_of_reviews_ltm', 'host_response_rate',
'calculated_host_listings_count_entire_homes', 'calculated_host_listings_count', 'number_of_reviews','calculated_host_listings_count_private_rooms', 'calculated_host_listings_count_shared_rooms',
'minimum_minimum_nights','maximum_minimum_nights','minimum_maximum_nights','maximum_maximum_nights',
# 'host_days_active','review_active', Note: these two attributes are commented out as adding them into the training set reduces the model's performance
] # select needed numeric features from the training dataset
# Define a transformer/pre-processor for numeric attributes
# Pipeline() lets us chain multiple pre-processing steps, and in each step we can transform the features (we can have as many steps as we want)
# Also, we need to give a name to each step in the pipeline, as we refer to these names when specifying the hyper-parameter ranges to tune later
numeric_transformer = Pipeline(steps=[
('iterative_imputer', IterativeImputer(max_iter=10, random_state=0)),
# ('imputer', SimpleImputer(strategy='median')), # the first step is called 'imputer', which replaces the missing value with the median attribute value, but it did not yield a better performance than 'iterative_imputer'
('scaler', StandardScaler())]) # the second step is called 'scaler'; it standardizes each numeric column to zero mean and unit variance
# select categorical features
categorical_features = [
'property_type', 'is_business_travel_ready', 'room_type', 'bed_type', 'is_location_exact','host_identity_verified',
'host_response_time','require_guest_profile_picture','require_guest_phone_verification','has_availability',
'cancellation_policy','host_is_superhost','instant_bookable',
'calendar_updated', 'requires_license',
]
categorical_transformer = Pipeline(steps=[
('imputer', SimpleImputer(strategy='constant', fill_value='missing')), # replace all the missing values with the constant string 'missing'
# note the step names in different pipelines can be the same
# ('ordinal', OrdinalEncoder(handle_unknown='use_encoded_value', unknown_value=-1)) # try ordinal encoder
('onehot', OneHotEncoder(handle_unknown='ignore'))
])
# example of the one-hot encoder: if 'property_type' has the values [apt, house, room],
# then after encoding we get three separate features, each taking the value 0 or 1:
# property_type_apt = 0/1, where 1 means the observation has 'apt' as its property_type and 0 means any other property_type
# property_type_house = [0,1]
# property_type_room = [0,1]
# ColumnTransformer transforms each of the selected data columns (it will transform all the selected attributes in the training and testing sets)
preprocessor = ColumnTransformer( # apply categorical_transformer to categorical_features, and apply numeric_transformer to numeric_features
transformers=[ # define list of transformer we want
('num', numeric_transformer, numeric_features), # give a name 'num' to the numeric_transformer to pre-process the numeric_features
('cat', categorical_transformer, categorical_features)]) # give a name 'cat' to the categorical_transformer to pre-process the categorical_features
# note that each transformer requires an input list of features, and
# we have 2 different transformers because we have different pre-processing techniques for different types of attributes (categorical vs numeric)
# and you can have as many transformers as you want in the ColumnTransformer()
# define the whole pipeline of building/training the classifier/model by combining the pre-processor to the model building process
regr = Pipeline(steps=[('preprocessor', preprocessor), # the first step does the pre-processing using the preprocessor defined above; it processes each feature with the transformer of its type group
                       #('standardscaler', StandardScaler(copy=False, with_mean=False)), # standardization of the training set; not needed, as it is already in the preprocessor
                       ('normalizer', Normalizer()), # the second step integrates normalization into the model building process to avoid overfitting
                       ('classifier', LogisticRegression(random_state=123, multi_class='multinomial'))]) # the third step chooses a Logistic Regression model for the classification problem
# Feature selection: select the needed non-target attributes from the updated training and testing sets
X_train = X_train[[*numeric_features, *categorical_features]] # [*numeric_features, *categorical_features] merges two independent lists into one, instead of merging into a list of lists
X_test = X_test[[*numeric_features, *categorical_features]]
# `__` separates a pipeline step name from one of that step's parameters
# (e.g. `classifier__max_iter` means the `max_iter` parameter of the `classifier`
# step, which here is our logistic regression)
# try RandomizedSearchCV
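The `__` path convention can be checked on a toy pipeline (hypothetical step names, unrelated to the preprocessor above):

```python
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

toy = Pipeline(steps=[('imputer', SimpleImputer()), ('scaler', StandardScaler())])
toy.set_params(imputer__strategy='median')    # <step name>__<parameter name>
print(toy.get_params()['imputer__strategy'])  # 'median'
```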
distributions = { # set the range of (hyper-)parameters we want to search,
'preprocessor__num__iterative_imputer__max_iter': range(10,20), # The default value of 'max_iter' is 10, and there is no need to extend further the range(10,20) as the best setting for the 'max_iter' is always below 18
# 'preprocessor__num__imputer__strategy': ['mean','median','most_frequent'], # search the optimal hyper-parameter setting for the pre-processor 'imputer', but it did not yield a better model performance than 'iterative_imputer'
'classifier__max_iter': range(200,400), # default value is 100; we set the range to (200,400) as training longer can avoid underfitting
'classifier__solver': ['newton-cg','lbfgs', 'sag', 'saga' ], # search which algorithm is the best to be used in the optimization problem
'classifier__tol': [1e-4,1e-5,], # note that '1e-5' tends to not converge when the 'max_iter' is reached
'classifier__class_weight': ['balanced',None], # the default value is None, but it turns out that 'balanced' is the best setting for this parameter in most of situations
'normalizer__norm': ['l1','l2','max'] # 'max' turns out to be the best setting for the regularization strategy
#'classifier__penalty': ['l2','none','elasticnet'] #see if regularization needed, the default value is 'l2', and it turns out tuning this parameter is redundant as we already have a normalizer() in the search space
}
# Adjustment log (previous records/logs are lost due to some data storage problem):
# 9th tuning: remove Normalizer() from regr to see if it is redundant
# result: Performance decreases dramatically (need to add regularization back), and 'balanced' should be the best parameter setting for 'class_weight'
# 10th tuning: add Normalizer back, extend 'max_iter' to range(200,400)
# result: Performance did not improve, may need to change the model architecture or remove some attributes from the training set
# fit the model on the full training dataset using CV, namely step 4 in the Semi-auto tuning approach
random_search_log = RandomizedSearchCV(regr, distributions, n_iter=40, random_state=0, scoring='f1_micro', # micro-averaged F1 is chosen; for single-label multiclass data it equals plain accuracy
                                       n_jobs=-1, cv=5, verbose=1)
# n_jobs = -1 means using all the CPU processors, random_state=0 to ensure we get the same result/performance each time we run this cell of code
random_search_log.fit(X_train, y_train)
print('best score {}'.format(random_search_log.best_score_))
# -
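A side note on the scoring choice: for single-label multiclass data, micro-averaged F1 coincides with plain accuracy, which can be checked on toy labels (made-up values):

```python
from sklearn.metrics import accuracy_score, f1_score

y_true = [0, 1, 2, 2, 1]
y_hat = [0, 2, 2, 2, 1]
# prints the micro-F1, which equals the accuracy (0.8 here: 4 of 5 correct)
print(f1_score(y_true, y_hat, average='micro'))
```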
# Get feedback: see what the best (hyper-)parameter settings are in the search space we specified above
random_search_log.best_params_
# Get feedback: determine the performance of the model on the training set; the evaluation metric is set to 'accuracy' (as we want to check the model's accuracy first)
from sklearn.model_selection import cross_validate
scoring = ['accuracy']
scores = cross_validate(random_search_log.best_estimator_, X_train, y_train, scoring=scoring, cv=5, n_jobs=-1)
scores # print the scores
# + colab={} colab_type="code" id="PF6WrzdKmJ97"
# Prediction & generating the submission file
y_pred = random_search_log.predict(X_test) # generate the predictions on the testing set using the model/classifier we trained/tuned above
pd.DataFrame( #construct a dataframe with the 'Id' value of the observations and the predictions of testing 'price_rating' values we gained above, then export the dataframe as a CSV file that can be submitted to the Kaggle leaderboard
{'Id': testing_ids, 'price_rating':y_pred}).to_csv('sample_submission.csv', index=False)
# -
# # Define another pipeline for building/training an XGB classifier/model using RandomizedSearchCV
# +
regr = Pipeline(steps=[('preprocessor', preprocessor), # the first step does the pre-processing using the preprocessor defined above; it processes each feature with the transformer of its type group
                       # ('normalizer', Normalizer()), # optional second step: integrate normalization into the model building process to avoid overfitting
                       ('classifier', XGBClassifier( # the next step chooses the model architecture we will use
                           seed=1, ))]) # fix the seed for reproducibility
param_random = { # the range of hyper-parameters we want to search
'preprocessor__num__iterative_imputer__max_iter': range(10,20),# The default value of 'max_iter' is 10,
# 'preprocessor__num__imputer__strategy': ['mean'],#'median','most_frequent'], # search the optimal hyper-parameter setting for the pre-processor 'imputer', but it failed to yield a better model performance than 'iterative_imputer'
# 'preprocessor__cat__onehot__drop': ['first','if_binary',None], #Tuning the categorical pre-processor, the default value is 'None', and it turns out that 'None' is the one yielding the best model performance in most of the situations
'classifier__objective': ['multi:softmax', 'multi:softprob'], # 'rank:pairwise' and 'rank:map' were also tried when tuning the objective function, but it turns out 'multi:softmax' is the best parameter setting in most of the cases
#'classifier__eval_metric': ['merror','map','mlogloss','aucpr'], # Tuning this hyperparameter does not affect the performance of the model at all
'classifier__max_depth': range(6,30), # The default value is 6, so we make it more flexible by extending the max_depth from the default value 6 to 30
'classifier__n_estimators': range(200,400), # The default value is 100, so we make it more flexible by extending the number of estimators from the default value 100 to the range(200,400)
'classifier__colsample_bynode': np.arange(0.0, 1.1, 0.1), # the ranges of colsample_bynode, colsample_bylevel, and colsample_bytree are all (0,1], so we use np.arange() rather than range()
'classifier__colsample_bytree': np.arange(0.0, 1.1, 0.1), # np.arange(0.0, 1.1, 0.1) gives an array of float values from 0.0 to 1.0 in increments of 0.1
'classifier__colsample_bylevel': np.arange(0.0, 1.1, 0.1),
# try np.arange(0.0,1.0,0.05) to see if it give better performance than (0.0, 1.1, 0.1), but it turns out that (0.0,1.0,0.05) does not give a better performance
# 'classifier__colsample_bynode': np.arange(0.0,1.0,0.05), # the ranges of colsample_bynode, colsample_bylevel, and colsample_bytree are all (0,1], so we use np.arange() rather than range()
# 'classifier__colsample_bytree': np.arange(0.0,1.0,0.05), # np.arange(0.0,1.0,0.05) gives an array of float values from 0.0 to 0.95 in increments of 0.05
# 'classifier__colsample_bylevel': np.arange(0.0,1.0,0.05),
# 'classifier__booster':['gbtree', 'gblinear', 'dart'], # check which booster performs better, and it turns out that the default 'gbtree' is always better than the other two boosters
'classifier__min_child_weight': range(0,10), # the larger the min_child_weight(default value is 1) and max_delta_step(default value is 0) values are, the more conservative the algorithm will be
# 'classifier__max_delta_step': range(0,10), # However, tuning the 'max_delta_step' tends to deteriorate the model's performance
'classifier__eta': np.arange(0.01,0.2,0.01), # adjust the learning rate; the performance of the model is worse when we tune the learning rate in a range above the default value 0.3, such as (0.4,1.0,0.1)
# 'classifier__scale_pos_weight': range(1,10), # we adjust the balance of positive and negative weights, the default value is 1, but tuning this parameter is not recommended by the system, and it did not improve the model's performance too
'classifier__gamma': range(0,10), # default value of gamma is 0, and the larger gamma is, the more conservative the algorithm will be
# 'classifier__tree_method': ['auto','hist'], # the default value is 'auto', and it turns out that the default value will yield the best performance of the model in most of the situations
# 'normalizer__norm': ['max'], # ,'l1','l2'], 'max' turns out to be the best setting for the regularization strategy; however, it later turns out that tuning the model's regularization parameters works better than tuning this Normalizer in the model's pipeline
'classifier__lambda': range(1,6), # L2 regularization term on weights, the default value is 1, and there is no need to extend further the range(1,6) as the best setting for the 'lambda' is always below 3
'classifier__alpha': range(0,6), # L1 regularization term on weights, the default value is 0, and there is no need to extend further the range(0,6) as the best setting for the 'alpha' is always below 3
}
# Adjustment log (previous records/logs are lost due to some data storage problem):
# 24th tuning: remove 'l1','l2' from normalizer__norm, remove classifier__colsample_bylevel, replace 'exact' with 'hist' in 'tree_method' to see if performance increase
# result: performance (accuracy score) did not improve (may because we did not tune 'colsample_bylevel'), and 'auto' is still the best setting for 'tree_method' parameter
# 25th tuning: put back classifier__colsample_bylevel for tuning, change 'eta' tuning range from (0.1,0.3,0.1) to (0.01,0.2,0.01)
# result: performance in both training set and the validation set improve to 73.09%
# 26th tuning: Same parameter setting with 25th tuning, but add 'host_days_active' and 'review_active' attributes into the feature space
# result: performance decreases
# 27th tuning: change 'eta' from (0.01,0.2,0.01) back to (0.1,0.3,0.1)
# result: performance did not improve (may need to remove the added 'host_days_active' and 'review_active' attributes)
# 28th: remove Normalizer() parameter and 'host_days_active' and 'review_active' attributes
# result: Performances in both training set and the validation set improve to 73.5%
# 29th: Tuning the model's regularization parameter 'lambda' and 'alpha'
# result: Performance on both training set and the validation set improve to 73.6% (The end of Kaggle competition)
# Originally, I used GridSearchCV() for the XGBoost model training, but it takes 50 mins to train and can only search a few (hyper-)parameters in one run, so I shifted to RandomizedSearchCV
random_search = RandomizedSearchCV( # pass the model pipeline and the ranges of (hyper-)parameters we want to search as arguments to RandomizedSearchCV()
regr, param_random, cv=5, verbose=1, n_jobs=-1, # cv=5 means we have 5 folds for the CV, n_jobs = -1 means using all CPU processors
n_iter=35,random_state=1, # 'n_iter'=35 means that 35 parameter settings are randomly sampled, and so we will have 35 models that will go through the 5-fold cross-validation
scoring='f1_micro')
random_search.fit(X_train, y_train)
print('best score {}'.format(random_search.best_score_))
# -
# Get feedback: see what the best (hyper-)parameter settings are in the search space we specified above
random_search.best_params_
# Get feedback: determine the performance of the model on the training set; the evaluation metric is set to 'accuracy'
scoring = ['accuracy']
scores = cross_validate(random_search.best_estimator_, X_train, y_train, scoring=scoring, cv=5, n_jobs=-1)
scores
# Prediction & generating the submission file
y_pred = random_search.best_estimator_.predict(X_test) # generate the predictions on the testing set using the model/classifier we trained/tuned above
pd.DataFrame( #construct a dataframe with the 'Id' value of the observations and the predictions of testing 'price_rating' values we gained above, then export the dataframe as a CSV file that can be submitted to the Kaggle leaderboard
{'Id': testing_ids, 'price_rating':y_pred}).to_csv('sample_submission.csv', index=False)
#
# # Define another pipeline for building/training an XGB classifier/model using GridSearchCV, for comparison purposes
#
# + jupyter={"outputs_hidden": true}
regr = Pipeline(steps=[('preprocessor', preprocessor), # the first step does the pre-processing using the preprocessor defined above; it processes each feature with the transformer of its type group
                       ('normalizer', Normalizer()), # the second step integrates normalization into the model building process to avoid overfitting
                       ('classifier', XGBClassifier( # the third step chooses the model architecture we will use
seed=1, num_class=3 ))]) # set the number of classes, num_class=3
grid_para = {'preprocessor__num__iterative_imputer__max_iter':[15],
'classifier__objective': ['multi:softmax', ],#'multi:softprob'],
'classifier__max_depth': [6,20],
'classifier__n_estimators': [200,300,],
'classifier__colsample_bynode': [0.4,0.7,0.9], # the ranges of colsample_bynode, colsample_bylevel, and colsample_bytree are all (0,1],
'classifier__colsample_bytree': [0.4,0.7,0.9],
# 'classifier__colsample_bylevel': [0.4,0.7,0.9], these parameters are commented out as tuning them will increase the training time significantly
# 'classifier__min_child_weight': [1,3,7],
# 'classifier__max_delta_step': [1,3,7],
# 'classifier__eta': [0.1,0.3,0.5]
}
grid_search = GridSearchCV(
regr, grid_para, cv=5, verbose=3, n_jobs=-1,
scoring='f1_micro') # plain 'f1' assumes binary targets and fails on this 3-class problem, so micro-averaged F1 is used as before
grid_search.fit(X_train, y_train)
print('best score {}'.format(grid_search.best_score_))
# The disadvantage of GridSearchCV is apparent:
# the number of (hyper-)parameters, and the number of candidate values per parameter, that we can tune in one run is much smaller than with RandomizedSearchCV
# and if we want to tune more parameters, the total number of fits will be much higher than the number of fits required by RandomizedSearchCV with n_iter set reasonably
# Thus, the time taken to complete GridSearchCV is much longer than for RandomizedSearchCV if we want to tune an adequate number of (hyper-)parameters
# The performances of the models obtained from GridSearchCV and RandomizedSearchCV tend to be similar if a sufficient amount of training time is given to both methods,
# while the performance obtained from RandomizedSearchCV is likely to be better than the one obtained from GridSearchCV if the training time is limited
# -
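The fit counts behind the comparison above can be tallied directly from the two search configurations (approximate, counting only the active entries in each parameter dict):

```python
# grid_para active options: max_iter(1) x objective(1) x max_depth(2) x n_estimators(2)
#                           x colsample_bynode(3) x colsample_bytree(3) = 36 combinations
grid_fits = 5 * (1 * 1 * 2 * 2 * 3 * 3)  # cv=5 folds for every combination
random_fits = 5 * 35                     # cv=5 folds x n_iter=35 sampled settings
print(grid_fits, random_fits)            # 180 vs 175, yet the grid covers far fewer parameters
```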
scoring = ['accuracy'] # you can include as many scores as you want
scores = cross_validate(grid_search.best_estimator_, X_train, y_train, scoring=scoring, cv=5, n_jobs=-1)
scores
# # Define another pipeline for building/training a different classifier/model using Bayesian optimization
# +
from skopt import BayesSearchCV
# parameter ranges are specified by one of below
from skopt.space import Real, Categorical, Integer
from sklearn.svm import SVC
regr = Pipeline(steps=[('preprocessor', preprocessor), # the first step does the pre-processing using the preprocessor defined above; it processes each feature with the transformer of its type group
                       ('normalizer', Normalizer()), # the second step integrates normalization into the model building process to avoid overfitting
                       ('classifier', SVC( # the third step chooses the model architecture (an SVC) whose search space we will explore
                           max_iter=10000 ))])
param_grid = { # the range of hyper-parameters we want to search
'preprocessor__num__iterative_imputer__max_iter': Integer(5, 20),
# 'preprocessor__num__imputer__strategy': ['mean','median','most_frequent'], # search the optimal hyper-parameter setting for the pre-processor
'classifier__kernel': Categorical(['linear','poly', 'rbf', 'sigmoid']), # 'precomputed' is omitted, as it requires a precomputed square kernel matrix rather than a feature matrix
'classifier__gamma': Categorical(['auto','scale']),
# note that for Integer and Real, we only need to supply the lower and upper bounds (inclusive), and random values will be sampled uniformly according to the range we set
'classifier__degree': Integer(1,10), # set degree for poly kernel use only
'classifier__coef0': Integer(0,5), # only used in 'poly' and 'sigmoid'
'classifier__tol': Real(1e-4,1e-3),
'classifier__decision_function_shape': Categorical(['ovr','ovo']), # default value is 'ovr'; 'ovo' cannot be used when break_ties=True
# 'classifier__break_ties': Categorical([True, False]), # This parameter can only be True when decision_function_shape='ovr', so the default value is False
'classifier__cache_size': Integer(200,300), # Specify the size of the kernel cache size (MB), the default value is 200
'normalizer__norm': Categorical(['l1','l2','max']),
}
# adjustment log
# 1st tuning result: 72% on the training set
# 2nd tuning: added the parameters 'decision_function_shape', 'break_ties' and 'cache_size' to the search
#     result: performance did not improve; False is the best setting for 'break_ties'
# 3rd tuning: removed 'break_ties'
#     result: performance did not improve; we may need to change the model architecture or remove/add attributes from/to the training set; 'ovr' should be the best setting for 'decision_function_shape'
bayes_search = BayesSearchCV( # pass the model pipeline and the hyper-parameter search space to BayesSearchCV
    regr, param_grid, n_iter=50, n_points=10,
    cv=5, random_state=0, verbose=1, n_jobs=-1, iid=True, # cv=5 means 5-fold cross-validation; n_jobs=-1 uses all CPU cores
    scoring='f1_micro')
bayes_search.fit(X_train, y_train)
print('best score {}'.format(bayes_search.best_score_))
# -
bayes_search.best_params_
scoring = ['accuracy']
scores = cross_validate(bayes_search.best_estimator_, X_train, y_train, scoring=scoring, cv=5, n_jobs=-1)
scores
# Prediction & generating the submission file
y_pred = bayes_search.predict(X_test) # generate the predictions on the testing set with using the model/classifier we trained/tuned above
pd.DataFrame( # build a dataframe of the test observations' 'Id' values and the predicted 'price_rating' values, then export it as a CSV file that can be submitted to the Kaggle leaderboard
    {'Id': testing_ids, 'price_rating': y_pred}).to_csv('sample_submission.csv', index=False)
# # Bayesian optimization is slightly better than Randomized Search on this problem: Bayesian Search took about 21 minutes to finish the hyper-parameter optimization while Randomized Search took about 24 minutes (3 minutes more), and the two reached similar scores (both around 72%)
# +
# SVM with RandomizedSearchCV for comparison purpose
regr = Pipeline(steps=[('preprocessor', preprocessor), # step 1: preprocess each feature group with the transformer defined for it above
                       ('normalizer', Normalizer()), # step 2: scale each sample to unit norm
                       ('classifier', SVC( # step 3: the model whose hyper-parameter space we will search
                           random_state=1, max_iter=10000))])
param_svc = { # the range of hyper-parameters we want to search
'preprocessor__num__iterative_imputer__max_iter': range(10,20),
# 'preprocessor__num__imputer__strategy': ['mean','median','most_frequent'], # search the optimal hyper-parameter setting for the pre-processor
'classifier__kernel': ['linear', 'poly', 'rbf', 'sigmoid'], # 'precomputed' is omitted because it requires a precomputed kernel matrix rather than raw features
'classifier__gamma': ['auto','scale'],
'classifier__degree': range(1,10), # set degree for poly kernel use only, the default value is 3
'classifier__coef0': range(0,10), # only used by the 'poly' and 'sigmoid' kernels; the default value is 0
'classifier__tol': [1e-4,1e-3], # default value is 1e-3
'classifier__decision_function_shape': ['ovr','ovo'], # the default is 'ovr'; 'ovo' cannot be combined with break_ties=True
# 'classifier__break_ties': [True, False], # This parameter can only be 'True' when decision_function_shape='ovr', so the default value is 'False'. However, tuning this parameter is redundant as 'False' turns out to be the best parameter setting for 'break_ties'
'classifier__cache_size': [200,300], # Specify the size of the kernel cache size (MB), the default value is 200
'normalizer__norm': ['l1','l2','max'],
}
random_search_svc = RandomizedSearchCV( # pass the model pipeline and the ranges of (hyper-)parameters we want to search as arguments to RandomizedSearchCV()
regr, param_svc, cv=5, verbose=3, n_jobs=-1, # cv=5 means we have 5 folds for the CV, n_jobs = -1 means using all CPU processors
n_iter=35,random_state=1, # 'n_iter'=35 means that 35 parameter settings are randomly sampled, and so we have 35 models that will go through the 5-fold cross-validation
scoring='f1_micro')
random_search_svc.fit(X_train, y_train)
print('best score {}'.format(random_search_svc.best_score_))
# -
random_search_svc.best_params_
scoring = ['accuracy']
scores = cross_validate(random_search_svc.best_estimator_, X_train, y_train, scoring=scoring, cv=5, n_jobs=-1)
scores
# +
# Note: This is a patch copied from https://github.com/scikit-optimize/scikit-optimize/issues/978
# because in the newest skopt version, the parameter 'iid' is removed from BayesSearchCV(), and if we still want to run the code properly, this patch needs to be run first
def bayes_search_CV_init(self, estimator, search_spaces, optimizer_kwargs=None,
n_iter=50, scoring=None, fit_params=None, n_jobs=1,
n_points=1, iid=True, refit=True, cv=None, verbose=0,
pre_dispatch='2*n_jobs', random_state=None,
error_score='raise', return_train_score=False):
self.search_spaces = search_spaces
self.n_iter = n_iter
self.n_points = n_points
self.random_state = random_state
self.optimizer_kwargs = optimizer_kwargs
self._check_search_space(self.search_spaces)
self.fit_params = fit_params
super(BayesSearchCV, self).__init__(
estimator=estimator, scoring=scoring,
n_jobs=n_jobs, refit=refit, cv=cv, verbose=verbose,
pre_dispatch=pre_dispatch, error_score=error_score,
return_train_score=return_train_score)
BayesSearchCV.__init__ = bayes_search_CV_init
# -
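The same idea — re-binding `__init__` so a class keeps accepting a keyword argument that a newer release removed — can be sketched generically in plain Python (the `Searcher` class and its arguments are made up for illustration):

```python
class Searcher:
    """Stands in for a library class whose newer release dropped a kwarg."""
    def __init__(self, estimator, n_iter=50):
        self.estimator = estimator
        self.n_iter = n_iter

# keep a reference to the original initializer so the patch can delegate to it
_original_init = Searcher.__init__

def _patched_init(self, estimator, n_iter=50, iid=True):
    # accept (and simply store) the removed keyword, then delegate
    self.iid = iid
    _original_init(self, estimator, n_iter=n_iter)

Searcher.__init__ = _patched_init

s = Searcher('svc', n_iter=10, iid=False)  # old-style call still works
print(s.n_iter, s.iid)
```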
#
# ### Note: Answers to the word questions in this assignment are documented in the 'A1 original_script' notebook
# ### Below is a cell of code trying to implement LGBMClassifier, but I failed to get it running on my MacBook, as I would need to download and install several additional pieces of software and did not have that much time
# ### (So you can ignore the code below, though I believe it can be run if you have the 'lightgbm' package)
# + jupyter={"outputs_hidden": true}
# define another pipeline of building/training a different classifier/model
from lightgbm import LGBMClassifier
regr = Pipeline(steps=[('preprocessor', preprocessor), # step 1: preprocess each feature group with the transformer defined for it above
                       ('normalizer', Normalizer()), # step 2: scale each sample to unit norm
                       ('classifier', LGBMClassifier( # step 3: the model whose hyper-parameter space we will search
                           objective='multiclass'))]) # the objective is 'multiclass' because the label has three classes
distribution = { # the range of hyper-parameters we want to search
'preprocessor__num__iterative_imputer__max_iter': range(5,20),# use [5,10,15], in GridSearchCV
# 'preprocessor__num__imputer__strategy': ['mean'],#'median','most_frequent'], # search the optimal hyper-parameter setting for the pre-processor
'classifier__boosting_type': ['gbdt', 'dart','goss','rf'],#'rank:pairwise','rank:map' ],
#'classifier__eval_metric': ['merror','map','mlogloss','aucpr'], # merror = Multiclass classification error rate
'classifier__max_depth': range(5,20), # use [6, 10], in GridSearchCV
'classifier__n_estimators': range(100,250), # use [100,200] in GridSearchCV; note the parameter name is 'n_estimators', not 'n_estimator'
'classifier__colsample_bytree': [0.4, 0.6, 0.8, 1.0], # colsample values must lie in (0, 1]; range(0,1) would only yield 0
'classifier__colsample_bylevel': [0.4, 0.6, 0.8, 1.0], # note: 'colsample_bylevel' is an XGBoost parameter and may not be accepted by LGBMClassifier
'normalizer__norm': ['max'] # 'l1','l2',
}
random_search = RandomizedSearchCV( # pass the model pipeline and the hyper-parameter search space
    regr, distribution, cv=5, verbose=1, n_jobs=-1, # cv=5 means 5-fold cross-validation; n_jobs=-1 uses all CPU cores
    n_iter=60, random_state=0,
    scoring='f1_micro') # 'f1' only supports binary targets; a multiclass average such as 'f1_micro' is needed here
random_search.fit(X_train, y_train)
print('best score {}'.format(random_search.best_score_))
# -
random_search.best_params_
scoring = ['accuracy'] # you can include as many scores as you want
scores = cross_validate(random_search.best_estimator_, X_train, y_train, scoring=scoring, cv=5, n_jobs=-1)
scores
| A1_example.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### with multithreading
# +
import requests
import concurrent.futures
import csv
import os
api_url = 'https://www.alphavantage.co/query?'
api_key =os.environ['ALPHA_ADVANTAGE']
params = {'function':'TIME_SERIES_DAILY_ADJUSTED',
'outputsize':'full',
'datatype':'csv',
'apikey': api_key}
with open("./nasdaqlisted.txt", 'r') as f:
stock_listed = f.read()
symbols = [line.split('|')[0] for line in stock_listed.split("\n")[1:10]]
# -
def download_data_full(symbol):
'''
download the full-length time series of
up to 20 years of historical data for a symbol
'''
params['symbol'] = symbol
response = requests.get(api_url, params=params)
    if response.status_code != 200:
        print('[!] HTTP {0} calling [{1}]'.format(response.status_code, api_url))
        return symbol, []
    text = response.content.decode('utf-8')
    rows = [row.split(',') for row in text.split('\r\n')]
    return symbol, rows
# %%timeit
with concurrent.futures.ThreadPoolExecutor(max_workers=4) as executor:
result = executor.map(download_data_full, symbols[:10])
result = list(result)
# ### without multithreading
# %%timeit
for symbol in symbols:
try:
        params['symbol'] = symbol
        response = requests.get(api_url, params=params)
        if response.status_code != 200:
            print('[!] HTTP {0} calling [{1}]'.format(response.status_code, api_url))
            continue
        text = response.content.decode('utf-8')
        result = [row.split(',') for row in text.split('\r\n')]
except Exception as e:
print(e)
| notebooks/multithread_benchmark.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + pycharm={"name": "#%%\n"}
from matplotlib import cm
# %config InlineBackend.figure_format = 'retina'
import numpy as np
from matplotlib import pyplot as plt
import pandas as pd
from faceit.faceit import Faceit
faceit = Faceit()
nickname = "-mblw-"
# nickname = "_zhk"
# nickname = "-huracan--"
player = faceit.player(nickname)
statistics = faceit.matches_stats(player.player_id)
table = []
for index, stats in enumerate(statistics):
if index % 100 == 0:
print(f"Get match details done for {index + 1}")
# print(f"Get {index + 1} match details for {stats.match_id}")
match = faceit.match(stats.match_id)
for teammate in match.get_players_team(player):
entry = {
"match_id": match.match_id,
"date": match.date,
"nickname": teammate.nickname,
"elo": teammate.elo,
}
if teammate == player:
entry["kd"] = stats.info.kills
table.append(entry)
df = pd.DataFrame(table)
fig = plt.figure()
ax = fig.add_subplot(111)
cmap = cm.get_cmap('jet')
group_by_date = df.groupby(by=pd.Grouper(key="date", freq='Y'))
for index, (name, group) in enumerate(group_by_date):
group_elo = group.groupby(by="match_id")\
.aggregate({"elo": "mean"})\
.reset_index()
group_elo.columns = ["match_id", "mean_elo"]
group_merged = group.dropna() \
.merge(group_elo, on="match_id", how="outer") \
.sort_values(by=["mean_elo"]) \
.reset_index()
group_merged["level"] = (group_merged["mean_elo"] / 100).round(decimals=0) * 100
group_by_level = group_merged.groupby(by="level")
group_hist = group_by_level.aggregate({"kd": ["mean", "count"]}).reset_index()
group_hist.columns = ["elo", name, "count"]
group_hist["count"] = group_hist["count"].astype(int)
color = cmap(index / len(group_by_date))
group_hist.plot(ax=ax, x="elo", y=name, color=color)
group_hist.plot.scatter(ax=ax, x="elo", y=name, s=group_hist["count"] * 2, color=color)
# for _, row in group_hist.iterrows():
# label = "{:.0f}".format(row["count"])
# ax.annotate(label, (row["elo"], row[name]))
ax.grid()
| player_kda.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # ADM Homework 1
# # Problem1
# ## Introduction
# ### Say "Hello, World!" With Python
print("Hello, World!")
# ### Arithmetic Operators
a = int(input())
b = int(input())
print(a+b)
print(a-b)
print(a*b)
# ### Python: Division
a = int(input())
b = int(input())
print(a//b)
print(a/b)
# ### Loops
n = int(input())
for i in range(n):
print(i*i)
# ### Write a function
# +
def is_leap(year):
leap = False
# Write your logic here
if(year%400==0 or (year%100!=0 and year%4==0)):
leap = True
return leap
year = int(input())
print(is_leap(year))
# -
# ### Print Function
from __future__ import print_function
if __name__ == '__main__':
n = int(input())
if(n<10):
t=10**(n-1)
s=0
for i in range(1,n+1):
s=s+i*t
t=t//10 # integer division: t=t/10 would turn t (and then s) into a float in Python 3, printing e.g. 123.0
print(s)
else:
for i in range(1,n+1):
print(i,end='')
# ### Python If-Else
# +
# #!/bin/python
import math
import os
import random
import re
import sys
if __name__ == '__main__':
n = int(input().strip())
if(n%2!=0):
print("Weird")
elif(n%2==0 and n>=2 and n<=5):
print("Not Weird")
elif(n%2==0 and n>=6 and n<=20):
print("Weird")
else:
print("Not Weird")
# -
# ## Basic Datatypes
# ### Find the Runner-Up Score!
if __name__ == '__main__':
n = int(input())
arr = list(map(int,input().split()))
max1=-101
for i in range(n):
if(max1<arr[i]):
max1=arr[i]
max2=-101
for i in range(n):
if(arr[i]== max1):
arr[i]=-200
for i in range(n):
if(max2<arr[i]):
max2=arr[i]
print(max2)
# ### List Comprehensions
if __name__ == '__main__':
x = int(input())
y = int(input())
z = int(input())
n = int(input())
l=[]
l=[[i,j,k] for i in range(x+1) for j in range(y+1) for k in range(z+1) if(i+j+k)!=n]
print(l)
# ### Nested Lists
if __name__ == '__main__':
l=[]
for _ in range(int(input())):
name = input()
score = float(input())
l.append([name,score])
m = sorted(set([i[1] for i in l]))[1]
print("\n".join(sorted([i[0] for i in l if i[1] == m])))
# ### Finding the percentage
if __name__ == '__main__':
n = int(input())
student_marks = {}
for _ in range(n):
line = input().split()
name, scores = line[0], line[1:]
scores = list(map(float, scores))
student_marks[name] = scores
query_name = input()
s = sum(student_marks[query_name])/len(student_marks[query_name])
format_s = "{:.2f}".format(s)
print(format_s)
# ### List
# +
def insert(ind, val):
l.insert(ind, val)
def append(val):
l.append(val)
def pop():
l.pop()
def remove(val):
l.remove(val)
def sort():
l.sort()
def reverse():
l.reverse()
n = int(input())
l = []
arr = [input() for i in range(n)]
for i in arr:
s = i.split()
if s[0] == 'insert':
insert(ind=int(s[1]), val=int(s[2]))
elif s[0] == 'append':
append(val=int(s[1]))
elif s[0] == 'pop':
pop()
elif s[0] == 'print':
print(l)
elif s[0] == 'remove':
remove(val=int(s[1]))
elif s[0] == 'sort':
sort()
elif s[0] == 'reverse':
reverse()
# -
# ### Tuples
if __name__ == '__main__':
n = int(input())
integer_list = list(map(int,input().split()))
print(hash(tuple(integer_list)))
# ## Strings
# ### sWAP cASE
def swap_case(s):
l=[]
sn=''
for i in range(len(s)):
if(s[i].isupper()):
l.append(s[i].lower())
else:
l.append(s[i].upper())
for i in range(len(l)):
sn=sn+l[i]
return sn
if __name__ == '__main__':
s = input()
result = swap_case(s)
print(result)
# ### String Split and Join
def split_and_join(line):
l=line.split()
l="-".join(l)
return l
if __name__ == '__main__':
line = input()
result = split_and_join(line)
print(result)
# ### What's Your Name?
def print_full_name(first, last):
print('Hello '+ first + ' ' + last + '! You just delved into python.')
if __name__ == '__main__':
first_name = input()
last_name = input()
print_full_name(first_name, last_name)
# ### Mutations
def mutate_string(string, position, character):
l=[]
s_new=""
for i in range(len(string)):
if(i == position):
l.append(character)
else:
l.append(string[i])
for i in range(len(l)):
s_new=s_new+l[i]
string = s_new
return string
if __name__ == '__main__':
s = input()
i, c = input().split()
s_new = mutate_string(s, int(i), c)
print(s_new)
# ### Find a string
def count_substring(string, sub_string):
l=[]
for i in range(len(string)-len(sub_string)+1):
s=string[i:i+len(sub_string)]
l.append(s)
s=""
count=0
for i in range(len(l)):
if(l[i]==sub_string):
count = count + 1
return count
if __name__ == '__main__':
string = input().strip()
sub_string = input().strip()
count = count_substring(string, sub_string)
print(count)
# ### String Validators
if __name__ == '__main__':
s = input()
t1=t2=t3=t4=t5=False
for i in range(len(s)):
if(s[i].isalnum()):
t1=True
if(s[i].isalpha()):
t2=True
if(s[i].isdigit()):
t3=True
if(s[i].islower()):
t4=True
if(s[i].isupper()):
t5=True
print(t1)
print(t2)
print(t3)
print(t4)
print(t5)
# ### Capitalize!
# +
def solve(s):
sn=""
sn=s[0].upper()
for i in range(len(s)-1):
if(s[i]==" "):
sn=sn + s[i+1].upper()
else:
sn=sn+s[i+1]
print(sn)
s = input()
result = solve(s)
# -
# ### TextWrap
# +
import textwrap
def wrap(string, max_width):
return '\n'.join([(string[i:i+max_width]) for i in range(0,len(string),max_width)])
#first, the list comprehension splits the string into chunks of max_width characters, e.g. ['ABCD', 'EFGH', 'IJKL', 'MNOP', 'QRST', 'UVWX', 'YZ']
#then '\n'.join(...) turns the list back into a single string, using '\n' so each chunk starts on a new line
if __name__ == '__main__':
string, max_width = input(), int(input())
result = wrap(string, max_width)
print(result)
# -
# ### Text Alignment
l1 = 5
s = 'H'
for i in range(l1):
print((s*i+s).rjust(5,' ')+(s*i))
l2 = 6
for i in range(l2):
print((5*s).rjust(7,' ')+(5*s).rjust(20,' '))
l3 = 3
for i in range(l3):
print((25*s).rjust(27,' '))
l4 = 6
for i in range(l4):
print((5*s).rjust(7,' ')+(5*s).rjust(20,' '))
l5 = 5
for i in range(l5-1,0,-1):
print((s*i+s).rjust(25,' ')+(s*i))
print(s.rjust(25,' '))
# ### String Formatting
def print_formatted(x):
    width = len(bin(x)[2:]) # each column is padded to the width of the binary representation of x
    for i in range(1,x+1):
        print(str(i).rjust(width), oct(i)[2:].rjust(width), hex(i)[2:].upper().rjust(width), bin(i)[2:].rjust(width)) # .upper() instead of .capitalize(), which would leave e.g. '1a' unchanged
if __name__ == '__main__':
n = int(input())
print_formatted(n)
# ### Merge the Tools!
# +
def merge_the_tools(s,k):
    # split s into substrings of length k and keep only the first occurrence of
    # each character; dict.fromkeys preserves insertion order (a plain set would not)
    for i in range(0,len(s),k):
        print("".join(dict.fromkeys(s[i:i+k])))
if __name__ == '__main__':
string, k = input(), int(input())
merge_the_tools(string, k)
# -
# ## Sets
# ### Introduction to Sets
def average(array):
array=set(array)
return sum(array)/len(array)
if __name__ == '__main__':
n = int(input())
arr = list(map(int, input().split()))
result = average(arr)
print(result)
# ### No Idea!
N,M = list(map(int,input().split()))
arr=map(int,input().split())
a = set(map(int,input().split()))
b = set(map(int,input().split()))
c=0
for i in arr:
if(i in a):
c=c+1
elif(i in b):
c=c-1
print(c)
# ### Symmetric Difference
M = int(input())
a = set(map(int,input().split()))
N = int(input())
b = set(map(int,input().split()))
s1=a.difference(b)
s2=b.difference(a)
s=sorted(s1.union(s2))
for i in s:
print(i, end="\n")
# ### Set .add()
n = int(input())
count = set()
for i in range(n):
count.add(input())
print(len(count))
# ### Set .discard(), .remove() & .pop()
n = int(input())
s = set(map(int, input().split()))
N = int(input())
for i in range(N):
l = input().split()
if l[0]=="remove" :
s.remove(int(l[1]))
elif l[0]=="discard" :
s.discard(int(l[1]))
elif l[0]=="pop" :
s.pop()
print(sum(s))
# ### Set .union()
M = int(input())
a = set(map(int,input().split()))
N = int(input())
b = set(map(int,input().split()))
print(len(a.union(b)))
# ### Set .intersection()
M = int(input())
a = set(map(int,input().split()))
N = int(input())
b = set(map(int,input().split()))
print(len(a.intersection(b)))
# ### Set .difference()
M = int(input())
a = set(map(int,input().split()))
N = int(input())
b = set(map(int,input().split()))
print(len(a.difference(b)))
# ### Set .symmetric_difference()
M = int(input())
a = set(map(int,input().split()))
N = int(input())
b = set(map(int,input().split()))
print(len(a.symmetric_difference(b)))
# ### Set Mutations
M = int(input())
a = set(map(int,input().split()))
N = int(input())
for i in range(N):
l,i = input().split()
b = set(map(int, input().split()))
if(l == "update"):
a.update(b)
elif(l == "intersection_update"):
a.intersection_update(b)
elif(l == "symmetric_difference_update"):
a.symmetric_difference_update(b)
elif(l == "difference_update"):
a.difference_update(b)
print(sum(a))
# ### The Captain's Room
K = int(input())
s = input().split()
s.sort()
p = (set(s[0::2]) ^ set(s[1::2]))
capt=p.pop()
print(capt)
# ### Check Subset
T= int(input())
for i in range(T):
n1 = int(input())
A=set(input().split())
n2 = int(input())
B=set(input().split())
print(A.intersection(B) == A)
# ### Check Strict Superset
A = set(input().split())
N = int(input())
result = True
for i in range(N):
    s = set(input().split()) # keep both sets as strings so the elements are comparable
    if not A > s: # '>' checks for a STRICT superset; issuperset() alone would also accept A == s
        result = False
print(result) # print the single overall answer, not one line per set
# ## Collections
# ### collections.Counter()
# +
from collections import Counter
X = int(input())
l= Counter(map(int,input().split()))
N = int(input())
s=0
for i in range(N):
size,price = map(int, input().split())
if l[size]:
s=s+price
l[size]=l[size]-1
print(s)
# -
# ### DefaultDict Tutorial
from collections import defaultdict
d = defaultdict(list)
n , m = map(int,input().split())
a,b = [input() for i in range(n)],[input() for i in range(m)]
for i in range(len(a)):
d[a[i]].append(str(i+1))
for i in b:
if i in a:
print(' '.join(d[i]))
else:
print(-1)
# ### Collections.namedtuple()
N = int(input())
m = input().split().index("MARKS")
d = [input().split() for i in range(N)]
print(sum(int(i[m]) for i in d)/N)
# ### Collections.deque()
#
from collections import deque
d = deque()
N=int(input())
for i in range(N):
a = input().split()
if a[0]=="append":
d.append(a[1])
elif a[0]=="appendleft":
d.appendleft(a[1])
elif a[0]=="pop":
d.pop()
elif a[0]=="popleft":
d.popleft()
print(*d)
# ### Company Logo
# +
# #!/bin/python3
import math
import os
import random
import re
import sys
from collections import Counter
if __name__ == '__main__':
s = input()
x=Counter(s)
for i in x.most_common(3):
print(*i)
# -
# ## Date and Time
# ### Calendar Module
import calendar
l=list(map(int,input().split()))
month = l[0]
day = l[1]
year = l[2]
d={0:'Monday' , 1:'Tuesday', 2:'Wednesday', 3:'Thursday', 4:'Friday', 5:'Saturday', 6:'Sunday'}
print(d[calendar.weekday(year,month,day)].upper())
# ### Time Delta
# +
# #!/bin/python3
import math
import os
import random
import re
import sys
from datetime import datetime
# Complete the time_delta function below.
def time_delta(t1, t2):
dt1=datetime.strptime(t1,'%a %d %b %Y %H:%M:%S %z')
dt2=datetime.strptime(t2,'%a %d %b %Y %H:%M:%S %z')
print(str(int(abs(dt1-dt2).total_seconds())))
t = int(input())
for t_itr in range(t):
t1 = input()
t2 = input()
delta = time_delta(t1, t2)
# -
# ## Errors and Exceptions
# ### Exceptions
T=int(input())
for i in range(T):
try:
a,b=map(int,input().split())
print(a//b)
except Exception as e:
print("Error Code:",e)
# ### Built-Ins
# ### Zipped!
N,X = map(int, input().split())
d=[map(float, input().split()) for _ in range(X)]
for i in zip(*d):
print( sum(i)/len(i) )
# ### Input()
x,k=map(int, input().split())
print (k==eval(input()))
# ### Python Evaluation
eval(input())
# ### Athlete Sort
# +
# #!/bin/python3
import math
import os
import random
import re
import sys
if __name__ == '__main__':
nm = input().split()
n = int(nm[0])
m = int(nm[1])
arr = []
for _ in range(n):
arr.append(list(map(int, input().rstrip().split())))
k = int(input())
arr.sort(key=lambda i: i[k])
for i in range(n):
print(*arr[i])
# -
# ### Any or All
#
N=int(input())
a=input().split(' ')
print(all(int(i)>=0 for i in a) and any(i == i[::-1]for i in a))
# ### ginortS
s = input()
low=[]
up=[]
num=[]
odd=[]
ev=[]
ss=''
for i in range(len(s)):
if(s[i].isdigit()):
num.append(s[i])
elif(s[i].isupper()):
up.append(s[i])
elif(s[i].islower()):
low.append(s[i])
low=sorted(low)
up=sorted(up)
#print(num)
for i in range(len(num)):
if(int(num[i])%2!=0):
odd.append(num[i])
else:
ev.append(num[i])
odd=sorted(odd)
ev=sorted(ev)
num=odd+ev
#print(num)
ss=low+up+num
for i in ss:
print(i, end="")
# ## Python Functionals
# ### Map and Lambda Function
# +
cube = lambda x: x*x*x
def fibonacci(n):
l = [0,1]
for i in range(2,n):
l.append(l[i-2] + l[i-1])
return(l[0:n])
if __name__ == '__main__':
n = int(input())
print(list(map(cube, fibonacci(n))))
# -
# ## Regex and Parsing
# ### Detect Floating Point Number
for _ in range(int(input())):
try:
print(bool(float(input())))
except:
print('False')
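One caveat with the `float()` approach above: it also accepts strings such as `'nan'`, `'inf'` and `'1e5'`. A regex check is stricter; the pattern below is one possible sketch, not necessarily the exact rule the judge uses:

```python
import re

def looks_like_decimal(s):
    # optional sign, optional integer part, a dot, at least one digit after it
    return bool(re.match(r'^[+-]?\d*\.\d+$', s))

print(looks_like_decimal('4.0'))   # True
print(looks_like_decimal('nan'))   # False
print(looks_like_decimal('1e5'))   # False
```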
# ### Re.split()
# +
regex_pattern = r"[,.]" # Do not delete 'r'.
import re
print("\n".join(re.split(regex_pattern, input())))
# -
# ### Group(), Groups() & Groupdict()
import re
text=input()
x = re.search(r"([a-zA-Z0-9])\1",text )
print(x.group(1) if x else -1)
# ### Validating Roman Numerals
# +
regex_pattern = r"^M{0,3}(CM|CD|D?C{0,3})(XC|XL|L?X{0,3})(IX|IV|V?I{0,3})$"	# Do not delete 'r'. M{0,3} because valid numerals only go up to 3999
import re
print(str(bool(re.match(regex_pattern, input()))))
# -
# ### Validating phone numbers
import re
N=int(input())
for i in range(N):
s=input()
if re.match("^[789][0-9]{9}$",s):
print("YES")
else:
print("NO")
# ## XML
# ### XML 1 - Find the Score
# +
import sys
import xml.etree.ElementTree as etree
def get_attr_number(node):
s=0
for i in node.iter():
s=s+len(i.items())
return s
if __name__ == '__main__':
sys.stdin.readline()
xml = sys.stdin.read()
tree = etree.ElementTree(etree.fromstring(xml))
root = tree.getroot()
print(get_attr_number(root))
##Compilation success in hackerrank
# -
# ### XML2 - Find the Maximum Depth
# +
import xml.etree.ElementTree as etree
maxdepth = 0
def depth(elem, level):
global maxdepth
for i in elem:
depth(i, level+1)
maxdepth = max(level+1, maxdepth)
if __name__ == '__main__':
n = int(input())
xml = ""
for i in range(n):
xml = xml + input() + "\n"
tree = etree.ElementTree(etree.fromstring(xml))
depth(tree.getroot(), -1)
print(maxdepth)
# -
# ## Numpy
# ### Arrays
# +
import numpy as np
def arrays(arr):
b = np.array(arr,float)
return b[::-1]
arr = input().strip().split(' ')
result = arrays(arr)
print(result)
# -
# ### Shape and Reshape
import numpy as np
print(np.array(input().split(),int).reshape(3,3))
# ### Transpose and Flatten
import numpy as np
N,M = map(int,(input().split()))
a = np.array([input().split() for i in range(N)],int)
print(a.transpose())
print(a.flatten())
# ### Concatenate
import numpy as np
N,M,P = map(int,(input().split()))
a = np.array([input().split() for i in range(N)],int)
b = np.array([input().split() for i in range(M)],int)
print(np.concatenate((a, b)))
# ### Zeros and Ones
import numpy as np
n = list(map(int,(input().split())))
print(np.zeros((n),int))
print(np.ones((n),int))
# ### Eye and Identity
import numpy as np
N,M = map(int,(input().split()))
np.set_printoptions(legacy='1.13')
print(np.eye(N,M))
# ### Array Mathematics
# +
import numpy as np
N,M = map(int,(input().split()))
a = np.array([(list(map(int, input().split()))) for i in range(N)],int)
b = np.array([(list(map(int, input().split()))) for i in range(N)],int)
print(np.add(a, b))
print(np.subtract(a, b))
print(np.multiply(a, b))
print(np.floor_divide(a, b))
print(np.mod(a, b))
print(np.power(a, b))
# -
# ### Floor, Ceil and Rint
import numpy as np
a=(np.array(input().split(),float))
np.set_printoptions(legacy='1.13')
print(np.floor(a))
print(np.ceil(a))
print(np.rint(a))
# ### Sum and Prod
import numpy as np
N,M = map(int,(input().split()))
a = np.array([(list(map(int, input().split()))) for i in range(N)],int)
print(np.prod(np.sum(a,axis=0),axis=0))
# ### Min and Max
import numpy as np
N,M = map(int,(input().split()))
a = np.array([(list(map(int, input().split()))) for i in range(N)],int)
print(max(np.min(a,axis=1)))
# ### Mean, Var, and Std
import numpy as np
N,M = map(int,(input().split()))
a = np.array([(list(map(int, input().split()))) for i in range(N)],int)
print(np.mean(a,axis=1))
print(np.var(a,axis=0))
print(round(np.std(a),11))
# ### Dot and Cross
import numpy as np
N = int(input())
a = np.array([(list(map(int, input().split()))) for i in range(N)],int)
b = np.array([(list(map(int, input().split()))) for i in range(N)],int)
print(np.dot(a,b))
# ### Inner and Outer
import numpy as np
a = np.array(input().split(),int)
b = np.array(input().split(),int)
print(np.inner(a,b))
print(np.outer(a,b))
# ### Polynomials
import numpy as np
P = np.array(input().split(),float)
x = int(input())
print(np.polyval(P,x))
# ### Linear Algebra
import numpy as np
N = int(input())
a = np.array([(list(map(float, input().split()))) for i in range(N)],float)
print(round(np.linalg.det(a),2))
# # Problem2
# ## Birthday Cake Candles
# +
# #!/bin/python3
import math
import os
import random
import re
import sys
#
# Complete the 'birthdayCakeCandles' function below.
#
# The function is expected to return an INTEGER.
# The function accepts INTEGER_ARRAY candles as parameter.
#
def birthdayCakeCandles(candles):
    # the function is expected to RETURN the count of the tallest candles, not print it
    tallest = max(candles)
    return candles.count(tallest)

candles_count = int(input().strip())
candles = list(map(int, input().rstrip().split()))
result = birthdayCakeCandles(candles)
print(result)
# -
# ## Number Line Jumps
# +
# #!/bin/python3
import math
import os
import random
import re
import sys
#
# Complete the 'kangaroo' function below.
#
# The function is expected to return a STRING.
# The function accepts following parameters:
# 1. INTEGER x1
# 2. INTEGER v1
# 3. INTEGER x2
# 4. INTEGER v2
#
def kangaroo(x1, v1, x2, v2):
    # the function is expected to RETURN 'YES' or 'NO' once, and should stop
    # as soon as the kangaroos meet instead of printing on every iteration
    s1, s2 = x1, x2
    for i in range(10001):
        if s1 == s2:
            return 'YES'
        s1 = s1 + v1
        s2 = s2 + v2
    return 'NO'

first_multiple_input = input().rstrip().split()
x1 = int(first_multiple_input[0])
v1 = int(first_multiple_input[1])
x2 = int(first_multiple_input[2])
v2 = int(first_multiple_input[3])
result = kangaroo(x1, v1, x2, v2)
print(result)
# -
# ## Viral Advertising
# +
# #!/bin/python3
import math
import os
import random
import re
import sys
#
# Complete the 'viralAdvertising' function below.
#
# The function is expected to return an INTEGER.
# The function accepts INTEGER n as parameter.
#
def viralAdvertising(n):
shared=5
liked=math.floor(shared/2)
c=liked
for i in range (1,n):
shared=liked*3
liked=math.floor(shared/2)
c=c+liked
return(c)
n = int(input().strip())
result = viralAdvertising(n)
print(result)
# -
# ## Recursive Digit Sum
# +
# #!/bin/python3
import math
import os
import random
import re
import sys
#
# Complete the 'superDigit' function below.
#
# The function is expected to return an INTEGER.
# The function accepts following parameters:
# 1. STRING n
# 2. INTEGER k
#
def superDigit(n,k):
if (int(n)<10):
return int(n)
l = list(map(int, str(n)))
return superDigit(str(sum(l)*k),1)
first_multiple_input = input().rstrip().split()
n = first_multiple_input[0]
k = int(first_multiple_input[1])
result = superDigit(n, k)
print(result)
# -
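The recursion above computes a digital root, which also has a closed form; a sketch for comparison (`super_digit` here is a hypothetical standalone helper, not the graded function):

```python
def super_digit(n, k):
    # digital root of (digit sum of n) * k; for m > 0 it equals 1 + (m - 1) % 9
    m = sum(int(d) for d in n) * k
    return 1 + (m - 1) % 9 if m else 0

print(super_digit('148', 3))  # 3: digits sum to 13, times 3 is 39 -> 12 -> 3
```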
# ## Insertion Sort - Part 1
# +
# #!/bin/python3
##Use pseudocode from wikipedia
import math
import os
import random
import re
import sys
#
# Complete the 'insertionSort1' function below.
#
# The function accepts following parameters:
# 1. INTEGER n
# 2. INTEGER_ARRAY arr
#
def insertionSort1(n, arr):
i=1
while(i < len(arr)):
key = arr[i]
j = i-1
while(j >= 0 and key < arr[j]):
arr[j + 1] = arr[j]
j =j - 1
print(*arr)
arr[j + 1] = key
i=i+1
print(*arr)
n = int(input().strip())
arr = list(map(int, input().rstrip().split()))
insertionSort1(n, arr)
# -
# ## Insertion Sort - Part 2
# +
# #!/bin/python3
## Just delete print from insertion_sort1
import math
import os
import random
import re
import sys
#
# Complete the 'insertionSort2' function below.
#
# The function accepts following parameters:
# 1. INTEGER n
# 2. INTEGER_ARRAY arr
#
def insertionSort2(n, arr):
i=1
while(i < len(arr)):
key = arr[i]
j = i-1
while(j >= 0 and key < arr[j]):
arr[j + 1] = arr[j]
j = j - 1
arr[j + 1] = key
i=i+1
print(*arr)
if __name__ == '__main__':
n = int(input().strip())
arr = list(map(int, input().rstrip().split()))
insertionSort2(n, arr)
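A print-free variant of the same shifting logic, returning the array so the final result can be checked directly:

```python
def insertion_sort(arr):
    arr = list(arr)  # work on a copy
    for i in range(1, len(arr)):
        key = arr[i]
        j = i - 1
        # shift larger elements one slot to the right
        while j >= 0 and key < arr[j]:
            arr[j + 1] = arr[j]
            j -= 1
        arr[j + 1] = key
    return arr

assert insertion_sort([1, 4, 3, 5, 6, 2]) == [1, 2, 3, 4, 5, 6]
```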
| Homework_1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Lab 03: Resizing and slicing in PyTorch -- exercise
# For Google Colaboratory
import sys, os
if 'google.colab' in sys.modules:
from google.colab import drive
drive.mount('/content/gdrive')
file_name = 'pytorch_tensor_part2_exercise.ipynb'
import subprocess
path_to_file = subprocess.check_output('find . -type f -name ' + str(file_name), shell=True).decode("utf-8")
print(path_to_file)
path_to_file = path_to_file.replace(file_name,"").replace('\n',"")
os.chdir(path_to_file)
# !pwd
import torch
import utils
# ### Make a random 10 x 2 matrix A. Then store its third row (index = 2) into a vector v. Then store the first 5 rows (index 0 to index 4) into a submatrix B. The important information is that B has a total of five rows. Print A, v and B.
# +
# write your code here
# -
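One possible shape of the answer, sketched here with NumPy for illustration; the same indexing syntax (`A[2]`, `A[0:5]`) carries over to PyTorch tensors unchanged:

```python
import numpy as np

A = np.random.rand(10, 2)  # random 10 x 2 matrix
v = A[2]                   # third row (index 2)
B = A[0:5]                 # first 5 rows (indices 0..4)

assert v.shape == (2,)
assert B.shape == (5, 2)
```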
# ### Extract entry (0,0) of the matrix A and store it into a PYTHON NUMBER x
# +
# write your code here
# -
# ### Let's download 60,000 gray scale pictures as well as their label. Each picture is 28 by 28 pixels.
# +
from utils import check_mnist_dataset_exists
data_path=check_mnist_dataset_exists()
data=torch.load(data_path+'mnist/train_data.pt')
label=torch.load(data_path+'mnist/train_label.pt')
# -
# ### Find the size of these two tensors
# +
# write your code here
# -
# ### Print the first picture by slicing the data tensor. You will see the intensity of each pixel (a value between 0 and 1)
# +
# write your code here
# -
# ### The function show() from the "utils" package will display the picture:
utils.show(data[10])
# ### Print the first entry of the label vector. The label is 5, telling you that this is the picture of a five.
# +
# write your code here
# -
# ### Display picture 20 of the dataset and print its label
# +
# write your code here
# -
# ### Print the label corresponding to picture 10,000 10,001 10,002 10,003 and 10,004. So you need to extract 5 entries starting from entry 10,000.
# +
# write your code here
# -
# ### Display the two pictures that have label 9
# +
# write your code here
# -
# ### Let's now play with the CIFAR data set. These are RGB pictures
# +
from utils import check_cifar_dataset_exists
data_path=check_cifar_dataset_exists()
data=torch.load(data_path+'cifar/train_data.pt')
label=torch.load(data_path+'cifar/train_label.pt')
# -
# ### Find the size of these two tensors. How many pictures? How many pixels? Note that it is a 4-dimensional Tensor. Dimension 0 gives you the index of the picture, dimension 1 gives you the channel (R, G or B) and the last two dimensions give you the pixel location.
# +
# write your code here
# -
# ### Extract the first picture (a 3 x 32 x 32 Tensor) and check its size.
# +
# write your code here
# -
# ### Display pictures 7, 40 and 100 of the data set with utils.show() and print their labels. For CIFAR, the labels are:
# 0) Airplane
# 1) Automobile
# 2) Bird
# 3) Cat
# 4) Deer
# 5) Dog
# 6) Frog
# 7) Horse
# 8) Ship
# 9) Truck
#
# For example, a picture of a dog will have label 5.
# +
# write your code here
| codes/labs_lecture02/lab03_pytorch_tensor2/pytorch_tensor_part2_exercise.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
import numpy as np
from PIL import Image
import tensorflow as tf
import pathlib
classes = ['building', 'forest', 'glacier', 'mountain', 'sea', 'street']
dataset_dir = pathlib.Path('../dataset')
dataset_test_dir = list(dataset_dir.glob('test/*'))
img1 = Image.open(str(dataset_test_dir[0]))
img1
img2 = Image.open(str(dataset_test_dir[1]))
img2
img3 = Image.open(str(dataset_test_dir[2]))
img3
img4 = Image.open(str(dataset_test_dir[3]))
img4
img5 = Image.open(str(dataset_test_dir[4]))
img5
img6 = Image.open(str(dataset_test_dir[5]))
img6
interpreter = tf.lite.Interpreter(model_path='../TFLiteModel/model.tflite')
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
input_details
output_details = interpreter.get_output_details()
output_details
def predict(image):
# [1 150 150 3]
input_shape = input_details[0]['shape']
input_data = tf.keras.utils.img_to_array(image).reshape(input_shape)
interpreter.set_tensor(input_details[0]['index'], input_data)
interpreter.invoke()
output_data = interpreter.get_tensor(output_details[0]['index']).reshape(6,)
return classes[np.argmax(output_data)]
for i in range(6):
print(predict(Image.open(str(dataset_test_dir[i]))))
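The interpreter returns a 6-way score vector, one value per class, and `predict` simply takes the argmax and looks the index up in `classes`. That final lookup step in isolation, on a hypothetical output vector (the scores below are made up for illustration):

```python
import numpy as np

classes = ['building', 'forest', 'glacier', 'mountain', 'sea', 'street']
# hypothetical score vector as returned by the interpreter, one score per class
output_data = np.array([0.01, 0.90, 0.02, 0.03, 0.02, 0.02])
predicted = classes[np.argmax(output_data)]
assert predicted == 'forest'
```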
| notebooks/Model Prediction With TensorFlow Lite.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # Australian wine sales
# Monthly sales of Australian wine (in thousands of litres) are known from January 1980 through July 1995; the task is to build a forecast for the next three years.
# +
# %pylab inline
import pandas as pd
from scipy import stats
import statsmodels.api as sm
import matplotlib.pyplot as plt
import warnings
from itertools import product
def invboxcox(y,lmbda):
if lmbda == 0:
return(np.exp(y))
else:
return(np.exp(np.log(lmbda*y+1)/lmbda))
# -
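A quick sanity check that `invboxcox` really inverts SciPy's Box-Cox transform. Since y_t = (y^lambda - 1)/lambda, the inverse is y = (lambda * y_t + 1)^(1/lambda), which is what the `exp(log(...)/lambda)` form computes. The function is re-declared here so the check is self-contained:

```python
import numpy as np
from scipy import stats

def invboxcox(y, lmbda):
    # inverse of scipy.stats.boxcox for lmbda != 0, with the log limit at lmbda == 0
    if lmbda == 0:
        return np.exp(y)
    return np.exp(np.log(lmbda * y + 1) / lmbda)

y = np.array([1.0, 2.5, 5.0, 10.0, 40.0])
y_transformed, lmbda = stats.boxcox(y)
assert np.allclose(invboxcox(y_transformed, lmbda), y)
```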
wine = pd.read_csv('monthly-australian-wine-sales.csv', sep=',', index_col=['month'], parse_dates=['month'], dayfirst=True)
wine.sales = wine.sales * 1000
plt.figure(figsize(15,7))
wine.sales.plot()
plt.ylabel('Wine sales')
pylab.show()
# Checking stationarity and the STL decomposition of the series:
plt.figure(figsize(15,10))
sm.tsa.seasonal_decompose(wine.sales).plot()
print("<NAME>: p=%f" % sm.tsa.stattools.adfuller(wine.sales)[1])
# ### Variance stabilization
# Apply the Box-Cox transformation to stabilize the variance:
wine['sales_box'], lmbda = stats.boxcox(wine.sales)
plt.figure(figsize(15,7))
wine.sales_box.plot()
plt.ylabel(u'Transformed wine sales')
print("ะะฟัะธะผะฐะปัะฝัะน ะฟะฐัะฐะผะตัั ะฟัะตะพะฑัะฐะทะพะฒะฐะฝะธั ะะพะบัะฐ-ะะพะบัะฐ: %f" % lmbda)
print("<NAME>: p=%f" % sm.tsa.stattools.adfuller(wine.sales_box)[1])
# ### Stationarity
# The Dickey-Fuller test rejects the hypothesis of non-stationarity, but a trend is clearly visible in the data. Let's try seasonal differencing; we will then run an STL decomposition on the differenced series and re-check stationarity:
wine['sales_box_diff'] = wine.sales_box - wine.sales_box.shift(12)
plt.figure(figsize(15,10))
sm.tsa.seasonal_decompose(wine.sales_box_diff[12:]).plot()
print("<NAME>: p=%f" % sm.tsa.stattools.adfuller(wine.sales_box_diff[12:])[1])
# The Dickey-Fuller test does not reject the hypothesis of non-stationarity, and the trend has not been fully removed. Let's additionally apply ordinary first-order differencing:
wine['sales_box_diff2'] = wine.sales_box_diff - wine.sales_box_diff.shift(1)
plt.figure(figsize(15,10))
sm.tsa.seasonal_decompose(wine.sales_box_diff2[13:]).plot()
print("<NAME>: p=%f" % sm.tsa.stattools.adfuller(wine.sales_box_diff2[13:])[1])
# The hypothesis of non-stationarity is now rejected, and the series visually looks better: the trend is gone.
# ## Model selection
# Look at the ACF and PACF of the resulting series:
plt.figure(figsize(15,8))
ax = plt.subplot(211)
sm.graphics.tsa.plot_acf(wine.sales_box_diff2[13:].values.squeeze(), lags=48, ax=ax)
pylab.show()
ax = plt.subplot(212)
sm.graphics.tsa.plot_pacf(wine.sales_box_diff2[13:].values.squeeze(), lags=48, ax=ax)
pylab.show()
# Initial approximations: Q=1, q=2, P=1, p=4
ps = range(0, 5)
d=1
qs = range(0, 3)
Ps = range(0, 2)
D=1
Qs = range(0, 2)
parameters = product(ps, qs, Ps, Qs)
parameters_list = list(parameters)
len(parameters_list)
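The grid size follows directly from the ranges above: |ps| * |qs| * |Ps| * |Qs| = 5 * 3 * 2 * 2 = 60 candidate (p, q, P, Q) tuples. Re-stated as a standalone check:

```python
from itertools import product

ps, qs = range(0, 5), range(0, 3)
Ps, Qs = range(0, 2), range(0, 2)
parameters_list = list(product(ps, qs, Ps, Qs))

# product enumerates every combination, in lexicographic order
assert len(parameters_list) == 5 * 3 * 2 * 2 == 60
assert parameters_list[0] == (0, 0, 0, 0)
```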
# +
# %%time
results = []
best_aic = float("inf")
warnings.filterwarnings('ignore')
for param in parameters_list:
#try/except is needed because the model fails to fit on some parameter sets
try:
model=sm.tsa.statespace.SARIMAX(wine.sales_box, order=(param[0], d, param[1]),
seasonal_order=(param[2], D, param[3], 12)).fit(disp=-1)
#print the parameter set on which the model fails to fit and move on to the next one
except ValueError:
print('wrong parameters:', param)
continue
aic = model.aic
#keep the best model, its aic and its parameters
if aic < best_aic:
best_model = model
best_aic = aic
best_param = param
results.append([param, model.aic])
warnings.filterwarnings('default')
# -
# If an error occurs in the previous cell, make sure you have updated statsmodels to at least version 0.8.0rc1.
result_table = pd.DataFrame(results)
result_table.columns = ['parameters', 'aic']
print(result_table.sort_values(by = 'aic', ascending=True).head())
# The best model:
print(best_model.summary())
# Its residuals:
# +
plt.figure(figsize(15,8))
plt.subplot(211)
best_model.resid[13:].plot()
plt.ylabel(u'Residuals')
ax = plt.subplot(212)
sm.graphics.tsa.plot_acf(best_model.resid[13:].values.squeeze(), lags=48, ax=ax)
print("ะัะธัะตัะธะน ะกัััะดะตะฝัะฐ: p=%f" % stats.ttest_1samp(best_model.resid[13:], 0)[1])
print("ะัะธัะตัะธะน ะะธะบะธ-ะคัะปะปะตัะฐ: p=%f" % sm.tsa.stattools.adfuller(best_model.resid[13:])[1])
# -
# The residuals are unbiased (confirmed by Student's t-test), stationary (confirmed by the Dickey-Fuller test and visually), and not autocorrelated (confirmed by the Ljung-Box test and the correlogram).
# Let's see how well the model describes the data:
wine['model'] = invboxcox(best_model.fittedvalues, lmbda)
plt.figure(figsize(15,7))
wine.sales.plot()
wine.model[13:].plot(color='r')
plt.ylabel('Wine sales')
pylab.show()
# ### Forecast
# +
wine2 = wine[['sales']]
import datetime
from dateutil.relativedelta import relativedelta
date_list = [datetime.datetime.strptime("1994-09-01", "%Y-%m-%d") + relativedelta(months=x) for x in range(0,36)]
future = pd.DataFrame(index=date_list, columns= wine2.columns)
wine2 = pd.concat([wine2, future])
wine2['forecast'] = invboxcox(best_model.predict(start=176, end=211), lmbda)
plt.figure(figsize(15,7))
wine2.sales.plot()
wine2.forecast.plot(color='r')
plt.ylabel('Wine sales')
pylab.show()
| 5 Data analysis applications/Homework/2 project wage forecast for Russia/wine.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="jR26RFkwXtvi"
# # **[HW5] Language Model**
# 1. DataLoader
# 2. Model
# 3. Trainer
# 4. Generation
#
# In this lab, we will implement an RNN-based language model and use it to generate text.
#
# - dataset: WikiText2 (https://github.com/pytorch/examples/tree/master/word_language_model/data/wikitext-2)
# - model: LSTM
#
# + [markdown] id="crVJ36mMlaXP"
#
#
# ## Import packages
# + [markdown] id="zpvlE_XOWS33"
# Change the runtime type.
#
# From the top menu: [Runtime] -> [Change runtime type] -> [Hardware accelerator] -> [GPU]
#
# After the change, running the cell below should print True for torch.cuda.is_available().
#
#
# + id="cqVdEuPQzMAH" colab={"base_uri": "https://localhost:8080/"} outputId="81146039-bf1a-453b-e90e-18f40bb8ec37"
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision
import torch.optim as optim
print(torch.__version__)
print(torch.cuda.is_available())
# + id="2o3-HPdHLZma"
import matplotlib.pyplot as plt
import numpy as np
import scipy as sp
import tqdm
import os
import random
import time
import datetime
# for reproducibility
random.seed(1234)
np.random.seed(1234)
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
# + [markdown] id="T1GnKJCB4T_Q"
# # 1. DataLoader
#
# As in the previous labs, we first build a PyTorch-style dataloader.
# + [markdown] id="wcNl0aWbS0OA"
# ### Dataset
#
# The dataset we will use in this lab is the WikiText-2 dataset, a collection of English articles from Wikipedia.
# In the copy we load, rarely used and non-English words have already been preprocessed into the unknown token ([unk]).
# + id="CKf8zNuISiC2"
import urllib
with urllib.request.urlopen('https://raw.githubusercontent.com/yunjey/pytorch-tutorial/master/tutorials/02-intermediate/language_model/data/train.txt') as f:
data = f.readlines()
# + id="jBLNOlRKSpOI" colab={"base_uri": "https://localhost:8080/"} outputId="d996d7c0-c4a2-4edf-9821-4d578c141a7d"
print('num_sentence:',len(data))
data[100]
# + colab={"base_uri": "https://localhost:8080/"} id="SYouCxF8dP19" outputId="39b19fed-5022-4090-b153-6454e7a49a47"
data[100].split()
# + colab={"base_uri": "https://localhost:8080/"} id="rRQUPLbpdbwU" outputId="09407193-c492-49b3-deb9-9f6a82123a33"
# "๋๋ ๋ฐฅ์ ๋จน๋๋ค."
kor_data = "๋๋ ๋ฐฅ์ ๋จน๋๋ค."
kor_data.split()
# + id="RWBv1J5XdbNx"
# + id="OfLTv1EPbSwj" colab={"base_uri": "https://localhost:8080/", "height": 265} outputId="74adb73f-9d19-4249-b52a-f50e15c60560"
seq_length_list = []
for line in data:
seq_length_list.append(len(line.split()))
counts, bins = np.histogram(seq_length_list, bins=20)
plt.hist(bins[:-1], bins, weights=counts)
plt.show()
# + [markdown] id="4SdattmOcRwC"
# Looking at the histogram of sentence lengths in the data, most sentences are shorter than 50 words, so we set the maximum sequence length fed to the model to 50.
# + id="g7MuFqsKcd4U"
max_seq_len = 50
# + [markdown] id="IyMpsyX8TwYy"
# ### Build Dictionary
#
# To feed text data into the model, we first need to convert the words appearing in the text into indices.
#
# For this we build a word2idx dictionary that maps each word to an index, and an idx2word dictionary that maps indices back to words.
#
# + id="cZmyZhcpTvZz"
def build_dictionary(data, max_seq_len):
word2idx = {}
idx2word = {}
## Build Dictionary
word2idx['<pad>'] = 0
word2idx['<unk>'] = 1
idx2word[0] = '<pad>'
idx2word[1] = '<unk>'
idx = 2
for line in data:
words = line.decode('utf-8').split()
words = words[:max_seq_len]
### Build Dictionary to convert word to index and index to word
### YOUR CODE HERE (~ 5 lines)
for word in words:
if word not in word2idx:
word2idx[word] = idx
idx2word[idx] = word
idx += 1
return word2idx, idx2word
word2idx, idx2word = build_dictionary(data, max_seq_len)
# + id="EPfV0OTc4Xdr" outputId="f7663ec8-7327-437a-c0c9-985348ae7473" colab={"base_uri": "https://localhost:8080/"}
if len(word2idx) == len(idx2word) == 10000:
print("Test Passed!")
else:
raise AssertionError
# + [markdown] id="me_m8njoXHrv"
# ### Preprocessing
#
# ์ด์ ์์ ๋ง๋ dictionary๋ฅผ ์ด์ฉํด์ text๋ก๋ ๋ฐ์ดํฐ์
์ index๋ค๋ก ๋ณํ์ํค๊ฒ ์ต๋๋ค.
# + id="I6fuARgzXEDU"
def preprocess(data, word2idx, idx2word, max_seq_len):
tokens = []
for line in data:
words = line.decode('utf-8').split()
words = words[:max_seq_len]
### Convert dataset with tokens
### For each line, append <pad> token to match the number of max_seq_len
### YOUR CODE HERE (~ 4 lines)
words += ['<pad>']*(max_seq_len - len(words))
for word in words:
token = word2idx[word]
tokens.append(token)
return tokens
tokens = preprocess(data, word2idx, idx2word, max_seq_len)
# + id="VjyvqMgbZnfP" outputId="f5395696-9089-4635-8095-646c274dfc80" colab={"base_uri": "https://localhost:8080/"}
if len(tokens) == 2103400:
print("Test Passed!")
else:
raise AssertionError
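The padding step inside `preprocess()` can be checked on a toy example: a sentence shorter than `max_seq_len` is right-padded with `<pad>` tokens until it reaches the fixed length.

```python
max_seq_len = 5
words = ['the', 'cat', 'sat']
# right-pad up to the fixed sequence length, exactly as in preprocess()
padded = words + ['<pad>'] * (max_seq_len - len(words))

assert padded == ['the', 'cat', 'sat', '<pad>', '<pad>']
assert len(padded) == max_seq_len
```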
# + [markdown] id="jmQxX3BH-SAv"
# Now we reshape the preprocessed tokens into an array of sentences.
# + id="knMvtp23-Jye" outputId="0b39a31a-210a-4821-9c95-a3c6db352382" colab={"base_uri": "https://localhost:8080/"}
tokens = np.array(tokens).reshape(-1, max_seq_len)
print(tokens.shape)
tokens[100]
# + [markdown] id="pceBqmtTZ9g9"
# ### DataLoader
#
# Using the preprocessed dataset, we now build a PyTorch-style dataset and dataloader.
#
# One caveat when turning token data into a PyTorch-style dataset: each token must be defined as a LongTensor so it can later be used to index into the embedding matrix.
# + id="1hAwhG1K9iBI"
class LMDataset(torch.utils.data.Dataset):
def __init__(self, tokens):
super(LMDataset, self).__init__()
self.PAD = 0
self.UNK = 1
self.tokens = tokens
self._getitem(2)
def _getitem(self, index):
X = self.tokens[index]
y = np.concatenate((X[1:], [self.PAD]))
X = torch.from_numpy(X).unsqueeze(0).long()
y = torch.from_numpy(y).unsqueeze(0).long()
return X, y
def __getitem__(self, index):
X = self.tokens[index]
y = np.concatenate((X[1:], [self.PAD]))
X = torch.from_numpy(X).long()
y = torch.from_numpy(y).long()
return X, y
def __len__(self):
return len(self.tokens)
# + id="BiLNqM6kAda1" outputId="e97b3551-80f9-4398-a1d4-59fed80883b5" colab={"base_uri": "https://localhost:8080/"}
batch_size = 64
dataset = LMDataset(tokens)
dataloader = torch.utils.data.DataLoader(dataset, batch_size=batch_size, shuffle=True)
print(len(dataset))
print(len(dataloader))
# + [markdown] id="b1nhBnqWxw4a"
# # 2. Model
#
# In this section we will build a recurrent model for language modeling from scratch.
#
# Because the standard Recurrent Neural Network (RNN) is vulnerable to the vanishing-gradient problem, this lab uses the LSTM, a modified RNN architecture.
#
# + [markdown] id="aOoNVt3MDOjl"
# ### LSTM
# + [markdown] id="9lycT_9vwaJN"
# The overall structure of the LSTM model and the equation for each gate are shown below.
#
# 
# + [markdown] id="S1h6nfvYwN8n"
# 
#
# If you are curious about how an LSTM works in detail, see the blog post below.
#
# https://colah.github.io/posts/2015-08-Understanding-LSTMs/
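A minimal NumPy sketch of a single LSTM step for one hidden unit, following the gate equations above; the weights are arbitrary illustrative values and biases are omitted for brevity:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h0, c0, Wi, Wf, Wg, Wo):
    # one step of an LSTM cell on the concatenated [x, h] input
    z = np.concatenate([x, h0])
    i = sigmoid(Wi @ z)   # input gate
    f = sigmoid(Wf @ z)   # forget gate
    g = np.tanh(Wg @ z)   # candidate cell state
    o = sigmoid(Wo @ z)   # output gate
    c1 = f * c0 + i * g
    h1 = o * np.tanh(c1)
    return h1, c1

W = np.ones((1, 2)) * 0.1  # arbitrary weights; input_size = hidden_size = 1
h1, c1 = lstm_step(np.array([1.0]), np.zeros(1), np.zeros(1), W, W, W, W)
assert -1.0 < h1[0] < 1.0  # tanh * sigmoid keeps the hidden state bounded
```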
# + id="YDNAysVqxxOk"
class LSTMCell(nn.Module):
def __init__(self, input_size, hidden_size):
super(LSTMCell, self).__init__()
# input-gate
self.Wi = nn.Linear(input_size + hidden_size, hidden_size)
# forget-gate
self.Wf = nn.Linear(input_size + hidden_size, hidden_size)
# gate-gate
self.Wg = nn.Linear(input_size + hidden_size, hidden_size)
# output-gate
self.Wo = nn.Linear(input_size + hidden_size, hidden_size)
# non-linearity
self.sigmoid = nn.Sigmoid()
self.tanh = nn.Tanh()
def forward(self, x, h_0, c_0):
"""
Inputs
input (x): [batch_size, input_size]
hidden_state (h_0): [batch_size, hidden_size]
cell_state (c_0): [batch_size, hidden_size]
Outputs
next_hidden_state (h_1): [batch_size, hidden_size]
next_cell_state (c_1): [batch_size, hidden_size]
"""
h_1, c_1 = None, None
input = torch.cat((x, h_0), 1)
# Implement LSTM cell as noted above
### YOUR CODE HERE (~ 6 lines)
i = self.sigmoid(self.Wi(input))
f = self.sigmoid(self.Wf(input))
g = self.tanh(self.Wg(input))
o = self.sigmoid(self.Wo(input))
c_1 = f * c_0 + i * g
h_1 = o * self.tanh(c_1)
return h_1, c_1
# + id="N0Tff2VCJ56D" outputId="e57289aa-f49a-44f0-a400-3b88ae82517e" colab={"base_uri": "https://localhost:8080/"}
def test_lstm():
batch_size = 2
input_size = 5
hidden_size = 3
#torch.manual_seed(1234)
lstm = LSTMCell(input_size ,hidden_size)
def init_weights(m):
if isinstance(m, nn.Linear):
torch.nn.init.constant_(m.weight, 0.1)
m.bias.data.fill_(0.01)
lstm.apply(init_weights)
x = torch.ones(batch_size, input_size)
hx = torch.zeros(batch_size, hidden_size)
cx = torch.zeros(batch_size, hidden_size)
hx, cx = lstm(x, hx, cx)
assert hx.detach().allclose(torch.tensor([[0.1784, 0.1784, 0.1784],
[0.1784, 0.1784, 0.1784]]), atol=2e-1), \
f"Output of the hidden state does not match."
assert cx.detach().allclose(torch.tensor([[0.2936, 0.2936, 0.2936],
[0.2936, 0.2936, 0.2936]]), atol=2e-1), \
f"Output of the cell state does not match."
print("==LSTM cell test passed!==")
test_lstm()
# + [markdown] id="0DxU-78B33dG"
# ## Language Model
#
# Now, using the LSTM cell defined above, we build a language model as shown below.
#
#
# 
# + id="l0U2s0hux_n6"
class LanguageModel(nn.Module):
def __init__(self, input_size=64, hidden_size=64, vocab_size=10000):
super(LanguageModel, self).__init__()
self.input_layer = nn.Embedding(vocab_size, input_size)
self.hidden_layer = LSTMCell(input_size, hidden_size)
self.output_layer = nn.Linear(hidden_size, vocab_size)
def forward(self, x, hx, cx, predict=False):
"""
Inputs
input (x): [batch_size]
hidden_state (h_0): [batch_size, hidden_size]
cell_state (c_0): [batch_size, hidden_size]
predict: whether to predict and sample the next word
Outputs
output (ox): [batch_size, hidden_size]
next_hidden_state (h_1): [batch_size, hidden_size]
next_cell_state (c_1): [batch_size, hidden_size]
"""
x = self.input_layer(x)
hx, cx = self.hidden_layer(x, hx, cx)
ox = self.output_layer(hx)
if predict == True:
probs = F.softmax(ox, dim=1)
# torch distribution allows sampling operation
# see https://pytorch.org/docs/stable/distributions.html
dist = torch.distributions.Categorical(probs)
ox = dist.sample()
return ox, hx, cx
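The `predict` branch above samples the next token from the categorical distribution given by the softmax over the output logits. The same idea sketched in NumPy, with arbitrary example logits:

```python
import numpy as np

logits = np.array([0.5, 2.0, 1.0])
probs = np.exp(logits) / np.exp(logits).sum()  # softmax over the vocabulary
assert abs(probs.sum() - 1.0) < 1e-9

rng = np.random.default_rng(0)
sampled = rng.choice(len(probs), p=probs)      # categorical sample of the next token
assert 0 <= sampled < len(probs)
```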
# + [markdown] id="G-ZpuMhsbBS8"
# # 3. Trainer
#
# Now we train the model using the dataloader and language model implemented above.
#
# + id="y7TY7HmvbRlB"
class Trainer():
def __init__(self,
word2idx,
idx2word,
dataloader,
model,
criterion,
optimizer,
device):
"""
dataloader: dataloader
model: langauge model
criterion: loss function to evaluate the model (e.g., BCE Loss)
optimizer: optimizer for model
"""
self.word2idx = word2idx
self.idx2word = idx2word
self.dataloader = dataloader
self.model = model
self.criterion = criterion
self.optimizer = optimizer
self.device = device
def train(self, epochs = 1):
self.model.to(self.device)
start_time = time.time()
for epoch in range(epochs):
losses = []
for iter, (x_batch, y_batch) in tqdm.tqdm(enumerate(self.dataloader)):
self.model.train()
batch_size, max_seq_len = x_batch.shape
x_batch = x_batch.to(self.device)
y_batch = y_batch.to(self.device)
# initial hidden-states
hx = torch.zeros(batch_size, hidden_size).to(self.device)
cx = torch.zeros(batch_size, hidden_size).to(self.device)
# Implement LSTM operation
ox_batch = []
# Get output logits for each time sequence and append to the list, ox_batch
# YOUR CODE HERE (~ 4 lines)
for s_idx in range(max_seq_len):
x = x_batch[:, s_idx]
ox, hx, cx = self.model(x, hx, cx)
ox_batch.append(ox)
# outputs are ordered by the time sequence
ox_batch = torch.cat(ox_batch).reshape(max_seq_len, batch_size, -1)
ox_batch = ox_batch.permute(1,0,2).reshape(batch_size*max_seq_len, -1)
y_batch = y_batch.reshape(-1)
self.model.zero_grad()
loss = self.criterion(ox_batch, y_batch)
loss.backward()
self.optimizer.step()
losses.append(loss.item())
end_time = time.time() - start_time
end_time = str(datetime.timedelta(seconds=end_time))[:-7]
print('Time [%s], Epoch [%d/%d], loss: %.4f'
% (end_time, epoch+1, epochs, np.mean(losses)))
if epoch % 5 == 0:
generated_sentences = self.test()
print('[Generated Sentences]')
for sentence in generated_sentences:
print(sentence)
def test(self):
# Test the model by generating sentences
self.model.eval()
num_sentence = 5
max_seq_len = 50
# initial hidden-states
outs = []
x = torch.randint(0, 10000, (num_sentence,)).to(self.device)
hx = torch.zeros(num_sentence, hidden_size).to(self.device)
cx = torch.zeros(num_sentence, hidden_size).to(self.device)
outs.append(x)
with torch.no_grad():
for s_idx in range(max_seq_len-1):
x, hx, cx = self.model(x, hx, cx, predict=True)
outs.append(x)
outs = torch.cat(outs).reshape(max_seq_len, num_sentence)
outs = outs.permute(1, 0)
outs = outs.detach().cpu().numpy()
sentences = []
for out in outs:
sentence = []
for token_idx in out:
word = self.idx2word[token_idx]
sentence.append(word)
sentences.append(sentence)
return sentences
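The permute/reshape in `train()` turns the per-timestep outputs, stacked time-major as [T, B, V], into a batch-major [B*T, V] matrix so every row lines up with the flattened targets. The same index bookkeeping with NumPy:

```python
import numpy as np

T, B, V = 3, 2, 4  # seq_len, batch_size, vocab_size
ox = np.arange(T * B * V).reshape(T, B, V)       # time-major stack of outputs
flat = ox.transpose(1, 0, 2).reshape(B * T, V)   # batch-major, then flattened

# rows 0..T-1 now hold all timesteps of the first batch element
assert np.array_equal(flat[0], ox[0, 0])
assert np.array_equal(flat[1], ox[1, 0])
assert np.array_equal(flat[T], ox[0, 1])
```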
# + id="fgEJv1vWqNkS" outputId="893b70d8-24cd-41cc-a885-6224d301f17b" colab={"base_uri": "https://localhost:8080/"}
lr = 1e-2
input_size = 128
hidden_size = 128
batch_size = 256
dataset = LMDataset(tokens)
dataloader = torch.utils.data.DataLoader(dataset, batch_size=batch_size, shuffle=True)
model = LanguageModel(input_size=input_size, hidden_size=hidden_size)
# NOTE: you should use ignore_index to ignore the loss from predicting the <PAD> token
criterion = nn.CrossEntropyLoss(ignore_index=0)
optimizer = optim.Adam(model.parameters(), lr=lr)
device = torch.device('cuda')
trainer = Trainer(word2idx = word2idx,
idx2word = idx2word,
dataloader=dataloader,
model = model,
criterion=criterion,
optimizer = optimizer,
device=device)
trainer.train(epochs=50)
# + [markdown] id="nDhlrcENM4Dx"
# How is the quality of the generated text?
#
# After this deep learning course finishes, the natural language processing course will cover preprocessing pipelines and model architectures suited to text in depth.
# + [markdown] id="1Ua-_6W2a5Lt"
# # References
#
# 1. https://github.com/pytorch/examples/tree/master/word_language_model
# 2. https://github.com/yunjey/pytorch-tutorial/tree/master/tutorials/02-intermediate/language_model
| 05_Deep_Learning/sol/[HW5]Language_Model_solution.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Dream Bank
#
# # Part 2: Dimensionality Reduction & Time Series
#
# **Packages**
# +
import re
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
plt.style.use('default')
from sklearn.feature_extraction.text import TfidfVectorizer
from umap import UMAP
RANDOM_STATE = 1805
# -
# # 1. Load Data
# +
dreams_cleaned_df = pd.read_csv('dreams_cleaned_df.csv')
dreams_cleaned_df = dreams_cleaned_df.dropna(subset=['text_cleaned'])
# Filter German dreams
german_dreamers = dreams_cleaned_df['dreamer'].unique().tolist()
german_dreamers = [el for el in german_dreamers if '.de' in el]
dreams_cleaned_df = dreams_cleaned_df[~dreams_cleaned_df['dreamer'].isin(german_dreamers)].copy()
print(dreams_cleaned_df.shape)
# +
clean_corpus = dreams_cleaned_df['text_cleaned'].values
tfv = TfidfVectorizer(min_df=20, max_features=10000,
strip_accents='unicode', analyzer='word',
ngram_range=(1, 2), use_idf=1, smooth_idf=1,
sublinear_tf=1, stop_words='english')
tfidf_matrix = tfv.fit_transform(clean_corpus)
print(tfidf_matrix.shape)
# -
# # 2. Dimensionality Reduction
# ## 2.1. UMAP
# %%time
umap_model = UMAP(metric='hellinger', random_state=RANDOM_STATE)
tfidf_embedding = umap_model.fit_transform(tfidf_matrix)
tfidf_embedding_df = pd.DataFrame(tfidf_embedding, columns=['Z1', 'Z2'])
print(tfidf_embedding.shape)
tfidf_embedding_df.isnull().sum()
plt.figure(figsize=(5,4))
plt.scatter(tfidf_embedding_df['Z1'], tfidf_embedding_df['Z2'], s=1,
color='red', alpha=0.1)
plt.xticks([])
plt.yticks([])
plt.show()
# ## 2.2. Supervised UMAP
labels = dreams_cleaned_df['dreamer'].copy()
labels = labels.factorize()[0]
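`pandas.factorize` maps each distinct dreamer to an integer code in order of first appearance, which is the integer-label format supervised UMAP expects as `y`. On a small illustrative series (the names below are made up):

```python
import pandas as pd

codes, uniques = pd.Series(['alta', 'bea', 'alta', 'chuck']).factorize()
assert list(codes) == [0, 1, 0, 2]          # integer code per row
assert list(uniques) == ['alta', 'bea', 'chuck']  # one entry per distinct value
```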
# %%time
umap_model = UMAP(metric='hellinger', random_state=RANDOM_STATE)
tfidf_embedding = umap_model.fit_transform(tfidf_matrix, y=labels)
tfidf_embedding_df = pd.DataFrame(tfidf_embedding, columns=['Z1', 'Z2'])
print(tfidf_embedding.shape)
plt.figure(figsize=(5,4))
plt.scatter(tfidf_embedding_df['Z1'], tfidf_embedding_df['Z2'], s=1,
color='red', alpha=0.1)
plt.xticks([])
plt.yticks([])
plt.show()
# ## 2.3. Save Results
dreams_umap_df = dreams_cleaned_df.copy()
dreams_umap_df['Z1'] = tfidf_embedding_df['Z1'].values
dreams_umap_df['Z2'] = tfidf_embedding_df['Z2'].values
dreams_umap_df.head()
dreams_umap_df.isnull().sum()
dreams_umap_df.to_csv('dreams_umap_df.csv', index=0)
dreams_umap_df = pd.read_csv('dreams_umap_df.csv')
# # 3. Time Series
dreams_time_series = dreams_umap_df.dropna(subset=['date'])
print(dreams_time_series.shape)
print(dreams_umap_df['date'].str.contains(r'\d{4}').sum())
print(dreams_umap_df['date'].str.contains(r'^\d{2}-\d{2}-\d{2}').sum())
print(dreams_umap_df['date'].str.contains(r'^\d{2}\/\d{2}\/\d{2}').sum())
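The three patterns above distinguish year-only entries from the two full date layouts. On a small illustrative sample (the strings below are made up):

```python
import re

dates = ['01/02/93', '12-31-95', '1984', 'summer']
# \d{4} matches only the bare year; the anchored patterns pick out each full layout
assert sum(bool(re.search(r'\d{4}', d)) for d in dates) == 1
assert sum(bool(re.match(r'\d{2}-\d{2}-\d{2}', d)) for d in dates) == 1
assert sum(bool(re.match(r'\d{2}/\d{2}/\d{2}', d)) for d in dates) == 1
```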
| notebooks/Dream_Bank_Dimensionality_Reduction.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda root]
# language: python
# name: conda-root-py
# ---
# # Looking for Sneaky Clickbait
# The aim of this experiment is to evaluate the clickbait detector model and find out what kinds of clickbait it fails to detect.
# +
from keras.models import load_model
from keras.preprocessing import sequence
import sys
import string
import re
UNK = "<UNK>"
PAD = "<PAD>"
MATCH_MULTIPLE_SPACES = re.compile(" {2,}")
SEQUENCE_LENGTH = 20
# -
# ## Load the model and vocabulary
# +
model = load_model("../models/detector.h5")
vocabulary = open("../data/vocabulary.txt").read().split("\n")
inverse_vocabulary = dict((word, i) for i, word in enumerate(vocabulary))
# -
# ## Load validation data
# +
clickbait = open("../data/clickbait.valid.txt").read().split("\n")
genuine = open("../data/genuine.valid.txt").read().split("\n")
print "Clickbait: "
for each in clickbait[:5]:
print each
print "-" * 50
print "Genuine: "
for each in genuine[:5]:
print each
# +
def words_to_indices(words):
return [inverse_vocabulary.get(word, inverse_vocabulary[UNK]) for word in words]
def clean(text):
for punctuation in string.punctuation:
text = text.replace(punctuation, " " + punctuation + " ")
for i in range(10):
text = text.replace(str(i), " " + str(i) + " ")
text = MATCH_MULTIPLE_SPACES.sub(" ", text)
return text
# -
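The `clean()` helper pads punctuation and digits with spaces and then collapses runs of spaces, so titles split cleanly on whitespace. Re-declared here in Python 3 syntax for a standalone check:

```python
import re
import string

MATCH_MULTIPLE_SPACES = re.compile(" {2,}")

def clean(text):
    # surround every punctuation mark and digit with spaces
    for punctuation in string.punctuation:
        text = text.replace(punctuation, " " + punctuation + " ")
    for i in range(10):
        text = text.replace(str(i), " " + str(i) + " ")
    # collapse runs of spaces left behind by the replacements
    return MATCH_MULTIPLE_SPACES.sub(" ", text)

assert clean("hello, world!") == "hello , world ! "
assert clean("top10").split() == ["top", "1", "0"]
```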
# ## Genuine news marked as clickbait
# +
wrong_genuine_count = 0
for each in genuine:
cleaned = clean(each.encode("ascii", "ignore").lower()).split()
indices = words_to_indices(cleaned)
indices = sequence.pad_sequences([indices], maxlen=SEQUENCE_LENGTH)
prediction = model.predict(indices)[0, 0]
if prediction > .5:
print prediction, each
wrong_genuine_count += 1
print "-" * 50
print "{0} out of {1} wrong.".format(wrong_genuine_count, len(genuine))
# -
# ## Clickbait not detected
# +
wrong_clickbait_count = 0
for each in clickbait:
cleaned = clean(each.encode("ascii", "ignore").lower()).split()
indices = words_to_indices(cleaned)
indices = sequence.pad_sequences([indices], maxlen=SEQUENCE_LENGTH)
prediction = model.predict(indices)[0, 0]
if prediction < .5:
print prediction, each
wrong_clickbait_count += 1
print "-" * 50
print "{0} out of {1} wrong.".format(wrong_clickbait_count, len(clickbait))
| py_back/clickbait_detector/notebooks/Looking for Sneaky Clickbait.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import numpy as np
import math
from pprint import pprint
import pandas as pd
import numpy as np
import nltk
import matplotlib.pyplot as plt
import seaborn as sns
nltk.download('vader_lexicon')
nltk.download('stopwords')
from nltk.corpus import stopwords
stop_words = stopwords.words('english')
from nltk.tokenize import word_tokenize, RegexpTokenizer
tokenizer = RegexpTokenizer(r'\w+')
import datetime as dt
from langdetect import detect
# detects the language of the comment
def language_detection(text):
try:
return detect(text)
except:
return None
raw_data = pd.read_csv("/IS5126airbnb_reviews_full.csv",low_memory = False)
raw_data.head()
raw_data_filtered= raw_data[['listing_id','id','comments']]
raw_data_filtered.dtypes
# +
# group by listing and count the number of unique reviews --> cast it to a dataframe
reviews_per_listing = pd.DataFrame(raw_data.groupby('listing_id')['id'].nunique())
# sort unique values descending and show the Top20
reviews_per_listing.sort_values(by=['id'], ascending=False, inplace=True)
reviews_per_listing.head(20)
# -
raw_data_filtered['language'] = raw_data_filtered['comments'].apply(language_detection)
raw_data_filtered.language.value_counts().head(10)
# visualizing the comments' languages a) quick and dirty
ax = raw_data_filtered.language.value_counts(normalize=True).head(6).sort_values().plot(kind='barh', figsize=(9,5));
df_eng = raw_data_filtered[(raw_data_filtered['language']=='en')]
# +
# import necessary libraries
from nltk.corpus import stopwords
from wordcloud import WordCloud
from collections import Counter
from PIL import Image
import re
import string
# -
def plot_wordcloud(wordcloud, language):
    plt.figure(figsize=(12, 10))
    plt.imshow(wordcloud, interpolation='bilinear')
    plt.axis("off")
    plt.title('Word Cloud for {} Comments\n'.format(language), fontsize=18, fontweight='bold')
    plt.show()
# +
wordcloud = WordCloud(max_font_size=None, max_words=200, background_color="lightgrey",
width=3000, height=2000,
stopwords=stopwords.words('english')).generate(str(raw_data_filtered.comments.values))
plot_wordcloud(wordcloud, 'English')
# -
# load the SentimentIntensityAnalyser object in
from nltk.sentiment.vader import SentimentIntensityAnalyzer
# assign it to another name to make it easier to use
analyzer = SentimentIntensityAnalyzer()
# use the polarity_scores() method to get the sentiment metrics
def print_sentiment_scores(sentence):
    snt = analyzer.polarity_scores(sentence)
    print("{:-<40} {}".format(sentence, str(snt)))
# +
# getting only the negative score
def negative_score(text):
    negative_value = analyzer.polarity_scores(str(text))['neg']
    return negative_value

# getting only the neutral score
def neutral_score(text):
    neutral_value = analyzer.polarity_scores(str(text))['neu']
    return neutral_value

# getting only the positive score
def positive_score(text):
    positive_value = analyzer.polarity_scores(str(text))['pos']
    return positive_value

# getting only the compound score
def compound_score(text):
    compound_value = analyzer.polarity_scores(str(text))['compound']
    return compound_value
# -
raw_data_filtered['sentiment_neg'] = raw_data_filtered['comments'].apply(negative_score)
raw_data_filtered['sentiment_neu'] = raw_data_filtered['comments'].apply(neutral_score)
raw_data_filtered['sentiment_pos'] = raw_data_filtered['comments'].apply(positive_score)
raw_data_filtered['sentiment_compound'] = raw_data_filtered['comments'].apply(compound_score)
# refresh the English-only subset so it carries the new sentiment columns
df_eng = raw_data_filtered[raw_data_filtered['language'] == 'en'].copy()
# +
# all scores in 4 histograms
fig, axes = plt.subplots(2, 2, figsize=(10,8))
# plot all 4 histograms
df_eng.hist('sentiment_neg', bins=25, ax=axes[0,0], color='lightcoral', alpha=0.6)
axes[0,0].set_title('Negative Sentiment Score')
df_eng.hist('sentiment_neu', bins=25, ax=axes[0,1], color='lightsteelblue', alpha=0.6)
axes[0,1].set_title('Neutral Sentiment Score')
df_eng.hist('sentiment_pos', bins=25, ax=axes[1,0], color='chartreuse', alpha=0.6)
axes[1,0].set_title('Positive Sentiment Score')
df_eng.hist('sentiment_compound', bins=25, ax=axes[1,1], color='navajowhite', alpha=0.6)
axes[1,1].set_title('Compound')
# plot common x- and y-label
fig.text(0.5, 0.04, 'Sentiment Scores', fontweight='bold', ha='center')
fig.text(0.04, 0.5, 'Number of Reviews', fontweight='bold', va='center', rotation='vertical')
# plot title
plt.suptitle('Sentiment Analysis of Airbnb Reviews for Singapore\n\n', fontsize=12, fontweight='bold');
# -
percentiles = df_eng.sentiment_compound.describe(percentiles=[.05, .1, .2, .3, .4, .5, .6, .7, .8, .9])
percentiles
# +
# assign the data
neg = percentiles['10%']
mid = percentiles['30%']
pos = percentiles['max']
names = ['Negative Comments', 'Okayish Comments','Positive Comments']
size = [neg, mid, pos]
# call a pie chart
plt.pie(size, labels=names, colors=['lightcoral', 'lightsteelblue', 'chartreuse'],
autopct='%.5f%%', pctdistance=0.8,
wedgeprops={'linewidth':7, 'edgecolor':'white' })
# create circle for the center of the plot to make the pie look like a donut
my_circle = plt.Circle((0,0), 0.6, color='white')
# plot the donut chart
fig = plt.gcf()
fig.set_size_inches(7,7)
fig.gca().add_artist(my_circle)
plt.show()
# -
df_eng.head()
pd.set_option("display.max_colwidth", 1000)
df_neu = df_eng.loc[df_eng.sentiment_compound <= 0.5]
# +
# full dataframe with POSITIVE comments
df_pos = df_eng.loc[df_eng.sentiment_compound >= 0.95].copy()
# only corpus of POSITIVE comments
pos_comments = df_pos['comments'].tolist()
# +
# full dataframe with NEGATIVE comments
df_neg = df_eng.loc[df_eng.sentiment_compound < 0.0].copy()
# only corpus of NEGATIVE comments
neg_comments = df_neg['comments'].tolist()
# +
df_pos['text_length'] = df_pos['comments'].apply(len)
df_neg['text_length'] = df_neg['comments'].apply(len)
sns.set_style("whitegrid")
plt.figure(figsize=(8,5))
# sns.distplot is deprecated; histplot is its modern replacement
sns.histplot(df_pos['text_length'], kde=True, bins=50, color='chartreuse')
sns.histplot(df_neg['text_length'], kde=True, bins=50, color='lightcoral')
plt.title('\nDistribution Plot for Length of Comments\n')
plt.legend(['Positive Comments', 'Negative Comments'])
plt.xlabel('\nText Length')
plt.ylabel('Percentage of Comments\n');
# -
df_eng.head()
merged_reviews = pd.merge(left=df_eng, right=raw_data, left_on='id', right_on='id')
merged_reviews.head()
merged_reviews.drop(['id', 'comments_x', 'language', 'listing_id_y','reviewer_id','reviewer_name','comments_y'], inplace=True, axis=1)
merged_reviews['month'] = pd.to_datetime(merged_reviews['date']).dt.month
merged_reviews['year'] = pd.to_datetime(merged_reviews['date']).dt.year
summary_reviews=merged_reviews.groupby(['year','month','listing_id_x'],as_index=False).mean()
summary_reviews['time_period']=summary_reviews['month'].astype(str)+'-'+summary_reviews['year'].astype(str)
summary_reviews.head()
summary_reviews.to_csv('/Users/sonakshimendiratta/Documents/NUS/YR 2 SEM 2/IS5126/Final Project/data/review_sentiments.csv')
| code/Group07_Review_Sentiment_Analysis.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Part 4
#
# The goal of this part is to reduce the number of parameters needed to build an adequate model, identifying
# which metrics correlate most strongly with the number of frames in the output.
# 1. Construct a training set and a test set from the trace as above.
#
# 2. Method 1: Build all subsets of the feature set X that contain either one or two features (i.e., device statistics). Compute the models for each of these sets for linear regression over the training set. Plot a histogram of the error values (NMAE) of all the models for the test set. Identify the feature set that produces the model with the smallest error and give the device statistic(s) in this set.
#
# 3. Method 2: Linear univariate feature selection. Take each feature of $X$ and compute the sample correlation of the feature with the corresponding $Y$ value over the training set. For observations $x_i, y_i$, the sample correlation is computed as $\frac{1}{m}\sum_{i=1}^m\frac{(x_i - \bar{x})(y_i - \bar{y})}{\sigma_X \sigma_Y}$ whereby $\bar{x}$ and $\bar{y}$ are sample means and $m$ is the size of the training set; $\sigma_X$ is the standard deviation $\sqrt{\frac{1}{m}\sum_{i=1}^m(x_i - \bar{x})^2}$ and likewise for $\sigma_Y$. The correlation values fall into the interval $[-1, +1]$. Rank the features according to the square of the correlation values; the top feature has the highest value. Build nine feature sets composed of the top $k$ features, $k = 1..9$. Compute the model for each of these nine sets for linear regression over the training set and compute the error (NMAE) of these models over the test set. Produce a plot that shows the error value in function of the set $k$.
#
# 4. Describe your observations and conclusions.
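# The sample correlation defined above can be sanity-checked against NumPy's
# `corrcoef` (the ratio is the same whether population or sample standard
# deviations are used, since the normalization cancels). A small sketch:

```python
import numpy as np

def sample_correlation(x, y):
    # (1/m) * sum((x_i - x_mean)(y_i - y_mean)) / (std_x * std_y)
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    return np.mean((x - x.mean()) * (y - y.mean())) / (x.std() * y.std())

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.0, 4.1, 5.9, 8.2])
r = sample_correlation(x, y)
assert abs(r - np.corrcoef(x, y)[0, 1]) < 1e-12
assert -1.0 <= r <= 1.0
```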
# +
from itertools import chain, combinations
import pandas as pd
import matplotlib.pyplot as pp
class Subset:
    def __init__(self, x_df, y_df, size, columns):
        self.columns = columns
        self.data = x_df[columns]
        x_train, x_test, y_train, y_test = \
            self.data[:size], self.data[size:], y_df[:size], y_df[size:]
        self.x_train = x_train
        self.x_test = x_test
        self.y_train = y_train
        self.y_test = y_test

    def data_width(self):
        return len(self.columns)

def superset(values):
    return map(lambda t: list(t), chain.from_iterable(combinations(values, r) for r in range(len(values) + 1)))

def create_subsets(x, y):
    subset_columns = list(filter(lambda t: len(t) > 0, superset(x.columns)))
    return [Subset(x, y, 2520, columns) for columns in list(subset_columns)]
x_data = pd.read_csv('./data/X.csv')
y_output = pd.read_csv('./data/Y.csv')
subsets = create_subsets(x_data, y_output)
print(f"Created {len(subsets)} data subsets")
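# For reference, `superset` above enumerates the power set via `itertools`; a set
# with $n$ features yields $2^n$ subsets ($2^n - 1$ non-empty ones). A minimal
# standalone sketch:

```python
from itertools import chain, combinations

def powerset(values):
    # all subsets, from the empty set up to the full set
    return [list(t) for t in chain.from_iterable(
        combinations(values, r) for r in range(len(values) + 1))]

subsets = [s for s in powerset(['a', 'b', 'c']) if s]  # drop the empty set
assert len(subsets) == 2**3 - 1
assert ['a', 'b', 'c'] in subsets
```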
# +
from sklearn import linear_model
def absolute_errors(expected, found):
    return [abs(y1 - y2) for (y1, y2) in zip(expected, found)]

def mean_errors(expected, found):
    return sum(absolute_errors(expected, found)) / len(expected)

def normalized_mean_absolute_error(expected, found):
    return mean_errors(expected, found) / expected.mean()

class TrainedModel:
    def __init__(self, data):
        self.data = data
        self.model = linear_model.LinearRegression()
        self.predicted = []

    def train(self):
        self.model.fit(self.data.x_train, self.data.y_train['DispFrames'])

    def predict(self):
        self.predicted = self.model.predict(self.data.x_test)

    def calculate_nmae(self):
        return normalized_mean_absolute_error(self.data.y_test['DispFrames'], self.predicted)

filtered_subsets = list(filter(lambda subset: subset.data_width() <= 2, subsets))

def create_and_train_model(data):
    model = TrainedModel(data)
    model.train()
    model.predict()
    return model, model.calculate_nmae()
models = [create_and_train_model(subset) for subset in filtered_subsets]
minimum_model = min(models, key = lambda t: t[1])
print(f"The model with the features {minimum_model[0].data.columns} has NMAE: {minimum_model[1]}")
gathered_data = [[nmae] for (_, nmae) in models]
pd.DataFrame(gathered_data, columns=['NMAE']).plot(kind='hist', legend=True)
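# The NMAE used above is the mean absolute error divided by the mean of the
# observed values, which makes errors comparable across targets of different
# scales. A hand-computed example:

```python
def nmae(expected, predicted):
    errors = [abs(a - b) for a, b in zip(expected, predicted)]
    mae = sum(errors) / len(errors)               # mean absolute error
    return mae / (sum(expected) / len(expected))  # normalize by the mean target

# MAE = (2 + 2 + 0) / 3 = 4/3; mean target = 20; NMAE = (4/3) / 20 = 1/15
assert abs(nmae([10, 20, 30], [12, 18, 30]) - 1 / 15) < 1e-12
```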
# +
import numpy as np
def get_sample_correlation(x_data, y_data):
    x_mean = x_data.mean()
    y_mean = y_data.mean()
    x_std = np.std(x_data)
    y_std = np.std(y_data)
    correlation = sum([((x_data[idx] - x_mean) * (y_data[idx] - y_mean)) / (x_std * y_std) for idx, _ in enumerate(x_data)]) / x_data.size
    # rank features by the square of the correlation, as required by Method 2
    return correlation**2
up_to = 2520
x_train, x_test, y_train, y_test = \
x_data[:up_to], x_data[up_to:], y_output[:up_to], y_output[up_to:]
x_column_names = ['TimeStamp',
'all_..idle',
'X..memused',
'proc.s',
'cswch.s',
'file.nr',
'sum_intr.s',
'ldavg.1',
'tcpsck',
'pgfree.s']
results = []
for name in x_column_names:
    results.append((name, get_sample_correlation(x_train[name].to_numpy(), y_train['DispFrames'].to_numpy())))
topNine = sorted(results, key=lambda tup: tup[1], reverse=True)[:9]
print(topNine)
nmaeList = []
for featureName, score in topNine:
    model = linear_model.LinearRegression()
    model.fit(x_train[[featureName]], y_train['DispFrames'])
    y_pred = model.predict(x_test[[featureName]])
    nmae = normalized_mean_absolute_error(y_test['DispFrames'], y_pred)
    nmaeList.append((featureName, nmae))
    print("\nFeature name:", featureName)
    print("Score:", score)
    print("NMAE error:", nmae)
topNineNmae = sorted(nmaeList, key=lambda tup: tup[1], reverse=False)
pd.DataFrame(topNineNmae, columns =['Feature', 'NMAE']).plot(kind='bar', legend=True, x = 'Feature')
# -
# As observed in exercise 2, the model with the features ['sum_intr.s', 'tcpsck'] has an NMAE of 0.07798705254028886, the lowest among all the models. The results obtained in exercise 3 corroborate this, showing that the feature 'tcpsck' alone has the lowest NMAE of all the features, while 'sum_intr.s' has the fifth lowest.
#
# The feature proc.s has the highest NMAE among all 9 features, which makes it a strong candidate to be discarded in the classification/regression process. Semantically, this makes sense: at first glance, the process-creation rate does not seem very relevant to the problem we are solving.
| task4.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# %matplotlib inline
import networkx as nx
import seaborn as sns
# ## Recap of the previous seminar
#
# We read the data on the stations of the Moscow metro as of 2014:
metro_data = pd.read_csv('metro_2014_pairwise.csv')
# The format: two stations appear on the same line if there is a direct link between them.
# Load the data into a graph from the prepared table:
#
#
# +
metro_graph = nx.from_pandas_edgelist(metro_data, source='Start station', target='End station')
# We specify that the direction of travel between stations does not matter
# (as a rule, trains run in both directions)
metro_graph = nx.to_undirected(metro_graph)
print(nx.info(metro_graph))
# -
# ### Graph metrics
# Compute the density of the network:
nx.density(metro_graph)
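# For an undirected simple graph, density is the fraction of possible edges that
# are actually present, $2E / (N(N-1))$. A quick check without NetworkX:

```python
def density(num_nodes, num_edges):
    # fraction of the N*(N-1)/2 possible undirected edges that exist
    return 2 * num_edges / (num_nodes * (num_nodes - 1))

assert density(3, 3) == 1.0  # a triangle is fully connected
assert abs(density(4, 3) - 0.5) < 1e-12  # a path on 4 nodes has half the possible edges
```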
# Compute the clustering coefficient (transitivity) of the graph:
nx.transitivity(metro_graph)
# Compute the centralities:
degree = nx.degree_centrality(metro_graph)
betweenness = nx.betweenness_centrality(metro_graph)
closeness = nx.closeness_centrality(metro_graph)
# +
graph_measures = {
'degree': degree,
'betweenness': betweenness,
'closeness': closeness,
}
pd.DataFrame(graph_measures)
# -
# Let's see which stations have the highest values and interpret the results:
pd.DataFrame(graph_measures).sort_values(by='betweenness', ascending=False)
# ## Real data
#
# Loading data from a non-tabular file is fairly straightforward, as long as the data is stored in the right format.
#
# Examples of the most popular formats for reading and saving graphs (more can be found in the NetworkX documentation):
# - adjacency list (`nx.read_adjlist`, `nx.write_adjlist`; this is how NetworkX itself stores graphs)
# - edge list (`nx.read_edgelist`, `nx.write_edgelist`)
#
# The first lines of our file `facebook_combined.txt` look like this (this is simply how the file we found looks):
# ```
# 214328887 34428380
# 17116707 28465635
# 380580781 18996905
# 221036078 153460275
# 107830991 17868918
# 151338729 222261763
# ```
#
# Each number is the name of a node (roughly speaking, a user id) in the graph. If a pair of numbers appears on one line, the users with those ids are on each other's friend lists.
facebook_users = nx.read_edgelist("facebook_combined.txt")
# Find out how many nodes and edges the graph contains:
print('Number of nodes:', facebook_users.number_of_nodes())
print('Number of edges:', facebook_users.number_of_edges())
# Draw the graph of this network:
# %%time
nx.draw_networkx(facebook_users)
# Plot a chart showing the distribution of node degrees:
# +
degrees = dict(facebook_users.degree()) # dictionary node:degree
values = sorted(set(degrees.values()))
g_hist = [list(degrees.values()).count(x) for x in values]
plt.figure(figsize=(7, 5))
plt.plot(values, g_hist, 'o-') # degree
plt.xlabel('Degree')
plt.ylabel('Number of nodes')
plt.title('Facebook users connectivity degrees')
# -
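# The histogram logic above reduces to counting how many nodes share each degree
# value. A tiny standalone sketch (hypothetical degrees standing in for
# `facebook_users.degree()`):

```python
# node -> degree, as dict(facebook_users.degree()) would return
degrees = {'a': 2, 'b': 2, 'c': 1, 'd': 1}

values = sorted(set(degrees.values()))
hist = [list(degrees.values()).count(v) for v in values]

assert values == [1, 2]
assert hist == [2, 2]  # two nodes of degree 1, two of degree 2
```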
# Compute the centralities you know and measure the computation time for each of them. Build a table with the centrality values and sort it by one of the centralities.
# %time degree = nx.degree_centrality(facebook_users)
# %time betweenness = nx.betweenness_centrality(facebook_users)
# %time closeness = nx.closeness_centrality(facebook_users)
# +
graph_measures = {
'degree': degree,
'betweenness': betweenness,
'closeness': closeness,
}
pd.DataFrame(graph_measures).sort_values(by='degree', ascending=False)
| NetworkX_2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %matplotlib inline
#
# # Enhanced chroma and chroma variants
#
# This notebook demonstrates a variety of techniques for enhancing chroma features and
# also, introduces chroma variants implemented in librosa.
#
# ## Enhanced chroma
# Beyond the default parameter settings of librosa's chroma functions, we apply the following
# enhancements:
#
# 1. Over-sampling the frequency axis to reduce sensitivity to tuning deviations
# 2. Harmonic-percussive-residual source separation to eliminate transients.
# 3. Nearest-neighbor smoothing to eliminate passing tones and sparse noise. This is inspired by the
# recurrence-based smoothing technique of
# `Cho <NAME>lo, 2011 <http://ismir2011.ismir.net/papers/OS8-4.pdf>`_.
# 4. Local median filtering to suppress remaining discontinuities.
#
#
# +
# Code source: <NAME>
# License: ISC
# sphinx_gallery_thumbnail_number = 6
from __future__ import print_function
import numpy as np
import scipy
import matplotlib.pyplot as plt
import librosa
import librosa.display
# -
# We'll use a track that has harmonic, melodic, and percussive elements
#
#
y, sr = librosa.load('audio/Karissa_Hobbs_-_09_-_Lets_Go_Fishin.mp3')
# First, let's plot the original chroma
#
#
# +
chroma_orig = librosa.feature.chroma_cqt(y=y, sr=sr)
# For display purposes, let's zoom in on a 15-second chunk from the middle of the song
idx = tuple([slice(None), slice(*list(librosa.time_to_frames([45, 60])))])
# And for comparison, we'll show the CQT matrix as well.
C = np.abs(librosa.cqt(y=y, sr=sr, bins_per_octave=12*3, n_bins=7*12*3))
plt.figure(figsize=(12, 4))
plt.subplot(2, 1, 1)
librosa.display.specshow(librosa.amplitude_to_db(C, ref=np.max)[idx],
y_axis='cqt_note', bins_per_octave=12*3)
plt.colorbar()
plt.subplot(2, 1, 2)
librosa.display.specshow(chroma_orig[idx], y_axis='chroma')
plt.colorbar()
plt.ylabel('Original')
plt.tight_layout()
# -
# We can correct for minor tuning deviations by using 3 CQT
# bins per semi-tone, instead of one
#
#
# +
chroma_os = librosa.feature.chroma_cqt(y=y, sr=sr, bins_per_octave=12*3)
plt.figure(figsize=(12, 4))
plt.subplot(2, 1, 1)
librosa.display.specshow(chroma_orig[idx], y_axis='chroma')
plt.colorbar()
plt.ylabel('Original')
plt.subplot(2, 1, 2)
librosa.display.specshow(chroma_os[idx], y_axis='chroma', x_axis='time')
plt.colorbar()
plt.ylabel('3x-over')
plt.tight_layout()
# -
# That cleaned up some rough edges, but we can do better
# by isolating the harmonic component.
# We'll use a large margin for separating harmonics from percussives
#
#
# +
y_harm = librosa.effects.harmonic(y=y, margin=8)
chroma_os_harm = librosa.feature.chroma_cqt(y=y_harm, sr=sr, bins_per_octave=12*3)
plt.figure(figsize=(12, 4))
plt.subplot(2, 1, 1)
librosa.display.specshow(chroma_os[idx], y_axis='chroma')
plt.colorbar()
plt.ylabel('3x-over')
plt.subplot(2, 1, 2)
librosa.display.specshow(chroma_os_harm[idx], y_axis='chroma', x_axis='time')
plt.colorbar()
plt.ylabel('Harmonic')
plt.tight_layout()
# -
# There's still some noise in there though.
# We can clean it up using non-local filtering.
# This effectively removes any sparse additive noise from the features.
#
#
# +
chroma_filter = np.minimum(chroma_os_harm,
librosa.decompose.nn_filter(chroma_os_harm,
aggregate=np.median,
metric='cosine'))
plt.figure(figsize=(12, 4))
plt.subplot(2, 1, 1)
librosa.display.specshow(chroma_os_harm[idx], y_axis='chroma')
plt.colorbar()
plt.ylabel('Harmonic')
plt.subplot(2, 1, 2)
librosa.display.specshow(chroma_filter[idx], y_axis='chroma', x_axis='time')
plt.colorbar()
plt.ylabel('Non-local')
plt.tight_layout()
# -
# Local discontinuities and transients can be suppressed by
# using a horizontal median filter.
#
#
# +
chroma_smooth = scipy.ndimage.median_filter(chroma_filter, size=(1, 9))
plt.figure(figsize=(12, 4))
plt.subplot(2, 1, 1)
librosa.display.specshow(chroma_filter[idx], y_axis='chroma')
plt.colorbar()
plt.ylabel('Non-local')
plt.subplot(2, 1, 2)
librosa.display.specshow(chroma_smooth[idx], y_axis='chroma', x_axis='time')
plt.colorbar()
plt.ylabel('Median-filtered')
plt.tight_layout()
# -
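# The horizontal median filter suppresses isolated spikes along the time axis
# while leaving sustained values intact. A minimal check with `scipy.ndimage`:

```python
import numpy as np
from scipy.ndimage import median_filter

row = np.array([[0., 0., 10., 0., 0.]])     # one chroma row with a single transient
smoothed = median_filter(row, size=(1, 3))  # window spans time only, not pitch
assert smoothed[0, 2] == 0.0                # the isolated spike is removed
```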
# A final comparison between the CQT, original chromagram
# and the result of our filtering.
#
#
plt.figure(figsize=(12, 8))
plt.subplot(3, 1, 1)
librosa.display.specshow(librosa.amplitude_to_db(C, ref=np.max)[idx],
y_axis='cqt_note', bins_per_octave=12*3)
plt.colorbar()
plt.ylabel('CQT')
plt.subplot(3, 1, 2)
librosa.display.specshow(chroma_orig[idx], y_axis='chroma')
plt.ylabel('Original')
plt.colorbar()
plt.subplot(3, 1, 3)
librosa.display.specshow(chroma_smooth[idx], y_axis='chroma', x_axis='time')
plt.ylabel('Processed')
plt.colorbar()
plt.tight_layout()
plt.show()
# ## Chroma variants
# There are three chroma variants implemented in librosa: `chroma_stft`, `chroma_cqt`, and `chroma_cens`.
# `chroma_stft` and `chroma_cqt` are two alternative ways of plotting chroma.
#
# `chroma_stft` performs short-time fourier transform of an audio input and maps each STFT bin to chroma, while `chroma_cqt` uses constant-Q transform and maps each cq-bin to chroma.
#
# A comparison between the STFT and the CQT methods for chromagram.
#
#
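# Both variants ultimately map frequency content onto 12 pitch classes. A sketch
# of that mapping (not librosa's actual implementation, just the underlying idea):

```python
import numpy as np

def hz_to_chroma_bin(freq, tuning_a4=440.0):
    # convert frequency to a MIDI note number, then fold into 12 pitch classes (0 = C)
    midi = 69 + 12 * np.log2(freq / tuning_a4)
    return int(round(midi)) % 12

assert hz_to_chroma_bin(440.0) == 9    # A4
assert hz_to_chroma_bin(261.63) == 0   # middle C
assert hz_to_chroma_bin(880.0) == 9    # octaves share a chroma bin
```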
# +
chromagram_stft = librosa.feature.chroma_stft(y=y, sr=sr)
chromagram_cqt = librosa.feature.chroma_cqt(y=y, sr=sr)
plt.figure(figsize=(12, 4))
plt.subplot(2, 1, 1)
librosa.display.specshow(chromagram_stft[idx], y_axis='chroma')
plt.colorbar()
plt.ylabel('STFT')
plt.subplot(2, 1, 2)
librosa.display.specshow(chromagram_cqt[idx], y_axis='chroma', x_axis='time')
plt.colorbar()
plt.ylabel('CQT')
plt.tight_layout()
# -
# CENS features (`chroma_cens`) are variants of chroma features introduced in
# `<NAME>, 2011 <http://ismir2011.ismir.net/papers/PS2-8.pdf>`_, in which
# additional post processing steps are performed on the constant-Q chromagram to obtain features
# that are invariant to dynamics and timbre.
#
# Thus, the CENS features are useful for applications, such as audio matching and retrieval.
#
# Following steps are additional processing done on the chromagram, and are implemented in `chroma_cens`:
# 1. L1-Normalization across each chroma vector
# 2. Quantization of the amplitudes based on "log-like" amplitude thresholds
# 3. Smoothing with sliding window (optional parameter)
# 4. Downsampling (not implemented)
#
# A comparison between the original constant-Q chromagram and the CENS features.
#
#
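# Step 1 of the CENS pipeline, L1 normalization, can be sketched directly in
# NumPy (each chroma vector, i.e. each column, is scaled to sum to 1):

```python
import numpy as np

C = np.array([[1.0, 0.0],
              [3.0, 2.0]])  # 2 chroma bins x 2 frames
norms = C.sum(axis=0, keepdims=True)
C_l1 = C / np.where(norms == 0, 1.0, norms)  # guard against silent frames

assert np.allclose(C_l1.sum(axis=0), 1.0)
assert np.allclose(C_l1[:, 0], [0.25, 0.75])
```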
# +
chromagram_cens = librosa.feature.chroma_cens(y=y, sr=sr)
plt.figure(figsize=(12, 4))
plt.subplot(2, 1, 1)
librosa.display.specshow(chromagram_cqt[idx], y_axis='chroma')
plt.colorbar()
plt.ylabel('Orig')
plt.subplot(2, 1, 2)
librosa.display.specshow(chromagram_cens[idx], y_axis='chroma', x_axis='time')
plt.colorbar()
plt.ylabel('CENS')
plt.tight_layout()
| 0.7.2/_downloads/dd829ebb4b42787aeb07019d4e849af0/plot_chroma.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/MIT-LCP/2019_hack_aotearoa_eicu/blob/master/05_timeseries.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="dImagXnW2Lfz" colab_type="text"
# # eICU Collaborative Research Database
#
# # Notebook 5: Timeseries for a single patient
#
# This notebook explores timeseries data for a single patient.
#
# + [markdown] id="mUTUkkJb2YTK" colab_type="text"
# ## Load libraries and connect to the database
# + id="F9DjPZSV2Vyn" colab_type="code" colab={}
# Import libraries
import numpy as np
import os
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.patches as patches
import matplotlib.path as path
# Make pandas dataframes prettier
from IPython.display import display, HTML
# Access data using Google BigQuery.
from google.colab import auth
from google.cloud import bigquery
# + id="MmKc7haE2bbQ" colab_type="code" colab={}
# authenticate
auth.authenticate_user()
# + id="3I7O9JpE2c4q" colab_type="code" colab={}
# Set up environment variables
project_id='new-zealand-2018-datathon'
os.environ["GOOGLE_CLOUD_PROJECT"]=project_id
# + id="Iw3Fz5sK2eAq" colab_type="code" colab={}
# Helper function to read data from BigQuery into a DataFrame.
def run_query(query):
    return pd.io.gbq.read_gbq(query, project_id=project_id,
                              configuration={'query': {'useLegacySql': False}})
# + [markdown] id="yiPWgbRb2hDV" colab_type="text"
# ## Selecting a single patient stay
#
# + [markdown] id="JS3FAZa7G_pg" colab_type="text"
# ### The patient table
#
# The patient table includes general information about the patient admissions (for example, demographics, admission and discharge details). See: http://eicu-crd.mit.edu/eicutables/patient/
# + id="eZ4UO8kwG-sp" colab_type="code" colab={}
# select a single ICU stay
patientunitstayid = 210014
# + id="u0WMh7hv2fLQ" colab_type="code" colab={}
# Get demographic details
query = """
SELECT *
FROM `physionet-data.eicu_crd_demo.patient`
WHERE patientunitstayid = {}
""".format(patientunitstayid)
patient = run_query(query)
# + id="XwcsarL9KYhA" colab_type="code" colab={}
query = """
SELECT *
FROM `physionet-data.eicu_crd_demo.patient`
"""
patient = run_query(query)
# + id="vfS1ibvPHU6V" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 368} outputId="b22bfa53-e356-4b2b-d06a-54f9db768b3a"
patient.head()
# + [markdown] id="8WkUTZ66Hmp3" colab_type="text"
# ### The `vitalperiodic` table
#
# The `vitalperiodic` table comprises data that is consistently interfaced from bedside vital signs monitors into eCareManager. Data are generally interfaced as 1 minute averages, and archived into the `vitalperiodic` table as 5 minute median values. For more detail, see: http://eicu-crd.mit.edu/eicutables/vitalPeriodic/
# + id="__dKFPdlHh_a" colab_type="code" colab={}
# Get periodic vital signs
query = \
"""
SELECT *
FROM `physionet-data.eicu_crd_demo.vitalperiodic`
WHERE patientunitstayid = {}
""".format(patientunitstayid)
vitalperiodic = run_query(query)
# + id="q2cCCvmLH8_K" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 218} outputId="49b8056f-926c-4c3c-b32c-11a57f27763c"
vitalperiodic.head()
# + id="VcZegYL3IB94" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 218} outputId="6a27edb7-7818-4f58-9ff7-9199a3698c61"
# sort the values by the observationoffset (time in minutes from ICU admission)
vitalperiodic = vitalperiodic.sort_values(by='observationoffset')
vitalperiodic.head()
# + id="rIRB7WPzIIdU" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 248} outputId="dc2b8ad0-a6b1-4bb8-a2a0-4cd463769d97"
# subselect the variable columns
columns = ['observationoffset','temperature','sao2','heartrate','respiration',
'cvp','etco2','systemicsystolic','systemicdiastolic','systemicmean',
'pasystolic','padiastolic','pamean','icp']
vitalperiodic = vitalperiodic[columns].set_index('observationoffset')
vitalperiodic.head()
# + id="iVe95ZiZILtR" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 553} outputId="6f220a4d-aadd-43b5-9fa7-218f085877c1"
# plot the data
plt.rcParams['figure.figsize'] = [12,8]
title = 'Vital signs (periodic) for patientunitstayid = {} \n'.format(patientunitstayid)
ax = vitalperiodic.plot(title=title, marker='o')
ax.legend(loc='center left', bbox_to_anchor=(1.0, 0.5))
ax.set_xlabel("Minutes after admission to the ICU")
ax.set_ylabel("Absolute value")
# + [markdown] id="AKwbjEpzITad" colab_type="text"
# ## Questions
#
# - Which variables are available for this patient?
# - What is the peak heart rate during the period?
# + [markdown] id="EtnH7O-5IZzB" colab_type="text"
# ### The vitalaperiodic table
#
# The vitalAperiodic table provides invasive vital sign data that is recorded at irregular intervals. See: http://eicu-crd.mit.edu/eicutables/vitalAperiodic/
#
# + id="IVSNgRD4INwW" colab_type="code" colab={}
# Get aperiodic vital signs
query = \
"""
SELECT *
FROM `physionet-data.eicu_crd_demo.vitalaperiodic`
WHERE patientunitstayid = {}
""".format(patientunitstayid)
vitalaperiodic = run_query(query)
# + id="Hz56o6w2IkYc" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 218} outputId="0956a427-7cf4-4b8b-8391-964db0750748"
# display the first few rows of the dataframe
vitalaperiodic.head()
# + id="9lKJNUHwIm4u" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 218} outputId="3cb7ec79-a1a4-47bb-9a1d-70b0b2aa2d20"
# sort the values by the observationoffset (time in minutes from ICU admission)
vitalaperiodic = vitalaperiodic.sort_values(by='observationoffset')
vitalaperiodic.head()
# + id="bWRvZ09XIo7d" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 228} outputId="25c68566-5868-4afa-e4cd-32650d409643"
# subselect the variable columns
columns = ['observationoffset','noninvasivesystolic','noninvasivediastolic',
'noninvasivemean','paop','cardiacoutput','cardiacinput','svr',
'svri','pvr','pvri']
vitalaperiodic = vitalaperiodic[columns].set_index('observationoffset')
vitalaperiodic.head()
# + id="yh6dETxTIr_h" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 553} outputId="8eb4e507-5ba9-46aa-9514-7ca4f96c590f"
# plot the data
plt.rcParams['figure.figsize'] = [12,8]
title = 'Vital signs (aperiodic) for patientunitstayid = {} \n'.format(patientunitstayid)
ax = vitalaperiodic.plot(title=title, marker='o')
ax.legend(loc='center left', bbox_to_anchor=(1.0, 0.5))
ax.set_xlabel("Minutes after admission to the ICU")
ax.set_ylabel("Absolute value")
# + [markdown] id="4cj_8AdxIz0l" colab_type="text"
# ## Questions
#
# - What do the non-invasive variables measure?
# - How do you think the mean is calculated?
# + [markdown] id="pN5pMDMDI69_" colab_type="text"
# ### The lab table
# + id="k55Wyi7rIxND" colab_type="code" colab={}
# Get laboratory results
query = \
"""
SELECT *
FROM `physionet-data.eicu_crd_demo.lab`
WHERE patientunitstayid = {}
""".format(patientunitstayid)
lab = run_query(query)
# + id="wfIxq_ZcI88o" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 218} outputId="3d093094-401a-400b-d116-7cb03ea4c548"
lab.head()
# + id="hkLUJy8YJDIb" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 253} outputId="e846f83b-584e-4256-fa73-0626c8dfbd75"
# sort the values by the offset time (time in minutes from ICU admission)
lab = lab.sort_values(by='labresultoffset')
lab.head()
# + id="Pnk3XWaYJE4V" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 228} outputId="659d3125-b927-4d74-d9f6-369170f71a6c"
lab = lab.set_index('labresultoffset')
columns = ['labname','labresult','labmeasurenamesystem']
lab = lab[columns]
lab.head()
# + id="_96UMG-SJOK-" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 138} outputId="ebe9e3d5-8159-442d-cd22-d5c7d3be6987"
# list the distinct labnames
lab['labname'].unique()
# + id="bjb944cJJQ_2" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 312} outputId="76e07d9d-4e89-4387-a145-ee7f4d137dd3"
# pivot the lab table to put variables into columns
lab = lab.pivot(columns='labname', values='labresult')
lab.head()
# + id="MTyVuu4kJTde" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 228} outputId="b132b59d-543d-4348-9c00-cf05df06b211"
# plot laboratory tests of interest
labs_to_plot = ['creatinine','pH','BUN', 'glucose', 'potassium']
lab[labs_to_plot].head()
# + id="BPd3TrJGJVfa" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 553} outputId="eecd0da2-0b06-4d92-f078-9f6a5cf4cf9c"
# plot the data
plt.rcParams['figure.figsize'] = [12,8]
title = 'Laboratory test results for patientunitstayid = {} \n'.format(patientunitstayid)
ax = lab[labs_to_plot].plot(title=title, marker='o',ms=10, lw=0)
ax.legend(loc='center left', bbox_to_anchor=(1.0, 0.5))
ax.set_xlabel("Minutes after admission to the ICU")
ax.set_ylabel("Absolute value")
| 05_timeseries.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Show Standard VeRoViz 3D Models
# This notebook provides code for displaying, in Cesium, all of the 3D models that ship with VeRoViz.
#
# Before running the code in this notebook, you will need to:
# 1. **Install Cesium**. See https://veroviz.org/gettingstarted.html for instructions.
# 2. **Install the VeRoViz Cesium Viewer Plugin**. This may be downloaded from https://veroviz.org/downloads/veroviz_cesium_viewer.zip. Simply extract this `.zip` archive into the `cesium` directory (which was created in Step 1 above).
# 3. **Install the VeRoViz Python Package**. See https://veroviz.org/gettingstarted.html for instructions.
#
# ---
# +
import veroviz as vrv
import os
import pandas as pd
# -
vrv.checkVersion()
# This list of dictionaries describes all of the 3D models:
stdModels = [
{'model': 'box_blue.gltf', 'objectID': 'blue box', 'alt': 0},
{'model': 'box_yellow.gltf', 'objectID': 'yellow box', 'alt': 0},
{'model': 'rectangle_red.gltf', 'objectID': 'red rectangle', 'alt': 0},
{'model': 'rectangle_blue.gltf', 'objectID': 'blue rectangle', 'alt': 0},
{'model': 'rectangle_green.gltf', 'objectID': 'green rectangle', 'alt': 0},
{'model': 'rectangle_white.gltf', 'objectID': 'white rectangle', 'alt': 0},
{'model': 'rectangle_black.gltf', 'objectID': 'black rectangle', 'alt': 0},
{'model': 'wedge_red.gltf', 'objectID': 'red wedge', 'alt': 15},
{'model': 'wedge_blue.gltf', 'objectID': 'blue wedge', 'alt': 15},
{'model': 'wedge_green.gltf', 'objectID': 'green wedge', 'alt': 15},
{'model': 'wedge_white.gltf', 'objectID': 'white wedge', 'alt': 15},
{'model': 'wedge_black.gltf', 'objectID': 'black wedge', 'alt': 15},
{'model': 'car_blue.gltf', 'objectID': 'blue car', 'alt': 0},
{'model': 'car_green.gltf', 'objectID': 'green car', 'alt': 0},
{'model': 'car_red.gltf', 'objectID': 'red car', 'alt': 0},
{'model': 'drone_package.gltf', 'objectID': 'drone with package', 'alt': 15},
{'model': 'drone.gltf', 'objectID': 'drone', 'alt': 15},
{'model': 'ub_airplane.gltf', 'objectID': 'UB airplane', 'alt': 15},
{'model': 'ub_truck.gltf', 'objectID': 'UB truck', 'alt': 0}
]
# We'll create a dataframe to hold the model info:
modelsDF = pd.DataFrame(stdModels)
modelsDF
# +
# Arrange all of the models in two circles.
# Flying vehicles will be at altitude, ground vehicles will be on the ground.
assignmentsDF = vrv.initDataframe('assignments')
center = [42.99913934731591, -78.77751946449281] # UB Field
radius = 20 # meters
groundAngleDelta = 359/len(modelsDF[modelsDF['alt'] == 0])
airAngleDelta = 359/len(modelsDF[modelsDF['alt'] > 0])
groundAngle = 0
airAngle = 0
for i in range(len(modelsDF)):
if modelsDF.loc[i]['alt'] > 0:
airAngle += airAngleDelta
angle = airAngle
else:
groundAngle += groundAngleDelta
angle = groundAngle
# Find the GPS coordinates at a point along the circle:
loc = vrv.pointInDistance2D(loc=[center[0], center[1], modelsDF.loc[i]['alt']],
direction=angle, distMeters=radius)
# Create a "static" assignment (for stationary objects):
assignmentsDF = vrv.addStaticAssignment(initAssignments = assignmentsDF,
objectID = modelsDF.loc[i]['objectID'],
modelFile = 'veroviz/models/%s' % (modelsDF.loc[i]['model']),
loc = loc,
startTimeSec = 0.0,
endTimeSec = 30)
assignmentsDF
# -
# Create the Cesium code and save it within our Cesium directory.
# NOTE: The VeRoViz Cesium Viewer download already has this example.
vrv.createCesium(assignments = assignmentsDF,
nodes = None,
startDate = None, # None <-- today
startTime = '08:00:00',
postBuffer = 30,
cesiumDir = os.environ['CESIUMDIR'],
problemDir = 'veroviz/view_models' # <-- a sub-directory of cesiumDir
)
# ---
# ## We are now ready to view our solution.
#
# 1. Make sure you have a 'node.js' server running:
# 1. Open a terminal window.
# 2. Change directories to the location where Cesium is installed. For example, `cd ~/cesium`.
# 3. Start a 'node.js' server: `node server.cjs`
# 2. Visit http://localhost:8080/veroviz in your web browser.
# 3. Use the top left icon to select `;veroviz;view_models.vrv`, which will be located in the `veroviz/view_models` subdirectory of Cesium.
| examples/show_cesium_models.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [default]
# language: python
# name: python3
# ---
# <!-- dom:TITLE: 2MA100 - Devoir Maison 1 -->
# # 2MA100 - Homework Assignment 1
# <!-- dom:AUTHOR: Sorbonne Université - 2 March 2020 -->
# <!-- Author: -->
# **Sorbonne Université - 2 March 2020**
#
# The exercises in this homework assignment, graded out of 20 points, must be submitted either in **Notebook** (`.ipynb`) or **Script** (`.py`) format, in one or more files, no later than **13 March 2020** at 23:59 on [moodle](https://moodle-sciences.upmc.fr/moodle-2019/).
#
#
# **Beware of plagiarism!**
#
#
# Homework assignments must be written and submitted individually. For reference,
# here is a definition of plagiarism adapted from the [Memento of the University of Geneva](https://memento.unige.ch/doc/0008/):
#
# *Plagiarism consists of inserting into an academic work formulations,
# sentences, passages, pieces of code, images, as well as ideas or
# analyses taken from the work of other authors, while passing them off as one's own.*
#
# In particular, copy-pasting from sources found on the Internet or from other
# students' work without citing the sources is considered plagiarism and results in a grade of zero.
# Plagiarism also constitutes an attempt at cheating, sanctioned by
# the university's regulations. The solution is to indicate in your assignments everything that does
# not come from you, citing the sources (web page, books, another student, ...). All
# submitted files will be automatically analyzed with similarity-detection software
# (between students and against the Internet).
#
#
#
#
#
# <!-- --- begin exercise --- -->
#
# # Exercise 1: The golden ratio
#
# Here we are interested in approximating the golden ratio, $\varphi = \dfrac{1+\sqrt{5}}{2}$.
#
# We first define the sequence $(F_n)_{n\geqslant0}$ by $F_0=F_1=1$ and $F_{n}=F_{n-1}+F_{n-2}$ for $n\ge 2$. We then have the following result,
# $$
# \lim_{n \rightarrow +\infty} \dfrac{F_{n+1}}{F_{n}} = \varphi.
# $$
# **a)**
# Write a function `Fibo(epsilon)` which, for a given precision $\varepsilon$, returns the smallest integer $n$ such that $\left| \varphi - \dfrac{F_{n+1}}{F_{n}} \right| < \varepsilon$.
#
#
#
# We also note that $\varphi$ is the unique positive solution of the equation $x^2-x-1=0$.
#
# **b)**
# Write a new function `Newton(epsilon, x0)` which, for a given precision $\varepsilon$, returns the smallest integer $k$
# such that $\left| \varphi - x_k \right| < \varepsilon$, where $x_k$ is computed by Newton's method applied to $f(x)=x^2-x-1$, whose principle we recall:
# $$
# x_{k+1} = x_k - \dfrac{f(x_k)}{f^\prime(x_k)} \,, \quad k \ge 0.
# $$
# In practice we will choose $x_0=3$.
#
#
#
# **c)**
# For $\varepsilon = 10^{-i}$ and $i \in \{2, 4, 6, 8\}$, compare the number of iterations the two proposed strategies need to approximate $\varphi$ to the given precision. Which strategy seems the more *efficient* to you?
#
#
#
# **d)**
# And what happens if Newton's method is initialized with $x_0=0$?
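# As an illustration of the Newton iteration recalled above (a sketch, not a model answer — the function name is our own):

```python
import math

# Golden ratio, used here only to measure the approximation error.
PHI = (1 + math.sqrt(5)) / 2

def newton_phi(epsilon, x0=3.0):
    """Newton's method on f(x) = x^2 - x - 1; returns (k, x_k) for
    the first k with |phi - x_k| < epsilon."""
    x, k = x0, 0
    while abs(PHI - x) >= epsilon:
        x = x - (x * x - x - 1) / (2 * x - 1)  # x_{k+1} = x_k - f(x_k)/f'(x_k)
        k += 1
    return k, x

k, approx = newton_phi(1e-8)  # converges in only a handful of iterations
```

# Since Newton's method converges quadratically near a simple root, very few iterations are needed here.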
#
#
# <!-- --- end exercise --- -->
#
#
#
#
# <!-- --- begin exercise --- -->
#
# # Exercise 2: Vandermonde matrix
#
# Let $p, n\in \mathbb{N}^*$ and $x := (x_1, \ldots, x_p) \in \mathbb{R}^p$; we introduce the matrix $V(x,n)$ defined by:
# $$
# V(x,n)=\begin{pmatrix}
# 1 & x_1 & x_1^2 & \cdots & x_1^{n-1} & x_1^n \\
# 1 & x_2 & x_2^2 & \cdots & x_2^{n-1} & x_2^n \\
# \vdots & \vdots& \vdots & \ddots & \vdots & \vdots\\
# 1 & x_{p-1} & x_{p-1}^2 & \cdots & x_{p-1}^{n-1} & x_{p-1}^n \\
# 1 & x_p & x_p^2 & \cdots & x_p^{n-1} & x_p^n
# \end{pmatrix}.
# $$
# **a)**
# Write a function that builds the matrix $V(x,n)$ element by element using a double loop.
#
#
#
# **b)**
# After establishing a relation expressing the $k$-th column of $V(x,n)$ solely in terms of $x$ and $k$, write a second function that builds the matrix $V(x,n)$ column by column using this relation.
#
#
#
# **c)**
# After establishing a relation between the $k$-th column of $V(x,n)$, its $(k-1)$-th column, and the vector $x$, write a third function that builds the matrix $V(x,n)$ column by column using this relation.
#
#
#
# **d)**
# Compare the running times of these three functions for $n=150$, $p=100$, and a randomly generated $x$.
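# To illustrate the column recurrence idea of part **c)** (a sketch, not a model answer — the function name is our own):

```python
import numpy as np

def vandermonde_rec(x, n):
    """Build V(x, n) column by column: each column is the previous
    one multiplied elementwise by x (columns are x^0, x^1, ..., x^n)."""
    x = np.asarray(x, dtype=float)
    V = np.empty((x.size, n + 1))
    V[:, 0] = 1.0
    for k in range(1, n + 1):
        V[:, k] = V[:, k - 1] * x  # k-th column from the (k-1)-th
    return V

V = vandermonde_rec([1.0, 2.0, 3.0], 2)
# V is [[1, 1, 1], [1, 2, 4], [1, 3, 9]]
```

# This avoids recomputing powers from scratch, which is why it tends to be the fastest of the three constructions.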
#
#
#
#
# <!-- --- end exercise --- -->
#
#
#
#
# <!-- --- begin exercise --- -->
#
# # Exercise 3: Handwritten digit recognition
#
# The goal of this exercise is to write a program that classifies images of handwritten digits.
# This was one of the first industrial applications of machine learning, used for the automatic reading of checks and postal codes.
#
# The following instructions load a dataset of digitized handwritten digits available in the `scikit-learn` package (import name `sklearn`):
from sklearn.datasets import load_digits
digits = load_digits()
X, y = digits.data, digits.target
# Thus `X` is a Numpy array containing many examples of handwritten digits digitized as 8x8-pixel images, each stored as an array of 64 integer values kept as floats.
# The variable `y` contains the integer between 0 and 9 corresponding to the digitized digit; this is called its *label*.
#
#
# **a)**
# Which Python command gives the dimensions of `X` and `y`, and thus the number of examples contained in the database?
#
#
#
# **b)**
# Using the `print` command, display the data in `X` associated with the index `idx=12`, i.e. that row of the array `X`.
#
#
#
# **c)**
# Using Numpy's `reshape` function and Matplotlib's `imshow`, display the image at index `idx=12`.
# You can pass the argument `cmap='gray'` to `imshow` to display the result in grayscale.
# Which digit is encoded this way?
#
#
#
# For each digit class (from 0 to 9), we want to compute its centroid, *i.e.* the "average" representation of the class.
#
# **d)**
# How can we define the sub-arrays of `X` and `y` corresponding to all the digitized 0s?
#
#
#
# **e)**
# For the set of 0s from the previous question, compute the mean value of each pixel, in order to define the "average zero".
#
#
#
# **f)**
# For each digit from 0 to 9, plot on a single row (using Matplotlib's `subplot` function, initializing the figure with `plt.figure(figsize=(20,2))`) the associated average image:
# <!-- dom:FIGURE: [fig/reconnaissance-chiffres.png, width=1000 frac=0.8] -->
# <!-- begin figure -->
#
# <p></p>
# <img src="fig/reconnaissance-chiffres.png" width=1000>
#
# <!-- end figure -->
#
#
#
#
# Finally, we will implement our own classifier: for a new digitized digit image, we predict the class whose average digit is closest.
# To do this, we split our dataset into two parts of similar size:
# the first part will serve as training data (`X_train` and `y_train`); the second part will serve as test data (`X_test` and `y_test`).
#
# **g)**
# Define the variables `X_train`, `y_train`, `X_test`, and `y_test`.
#
#
#
# **h)**
# From the training set, compute the centroids (*i.e.* the average digits) of the classes 0 to 9.
# Call the variable holding all of these means `centroids_train`.
#
#
#
# **i)**
# For each digit in the test set (`X_test`), find the closest centroid in `centroids_train` (in the Euclidean norm).
#
#
#
# **j)**
# Finally, evaluate whether the digit obtained this way matches the true digit using `y_test`,
# and deduce an estimate of the percentage of correct predictions on the test set.
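# The nearest-centroid classifier described above can be sketched as follows (an illustrative sketch, not a model answer; the half/half split is one arbitrary choice):

```python
import numpy as np
from sklearn.datasets import load_digits

digits = load_digits()
X, y = digits.data, digits.target

# Split into two halves: training and test.
half = len(X) // 2
X_train, y_train = X[:half], y[:half]
X_test, y_test = X[half:], y[half:]

# One centroid (average image) per class, computed on the training half.
centroids_train = np.array([X_train[y_train == d].mean(axis=0) for d in range(10)])

# Predict, for each test image, the class of the closest centroid (Euclidean norm).
dists = np.linalg.norm(X_test[:, None, :] - centroids_train[None, :, :], axis=2)
y_pred = dists.argmin(axis=1)
accuracy = (y_pred == y_test).mean()
```

# Even this very simple classifier gets well above chance-level accuracy on the digits dataset.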
#
#
#
# <!-- --- end exercise --- -->
| DM1/devoir-1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python (pangeo)
# language: python
# name: pangeo
# ---
# +
# Import some python libraries
# %matplotlib inline
import numpy as np
import xarray as xr
import matplotlib.pyplot as plt
# +
# Setup a dask cluster
from dask.distributed import Client
from dask_kubernetes import KubeCluster
cluster = KubeCluster(n_workers=10)
cluster
# -
client = Client(cluster)
client
| notebooks/hello_world.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
#import the necessary packages
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
data= pd.read_csv(r'Telco_Churn_Data.csv')
data.head(5)
len(data)
data.shape
data.isnull().values.any()
data.info()
## Bonus method for renaming the columns
data.columns=data.columns.str.replace(' ','_')
data.info()
data.describe()
data.describe(include='object')
# +
### Change some of the columns to categorical
### Columns to be change to categorical objects are
### Target Code
### Condition of Current Handset
### Current TechSupComplaints
data['Target_Code']=data.Target_Code.astype('object')
data['Condition_of_Current_Handset']=data.Condition_of_Current_Handset.astype('object')
data['Current_TechSupComplaints']=data.Current_TechSupComplaints.astype('object')
### Target_Code is cast back to int64 so it stays numeric for the later analysis
data['Target_Code']=data.Target_Code.astype('int64')
# -
data.describe(include='object')
# +
## Percentage of missing Values present
round(data.isnull().sum()/len(data)*100,2)
# -
data.Complaint_Code.value_counts()
data.Condition_of_Current_Handset.value_counts()
# +
### we will impute the missing values of both Complaint_Code and Condition_of_Current_Handset
### with their most frequently occurring values
data['Complaint_Code']=data['Complaint_Code'].fillna(value='Billing Problem')
data['Condition_of_Current_Handset']=data['Condition_of_Current_Handset'].fillna(value=1)
data['Condition_of_Current_Handset']=data.Condition_of_Current_Handset.astype('object')
# -
data['Target_Churn'].value_counts()
data['Target_Churn'].value_counts(normalize=True)*100
summary_churn = data.groupby('Target_Churn')
summary_churn.mean()
corr = data.corr()
plt.figure(figsize=(15,8))
sns.heatmap(corr,
xticklabels=corr.columns.values,
yticklabels=corr.columns.values,annot=True)
corr
# ### Univariate Analysis
# +
f, axes = plt.subplots(ncols=3, figsize=(15, 6))
sns.distplot(data.Avg_Calls_Weekdays, kde=True, color="darkgreen", ax=axes[0]).set_title('Avg_Calls_Weekdays')
axes[0].set_ylabel('No of Customers')
sns.distplot(data.Avg_Calls, kde=True,color="darkblue", ax=axes[1]).set_title('Avg_Calls')
axes[1].set_ylabel('No of Customers')
sns.distplot(data.Current_Bill_Amt, kde=True, color="maroon", ax=axes[2]).set_title('Current_Bill_Amt')
axes[2].set_ylabel('No of Customers')
# -
# ### Bivariate Analysis
plt.figure(figsize=(17,10))
p=sns.countplot(y="Complaint_Code", hue='Target_Churn', data=data,palette="Set2")
legend = p.get_legend()
legend_txt = legend.texts
legend_txt[0].set_text("No Churn")
legend_txt[1].set_text("Churn")
p.set_title('Customer Complaint Code Distribution')
plt.figure(figsize=(15,4))
p=sns.countplot(y="Acct_Plan_Subtype", hue='Target_Churn', data=data,palette="Set2")
legend = p.get_legend()
legend_txt = legend.texts
legend_txt[0].set_text("No Churn")
legend_txt[1].set_text("Churn")
p.set_title('Customer Acct_Plan_Subtype Distribution')
# +
plt.figure(figsize=(15,4))
p=sns.countplot(y="Current_TechSupComplaints", hue='Target_Churn', data=data,palette="Set2")
legend = p.get_legend()
legend_txt = legend.texts
legend_txt[0].set_text("No Churn")
legend_txt[1].set_text("Churn")
p.set_title('Customer Current_TechSupComplaints Distribution')
# -
plt.figure(figsize=(15,4))
ax=sns.kdeplot(data.loc[(data['Target_Code'] == 0),'Avg_Days_Delinquent'] , color=sns.color_palette("Set2")[0],shade=True,label='no churn')
ax=sns.kdeplot(data.loc[(data['Target_Code'] == 1),'Avg_Days_Delinquent'] , color=sns.color_palette("Set2")[1],shade=True, label='churn')
ax.set(xlabel='Average No of Days Delinquent/Defaulted from paying', ylabel='Frequency')
plt.title('Average No of Days Delinquent/Defaulted from paying - churn vs no churn')
plt.figure(figsize=(15,4))
ax=sns.kdeplot(data.loc[(data['Target_Code'] == 0),'Account_Age'] , color=sns.color_palette("Set2")[0],shade=True,label='no churn')
ax=sns.kdeplot(data.loc[(data['Target_Code'] == 1),'Account_Age'] , color=sns.color_palette("Set2")[1],shade=True, label='churn')
ax.set(xlabel='Account_Age', ylabel='Frequency')
plt.title('Account_Age - churn vs no churn')
plt.figure(figsize=(15,4))
ax=sns.kdeplot(data.loc[(data['Target_Code'] == 0),'Percent_Increase_MOM'] , color=sns.color_palette("Set2")[0],shade=True,label='no churn')
ax=sns.kdeplot(data.loc[(data['Target_Code'] == 1),'Percent_Increase_MOM'] , color=sns.color_palette("Set2")[1],shade=True, label='churn')
ax.set(xlabel='Percent_Increase_MOM', ylabel='Frequency')
plt.title('Percent_Increase_MOM- churn vs no churn')
plt.figure(figsize=(15,4))
ax=sns.kdeplot(data.loc[(data['Target_Code'] == 1),'Percent_Increase_MOM'] ,color=sns.color_palette("Set2")[1],shade=True, label='churn')
ax.set(xlabel='Percent_Increase_MOM', ylabel='Frequency')
plt.title('Percent_Increase_MOM- churn')
plt.figure(figsize=(15,4))
ax=sns.kdeplot(data.loc[(data['Target_Code'] == 0),'Percent_Increase_MOM'] ,color=sns.color_palette("Set2")[0],shade=True, label='no churn')
ax.set(xlabel='Percent_Increase_MOM', ylabel='Frequency')
plt.title('Percent_Increase_MOM- no churn')
| 7). Supervised Learning - Predicting Customer Churn/.ipynb_checkpoints/Telco Activity 1(Lesson 7)-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %reload_ext autoreload
# %autoreload 2
# %matplotlib inline
from fastai.text import *
import numpy as np
from sklearn.model_selection import train_test_split
import pickle
import sentencepiece as spm
import fastai, torch
fastai.__version__ , torch.__version__
torch.cuda.set_device(0)
path = Path('/home/gaurav/PycharmProjects/nlp-for-gujarati/language-model')
from inltk.tokenizer import GujaratiTokenizer
GujaratiTokenizer
# +
# class GujaratiTokenizer(BaseTokenizer):
# def __init__(self, lang:str):
# self.lang = lang
# self.sp = spm.SentencePieceProcessor()
# self.sp.Load(str(path/"../tokenizer/gujarati_lm.model"))
# def tokenizer(self, t:str) -> List[str]:
# return self.sp.EncodeAsPieces(t)
# -
sp = spm.SentencePieceProcessor()
sp.Load(str(path/"../tokenizer/gujarati_lm.model"))
itos = [sp.IdToPiece(int(i)) for i in range(20000)]
itos
# 20,000 is the vocab size that we chose in sentencepiece
gujarati_vocab = Vocab(itos)
tokenizer = Tokenizer(tok_func=GujaratiTokenizer, lang='gu')
tokenizer.special_cases
data_lm = TextLMDataBunch.from_folder(path=path/'GujaratiDataset', tokenizer=tokenizer, vocab=gujarati_vocab)
data_lm.batch_size
data_lm.save()
data_lm.show_batch()
len(data_lm.vocab.itos)
data_lm.vocab.stoi
learn = language_model_learner(data_lm, AWD_LSTM, drop_mult=0.3)
gc.collect()
learn.lr_find()
learn.recorder.plot()
learn.fit_one_cycle(1, 1e-2, moms=(0.8,0.7))
learn.save('first', with_opt=True)
learn.load('first', with_opt=True);
learn.unfreeze()
learn.fit_one_cycle(5, 1e-2, moms=(0.8,0.7))
learn.save('second_gu_lm', with_opt=True)
learn.load('second_gu_lm', with_opt=True);
learn.fit_one_cycle(40, 1e-3, moms=(0.8,0.7))
learn.save('third_gu_lm', with_opt=True)
learn.load('third_gu_lm', with_opt=True);
TEXT = "ગુજરાત"
N_WORDS = 40
N_SENTENCES = 2
print("\n".join(learn.predict(TEXT, N_WORDS, temperature=0.75) for _ in range(N_SENTENCES)))
np.exp(3.53)
defaults.device = torch.device('cpu')
learn.model.eval()
learn.export()
# +
# Generating embedding vectors for visualization
# -
path
defaults.device = torch.device('cpu')
learn = load_learner(path / 'GujaratiDataset/')
encoder = get_model(learn.model)[0]
encoder.state_dict()['encoder.weight'].shape
embeddings = encoder.state_dict()['encoder.weight']
embeddings = np.array(embeddings)
embeddings[0].shape
df = pd.DataFrame(embeddings)
df.shape
df.to_csv('embeddings.tsv', sep='\t', index=False, header=False)
df.head()
df2 = pd.DataFrame(itos)
df2.head()
df2.shape
df2.to_csv('embeddings_metadata.tsv', sep='\t', index=False, header=False)
encoder.state_dict()['encoder.weight'][1]
| language-model/Gujarati_Language_Model_ULMFiT.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# 10 Minutes to cuDF
# =======================
#
# Modeled after 10 Minutes to Pandas, this is a short introduction to cuDF, geared mainly for new users.
# !pip install cudf
# +
import os
import numpy as np
import pandas as pd
import cudf
np.random.seed(12)
#### Portions of this were borrowed from the
#### cuDF cheatsheet, existing cuDF documentation,
#### and 10 Minutes to Pandas.
#### Created November, 2018.
# -
# Object Creation
# ---------------
# Creating a `Series`.
s = cudf.Series([1,2,3,None,4])
print(s)
# Creating a `DataFrame` by specifying values for each column.
df = cudf.DataFrame([('a', list(range(20))),
('b', list(reversed(range(20)))),
('c', list(range(20)))])
print(df)
# Creating a `DataFrame` from a pandas `DataFrame`.
pdf = pd.DataFrame({'a': [0, 1, 2, 3],'b': [0.1, 0.2, None, 0.3]})
gdf = cudf.DataFrame.from_pandas(pdf)
print(gdf)
# Viewing Data
# -------------
# Viewing the top rows of the GPU dataframe.
print(df.head(2))
# Sorting by values.
print(df.sort_values(by='a', ascending=False))
# Selection
# ------------
#
# ## Getting
# Selecting a single column, which yields a `cudf.Series`, equivalent to `df.a`.
print(df['a'])
# ## Selection by Label
# Selecting rows from index 2 to index 5 from columns 'a' and 'b'.
print(df.loc[2:5, ['a', 'b']])
# ## Selection by Position
# Selecting by integer slicing, like numpy/pandas.
print(df[3:5])
# Selecting elements of a `Series` with direct index access.
print(s[2])
# ## Boolean Indexing
# Selecting rows in a `Series` by direct Boolean indexing.
print(df.b[df.b > 15])
# Selecting values from a `DataFrame` where a Boolean condition is met, via the `query` API.
print(df.query("b == 3"))
# Supported logical operators include `>`, `<`, `>=`, `<=`, `==`, and `!=`.
# ## Setting
# Missing Data
# ------------
# Missing data can be replaced by using the `fillna` method.
print(s.fillna(999))
# Operations
# ------------
# ## Stats
# Calculating descriptive statistics for a `Series`.
print(s.mean(), s.var())
# ## Applymap
# Applying functions to a `Series`.
# +
def add_ten(num):
return num + 10
print(df['a'].applymap(add_ten))
# -
# ## Histogramming
# Counting the number of occurrences of each unique value of variable.
print(df.a.value_counts())
# ## String Methods
# Merge
# ------------
# ## Concat
# Concatenating `Series` and `DataFrames` row-wise.
print(cudf.concat([s, s]))
print(cudf.concat([df.head(), df.head()], ignore_index=True))
# ## Join
# Performing SQL style merges.
# +
df_a = cudf.DataFrame()
df_a['key'] = [0, 1, 2, 3, 4]
df_a['vals_a'] = [float(i + 10) for i in range(5)]
df_b = cudf.DataFrame()
df_b['key'] = [1, 2, 4]
df_b['vals_b'] = [float(i+10) for i in range(3)]
df_merged = df_a.merge(df_b, on=['key'], how='left')
print(df_merged.sort_values('key'))
# -
# ## Append
# Appending values from another `Series` or array-like object. `Append` does not support `Series` with nulls. For handling null values, use the `concat` method.
print(df.a.head().append(df.b.head()))
# ## Grouping
# Like pandas, cuDF supports the Split-Apply-Combine groupby paradigm.
df['agg_col1'] = [1 if x % 2 == 0 else 0 for x in range(len(df))]
df['agg_col2'] = [1 if x % 3 == 0 else 0 for x in range(len(df))]
# Grouping and then applying the `sum` function to the grouped data.
print(df.groupby('agg_col1').sum())
# Grouping hierarchically then applying the `sum` function to grouped data.
print(df.groupby(['agg_col1', 'agg_col2']).sum())
# Grouping and applying statistical functions to specific columns, using `agg`.
print(df.groupby('agg_col1').agg({'a':'max', 'b':'mean', 'c':'sum'}))
# Reshaping
# ------------
# Time Series
# ------------
#
# cuDF supports `datetime` typed columns, which allow users to interact with and filter data based on specific timestamps.
# +
import datetime as dt
date_df = cudf.DataFrame()
date_df['date'] = pd.date_range('11/20/2018', periods=72, freq='D')
date_df['value'] = np.random.sample(len(date_df))
search_date = dt.datetime.strptime('2018-11-23', '%Y-%m-%d')
print(date_df.query('date <= @search_date'))
# -
# Categoricals
# ------------
# cuDF supports categorical columns.
# +
pdf = pd.DataFrame({"id":[1,2,3,4,5,6], "grade":['a', 'b', 'b', 'a', 'a', 'e']})
pdf["grade"] = pdf["grade"].astype("category")
gdf = cudf.DataFrame.from_pandas(pdf)
print(gdf)
# -
# Accessing the categories of a column.
print(gdf.grade.cat.categories)
# Accessing the underlying code values of each categorical observation.
print(gdf.grade.cat.codes)
# Plotting
# ------------
#
# Converting Data Representation
# --------------------------------
# ## Pandas
# Converting a cuDF `DataFrame` to a pandas `DataFrame`.
print(df.head().to_pandas())
# ## Numpy
# Converting a cuDF `DataFrame` to a numpy `rec.array`.
print(df.to_records())
# Converting a cuDF `Series` to a numpy `ndarray`.
print(df['a'].to_array())
# ## Arrow
# Converting a cuDF `DataFrame` to a PyArrow `Table`.
print(df.to_arrow())
# Getting Data In/Out
# ------------------------
#
# ## CSV
# Writing to a CSV file, by first sending data to a pandas `DataFrame` on the host.
df.to_pandas().to_csv('foo.txt', index=False)
# Reading from a csv file.
df = cudf.read_csv('foo.txt', delimiter=',',
names=['a', 'b', 'c', 'a1', 'a2'],
dtype=['int64', 'int64', 'int64', 'int64', 'int64'],
skiprows=1)
print(df)
# ## Known Issue(s)
# If you are attempting to perform Boolean indexing directly or using the `query` API, you might see an exception like:
#
# ```
# ---------------------------------------------------------------------------
# AssertionError Traceback (most recent call last)
# ...
# 103 from .numerical import NumericalColumn
# --> 104 assert column.null_count == 0 # We don't properly handle the boolmask yet
# 105 boolbits = cudautils.compact_mask_bytes(boolmask.to_gpu_array())
# 106 indices = cudautils.arange(len(boolmask))
#
# AssertionError:
#
# ```
# Boolean indexing a `Series` containing null values will cause this error. Consider filling or removing the missing values.
#
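# A minimal pandas sketch of that workaround (cuDF mirrors the pandas API here); filling the nulls first makes the boolean mask well-defined everywhere:

```python
import pandas as pd

s = pd.Series([1.0, None, 3.0])
# Fill the missing value before building the boolean mask.
filled = s.fillna(0)
result = filled[filled > 1]
print(result.tolist())  # [3.0]
```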
# (end of 10min_to_cuDF.ipynb)
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## [SEE THIS NOTEBOOK FOR HOW WE DEFINED "SALTINESS"](https://colab.research.google.com/drive/1M8gBDOieb8dcBW4Sr7TKlnlQD1s6-u-V)
# + [markdown] _uuid="01bd76ab9782c73646930a8c6ac495eb93d52786"
# # Hacker News Data Processing
#
# + _cell_guid="b1076dfc-b9ad-4769-8c92-a6c4dae69d19" _uuid="8f2839f25d086af736a60e9eeb907d3b93b6e0e5"
from textblob import TextBlob
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import html
import re
from tqdm import tqdm_pandas
from tqdm import tqdm_notebook as tqdm
# Load TQDM
tqdm_pandas(tqdm())
import json
def save_df(df):
df.to_pickle('data/df_save.pkl')
print('Dataframe Saved')
def load_df():
df = pd.read_pickle('data/df_save.pkl')
print('Dataframe Loaded')
return df
# Load the profiler into Jupyter notebook
# %load_ext line_profiler
# -
# ## Part A. Get the data.
# ### Ran the query in Google BigQuery and copied the files to Google Cloud Storage
#
# Then I moved the query results to Google Cloud Storage and downloaded them onto my notebook server via the Jupyter terminal.
# 
# ### Dask gave me errors when I tried to read `*.csv`, so I took a manual approach:
# %%time
# Dask kept giving me errors, so read the ten CSV chunks manually.
frames = []
for i in range(10):
    frames.append(pd.read_csv(f'data/hn_main_query/hacker_news_full_comments{i}.csv',
                              engine='python'))
    print(f'dataframe{i + 1}')

# %%time
df = pd.concat(frames)
# ### Inspect shape of concatenated csv import. Verify that all rows are present. (15,825,859)
# + _uuid="b88e551ef1d6c2562e1beff9d60be3c88e83ce02"
display(df.shape)
display(df.head(10))
# -
# ### Remove all `commentor` and `text` NaN rows from the Dataframe.
# +
nans = df.text.isna().sum()
print('This many nans:', nans)
df = df.dropna(subset=['commentor', 'text'])
print('New Shape after nan removal:', df.shape)
nans = df.parent_type.isna().sum()
print('This many parent_type nans:', nans)
nans = df.story_title.isna().sum()
print('This many story_title nans:', nans)
# -
# ### Fill in empty `story_title` values for each comment.
# %%time
df['story_title'] = df.story_title.fillna('Another Comment')
nans = df.story_title.isna().sum()
print('This many story_title nans:', nans)
display(df.head(3))
# ## Part B. Apply Sentiment Analysis and more Text Cleaning
# ### Define utility functions
# + _uuid="800fd0be97d8d4111ebcb7c6d5968cc944bc08b7"
def encode_decode(text):
"""Utility function to clean text by decoding HTML text."""
unescaped = html.unescape(text)
return unescaped
def noHTML(text):
"""Utility function to clean text by removing HTML flags."""
cleanr = re.compile('<.*?>')
cleantext = re.sub(cleanr, ' ', text)
return cleantext
def noURLS(text):
"""Utility function to clean text by removing links using simple regex."""
return ''.join(re.sub(r"http\S+", "", text))
def get_sentiment(text):
"""Evaluates the sentiment of given text.
Utility function to classify sentiment of passed text
using textblob's sentiment method. Return the polarity
score as a float within the range [-1.0, 1.0]
Polarity score is a float within the range [-1.0, 1.0]
where negative value indicates negative text
and positive value indicates that the given
text is positive.
Subjectivity is a float within the range [0.0, 1.0]
where 0.0 is very objective and 1.0 is very subjective.
Args:
text: the text of a comment.
Returns:
polarity
subjectivity
"""
analysis = TextBlob(text).sentiment
polarity = analysis.polarity
subjectivity = analysis.subjectivity
return polarity, subjectivity
# -
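# A standalone sketch of the cleaning pipeline above (stdlib only; the sample string is made up for illustration):

```python
import html
import re

def clean_comment(text):
    # Same pipeline as above: decode entities, strip tags, drop URLs.
    text = html.unescape(text)           # encode_decode
    text = re.sub(r'<.*?>', ' ', text)   # noHTML
    return re.sub(r'http\S+', '', text)  # noURLS

sample = 'I &amp; my <i>friends</i> read http://example.com daily'
print(clean_comment(sample))
```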
# ### Apply text cleaning to comment texts and create new column in Dataframe
df['cleaned_comment'] = df.text.progress_apply(lambda x: noURLS(noHTML(encode_decode(x))))
df['cleaned_title'] = df.story_title.progress_apply(lambda x: noURLS(noHTML(encode_decode(x))))
# ### Apply sentiment analysis (TextBlob.polarity) to each cleaned Comment text.
df['comment_sentiment'] = df.cleaned_comment.progress_apply(get_sentiment)
df.iloc[:, -6:].head(3)
# ### Drop the original *uncleaned* columns
df = df.drop(columns=['text', 'story_title'])
# ### Split the comment sentiment tuple into `polarity` and `subjectivity`, then drop `comment_sentiment`
# %%time
alt_df = pd.DataFrame(df['comment_sentiment'].tolist(), index=df.index)
alt_df.columns = ['polarity', 'subjectivity']
df["comment_polarity"] = alt_df.polarity
df["comment_subjectivity"] = alt_df.subjectivity
df = df.drop(columns=['comment_sentiment'])
df.head(3)
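# The tuple-to-columns split above can be sketched on toy data (hypothetical sentiment pairs):

```python
import pandas as pd

# Each element is a (polarity, subjectivity) tuple, as returned by get_sentiment.
sentiments = pd.Series([(0.5, 0.9), (-0.2, 0.1)])
parts = pd.DataFrame(sentiments.tolist(), index=sentiments.index,
                     columns=['polarity', 'subjectivity'])
print(parts['polarity'].tolist())  # [0.5, -0.2]
```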
# %%time
df.to_csv('data/hn_all_w_sentiment_cleaned_inplace.csv',index=False)
# ## Part C. Load cleaned / analyzed data back into dataframe from CSV
# %%time
# IMPORT FROM CSV's
df = pd.read_csv('data/hn_all_w_sentiment_cleaned_inplace.csv')
print(df.shape)
# ### Do some Data Cleanup
# +
# %%time
nans = df.comment_deleted.isna().sum()
print('This many nans:', nans)
nans = df.comment_dead.isna().sum()
print('This many nans:', nans)
nans = df.parent_deleted.isna().sum()
print('This many nans:', nans)
nans = df.parent_dead.isna().sum()
print('This many nans:', nans)
df['comment_deleted'] = df.comment_deleted.fillna(value=False)
df['comment_dead'] = df.comment_dead.fillna(value=False)
df['parent_deleted'] = df.parent_deleted.fillna(value=False)
df['parent_dead'] = df.parent_dead.fillna(value=False)
# -
# %%time
nans = df.comment_deleted.isna().sum()
print('This many comment_deleted nans:', nans)
nans = df.comment_dead.isna().sum()
print('This many comment_dead nans:', nans)
nans = df.parent_deleted.isna().sum()
print('This many parent_deleted nans:', nans)
nans = df.parent_dead.isna().sum()
print('This many parent_dead nans:', nans)
nans = df.ranking.isna().sum()
print('This many ranking nans:', nans)
# ### Oops, looks like `ranking` column was actually empty for each `comment` on that BigQuery `full` table. I'll need to pull it in from a different table and merge it here by commentid from the comments table.
#
# After a bit of investigation I found that the table `bigquery-public-data.hacker_news.full_201510` does contain `comment_ranking` type entries, but the `bigquery-public-data.hacker_news.full` (the one that is continuously updated) does not.
#
# I'm going to add in the comment_ranking data as a column, but not calculate any summary stats off it for the API.
#
# 
# %%time
comment_ranking_df = pd.read_csv("data/comment_ranking.csv")
comment_ranking_df = comment_ranking_df[['id','ranking']].copy()
comment_ranking_df.head(3)
# ### Add in the missing `ranking` data
# %%time
df = df.drop(columns=['ranking'])
df = df.merge(comment_ranking_df, how='left', left_on='commentid',
right_on='id')
nans = df.ranking.isna().sum()
print('This many ranking nans:', nans)
print(df.columns)
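# A toy sketch of why a left merge leaves `ranking` NaNs (the ids below are made up):

```python
import pandas as pd

left = pd.DataFrame({'commentid': [10, 11, 12]})
right = pd.DataFrame({'id': [11, 12], 'ranking': [1, 2]})
merged = left.merge(right, how='left', left_on='commentid', right_on='id')
# commentid 10 has no match in `right`, so its ranking stays NaN —
# those are the nans counted above.
print(int(merged['ranking'].isna().sum()))  # 1
```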
# ## Part D. Make some features to aid in the final stat aggregation.
# ### Fill Nan's and rename columns so API (JSON) is easier to read.
# +
# %%time
df = df.fillna(value=0)
df = df.rename(columns={'author': 'parent_author',
'cleaned_title': 'parent_title',
'score': 'parent_score',
'story_time': 'parent_time',
'ranking': 'comment_rank',
'commentid':'comment_id',
'parentid':'parent_id'})
df = df.drop(columns=['id'])
df = df.sort_values(by = ['comment_subjectivity','comment_polarity'])
display(df.head(3))
display(df.tail(3))
# -
# ### Make a copy of the Dataframe to preserve our work in case of error.
data = df.copy()
# ### Normalize comment subjectivity onto a `-1 to 1` subjective-to-objective spectrum. Create booleans for +/- classes.
# +
# %%time
def sentiment_helpers(df):
"""Creates new columns in the given dataframe
Comment Subjectivity:
Type: Float, 0.0 to 1.0,
Legend: 0 = Objective, 1 = Subjective
Calc: x = TextBlob(text).sentiment.subjectivity,
f(x): get_sentiment
Use: Seperate critisism from saltiness & enthusiasum from support.
As subjectivity decreases sentiment becomes less personal,
more objective.
Comment Saltiness:
Type: Float, -1.0 to 0.0 to 1.0,
Legend: -1 = Salty, 0 = Neutral, 1 = Enthuisastic
Calc: TextBlob(text).sentiment.subjectivity *
TextBlob(text).sentiment.polarity
Use: Seperate critisism from saltiness & enthusiasum from support.
As subjectivity decreases sentiment is less personal,
more objective.
Subjectivity_spectrum(Revised):
Type: Float, -1.0 to 1.0,
Legend: -1 = Objective, 1 = Subjective
Calc: x = TextBlob(text).sentiment.subjectivity, Negated, center on
zero, multiply by 2.
f(x): get_sentiment
Use: Seperate critisism from saltiness & enthusiasum from support.
Used for graphing -1 to 1 Arc & Bar data.
As subjectivity decreases sentiment is less personal,
more objective.
Boolean Columns:
Used for filtering - is_subjective, is_negative, is_salty
Args:
df: The full comment dataframe.
"""
df['comment_saltiness'] = (df['comment_polarity']
.multiply(df['comment_subjectivity']))
df['subjectivity_spectrum'] = (df['comment_subjectivity'].multiply(-1)
.add(.5).multiply(2))
df['is_subjective'] = (df['comment_subjectivity']
.map(lambda x: True if (x > .5) else False))
df['is_negative'] = (df['comment_polarity']
.map(lambda x: True if (x < 0) else False))
df['is_salty'] = (df['comment_saltiness']
.map(lambda x: True if (x < 0) else False))
print ("Sentiment helpers created...")
sentiment_helpers(data)
# -
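# The spectrum transform above, written as a plain function to make the mapping explicit:

```python
def subjectivity_spectrum(s):
    # (-s + 0.5) * 2 == 1 - 2*s: maps subjectivity 0.0 (objective) to +1.0
    # and 1.0 (subjective) to -1.0, matching the pandas chain above.
    return (-s + 0.5) * 2

print(subjectivity_spectrum(0.0), subjectivity_spectrum(1.0))  # 1.0 -1.0
```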
# %%time
# Should see spectrum from -1 to 1, and saltiness -1 to 0 (from diminishing effect of objectivity).
data = data.sort_values(by = ['subjectivity_spectrum','comment_saltiness'])
display(data.iloc[:,-6:].head(3))
display(data.iloc[:,-6:].tail(3))
# ### Create `quadrant` column for categorical class for use in Groupby function.
# +
# %%time
def determine_quadrant(df):
"""Calculates Quadrants and creates column.
Creates columns for a polarity/subjectivity quadrant type groupby filter.
Quadrants are as follows:
`neg_obj` - Critic
`neg_sub` - Salty
`pos_obj` - Advocate
`pos_sub` - Happy
Args:
df: The full comment dataframe.
Returns:
        df: The same dataframe with an added `quadrant` column.
"""
df['polarity'] = (df['comment_polarity'].map(lambda x:'neg'
if (x < 0) else 'pos'))
df['basis'] = (df['comment_subjectivity'].map(lambda x: 'sub'
if (x > .5) else 'obj'))
df = df.assign(quadrant=[str(x) + '_' + str(y) for x, y
in zip(df['polarity'], df['basis'])])
df = df.drop(columns=['polarity','basis'])
return df
data = determine_quadrant(data)
# -
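# A standalone sketch of the quadrant assignment on made-up scores:

```python
import pandas as pd

# Hypothetical polarity/subjectivity values, for illustration only.
toy = pd.DataFrame({'comment_polarity': [-0.4, 0.8, 0.1, -0.9],
                    'comment_subjectivity': [0.2, 0.9, 0.4, 0.7]})
toy['polarity'] = toy['comment_polarity'].map(lambda x: 'neg' if x < 0 else 'pos')
toy['basis'] = toy['comment_subjectivity'].map(lambda x: 'sub' if x > .5 else 'obj')
toy['quadrant'] = toy['polarity'] + '_' + toy['basis']
print(toy['quadrant'].tolist())  # ['neg_obj', 'pos_sub', 'pos_obj', 'neg_sub']
```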
display(data.shape)
display(data.iloc[0:2, -8:])
# ### Send every row of these columns into a Json string.
# +
# %%time
def create_comment_JSON_records(df):
"""Turns comments + stats into json objects, creates column in given df.
Saves filtered dataframe columns as json object oriented on row records.
Decodes the JSON string into a list containing 1 JSON object per row.
Adds new column in the given dataframe that stores the row's JSON Object.
Args:
df: The full comment dataframe.
"""
saved = (df[['commentor', 'comment_time', 'comment_saltiness',
'comment_polarity', 'comment_subjectivity',
'subjectivity_spectrum', 'is_salty', 'is_subjective',
'is_negative', 'parent_type', 'parent_author', 'parent_title',
'cleaned_comment', 'comment_rank', 'comment_id', 'parent_id']]
.to_json(orient='records'))
decoded = json.JSONDecoder().decode(saved)
df['comment_JSON'] = decoded
print( "JSON Uploaded")
create_comment_JSON_records(data)
# -
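# The records round-trip above in miniature (toy rows, same `to_json(orient='records')` idiom):

```python
import json
import pandas as pd

toy = pd.DataFrame({'commentor': ['ann', 'bob'],
                    'comment_polarity': [0.1, -0.5]})
# One JSON object per row, decoded back into a Python list of dicts.
records = json.loads(toy.to_json(orient='records'))
print(records[0])  # {'commentor': 'ann', 'comment_polarity': 0.1}
```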
display(data.shape)
display(data.iloc[0:2, -6:])
data.iloc[3:4].comment_JSON.values
# ### Get a count of how many unique commentors are in this data
# %%time
userKeys = data["commentor"]
ukeys, index = np.unique(userKeys, return_index=True)
display(ukeys.shape)
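# The second positional argument to `np.unique` above is `return_index=True`; a small sketch of what it returns:

```python
import numpy as np

users = np.array(['bob', 'ann', 'bob', 'cat', 'ann'])
# Unique keys come back sorted; index holds each key's first
# occurrence in the original array.
ukeys, index = np.unique(users, return_index=True)
print(list(ukeys))  # ['ann', 'bob', 'cat']
print(list(index))  # [1, 0, 3]
```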
# ### Export a Sample of our data to compare polarity vs saltiness to see how it affects the spread.
#
# Doing this to gut-check our assumptions of how our calculated metric `saltiness` will function.
# (*See link at top of notebook.*)
# +
data_sample = data[['comment_polarity','comment_saltiness',
'comment_subjectivity','is_subjective','is_negative',
'is_salty','quadrant']].sample(n=100000, random_state=42)
data_sample.to_pickle('data/polarity_salty_compare.pkl')
print('Dataframe Saved')
# -
# ## Part E. Create groupby stats for our selected metrics
# ### Calculate Commentor `count comments` & `first/latest` comment dates.
#
# Also create Dataframe_Commentor_Table, `df_ct`.
# +
# %%time
def commentor_stats(df):
"""Returns stats about the commentor's comment history
Groups by `commentor` and calculates agg stats for 'count',`min`, `max`.
Columns Created:
`count_comments` - count the number of comments.
`time_of_last_comment` - Unix Epoch time of the last comment before our
data was pulled on Mar 16, 2019, 12:24:46 AM.
`time_of_first_comment` - Unix Epoch time of the earliest comment.
Args:
df: The full comment dataframe.
Returns:
out: A dataframe with index `commentor` and created columns.
"""
out = (df.groupby('commentor', as_index=False)['comment_time']
.agg(['count','max','min']))
out = out.rename({'count': 'count_comments',
'max': 'time_of_last_comment',
'min': 'time_of_first_comment'}, axis='columns')
print("Calculated commentor stats.")
return out
# Run Function & create df_ct
df_ct = commentor_stats(data)
# -
display(df_ct.shape)
display(df_ct.iloc[:, -6:].head(3))
# ### Group `Count` & `Sum` of the Saltiness Scores by Month for Plotting. Create list of plotpoints for each Commentor.
# +
# %%time
def calculate_monthly_summaries(df):
"""Creates summary of stats over `commentors` history by month for graphing.
Calculates the `count` and `sum` aggregated stats of `comment_saltiness`
grouped by `is_salty` & `month_text`.
Formats the stats into a JSON object for each commenters' period.
Concatenates JSON Objects into a sequential sparse list (no empty months)
for each commentor.
    Stats in the `monthly_plot` list are:
y_m: Year-Month period of stat aggregation from the `month-text` group.
c_h: Stat, count of Happy Comments for the month.
c_s: Stat, count of Salty Comments for the month.
t_h: Stat, total (sum) of Happy Comment Scores for the month.
t_s: Stat, total (sum) of Salty Comment Scores for the month.
Args:
df: The full comment dataframe.
Returns:
out: A dataframe with index `commentor`, and a column `monthly_plot`.
"""
df['month_text'] = (pd.to_datetime(df['comment_time'],unit='s')
.dt.strftime('%Y_%m')).str[-5:]
out = df['comment_saltiness'].groupby([df['commentor'],
df['month_text'],
df['is_salty']]
).agg(['count','sum']).unstack()
out.columns = [''.join(str(col)).strip() for col in out.columns.values]
out = out.rename({"('count', False)": 'c_h', # Count Happy
"('count', True)": 'c_s', # Count Salty
"('sum', False)": 't_h', # Sum Happy
"('sum', True)": 't_s'}, axis='columns') # Sum Salty
print("Calculated monthly stats")
# Combine the monthly_stats into an object.
out.reset_index(inplace=True)
out = out.rename({"month_text": 'y_m'},axis='columns')
out = out.fillna(0.0)
out["t_h"] = out["t_h"].round(decimals=2)
out["t_s"] = out["t_s"].round(decimals=2)
out_json = (out[["y_m","t_s","t_h","c_s","c_h"]].to_json(orient='records'))
decoded = json.JSONDecoder().decode(out_json)
out['monthly_graph'] = decoded
    # Combine the monthly_stats objects into a list for each commentor.
    out = out.sort_values(['commentor', 'y_m'], ascending=[True, True])
keys, values = out[['commentor', 'monthly_graph']].values.T
ukeys, index = np.unique(keys, True)
arrays = np.split(values,index[1:])
df = pd.DataFrame(data = {'monthly_plot':[list(a) for a in arrays]},
index = ukeys)
print("Created monthly stat lists.")
return df
# Run Function & Merge into df_ct
df_ct = pd.merge(df_ct, calculate_monthly_summaries(data),
left_index=True, right_index=True, how='left')
# -
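# The group-into-lists idiom used above (`np.unique` + `np.split`) on toy data; it relies on the keys being pre-sorted by commentor:

```python
import numpy as np

keys = np.array(['ann', 'ann', 'bob'])   # already sorted, as in the notebook
values = np.array([10, 20, 30])
ukeys, index = np.unique(keys, return_index=True)
# Split the values at each new key's first index -> one list per key.
arrays = np.split(values, index[1:])
print([list(a) for a in arrays])  # [[10, 20], [30]]
```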
display(df_ct.shape)
display(df_ct.iloc[:, -6:].head(5))
display(df_ct.iloc[3:5].monthly_plot.values)
# ### Create the top 50 `top_cmnts_s` list for each Commentor - Filter by `is_salty`
# +
# %%time
def top_salty_comments(df):
"""Creates list object for each `commentor` of top 50 saltiest comments.
Filters by `is_salty` = True.
Sorts values by `comment_saltiness` from the most salty (lowest value).
Groups dataframe by `commentor'.
Concatenates top 50 `comment_JSON` comment objects into a list object.
Creates a new column from the list of obj(commentor's list of json objects).
Args:
df: The full comment dataframe.
Returns:
df: A dataframe w/ index `commentor` and a column 'top_cmnts_s'.
"""
# Grab the right comments, pulls up to 50 comments by saltiest.
df = df[df['is_salty'] == True]
df = df.sort_values(['commentor','comment_saltiness'],
ascending=[True, True])
df = (df[['commentor','comment_JSON']].groupby(df['commentor']).head(50)
.reset_index(drop=True))
# Group the comments into a list for each user.
keys, values = df.values.T
ukeys, index = np.unique(keys, True)
arrays = np.split(values,index[1:])
df = pd.DataFrame(data = {'top_cmnts_s':[list(a) for a in arrays]},
index = ukeys)
print("Grabbed the SALTIEST comments.")
return df
# Run Function & Merge into df_ct
df_ct = pd.merge(df_ct, top_salty_comments(data),
left_index=True, right_index=True, how='left')
# -
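# The sort-then-`groupby().head(n)` idiom in miniature (toy scores, top 2 instead of 50):

```python
import pandas as pd

toy = pd.DataFrame({'commentor': ['a', 'a', 'a', 'b'],
                    'saltiness': [-0.9, -0.5, -0.1, -0.3]})
# Sorting ascending puts the saltiest (most negative) comments first,
# then head(2) keeps at most two rows per commentor.
toy = toy.sort_values(['commentor', 'saltiness'])
top2 = toy.groupby('commentor').head(2)
print(top2['saltiness'].tolist())  # [-0.9, -0.5, -0.3]
```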
display(df_ct.shape)
display(df_ct.iloc[:, -6:].head(5))
# ### Get the `top_salty_comment` for each Commentor - Need it for `Rank` Lists
# +
# %%time
def the_top_salty_comment(df):
"""Returns the top salty comment of each `commentor`.
Filters by `is_salty`
Sorts on `commentor` and `comment_saltiness` to bring saltiest to top.
Groups dataframe by `commentor`.
Creates a list containing the top comment for each `commentor`.
Turns the list into a new column: `top_salty_comment`
Args:
df: The full comment dataframe.
Returns:
df: A dataframe w/ index `commentor` and column `top_salty_comment`.
"""
# Grab the right comments, will pull the top salty comment.
df = df[df['is_salty'] == True]
df = df.sort_values(['commentor','comment_saltiness'],
ascending=[True, True])
df = (df[['commentor','comment_JSON']].groupby(df['commentor']).head(1)
.reset_index(drop=True))
# Group the comments into a list for each user.
keys, values = df.values.T
ukeys, index = np.unique(keys, True)
arrays = np.split(values,index[1:])
df = pd.DataFrame(data = {'top_salty_comment':[list(a) for a in arrays]},
index = ukeys)
print("Grabbed the top SALTIEST comment.")
return df
# Run Function & Merge into df_ct
df_ct = pd.merge(df_ct, the_top_salty_comment(data),
left_index=True, right_index=True, how='left')
# -
# ### Group the top 20 `Happy` Comments for each commentor - Filter by `is_salty`
# +
# %%time
def top_happy_comments(df):
""" Creates list object for each `commentor` of their top 50 happy comments.
Filters by `is_salty` = False
Sorts values by `comment_saltiness` from the most happy (highest value).
Groups dataframe by `commentor'.e
Concatenates top 20 `comment_JSON` comment objects into a list object.
Creates a new column from the list of obj(commentor's list of json objects).
Args:
df: The full comment dataframe.
Returns:
df: A dataframe w/ index `commentor` and column `top_cmnts_h`.
"""
# Grab the right comments, will pull up to 20 comments by happiest.
df = df[df['is_salty'] == False]
df = df.sort_values(['commentor','comment_saltiness'],
ascending=[True, False])
df = (df[['commentor','comment_JSON']].groupby(df['commentor']).head(20)
.reset_index(drop=True))
# Group the comments into a list for each user.
keys, values = df.values.T
ukeys, index = np.unique(keys, True)
arrays = np.split(values,index[1:])
df = pd.DataFrame(data = {'top_cmnts_h':[list(a) for a in arrays]},
index = ukeys)
print("Grabbed the HAPPIEST comments.")
return df
# Run Function & Merge into df_ct
df_ct = pd.merge(df_ct, top_happy_comments(data),
left_index=True, right_index=True, how='left')
# -
display(df_ct.shape)
display(df_ct.iloc[:, -6:].head(3))
# ### Calculate stats for Saltiness - `Overall`
# +
# %%time
def saltiness_stats(df):
"""Creates stats of `comment_saltiness` overall.
First groups dataframe by `commentor`.
    Aggregates `sum`, `mean`, `min`, & `max` stats of `comment_saltiness` overall.
    Creates new column for each aggregate stat: 4 new columns.
Args:
df: The full comment dataframe.
Returns:
out: A dataframe with index `commentor`, and a column for each agg.stat.
"""
out = (df.groupby('commentor', as_index=False)['comment_saltiness']
.agg(['sum', 'mean', 'min', 'max']))
out = out.rename({'sum': 'sum_slt_oall',
'mean': 'average_slt_oall',
'min': 'min_slt_oall',
'max': 'max_slt_oall'}, axis='columns')
print("Calculated saltiness overall stats.")
return out
# Run Function & Merge into df_ct
df_ct = pd.merge(df_ct, saltiness_stats(data), left_index=True,
right_index=True, how='left')
# -
display(df_ct.shape)
display(df_ct.iloc[:, -6:].head(3))
# ### Calculate stats for Saltiness - Split `Happy/Salty`
# +
# %%time
def saltiness_stats_split(df):
"""Creates stats of `comment_saltiness` by `is_salty`.
First groups dataframe by `commentor`.
Aggregates `count`, `sum`, & `mean` stats of `comment_saltiness` by `is_salty`.
Creates new column for each aggregate stat: 6 new columns.
Args:
df: The full comments dataframe.
Returns:
out: A dataframe with index `commentor`, and a column for each agg.stat.
"""
out = (df['comment_saltiness'].groupby([df['commentor'],df['is_salty']])
.agg(['count','sum', 'mean']).unstack())
out.columns = [''.join(str(col)).strip() for col in out.columns.values]
out = out.rename({"('count', False)": 'cnt_slt_h',
"('count', True)": 'cnt_slt_s',
"('sum', False)": 'sum_slt_h',
"('sum', True)": 'sum_slt_s',
"('mean', False)":"avg_slt_h",
"('mean', True)":"avg_slt_s"},
axis='columns')
print("Calculated saltiness grouped stats - split by salty/happy.")
return out
# Run Function & Merge into df_ct
df_ct = pd.merge(df_ct, saltiness_stats_split(data), left_index=True,
right_index=True, how='left')
# -
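# A toy sketch of the unstack-and-flatten step, showing where column labels like `"('count', False)"` come from:

```python
import pandas as pd

toy = pd.DataFrame({'commentor': ['a', 'a', 'b'],
                    'is_salty': [True, False, True],
                    'comment_saltiness': [-0.4, 0.2, -0.9]})
out = (toy['comment_saltiness']
       .groupby([toy['commentor'], toy['is_salty']])
       .agg(['count', 'sum']).unstack())
# Each column is a (stat, is_salty) tuple; str(col) turns it into the
# literal labels that the rename calls above then map to short names.
out.columns = [''.join(str(col)).strip() for col in out.columns.values]
print(list(out.columns))
```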
display(df_ct.shape)
display(df_ct.iloc[:, -6:].head(3))
# ### Calculate stats for Saltiness - Split `Quadrant`
# +
# %%time
def saltiness_stats_quadrants(df):
"""Creates stats of comment_saltiness by quadrant.
First groups dataframe by `commentor`.
Aggregates `sum`, `count`, & `mean` of `comment_saltiness` by `quadrant`.
Creates new column for each aggregate stat: 12 new columns.
Quadrants are as follows:
`neg_obj` - Critic
`neg_sub` - Salty
`pos_obj` - Advocate
`pos_sub` - Happy
Args:
df: The full-comment dataframe.
Returns:
out: A dataframe with index `commentor`, and a column for each agg.stat.
"""
out = (df['comment_saltiness'].groupby([df['commentor'],df['quadrant']])
.agg(['sum', 'mean', 'count']).unstack())
out = out.rename({"sum": 'sum_slt',"mean":"avg_slt",
'count':"cnt_slt"},axis='columns')
out.columns = ['_'.join(col).strip() for col in out.columns.values]
print("Calculated saltiness quadrant stats.")
return out
# Run Function & Merge into df_ct
df_ct = pd.merge(df_ct, saltiness_stats_quadrants(data), left_index=True,
right_index=True, how='left')
# -
display(df_ct.shape)
display(df_ct.iloc[:, -6:].head(3))
# ### Calculate stats for Subjectivity - `Overall`
# +
# %%time
def subjectivity_stats(df):
"""Creates overall stats of comment_subjectivity.
First groups dataframe by `commentor`.
Aggregates stats `Sum` & `Mean` of `comment_subjectivity` overall.
Creates new column for each aggregate stat: 2 new columns.
Args:
df: The full-comment dataframe.
Returns:
out: A dataframe with index `commentor`, and a column for each agg.stat.
"""
out = (df.groupby('commentor', as_index=False)['comment_subjectivity']
.agg(['sum', 'mean']))
out = out.rename({'sum': 'sum_subj_oall',
'mean': 'avg_subj_oall'}, axis='columns')
print("Calculated subjectivity overall stats.")
return out
# Run Function & Merge into df_ct
df_ct = pd.merge(df_ct, subjectivity_stats(data),
left_index=True, right_index=True, how='left')
# -
display(df_ct.shape)
display(df_ct.iloc[:, -6:].head(3))
# ### Calculate stats for Subjectivity - Split `Happy/Salty`
# +
# %%time
def subjectivity_stats_split(df):
"""Creates stats from dataframe.
First groups dataframe by `commentor`.
Aggregates `Sum` & `Mean` of `comment_subjectivity` grouped by `is_salty`.
Creates new column for each aggregate stat: 4 new columns.
Args:
df: The full-comment dataframe.
Returns:
out: A dataframe with index `commentor`, and a column for each agg.stat.
"""
out = (df['comment_subjectivity'].groupby([df['commentor'],df['is_salty']])
.agg(['sum', 'mean']).unstack())
out.columns = [''.join(str(col)).strip() for col in out.columns.values]
out = out.rename({"('sum', False)": 'sum_subj_h',
"('sum', True)": 'sum_subj_s',
"('mean', False)":"avg_subj_h",
"('mean', True)":"avg_subj_s"},
axis='columns')
print("Calculated commentor subjectivity stats, split by salty/happy")
return out
# Run Function & Merge into df_ct
df_ct = pd.merge(df_ct, subjectivity_stats_split(data),
left_index=True, right_index=True, how='left')
# -
display(df_ct.shape)
display(df_ct.iloc[:, -5:].head(3))
# ### Calculate stats for Polarity - `Overall`
# +
# %%time
def polarity_stats(df):
"""Creates overall stats for comment_polarity.
First groups dataframe by `commentor`.
Aggregates stats `Sum & Mean` of `comment_polarity` overall.
Creates new column for each aggregate stat: 2 new columns.
Args:
df: The full-comment dataframe.
Returns:
out: A dataframe with index `commentor`, and a column for each agg.stat.
"""
out = df.groupby('commentor', as_index=False)['comment_polarity'].agg(['sum', 'mean'])
out = out.rename({'sum': 'sum_polr_oall',
'mean': 'avg_polr_oall'}, axis='columns')
print("Calculated commentor polarity stats, overall")
return out
# Run Function & Merge into df_ct
df_ct = pd.merge(df_ct, polarity_stats(data),
left_index=True, right_index=True, how='left')
# -
display(df_ct.shape)
display(df_ct.iloc[:, -6:].head(3))
# ### Calculate stats for Polarity - Split `Happy/Salty`
# +
# %%time
def polarity_stats_split(df):
"""Creates stats for comment_polarity grouped by `is_salty`
First groups dataframe by `commentor`.
Aggregates stats `sum` & `mean` of `comment_polarity` grouped by `is_salty`.
Creates new column for each aggregate stat: 4 new columns.
Args:
df: The full-comment dataframe.
Returns:
out: A dataframe with index `commentor`, and a column for each agg.stat.
"""
out = (df['comment_polarity'].groupby([df['commentor'],df['is_salty']])
.agg(['sum', 'mean']).unstack())
out.columns = [''.join(str(col)).strip() for col in out.columns.values]
out = out.rename({"('sum', False)": 'sum_polr_h',
"('sum', True)": 'sum_polr_s',
"('mean', False)":"avg_polr_h",
"('mean', True)":"avg_polr_s"},
axis='columns')
print("Calculated commentor polarity stats, split by salty/happy")
return out
# Run Function & Merge into df_ct
df_ct = pd.merge(df_ct, polarity_stats_split(data), left_index=True,
right_index=True, how='left')
# -
display(df_ct.shape)
display(df_ct.iloc[:, -6:].head(3))
# ## Part F. Define final `hn_cs` table, create rankings, create top 100 lists, and export final data.
hn_cs = df_ct.iloc[:, :-24]
# ### Create Ranking Columns for AMT of Salt Contributed Rank, Qty of Salty Comments Rank, Overall_Saltiest_Rank, & Saltiest_Trolls_Rank
# +
# %%time
def rank_sum_lifetime_amount(df):
""" Ranks all commentors by the sum of their total salt contributed.
Sorts by the sum of salty comment scores `sum_slt_s` from lowest to highest.
More negative (lower) = more salty.
Assigns a rank based on position after sorting.
Creates a new column for the rank.
Args:
df: The commentor_summary dataframe.
Returns:
out: A dataframe with index `commentor` and column `rank_lt_amt_slt`.
"""
out = (df[df['sum_slt_s'] < 0].sort_values(by=['sum_slt_s']))
out["rank_lt_amt_slt"] = (out.sum_slt_s.rank(axis=0, method='first'))
out = out["rank_lt_amt_slt"]
print("Created rank_sum_lifetime_amount.")
return out
hn_cs = pd.merge(hn_cs, rank_sum_lifetime_amount(hn_cs),left_index=True,
right_index=True, how='left')
def rank_sum_lifetime_qty(df):
"""Rank all commentors on the quantity of salty comments contributed.
Sorts by the count of salty comments `cnt_slt_s` from highest to lowest.
Assigns a rank based on position after sorting.
Creates a new column for the rank.
Args:
df: The commentor_summary dataframe.
Returns:
out: A dataframe with index `commentor` and column `rank_lt_qty_sc`.
"""
out = df.sort_values(by='cnt_slt_s', ascending=False)
out["rank_lt_qty_sc"] = (out.cnt_slt_s.rank(axis=0, method='first',
ascending=False))
out = out["rank_lt_qty_sc"]
print("Created rank_sum_lifetime_qty.")
return out
hn_cs = pd.merge(hn_cs, rank_sum_lifetime_qty(hn_cs), left_index=True,
right_index=True, how='left')
def rank_overall_saltiest(df):
"""Rank commmentors on overall sum of their lifetime happy & salty scores.
Filters commentors to ensure each:
Has some happy and some salty comments.
        Has more than 20 total comments.
Has overall Saltiness < 0.
Sorts by the overall saltiness score `sum_slt_oall`, i.e. sum of happy+salty
scores across all comments. From lowest to highest.
Assigns a rank based on position after sorting.
Creates a new column for the rank.
    Indicates: A tendency towards a majority of comments being salty.
Args:
df: The commentor_summary dataframe.
Returns:
out: A dataframe with index `commentor` and column `rank_oall_slt`.
"""
out = (df[(df['sum_slt_oall'] < 0) & (df['cnt_slt_s'] > 0) &
(df['cnt_slt_h'] > 0) & (df['count_comments'] > 20)]
.sort_values(by=['sum_slt_oall']))
out["rank_oall_slt"] = out.sum_slt_oall.rank(axis=0, method='first')
out = out["rank_oall_slt"]
print("Created rank_overall_saltiest.")
return out
hn_cs = pd.merge(hn_cs, rank_overall_saltiest(hn_cs), left_index=True,
right_index=True, how='left')
def rank_saltiest_trolls(df):
"""Rank commentors, who lack any positive comments, by overall saltiness.
Filters commentors to ensure each:
Has no happy comments.
Has overall Saltiness < 0.
Sorts by the overall saltiness score `sum_slt_oall`, i.e. sum of happy+salty
scores across all comments. From lowest to highest.
Assigns a rank based on position after sorting.
Creates a new column for the rank.
Reasoning:
Absolute Lack of positive comments is rare. Typically indicates a
purpose made "trolling" account.
Args:
df: The commentor_summary dataframe.
Returns:
        out: A dataframe with index `commentor` and column `rank_slt_trolls`.
"""
out = df[(df['top_cmnts_h'].isnull()) &
(df['sum_slt_oall'] < 0)].sort_values(by=['sum_slt_oall'])
out["rank_slt_trolls"] = out.sum_slt_oall.rank(axis=0, method='first')
out = out["rank_slt_trolls"]
print("Created rank_saltiest_trolls.")
return out
hn_cs = pd.merge(hn_cs, rank_saltiest_trolls(hn_cs), left_index=True,
right_index=True, how='left')
hn_cs.reset_index(inplace=True)
# -
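# How `rank(method='first')` behaves on toy salt totals (ascending, so the most negative sum ranks 1):

```python
import pandas as pd

salt_sums = pd.Series([-3.0, -1.0, -2.0])
# Ascending rank: the saltiest (most negative) total gets rank 1;
# method='first' breaks any ties by position.
ranks = salt_sums.rank(method='first')
print(ranks.tolist())  # [1.0, 3.0, 2.0]
```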
# ### Create Top100 Lists for AMT of Salt Contributed Rank, Qty of Salty Comments Rank, Overall_Saltiest_Rank, & Saltiest_Trolls_Rank & SAVE AS JSON
# +
# %%time
def top100_amt_salt(df):
"""Saves a .JSON of the Top 100 Commentors by `rank_lt_amt_slt`
Sorts by `rank_lt_amt_slt`
Creates dataframe of rows [0:100] by `rank_lt_amt_slt`
Saves dataframe as `top100_AMT_Salt_Contributed.json`
Args:
df: The commentor_summary dataframe w/ ranks.
"""
top100 = (df[df["rank_lt_amt_slt"].notnull()]
.sort_values(by=["rank_lt_amt_slt"]).head(100))
top100 = top100[["commentor", "rank_lt_amt_slt",
"sum_slt_s", "top_salty_comment"]]
top100.to_json('Final_Data/top100_AMT_Salt_Contributed.json',
orient='records')
print("Saved top100_AMT_Salt_Contributed.json")
top100_amt_salt(hn_cs)
def top100_qty_salty_comments(df):
"""Creates a dataframe of the Top100 Commentors by `rank_lt_qty_sc`
Sorts by `rank_lt_qty_sc`
Makes a dataframe of rows [0:100] by `rank_lt_qty_sc`
Saves dataframe as `top100_QTY_Salty_Comments.json`
Args:
df: The commentor_summary dataframe w/ ranks.
"""
top100 = (df[df["rank_lt_qty_sc"].notnull()]
.sort_values(by=["rank_lt_qty_sc"]).head(100))
top100 = top100[["commentor", "rank_lt_qty_sc",
"cnt_slt_s", "top_salty_comment"]]
top100.to_json('Final_Data/top100_QTY_Salty_Comments.json',
orient='records')
print("Saved top100_QTY_Salty_Comments.json")
top100_qty_salty_comments(hn_cs)
def top100_overall_saltiest(df):
"""Creates a dataframe of the Top100 Commentors by `rank_oall_slt`
Sorts by `rank_oall_slt`
Makes a dataframe of rows [0:100] by `rank_oall_slt`
Saves df as a json record with the name `top100_Overall_Saltiest.json`
Args:
df: The commentor_summary dataframe w/ ranks.
"""
top100 = (df[df["rank_oall_slt"].notnull()]
.sort_values(by=["rank_oall_slt"]).head(100))
top100 = top100[["commentor", "rank_oall_slt",
"sum_slt_oall", "top_salty_comment"]]
top100.to_json('Final_Data/top100_Overall_Saltiest.json', orient='records')
print("Saved top100_Overall_Saltiest.json")
top100_overall_saltiest(hn_cs)
def top100_saltiest_trolls(df):
"""Creates a dataframe of the Top100 Trolls by `rank_slt_trolls`
Sorts by `rank_slt_trolls`
Makes a dataframe of rows [0:100] by `rank_slt_trolls`
Saves df as a json record with the name `top100_Saltiest_Trolls.json`
Args:
df: The commentor_summary dataframe w/ ranks.
"""
top100 = (df[df["rank_slt_trolls"].notnull()]
.sort_values(by=["rank_slt_trolls"]).head(100))
top100 = top100[["commentor", "rank_slt_trolls",
"sum_slt_oall", "top_salty_comment"]]
top100.to_json('Final_Data/top100_Saltiest_Trolls.json',
orient='records')
print("Saved top100_Saltiest_Trolls.json")
top100_saltiest_trolls(hn_cs)
# -
# ### Prepare and Save `hn_cs` as `.csv` for upload to PostgreSQL.
# %%time
hn_cs.to_csv('Final_Data/hn_commentor_summary.csv',index=False)
hn_cs.to_pickle('data/hn_cs.pkl')
print('Dataframe Saved')
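# The CSV above is destined for PostgreSQL; one way to load it is `DataFrame.to_sql` over an SQLAlchemy engine. A sketch using an in-memory SQLite engine as a stand-in (for PostgreSQL, swap in a `postgresql://user:password@host/dbname` URL; the table name and demo columns are illustrative assumptions):

```python
import pandas as pd
from sqlalchemy import create_engine

# In-memory SQLite stands in for PostgreSQL here so the snippet runs anywhere.
engine = create_engine('sqlite://')
demo = pd.DataFrame({'commentor': ['a', 'b'], 'sum_slt_oall': [-3.2, -0.5]})
demo.to_sql('hn_commentor_summary', engine, index=False, if_exists='replace')

round_trip = pd.read_sql('SELECT * FROM hn_commentor_summary', engine)
print(len(round_trip))  # 2
```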
# ## Create HackerNews Overall "Scorecard" Stats
data["year"] = pd.to_datetime(data['comment_time'],unit='s').dt.strftime('%Y')
data["month"] = (pd.to_datetime(data['comment_time'],unit='s').dt.strftime('%Y_%m')).str[-5:]
data["all_time"] = "all_time"
# +
# %%time
def hn_overall_stats(df):
df = df.copy()
# Calculate by All Time
df["period"] = df['all_time']
df_s = df[df['is_salty'] == True]
split = df_s['comment_saltiness'].groupby([df_s['period']]).agg(['count','sum'])
split = split.rename({'count': 'hn_cnt_slt_s', 'sum': 'hn_sum_slt_s'}, axis='columns')
overall = df['comment_saltiness'].groupby([df['period']]).agg(['count','sum', 'mean'])
overall = overall.rename({'sum': 'hn_sum_slt_oall','mean': 'hn_avg_oall','count': 'hn_count_oall'}, axis='columns')
overall = pd.merge(overall, split, left_index=True, right_index=True, how='left')
df_a = overall
# Calculate by Year
df["period"] = df['year']
df_s = df[df['is_salty'] == True]
split = df_s['comment_saltiness'].groupby([df_s['period']]).agg(['count','sum'])
split = split.rename({'count': 'hn_cnt_slt_s', 'sum': 'hn_sum_slt_s'}, axis='columns')
overall = df['comment_saltiness'].groupby([df['period']]).agg(['count','sum', 'mean'])
overall = overall.rename({'sum': 'hn_sum_slt_oall','mean': 'hn_avg_oall','count': 'hn_count_oall'}, axis='columns')
overall = pd.merge(overall, split, left_index=True, right_index=True, how='left')
df_b = overall
# Calculate by Month
df["period"] = df['month']
df_s = df[df['is_salty'] == True]
split = df_s['comment_saltiness'].groupby([df_s['period']]).agg(['count','sum'])
split = split.rename({'count': 'hn_cnt_slt_s', 'sum': 'hn_sum_slt_s'}, axis='columns')
overall = df['comment_saltiness'].groupby([df['period']]).agg(['count','sum', 'mean'])
overall = overall.rename({'sum': 'hn_sum_slt_oall','mean': 'hn_avg_oall','count': 'hn_count_oall'}, axis='columns')
overall = pd.merge(overall, split, left_index=True, right_index=True, how='left')
df_c = overall
# Concat them together
df = pd.concat([df_a, df_b, df_c])
return df
hn_stats_summary = hn_overall_stats(data)
display(hn_stats_summary.head(4))
# -
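# The all-time/year/month blocks in `hn_overall_stats` are identical apart from the period column, so they could be collapsed into a loop. A sketch of that refactor with the same column names (synthetic demonstration data; not the real comment table):

```python
import pandas as pd

def overall_stats(df, periods=('all_time', 'year', 'month')):
    """Per-period comment counts/sums, overall and for salty comments only."""
    parts = []
    for period in periods:
        overall = df.groupby(period)['comment_saltiness'].agg(
            hn_count_oall='count', hn_sum_slt_oall='sum', hn_avg_oall='mean')
        salty_df = df[df['is_salty']]
        salty = salty_df.groupby(period)['comment_saltiness'].agg(
            hn_cnt_slt_s='count', hn_sum_slt_s='sum')
        part = overall.join(salty, how='left')
        part.index.name = 'period'
        parts.append(part)
    return pd.concat(parts)

# Synthetic data with the three period columns used above.
demo = pd.DataFrame({
    'comment_saltiness': [-0.5, 0.3, -0.2, 0.7],
    'is_salty': [True, False, True, False],
    'all_time': 'all_time',
    'year': ['2017', '2017', '2018', '2018'],
    'month': ['2017_01', '2017_02', '2018_01', '2018_01'],
})
stats = overall_stats(demo)
print(stats.loc['all_time', 'hn_cnt_slt_s'])  # 2
```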
# ## Get a Summary of User Stats by Month for finding the Saltiest Commenter for each month.
# +
# %%time
# CREATE OUR SUMMARY OF USER STATS BY MONTH
def css_get(df, period_text):
"""Uses comments_data not commentor_summary
Prepare the df by sorting.
Calculate `sum_slt_oall` for each commentor/period.
Calculate the `top_salty_comment` for each commentor/period.
Filter by is_salty
Calculate `sum_slt_s` & `cnt_slt_s` for each commentor/period.
"""
df["period"] = df[period_text]
df = df.sort_values(['commentor','comment_saltiness'], ascending=[True, True])
df_a = df['comment_saltiness'].groupby([df['commentor'],df['period']]).agg(['sum'])
df_a = df_a.rename({'sum': 'sum_slt_oall'}, axis='columns')
df_b = (df[['period','commentor','comment_JSON','comment_saltiness']].groupby([df['commentor'], df['period']]).head(1))
df_b.set_index(['commentor', 'period'], inplace=True)
df = df[df['is_salty'] == True]
df_c = df['comment_saltiness'].groupby([df['commentor'],df['period']]).agg(['count','sum'])
df_c = df_c.rename({'count': 'cnt_slt_s', 'sum': 'sum_slt_s'}, axis='columns')
df = df_c.join([df_a,df_b], how = 'left')
df = df.rename(columns = {'comment_JSON': 'top_salty_comment'})
return df
css_data = pd.concat([css_get(data,"all_time"), css_get(data,"year"), css_get(data,"month")])
css_table = css_data.sort_values(["period"]).reset_index()
css_table.head(4)
# -
# ## Select the top Saltiest by each of our rank methods for `all_time`, `year`, and by `month`. Merge them, then merge with `hn_scorecard_summary`. Save as json.
# +
# %%time
# By Count of Salty Comments CSC
def hn_agg_a(df):
df = df.copy()
df = df.sort_values(['period','cnt_slt_s','sum_slt_s'], ascending=[True, False, True])
df_b = df[['period','commentor','cnt_slt_s', 'top_salty_comment']].groupby([df['period']]).head(1)
df_b.set_index(['period'], inplace=True)
df_b.columns = ['csc_'+ str(col) for col in df_b.columns]
return df_b
# By Sum of Salty Comments SSC
def hn_agg_b(df):
df = df.copy()
df = df.sort_values(['period','sum_slt_s','cnt_slt_s'], ascending=[True, True, False])
df_b = df[['period','commentor','sum_slt_s', 'top_salty_comment']].groupby([df['period']]).head(1)
df_b.set_index(['period'], inplace=True)
df_b.columns = ['ssc_' + str(col) for col in df_b.columns]
return df_b
# By Sum of Overall Salt (Positive + Negative) - SOS
def hn_agg_c(df): # Uses comments_data not commentor_summary
df = df.copy()
df = df.sort_values(['period','sum_slt_oall','cnt_slt_s'], ascending=[True, True, False])
df_b = df[['period','commentor','sum_slt_oall', 'top_salty_comment']].groupby([df['period']]).head(1)
df_b.set_index(['period'], inplace=True)
df_b.columns = ['sos_' + str(col) for col in df_b.columns]
return df_b
# By Saltiest Comment for the Period - SCP
def hn_agg_d(df): # Uses comments_data not commentor_summary
df = df.copy()
df = df.sort_values(['period', 'comment_saltiness', 'sum_slt_s', 'cnt_slt_s'], ascending=[True, True, True, False])
df_b = df[['period', 'commentor', 'comment_saltiness', 'top_salty_comment']].groupby([df['period']]).head(1)
df_b.set_index(['period'], inplace=True)
df_b.columns = ['scp_' + str(col) for col in df_b.columns]
return df_b
hn_agg_csc = hn_agg_a(css_table)
hn_agg_ssc = hn_agg_b(css_table)
hn_agg_sos = hn_agg_c(css_table)
hn_agg_scp = hn_agg_d(css_table)
hn_agg = pd.concat([hn_agg_csc, hn_agg_ssc, hn_agg_sos, hn_agg_scp], axis = 1)
hn_stats_summary_w_agg = pd.concat([hn_stats_summary, hn_agg], axis = 1)
display(hn_stats_summary_w_agg.shape)
display(hn_stats_summary_w_agg.head(4))
display(hn_stats_summary_w_agg.columns)
hn_stats_summary_w_agg.to_json('Final_Data/hn_stats_summary_w_agg.json',
orient='records')
print("saved hn_stats_summary_w_agg.json")
# -
# ## Save the comment dataframe with all of the custom fields as a CSV.
# %%time
data.to_csv('Final_Data/hn_comments_full_db_w_custom_fields.csv',index=False)
# # ALL DONE!
#
# # Checklist:
# ### ----- Commentors Summary Table ----
#
# * **X** Stats for Commenting
# * **X** Stats for Saltiness (Overall)
# * **X** Stats for Saltiness (Split by `is_salty`)
# * **X** Stats for Saltiness (Grouped by `quadrant`)
# * **X** Stats for Subjectivity (Overall)
# * **X** Stats for Subjectivity (Grouped by `is_salty`)
# * **X** List of top 50 Saltiest Comments (Salty only filtered by `is_salty`)
# * **X** List of top 20 Happy Comments (Happy only filtered by `is_salty`)
# * **X** List of Plotpoints (Sorted by Timestamp, binned by `Year_Month`)
# * **X** Rank AMT of Salt Contributed
# * **X** Rank Qty of Salty Comments
# * **X** Rank Overall_Saltiest
# * **X** Rank Saltiest_Trolls
# * **X** Drop Unneeded Columns
# * **X** Export as CSV for PostgreSQL
#
# ### ----- Separate Tables/JSON ------
# * **X** Top100 - Rank AMT of Salt Contributed,
# * **X** Top100 - Rank Qty of Salty Comments,
# * **X** Top100 - Rank Overall_Saltiest,
# * **X** Top100 - Rank Saltiest_Trolls
#
# * **X** HN_Scorecard_Summary (All-Time)
# * **X** HN_Scorecard_Summary (Yearly, 2006 to 2018)
# * **X** Add Stats to HN_Scorecard_Summaries.
# #### This is helpful.
#
# ``` console
# Where to find the dask distributed Bokeh dashboard on AWS.
#
# The URL for accessing the Dask dashboard will be:
# https://myinstance.notebook.us-east-1.sagemaker.aws/proxy/8787/
# ```
# # Thanks for reading!
#
# ### Helpful Links
#
# * https://chrisalbon.com/python/data_wrangling/pandas_apply_operations_to_groups/
# * https://stackoverflow.com/questions/22219004/grouping-rows-in-list-in-pandas-groupby
# * https://www.dataquest.io/blog/loading-data-into-postgres
#
| Data Query & Processing/HN-1_Original_Data_Processing.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernel_info:
# name: python3
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %matplotlib inline
from matplotlib import style
style.use('fivethirtyeight')
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import datetime as dt
# # Reflect Tables into SQLAlchemy ORM
# Python SQL toolkit and Object Relational Mapper
import sqlalchemy
from sqlalchemy.ext.automap import automap_base
from sqlalchemy.orm import Session
from sqlalchemy import create_engine, func
engine = create_engine("sqlite:///Resources/hawaii.sqlite")
# reflect an existing database into a new model
Base=automap_base()
# reflect the tables
Base.prepare(engine, reflect=True)
# We can view all of the classes that automap found
Base.classes.keys()
# Save references to each table
Measurement = Base.classes.measurement
Station = Base.classes.station
# Create our session (link) from Python to the DB
session=Session(engine)
first_row = session.query(Measurement).first()
first_row.__dict__
first_row = session.query(Station).first()
first_row.__dict__
# # Exploratory Climate Analysis
# +
# Design a query to retrieve the last 12 months of precipitation data and plot the results
# Calculate the date 1 year ago from the last data point in the database
# First calculate the last data point in the database
last_date=session.query(Measurement.date).order_by(Measurement.date.desc()).first()
last_date[0]
# -
# Convert the last_date to datetime format
last_date_dt=dt.datetime.strptime(last_date[0],'%Y-%m-%d')
last_date_dt
# Now, we can calculate the date one year ago from our last_date
year_ago=last_date_dt-dt.timedelta(days=365)
year_ago
# Perform a query to retrieve the data and precipitation scores
precipitation=session.query(Measurement.date, Measurement.prcp).filter(Measurement.date >= year_ago).all()
precipitation[0]
# Save the query results as a Pandas DataFrame and set the index to the date column
precipitation_df=pd.DataFrame(precipitation)
precipitation_df=precipitation_df.set_index("date")
precipitation_df
# +
# Sort the dataframe by date
precipitation_df=precipitation_df.sort_index()
# Use Pandas Plotting with Matplotlib to plot the data
precipitation_df.plot()
plt.show()
# -
# Use Pandas to calculate the summary statistics for the precipitation data
precipitation_df.describe()
# Design a query to show how many stations are available in this dataset?
session.query(func.count(Station.id)).all()
# What are the most active stations? (i.e. what stations have the most rows)?
# List the stations and the counts in descending order.
station_activity=session.query(Measurement.station, func.count(Measurement.station)).group_by(Measurement.station).order_by(func.count(Measurement.station).desc()).all()
most_active_station=station_activity[0].station
most_active_station
# Using the station id from the previous query, calculate the lowest temperature recorded,
# highest temperature recorded, and average temperature of the most active station?
min_tobs=session.query(func.min(Measurement.tobs)).filter(Measurement.station==most_active_station).all()
max_tobs=session.query(func.max(Measurement.tobs)).filter(Measurement.station==most_active_station).all()
avg_tobs=session.query(func.avg(Measurement.tobs)).filter(Measurement.station==most_active_station).all()
print(f"The lowest temperature recorded is {min_tobs} degrees F")
print(f"The highest temperature recorded is {max_tobs} degrees F")
print(f"The average temperature recorded in the most active station is {avg_tobs} degrees F")
# Choose the station with the highest number of temperature observations.
# Query the last 12 months of temperature observation data for this station and plot the results as a histogram
active_tobs=session.query(Measurement.tobs).\
filter(Measurement.station==most_active_station).\
filter(Measurement.date > year_ago).\
filter(Measurement.date < last_date_dt).all()
df = pd.DataFrame(active_tobs)
df.plot.hist(bins=12)
plt.show()
# ## Bonus Challenge Assignment
# +
# This function called `calc_temps` will accept start date and end date in the format '%Y-%m-%d'
# and return the minimum, average, and maximum temperatures for that range of dates
def calc_temps(start_date, end_date):
"""TMIN, TAVG, and TMAX for a list of dates.
Args:
start_date (string): A date string in the format %Y-%m-%d
end_date (string): A date string in the format %Y-%m-%d
Returns:
TMIN, TAVG, and TMAX
"""
return session.query(func.min(Measurement.tobs), func.avg(Measurement.tobs), func.max(Measurement.tobs)).\
filter(Measurement.date >= start_date).filter(Measurement.date <= end_date).all()
# function usage example
print(calc_temps('2012-02-28', '2012-03-05'))
# -
# Use your previous function `calc_temps` to calculate the tmin, tavg, and tmax
# for your trip using the previous year's data for those same dates.
temp_values=(calc_temps(year_ago, last_date_dt))
print(temp_values)
# Plot the results from your previous query as a bar chart.
# Use "Trip Avg Temp" as your Title
# Use the average temperature for the y value
# Use the peak-to-peak (tmax-tmin) value as the y error bar (yerr)
tmin=temp_values[0][0]
tavg=int(temp_values[0][1])
tmax=temp_values[0][2]
tdiff=tmax-tmin
fig, ax = plt.subplots(figsize=(1,4))
ax.bar(x=0,height=tavg,width=1,yerr=tdiff)
ax.set_title('Trip Avg Temp')
ax.set_ylabel("Temp (F)")
ax.set_xticks([])
plt.show()
# +
# Calculate the total amount of rainfall per weather station for your trip dates using the previous year's matching dates.
# Sort this in descending order by precipitation amount and list the station, name, latitude, longitude, and elevation
prcp_totals=session.query(Station.station
, Station.name
, Station.latitude
, Station.longitude
, Station.elevation
, Measurement.station
, func.sum(Measurement.prcp)).\
group_by(Measurement.station).\
filter(Measurement.station==Station.station).\
filter(Measurement.date >= year_ago).\
filter(Measurement.date <= last_date_dt).\
order_by(func.sum(Measurement.prcp).desc()).all()
prcp_totals
# +
# Create a query that will calculate the daily normals
# (i.e. the averages for tmin, tmax, and tavg for all historic data matching a specific month and day)
def daily_normals(date):
"""Daily Normals.
Args:
date (str): A date string in the format '%m-%d'
Returns:
A list of tuples containing the daily normals, tmin, tavg, and tmax
"""
sel = [func.min(Measurement.tobs), func.avg(Measurement.tobs), func.max(Measurement.tobs)]
return session.query(*sel).filter(func.strftime("%m-%d", Measurement.date) == date).all()
daily_normals("01-01")
# +
# calculate the daily normals for your trip
# push each tuple of calculations into a list called `normals`
# Set the start and end date of the trip
start_date=dt.datetime.strptime("12-26-2020",'%m-%d-%Y')
end_date=dt.datetime.strptime("01-02-2021",'%m-%d-%Y')
# Use the start and end date to create a range of dates
trip_days=pd.date_range(start_date, end_date)
# Strip off the year and save a list of %m-%d strings
month_day=trip_days.strftime("%m-%d")
# Loop through the list of %m-%d strings and calculate the normals for each date
normals = []
for day in month_day:
normal=daily_normals(day)[0]
normals.append(normal)
normals
# +
# Load the previous query results into a Pandas DataFrame and add the `trip_dates` range as the `date` index
conditions_df=pd.DataFrame(normals, columns=["tmin", "tavg", "tmax"])
conditions_df["date"] = trip_days
conditions_df=conditions_df.set_index(["date"])
conditions_df
# +
# Plot the daily normals as an area plot with `stacked=False`
conditions_df.plot.area(stacked=False)
plt.xlabel("date")
plt.ylabel("Temperature (F)")
# -
| Instructions/climate_starter.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # 0. Rules overview
# - (poker hand-ranking chart image omitted)
# - This program compares the players' hands against each other according to the rules above
# # 1. Build the basic hand-comparison program
# - Ranking rule: hand_rank
# - Recognition functions for the different hand types
# +
# -----------
#
#
# Modify the hand_rank function so that it returns the
# correct output for the remaining hand types, which are:
# full house, flush, straight, three of a kind, two pair,
# pair, and high card hands.
#
#
#
# straight(ranks): returns True if the hand is a straight.
# flush(hand): returns True if the hand is a flush.
# kind(n, ranks): returns the first rank that the hand has
# exactly n of. For A hand with 4 sevens
# this function would return 7.
# two_pair(ranks): if there is a two pair, this function
# returns their corresponding ranks as a
# tuple. For example, a hand with 2 twos
# and 2 fours would cause this function
# to return (4, 2).
# card_ranks(hand) returns an ORDERED tuple of the ranks
# in a hand (where the order goes from
# highest to lowest rank).
#
# Since we are assuming that some functions are already
# written, this code will not RUN. Clicking SUBMIT will
# tell you if you are correct.
# Pick the best hand
def poker(hands):
"Return the best hand: poker([hand,...]) => hand"
return max(hands, key=hand_rank)
# Ranking rule
def hand_rank(hand):
ranks = card_ranks(hand)
if straight(ranks) and flush(hand): # straight flush
return (8, max(ranks))
elif kind(4, ranks): # 4 of a kind
return (7, kind(4, ranks), kind(1, ranks))
elif kind(3, ranks) and kind(2, ranks): # full house
return (6,kind(3,ranks),kind(2,ranks))
elif flush(hand): # flush
return (5,ranks)
elif straight(ranks): # straight
return (4,max(ranks))
elif kind(3, ranks): # 3 of a kind
return (3,kind(3,ranks),ranks)
elif two_pair(ranks): # 2 pair
return (2,two_pair(ranks),ranks)
elif kind(2, ranks): # kind
return (1,kind(2,ranks),ranks)
else: # high card
return (0,ranks)
# -
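# `poker` works because `hand_rank` returns tuples and Python compares tuples lexicographically: the category number is compared first, and the tie-breaker ranks only matter when the categories match. A standalone illustration:

```python
# Category decides first: a straight flush (8, ...) beats four of a kind (7, ...)
assert (8, 10) > (7, 14, 7)
# Equal categories fall through to the tie-breakers: higher quads win
assert (7, 9, 7) > (7, 5, 14)
print('tuple comparison behaves lexicographically')
```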
def card_ranks(cards):
"Return a list of the ranks, sorted with higher first."
# ranks = [r for r,s in cards]
# temp = {'T':'10','J':'11','Q':'12','K':'13','A':'14'}
# ranks = [int(temp[i]) if i in temp else int(i) for i in ranks]
# ranks.sort(reverse=True)
ranks = ['--23456789TJQKA'.index(r) for r, s in cards]
ranks.sort(reverse = True)
return ranks
assert card_ranks(['AC', '3D', '4S', 'KH'])==[14,13,4,3] #should output [14, 13, 4, 3]
# +
def straight(ranks):
"Return True if the ordered ranks form a 5-card straight."
# Straight: consecutive ranks
flag=True
for i in range(len(ranks)-1):
if ranks[i]-ranks[i+1]!=1:
flag=False
return flag
def flush(hand):
"Return True if all the cards have the same suit."
# All cards share the same suit
s = set([s for r,s in hand])
return len(s)==1
sf = "6C 7C 8C 9C TC".split()
fk = "9D 9H 9S 9C 7D".split()
fh = "TD TC TH 7C 7D".split()
assert straight(card_ranks(sf)) == True
assert straight(card_ranks(fk)) == False
assert flush(sf) == True
assert flush(fk) == False
# -
def kind(n, ranks):
"""Return the first rank that this hand has exactly n of.
Return None if there is no n-of-a-kind in the hand."""
#
for r in ranks:
if ranks.count(r) == n: return r
return None
assert kind(4, card_ranks(fk)) == 9
assert kind(3, card_ranks(fk)) == None
assert kind(2, card_ranks(fk)) == None
assert kind(1, card_ranks(fk)) == 7
# +
def two_pair(ranks):
"""If there are two pair, return the two ranks as a
tuple: (highest, lowest); otherwise return None."""
# Your code here.
# highest = kind(2,ranks)
# if highest:
# lowest = kind(2,ranks.remove(highest))
# if highest and lowest:
# return (highest,lowest)
# else:
# return None
pair = kind(2,ranks)
lowpair = kind(2,list(reversed(ranks)))
if pair and pair!=lowpair:
return(pair,lowpair)
else:
return None
tp = "TD 9H TH 9C 3S".split() # Two Pair
print(two_pair(card_ranks(tp)))
# -
"Test cases for the functions in poker program"
sf = "6C 7C 8C 9C TC".split() # Straight Flush
fk = "9D 9H 9S 9C 7D".split() # Four of a Kind
fh = "TD TC TH 7C 7D".split() # Full House
assert poker([sf, fk, fh]) == sf
assert poker([fk, fh]) == fk
assert poker([fh, fh]) == fh
assert poker([sf]) == sf
assert poker([sf] + 99*[fh]) == sf
# # 2. Rule additions
# - A, 2, 3, 4, 5 counts as the lowest straight
# - Return all tied winners: allmax
#
# A, 2, 3, 4, 5 as a straight
def card_ranks(cards):
"Return a list of the ranks, sorted with higher first."
# ranks = [r for r,s in cards]
# temp = {'T':'10','J':'11','Q':'12','K':'13','A':'14'}
# ranks = [int(temp[i]) if i in temp else int(i) for i in ranks]
# ranks.sort(reverse=True)
ranks = ['--23456789TJQKA'.index(r) for r, s in cards]
ranks.sort(reverse = True)
if ranks==[14,5,4,3,2]:ranks = [5,4,3,2,1]
return ranks
assert card_ranks(['AC', '3D', '4S', '2H','5S'])==[5,4,3,2,1] # should output [5, 4, 3, 2, 1]
# +
# Return every hand tied for the max
def poker(hands):
"Return a list of winning hands: poker([hand,...]) => [hand,...]"
return allmax(hands, key=hand_rank)
def allmax(iterable, key=None):
"Return a list of all items equal to the max of the iterable."
# Your code here.
max_hand = None
result = []
key = key or (lambda x:x)
for i in iterable:
value = key(i)
if max_hand is None or value>max_hand:
max_hand = value
result = [i]
elif value==max_hand:
result.append(i)
return result
"Test cases for the functions in poker program."
sf1 = "6C 7C 8C 9C TC".split() # Straight Flush
sf2 = "6D 7D 8D 9D TD".split() # Straight Flush
fk = "9D 9H 9S 9C 7D".split() # Four of a Kind
fh = "TD TC TH 7C 7D".split() # Full House
poker([sf1, sf2, fk, fh])
assert poker([sf1, sf2, fk, fh]) == [sf1, sf2]
# +
# Randomly deal hands from a shuffled deck
import random # this will be a useful library for shuffling
# This builds a deck of 52 cards. If you are unfamiliar
# with this notation, check out Andy's supplemental video
# on list comprehensions (you can find the link in the
# Instructor Comments box below).
mydeck = [r+s for r in '23456789TJQKA' for s in 'SHDC']
def deal(numhands, n=5, deck=mydeck):
random.shuffle(deck)
return [deck[n*i:n*(i+1)] for i in range(numhands)]
print(deal(5))
# -
# Hand probability question
hand_names=["Straight flush","Four of a kind","Full house","Flush","Straight","Three of a kind","two pair","one pair","no pair"]
reversed(hand_names)
def hand_percentage(n=700*1000):
counts = [0]*9
for i in range(int(n/10)):
for hand in deal(10):
ranking = hand_rank(hand)[0]
counts[ranking]+=1
for i in reversed(range(9)):
print("%14s:%6.3f%%" % (hand_names[i],100.*counts[i]/n))
hand_percentage()
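# The Monte Carlo percentages from `hand_percentage` can be sanity-checked against exact counts. For example, there are 13 * 48 = 624 four-of-a-kind hands among C(52, 5) = 2,598,960 possible 5-card hands:

```python
from math import comb

# Choose the quad rank (13 ways), all four of its suits, then any 5th card (48).
four_kind = 13 * comb(4, 4) * 48
total = comb(52, 5)
print(four_kind, total)  # 624 2598960
```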
# # 3. Code refactoring
# - Program design has several dimensions: correctness, efficiency, features,
# elegance. Guarantee correctness first
# - Refactor hand_rank: it calls the kind function in many places, violating DRY (don't repeat yourself)
# +
#
def group(items):
"""return the list of [(count,x)...] highest count first then highest x first"""
groups = [(items.count(x),x) for x in set(items)]
groups.sort(reverse=True)
return groups
def unzip(pairs): return zip(*pairs)
# groups = group(["--23456789TJQKA".index(r) for r, s in fk])
# print(groups)
# counts,ranks=unzip(groups)
# print(counts,ranks)
# -
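# A live run of the commented-out demonstration above, on a four-of-a-kind hand (the `group` helper is repeated so the snippet is self-contained):

```python
def group(items):
    """Return the list of [(count, x)...], highest count first, then highest x."""
    groups = [(items.count(x), x) for x in set(items)]
    groups.sort(reverse=True)
    return groups

fk = "9D 9H 9S 9C 7D".split()  # Four of a Kind
groups = group(["--23456789TJQKA".index(r) for r, s in fk])
counts, ranks = zip(*groups)
print(groups, counts, ranks)  # [(4, 9), (1, 7)] (4, 1) (9, 7)
```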
def hand_rank1(hand):
groups = group(['--23456789TJQKA'.index(r) for r,s in hand])
counts,ranks = unzip(groups)
if ranks==(14,5,4,3,2):ranks = (5,4,3,2,1) # unzip yields tuples, so compare tuples
straight = len(counts)==5 and max(ranks)-min(ranks)==4
flush = len(set([s for r,s in hand]))==1
return (9 if (5,) == counts else
8 if straight and flush else
7 if (4,1)==counts else
6 if (3,2) == counts else
5 if flush else
4 if straight else
3 if (3,1,1)==counts else
2 if (2,2,1)== counts else
1 if (2,1,1,1)== counts else
0),ranks
# +
"Test cases for the functions in poker program."
sf1 = "6C 7C 8C 9C TC".split() # Straight Flush
sf2 = "6D 7D 8D 9D TD".split() # Straight Flush
fk = "9D 9H 9S 9C 7D".split() # Four of a Kind
fh = "TD TC TH 7C 7D".split() # Full House
poker([sf1,sf2,fk,fh])
# -
# # 4. The shuffle problem
# +
import random
# teacher's
def swap(deck,i,j):
deck[i],deck[j]=deck[j],deck[i]
def shuffle1(deck):
N=len(deck)
swapped = [False]*N
while not all(swapped):
i,j=random.randrange(N),random.randrange(N)
swapped[i]=swapped[j]=True
swap(deck,i,j)
def shuffle(deck):
N=len(deck)
for i in range(N-1):
j = random.randrange(i,N)
swap(deck,i,j)
def shuffle2(deck):
N=len(deck)
swapped = [False]*N
while not all(swapped):
i,j=random.randrange(N),random.randrange(N)
swapped[i]=True
swap(deck,i,j)
def shuffle3(deck):
N = len(deck)
for i in range(N):
swap(deck,i,random.randrange(N))
# Test
deck=list("fhashfiurhfghs")
shuffle2(deck)
deck
# +
from collections import defaultdict
def test_shuffler(shuffler,deck='abcd',n=10000):
counts = defaultdict(int)
for _ in range(n):
input = list(deck)
shuffler(input)
counts[''.join(input)] +=1
e = n*1./factorial(len(deck))
ok = all((0.9<=counts[item]/e<=1.1) for item in counts)
name = shuffler.__name__
print('%s(%s)%s' % (name,deck,('ok' if ok else '*** BAD ***')))
for item ,count in sorted(counts.items()):
print("%s:%4.1f" % (item,count*100./n))
def factorial(n):return 1 if(n<=1) else n*factorial(n-1)
for deck in ["abc","ab"]:
for f in [shuffle1,shuffle2,shuffle3]:
test_shuffler(f,deck)
# -
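# The counting argument behind `test_shuffler`: `shuffle3` makes N independent uniform choices, giving N**N equally likely execution paths, and those can only map uniformly onto the N! permutations if N! divides N**N. For N = 3 it does not, so `shuffle3` must be biased, while Fisher-Yates (`shuffle`) uses exactly N! equally likely paths. A quick check:

```python
from math import factorial

N = 3
paths = N ** N        # equally likely swap sequences in shuffle3
perms = factorial(N)  # permutations that must each receive paths/perms sequences
print(paths, perms, paths % perms)  # 27 6 3
assert paths % perms != 0  # 27 is not divisible by 6: uniformity is impossible
```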
# # 5. Homework
# - Find the best 5-card hand out of 7 cards
# - With jokers as wildcards, find the best hand
# +
# Find the best 5-card hand out of 7 cards
import itertools
def best_hand(hand):
five_hands = list(itertools.combinations(hand,5))
return max(five_hands,key=hand_rank)
assert (sorted(best_hand("6C 7C 8C 9C TC 5C JS".split()))
== ['6C', '7C', '8C', '9C', 'TC'])
assert (sorted(best_hand("TD TC TH 7C 7D 8C 8S".split()))
== ['8C', '8S', 'TC', 'TD', 'TH'])
assert (sorted(best_hand("JD TC TH 7C 7D 7S 7H".split()))
== ['7C', '7D', '7H', '7S', 'JD'])
# +
# Jokers as wildcards: black -> S,C and red -> H,D
allranks = '23456789TJQK'
redcards = [r+s for r in "23456789TJQKA" for s in 'HD']
blackcards = [r+s for r in "23456789TJQKA" for s in 'SC']
def best_wild_hand(hand):
hands = set(best_hand(h) for h in itertools.product(*map(replacements,hand)))
return max(hands,key=hand_rank)
def replacements(card):
if card == '?B': return blackcards
elif card == '?R': return redcards
else: return [card]
assert (sorted(best_wild_hand("6C 7C 8C 9C TC 5C ?B".split()))
== ['7C', '8C', '9C', 'JC', 'TC'])
assert (sorted(best_wild_hand("TD TC 5H 5C 7C ?R ?B".split()))
== ['7C', 'TC', 'TD', 'TH', 'TS'])
assert (sorted(best_wild_hand("JD TC TH 7C 7D 7S 7H".split()))
== ['7C', '7D', '7H', '7S', 'JD'])
# -
| 1_poker_game.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # This notebook demonstrates the pandas DataFrames produced by the JUGRI Python package
#
# JUGRI helps you translate Gremlin query responses into tidy pandas DataFrames.
# You can import JUGRI just as you would any other module.
# Import Jugri
import jugri
# You also need the Gremlin_python library to run the queries.
# +
#Import Gremlin_python
from gremlin_python import statics
from gremlin_python.structure.graph import Graph
from gremlin_python.process.graph_traversal import __
from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection
graph = Graph()
g = graph.traversal().withRemote(
DriverRemoteConnection('ws://<gremlin-server>:8182/gremlin',
'g')
)
# -
# The only thing you need to do is pass the query results to the `to_df` function of the module. It can resolve nodes and list their `id` and `label`.
jugri.to_df(g.V().limit(5).toList())
# You can call the `valueMap` step to get the properties on the object. Note that you can pass the traversal object too and the `toList` method will be called automatically.
jugri.to_df(g.V().valueMap(True).limit(5))
# Edges are resolved similarly. The `id` and `label` of the edge are resolved together with the ids of the inward (`inV`) and outward (`outV`) vertices.
jugri.to_df(g.E().limit(5))
# When paths are retrieved each step is numbered and the ID of each node is returned.
jugri.to_df(g.V().out('mentioned').in_('mentioned').path().limit(5))
# Multiple nodes can also be selected with their properties. Nested properties are resolved as the dot concatenated path of the property. (e.g. destination.CountryField.country_name)
jugri.to_df(g.V().as_("original").out('mentioned').in_('mentioned').as_("destination").select("original","destination").by(__.valueMap(True)).by(__.valueMap(True)).limit(5))
| example/Pandification.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.9.0 64-bit (''.venv'': venv)'
# name: python3
# ---
# +
import requests, pandas, getpass
api = 'https://api.earthref.org/v1/MagIC/{}'
username = input()
password = getpass.getpass()
# -
# ### Create a Private Contribution and delete it
# If no errors are reported, this cell can be repeated without any side effects. If no one else has created a private contribution between repeated executions, the same contribution ID should be reused.
# + tags=[]
create_response = requests.post(api.format('private'), auth=(username, password))
print(create_response.request.method, create_response.request.url)
if (create_response.status_code == 200):
contribution_id = create_response.json()['id']
print('Created private contribution with ID', contribution_id, '\n')
delete_response = requests.delete(api.format('private'), params={'id': contribution_id}, auth=(username, password))
print(delete_response.request.method, delete_response.request.url)
if (delete_response.status_code == 200):
print('Deleted private contribution with ID', contribution_id, '\n')
else:
print('Delete Private Contribution Error:', delete_response.json()['errors'][0]['message'], '\n')
else:
print('Create Private Contribution Error:', create_response.json()['errors'][0]['message'], '\n')
# -
# ### Validate a Private Contribution and mark it as valid if there are no errors
# The contribution ID should be in your private workspace or it will not be found.
contribution_id = 19296
response = requests.put(api.format('private/validate'), params={'id': contribution_id}, auth=(username, password))
print(response.request.method, response.request.url)
if (response.status_code == 200):
validation_results = response.json()['validation']
print('Validated contribution with ID', contribution_id, '\n', len(validation_results['errors']))
elif (response.status_code == 204):
print('A private contribution with ID', contribution_id, 'could not be found in your private workspace for validation\n')
else:
print('Error Validating a Private Contribution:', response.json(), '\n')
# + tags=[]
contribution_id = 19295
response = requests.put(api.format('private/validate'), params={'id': contribution_id}, auth=(username, password))
print(response.request.method, response.request.url)
if (response.status_code == 200):
validation_results = response.json()['validation']
print('Validated contribution with ID', contribution_id, 'with', len(validation_results['errors']), 'validation errors\n')
elif (response.status_code == 204):
print('A private contribution with ID', contribution_id, 'could not be found in your private workspace for validation\n')
else:
print('Error Validating a Private Contribution:', response.json(), '\n')
# -
# ### Download a Public Contribution and create a Private Contribution to upload it to
# + tags=[]
contribution_id = 16901
response = requests.get(api.format('data'), params={'id': contribution_id})
print(response.request.method, response.request.url)
if (response.status_code == 200):
contribution_file = 'downloads/magic_contribution_{}.txt'.format(contribution_id)
with open(contribution_file, 'w') as f:
    f.write(response.text)
print('Retrieved contribution data with ID', contribution_id, '\n')
create_response = requests.post(api.format('private'), auth=(username, password))
print(create_response.request.method, create_response.request.url)
if (create_response.status_code == 200):
new_contribution_id = create_response.json()['id']
print('Created private contribution with ID', new_contribution_id, '\n')
with open(contribution_file, 'rb') as f:
upload_response = requests.put(api.format('private'),
params={'id': new_contribution_id},
auth=(username, password),
headers={'Content-Type': 'text/plain'},
data=f
)
print(upload_response.request.method, upload_response.request.url)
if (upload_response.status_code == 200):
print('Uploaded a text file to private contribution with ID', new_contribution_id, '\n')
else:
print('Upload Private Contribution Error:', upload_response.json()['errors'][0]['message'], '\n')
else:
print('Create Private Contribution Error:', create_response.json()['errors'][0]['message'], '\n')
else:
print('Retrieve Public Contribution Error:', response.json()['errors'][0]['message'], '\n')
# file: v1/MagIC API - Private Data.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.9.2 64-bit ('venvintel')
# language: python
# name: python39264bitvenvinteldab226f90c154cd0b34282430769e342
# ---
# # Tests for the consistency of the rotations
# +
# %matplotlib inline
import numpy as onp
import jax.numpy as np
from jax.ops import index_update
from jax.config import config
from numpy.random import default_rng
from scipy.stats import multivariate_normal as mvn
from tqdm.notebook import tqdm, trange
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
sns.set_theme('talk', 'darkgrid')
plt.rcParams["figure.figsize"] = (15,4)
config.update("jax_enable_x64", True)
seed = 0
rng = default_rng(seed)
# +
# Functions
isqrt = lambda x: 1. / np.sqrt(x)
funs = {'sqrt': np.sqrt,
'isqrt': isqrt,
'log': np.log,
'exp': np.exp}
def norm_frob_squared(X):
return np.einsum('...ji,...ji', X, X)
def dist_frob_squared(X, Y):
return norm_frob_squared(X - Y)
def transform_mat(X, func='sqrt'):
u, v = np.linalg.eigh(X)
return np.einsum('...ij,...j,...kj', v, funs[func](u), v)
def dist_riem_squared(X, Y):
x = transform_mat(X, 'isqrt')
mid = np.einsum('...ij,...jk,...kl', x, Y, x)
return norm_frob_squared(transform_mat(mid, 'log'))
def costfunc(X, Y):
return np.sum(dist_riem_squared(X, Y))
def costfuncproc(X, Y):
return np.sum(dist_frob_squared(X, Y))
def rotate(X, Omega):
return np.einsum('...ij,...jk,...lk', Omega, X, Omega)
def optimal_rotation(X, M):
_, g_m = np.linalg.eigh(M)
_, g_x = np.linalg.eigh(X)
return np.einsum('...ij,...kj', g_m, g_x)
def optimal_reference_eigval(X):
u = np.linalg.eigvalsh(X)
return np.power(np.prod(u, axis=0), 1 / X.shape[0])
def optimal_reference_eigvec(X):
_, vs = np.linalg.eigh(X)
U, _, V = np.linalg.svd(np.sum(vs, axis=0))
return np.einsum('...ij,...jk', U, V)
def optimal_reference(X):
u, vs = np.linalg.eigh(X)
Lam = np.power(np.prod(u, axis=0), 1 / X.shape[0])
U, _, V = np.linalg.svd(np.sum(vs, axis=0))
Gam = np.einsum('...ij,...jk', U, V)
return np.einsum('...ij,...j,...kj', Gam, Lam, Gam)
def emp_cov(data):
l, p = data.shape
mn = data.mean(axis=0)
data = data - mn
return (data.T @ data) / l
# -
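# The Riemannian distance defined above is the affine-invariant metric on SPD matrices, so it should be zero for identical matrices and unchanged when both arguments are rotated by the same orthogonal matrix. A minimal plain-NumPy sanity check of those two properties (independent of the JAX helpers above):

```python
import numpy as np

def transform_mat(X, func):
    # apply func to the eigenvalues of a symmetric matrix
    u, v = np.linalg.eigh(X)
    return v @ np.diag(func(u)) @ v.T

def dist_riem_squared(X, Y):
    # squared affine-invariant distance ||log(X^{-1/2} Y X^{-1/2})||_F^2
    x = transform_mat(X, lambda u: 1.0 / np.sqrt(u))
    mid = x @ Y @ x
    return np.sum(transform_mat(mid, np.log) ** 2)

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4)); X = A @ A.T + 4 * np.eye(4)   # random SPD matrix
B = rng.normal(size=(4, 4)); Y = B @ B.T + 4 * np.eye(4)   # random SPD matrix
Q, _ = np.linalg.qr(rng.normal(size=(4, 4)))               # random orthogonal matrix

print(np.isclose(dist_riem_squared(X, X), 0.0))            # True: d(X, X) = 0
d1 = dist_riem_squared(X, Y)
d2 = dist_riem_squared(Q @ X @ Q.T, Q @ Y @ Q.T)
print(np.isclose(d1, d2))                                  # True: joint congruence invariance
```

Both properties follow directly from the definition, since $Q X Q^\top$ has the same eigenvalues as $X$ and the inner term transforms by conjugation.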
# # Two matrix test
# In this test, for a fixed dimension $p$, we generate a random SPD matrix $\Sigma_1$ and a random rotation matrix $\mathbf{R}$. From those two we then obtain $\Sigma_2 = \mathbf{R}\Sigma_1\mathbf{R}^\top$.
#
# We then generate two datasets from these two matrices (namely $\mathbf{X}_1$ and $\mathbf{X}_2$) by sampling $n$ draws from two multivariate normal distributions with mean $\boldsymbol{\mu} = (0, \dots, 0)$ and covariance matrices $\Sigma_1$ and $\Sigma_2$.
#
# Then we compute the empirical covariance matrices $\hat\Sigma_1$ and $\hat\Sigma_2$ (which are consistent estimators of the true covariance matrices) and finally we apply the optimal rotation that sends $\hat\Sigma_2$ to $\hat\Sigma_1$. As can be seen in the figures, this rotation behaves consistently and the Riemannian distance between $\hat\Sigma_2^\star$ and $\hat\Sigma_1$ goes to $0$ as $n$ grows.
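# In the population (noiseless) case the recovery is exact: estimating the rotation from the eigenvector bases of the two matrices and rotating $\Sigma_2$ back reproduces $\Sigma_1$ up to machine precision, regardless of the sign/order ambiguity of the eigendecompositions. A minimal NumPy sketch of the procedure used in the cell below:

```python
import numpy as np

rng = np.random.default_rng(0)
p = 4
A = rng.normal(size=(p, p))
Sigma1 = A @ A.T + p * np.eye(p)               # random SPD "true" covariance
R, _ = np.linalg.qr(rng.normal(size=(p, p)))   # random orthogonal matrix
Sigma2 = R @ Sigma1 @ R.T

# optimal rotation from the two eigenvector bases (as in optimal_rotation)
_, g1 = np.linalg.eigh(Sigma1)
_, g2 = np.linalg.eigh(Sigma2)
Omega = g1 @ g2.T

Sigma2_back = Omega @ Sigma2 @ Omega.T
print(np.allclose(Sigma2_back, Sigma1))        # True: exact recovery without noise
```

With finite samples the same construction is applied to $\hat\Sigma_1$ and $\hat\Sigma_2$, and the residual distance reflects only estimation noise.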
# +
# Hyperparameters:
p = 3
m = 10
rep = 50
datapoints = 32
ns = np.logspace(0.9, 4.1, datapoints, dtype=int)
# True values
Sigma_one = np.array(rng.normal(size=(p, p)))
Sigma_one = Sigma_one @ Sigma_one.T
TrueMean = np.zeros(shape=(p))
TrueRotation = np.linalg.qr(rng.normal(size=(p, p)))[0]
Sigma_two = rotate(Sigma_one, TrueRotation)
f, ax = plt.subplots(2, 1, sharex=True, sharey=False, figsize=(15, 10))
ax[0].hlines(y=0, xmin=ns.min(), xmax=ns.max(), colors='k', linestyles='--')
ax[1].hlines(y=0, xmin=ns.min(), xmax=ns.max(), colors='k', linestyles='--')
dists_one_mean = np.zeros(len(ns))  # float accumulators for the running sums
dists_two_mean = np.zeros(len(ns))
data = pd.DataFrame({'Number of samples': ns,
'Riemannian distance (original)': dists_one_mean,
'Riemannian distance (rotated)': dists_two_mean,
})
for _ in trange(rep):
dists_one = []
dists_two = []
for k, n in enumerate(ns):
data_one = np.array(mvn.rvs(mean=TrueMean, cov=Sigma_one, size=n))
data_two = np.array(mvn.rvs(mean=TrueMean, cov=Sigma_two, size=n))
Sigma_emp_one = emp_cov(data_one)
Sigma_emp_two = emp_cov(data_two)
Rotation_emp = optimal_rotation(Sigma_emp_two, Sigma_emp_one)
dists_one.append(dist_riem_squared(Sigma_emp_one, Sigma_one))
dists_two.append(dist_riem_squared(rotate(Sigma_emp_two, Rotation_emp), Sigma_emp_one))
dists_one_mean = index_update(dists_one_mean, k, dists_one_mean[k] + dists_one[k])
dists_two_mean = index_update(dists_two_mean, k, dists_two_mean[k] + dists_two[k])
data['Riemannian distance (original)'] = dists_one
data['Riemannian distance (rotated)'] = dists_two
dtmp = data[['Riemannian distance (original)', 'Riemannian distance (rotated)']].rolling(window=3, center=True).mean()
data[['Riemannian distance (original)', 'Riemannian distance (rotated)']] = dtmp.reset_index()[['Riemannian distance (original)', 'Riemannian distance (rotated)']]
sns.lineplot(data=data,
x='Number of samples',
y='Riemannian distance (original)',
ax=ax[0],
color='b',
alpha=0.2
)
sns.lineplot(data=data,
x='Number of samples',
y='Riemannian distance (rotated)',
ax=ax[1],
color='b',
alpha=0.2
)
sns.lineplot(x=ns[1:-1], y=dists_one_mean[1:-1]/rep, ax=ax[0], color='b')
sns.lineplot(x=ns[1:-1], y=dists_two_mean[1:-1]/rep, ax=ax[1], color='b')
plt.xscale('log')
plt.show()
# -
# # Simulation with $M$ matrices
#
# We generate $M$ matrices of size $p\times p$ (representing the *true* covariances of the $M$ subjects) $\Sigma_m$.
# Then, for each subject, we generate a dataset of $n_m$ samples from a multivariate normal $\mathcal{N}_p\left(\mathbf{0}, \Sigma_m\right)$ and we compute the empirical covariance matrices $\hat\Sigma_m$ and their eigenvalue decompositions $\Gamma_m\Lambda_m\Gamma_m^\top$.
#
# We then compute the optimal reference matrix $\mathbf{R}$ that has eigenvalues $\Lambda_h^\mathbf{R} = \left[\prod_m^M\lambda_h^m\right]^{\frac{1}{M}}$ and eigenvectors $\Gamma_\mathbf{R}=\mathbf{U}\mathbf{V}^\top$ with $\mathbf{U}\mathbf{D}\mathbf{V}^\top = \sum_m^M\Gamma_m$ the singular value decomposition of the sum of the eigenvector decompositions.
#
# Finally, we rotate each $\hat\Sigma_m$ with the optimal rotation $\Omega_m=\Gamma_\mathbf{R}\Gamma_m^\top$.
#
# To check the consistency of this procedure, we compare the sum of the pairwise distances $\sum_{m,k}^Md(\Sigma_m, \Sigma_k)$ between the empirical covariances with the one between the true covariances, both for the original matrices and for the rotated ones. As is well known, the empirical covariance is a consistent estimator of the true covariance of a multivariate normal, and the distances between the matrices inherit this consistency. Moreover, the same holds for the matrices in the rotated space.
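# The reference construction can be checked directly: the geometric-mean eigenvalues $\Lambda^{\mathbf{R}}$ are positive, and $\Gamma_\mathbf{R} = \mathbf{U}\mathbf{V}^\top$ from the SVD of $\sum_m \Gamma_m$ is orthogonal, so $\mathbf{R}$ is again SPD. A minimal NumPy sketch mirroring `optimal_reference` above:

```python
import numpy as np

rng = np.random.default_rng(0)
m_subj, p = 5, 4
Sigmas = rng.normal(size=(m_subj, p, p))
Sigmas = np.einsum('mij,mkj->mik', Sigmas, Sigmas) + p * np.eye(p)  # stack of SPD matrices

u, vs = np.linalg.eigh(Sigmas)                 # per-subject eigendecompositions
Lam = np.prod(u, axis=0) ** (1.0 / m_subj)     # geometric-mean eigenvalues
U, _, Vt = np.linalg.svd(vs.sum(axis=0))       # SVD of the summed eigenvector bases
Gam = U @ Vt                                   # nearest orthogonal matrix

Ref = Gam @ np.diag(Lam) @ Gam.T               # reference matrix R

print(np.allclose(Gam @ Gam.T, np.eye(p)))     # True: Gamma_R is orthogonal
print(np.all(np.linalg.eigvalsh(Ref) > 0))     # True: R is SPD
```

`U @ Vt` is the orthogonal polar factor of the summed eigenvector matrices, i.e. the closest orthogonal matrix in the Frobenius sense.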
# +
# Hyperparameters:
p = 3
m = 10
rep = 50
datapoints = 32
ns = np.logspace(0.9, 4.1, datapoints, dtype=int)
# Generate true subject covariances
# TODO: use von Mises - Fisher instead of uniform
Sigmas = np.array(rng.normal(size=(m, p, p)))
Sigmas = np.einsum('...ij,...kj', Sigmas, Sigmas)
Means = np.zeros(shape=(m, p))
def emp_cov(data):
mn = np.expand_dims(data.mean(axis=1), axis=1)
data = data - mn
return np.einsum('...ji,...jk', data, data) / data.shape[-2]
def costfunc(X, Y):
d = 0
for i, y in enumerate(Y):
x = np.delete(X, i, axis=0)
d += np.sum(dist_riem_squared(x, y))
return d
# Determine the optimal reference matrix
Ref = optimal_reference(Sigmas)
# Perform the rotations
Sigmas_rot = rotate(Sigmas, optimal_rotation(Sigmas, Ref))
# Compute the distances
dists_ori = costfunc(Sigmas, Sigmas)
dists_rot = costfunc(Sigmas_rot, Sigmas_rot)
# print("Pairwise distances True:\t\t\t", dists_ori)
# print("Pairwise distances True Rotated:\t\t", dists_rot)
f, ax = plt.subplots(2, 1, sharex=True, sharey=False, figsize=(15, 10))
ax[0].hlines(y=dists_ori, xmin=ns.min(), xmax=ns.max(), colors='k', linestyles='--')
ax[1].hlines(y=dists_rot, xmin=ns.min(), xmax=ns.max(), colors='k', linestyles='--')
data = pd.DataFrame({'Number of samples': ns,
'Pairwise distance (original)': dists_ori,
'Pairwise distance (rotated)': dists_rot,
})
dists_ori_mean = np.zeros(len(ns))  # float accumulators for the running sums
dists_rot_mean = np.zeros(len(ns))
for _ in trange(rep):
dists_ori_emp = []
dists_rot_emp = []
for k, n in enumerate(ns):
datasets = np.array([mvn.rvs(mean=Means[i], cov=Sigmas[i], size=n) for i in range(m)])
Sigmas_emp = emp_cov(datasets)
# Determine the optimal reference matrix
Ref_emp = optimal_reference(Sigmas_emp)
# Perform the rotations
Sigmas_rot_emp = rotate(Sigmas_emp, optimal_rotation(Sigmas_emp, Ref_emp))
# Compute the distances
dists_ori_emp.append(costfunc(Sigmas_emp, Sigmas_emp))
dists_rot_emp.append(costfunc(Sigmas_rot_emp, Sigmas_rot_emp))
dists_ori_mean = index_update(dists_ori_mean, k, dists_ori_mean[k] + dists_ori_emp[k])
dists_rot_mean = index_update(dists_rot_mean, k, dists_rot_mean[k] + dists_rot_emp[k])
#print("\tPairwise distances Empirical ({}):\t\t{}".format(n, dists_ori_emp[-1]))
#print("\tPairwise distances Empirical Rotated ({}):\t{}".format(n, dists_rot_emp[-1]))
data['Pairwise distance (original)'] = dists_ori_emp
data['Pairwise distance (rotated)'] = dists_rot_emp
dtmp = data[['Pairwise distance (original)', 'Pairwise distance (rotated)']].rolling(window=3, center=True).mean()
data[['Pairwise distance (original)', 'Pairwise distance (rotated)']] = dtmp.reset_index()[['Pairwise distance (original)', 'Pairwise distance (rotated)']]
sns.lineplot(data=data,
x='Number of samples',
y='Pairwise distance (original)',
ax=ax[0],
color='b',
alpha=0.2
)
sns.lineplot(data=data,
x='Number of samples',
y='Pairwise distance (rotated)',
ax=ax[1],
color='b',
alpha=0.2
)
sns.lineplot(x=ns[1:-1], y=dists_ori_mean[1:-1]/rep, ax=ax[0], color='b')
sns.lineplot(x=ns[1:-1], y=dists_rot_mean[1:-1]/rep, ax=ax[1], color='b')
plt.xscale('log')
plt.show()
# file: notebooks/Consistency_test.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + colab={"base_uri": "https://localhost:8080/"} id="WqRHhRUCjznn" outputId="d17d18bd-6c7c-4a90-96e0-929309dc5946"
from google.colab import drive
drive.mount('/content/drive')
# + [markdown] id="Y-Roh_sin4zM"
# **MICROCEPHALY STUDY**
#
# Is there a correlation with Zika or another disease?
#
# **GENDER VIOLENCE**
#
# **1. Which groups suffer the most from violence?**
#
# A. Women
#
# **2. Are all the records filled in correctly? Is the aggressor registered? Especially for indigenous people**
#
# A. No, many fields are left blank and the ICD-10 classification is often incorrect
#
# **3. Is there a correlation between the sexual violence cases in the Amazon and some form of pedophilia or child exploitation?**
#
# + [markdown] id="63xxWPulm6Yi"
# # IMPORTS AND LOAD FILES
# + cellView="form" id="ULpCqrAlj9zU"
#@title IMPORTS
# directory configuration
import os
# data handling and operations
import numpy as np
import pandas as pd
import glob
# data visualization
import matplotlib.pyplot as plt
import seaborn as sns
import plotly.graph_objects as go
# + id="qbdSFfO1lyY1" cellView="form"
#@title LOAD FILES AND PRE-PROCESSING
# additional display settings for the data
# %matplotlib inline
pd.options.display.max_columns = None
pd.options.display.max_rows = 100
PATH_ROOT = '/content/drive/MyDrive/Colab Notebooks/dataflowproject'#@param {type:'string'}
DIR_2018 = 'Equador datasets 2018' #@param {type:'string'}
DIR_2019 = 'Equador datasets 2019'#@param {type:'string'}
DIR_2020 = 'Equador datasets 2020'#@param {type:'string'}
#@markdown dataframe names: df_2018, df_2019, df_2020
BASE_ROOT = os.path.join(os.path.abspath(PATH_ROOT))
DATA_DIR_2018 = os.path.join(BASE_ROOT, DIR_2018) # 2018 data directory
DATA_DIR_2019 = os.path.join(BASE_ROOT, DIR_2019) # 2019 data directory
DATA_DIR_2020 = os.path.join(BASE_ROOT, DIR_2020) # 2020 data directory
filenames = glob.glob(f'{DATA_DIR_2018}/*.xlsx')
files_2018 = []
count_files = 0
for filename in filenames:
    files_2018.append(pd.read_excel(filename))
    count_files += 1
load_data_2018 = True #@param {type: 'boolean'}
load_data_2019 = True #@param {type: 'boolean'}
load_data_2020 = True #@param {type: 'boolean'}
df_2018 = pd.concat(files_2018, ignore_index=True)
df_2019 = pd.read_excel(f'{DATA_DIR_2019}/0 PRAS ENERO A NOVIEMBRE 2019.xlsx')
df_2020 = pd.read_excel(f'{DATA_DIR_2020}/PRAS Y RDACAA 2020.xlsx')
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="UpnMqpF8GDVT" cellView="form" outputId="9e30f20c-f9a8-492c-8c75-f5ee6b3ac9db"
#@title Report each column's name, dtype, and count and percentage of missing values
exploration = pd.DataFrame({
'column': df_2020.columns, 'type': df_2020.dtypes, 'NA #': df_2020.isna().sum(), 'NA %': (df_2020.isna().sum() / df_2020.shape[0]) * 100
})
# keep only the columns with missing values, sorted by descending NA percentage
exploration[exploration['NA %'] > 0].sort_values(by='NA %', ascending=False)[:100]
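# The same per-column summary pattern can be checked on a toy frame; columns without missing values drop out of the filtered view:

```python
import pandas as pd

toy = pd.DataFrame({'a': [1, None, 3, None], 'b': [1, 2, 3, 4]})
summary = pd.DataFrame({
    'column': toy.columns,
    'type': toy.dtypes,
    'NA #': toy.isna().sum(),
    'NA %': toy.isna().sum() / toy.shape[0] * 100,
})
# only column 'a' (2 of 4 values missing) survives the filter
missing = summary[summary['NA %'] > 0].sort_values(by='NA %', ascending=False)
print(missing['NA %'].tolist())  # [50.0]
```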
# + id="FlPEpdw5691O" cellView="form"
#@title Data Filter
def maltratos(data, GRP_PRI='Trabajador/A Sexual'):
    # keep only the records whose priority group matches GRP_PRI
    data_geral = data.loc[data['PCTE_GRP_PRI'] == GRP_PRI]
df_data = data_geral[['PROF_SEXO','PCTE_NOMBRES','PCTE_SEXO',
'PCTE_ORI_SEX',
'PCTE_IDE_GEN',
'PCTE_FEC_NAC','PCTE_EDAD_COMPUESTA',
'PCTE_NACIONALIDAD','PCTE_NAC_ETN',
'PCTE_PUEBLO','ATEMED_CIE10',
'ATEMED_DES_CIE10','PCTE_ULT_IMC_CATEGORIA', 'PCTE_GRP_PRI']]
return df_data
df = maltratos(df_2020)
df_ts = maltratos(df_2020, GRP_PRI='Trabajador/A Sexual')
df_vf = maltratos(df_2020, GRP_PRI='Víctimas De Violencia Física')
df_vp = maltratos(df_2020, GRP_PRI='Víctimas De Violencia Psicológica')
df_vs = maltratos(df_2020, GRP_PRI='Víctimas De Violencia Sexual')
df_fp = maltratos(df_2020, GRP_PRI='Víctimas De Violencia Física|Víctimas De Violencia Psicológica')
df_ps = maltratos(df_2020, GRP_PRI='Víctimas De Violencia Psicológica|Víctimas De Violencia Sexual')
df_ec = maltratos(df_2020, GRP_PRI='Enfermedades Catastróficas y Raras|Trabajador/A Sexual')
df_pl = maltratos(df_2020, GRP_PRI='Privadas De La Libertad')
df_em = maltratos(df_2020, GRP_PRI='Embarazadas|Víctimas De Violencia Psicológica')
# + [markdown] id="XM3x9sS7Qxu6"
# # 1. SEX WORKERS
# + colab={"base_uri": "https://localhost:8080/", "height": 309} id="8ibyHtSl6ixQ" cellView="form" outputId="660932b2-a215-495d-a10f-8449502baaaf"
#@title Sex workers and diseases by ICD-10
plt.figure(figsize=(10,10))
plt.subplot(211)
rects1 = df_ts.ATEMED_DES_CIE10.value_counts().sort_values().plot(kind = 'barh')
#plt.title('Trabajadores(as) y enfermidades por CIE-10')
plt.xlabel('Count')
for p in rects1.patches:
width = p.get_width()
plt.text(15+p.get_width(), p.get_y()+0.55*p.get_height(),
'{:1}'.format(width),
ha='center', va='center')
# + colab={"base_uri": "https://localhost:8080/"} id="rwS1EPAJb8vo" outputId="84e98652-9043-4ccc-929b-e9d2f8f69a1b"
df_ts.ATEMED_DES_CIE10.value_counts(1)
# + cellView="form" colab={"base_uri": "https://localhost:8080/", "height": 590} id="9E3pRppXN6LB" outputId="6a577942-b2c5-4e55-8957-7aa4fd955798"
#@title Sex workers by sex
plt.figure(figsize=(10,10))
rects1 = df_ts.PCTE_SEXO.value_counts().sort_values().plot(kind = 'pie')
#plt.title('Trabajadores(as) y sexo')
plt.show()
# + cellView="form" colab={"base_uri": "https://localhost:8080/", "height": 309} id="V-sDEWYcPK3_" outputId="ad606c1d-4856-48cc-9a12-33831addbe5f"
#@title Sex workers by sexual orientation
plt.figure(figsize=(10,10))
plt.subplot(211)
rects1 = df_ts.PCTE_ORI_SEX.value_counts().sort_values().plot(kind = 'barh')
#plt.title('Trabajadores(as) y enfermidades por CIE-10')
plt.xlabel('Count')
for p in rects1.patches:
width = p.get_width()
plt.text(15+p.get_width(), p.get_y()+0.55*p.get_height(),
'{:1}'.format(width),
ha='center', va='center')
# + cellView="form" colab={"base_uri": "https://localhost:8080/", "height": 309} id="oKPr8bhzPM3O" outputId="a47f6733-d7e7-4a52-fbfa-3ac44e1f193e"
#@title Sex workers by nationality
plt.figure(figsize=(10,10))
plt.subplot(211)
rects1 = df_ts.PCTE_NACIONALIDAD.value_counts().sort_values().plot(kind = 'barh')
#plt.title('Trabajadores(as) y enfermidades por CIE-10')
plt.xlabel('Count')
for p in rects1.patches:
width = p.get_width()
plt.text(15+p.get_width(), p.get_y()+0.55*p.get_height(),
'{:1}'.format(width),
ha='center', va='center')
# + id="MFj07OfiQjJg"
# + [markdown] id="64_3nxvVRElq"
# # 2. PHYSICAL VIOLENCE
# + cellView="form" colab={"base_uri": "https://localhost:8080/", "height": 433} id="uXHSvl0qQjW1" outputId="ab988ade-cb95-4707-838f-6e4aa3d029de"
#@title Physical violence and diseases by ICD-10
plt.figure(figsize=(15,15))
plt.subplot(211)
rects1 = df_vf.ATEMED_DES_CIE10.value_counts().sort_values().plot(kind = 'barh')
#plt.title('Trabajadores(as) y enfermidades por CIE-10')
plt.xlabel('Count')
for p in rects1.patches:
width = p.get_width()
plt.text(0.2+p.get_width(), p.get_y()+0.55*p.get_height(),
'{:1}'.format(width),
ha='center', va='center')
# + cellView="form" colab={"base_uri": "https://localhost:8080/", "height": 574} id="a37bkUD4QjaT" outputId="a84a53e8-e2c5-4499-d0ea-c1777714018f"
#@title Physical violence by patient sex
plt.figure(figsize=(10,10))
rects1 = df_vf.PCTE_SEXO.value_counts().sort_values().plot(kind = 'pie')
#plt.title('Trabajadores(as) y sexo')
plt.show()
# + cellView="form" colab={"base_uri": "https://localhost:8080/", "height": 309} id="ftp_0m6HSSWI" outputId="e1a069c1-b3b9-4cbf-c5b8-f448f97e9c19"
#@title Physical violence by sexual orientation
plt.figure(figsize=(10,10))
plt.subplot(211)
rects1 = df_vf.PCTE_ORI_SEX.value_counts().sort_values().plot(kind = 'barh')
#plt.title('Trabajadores(as) y enfermidades por CIE-10')
plt.xlabel('Count')
for p in rects1.patches:
width = p.get_width()
plt.text(0.3+p.get_width(), p.get_y()+0.55*p.get_height(),
'{:1}'.format(width),
ha='center', va='center')
# + id="6EFNDSBtEoh7" colab={"base_uri": "https://localhost:8080/", "height": 309} cellView="form" outputId="3bafecd2-5c2c-4b49-d759-a2ca3b293ebc"
#@title Physical violence by nationality
plt.figure(figsize=(10,10))
plt.subplot(211)
rects1 = df_vf.PCTE_NACIONALIDAD.value_counts().sort_values().plot(kind = 'barh')
#plt.title('Trabajadores(as) y enfermidades por CIE-10')
plt.xlabel('Count')
for p in rects1.patches:
width = p.get_width()
plt.text(0.4+p.get_width(), p.get_y()+0.55*p.get_height(),
'{:1}'.format(width),
ha='center', va='center')
# + colab={"base_uri": "https://localhost:8080/", "height": 309} cellView="form" id="U_naiNOhw3bl" outputId="4df2f7fa-d48c-4e4c-a7e4-f6117eb4c659"
#@title Physical violence by ethnicity
plt.figure(figsize=(10,10))
plt.subplot(211)
rects1 = df_vf.PCTE_NAC_ETN.value_counts().sort_values().plot(kind = 'barh')
#plt.title('Trabajadores(as) y enfermidades por CIE-10')
plt.xlabel('Count')
for p in rects1.patches:
width = p.get_width()
plt.text(0.4+p.get_width(), p.get_y()+0.55*p.get_height(),
'{:1}'.format(width),
ha='center', va='center')
# + id="xUdp09cbTi4d"
# + [markdown] id="HSK3I80XTopo"
# # 3. PSYCHOLOGICAL VIOLENCE
# + cellView="form" colab={"base_uri": "https://localhost:8080/", "height": 433} id="MATKXZ0ZTs0v" outputId="d4b62a3c-e8ee-44a9-b370-1fb37bad781b"
#@title Psychological violence and diseases by ICD-10
plt.figure(figsize=(15,15))
plt.subplot(211)
rects1 = df_vp.ATEMED_DES_CIE10.value_counts().sort_values().plot(kind = 'barh')
#plt.title('Trabajadores(as) y enfermidades por CIE-10')
plt.xlabel('Count')
for p in rects1.patches:
width = p.get_width()
plt.text(0.2+p.get_width(), p.get_y()+0.55*p.get_height(),
'{:1}'.format(width),
ha='center', va='center')
# + cellView="form" colab={"base_uri": "https://localhost:8080/", "height": 574} id="_Zh3h65eTzxh" outputId="63257ef0-0eb0-461e-94a1-bf88e1ea252d"
#@title Psychological violence by patient sex
plt.figure(figsize=(10,10))
labels = 'Men', 'Women'
rects1 = df_vp.PCTE_SEXO.value_counts().sort_values()
plt.pie(rects1, autopct='%1.1f%%',labels=labels,
shadow=True, startangle=90)
#plt.title('Trabajadores(as) y sexo')
plt.show()
# + cellView="form" colab={"base_uri": "https://localhost:8080/", "height": 309} id="0RfMMDkgUDdf" outputId="f99763a0-58cb-4ae7-bcab-95cb79d5b6b2"
#@title Psychological violence by sexual orientation
plt.figure(figsize=(10,10))
plt.subplot(211)
rects1 = df_vp.PCTE_ORI_SEX.value_counts().sort_values().plot(kind = 'barh')
#plt.title('Trabajadores(as) y enfermidades por CIE-10')
plt.xlabel('Count')
for p in rects1.patches:
width = p.get_width()
plt.text(1+p.get_width(), p.get_y()+0.55*p.get_height(),
'{:1}'.format(width),
ha='center', va='center')
# + cellView="form" colab={"base_uri": "https://localhost:8080/", "height": 309} id="5jwcGDQgVZ6I" outputId="46a8375e-37af-4d81-d2f0-3c6c21963802"
#@title Psychological violence by nationality
plt.figure(figsize=(10,10))
plt.subplot(211)
rects1 = df_vp.PCTE_NACIONALIDAD.value_counts().sort_values().plot(kind = 'barh')
#plt.title('Trabajadores(as) y enfermidades por CIE-10')
plt.xlabel('Count')
for p in rects1.patches:
width = p.get_width()
plt.text(2.5+p.get_width(), p.get_y()+0.55*p.get_height(),
'{:1}'.format(width),
ha='center', va='center')
# + cellView="form" colab={"base_uri": "https://localhost:8080/", "height": 309} id="g-uiptZcVtz3" outputId="0cfa0e0d-42b3-4595-e0c2-b2a3b2a0a11e"
#@title Psychological violence by ethnicity
plt.figure(figsize=(10,10))
plt.subplot(211)
rects1 = df_vp.PCTE_NAC_ETN.value_counts().sort_values().plot(kind = 'barh')
#plt.title('Trabajadores(as) y enfermidades por CIE-10')
plt.xlabel('Count')
for p in rects1.patches:
width = p.get_width()
plt.text(0.4+p.get_width(), p.get_y()+0.55*p.get_height(),
'{:1}'.format(width),
ha='center', va='center')
# + id="y6-8hBJzV7Ye"
# + [markdown] id="0t-7kjD1V_VY"
# # 4. SEXUAL VIOLENCE
# + cellView="form" colab={"base_uri": "https://localhost:8080/", "height": 433} id="feIU377zWElK" outputId="f598a66f-5390-44f1-81aa-6f4df6360b8f"
#@title Sexual violence and diseases by ICD-10
plt.figure(figsize=(15,15))
plt.subplot(211)
rects1 = df_vs.ATEMED_DES_CIE10.value_counts().sort_values().plot(kind = 'barh')
#plt.title('Trabajadores(as) y enfermidades por CIE-10')
plt.xlabel('Count')
for p in rects1.patches:
width = p.get_width()
plt.text(0.2+p.get_width(), p.get_y()+0.55*p.get_height(),
'{:1}'.format(width),
ha='center', va='center')
# + cellView="form" colab={"base_uri": "https://localhost:8080/", "height": 574} id="9qUeHrH7WLBu" outputId="f8673a24-06ba-47b1-d0b3-cad3c703b89b"
#@title Sexual violence by patient sex
plt.figure(figsize=(10,10))
labels = 'Men', 'Women'
rects1 = df_vs.PCTE_SEXO.value_counts().sort_values()
plt.pie(rects1, autopct='%1.1f%%',labels=labels,
shadow=True, startangle=90)
#plt.title('Trabajadores(as) y sexo')
plt.show()
# + cellView="form" colab={"base_uri": "https://localhost:8080/", "height": 309} id="EVoGTLGZWX6e" outputId="254a4411-7841-4869-880f-77a7d5b5289e"
#@title Sexual violence by sexual orientation
plt.figure(figsize=(10,10))
plt.subplot(211)
rects1 = df_vs.PCTE_ORI_SEX.value_counts().sort_values().plot(kind = 'barh')
#plt.title('Trabajadores(as) y enfermidades por CIE-10')
plt.xlabel('Count')
for p in rects1.patches:
width = p.get_width()
plt.text(1+p.get_width(), p.get_y()+0.55*p.get_height(),
'{:1}'.format(width),
ha='center', va='center')
# + cellView="form" colab={"base_uri": "https://localhost:8080/", "height": 309} id="eeVU5qkWWhHG" outputId="40ea0580-a5b8-4c75-f900-f90940282d76"
#@title Sexual violence by nationality
plt.figure(figsize=(10,10))
plt.subplot(211)
rects1 = df_vs.PCTE_NACIONALIDAD.value_counts().sort_values().plot(kind = 'barh')
#plt.title('Trabajadores(as) y enfermidades por CIE-10')
plt.xlabel('Count')
for p in rects1.patches:
width = p.get_width()
plt.text(2.5+p.get_width(), p.get_y()+0.55*p.get_height(),
'{:1}'.format(width),
ha='center', va='center')
# + cellView="form" id="wSrRkidwWlRu"
#@title Sexual violence by ethnicity
#@markdown No records available
# + id="sF2mzg6BXcVB"
# file: notebooks/dataflow_project_1.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import os
import pandas as pd
from oemof import solph
from oemof.solph.plumbing import sequence
import datetime as dt
import matplotlib.pyplot as plt
# +
# function to summarize flow information
def calc_total_flows(storage_flow, boiler_flow, total_flow, demand, schedule=None):
total_flows = pd.DataFrame(data={"storage_flow": storage_flow[storage_flow.columns[0]],
"boiler_flow": boiler_flow[boiler_flow.columns[0]]})
if schedule is not None:
total_flows['schedule'] = schedule
total_flows['total_heat_flow'] = total_flow[total_flow.columns[0]]
total_flows['demand'] = demand["demand"]
return total_flows
# function for preparing area plots
def prep_area_plot(df):
stor_boil_flow = df.drop(columns=["total_heat_flow", "demand"])
stor_boil_flow = stor_boil_flow.asfreq(freq="0.1min", method="backfill")
# move storage_flow after boiler_flow so the area plot stacks the boiler first
columns = list(stor_boil_flow.columns)
columns.insert(1, columns.pop(0))
stor_boil_flow = stor_boil_flow.reindex(columns=columns)
return stor_boil_flow
# -
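# With toy single-column frames standing in for the solph result sequences, `calc_total_flows` simply lines the series up next to the demand. A minimal sketch of what it assembles (no oemof needed; the values here are made up for illustration):

```python
import pandas as pd

idx = pd.date_range('1/1/2017', periods=3, freq='h')
storage = pd.DataFrame({'flow': [0.0, 5.0, 0.0]}, index=idx)
boiler = pd.DataFrame({'flow': [10.0, 5.0, 10.0]}, index=idx)
total = pd.DataFrame({'flow': [10.0, 10.0, 10.0]}, index=idx)
demand = pd.DataFrame({'demand': [10.0, 10.0, 10.0]}, index=idx)

# same assembly as calc_total_flows above (without the optional schedule)
flows = pd.DataFrame({'storage_flow': storage[storage.columns[0]],
                      'boiler_flow': boiler[boiler.columns[0]]})
flows['total_heat_flow'] = total[total.columns[0]]
flows['demand'] = demand['demand']

# the heat balance holds in every timestep
print((flows['storage_flow'] + flows['boiler_flow'] == flows['demand']).all())
```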
filename = 'test_schedule_flow.csv'
full_filename = os.path.join(os.getcwd(), filename)
data = pd.read_csv(full_filename, sep=",")
timeindex = pd.date_range('1/1/2017', periods=8, freq='H')
periods = len(timeindex)
energysystem = solph.EnergySystem(timeindex=timeindex)
# +
b_gas = solph.Bus(label = "natural_gas")
source_gas = solph.Source(label="source_gas",
outputs={b_gas: solph.Flow()})
energysystem.add(b_gas, source_gas)
# -
# # Adding components
#
# Three boilers, each with its own demand, source, buses and storage, will be added.
#
# The three systems do not interact with each other.
#
# The following energy system is modeled:
#
# 
#
# ## Add regular components
# - Source
# - Demand
# - Transformer
# - GenericStorage
# - Bus
#
# without any restrictions.
# +
b_th_regular = solph.Bus(label = "heat_regular")
source_th_regular = solph.Source(label='source_th_regular',
outputs={b_th_regular: solph.Flow(variable_costs=10000)})
demand_th_regular = solph.Sink(label='demand_th_regular', inputs={b_th_regular: solph.Flow(
fix=data['demand_th'], nominal_value=200)})
boiler_regular = solph.Transformer(
label="boiler_regular",
inputs={b_gas: solph.Flow()},
outputs={b_th_regular: solph.Flow(nominal_value=200, variable_costs=1)},
conversion_factors={b_th_regular: 1}
)
storage_th_regular = solph.components.GenericStorage(
label='storage_th_regular',
inputs={b_th_regular: solph.Flow()},
outputs={b_th_regular: solph.Flow()},
initial_storage_level=1,
balanced=False,
nominal_storage_capacity=1000)
energysystem.add(b_th_regular, source_th_regular, demand_th_regular, boiler_regular, storage_th_regular)
# -
# ## Add schedule components
# - Source
# - Demand
# - Transformer
# - GenericStorage
# - Bus
#
# with a schedule for the boiler (Transformer) flow.
# +
b_th_schedule = solph.Bus(label = "heat_schedule")
source_th_schedule = solph.Source(label='source_th_schedule',
outputs={b_th_schedule: solph.Flow(variable_costs=10000)})
demand_th_schedule = solph.Sink(label='demand_th_schedule', inputs={b_th_schedule: solph.Flow(
fix=data['demand_th'], nominal_value=200)})
schedule = [300,30,10,300,120,120,50,10]
schedule_df = pd.DataFrame(index = timeindex, data={"schedule": schedule})
boiler_schedule = solph.Transformer(
label="boiler_schedule",
inputs={b_gas: solph.Flow()},
outputs={b_th_schedule: solph.Flow(nominal_value=200, variable_costs=0,
schedule_cost_pos = [999,999,999,999,999,999,999,999],
schedule_cost_neg = [999,999,999,999,999,999,999,999],
schedule=schedule)},
conversion_factors={b_th_schedule: 1}
)
storage_th_schedule = solph.components.GenericStorage(
label='storage_th_schedule',
inputs={b_th_schedule: solph.Flow()},
outputs={b_th_schedule: solph.Flow()},
initial_storage_level=0.5,
balanced=False,
nominal_storage_capacity=2000)
energysystem.add(b_th_schedule, source_th_schedule, demand_th_schedule, boiler_schedule, storage_th_schedule)
# -
# ## Add flexible components
# - Source
# - Demand
# - Transformer
# - GenericStorage
# - Bus
#
# with a schedule for only a few timesteps (not all) of the boiler (Transformer) flow.
# +
b_th_flex = solph.Bus(label = "heat_flex")
source_th_flex = solph.Source(label='source_th_flex',
outputs={b_th_flex: solph.Flow(variable_costs=10000)})
demand_th_flex = solph.Sink(label='demand_th_flex', inputs={b_th_flex: solph.Flow(
fix=data['demand_th'], nominal_value=200)})
schedule_flex = [None,20,None,None,100,30,10,100]
schedule_flex_df = pd.DataFrame(index = timeindex, data={"schedule": schedule_flex})
boiler_flex = solph.Transformer(
label="boiler_flex",
inputs={b_gas: solph.Flow()},
outputs={b_th_flex: solph.Flow(nominal_value=200, variable_costs=1,
schedule_cost_pos = 999,
schedule_cost_neg = 999,
schedule=schedule_flex)},
conversion_factors={b_th_flex: 1}
)
storage_th_flex = solph.components.GenericStorage(
label='storage_th_flex',
inputs={b_th_flex: solph.Flow()},
outputs={b_th_flex: solph.Flow()},
initial_storage_level=1,
balanced=False,
nominal_storage_capacity=1000)
energysystem.add(b_th_flex, source_th_flex, demand_th_flex, boiler_flex, storage_th_flex)
# +
om = solph.Model(energysystem)
om.solve(solver='cbc', solve_kwargs={'tee': True})
results = solph.processing.results(om)
# -
# # Show results
#
# +
# boiler heat flow
res_boiler_regular = results[(boiler_regular, b_th_regular)]["sequences"]
res_boiler_schedule = results[(boiler_schedule, b_th_schedule)]["sequences"]
res_boiler_flex = results[(boiler_flex, b_th_flex)]["sequences"]
# storage heat flows
stor2h_reg = results[(storage_th_regular, b_th_regular)]["sequences"]
stor2h_sched = results[(storage_th_schedule, b_th_schedule)]["sequences"]
stor2h_flex = results[(storage_th_flex, b_th_flex)]["sequences"]
# demand
demand_list = list((data["demand_th"])*200)
demand = pd.DataFrame(data = {"demand": demand_list})
demand.index = res_boiler_regular.index
# actual flow to demand
th2demand_reg = results[(b_th_regular, demand_th_regular)]["sequences"]
th2demand_sched = results[(b_th_schedule, demand_th_schedule)]["sequences"]
th2demand_flex = results[(b_th_flex, demand_th_flex)]["sequences"]
# summarize heat flows from boiler and storage compared to demand
tot_flow_reg = calc_total_flows(stor2h_reg, res_boiler_regular, th2demand_reg, demand)
tot_flow_sched = calc_total_flows(stor2h_sched, res_boiler_schedule, th2demand_sched, demand, schedule)
tot_flow_flex = calc_total_flows(stor2h_flex, res_boiler_flex, th2demand_flex, demand, schedule_flex)
# put all boiler activities in one df
all_activities = pd.DataFrame(data={"reg_boiler": res_boiler_regular["flow"],
"sched_boiler": res_boiler_schedule["flow"],
"flex_boiler": res_boiler_flex["flow"]})
# storage capacities
res_stor_reg = results[(storage_th_regular, None)]["sequences"]
res_stor_sched = results[(storage_th_schedule, None)]["sequences"]
res_stor_flex = results[(storage_th_flex, None)]["sequences"]
# -
# ## View boiler activities
ax = all_activities.plot(kind='line', drawstyle='steps-pre', grid=True)
ax.set_xlabel('Time (h)')
ax.set_ylabel('Q (kW)')
plt.show()
all_activities["schedule_flex"]=schedule_flex
all_activities["schedule"]=schedule
all_activities
# ## View storage capacities compared to demand and boiler activity
print("\n\n#### REGULAR HEAT FLOWS COMPARED TO DEMAND (AREA PLOT) ####\n\n")
stor_boil_flow_reg = prep_area_plot(tot_flow_reg)
ax = stor_boil_flow_reg.plot(kind="area", colormap = "Set1")
ax.set_xlabel('Time (h)')
ax.set_ylabel('Q (kW)')
demand.plot(kind = "line", drawstyle='steps-pre', ax=ax, linewidth=1.5, color = "GREEN")
plt.show()
tot_flow_reg
print("\n\n#### SCHEDULE HEAT FLOWS COMPARED TO DEMAND (AREA PLOT) ####\n\n")
stor_boil_flow_sched = prep_area_plot(tot_flow_sched)
ax = stor_boil_flow_sched[stor_boil_flow_sched.columns.difference(['schedule'])].plot(kind="area", colormap = "Set1")
ax.set_xlabel('Time (h)')
ax.set_ylabel('Q (kW)')
demand.plot(kind = "line", drawstyle='steps-pre', ax=ax, linewidth=1.5, color = "GREEN", ylim=(0,350))
schedule_df.plot(kind = "line", drawstyle='steps-pre', ax=ax, linewidth=1.5, color = "BLUE")
plt.show()
tot_flow_sched
print("\n\n#### FLEXIBLE HEAT FLOWS COMPARED TO DEMAND (AREA PLOT) ####\n\n")
stor_boil_flow_flex = prep_area_plot(tot_flow_flex)
ax = stor_boil_flow_flex[stor_boil_flow_flex.columns.difference(['schedule'])].plot(kind="area", colormap = "Set1")
ax.set_xlabel('Time (h)')
ax.set_ylabel('Q (kW)')
demand.plot(kind = "line", drawstyle='steps-pre', ax=ax, linewidth=1.5, color = "GREEN")
schedule_flex_df.plot(kind = "line", drawstyle='steps-pre', ax=ax, linewidth=1.5, color = "BLUE")
plt.show()
tot_flow_flex
# The excess of the boiler heat production during the last timestep was stored into the storage:
h2stor_flex = results[(b_th_flex, storage_th_flex)]["sequences"]
h2stor_flex
| oemof_examples/oemof.solph/v0.4.x/schedule_flow/flow_schedule.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernel_info:
# name: python3
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %matplotlib inline
from matplotlib import style
style.use('fivethirtyeight')
import matplotlib.pyplot as plt
from dateutil.relativedelta import relativedelta
import numpy as np
import pandas as pd
import datetime as dt
# ### Define constants to be used later
last_day_of_data_dt = dt.date(2017,8,23)
# # Reflect Tables into SQLAlchemy ORM
# Python SQL toolkit and Object Relational Mapper
import sqlalchemy
from sqlalchemy.ext.automap import automap_base
from sqlalchemy.orm import Session
from sqlalchemy import create_engine, func, inspect
engine = create_engine("sqlite:///Resources/hawaii.sqlite")
# +
# reflect an existing database into a new model
Base = automap_base()
Base.prepare(engine, reflect=True)
# reflect the tables
Base.classes.keys()
# -
# We can view all of the classes that automap found
inspector = inspect(engine)
table_names = inspector.get_table_names()
for table in table_names:
print(f"{table}")
column_names = inspector.get_columns(table)
for column in column_names:
PK = " -PK-" if column['primary_key'] == 1 else ""
print(f"\t{column['name']} {column['type']} {PK}")
print("-"*50)
Measurement = Base.classes.measurement
Station = Base.classes.station
# Create our session (link) from Python to the DB
session = Session(engine)
# # Exploratory Climate Analysis
plt.style.available
import matplotlib.ticker as ticker
# Played with different styles, but in the end kept the fivethirtyeight as it was defined above
# Get the last day of data from the database
from datetime import datetime
last_day_of_data_dt = session.query(func.max(Measurement.date)).first()[0]
last_day_of_data_dt = datetime.strptime(last_day_of_data_dt, '%Y-%m-%d').date()
print(type(last_day_of_data_dt))
# +
# Design a query to retrieve the last 12 months of precipitation data and plot the results
# Calculate the date 1 year ago from the last data point in the database
twelve_months_delta = relativedelta(months=12)
one_year_ago_date = last_day_of_data_dt - twelve_months_delta
# Perform a query to retrieve the data and precipitation scores
last_12_months = session.query(Measurement.date, Measurement.prcp).filter(Measurement.date >= one_year_ago_date).all()
# Save the query results as a Pandas DataFrame and set the index to the date column
last_12_months_df = pd.DataFrame(last_12_months, columns=['date', 'prcp'])
last_12_months_df.set_index("date", inplace=True)
last_12_months_df.index = pd.to_datetime(last_12_months_df.index)
# Sort the dataframe by date
last_12_months_df.sort_values("date", inplace=True)
xticks = []
min_date = last_12_months_df.index.min()
max_date = last_12_months_df.index.max()
min_tick_date = min_date
date_tick = min_tick_date
# Empirically reproduce the original irregular tick spacing; hard-coding the
# tick dates would have been simpler, since month lengths differ.
days = 39
iterations = 0
max_comparison_date = max_date - dt.timedelta(days=days)
step = 1
while date_tick < max_comparison_date:
xticks.append(date_tick)
date_tick = date_tick + dt.timedelta(days=days)
iterations += step
if iterations == 2:
days+=1
if iterations == 6:
days+=3
if iterations == 7:
days-=3
# Use Pandas Plotting with Matplotlib to plot the data
ax = last_12_months_df.plot(xlim=min_tick_date, xticks=xticks, rot=90,figsize=(8,5))
patches, labels = ax.get_legend_handles_labels()
labels[0] = "precipitation"
ax.set_xlabel("Date")
ax.set_ylabel("Inches")
ax.legend(patches, labels, loc='upper right')
ax.set_xlim(min_date, max_date)
# Center the horizontal tick labels
for tick in ax.xaxis.get_major_ticks():
tick.tick1line.set_markersize(0)
tick.tick2line.set_markersize(0)
tick.label1.set_horizontalalignment('center')
plt.show()
# -
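# The hand-tuned tick stepping above can be replaced by letting pandas generate month-start positions; a simpler sketch, assuming the same one-year window:

```python
import pandas as pd

min_date = pd.Timestamp("2016-08-23")
max_date = pd.Timestamp("2017-08-23")
# One tick at the start of each month inside the range
xticks = pd.date_range(start=min_date, end=max_date, freq="MS")
print(len(xticks))  # 12
```

# These timestamps can be passed directly to the `xticks` argument of `DataFrame.plot`.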
# Use Pandas to calculate the summary statistics for the precipitation data
last_12_months_df.describe()
# Design a query to show how many stations are available in this dataset?
session.query(func.count(Station.id)).all()
# What are the most active stations? (i.e. what stations have the most rows)?
# List the stations and the counts in descending order.
sel=[Station.station,
func.count(Measurement.date)]
active_stations_query = session.query(*sel).filter(Measurement.station == Station.station).group_by(Station.station) \
.order_by(func.count(Measurement.date).desc())
active_stations_query.all()
# +
# Using the station id from the previous query, calculate the lowest temperature recorded,
# highest temperature recorded, and average temperature of the most active station?
most_active_station = active_stations_query.limit(1)[0][0]
most_active_station
sel=[func.min(Measurement.tobs),
func.max(Measurement.tobs),
func.avg(Measurement.tobs)]
statistics = session.query(*sel).filter(Measurement.station == Station.station).filter(Station.station == most_active_station).all()
print(f"Statistics for most active station: {most_active_station}")
print(f"\tlowest temperature recorded : {statistics[0][0]}")
print(f"\thighest temperature recorded: {statistics[0][1]}")
print(f"\taverage temperature recorded: {round(statistics[0][2], 1)}")
# +
# Choose the station with the highest number of temperature observations.
# Query the last 12 months of temperature observation data for this station and plot the results as a histogram
sel=[Station.station,
func.count(Measurement.tobs)]
active_stations_query = session.query(*sel).filter(Measurement.station == Station.station) \
.filter(Measurement.tobs != None) \
.group_by(Station.station) \
.order_by(func.count(Measurement.tobs).desc())
selected_station = active_stations_query.limit(1).all()[0][0]
#selected_station = most_active_station
print(selected_station)
last_12_months_temp = session.query(Measurement.tobs) \
.filter(Measurement.date >= one_year_ago_date) \
.filter(Measurement.station == selected_station) \
.filter(Measurement.tobs != None).order_by(Measurement.tobs).all()
last_12_months_temp = list(np.ravel(last_12_months_temp))
#print(last_12_months_temp)
temperatures = list(last_12_months_temp)
min_temp = min(temperatures)
max_temp = max(temperatures)
print(len(temperatures))
fig = plt.figure(figsize=(8,5))
#ax = fig.add_axes([0.1, 0.1, 0.6, 0.75])
ax = plt.hist(temperatures, bins=12)
plt.xlabel("Temperature")
plt.ylabel("Frequency")
plt.xlim(min_temp -1, max_temp + 1)
plt.legend(("tobs",), loc="best")
# -
# ## Bonus Challenge Assignment
import scipy.stats as stats
from scipy.stats import ttest_ind, ttest_ind_from_stats
# +
# Temperature Analysis I
# Dates are stored as text, so select June and December rows with a LIKE pattern on the date string
june_temperatures = session.query(Measurement.tobs).filter(Measurement.date.ilike('____-06-__')).all()
december_temperatures = session.query(Measurement.tobs).filter(Measurement.date.ilike('____-12-__')).all()
june_temperatures = list(np.ravel(june_temperatures))
december_temperatures = list(np.ravel(december_temperatures))
june_df = pd.DataFrame(june_temperatures)
december_df = pd.DataFrame(december_temperatures)
t, p = ttest_ind(june_temperatures, december_temperatures, equal_var=False)
print(f"ttest_ind: t = {t} p = {p}")
# Compute the descriptive statistics of june and december data.
#referenced from https://stackoverflow.com/questions/22611446/perform-2-sample-t-test
june_bar = june_df.mean()
june_var = june_df.var(ddof=1)
njune = june_df.size
june_dof = njune - 1
december_bar = december_df.mean()
december_var = december_df.var(ddof=1)
ndecember = december_df.size
december_dof = ndecember - 1
#std deviation
s = np.sqrt((june_var + december_var)/2)
print(f"std deviation={s}")
## Calculate the t-statistic (this pooled formula assumes equal sample sizes)
t = (june_bar - december_bar)/(s*np.sqrt(2/njune))
print(f"t-statistic = {t}")
t2, p2 = ttest_ind_from_stats(june_bar, np.sqrt(june_var), njune,
december_bar, np.sqrt(december_var), ndecember,
equal_var=False)
print("ttest_ind_from_stats: t = %g p = %g" % (t2, p2))
# -
# #### Bonus: Temperature Analysis I
#
# A t-value greater than 2.8 indicates a difference, and with a value above 31 the June and December temperatures are very different. With the p-value well below 0.05, we can be confident in that difference. A paired t-test was not used because the two samples contain different numbers of observations.
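# The unpaired Welch test used above handles unequal sample sizes directly; a self-contained sketch with synthetic temperatures (toy numbers, not the Hawaii data):

```python
import numpy as np

rng = np.random.default_rng(0)
june = rng.normal(74.9, 3.3, 1700)      # toy June temperatures
december = rng.normal(71.0, 3.7, 1500)  # toy December temperatures

# Welch's t-statistic: no equal-variance or equal-n assumption
se = np.sqrt(june.var(ddof=1) / june.size + december.var(ddof=1) / december.size)
t = (june.mean() - december.mean()) / se
print(t > 2.8)  # True for clearly separated means
```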
# +
# This function called `calc_temps` will accept start date and end date in the format '%Y-%m-%d'
# and return the minimum, average, and maximum temperatures for that range of dates
def calc_temps(start_date, end_date):
"""TMIN, TAVG, and TMAX for a list of dates.
Args:
start_date (string): A date string in the format %Y-%m-%d
end_date (string): A date string in the format %Y-%m-%d
Returns:
TMIN, TAVE, and TMAX
"""
return session.query(func.min(Measurement.tobs), func.avg(Measurement.tobs), func.max(Measurement.tobs)).\
filter(Measurement.date >= start_date).filter(Measurement.date <= end_date).all()
# function usage example
print(calc_temps('2012-02-28', '2012-03-05'))
# +
# Use your previous function `calc_temps` to calculate the tmin, tavg, and tmax
# for your trip using the previous year's data for those same dates.
trip_dates = [dt.date(2018,6,1), dt.date(2018,6,2), dt.date(2018,6,3), dt.date(2018,6,4),
dt.date(2018,6, 5), dt.date(2018,6,6), dt.date(2018,6,7)]
last_year_dates = [d - twelve_months_delta for d in trip_dates]  # avoid shadowing the dt module
min_last_year_date = min(last_year_dates)
max_last_year_date = max(last_year_dates)
vacation_temp_stats = calc_temps(min_last_year_date, max_last_year_date)
print(f"For vacation starting {trip_dates[0]} and ending {trip_dates[-1]} last year statistics are:")
min_temp, avg_temp, max_temp = np.ravel(vacation_temp_stats)
print(f"Min Temp={min_temp}, Avg Temp={round(avg_temp, 1)}, Max Temp={max_temp}")
# +
font = {'family' : 'arial',
'weight' : 'ultralight',
'size' : 9}
plt.rc('font', **font)
# to remove the vertical lines
# https://stackoverflow.com/questions/16074392/getting-vertical-gridlines-to-appear-in-line-plot-in-matplotlib
fig, ax = plt.subplots(figsize=(1.75, 5))
ax.xaxis.grid(False)
# Plot the results from your previous query as a bar chart.
# Use "Trip Avg Temp" as your Title
# Use the average temperature for the y value
# Use the peak-to-peak (tmax-tmin) value as the y error bar (yerr)
plt.bar(min_temp, height=avg_temp, color="lightsalmon", alpha=0.75, yerr=(max_temp - min_temp))
plt.ylim(0, 101)
plt.title("Trip Avg Temp")
plt.ylabel("Temp (F)")
#to remove the bottom xticks
#https://stackoverflow.com/questions/12998430/remove-xticks-in-a-matplotlib-plot
plt.tick_params(
axis='x', # changes apply to the x-axis
which='both', # both major and minor ticks are affected
bottom=False, # ticks along the bottom edge are off
top=False, # ticks along the top edge are off
labelbottom=False) # labels along the bottom edge are off
plt.show()
# -
# Calculate the total amount of rainfall per weather station for your trip dates using the previous year's matching dates.
# Sort this in descending order by precipitation amount and list the station, name, latitude, longitude, and elevation
sel = (Station.station, Station.name, Station.latitude, Station.longitude, Station.elevation, func.sum(Measurement.prcp))
total_rainfall_per_station = session.query(*sel).filter(Station.station == Measurement.station) \
.group_by(Station.station) \
.order_by(func.sum(Measurement.prcp).desc()).all()
total_rainfall_per_station
# +
# Create a query that will calculate the daily normals
# (i.e. the averages for tmin, tmax, and tavg for all historic data matching a specific month and day)
def daily_normals(date):
"""Daily Normals.
Args:
date (str): A date string in the format '%m-%d'
Returns:
A list of tuples containing the daily normals, tmin, tavg, and tmax
"""
sel = [func.min(Measurement.tobs), func.avg(Measurement.tobs), func.max(Measurement.tobs)]
return session.query(*sel).filter(func.strftime("%m-%d", Measurement.date) == date).all()
daily_normals("01-01")
# +
# calculate the daily normals for your trip
# push each tuple of calculations into a list called `normals`
normals = []
# Set the start and end date of the trip
start_date = min(trip_dates)
end_date = max(trip_dates)
#start_date = f"{str(trip_dates[0].month).zfill(2)}-{str(trip_dates[0].day).zfill(2)}"
#end_date = f"{str(trip_dates[-1].month).zfill(2)}-{str(trip_dates[-1].day).zfill(2)}"
# Use the start and end date to create a range of dates
number_of_vacation_days = (end_date - start_date).days + 1
date_list = [start_date + relativedelta(days=x) for x in range(0, number_of_vacation_days)]
# Strip off the year and save a list of %m-%d strings
stripped_date_list = [f"{d.month:02d}-{d.day:02d}" for d in date_list]
# Loop through the list of %m-%d strings and calculate the normals for each date
for stripped_dt in stripped_date_list:
normals.append(np.ravel(daily_normals(stripped_dt)))
normals
# -
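# The per-date queries above can also be expressed as a single pandas groupby over all observations; a sketch with toy data:

```python
import pandas as pd

df = pd.DataFrame({"date": ["2016-01-01", "2017-01-01", "2016-01-02"],
                   "tobs": [62.0, 64.0, 60.0]})
# Group all years together by month-day, then aggregate
df["md"] = pd.to_datetime(df["date"]).dt.strftime("%m-%d")
normals = df.groupby("md")["tobs"].agg(["min", "mean", "max"])
print(normals.loc["01-01"].tolist())  # [62.0, 63.0, 64.0]
```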
# Load the previous query results into a Pandas DataFrame and add the `trip_dates` range as the `date` index
trip_days_df = pd.DataFrame(normals, columns=['tmin', 'tavg', 'tmax'])
trip_days_df['trip_dates'] = trip_dates
trip_days_df.set_index('trip_dates', inplace=True)
trip_days_df
# +
# Plot the daily normals as an area plot with `stacked=False`
plt.rc('font', **font)
y_ticks = [0, 20, 40, 60, 80]
ax = trip_days_df.plot.area(stacked=False, rot=45, alpha=.25, clip_on=True)
ax.set_ylabel("Temperature", fontsize=22)
ax.set_xlabel("Date", fontsize=22)
ax.set_yticks(y_ticks)
#ax.set_xticklabels(ax.get_xticklabels(), rotation=45)
plt.tight_layout()
for tick in ax.xaxis.get_major_ticks():
tick.tick1line.set_markersize(0)
tick.tick2line.set_markersize(0)
    tick.label1.set_fontsize(16)
tick.label1.set_horizontalalignment('right')
for tick in ax.yaxis.get_major_ticks():
tick.tick1line.set_markersize(0)
tick.tick2line.set_markersize(0)
    tick.label1.set_fontsize(16)
ax.legend(loc='best', prop={'size': 20})
from matplotlib.font_manager import FontProperties
fontP = FontProperties(weight=550)
fontP.set_size(20)
#ax.legend(loc='best', ncol=1, bbox_to_anchor=(0, 0, 1, 1),
# prop = fontP, facecolor='white', edgecolor='skyblue')
legend = plt.legend(frameon = 1, fontsize=22)
#legend.prop(fontP)
frame = legend.get_frame()
frame.set_facecolor('white')
frame.set_edgecolor('skyblue')
plt.rcParams.update({'font.size': 20})
ax.tick_params(axis='y',length=0)
ax.spines['top'].set_visible(False)
ax.spines['bottom'].set_visible(False)
ax.spines['left'].set_visible(False)
ax.spines['right'].set_visible(False)
# Turn off tick marks on both axes (tick_params expects booleans, not 'off')
ax.tick_params(
    axis='both',
    which='both',
    bottom=False,
    top=False,
    left=False,
    right=False,
    pad=3)
print( "Major ticks of y axis" )
for tick in ax.yaxis.get_major_ticks():
#tick.gridline.set_visible(False)
print( tick.tick1line.get_visible(), tick.tick2line.get_visible(), tick.gridline.get_visible() )
print( "Major ticks of x axis" )
for tick in ax.xaxis.get_major_ticks():
#tick.gridline.set_visible(true)
print( tick.tick1line.get_visible(), tick.tick2line.get_visible(), tick.gridline.get_visible() )
plt.show()
# -
| climate.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] colab_type="text" id="pWS6oL4-dz6G"
# # Combining Multiple Quarters of *Kepler* Data with Lightkurve
# + [markdown] colab_type="text" id="kvwmGwXod-ez"
# ## Learning Goals
#
# By the end of this tutorial, you will:
#
# - Understand a *Kepler* Quarter.
# - Understand how to download multiple quarters of data at once.
# - Learn how to normalize *Kepler* data.
# - Understand how to combine multiple quarters of data.
#
# + [markdown] colab_type="text" id="jbksIHc6ebWv"
# ## Introduction
# + [markdown] colab_type="text" id="PSZ_8PvYe9_L"
# The [*Kepler*](https://archive.stsci.edu/kepler), [*K2*](https://archive.stsci.edu/k2), and [*TESS*](https://archive.stsci.edu/tess) telescopes observe stars for long periods of time. These long, time series observations are broken up into separate chunks, called quarters for the *Kepler* mission, campaigns for *K2*, and sectors for *TESS*.
#
# Building light curves with as much data as is available is useful when searching for small signals, such as planetary transits or stellar pulsations. In this tutorial, we will learn how to use Lightkurve's tools to download and stitch together multiple quarters of *Kepler* observations.
#
# It is recommended to first read the tutorial discussing how to use *Kepler* light curve products with Lightkurve. That tutorial will introduce you to some specifics of how *Kepler*, *K2*, and *TESS* make observations, and how these are displayed as light curves. It also introduces some important terms and concepts that are referred to in this tutorial.
#
# This tutorial demonstrates how to access and combine multiple quarters of data from the *Kepler* space telescope, using the Lightkurve package.
#
# When accessing *Kepler* data through MAST, it will be stored in three-month chunks, corresponding to a quarter of observations. By combining and normalizing these separate observations, you can form a single light curve that spans all observed quarters. Utilizing all of the data available is especially important when looking at repeating signals, such as planet transits and stellar oscillations.
#
# We will use the *Kepler* mission as an example, but these tools are extensible to *TESS* and *K2* as well.
# + [markdown] colab_type="text" id="0wEdptxneRHW"
# ## Imports
# This tutorial requires the [**Lightkurve**](http://docs.lightkurve.org/) package, which in turn uses `matplotlib` for plotting.
# + colab={} colab_type="code" id="2PSbUM__eZ2f"
import lightkurve as lk
# %matplotlib inline
# + [markdown] colab_type="text" id="UEQFmg6I0_ug"
# ## 1. What is a *Kepler* Quarter?
# + [markdown] colab_type="text" id="xjaRMY6h5KUp"
# In order to search for planets around other stars, the *Kepler* space telescope performed near-continuous monitoring of a single field of view, from an Earth-trailing orbit. However, this posed a challenge. If the space telescope is trailing Earth and maintaining steady pointing, its solar panels would slowly receive less and less sunlight.
#
# In order to make sure the solar panels remained oriented towards the Sun, *Kepler* performed quarterly rolls, one every 93 days. The infographic below helps visualize this, and shows the points in the orbit where the rolls took place.
#
# After each roll, *Kepler* retained its fine-pointing at the same field of view. Because the camera rotated by 90 degrees, all of the target stars fell on different parts of the charge-coupled device (CCD) camera. This had an effect on the amount of flux recorded for the same star, because different CCD pixels have different sensitivities. The way in which the flux from the same stars was distributed on the CCD (called the point spread function or PSF) also changed after each roll, due to focus changes and other instrumental effects. As a result, the aperture mask set for a star had to be recomputed after each roll, and may capture slightly different amounts of flux.
#
# The data obtained between rolls is referred to as a quarter. While there are changes to the flux *systematics*, not much else changes quarter to quarter, and the majority of the target list remains identical. This means that, after removing systematic trends (such as was done for the presearch data conditioning simple aperture photometry (PDCSAP) flux), multiple quarters together can form one continuous observation.
#
# <!--  -->
# <img src="https://upload.wikimedia.org/wikipedia/commons/thumb/8/84/Kepler_space_telescope_orbit.png/800px-Kepler_space_telescope_orbit.png" width="800">
#
# *Figure*: Infographic showcasing the necessity of *Kepler*'s quarterly rolls and its Earth-trailing orbit. Source: [Kepler Science Center](https://keplergo.arc.nasa.gov/ExtendedMissionOverview.shtml).
# + [markdown] colab_type="text" id="iJ6UwtaVMkc9"
# **Note**:
# Observations by *K2* and *TESS* are also broken down into chunks of a month or more, called campaigns (for *K2*) and sectors (for *TESS*). While not discussed in this tutorial, the tools below work for these data products as well.
# + [markdown] colab_type="text" id="P-YzWvbEgS-F"
# ## 2. Downloading Multiple `KeplerLightCurve` Objects at Once
# + [markdown] colab_type="text" id="wNT61D9ugYzF"
# To start, we can use Lightkurve's [`search_lightcurve()`](https://docs.lightkurve.org/api/lightkurve.search.search_lightcurve.html) function to see what data are available for our target star on the [Mikulski Archive for Space Telescopes](https://archive.stsci.edu/kepler/) (MAST) archive. We will use the star [Kepler-8](http://www.openexoplanetcatalogue.com/planet/Kepler-8%20b/), a star somewhat larger than the Sun, and the host of a [hot Jupiter planet](https://en.wikipedia.org/wiki/Hot_Jupiter).
# + colab={"base_uri": "https://localhost:8080/", "height": 442} colab_type="code" executionInfo={"elapsed": 25325, "status": "ok", "timestamp": 1598466288814, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gj8sjdnDeqdejfe7OoouYPIclAQV0KSTpsU469Jyeo=s64", "userId": "05704237875861987058"}, "user_tz": 420} id="DP80G4aVh011" outputId="5ae20acd-16a3-4dc4-9f9e-3fb7fa527b6f"
search_result = lk.search_lightcurve("Kepler-8", mission="Kepler")
search_result
# + [markdown] colab_type="text" id="de-_j_QMh7JH"
# In this list, each row represents a different observing quarter, for a total of 18 quarters across four years. The **observation** column lists the *Kepler* Quarter. The **target_name** represents the *Kepler* Input Catalogue (KIC) ID of the target, and the **productFilename** column is the name of the FITS files downloaded from MAST. The **distance** column shows the separation on the sky between the searched coordinates and the downloaded objects; this is only relevant when searching for specific coordinates in the sky, and not when looking for individual objects.
#
# Instead of downloading a single quarter using the [`download()`](https://docs.lightkurve.org/api/lightkurve.search.SearchResult.html#lightkurve.search.SearchResult.download) function, we can use the [`download_all()`](https://docs.lightkurve.org/api/lightkurve.search.SearchResult.html#lightkurve.search.SearchResult.download_all) function to access all 18 quarters at once (this might take a while).
# + colab={"base_uri": "https://localhost:8080/", "height": 52} colab_type="code" executionInfo={"elapsed": 42502, "status": "ok", "timestamp": 1598466306008, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gj8sjdnDeqdejfe7OoouYPIclAQV0KSTpsU469Jyeo=s64", "userId": "05704237875861987058"}, "user_tz": 420} id="dHkx6vNDiLzI" outputId="2d34bd24-5ebe-444a-e231-680f4a9bf83b"
lc_collection = search_result.download_all()
lc_collection
# + [markdown] colab_type="text" id="S-RmBfXKiOaQ"
# All of the downloaded data are stored in a [`LightCurveCollection`](https://docs.lightkurve.org/api/lightkurve.collections.LightCurveCollection.html#lightkurve.collections.LightCurveCollection). This object acts as a wrapper for 18 separate [`KeplerLightCurve`](http://docs.lightkurve.org/api/lightkurve.lightcurve.KeplerLightCurve.html) objects, listed above.
#
# We can access the [`KeplerLightCurve`](http://docs.lightkurve.org/api/lightkurve.lightcurve.KeplerLightCurve.html) objects and interact with them as usual through the [`LightCurveCollection`](https://docs.lightkurve.org/api/lightkurve.collections.LightCurveCollection.html#lightkurve.collections.LightCurveCollection).
# + colab={"base_uri": "https://localhost:8080/", "height": 548} colab_type="code" executionInfo={"elapsed": 42488, "status": "ok", "timestamp": 1598466306014, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gj8sjdnDeqdejfe7OoouYPIclAQV0KSTpsU469Jyeo=s64", "userId": "05704237875861987058"}, "user_tz": 420} id="pEZ1bh8OjNYg" outputId="3d05fdd6-de96-469a-8ba5-66dadfc83bcb"
lc_Q4 = lc_collection[4]
lc_Q4
# + colab={"base_uri": "https://localhost:8080/", "height": 387} colab_type="code" executionInfo={"elapsed": 43310, "status": "ok", "timestamp": 1598466306853, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gj8sjdnDeqdejfe7OoouYPIclAQV0KSTpsU469Jyeo=s64", "userId": "05704237875861987058"}, "user_tz": 420} id="w-KTcjk5jR3K" outputId="0bd635f5-f033-426e-9226-c9ebeed01da2"
lc_Q4.plot();
# + [markdown] colab_type="text" id="xEQSm6E4kDRS"
# #### Note:
# The example given above also works for downloading target pixel files (TPFs). This will produce a [`TargetPixelFileCollection`](https://docs.lightkurve.org/api/lightkurve.collections.TargetPixelFileCollection.html#lightkurve.collections.TargetPixelFileCollection) object instead.
# + [markdown] colab_type="text" id="jghJAa5ckPLW"
# ## 3. Investigating the Data
# + [markdown] colab_type="text" id="vf6NDWeqk-4g"
# Let's first have a look at how these observations differ from one another. We can plot the simple aperture photometry (SAP) flux of all of the observations in the [`LightCurveCollection`](https://docs.lightkurve.org/api/lightkurve.collections.LightCurveCollection.html#lightkurve.collections.LightCurveCollection) to see how they compare.
# + colab={"base_uri": "https://localhost:8080/", "height": 387} colab_type="code" executionInfo={"elapsed": 43864, "status": "ok", "timestamp": 1598466307447, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gj8sjdnDeqdejfe7OoouYPIclAQV0KSTpsU469Jyeo=s64", "userId": "05704237875861987058"}, "user_tz": 420} id="WB55sn5z7myH" outputId="b8d1fcfc-be06-49fa-e41b-756bd7e44611"
ax = lc_collection[0].plot(column='sap_flux', label=None)
for lc in lc_collection[1:]:
lc.plot(ax=ax, column='sap_flux', label=None)
# + [markdown] colab_type="text" id="mzHYwsU1mEZN"
# In the figure above, each quarter of data looks strikingly different, with global patterns repeating every four quarters as *Kepler* has made a full rotation.
#
# The change in flux within each quarter is in part driven by changes in the telescope focus, which are caused by changes in the temperature of *Kepler*'s components as the spacecraft orbits the Sun. The changes are also caused by an effect called *differential velocity aberration* (DVA), which causes stars to drift over the course of a quarter, depending on their distance from the center of *Kepler*'s field of view.
#
# While the figure above looks messy, all the systematic effects mentioned above are well understood, and have been detrended in the PDCSAP flux. For a more detailed overview, see the [*Kepler* Data Characteristics Handbook](https://archive.stsci.edu/files/live/sites/mast/files/home/missions-and-data/kepler/_documents/Data_Characteristics.pdf), specifically: *Section 5. Ongoing Phenomena*.
# + [markdown] colab_type="text" id="rghAhfNzqaR_"
# ## 4. Normalizing a Light Curve
# + [markdown] colab_type="text" id="cof4vrNiobH9"
# If we want to see the actual variation of the targeted object over the course of these observations, the plot above isn't very useful to us. It is also not useful to have flux expressed in physical units, because it is affected by the observing conditions such as telescope focus and pointing (see above).
#
# Instead, it is a common practice to normalize light curves by dividing by their median value. This means that the median of the newly normalized light curve will be equal to 1, and that the relative size of signals in the observation (such as transits) will be maintained.
#
# A normalization can be performed using the [`normalize()`](https://docs.lightkurve.org/api/lightkurve.lightcurve.KeplerLightCurve.html#lightkurve.lightcurve.KeplerLightCurve.normalize) method of a [`KeplerLightCurve`](https://docs.lightkurve.org/api/lightkurve.lightcurve.KeplerLightCurve.html), for example:
# + colab={"base_uri": "https://localhost:8080/", "height": 387} colab_type="code" executionInfo={"elapsed": 44414, "status": "ok", "timestamp": 1598466308014, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gj8sjdnDeqdejfe7OoouYPIclAQV0KST<KEY>o=s64", "userId": "05704237875861987058"}, "user_tz": 420} id="qN3F6nxilRVd" outputId="3d72e3f5-6de2-4e90-cb58-46c348459975"
lc_collection[4].normalize().plot();
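# Under the hood, median normalization is just an element-wise division by the median flux. A minimal sketch with plain NumPy (assuming `flux` is a bare array; the real `normalize()` method also handles units and propagates uncertainties):

```python
import numpy as np

# simulated flux with a 1% transit dip
flux = np.array([100.0, 100.0, 99.0, 100.0, 100.0])

# divide by the median so the baseline sits at 1
normalized = flux / np.median(flux)
print(normalized)  # baseline at 1.0, transit at 0.99
```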
# + [markdown] colab_type="text" id="Nro9qhV3lT3f"
# In the figure above, we have plotted the normalized PDCSAP flux for Quarter 4. The median normalized flux is at 1, and the transit depths lie around 0.991, indicating a 0.9% dip in brightness due to the planet transiting the star.
# + [markdown] colab_type="text" id="Zkip4rM8paC4"
# The [`LightCurveCollection`](https://docs.lightkurve.org/api/lightkurve.collections.LightCurveCollection.html) also has a `plot()` method. We can use it to plot the PDCSAP flux. The method automatically normalizes the flux in the same way we did for a single quarter above.
# + colab={"base_uri": "https://localhost:8080/", "height": 387} colab_type="code" executionInfo={"elapsed": 45362, "status": "ok", "timestamp": 1598466308978, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gj8sjdnDeqdejfe7OoouYPIclAQV0KSTpsU469Jyeo=s64", "userId": "05704237875861987058"}, "user_tz": 420} id="7oyn8KpApjJf" outputId="129b53d3-6172-44a1-8590-6c24aa7cb906"
lc_collection.plot();
# + [markdown] colab_type="text" id="ie7A1JRFpkEo"
# As you can see above, because we have normalized the data, all of the observations form a single consistent light curve.
# + [markdown] colab_type="text" id="oU4wvbg6pqTc"
# ## 5. Combining Multiple Observations into a Single Light Curve
# + [markdown] colab_type="text" id="28XuRx21qU21"
# Finally, we can combine these different light curves into a single [`KeplerLightCurve`](https://docs.lightkurve.org/api/lightkurve.lightcurve.KeplerLightCurve.html) object. This is done using the [`stitch()`](https://docs.lightkurve.org/api/lightkurve.collections.LightCurveCollection.html#lightkurve.collections.LightCurveCollection.stitch) method. This method concatenates all quarters in our [`LightCurveCollection`](https://docs.lightkurve.org/api/lightkurve.collections.LightCurveCollection.html#lightkurve.collections.LightCurveCollection) together, and normalizes them at the same time, in the manner we saw above.
# + colab={"base_uri": "https://localhost:8080/", "height": 548} colab_type="code" executionInfo={"elapsed": 45353, "status": "ok", "timestamp": 1598466308986, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gj8sjdnDeqdejfe7OoouYPIclAQV0KSTpsU469Jyeo=s64", "userId": "05704237875861987058"}, "user_tz": 420} id="tdt-OSTVrgVL" outputId="a6aa34cc-eefe-4b35-d5b3-b8d4e62ff49f"
lc_stitched = lc_collection.stitch()
lc_stitched
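# Conceptually, `stitch()` is per-quarter median normalization followed by concatenation. A rough NumPy sketch of that idea (ignoring time columns, units, and uncertainties, which the real method preserves):

```python
import numpy as np

# two "quarters" with different flux baselines (e.g. different telescope focus)
q1 = np.array([200.0, 202.0, 198.0])
q2 = np.array([50.0, 49.5, 50.5])

# normalize each quarter by its own median, then concatenate into one series
stitched = np.concatenate([q / np.median(q) for q in (q1, q2)])
print(stitched)  # both segments now share a baseline of 1.0
```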
# + [markdown] colab_type="text" id="2m4t5VsDriLk"
# This returns a single [`KeplerLightCurve`](https://docs.lightkurve.org/api/lightkurve.lightcurve.KeplerLightCurve.html)! It is in all ways identical to a [`KeplerLightCurve`](https://docs.lightkurve.org/api/lightkurve.lightcurve.KeplerLightCurve.html) of a single quarter, just longer. We can plot it the usual way.
# + colab={"base_uri": "https://localhost:8080/", "height": 387} colab_type="code" executionInfo={"elapsed": 46378, "status": "ok", "timestamp": 1598466310026, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gj8sjdnDeqdejfe7OoouYPIclAQV0KSTpsU469Jyeo=s64", "userId": "05704237875861987058"}, "user_tz": 420} id="tYzdKj-7r6N6" outputId="9d2ce19e-0548-4d35-e3a8-0a86bcbe5f01"
lc_stitched.plot();
# + [markdown] colab_type="text" id="U7uOGZXJ62Qm"
# In this final normalized light curve, the interesting observational features of the star are more clear. Specifically: repeating transits that can be used to [characterize planets](https://docs.lightkurve.org/tutorials/02-recover-a-planet.html) and a noisy stellar flux that can be used to study brightness variability through [asteroseismology](http://docs.lightkurve.org/tutorials/02-asteroseismology.html).
# + [markdown] colab_type="text" id="oz2KOdF5LYJm"
# Normalizing individual *Kepler* Quarters before combining them to form a single light curve isn't the only way to make sure different quarters are consistent with one another. For a breakdown of other available methods and their benefits, see *Section 6. Stitching Kepler Quarters Together* in [Kinemuchi et al. 2012](https://arxiv.org/pdf/1207.3093.pdf).
# + [markdown] colab_type="text" id="lhbv9ZKRPmMY"
# ## About this Notebook
# + [markdown] colab_type="text" id="nU-5JtvpPmMZ"
# **Authors:** <NAME> (<EMAIL>), <NAME>
#
# **Updated On**: 2020-09-15
# + [markdown] colab_type="text" id="ZANsIso_B_si"
# # Citing Lightkurve and Astropy
#
# If you use `lightkurve` or `astropy` for published research, please cite the authors. Click the buttons below to copy BibTeX entries to your clipboard.
# + colab={"base_uri": "https://localhost:8080/", "height": 85} colab_type="code" executionInfo={"elapsed": 46365, "status": "ok", "timestamp": 1598466310031, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gj8sjdnDeqdejfe7OoouYPIclAQV0KSTpsU469Jyeo=s64", "userId": "05704237875861987058"}, "user_tz": 420} id="7vUtrWVjnlY7" outputId="05c053a6-5366-47d2-a2fd-2d0a92a11eb0"
lk.show_citation_instructions()
# + [markdown] colab_type="text" id="CNf3nI0trtA-"
# <img style="float: right;" src="https://raw.githubusercontent.com/spacetelescope/notebooks/master/assets/stsci_pri_combo_mark_horizonal_white_bkgd.png" alt="Space Telescope Logo" width="200px"/>
#
# Source notebook: notebooks/MAST/Kepler/kepler_combining_multiple_quarters/kepler_combining_multiple_quarters.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <small><small><i>
# All the IPython Notebooks in this lecture series by Dr. <NAME> are available @ **[GitHub](https://github.com/milaan9/02_Python_Datatypes/tree/main/002_Python_String_Methods)**
# </i></small></small>
# # Python String `split()`
#
# The string **`split()`** method breaks up a string at the specified separator and returns a list of strings.
#
# **Syntax**:
#
# ```python
# str.split([separator [, maxsplit]])
# ```
# + [markdown] heading_collapsed=true
# ## `split()` Parameters
#
# The **`split()`** method takes a maximum of 2 parameters:
#
# * **`separator`** (optional) - It is a delimiter. The string splits at the specified **`separator`**.
# If the **`separator`** is not specified, the string splits at any whitespace (space, newline, etc.), and runs of consecutive whitespace count as a single separator.
#
# * **`maxsplit`** (optional) - The **`maxsplit`** defines the maximum number of splits.
# The default value of **`maxsplit`** is -1, meaning there is no limit on the number of splits.
# -
# ## Return Value from `split()`
#
# The **`split()`** method breaks the string at the **`separator`** and returns a list of strings.
# +
# Example 1: How split() works in Python?
text= 'Love thy neighbor'
# splits at space
print(text.split())
grocery = 'Milk, Chicken, Bread'
# splits at ','
print(grocery.split(', '))
# ':' does not occur in the string, so the whole string is returned as a one-element list
print(grocery.split(':'))
# +
# Example 2: How split() works when maxsplit is specified?
grocery = 'Milk, Chicken, Bread, Butter'
# maxsplit: 2
print(grocery.split(', ', 2))
# maxsplit: 1
print(grocery.split(', ', 1))
# maxsplit: 5
print(grocery.split(', ', 5))
# maxsplit: 0
print(grocery.split(', ', 0))
# -
# >**Note:** If **`maxsplit`** is specified, the list will have the maximum of **`maxsplit+1`** items.
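# The note above is easy to check directly, looping over several `maxsplit` values:

```python
grocery = 'Milk, Chicken, Bread, Butter'

for maxsplit in range(4):
    parts = grocery.split(', ', maxsplit)
    # the result never has more than maxsplit + 1 items
    assert len(parts) <= maxsplit + 1

print(grocery.split(', ', 2))  # ['Milk', 'Chicken', 'Bread, Butter']
```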
# Source notebook: 002_Python_String_Methods/038_Python_String_split().ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.8.12 ('base')
# language: python
# name: python3
# ---
# # Crime Analysis of India
# **Project Title**: Crime Analysis of India
#
# **Aim of the Project**: Analyze crime in India across different aspects and sectors.
#
# **Dataset**: https://www.kaggle.com/rajanand/crime-in-india
#
# **Libraries Required:** ```Pandas, Numpy, Seaborn, Matplotlib, Plotly``` and more...
#
# <hr>
#
# ## Importing Libraries
# +
import os
from zipfile import ZipFile
from io import BytesIO
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib inline
import seaborn as sns
import plotly.express as px
from matplotlib.pyplot import figure
# -
archive = ZipFile("../Dataset/archive.zip")  # avoid shadowing the built-in `zip`
data = archive.open(
    "crime/crime/01_District_wise_crimes_committed_IPC_2001_2012.csv").read()
data = pd.read_csv(BytesIO(data))
data
data.shape
data.describe()
data.info()
# ## Performing EDA
#
total = data[(data["DISTRICT"] == "TOTAL")]
total
# ### Here, Tamil Nadu (TN) is taken into consideration
#
# **P.S**: Any state can be considered
#
tn = total[(total["STATE/UT"] == "TAMIL NADU")]
tn.head()
# #### Murder VS Attempt to Murder
#
# +
fig, axes = plt.subplots(1, 2, figsize=(15, 5), sharey=True)
fig.suptitle("ATTEMPT TO MURDER VS MURDER")
# Attempt to Murder
sns.pointplot(data=tn, x="YEAR", y="ATTEMPT TO MURDER", ax=axes[0])
axes[0].set_title("ATTEMPT TO MURDER")
# Murder
sns.pointplot(data=tn, x="YEAR", y="MURDER", ax=axes[1], color="red")
axes[1].set_title("MURDER")
# plt.savefig("../images/plot-1.png")
plt.show()
# +
fig, axes = plt.subplots(1, 3, figsize=(18, 10), sharey=True)
fig.suptitle("KIDNAPPING")
# Kidnapping
sns.histplot(data=tn, x="YEAR", y="KIDNAPPING & ABDUCTION", ax=axes[0])
axes[0].set_title("KIDNAPPING & ABDUCTION")
# Kidnapping and Abduction of Women and Girls
sns.histplot(data=tn,
x="YEAR",
y="KIDNAPPING AND ABDUCTION OF WOMEN AND GIRLS",
ax=axes[1],
color="red")
axes[1].set_title("KIDNAPPING AND ABDUCTION OF WOMEN AND GIRLS")
# Kidnapping and Abduction of Others
sns.histplot(data=tn,
x="YEAR",
y="KIDNAPPING AND ABDUCTION OF OTHERS",
ax=axes[2],
color="green")
axes[2].set_title("KIDNAPPING AND ABDUCTION OF OTHERS")
# plt.savefig("../images/plot-2.png")
plt.show()
# -
# #### Total Rape Cases in Tamilnadu
# +
fig = px.bar(tn,
x="YEAR",
y="RAPE",
color_discrete_sequence=["blue"],
title="Rape crime in Tamil nadu")
fig.layout.template = "plotly_dark"
fig.show()
# -
# ### IPC Crimes [2001 - 2012]
total_crime = pd.DataFrame(
total.groupby(["STATE/UT"])["TOTAL IPC CRIMES"].sum().reset_index())
total_crime_sorted = total_crime.sort_values("TOTAL IPC CRIMES",
ascending=False)[:10]
fig = px.bar(data_frame=total_crime_sorted,
x="TOTAL IPC CRIMES",
y="STATE/UT",
orientation='h',
color_discrete_sequence=["red"])
fig.update_layout(yaxis=dict(autorange="reversed"))
fig.update_layout(
title="Top 10 States with highest number of IPC Crimes")
fig.layout.template = "plotly_dark"
fig.show()
# ### Overall Report
archive = ZipFile("../Dataset/archive.zip")  # reopen; avoid shadowing the built-in `zip`
ipc14 = archive.open(
    "crime/crime/01_District_wise_crimes_committed_IPC_2014.csv").read()
ipc14 = pd.read_csv(BytesIO(ipc14))
grouping_state_crimes = ipc14.groupby('States/UTs').sum()
grouping_state_crimes
all_india_murder_cases = grouping_state_crimes['Murder']
all_india_rape_cases = grouping_state_crimes['Rape']
labels = [
    "MURDER CASES", "EXTORTION", "RAPE CASES", "RIOTS", "DOWRY ISSUES",
    "KIDNAPPING"
]
# national totals for each category, hardcoded from the 2014 dataset
y = [68268, 16420, 77356, 132412, 16916, 156824]
fig1, ax1 = plt.subplots(figsize=(15, 6))
ax1.pie(y, labels=labels, autopct='%1.1f%%', startangle=90, shadow=True)
ax1.set_title("Overall Crimes in India")
my_circle = plt.Circle((0, 0), 0.7, color="white")
p = plt.gcf()
p.gca().add_artist(my_circle)
plt.show()
# plt.savefig("../images/plot-4.png")
# Source notebook: Crime Analysis of India/model/crime-analysis-india.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/Tarunvats9068/oops_with_python/blob/main/OOP_TASK_5.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="WefFaUYaWEDv"
# **Question 1** - OOP Task 5 - **TARUN SHARMA** **20BCS133**
# + id="5dKMk4b5SAWG"
from abc import ABC, abstractmethod
from math import pi

class Shape(ABC):
    @abstractmethod
    def Area(self):
        pass

class Circle(Shape):
    def __init__(self, radius):
        self.radius = radius

    def Area(self):
        # area of a circle is pi * r**2 (2*pi*r**2 was twice the area)
        a = pi * self.radius * self.radius
        return a
class Square(Shape):
def __init__(self,side):
self.side = side
def Area(self):
a = self.side*self.side
return a
class Rectangle(Shape):
def __init__(self,length,breath):
self.length = length
self.breath = breath
def Area(self):
a = self.length*self.breath
return a
rect = Rectangle(5,6)
squa = Square(4)
cir = Circle(3)
print(rect.Area())
print(squa.Area())
print(cir.Area())
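# Python only enforces abstractness when a method carries the `@abstractmethod` decorator; without it, the base class can be instantiated directly. A minimal self-contained sketch of that enforcement (hypothetical `Shape2`, kept separate from the task code):

```python
from abc import ABC, abstractmethod

class Shape2(ABC):
    @abstractmethod
    def Area(self):
        ...

# an ABC with an unimplemented @abstractmethod cannot be instantiated
try:
    Shape2()
except TypeError as e:
    print("cannot instantiate:", e)
```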
# + [markdown] id="X92A3HekWZRq"
# **QUESTION 2**
# + id="Bcq03UMUWd78"
class Travel:
def __init__(self,no_of_passengers,distance,mode):
self.__no_of_passengers = no_of_passengers
self.distance = distance
self.mode = mode
    def No_of_passengers(self):
        print(self.__no_of_passengers)  # name mangling resolves inside the class
def Distance(self):
print(self.distance)
def Mode(self):
print(self.mode)
class Bus(Travel):
def __init__(self,no_of_passengers,distance,mode):
Travel.__init__(self,no_of_passengers,distance,mode)
def trip_cost(self):
total = self._Travel__no_of_passengers*100
return total
class Train(Travel):
def __init__(self,no_of_passengers,distance,mode):
Travel.__init__(self,no_of_passengers,distance,mode)
def trip_cost(self):
total = self._Travel__no_of_passengers*60
return total
b1 = Bus(4,12,"bus")
print(b1.trip_cost())
t1 = Train(5,6,"train")
print(t1.trip_cost())
# + [markdown] id="55ovqehTbfxo"
# **QUESTION 3**
# + id="Szsow34vbkL5"
class Car:
def __init__(self,model_no):
self.model_no = model_no
def Swap_model(self,c1):
temp = c1.model_no
c1.model_no = self.model_no
self.model_no = temp
c1 = Car("M2571")
c2 = Car("BM984")
c1.Swap_model(c2)
print(c1.model_no)
print(c2.model_no)
# Source notebook: OOP_TASK_5.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Control Flow Statements
# The key thing to note about Python's control flow statements and program structure is that it uses _indentation_ to mark blocks. Hence the amount of white space (space or tab characters) at the start of a line is very important. This generally helps to make code more readable but can catch out new users of python.
# ## Conditionals
#
# ### if statement
#
#
# ```python
# if some_condition:
# code block
# ```
#
#
#
# Only execute the code if some condition is satisfied
x = 12
if x > 10:
print("Hello")
# ### if-else
#
# ```python
# if some_condition:
# algorithm 1
# else:
# algorithm 2
# ```
#
#
#
# As above but if the condition is False, then execute the second algorithm
x = 12
if 10 < x < 11:
print("hello")
else:
print("world")
# ### else if
# ```python
# if some_condition:
# algorithm
# elif some_condition:
# algorithm
# else:
# algorithm
# ```
#
# Any number of conditions can be chained to find which part we want to execute.
x = 10
y = 12
if x > y:
print("x>y")
elif x < y:
print("x<y")
else:
print("x=y")
x = 10
y = 12
if x > y:
print( "x>y")
elif x < y:
print( "x<y")
if x==10:
print ("x=10")
else:
print ("invalid")
else:
print ("x=y")
# ## Loops
#
# ### For
# ```python
# for variable in something:
# algorithm
# ```
#
# The "something" can be any of the collections discussed previously (lists, sets, dictionaries). The variable is assigned each element from the collection in turn and the algorithm executed once with that value.
#
# When looping over integers the `range()` function is useful which generates a range of integers:
#
# * range(n) = 0, 1, ..., n-1
# * range(m,n)= m, m+1, ..., n-1
# * range(m,n,s)= m, m+s, m+2s, ..., m + ((n-m-1)//s) * s
#
# In mathematical terms, `range(a,b)`$=[a,b)\subset\mathbb Z$.
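# The closed-form expression for the last element of `range(m,n,s)` given above can be checked directly:

```python
# the last value generated by range(m, n, s) is m + ((n - m - 1) // s) * s
m, n, s = 2, 10, 3
values = list(range(m, n, s))
print(values)  # [2, 5, 8]
assert values[-1] == m + ((n - m - 1) // s) * s
```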
for ch in 'abc':
print(ch)
total = 0
for i in range(5):
total += i
for i,j in [(1,2),(3,1)]:
total += i**j
print("total =",total)
# In the above example, `i` iterates over the values 0, 1, 2, 3, 4. Each time it takes one value and executes the algorithm inside the loop. It is also possible to iterate over a nested list, as illustrated below.
list_of_lists = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
for list1 in list_of_lists:
print(list1)
# ### While
# ```python
# while some_condition:
# algorithm
# ```
#
# Repeatedly execute the algorithm until the condition fails (or exit via a break statement as shown below)
i = 1
while i < 3:
print(i ** 2)
i = i+1
print('Bye')
# ### Break
# The `break` keyword is used to abandon execution of a loop immediately. This statement can only be used in **for** and **while** loops.
for i in range(100):
print(i,end="...")
if i>=7:
break
print("completed.")
# ### Continue
# The `continue` statement skips the remainder of a loop and starts the next iteration. Again this can only be used in a **while** or **for** loop. It is typically only used within an **if** statement (otherwise the remainder of the loop would never be executed).
for i in range(10):
if i>4:
print("Ignored",i)
continue
        # this statement is not reached if i > 4
print("Processed",i)
# ### Else statements on loops
# Sometimes we want to know whether a loop exited 'normally' or via a break statement. This can be achieved with an `else:` clause on the loop, which only executes if there was no break.
count = 0
while count < 10:
count += 1
if count % 2 == 0: # even number
count += 2
continue
elif 5 < count < 9:
break # abnormal exit if we get here!
print("count =",count)
else: # while-else
print("Normal exit with",count)
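# The `else:` clause also works on `for` loops, which is handy for search patterns:

```python
# search a list with for/else: the else block runs only when no break occurred
primes = [2, 3, 5, 7]
target = 6
for p in primes:
    if p == target:
        print("found", p)
        break
else:
    print(target, "not found")  # prints: 6 not found
```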
# ## Catching exceptions
# Sometimes it is desirable to deal with errors without stopping the whole program. This can be achieved using a **try** statement. Apart from dealing with system errors, it also allows aborting from somewhere deep down in nested execution. It is possible to attach multiple error handlers depending on the type of the exception:
#
# ```python
# try:
# code
# except <Exception Type> as <variable name>:
# # deal with error of this type
# except:
# # deal with any error
# finally:
# # execute irrespective of whether an exception occurred or not
# ```
try:
count=0
while True:
while True:
while True:
print("Looping")
count = count + 1
if count > 3:
raise Exception("abort") # exit every loop or function
if count > 4:
raise StopIteration("I'm bored") # built in exception type
except StopIteration as e:
print("Stopped iteration:",e)
except Exception as e: # this is where we go when an exception is raised
print("Caught exception:",e)
finally:
print("All done")
try:
for i in [2,1.5,0.0,3]:
inverse = 1.0/i
except Exception as e:
print("Cannot calculate inverse because:", e)
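# Because the `try` above wraps the whole loop, iteration stops at the first bad value. Placing the `try` inside the loop instead lets it continue past the error:

```python
for i in [2, 1.5, 0.0, 3]:
    try:
        inverse = 1.0 / i
        print(i, "->", inverse)
    except ZeroDivisionError as e:
        print("skipping", i, "because:", e)
```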
# Source notebook: 05_control_flow.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# + [markdown] deletable=true editable=true
# #Tutorial Brief
#
# **Video Tutorial:** https://www.youtube.com/user/roshanRush
#
# Jupyter has an implementation of markdown language that can be used in markdown cells to create formatted text and media documentation in your notebook. LaTeX is also implemented to create high quality mathematical typesetting.
#
# #Markdown
#
# ##Headings
#
# #H1
# ##H2
# ###H3
# ####H4
# #####H5
# ######H6
#
# **Code:**
# ```markdown
# #H1
# ##H2
# ###H3
# ####H4
# #####H5
# ######H6
# ```
#
# ##Alternative Headings
#
# Heading 1
# =========
#
# Heading 2
# ----------
#
# **Code:**
# ```markdown
# Heading 1
# =========
#
# Heading 2
# ---------
# ```
#
# ##Font Styles
#
# **Bold Font** or __Bold Font__
#
# *Italic* or _Italic Font_
#
# ~~Scratched Text~~
#
# Markdown doesn't support underline. but you can do that using HTML <u>Text</u>
#
# **Code:**
# ```markdown
# **Bold Font** or __Bold Font__
#
# *Italic* or _Italic Font_
#
# ~~Scratched Text~~
#
# Markdown doesn't support underline. but you can do that using HTML <u>Text</u>
# ```
#
# ##Lists
#
# - item
# - item
# - subitem
# - subitem
# - item
#
#
# 1. item
# 2. item
# 1. sub item
# 2. sub item
# 3. item
#
# **Code:**
# ```markdown
# - item
# - item
# - subitem
# - subitem
# - item
#
#
# 1. item
# 2. item
# 1. sub item
# 2. sub item
# 3. item
# ```
#
# ##Links
#
# http://www.github.com/
#
# [Github](http://www.github.com/)
#
#
# **Code:**
# ```
# http://www.github.com/
#
# [Github](http://www.github.com/)
# ```
#
# ##Images
# 
#
# **Code:**
# ```markdown
# 
# ```
#
# ##Quotes
#
# > Why, oh why, Javascript??? Wars, famine, planetary destruction... I guess as a species, we deserve this abomination...
# >
# > [<NAME>](https://twitter.com/fperez_org)
#
# **Code:**
# ```
# > Why, oh why, Javascript??? Wars, famine, planetary destruction... I guess as a species, we deserve this abomination...
# >
# > [<NAME>](https://twitter.com/fperez_org)
# ```
#
# ##Horizontal Line
#
# ---
#
# **Code:**
# ```markdown
# ---
# ```
#
# ##Tables
#
# | Tables | Are | Cool |
# | ------------- |:-------------:| -----:|
# | col 3 is | right-aligned | 1600 |
# | col 2 is | centered | 12 |
# | zebra stripes | are neat | 1 |
#
# **Code:**
#
# ```
# | Tables | Are | Cool |
# | ------------- |:-------------:| -----:|
# | col 3 is | right-aligned | 1600 |
# | col 2 is | centered | 12 |
# | zebra stripes | are neat | 1 |
# ```
#
# ##HTML
#
# <b>You</b> can <i>render</i> almost any <span style="color:red;">HTML</span> code you <u>like</u>.
#
# **Code:**
#
# ```
# <b>You</b> can <i>render</i> almost any <span style="color:red;">HTML</span> code you <u>like</u>.
# ```
#
# ##Code
#
# You can add in line code like this `import numpy as np`
#
# Or block code:
#
# Python Code:
# ```python
# x = 5
# print "%.2f" % x
# ```
#
# Java Script Code:
# ```javascript
# x = 5
# alert(x);
# ```
#
# **Code:**
#
# <pre>
# Python Code:
# ```python
# x = 5
# print "%.2f" % x
# ```
#
# Java Script Code:
# ```javascript
# x = 5
# alert(x);
# ```
# </pre>
# + [markdown] deletable=true editable=true
# #LaTeX
#
# **References:**
#
# - [LaTeX Wiki](http://en.wikibooks.org/wiki/LaTeX/Mathematics)
# - [Duke University, Department of Statistical Science](https://stat.duke.edu/resources/computing/latex)
# - [Equation Sheet](http://www.equationsheet.com/)
#
# LaTeX is a large typesetting system for scientific documentation, with symbols for mathematics, statistics, physics, quantum mechanics, and computer science. It is beyond the scope of this tutorial to cover everything, but we will go over the basics of writing high quality mathematical equations using LaTeX.
#
# You can use LaTeX inline like this $y = x^2$ or in a block like this $$y = x^2$$
#
# **Code:**
# ```markdown
# You can use LaTeX inline like this $y = x^2$ or in a block like this $$y = x^2$$
# ```
#
# ##Operators:
#
# - Add:
# - $x + y$
# - Subtract:
# - $x - y$
# - Multiply
# - $x * y$
# - $x \times y$
# - $x . y$
# - Divide
# - $x / y$
# - $x \div y$
# - $\frac{x}{y}$
#
# **Code:**
# ```markdown
# - Add:
# - $x + y$
# - Subtract:
# - $x - y$
# - Multiply
# - $x * y$
# - $x \times y$
# - $x . y$
# - Divide
# - $x / y$
# - $x \div y$
# - $\frac{x}{y}$
# ```
#
# ##Relations
#
# - $\pi \approx 3.14159$
# - ${1 \over 0} \neq \infty$
# - $0 < x > 1$
# - $0 \leq x \geq 1$
#
# **Code:**
# ```
# - $\pi \approx 3.14159$
# - ${1 \over 0} \neq \infty$
# - $0 < x > 1$
# - $0 \leq x \geq 1$
# ```
#
# ##Fractions
#
# - $^1/_2$
# - $\frac{1}{2x}$
# - ${3 \over 4}$
#
#
# **Code:**
# ```
# - $^1/_2$
# - $\frac{1}{2x}$
# - ${3 \over 4}$
# ```
#
# ##Greek Alphabet
#
# | Small Letter | Capital Letter | Alternative |
# | --------------------- | -------------------- | --------------------------- |
# | $\alpha$ `\alpha` | $A$ `A` | |
# | $\beta$ `\beta` | $B$ `B` | |
# | $\gamma$ `\gamma` | $\Gamma$ `\Gamma` | |
# | $\delta$ `\delta` | $\Delta$ `\Delta` | |
# | $\epsilon$ `\epsilon` | $E$ `E` | $\varepsilon$ `\varepsilon` |
# | $\zeta$ `\zeta` | $Z$ `Z` | |
# | $\eta$ `\eta` | $H$ `H` | |
# | $\theta$ `\theta` | $\Theta$ `\Theta` | $\vartheta$ `\vartheta` |
# | $\iota$ `\iota` | $I$ `I` | |
# | $\kappa$ `\kappa` | $K$ `K` | $\varkappa$ `\varkappa` |
# | $\lambda$ `\lambda` | $\Lambda$ `\Lambda` | |
# | $\mu$ `\mu` | $M$ `M` | |
# | $\nu$ `\nu` | $N$ `N` | |
# | $\xi$ `\xi` | $\Xi$ `\Xi` | |
# | $\omicron$ `\omicron` | $O$ `O` | |
# | $\pi$ `\pi` | $\Pi$ `\Pi` | $\varpi$ `\varpi` |
# | $\rho$ `\rho` | $P$ `P` | $\varrho$ `\varrho` |
# | $\sigma$ `\sigma` | $\Sigma$ `\Sigma` | $\varsigma$ `\varsigma` |
# | $\tau$ `\tau` | $T$ `T` | |
# | $\upsilon$ `\upsilon` | $\Upsilon$ `\Upsilon`| |
# | $\phi$ `\phi` | $\Phi$ `\Phi` | $\varphi$ `\varphi` |
# | $\chi$ `\chi` | $X$ `X` | |
# | $\psi$ `\psi` | $\Psi$ `\Psi` | |
# | $\omega$ `\omega` | $\Omega$ `\Omega` | |
#
# ##Power & Index
#
# You can add power using the carrot `^` symbol. If you have more than one character you have to enclose them in a curly brackets.
#
# $$f(x) = x^2 - x^{1 \over \pi}$$
#
# For index you can use the underscore symbol:
#
# $$f(X,n) = X_n + X_{n-1}$$
#
# **Code:**
# ```markdown
# $$f(x) = x^2 - x^{1 \over \pi}$$
# $$f(X,n) = X_n + X_{n-1}$$
#
# ```
#
# ##Roots & Log
#
# You can express a square root in LaTeX using the `\sqrt` and to change the level of the root you can use `\sqrt[n]` where `n` is the level of the root.
#
# $$f(x) = \sqrt[3]{2x} + \sqrt{x-2}$$
#
# To represent a log use `\log_{base}` where `base` is the base of the logarithmic term.
#
# $$\log_x x = 1$$
#
# **Code:**
# ```markdown
# $$f(x) = \sqrt[3]{2x} + \sqrt{x-2}$$
# $$\log_x x = 1$$
# ```
#
# ##Sums & Products
#
# You can represent a sum with a sigma using `\sum\limits_{a}^{b}` where a and b are the lower and higher limits of the sum.
#
# $$\sum\limits_{x=1}^{\infty} {1 \over 2^x} = 1$$
#
# Also you can represent a product with `\prod\limits_{a}^{b}` where a and b are the lower and higher limits.
#
# $$\prod\limits_{i=1}^{n} x_i - 1$$
#
# **Code:**
# ```
# $$\sum\limits_{x=1}^{\infty} {1 \over 2^x} = 1$$
# $$\prod\limits_{i=1}^{n} x_i - 1$$
# ```
#
# ##Statistics
#
# To represent basic concepts in statistics about a sample space `S`, you can represent a maximum:
#
# $$max(S) = \max\limits_{i: S_i \in S} S_i$$
#
# In the same way you can get the minimum:
#
# $$min(S) = \min\limits_{i: S_i \in S} S_i$$
#
# To represent a [binomial coefficient](http://en.wikipedia.org/wiki/Binomial_coefficient) with n choose k, use the following:
#
# $$\frac{n!}{k!(n-k)!} = {n \choose k}$$
#
# **Code:**
# ```
# $$max(S) = \max\limits_{i: S_i \in S} S_i$$
# $$min(S) = \min\limits_{i: S_i \in S} S_i$$
# $$\frac{n!}{k!(n-k)!} = {n \choose k}$$
# ```
#
# ##Calculus
#
# Limits are represented using `\lim\limits_{x \to a}` as `x` approaches `a`.
#
# $$\lim\limits_{x \to 0^+} {1 \over x} = \infty$$
#
# For integral equations use `\int\limits_{a}^{b}` where `a` and `b` are the lower and higher limits.
#
# $$\int\limits_a^b 2x \, dx$$
#
#
# **Code:**
# ```markdown
# $$\lim\limits_{x \to 0^+} {1 \over x} = \infty$$
# $$\int\limits_a^b 2x \, dx$$
# ```
#
# ##Function definition over periods
#
# Defining a function that is calculated differently over a number of period can done using LaTeX. There are a few tricks that we will use to do that:
#
# - The large curly bracket `\left\{ ... \right.` Notice if you want to use `(` or `[` you don't have to add a backslash (`\`). You can also place a matching right-side bracket by replacing the `.` after `\right`, like this: `\right\}`
# - Array to hold the definitions in place. it has two columns with left alignment. `\begin{array}{ll} ... \end{array}`
# - Line Breaker `\\`
# - Text alignment box ` \mbox{Text}`
#
# $f(x) =\left\{\begin{array}{ll}0 & \mbox{if } x = 0 \\{1 \over x} & \mbox{if } x \neq 0\end{array}\right.$
#
# **Code:**
# ```
# $f(x) =
# \left\{
# \begin{array}{ll}
# 0 & \mbox{if } x = 0 \\
# {1 \over x} & \mbox{if } x \neq 0
# \end{array}
# \right.$
# ```
#
# **Note:** If you are planning to show your notebook in NBViewer write your latex code in one line. For example you can write the code above like this:
#
# ```
# $f(x) =\left\{\begin{array}{ll}0 & \mbox{if } x = 0 \\{1 \over x} & \mbox{if } x \neq 0\end{array}\right.$
# ```
#
# #Quick Quiz (Normal Distribution)
#
# Try to replicate the [Normal Distribution](http://en.wikipedia.org/wiki/Normal_distribution) formula using LaTeX. If you solve it, leave the LaTeX code in the comments below. $Don't\ cheat$.
#
# $$P(x,\sigma,\mu) = \frac{1}{{\sigma \sqrt {2\pi } }}e^{{-(x - \mu)^2 } / {2\sigma ^2}}$$
#
# Tips to help with the quiz:
#
# - $\mu$ is `\mu`
# - $\sigma$ is `\sigma`
# - $e$ is `e`
# Source notebook: Tutorials/IPython.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import xlwings as xw
import os
try:
aux=os.environ['RPprefix']
except KeyError:
os.environ['RPprefix']='C:\\Users\\Public\\REFPROP'
import ccp
from ccp import State, Q_
import numpy as np
wb = xw.Book('Modelo_2sections.xlsx') # connect to an existing file in the current working directory
FD_sheet=wb.sheets['DataSheet']
TP_sheet=wb.sheets['Test Procedure Data']
### Reading and writing SECTION 1 from the FD sheet
Ps_FD = Q_(FD_sheet.range('T23').value,'bar')
Ts_FD = Q_(FD_sheet.range('T24').value,'degC')
Pd_FD = Q_(FD_sheet.range('T31').value,'bar')
Td_FD = Q_(FD_sheet.range('T32').value,'degC')
if FD_sheet.range('T21').value==None:
V_test=True
    flow_v_FD = Q_(FD_sheet.range('T29').value,'m³/h')
else:
V_test=False
flow_m_FD = Q_(FD_sheet.range('T21').value,'kg/h')
#flow_m_FD = Q_(FD_sheet.range('T21').value,'kg/h')
#flow_v_FD = Q_(FD_sheet.range('T29').value,'m**3/h')
speed_FD = Q_(FD_sheet.range('T38').value,'rpm')
brake_pow1_FD = Q_(FD_sheet.range('T36').value,'kW')
D = Q_(FD_sheet.range('AB132').value,'mm')
b = Q_(FD_sheet.range('AQ132').value,'mm')
GasesFD = FD_sheet.range('B69:B85').value
mol_fracFD = FD_sheet.range('K69:K85').value
fluid_FD={GasesFD[i] : mol_fracFD[i] for i in range(len(GasesFD))}
sucFD=State.define(fluid=fluid_FD , p=Ps_FD , T=Ts_FD)
if V_test:
flow_m_FD=flow_v_FD*sucFD.rho()
FD_sheet['AS34'].value=flow_m_FD.to('kg/h').magnitude
FD_sheet['AQ34'].value='Mass Flow'
FD_sheet['AU34'].value='kg/h'
else:
flow_v_FD=flow_m_FD/sucFD.rho()
    FD_sheet['AS34'].value=flow_v_FD.to('m³/h').magnitude
    FD_sheet['AQ34'].value='Inlet Volume Flow'
    FD_sheet['AU34'].value='m³/h'
dischFD=State.define(fluid=fluid_FD , p=Pd_FD , T=Td_FD)
P_FD=ccp.Point(speed=speed_FD,flow_m=flow_m_FD,suc=sucFD,disch=dischFD)
P_FD_=ccp.Point(speed=speed_FD,flow_m=flow_m_FD*0.001,suc=sucFD,disch=dischFD)
Imp_FD = ccp.Impeller([P_FD,P_FD_],b=b,D=D)
FD_sheet['AS25'].value=Imp_FD._mach(P_FD).magnitude
FD_sheet['AS26'].value=Imp_FD._reynolds(P_FD).magnitude
FD_sheet['AS27'].value=1/P_FD._volume_ratio().magnitude
FD_sheet['AS28'].value=Imp_FD._phi(P_FD).magnitude
FD_sheet['AS29'].value=Imp_FD._psi(P_FD).magnitude
FD_sheet['AS30'].value=Imp_FD._work_input_factor(P_FD).magnitude
FD_sheet['AS32'].value=P_FD._eff_pol_schultz().magnitude
FD_sheet['AS33'].value=P_FD._power_calc().to('kW').magnitude
### Reading and writing SECTION 2 from the FD sheet
SS_config = FD_sheet.range('W18').value
Ps2_FD = Pd_FD*0.995
if SS_config=='IN':
TSS_FD = Q_(FD_sheet.range('W24').value,'degC')
else:
TSS_FD = Td_FD
Pd2_FD = Q_(FD_sheet.range('Z31').value,'bar')
Td2_FD = Q_(FD_sheet.range('Z32').value,'degC')
if FD_sheet.range('W21').value==None:
V_test=True
flowSS_v_FD = Q_(FD_sheet.range('W29').value,'m³/h')
else:
V_test=False
flowSS_m_FD = Q_(FD_sheet.range('W21').value,'kg/h')
brake_pow2_FD = Q_(FD_sheet.range('Z36').value,'kW')
D2 = Q_(FD_sheet.range('AB133').value,'mm')
b2 = Q_(FD_sheet.range('AQ133').value,'mm')
if SS_config=='IN':
GasesFD = FD_sheet.range('B69:B85').value
mol_fracSS_FD = FD_sheet.range('N69:N85').value
fluidSS_FD={GasesFD[i] : mol_fracSS_FD[i] for i in range(len(GasesFD))}
else:
fluidSS_FD=fluid_FD
SS_FD = State.define(fluid=fluidSS_FD , p=Ps2_FD , T=TSS_FD)
if V_test:
flowSS_m_FD=flowSS_v_FD*SS_FD.rho()
FD_sheet['AS36'].value=flowSS_m_FD.to('kg/h').magnitude
FD_sheet['AQ36'].value='SS Mass Flow'
FD_sheet['AU36'].value='kg/h'
else:
flowSS_v_FD=flowSS_m_FD/SS_FD.rho()
FD_sheet['AS36'].value=flowSS_v_FD.to('m³/h').magnitude
FD_sheet['AQ36'].value='SS Volume Flow'
FD_sheet['AU36'].value='m³/h'
if SS_config=='IN':
flow2_m_FD=flow_m_FD+flowSS_m_FD
RSS=flowSS_m_FD/flow2_m_FD
R1=flow_m_FD/flow2_m_FD
fluid2_FD={GasesFD[i] : mol_fracSS_FD[i]*RSS+mol_fracFD[i]*R1 for i in range(len(GasesFD))}
h2_FD=dischFD.h()*R1+SS_FD.h()*RSS
suc2FD=State.define(fluid=fluid2_FD , p=Ps2_FD , h=h2_FD)
disch2FD=State.define(fluid=fluid2_FD , p=Pd2_FD , T=Td2_FD)
FD_sheet['AT35'].value=suc2FD.T().to('degC').magnitude
else:
fluid2_FD=fluid_FD
flow2_m_FD=flow_m_FD-flowSS_m_FD
suc2FD=State.define(fluid=fluid2_FD , p=Ps2_FD , T=Td_FD)
disch2FD=State.define(fluid=fluid2_FD , p=Pd2_FD , T=Td2_FD)
FD_sheet['AT35'].value=suc2FD.T().to('degC').magnitude
P2_FD=ccp.Point(speed=speed_FD,flow_m=flow2_m_FD,suc=suc2FD,disch=disch2FD)
P2_FD_=ccp.Point(speed=speed_FD,flow_m=flow2_m_FD*0.001,suc=suc2FD,disch=disch2FD)
if V_test:
FD_sheet['AT34'].value=P2_FD.flow_m.to('kg/h').magnitude
else:
FD_sheet['AT34'].value=P2_FD.flow_v.to('m³/h').magnitude
Imp2_FD = ccp.Impeller([P2_FD,P2_FD_],b=b2,D=D2)
Q1d_FD=flow_m_FD/dischFD.rho()
FD_sheet['AS37'].value=flowSS_v_FD.to('m³/h').magnitude/Q1d_FD.to('m³/h').magnitude
FD_sheet['AT25'].value=Imp2_FD._mach(P2_FD).magnitude
FD_sheet['AT26'].value=Imp2_FD._reynolds(P2_FD).magnitude
FD_sheet['AT27'].value=1/P2_FD._volume_ratio().magnitude
FD_sheet['AT28'].value=Imp2_FD._phi(P2_FD).magnitude
FD_sheet['AT29'].value=Imp2_FD._psi(P2_FD).magnitude
FD_sheet['AT30'].value=Imp2_FD._work_input_factor(P2_FD).magnitude
FD_sheet['AT32'].value=P2_FD._eff_pol_schultz().magnitude
FD_sheet['AT33'].value=P2_FD._power_calc().to('kW').magnitude
FD_sheet['K90'].value=sucFD.molar_mass().to('g/mol').magnitude
FD_sheet['N90'].value=SS_FD.molar_mass().to('g/mol').magnitude
### Reading and writing SECTION 1 from the TP sheet
Ps_TP = Q_(TP_sheet.range('L6').value,TP_sheet.range('M6').value)
Ts_TP = Q_(TP_sheet.range('N6').value,TP_sheet.range('O6').value)
Pd_TP = Q_(TP_sheet.range('P6').value,TP_sheet.range('Q6').value)
if TP_sheet.range('F6').value==None:
V_test=True
flow_v_TP = Q_(TP_sheet.range('H6').value,TP_sheet.range('I6').value)
else:
V_test=False
flow_m_TP = Q_(TP_sheet.range('F6').value,TP_sheet.range('G6').value)
speed_TP = Q_(TP_sheet.range('J6').value,TP_sheet.range('K6').value)
GasesT = TP_sheet.range('B4:B20').value
mol_fracT = TP_sheet.range('D4:D20').value
fluid_TP={}
for i in range(len(GasesT)):
if mol_fracT[i]>0:
fluid_TP.update({GasesT[i]:mol_fracT[i]})
sucTP=State.define(fluid=fluid_TP , p=Ps_TP , T=Ts_TP)
dischTPk=State.define(fluid=fluid_TP , p=Pd_TP , s=sucTP.s())
hd_TP=sucTP.h()+(dischTPk.h()-sucTP.h())/P_FD._eff_isen()
dischTP=State.define(fluid=fluid_TP , p=Pd_TP , h=hd_TP)
if V_test:
flow_m_TP=flow_v_TP*sucTP.rho()
TP_sheet['F6'].value=flow_m_TP.to(TP_sheet['G6'].value).magnitude
else:
flow_v_TP=flow_m_TP/sucTP.rho()
TP_sheet['H6'].value=flow_v_TP.to(TP_sheet['I6'].value).magnitude
P_TP=ccp.Point(speed=speed_TP,flow_m=flow_m_TP,suc=sucTP,disch=dischTP)
P_TP_=ccp.Point(speed=speed_TP,flow_m=flow_m_TP*0.001,suc=sucTP,disch=dischTP)
Imp_TP = ccp.Impeller([P_TP,P_TP_],b=b,D=D)
# Imp_TP.new_suc = P_FD.suc
# P_TPconv = Imp_TP._calc_from_speed(point=P_TP,new_speed=P_FD.speed)
N_ratio=speed_FD/speed_TP
if TP_sheet['C23'].value=='Yes':
rug=TP_sheet['D24'].value
ReTP=Imp_TP._reynolds(P_TP)
ReFD=Imp_FD._reynolds(P_FD)
RCTP=0.988/ReTP**0.243
RCFD=0.988/ReFD**0.243
RBTP=np.log(0.000125+13.67/ReTP)/np.log(rug+13.67/ReTP)
RBFD=np.log(0.000125+13.67/ReFD)/np.log(rug+13.67/ReFD)
RATP=0.066+0.934*(4.8e6*b.to('ft').magnitude/ReTP)**RCTP
RAFD=0.066+0.934*(4.8e6*b.to('ft').magnitude/ReFD)**RCFD
corr=RAFD/RATP*RBFD/RBTP
eff=1-(1-P_TP._eff_pol_schultz())*corr
TP_sheet['H37'].value=eff.magnitude
P_TPconv = ccp.Point(suc=P_FD.suc, eff=eff,
speed=speed_FD,flow_v=P_TP.flow_v*N_ratio,
head=P_TP._head_pol_schultz()*N_ratio**2)
else:
P_TPconv = ccp.Point(suc=P_FD.suc, eff=P_TP._eff_pol_schultz(),
speed=speed_FD,flow_v=P_TP.flow_v*N_ratio,
head=P_TP._head_pol_schultz()*N_ratio**2)
TP_sheet['H37'].value=''
TP_sheet['R6'].value=dischTP.T().to(TP_sheet['S6'].value).magnitude
TP_sheet['G19'].value=1/P_TP._volume_ratio().magnitude
TP_sheet['H19'].value=1/(P_TP._volume_ratio().magnitude/P_FD._volume_ratio().magnitude)
TP_sheet['G20'].value=Imp_TP._mach(P_TP).magnitude
TP_sheet['H21'].value=Imp_TP._mach(P_TP).magnitude-Imp_FD._mach(P_FD).magnitude
TP_sheet['G22'].value=Imp_TP._reynolds(P_TP).magnitude
TP_sheet['H23'].value=Imp_TP._reynolds(P_TP).magnitude/Imp_FD._reynolds(P_FD).magnitude
TP_sheet['G24'].value=Imp_TP._phi(P_TP).magnitude
TP_sheet['H25'].value=Imp_TP._phi(P_TP).magnitude/Imp_FD._phi(P_FD).magnitude
TP_sheet['G26'].value=Imp_TP._psi(P_TP).magnitude
TP_sheet['H27'].value=Imp_TP._psi(P_TP).magnitude/Imp_FD._psi(P_FD).magnitude
TP_sheet['G28'].value=P_TP._head_pol_schultz().to('kJ/kg').magnitude
TP_sheet['H29'].value=P_TP._head_pol_schultz().to('kJ/kg').magnitude/P_FD._head_pol_schultz().to('kJ/kg').magnitude
TP_sheet['G30'].value=P_TPconv._head_pol_schultz().to('kJ/kg').magnitude
TP_sheet['H31'].value=P_TPconv._head_pol_schultz().to('kJ/kg').magnitude/P_FD._head_pol_schultz().to('kJ/kg').magnitude
TP_sheet['G32'].value=P_TP._power_calc().to('kW').magnitude
TP_sheet['H33'].value=P_TP._power_calc().to('kW').magnitude/P_FD._power_calc().to('kW').magnitude
if TP_sheet['C25'].value=='Yes':
HL_FD=Q_(((sucFD.T()+dischFD.T()).to('degC').magnitude*0.8/2-25)*1.166*TP_sheet['D26'].value,'W')
HL_TP=Q_(((sucTP.T()+dischTP.T()).to('degC').magnitude*0.8/2-25)*1.166*TP_sheet['D26'].value,'W')
TP_sheet['G34'].value=(P_TPconv._power_calc()-HL_TP+HL_FD).to('kW').magnitude
TP_sheet['H35'].value=(P_TPconv._power_calc()-HL_TP+HL_FD).to('kW').magnitude/(P_FD._power_calc()).to('kW').magnitude
else:
TP_sheet['G34'].value=P_TPconv._power_calc().to('kW').magnitude
TP_sheet['H35'].value=P_TPconv._power_calc().to('kW').magnitude/P_FD._power_calc().to('kW').magnitude
TP_sheet['G36'].value=P_TP._eff_pol_schultz().magnitude
### Reading and writing SECTION 2 from the TP sheet
Ps2_TP = Pd_TP*0.995
if SS_config=='IN':
TSS_TP = Q_(TP_sheet.range('R9').value,TP_sheet.range('S9').value)
else:
TSS_TP = Td_TP
Pd2_TP = Q_(TP_sheet.range('P14').value,TP_sheet.range('Q14').value)
if TP_sheet.range('L9').value==None:
V_test=True
flowSS_v_TP = Q_(TP_sheet.range('N9').value,TP_sheet.range('O9').value)
else:
V_test=False
flowSS_m_TP = Q_(TP_sheet.range('L9').value,TP_sheet.range('M9').value)
speed2_TP = Q_(TP_sheet.range('J14').value,TP_sheet.range('K14').value)
fluidSS_TP=fluid_TP
SS_TP = State.define(fluid=fluidSS_TP , p=Ps2_TP , T=TSS_TP)
if V_test:
flowSS_m_TP=flowSS_v_TP*SS_TP.rho()
TP_sheet['L9'].value=flowSS_m_TP.to(TP_sheet.range('M9').value).magnitude
else:
flowSS_v_TP=flowSS_m_TP/SS_TP.rho()
TP_sheet['N9'].value=flowSS_v_TP.to(TP_sheet.range('O9').value).magnitude
if SS_config=='IN':
flow2_m_TP=flow_m_TP+flowSS_m_TP
TP_sheet['F14'].value=flow2_m_TP.to(TP_sheet.range('G14').value).magnitude
RSS=flowSS_m_TP/flow2_m_TP
R1=flow_m_TP/flow2_m_TP
fluid2_TP=fluidSS_TP
h2_TP=dischTP.h()*R1+SS_TP.h()*RSS
suc2TP=State.define(fluid=fluid2_TP , p=Ps2_TP , h=h2_TP)
flow2_v_TP=flow2_m_TP*suc2TP.v()
TP_sheet['H14'].value=flow2_v_TP.to(TP_sheet.range('I14').value).magnitude
TP_sheet['N14'].value=suc2TP.T().to(TP_sheet.range('O14').value).magnitude
else:
fluid2_TP=fluid_TP
flow2_m_TP=flow_m_TP-flowSS_m_TP
TP_sheet['F14'].value=flow2_m_TP.to(TP_sheet.range('G14').value).magnitude
suc2TP=State.define(fluid=fluid2_TP , p=Ps2_TP , T=Td_TP)
flow2_v_TP=flow2_m_TP*suc2TP.v()
TP_sheet['H14'].value=flow2_v_TP.to(TP_sheet.range('I14').value).magnitude
TP_sheet['N14'].value=suc2FD.T().to(TP_sheet.range('O14').value).magnitude
disch2TPk=State.define(fluid=fluid2_TP , p=Pd2_TP , s=suc2TP.s())
hd2_TP=suc2TP.h()+(disch2TPk.h()-suc2TP.h())/P2_FD._eff_isen()
disch2TP=State.define(fluid=fluid2_TP , p=Pd2_TP , h=hd2_TP)
TP_sheet['R14'].value=disch2TP.T().to(TP_sheet.range('S14').value).magnitude
P2_TP=ccp.Point(speed=speed2_TP,flow_m=flow2_m_TP,suc=suc2TP,disch=disch2TP)
P2_TP_=ccp.Point(speed=speed2_TP,flow_m=flow2_m_TP*0.001,suc=suc2TP,disch=disch2TP)
Imp2_TP = ccp.Impeller([P2_TP,P2_TP_],b=b2,D=D2)
# Imp2_TP.new_suc = P2_FD.suc
# P2_TPconv = Imp2_TP._calc_from_speed(point=P2_TP,new_speed=P_FD.speed)
N2_ratio=speed_FD/speed2_TP
if TP_sheet['C23'].value=='Yes':
rug=TP_sheet['D24'].value
Re2TP=Imp2_TP._reynolds(P2_TP)
Re2FD=Imp2_FD._reynolds(P2_FD)
RCTP=0.988/Re2TP**0.243
RCFD=0.988/Re2FD**0.243
RBTP=np.log(0.000125+13.67/Re2TP)/np.log(rug+13.67/Re2TP)
RBFD=np.log(0.000125+13.67/Re2FD)/np.log(rug+13.67/Re2FD)
RATP=0.066+0.934*(4.8e6*b2.to('ft').magnitude/Re2TP)**RCTP
RAFD=0.066+0.934*(4.8e6*b2.to('ft').magnitude/Re2FD)**RCFD
corr=RAFD/RATP*RBFD/RBTP
eff=1-(1-P2_TP._eff_pol_schultz())*corr
TP_sheet['M37'].value=eff.magnitude
P2_TPconv = ccp.Point(suc=P2_FD.suc, eff=eff,
speed=speed_FD,flow_v=P2_TP.flow_v*N2_ratio,
head=P2_TP._head_pol_schultz()*N2_ratio**2)
else:
P2_TPconv = ccp.Point(suc=P2_FD.suc, eff=P2_TP._eff_pol_schultz(),
speed=speed_FD,flow_v=P2_TP.flow_v*N2_ratio,
head=P2_TP._head_pol_schultz()*N2_ratio**2)
TP_sheet['M37'].value=''
Q1d_TP=flow_m_TP/dischTP.rho()
TP_sheet['R28'].value=flowSS_v_TP.to('m³/h').magnitude/Q1d_TP.to('m³/h').magnitude
TP_sheet['S28'].value=flowSS_v_TP.to('m³/h').magnitude/Q1d_TP.to('m³/h').magnitude/(flowSS_v_FD.to('m³/h').magnitude/Q1d_FD.to('m³/h').magnitude)
TP_sheet['R14'].value=disch2TP.T().to(TP_sheet.range('S14').value).magnitude
TP_sheet['L19'].value=1/P2_TP._volume_ratio().magnitude
TP_sheet['M19'].value=1/(P2_TP._volume_ratio().magnitude/P2_FD._volume_ratio().magnitude)
TP_sheet['L20'].value=Imp2_TP._mach(P2_TP).magnitude
TP_sheet['M21'].value=Imp2_TP._mach(P2_TP).magnitude-Imp2_FD._mach(P2_FD).magnitude
TP_sheet['L22'].value=Imp2_TP._reynolds(P2_TP).magnitude
TP_sheet['M23'].value=Imp2_TP._reynolds(P2_TP).magnitude/Imp2_FD._reynolds(P2_FD).magnitude
TP_sheet['L24'].value=Imp2_TP._phi(P2_TP).magnitude
TP_sheet['M25'].value=Imp2_TP._phi(P2_TP).magnitude/Imp2_FD._phi(P2_FD).magnitude
TP_sheet['L26'].value=Imp2_TP._psi(P2_TP).magnitude
TP_sheet['M27'].value=Imp2_TP._psi(P2_TP).magnitude/Imp2_FD._psi(P2_FD).magnitude
TP_sheet['L28'].value=P2_TP._head_pol_schultz().to('kJ/kg').magnitude
TP_sheet['M29'].value=P2_TP._head_pol_schultz().to('kJ/kg').magnitude/P2_FD._head_pol_schultz().to('kJ/kg').magnitude
TP_sheet['L30'].value=P2_TPconv._head_pol_schultz().to('kJ/kg').magnitude
TP_sheet['M31'].value=P2_TPconv._head_pol_schultz().to('kJ/kg').magnitude/P2_FD._head_pol_schultz().to('kJ/kg').magnitude
TP_sheet['L32'].value=P2_TP._power_calc().to('kW').magnitude
TP_sheet['M33'].value=P2_TP._power_calc().to('kW').magnitude/P2_FD._power_calc().to('kW').magnitude
if TP_sheet['C27'].value=='Yes':
HL_FD=Q_(((suc2FD.T()+disch2FD.T()).to('degC').magnitude*0.8/2-25)*1.166*TP_sheet['D28'].value,'W')
HL_TP=Q_(((suc2TP.T()+disch2TP.T()).to('degC').magnitude*0.8/2-25)*1.166*TP_sheet['D28'].value,'W')
TP_sheet['L34'].value=(P2_TPconv._power_calc()-HL_TP+HL_FD).to('kW').magnitude
TP_sheet['M35'].value=(P2_TPconv._power_calc()-HL_TP+HL_FD).to('kW').magnitude/(P2_FD._power_calc()).to('kW').magnitude
else:
TP_sheet['L34'].value=P2_TPconv._power_calc().to('kW').magnitude
TP_sheet['M35'].value=P2_TPconv._power_calc().to('kW').magnitude/P2_FD._power_calc().to('kW').magnitude
TP_sheet['L36'].value=P2_TP._eff_pol_schultz().magnitude
# -
P2_TP.eff
corr
| scripts/val_test_proc_2sec.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Challenge Problem Week 1
# Hepatitis B (HEP B) is a liver infection caused by the hepatitis B virus (HBV). The infection causes inflammation of the liver and, if not properly treated, can lead to liver diseases such as cirrhosis or liver cancer.
# HEP B is one of the primary causes of liver cancer, which is among the leading causes of cancer deaths in the world, making it a major global health problem. HEP B is up to 100 times more infectious than HIV. Two billion people (1 in 3) have been infected and more than 292 million people are living with a chronic hepatitis B infection. Although HEP B is treatable and preventable, about 884,000 people die from it each year.
#
# The virus is transmitted through the blood and infected bodily fluids. It can be passed to others through direct contact with blood, unprotected sex, use of illegal drugs, unsterilized or contaminated needles, and from an infected woman to her newborn during pregnancy or childbirth. Most people do not show symptoms and the only way to know you are infected is by getting tested.
#
# 
#
# **Goal**: Use the NHANES data set to predict whether a patient has HEP B or not. We want to determine which attributes are the most meaningful to the predictive models, and to create a balanced model that can predict with high sensitivity and high specificity while using the **fewest features**. Essentially, is there a way to identify the infected population without testing everyone?
#
# Source: https://www.hepb.org/what-is-hepatitis-b/what-is-hepb/
# # National Health and Nutrition Examination Survey NHANES
# To investigate our research problem we will be using the NHANES database. NHANES is a program of studies designed to assess the health and nutritional status of adults and children in the United States. The survey is unique in that it combines interviews and physical examinations. The survey examines a nationally representative sample of about 5,000 persons each year. These persons are located in counties across the country, 15 of which are visited each year. The NHANES interview includes demographic, socioeconomic, dietary, and health-related questions. The examination component consists of medical, dental, and physiological measurements, as well as laboratory tests administered by highly trained medical personnel.
#
# Source: Centers for Disease Control and Prevention (CDC). National Center for Health Statistics (NCHS). National Health and Nutrition Examination Survey Data. Hyattsville, MD: U.S. Department of Health and Human Services, Centers for Disease Control and Prevention, [2019][https://www.cdc.gov/nchs/nhanes/about_nhanes.htm#data].
# Below are some general steps to begin analyzing this problem. Apply the new material you learned in class and have fun! (:
#
# 1. Import the data
# 2. Decide what variables are most relevant
# 3. Summary statistics of the data
# 4. Data cleaning (important! Note this may be a tedious process)
#     a. Missing data
#     b. Transform/normalize data
# 5. Data visualization
# 6. Data analysis
#     a. Create dummy variables
#     b. Create training and test sets
#     c. Statistical methodology
# 7. Scoring metrics
#     a. Confusion matrix, ROC curve
#
# +
#import needed libraries
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import math
import seaborn as sns
import os
#os.chdir("./Week1Public")
# -
# ## Import data
# Read in the data set and look at the first ten lines
#import data
url = 'challengeProblem1.train.csv'
train_data = pd.read_csv(url,low_memory=False)
# data.head() # Write your code here
train_data.head()
s = train_data[['LBXHBC']] # this is the HEP B factor
s.describe()
train_data.shape[0]
# +
#dropping unnecessary variables (don't worry about this)
#dataset = dataset[dataset.columns.drop(list(dataset.filter(regex='WT')))]
#dataset = dataset[dataset.columns.drop(list(dataset.filter(regex='SDM')))]
# -
# Awesome, looks like the data loaded in properly. Let's continue by looking at variables that may be predictive of hepatitis B. For beginners, I would suggest conducting a literature review on previous research of hepatitis B.
# ## Select Features of Interest
# Once you have selected some variables, subset the NHANES data set to only the columns you are interested in. It is also in your best interest to rename the variables.
# +
# Write your code here
# create a subset of the data you want to analyze
mytraindat = train_data[['LBXHBC','LBXHBS', 'LBDHI', 'IMQ020','SXQ292','LBDTCSI','MCQ203','ALQ120Q','MCQ092','BPAARM','RIDRETH3']] # create a subset of the selected features you want to investigate
# rename the variables
mytraindat = mytraindat.rename(index = str, columns = {"LBXHBC":"HEPB","LBXHBS": "HEPB_antibody", "LBDHI": "HIV", "IMQ020":"Immunization","SXQ292":"Sexual Orientation","LBDTCSI":"Total Cholesterol","MCQ203":"Jaundice","ALQ120Q":"Alcohol Consumption","MCQ092":"Blood Transfusion","BPAARM":"Blood Pressure","RIDRETH3":"Ethnicity" }) #renaming variables
#s = train_data[['HCQ100']] # this is the HEP B factor
#s.describe()
#1982100/train_data.shape[0]
mytraindat.describe()
# -
# Remember, the goal is to create a balanced model that can predict with high sensitivity and high specificity while using the **fewest features**. Next, we will look at some summary statistics of the variables you chose.
# ## View summary statistics
# Some useful functions in pandas are describe() and info()
# Write your code here
mytraindat.corr()
# Note the data types are float64, int64, or object. If there are columns that are obviously numeric, like age, but show up as objects (or vice versa), we need to convert them.
# ## Data Cleaning
#
# Ensure that numeric and categorical variables are coded correctly (turn numeric from strings to numbers).
# Write your code here
sum(mytraindat['HEPB_antibody']==2)
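As a sketch of the type conversion mentioned above, pandas' `pd.to_numeric` can coerce string-typed numeric columns (the column and values below are illustrative, not taken from NHANES):

```python
import pandas as pd

# Hypothetical column with numeric values stored as strings
df = pd.DataFrame({"age": ["25", "31", "not recorded", "47"]})

# errors="coerce" turns unparseable entries into NaN instead of raising,
# which feeds naturally into the missing-data handling below
df["age"] = pd.to_numeric(df["age"], errors="coerce")

print(df["age"].dtype)  # float64, with NaN where parsing failed
```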
# Notice the counts for the columns differ because of missing values, so you will have to figure out how to remediate that issue. Some suggestions are found in https://scikit-learn.org/stable/modules/impute.html#impute
# +
# Write your code here
# remove, impute, some remedial procedure for NAs
# -
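One remedial procedure from the scikit-learn impute page linked above is `SimpleImputer`; a minimal sketch on a toy matrix (the data here is made up, and median imputation is just one reasonable choice for skewed survey variables):

```python
import numpy as np
from sklearn.impute import SimpleImputer

# Toy matrix standing in for the selected NHANES features
X = np.array([[1.0, 2.0], [np.nan, 3.0], [7.0, np.nan]])

# Fill each column's NaNs with that column's median; in a real workflow,
# fit on the training set only to avoid leaking test-set information
imputer = SimpleImputer(strategy="median")
X_filled = imputer.fit_transform(X)

print(X_filled)  # no NaNs remain
```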
# Lastly, we will convert HEP B into indicator variables
# +
# write your code here
# -
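One way to build the indicator is a simple `map`. NHANES lab results are commonly coded 1 = positive and 2 = negative, but that coding is an assumption here; check the variable's codebook before relying on it:

```python
import pandas as pd

# Assumed NHANES-style coding: 1 = positive, 2 = negative (verify in codebook)
s = pd.Series([1.0, 2.0, 2.0, 1.0], name="HEPB")
indicator = s.map({1.0: 1, 2.0: 0})
print(indicator.tolist())  # [1, 0, 0, 1]
```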
# ## Data Analysis and Visualization
#
# Take a look at your data. I would suggest doing univariate, bivariate, and multi-variate analysis of most if not all the features you chose.
# +
# Write your code here
# -
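A starting point for the univariate and bivariate looks described above (the tiny DataFrame is a made-up stand-in for your cleaned subset):

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen so this runs without a display
import matplotlib.pyplot as plt
import pandas as pd

# Toy stand-in for the cleaned NHANES subset
df = pd.DataFrame({"HEPB": [0, 0, 1, 0, 1],
                   "Total Cholesterol": [4.1, 5.2, 6.3, 4.8, 5.9]})

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))
df["Total Cholesterol"].hist(ax=ax1)                       # univariate
df.boxplot(column="Total Cholesterol", by="HEPB", ax=ax2)  # bivariate
fig.savefig("eda.png")
```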
# ## Preprocessing data
# Before we begin to implement a model, we need to prepare the variables that will be used. At this step we convert categorical variables into dummy\indicator variables (https://chrisalbon.com/python/data_wrangling/pandas_convert_categorical_to_dummies/). Additionally, you'll have to normalize and transform variables if necessary.
# +
# Write your code here
# -
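A sketch of the dummy-variable step with `pd.get_dummies` (toy column; `drop_first=True` avoids the perfectly collinear "dummy-variable trap"):

```python
import pandas as pd

# Hypothetical categorical column standing in for e.g. Ethnicity codes
df = pd.DataFrame({"Ethnicity": ["A", "B", "A", "C"]})
dummies = pd.get_dummies(df["Ethnicity"], prefix="Eth", drop_first=True)
print(list(dummies.columns))  # ['Eth_B', 'Eth_C']
```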
# ## Model training and selection
# Now, let's split our data into training and testing in an 80-20 split, stratified by HEPB distribution (this tries to keep the HEPB distribution approximately equal for the training and test set). For consistency, let's use a random seed 0.
# +
# Write your code here
from sklearn.model_selection import train_test_split
# -
# Now we can use our training data to create the model and make predictions on your test data.
# +
#Write your code here
from sklearn.linear_model import LogisticRegression
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)
# -
# # Scoring Metrics
# ## Confusion Matrix Metrics
#
# There are several useful metrics that are derived from the confusion matrix:
#
# 
#
# * sensitivity, **recall**, hit rate, or true positive rate (TPR) : $ \mathrm {TPR} ={\frac {\mathrm {TP} }{P}}={\frac {\mathrm {TP} }{\mathrm {TP} +\mathrm {FN} }}$
#
# * **precision** or positive predictive value (PPV) : $ \mathrm {PPV} ={\frac {\mathrm {TP} }{\mathrm {TP} +\mathrm {FP} }}$
#
# * specificity or true negative rate (TNR) : $\mathrm {TNR} ={\frac {\mathrm {TN} }{N}}={\frac {\mathrm {TN} }{\mathrm {TN} +\mathrm {FP} }}$
#
# * miss rate or false negative rate (FNR) : $ \mathrm {FNR} ={\frac {\mathrm {FN} }{P}}={\frac {\mathrm {FN} }{\mathrm {FN} +\mathrm {TP} }}=1-\mathrm {TPR}$
#
# * fall-out or false positive rate (FPR) : $\mathrm {FPR} ={\frac {\mathrm {FP} }{N}}={\frac {\mathrm {FP} }{\mathrm {FP} +\mathrm {TN} }}=1-\mathrm {TNR} $
#
# * accuracy (ACC) : $\mathrm {ACC} ={\frac {\mathrm {TP} +\mathrm {TN} }{P+N}}={\frac {\mathrm {TP} +\mathrm {TN} }{\mathrm {TP} +\mathrm {TN} +\mathrm {FP} +\mathrm {FN} }}$
#
# Now use code below to calculate the confusion matrix.
# +
# write your code here
from sklearn import metrics
from sklearn.metrics import confusion_matrix
# -
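A sketch of pulling the metrics above out of the confusion matrix, using toy labels in place of your `y_test` and model predictions:

```python
from sklearn.metrics import confusion_matrix

# Toy labels/predictions standing in for y_test and the model's output
y_true = [0, 0, 1, 1, 1, 0]
y_pred = [0, 1, 1, 1, 0, 0]

# For binary labels, ravel() unpacks the 2x2 matrix in this order
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)   # recall / TPR
specificity = tn / (tn + fp)   # TNR
print(tn, fp, fn, tp)          # 2 1 1 2
```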
# The model is classifying everything as class 1... Pretty terrible. :( Well maybe there's a threshold where this doesn't happen. Let's look at the AUC ROC.
#
# ## AUC ROC
#
# A receiver operating characteristic (ROC) is a probability curve that plots the true positive rate (y) against the false positive rate (x) at many decision threshold settings. The area under the curve (AUC) represents a measure of separability or how much the model is capable of distinguishing between classes. An AUC closer to 1 is desirable as it shows the model is perfectly distinguishing between patients with disease and no disease. A poor model has an AUC $\leq$ 0.50.
#extract fpr and tpr to plot ROC curve and calculate AUC (Note: fpr - false positive rate, tpr - true positive rate)
# y_score is assumed to hold the model's predicted probabilities for the
# positive class, e.g. y_score = model.predict_proba(X_test)[:, 1]
fpr, tpr, threshold = metrics.roc_curve(y_test, y_score)
# This model only looks at three possible features and leaves lots of room for improvement! Try using more features, different models, and see if you can do anything about the data we threw out earlier.
# ## Economic Cost
# Similar to the confusion matrix, we want you to keep in mind the other aspects of healthcare analytics--in this case, economic feasibility. In essence, we want you to minimize the amount of time and money spent on data collection by **reducing the number of features** collected. Each record certainly required a lot of time and money from several individuals and businesses to reliably create, and we hope you gain a better understanding of conducting a useful cost-benefit analysis with this scoring method. This won't be evaluated quantitatively, but please consider discussing it for your presentation.
# For your presentation on Friday, don't forget to mention why you selected the features you used, the model implemented, the scoring metrics mentioned above, and the limitations of your analysis.
# # Next steps
# For those that finish early, try different classification models such as decision trees, KNN, SVM etc. You can try tackling the multiclass classifier (predicting the different cases instead of simply negative or positive)!
#
# Given the rich data set provided feel free to study a research question of your interest. Have fun! (:
| .ipynb_checkpoints/Challenge Problem Week 1-Student-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from sklearn.preprocessing import normalize
from sklearn.cluster import KMeans
from pprint import pprint
from collections import Counter
import numpy as np
import pandas as pd
import pickle
import operator
import argparse
from os import listdir
from os.path import isfile, join
import talos
import os
import seaborn as sns
import matplotlib.pyplot as plt
DATA_PATH = '../dataset/'
data = pd.read_csv("../../data/sample_data_intw.csv")
data.head(5)
# +
#separate categorical and numerical columns
non_num=[]
for i in data.columns:
if data[i].dtype=="object":
non_num.append(i)
print(non_num)
# +
#drop these columns
data.drop(labels=non_num,axis=1,inplace=True)
#Prepare feature and target sets
x = data.drop(labels=["label"],axis=1)
y = data["label"]
x = talos.utils.rescale_meanzero(x)
#since our data is completely numeric, we can proceed with upsampling
from imblearn.over_sampling import SMOTE
#Initialise smote
smote = SMOTE()
# fit predictor and target variable on smote
x_smote, y_smote = smote.fit_resample(x,y)
print('old shape', x.shape)
print('new shape', x_smote.shape)
# -
from sklearn.model_selection import train_test_split
x_train,x_test,y_train,y_test = train_test_split(x_smote,y_smote,test_size=0.25,random_state=123)
print("length of x_train {} and y_train {}".format(len(x_train),len(y_train)))
print("length of x_test {} and y_test {}".format(len(x_test),len(y_test)))
# split for validation set
x_train,x_valid,y_train,y_valid = train_test_split(x_train,y_train,test_size=0.20,random_state=123)
print("length of x_train {} and y_train {}".format(len(x_train),len(y_train)))
print("length of x_valid {} and y_valid {}".format(len(x_valid),len(y_valid)))
dataset_name = "tele"
pickle.dump(x_smote, open(DATA_PATH+dataset_name+'_features.p', 'wb'))
pickle.dump(y_smote, open(DATA_PATH+dataset_name+'_labels.p', 'wb'))
| proj/data/tele.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:py35]
# language: python
# name: conda-env-py35-py
# ---
import matplotlib.pyplot as plt
import matplotlib as mpl
import sys
import numpy as np
# %matplotlib inline
sys.version
mpl.__version__
def generatefigure():
font = {'family':'serif','size':16, 'serif': ['computer modern roman']}
plt.rc('font', **font)
plt.rc('text', usetex=True)
fig, ax = plt.subplots()#figsize=(4, 4))
xvals = np.arange(0, 11, 1)
yvals = np.arange(0, 11, 1)
ax.plot(xvals, yvals, color='0')
ax.set_xlabel(r'x derp')
ax.set_ylabel(r'y derp')
ax.set_title(r'derptitle with $\sum^\inf$ math')
ax.set_xticks([0, 5, 10])
ax.set_yticks([0, 5, 10])
return fig, ax
fig, ax = generatefigure()
#fig.savefig('./derp.pdf', format='pdf', bbox_inches='tight')
| matplotlib_latex_font.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
my_list = ['1234', '5678', '3456']
my_list[0]
for my_item in my_list:
print(my_item)
for my_item in my_list:
print(int(my_item) + 1)
for my_item in my_list:
int(my_item) + 1
for my_item in my_list:
a_new_integer = int(my_item) + 1
my_list
my_item
print(a_new_integer)
my_new_list = []
for my_item in my_list:
a_new_integer = int(my_item) + 1
my_new_list = my_new_list + [a_new_integer]
print(my_new_list)
# +
# del my_new_list
# +
my_list = ['1234', '5678', '3456']
print(my_list)
my_list.append('2345')
my_new_list = []
for my_item in my_list:
a_new_integer = int(my_item) + 1
print(a_new_integer)
my_new_list.append(a_new_integer)
print(my_new_list)
# +
# Write a for loop that gives the sum of those numbers.
# Save it to a variable.
# Print the variable once at the end.
# HINT: Is there a way to accumulate the sum
# as you work through the items in the list
even_numbers = [2, 4, 6, 8, 10]
the_sum = 0
for value in even_numbers:
the_sum = the_sum + value
print(the_sum)
# -
sum(even_numbers)
# +
# sum = __builtin__.sum
# +
height = 0.25
if height > 0.5:
print("I'm TALLL!!!")
if height < 0.3:
print("Preeeety short")
# -
height = 0.25
if height > 0.5:
print("I'm TALLL!!!")
else:
print("Not tall, I guess.")
# +
temperature = 10.0000000000000001
# Print "boiling" if the temperature is greater than 100
# And "not boiling" if it's less than 100.
# ">=" means greater than or equal to.
# ">" means strictly greater than
if temperature >= 100:
print("boiling")
else:
print("not boiling")
# -
temperatures = [10, 105, 2000]
for temp in temperatures:
if temp >= 100:
print("Boiling")
else:
print("Not boiling")
temperatures = [10, 105, 2000]
for temp in temperatures:
# If the temperature gets to be boiling STOP THE PROCESS
if temp >= 100:
break
else:
last_temp_under_boiling = temp
print("The final temperature was", temp)
print("The final temperature before it boiled was",
last_temp_under_boiling)
temperatures = [10, 81, 105, 2000]
for temp in temperatures:
if temp >= 1000:
print("Blazin'")
elif temp >= 100:
print("Boiling")
elif temp >= 80:
print("Simmering")
else:
print("Not boiling")
# +
def to_fahrenheit(temp_in_c):
print((temp_in_c * 9/5) + 32)
to_fahrenheit(100)
# +
# Write a new function that takes a temperature in fahrenheit
# and return it in celsius.
def to_celsius(temp_in_f):
return (temp_in_f - 32) * 5 / 9
def to_fahrenheit(temp_in_c):
return (temp_in_c * 9/5) + 32
temp_in_celsius = to_celsius(212)
# -
print(to_celsius(to_fahrenheit(100)))
# +
# Define a third function that takes the temperature
# in celsius and converts it to kelvin
def to_kelvin(temp_in_c):
return temp_in_c + 273.15
# -
# Write a line of code to convert a value from F to Kelvin
to_kelvin(to_celsius(170))
# +
import pandas as pd
import matplotlib.pyplot as plt
def plot_gdp_file(path_to_file, color):
data = pd.read_csv(path_to_file, index_col='country')
new_column_names = []
for name in data.columns:
new_column_names.append(int(name[-4:]))
data.columns = new_column_names
for country in data.index:
values_per_country = data.loc[country]
if values_per_country.mean() > 10000:
plt.plot(values_per_country, color=color)
else:
plt.plot(values_per_country, color=color, lw=0.5)
plot_gdp_file('gapminder_gdp_oceania.csv', 'blue')
plot_gdp_file('gapminder_gdp_africa.csv', 'red')
plt.yscale('log')
# -
data = pd.read_csv('gapminder_gdp_africa.csv', index_col='country')
data[data['gdpPercap_1977'] > 10000]['gdpPercap_1977']
| files/day-2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Estimating Proportions
# + [markdown] tags=[]
# Think Bayes, Second Edition
#
# Copyright 2020 <NAME>
#
# License: [Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/)
# + tags=[]
# If we're running on Colab, install empiricaldist
# https://pypi.org/project/empiricaldist/
import sys
IN_COLAB = 'google.colab' in sys.modules
if IN_COLAB:
# !pip install empiricaldist
# + tags=[]
# Get utils.py and create directories
import os
if not os.path.exists('utils.py'):
# !wget https://github.com/AllenDowney/ThinkBayes2/raw/master/soln/utils.py
# + tags=[]
from utils import set_pyplot_params
set_pyplot_params()
# -
# In the previous chapter we solved the 101 Bowls Problem, and I admitted that it is not really about guessing which bowl the cookies came from; it is about estimating proportions.
#
# In this chapter, we take another step toward Bayesian statistics by solving the Euro problem.
# We'll start with the same prior distribution, and we'll see that the update is the same, mathematically.
# But I will argue that it is a different problem, philosophically, and use it to introduce two defining elements of Bayesian statistics: choosing prior distributions, and using probability to represent the unknown.
# ## The Euro Problem
#
# In *Information Theory, Inference, and Learning Algorithms*, <NAME> poses this problem:
#
# "A statistical statement appeared in *The Guardian* on Friday January 4, 2002:
#
# > When spun on edge 250 times, a Belgian one-euro coin came up heads 140 times and tails 110. \`It looks very suspicious to me,' said <NAME>, a statistics lecturer at the London School of Economics. \`If the coin were unbiased, the chance of getting a result as extreme as that would be less than 7%.'
#
# "But [MacKay asks] do these data give evidence that the coin is biased rather than fair?"
#
# To answer that question, we'll proceed in two steps.
# First we'll use the binomial distribution to see where that 7% came from; then we'll use Bayes's Theorem to estimate the probability that this coin comes up heads.
#
# ## The Binomial Distribution
#
# Suppose I tell you that a coin is "fair", that is, the probability of heads is 50%. If you spin it twice, there are four outcomes: `HH`, `HT`, `TH`, and `TT`. All four outcomes have the same probability, 25%.
#
# If we add up the total number of heads, there are three possible results: 0, 1, or 2. The probabilities of 0 and 2 are 25%, and the probability of 1 is 50%.
#
# More generally, suppose the probability of heads is $p$ and we spin the coin $n$ times. The probability that we get a total of $k$ heads is given by the [binomial distribution](https://en.wikipedia.org/wiki/Binomial_distribution):
#
# $$\binom{n}{k} p^k (1-p)^{n-k}$$
#
# for any value of $k$ from 0 to $n$, including both.
# The term $\binom{n}{k}$ is the [binomial coefficient](https://en.wikipedia.org/wiki/Binomial_coefficient), usually pronounced "n choose k".
#
# We could evaluate this expression ourselves, but we can also use the SciPy function `binom.pmf`.
# For example, if we flip a coin `n=2` times and the probability of heads is `p=0.5`, here's the probability of getting `k=1` heads:
# +
from scipy.stats import binom
n = 2
p = 0.5
k = 1
binom.pmf(k, n, p)
# -
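As the text notes, we could also evaluate the expression ourselves. A quick check with the standard library's `math.comb` gives the same value as `binom.pmf`:

```python
from math import comb

def binom_pmf(k, n, p):
    """Binomial PMF evaluated directly from the formula."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# probability of k=1 heads in n=2 spins of a fair coin
print(binom_pmf(1, 2, 0.5))  # -> 0.5
```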
# Instead of providing a single value for `k`, we can also call `binom.pmf` with an array of values.
# +
import numpy as np
ks = np.arange(n+1)
ps = binom.pmf(ks, n, p)
ps
# -
# The result is a NumPy array with the probability of 0, 1, or 2 heads.
# If we put these probabilities in a `Pmf`, the result is the distribution of `k` for the given values of `n` and `p`.
#
# Here's what it looks like:
# +
from empiricaldist import Pmf
pmf_k = Pmf(ps, ks)
pmf_k
# -
# The following function computes the binomial distribution for given values of `n` and `p` and returns a `Pmf` that represents the result.
def make_binomial(n, p):
    """Make a binomial Pmf."""
    ks = np.arange(n+1)
    ps = binom.pmf(ks, n, p)
    return Pmf(ps, ks)
# Here's what it looks like with `n=250` and `p=0.5`:
pmf_k = make_binomial(n=250, p=0.5)
# + tags=[]
from utils import decorate
pmf_k.plot(label='n=250, p=0.5')
decorate(xlabel='Number of heads (k)',
ylabel='PMF',
title='Binomial distribution')
# -
# The most likely quantity in this distribution is 125:
pmf_k.max_prob()
# But even though it is the most likely quantity, the probability that we get exactly 125 heads is only about 5%.
pmf_k[125]
# In MacKay's example, we got 140 heads, which is even less likely than 125:
pmf_k[140]
# In the article MacKay quotes, the statistician says, "If the coin were unbiased the chance of getting a result as extreme as that would be less than 7%."
#
# We can use the binomial distribution to check his math. The following function takes a PMF and computes the total probability of quantities greater than or equal to `threshold`.
def prob_ge(pmf, threshold):
    """Probability of quantities greater than or equal to threshold."""
    ge = (pmf.qs >= threshold)
    total = pmf[ge].sum()
    return total
# Here's the probability of getting 140 heads or more:
prob_ge(pmf_k, 140)
# `Pmf` provides a method that does the same computation.
pmf_k.prob_ge(140)
# The result is about 3.3%, which is less than the quoted 7%. The reason for the difference is that the statistician includes all outcomes "as extreme as" 140, which includes outcomes less than or equal to 110.
#
# To see where that comes from, recall that the expected number of heads is 125. If we get 140, we've exceeded that expectation by 15.
# And if we get 110, we have come up short by 15.
#
# 7% is the sum of both of these "tails", as shown in the following figure.
# + tags=[]
import matplotlib.pyplot as plt
def fill_below(pmf):
    qs = pmf.index
    ps = pmf.values
    plt.fill_between(qs, ps, 0, color='C5', alpha=0.4)
qs = pmf_k.index
fill_below(pmf_k[qs>=140])
fill_below(pmf_k[qs<=110])
pmf_k.plot(label='n=250, p=0.5')
decorate(xlabel='Number of heads (k)',
ylabel='PMF',
title='Binomial distribution')
# -
# Here's how we compute the total probability of the left tail.
pmf_k.prob_le(110)
# The probability of outcomes less than or equal to 110 is also 3.3%,
# so the total probability of outcomes "as extreme" as 140 is 6.6%.
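This two-tailed total can also be reproduced from scratch, without `empiricaldist`, by summing the binomial PMF over both tails (a standard-library sketch of the same computation):

```python
from math import comb

def binom_pmf(k, n, p):
    """Binomial PMF evaluated directly from the formula."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# both tails of Binomial(n=250, p=0.5)
right_tail = sum(binom_pmf(k, 250, 0.5) for k in range(140, 251))
left_tail = sum(binom_pmf(k, 250, 0.5) for k in range(0, 111))
print(right_tail + left_tail)  # about 0.066
```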
#
# The point of this calculation is that these extreme outcomes are unlikely if the coin is fair.
#
# That's interesting, but it doesn't answer MacKay's question. Let's see if we can.
# ## Bayesian Estimation
#
# Any given coin has some probability of landing heads up when spun
# on edge; I'll call this probability `x`.
# It seems reasonable to believe that `x` depends
# on physical characteristics of the coin, like the distribution
# of weight.
# If a coin is perfectly balanced, we expect `x` to be close to 50%, but
# for a lopsided coin, `x` might be substantially different.
# We can use Bayes's theorem and the observed data to estimate `x`.
#
# For simplicity, I'll start with a uniform prior, which assumes that all values of `x` are equally likely.
# That might not be a reasonable assumption, so we'll come back and consider other priors later.
#
# We can make a uniform prior like this:
hypos = np.linspace(0, 1, 101)
prior = Pmf(1, hypos)
# `hypos` is an array of equally spaced values between 0 and 1.
#
# We can use the hypotheses to compute the likelihoods, like this:
likelihood_heads = hypos
likelihood_tails = 1 - hypos
# I'll put the likelihoods for heads and tails in a dictionary to make it easier to do the update.
likelihood = {
'H': likelihood_heads,
'T': likelihood_tails
}
# To represent the data, I'll construct a string with `H` repeated 140 times and `T` repeated 110 times.
dataset = 'H' * 140 + 'T' * 110
# The following function does the update.
def update_euro(pmf, dataset):
    """Update pmf with a given sequence of H and T."""
    for data in dataset:
        pmf *= likelihood[data]
    pmf.normalize()
# The first argument is a `Pmf` that represents the prior.
# The second argument is a sequence of strings.
# Each time through the loop, we multiply `pmf` by the likelihood of one outcome, `H` for heads or `T` for tails.
#
# Notice that `normalize` is outside the loop, so the posterior distribution only gets normalized once, at the end.
# That's more efficient than normalizing it after each spin (although we'll see later that it can also cause problems with floating-point arithmetic).
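To see the floating-point caveat in action, consider an unnormalized product of many small likelihoods. This is a toy sketch: the factor 0.1 is an arbitrary stand-in for a small likelihood, not a value from the Euro problem.

```python
# toy illustration: multiplying many small factors without normalizing
product = 1.0
for _ in range(2000):
    product *= 0.1  # each factor stands in for one small likelihood
print(product)  # -> 0.0 (underflow)
```

With enough factors the product drops below the smallest representable float and underflows to exactly zero, which is why normalizing (or working in log space) matters for long datasets.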
#
# Here's how we use `update_euro`.
posterior = prior.copy()
update_euro(posterior, dataset)
# And here's what the posterior looks like.
# + tags=[]
def decorate_euro(title):
    decorate(xlabel='Proportion of heads (x)',
             ylabel='Probability',
             title=title)
# + tags=[]
posterior.plot(label='140 heads out of 250', color='C4')
decorate_euro(title='Posterior distribution of x')
# -
# This figure shows the posterior distribution of `x`, which is the proportion of heads for the coin we observed.
#
# The posterior distribution represents our beliefs about `x` after seeing the data.
# It indicates that values less than 0.4 and greater than 0.7 are unlikely; values between 0.5 and 0.6 are the most likely.
#
# In fact, the most likely value for `x` is 0.56 which is the proportion of heads in the dataset, `140/250`.
posterior.max_prob()
# ## Triangle Prior
#
# So far we've been using a uniform prior:
# + tags=[]
uniform = Pmf(1, hypos, name='uniform')
uniform.normalize()
# -
# But that might not be a reasonable choice based on what we know about coins.
# I can believe that if a coin is lopsided, `x` might deviate substantially from 0.5, but it seems unlikely that the Belgian Euro coin is so imbalanced that `x` is 0.1 or 0.9.
#
# It might be more reasonable to choose a prior that gives
# higher probability to values of `x` near 0.5 and lower probability
# to extreme values.
#
# As an example, let's try a triangle-shaped prior.
# Here's the code that constructs it:
# +
ramp_up = np.arange(50)
ramp_down = np.arange(50, -1, -1)
a = np.append(ramp_up, ramp_down)
triangle = Pmf(a, hypos, name='triangle')
triangle.normalize()
# -
# `arange` returns a NumPy array, so we can use `np.append` to append `ramp_down` to the end of `ramp_up`.
# Then we use `a` and `hypos` to make a `Pmf`.
#
# The following figure shows the result, along with the uniform prior.
# + tags=[]
uniform.plot()
triangle.plot()
decorate_euro(title='Uniform and triangle prior distributions')
# -
# Now we can update both priors with the same data:
#
update_euro(uniform, dataset)
update_euro(triangle, dataset)
# Here are the posteriors.
# + tags=[]
uniform.plot()
triangle.plot()
decorate_euro(title='Posterior distributions')
# -
# The differences between the posterior distributions are barely visible, and so small they would hardly matter in practice.
#
# And that's good news.
# To see why, imagine two people who disagree angrily about which prior is better, uniform or triangle.
# Each of them has reasons for their preference, but neither of them can persuade the other to change their mind.
#
# But suppose they agree to use the data to update their beliefs.
# When they compare their posterior distributions, they find that there is almost nothing left to argue about.
#
# This is an example of **swamping the priors**: with enough
# data, people who start with different priors will tend to
# converge on the same posterior distribution.
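The effect can be checked numerically. The following is a self-contained sketch (plain Python dictionaries rather than `empiricaldist`, with the triangle prior defined so it is zero at the endpoints and peaks at 0.5, like the one above):

```python
from math import comb

def grid_posterior(prior, k, n):
    """Grid posterior for binomial data; prior maps quantity x -> weight."""
    post = {x: w * comb(n, k) * x**k * (1 - x)**(n - k) for x, w in prior.items()}
    total = sum(post.values())
    return {x: p / total for x, p in post.items()}

xs = [i / 100 for i in range(101)]
uniform_prior = {x: 1.0 for x in xs}
triangle_prior = {x: 0.5 - abs(x - 0.5) for x in xs}  # peak at x = 0.5

post_u = grid_posterior(uniform_prior, 140, 250)
post_t = grid_posterior(triangle_prior, 140, 250)

# largest pointwise difference between the two posteriors
print(max(abs(post_u[x] - post_t[x]) for x in xs))
```

The maximum difference is a small fraction of the posterior's peak probability, which is why the two curves are nearly indistinguishable in the figure.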
#
# ## The Binomial Likelihood Function
#
# So far we've been computing the updates one spin at a time, so for the Euro problem we have to do 250 updates.
#
# A more efficient alternative is to compute the likelihood of the entire dataset at once.
# For each hypothetical value of `x`, we have to compute the probability of getting 140 heads out of 250 spins.
#
# Well, we know how to do that; this is the question the binomial distribution answers.
# If the probability of heads is $p$, the probability of $k$ heads in $n$ spins is:
#
# $$\binom{n}{k} p^k (1-p)^{n-k}$$
#
# And we can use SciPy to compute it.
# The following function takes a `Pmf` that represents a prior distribution and a tuple of integers that represent the data:
# +
from scipy.stats import binom
def update_binomial(pmf, data):
    """Update pmf using the binomial distribution."""
    k, n = data
    xs = pmf.qs
    likelihood = binom.pmf(k, n, xs)
    pmf *= likelihood
    pmf.normalize()
# -
# The data are represented with a tuple of values for `k` and `n`, rather than a long string of outcomes.
# Here's the update.
uniform2 = Pmf(1, hypos, name='uniform2')
data = 140, 250
update_binomial(uniform2, data)
# + [markdown] tags=[]
# And here's what the posterior looks like.
# + tags=[]
uniform.plot()
uniform2.plot()
decorate_euro(title='Posterior distributions computed two ways')
# -
# We can use `allclose` to confirm that the result is the same as in the previous section except for a small floating-point round-off.
np.allclose(uniform, uniform2)
# But this way of doing the computation is much more efficient.
# ## Bayesian Statistics
#
# You might have noticed similarities between the Euro problem and the 101 Bowls Problem in <<_101Bowls>>.
# The prior distributions are the same, the likelihoods are the same, and with the same data the results would be the same.
# But there are two differences.
#
# The first is the choice of the prior.
# With 101 bowls, the uniform prior is implied by the statement of the problem, which says that we choose one of the bowls at random with equal probability.
#
# In the Euro problem, the choice of the prior is subjective; that is, reasonable people could disagree, maybe because they have different information about coins or because they interpret the same information differently.
#
# Because the priors are subjective, the posteriors are subjective, too.
# And some people find that problematic.
# The other difference is the nature of what we are estimating.
# In the 101 Bowls problem, we choose the bowl randomly, so it is uncontroversial to compute the probability of choosing each bowl.
# In the Euro problem, the proportion of heads is a physical property of a given coin.
# Under some interpretations of probability, that's a problem because physical properties are not considered random.
#
# As an example, consider the age of the universe.
# Currently, our best estimate is 13.80 billion years, but it might be off by 0.02 billion years in either direction (see [here](https://en.wikipedia.org/wiki/Age_of_the_universe)).
#
# Now suppose we would like to know the probability that the age of the universe is actually greater than 13.81 billion years.
# Under some interpretations of probability, we would not be able to answer that question.
# We would be required to say something like, "The age of the universe is not a random quantity, so it has no probability of exceeding a particular value."
#
# Under the Bayesian interpretation of probability, it is meaningful and useful to treat physical quantities as if they were random and compute probabilities about them.
#
# In the Euro problem, the prior distribution represents what we believe about coins in general and the posterior distribution represents what we believe about a particular coin after seeing the data.
# So we can use the posterior distribution to compute probabilities about the coin and its proportion of heads.
# The subjectivity of the prior and the interpretation of the posterior are key differences between using Bayes's Theorem and doing Bayesian statistics.
#
# Bayes's Theorem is a mathematical law of probability; no reasonable person objects to it.
# But Bayesian statistics is surprisingly controversial.
# Historically, many people have been bothered by its subjectivity and its use of probability for things that are not random.
#
# If you are interested in this history, I recommend <NAME>'s book, *[The Theory That Would Not Die](https://yalebooks.yale.edu/book/9780300188226/theory-would-not-die)*.
# ## Summary
#
# In this chapter I posed David MacKay's Euro problem and we started to solve it.
# Given the data, we computed the posterior distribution for `x`, the probability a Euro coin comes up heads.
#
# We tried two different priors, updated them with the same data, and found that the posteriors were nearly the same.
# This is good news, because it suggests that if two people start with different beliefs and see the same data, their beliefs tend to converge.
#
# This chapter introduces the binomial distribution, which we used to compute the posterior distribution more efficiently.
# And I discussed the differences between applying Bayes's Theorem, as in the 101 Bowls problem, and doing Bayesian statistics, as in the Euro problem.
#
# However, we still haven't answered MacKay's question: "Do these data give evidence that the coin is biased rather than fair?"
# I'm going to leave this question hanging a little longer; we'll come back to it in <<_Testing>>.
#
# In the next chapter, we'll solve problems related to counting, including trains, tanks, and rabbits.
#
# But first you might want to work on these exercises.
# ## Exercises
#
# **Exercise:** In Major League Baseball, most players have a batting average between .200 and .330, which means that their probability of getting a hit is between 0.2 and 0.33.
#
# Suppose a player appearing in their first game gets 3 hits out of 3 attempts. What is the posterior distribution for their probability of getting a hit?
# + [markdown] tags=[]
# For this exercise, I'll construct the prior distribution by starting with a uniform distribution and updating it with imaginary data until it has a shape that reflects my background knowledge of batting averages.
#
# Here's the uniform prior:
# + tags=[]
hypos = np.linspace(0.1, 0.4, 101)
prior = Pmf(1, hypos)
# + [markdown] tags=[]
# And here is a dictionary of likelihoods, with `Y` for getting a hit and `N` for not getting a hit.
# + tags=[]
likelihood = {
'Y': hypos,
'N': 1-hypos
}
# + [markdown] tags=[]
# Here's a dataset that yields a reasonable prior distribution.
# + tags=[]
dataset = 'Y' * 25 + 'N' * 75
# + [markdown] tags=[]
# And here's the update with the imaginary data.
# + tags=[]
for data in dataset:
    prior *= likelihood[data]
prior.normalize()
# + [markdown] tags=[]
# Finally, here's what the prior looks like.
# + tags=[]
prior.plot(label='prior')
decorate(xlabel='Probability of getting a hit',
ylabel='PMF')
# + [markdown] tags=[]
# This distribution indicates that most players have a batting average near .250, with only a few players below .175 or above .350. I'm not sure how accurately this prior reflects the distribution of batting averages in Major League Baseball, but it is good enough for this exercise.
#
# Now update this distribution with the data and plot the posterior. What is the most likely quantity in the posterior distribution?
# +
# Solution goes here
# +
# Solution goes here
# +
# Solution goes here
# +
# Solution goes here
# -
# **Exercise:** Whenever you survey people about sensitive issues, you have to deal with [social desirability bias](https://en.wikipedia.org/wiki/Social_desirability_bias), which is the tendency of people to adjust their answers to show themselves in the most positive light.
# One way to improve the accuracy of the results is [randomized response](https://en.wikipedia.org/wiki/Randomized_response).
#
# As an example, suppose you want to know how many people cheat on their taxes.
# If you ask them directly, it is likely that some of the cheaters will lie.
# You can get a more accurate estimate if you ask them indirectly, like this: Ask each person to flip a coin and, without revealing the outcome,
#
# * If they get heads, they report YES.
#
# * If they get tails, they honestly answer the question "Do you cheat on your taxes?"
#
# If someone says YES, we don't know whether they actually cheat on their taxes; they might have flipped heads.
# Knowing this, people might be more willing to answer honestly.
#
# Suppose you survey 100 people this way and get 80 YESes and 20 NOs. Based on this data, what is the posterior distribution for the fraction of people who cheat on their taxes? What is the most likely quantity in the posterior distribution?
# +
# Solution goes here
# +
# Solution goes here
# +
# Solution goes here
# +
# Solution goes here
# +
# Solution goes here
# -
# **Exercise:** Suppose you want to test whether a coin is fair, but you don't want to spin it hundreds of times.
# So you make a machine that spins the coin automatically and uses computer vision to determine the outcome.
#
# However, you discover that the machine is not always accurate. Specifically, suppose the probability is `y=0.2` that an actual heads is reported as tails, or actual tails reported as heads.
#
# If we spin a coin 250 times and the machine reports 140 heads, what is the posterior distribution of `x`?
# What happens as you vary the value of `y`?
# +
# Solution goes here
# +
# Solution goes here
# +
# Solution goes here
# +
# Solution goes here
# -
# **Exercise:** In preparation for an alien invasion, the Earth Defense League (EDL) has been working on new missiles to shoot down space invaders. Of course, some missile designs are better than others; let's assume that each design has some probability of hitting an alien ship, `x`.
#
# Based on previous tests, the distribution of `x` in the population of designs is approximately uniform between 0.1 and 0.4.
#
# Now suppose the new ultra-secret Alien Blaster 9000 is being tested. In a press conference, an EDL general reports that the new design has been tested twice, taking two shots during each test. The results of the test are confidential, so the general won't say how many targets were hit, but they report: "The same number of targets were hit in the two tests, so we have reason to think this new design is consistent."
#
# Is this data good or bad?
# That is, does it increase or decrease your estimate of `x` for the Alien Blaster 9000?
# + [markdown] tags=[]
# Hint: If the probability of hitting each target is $x$, the probability of hitting one target in both tests
# is $\left[2x(1-x)\right]^2$.
# +
# Solution goes here
# +
# Solution goes here
# +
# Solution goes here
# +
# Solution goes here
# +
# Solution goes here
# +
# Solution goes here
# +
# Solution goes here
# -
| notebooks/chap04.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Machine Learning for Humanists Day 2: Case Study
# ## Identifying Jim Crow Laws*
# ## Getting Started
#
#
# ----
# ## Importing Libraries
#
# +
# Library for Importing CSV file into a Dataframe
import pandas as pd
# Library to split the data into train and test sets
from sklearn.model_selection import train_test_split
# Libraries to process section_text Tokenize=find words, Stopwords=remove stopwords, Regular Expression=remove non-word characters, Lemmatize text
import re
import nltk
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize
from nltk.stem import WordNetLemmatizer
# A transformer, LengthExtractor, to extract the length of each sentence in the section_text
from sklearn.base import BaseEstimator, TransformerMixin
# Machine Learning Libraries
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
#Model Tuning Libraries
from sklearn.model_selection import GridSearchCV
#Evaluation Libraries
import numpy as np
import seaborn as sns
from sklearn.metrics import confusion_matrix
from sklearn.metrics import classification_report
# Allows the use of display() for DataFrames
from IPython.display import display
# Pretty display for notebooks
# %matplotlib inline
#Ignore warnings = clean notebook
import warnings
warnings.filterwarnings('ignore')
# -
nltk.download('stopwords')
nltk.download('punkt')
nltk.download('wordnet')  # needed by WordNetLemmatizer below
# ----
# ## Exploring the Data
#
dataframe = pd.read_csv('training_set_clean.csv')
dataframe.head()
# Before the file was imported, we performed simple preprocessing on the text (outlined in the code below):
# * Replaced hyphenated and line-broken words with unbroken words.
# * Removed section numbering from the law text ("section_text").
# * Used the session or volume identifier ("csv") information to extract a numeric year. In the case of multi-year volumes (e.g. 1956-1957), the earlier year was used.
# +
#Fix hyphenated words
#data["chapter_text"] = data.text.str.replace(r"-[ \|]+(?P<letter>[a-zA-Z])",repl).astype("str")
#data["section_text"] = data.section_text.str.replace(r"-[ \|]+(?P<letter>[a-zA-Z])",repl).astype("str")
#data["section_text"] = [re.sub(r'- *\n+(\w+ *)', r'\1\n',r) for r in data["section_text"]]
#Remove section titles (e.g. "Sec. 1") from law text.
#data["start"] = data.section_raw.str.len().fillna(0).astype("int")
#data["section_text"] = data.apply(lambda x: x['section_text'][(x["start"]):], axis=1).str.strip()
# -
dataframe.info()
# ### Data Exploration
#
dataframe['James_Assessment'] = dataframe['James_Assessment'].astype(np.int64)
dataframe['James_Assessment'].value_counts()
# +
# Total number of records
n_records = len(dataframe.index)
# jim crow laws
jim_crow_laws = len(dataframe[dataframe.James_Assessment == 1])
# non-jim crow laws
regular_laws = len(dataframe[dataframe.James_Assessment == 0])
# Percent of Jim Crow Laws
jimcrow_percent = (jim_crow_laws / float(n_records)) * 100
# Print the results
print("Total number of records: {}".format(n_records))
print("Jim Crow Laws: {}".format(jim_crow_laws))
print("Non-Jim Crow Laws: {}".format(regular_laws))
print("Percentage of Jim Crow Laws: {}%".format(jimcrow_percent))
# -
# ----
# ## Preparing the Data
# Before data can be used as input for machine learning algorithms, it often must be cleaned, formatted, and restructured; this is typically known as **preprocessing**. Fortunately, for this dataset, there are no invalid or missing entries we must deal with; however, there are some qualities about certain features that must be adjusted. This preprocessing can help tremendously with the outcome and predictive power of nearly all learning algorithms.
# Split the data into features and target label
features = dataframe['section_text']
target = dataframe['James_Assessment']
# ### Shuffle and Split Data
# Now all _categorical variables_ have been converted into numerical features, and all numerical features have been normalized. As always, we will now split the data (both features and their labels) into training and test sets. 80% of the data will be used for training and 20% for testing.
#
# Run the code cell below to perform this split.
# +
# split the data into train and test sets
X_train, X_test, y_train, y_test = train_test_split(features, target, test_size=0.2, random_state=42)
# Show the results of the split
print("Training set has {} samples.".format(X_train.shape[0]))
print("Testing set has {} samples.".format(X_test.shape[0]))
# -
# ### Data Preprocessing
#
#
#Text Processing
# extract the english stopwords and save it to a variable
stopword = stopwords.words('english')
# define regular expression to identify non-ascii characters in text
non_ascii_regex = r'[^\x00-\x7F]+'
def tokenize(text):
    # use re to replace non-ASCII characters with a space
    text = re.sub(non_ascii_regex, ' ', text)
    # use word_tokenize to split the text into word tokens
    tokens = word_tokenize(text)
    # instantiate an object of class WordNetLemmatizer
    lemmatizer = WordNetLemmatizer()
    # use a list comprehension to lemmatize the tokens and remove the stopwords
    clean_tokens = [lemmatizer.lemmatize(word) for word in tokens if word not in stopword]
    # return the clean tokens
    return clean_tokens
# ### Transformer of text = turning text into numbers
# A transformer, LengthExtractor, to extract the length of each sentence in the section_text and make that a feature
class LengthExtractor(BaseEstimator, TransformerMixin):
    def compute_length(self, text):
        sentence_list = word_tokenize(text)
        return len(sentence_list)

    def fit(self, x, y=None):
        return self

    def transform(self, X):
        X_length = pd.Series(X).apply(self.compute_length)
        return pd.DataFrame(X_length)
from sklearn.feature_extraction.text import CountVectorizer
# list of text documents
text = ["The quick brown fox jumped over the lazy dog."]
# create the transform
vectorizer = CountVectorizer()
# tokenize and build vocab
vectorizer.fit(text)
# summarize
print(vectorizer.vocabulary_)
# encode document
vector = vectorizer.transform(text)
# summarize encoded vector
print(vector.shape)
print(type(vector))
print(vector.toarray())
# We can see that there are 8 words in the vocabulary, and therefore the encoded vectors have a length of 8.
#
# We can also see that the encoded vector is a sparse matrix. Finally, the array version of the encoded vector shows a count of 1 for each word except "the" (index 7), which occurs twice.
from sklearn.feature_extraction.text import TfidfVectorizer
# list of text documents
text = ["The quick brown fox jumped over the lazy dog.", "The dog.", "The fox"]
# create the transform
vectorizer = TfidfVectorizer()
# tokenize and build vocab
vectorizer.fit(text)
# summarize
print(vectorizer.vocabulary_)
print(vectorizer.idf_)
# encode document
vector = vectorizer.transform([text[0]])
# summarize encoded vector
print(vector.shape)
print(vector.toarray())
# A vocabulary of 8 words is learned from the documents and each word is assigned a unique integer index in the output vector.
#
# The inverse document frequencies are calculated for each word in the vocabulary, assigning the lowest score of 1.0 to the most frequently observed word: "the" at index 7.
#
# Finally, the first document is encoded as an 8-element sparse array
# ----
# ## Training our Model
# ### Creating a Training and Predicting Pipeline
# create an instance of Pipeline class
pipeline = Pipeline([
    # create a FeatureUnion pipeline
    ('features', FeatureUnion([
        # add a pipeline element to extract features using CountVectorizer and TfidfTransformer
        ('text_pipeline', Pipeline([
            ('vect', CountVectorizer(decode_error="ignore",
                                     min_df=2, max_df=1000)),
            ('tfidf', TfidfTransformer()),
        ])),
        # add the pipeline element LengthExtractor to extract the length of each text as a feature
        ('text_len', LengthExtractor()),
    ])),
    # use the predictor estimator RandomForestClassifier to train the model
    ('clf', RandomForestClassifier())
])
# Random forest, like its name implies, consists of a large number of individual decision trees that operate as an ensemble. Each individual tree in the random forest spits out a class prediction and the class with the most votes becomes our model's prediction
#
# https://towardsdatascience.com/understanding-random-forest-58381e0602d2
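The "most votes" idea can be sketched in a few lines. This is an illustrative toy, not the code `RandomForestClassifier` runs internally:

```python
from collections import Counter

def forest_predict(tree_predictions):
    """Majority vote over the class predictions of individual trees."""
    return Counter(tree_predictions).most_common(1)[0][0]

# three hypothetical trees classify one law: two say Jim Crow (1), one says not (0)
print(forest_predict([1, 0, 1]))  # -> 1
```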
#Run the Model
pipeline.fit(X_train, y_train)
# ### Supervised Learning Models
# **The following are some of the supervised learning models that are currently available in** [`scikit-learn`](http://scikit-learn.org/stable/supervised_learning.html) **that you may choose from:**
# - Gaussian Naive Bayes (GaussianNB)
# - Decision Trees
# - Ensemble Methods (Bagging, AdaBoost, Random Forest, Gradient Boosting)
# - K-Nearest Neighbors (KNeighbors)
# - Stochastic Gradient Descent Classifier (SGDC)
# - Support Vector Machines (SVM)
# - Logistic Regression
# ### Naive Predictor Performace
# * If we chose a model that always predicted that a law was Jim Crow, what would that model's accuracy and F-score be on this dataset?
#
# ** Please note ** that the purpose of generating a naive predictor is simply to show what a base model without any intelligence would look like. In the real world, ideally your base model would be either the results of a previous model or could be based on a research paper upon which you are looking to improve. When there is no benchmark model set, getting a result better than random choice is a place you could start from.
#
# ** NOTE: **
#
# * When we have a model that always predicts '1' (i.e. every law is a Jim Crow law), it produces no True Negatives (TN) or False Negatives (FN), because it never makes a negative ('0') prediction. Our Accuracy therefore becomes the same as our Precision (True Positives/(True Positives + False Positives)), since every '1' prediction that should have been '0' becomes a False Positive; the denominator is the total number of records.
# * Our Recall score(True Positives/(True Positives + False Negatives)) in this setting becomes 1 as we have no False Negatives.
# +
#Calculate Accuracy, Recall, Precision
accuracy = (np.sum(target)) / ((np.sum(target)) + float(((target.count()) - np.sum(target))))
recall = np.sum(target) / float((np.sum(target) + 0))
precision = np.sum(target) / float(((np.sum(target) + ((target.count()) - np.sum(target)))))
#Calculate F-score for beta = 0.5 using the precision and recall values above.
beta = 0.5
fscore = (1+ beta**2) * (precision * recall) / ((beta ** 2 * precision) + recall)
#Print the results
print("Naive Predictor: [Accuracy score: {:.4f}, F-score: {:.4f}]".format(accuracy, fscore))
# -
# ### Initial Model Evaluation
# #### What is accuracy, precision, recall?
#
# ** Accuracy ** measures how often the classifier makes the correct prediction. It's the ratio of the number of correct predictions to the total number of predictions (the number of test data points).
#
# ** Precision ** tells us what proportion of laws we classified as Jim Crow actually were Jim Crow.
# It is a ratio of true positives (laws classified as Jim Crow that actually are Jim Crow) to all positives (all laws classified as Jim Crow, irrespective of whether that was the correct classification), in other words it is the ratio of
#
# `[True Positives/(True Positives + False Positives)]`
#
# ** Recall(sensitivity)** tells us what proportion of laws that actually were Jim Crow were classified by us as Jim Crow.
# It is a ratio of true positives(laws classified as Jim Crow, and which are actually Jim Crow) to all the laws that were actually Jim Crow, in other words it is the ratio of
#
# `[True Positives/(True Positives + False Negatives)]`
#
# These two metrics can be combined to give the F1 score, which is the weighted average (harmonic mean) of the precision and recall scores. This score can range from 0 to 1, with 1 being the best possible F1 score (we take the harmonic mean because we are dealing with ratios). We can use the **F-beta score** as a metric that considers both precision and recall:
#
#
# $$ F_{\beta} = (1 + \beta^2) \cdot \frac{precision \cdot recall}{\left( \beta^2 \cdot precision \right) + recall} $$
#
# In particular, when $\beta = 0.5$, more emphasis is placed on precision. This is called the **F$_{0.5}$ score** (or F-score for simplicity).
#
# Another Resource for understanding this report: https://medium.com/@kohlishivam5522/understanding-a-classification-report-for-your-machine-learning-model-88815e2ce397
#
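# As a sanity check, the F-beta formula above can be evaluated by hand. A minimal sketch on made-up labels (not the project's data), for the naive "always positive" predictor:

```python
import numpy as np

y_true = np.array([1, 1, 0, 1, 0, 1])   # illustrative labels only
y_pred = np.ones_like(y_true)           # naive predictor: always '1'

tp = np.sum((y_pred == 1) & (y_true == 1))
fp = np.sum((y_pred == 1) & (y_true == 0))
fn = np.sum((y_pred == 0) & (y_true == 1))

precision = tp / float(tp + fp)          # 4/6
recall = tp / float(tp + fn)             # 4/4 = 1.0, no false negatives
beta = 0.5
fscore = (1 + beta**2) * precision * recall / ((beta**2 * precision) + recall)
print(round(fscore, 4))                  # 0.7143
```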
#Make Predictions on the Test Data
y_pred = pipeline.predict(X_test)
# +
# count the number of labels
labels = np.unique(y_pred)
data = confusion_matrix(y_test, y_pred)
df_cm = pd.DataFrame(data, columns=np.unique(y_test), index = np.unique(y_test))
df_cm.index.name = 'Actual'
df_cm.columns.name = 'Predicted'
# use sns.heatmap on top of confusion_matrix to show the confusion matrix
ax = sns.heatmap(df_cm,xticklabels=True, annot=True, fmt='.0f')
ax.set(title="Overall")
# -
# True Negative = 240
# False Positive = 12
# False Negative = 16
# True Positive = 97
print(classification_report(y_test, y_pred))
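# The four confusion-matrix cells quoted above can be recovered directly from the raw label arrays; a minimal sketch on made-up labels (for binary labels, sklearn's `confusion_matrix(y_true, y_pred).ravel()` returns these same four counts in (tn, fp, fn, tp) order):

```python
import numpy as np

y_true = np.array([0, 0, 1, 1, 1, 0])
y_pred = np.array([0, 1, 1, 0, 1, 0])

tn = np.sum((y_true == 0) & (y_pred == 0))
fp = np.sum((y_true == 0) & (y_pred == 1))
fn = np.sum((y_true == 1) & (y_pred == 0))
tp = np.sum((y_true == 1) & (y_pred == 1))
print(tn, fp, fn, tp)   # 2 1 1 2
```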
| intro_to_ml_day2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Seja um investidor na bolsa de valores com Python
#
# **<NAME>**
#
# > In this talk we will discover ready-made resources and tools to help traders formulate strategies on the stock exchange. But what does Python have that is so good and that I don't know about? Several tools for financial mathematics and for risk analysis on the stock exchange, tools that generate time series, and several statistics libraries. This talk is worth watching to understand the trading universe and what Python has to offer investors.
#
# **The database used was from Yahoo Finance**
#
# [Yahoo Finance site](https://br.financas.yahoo.com/industries/Energia-Petroleo-Gas)
#
#
# ## Main Libraries
#
# 1. **Pandas** - Library for tabular data manipulation and analysis
# 2. **Pandas_Datareader** - Reads a dataset from an external server and stores the data in a DataFrame object
# 3. **Numpy** - Library for numerical and scientific computing
# 4. **Matplotlib** - Library for visualizing the data in charts
# 5. **DateTime** - Library for date formatting
#
#
# +
import pandas as pd
from pandas_datareader import data as dt #Fetch quote data from Yahoo Finance
import numpy as np
import matplotlib.pyplot as plt
import datetime
# %matplotlib inline
# -
#Formatting and defining the dates for a given time period
dataInicio = datetime.datetime(2019,1,1)
dataFim = datetime.datetime(2020,1,1)
# ### Accessing the database
#
# The data is organized in a DataFrame object, and below we access the stock data of the multinational **'TSLA'-Tesla**
# +
yahooData = dt.get_data_yahoo('TSLA', dataInicio, dataFim)
# -
# ### Table fields
#
# 1. **High** - The highest price of the day
# 2. **Low** - The lowest price of the day
# 3. **Open** - The first price of the day, when the market opened
# 4. **Close** - The last price of the day, when the market closed
# 5. **Adj Close** - The adjusted closing price of the day
yahooData
yahooData.describe()
# +
#Simple rate of return (%)
yahooData['Retorno'] = (yahooData['Adj Close'] / yahooData['Adj Close'].shift(1) - 1) *100
#Best single-day return
yahooData['Retorno'].max()
# -
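# The shift-based formula above is equivalent to pandas' built-in `pct_change()`; a quick self-contained check on made-up prices:

```python
import numpy as np
import pandas as pd

prices = pd.Series([10.0, 10.5, 10.29, 11.0])    # illustrative prices only
ret_shift = (prices / prices.shift(1) - 1) * 100
ret_pct = prices.pct_change() * 100
assert np.allclose(ret_shift[1:], ret_pct[1:])
print(ret_shift.round(2).tolist()[1:])           # [5.0, -2.0, 6.9]
```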
yahooData[yahooData['Retorno'] == yahooData['Retorno'].max()]
yahooData['Retorno'].min()
yahooData[yahooData['Retorno'] == yahooData['Retorno'].min()]
#Annual rate of return (average daily return x ~250 trading days)
yahooData['Retorno'].mean() * 250
yahooData['Retorno'].plot(figsize=(16,8))
yahooData['Adj Close'].plot()
yahooData['Variacao'] = yahooData['High'] - yahooData['Low']
yahooData
# +
yahooData.head()
# +
yahooData.tail()
# -
yahooData['Close'].plot(figsize=(15,8),label='TSLA')
yahooData['Close'].rolling(25).mean().plot(label='MM25')
yahooData['Close'].rolling(60).mean().plot(label='MM60')#Moving average
plt.legend()
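# `rolling(n).mean()` averages the previous n observations and yields NaN until the window is full; a tiny self-contained sketch:

```python
import pandas as pd

s = pd.Series([1, 2, 3, 4, 5])
mm3 = s.rolling(3).mean()
print(mm3.tolist()[2:])   # [2.0, 3.0, 4.0]; the first two entries are NaN
```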
# +
dadosTeslaJulho = yahooData[yahooData.index.month == 7 ]
dadosTeslaJulho
# -
# ### Important Terms
#
# 1. Fixed income
#     - a form of investment in which the asset values are already stipulated
#     - lower risk
# 2. Variable income
#     - values fluctuate according to external variables (pandemic, politics, the global economy)
#     - higher risk
# 3. Rate of return on investment
#     - Expected return - how much will be earned on the money invested after a certain time; for variable income, however, this return is a high-risk estimate, since you may end up earning more or less money.
#     - Return on a realized investment - the return on a specific investment that was made
#
#
#
# +
#Extracting, reading and manipulating data from more than one company simultaneously
organizacoesTI = ['AMZN', 'TSLA', 'AAPL','GOOGL','MSFT']
#extracting only the adjusted close data
dadosYahooEmpresas = dt.get_data_yahoo(organizacoesTI, dataInicio, dataFim)['Adj Close']
# -
dadosYahooEmpresas
dadosYahooEmpresas.plot(figsize=(16,8))
(dadosYahooEmpresas/ dadosYahooEmpresas.iloc[0]).plot(figsize=(16,8)) # rebase each series to its first value
# Rebasing puts all the series on the same scale
# Normalizing like this makes the comparison between companies easier
rentabilidadeAcoes = (dadosYahooEmpresas.pct_change()*100).mean() *250 #annualized return of each stock
rentabilidadeAcoes
pesoAcoes = np.array([0.25,0.10,0.05,0.30,0.30])
pesoAcoes
rentabilidade=np.dot(rentabilidadeAcoes,pesoAcoes)
rentabilidade #portfolio return for this period
dadosYahooEmpresas.info()#check that all assets have the same number of data points
retornoDadosEmpresas = dadosYahooEmpresas.pct_change() * 100
retornoDadosEmpresas
risco = (np.dot(pesoAcoes.T, np.dot(retornoDadosEmpresas.cov()*250, pesoAcoes)))**.5
risco
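# The risk expression above is the standard portfolio-volatility formula, the square root of w.T @ Cov @ w; a self-contained sketch with a made-up 2-asset annualized covariance matrix and weights:

```python
import numpy as np

cov = np.array([[0.04, 0.01],
                [0.01, 0.09]])   # assumed annualized covariance matrix
w = np.array([0.6, 0.4])         # assumed portfolio weights

port_vol = np.dot(w.T, np.dot(cov, w)) ** 0.5
print(round(port_vol, 4))        # 0.1833
```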
# ## Outras Bibliotecas
#
# 1. Statistics
# 2. Numpy
# 3. Pandas
# 4. Matplotlib
# 5. Scipy
# 6. Math
#
# ## Plus Plus
dadosYahooEmpresas.describe() #General information about the DataFrame
type(dadosYahooEmpresas)#checking the data type
dadosYahooEmpresas.info()#Details of the data inside the base
dadosYahooEmpresas.to_csv(index=False)#serializes the DataFrame as CSV (returns a string when no path is given)
dadosYahooEmpresas['AMZN'].describe(include='all') #Evaluating one column of the DataFrame in isolation
# ## References
#
# 1. https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.html
# 2. https://www.udemy.com/course/python-para-investimentos-na-bolsa-de-valores
# 3. https://www.youtube.com/watch?v=UoB8w_RDXfM&t=1015s
# 4. https://www.youtube.com/channel/UCzCrdOO2GLYVnNhZUvG03lg
# # ;-D
#
# ## Thanks !!
| tradingPython.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import matplotlib.pyplot as plt
# +
import cv2
import time
# %matplotlib inline
# +
#load test image
test1 = cv2.imread('cute-baby.jpg')
#convert the test image to gray image as opencv face detector expects gray images
gray_img = cv2.cvtColor(test1, cv2.COLOR_BGR2GRAY)
# +
#if you have matplotlib installed then
plt.imshow(gray_img, cmap='gray')
# -
#load cascade classifier training file for haarcascade
haar_face_cascade = cv2.CascadeClassifier('D:\\opencv\\build\\etc\\haarcascades\\haarcascade_frontalface_default.xml')
# +
#detect faces at multiple scales (some faces may be closer to the camera than others)
faces = haar_face_cascade.detectMultiScale(gray_img, scaleFactor=1.1, minNeighbors=5);
#print the number of faces found
print('Faces found: ', len(faces))
# -
#go over the list of faces and draw them as rectangles on the original colored image
for (x, y, w, h) in faces:
cv2.rectangle(test1, (x, y), (x+w, y+h), (0, 255, 0), 2)
def convertToRGB(img):
return cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
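# OpenCV loads images with the channels in BGR order, so the conversion above is just a reversal of the last axis; a cv2-free sketch of the same operation on a tiny synthetic image:

```python
import numpy as np

bgr = np.zeros((2, 2, 3), dtype=np.uint8)
bgr[..., 0] = 255            # fill the blue channel (first in BGR order)
rgb = bgr[..., ::-1]         # reverse the channel axis, same effect as COLOR_BGR2RGB
print(rgb[0, 0].tolist())    # [0, 0, 255]: blue is now the last channel
```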
plt.imshow(convertToRGB(test1))
def detect_faces(f_cascade, colored_img, scaleFactor = 1.1):
#just making a copy of image passed, so that passed image is not changed
img_copy = colored_img.copy()
#convert the test image to gray image as opencv face detector expects gray images
gray = cv2.cvtColor(img_copy, cv2.COLOR_BGR2GRAY)
    #detect faces at multiple scales (some faces may be closer to the camera than others)
faces = f_cascade.detectMultiScale(gray, scaleFactor=scaleFactor, minNeighbors=5);
#go over list of faces and draw them as rectangles on original colored img
for (x, y, w, h) in faces:
cv2.rectangle(img_copy, (x, y), (x+w, y+h), (0, 255, 0), 2)
return img_copy
test2 = cv2.imread('2.jpg')
faces_detected_img = detect_faces(haar_face_cascade, test2,scaleFactor=1.1)
plt.imshow(convertToRGB(faces_detected_img))
| Face detection.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # Sentdex
#
# In this notebook, we'll take a look at Sentdex's *Sentiment* dataset, available through the [Quantopian partner program](https://www.quantopian.com/data). This dataset spans 2012 through the current day, and documents the mood of traders based on their messages.
#
# ## Notebook Contents
#
# There are two ways to access the data and you'll find both of them listed below. Just click on the section you'd like to read through.
#
# - <a href='#interactive'><strong>Interactive overview</strong></a>: This is only available on Research and uses blaze to give you access to large amounts of data. Recommended for exploration and plotting.
# - <a href='#pipeline'><strong>Pipeline overview</strong></a>: Data is made available through pipeline which is available on both the Research & Backtesting environment. Recommended for custom factor development and moving back & forth between research/backtesting.
#
# ### Free samples and limits
# One key caveat: we limit the number of results returned from any given expression to 10,000 to protect against runaway memory usage. To be clear, you have access to all the data server side. We are limiting the size of the responses back from Blaze.
#
# There is a *free* version of this dataset as well as a paid one. The free sample includes data until 2 months prior to the current date.
#
# To access the most up-to-date values for this data set for trading a live algorithm (as with other partner sets), you need to purchase access to the full set.
#
# With preamble in place, let's get started:
#
# <a id='interactive'></a>
# #Interactive Overview
# ### Accessing the data with Blaze and Interactive on Research
# Partner datasets are available on Quantopian Research through an API service known as [Blaze](http://blaze.pydata.org). Blaze provides the Quantopian user with a convenient interface to access very large datasets, in an interactive, generic manner.
#
# Blaze provides an important function for accessing these datasets. Some of these sets are many millions of records. Bringing that data directly into Quantopian Research just is not viable. So Blaze allows us to provide a simple querying interface and shift the burden over to the server side.
#
# It is common to use Blaze to reduce your dataset in size, convert it over to Pandas and then to use Pandas for further computation, manipulation and visualization.
#
# Helpful links:
# * [Query building for Blaze](http://blaze.readthedocs.io/en/latest/queries.html)
# * [Pandas-to-Blaze dictionary](http://blaze.readthedocs.io/en/latest/rosetta-pandas.html)
# * [SQL-to-Blaze dictionary](http://blaze.readthedocs.io/en/latest/rosetta-sql.html).
#
# Once you've limited the size of your Blaze object, you can convert it to a Pandas DataFrames using:
# > `from odo import odo`
# > `odo(expr, pandas.DataFrame)`
#
#
# ###To see how this data can be used in your algorithm, search for the `Pipeline Overview` section of this notebook or head straight to <a href='#pipeline'>Pipeline Overview</a>
# +
# import the free sample of the dataset
from quantopian.interactive.data.sentdex import sentiment_free as dataset
# or if you want to import the full dataset, use:
# from quantopian.interactive.data.sentdex import sentiment
# import data operations
from odo import odo
# import other libraries we will use
import pandas as pd
import matplotlib.pyplot as plt
# -
# Let's use blaze to understand the data a bit using Blaze dshape()
dataset.dshape
# And how many rows are there?
# N.B. we're using a Blaze function to do this, not len()
dataset.count()
# Let's see what the data looks like. We'll grab the first three rows.
dataset[:3]
# The Sentdex Sentiment data feed is elegant and simple. Just a few fields:
#
# Let's go over the columns:
# - **asof_date**: The date to which this data applies.
# - **symbol**: stock ticker symbol of the affected company.
# - **timestamp**: the datetime at which the data is available to the Quantopian system. For historical data loaded, we have simulated a lag. For data we have loaded since the advent of loading this data set, the timestamp is an actual recorded value.
# - **sentiment_signal**: A standalone sentiment score from -3 to 6 for stocks
# - **sid**: the equity's unique identifier. Use this instead of the symbol.
#
# From the [Sentdex documentation](http://sentdex.com/blog/back-testing-sentdex-sentiment-analysis-signals-for-stocks):
#
# ```
# The signals currently vary from -3 to a positive 6, where -3 is as equally strongly negative of sentiment as a 6 is strongly positive sentiment.
#
# Sentiment signals:
#
# 6 - Strongest positive sentiment.
# 5 - Extremely strong, positive, sentiment.
# 4 - Very strong, positive, sentiment.
# 3 - Strong, positive sentiment.
# 2 - Substantially positive sentiment.
# 1 - Barely positive sentiment.
# 0 - Neutral sentiment
# -1 - Sentiment trending into negatives.
# -2 - Weak negative sentiment.
# -3 - Strongest negative sentiment.
# ```
#
# We've done much of the data processing for you. Fields like `timestamp` and `sid` are standardized across all our Store Datasets, so the datasets are easy to combine. We have standardized the `sid` across all our equity databases.
#
# We can select columns and rows with ease. Below, we'll fetch all rows for Apple (sid 24) and explore the scores a bit with a chart.
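# Because `sid` and the date columns are standardized, joining this feed against another per-security frame is a plain pandas merge; a sketch on made-up rows (all values illustrative only):

```python
import pandas as pd

sentiment = pd.DataFrame({
    'sid': [24, 24, 1637],
    'asof_date': pd.to_datetime(['2013-11-01', '2013-11-04', '2013-11-01']),
    'sentiment_signal': [6, 2, -1],
})
prices = pd.DataFrame({
    'sid': [24, 24, 1637],
    'asof_date': pd.to_datetime(['2013-11-01', '2013-11-04', '2013-11-01']),
    'close': [52.1, 52.6, 46.3],
})
merged = sentiment.merge(prices, on=['sid', 'asof_date'])
print(len(merged))   # 3
```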
# Filtering for AAPL
aapl = dataset[dataset.sid == 24]
aapl_df = odo(aapl.sort('asof_date'), pd.DataFrame)
plt.plot(aapl_df.asof_date, aapl_df.sentiment_signal, marker='.', linestyle='None', color='r')
plt.plot(aapl_df.asof_date, pd.rolling_mean(aapl_df.sentiment_signal, 30))
plt.xlabel("As Of Date (asof_date)")
plt.ylabel("Sentiment")
plt.title("Sentdex Sentiment for AAPL")
plt.legend(["Sentiment - Single Day", "30 Day Rolling Average"], loc=1)
x1,x2,y1,y2 = plt.axis()
plt.axis((x1,x2,-4,7.5))
# Let's check out Comcast's sentiment for fun
comcast = dataset[dataset.sid == 1637]
comcast_df = odo(comcast.sort('asof_date'), pd.DataFrame)
plt.plot(comcast_df.asof_date, comcast_df.sentiment_signal, marker='.', linestyle='None', color='r')
plt.plot(comcast_df.asof_date, pd.rolling_mean(comcast_df.sentiment_signal, 30))
plt.xlabel("As Of Date (asof_date)")
plt.ylabel("Sentiment")
plt.title("Sentdex Sentiment for Comcast")
plt.legend(["Sentiment - Single Day", "30 Day Rolling Average"], loc=1)
x1,x2,y1,y2 = plt.axis()
plt.axis((x1,x2,-4,7.5))
# <a id='pipeline'></a>
#
# #Pipeline Overview
#
# ### Accessing the data in your algorithms & research
# The only method for accessing partner data within algorithms running on Quantopian is via the pipeline API. Different data sets work differently but in the case of this Sentdex data, you can add this data to your pipeline as follows:
#
# Import the data set
# > `from quantopian.pipeline.data.sentdex import sentiment`
#
# Then in initialize() you could do something simple like adding the raw value of one of the fields to your pipeline:
# > `pipe.add(sentiment.sentiment_signal.latest, 'sentdex_sentiment')`
# Import necessary Pipeline modules
from quantopian.pipeline import Pipeline
from quantopian.research import run_pipeline
from quantopian.pipeline.factors import AverageDollarVolume
# +
# For use in your algorithms
# Using the full paid dataset in your pipeline algo
# from quantopian.pipeline.data.sentdex import sentiment
# Using the free sample in your pipeline algo
from quantopian.pipeline.data.sentdex import sentiment_free
# -
# Now that we've imported the data, let's take a look at which fields are available for each dataset.
#
# You'll find the dataset, the available fields, and the datatypes for each of those fields.
# +
print "Here are the list of available fields per dataset:"
print "---------------------------------------------------\n"
def _print_fields(dataset):
print "Dataset: %s\n" % dataset.__name__
print "Fields:"
for field in list(dataset.columns):
print "%s - %s" % (field.name, field.dtype)
print "\n"
for data in (sentiment_free,):
_print_fields(data)
print "---------------------------------------------------\n"
# -
# Now that we know what fields we have access to, let's see what this data looks like when we run it through Pipeline.
#
#
# This is constructed the same way as you would in the backtester. For more information on using Pipeline in Research view this thread:
# https://www.quantopian.com/posts/pipeline-in-research-build-test-and-visualize-your-factors-and-filters
# +
# Let's see what this data looks like when we run it through Pipeline
# This is constructed the same way as you would in the backtester. For more information
# on using Pipeline in Research view this thread:
# https://www.quantopian.com/posts/pipeline-in-research-build-test-and-visualize-your-factors-and-filters
pipe = Pipeline()
pipe.add(sentiment_free.sentiment_signal.latest, 'sentiment_signal')
# +
# Setting some basic liquidity strings (just for good habit)
dollar_volume = AverageDollarVolume(window_length=20)
top_1000_most_liquid = dollar_volume.rank(ascending=False) < 1000
pipe.set_screen(top_1000_most_liquid & sentiment_free.sentiment_signal.latest.notnan())
# -
# The show_graph() method of pipeline objects produces a graph to show how it is being calculated.
pipe.show_graph(format='png')
# run_pipeline will show the output of your pipeline
pipe_output = run_pipeline(pipe, start_date='2013-11-01', end_date='2013-11-25')
pipe_output
# Taking what we've seen from above, let's see how we'd move that into the backtester.
# +
# This section is only importable in the backtester
from quantopian.algorithm import attach_pipeline, pipeline_output
# General pipeline imports
from quantopian.pipeline import Pipeline
from quantopian.pipeline.factors import AverageDollarVolume
# Import the datasets available
# For use in your algorithms
# Using the full paid dataset in your pipeline algo
# from quantopian.pipeline.data.sentdex import sentiment
# Using the free sample in your pipeline algo
from quantopian.pipeline.data.sentdex import sentiment_free
def make_pipeline():
# Create our pipeline
pipe = Pipeline()
# Screen out penny stocks and low liquidity securities.
dollar_volume = AverageDollarVolume(window_length=20)
is_liquid = dollar_volume.rank(ascending=False) < 1000
# Create the mask that we will use for our percentile methods.
base_universe = (is_liquid)
# Add pipeline factors
pipe.add(sentiment_free.sentiment_signal.latest, 'sentiment_signal')
# Set our pipeline screens
pipe.set_screen(is_liquid)
return pipe
def initialize(context):
attach_pipeline(make_pipeline(), "pipeline")
def before_trading_start(context, data):
results = pipeline_output('pipeline')
# -
# Now you can take that and begin to use it as a building block for your algorithms, for more examples on how to do that you can visit our <a href='https://www.quantopian.com/posts/pipeline-factor-library-for-data'>data pipeline factor library</a>
| docs/memo/notebooks/data/sentdex.sentiment/notebook.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import numpy as np
from blimpy import read_header, Waterfall, Filterbank
# %load_ext autoreload
# %autoreload 2
# %matplotlib inline
import matplotlib.pyplot as plt
import sys, os, glob
sys.path.insert(0, "../../")
import setigen as stg
# -
all_means = np.load('all_means.npy')
all_stds = np.load('all_stds.npy')
all_mins = np.load('all_mins.npy')
real_noise = np.load('real_noise_dists.npy')
real_noise.shape
plt.hist(all_means, bins=100)
plt.show()
np.save('all_means.npy', all_means[(all_means > 2e5) & (all_means < 5.5e5)])
all_means.shape
shape = (4,4)
means = all_means[np.random.randint(0, len(all_means), shape)]
stds = all_stds[np.random.randint(0, len(all_stds), shape)]
np.maximum(means, stds)
stds
# +
def choose_from_dist(dist, shape):
return dist[np.random.randint(0, len(dist), shape)]
def make_normal(means_dist, stds_dist, mins_dist, shape):
means = choose_from_dist(means_dist, shape)
stds = choose_from_dist(stds_dist, shape)
mins = choose_from_dist(mins_dist, shape)
means = np.maximum(means, stds)
return means, stds, mins
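# `choose_from_dist` simply resamples an empirical distribution with replacement; a quick self-contained check of the output shape and values:

```python
import numpy as np

np.random.seed(0)
dist = np.array([1.0, 2.0, 3.0])
sample = dist[np.random.randint(0, len(dist), (4, 4))]
print(sample.shape)   # (4, 4); every entry is one of the original values
```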
# +
tsamp = 1.4316557653333333
fch1 = 6000.464843051508
df = -1.3969838619232178e-06
fchans = 1024
tchans = 32
fs = np.arange(fch1, fch1 + fchans*df, df)
ts = np.arange(0, tchans*tsamp, tsamp)
means, stds, mins = make_normal(all_means, all_stds, all_mins, 1)
print(means, stds, mins)
frame = np.maximum(np.random.normal(means, stds, [tchans, fchans]), mins)
plt.imshow(frame, aspect='auto')
plt.colorbar()
plt.show()
start_index = np.random.randint(0,fchans)
drift_rate = np.random.uniform(-start_index*df/(tsamp*tchans), (fchans-1-start_index)*df/(tsamp*tchans))
line_width = np.random.uniform(1e-6, 30e-6)
level = stds * 100 / np.sqrt(tchans)
drift_rate=0
signal = stg.generate(ts,
fs,
stg.constant_path(f_start = fs[start_index], drift_rate = drift_rate),
stg.constant_t_profile(level = level),
stg.gaussian_f_profile(width = line_width),
stg.constant_bp_profile(level = 1.0))
plt.imshow(signal, aspect='auto')
plt.colorbar()
plt.show()
plt.imshow(signal + frame, aspect='auto')
plt.colorbar()
plt.show()
# -
list(range(2))
# +
spec = np.mean(signal + frame - means, axis=0)
plt.plot(spec)
plt.show()
print(np.max(spec), np.std(spec[:800]), np.max(spec)/np.std(spec[:800]))
print(np.max(spec)/np.std(spec[:800])/np.sqrt(tchans))
# +
spec = np.mean(stg.normalize(signal + frame, cols=128, exclude=0.2), axis=0)
plt.plot(spec)
plt.show()
print(np.max(spec), np.std(spec[:800]), np.max(spec)/np.std(spec[:800]))
print(np.max(spec)/np.std(spec[:800])/np.sqrt(tchans))
# +
tsamp = 1.4316557653333333
fch1 = 6000.464843051508
df = -1.3969838619232178e-06
fchans = 1024
tchans = 32
fs = np.arange(fch1, fch1 + fchans*df, df)
ts = np.arange(0, tchans*tsamp, tsamp)
means, stds, mins = make_normal(real_noise[:,0], real_noise[:,1], real_noise[:,2], 1)
print(means, stds, mins)
frame = np.random.normal(means, stds, [tchans, fchans])
# frame = np.maximum(np.random.normal(means, stds, [tchans, fchans]), mins)
plt.imshow(frame, aspect='auto')
plt.colorbar()
plt.show()
start_index = np.random.randint(0,fchans)
drift_rate = np.random.uniform(-start_index*df/(tsamp*tchans), (fchans-1-start_index)*df/(tsamp*tchans))
line_width = np.random.uniform(0.02, 0.03) ** 3
level = stds * 1 / np.sqrt(tchans)
drift_rate=0
signal = stg.generate(ts,
fs,
stg.constant_path(f_start = fs[start_index], drift_rate = drift_rate),
stg.constant_t_profile(level = level),
stg.gaussian_f_profile(width = line_width),
stg.constant_bp_profile(level = 1.0))
plt.imshow(signal, aspect='auto')
plt.colorbar()
plt.show()
plt.imshow(signal + frame, aspect='auto')
plt.colorbar()
plt.show()
# +
spec = np.mean(signal + frame - means, axis=0)
plt.plot(spec)
plt.show()
print(np.max(spec), np.std(spec[:800]), np.max(spec)/np.std(spec[:800]))
print(np.max(spec)/np.std(spec[:800])/np.sqrt(tchans))
# +
tsamp = 1.4316557653333333
fch1 = 6000.464843051508
df = -1.3969838619232178e-06
fchans = 1024
tchans = 32
fs = np.arange(fch1, fch1 + fchans*df, df)
ts = np.arange(0, tchans*tsamp, tsamp)
frame, mean, std, minimum = stg.gaussian_frame_from_dist(all_means, all_stds, all_mins, [tchans, fchans])
print(mean, std, minimum)
plt.imshow(frame, aspect='auto')
plt.colorbar()
plt.show()
start_index = np.random.randint(0,fchans)
drift_rate = np.random.uniform(-start_index*df/(tsamp*tchans), (fchans-1-start_index)*df/(tsamp*tchans))
line_width = np.random.uniform(0.02, 0.04) ** 3
level = std * 10
# drift_rate=0
signal = stg.generate(ts,
fs,
stg.constant_path(f_start = fs[start_index], drift_rate = drift_rate),
stg.sine_t_profile(period=10, amplitude=level, level=level),
stg.gaussian_f_profile(width = line_width),
stg.constant_bp_profile(level = 1.0),
integrate_time=False,
average_f_pos=False)
plt.imshow(signal, aspect='auto')
plt.colorbar()
plt.show()
plt.imshow(signal + frame, aspect='auto')
plt.colorbar()
plt.show()
# +
tsamp = 1.4316557653333333
fch1 = 6000.464843051508
df = -1.3969838619232178e-06
fchans = 1024
tchans = 32
fs = np.arange(fch1, fch1 + fchans*df, df)
ts = np.arange(0, tchans*tsamp, tsamp)
frame, mean, std, minimum = stg.gaussian_frame_from_dist(real_noise[:,0], real_noise[:,1], real_noise[:,2], [tchans, fchans])
print(mean, std, minimum)
plt.imshow(frame, aspect='auto')
plt.colorbar()
plt.show()
start_index = np.random.randint(0,fchans)
drift_rate = np.random.uniform(-start_index*df/(tsamp*tchans), (fchans-1-start_index)*df/(tsamp*tchans))
line_width = np.random.uniform(0.02, 0.04) ** 3
level = std * 1
print(start_index, drift_rate, line_width, level)
# drift_rate=0
signal = stg.generate(ts,
fs,
stg.constant_path(f_start = fs[start_index], drift_rate = drift_rate),
stg.sine_t_profile(period=10, amplitude=level, level=level),
stg.gaussian_f_profile(width = line_width),
stg.constant_bp_profile(level = 1.0),
integrate_time=False,
average_f_pos=False)
plt.imshow(signal, aspect='auto')
plt.colorbar()
plt.show()
plt.imshow(signal + frame, aspect='auto')
plt.colorbar()
plt.show()
# +
from time import time
A = []
B = []
C = []
D = []
for i in range(100):
a = time()
signal = stg.generate(ts,
fs,
stg.constant_path(f_start = fs[start_index], drift_rate = drift_rate),
stg.constant_t_profile(level = level),
stg.gaussian_f_profile(width = line_width),
stg.constant_bp_profile(level = 1.0),
integrate_time=True,
average_f_pos=True)
b = time()
signal = stg.generate(ts,
fs,
stg.constant_path(f_start = fs[start_index], drift_rate = drift_rate),
stg.constant_t_profile(level = level),
stg.gaussian_f_profile(width = line_width),
stg.constant_bp_profile(level = 1.0),
integrate_time=True,
average_f_pos=False)
c = time()
signal = stg.generate(ts,
fs,
stg.constant_path(f_start = fs[start_index], drift_rate = drift_rate),
stg.constant_t_profile(level = level),
stg.gaussian_f_profile(width = line_width),
stg.constant_bp_profile(level = 1.0),
integrate_time=False,
average_f_pos=True)
d = time()
signal = stg.generate(ts,
fs,
stg.constant_path(f_start = fs[start_index], drift_rate = drift_rate),
stg.constant_t_profile(level = level),
stg.gaussian_f_profile(width = line_width),
stg.constant_bp_profile(level = 1.0),
integrate_time=False,
average_f_pos=False)
e = time()
A.append(b-a)
B.append(c-b)
C.append(d-c)
D.append(e-d)
print(np.mean(A))
print(np.mean(B))
print(np.mean(C))
print(np.mean(D))
# -
0.0017479219436645507/0.0012787878513336182
plt.imshow(stg.normalize(signal + frame, cols=128, exclude=0.2), aspect='auto')
plt.colorbar()
plt.show()
plt.imshow(10*np.log10(signal + frame), aspect='auto')
plt.colorbar()
plt.show()
frame[frame==0].shape
import pandas as pd
ds = pd.DataFrame(frame.flatten())
ds.describe()
| misc/Select_from_distribution.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: PythonData
# language: python
# name: pythondata
# ---
# Define the function "say_hello" so it prints "Hello!" when called.
def say_hello():
print("Hello!")
# Call the function.
say_hello()
# Define the function "say_something" so it prints whatever is passed as the variable when called.
def say_something(something):
print(something)
# Call the function.
say_something("Hello World")
# +
Jane_says = "Hi, my name is Jane. I'm learning Python"
say_something(Jane_says)
# +
# Define a function that calculates the percentage of students that passed both
# math and reading and returns the passing percentage when the function is called.
def passing_math_percent(pass_math_count, student_count):
return pass_math_count / float(student_count) * 100
# -
passing_math_count = 29370
total_student_count = 39170
# Call the function.
passing_math_percent(passing_math_count, total_student_count)
# A list of my grades.
my_grades = ['B', 'C', 'B' , 'D']
# Import pandas.
import pandas as pd
# Convert the my_grades to a Series
my_grades = pd.Series(my_grades)
my_grades
# Change the grades by one letter grade.
my_grades.map({'B': 'A', 'C': 'B', 'D': 'C'})
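# Note that `Series.map` with a dict returns NaN for any value not found in the mapping; a quick sketch:

```python
import pandas as pd

s = pd.Series(['A', 'B', 'Z'])
mapped = s.map({'A': 'A+', 'B': 'A'})
print(mapped.tolist()[:2])   # ['A+', 'A']; 'Z' maps to NaN
```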
# +
# Using the format() function.
my_grades = [92.34, 84.56, 86.78, 98.32]
for grade in my_grades:
print("{:.0f}".format(grade))
# -
# Convert the numerical grades to a Series.
my_grades = pd.Series([92.34, 84.56, 86.78, 78.32])
my_grades
# Format the grades to the nearest whole number percent.
my_grades.map("{:.0f}".format)
class Cat:
def __init__(self, name):
self.name = name
first_cat = Cat('Felix')
print(first_cat.name)
second_cat = Cat('Garfield')
print(second_cat.name)
class Dog:
def __init__(self, name, color, sound):
self.name = name
self.color = color
self.sound = sound
def bark(self):
return self.sound + ' ' + self.sound
first_dog = Dog('Fido', 'brown', 'woof!')
print( first_dog.name)
print(first_dog.color)
first_dog.bark()
second_dog = Dog('Lady', 'blonde', 'arf!')
second_dog.bark()
| function.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Project name here
#
# > Summary description here.
# This file will become your README and also the index of your documentation.
# ## Install
# `pip install your_project_name`
# ## Usage
# Very simple usage:
# init the search object, then call `sortedSearch()`
#
#hide
import pickle, os
KEY = ''
PW = ''
keypath = '/Users/nic/.villa-search-2'
if KEY and PW:
with open (keypath, 'wb') as f:
pickle.dump({
'KEY': KEY,
'PW': PW
}, f)
with open(keypath, 'rb') as f:
creden = pickle.load(f)
USER = creden['KEY']
PW = creden['PW']
ACCESS_KEY_ID = USER
SECRET_ACCESS_KEY = PW
os.environ['DATABASE_TABLE_NAME'] = 'product-table-dev-manual'
os.environ['REGION'] = 'ap-southeast-1'
os.environ['INVENTORY_BUCKET_NAME'] = 'product-bucket-dev-manual'
os.environ['INPUT_BUCKET_NAME'] = 'input-product-bucket-dev-manual'
# os.environ['DAX_ENDPOINT'] = None
REGION = 'ap-southeast-1'
from cloudsearch.cloudsearch import Search
searchEndpoint = 'https://search-villa-cloudsearch-2-4izacsoytzqf6kztcyjhssy2ti.ap-southeast-1.cloudsearch.amazonaws.com'
searcher = Search(searchTerm = 'banana', key=USER, pw= PW , endpoint=searchEndpoint)
result = searcher.search(size=1000)
print(f'found {len(list(result))} results, the first item is \n{next(iter(result))}')
# ## For more complex requirements
# You can extend the class; have a look at the sortedSearch example:
_ = searcher.sortedSearch()
| index.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import csv
import math
# from edu.kaist.ie.datastructure.practice.decisiontree.decisiontreenode import Node
# from edu.kaist.ie.datastructure.practice.decisiontree.voterecord import Record
from decisiontreenode import Node
from voterecord import Record
class DecisionTree:
    def __init__(self, records):
        self.root = Node(0, records)

    def performID3(self, node=None):
        # start the recursion from the root when no node is given
        if node is None:
            node = self.root
        node.splitNode()
        for key in node.children.keys():
            # a child that could not be split any further is a leaf
            if not node.children[key].blnSplit:
                pass
            else:
                self.performID3(node.children[key])
        return node

    def classify(self, test):
        types = Record.types
        currentNode = self.root
        while True:
            # follow the branch matching the test record's value for the
            # attribute this node was split on ('attribute' is an assumed
            # name for the index chosen by splitNode())
            child = currentNode.children[test[currentNode.attribute]]
            if not child.blnSplit:
                # a leaf: predict the majority class stored in its statistics
                result = None
                for type in types:
                    if child.stat[type] == max(child.stat.values()):
                        result = type
                        break
                break
            else:
                currentNode = child
        print('Test Data : ', test, ', Classification : ', result)

    def __str__(self):
        ret = str(self.root)
        return ret


if __name__ == "__main__":
    csvfile = open('house-votes-84.csv', 'rt')
    reader = csv.reader(csvfile, delimiter=',')
    records = []
    for row in reader:
        record = Record(row)
        records.append(record)
    csvfile.close()
    tree = DecisionTree(records)
    tree.performID3()
    print(tree)
    test = ['y', 'y', '?', 'y', 'n', '?', '?', '?', 'n', 'n', 'n', 'y', 'n', '?', 'y', 'n']
    # classify
    tree.classify(test)
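# The heart of performID3() is the attribute choice made inside splitNode(): ID3 selects the attribute with the highest information gain, computed from Shannon entropy. A minimal, self-contained Python sketch of that computation (independent of the Node/Record classes, whose API is not shown here):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (in bits) of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(records, attr, labels):
    """Entropy reduction obtained by splitting `records` on attribute index `attr`."""
    n = len(labels)
    gain = entropy(labels)
    groups = {}
    for rec, lab in zip(records, labels):
        groups.setdefault(rec[attr], []).append(lab)
    for subset in groups.values():
        gain -= (len(subset) / n) * entropy(subset)
    return gain

records = [['y', 'y'], ['y', 'n'], ['n', 'y'], ['n', 'n']]
labels = ['rep', 'rep', 'dem', 'dem']
print(entropy(labels))                        # 1.0: a perfect 50/50 split
print(information_gain(records, 0, labels))   # 1.0: attribute 0 separates the classes
print(information_gain(records, 1, labels))   # 0.0: attribute 1 carries no information
```

# ID3 would split first on the attribute with the largest gain, here attribute 0.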
| week6-Binary Search Tree/dicisiontree.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .r
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: R
# language: R
# name: ir
# ---
# # Chemometrics
# <br>
# <NAME> / 2017 / Universidad del Valle
#
#
# An up-to-date version of this notebook can be found here: https://github.com/jwist/chemometrics/
options(repr.plot.width=4, repr.plot.height=4) # change these settings to plot larger figures
# ## A first example - simple linear regression
#
# We make repeated measurements of an observable Y as a function of X. Think of a calibration curve, where you measure the transmitted intensity as a function of the concentration of a solution.
#
# ### create data and visualize it
# +
#rm(list=ls(all=TRUE)) # we clear the variable space
N <- 20 # we define the number of observations
# we create a fake dataset using a quadratic
# equation and adding some noise
# first create a vector of x repeated rep times
rep <- 2 # number of replicates
X <- rep(seq(from=43, to=96, length.out=N), rep)
# then create the Y vector according to the equation:
Y <- 0.075 * X - 1.874 + 0.01 * X^2
# create some noise
noise <- runif(length(Y), -1, 1)
# add the noise to Y
Y <- Y + 1 * noise
# -
# we take a look at the data that we just created.
plot(X,Y)
# In order to prepare a nice dataset for R we need to sort the data.
x_sorted <- sort(X,index.return=TRUE)
x <-x_sorted$x
y <- Y[x_sorted$ix]
# and now we can create a data.frame with columns x,y and $x^2$.
#
# <span style="background-color: #DDEBF6">Consider that a data.frame is equivalent to an Excel spreadsheet, with column headers and columns of equal length; the columns may hold different kinds of data, such as strings, numbers (numeric), factors, etc.</span>
data <- data.frame(x=x, y=y, x2=x^2)
# to access the data within the data frame, use:
x_sorted$ix
data$y
data['x'][data['y']>20.10071]
# ### linear model with ```lm()```
#
# Now that we have defined a ```data.frame``` we can use it as input into most pre-built functions of R. Here we would like to test a simple linear regression. Therefore we use the ```lm()``` function and store the results in the ```fit.lm``` object. The linear model function accept a data.frame as input (here data) and allows you to define a model, in this case the function $y=ax$. <mark>Therefore it is important to define your data.frame properly so that lm will find the $x$ and $y$ columns in data.</mark>
fit.lm = lm(y ~ x, data=data)
names(fit.lm)
# At this stage we can have a look at the coefficients found, in this case for a linear model
# have a look at the coefficients
fit.lm$coefficients
# You can find a lot of information about your fit by using the ```summary()``` function of R, which reports details about almost any object.
summary(fit.lm)
# If you need to access one of the values shown here for further use in your script, you can list the names of the objects returned by ```summary()```
names(summary(fit.lm))
# So if you want to use the $r^2$ coefficient you can use the following command.
round(summary(fit.lm)$r.squared,3)
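# The $r^2$ reported by ```summary()``` can be checked by hand: it is one minus the ratio of the residual sum of squares to the total sum of squares around the mean. A quick numpy sketch of the same computation (on synthetic data mimicking the set created above, not the actual R variables):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(43, 96, 20)
y = 0.075 * x - 1.874 + 0.01 * x**2 + rng.uniform(-1, 1, size=x.size)

# least-squares fit of y = a*x + b
a, b = np.polyfit(x, y, 1)
fitted = a * x + b

# r^2 = 1 - SS_residual / SS_total (total SS taken around the mean)
ss_res = np.sum((y - fitted) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
print(round(r2, 3))
```

# Because the underlying trend is nearly linear over this range, the value is close to 1 even though a quadratic term is present.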
# Maybe you want a nice picture of your data and the regression line.
plot(data$x,data$y); lines(data$x, fitted(fit.lm), col=2)
# If you want more control over what you do, you can display the regression by plotting a line that crosses the y-axis at the intercept and has a slope given by the "x" coefficient.
plot(data$x,data$y,main="linear regression",ylab="intensity",xlab="concentration",ylim=c(-50,100),xlim=c(0,100));
abline(fit.lm$coefficients[1],fit.lm$coefficients[2],col="blue")
# You may have noticed that the data appear to have a quadratic component (which we included intentionally at the beginning). You might want to see how well a quadratic function fits the data.
fit.lm = lm(y ~ x + x2, data=data)
plot(data$x,data$y, main="linear regression",ylab="intensity",xlab="concentration")
lines(data$x, fitted(fit.lm), col=2)
paste("the coefficient $r^2$ = ",round(summary(fit.lm)$r.squared,3))
text(60,80,paste("r2 = ",round(summary(fit.lm)$r.squared,3)))
# ## Experimental design
#
# ### linear regression
# Use the data of Table 2.2 in Brereton, R. G. (2003). Chemometrics. Technometrics. Chichester, UK: John Wiley & Sons, Ltd. http://doi.org/10.1002/0470863242
#
# Or import the table from the dataset section provided in this course.
#
# #### import dataset
data <- read.csv("./datasets/table2.2.csv")
data
# As you may notice, the data are not arranged the way we want. We would like two columns, and we would also like the row indices to be correct. We can fix this readily.
#
data <- t(data) # we transpose the data
rownames(data) <- seq(1,10)
colnames(data) <- c("conc","A","B")
data
# <mark>Although the table looks good, there may be a problem with it. R sees each entry in the table as characters and not as numbers, which means we cannot compute with these data. We should change this. It is done very simply, but it is sometimes annoying because it is not something we are used to thinking about. In order to "see" what R "sees", use the following command.</mark>
is(data[1])
data[,1]
# Putting this altogether in something nicer, it gives:
remove(data)
data <- t(read.csv2("./datasets/table2.2.csv",header=TRUE,sep=",",dec="."))
rownames(data) <- seq(1, 10)
data <- data.frame("conc"=(data[,1]), "A"=data[,2],"B"=data[,3])
data
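# The same read-transpose-coerce sequence can be sketched in pure Python for comparison; a tiny inline table stands in for ```table2.2.csv``` (the values are illustrative, not the book's):

```python
import csv
import io

def read_transposed(text):
    """Read a CSV whose variables are laid out as rows, transpose it, and
    coerce every entry to float (mirroring read.csv2 + t() + as.numeric)."""
    rows = list(csv.reader(io.StringIO(text)))
    header = [r[0] for r in rows]          # first column holds the variable names
    body = [[float(v) for v in r[1:]] for r in rows]
    columns = list(zip(*body))             # transpose: one tuple per observation
    return header, columns

text = "conc,1,2,3\nA,3.0,5.4,7.9\nB,3.1,5.6,8.1"
header, columns = read_transposed(text)
print(header)      # ['conc', 'A', 'B']
print(columns[0])  # (1.0, 3.0, 3.1)
```

# The explicit float() conversion plays the role of as.numeric() in R: without it every entry would remain a string.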
# #### Analysis of variance (ANOVA)
#
# In order to estimate the quality of our regression we perform an analysis of variance. Here we will perform this analysis step by step as described in Chapter 2 of <NAME>. (2003). Chemometrics. Technometrics. Chichester, UK: John Wiley & Sons, Ltd. http://doi.org/10.1002/0470863242
#
# One of the most important features of ANOVA is comparing the lack of fit to the replicate error. The replicate error gives an estimation of the experimental error in the data, while the lack of fit estimates how well our model describes the data.
#
# ##### with intercept
# We first find duplicated data in our table
duplicated(data[,1])
Rep <- duplicated(data$conc) # find replicates
R <- sum( Rep ) # enumerate replicates
N <- dim(data)[1] # find out dimension of data
P <- 2 # if our model is y = ax + b
# find out the degree of freedom for our experiment
D <- N - P - R
D
R
N
# +
library(plyr) # this library helps a lot
meanRep <- aggregate(A ~ conc, data = data, mean) # this calculates the mean for all replicates
sumRep=0
sumRep2=0
for (i in seq(1,nrow(data))) {
sumRep[i] <- meanRep$A[ meanRep$conc == data$conc[i] ] - data$A[i]
sumRep2[i] <- ( meanRep$A[ meanRep$conc == data$conc[i] ] - data$A[i] )^2
}
S_replicate <- sum(sumRep2) # R degrees of freedom
result <- data.frame("Sum Replicate"=S_replicate)
result
# -
# We can verify our result by computing the straight sum of the replicate deviations, which gives 0.
round(sum(sumRep),3)
# Using the models proposed in Chapter 2 of Brereton, we can compute the sum of residuals, that is, the sum of the squared differences between the experimental data and the predicted ones.
predicted <- 0.6113 + 2.4364 * as.numeric(data$conc)
S_residual <- sum( (data$A - predicted)^2 )
result["Sum Residual"] <- S_residual # N-P degrees of freedom (# or S_total - S_predicted)
result["Sum Residual"]
# +
S_total <- sum( data$A^2 ) # N degrees of freedom
result["Sum Total"] <- S_total; result["Sum Total"]
S_predicted <- sum( predicted^2 ) # P degrees of freedom
result["Sum Predicted"] <- S_predicted; result["Sum Predicted"]
# -
S_lackOfFit <- S_residual - S_replicate # N-P-R degrees of freedom
result["Sum Lack Of Fit"] <- S_lackOfFit; result["Sum Lack Of Fit"]
result["F"] <- (S_lackOfFit / D) / (S_replicate / R); result["F"]
plot(data$conc, data$A)
abline(0.6113, 2.4364, col="red")
fit <- lm(A ~ conc, data)
aov(A ~ conc, data)
anova(fit)
# ##### lack of fit
#
# We can now put this in order and run the four cases: both datasets A and B, with and without intercept. The comparison table is shown below.
plot(as.numeric(data$conc), as.numeric(data$A))
# +
#var.test((data$A - predicted),sumRep2[c(1,4,6,9)])
#sumRep2[c(1,4,6,9)]
#to be tested. Should be possible to use this command in this context.
# +
Rep <- duplicated(data$conc) # find replicates
R <- sum( Rep ) # enumerate replicates
N <- dim(data)[1] # find out dimension of data
P <- 2
D <- N - P - R
meanRep <- aggregate(A ~ conc, data = data, mean)
result <- data.frame("Number Replicate"=R)
result["Number of Data"] <- N
result["Number of Parameters"] <- P
sumRep=0
sumRep2=0
for (i in seq(1,nrow(data))) {
sumRep[i] <- meanRep$A[ meanRep$conc == data$conc[i] ] - data$A[i]
sumRep2[i] <- ( meanRep$A[ meanRep$conc == data$conc[i] ] - data$A[i] )^2
}
S_replicate <- sum(sumRep2) # R degrees of freedom
result["Sum Replicate"] <- S_replicate
predicted <- 0.6113 + 2.4364 * as.numeric(data$conc)
S_residual <- sum( (data$A - predicted)^2 ) # N-P degrees of freedom (# or S_total - S_predicted)
result["Sum Residual"] <- S_residual
S_total <- sum( data$A^2 ) # N degrees of freedom
result["Sum Total"] <- S_total
S_predicted <- sum( predicted^2 ) # P degrees of freedom
result["Sum Predicted"] <- S_predicted
S_lackOfFit <- S_residual - S_replicate # N-P-R degrees of freedom
result["Sum Lack Of Fit"] <- S_lackOfFit
result["Mean Residuals"] <- S_residual / (N-P)
result["Mean Total"] <- S_total / N
result["Mean Predicted"] <- S_predicted / P
result["Mean Replicate"] <- S_replicate / R
result["Lack Of Fit"] <- S_lackOfFit / D
A2 <- t(result)
# +
Rep <- duplicated(data$conc) # find replicates
R <- sum( Rep ) # enumerate replicates
N <- dim(data)[1] # find out dimension of data
P <- 1
D <- N - P - R
meanRep <- aggregate(A ~ conc, data = data, mean)
result <- data.frame("Number Replicate"=R)
result["Number of Data"] <- N
result["Number of Parameters"] <- P
sumRep=0
sumRep2=0
for (i in seq(1,nrow(data))) {
sumRep[i] <- meanRep$A[ meanRep$conc == data$conc[i] ] - data$A[i]
sumRep2[i] <- ( meanRep$A[ meanRep$conc == data$conc[i] ] - data$A[i] )^2
}
S_replicate <- sum(sumRep2) # R degrees of freedom
result["Sum Replicate"] <- S_replicate
predicted <- 2.576 * as.numeric(data$conc)
S_residual <- sum( (data$A - predicted)^2 ) # N-P degrees of freedom (# or S_total - S_predicted)
result["Sum Residual"] <- S_residual
S_total <- sum( data$A^2 ) # N degrees of freedom
result["Sum Total"] <- S_total
S_predicted <- sum( predicted^2 ) # P degrees of freedom
result["Sum Predicted"] <- S_predicted
S_lackOfFit <- S_residual - S_replicate # N-P-R degrees of freedom
result["Sum Lack Of Fit"] <- S_lackOfFit
result["Mean Residuals"] <- S_residual / (N-P)
result["Mean Total"] <- S_total / N
result["Mean Predicted"] <- S_predicted / P
result["Mean Replicate"] <- S_replicate / R
result["Lack Of Fit"] <- S_lackOfFit / D
A1 <- t(result)
# +
Rep <- duplicated(data$conc) # find replicates
R <- sum( Rep ) # enumerate replicates
N <- dim(data)[1] # find out dimension of data
P <- 2
D <- N - P - R
meanRep <- aggregate(B ~ conc, data = data, mean)
result <- data.frame("Number Replicate"=R)
result["Number of Data"] <- N
result["Number of Parameters"] <- P
sumRep=0
sumRep2=0
for (i in seq(1,nrow(data))) {
sumRep[i] <- meanRep$B[ meanRep$conc == data$conc[i] ] - data$B[i]
sumRep2[i] <- ( meanRep$B[ meanRep$conc == data$conc[i] ] - data$B[i] )^2
}
S_replicate <- sum(sumRep2) # R degrees of freedom
result["Sum Replicate"] <- S_replicate
predicted <- 2.032 + 2.484 * as.numeric(data$conc)
S_residual <- sum( (data$B - predicted)^2 ) # N-P degrees of freedom (# or S_total - S_predicted)
result["Sum Residual"] <- S_residual
S_total <- sum( data$B^2 ) # N degrees of freedom
result["Sum Total"] <- S_total
S_predicted <- sum( predicted^2 ) # P degrees of freedom
result["Sum Predicted"] <- S_predicted
S_lackOfFit <- S_residual - S_replicate # N-P-R degrees of freedom
result["Sum Lack Of Fit"] <- S_lackOfFit
result["Mean Residuals"] <- S_residual / (N-P)
result["Mean Total"] <- S_total / N
result["Mean Predicted"] <- S_predicted / P
result["Mean Replicate"] <- S_replicate / R
result["Lack Of Fit"] <- S_lackOfFit / D
B2 <- t(result)
# +
Rep <- duplicated(data$conc) # find replicates
R <- sum( Rep ) # enumerate replicates
N <- dim(data)[1] # find out dimension of data
P <- 1
D <- N - P - R
meanRep <- aggregate(B ~ conc, data = data, mean)
result <- data.frame("Number Replicate"=R)
result["Number of Data"] <- N
result["Number of Parameters"] <- P
sumRep=0
sumRep2=0
for (i in seq(1,nrow(data))) {
sumRep[i] <- meanRep$B[ meanRep$conc == data$conc[i] ] - data$B[i]
sumRep2[i] <- ( meanRep$B[ meanRep$conc == data$conc[i] ] - data$B[i] )^2
}
S_replicate <- sum(sumRep2) # R degrees of freedom
result["Sum Replicate"] <- S_replicate
predicted <- 2.948 * as.numeric(data$conc)
S_residual <- sum( (data$B - predicted)^2 ) # N-P degrees of freedom (# or S_total - S_predicted)
result["Sum Residual"] <- S_residual
S_total <- sum( data$B^2 ) # N degrees of freedom
result["Sum Total"] <- S_total
S_predicted <- sum( predicted^2 ) # P degrees of freedom
result["Sum Predicted"] <- S_predicted
S_lackOfFit <- S_residual - S_replicate # N-P-R degrees of freedom
result["Sum Lack Of Fit"] <- S_lackOfFit
result["Mean Residuals"] <- S_residual / (N-P)
result["Mean Total"] <- S_total / N
result["Mean Predicted"] <- S_predicted / P
result["Mean Replicate"] <- S_replicate / R
result["Lack Of Fit"] <- S_lackOfFit / D
B1 <- t(result)
# -
tableOfResult1 <- data.frame(A1,A2,B1,B2)
tableOfResult1
# <mark>Both datasets are similar. However, for the second set, excluding the intercept increases the lack of fit. It is important to note that the lack of fit can only be judged by comparing it to the mean sum of replicates. In all cases the LOF is smaller than the MSR, except for B1. The conclusion is that we need 2 parameters to best fit our data.</mark>
# Brereton p.42
qf(0.95,5,4) # this would be the F-ratio for 95% confidence.
# above this number we can conclude with 95% confidence that intercept is useful.
2.1395742/1.193648
0.6900860/1.416398
pf(1.792,5,4,lower.tail = TRUE)
pf(0.48721,5,4,lower.tail = TRUE)
# So we can conclude with 70.4% confidence that the intercept is useful in the case of the B dataset. It is not possible to conclude anything in the case of dataset A, since its F-ratio is very low.
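# For reference, R's ```qf()``` and ```pf()``` have direct equivalents in ```scipy.stats.f``` (```ppf``` for quantiles, ```cdf``` for probabilities); a quick sketch, assuming scipy is available:

```python
from scipy.stats import f

# critical F-ratio at 95% confidence with 5 and 4 degrees of freedom,
# the equivalent of R's qf(0.95, 5, 4)
f_crit = f.ppf(0.95, dfn=5, dfd=4)
print(round(f_crit, 3))

# the equivalent of R's pf(1.792, 5, 4, lower.tail = TRUE)
print(round(f.cdf(1.792, dfn=5, dfd=4), 3))
```

# ppf and cdf are inverses of each other, so the quantile returned by ppf(0.95, ...) always maps back to 0.95 under cdf.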
# ##### programming a function
#
# The way we performed the calculation above is not very efficient: we copied and pasted the same code four times, so any modification would have to be propagated to all four pieces. A better practice, which also gives more flexibility, consists in writing a function to perform the calculation. This is done like this:
lof <- function(x, y, fit.lm) {
data=data.frame("x"=as.numeric(x), "y"=y)
#fit.lm <- lm( y ~ x, data=data )
Rep <- duplicated(data$x) # find replicates
R <- sum(Rep) # enumerate replicates
N <- dim(data)[1] # find out dimension of data
P <- length(fit.lm$coefficients)
D <- N - P - R
result <- data.frame("Number Replicate"=R)
result["Number of Data"] <- N
result["Number of Parameters"] <- P
result["Degrees of Freedom"] <- D
meanRep <- aggregate(y ~ x, data = data, mean)
sumRep=0
sumRep2=0
for (i in seq(1,nrow(data))) {
sumRep[i] <- meanRep$y[ meanRep$x == data$x[i] ] - data$y[i]
sumRep2[i] <- ( meanRep$y[ meanRep$x == data$x[i] ] - data$y[i] )^2
}
S_replicate <- sum(sumRep2) # R degrees of freedom
result["Sum Replicate"] <- round(S_replicate,3)
S_residual <- sum ( resid(fit.lm)^2 ) # N-P degrees of freedom (# or S_total - S_predicted)
result["Sum Residual"] <- round(S_residual,3)
S_total <- sum( data$y^2 ) # N degrees of freedom
result["Sum Total"] <- round(S_total,3)
S_predicted <- sum( fitted(fit.lm)^2 ) # P degrees of freedom
result["Sum Predicted"] <- round(S_predicted,3)
S_lackOfFit <- S_residual - S_replicate # N-P-R degrees of freedom
result["Sum Lack Of Fit"] <- round(S_lackOfFit,3)
result["Mean Residuals"] <- round(S_residual / (N-P),3)
result["Mean Total"] <- round(S_total / N,3)
result["Mean Predicted"] <- round(S_predicted / P,3)
result["Mean Replicate"] <- round(S_replicate / R,3)
result["Lack Of Fit"] <- round(S_lackOfFit / D,3)
result["F-value"] <- round(result["Lack Of Fit"] / result["Mean Replicate"],3)
result["p-value"] <- df(as.numeric(result["F-value"]),D,R)
result["r2"] <- round(summary(fit.lm)$r.squared,3)
result["a(slope)"] <- round(fit.lm$coefficients[2],3)
result["b(intercept)"] <- round(fit.lm$coefficients[1],3)
return( t(result) )
}
# We can use this function on our dataset A
# +
fit.lm <- lm( A ~ conc, data=data.frame("conc"=as.numeric(data$conc), "A"=data$A) )
r1 <- lof(data$conc, data$A, fit.lm)
plot(data$conc,data$A, main="linear regression",ylab="intensity",xlab="concentration")
#lines(data$conc, fitted(fit.lm), col=2)
abline(fit.lm$coefficients[1],fit.lm$coefficients[2],col="blue")
# -
# We repeated the calculation for the A dataset, but this time we used ```lm()``` to fit the data. This means we are not using the models from the book, but the one we optimize here. In addition we computed the p-value. A p-value higher than 0.05 means that we cannot conclude that our model fails to fit our data accurately.
#
# We repeat the same calculation but this time we concatenate A and B datasets to see the effect of a larger dataset.
# +
fit.lm <- lm( A ~ conc, data=data.frame("conc"=as.numeric(c(data$conc,data$conc)), "A"=c(data$A, data$B)) )
r2 <- lof(c(data$conc,data$conc), c(data$A, data$B), fit.lm)
plot(as.factor(c(data$conc,data$conc)), c(data$A, data$B), main="linear regression",ylab="intensity",xlab="concentration")
abline(fit.lm$coefficients[1],fit.lm$coefficients[2],col="blue")
# -
# The next table shows the comparison. The first thing to note is that the degrees of freedom are unchanged, because we added more points but also more replicates. Maybe there are too many replicates in this case. The next thing to note is that the lack of fit is smaller, which is good, but the MSR is larger. The p-values for both are larger than 0.05, so we cannot reject either model, which is fine.
tableOfResult2 <- cbind(r1,r2)
tableOfResult2
# ##### Replicates
#
# If we are curious to understand the effect of the replicates, we can remove them and apply the test again. Clearly we will then have no estimation of the experimental error to compare with: we can compute the LOF, but we have no MSR to compute a p-value.
#
# <mark>Finally, it is important to look at the coefficients. After all, we are interested in a calibration curve: how do the different experimental designs affect the values of the coefficients?</mark>
# +
F <- duplicated(data$conc)
fit.lm <- lm( A ~ conc, data=data.frame("conc"=as.numeric(data$conc[F]), "A"=data$A[F]) )
lof(data$conc[F], data$A[F], fit.lm)
plot(data$conc[F], data$A[F], main="linear regression",ylab="intensity",xlab="concentration")
abline(fit.lm$coefficients[1],fit.lm$coefficients[2],col="blue")
# -
# ### p-values, model and noise
#
# As already mentioned, the p-value can be used to decide whether our model is appropriate. We can go back to the data of the first example to demonstrate how the p-value helps us decide whether our model is accurate, and to illustrate the effect of noise in the data.
# +
N <- 20 # we define the number of observations
# we create a fake dataset using a quadratic
#equation and adding some noise
# first create a vector of x repeated rep times
rep <- 2 # number of replicates
X <- rep(seq(from=43, to=96, length.out=N),rep)
# then create the Y vector according to the equation:
Y <- 0.075 * X - 1.874 + 0.01 * X^2
# create some noise
noise <-runif(length(Y), -1, 1)
# add some noise to Y
Y <- Y + 1*noise
x_sorted <- sort(X,index.return=TRUE)
x <-x_sorted$x
y <- Y[x_sorted$ix]
data <- data.frame("x"=x, "y"=y, x2=x^2)
# we fit the data and evaluate the goodness of it
fit.lm <- lm( y ~ x, data=data )
noQuadLowNoise <- lof(data$x, data$y, fit.lm)
# -
plot(data$x,data$y, cex=0.7, cex.axis=0.8, cex.main=0.8, cex.lab=0.8, main="low noise data / linear model", xlab="concentration", ylab="intensity")
lines(data$x, fitted(fit.lm), col=2)
noQuadLowNoise[17]
# Although the correlation coefficient is high, we can see a trend in the data that is not correctly described by the model: the model is linear, while we know that the data contain a quadratic term. If we only looked at the lack of fit and at the correlation coefficient, we might conclude that the model is not bad.
#
# However, the LOF is bigger than the MSR, which indicates a problem with our model. If we compute the p-value
noQuadLowNoise[16] # p-value
# Brereton p.42
qf(0.95,noQuadLowNoise[4],noQuadLowNoise[1]) # critical F-ratio for 95% confidence
# above this number we can conclude with 95% confidence that the lack of fit is significant
noQuadLowNoise[15]
pf(noQuadLowNoise[15],noQuadLowNoise[4],noQuadLowNoise[1],lower.tail = TRUE)
# + active=""
# we find a very small number that means that we can conclude that our model is not accurate. Let's repeat this but this time we include a quadratic term in the model.
# -
fit.lm <- lm( y ~ x + x2, data=data )
QuadLowNoise <- lof(data$x, data$y, fit.lm)
plot(data$x,data$y, cex=0.7, cex.axis=0.8, cex.main=0.8, cex.lab=0.8, main="low noise data / quadratic model", xlab="concentration", ylab="intensity")
lines(data$x, fitted(fit.lm), col=2)
QuadLowNoise[16]
# Brereton p.42
qf(0.95,QuadLowNoise[4],QuadLowNoise[1]) # critical F-ratio for 95% confidence
# above this number we can conclude with 95% confidence that the lack of fit is significant
QuadLowNoise[15]
pf(QuadLowNoise[15],QuadLowNoise[4],QuadLowNoise[1],lower.tail = TRUE)
# Clearly the model is more accurate and we obtain a p-value that is larger than 0.05.
#
# Now let's see what happens if the noise is large.
# +
N <- 20
rep <- 2
X <- rep(seq(from=43, to=96, length.out=N),rep)
Y <- 0.075 * X - 1.874 + 0.01 * X^2
noise <-runif(length(Y), -1, 1)
Y <- Y + 4 * noise # we increase the noise
x_sorted <- sort(X,index.return=TRUE)
x <-x_sorted$x
y <- Y[x_sorted$ix]
data <- data.frame("x"=x, "y"=y, x2=x^2)
fit.lm <- lm( y ~ x + x2, data=data )
QuadHighNoise <- lof(data$x, data$y, fit.lm)
plot(data$x,data$y, cex=0.7, cex.axis=0.8, cex.main=0.8, cex.lab=0.8, main="high noise data / quadratic model", xlab="concentration", ylab="intensity")
lines(data$x, fitted(fit.lm), col=2)
QuadHighNoise[16]
# -
# Brereton p.42
qf(0.95,QuadHighNoise[4],QuadHighNoise[1]) # critical F-ratio for 95% confidence
# above this number we can conclude with 95% confidence that the lack of fit is significant
QuadHighNoise[15]
pf(QuadHighNoise[15],QuadHighNoise[4],QuadHighNoise[1],lower.tail = TRUE)
# We repeat now the same operation but without the quadratic term.
# +
fit.lm <- lm( y ~ x, data=data )
noQuadHighNoise <- lof(data$x, data$y, fit.lm)
plot(data$x,data$y, cex=0.7, cex.axis=0.8, cex.main=0.8, cex.lab=0.8, main="high noise data / linear model", xlab="concentration", ylab="intensity")
lines(data$x, fitted(fit.lm), col=2)
noQuadHighNoise[16]
# -
# Brereton p.42
qf(0.95,noQuadHighNoise[4],noQuadHighNoise[1]) # critical F-ratio for 95% confidence
# above this number we can conclude with 95% confidence that the lack of fit is significant
noQuadHighNoise[15]
pf(noQuadHighNoise[15],noQuadHighNoise[4],noQuadHighNoise[1],lower.tail = TRUE)
# It can be observed that both p-values are larger than 0.05; thus we cannot rule out either model and must **statistically** accept both as acceptable for our data. This is the effect of noise: it increases the MSR, resulting in larger p-values.
#
# <mark>You may have to re-run the last cells several times, because the data are generated randomly and the p-value may vary considerably.</mark>
# It may be interesting to plot the residuals. Here we clearly distinguish the quadratic behavior that is not taken into account by our linear model. Plotting the residuals is always a good idea to help validate a model.
plot( resid(fit.lm) )
abline(0,0)
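# The residual pattern can also be checked numerically: a straight line fitted to data with a positive quadratic component leaves residuals that are positive at both ends and negative in the middle. A small Python sketch on noise-free synthetic data:

```python
import numpy as np

x = np.linspace(0, 10, 21)
y = 0.075 * x - 1.874 + 0.01 * x**2   # same quadratic model, no noise

a, b = np.polyfit(x, y, 1)            # best straight line through the data
resid = y - (a * x + b)

# convex residual pattern: positive at the ends, negative in the middle
print(resid[0] > 0, resid[10] < 0, resid[-1] > 0)
```

# The residuals sum to zero by construction, so a systematic U-shape like this one is a clear sign of a missing quadratic term.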
# Finally, we may want to add the 95% confidence interval for our calibration curve. This can be achieved with a single command.
interval <- confint(fit.lm)
interval80 <- confint(fit.lm, level=0.8)
plot(data$x,data$y, cex=0.7, cex.axis=0.8, cex.main=0.8, cex.lab=0.8, main="high noise data / linear model", xlab="concentration", ylab="intensity")
lines(data$x, fitted(fit.lm), col="red")
abline(interval[1,1], interval[2,1], col="gray")
abline(interval[1,2], interval[2,2], col="gray")
abline(interval80[1,1], interval80[2,1], col="gray", lty=2)
abline(interval80[1,2], interval80[2,2], col="gray", lty=2)
# Here is a calibration curve with confidence intervals at 80% (gray dashed lines) and 95% (gray solid lines). More practical information about calibration curves can be found here: https://raw.githubusercontent.com/jwist/chemometrics/master/pdf/Calibration-curve-guide.pdf
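# What ```confint()``` computes can be reproduced by hand: each coefficient plus or minus its standard error times the appropriate t quantile. A Python sketch for the slope of a simple regression (scipy's t distribution assumed available; the data are illustrative):

```python
import numpy as np
from scipy.stats import t

x = np.array([1., 2., 3., 4., 5., 6.])
y = 2.0 * x + 1.0 + np.array([0.1, -0.2, 0.05, 0.1, -0.1, 0.05])

n = len(x)
a, b = np.polyfit(x, y, 1)                 # slope, intercept
resid = y - (a * x + b)
s2 = np.sum(resid**2) / (n - 2)            # residual variance
sxx = np.sum((x - x.mean())**2)
se_a = np.sqrt(s2 / sxx)                   # standard error of the slope

tq = t.ppf(0.975, df=n - 2)                # two-sided 95% quantile
ci = (a - tq * se_a, a + tq * se_a)
print(round(ci[0], 3), round(ci[1], 3))
```

# With the small noise used here the interval is narrow and contains the true slope of 2.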
# According to the decision tree for choosing a statistical method, in this simple example we have 1 IV (X, cause) and 1 DV (Y, effect). Since both are continuous variables we are in the domain of regression. Because we have a single continuous DV, we speak of simple linear regression. In the next example we will study the effect of pH, temperature and concentration on the yield of a reaction; there we will have more than one IV but still a single DV, and we speak of multiple (or multivariable) regression.
#
# For more about this, read: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3518362/
# + jupyter={"outputs_hidden": true}
# -
| 3_chemometrics_linearRegression.ipynb |
# -*- coding: utf-8 -*-
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .jl
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Julia 1.4.2
# language: julia
# name: julia-1.4
# ---
#
# <a id='perm-income-cons'></a>
# <div id="qe-notebook-header" style="text-align:right;">
# <a href="https://quantecon.org/" title="quantecon.org">
# <img style="width:250px;display:inline;" src="https://assets.quantecon.org/img/qe-menubar-logo.svg" alt="QuantEcon">
# </a>
# </div>
# # Optimal Savings II: LQ Techniques
#
#
# <a id='index-1'></a>
# ## Contents
#
# - [Optimal Savings II: LQ Techniques](#Optimal-Savings-II:-LQ-Techniques)
# - [Overview](#Overview)
# - [Introduction](#Introduction)
# - [The LQ Approach](#The-LQ-Approach)
# - [Implementation](#Implementation)
# - [Two Example Economies](#Two-Example-Economies)
# Co-authored with <NAME>.
# ## Overview
#
# This lecture continues our analysis of the linear-quadratic (LQ) permanent income model of savings and consumption.
#
# As we saw in our [previous lecture](perm_income.html) on this topic, <NAME> [[Hal78]](../zreferences.html#hall1978) used the LQ permanent income model to restrict and interpret intertemporal comovements of nondurable consumption, nonfinancial income, and financial wealth.
#
# For example, we saw how the model asserts that for any covariance stationary process for nonfinancial income
#
# - consumption is a random walk
# - financial wealth has a unit root and is cointegrated with consumption
#
#
# Other applications use the same LQ framework.
#
# For example, a model isomorphic to the LQ permanent income model has been used by <NAME> [[Bar79]](../zreferences.html#barro1979) to interpret intertemporal comovements of a government's tax collections, its expenditures net of debt service, and its public debt.
#
# This isomorphism means that in analyzing the LQ permanent income model, we are in effect also analyzing the Barro tax smoothing model.
#
# It is just a matter of appropriately relabeling the variables in Hall's model.
#
# In this lecture, we'll
#
# - show how the solution to the LQ permanent income model can be obtained using LQ control methods
# - represent the model as a linear state space system as in [this lecture](../tools_and_techniques/linear_models.html)
# - apply [QuantEcon](http://quantecon.org/quantecon-jl)'s [LSS](https://github.com/QuantEcon/QuantEcon.jl/blob/master/src/lss.jl) type to characterize statistical features of the consumer's optimal consumption and borrowing plans
#
#
# We'll then use these characterizations to construct a simple model of cross-section wealth and
# consumption dynamics in the spirit of Truman Bewley [[Bew86]](../zreferences.html#bewley86).
#
# (Later we'll study other Bewley models; see [this lecture](../multi_agent_models/aiyagari.html))
#
# The model will prove useful for illustrating concepts such as
#
# - stationarity
# - ergodicity
# - ensemble moments and cross section observations
# ### Setup
# + hide-output=true
using InstantiateFromURL
# optionally add arguments to force installation: instantiate = true, precompile = true
github_project("QuantEcon/quantecon-notebooks-julia", version = "0.8.0")
# + hide-output=false
using LinearAlgebra, Statistics
# -
# ## Introduction
#
# Let's recall the basic features of the model discussed in [permanent income model](perm_income.html).
#
# Consumer preferences are ordered by
#
#
# <a id='equation-old1'></a>
# $$
# E_0 \sum_{t=0}^\infty \beta^t u(c_t) \tag{1}
# $$
#
# where $ u(c) = -(c - \gamma)^2 $.
#
# The consumer maximizes [(1)](#equation-old1) by choosing a
# consumption, borrowing plan $ \{c_t, b_{t+1}\}_{t=0}^\infty $
# subject to the sequence of budget constraints
#
#
# <a id='equation-old2'></a>
# $$
# c_t + b_t = \frac{1}{1 + r} b_{t+1} + y_t,
# \quad t \geq 0 \tag{2}
# $$
#
# and the no-Ponzi condition
#
#
# <a id='equation-old42'></a>
# $$
# E_0 \sum_{t=0}^\infty \beta^t b_t^2 < \infty \tag{3}
# $$
#
# The interpretation of all variables and parameters are the same as in the
# [previous lecture](perm_income.html).
#
# We continue to assume that $ (1 + r) \beta = 1 $.
#
# The dynamics of $ \{y_t\} $ again follow the linear state space model
#
#
# <a id='equation-sprob15ab2'></a>
# $$
# \begin{aligned}
# z_{t+1} & = A z_t + C w_{t+1}
# \\
# y_t & = U z_t
# \end{aligned} \tag{4}
# $$
#
# The restrictions on the shock process and parameters are the same as in our [previous lecture](perm_income.html).
# ### Digression on a useful isomorphism
#
# The LQ permanent income model of consumption is mathematically isomorphic with a version of
# Barro's [[Bar79]](../zreferences.html#barro1979) model of tax smoothing.
#
# In the LQ permanent income model
#
# - the household faces an exogenous process of nonfinancial income
# - the household wants to smooth consumption across states and time
#
#
# In the Barro tax smoothing model
#
# - a government faces an exogenous sequence of government purchases (net of interest payments on its debt)
# - a government wants to smooth tax collections across states and time
#
#
# If we set
#
# - $ T_t $, total tax collections in Barro's model to consumption $ c_t $ in the LQ permanent income model
# - $ G_t $, exogenous government expenditures in Barro's model to nonfinancial income $ y_t $ in the permanent income model
# - $ B_t $, government risk-free one-period assets falling due in Barro's model to risk-free one period consumer debt $ b_t $ falling due in the LQ permanent income model
# - $ R $, the gross rate of return on risk-free one-period government debt in Barro's model to the gross rate of return $ 1+r $ on financial assets in the permanent income model of consumption
#
#
# then the two models are mathematically equivalent.
#
# All characterizations of a $ \{c_t, y_t, b_t\} $ process in the LQ permanent income model automatically apply to a $ \{T_t, G_t, B_t\} $ process in the Barro model of tax smoothing.
#
# See [consumption and tax smoothing models](smoothing.html) for further exploitation of an isomorphism between consumption and tax smoothing models.
# ### A specification of the nonfinancial income process
#
# For the purposes of this lecture, let's assume $ \{y_t\} $ is a second-order univariate autoregressive process:
#
# $$
# y_{t+1} = \alpha + \rho_1 y_t + \rho_2 y_{t-1} + \sigma w_{t+1}
# $$
#
# We can map this into the linear state space framework in [(4)](#equation-sprob15ab2), as
# discussed in our lecture on [linear models](../tools_and_techniques/linear_models.html).
#
# To do so we take
#
# $$
# z_t =
# \begin{bmatrix}
# 1 \\
# y_t \\
# y_{t-1}
# \end{bmatrix},
# \quad
# A = \begin{bmatrix}
# 1 & 0 & 0 \\
# \alpha & \rho_1 & \rho_2 \\
# 0 & 1 & 0
# \end{bmatrix},
# \quad
# C= \begin{bmatrix}
# 0 \\
# \sigma \\
# 0
# \end{bmatrix},
# \quad \text{and} \quad
# U = \begin{bmatrix} 0 & 1 & 0 \end{bmatrix}
# $$
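To see concretely that this mapping reproduces the AR(2) law of motion, here is a small self-contained sanity check (written in Python/NumPy, while the lecture code below uses Julia; the parameter values are just for illustration):

```python
import numpy as np

# Illustrative AR(2) parameters (not necessarily the lecture's calibration)
alpha, rho1, rho2, sigma = 10.0, 0.9, 0.0, 1.0

A = np.array([[1.0,   0.0,  0.0],
              [alpha, rho1, rho2],
              [0.0,   1.0,  0.0]])
C = np.array([0.0, sigma, 0.0])
U = np.array([0.0, 1.0, 0.0])

# Iterate z_{t+1} = A z_t + C w_{t+1}, read off y_t = U z_t
rng = np.random.default_rng(0)
z = np.array([1.0, 0.0, 0.0])          # z_0 = [1, y_0, y_{-1}]
ys = []
for _ in range(5):
    w = rng.standard_normal()
    z = A @ z + C * w
    ys.append(U @ z)

# Run the scalar AR(2) recursion directly with the same shocks
rng = np.random.default_rng(0)
y_prev, y_curr = 0.0, 0.0
ys_direct = []
for _ in range(5):
    w = rng.standard_normal()
    y_next = alpha + rho1 * y_curr + rho2 * y_prev + sigma * w
    ys_direct.append(y_next)
    y_prev, y_curr = y_curr, y_next

print(np.allclose(ys, ys_direct))  # True: the two representations agree
```

The third row of $A$ simply shifts $y_t$ into the $y_{t-1}$ slot, which is what lets a first-order vector system carry a second-order scalar lag.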
# ## The LQ Approach
#
# [Previously](perm_income.html#odr-pi) we solved the permanent income model by solving a system of linear expectational difference equations subject to two boundary conditions.
#
# Here we solve the same model using [LQ methods](lqcontrol.html) based on dynamic programming.
#
# After confirming that answers produced by the two methods agree, we apply [QuantEcon](http://quantecon.org/quantecon-jl)'s [LSS](https://github.com/QuantEcon/QuantEcon.jl/blob/master/src/lss.jl)
# type to illustrate features of the model.
#
# Why solve a model in two distinct ways?
#
# Because by doing so we gather insights about the structure of the model.
#
# Our earlier approach based on solving a system of expectational difference equations brought to the fore the role of the consumer's expectations about future nonfinancial income.
#
# On the other hand, formulating the model in terms of an LQ dynamic programming problem reminds us that
#
# - finding the state (of a dynamic programming problem) is an art, and
# - iterations on a Bellman equation implicitly jointly solve both a forecasting problem and a control problem
# ### The LQ Problem
#
# Recall from our [lecture on LQ theory](lqcontrol.html) that the optimal linear regulator problem is to choose
# a decision rule for $ u_t $ to minimize
#
# $$
# \mathbb E
# \sum_{t=0}^\infty \beta^t \{x'_t R x_t+ u'_t Q u_t\},
# $$
#
# subject to $ x_0 $ given and the law of motion
#
#
# <a id='equation-pilqsd'></a>
# $$
# x_{t+1} = \tilde A x_t+ \tilde B u_t+ \tilde C w_{t+1},
# \qquad t\geq 0, \tag{5}
# $$
#
# where $ w_{t+1} $ is iid with mean vector zero and $ \mathbb E w_t w'_t= I $.
#
# The tildes in $ \tilde A, \tilde B, \tilde C $ are to avoid clashing with notation in [(4)](#equation-sprob15ab2).
#
# The value function for this problem is $ v(x) = - x'Px - d $, where
#
# - $ P $ is the unique positive semidefinite solution of the [corresponding matrix Riccati equation](lqcontrol.html#riccati-equation).
# - The scalar $ d $ is given by $ d=\beta (1-\beta)^{-1} {\rm trace} ( P \tilde C \tilde C') $.
#
#
# The optimal policy is $ u_t = -Fx_t $, where $ F := \beta (Q+\beta \tilde B'P \tilde B)^{-1} \tilde B'P \tilde A $.
#
# Under an optimal decision rule $ F $, the state vector $ x_t $ evolves according to $ x_{t+1} = (\tilde A-\tilde BF) x_t + \tilde C w_{t+1} $.
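The fixed point $P$ and the feedback matrix $F$ can be computed by simple iteration on the discounted Riccati equation. The sketch below (Python/NumPy, applied to an arbitrary small stabilizable system rather than the model's matrices) implements the recursion $P \leftarrow R + \beta A'PA - \beta^2 A'PB\,(Q + \beta B'PB)^{-1} B'PA$ matching the formula for $F$ above:

```python
import numpy as np

def solve_discounted_lq(A, B, Q, R, beta, tol=1e-10, max_iter=10_000):
    """Iterate the discounted Riccati equation to a fixed point P,
    then back out the feedback rule F in u_t = -F x_t."""
    n = A.shape[0]
    P = np.zeros((n, n))
    for _ in range(max_iter):
        M = Q + beta * B.T @ P @ B
        P_new = (R + beta * A.T @ P @ A
                 - beta**2 * A.T @ P @ B @ np.linalg.solve(M, B.T @ P @ A))
        if np.max(np.abs(P_new - P)) < tol:
            P = P_new
            break
        P = P_new
    F = beta * np.linalg.solve(Q + beta * B.T @ P @ B, B.T @ P @ A)
    return P, F

# Tiny illustrative system (not the permanent income matrices)
A = np.array([[1.0, 0.1], [0.0, 0.9]])
B = np.array([[0.0], [1.0]])
Q = np.array([[1.0]])
R = 0.5 * np.eye(2)
beta = 0.95

P, F = solve_discounted_lq(A, B, Q, R, beta)
print(F.shape)  # (1, 2): one control, two states
```

In the lecture itself this computation is delegated to QuantEcon's `stationary_values`; the point of the sketch is only to make the recursion behind it concrete.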
# ### Mapping into the LQ framework
#
# To map into the LQ framework, we'll use
#
# $$
# x_t :=
# \begin{bmatrix}
# z_t \\
# b_t
# \end{bmatrix}
# =
# \begin{bmatrix}
# 1 \\
# y_t \\
# y_{t-1} \\
# b_t
# \end{bmatrix}
# $$
#
# as the state vector and $ u_t := c_t - \gamma $ as the control.
#
# With this notation and $ U_\gamma := \begin{bmatrix} \gamma & 0 & 0
# \end{bmatrix} $, we can write the state dynamics as in [(5)](#equation-pilqsd) when
#
# $$
# \tilde A :=
# \begin{bmatrix}
# A & 0 \\
# (1 + r)(U_\gamma - U) & 1 + r
# \end{bmatrix}
# \quad
# \tilde B :=
# \begin{bmatrix}
# 0 \\
# 1 + r
# \end{bmatrix}
# \quad \text{and} \quad
# \tilde C :=
# \begin{bmatrix}
# C \\ 0
# \end{bmatrix}
# w_{t+1}
# $$
#
# Please confirm for yourself that, with these definitions, the LQ dynamics [(5)](#equation-pilqsd) match the dynamics of $ z_t $ and $ b_t $ described above.
#
# To map utility into the quadratic form $ x_t' R x_t + u_t'Q u_t $ we can set
#
# - $ Q := 1 $ (remember that we are minimizing) and
# - $ R := $ a $ 4 \times 4 $ matrix of zeros
#
#
# However, there is one problem remaining.
#
# We have no direct way to capture the non-recursive restriction [(3)](#equation-old42)
# on the debt sequence $ \{b_t\} $ from within the LQ framework.
#
# To try to enforce it, we're going to use a trick: put a small penalty on $ b_t^2 $ in the criterion function.
#
# In the present setting, this means adding a small entry $ \epsilon > 0 $ in the $ (4,4) $ position of $ R $.
#
# That will induce a (hopefully) small approximation error in the decision rule.
#
# We'll check whether it really is small numerically soon.
# ## Implementation
#
# Let's write some code to solve the model.
#
# One comment before we start is that the bliss level of consumption $ \gamma $ in the utility function has no effect on the optimal decision rule.
#
# We saw this in the previous lecture [permanent income](perm_income.html).
#
# The reason is that it drops out of the Euler equation for consumption.
#
# In what follows we set it equal to unity.
# ### The exogenous nonfinancial income process
#
# First we create the objects for the optimal linear regulator
# + hide-output=false
using QuantEcon, LinearAlgebra
using Plots
gr(fmt=:png);
# Set parameters
α, β, ρ1, ρ2, σ = 10.0, 0.95, 0.9, 0.0, 1.0
R = 1 / β
A = [1.0 0.0 0.0;
     α ρ1 ρ2;
     0.0 1.0 0.0]
C = [0.0; σ; 0.0]''
G = [0.0 1.0 0.0]
# Form LinearStateSpace system and pull off steady state moments
μ_z0 = [1.0, 0.0, 0.0]
Σ_z0 = zeros(3, 3)
Lz = LSS(A, C, G, mu_0=μ_z0, Sigma_0=Σ_z0)
μ_z, μ_y, Σ_z, Σ_y = stationary_distributions(Lz)
# Mean vector of state for the savings problem
mxo = [μ_z; 0.0]
# Create stationary covariance matrix of x -- start everyone off at b=0
a1 = zeros(3, 1)
aa = hcat(Σ_z, a1)
bb = zeros(1, 4)
sxo = vcat(aa, bb)
# These choices will initialize the state vector of an individual at zero debt
# and the ergodic distribution of the endowment process. Use these to create
# the Bewley economy.
mxbewley = mxo
sxbewley = sxo
# -
# The next step is to create the matrices for the LQ system
# + hide-output=false
A12 = zeros(3,1)
ALQ_l = hcat(A, A12)
ALQ_r = [0 -R 0 R]
ALQ = vcat(ALQ_l, ALQ_r)
RLQ = [0.0 0.0 0.0 0.0;
0.0 0.0 0.0 0.0;
0.0 0.0 0.0 0.0;
0.0 0.0 0.0 1e-9]
QLQ = 1.0
BLQ = [0.0; 0.0; 0.0; R]
CLQ = [0.0; σ; 0.0; 0.0]
β_LQ = β
# -
# Let's print these out and have a look at them
# + hide-output=false
println("A = $ALQ")
println("B = $BLQ")
println("R = $RLQ")
println("Q = $QLQ")
# -
# Now create the appropriate instance of an LQ model
# + hide-output=false
LQPI = QuantEcon.LQ(QLQ, RLQ, ALQ, BLQ, CLQ, bet=β_LQ);
# -
# We'll save the implied optimal policy function soon and compare with what we get by
# employing an alternative solution method.
# + hide-output=false
P, F, d = stationary_values(LQPI) # compute value function and decision rule
ABF = ALQ - BLQ * F # form closed loop system
# -
# ### Comparison with the difference equation approach
#
# In our [first lecture](perm_income.html) on the infinite horizon permanent
# income problem we used a different solution method.
#
# The method was based around
#
# - deducing the Euler equations that are the first-order conditions with respect to consumption and savings
# - using the budget constraints and boundary condition to complete a system of expectational linear difference equations
# - solving those equations to obtain the solution
#
#
# Expressed in state space notation, the solution took the form
#
# $$
# \begin{aligned}
# z_{t+1} & = A z_t + C w_{t+1} \\
# b_{t+1} & = b_t + U [ (I -\beta A)^{-1} (A - I) ] z_t \\
# y_t & = U z_t \\
# c_t & = (1-\beta) [ U (I-\beta A)^{-1} z_t - b_t ]
# \end{aligned}
# $$
#
# Now we'll apply the formulas in this system
# + hide-output=false
# Use the above formulas to create the optimal policies for b_{t+1} and c_t
b_pol = G * (inv(I - β * A)) * (A - I)
c_pol = (1 - β) * (G * inv(I - β * A))
# Create the A matrix for a LinearStateSpace instance
A_LSS1 = vcat(A, b_pol)
A_LSS2 = [0, 0, 0, 1]
A_LSS = hcat(A_LSS1, A_LSS2)
# Create the C matrix for LSS methods
C_LSS = vcat(C, 0)
# Create the G matrix for LSS methods
G_LSS1 = vcat(G, c_pol)
G_LSS2 = vcat(0, -(1 - β))
G_LSS = hcat(G_LSS1, G_LSS2)
# Use the following values to start everyone off at b=0, initial incomes zero
μ_0 = [1.0, 0.0, 0.0, 0.0]
Σ_0 = zeros(4, 4)
# -
# A_LSS calculated as we have here should equal ABF calculated above using the LQ model
# + hide-output=false
ABF - A_LSS
# -
# Now compare pertinent elements of c_pol and F
# + hide-output=false
println(c_pol, -F)
# -
# We have verified that the two methods give the same solution.
#
# Now let's create instances of the [LSS](https://github.com/QuantEcon/QuantEcon.jl/blob/master/src/lss.jl) type and use it to do some interesting experiments.
#
# To do this, we'll use the outcomes from our second method.
# ## Two Example Economies
#
# In the spirit of Bewley models [[Bew86]](../zreferences.html#bewley86), we'll generate panels of consumers.
#
# The examples differ only in the initial states with which we endow the consumers.
#
# All other parameter values are kept the same in the two examples
#
# - In the first example, all consumers begin with zero nonfinancial income and zero debt.
#
# - The consumers are thus *ex ante* identical.
#
# - In the second example, while all begin with zero debt, we draw their initial income levels from the invariant distribution of nonfinancial income.
#
# - Consumers are *ex ante* heterogeneous.
#
#
#
# In the first example, consumers' nonfinancial income paths display
# pronounced transients early in the sample
#
# - these will affect outcomes in striking ways.
#
#
# Those transient effects will not be present in the second example.
#
# We use methods affiliated with the [LSS](https://github.com/QuantEcon/QuantEcon.jl/blob/master/src/lss.jl) type to simulate the model.
# ### First set of initial conditions
#
# We generate 25 paths of the exogenous non-financial income process and the associated optimal consumption and debt paths.
#
# In a first set of graphs, darker lines depict a particular sample path, while the lighter lines describe 24 other paths.
#
# A second graph plots a collection of simulations against the population distribution that we extract from the LSS instance `lss`.
#
# Comparing sample paths with population distributions at each date $ t $ is a useful exercise; see [our discussion](../tools_and_techniques/lln_clt.html#lln-mr) of the laws of large numbers.
# + hide-output=false
lss = LSS(A_LSS, C_LSS, G_LSS, mu_0=μ_0, Sigma_0=Σ_0);
# -
# ### Population and sample panels
#
# In the code below, we use the [LSS](https://github.com/QuantEcon/QuantEcon.jl/blob/master/src/lss.jl) type to
#
# - compute and plot population quantiles of the distributions of
# consumption and debt for a population of consumers
# - simulate a group of 25 consumers and plot sample paths on the same
# graph as the population distribution
# + hide-output=false
function income_consumption_debt_series(A, C, G, μ_0, Σ_0, T = 150, npaths = 25)
lss = LSS(A, C, G, mu_0=μ_0, Sigma_0=Σ_0)
# simulation/Moment Parameters
moment_generator = moment_sequence(lss)
# simulate various paths
bsim = zeros(npaths, T)
csim = zeros(npaths, T)
ysim = zeros(npaths, T)
for i in 1:npaths
sims = simulate(lss,T)
bsim[i, :] = sims[1][end, :]
csim[i, :] = sims[2][2, :]
ysim[i, :] = sims[2][1, :]
end
# get the moments
cons_mean = zeros(T)
cons_var = similar(cons_mean)
debt_mean = similar(cons_mean)
debt_var = similar(cons_mean)
for (idx, t) = enumerate(moment_generator)
(μ_x, μ_y, Σ_x, Σ_y) = t
cons_mean[idx], cons_var[idx] = μ_y[2], Σ_y[2, 2]
debt_mean[idx], debt_var[idx] = μ_x[4], Σ_x[4, 4]
idx == T && break
end
return bsim, csim, ysim, cons_mean, cons_var, debt_mean, debt_var
end
function consumption_income_debt_figure(bsim, csim, ysim)
# get T
T = size(bsim, 2)
# create first figure
xvals = 1:T
# plot consumption and income
plt_1 = plot(csim[1,:], label="c", color=:blue, lw=2)
plot!(plt_1, ysim[1, :], label="y", color=:green, lw=2)
plot!(plt_1, csim', alpha=0.1, color=:blue, label="")
plot!(plt_1, ysim', alpha=0.1, color=:green, label="")
plot!(plt_1, title="Nonfinancial Income, Consumption, and Debt",
xlabel="t", ylabel="y and c",legend=:bottomright)
# plot debt
plt_2 = plot(bsim[1,: ], label="b", color=:red, lw=2)
plot!(plt_2, bsim', alpha=0.1, color=:red,label="")
plot!(plt_2, xlabel="t", ylabel="debt",legend=:bottomright)
plot(plt_1, plt_2, layout=(2,1), size=(800,600))
end
function consumption_debt_fanchart(csim, cons_mean, cons_var,
bsim, debt_mean, debt_var)
# get T
T = size(bsim, 2)
# create percentiles of cross-section distributions
cmean = mean(cons_mean)
c90 = 1.65 * sqrt.(cons_var)
c95 = 1.96 * sqrt.(cons_var)
c_perc_95p, c_perc_95m = cons_mean + c95, cons_mean - c95
c_perc_90p, c_perc_90m = cons_mean + c90, cons_mean - c90
# create percentiles of cross-section distributions
dmean = mean(debt_mean)
d90 = 1.65 * sqrt.(debt_var)
d95 = 1.96 * sqrt.(debt_var)
d_perc_95p, d_perc_95m = debt_mean + d95, debt_mean - d95
d_perc_90p, d_perc_90m = debt_mean + d90, debt_mean - d90
xvals = 1:T
# first fanchart
plt_1=plot(xvals, cons_mean, color=:black, lw=2, label="")
plot!(plt_1, xvals, Array(csim'), color=:black, alpha=0.25, label="")
plot!(xvals, fillrange=[c_perc_95m, c_perc_95p], alpha=0.25, color=:blue, label="")
plot!(xvals, fillrange=[c_perc_90m, c_perc_90p], alpha=0.25, color=:red, label="")
plot!(plt_1, title="Consumption/Debt over time",
ylim=(cmean-15, cmean+15), ylabel="consumption")
# second fanchart
plt_2=plot(xvals, debt_mean, color=:black, lw=2,label="")
plot!(plt_2, xvals, Array(bsim'), color=:black, alpha=0.25,label="")
plot!(xvals, fillrange=[d_perc_95m, d_perc_95p], alpha=0.25, color=:blue,label="")
plot!(xvals, fillrange=[d_perc_90m, d_perc_90p], alpha=0.25, color=:red,label="")
plot!(plt_2, ylabel="debt", xlabel="t")
plot(plt_1, plt_2, layout=(2,1), size=(800,600))
end
# -
# Now let's create figures with initial conditions of zero for $ y_0 $ and $ b_0 $
# + hide-output=false
out = income_consumption_debt_series(A_LSS, C_LSS, G_LSS, ฮผ_0, ฮฃ_0)
bsim0, csim0, ysim0 = out[1:3]
cons_mean0, cons_var0, debt_mean0, debt_var0 = out[4:end]
consumption_income_debt_figure(bsim0, csim0, ysim0)
# + hide-output=false
consumption_debt_fanchart(csim0, cons_mean0, cons_var0,
bsim0, debt_mean0, debt_var0)
# -
# Here is what is going on in the above graphs.
#
# For our simulation, we have set initial conditions $ b_0 = y_{-1} = y_{-2} = 0 $.
#
# Because $ y_{-1} = y_{-2} = 0 $, nonfinancial income $ y_t $ starts far below its stationary mean $ \mu_{y, \infty} $ and rises early in each simulation.
#
# Recall from the [previous lecture](perm_income.html) that we can represent the optimal decision rule for consumption in terms of the **co-integrating relationship**.
#
#
# <a id='equation-old12'></a>
# $$
# (1-\beta) b_t + c_t = (1-\beta) E_t \sum_{j=0}^\infty \beta^j y_{t+j} \tag{6}
# $$
#
# So at time $ 0 $ we have
#
# $$
# c_0 = (1-\beta) E_0 \sum_{j=0}^\infty \beta^j y_{j}
# $$
#
# This tells us that consumption starts at the income that would be paid by an annuity whose value equals the expected discounted value of nonfinancial income at time $ t=0 $.
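A quick way to internalize the annuity interpretation (a standalone Python sketch; the value of $\beta$ matches the lecture's calibration): for a constant income stream, the annuity payment $(1-\beta)$ times the expected discounted present value is just the income itself.

```python
beta = 0.95
y = 2.0  # a constant income stream y_t = y

# Expected discounted present value of the stream: y / (1 - beta).
# Here computed as a (long) truncated geometric sum.
pv = sum(beta**j * y for j in range(10_000))

# The annuity payment on that present value recovers y itself
annuity = (1 - beta) * pv
print(round(annuity, 6))  # 2.0
```

With stochastic income the same logic applies to the *expected* discounted value, which is what equation (6) asserts.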
#
# To support that level of consumption, the consumer borrows a lot early and consequently builds up substantial debt.
#
# In fact, he or she incurs so much debt that eventually, in the stochastic steady state, he consumes less each period than his nonfinancial income.
#
# He uses the gap between consumption and nonfinancial income mostly to service the interest payments due on his debt.
#
# Thus, when we look at the panel of debt in the accompanying graph, we see that this is a group of *ex ante* identical people each of whom starts with zero debt.
#
# All of them accumulate debt in anticipation of rising nonfinancial income.
#
# They expect their nonfinancial income to rise toward the invariant distribution of income, a consequence of our having started them at $ y_{-1} = y_{-2} = 0 $.
# #### Cointegration residual
#
# The following figure plots realizations of the left side of [(6)](#equation-old12), which,
# [as discussed in our last lecture](perm_income.html#coint-pi), is called the **cointegrating residual**.
#
# As mentioned above, the right side can be thought of as an
# annuity payment on the expected present value of future income
# $ E_t \sum_{j=0}^\infty \beta^j y_{t+j} $.
#
# Early along a realization, $ c_t $ is approximately constant while
# $ (1-\beta) b_t $ and
# $ (1-\beta) E_t \sum_{j=0}^\infty \beta^j y_{t+j} $ both rise
# markedly as the household's present value of income and borrowing rise
# pretty much together.
#
# This example illustrates the following point: the definition
# of cointegration implies that the cointegrating residual is
# *asymptotically* covariance stationary, not *covariance stationary*.
#
# The cointegrating residual for the specification with zero income and zero
# debt initially has a notable transient component that dominates its
# behavior early in the sample.
#
# By altering initial conditions, we shall remove this transient in our second example to be presented below
# + hide-output=false
function cointegration_figure(bsim, csim)
# create figure
plot((1 - β) * bsim[1, :] + csim[1, :], color=:black, lw=2, label="")
plot!((1 - β) * bsim' + csim', color=:black, alpha=.1, label="")
plot!(title="Cointegration of Assets and Consumption", xlabel="t")
end
cointegration_figure(bsim0, csim0)
# -
# ### A "borrowers and lenders" closed economy
#
# When we set $ y_{-1} = y_{-2} = 0 $ and $ b_0 = 0 $ in the
# preceding exercise, we make debt "head north" early in the sample.
#
# Average debt in the cross-section rises and approaches an asymptote.
#
# We can regard these as outcomes of a "small open economy" that
# borrows from abroad at the fixed gross interest rate $ R = r+1 $ in
# anticipation of rising incomes.
#
# So with the economic primitives set as above, the economy converges to a
# steady state in which there is an excess aggregate supply of risk-free
# loans at a gross interest rate of $ R $.
#
# This excess supply is filled by "foreign lenders" willing to make those loans.
#
# We can use virtually the same code to rig a "poor man's Bewley [[Bew86]](../zreferences.html#bewley86) model" in the following way
#
# - as before, we start everyone at $ b_0 = 0 $
# - But instead of starting everyone at $ y_{-1} = y_{-2} = 0 $, we
# draw $ \begin{bmatrix} y_{-1} \\ y_{-2} \end{bmatrix} $ from
# the invariant distribution of the $ \{y_t\} $ process
#
#
# This rigs a closed economy in which people are borrowing and lending
# with each other at a gross risk-free interest rate of
# $ R = \beta^{-1} $.
#
# Across the group of people being analyzed, risk-free loans are in zero excess supply.
#
# We have arranged primitives so that $ R = \beta^{-1} $ clears the market for risk-free loans at zero aggregate excess supply.
#
# So the risk-free loans are being made from one person to another within our closed set of agents.
#
# There is no need for foreigners to lend to our group.
#
# Let's have a look at the corresponding figures
# + hide-output=false
out = income_consumption_debt_series(A_LSS, C_LSS, G_LSS, mxbewley, sxbewley)
bsimb, csimb, ysimb = out[1:3]
cons_meanb, cons_varb, debt_meanb, debt_varb = out[4:end]
consumption_income_debt_figure(bsimb, csimb, ysimb)
# + hide-output=false
consumption_debt_fanchart(csimb, cons_meanb, cons_varb,
bsimb, debt_meanb, debt_varb)
# -
# The graphs confirm the following outcomes:
#
# - As before, the consumption distribution spreads out over time.
#
#
# But now there is some initial dispersion because there is *ex ante* heterogeneity in the initial draws of $ \begin{bmatrix} y_{-1} \\ y_{-2} \end{bmatrix} $.
#
# - As before, the cross-section distribution of debt spreads out over time.
# - Unlike before, the average level of debt stays at zero, confirming that this is a closed borrower-and-lender economy.
# - Now the cointegrating residual seems stationary, and not just asymptotically stationary.
#
#
# Let's have a look at the cointegration figure
# + hide-output=false
cointegration_figure(bsimb, csimb)
| dynamic_programming/perm_income_cons.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # <center> Keras </center>
# ## <center>1.8 Overfitting</center>
# # Overfitting
#
# Overfitting refers to a model that models the training data too well.
#
# Overfitting happens when a model learns the detail and noise in the training data to the extent that it negatively impacts the performance of the model on new data. This means that the noise or random fluctuations in the training data are picked up and learned as concepts by the model. The problem is that these concepts do not apply to new data and negatively impact the model's ability to generalize.
#
#
# <img src="https://i.stack.imgur.com/13vdb.png" width = "70%" /><br>
# There have only been 56 presidential elections and 43 presidents. That is not a lot of data to learn from. When the predictor space expands to include things like having false teeth and the Scrabble point value of names, it's pretty easy for the model to go from fitting the generalizable features of the data (the signal) to matching the noise. When this happens, the fit on the historical data may improve, but the model will fail miserably when used to make inferences about future presidential elections.
# <br> <br>
#
# Here's a graph which illustrates overfitting:
# <img src="img/Overfitting.png" /><br>
#
# The left graph fails to capture the underlying behaviour (it underfits), whereas the rightmost graph overfits the training data. Such an overfitted network will perform very accurately on training data but not so well on test data. The middle figure represents a good compromise.
#
# <br>
# ### Generalization
#
#
# Generalization refers to how well the concepts learned by a machine learning model apply to specific examples not seen by the model when it was learning.
# ## Best Practice
#
# It makes sense to first overfit your network for various different reasons:
# - To find out where the boundary is
# - To gain confidence that the problem can be solved
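Before the Keras example below, here is a minimal, self-contained NumPy sketch of the same phenomenon (illustrative synthetic data, unrelated to MNIST): a high-degree polynomial driven through noisy samples of a straight line beats the low-degree fit on training error while losing on held-out points.

```python
import numpy as np

# Clean underlying relationship y = 2x + 1, plus a fixed alternating
# "noise" pattern on the training set (deterministic for reproducibility)
x_train = np.linspace(0, 1, 12)
noise = 0.3 * (-1.0) ** np.arange(12)
y_train = 2 * x_train + 1 + noise

# Held-out points are evaluated against the clean underlying function
x_test = np.linspace(0.03, 0.97, 50)
y_test = 2 * x_test + 1

def train_test_mse(degree):
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    return train_mse, test_mse

train_lo, test_lo = train_test_mse(1)    # capacity matched to the signal
train_hi, test_hi = train_test_mse(11)   # enough capacity to memorize the noise

# The flexible model "wins" on training data and loses on held-out data
print(f"train MSE: deg1={train_lo:.4f}  deg11={train_hi:.2e}")
print(f"test MSE:  deg1={test_lo:.4f}  deg11={test_hi:.4f}")
```

The degree-11 polynomial essentially interpolates the 12 noisy points, so its training error collapses toward zero while its held-out error blows up, which is exactly the gap the validation curves in the Keras run below are meant to expose.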
# # Code
# +
# Importing the MNIST dataset
from keras.datasets import mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
# Processing the input data
train_images = train_images.reshape((60000, 28 * 28))
train_images = train_images.astype('float32') / 255
test_images = test_images.reshape((10000, 28 * 28))
test_images = test_images.astype('float32') / 255
# Processing the output data
from keras.utils import to_categorical
train_labels = to_categorical(train_labels)
test_labels = to_categorical(test_labels)
# Build a network
from keras import models
from keras import layers
network = models.Sequential()
network.add(layers.Dense(units=512, activation='relu', input_shape=(28 * 28,)))
network.add(layers.Dense(units=10, activation='softmax'))
# Compile the network
network.compile(optimizer='rmsprop',
loss='categorical_crossentropy',
metrics=['accuracy'])
# Train the network
history = network.fit(train_images, train_labels, epochs=5, batch_size=128,
verbose=1, validation_data=(test_images, test_labels))
# -
import matplotlib.pyplot as plt
def plot_training_history(history):
plt.plot(history.history['acc'])
plt.plot(history.history['val_acc'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
#loss
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
# Plot the training results
plot_training_history(history)
# # Task
#
# Questions:
# - How to identify if a network is overfitting?
# - What could be the reasons for overfitting?
# - What countermeasures can be taken to avoid overfitting?
# - Should we in any case avoid overfitting?
# # Feedback
# <a href = "http://goto/ml101_doc/Keras13">Feedback: Overfitting</a> <br>
# # Navigation
# <div>
# <span> <h3 style="display:inline"><< Prev: <a href = "Keras12.ipynb">Batch size and Epochs</a></h3> </span>
# <span style="float: right"><h3 style="display:inline">Next: <a href = "Keras14.ipynb">Dropout</a> >> </h3></span>
# </div>
| Keras/keras13.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: jax
# language: python
# name: jax
# ---
# ## 5.3 Implementing an LSTM that predicts time-series data
# Making predictions on time-series data using an LSTM
# +
# set to use CPU
#import os
#os.environ['CUDA_VISIBLE_DEVICES'] = '-1'
# -
# ### 5.3.1 Importing libraries and packages
# 1. Import the libraries needed for the LSTM time-series implementation
# +
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn import model_selection
import seaborn as sns
from keras import models, layers
from keraspp import skeras
# -
# ### 5.3.2 Running the code and viewing the results
# 2. Before going into the detailed code, the part that creates and runs the machine
def main():
machine = Machine()
machine.run(epochs=400)
# ### 5.3.3 Training and evaluating
# 3. The Machine class is the platform that trains and evaluates the time-series LSTM
class Machine():
def __init__(self):
self.data = Dataset()
shape = self.data.X.shape[1:]
self.model = rnn_model(shape)
def run(self, epochs=400):
d = self.data
X_train, X_test, y_train, y_test = d.X_train, d.X_test, d.y_train, d.y_test
X, y = d.X, d.y
m = self.model
h = m.fit(X_train, y_train, epochs=epochs, validation_data=[X_test, y_test], verbose=0)
skeras.plot_loss(h)
plt.title('History of training')
plt.show()
yp = m.predict(X_test)
print('Loss:', m.evaluate(X_test, y_test))
plt.plot(yp, label='Prediction')
plt.plot(y_test, label='Original')
plt.legend(loc=0)
plt.title('Validation Results')
plt.show()
yp = m.predict(X_test).reshape(-1)
print('Loss:', m.evaluate(X_test, y_test))
print(yp.shape, y_test.shape)
df = pd.DataFrame()
df['Sample'] = list(range(len(y_test))) * 2
df['Normalized #Passengers'] = np.concatenate([y_test, yp], axis=0)
df['Type'] = ['Original'] * len(y_test) + ['Prediction'] * len(yp)
plt.figure(figsize=(7, 5))
sns.barplot(x="Sample", y="Normalized #Passengers",
hue="Type", data=df)
plt.ylabel('Normalized #Passengers')
plt.show()
yp = m.predict(X)
plt.plot(yp, label='Prediction')
plt.plot(y, label='Original')
plt.legend(loc=0)
plt.title('All Results')
plt.show()
# ### 5.3.4 LSTM time-series regression modeling
# 4. Constructing the LSTM model for regression modeling of the time-series data
def rnn_model(shape):
m_x = layers.Input(shape=shape) #X.shape[1:]
m_h = layers.LSTM(10)(m_x)
m_y = layers.Dense(1)(m_h)
m = models.Model(m_x, m_y)
m.compile('adam', 'mean_squared_error')
m.summary()
return m
# ### 5.3.5 Loading the data
# 5. The data is loaded through a Dataset class
class Dataset:
def __init__(self, fname='international-airline-passengers.csv', D=12):
data_dn = load_data(fname=fname)
self.X, self.y = get_Xy(data_dn, D=D)
self.X_train, self.X_test, self.y_train, self.y_test = \
model_selection.train_test_split(self.X, self.y,
test_size=0.2, random_state=42)
def load_data(fname='international-airline-passengers.csv'):
dataset = pd.read_csv(fname, usecols=[1], engine='python', skipfooter=3)
data = dataset.values.reshape(-1)
plt.plot(data)
plt.xlabel('Time'); plt.ylabel('#Passengers')
plt.title('Original Data')
plt.show()
# data normalize
data_dn = (data - np.mean(data)) / np.std(data) / 5
plt.plot(data_dn)
plt.xlabel('Time'); plt.ylabel('Normalized #Passengers')
plt.title('Normalized data by $E[]$ and $5\sigma$')
plt.show()
return data_dn
def get_Xy(data, D=12):
# make X and y
X_l = []
y_l = []
N = len(data)
assert N > D, "N should be larger than D, where N is len(data)"
for ii in range(N-D-1):
X_l.append(data[ii:ii+D])
y_l.append(data[ii+D])
X = np.array(X_l)
X = X.reshape(X.shape[0], X.shape[1], 1)
y = np.array(y_l)
print(X.shape, y.shape)
return X, y
main()
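The windowing performed by `get_Xy` is easy to check on a toy series. The standalone Python sketch below mirrors the loop above with `D=3`: each sample is `D` consecutive values and the target is the value that follows them, reshaped to the `(samples, timesteps, features)` layout an LSTM expects.

```python
import numpy as np

data = np.arange(8, dtype=float)   # toy "series": 0, 1, ..., 7
D = 3

# Same windowing scheme as get_Xy (note the loop also drops the last
# possible window, mirroring the range(N - D - 1) above)
X_l, y_l = [], []
for ii in range(len(data) - D - 1):
    X_l.append(data[ii:ii + D])
    y_l.append(data[ii + D])

X = np.array(X_l).reshape(-1, D, 1)  # (samples, timesteps, features)
y = np.array(y_l)

print(X.shape, y.shape)        # (4, 3, 1) (4,)
print(X[0].ravel(), y[0])      # [0. 1. 2.] 3.0
```

So the window `[0, 1, 2]` predicts `3`, `[1, 2, 3]` predicts `4`, and so on, which is exactly the supervised problem the LSTM is trained on.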
# ---
# ### Full code
# +
# %%
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn import model_selection
from keras import models, layers
import seaborn as sns
from keraspp import skeras
# %%
def main():
machine = Machine()
machine.run(epochs=400)
class Machine():
def __init__(self):
self.data = Dataset()
shape = self.data.X.shape[1:]
self.model = rnn_model(shape)
def run(self, epochs=400):
d = self.data
X_train, X_test, y_train, y_test = d.X_train, d.X_test, d.y_train, d.y_test
X, y = d.X, d.y
m = self.model
h = m.fit(X_train, y_train, epochs=epochs, validation_data=[X_test, y_test], verbose=0)
skeras.plot_loss(h)
plt.title('History of training')
plt.show()
yp = m.predict(X_test)
print('Loss:', m.evaluate(X_test, y_test))
plt.plot(yp, label='Prediction')
plt.plot(y_test, label='Original')
plt.legend(loc=0)
plt.title('Validation Results')
plt.show()
yp = m.predict(X_test).reshape(-1)
print('Loss:', m.evaluate(X_test, y_test))
print(yp.shape, y_test.shape)
df = pd.DataFrame()
df['Sample'] = list(range(len(y_test))) * 2
df['Normalized #Passengers'] = np.concatenate([y_test, yp], axis=0)
df['Type'] = ['Original'] * len(y_test) + ['Prediction'] * len(yp)
plt.figure(figsize=(7, 5))
sns.barplot(x="Sample", y="Normalized #Passengers",
hue="Type", data=df)
plt.ylabel('Normalized #Passengers')
plt.show()
yp = m.predict(X)
plt.plot(yp, label='Prediction')
plt.plot(y, label='Original')
plt.legend(loc=0)
plt.title('All Results')
plt.show()
def rnn_model(shape):
m_x = layers.Input(shape=shape) #X.shape[1:]
m_h = layers.LSTM(10)(m_x)
m_y = layers.Dense(1)(m_h)
m = models.Model(m_x, m_y)
m.compile('adam', 'mean_squared_error')
m.summary()
return m
class Dataset:
def __init__(self, fname='international-airline-passengers.csv', D=12):
data_dn = load_data(fname=fname)
self.X, self.y = get_Xy(data_dn, D=D)
self.X_train, self.X_test, self.y_train, self.y_test = \
model_selection.train_test_split(self.X, self.y,
test_size=0.2, random_state=42)
def load_data(fname='international-airline-passengers.csv'):
dataset = pd.read_csv(fname, usecols=[1], engine='python', skipfooter=3)
data = dataset.values.reshape(-1)
plt.plot(data)
plt.xlabel('Time'); plt.ylabel('#Passengers')
plt.title('Original Data')
plt.show()
# data normalize
data_dn = (data - np.mean(data)) / np.std(data) / 5
plt.plot(data_dn)
plt.xlabel('Time'); plt.ylabel('Normalized #Passengers')
plt.title('Normalized data by $E[]$ and $5\sigma$')
plt.show()
return data_dn
def get_Xy(data, D=12):
# make X and y
X_l = []
y_l = []
N = len(data)
assert N > D, "N should be larger than D, where N is len(data)"
for ii in range(N-D-1):
X_l.append(data[ii:ii+D])
y_l.append(data[ii+D])
X = np.array(X_l)
X = X.reshape(X.shape[0], X.shape[1], 1)
y = np.array(y_l)
print(X.shape, y.shape)
return X, y
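# The windowing logic in `get_Xy` can be illustrated on a tiny array. This is a standalone sketch, independent of the notebook's passenger data:

```python
import numpy as np

# Build (X, y) pairs with a sliding window of length D, mirroring get_Xy:
# each sample is D consecutive values; the target is the value that follows.
def sliding_window(data, D):
    X = np.array([data[i:i + D] for i in range(len(data) - D - 1)])
    y = np.array([data[i + D] for i in range(len(data) - D - 1)])
    return X.reshape(X.shape[0], X.shape[1], 1), y

X, y = sliding_window(np.arange(10, dtype=float), D=3)
print(X.shape, y.shape)  # (6, 3, 1) (6,)
print(y)                 # [3. 4. 5. 6. 7. 8.]
```

# The trailing dimension of 1 is the feature axis expected by the LSTM input shape used in `rnn_model`.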
main()
# -
| colab_py37_k28/nb_ex5_2_lstm_airplane_cl.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # The filter() function
# +
numeros = [2, 3, 4, 5, 6, 7, 8, 9, 10]
def multiple(numero):
if numero % 5 == 0:
return True
filter(multiple, numeros)
# -
list(filter(multiple, numeros))
multiples = filter(multiple, numeros)
next(multiples)
next(multiples)
# ## Filter + Anonymous lambda functions
list(filter(lambda numero: numero % 5 == 0, numeros))
multiples = list(filter(lambda numero: numero % 5 == 0, numeros))
multiples
for n in multiples:
print(n)
# +
class Person:
def __init__(self, name, age):
self.name = name
self.age = age
def __str__(self):
return "{} with {} years old".format(self.name, self.age)
persons = [
Person("Peter", 12),
Person("Maria", 45),
Person("Spike", 22),
Person("Luther", 15),
Person("Jonatan", 78)
]
# -
minors = filter(lambda person: person.age < 18, persons)
minors
for minor in minors:
print(minor)
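# For comparison, each `filter` call above has an equivalent list comprehension, which is often considered more idiomatic in Python:

```python
numeros = [2, 3, 4, 5, 6, 7, 8, 9, 10]

# Equivalent to list(filter(lambda n: n % 5 == 0, numeros))
multiples = [n for n in numeros if n % 5 == 0]
multiples  # -> [5, 10]
```

# Unlike `filter`, which returns a lazy iterator, the comprehension builds the whole list eagerly.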
| Fase 4 - Temas avanzados/Tema 15 - Funcionalidades avanzadas/Leccion 06 - Funcion filter.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Problems
# ---
# Define a function that takes the Denavit-Hartenberg parameters as arguments and creates a homogeneous transformation matrix.
# + deletable=false nbgrader={"checksum": "923a03b899b400c77ec16655a8ae1f2c", "grade": false, "grade_id": "cell-d796d71f88ca3f1a", "locked": false, "schema_version": 1, "solution": true}
def DH_simbolico(a, d, α, θ):
from sympy import Matrix, sin, cos
# YOUR CODE HERE
raise NotImplementedError()
# + deletable=false editable=false nbgrader={"checksum": "7c3a56d78554424e5a4b5db0fe78f499", "grade": true, "grade_id": "cell-2e02d67eabed1e86", "locked": true, "points": 2, "schema_version": 1, "solution": false}
from sympy import Matrix, sin, cos, pi
from nose.tools import assert_equal
assert_equal(DH_simbolico(0,0,0,pi/2), Matrix([[0,-1,0,0],[1,0,0,0], [0,0,1,0],[0,0,0,1]]))
assert_equal(DH_simbolico(0,0,pi/2,0), Matrix([[1,0,0,0],[0,0,-1,0], [0,1,0,0],[0,0,0,1]]))
assert_equal(DH_simbolico(0,1,0,0), Matrix([[1,0,0,0],[0,1,0,0], [0,0,1,1],[0,0,0,1]]))
assert_equal(DH_simbolico(1,0,0,0), Matrix([[1,0,0,1],[0,1,0,0], [0,0,1,0],[0,0,0,1]]))
# -
# ---
# Create a function that takes the degree-of-freedom parameters of a PUMA-type manipulator as arguments and returns the homogeneous transformation matrices associated with each link.
# + deletable=false nbgrader={"checksum": "3109c2b6f4f7dfbd45bb6ce1edd15ef9", "grade": false, "grade_id": "cell-8759bf18b64c88c0", "locked": false, "schema_version": 1, "solution": true}
def cinematica_PUMA(q1, q2, q3):
from sympy import pi, var
var("l1:4")
# YOUR CODE HERE
raise NotImplementedError()
return A1, A2, A3
# + deletable=false editable=false nbgrader={"checksum": "0f674094b605f0ebce3677160614c0a5", "grade": true, "grade_id": "cell-5bdcfe97ca2cef34", "locked": true, "points": 2, "schema_version": 1, "solution": false}
from nose.tools import assert_equal
from sympy import pi, var, Matrix
var("l1:4")
A1, A2, A3 = cinematica_PUMA(0, 0, 0)
assert_equal(A1*A2*A3, Matrix([[1,0,0,l2+l3], [0,0,-1,0], [0,1,0,l1], [0,0,0,1]]))
A1, A2, A3 = cinematica_PUMA(pi/2, 0, 0)
assert_equal(A1*A2*A3, Matrix([[0,0,1,0], [1,0,0,l2+l3], [0,1,0,l1], [0,0,0,1]]))
A1, A2, A3 = cinematica_PUMA(0, pi/2, 0)
assert_equal(A1*A2*A3, Matrix([[0,-1,0,0], [0,0,-1,0], [1,0,0,l1+l2+l3], [0,0,0,1]]))
A1, A2, A3 = cinematica_PUMA(0, 0, pi/2)
assert_equal(A1*A2*A3, Matrix([[0,-1,0,l2], [0,0,-1,0], [1,0,0,l1+l3], [0,0,0,1]]))
# -
# ---
# Create a function that, given the manipulator angles, returns the total transformation of the manipulator (make use of the function created in the second problem).
# + deletable=false nbgrader={"checksum": "92c599a7b4948093e5522e4882f821a8", "grade": false, "grade_id": "cell-67941966e2bb0f7f", "locked": false, "schema_version": 1, "solution": true}
def transformacion_PUMA(q1, q2, q3):
from sympy import pi, var
var("l1:4")
# YOUR CODE HERE
raise NotImplementedError()
# + deletable=false editable=false nbgrader={"checksum": "3c294ce18cd9d95fbbc4b506a21e7b2e", "grade": true, "grade_id": "cell-1360716371127399", "locked": true, "points": 1, "schema_version": 1, "solution": false}
from nose.tools import assert_equal
from sympy import pi, var, Matrix
var("l1:4")
assert_equal(transformacion_PUMA(0, 0, 0), Matrix([[1,0,0,l2+l3], [0,0,-1,0], [0,1,0,l1], [0,0,0,1]]))
assert_equal(transformacion_PUMA(pi/2, 0, 0), Matrix([[0,0,1,0], [1,0,0,l2+l3], [0,1,0,l1], [0,0,0,1]]))
assert_equal(transformacion_PUMA(0, pi/2, 0), Matrix([[0,-1,0,0], [0,0,-1,0], [1,0,0,l1+l2+l3], [0,0,0,1]]))
assert_equal(transformacion_PUMA(0, 0, pi/2), Matrix([[0,-1,0,l2], [0,0,-1,0], [1,0,0,l1+l3], [0,0,0,1]]))
# -
# ---
# Create a function that, given the manipulator angles, plots the positions of the links of the manipulator from the first problem (make use of the functions created in the first and second problems, slightly modified to accept numeric matrices, as well as the function created in the previous lab for plotting a robotic system).
# + deletable=false nbgrader={"checksum": "74d7f8a5e1747a92091e03efeae1f7b7", "grade": false, "grade_id": "cell-d9e16df1267dfeb6", "locked": false, "schema_version": 1, "solution": true}
def DH_numerico(a, d, α, θ):
# YOUR CODE HERE
raise NotImplementedError()
def cinematica_PUMA(q1, q2, q3):
# Assume the lengths are all equal to 1
l1, l2, l3 = 1, 1, 1
from numpy import pi
# YOUR CODE HERE
raise NotImplementedError()
return A1, A2, A3
def grafica_PUMA(q1, q2, q3):
from numpy import matrix
# YOUR CODE HERE
raise NotImplementedError()
fig = figure(figsize=(8, 8))
ax = fig.add_subplot(111, projection='3d')
ax.plot(xs, ys, zs, "-o")
ax.set_xlim(-1.1, 1.1)
ax.set_ylim(-1.1, 1.1)
ax.set_zlim(-0.1, 2.1)
return ax
# + deletable=false editable=false nbgrader={"checksum": "2f4b0e7910b47d1a6526a83b0043688a", "grade": true, "grade_id": "cell-4306e8821b779c0e", "locked": true, "points": 3, "schema_version": 1, "solution": false}
# %matplotlib inline
from matplotlib.pyplot import figure, plot, style
from mpl_toolkits.mplot3d import Axes3D
style.use("ggplot")
from numpy.testing import assert_allclose
from numpy import array
ax = grafica_PUMA(0,0.5,0.5)
ls = ax.get_lines()
assert_allclose(ls[0].get_xdata(), array([0, 0, 0.8775, 1.417885]), rtol=1e-01, atol=1e-01)
assert_allclose(ls[0].get_ydata(), array([-0.0384900179, 0, 0.00915, 0.03809]), rtol=1e-01, atol=1e-01)
# -
# ---
# Use the ```interact``` function to manipulate the position of the manipulator so that its position is approximately $q_1=0.6rad$, $q_2=0.2rad$ and $q_3 = -0.8rad$
# + deletable=false nbgrader={"checksum": "5053e6bed7e8e3691d63e3441e7f3846", "grade": false, "grade_id": "cell-b66fd1a7b96109ff", "locked": false, "schema_version": 1, "solution": true}
# %matplotlib inline
from matplotlib.pyplot import figure, plot, style
from mpl_toolkits.mplot3d import Axes3D
style.use("ggplot")
from ipywidgets import interact
from numpy import pi
τ = 2*pi
# YOUR CODE HERE
raise NotImplementedError()
# + deletable=false editable=false nbgrader={"checksum": "7addcff42e50d4344f177a24d46c2b16", "grade": true, "grade_id": "cell-f447987899e058b0", "locked": true, "points": 2, "schema_version": 1, "solution": false}
from nose.tools import assert_almost_equal
from numpy import pi
τ = 2*pi
| Practicas/practica4/Problemas.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %matplotlib inline
#
# Creating a GeoDataFrame from a DataFrame with coordinates
# ---------------------------------------------------------
#
# This example shows how to create a ``GeoDataFrame`` when starting from
# a *regular* ``DataFrame`` that has coordinates either in WKT
# (`well-known text <https://en.wikipedia.org/wiki/Well-known_text>`_)
# format, or in
# two columns.
#
#
#
import pandas as pd
import geopandas
from shapely.geometry import Point
import matplotlib.pyplot as plt
# From longitudes and latitudes
# =============================
#
# First, let's consider a ``DataFrame`` containing cities and their respective
# longitudes and latitudes.
#
#
df = pd.DataFrame(
{'City': ['Buenos Aires', 'Brasilia', 'Santiago', 'Bogota', 'Caracas'],
'Country': ['Argentina', 'Brazil', 'Chile', 'Colombia', 'Venezuela'],
'Latitude': [-34.58, -15.78, -33.45, 4.60, 10.48],
'Longitude': [-58.66, -47.91, -70.66, -74.08, -66.86]})
# A ``GeoDataFrame`` needs a ``shapely`` object, so we create a new column
# **Coordinates** as a tuple of **Longitude** and **Latitude**:
#
#
df['Coordinates'] = list(zip(df.Longitude, df.Latitude))
# Then, we transform the tuples to ``Point`` objects:
#
#
df['Coordinates'] = df['Coordinates'].apply(Point)
# Now, we can create the ``GeoDataFrame`` by setting ``geometry`` with the
# coordinates created previously.
#
#
gdf = geopandas.GeoDataFrame(df, geometry='Coordinates')
# ``gdf`` looks like this:
#
#
print(gdf.head())
# Finally, we plot the coordinates over a country-level map.
#
#
world = geopandas.read_file(geopandas.datasets.get_path('naturalearth_lowres'))
world.head()
# +
# We restrict to South America.
ax = world[world.continent == 'South America'].plot(color='white', edgecolor='black')
# We can now plot our GeoDataFrame.
gdf.plot(ax=ax, color='red')
plt.show()
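# The introduction also mentions WKT as an input format; recent geopandas versions offer ``GeoSeries.from_wkt`` for that path. As a dependency-free illustration of what such a string encodes, here is a minimal parser for POINT WKT strings only (``parse_wkt_point`` is a hypothetical helper, not part of geopandas):

```python
# Parse a 'POINT (x y)' WKT string into a (x, y) tuple of floats.
# This handles only the simple POINT form, not full WKT syntax.
def parse_wkt_point(s):
    inner = s[s.index('(') + 1:s.index(')')]
    x, y = (float(v) for v in inner.split())
    return x, y

parse_wkt_point('POINT (-58.66 -34.58)')  # -> (-58.66, -34.58)
```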
| create_geopandas_from_pandas.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
scores = '../out/yeasts_scores.txt'
cofile = '../out/sorted_conditions.txt'
# +
# plotting imports
# %matplotlib inline
import matplotlib.pyplot as plt
from matplotlib import colors
import seaborn as sns
sns.set_style('white')
plt.rc('font', size=12)
# -
# other imports
import numpy as np
import pandas as pd
from scipy import cluster
import fastcluster as fst
from sklearn.decomposition import PCA
from sklearn.ensemble import IsolationForest
def plot_pca(pca, p,
cstrains=None,
rstrains=None,
lstrains=None):
if cstrains is None:
cstrains = {}
if rstrains is None:
rstrains = {}
if lstrains is None:
lstrains = {}
plt.figure(figsize=(10, 3))
ax = plt.subplot(133)
sns.barplot(data=[[x] for x in pca.explained_variance_ratio_[:6]],
color=sns.xkcd_rgb['light grey'])
plt.xticks(range(6),
['%d' % (x + 1)
for x in range(6)])
plt.xlabel('Principal component')
plt.ylabel('Explained variance')
sns.despine(ax=ax)
for i in range(2):
plt.subplot(1, 3, i+1)
tmp = plt.plot(p.values[:, i],
p.values[:, i+1],
'.',
alpha=0.3,
color='k')
for t in tmp:
t.set_rasterized(True)
for strain in ['Y8205',
'OS_693',
'OS_801',
'OS_104']:
plt.plot(p.loc[strain].values[i],
p.loc[strain].values[i+1],
'o',
color=cstrains.get(rstrains.get(strain, ''),
'k'),
ms=10,
label=lstrains.get(rstrains.get(strain, ''),
rstrains.get(strain, '')))
plt.xlabel('PC %d' % (i + 1))
plt.ylabel('PC %d' % (i + 2))
plt.axvline(0,
color='grey',
ls='dashed',
zorder=0)
plt.axhline(0,
color='grey',
ls='dashed',
zorder=0)
if i == 1:
lg = plt.legend(loc=(1.85, 0.55),
frameon=True,
title='Strain',
ncol=1)
for x in lg.legendHandles:
x.set_alpha(1)
plt.subplots_adjust(hspace=0.3,
wspace=0.3);
strains = ['S288C', 'Y55',
'UWOP', 'YPS']
rstrains = {'Y8205': 'S288C',
'OS_801': 'Y55',
'OS_693': 'UWOP',
'OS_104': 'YPS'}
lstrains = {'S288C': 'Y8205',
'YPS': 'YPS128'}
cstrains = {x: c
for x, c in zip(strains, sns.color_palette('Set1', len(strains)))}
m = pd.read_table(scores, index_col=[0, 1]).sort_index()
m['phenotype'] = m['qvalue'] < 0.05
m['pos-phenotype'] = (m['qvalue'] < 0.05) & (m['score'] > 0)
m['neg-phenotype'] = (m['qvalue'] < 0.05) & (m['score'] < 0)
p = m.pivot_table(index='strain',
columns='condition',
values='score')
c = p.copy(deep=True)
c[np.isnan(c)] = 0.
rl = fst.linkage(c, method='average')
cl = fst.linkage(c.T, method='average')
cmap = sns.diverging_palette(76, 217, l=89, n=100, center="dark", as_cmap=True)
cmap.set_bad(sns.xkcd_rgb['grey'], alpha=0.55)
mclust = sns.clustermap(p.T,
cmap=cmap,
vmax=5,
vmin=-5,
xticklabels=False,
yticklabels=True,
row_linkage=cl,
col_linkage=rl,
figsize=(18, 9));
# +
plt.figure(figsize=(6, 8))
gs = plt.GridSpec(1, 2,
wspace=0.025,
width_ratios=[1, 8])
ax1 = plt.subplot(gs[1])
ax2 = plt.subplot(gs[0])
plt.sca(ax1)
yticklabels = True
hm = sns.heatmap(mclust.data2d,
cmap=cmap,
vmax=4,
vmin=-4,
yticklabels=yticklabels,
xticklabels=False,
cbar=False)
plt.xlabel('Strains')
ax1.collections[0].set_rasterized(True)
plt.ylabel('')
plt.gca().yaxis.tick_right()
plt.yticks(rotation=0)
plt.sca(ax2)
with plt.rc_context({'lines.linewidth': 0.5}):
cluster.hierarchy.dendrogram(cl, no_plot=False,
color_threshold=-np.inf,
above_threshold_color='k',
orientation='left',
no_labels=True)
plt.xticks([])
plt.gca().invert_yaxis()
sns.despine(bottom=True,
left=True)
plt.savefig('heatmap_natural.png',
dpi=300, bbox_inches='tight',
transparent=True)
plt.savefig('heatmap_natural.svg',
dpi=300, bbox_inches='tight',
transparent=True);
# -
o = open('natural_sorted_all.txt', 'w')
for x in mclust.data2d.index:
o.write('%s\n' % x)
o.close()
co = [x.rstrip() for x in open(cofile)]
p = p[[x for x in p.columns if x in co]]
c = p.copy(deep=True)
c[np.isnan(c)] = 0.
rl = fst.linkage(c, method='average')
cl = fst.linkage(c.T, method='average')
cmap = sns.diverging_palette(76, 217, l=89, n=100, center="dark", as_cmap=True)
cmap.set_bad(sns.xkcd_rgb['grey'], alpha=0.55)
mclust = sns.clustermap(p.T,
cmap=cmap,
vmax=5,
vmin=-5,
xticklabels=False,
yticklabels=True,
row_linkage=cl,
col_linkage=rl,
figsize=(18, 9));
# +
plt.figure(figsize=(4.5, 8))
gs = plt.GridSpec(1, 2,
wspace=0.025,
width_ratios=[1, 8])
ax1 = plt.subplot(gs[1])
ax2 = plt.subplot(gs[0])
plt.sca(ax1)
yticklabels = True
hm = sns.heatmap(mclust.data2d,
cmap=cmap,
vmax=4,
vmin=-4,
yticklabels=yticklabels,
xticklabels=False,
cbar=False)
plt.xlabel('Strains')
ax1.collections[0].set_rasterized(True)
plt.ylabel('')
plt.gca().yaxis.tick_right()
plt.yticks(rotation=0)
plt.sca(ax2)
with plt.rc_context({'lines.linewidth': 0.5}):
cluster.hierarchy.dendrogram(cl, no_plot=False,
color_threshold=-np.inf,
above_threshold_color='k',
orientation='left',
no_labels=True)
plt.xticks([])
plt.gca().invert_yaxis()
sns.despine(bottom=True,
left=True)
plt.savefig('heatmap_natural_restrict.png',
dpi=300, bbox_inches='tight',
transparent=True)
plt.savefig('heatmap_natural_restrict.svg',
dpi=300, bbox_inches='tight',
transparent=True);
# -
o = open('natural_sorted.txt', 'w')
for x in mclust.data2d.index:
o.write('%s\n' % x)
o.close()
pca1 = PCA().fit(c)
p1 = pd.DataFrame(pca1.transform(c),
index=p.index)
plot_pca(pca1, p1,
cstrains=cstrains,
rstrains=rstrains,
lstrains=lstrains)
plt.savefig('pca_natural.png',
dpi=300, bbox_inches='tight',
transparent=True)
plt.savefig('pca_natural.svg',
dpi=300, bbox_inches='tight',
transparent=True);
| notebooks/heatmaps-natural.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import flaski
from flaski import scatterplot
from flaski import iscatterplot
import matplotlib.pylab as plt
import pandas as pd
help(scatterplot.figure_defaults)
plot_arguments, lists, notUpdateList, checkboxes = scatterplot.figure_defaults()
plot_arguments
df=pd.read_excel("/Users/jboucas/Desktop/flask_test_input/Book1.xlsx")
print(df.columns.tolist())
plot_arguments["xvals"]='xxxxxxx'
plot_arguments["yvals"]='yyyyyyyy'
fig=scatterplot.make_figure(df,plot_arguments)
plt.ylabel("my new label")
fig
| pyflaski/dev/.ipynb_checkpoints/dev1-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <img src="https://raw.githubusercontent.com/Qiskit/qiskit-tutorials/master/images/qiskit-heading.png" alt="Note: In order for images to show up in this jupyter notebook you need to select File => Trusted Notebook" width="500 px" align="left">
# ## _*The Simon Algorithm*_
#
# The latest version of this notebook is available on https://github.com/qiskit/qiskit-tutorial.
#
# The Simon algorithm is an example showing that a quantum algorithm can solve a problem exponentially more efficiently than any classical algorithm. Like Grover search, it depends on the existence of a blackbox (or oracle) function that returns a predefined output for a specific input, or query. In the query-complexity setting, one cares only about how many queries are required to solve a specific problem, not about how the blackbox is realized. In this tutorial, however, we have to implement the blackbox using the gates available in Qiskit, just as we did with Grover search.
#
# We first describe the problem addressed by the Simon algorithm, show the steps of the algorithm and the construction of the blackbox function, and present the experimental results on simulators and real devices.
#
# ***
# ### Contributors
# <NAME>
#
# ### Qiskit Package Versions
import qiskit
qiskit.__qiskit_version__
# ## The Problem <a id='introduction'></a>
#
# The Simon algorithm deals with finding a hidden integer $s \in \{0,1\}^n$ from an oracle $f_s$ that satisfies $f_s(x) = f_s(y)$ if and only if $y = x \oplus s$ for all $x \in \{0,1\}^n$. Here, the $\oplus$ is the bitwise XOR operation. Thus, if $s = 0\ldots 0$, i.e., the all-zero bitstring, then $f_s$ is a 1-to-1 (or, permutation) function. Otherwise, if $s \neq 0\ldots 0$, then $f_s$ is a 2-to-1 function.
#
# The Simon algorithm can find the hidden integer using only $O(n)$ queries to the blackbox function, while any classical algorithms require $\Omega(\sqrt{2^n})$ queries.
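# For contrast with the quantum query count, a classical collision-search baseline can be sketched as follows. The oracle construction here is a toy stand-in for a valid $f_s$ (any function mapping $x$ and $x \oplus s$ to the same value works) and is not part of the notebook:

```python
# A simple valid f_s: map each pair {x, x ^ s} to one representative,
# so f(x) == f(y) exactly when y == x ^ s (or y == x).
def make_oracle(s, n):
    return lambda x: min(x, x ^ s)

# Classical search: query inputs until a collision f(x) == f(y) appears;
# then s = x XOR y. If no collision occurs, f is 1-to-1 and s = 0.
def find_hidden(f, n):
    seen = {}
    for x in range(2 ** n):
        fx = f(x)
        if fx in seen and seen[fx] != x:
            return seen[fx] ^ x  # collision found: XOR recovers s
        seen[fx] = x
    return 0

f = make_oracle(0b0110, 4)
find_hidden(f, 4)  # -> 6
```

# By the birthday bound, such a collision search needs on the order of $\sqrt{2^n}$ queries in expectation, which is the classical lower bound quoted above.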
# ## The Algorithm to Find the Hidden Integer
#
# The Simon algorithm finds the hidden integer by combining quantum algorithm with postprocessing on classical computers as below.
#
# 1. Prepare two quantum registers each of length $n$ that are initialized to all-zero bitstring: the first one as input and the second one as output of the blackbox function.
# $$
# |0\rangle |0\rangle
# $$
#
# 2. Apply Hadamard gates to the first register to create superposition of all possible inputs.
# $$
# H^{\otimes n} |0\rangle |0\rangle = \frac{1}{\sqrt{2^n}} \sum_{x=0}^{2^n-1} |x\rangle |0\rangle
# $$
#
# 3. Query the blackbox function to obtain the answer to queries on the second register.
# $$
# \frac{1}{\sqrt{2^n}} \sum_{x=0}^{2^n-1} U_{f_s}|x\rangle |0\rangle = \frac{1}{\sqrt{2^n}} \sum_{x=0}^{2^n-1} |x\rangle |f_s(x)\rangle
# $$
#
# 4. Apply Hadamard gates to the first register.
# $$
# \frac{1}{\sqrt{2^n}} \sum_{x=0}^{2^n-1} H^{\otimes n}|x\rangle |f_s(x)\rangle = \frac{1}{2^n} \sum_{y=0}^{2^n-1}\sum_{x=0}^{2^n-1} (-1)^{x \cdot y}|y\rangle |f_s(x)\rangle = \frac{1}{2^n} \sum_{y=0}^{2^n-1} |y\rangle \sum_{x=0}^{2^n-1} ( (-1)^{x \cdot y} + (-1)^{(x\oplus s) \cdot y} ) |f_s(x)\rangle
# $$
#
# Notice that at the right-hand side of the above equation, because $(-1)^{(x\oplus s) \cdot y} = (-1)^{x\cdot y + s \cdot y}$ we can conclude that the probability amplitude of the basis state $|y\rangle |f_s(x)\rangle$ is $(-1)^{x\cdot y} (1 + (-1)^{s \cdot y} )$, which is zero if and only if $s \cdot y = 1$. Thus, measuring the first register will always give $y$ such that $s \cdot y = 0$. Moreover, we can obtain many different $y$'s by repeating Step 1 to 4.
#
# 5. Repeat Steps 1 to 4 $m$ times to obtain $y_1, y_2, \ldots, y_m$.
#
# 6. **(Classical post-processing)** Let $\mathbf{Y}$ be an $m\times n$ matrix whose $i$-th row is $y_i$ in Step 5, and $\vec{s}$ be the column vector whose $j$-th element is the $j$-th bit of $s$. Solve the following system of linear equations to obtain $s$.
# $$
# \mathbf{Y} \vec{s} = 0
# $$
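# The classical post-processing in Step 6 amounts to finding a nonzero nullspace vector of $\mathbf{Y}$ over GF(2). A minimal standalone sketch of that elimination, independent of the sympy-based cells used later in this notebook:

```python
import numpy as np

def gf2_nullspace_vector(Y):
    """Return a nonzero s with Y s = 0 over GF(2), or None if only s = 0 works."""
    Y = np.array(Y, dtype=np.uint8) % 2
    m, n = Y.shape
    pivots, r = [], 0
    for c in range(n):
        hits = [i for i in range(r, m) if Y[i, c]]
        if not hits:
            continue
        Y[[r, hits[0]]] = Y[[hits[0], r]]  # move a pivot row into place
        for i in range(m):
            if i != r and Y[i, c]:
                Y[i] ^= Y[r]               # eliminate column c elsewhere
        pivots.append(c)
        r += 1
    free = [c for c in range(n) if c not in pivots]
    if not free:
        return None                        # only the trivial solution exists
    s = np.zeros(n, dtype=np.uint8)
    s[free[0]] = 1                         # pick one free variable
    for i, c in enumerate(pivots):
        s[c] = Y[i].dot(s) % 2             # back-substitute pivot values
    return s

# Example: measurements y = 110 and y = 001 pin down s = 110.
gf2_nullspace_vector([[1, 1, 0], [0, 0, 1]])
```

# With enough independent $y_i$ the nullspace is one-dimensional, so the returned vector is the hidden $s$ (up to the trivial all-zero solution).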
# ## The Circuit <a id="circuit"></a>
#
# We now implement the Simon algorithm with Qiskit by first preparing the environment.
# +
#initialization
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib inline
# importing Qiskit
from qiskit import BasicAer, IBMQ
from qiskit import QuantumCircuit, ClassicalRegister, QuantumRegister, execute
from qiskit.compiler import transpile
from qiskit.tools.monitor import job_monitor
# import basic plot tools
from qiskit.tools.visualization import plot_histogram
# -
# Load the saved IBMQ accounts
IBMQ.load_accounts()
# We then set the hidden bitstring $s$ that will be used to construct the circuit of the blackbox function (whose details will be given later). The number of qubits used in the experiment is twice the length of the bitstring $s$.
# +
s = "010101" # the hidden bitstring
assert 1 < len(s) < 20, "The length of s must be between 2 and 19"
for c in s:
assert c == "0" or c == "1", "s must be a bitstring of '0' and '1'"
n = len(s) #the length of the bitstring
# -
# We then use Qiskit to create the circuit of the Simon algorithm prior to querying the blackbox function.
# +
# Step 1
# Creating registers
# qubits for querying the oracle and recording its output
qr = QuantumRegister(2*n)
# for recording the measurement on the first register of qr
cr = ClassicalRegister(n)
circuitName = "Simon"
simonCircuit = QuantumCircuit(qr, cr)
# Step 2
# Apply Hadamard gates before querying the oracle
for i in range(n):
simonCircuit.h(qr[i])
# Apply barrier to mark the beginning of the blackbox function
simonCircuit.barrier()
# -
# ### Constructing a Circuit for the Blackbox Function
#
# We now detail the construction of the 1-to-1 and 2-to-1 permutation circuits of the blackbox function. Let us assume the blackbox function receives $|x\rangle|0\rangle$ as input. With regard to a predetermined $s$, the blackbox function writes its output to the second register, transforming the input to $|x\rangle|f_s(x)\rangle$ such that $f_s(x) = f_s(x\oplus s)$ for all $x \in \{0,1\}^n$.
#
# Such a blackbox function can be realized by the following procedures.
#
# - Copy the content of the first register to the second register.
# $$
# |x\rangle|0\rangle \rightarrow |x\rangle|x\rangle
# $$
#
# - **(Creating 1-to-1 or 2-to-1 mapping)** If $s$ is not all-zero, then there is the least index $j$ so that $s_j = 1$. If $x_j = 0$, then XOR the second register with $s$. Otherwise, do not change the second register.
# $$
# |x\rangle|x\rangle \rightarrow |x\rangle|x \oplus s\rangle~\mbox{if}~x_j = 0~\mbox{for the least index j}
# $$
#
# - **(Creating random permutation)** Randomly permute and flip the qubits of the second register.
# $$
# |x\rangle|y\rangle \rightarrow |x\rangle|f_s(y)\rangle
# $$
#
# Below is the circuit of the blackbox function based on the above procedures.
# +
# Step 3 query the blackbox function
# copy the content of the first register to the second register
for i in range(n):
simonCircuit.cx(qr[i], qr[n+i])
# get the least index j such that s_j is "1"
j = -1
for i, c in enumerate(s):
if c == "1":
j = i
break
# Creating 1-to-1 or 2-to-1 mapping with the j-th qubit of x as control to XOR the second register with s
for i, c in enumerate(s):
if c == "1" and j >= 0:
simonCircuit.cx(qr[j], qr[n+i]) #the i-th qubit is flipped if s_i is 1
# get random permutation of n qubits
perm = list(np.random.permutation(n))
#initial position
init = list(range(n))
i = 0
while i < n:
if init[i] != perm[i]:
k = perm.index(init[i])
simonCircuit.swap(qr[n+i], qr[n+k]) #swap qubits
init[i], init[k] = init[k], init[i] # mark swapped qubits
else:
i += 1
# randomly flip the qubit
for i in range(n):
if np.random.random() > 0.5:
simonCircuit.x(qr[n+i])
# Apply the barrier to mark the end of the blackbox function
simonCircuit.barrier()
# -
# Now we can continue with the steps of the Simon algorithm: applying the Hadamard gates to the first register and measure.
# +
# Step 4 apply Hadamard gates to the first register
for i in range(n):
simonCircuit.h(qr[i])
# Step 5 perform measurement on the first register
for i in range(n):
simonCircuit.measure(qr[i], cr[i])
#draw the circuit
simonCircuit.draw(output='mpl')
# -
# ## Experimenting with Simulators
#
# We show the experiments of finding the hidden integer with simulators.
# +
# use local simulator
backend = BasicAer.get_backend("qasm_simulator")
# the number of shots is twice the length of the bitstring
shots = 2*n
job = execute(simonCircuit, backend=backend, shots=shots)
answer = job.result().get_counts()
plot_histogram(answer)
# -
# We can see that the measurement results are the basis states whose inner product with the hidden string $s$ is zero.
#
# *(Notice that the basis states on the x-axis labels in the above plot are numbered from right to left, instead of from left to right as used for $s$.)*
#
# Gathering the measurement results, we proceed to post-processing with computations that can be done on classical computers.
#
# ### Post Processing with Gaussian Elimination
#
# The post processing is done with Gaussian elimination to solve the system of linear equations to determine $s$.
# +
# Post-processing step
# Constructing the system of linear equations Y s = 0
# By k[::-1], we reverse the order of the bitstring
lAnswer = [ (k[::-1],v) for k,v in answer.items() if k != "0"*n ] #excluding the trivial all-zero
#Sort the basis by their probabilities
lAnswer.sort(key = lambda x: x[1], reverse=True)
Y = []
for k, v in lAnswer:
Y.append( [ int(c) for c in k ] )
#import tools from sympy
from sympy import Matrix, pprint, MatrixSymbol, expand, mod_inverse
Y = Matrix(Y)
#pprint(Y)
#Perform Gaussian elimination on Y
Y_transformed = Y.rref(iszerofunc=lambda x: x % 2==0) # linear algebra on GF(2)
#to convert rational and negatives in rref of linear algebra on GF(2)
def mod(x,modulus):
numer, denom = x.as_numer_denom()
return numer*mod_inverse(denom,modulus) % modulus
Y_new = Y_transformed[0].applyfunc(lambda x: mod(x,2)) # must take care of negative and fractional values
#pprint(Y_new)
print("The hidden bitstring s[ 0 ], s[ 1 ]....s[",n-1,"] is the one satisfying the following system of linear equations:")
rows, cols = Y_new.shape
for r in range(rows):
Yr = [ "s[ "+str(i)+" ]" for i, v in enumerate(list(Y_new[r,:])) if v == 1 ]
if len(Yr) > 0:
tStr = " + ".join(Yr)
print(tStr, "= 0")
# -
# As seen above, the system of linear equations is satisfied by the hidden integer $s$. Notice that there can be more than one solution to the system. In fact, the all-zero bitstring is a trivial solution. But by gathering more samples one can narrow down the candidate solutions, and then test a candidate by querying the blackbox classically.
#
# ## Experimenting with Real Devices
#
# We now show how one can still find the hidden integer by running the Simon algorithm on real devices. Because real quantum computers are imperfect, drawing the conclusion is not as easy as it is with a simulator of a perfect quantum computer.
# +
#Use one of the available backends
backend = IBMQ.get_backend("ibmq_16_melbourne")
# show the status of the backend
print("Status of", backend, "is", backend.status())
shots = 10*n #run more experiments to be certain
max_credits = 3 # Maximum number of credits to spend on executions.
simonCompiled = transpile(simonCircuit, backend=backend, optimization_level=1)
job_exp = execute(simonCompiled, backend=backend, shots=shots, max_credits=max_credits)
job_monitor(job_exp)
# +
results = job_exp.result()
answer = results.get_counts(simonCircuit)
plot_histogram(answer)
# +
# Post-processing step
# Constructing the system of linear equations Y s = 0
# By k[::-1], we reverse the order of the bitstring
lAnswer = [ (k[::-1][:n],v) for k,v in answer.items() ] #excluding the qubits that are not part of the inputs
#Sort the basis by their probabilities
lAnswer.sort(key = lambda x: x[1], reverse=True)
Y = []
for k, v in lAnswer:
Y.append( [ int(c) for c in k ] )
Y = Matrix(Y)
#Perform Gaussian elimination on Y
Y_transformed = Y.rref(iszerofunc=lambda x: x % 2==0) # linear algebra on GF(2)
Y_new = Y_transformed[0].applyfunc(lambda x: mod(x,2)) # must take care of negative and fractional values
#pprint(Y_new)
print("The hidden bitstring s[ 0 ], s[ 1 ]....s[",n-1,"] is the one satisfying the following system of linear equations:")
rows, cols = Y_new.shape
for r in range(rows):
Yr = [ "s[ "+str(i)+" ]" for i, v in enumerate(list(Y_new[r,:])) if v == 1 ]
if len(Yr) > 0:
tStr = " + ".join(Yr)
print(tStr, "= 0")
# -
# ## References
#
# [1] "[On the power of quantum computation](https://epubs.siam.org/doi/abs/10.1137/S0097539796298637)", <NAME>, SIAM J. Comput., 26(5), 1474โ1483 (1997)
| algorithms/simon_algorithm.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Misc-tutorials
# ## 01- Startups and defaults
# ### Part III: Jupyter-notebook names
#
# gully
# August 2016
# I usually name my Jupyter notebooks with the same convention:
#
# `proj_XX-YY_description_separated_by_underscores.ipynb`
#
# where:
# - `proj` is the project name or abbreviation, usually same as GitHub repo
# - `XX` is a GitHub issue describing the task examined in the Notebook
# - `YY` is a sub task in that GitHub issue, or simply a running number
# - `description_separated_by_underscores` is a description to jog your memory on what that notebook does
#
# I've found that this system works for me.
#
# I still haven't found a great way to version control notebooks, but [nbdime](https://github.com/jupyter/nbdime) is promising.
#
#
# The key part is the GitHub issue tracker. This is a great way to organize a project, since it's traceable back to the conversation (whether a monologue or dialogue) describing the task in more detail. The issue need not be an "issue" by the normal definition, but can simply say "spot check the data with plots" or "do some exploratory analysis to get a feel for the data".
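# The convention is regular enough to parse mechanically. A small sketch (the pattern is illustrative; it assumes the digit-based `XX-YY` form, with the underscore after `proj` optional since names like this notebook's omit it):

```python
import re

# proj_XX-YY_description.ipynb, underscore after proj optional.
PATTERN = re.compile(
    r'^(?P<proj>[A-Za-z]+)_?(?P<issue>\d+)-(?P<sub>\d+)_(?P<desc>.+)\.ipynb$'
)

m = PATTERN.match('misc01-03_Jupyter_notebook_names.ipynb')
m.groupdict()
# -> {'proj': 'misc', 'issue': '01', 'sub': '03', 'desc': 'Jupyter_notebook_names'}
```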
#
#
| notebooks/misc01-03_Jupyter_notebook_names.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import os
import gzip
import numpy as np
import joblib  # sklearn.externals.joblib was removed in modern scikit-learn
# -
dataset = 'train' # train, validate
base_path = "D:/DCASE_2019/audio/augmented"
spec_path = os.path.join(base_path, f'audio_spec/{dataset}')
vgg_path = os.path.join(base_path, f'vgg_embeddings/{dataset}')
all_spec = [os.path.join(spec_path, p) for p in os.listdir(spec_path)]
all_vgg = [os.path.join(vgg_path, p) for p in os.listdir(vgg_path)]
for i, spec in enumerate(all_spec):
if i % 100 == 0:
print(i, 'of', len(all_spec))
spec_name, ext = os.path.splitext(spec)
spec_name = os.path.basename(spec_name)
spec_name = spec_name.replace('_melspec-128_1', '')
save_name = os.path.join(base_path, f'spec_vgg/{dataset}/{spec_name}.pkl')
if not os.path.exists(save_name):
for vgg in all_vgg:
vgg_name = os.path.basename(vgg)
vgg_name = vgg_name.replace('.npy.gz', '')
# print(vgg_name, spec_name)
# print(spec_name)
# print(vgg_name)
# print()
if vgg_name == spec_name:
spec_pickle, spec_labels = joblib.load(spec)
with gzip.open(vgg) as vgg_emb:
vgg_emb = np.load(vgg_emb, allow_pickle=True)
joblib.dump((spec_pickle, vgg_emb, spec_labels), save_name)
break
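The nested scan above compares every spectrogram against every VGG file, rescanning `all_vgg` for each spectrogram. Since both sides share a common basename stem, a dictionary lookup does the same matching in linear time. A sketch of just the pairing logic, with a hypothetical `match_by_basename` helper and illustrative sample names (not part of the original pipeline):

```python
def match_by_basename(spec_names, vgg_names):
    """Pair spectrogram and VGG files that share a basename stem.

    spec_names: basenames ending in '_melspec-128_1'
    vgg_names:  basenames ending in '.npy.gz'
    Returns a dict mapping the common stem to its (spec, vgg) pair.
    """
    # Index VGG files by their stem once, instead of rescanning per spectrogram
    vgg_by_stem = {v.replace('.npy.gz', ''): v for v in vgg_names}
    pairs = {}
    for s in spec_names:
        stem = s.replace('_melspec-128_1', '')
        if stem in vgg_by_stem:
            pairs[stem] = (s, vgg_by_stem[stem])
    return pairs

pairs = match_by_basename(['clip01_melspec-128_1', 'clip02_melspec-128_1'],
                          ['clip01.npy.gz', 'clip03.npy.gz'])
```

Only stems present on both sides are paired, so unmatched files are silently skipped, mirroring the `if vgg_name == spec_name` check above.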
a = joblib.load(all_vgg[0])
a[0].shape
with gzip.open(all_vgg[0]) as f:
x = np.load(f, allow_pickle=True)
x.shape
| Combine spectrogram and vgg pkl files from batch.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # USB Sensor API and Plots
# ## The following notebook provides some examples of using the API to access data from the Urban Science Building (USB), collected by the Urban Observatory (https://urbanobservatory.ac.uk/).
# ## Further guidance can be found at https://api.usb.urbanobservatory.ac.uk/
# +
## Import Modules
import matplotlib
import pandas as pd
import json
import urllib.request
import datetime as dt
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
matplotlib.rcParams.update({
'font.size': 16
})
# -
# ## Use the API to retrieve the rooms on the _nth_ floor of USB, then filter to see which are currently occupied.
# +
## Select Floor - Options include G, and 1-6
floor = '2'
## Create API Call Components.
## 'meta:buildingFloor' used to define floor
## 'metric=occupied' selects on those with occupancy sensors
## 'pageSize' (Default 10, currently set to 100, may need to be more depending on chosen floor).
callBase = 'https://api.usb.urbanobservatory.ac.uk/api/v2/sensors/entity/'
floorCall = callBase + '?meta:buildingFloor=' + floor + '&metric=occupied&pageSize=100'
## Call API - note the different steps used within the json.loads process.
usbRmsOnFlr = json.loads(
urllib
.request
.urlopen(floorCall)
.read()
.decode('utf-8')
)['items']
## Print API Call - Click to see JSON Output
print(floorCall)
# -
print('Sensors located in %u rooms.' % len(usbRmsOnFlr))
## List currently occupied rooms
## Includes check to discount rooms that don't have a 'latest' field
for rm in usbRmsOnFlr:
if 'latest' in rm['feed'][0]['timeseries'][0]:
if 'value' in rm['feed'][0]['timeseries'][0]['latest']:
if rm['feed'][0]['timeseries'][0]['latest']['value'] == 1:
if "Room" in rm["name"]:
print(rm["name"])
# ## List all measured variables in a specific room.
# +
room = 'room-5-009'
roomCall = callBase + room
usbRoomData = json.loads(
urllib
.request
.urlopen(roomCall)
.read()
.decode('utf-8')
)['feed']
print(roomCall)
# -
## List Measured Variables
for variable in usbRoomData:
print(variable['metric'])
# ## Getting Historic Data
# ### Two techniques are demonstrated for building up an API call to achieve this.
# ### 1. Using links with an entity/variable API return.
# ### 2. Constructing an API call programmatically.
#
# ## Method 1.
# ### Using a call of a specific entity and variable to quickly find measurements for the previous 24 hours.
#
# ### The following call will return information on measuring CO2 in Room 1.043.
# +
apiCall = 'https://api.usb.urbanobservatory.ac.uk/api/v2/sensors/entity?meta:roomNumber=1.043&metric=CO2'
rawData = json.loads(urllib.request.urlopen(apiCall).read().decode('utf-8'))
## Print JSON data
## The HTTPS of interest is located in items>feed>timeseries>links
## Look for "rel" variables of "archives" and "archives.friendly"
print(json.dumps(rawData, indent=1))
# -
## Loop through the JSON data structure to the historic time-series link, and call the API using it.
## Note - some rooms will have multiple feeds depending on the sensor deployment.
## ['links'][1]['href'] and ['links'][3]['href'] should produce the same results, albeit with a different HTTPS.
historicAPI = rawData['items'][0]['feed'][0]['timeseries'][0]['links'][3]['href']
historicData = json.loads(urllib.request.urlopen(historicAPI).read().decode('utf-8'))['historic']['values']
print(historicAPI)
# ### Convert to pandas dataframe, and plot.
## Convert JSON to Pandas DataFrame, Keeping Time and Value
dfHist = pd.DataFrame.from_records(historicData, exclude=['duration'])
dfHist.index = pd.to_datetime(dfHist["time"])
dfHist = dfHist.drop(columns="time")
dfHist
# +
## Plot Data
fig, ax = plt.subplots(figsize=(26,9))
axFmt = mdates.DateFormatter('%H:%M')
ax.xaxis.set_major_formatter(axFmt)
ax.set_xlabel('Time')
ax.set_ylabel('CO2', color='darkred')
ax.plot(dfHist["value"], color='darkred')
ax.tick_params(axis='y', labelcolor='darkred')
plt.show()
# -
# ## Method 2.
# ### We will compare Temperature and CO2 measurements in a room over a 2 week period.
#
# ### Set analysis time limits
## Recommended not to exceed 25 days to keep the graphs interpretable
startDate = dt.datetime.strptime('2020-02-01T00:00:00+0000', "%Y-%m-%dT%H:%M:%S%z")
endDate = dt.datetime.strptime('2020-02-14T23:59:59+0000', "%Y-%m-%dT%H:%M:%S%z")
# ### Construct API Call
# +
## Choose Room of Interest
## Note some rooms will return a "Forbidden" error - if you receive this, choose another!
room = 'room-5-009'
## Set Variables of Interest
var1 = 'Room Temperature'
var2 = 'CO2'
## Update Call Base
callBase = 'https://api.usb.urbanobservatory.ac.uk/api/v2/sensors/timeseries/'
callVar1 = callBase + room + '/' + str(var1).replace(' ', '%20') + '/raw/historic/?startTime=' +\
startDate.isoformat().replace('+00:00', 'Z') + '&endTime=' + endDate.isoformat().replace('+00:00', 'Z')
callVar2 = callBase + room + '/' + str(var2).replace(' ', '%20') + '/raw/historic/?startTime=' +\
startDate.isoformat().replace('+00:00', 'Z') + '&endTime=' + endDate.isoformat().replace('+00:00', 'Z')
usbVar1 = json.loads(urllib.request.urlopen(callVar1).read().decode('utf-8'))['historic']['values']
usbVar2 = json.loads(urllib.request.urlopen(callVar2).read().decode('utf-8'))['historic']['values']
print(callVar1)
print(callVar2)
# -
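The manual `replace(' ', '%20')` and `'Z'` substitutions work for these particular strings, but `urllib.parse.quote` handles percent-encoding in general. A sketch of the same URL construction — the `historic_url` helper name is ours; the endpoint and parameters are the ones used above:

```python
from urllib.parse import quote

def historic_url(room, metric, start_iso, end_iso,
                 base='https://api.usb.urbanobservatory.ac.uk/api/v2/sensors/timeseries/'):
    # quote() percent-encodes spaces and other reserved characters in the metric name
    return (base + room + '/' + quote(metric)
            + '/raw/historic/?startTime=' + start_iso + '&endTime=' + end_iso)

url = historic_url('room-5-009', 'Room Temperature',
                   '2020-02-01T00:00:00Z', '2020-02-14T23:59:59Z')
```

Passing the timestamps already in `Z`-suffixed ISO form avoids the `isoformat().replace('+00:00', 'Z')` step.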
# ## Check number of readings, and convert to pandas dataframe.
print(var1+' has '+str(len(usbVar1))+' readings, '+ var2+' has '+str(len(usbVar2))+' readings.')
## Convert JSON to Pandas DataFrame, Keeping Time (as Index) and Value
dfVar1 = pd.DataFrame.from_records(usbVar1, exclude=['duration'])
dfVar1.index = pd.to_datetime(dfVar1["time"])
dfVar1 = dfVar1.drop(columns="time")
dfVar2 = pd.DataFrame.from_records(usbVar2, exclude=['duration'])
dfVar2.index = pd.to_datetime(dfVar2["time"])
dfVar2 = dfVar2.drop(columns="time")
# ## Plot time-series comparing Temperature and CO2.
# +
## Plot Data
fig, ax1 = plt.subplots(figsize=(26,9))
plt.xlim(startDate,endDate)
ax1.set_xlabel('Date')
ax1.set_ylabel('Temperature (Celsius)', color='forestgreen')
ax1.plot(dfVar1["value"], color='forestgreen')
ax1.tick_params(axis='y', labelcolor='forestgreen')
ax2 = ax1.twinx()
ax2.set_ylabel('CO2', color='navy')
ax2.plot(dfVar2["value"], color='navy')
ax2.tick_params(axis='y', labelcolor='navy')
fig.tight_layout()
plt.show()
| building/USB_API_Call_Example.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# +
from __future__ import print_function
import matplotlib.pyplot as plt
import numpy as np
# import cv2
# import csv
import os
import sys
import time
import struct
import h5py
import scipy.io as sio
# from scipy import ndimage
from numpy import linalg as LA
from IPython.display import display, Image
from six.moves.urllib.request import urlretrieve
from six.moves import cPickle as pickle
import tensorflow as tf
# Config the matplotlib backend as plotting inline in IPython
# %matplotlib inline
# +
import scipy.io
# Load synthetic dataset
X = scipy.io.loadmat('/Users/angelsrates/Documents/PhD/4th Semester/Project/poles_data2.mat')
y = scipy.io.loadmat('/Users/angelsrates/Documents/PhD/4th Semester/Project/poles_y2.mat')
data = X['data']
data = np.squeeze(np.transpose(data))
#data_noise = X['data_noise']
#data_noise = np.squeeze(np.transpose(data_noise))
#sys_par = np.squeeze(np.transpose(X['dic_par']))
#sys_par = [np.append(np.array([-1]), sys_par[i]) for i in range(sys_par.shape[0])]
y = y['label']
y = np.squeeze(y - 1)
n_classes = max(y) + 1
#num_poles = np.squeeze(X['num_poles'])
num_poles = 2
# -
sys_par = [[-1,1.39954943237774, -1], [-1,0.411382829503097, -1]]
np.random.seed(4294967295)
[N, T] = data.shape
permutation = np.random.permutation(data.shape[0])
data = [data[perm] for perm in permutation]
y = [y[perm] for perm in permutation]
X = data
# If data with noise, change to:
# X = data_noise
# +
#Select training and testing (75% and 25%)
thr = int(N*0.75)
y = [int(i) for i in y]
X_train = np.asarray(X[:thr])
y_train = np.asarray(y[:thr])
X_test = np.asarray(X[thr:])
y_test = np.asarray(y[thr:])
print('Training data size', X_train.shape)
print('Training Ground-Truth size', y_train.shape)
print('Testing data size', X_test.shape)
print('Testing Ground-Truth size', y_test.shape)
# +
def extract_batch_size(_train, step, batch_size):
# Function to fetch a "batch_size" amount of data from "(X|y)_train" data.
shape = list(_train.shape)
#shape = list((batch_size, 1843200))
shape[0] = batch_size
#shape[1] = 1843200
batch_s = np.empty(shape)
for i in range(batch_size):
# Loop index
index = ((step-1)*batch_size + i) % len(_train)
batch_s[i] = _train[index]
#batch_s[i] = np.reshape(load_video(_train[index]), (1,1843200))
return batch_s
def one_hot(y_):
# Function to encode output labels from number indexes
# e.g.: [[5], [0], [3]] --> [[0, 0, 0, 0, 0, 1], [1, 0, 0, 0, 0, 0], [0, 0, 0, 1, 0, 0]]
y_ = y_.reshape(len(y_))
n_values = np.max(y_) + 1
return np.eye(n_values)[np.array(y_, dtype=np.int32)] # Returns FLOATS
# +
from scipy import signal
import control
from scipy.signal import step2
import math
# Parameters
learning_rate = 0.0015
batch_size = 1
# Network Parameters
n_input = T
#dropout = 0.75 # Dropout, probability to keep units
# tf Graph input
x = tf.placeholder(tf.float64, [n_input])
y = tf.placeholder(tf.float32, [1, n_classes])
#labels = tf.placeholder(tf.int32, [1,1])
def index_along_every_row(array, index):
N,_ = array.shape
return array[np.arange(N), index]
def build_hankel_tensor(x, nr, nc, N, dim):
cidx = np.arange(0, nc, 1)
ridx = np.transpose(np.arange(1, nr+1, 1))
Hidx = np.transpose(np.tile(ridx, (nc,1))) + dim*np.tile(cidx, (nr,1))
Hidx = Hidx - 1
arr = tf.reshape(x[:], (1,N))
return tf.py_func(index_along_every_row, [arr, Hidx], [tf.float64])[0]
def build_hankel(x, nr, nc, N, dim):
cidx = np.arange(0, nc, 1)
ridx = np.transpose(np.arange(1, nr+1, 1))
Hidx = np.transpose(np.tile(ridx, (nc,1))) + dim*np.tile(cidx, (nr,1))
Hidx = Hidx - 1
arr = x[:]
return arr[Hidx]
# Create model
def poles_net(x, sys_par, T, num_poles):
# Operate over single-channel trajectories
# Sampling rates at 0.3
W_col = []
for i in range(num_poles):
sys = control.TransferFunction([1, 0], sys_par[i], 0.3)
[y1, _] = control.matlab.impulse(sys, T=np.arange(T))
y1 = tf.transpose(y1[0,:T])
W_col.append(y1)
W = tf.reshape(tf.stack(W_col, axis=1), (T,num_poles))
coeff = tf.abs(tf.matrix_solve_ls(W, tf.reshape(x, (T,1)), l2_regularizer=0.0, fast=False, name=None))
coeff = tf.transpose(coeff)
out = tf.add(tf.matmul(tf.cast(coeff, tf.float32), weights['out']), biases['out'])
return [coeff, out]
weights = {
'out': tf.Variable(tf.random_normal([num_poles, n_classes]))
}
biases = {
'out': tf.Variable(tf.random_normal([1, n_classes]))
}
[coeff, pred]= poles_net(x, sys_par, T, num_poles)
# Define loss and optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=pred, labels=y))
optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate).minimize(cost)
# Evaluate model
correct_pred = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
init = tf.global_variables_initializer()
# -
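As a sanity check of the Hankel construction used above, `build_hankel` applied to a short sequence should produce rows that are successive shifted windows of the signal. A minimal NumPy-only restatement of the same indexing scheme (reproduced here so the example is self-contained, and dropping the unused `N` argument):

```python
import numpy as np

def build_hankel(x, nr, nc, dim):
    # Same indexing as in the notebook: entry (r, c) takes x[r + dim*c]
    cidx = np.arange(0, nc, 1)
    ridx = np.transpose(np.arange(1, nr + 1, 1))
    Hidx = np.transpose(np.tile(ridx, (nc, 1))) + dim * np.tile(cidx, (nr, 1))
    return np.asarray(x)[Hidx - 1]

# Rows are shifted windows; every anti-diagonal of H is constant
H = build_hankel([1, 2, 3, 4, 5], nr=3, nc=3, dim=1)
```

For a trajectory generated by a low-order linear system, this Hankel matrix is low-rank, which is the property the network exploits.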
y_test = one_hot(y_test)
# Launch the graph
n_epochs = 1
training_iters = X_train.shape[0]*n_epochs
display_step = 1
with tf.Session() as sess:
sess.run(init)
step = 1
# Keep training until reach max iterations
train_acc = 0
while step * batch_size <= training_iters:
batch_x = np.squeeze(extract_batch_size(X_train,step,batch_size))
batch_y = extract_batch_size(one_hot(y_train),step,batch_size)
#batch_y = np.reshape(extract_batch_size(y_train,step,batch_size), (1,1))
print(batch_y.shape)
# Run optimization op (backprop)
sess.run(optimizer, feed_dict={x: batch_x, y: batch_y})
if step % display_step == 0:
# Calculate batch loss and accuracy
loss, acc = sess.run([cost, accuracy], feed_dict={x: batch_x,
y: batch_y})
train_acc += acc
print("Iter " + str(step*batch_size) + ", Minibatch Loss= " + \
"{:.6f}".format(loss) + ", Training Accuracy= " + \
"{:.5f}".format(acc))
step += 1
print('Final Training Accuracy:', train_acc/(X_train.shape[0]*n_epochs))
print("Optimization Finished!")
acc = 0
for i in range(X_test.shape[0]):
test = np.squeeze(X_test[i,:])
label = np.reshape(y_test[i,:], (1,n_classes))
#label = np.reshape(y_test[i], (1,1))
print(label)
print("Trajectory:", i, \
sess.run([coeff], feed_dict={x: test, y: label}))
print("Testing Accuracy:", \
sess.run(accuracy, feed_dict={x: test, y: label}))
acc += sess.run(accuracy, feed_dict={x: test, y: label})
print('Final Testing Accuracy:', acc/X_test.shape[0])
| fixedPoles_network_oneShot.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# +
import torch
from torch import nn
from torch.nn.utils.rnn import PackedSequence  # needed by VariationalDropout.forward
import numpy as np
from tqdm import tqdm
from typing import Optional
from collections import OrderedDict
class VariationalDropout(nn.Module):
"""
Applies the same dropout mask across the temporal dimension
See https://arxiv.org/abs/1512.05287 for more details.
Note that this is not applied to the recurrent activations in the LSTM like the above paper.
Instead, it is applied to the inputs and outputs of the recurrent layer.
"""
def __init__(self, dropout: float, batch_first: Optional[bool]=False):
super().__init__()
self.dropout = dropout
self.batch_first = batch_first
def forward(self, x: torch.Tensor) -> torch.Tensor:
if not self.training or self.dropout <= 0.:
return x
is_packed = isinstance(x, PackedSequence)
if is_packed:
x, batch_sizes = x
max_batch_size = int(batch_sizes[0])
else:
batch_sizes = None
max_batch_size = x.size(0)
# Drop same mask across entire sequence
if self.batch_first:
m = x.new_empty(max_batch_size, 1, x.size(2), requires_grad=False).bernoulli_(1 - self.dropout)
else:
m = x.new_empty(1, max_batch_size, x.size(2), requires_grad=False).bernoulli_(1 - self.dropout)
x = x.masked_fill(m == 0, 0) / (1 - self.dropout)
if is_packed:
return PackedSequence(x, batch_sizes)
else:
return x
class LSTMNew(nn.LSTM):
def __init__(self, *args, dropouti: float=0.,
dropoutw: float=0., dropouto: float=0.,
batch_first=True, unit_forget_bias=True, **kwargs):
super().__init__(*args, **kwargs, batch_first=batch_first)
self.unit_forget_bias = unit_forget_bias
self.dropoutw = dropoutw
self.input_drop = VariationalDropout(dropouti,
batch_first=batch_first)
self.output_drop = VariationalDropout(dropouto,
batch_first=batch_first)
self._init_weights()
def _init_weights(self):
"""
Use orthogonal init for recurrent layers, xavier uniform for input layers
Bias is 0 except for forget gate
"""
for name, param in self.named_parameters():
if "weight_hh" in name:
nn.init.orthogonal_(param.data)
elif "weight_ih" in name:
nn.init.xavier_uniform_(param.data)
elif "bias" in name and self.unit_forget_bias:
nn.init.zeros_(param.data)
param.data[self.hidden_size:2 * self.hidden_size] = 1
def _drop_weights(self):
for name, param in self.named_parameters():
if "weight_hh" in name:
getattr(self, name).data = \
torch.nn.functional.dropout(param.data, p=self.dropoutw,
training=self.training).contiguous()
def forward(self, input, hx=None):
self._drop_weights()
self.flatten_parameters()
input = self.input_drop(input)
seq, state = super().forward(input, hx=hx)
return self.output_drop(seq), state
class EventsDataEncoder(nn.Module):
def __init__(self, input_dim=390, hidden_dim=512, lstm_layers=3,
filter_kernels=[2,3,4], filters=100, output_dim=1024,
add_embeds=True, embed_dim=700,
dropout=0.3, dropout_w=0.2, dropout_conv=0.2):
#dim, batch_norm, dropout, rec_dropout, task,
#target_repl = False, deep_supervision = False, num_classes = 1,
#depth = 1, input_dim = 390, ** kwargs
super(EventsDataEncoder, self).__init__()
self.input_dim = input_dim
self.hidden_dim = hidden_dim
self.layers = lstm_layers
self.bidirectional = True
# some more parameters
self.dropout = dropout
self.rec_dropout = dropout_w
self.depth = lstm_layers
self.drop_conv = dropout_conv
self.num_classes = 1
self.output_dim = output_dim
self.add_embeds = add_embeds
self.embed_dim = embed_dim if add_embeds else 0
# define the LSTM layer
# in keras we have inputs: A 3D tensor with shape [batch, timesteps, feature]
# units: Positive integer, dimensionality of the output space. = dim=num_units=hidden_size
if self.layers >=2:
self.lstm1 = LSTMNew(input_size=self.input_dim,
hidden_size=self.hidden_dim,
num_layers=self.layers-1,
dropoutw=self.rec_dropout,
dropout=self.rec_dropout,
bidirectional=self.bidirectional,
batch_first=True)
self.do0 = nn.Dropout(self.dropout)
# this is not in the original model
if self.layers >=2:
self.lstm2 = LSTMNew(input_size=self.hidden_dim*2,
hidden_size=self.hidden_dim*2,
num_layers=1,
dropoutw=self.rec_dropout,
dropout=self.rec_dropout,
bidirectional=False,
batch_first=True)
else:
self.lstm2 = LSTMNew(input_size=self.input_dim,
hidden_size=self.hidden_dim*2,
num_layers=1,
dropoutw=self.rec_dropout,
dropout=self.rec_dropout,
bidirectional=False,
batch_first=True)
# three Convolutional Neural Networks with different kernel sizes
nfilters= filter_kernels
nb_filters= filters
# 48 hrs of events data
L_out = [(48 - k) + 1 for k in nfilters]
maxpool_padding, maxpool_dilation, maxpool_kernel_size, maxpool_stride = (0, 1, 2, 2)
dim_ = self.embed_dim + int(np.sum([100 * np.floor(
(l + 2 * maxpool_padding - maxpool_dilation * (maxpool_kernel_size - 1) - 1) / maxpool_stride + 1) for l in
L_out]))
self.cnn1 = nn.Sequential(OrderedDict([
("cnn1_conv1d", nn.Conv1d(in_channels=self.hidden_dim*2, out_channels=nb_filters, kernel_size=nfilters[0],
stride=1, padding=0, dilation=1, groups=1, bias=True,
padding_mode='zeros')),
("cnn1_relu", nn.ReLU()),
("cnn1_maxpool1d", nn.MaxPool1d(kernel_size=2)),
("cnn1_flatten", nn.Flatten())
]))
self.cnn2 = nn.Sequential(OrderedDict([
("cnn2_conv1d", nn.Conv1d(in_channels=self.hidden_dim * 2, out_channels=nb_filters, kernel_size=nfilters[1],
stride=1, padding=0, dilation=1, groups=1, bias=True,
padding_mode='zeros')),
("cnn2_relu", nn.ReLU()),
("cnn2_maxpool1d", nn.MaxPool1d(kernel_size=2)),
("cnn2_flatten", nn.Flatten())
]))
self.cnn3 = nn.Sequential(OrderedDict([
("cnn3_conv1d", nn.Conv1d(in_channels=self.hidden_dim * 2, out_channels=nb_filters, kernel_size=nfilters[2],
stride=1, padding=0, dilation=1, groups=1, bias=True,
padding_mode='zeros')),
("cnn3_relu", nn.ReLU()),
("cnn3_maxpool1d", nn.MaxPool1d(kernel_size=2)),
("cnn3_flatten", nn.Flatten())
]))
self.encoder = nn.Sequential(OrderedDict([
("enc_relu", nn.ReLU()),
("enc_fc1", nn.Linear(dim_, self.output_dim)),
#("enc_fc1", nn.Linear(dim_, dim_//2)),
#("enc_relu2", nn.ReLU()),
#("enc_fc2", nn.Linear(dim_//2, self.output_dim)),
#("enc_bn", nn.BatchNorm1d(self.output_dim)),
("enc_layernorm", nn.LayerNorm(self.output_dim)),
("enc_flatten", nn.Flatten())
]))
self.do2 = nn.Dropout(self.drop_conv)
#self.final = nn.Linear(dim_, self.num_classes)
def forward(self, inputs, embeds=None):
out = inputs
if self.layers >=2:
out, h = self.lstm1(out)
out = self.do0(out)
out, h = self.lstm2(out)
pooling_reps = []
pool_vecs = self.cnn1(out.permute((0,2,1)))
pooling_reps.append(pool_vecs)
pool_vecs = self.cnn2(out.permute((0,2,1)))
pooling_reps.append(pool_vecs)
pool_vecs = self.cnn3(out.permute((0,2,1)))
pooling_reps.append(pool_vecs)
# concatenate all vectors
representation = torch.cat(pooling_reps, dim=1).contiguous()
out = self.do2(representation)
if embeds is not None:
out = torch.cat([out, embeds], dim=1)
encoding = self.encoder(out)
#out = self.final(out)
# return encoding in the shape of (output_dim)
return encoding
# -
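The key property of `VariationalDropout` above is that one Bernoulli mask is drawn per sequence and reused at every time step, rather than resampled per step. That invariant can be checked directly: after the masking, the set of zeroed feature positions must be identical across the time dimension. A minimal self-contained check, restating just the masking step (not the full module):

```python
import torch

torch.manual_seed(0)
dropout = 0.5
x = torch.ones(4, 7, 16)  # (batch, time, features), batch_first layout

# One mask per (batch, feature) position, broadcast over the time dimension
m = x.new_empty(4, 1, 16).bernoulli_(1 - dropout)
out = x.masked_fill(m == 0, 0) / (1 - dropout)

# Every time step shares the same zero pattern; kept units are rescaled to 1/(1-p)
same_mask = all(torch.equal(out[:, 0], out[:, t]) for t in range(out.size(1)))
```

With ordinary `nn.Dropout` the mask would differ across time steps, which is exactly what the 2015 paper linked in the docstring argues against for recurrent inputs.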
model = EventsDataEncoder()
model
model(torch.randn(10,48,390), torch.randn(10,700)).shape
| src/notebooks/EventsDataEncoder.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import matplotlib.pyplot as plt
# %matplotlib inline
import numpy as np
from scipy import linalg
from scipy import optimize
import sympy as sm
from sympy import *
# -
# # Model description: The Keynesian cross and the IS-LM model
#
# ## Keynesian cross
# The Keynesian cross is part of Keynes' general theory. It proposes that an economy's total income is, in the short run, determined largely by the spending plans of households, businesses, and government. The more people want to spend, the more goods and services firms can sell. Keynes believed that the problem during recessions and depressions was inadequate spending. The Keynesian cross is an attempt to model this insight.
#
# **Structure of the model**
#
# The planned expenditure is determined as:
#
# $$ AD = C + I + G + NX $$
#
# To this equation we add the consumption function:
#
# $$ C = a + b(Y-T), a>0, 0<b<1 $$
#
# This equation states that consumption is a linear function of disposable income, where b is the marginal propensity to consume. Furthermore, planned investment has an exogenous component $\bar{I}$ and is negatively related to the rate of interest:
#
# $$ I = \bar{I} - di $$
# Because of free capital mobility, the real domestic interest rate equals the real foreign interest rate, $r^*$
#
# $$ r = \bar{r^*} $$
#
# Furthermore, we assume that government purchases and taxes are fixed
#
# $$ T = \bar{T} $$
#
# $$ G = \bar{G} $$
#
# Combining these equations we get
#
# $$ AD = a + b(Y- \bar{T}) + \bar{I} - di + \bar{G} + NX $$
#
# This equation shows that planned expenditure is a function of income Y, the
# level of planned investment I , the fiscal policy variables G and T and the net export NX.
# The Keynesian cross is in equilibrium when actual expenditure equals planned expenditure
#
# $$ Y = AD $$
#
# ### Keynesian cross equilibrium analysis
# we define the symbols
Y = sm.symbols('Y')
C = sm.symbols('C')
PE = sm.symbols('PE')
T = sm.symbols('T')
I = sm.symbols('I')
G = sm.symbols('G')
NX = sm.symbols('NX')
d = sm.symbols('d')
i = sm.symbols('i')
a = sm.symbols('a')
b = sm.symbols('b')
# +
# We now set Y=AD to solve for Y
eq_AD = sm.Eq(Y, a + b*(Y-T) + I-(d*i) + G + NX)
eq = sm.solve(eq_AD, Y)[0]
yeq = sm.factor(eq)
print('Y =')
yeq
# -
# We have now found the equilibrium for Y. We now want to plot the Keynesian cross.
# +
# Define the values for our parameters
T = 30
I = 40
G = 30
NX = 10
a = 30
b = 0.3
d = 5
i = 5
# The data for production and AD is plotted
Y_arrey = np.linspace(0,300)
AD_arrey = (a + b * (Y_arrey - T) + I - d*i + G + NX)
degree = Y_arrey
# The figure
fig = plt.figure(figsize=(10,5))
ax = fig.add_subplot(1,1,1)
ax.plot(Y_arrey, degree, label="45-degree line", color='lightblue',linewidth=3)
ax.plot(Y_arrey, AD_arrey, label="AD=C+I+G+NX", color='darkorange',linewidth=3)
ax.set_xlabel("Y")
ax.set_ylabel("AD")
ax.legend(loc="upper left")
ax.grid()
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
ax.spines['bottom'].set_visible(False)
ax.spines['left'].set_visible(False)
# -
Y = -(G + I + NX - T*b + a - d*i)/(b-1)
print ('The equilibrium for the Keynesian cross is')
Y
# ### An Increase in Government Purchases in the Keynesian Cross
# We now want to examine how government purchases affect the equilibrium of the economy, since higher government expenditure results in higher planned expenditure. We'll therefore examine how big a change a movement in G will make in Y.
del G
G = sm.symbols('G')
diff_Y = sm.diff(yeq, G)
print('Y will change by')
diff_Y
# Where b is the marginal propensity to consume.
#the increase is set:
G_change = -(1/(b-1))
print('This means when G rises by 1 amount, Y will rise by')
G_change
# We now want to compare our old equilibrium with our new equilibrium (higher public expenditure)
# +
# New G:
G = 30
#Public expenditure rises by amount 20
delta_G = 20
G_new = G + delta_G
# The data for production and AD is plotted
Y_arrey = np.linspace(0,300)
AD_arrey_new = (a + b * (Y_arrey - T) + (I - d*i) + G_new + NX)
degree = Y_arrey
# The figure
fig = plt.figure(figsize=(10,5))
ax = fig.add_subplot(1,1,1)
ax.plot(Y_arrey, degree, label="45-degree line", color='lightblue',linewidth=3)
ax.plot(Y_arrey, AD_arrey, label="AD=C+I+G+NX", color='darkorange',linewidth=3)
ax.plot(Y_arrey, AD_arrey_new, label="AD_2=C+I+G'+NX", color='red',linewidth=3)
ax.set_xlabel("Y")
ax.set_ylabel("AD")
ax.legend(loc="upper left")
ax.grid()
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
ax.spines['bottom'].set_visible(False)
ax.spines['left'].set_visible(False)
# -
Y_G = -(G_new + I + NX - T*b + a - d*i)/(b-1)
print('The equilibrium has risen to')
Y_G
change_G = Y_G - Y
print('The equilibrium has changed by')
change_G
# ### A Decrease in Taxes in the Keynesian Cross
# We now want to examine how a decrease in taxes in the Keynesian cross will affect the equilibrium of the economy, since lower taxes result in higher planned expenditure. We'll therefore examine how big a change a movement in T will make in Y.
del T
T = sm.symbols('T')
diff_Y = sm.diff(yeq, T)
print('Y will change by')
diff_Y
# Higher taxes have a negative effect on Y, because 0<b<1
#the increase is set:
T_change = -(b/(b-1))
print('This means when T falls by 1 amount, Y will rise by')
T_change
# +
# New T:
T = 30
#Taxs falls by amount 20
delta_T = -20
T_new = T + delta_T
# The data for production and AD is plotted
Y_arrey = np.linspace(0,300)
AD_arrey_new_2 = (a + b*(Y_arrey - T_new) + (I - d*i) + G + NX)
degree = Y_arrey
# The figure
fig = plt.figure(figsize=(10,5))
ax = fig.add_subplot(1,1,1)
ax.plot(Y_arrey, degree, label="45-degree line", color='lightblue',linewidth=3)
ax.plot(Y_arrey, AD_arrey, label="AD=C(Y-T)+I+G+NX", color='darkorange',linewidth=3)
ax.plot(Y_arrey, AD_arrey_new_2, label="AD_2=C*(Y-T')+I+G + NX", color='red',linewidth=3)
ax.set_xlabel("Y")
ax.set_ylabel("PE")
ax.legend(loc="upper left")
ax.grid()
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
ax.spines['bottom'].set_visible(False)
ax.spines['left'].set_visible(False)
# -
Y = -(G + I + NX - T_new*b + a - d*i)/(b-1)
print('The equilibrium has risen to')
Y
# It's clear that a rise in public expenditure has a bigger effect on the equilibrium than the tax cut, since 117.1 < 137.1
# ## The IS-curve
# The Keynesian cross is only a stepping-stone on our path to the IS-curve, which explains the economy's aggregate demand curve. The Keynesian cross is useful because it shows how the spending plans of households, firms, and the government determine the economy's income. We'll now derive the IS-curve from the AD-curve
#
# $$Y-bY=a-b\bar{T}+\bar{I}-di+\bar{G}+NX$$
#
# $$Y(1-b)=a-b\bar{T}+\bar{I}-di+\bar{G}+NX$$
#
# $$Y=\frac{1}{1-b}(a+\bar{I}+\bar{G}+NX)-\frac{1}{1-b}(b\bar{T}+di)$$
#
# Our function for IS depends on the variables from the Keynesian cross. We can therefore define our function.
# We'll now define our function for the IS-curve
del i
i = sm.symbols('i')
Y_IS = (1/(1-b))*(a+I+G+NX)-(1/(1-b))*(b*T+d*i)
print('The function for the IS-curve =')
Y_IS
# ## The LM-curve
# Having derived the IS-curve algebraically, we now turn to the derivation of the LM-curve. Recall that the LM-curve shows the combinations of interest rates and levels of income at which the money market is in equilibrium, that is, at which the demand for money equals the supply of money. Let us assume that the money demand function is linear. Then:
#
# $$ L(Y,i)=kY-hi, \qquad k, h > 0 $$
#
# Parameter k represents how much demand for real money balances increases when level of income rises. Parameter h represents how much demand for real money balances decreases when rate of interest rises.
# The equilibrium in the money market is established where demand for real money balances equals supply of real money balances and is given by
#
# $$ M/P = kY - hi $$
#
# Solving the equation above for the interest rate, we get
#
# $$ i = \frac{1}{h}\left(kY - M/P\right) $$
#
# The above equation describes the equation for the LM-curve. To be precise it gives us the equilibrium interest rate for any given value of level of income (Y) and real money balances. For the money market to be in equilibrium we have that:
#
# $$M_d=M_s$$
#
# Where $M_d$ is the demand for money, and $M_s$ is the supply of money. We have that:
#
# $$M_d=Y-2i$$
#
# $$M_s=20$$
#
# The solution is:
#
# $$Y=20+2i$$
# We'll now define our function for the LM-curve
Y_LM = 20 + 2*i
print('The function for the LM-curve =')
Y_LM
# ## The IS-LM model
# We'll now put the IS- and LM curve together. The IS-LM model shows the relationship between interest rates and output.
# +
# The functions
I_arrey = np.linspace(0,25)
IS_arrey = 144-7*I_arrey
LM_arrey = 20+2*I_arrey
# The figure
fig = plt.figure(figsize=(10,5))
ax = fig.add_subplot(1,1,1)
ax.plot(I_arrey, IS_arrey, label="IS-curve", color='darkorange',linewidth=3)
ax.plot(I_arrey, LM_arrey, label="LM-curve", color='blue',linewidth=3)
ax.set_xlabel("Y")
ax.set_ylabel("I")
ax.legend(loc="upper left")
ax.grid()
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
ax.spines['bottom'].set_visible(False)
ax.spines['left'].set_visible(False)
# -
# The equilibrium interest rate is roughly 13.8, giving an output of roughly 47.6. At this point both the goods market and the money market clear.
#
# ## An Increase in Government Purchases in the IS-LM model
# We'll now examine how a change in public expenditure will affect the IS-LM model. We'll increase it by the amount 20 - the same as for the Keynesian cross
# We'll now define our new function for the IS-curve when we change the public expenditure
del i
i = sm.symbols('i')
Y_IS_G = (1/(1-b))*(a+I+G_new+NX)-(1/(1-b))*(b*T+d*i)
print('The function for the new IS-curve =')
Y_IS_G
# +
# The functions
I_arrey = np.linspace(0,25)
IS_arrey_G = 172-7*I_arrey
LM_arrey_G = 20+2*I_arrey
# The figure
fig = plt.figure(figsize=(10,5))
ax = fig.add_subplot(1,1,1)
ax.plot(I_arrey, IS_arrey_G, label="IS_2-curve", color='darkorange',linewidth=3)
ax.plot(I_arrey, LM_arrey_G, label="LM-curve", color='blue',linewidth=3)
ax.plot(I_arrey, IS_arrey, label="IS-curve", color='red',linewidth=3)
ax.set_xlabel("Y")
ax.set_ylabel("I")
ax.legend(loc="upper left")
ax.grid()
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
ax.spines['bottom'].set_visible(False)
ax.spines['left'].set_visible(False)
# -
# The new equilibrium interest rate is roughly 16.9, giving an output of roughly 53.8. We can therefore conclude that a rise in public expenditure raises both output and the interest rate.
# # Extension
# We will now analyze how big an effect a change in public expenditure has on output in the Keynesian cross and the IS-LM model when taxes are, respectively, a lump sum and levied as a proportion of income.
# For lump sum taxes we assume that the consumption function is defined as:
#
# $$C=a+b(Y-T+R)$$
#
# Where R is the lump sum. If we assume proportionate income tax, then consumption is defined as:
#
# $$C=a+b(Y-tY)$$
#
# ## Lump sum
#
# ### Lump Sum equilibrium analysis
# +
del a
del b
del Y
del T
del I
del d
del G
del NX
a = sm.symbols('a')
b = sm.symbols('b')
Y = sm.symbols('Y')
T = sm.symbols('T')
I = sm.symbols('I')
d = sm.symbols('d')
G = sm.symbols('G')
NX = sm.symbols('NX')
R = sm.symbols('R')
#We now set Y=AD to solve for Y
eq_AD_Lump = sm.Eq(Y, a + b*(Y-T+R) + I-(d*i) + G + NX)
eq_Lump = sm.solve(eq_AD_Lump, Y)[0]
yeq_Lump = sm.factor(eq_Lump)
print('Y =')
yeq_Lump
# -
# We have now found the equilibrium for Y when implementing a lump-sum tax. We now want to solve for how much a change in public expenditure changes Y.
diff_Y_Lump = sm.diff(yeq_Lump, G)
print('Y will change by')
diff_Y_Lump
# This is exactly the same change as without the lump-sum tax. We'll therefore look at the change in Y under a proportionate income tax.
# ## Proportionate income tax
#
# ### Proportionate income tax equilibrium analysis
# +
del a
del b
del Y
del T
del I
del d
del G
del NX
a = sm.symbols('a')
b = sm.symbols('b')
Y = sm.symbols('Y')
T = sm.symbols('T')
I = sm.symbols('I')
d = sm.symbols('d')
G = sm.symbols('G')
NX = sm.symbols('NX')
R = sm.symbols('R')
#We now set Y=PE to solve for Y
t = sm.symbols('t')
eq_AD_Prop = sm.Eq(Y, a + b*(Y-t*Y) + I-(d*i) + G + NX)
eq_Prop = sm.solve(eq_AD_Prop, Y)[0]
yeq_Prop = sm.factor(eq_Prop)
print('Y =')
yeq_Prop
# -
# We have now found the equilibrium for Y when implementing a proportionate income tax. We now want to solve for how big a change in Y a change in public expenditure produces.
diff_Y_Prop = sm.diff(yeq_Prop, G)
print('Y will change by')
diff_Y_Prop
# We can see that the new slope is less steep. This means that the multiplier effect is smaller.
#
# ### An Increase in Government Purchases in the Keynesian Cross with proportionate income tax
# +
# Define the values for our parameters
T = 30
I = 40
G = 30
NX = 10
a = 30
b = 0.3
d = 5
i = 5
t = 0.3
Y_arrey = np.linspace(0,300)
AD_arrey_Prop = (a + b*(Y_arrey - t*Y_arrey) + I - d*i + G + NX)
AD_arrey_Prop_new = (a + b*(Y_arrey - t*Y_arrey) + I - d*i + G_new + NX)
degree = Y_arrey
# The figure
fig = plt.figure(figsize=(10,5))
ax = fig.add_subplot(1,1,1)
ax.plot(Y_arrey, degree, label="45-degree line", color='lightblue',linewidth=3)
ax.plot(Y_arrey, AD_arrey_Prop, label="AD=C(Y-tY)+I+G+NX", color='darkorange',linewidth=3)
ax.plot(Y_arrey, AD_arrey_Prop_new, label="AD_2=C*(Y-tY)+I+G' + NX", color='red',linewidth=3)
ax.set_xlabel("Y")
ax.set_ylabel("PE")
ax.legend(loc="upper left")
ax.grid()
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
ax.spines['bottom'].set_visible(False)
ax.spines['left'].set_visible(False)
# -
Y_Prop = (G + I + NX + a - d*i)/(b*t-b+1)
Y_Prop_new = (G_new + I + NX + a - d*i)/(b*t-b+1)
print('The old equilibrium for the economy with proportionate income tax, before the increase in public expenditure, was')
Y_Prop
print('The new equilibrium for the economy with proportionate income tax, after the increase in public expenditure, is')
Y_Prop_new
change_prop = Y_Prop_new - Y_Prop
print('The change is')
change_prop
# Because the tax depends on income, the shift in total production is smaller: 25.3 < 28.57.
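# Both shifts can be reproduced directly from the multipliers (a sketch using the parameter values b = 0.3, t = 0.3 and the spending increase of 20 used in this project):

```python
b, t, dG = 0.3, 0.3, 20

# lump-sum tax: dY = dG / (1 - b); proportionate tax: dY = dG / (1 - b*(1 - t))
dY_lump = dG / (1 - b)
dY_prop = dG / (1 - b * (1 - t))

print(round(dY_lump, 2), round(dY_prop, 2))  # 28.57 25.32
```

# The proportionate income tax shrinks the denominator's marginal propensity to consume from b to b(1 - t), so any spending shock moves output less.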
#
# ### An Increase in Government Purchases in the IS-LM model with proportionate income tax
# We'll now find our new function for the IS-curve.
# We'll now define our function for the IS-curve
del i
i = sm.symbols('i')
Y_IS = (1/(b*t-b+1))*(a+I+G+NX-d*i)
print('The function for the IS-curve =')
Y_IS
# +
# The functions
I_arrey = np.linspace(0,25)
IS_arrey_prop = 139-6*I_arrey
LM_arrey_prop = 20+2*I_arrey
# The figure
fig = plt.figure(figsize=(10,5))
ax = fig.add_subplot(1,1,1)
ax.plot(I_arrey, IS_arrey_prop, label="IS_prop-curve", color='darkorange',linewidth=3)
ax.plot(I_arrey, LM_arrey_prop, label="LM_prop-curve", color='blue',linewidth=3)
ax.set_xlabel("Y")
ax.set_ylabel("I")
ax.legend(loc="upper left")
ax.grid()
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
ax.spines['bottom'].set_visible(False)
ax.spines['left'].set_visible(False)
# -
# The equilibrium is equal to 14.8. It's clear that a proportionate income tax has a smaller effect on the economy when public expenditure changes, because 14.8 < 16.8.
# # Conclusion
# We can therefore conclude that a rise in government purchases has a bigger effect on production than a fall in taxes. Furthermore, we can conclude that implementing a lump-sum tax does not change the effect of public expenditure, whereas a proportionate income tax does. By including a proportionate income tax, the economy becomes more stable, and shifts in exogenous variables have a smaller effect on the economy.
#
#
| modelproject/ModelprojectKeynesISLM-2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.8.5 64-bit
# name: python385jvsc74a57bd0916dbcbb3f70747c44a77c7bcd40155683ae19c65e1c03b4aa3499c5328201f1
# ---
# + [markdown] id="wZ4beVPzZmP1"
# # Computer Graphics Assignment 1
# ## Members
# <NAME>, NUSP 10786170
#
# <NAME>, NUSP 10819029
#
# <NAME>, NUSP 10692190
# + [markdown] id="dDG_MiT8ZmP2"
# Importing the required modules
# + id="WZTLKhadZmP3"
import ctypes  # needed for ctypes.c_void_p in init()
import glfw
from OpenGL.GL import *
import OpenGL.GL.shaders
import numpy as np
import math
# + [markdown] id="OabCAQUTZoFX"
# The function below receives keyboard events.
#
# The "1" and "2" keys (on the main keyboard, NOT the numpad) switch the scene from "morning" and "sun" to "night" and "moon".
#
# The W, A, S and D keys control the little man. The arrow keys control the bird.
#
# Note that we limit how far up the man and the bird can go.
# + id="_MapfdP_ZmP6"
def key_event(window, key, scancode, action, mods):
# print('[key event] key=', key)
# print('[key event] scancode=', scancode)
# print('[key event] action=', action)
# print('[key event] mods=', mods)
# print('-------')
global angle, t_x_man, t_y_man, e_x_man, e_y_man
global t_x_bird, t_y_bird
# Man
if key == 87 and t_y_man < 0.0 : # W
t_y_man += 0.01
e_x_man -= 0.015
e_y_man -= 0.015
if key == 65: # A
t_x_man -= 0.01
if key == 83 and t_y_man > -1.0 + 0.2: # S
t_y_man -= 0.01
e_x_man += 0.015
e_y_man += 0.015
if key == 68: # D
t_x_man += 0.01
# Controls the bird
if key == 265 and t_y_bird < 0.0 : # Up arrow
t_y_bird += 0.01
if key == 263: # Left arrow
t_x_bird -= 0.01
if key == 264 and t_y_bird > -1.0 + 0.05: # Down arrow
t_y_bird -= 0.01
if key == 262: # Right arrow
t_x_bird += 0.01
# Controls the scene switch between "day" and "night"
if key == 49: # keyboard (NOT NUMPAD) number 1
angle += 2
if key == 50: # keyboard (NOT NUMPAD) number 2
angle -= 2
# + [markdown] id="QRT4ynYPbf6n"
# The functions below implement the available transformations: 2D scale, rotation and translation.
# + id="Gt1kkX6bZmP7"
def scale(e_x, e_y):
return np.array([e_x, 0.0, 0.0, 0,
0.0, e_y, 0.0, 0,
0.0, 0.0, 1.0, 0.0,
0.0, 0.0, 0.0, 1.0], np.float32)
# + id="EL3sdvw8ZmP8"
def rotation(angle):
rad = math.radians(angle)
c = math.cos(rad)
s = math.sin(rad)
    return np.array([c, -s, 0.0, 0.0,
                     s, c, 0.0, 0.0,
                     0.0, 0.0, 1.0, 0.0,
                     0.0, 0.0, 0.0, 1.0], np.float32)
# + id="NzPmc-sfZmP8"
def translation(t_x, t_y):
return np.array([1.0, 0.0, 0.0, t_x,
0.0, 1.0, 0.0, t_y,
0.0, 0.0, 1.0, 0.0,
0.0, 0.0, 0.0, 1.0], np.float32)
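# A quick check of how these row-major 4x4 matrices compose and act on a point (a standalone sketch with local copies of the helpers; since glUniformMatrix4fv is called with GL_TRUE below, a point transforms as M @ [x, y, 0, 1]):

```python
import numpy as np

def translation(t_x, t_y):
    return np.array([[1.0, 0.0, 0.0, t_x],
                     [0.0, 1.0, 0.0, t_y],
                     [0.0, 0.0, 1.0, 0.0],
                     [0.0, 0.0, 0.0, 1.0]], np.float32)

def scale(e_x, e_y):
    return np.array([[e_x, 0.0, 0.0, 0.0],
                     [0.0, e_y, 0.0, 0.0],
                     [0.0, 0.0, 1.0, 0.0],
                     [0.0, 0.0, 0.0, 1.0]], np.float32)

p = np.array([1.0, 1.0, 0.0, 1.0])
# order matters: T @ S scales first, then translates; S @ T does the opposite
q1 = translation(0.5, 0.0) @ scale(2.0, 2.0) @ p  # (2.5, 2.0)
q2 = scale(2.0, 2.0) @ translation(0.5, 0.0) @ p  # (3.0, 2.0)
```

# The same composition appears later in show_window, where the man's translation and scale matrices are multiplied with np.matmul before being uploaded.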
# + [markdown] id="4p2ukpfabr_1"
# This function creates the window in which the geometric objects will be displayed.
# + id="WSlDpG6OZmP9"
def start_window():
glfw.init()
glfw.window_hint(glfw.VISIBLE, glfw.FALSE)
window = glfw.create_window(1000, 1000, "First project", None, None)
glfw.make_context_current(window)
# glfw.set_mouse_button_callback(window, mouse_event) # gets mouse inputs
glfw.set_key_callback(window, key_event) # gets keyboard inputs
return window
# + [markdown] id="Nrgs3d16b_--"
# Function that sets and compiles the shader slots (vertex and fragment)
# + id="ACF5zWkAZmP_"
def set_and_compile_shader(program, slot, slot_code):
# Set shaders source
glShaderSource(slot, slot_code)
    # Compile the shader source
glCompileShader(slot)
if not glGetShaderiv(slot, GL_COMPILE_STATUS):
error = glGetShaderInfoLog(slot).decode()
print(error)
raise RuntimeError("Shader compilation error")
# Attach shader objects to the program
glAttachShader(program, slot)
return program, slot, slot_code
# + [markdown] id="QgVYeSBqcH3-"
# The function below builds the vertex array containing every object that will be shown in the window.
# + id="EFoxxu8lZmP_"
def draw_object():
vertices_list = [
# house
# smaller roof
(-0.8, -0.5), # vertice 0
(-0.6, -0.3), # vertice 1
(-0.4, -0.5), # vertice 2
# bigger roof
(-0.9, -0.5), # vertice 0 #3
(-0.6, -0.2), # vertice 1
(-0.3, -0.5), # vertice 2
# house base
(-0.8, -0.5), # vertice 0 #6
(-0.8, -0.9), # vertice 3
(-0.4, -0.5), # vertice 1
(-0.4, -0.9), # vertice 2
# house door
(-0.6, -0.7), # vertice 0 #10
(-0.6, -0.9), # vertice 3
(-0.5, -0.7), # vertice 1
(-0.5, -0.9), # vertice 2
# sun
# normal square
(0.1, 0.6), # vertice 0 #14
(0.1, 0.4), # vertice 1
(-0.1, 0.6), # vertice 3
(-0.1, 0.4), # vertice 2
# diagonal square
(0.0, 0.64), # vertice 1
(0.14, 0.5), # vertice 2
(-0.14, 0.5), # vertice 0 #18
(0.0, 0.36), # vertice 3
# bird
# wing
(-0.5, 0.0), # vertice 0 #22
(-0.53, 0.04), # vertice 1
(-0.52, 0.0), # vertice 2
# tail
(-0.54, 0.0), # vertice 0 #25
(-0.55, 0.01), # vertice 1
(-0.54, 0.0), # vertice 2
(-0.54, 0.0), # vertice 3
(-0.54, 0.0), # vertice 4
(-0.55, -0.01), # vertice 5
# bird beak
(-0.47, 0.025), # vertice 0 #31
(-0.47, -0.025), # vertice 1
(-0.45, 0.0), # vertice 2
# man
# torso
(0.01, -0.025), # vertice 0 #34
(0.01, 0.01), # vertice 1
(-0.01, -0.025), # vertice 2
(-0.01, 0.01), # vertice 3
# legs
(0.01, -0.025), # vertice 0 #38
(-0.01, -0.025), # vertice 1
(0.01, -0.05), # vertice 2
(-0.01, -0.05), # vertice 3
(0.0, -0.025), # vertice 6
# right arm
(-0.01, 0.01), # vertice 0 #43
(-0.01, -0.025), # vertice 3
(-0.015, 0.0), # vertice 1
(-0.015, -0.01), # vertice 2
# left arm
(0.01, 0.01), # vertice 0 #47
(0.01, -0.025), # vertice 3
(0.015, 0.0), # vertice 1
(0.015, -0.01), # vertice 2
# hat
(0.01, 0.02), # vertice 0 #51
(-0.01, 0.02), # vertice 1
(0.0, 0.04), # vertice 2
(0.01, 0.035), # vertice 3
(0.002, 0.036), # vertice 4
(0.01, 0.02), # vertice 5
# tree
# trunk tree
(0.8, -0.5), # vertice 1 #57
(0.8, -0.8), # vertice 0
(0.6, -0.5), # vertice 2
(0.6, -0.8), # vertice 3
# tree top
(0.45, -0.5), # vertice 1 #61
(0.7, 0.0), # vertice 2
(0.945, -0.5), # vertice 3
# birds house
# rectangle
(0.63, -0.35), # vertice 1
(0.63, -0.22), # vertice 0 #64
(0.77, -0.35), # vertice 2
(0.77, -0.22), # vertice 3
# roof
(0.61, -0.22), # vertice 0 #68
(0.7, -0.15), # vertice 1
(0.79, -0.22), # vertice 2
# field
(-1.0, 0.0), #71
(-1.0, -1.0),
(1.0, 0.0),
(1.0, -1.0),
# sky
(-1.0, 0.0), #75
(-1.0, 1.0),
(1.0, 0.0),
(1.0, 1.0)
]
pi = 3.14
counter = 0
radius = 0.04
angle = 1.0
num_vertices = 64 # define the "quality" of the circle
# bird's body
for counter in range(num_vertices): #79
angle += 2*pi/num_vertices
x = math.cos(angle) * 0.04
y = math.sin(angle) * 0.04
vertices_list.append((x - 0.5, y))
# bird's eye
for counter in range(num_vertices): #143
angle += 2*pi/num_vertices
x = math.cos(angle) * 0.004
y = math.sin(angle) * 0.004
vertices_list.append((x - 0.485,y + 0.015))
# bird's house entrance
for counter in range(num_vertices): #207
angle += 2*pi/num_vertices
x = math.cos(angle) * 0.0546
y = math.sin(angle) * 0.0546
vertices_list.append((x + 0.7,y - 0.28))
# bird's house entrance 2
for counter in range(num_vertices): #271
angle += 2*pi/num_vertices
x = math.cos(angle) * 0.05
y = math.sin(angle) * 0.05
vertices_list.append((x + 0.7,y - 0.28))
# man head
for counter in range(num_vertices): #335
angle += 2*pi/num_vertices
x = math.cos(angle) * 0.01
y = math.sin(angle) * 0.01
vertices_list.append((x , y + 0.02))
# first cloud
for counter in range(num_vertices): #399
angle += 2*pi/num_vertices
x = math.cos(angle) * 0.084
y = math.sin(angle) * 0.084
vertices_list.append((x - 0.5, y + 0.8))
for counter in range(num_vertices): #463
angle += 2*pi/num_vertices
x = math.cos(angle) * 0.094
y = math.sin(angle) * 0.094
vertices_list.append((x - 0.4 , y + 0.8))
for counter in range(num_vertices): #527
angle += 2*pi/num_vertices
x = math.cos(angle) * 0.084
y = math.sin(angle) * 0.084
vertices_list.append((x - 0.3, y + 0.8))
# second cloud
for counter in range(num_vertices): #591
angle += 2*pi/num_vertices
x = math.cos(angle) * 0.084
y = math.sin(angle) * 0.084
vertices_list.append((x + 0.3, y + 0.8))
for counter in range(num_vertices): #655
angle += 2*pi/num_vertices
x = math.cos(angle) * 0.09
y = math.sin(angle) * 0.09
vertices_list.append((x + 0.4 , y + 0.8))
for counter in range(num_vertices): #719
angle += 2*pi/num_vertices
x = math.cos(angle) * 0.096
y = math.sin(angle) * 0.096
vertices_list.append((x + 0.5, y + 0.8))
# moon
for counter in range(num_vertices): #783
angle += 2*pi/num_vertices
x = math.cos(angle) * 0.1
y = math.sin(angle) * 0.1
vertices_list.append((x, y - 0.8))
total_vertices = len(vertices_list)
vertices = np.zeros(total_vertices, [("position", np.float32, 2)])
vertices['position'] = np.array(vertices_list)
return vertices
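# Each circle above (bird body and eye, clouds, moon, man's head) is generated as num_vertices points around a center and later drawn with GL_TRIANGLE_FAN. A standalone sketch of the same generation, using math.pi instead of the truncated 3.14 above:

```python
import math

def circle_points(cx, cy, radius, num_vertices=64):
    # approximate a circle with num_vertices points for a triangle fan
    points = []
    angle = 0.0
    for _ in range(num_vertices):
        angle += 2 * math.pi / num_vertices
        points.append((cx + math.cos(angle) * radius,
                       cy + math.sin(angle) * radius))
    return points

body = circle_points(-0.5, 0.0, 0.04)  # the bird's body
```

# With 64 vertices the fan is visually indistinguishable from a true circle at this scale.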
# + [markdown] id="Qg_t4yrqcXjY"
# This function sends the vertex array to the GPU.
# + id="tjcJ5dyyZmQB"
def send_to_gpu(vertices):
# Request a buffer slot from GPU
buffer = glGenBuffers(1)
# Make this buffer the default one
glBindBuffer(GL_ARRAY_BUFFER, buffer)
# Upload data
glBufferData(GL_ARRAY_BUFFER, vertices.nbytes, vertices, GL_DYNAMIC_DRAW)
glBindBuffer(GL_ARRAY_BUFFER, buffer)
return buffer
# + [markdown] id="oHsJ5RGzccDC"
# Function that makes the clouds move across the window.
# + id="KUpNGp7uZmQC"
def cloud_movement():
global t_x_clouds, increase
# if the cloud touchs the screen right edge,
# starts to move backward
if t_x_clouds >= 0.4: increase = False
# if the cloud touchs the screen left edge,
# starts to move forward
if t_x_clouds <= -0.4: increase = True
if increase: t_x_clouds = t_x_clouds + 0.0001 # to move the cloud forward
else: t_x_clouds = t_x_clouds - 0.0001 # to move the cloud backward
# + [markdown] id="qL28F68IckwM"
# Function that makes the bird's "wing" flap. We do this by modifying a single point, turning the triangle that serves as the wing "upside down". Since we are changing the vertex array after it has already been sent to the GPU, we need to update it, which is why we call glBufferData and glBindBuffer again.
# + id="nIJFtToSZmQD"
def bird_wings_swing(vertices,buffer):
global wings
if wings is True:
vertices["position"][23] = (-0.53, vertices["position"][23][1] + 0.08)
wings = False
else:
vertices["position"][23] = (-0.53, vertices["position"][23][1] - 0.08)
wings = True
# Update the vertices on GPU
glBufferData(GL_ARRAY_BUFFER, vertices.nbytes, vertices, GL_DYNAMIC_DRAW)
glBindBuffer(GL_ARRAY_BUFFER, buffer)
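# A minimal sketch of the structured-array update done here, on hypothetical standalone data: the wing flips by rewriting one vertex of a numpy record array, after which the full array must be re-uploaded with glBufferData.

```python
import numpy as np

# three vertices of a hypothetical wing triangle, same dtype as in draw_object()
vertices = np.zeros(3, [("position", np.float32, 2)])
vertices["position"] = [(-0.5, 0.0), (-0.53, 0.04), (-0.52, 0.0)]

# flip the wing tip (vertex 1) to the other side of the body line
vertices["position"][1] = (-0.53, vertices["position"][1][1] - 0.08)
```

# Only the CPU-side array changes in place; the GPU copy stays stale until the buffer is re-uploaded.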
# + [markdown] id="JW6kzlW1c4Eu"
# Function that runs the window's main loop. This is where we draw all the points and color them to build the scene.
# + id="7IPYeolWZmQD"
def show_window(window, program, loc, loc_color, vertices,buffer):
R = 1.0
G = 0.0
B = 0.0
glfw.show_window(window)
while not glfw.window_should_close(window):
global angle
glfw.poll_events()
glClear(GL_COLOR_BUFFER_BIT)
glClearColor(1.0, 1.0, 1.0, 1.0)
#########################
########## SKY ##########
#########################
# Transformation matrix
mat_tranformation = scale(1,1)
loc = glGetUniformLocation(program, "mat_transformation")
glUniformMatrix4fv(loc, 1, GL_TRUE, mat_tranformation)
# Checks if sun is between the 3rd and 4th quadrants or not
# If so, we change the sky color to night
# (because the moon will be showing on)
if int(angle) % 360 > 90 and int(angle) % 360 < 270:
glUniform4f(loc_color, 0, 0.062, 0.478, 1.0)
else:
glUniform4f(loc_color, 0.098, 0.611, 0.921, 1.0)
# Drawing
glDrawArrays(GL_TRIANGLE_STRIP, 75, 4)
#########################
########## MOON #########
#########################
# Transformation matrix
mat_rotation_sol = rotation(angle)
loc = glGetUniformLocation(program, "mat_transformation")
glUniformMatrix4fv(loc, 1, GL_TRUE, mat_rotation_sol)
# Drawing
glUniform4f(loc_color,0.709, 0.717, 0.768, 1.0)
glDrawArrays(GL_TRIANGLE_FAN, 783, 64)
#########################
########## SUN ##########
#########################
# Transformation matrix
mat_rotation_sol = rotation(angle)
loc = glGetUniformLocation(program, "mat_transformation")
glUniformMatrix4fv(loc, 1, GL_TRUE, mat_rotation_sol)
# Drawing
glUniform4f(loc_color, 0.960, 0.894, 0, 1.0) # first square
glDrawArrays(GL_TRIANGLE_STRIP, 14, 4)
glUniform4f(loc_color, 0.960, 0.894, 0, 1.0) # second square
glDrawArrays(GL_TRIANGLE_STRIP, 18, 4)
#########################
######### FIELD #########
#########################
# Transformation matrix
mat_tranformation = scale(1,1)
loc = glGetUniformLocation(program, "mat_transformation")
glUniformMatrix4fv(loc, 1, GL_TRUE, mat_tranformation)
# Drawing
glUniform4f(loc_color, 0.368, 0.662, 0.356, 1.0)
glDrawArrays(GL_TRIANGLE_STRIP, 71, 4)
#########################
######### HOUSE #########
#########################
glUniform4f(loc_color, 0.858, 0.454, 0.101, 1.0) # bigger roof
glDrawArrays(GL_TRIANGLES, 3, 3)
glUniform4f(loc_color, 0.101, 0.207, 0.858, 1.0) # smaller roof
glDrawArrays(GL_TRIANGLES, 0, 3)
glUniform4f(loc_color, 0.101, 0.207, 0.858, 1.0) # base
glDrawArrays(GL_TRIANGLE_STRIP, 6, 4)
glUniform4f(loc_color, 0.721, 0.321, 0.196, 1.0) # door
glDrawArrays(GL_TRIANGLE_STRIP, 10, 4)
#########################
######### TREE ##########
#########################
glUniform4f(loc_color, 0.721, 0.321, 0.196, 1.0) # trunk
glDrawArrays(GL_TRIANGLE_STRIP, 57, 4)
glUniform4f(loc_color, 0.129, 0.270, 0.156, 1.0) # top
glDrawArrays(GL_TRIANGLE_STRIP, 61, 3)
#########################
###### BIRD HOUSE #######
#########################
glUniform4f(loc_color, 0.721, 0.321, 0.196, 1.0) # rectangle
glDrawArrays(GL_TRIANGLE_STRIP, 64, 4)
glUniform4f(loc_color, 0.0, 0.0, 0.0, 1.0) # roof
glDrawArrays(GL_TRIANGLE_STRIP, 68, 3)
glUniform4f(loc_color, 0.2, 0.2, 0.2, 1.0) # house's entrance
glDrawArrays(GL_TRIANGLE_FAN, 207, 64)
glUniform4f(loc_color, 0.0, 0.0, 0.0, 1.0) # house's entrance 2
glDrawArrays(GL_TRIANGLE_FAN, 271, 64)
#########################
######### MAN ###########
#########################
# Translation and Scale matrices
man_translation_mat = translation(t_x_man, t_y_man)
man_scale_mat = scale(e_x_man, e_y_man)
mtm = man_translation_mat.reshape((4,4))
msm = man_scale_mat.reshape((4,4))
man_transformation_mat = np.matmul(mtm, msm).reshape((1,16))
loc = glGetUniformLocation(program, "mat_transformation")
glUniformMatrix4fv(loc, 1, GL_TRUE, man_transformation_mat)
# Drawing
glUniform4f(loc_color, 0.0, 0.0, 1.0, 1.0) # head
glDrawArrays(GL_TRIANGLE_FAN, 335, 64)
glUniform4f(loc_color, 0.960, 0.894, 0, 1.0) # torso
glDrawArrays(GL_TRIANGLE_STRIP, 34, 4)
glUniform4f(loc_color, 0.0, 0.894, 0.5, 1.0) # legs
glDrawArrays(GL_TRIANGLE_STRIP, 38, 4)
glUniform4f(loc_color, 0.0, 1.0, 0.0, 1.0) # right arm
glDrawArrays(GL_TRIANGLE_STRIP, 43, 4)
glUniform4f(loc_color, 0.0, 1.0, 0.0, 1.0) # left arm
glDrawArrays(GL_TRIANGLE_STRIP, 47, 4)
#########################
######### BIRD ##########
#########################
# Translation matrix
bird_translation_mat = translation(t_x_bird, t_y_bird)
loc = glGetUniformLocation(program, "mat_transformation")
glUniformMatrix4fv(loc, 1, GL_TRUE, bird_translation_mat)
# Drawing
glUniform4f(loc_color, 0.960, 0.894, 0, 1.0) # body
glDrawArrays(GL_TRIANGLE_FAN, 79, 64)
glUniform4f(loc_color, 0.0, 0.0, 0.0, 1.0) # eye
glDrawArrays(GL_TRIANGLE_FAN, 143, 64)
bird_wings_swing(vertices,buffer)
glUniform4f(loc_color, 0.858, 0.796, 0, 1.0) # wing
glDrawArrays(GL_TRIANGLE_STRIP, 22, 4)
glUniform4f(loc_color, 0.921, 0.603, 0.098, 1.0) # beak
glDrawArrays(GL_TRIANGLES, 31, 3)
glUniform4f(loc_color, 0.960, 0.894, 0, 1.0) # tail 1
glDrawArrays(GL_LINE_STRIP, 25, 2)
glUniform4f(loc_color, 0.960, 0.894, 0, 1.0) # tail 2
glDrawArrays(GL_LINE_STRIP, 27, 2)
glUniform4f(loc_color, 0.960, 0.894, 0, 1.0) # tail 3
glDrawArrays(GL_LINE_STRIP, 29, 2)
#########################
######## CLOUDS #########
#########################
cloud_movement()
# Translation matrix
clouds_translation_mat = translation(t_x_clouds, t_y_clouds)
loc = glGetUniformLocation(program, "mat_transformation")
glUniformMatrix4fv(loc, 1, GL_TRUE, clouds_translation_mat)
# Changes clouds color according to the angle position
# (if the angle is between the 3rd and 4th quadrants)
if int(angle) % 360 > 90 and int(angle) % 360 < 270:
glUniform4f(loc_color,0.529, 0.537, 0.592, 1.0)
else:
glUniform4f(loc_color, 1, 1, 1, 1.0)
# first cloud
glDrawArrays(GL_TRIANGLE_FAN, 399, 64)
glDrawArrays(GL_TRIANGLE_FAN, 463, 64)
glDrawArrays(GL_TRIANGLE_FAN, 527, 64)
# second cloud
glDrawArrays(GL_TRIANGLE_FAN, 591, 64)
glDrawArrays(GL_TRIANGLE_FAN, 655, 64)
glDrawArrays(GL_TRIANGLE_FAN, 719, 64)
glfw.swap_buffers(window)
glfw.terminate()
# + [markdown] id="RMtLozAgdAin"
# Main function of the code. It prepares the GPU and OpenGL to display the window and the geometric objects.
# + id="PZznypfUZmQE"
def init():
window = start_window()
vertex_code = """
attribute vec2 position;
uniform mat4 mat_transformation;
void main(){
gl_Position = mat_transformation * vec4(position,0.0,1.0);
}
"""
fragment_code = """
uniform vec4 color;
void main(){
gl_FragColor = color;
}
"""
# Request a program and shader slots from GPU
program = glCreateProgram()
vertex = glCreateShader(GL_VERTEX_SHADER)
fragment = glCreateShader(GL_FRAGMENT_SHADER)
set_and_compile_shader(program, vertex, vertex_code)
set_and_compile_shader(program, fragment, fragment_code)
# Build program
glLinkProgram(program)
if not glGetProgramiv(program, GL_LINK_STATUS):
print(glGetProgramInfoLog(program))
raise RuntimeError('Linking error')
# Make program the default program
glUseProgram(program)
vertices = draw_object()
buffer = send_to_gpu(vertices)
# Bind the position attribute
stride = vertices.strides[0]
offset = ctypes.c_void_p(0)
loc = glGetAttribLocation(program, "position")
glEnableVertexAttribArray(loc)
glVertexAttribPointer(loc, 2, GL_FLOAT, False, stride, offset)
loc_color = glGetUniformLocation(program, "color")
show_window(window, program, loc, loc_color, vertices, buffer)
# + [markdown] id="vkFrJYv_dKrc"
# Global environment of the code, used mainly to define the variables consumed by the transformation functions.
# + tags=[] id="FOBjj1CqZmQF"
# x-translation and y-translation for clouds
t_x_clouds = 0
t_y_clouds = 0
increase = True # controls the cloud movement
# x-translation and y-translation for the bird
t_x_bird = 0
t_y_bird = 0
# x-translation, y-translation, x-scale and y-scale for the man
t_x_man = 0
t_y_man = 0
e_x_man = 1
e_y_man = 1
angle = 0 # controls rotation angle to switch between "day" and "night" scenes
wings = False # controls the bird wings swing
init()
| Computer Graphics/OpenGL 2D/First project/project.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# import everything and define a test runner function
from importlib import reload
from helper import run_test
import ecc
import helper
import script
import tx
# -
# ### Exercise 1
#
# #### 1.1. Make [this test](/edit/session5/tx.py) pass
# ```
# tx.py:TxTest::test_verify_input
# ```
# +
# Exercise 1.1
reload(tx)
run_test(tx.TxTest('test_verify_input'))
# +
# Transaction Construction Example
from ecc import PrivateKey
from helper import decode_base58, p2pkh_script, SIGHASH_ALL
from script import Script
from tx import TxIn, TxOut, Tx
# Step 1
tx_ins = []
prev_tx = bytes.fromhex('0025bc3c0fa8b7eb55b9437fdbd016870d18e0df0ace7bc9864efc38414147c8')
tx_ins.append(TxIn(
prev_tx=prev_tx,
prev_index=0,
script_sig=b'',
sequence=0xffffffff,
))
# Step 2
tx_outs = []
h160 = decode_base58('mzx5YhAH9kNHtcN481u6WkjeHjYtVeKVh2')
tx_outs.append(TxOut(
amount=int(0.99*100000000),
script_pubkey=p2pkh_script(h160),
))
h160 = decode_base58('mnrVtF8DWjMu839VW3rBfgYaAfKk8983Xf')
tx_outs.append(TxOut(
amount=int(0.1*100000000),
script_pubkey=p2pkh_script(h160),
))
tx_obj = Tx(version=1, tx_ins=tx_ins, tx_outs=tx_outs, locktime=0, testnet=True)
# Step 3
hash_type = SIGHASH_ALL
z = tx_obj.sig_hash(0, hash_type)
pk = PrivateKey(secret=8675309)
der = pk.sign(z).der()
sig = der + hash_type.to_bytes(1, 'big')
sec = pk.point.sec()
tx_obj.tx_ins[0].script_sig = Script([sig, sec])
print(tx_obj.serialize().hex())
# -
# ### Exercise 2
#
# #### 2.1. Make [this test](/edit/session5/tx.py) pass
# ```
# tx.py:TxTest:test_sign_input
# ```
# +
# Exercise 2.1
reload(tx)
run_test(tx.TxTest('test_sign_input'))
# -
# ### Exercise 3
#
# #### 3.1. Send 0.04 TBTC to this address `muvpVznkBtk8rRSxLRVQRdUhsMjS7aKRne`
#
# #### Go here to send your transaction: https://testnet.blockexplorer.com/tx/send
#
# #### Bonus. Get some testnet coins and spend both outputs (one from your change address and one from the testnet faucet) to `muvpVznkBtk8rRSxLRVQRdUhsMjS7aKRne`
#
# #### You can get some free testnet coins at: https://testnet.coinfaucet.eu/en/
# +
# Exercise 3.1
reload(tx)
from ecc import PrivateKey
from helper import decode_base58, p2pkh_script, SIGHASH_ALL
from script import Script
from tx import TxIn, TxOut, Tx
prev_tx = bytes.fromhex('0025bc3c0fa8b7eb55b9437fdbd016870d18e0df0ace7bc9864efc38414147c8')
prev_index = 0
target_address = 'miKegze5FQNCnGw6PKyqUbYUeBa4x2hFeM'
target_amount = 0.02
change_address = 'mzx5YhAH9kNHtcN481u6WkjeHjYtVeKVh2'
change_amount = 1.07
secret = 8675309
priv = PrivateKey(secret=secret)
print(priv.point.address(testnet=True))
# initialize inputs
tx_ins = []
# create a new tx input with prev_tx, prev_index, blank script_sig and max sequence
tx_ins.append(TxIn(
prev_tx=prev_tx,
prev_index=prev_index,
script_sig=b'',
sequence=0xffffffff,
))
# initialize outputs
tx_outs = []
# decode the hash160 from the target address
h160 = decode_base58(target_address)
# convert hash160 to p2pkh script
script_pubkey = p2pkh_script(h160)
# convert target amount to satoshis (multiply by 100 million)
target_satoshis = int(target_amount*100000000)
# create a new tx output for target with amount and script_pubkey
tx_outs.append(TxOut(
amount=target_satoshis,
script_pubkey=script_pubkey,
))
# decode the hash160 from the change address
h160 = decode_base58(change_address)
# convert hash160 to p2pkh script
script_pubkey = p2pkh_script(h160)
# convert change amount to satoshis (multiply by 100 million)
change_satoshis = int(change_amount*100000000)
# create a new tx output for target with amount and script_pubkey
tx_outs.append(TxOut(
amount=change_satoshis,
script_pubkey=script_pubkey,
))
# create the transaction
tx_obj = Tx(version=1, tx_ins=tx_ins, tx_outs=tx_outs, locktime=0, testnet=True)
# now sign the 0th input with the private key using SIGHASH_ALL using sign_input
tx_obj.sign_input(0, priv, SIGHASH_ALL)
# SANITY CHECK: change address corresponds to private key
if priv.point.address(testnet=True) != change_address:
raise RuntimeError('Private Key does not correspond to Change Address, check priv_key and change_address')
# SANITY CHECK: output's script_pubkey is the same one as your address
if tx_ins[0].script_pubkey(testnet=True).elements[2] != decode_base58(change_address):
raise RuntimeError('Output is not something you can spend with this private key. Check that the prev_tx and prev_index are correct')
# SANITY CHECK: fee is reasonable
if tx_obj.fee(testnet=True) > 0.05*100000000 or tx_obj.fee(testnet=True) <= 0:
raise RuntimeError('Check that the change amount is reasonable. Fee is {}'.format(tx_obj.fee()))
# serialize and hex()
print(tx_obj.serialize().hex())
# +
# Bonus
from ecc import PrivateKey
from helper import decode_base58, p2pkh_script, SIGHASH_ALL
from script import Script
from tx import TxIn, TxOut, Tx
prev_tx_1 = bytes.fromhex('89cbfe2eddaddf1eb11f5c4adf6adaa9bca4adc01b2a3d03f8dd36125c068af4')
prev_index_1 = 0
prev_tx_2 = bytes.fromhex('19069e1304d95f70e03311d9d58ee821e0978e83ecfc47a30af7cd10fca55cf4')
prev_index_2 = 0
target_address = 'muvpVznkBtk8rRSxLRVQRdUhsMjS7aKRne'
target_amount = 1.71
secret = 61740721216174072121
priv = PrivateKey(secret=secret)
# initialize inputs
tx_ins = []
# create the first tx input with prev_tx_1, prev_index_1, blank script_sig and max sequence
tx_ins.append(TxIn(
prev_tx=prev_tx_1,
prev_index=prev_index_1,
script_sig=b'',
sequence=0xffffffff,
))
# create the second tx input with prev_tx_2, prev_index_2, blank script_sig and max sequence
tx_ins.append(TxIn(
prev_tx=prev_tx_2,
prev_index=prev_index_2,
script_sig=b'',
sequence=0xffffffff,
))
# initialize outputs
tx_outs = []
# decode the hash160 from the target address
h160 = decode_base58(target_address)
# convert hash160 to p2pkh script
script_pubkey = p2pkh_script(h160)
# convert target amount to satoshis (multiply by 100 million)
target_satoshis = int(target_amount*100000000)
# create a single tx output for target with amount and script_pubkey
tx_outs.append(TxOut(
amount=target_satoshis,
script_pubkey=script_pubkey,
))
# create the transaction
tx_obj = Tx(1, tx_ins, tx_outs, 0, testnet=True)
# sign both inputs with the private key using SIGHASH_ALL using sign_input
tx_obj.sign_input(0, priv, SIGHASH_ALL)
tx_obj.sign_input(1, priv, SIGHASH_ALL)
# SANITY CHECK: output's script_pubkey is the same one as your address
if tx_ins[0].script_pubkey(testnet=True).elements[2] != decode_base58(priv.point.address(testnet=True)):
raise RuntimeError('Output is not something you can spend with this private key. Check that the prev_tx and prev_index are correct')
# SANITY CHECK: fee is reasonable
if tx_obj.fee(testnet=True) > 0.05*100000000 or tx_obj.fee(testnet=True) <= 0:
raise RuntimeError('Check that the change amount is reasonable. Fee is {}'.format(tx_obj.fee()))
# serialize and hex()
print(tx_obj.serialize().hex())
# -
# ### Exercise 4
#
# #### 4.1. Find the hash160 of the RedeemScript
# ```
# 5221022626e955ea6ea6d98850c994f9107b036b1334f18ca8830bfff1295d21cfdb702103b287eaf122eea69030a0e9feed096bed8045c8b98bec453e1ffac7fbdbd4bb7152ae
# ```
# +
# Exercise 4.1
from helper import hash160
hex_redeem_script = '5221022626e955ea6ea6d98850c994f9107b036b1334f18ca8830bfff1295d21cfdb702103b287eaf122eea69030a0e9feed096bed8045c8b98bec453e1ffac7fbdbd4bb7152ae'
# bytes.fromhex script
redeem_script = bytes.fromhex(hex_redeem_script)
# hash160 result
h160 = hash160(redeem_script)
# hex() to display
print(h160.hex())
# +
# P2SH address construction example
from helper import encode_base58_checksum
print(encode_base58_checksum(b'\x05'+bytes.fromhex('74d691da1574e6b3c192ecfb52cc8984ee7b6c56')))
# -
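# The encode_base58_checksum helper used above can be sketched in plain Python (an illustrative reimplementation assuming the standard Base58Check scheme; the real one lives in helper.py):

```python
import hashlib

BASE58_ALPHABET = '123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz'

def encode_base58(s):
    # leading zero bytes become leading '1' characters
    count = 0
    for c in s:
        if c == 0:
            count += 1
        else:
            break
    num = int.from_bytes(s, 'big')
    result = ''
    while num > 0:
        num, mod = divmod(num, 58)
        result = BASE58_ALPHABET[mod] + result
    return '1' * count + result

def encode_base58_checksum(b):
    # append the first 4 bytes of double-SHA256 as a checksum
    checksum = hashlib.sha256(hashlib.sha256(b).digest()).digest()[:4]
    return encode_base58(b + checksum)

addr = encode_base58_checksum(
    b'\x05' + bytes.fromhex('74d691da1574e6b3c192ecfb52cc8984ee7b6c56'))
```

# The 0x05 prefix marks a mainnet p2sh script hash, so the resulting address always starts with '3'.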
# ### Exercise 5
#
# #### 5.1. Make [these tests](/edit/session5/helper.py) pass
# ```
# helper.py:HelperTest:test_p2pkh_address
# helper.py:HelperTest:test_p2sh_address
# ```
# +
# Exercise 5.1
reload(helper)
run_test(helper.HelperTest('test_p2pkh_address'))
run_test(helper.HelperTest('test_p2sh_address'))
# +
# z for p2sh example
from helper import double_sha256
sha = double_sha256(bytes.fromhex('0100000001868278ed6ddfb6c1ed3ad5f8181eb0c7a385aa0836f01d5e4789e6bd304d87221a000000475221022626e955ea6ea6d98850c994f9107b036b1334f18ca8830bfff1295d21cfdb702103b287eaf122eea69030a0e9feed096bed8045c8b98bec453e1ffac7fbdbd4bb7152aeffffffff04d3b11400000000001976a914904a49878c0adfc3aa05de7afad2cc15f483a56a88ac7f400900000000001976a914418327e3f3dda4cf5b9089325a4b95abdfa0334088ac722c0c00000000001976a914ba35042cfe9fc66fd35ac2224eebdafd1028ad2788acdc4ace020000000017a91474d691da1574e6b3c192ecfb52cc8984ee7b6c56870000000001000000'))
z = int.from_bytes(sha, 'big')
print(hex(z))
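# The double_sha256 helper here is just two rounds of SHA-256 (a minimal sketch assuming the standard Bitcoin definition):

```python
import hashlib

def double_sha256(b):
    # two rounds of SHA-256, used for Bitcoin signature hashes and checksums
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

digest = double_sha256(b'example payload')
z = int.from_bytes(digest, 'big')  # the integer the signature commits to
```

# Interpreting the 32-byte digest as a big-endian integer gives the z that point.verify(z, sig) checks below.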
# +
# p2sh verification example
from ecc import S256Point, Signature
from helper import double_sha256
sha = double_sha256(bytes.fromhex('0100000001868278ed6ddfb6c1ed3ad5f8181eb0c7a385aa0836f01d5e4789e6bd304d87221a000000475221022626e955ea6ea6d98850c994f9107b036b1334f18ca8830bfff1295d21cfdb702103b287eaf122eea69030a0e9feed096bed8045c8b98bec453e1ffac7fbdbd4bb7152aeffffffff04d3b11400000000001976a914904a49878c0adfc3aa05de7afad2cc15f483a56a88ac7f400900000000001976a914418327e3f3dda4cf5b9089325a4b95abdfa0334088ac722c0c00000000001976a914ba35042cfe9fc66fd35ac2224eebdafd1028ad2788acdc4ace020000000017a91474d691da1574e6b3c192ecfb52cc8984ee7b6c56870000000001000000'))
z = int.from_bytes(sha, 'big')
point = S256Point.parse(bytes.fromhex('022626e955ea6ea6d98850c994f9107b036b1334f18ca8830bfff1295d21cfdb70'))
sig = Signature.parse(bytes.fromhex('3045022100dc92655fe37036f47756db8102e0d7d5e28b3beb83a8fef4f5dc0559bddfb94e02205a36d4e4e6c7fcd16658c50783e00c341609977aed3ad00937bf4ee942a89937'))
print(point.verify(z, sig))
# -
# ### Exercise 6
#
# #### 6.1. Validate the second signature of the first input
#
# ```
# 0100000001868278ed6ddfb6c1ed3ad5f8181eb0c7a385aa0836f01d5e4789e6bd304d87221a000000db00483045022100dc92655fe37036f47756db8102e0d7d5e28b3beb83a8fef4f5dc0559bddfb94e02205a36d4e4e6c7fcd16658c50783e00c341609977aed3ad00937bf4ee942a8993701483045022100da6bee3c93766232079a01639d07fa869598749729ae323eab8eef53577d611b02207bef15429dcadce2121ea07f233115c6f09034c0be68db99980b9a6c5e75402201475221022626e955ea6ea6d98850c994f9107b036b1334f18ca8830bfff1295d21cfdb702103b287eaf122eea69030a0e9feed096bed8045c8b98bec453e1ffac7fbdbd4bb7152aeffffffff04d3b11400000000001976a914904a49878c0adfc3aa05de7afad2cc15f483a56a88ac7f400900000000001976a914418327e3f3dda4cf5b9089325a4b95abdfa0334088ac722c0c00000000001976a914ba35042cfe9fc66fd35ac2224eebdafd1028ad2788acdc4ace020000000017a91474d691da1574e6b3c192ecfb52cc8984ee7b6c568700000000
# ```
#
# The sec pubkey of the second signature is:
# ```
# 03b287eaf122eea69030a0e9feed096bed8045c8b98bec453e1ffac7fbdbd4bb71
# ```
#
# The der signature of the second signature is:
# ```
# 3045022100da6bee3c93766232079a01639d07fa869598749729ae323eab8eef53577d611b02207bef15429dcadce2121ea07f233115c6f09034c0be68db99980b9a6c5e754022
# ```
#
# The redeemScript is:
# ```
# 475221022626e955ea6ea6d98850c994f9107b036b1334f18ca8830bfff1295d21cfdb702103b287eaf122eea69030a0e9feed096bed8045c8b98bec453e1ffac7fbdbd4bb7152ae
# ```
# +
# Exercise 6.1
from io import BytesIO
from ecc import S256Point, Signature
from helper import double_sha256, int_to_little_endian
from script import Script
from tx import Tx, SIGHASH_ALL
hex_sec = '03b287eaf122eea69030a0e9feed096bed8045c8b98bec453e1ffac7fbdbd4bb71'
hex_der = '3045022100da6bee3c93766232079a01639d07fa869598749729ae323eab8eef53577d611b02207bef15429dcadce2121ea07f233115c6f09034c0be68db99980b9a6c5e754022'
hex_redeem_script = '5221022626e955ea6ea6d98850c994f9107b036b1334f18ca8830bfff1295d21cfdb702103b287eaf122eea69030a0e9feed096bed8045c8b98bec453e1ffac7fbdbd4bb7152ae'
sec = bytes.fromhex(hex_sec)
der = bytes.fromhex(hex_der)
redeem_script = bytes.fromhex(hex_redeem_script)
hex_tx = '0100000001868278ed6ddfb6c1ed3ad5f8181eb0c7a385aa0836f01d5e4789e6bd304d87221a000000db00483045022100dc92655fe37036f47756db8102e0d7d5e28b3beb83a8fef4f5dc0559bddfb94e02205a36d4e4e6c7fcd16658c50783e00c341609977aed3ad00937bf4ee942a8993701483045022100da6bee3c93766232079a01639d07fa869598749729ae323eab8eef53577d611b02207bef15429dcadce2121ea07f233115c6f09034c0be68db99980b9a6c5e75402201475221022626e955ea6ea6d98850c994f9107b036b1334f18ca8830bfff1295d21cfdb702103b287eaf122eea69030a0e9feed096bed8045c8b98bec453e1ffac7fbdbd4bb7152aeffffffff04d3b11400000000001976a914904a49878c0adfc3aa05de7afad2cc15f483a56a88ac7f400900000000001976a914418327e3f3dda4cf5b9089325a4b95abdfa0334088ac722c0c00000000001976a914ba35042cfe9fc66fd35ac2224eebdafd1028ad2788acdc4ace020000000017a91474d691da1574e6b3c192ecfb52cc8984ee7b6c568700000000'
stream = BytesIO(bytes.fromhex(hex_tx))
# parse the S256Point and Signature
point = S256Point.parse(sec)
sig = Signature.parse(der)
# parse the Tx
t = Tx.parse(stream)
# change the first input's scriptSig to redeemScript (use Script.parse on the redeemScript)
t.tx_ins[0].script_sig = Script.parse(redeem_script)
# get the serialization
ser = t.serialize()
# add the sighash (4 bytes, little-endian of SIGHASH_ALL)
ser += int_to_little_endian(SIGHASH_ALL, 4)
# double_sha256 the result
to_sign = double_sha256(ser)
# this, interpreted as a big-endian number, is your z
z = int.from_bytes(to_sign, 'big')
# now verify the signature using point.verify
print(point.verify(z, sig))
| session5/complete/session5.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# style the notebook
from IPython.core.display import HTML
import urllib.request
response = urllib.request.urlopen('http://bit.ly/1LC7EI7')
HTML(response.read().decode("utf-8"))
# # Perceptron Learning Algorithm
# **Not 'written' yet, just notes toward an article.** Based on the development in chapter 1 of *Learning from Data* by Abu-Mostafa et al.
#
# This is a poorly performing algorithm, but it illustrates the idea of machine learning.
#
# So the idea here is we encode information as a vector. For example, we may want to make a credit decision. Factors could include age, debt, income, and more. We cannot know if somebody is a good credit risk or not, but we have a lot of data from previous loans. We want to create a model from the past data so we can decide if we should approve a new application.
#
# * input: x -> application
# * data: N sets of previous inputs and outcomes ($\mathbf{x}_i$, $y_i$) $\text{for i in 1..N}$
# * output: y -> extend credit
# * target function: f: x -> y No way to know this
# * hypothesis function: g: x -> y we learn this
#
# We do not know what the true target function $f$ might be. So we use machine learning to find a hypothesis function $g$, which will be *approximately* equal to $f$, or $f\approx g$.
#
#
# Here is some example data. I use only two factors so I can plot it in 2 dimensional plots. Real data may have dozens to thousands of factors.
import numpy as np
data = np.array(((3.0, 4.0), (4.0, 6.0), (4.3, 4.0), (8.0, 7.0),
(6.0, 5.5), (6.4, 8.2), (1.0, 7.0), (4.0, 5.2),
(7.0, 7.5), (5.0, 2.0), (7.0, 6.0), (7.0, 3.0),
(6.0, 8.4), (2.0, 3.6), (1.0, 2.7)))
# A **perceptron** models the hypothesis function as a sum of weights. Maybe we should weight income very highly, weight debt with a large negative value, weight age with a modest positive value, and so on.
#
# Then, for a given set of factors we multiply the weights by the factors and sum them. If the sum exceeds a threshold we approve the credit, otherwise we deny it.
#
# If we let $d$ be the dimension of our factors (the number of factors), then the perceptron is
#
# $$\text{approve credit if } \sum\limits_{i=1}^d w_i x_i >\text{ threshold}$$
#
# or
# $$h(\mathbf{x}) = \text{sign}\Big[\sum\limits_{i=1}^d w_i x_i - \text{threshold}\Big]$$
#
# In other words, $h(\mathbf{x})$ will be $1$ if we approve credit, and $-1$ if we deny credit.
#
# NumPy provides a `sign` routine, but it does not behave how we want. It returns 1 for positive numbers, -1 for negative numbers, but 0 for zero. Hence we will write our own `sign` function which uses `numpy.sign` but converts any value of 0 to 1.
def sign(data):
""" numpy.sign gives 0 for sign(0), we want 1."""
s = np.asarray(np.sign(data), dtype=int)
s[s==0] = 1
return s
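# A quick sanity check of the zero handling (self-contained, so it redefines the helper):

```python
import numpy as np

def sign(data):
    """numpy.sign gives 0 for sign(0); we want 1."""
    s = np.asarray(np.sign(data), dtype=int)
    s[s == 0] = 1
    return s

# zero maps to +1 instead of 0
print(sign(np.array([-2.0, 0.0, 3.0])))  # -> [-1  1  1]
```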
# To test our code we need to simulate the function $f$, which normally we will not know. Here I arbitrarily define it as approving the credit if the sum of the two factors is greater than 10.5. In other words, if $x_1 + x_2 < 10.5$ that person didn't make us money, otherwise they did make us money.
def real_cost(data):
return sign(data[:, 1] + data[:, 0] - 10.5)
# Let's look at that in a plot. I'll write a function to plot the data points as blue plus marks if they made us money, and minus marks if they cost us money.
# +
# %matplotlib inline
import matplotlib.pyplot as plt
def plot_costs(x0, x1, y):
for i, c in enumerate(y):
plt.scatter(x0[i], x1[i], marker='+' if c==1 else '$-$',
c='b', s=50)
y = real_cost(data)
plot_costs(data[:, 0], data[:, 1], y)
plt.plot([9, 1], [3, 9], ls='--', color='g');
plt.xlim(0, 9); plt.ylim(1, 9);
# -
# I drew a dotted line through the data which separates the pluses from the minuses. The perceptron equation is a linear combination of factors, so it can only ever linearly discriminate between groups. Thus our data must be **linearly separable** for PLA to work.
#
# We want to implement this code using linear algebra. We can get rid of the $> \text{threshold}$ term by introducing a dummy term $x_0$, which we always set to 1. We introduce a new weight $w_0$ corresponding to it. This lets us write our hypothesis function as
#
# $$h(\mathbf{x}) = \text{sign}\Big[\sum\limits_{i=0}^d w_i x_i \Big]$$
#
# $\sum\limits_{i=0}^d w_i x_i$ is called an **inner product** in linear algebra, and we can calculate it extremely quickly with `numpy.inner()`. It is written as
#
# $$h(\mathbf{x}) = \mathbf{w}^\mathsf{T}\mathbf{x}$$
#
# Here is an example for $1*4 + 2*7$ (weights are 1 and 2, x's are 4 and 7):
x = np.array([[4, 7]])
w = np.array([[1],
[2]])
np.inner(w.T, x)
# Alternatively we could use `numpy.dot` to compute the same value, but `inner` better conveys what we are doing:
np.dot(w.T, x.T)
# I prefer having my data be a vector (a column) so I am prone to write the following, and will do so in the rest of this paper:
x = np.array([[4], [7]])
w = np.array([[1, 2]])
np.dot(w, x)
# We need to add $x_0 = 1$ to our data, so let's get that out of the way before we discuss the algorithm. Our data is stored with each row as a separate record, so we need to add a column of ones to the left of the matrix `data`. The opaquely named `numpy.c_` concatenates columns together:
def add_one_column(data):
N = len(data) # number of data records
return np.c_[np.ones(N), data] # add column of ones for x_0
xs = add_one_column(data)
xs
# Now, the algorithm.
#
# We start by assigning random numbers to the weight vector. Perform the inner product against our data set and compare to the actual results. Almost certainly one or more points will be misclassified.
#
# Randomly take *one* of the misclassified points and 'nudge' the weights so that the point is no longer misclassified. This nudge fixes that point, but of course it might cause one or more other points to become misclassified. On the other hand, it might also fix the classification of some other points.
#
# $\mathbf{w}^\mathsf{T}\mathbf{x}$ is a linear operator - it creates a line. When we start with random weights this is the same as drawing a random line through our space. It is unlikely to correctly partition our data points. When we 'nudge' the weights we are shifting the line so the point is on the other side of it. So you can visualize the algorithm as moving the line around until it correctly separates our points.
# +
weights = [8.4805, -.5, -1.351]
def plot_weight_line(weights, x0, x1):
def eq(w, x):
""" convert w0 + w1*x + w2*y into y = mx + b"""
return (-w[1]*x - w[0]) / w[2]
plt.plot([x0, x1], [eq(weights, x0), eq(weights, x1)], ls='--', color='g')
def plot_weight_example(weights):
plot_weight_line(weights, 0, 9)
plot_costs(data[:, 0], data[:, 1], y)
plt.xlim(0,9); plt.ylim(0, 10);
plot_weight_example(weights)
# -
# And after some weight change to move the line so that the point at (4, 5.2) is on the correct side of the line:
weights = [10.1782, -.6, -1.351]
plot_weight_example(weights)
# You can see that it had the side benefit of also putting the point at (1, 7) on the right side of the line.
#
# I caused this by carefully choosing the weights by trial and error; the algorithm uses a different technique. Let's think about the geometric interpretation of the inner product $\mathbf{w}^\mathsf{T}\mathbf{x}$.
#
# If the inner product is positive (accept the application) then the angle formed by $\mathbf{w}$ and $\mathbf{x}$ will be acute. If the inner product is negative (deny the application) then the angle will be obtuse.
# +
def plot_vector(x, c='b', label=''):
plt.gca().quiver(0,0,x[0], x[1],angles='xy',scale_units='xy',scale=1, color=c)
plt.plot([0], [0], color=c, label=label)
x = [1.0, 0.0]
w = [.7, 0.9]
plt.subplot(121)
plot_vector(x, 'b', 'x')
plot_vector(w, 'r', 'w')
plt.xlim(-1.5, 1.5); plt.ylim(-1.5,1.5);
plt.title('inner product is: {}'.format(np.dot(x, w)))
plt.legend(loc=4)
w = [-.9, .7]
plt.subplot(122)
plot_vector(x, 'b', 'x')
plot_vector(w, 'r', 'w')
plt.xlim(-1.5, 1.5); plt.ylim(-1.5,1.5);
plt.title('inner product is: {}'.format(np.dot(x, w)))
plt.legend(loc=4);
# -
# If the angle is acute ($h(x)$ is positive) and the point is misclassified, the inner product is positive but it should be negative: $y=-1$. If we add $yx$ to the weight, i.e. $-x$, the angle becomes obtuse. Likewise, if the angle is obtuse ($h(x)$ is negative) and the point is misclassified then $y=+1$. If we add $yx$ to the weight, i.e. $+x$, the angle becomes acute:
# +
x = [1.0, 0.0]
w = [.7, 0.9]
wyx = [-.3, .9]
plt.subplot(121)
plot_vector(x, 'b', 'x')
plot_vector(w, 'r', 'w')
plot_vector(wyx, 'g', 'w + yx')
plt.xlim(-1.5, 1.5); plt.ylim(-1.5,1.5);
plt.title('inner product is: {}'.format(np.dot(x, w)))
plt.legend(loc=4)
w = [-.9, .7]
wyx = [.1, 0.7]
plt.subplot(122)
plot_vector(x, 'b', 'x')
plot_vector(w, 'r', 'w')
plot_vector(wyx, 'g', 'w + yx')
plt.xlim(-1.5, 1.5); plt.ylim(-1.5,1.5);
plt.title('inner product is: {}'.format(np.dot(x, w)))
plt.legend(loc=4);
# -
# Therefore our 'nudging' algorithm is very simple to implement. Choose a point such that $\text{sign}(\mathbf{w}^\mathsf{T}\mathbf{x}_n) \neq y_n$, which means it is misclassified. Update the weight with $\mathbf{w} \gets \mathbf{w} + y_n\mathbf{x}_n$.
#
# The PLA runs in a loop:
#
# ```python
#
# while some point is misclassified:
# randomly choose a miscategorized point i
# w = w + y_i * x_i
# ```
#
# PLA will not converge if the data is not linearly separable, so we need to add a check on the number of iterations, and we also need to return a `success` flag indicating whether we found an answer or not. We use `numpy.random.permutation()` to randomly iterate over the points. If you don't do this you can easily enter an infinite loop: you endlessly fix point 1, which misclassifies point 2; you fix point 2, which misclassifies point 1, ad infinitum.
def PLA(xs, y, weights=None, max_iters=5000):
if weights is None:
weights = np.array([np.random.random(xs.shape[1])])
if weights.ndim == 1:
weights = np.array([weights])
misidentified = True
success = False
iters = 0
indexes = range(len(xs))
while misidentified and iters < max_iters:
misidentified = False
for i in np.random.permutation(indexes):
x = xs[i]
s = sign(np.dot(weights, x)[0])
if s != y[i]:
misidentified = True
weights += np.dot(y[i], x)
break
success = not misidentified
iters += 1
return weights, success, iters
# +
from numpy.random import randn
d = 2 # dimension of attributes
# I'm hard coding this to cause the initial weights to be
# very bad. Uncomment the next line to randomly generate weights.
weights = np.array([[-0.32551368, 1.20473617, -1.00629554]])
#weights = np.array([randn(d+1)*5 - 1.5])
# plot initial setup
plot_weight_line(weights[0, :], 0, 9)
plot_costs(xs[:, 1], xs[:, 2], y)
plt.title('Algorithm Start')
# run algorithm
weights, success, iters = PLA(xs, y, weights)
# plot and print the results
plt.figure()
plot_costs(xs[:, 1], xs[:, 2], y)
print('final weights', weights)
plot_weight_line(weights[0, :], 0, 9)
plt.title('Algorithm Result')
print('number of iterations', iters)
# -
# # Non-Linearly Separable Data
#
#
# It should be reasonably clear why PLA cannot separate data which is thoroughly intermingled. It draws a straight line, and there is no way to draw a straight line through intermingled data that separates it into two groups. But how does it perform if the data is mostly separated, with only modest overlap? Let's look at that.
# +
def make_near_separable_data():
d1 = np.random.multivariate_normal((0,5), ((4, 0), (0, 5)), 20)
d2 = np.random.multivariate_normal((5, 0), ((4, -3.5), (-3.5, 7)), 40)
data = np.vstack((d1, d2))
y = np.array([1]*20 + [-1]*40)
return data, y
ns_data, ns_y = make_near_separable_data()
plot_costs (ns_data[:, 0], ns_data[:, 1], ns_y)
# -
# Unless we were extremely unlucky with the random number generator we should have a cloud of pluses at the upper left, and a longer, thin, vertically leaning cloud of minuses at the lower right, with slight overlap. There is no way to linearly separate this data.
#
# Let's test that by running the algorithm and inspecting the `success` flag.
ns_xs = add_one_column(ns_data)
ns_weights, success, iters = PLA(ns_xs, ns_y, max_iters=5000)
print('success =', success)
# As you can see, the algorithm could not linearly separate the data. But what do the results look like?
plot_costs(ns_xs[:, 1], ns_xs[:, 2], ns_y)
plot_weight_line(ns_weights[0, :], -5, 9)
# The solution is pretty good. It might not be optimal because we arbitrarily stopped running after 5,000 iterations. It is possible that we found a better solution on some earlier iteration, and subsequent changes made the result worse. A trivial change suggests itself. While iterating, save the current best result. In the case of failure, return the best answer instead of the last one. If the data is linearly separable the best answer will be the one with no misclassified data, so the algorithm will still work correctly for linearly separable data.
def PPLA(xs, y, weights=None, max_iters=5000):
N = len(xs)
if weights is None:
weights = np.array([np.random.random(xs.shape[1])])
if weights.ndim == 1:
weights = np.array([weights])
best = None
best_miscount = N + 1
success = False
iters = 0
indexes = range(N)
while iters < max_iters:
num_misidentified = 0
fix_index = -1
for i in np.random.permutation(indexes):
x = xs[i]
s = sign(np.dot(weights, x)[0])
if s != y[i]:
num_misidentified += 1
if fix_index < 0:
fix_index = i
if num_misidentified < best_miscount:
best = weights.copy()
best_miscount = num_misidentified
if num_misidentified == 0:
return weights, True, iters, 0
weights += np.dot(y[fix_index], xs[fix_index])
iters += 1
return best, False, iters, best_miscount
ns_weights, success, iters, num_errors = PPLA(ns_xs, ns_y, max_iters=5000)
plot_costs(ns_xs[:, 1], ns_xs[:, 2], ns_y)
plot_weight_line(ns_weights[0, :], -5, 9)
# I will not cover linear regression in detail here, other than to mention its existence and its use to aid the perceptron algorithm. We can use least squares to roughly compute our starting weights. Least squares seeks to minimize the squared error over all of the data points.
#
# $$E_{in}(\mathbf{w}) = \frac{1}{N}\|\mathbf{Xw} - \mathbf{y}\|^2$$
#
# $$\nabla E_{in}(\mathbf{w}) = \frac{2}{N}\mathbf{X}^\mathsf{T}(\mathbf{Xw} - \mathbf{y}) = 0$$
#
# $$ \mathbf{X}^\mathsf{T} \mathbf{Xw} = \mathbf{X}^\mathsf{T}\mathbf{y}$$
#
# $$ \mathbf{w} = (\mathbf{X}^\mathsf{T}\mathbf{X})^{-1}\mathbf{X}^\mathsf{T}\mathbf{y}$$
#
# $$ \mathbf{w} = \mathbf{X}^{\dagger}\mathbf{y}$$
#
# $\mathbf{X}^{\dagger}$, which equals $(\mathbf{X}^\mathsf{T}\mathbf{X})^{-1}\mathbf{X}^\mathsf{T}$, is called the **pseudo-inverse**.
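# As a quick numerical check of the identity above (illustrative only, on a tiny full-rank system), the explicit normal-equations solution matches the pseudo-inverse solution:

```python
import numpy as np

X = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])   # three records, with the x_0 = 1 dummy column
y = np.array([0.0, 1.0, 2.0])

w_normal = np.linalg.inv(X.T @ X) @ X.T @ y   # (X^T X)^{-1} X^T y
w_pinv = np.linalg.pinv(X) @ y                # X^dagger y
print(w_normal, w_pinv)                       # both ~[0. 1.]
```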
#
#
# We can either use `scipy.linalg.pinv()` to compute the pseudo inverse, or use `numpy.linalg.lstsq` to compute the least squares solution. This works for classification problems because we are using +1 and -1 for the classification, which of course are real numbers.
#
# After generating the weights using least squares, pass them into the PLA.
# +
import scipy.linalg as la
xi = la.pinv(ns_xs)
w_lr = np.dot(xi, ns_y)
ns_weights, success, iters, num_errors = PPLA(ns_xs, ns_y, w_lr, max_iters=5000)
plot_costs(ns_xs[:, 1], ns_xs[:, 2], ns_y)
plot_weight_line(w_lr, -5, 9)
print(w_lr)
# -
# alternative way to compute the weights
w, _, _, _ = np.linalg.lstsq(ns_xs, ns_y)
print(w)
print(w_lr)
# ## Nonlinearly Separable Data
#
# There is much to learn, but here is a quick trick. Consider this data:
# +
d = np.random.multivariate_normal((0,0), ((4, 0), (0, 5)), 50)
r = np.linalg.norm(d, axis=1)
y = sign(r-4)
plot_costs(d[:, 0], d[:, 1], y)
# -
# There is clearly no way to draw a line through the data to separate the pluses and minuses, so it is not linearly separable. There is also no way to get it "nearly" right, as the boundary is nonlinear. If you inspect the code, or the image, you'll see that the boundary is a circle. Everything further than 4 from the origin is positive, and everything closer than that is negative.
#
# We seem to require entirely new methods. Yet, we don't. Our linear equation is
#
# $$h(\mathbf{x}) = \text{sign}\Big[\sum\limits_{i=0}^d w_i x_i \Big]$$
#
# During each iteration we only alter $\mathbf{w}$, never $\mathbf{x}$, so our requirement for linearity applies only to $\mathbf{w}$. We are allowed to perform any arbitrary nonlinear transform on $\mathbf{x}$. Here I have squared $\mathbf{x}$.
# +
x2 = add_one_column(d*d)
plot_costs(x2[:, 1], x2[:, 2], y)
weights2, success, iters, num_errors = PPLA(x2, y, max_iters=5000)
plot_weight_line(weights2[0], 0, 12)
# -
# This data *is* linearly separable! In general this approach is quite unsafe if you do not apply proper theory, so I will stop here. The point is that you can transform the data to make the problem more tractable.
| Perceptron Learning Algorithm.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Testing DEMV on _The Trump Effect_ dataset
# **Protected group:** `GENDER=0 & RELIGION=0`
# +
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler, LabelEncoder
from sklearn.pipeline import Pipeline
import seaborn as sns
from fairlearn.reductions import ZeroOneLoss, ExponentiatedGradient, BoundedGroupLoss
from utils import *
from demv import DEMV
import warnings
warnings.filterwarnings('ignore')
sns.set_style('whitegrid')
# + tags=[]
def make_data():
data = pd.read_csv('data2/data_e28.csv - data_e28.csv.csv', index_col='[meta] uuid')
data.rename(columns = lambda c: c[c.find("]")+1:].replace("_", " ").upper().strip(), inplace=True)
voted = data['VOTED PARTY LAST ELECTION DE'][data['COUNTRY CODE'] == 'DE']\
.append(data['VOTED PARTY LAST ELECTION IT'][data['COUNTRY CODE'] == 'IT'])\
.append(data['VOTED PARTY LAST ELECTION FR'][data['COUNTRY CODE'] == 'FR'])\
.append(data['VOTED PARTY LAST ELECTION GB'][data['COUNTRY CODE'] == 'GB'])\
.append(data['VOTED PARTY LAST ELECTION ES'][data['COUNTRY CODE'] == 'ES'])\
.append(data['VOTED PARTY LAST ELECTION PL'][data['COUNTRY CODE'] == 'PL'])
rankingParty = data['RANKING PARTY DE'][data['COUNTRY CODE'] == 'DE']\
.append(data['RANKING PARTY IT'][data['COUNTRY CODE'] == 'IT'])\
.append(data['RANKING PARTY FR'][data['COUNTRY CODE'] == 'FR'])\
.append(data['RANKING PARTY GB'][data['COUNTRY CODE'] == 'GB'])\
.append(data['RANKING PARTY ES'][data['COUNTRY CODE'] == 'ES'])\
.append(data['RANKING PARTY PL'][data['COUNTRY CODE'] == 'PL'])
voteNextElection = pd.concat([data['VOTE NEXTELECTION DE'][data['COUNTRY CODE'] == 'DE'],
data['VOTE NEXTELECTION IT'][data['COUNTRY CODE'] == 'IT'],
data['VOTE NEXTELECTION FR'][data['COUNTRY CODE'] == 'FR'],
data['VOTE NEXTELECTION GB'][data['COUNTRY CODE'] == 'GB'],
data['VOTE NEXTELECTION ES'][data['COUNTRY CODE'] == 'ES'],
data['VOTE NEXTELECTION PL'][data['COUNTRY CODE'] == 'PL']], verify_integrity=True)
data['VOTED PARTY LAST ELECTION'] = voted
data['RANKING PARTY'] = rankingParty
data['VOTE NEXT ELECTION'] = voteNextElection
data.drop(['VOTED PARTY LAST ELECTION DE', 'VOTED PARTY LAST ELECTION IT', 'VOTED PARTY LAST ELECTION FR',
'VOTED PARTY LAST ELECTION GB', 'VOTED PARTY LAST ELECTION ES', 'VOTED PARTY LAST ELECTION PL',
'RANKING PARTY DE', 'RANKING PARTY IT', 'RANKING PARTY FR', 'RANKING PARTY GB', 'RANKING PARTY ES',
'RANKING PARTY PL', 'VOTE NEXTELECTION DE', 'VOTE NEXTELECTION IT', 'VOTE NEXTELECTION FR', 'VOTE NEXTELECTION GB',
'VOTE NEXTELECTION ES', 'VOTE NEXTELECTION PL'], axis=1, inplace=True)
data.drop('VOTE REFERENDUM', axis = 1, inplace=True)
data.drop('EMPLOYMENT STATUS IN EDUCATION', axis=1, inplace=True)
data.drop('ORIGIN', axis=1, inplace=True)
data['MEMBER ORGANIZATION'].fillna('Not member', inplace=True)
data.loc[data['MEMBER ORGANIZATION']=='Not member', 'ORGANIZATION ACTIVITIES TIMEPERWEEK'] = 'Not member'
data.drop(data.loc[data['HOUSEHOLD SIZE'].isnull()].index, inplace=True)
data.drop(data.loc[data['SOCIAL NETWORKS REGULARLY USED'].isnull()].index, inplace=True)
nullcols = data.isna().any()[data.isna().any()==True].index
data.drop(nullcols, axis=1, inplace=True)
data.drop('WEIGHT', axis=1, inplace=True)
data.loc[data['GENDER']=='male', 'GENDER'] = 1
data.loc[data['GENDER']!=1, 'GENDER'] = 0
data['GENDER'] = data['GENDER'].astype(int)
data.loc[data['RELIGION'] == 'Roman Catholic', 'RELIGION'] = 1
data.loc[data['RELIGION'] != 1, 'RELIGION'] = 0
data['RELIGION'] = data['RELIGION'].astype(int)
enc = LabelEncoder()
data['POLITICAL VIEW'] = enc.fit_transform(data['POLITICAL VIEW'].values)
data.rename(columns= lambda c: c.replace(" ", "_"), inplace=True)
for c in data.columns:
if len(data[c].unique())>6:
data.drop(c, axis=1, inplace=True)
return data
# -
data = make_data()
data = pd.get_dummies(data)
data.shape
data[(data['GENDER']==0)&(data['RELIGION']==0)].shape
label = 'POLITICAL_VIEW'
protected_group = {'GENDER': 0, 'RELIGION': 0}
sensitive_variables=['GENDER', 'RELIGION']
positive_label = 3
pipeline = Pipeline(
steps=[
("scaler", StandardScaler()),
(
"classifier",
LogisticRegression(),
),
]
)
# ## Bias dataset
# ### Logistic Regression
model, lr_metrics = cross_val(pipeline, data, label, protected_group, sensitive_features=sensitive_variables, positive_label=positive_label)
print_metrics(lr_metrics)
# ## DEMV application
demv = DEMV(round_level=1)
demv_data = data.copy()
# ### Logistic regression
model, metrics_demv = cross_val(pipeline, demv_data, label, protected_group, sensitive_features=sensitive_variables, debiaser=demv, positive_label=positive_label)
print_metrics(metrics_demv)
# ## Blackbox Postprocessing
model, blackboxmetrics, pred = cross_val2(pipeline, data, label, protected_group, sensitive_features=sensitive_variables, positive_label=positive_label)
# ## DEMV Evaluation
demv.get_iters()
metrics = eval_demv(15, 64, data.copy(), pipeline, label, protected_group, sensitive_variables, positive_label=positive_label)
df = prepareplots(metrics,'trump')
points = preparepoints(blackboxmetrics, 60)
plot_metrics_curves(df, points, 'Trump Multiclass Dataset')
unprivpergentage(data,protected_group, demv.get_iters())
blackboxmetrics
save_metrics('blackbox', 'trump', blackboxmetrics)
| trump.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Binary Digit Classifier Using QNN with GUI input
# ### Project Description
# The project first briefly introduces quantum neural networks, then builds a quantum neural network (QNN) to classify handwritten 0s and 1s (using the MNIST handwritten digits dataset). We'll then build a graphical user interface (GUI) in which the user can draw a digit, integrate the GUI with the QNN, and classify whether the user drew a 0 or a 1.
# ### References
# - https://arxiv.org/pdf/1802.06002.pdf
# - https://www.tensorflow.org/quantum/tutorials/mnist
# - https://docs.python.org/3/library/tk.html
# - https://tkdocs.com/tutorial/index.html
# - https://pennylane.ai/qml/glossary/quantum_neural_network.html
# - https://en.wikipedia.org/wiki/Quantum_neural_network
# ### What are Quantum Neural Networks?
# A quantum neural network (QNN) is a machine learning model or algorithm that combines concepts from quantum computing and artificial neural networks. A QNN extends the key features and structures of neural networks to quantum systems.
# Most quantum neural networks are developed as feed-forward networks. Similar to their classical counterparts, this structure takes input from one layer of qubits and passes it on to another layer of qubits. That layer evaluates the information and passes the output on to the next layer, until the path eventually leads to the final layer of qubits.
#
# <img src="images/QNN.png" width="800" />
# Fig 1: Illustration of a QNN with the input $|\psi\rangle$, the parameters $\theta$, and a linear entanglement structure. [image source](https://arxiv.org/pdf/2108.01468.pdf)
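# To make the "parameterized layer" idea concrete before introducing any quantum libraries, here is a small numpy sketch (purely illustrative; none of these names come from the project) of a single parameterized gate, the rotation $R_x(\theta)$, acting on the state $|0\rangle$:

```python
import numpy as np

def rx(theta):
    """Single-qubit rotation about the x-axis, R_x(theta)."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -1j * s],
                     [-1j * s, c]])

ket0 = np.array([1.0, 0.0], dtype=complex)   # the state |0>
state = rx(np.pi / 2) @ ket0                 # apply one parameterized gate
probs = np.abs(state) ** 2                   # Born-rule measurement probabilities
print(probs)                                 # ~[0.5 0.5]
```

# Training a QNN amounts to tuning parameters like $\theta$ so that the measured output matches the labels, much as weights are tuned in a classical network.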
# Now let's start building the QNN Model
# ### Libraries Used
# - **cirq**
# - **tensorflow**
# - **tensorflow_quantum**
# - **numpy**
# - **sympy**
# - **seaborn**
# - **matplotlib**
# - **tkinter**
# - **opencv**
# ### Importing Libraries
# +
import tensorflow as tf
import tensorflow_quantum as tfq
import cirq
import sympy
import numpy as np
import seaborn as sns
import collections
# %matplotlib inline
import matplotlib.pyplot as plt
from cirq.contrib.svg import SVGCircuit
# -
# ### Flowchart
# <img src="images/Flowchart.png" width="1000" />
# ### Index
#
# #### 1. Data Loading, Filtering and Encoding
# ##### 1.1 Data Loading
# ##### 1.2 Data Filtering
# ##### 1.3 Downscaling Images to 4x4
# ##### 1.4 Removing Contradictory Examples
# ##### 1.5 Encoding the data as quantum Circuits
# #### 2. Building QNN (Quantum Neural Network)
# ##### 2.1 Building the model Circuit
# ##### 2.2 Wrapping the model_circuit in a tfq.keras model
# ##### 2.3 Training and Evaluating QNN
# #### 3. Saving QNN Model
# #### 4. Making GUI using tkinter
# #### 5. Integrating GUI with QNN Model
# #### 1. Data Loading, Filtering and Encoding
# ##### 1.1 Data Loading
# +
#Loading MNIST Dataset
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
# Rescaling the images to [0.0,1.0] Range.
x_train, x_test = x_train[..., np.newaxis]/255.0, x_test[..., np.newaxis]/255.0
print("Number of training examples before filtering:", len(x_train))
print("Number of testing examples before filtering:", len(x_test))
# -
# ##### 1.2 Data Filtering
# +
# Defining Function to filter dataset to keep just 0's and 1's.
def filter_01(x, y):
keep = (y == 0) | (y == 1)
x, y = x[keep], y[keep]
y = y == 0
return x,y
# Filtering using Above Function to keep 0's and 1's
x_train, y_train = filter_01(x_train, y_train)
x_test, y_test = filter_01(x_test, y_test)
print("Number of training examples after filtering:", len(x_train))
print("Number of testing examples after filtering:", len(x_test))
# -
# ##### 1.3 Downscaling Images to 4x4
downscaled_x_train = tf.image.resize(x_train, (4,4)).numpy()
downscaled_x_test = tf.image.resize(x_test, (4,4)).numpy()
# Displaying the first training image before and after downscaling
print("Before Downscaling:")
plt.imshow(x_train[0,:,:,0], vmin=0, vmax=1)
plt.colorbar()
plt.show()
print("After Downscaling:")
plt.imshow(downscaled_x_train[0,:,:,0], vmin=0, vmax=1)
plt.colorbar()
# ##### 1.4 Removing Contradictory Examples
# +
# Defining Function to remove contradictory Examples.
def remove_contradicting(xs, ys):
mapping = collections.defaultdict(set)
orig_x = {}
# Determine the set of labels for each unique image:
for x,y in zip(xs,ys):
orig_x[tuple(x.flatten())] = x
mapping[tuple(x.flatten())].add(y)
new_x = []
new_y = []
for flatten_x in mapping:
x = orig_x[flatten_x]
labels = mapping[flatten_x]
if len(labels) == 1:
new_x.append(x)
new_y.append(next(iter(labels)))
else:
# Throw out images that match more than one label.
pass
num_uniq_0 = sum(1 for value in mapping.values() if len(value) == 1 and True in value)
num_uniq_1 = sum(1 for value in mapping.values() if len(value) == 1 and False in value)
num_uniq_both = sum(1 for value in mapping.values() if len(value) == 2)
print("Number of unique images:", len(mapping.values()))
print("Number of unique 0s: ", num_uniq_0)
print("Number of unique 1s: ", num_uniq_1)
print("Number of unique contradicting labels (both 0 and 1): ", num_uniq_both)
print()
print("Initial number of images: ", len(xs))
print("Remaining non-contradicting unique images: ", len(new_x))
return np.array(new_x), np.array(new_y)
x_train_nocon, y_train_nocon = remove_contradicting(downscaled_x_train, y_train)
# -
# ##### 1.5 Encoding the data as quantum Circuits
# +
THRESHOLD = 0.5
x_train_bin = np.array(x_train_nocon > THRESHOLD, dtype=np.float32)
x_test_bin = np.array(downscaled_x_test > THRESHOLD, dtype=np.float32)
_ = remove_contradicting(x_train_bin, y_train_nocon)
# Defining Function to convert images to circuit
def convert_to_circuit(image):
"""Encode truncated classical image into quantum datapoint."""
values = np.ndarray.flatten(image)
qubits = cirq.GridQubit.rect(4, 4)
circuit = cirq.Circuit()
for i, value in enumerate(values):
if value:
circuit.append(cirq.X(qubits[i]))
return circuit
x_train_circ = [convert_to_circuit(x) for x in x_train_bin]
x_test_circ = [convert_to_circuit(x) for x in x_test_bin]
# -
print("Circuit for the first train example")
SVGCircuit(x_train_circ[0])
# Converting Cirq circuits to tensors for TensorflowQuantum
x_train_tfcirc = tfq.convert_to_tensor(x_train_circ)
x_test_tfcirc = tfq.convert_to_tensor(x_test_circ)
# #### 2. Building QNN (Quantum Neural Network)
# ##### 2.1 Building the model Circuit
# +
class CircuitLayerBuilder():
def __init__(self, data_qubits, readout):
self.data_qubits = data_qubits
self.readout = readout
def add_layer(self, circuit, gate, prefix):
for i, qubit in enumerate(self.data_qubits):
symbol = sympy.Symbol(prefix + '-' + str(i))
circuit.append(gate(qubit, self.readout)**symbol)
def create_quantum_model():
"""Create a QNN model circuit and readout operation to go along with it."""
data_qubits = cirq.GridQubit.rect(4, 4) # a 4x4 grid.
readout = cirq.GridQubit(-1, -1) # a single qubit at [-1,-1]
circuit = cirq.Circuit()
# Prepare the readout qubit.
circuit.append(cirq.X(readout))
circuit.append(cirq.H(readout))
builder = CircuitLayerBuilder(
data_qubits = data_qubits,
readout=readout)
# Then add layers (experiment by adding more).
builder.add_layer(circuit, cirq.XX, "xx1")
builder.add_layer(circuit, cirq.ZZ, "zz1")
# Finally, prepare the readout qubit.
circuit.append(cirq.H(readout))
return circuit, cirq.Z(readout)
model_circuit, model_readout = create_quantum_model()
# -
# ##### 2.2 Wrapping the model_circuit in a tfq.keras model
# +
# Build the Keras model.
model = tf.keras.Sequential([
# The input is the data-circuit, encoded as a tf.string
tf.keras.layers.Input(shape=(), dtype=tf.string),
# The PQC layer returns the expected value of the readout gate, range [-1,1].
tfq.layers.PQC(model_circuit, model_readout),
])
y_train_hinge = 2.0*y_train_nocon-1.0
y_test_hinge = 2.0*y_test-1.0
def hinge_accuracy(y_true, y_pred):
y_true = tf.squeeze(y_true) > 0.0
y_pred = tf.squeeze(y_pred) > 0.0
result = tf.cast(y_true == y_pred, tf.float32)
return tf.reduce_mean(result)
model.compile(
loss=tf.keras.losses.Hinge(),
optimizer=tf.keras.optimizers.Adam(),
metrics=[hinge_accuracy])
print(model.summary())
# -
# ##### 2.3 Training and Evaluating QNN
# +
EPOCHS = 4
BATCH_SIZE = 32
NUM_EXAMPLES = len(x_train_tfcirc)
x_train_tfcirc_sub = x_train_tfcirc[:NUM_EXAMPLES]
y_train_hinge_sub = y_train_hinge[:NUM_EXAMPLES]
qnn_history = model.fit(
x_train_tfcirc_sub, y_train_hinge_sub,
    batch_size=BATCH_SIZE,
epochs=EPOCHS,
verbose=1,
validation_data=(x_test_tfcirc, y_test_hinge))
qnn_results = model.evaluate(x_test_tfcirc, y_test)
# -
# #### 3. Saving QNN Model
model.save_weights('01_MNIST.h5')
# #### 4. Making GUI using tkinter
# #### [Will be updated]
# #### 5. Integrating GUI with QNN Model
# #### [Will be updated]
| Pt-1-Binary Digit Classifier Using QNN with GUI input.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import ipywidgets
import IPython.display
class Data:
def __init__(self):
self.index = np.linspace(0, 10, 11)
def signal(self, index):
t = np.linspace(0, 10, 11)
return np.sin(2*np.pi*index*t)
def signal2D(self, index):
t = np.linspace(-10, 10, 11)
X, Y = np.meshgrid(t, t)
return np.sin(2*np.pi*index*(X**2 + Y**2))
class DataAnalysis:
def __init__(self, data):
self.data = data
self.slider = ipywidgets.IntSlider(value=self.data.index[0],
min=self.data.index[0],
max=self.data.index[-1])
def _ipython_display_(self):
IPython.display.display(self.slider)
d = Data()
da = DataAnalysis(d)
da
| examples/holoviews_example.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Midterm Exam (48 pts)
#
# ## <NAME> nd8288
#
# ### You must submit this notebook to Canvas by 3:15 PM.
#
# * This exam is open notes, internet, etc.
# * However, you must complete the exam on your own without discussing it with anyone else.
# + active=""
# I, <NAME> promise to complete this exam without discussing it with anyone else (fill in your name).
# -
import numpy as np
import scipy.stats as st
import matplotlib.pyplot as plt
# ---
# **1.** (6 pts) Write a function that returns the sum of squared differences between two NumPy arrays (assume they are of the same shape). Use NumPy, do NOT use for loops or list comprehensions.
def sum_sq_diff(a1, a2):
return(((a1 - a2) ** 2).sum())
# ---
# **2.** (6 pts) Write a while loop that prints a new random number between 0 and 1 each time, and stops after printing a value greater than 0.8.
while True:
n = np.random.rand(1)
print(n)
if n > 0.8:
break
# ---
# **3.** (6 pts) For the 2-D `xyz` array below, each row refers to a $(x,y,z)$ coordinate. Compute the average $(x,y,z)$ position for all five points. **Do this in a single operation using NumPy.**
# Each row of the matrix xyz is a (x,y,z) coordinate.
xyz = np.array(
[[9, 7, 2],
[8, 3, 5],
[5, 7, 7],
[8, 0, 1],
[3, 2, 3]
], dtype=float)
xyz = np.average(xyz, axis = 0)  # average over the five points (rows)
print(*xyz, sep = "\n")
# ---
# **4.** (6 pts) Mice are trained to press a lever to recieve a reward. On average, subjects lever press 5 times per minute. Use a bar plot to show the expected probability distribution of 0-20 lever presses in a minute. *Hint: Is the expected distribution continuous or discrete? Which of the distributions we discussed in lecture is most likely to apply?*
# +
rate = st.poisson(5)
n = np.arange(21)
plt.bar(n, rate.pmf(n), alpha=0.5)
plt.ylabel('pmf')
plt.xlabel('Lever Presses in Minute')
plt.show()
# -
# ---
# **5.** See a-d below which refer to the following data. You record the time-dependent current amplitude through a single ion channel both in the absence (`control_pA`) and presence of a toxin (`toxin_pA`). See data below which only includes time points where the channel was open - i.e. data where the channel was closed have been removed. Assume current fluctuations are uncorrelated.
# +
# Open channel currents (pA) in control conditions.
control_pA = np.array([4.03150921, 5.35005992, 4.9044136 , 5.75425045, 4.54202161,
4.35710467, 5.97752543, 5.05624353, 3.22346375, 7.11071582,
4.04086427, 4.32857646, 6.30056182, 3.65809927, 6.57265728,
4.70164081, 5.1101728 , 5.71270398, 5.00034292, 4.19906666,
2.03006266, 4.10125049, 5.57952774, 5.50038489, 5.97479919,
5.42698878, 5.88464693, 3.53925318, 4.86306604, 4.54504284,
4.06832375, 3.38257841, 5.72606498, 5.77082579, 3.94417216,
6.04297478, 6.03137911, 4.72622255, 4.31080346, 5.06943403,
4.13237601, 5.37546877, 5.48315923, 2.60443664, 4.58468215,
4.9446293 , 6.01987885, 5.15408473, 4.81054766, 5.33714209,
6.64552171, 7.0578201 , 5.36019945, 4.72538113, 6.30884626,
5.51767348, 3.35226856, 3.82817138, 6.97998826, 4.39735622,
7.54209114, 6.19864503, 4.97246172, 5.34602361, 5.82432497,
4.0865825 , 5.47517538, 5.40070897, 2.8524926 , 3.83639657,
4.93458818, 4.88141644, 6.01449063, 6.25857314, 4.03744697,
4.60863723, 5.35649482, 5.39405226, 6.22138368, 6.01617168,
4.19447619, 4.88831804, 4.88241037, 5.9060959 , 5.21696952,
5.86979465, 4.77714168, 3.53762488, 4.36346394, 4.40397988,
5.25795862, 4.31317957, 3.70375756, 3.8538846 , 5.47317128,
4.73139441, 4.37810953, 4.41140894, 5.18347364, 4.53585324,
4.11916743, 3.04444944, 4.76087713, 5.22170241, 5.79857067,
5.35625202, 6.43433742, 3.43649271, 4.61494332, 5.57264178,
3.930557 , 4.56218124, 4.61044655, 5.1246218 , 5.93238325,
4.72979243, 4.96153242, 5.32342659, 4.5894581 , 5.18472725,
4.01706299, 4.61919031, 5.94454731, 3.61618331, 5.69556144,
5.13398501, 4.17378522, 4.39720973, 5.15826113, 6.05233913,
4.17269185, 4.03900288, 4.45355939, 4.19994886, 4.12870401,
5.83701024, 4.38492446, 3.92021803, 4.40789588, 5.84415893,
5.05424301, 6.32789738, 3.47154195, 4.96423708, 5.83862982,
6.42686264, 4.75656097, 5.54022733, 3.53297469, 4.76121663,
5.01499506, 5.3697581 , 5.9614272 , 6.25372446, 5.75877715,
4.95992757, 3.94369449, 5.35967673, 3.41762373, 4.64050732,
5.99511177, 5.27236238, 5.59935983, 2.62828184, 4.2925427 ,
4.18171814, 5.06102011, 5.10920024, 6.80851243, 5.08496527,
4.76387311, 4.16885758, 4.8072182 , 4.61179928, 5.62581193,
4.61322343, 3.90061734, 5.65824602, 5.11203629, 5.98888234,
4.46230765, 3.37139586, 4.82700425, 5.95728518, 4.73280883,
4.11400828, 5.55439261, 6.1559831 , 4.74786815, 4.79568241,
4.11720113, 4.72263365, 6.93646713, 4.27758138, 4.9576273 ,
6.1331057 , 4.7093127 , 5.81270598, 5.71730717, 4.29894352,
6.36630565, 4.68713094, 6.37881931, 3.17309655, 2.63717159])
# Open channel currents (pA) in the presence of a toxin.
toxin_pA = np.array([ 7.60961679, 9.37034271, 7.07246212, 5.86773613, 5.92226577,
8.76583987, 7.32077966, 7.23182365, 8.40735501, 8.85710003,
5.92910102, 8.20628013, 9.23666421, 8.68871746, 8.33005897,
7.48336383, 7.80298365, 7.43452038, 7.46266961, 7.41682678,
9.69396569, 9.09118965, 7.49661445, 8.98263113, 8.81012844,
6.30884951, 8.21543216, 7.97151925, 8.74100316, 8.2157272 ,
7.32937124, 7.56515421, 9.58882995, 7.82420469, 5.26593059,
7.48153336, 8.3063745 , 7.67310726, 10.01612404, 7.79477974,
7.10440927, 5.92735657, 7.33097054, 7.86274952, 8.03131921,
11.24918233, 8.44044296, 8.21363828, 6.70968127, 8.9176313 ,
11.43552128, 8.70348016, 6.87658295, 8.67987668, 6.33068464,
8.89119019, 7.26888331, 8.27544728, 8.15041154, 6.07242236,
8.73241919, 5.98780195, 7.80820107, 7.79206312, 8.13674211,
9.64763637, 8.72122585, 9.14177842, 7.94235773, 7.57852162,
6.72718469, 7.55439714, 8.98093647, 7.82369591, 8.02313094,
9.19062157, 7.8682468 , 7.31675927, 9.94454295, 9.30453427,
9.36754654, 9.75977773, 5.1849242 , 8.74018526, 8.54621738,
7.33227804, 7.38875862, 5.94554764, 7.06422596, 6.73617612,
8.63929211, 7.97148873, 7.72168226, 7.84022914, 9.07691762,
8.40690996, 8.7771139 , 6.61492601, 6.10637652, 7.14955948,
6.74877077, 8.57666357, 6.18863655, 8.56078376, 7.14224161,
8.24987134, 9.49010618, 8.76482982, 9.17533594, 8.72207856,
8.17676082, 8.39039663, 8.96798519, 7.77505548, 8.90350684,
8.10008448, 8.46045961, 9.65848642, 8.25876851, 7.77492258,
8.58311361, 6.70798608, 6.70562358, 6.93360349, 8.3013277 ,
7.61311802, 8.56594907, 7.33282668, 11.00033713, 7.85895211,
7.44621012, 8.57509475, 7.05178452, 8.37078844, 10.62156803,
6.99158165, 7.81354149, 8.06160781, 7.90673138, 9.90885401,
6.81565899, 8.67192309, 7.9539827 , 8.25855893, 8.27149754,
7.17481818, 8.54761346, 7.83062659, 9.24647097, 6.6307797 ,
7.32669798, 8.28532766, 7.08691382, 6.38533146, 9.0104272 ,
8.52544934, 7.61334789, 6.77356794, 8.39287474, 6.86095398,
8.60455185, 9.35001121, 9.38519484, 6.9673516 , 7.41680611,
7.0467405 , 8.57751505, 9.69495461, 9.74565552, 8.33842592,
9.12333636, 7.4968431 , 8.43236925, 8.33333632, 8.22731799,
9.05307618, 8.26975749, 7.56401947, 7.12560856, 6.46681031,
8.71997107, 6.63361736, 8.28835295, 6.584427 , 6.2973554 ,
8.53158821, 7.45407834, 8.50039049, 8.4475556 , 8.28053785,
6.88277102, 7.41688387, 9.93133193, 5.9638023 , 6.68364453])
# -
# **5a.** (6 pts) Plot a histogram of the measured open channel currents in both control and toxin conditions (overlaid on a single plot). Use 20 bins for each histogram and make them semitransparent so any overlap is visible. Label the x-axis and include the proper units. Label the y-axis as 'Counts'. Include a legend for the conditions.
plt.hist(control_pA, bins=20, alpha=0.5, label='Control')
plt.hist(toxin_pA, bins=20, alpha=0.5, label='Toxin')
plt.xlabel('Current (pA)')
plt.ylabel('Counts')
plt.legend()
plt.show()
# **5b.** (6 pts) Based on your visual inspection of the plot in 5a, do you think the toxin has an effect on mean current amplitude? Also, do you think the toxin has an effect on the current fluctuations around the mean (i.e. variance or standard deviation)?
# + active=""
# Visually, it appears as if there is an effect of the toxin on the mean current amplitude (it increases).
#
# H0: The toxin has no effect on mean current amplitude (or current fluctuations around the mean).
#
# Ha: The toxin has an effect on mean current amplitude (or current fluctuations around the mean).
# -
# **5c.** (6 pts) Test the null hypothesis that the toxin has no effect on mean current amplitude using a permutation test with 10,000 permutations. Compute the difference in mean current for each permutation. Report the 95% confidence interval for the distribution of permuted mean current differences under the null hypothesis. *You don't need to plot anything here. That will be done in 5d below.*
# +
n_c = len(control_pA)
n_t = len(toxin_pA)
current = np.zeros((n_c + n_t,))
current[:n_c] = control_pA
current[-n_t:] = toxin_pA
perm = np.zeros(10000)
for i in range(10000):
    np.random.shuffle(current)
    p_1 = current[:n_c]
    p_2 = current[-n_t:]
    perm[i] = p_2.mean() - p_1.mean()
# Report the 95% confidence interval for the permuted mean differences
lb, ub = np.quantile(perm, [0.025, 0.975])
print("95% CI for permuted mean differences: ({0:.3f}, {1:.3f})".format(lb, ub))
# -
# **5d.** (6 pts) Plot a histogram of the permuted differences in mean current amplitude with 100 bins. Plot dashed vertical lines for the 95% confidence interval of your permuted distribution. Also plot a solid vertical line for the measured difference in mean current (from the `control_pA` and `toxin_pA` data given above). Based on this plot, do you reject the null hypothesis that the toxin has no effect on mean current amplitude?
# +
plt.hist(perm, bins=100, alpha=0.5)
plt.xlabel('Difference in Mean Current (pA)')  # (Toxin - Control)
plt.ylabel('# Permutations')
lb, ub = np.quantile(perm, [0.025, 0.975])
plt.axvline(lb, linestyle='--')
plt.axvline(ub, linestyle='--')
plt.axvline(toxin_pA.mean() - control_pA.mean())
plt.show()
# + active=""
# The measured difference in mean current lies far outside the 95% confidence interval of the
# permuted differences under the null hypothesis. Therefore, we reject the null hypothesis that
# the toxin has no effect on mean current amplitude.
# -
# -
| exams/exam-midterm.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [default]
# language: python
# name: python2
# ---
# +
from __future__ import print_function
import sys, os
import h2o
from h2o.estimators.deepwater import H2ODeepWaterEstimator
import importlib
h2o.init()
# -
if not H2ODeepWaterEstimator.available(): quit()
# # LeNet
#
# Here we define the famous LeNet neural network, but you can define any deep neural network of your choice.
def lenet(num_classes):
import mxnet as mx
data = mx.symbol.Variable('data')
# first conv
conv1 = mx.symbol.Convolution(data=data, kernel=(5,5), num_filter=20)
tanh1 = mx.symbol.Activation(data=conv1, act_type="tanh")
pool1 = mx.symbol.Pooling(data=tanh1, pool_type="max", kernel=(2,2), stride=(2,2))
# second conv
conv2 = mx.symbol.Convolution(data=pool1, kernel=(5,5), num_filter=50)
tanh2 = mx.symbol.Activation(data=conv2, act_type="tanh")
pool2 = mx.symbol.Pooling(data=tanh2, pool_type="max", kernel=(2,2), stride=(2,2))
# first fullc
flatten = mx.symbol.Flatten(data=pool2)
fc1 = mx.symbol.FullyConnected(data=flatten, num_hidden=500)
tanh3 = mx.symbol.Activation(data=fc1, act_type="tanh")
# second fullc
fc2 = mx.symbol.FullyConnected(data=tanh3, num_hidden=num_classes)
# loss
lenet = mx.symbol.SoftmaxOutput(data=fc2, name='softmax')
return lenet
# ## DeepWater for MXNet
#
# We can use the import functions provided by h2o to import the MNIST train and test datasets.
# +
train = h2o.import_file("../../bigdata/laptop/mnist/train.csv.gz")
test = h2o.import_file("../../bigdata/laptop/mnist/test.csv.gz")
predictors = list(range(0,784))
resp = 784
train[resp] = train[resp].asfactor()
test[resp] = test[resp].asfactor()
nclasses = train[resp].nlevels()[0]
# -
# Let's create the lenet model architecture from scratch using the MXNet Python API
model = lenet(nclasses)
# To import the model inside the DeepWater training engine we need to save the model to a file:
model_path = "/tmp/symbol_lenet-py.json"
model.save(model_path)
# The model is just the structure of the network expressed as a json dict
# +
# #!head "/tmp/symbol_lenet-py.json"
# -
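Since the saved model is plain JSON, it can be inspected with the standard `json` module. The sketch below parses a hand-written toy symbol dict (not the actual LeNet file, whose node list is much longer) just to show the shape of the format:

```python
import json

# A toy stand-in for the saved symbol file: MXNet symbol JSON stores the
# graph as a list of node dicts, each with an "op" and a "name".
symbol_json = """
{
  "nodes": [
    {"op": "null", "name": "data"},
    {"op": "Convolution", "name": "conv1"},
    {"op": "Activation", "name": "tanh1"},
    {"op": "SoftmaxOutput", "name": "softmax"}
  ]
}
"""

graph = json.loads(symbol_json)
# "null" nodes are placeholders (inputs/weights); the rest are layers.
ops = [node["op"] for node in graph["nodes"] if node["op"] != "null"]
print(ops)
```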
# ## Importing the LeNET model architecture for training in H2O
#
# We have defined the model and saved the structure to a file. We are ready to start the training procedure.
model = H2ODeepWaterEstimator(epochs=100, learning_rate=1e-3,
mini_batch_size=64,
network='user',
network_definition_file=model_path,
image_shape=[28,28], channels=1)
model.train(x=predictors,y=resp, training_frame=train, validation_frame=test)
model.show()
| h2o-py/demos/LeNET.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] colab_type="text" id="bOChJSNXtC9g"
# # Logistic Regression
# + [markdown] colab_type="text" id="OLIxEDq6VhvZ"
# <img src="https://raw.githubusercontent.com/GokuMohandas/practicalAI/master/images/logo.png" width=150>
#
# In the previous lesson we saw that linear regression can fit a line (or a hyperplane) very well to make predictions for continuous variables. In classification problems, however, we want the output to be class probabilities, and linear regression no longer works well.
#
#
#
#
# + [markdown] colab_type="text" id="VoMq0eFRvugb"
# # Overview
# + [markdown] colab_type="text" id="qWro5T5qTJJL"
# <img src="https://raw.githubusercontent.com/GokuMohandas/practicalAI/master/images/logistic.jpg" width=270>
#
# $ \hat{y} = \frac{1}{1 + e^{-XW}} $
#
# *where*:
# * $\hat{y}$ = predictions | $\in \mathbb{R}^{NX1}$ ($N$ is the number of samples)
# * $X$ = inputs | $\in \mathbb{R}^{NXD}$ ($D$ is the number of features)
# * $W$ = weights | $\in \mathbb{R}^{DX1}$
#
# This is binary logistic regression. The main idea is to take the output of a linear model ($z=XW$) and pass it through a sigmoid function ($\frac{1}{1+e^{-z}}$) that maps it into the range (0, 1).
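That sigmoid mapping can be sketched in a few lines of plain Python (an illustration added here, not part of the original lesson code):

```python
import math

def sigmoid(z):
    """Map a real-valued logit into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

# A large positive logit maps close to 1, a large negative one close to 0.
print(sigmoid(4.0))   # ~0.982
print(sigmoid(0.0))   # 0.5
print(sigmoid(-4.0))  # ~0.018
```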
# + [markdown] colab_type="text" id="YcFvkklZSZr9"
# When we have more than two classes, we need multinomial logistic regression (a softmax classifier). The softmax classifier uses the linear equation ($z=XW$) and normalizes it to produce the probability of class y.
#
# $ \hat{y} = \frac{e^{XW_y}}{\sum e^{XW}} $
#
# *where*:
# * $\hat{y}$ = predictions | $\in \mathbb{R}^{NX1}$ ($N$ is the number of samples)
# * $X$ = inputs | $\in \mathbb{R}^{NXD}$ ($D$ is the number of features)
# * $W$ = weights | $\in \mathbb{R}^{DXC}$ ($C$ is the number of classes)
#
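The softmax normalization described above can be sketched as a standalone illustration (with the usual max-subtraction trick for numerical stability):

```python
import math

def softmax(z):
    """Normalize a list of logits into class probabilities."""
    exp_z = [math.exp(v - max(z)) for v in z]  # subtract max for stability
    total = sum(exp_z)
    return [v / total for v in exp_z]

probs = softmax([2.0, 1.0, 0.1])
print(probs)       # roughly [0.659, 0.242, 0.099]
print(sum(probs))  # probabilities sum to 1
```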
# + [markdown] colab_type="text" id="T4Y55tpzIjOa"
# * **Objective:** Predict the probability of class $y$ given the inputs $X$. The softmax classifier normalizes the linear outputs to determine class probabilities.
# * **Advantages:**
#   * Can predict class probabilities given a set of inputs.
# * **Disadvantages:**
#   * Sensitive to outliers, since the objective is to minimize cross-entropy loss. (Support vector machines ([SVMs](https://towardsdatascience.com/support-vector-machine-vs-logistic-regression-94cc2975433f)) are a good choice for dealing with outliers.)
# * **Miscellaneous:** The softmax classifier is widely used as the last layer in neural network architectures because it computes class probabilities.
# + [markdown] colab_type="text" id="Jq65LZJbSpzd"
# # Training
# + [markdown] colab_type="text" id="-HBPn8zPTQfZ"
# *Steps*:
#
# 1. Randomly initialize the model's weights $W$.
# 2. Feed the inputs $X$ into the model and get the logits ($z=XW$). Apply the softmax operation on the logits to get the one-hot encoded class probabilities $\hat{y}$. For example, if there are three classes, the predicted class probabilities could look like [0.3, 0.3, 0.4].
# 3. Compare the predictions $\hat{y}$ (e.g. [0.3, 0.3, 0.4]) with the true values $y$ (e.g. the second class would look like [0, 0, 1]) using a loss function, and compute the loss $J$. A common loss function for logistic regression is cross entropy.
# * $J(\theta) = - \sum_i y_i ln (\hat{y_i}) = - \sum_i y_i ln (\frac{e^{X_iW_y}}{\sum e^{X_iW}}) $
# * $y$ = [0, 0, 1]
# * $\hat{y}$ = [0.3, 0.3, 0.4]
# * $J(\theta) = - \sum_i y_i ln (\hat{y_i}) = - \sum_i y_i ln (\frac{e^{X_iW_y}}{\sum e^{X_iW}}) = - \sum_i [0 * ln(0.3) + 0 * ln(0.3) + 1 * ln(0.4)] = -ln(0.4) $
# * Simplifying our cross-entropy function: $J(\theta) = - ln(\hat{y_i})$ (negative log likelihood).
# * $J(\theta) = - ln(\hat{y_i}) = - ln (\frac{e^{X_iW_y}}{\sum_i e^{X_iW}}) $
# 4. Compute the gradient of the loss $J(\theta)$ with respect to the model weights. Let's assume that the classes are mutually exclusive (each set of inputs corresponds to exactly one output class).
# * $\frac{\partial{J}}{\partial{W_j}} = \frac{\partial{J}}{\partial{y}}\frac{\partial{y}}{\partial{W_j}} = - \frac{1}{y}\frac{\partial{y}}{\partial{W_j}} = - \frac{1}{\frac{e^{W_yX}}{\sum e^{XW}}}\frac{\sum e^{XW}e^{W_yX}0 - e^{W_yX}e^{W_jX}X}{(\sum e^{XW})^2} = \frac{Xe^{W_j}X}{\sum e^{XW}} = XP$
# * $\frac{\partial{J}}{\partial{W_y}} = \frac{\partial{J}}{\partial{y}}\frac{\partial{y}}{\partial{W_y}} = - \frac{1}{y}\frac{\partial{y}}{\partial{W_y}} = - \frac{1}{\frac{e^{W_yX}}{\sum e^{XW}}}\frac{\sum e^{XW}e^{W_yX}X - e^{W_yX}e^{W_yX}X}{(\sum e^{XW})^2} = \frac{1}{P}(XP - XP^2) = X(P-1)$
# 5. Update the weights $W$ with gradient descent via backpropagation. The updated weights greatly lower the probability of an incorrect class (j) and thereby raise the probability of the correct class (y).
# * $W_i = W_i - \alpha\frac{\partial{J}}{\partial{W_i}}$
# 6. Repeat steps 2 - 4 until the model performs well (i.e., until the loss has converged).
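The training steps above can be sketched end to end for the binary (sigmoid) case, where the cross-entropy gradient reduces to $(p - y)x$. This is a toy illustration with made-up data, not the classifier trained later in this notebook:

```python
import math

# Toy data: (bias, feature) pairs with binary labels.
X = [(1.0, 0.5), (1.0, 1.5), (1.0, 3.0), (1.0, 4.0)]
y = [0, 0, 1, 1]
W = [0.0, 0.0]   # step 1: initialize the weights
alpha = 0.5      # learning rate

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

for epoch in range(500):                       # step 6: repeat
    grad = [0.0, 0.0]
    for xi, yi in zip(X, y):
        z = sum(w * v for w, v in zip(W, xi))  # step 2: logit z = xW
        p = sigmoid(z)                         # predicted probability
        for j in range(2):                     # step 4: dJ/dW_j = (p - y) * x_j
            grad[j] += (p - yi) * xi[j]
    for j in range(2):                         # step 5: gradient descent update
        W[j] -= alpha * grad[j] / len(X)

preds = [sigmoid(sum(w * v for w, v in zip(W, xi))) for xi in X]
print([round(p, 2) for p in preds])  # low for class 0, high for class 1
```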
# + [markdown] colab_type="text" id="r_hKrjzdtTgM"
# # Data
# + [markdown] colab_type="text" id="PyccHrQztVEu"
# Let's load the titanic dataset that we used in lesson 3.
# + colab={} colab_type="code" id="H385V4VUtWOv"
from argparse import Namespace
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import urllib
# + colab={} colab_type="code" id="pL67TlZO6Zg4"
# Arguments
args = Namespace(
seed=1234,
data_file="titanic.csv",
train_size=0.75,
test_size=0.25,
num_epochs=100,
)
# Set the random seed to ensure reproducible results.
np.random.seed(args.seed)
# + colab={} colab_type="code" id="7sp_tSyItf1_"
# Upload the data from GitHub to the notebook's local drive
url = "https://raw.githubusercontent.com/GokuMohandas/practicalAI/master/data/titanic.csv"
response = urllib.request.urlopen(url)
html = response.read()
with open(args.data_file, 'wb') as f:
f.write(html)
# + colab={"base_uri": "https://localhost:8080/", "height": 272} colab_type="code" id="7alqmyzXtgE8" outputId="353702e3-76f7-479d-df7a-5effcc8a7461"
# Read the CSV file into a DataFrame
df = pd.read_csv(args.data_file, header=0)
df.head()
# + [markdown] colab_type="text" id="k-5Y4zLIoE6s"
# # Scikit-learn implementation
# + [markdown] colab_type="text" id="ILkbyBHQoIwE"
# **Note**: The `LogisticRegression` class in scikit-learn uses coordinate descent to solve the fit. However, we will use scikit-learn's `SGDClassifier` class, which uses stochastic gradient descent. We use this optimization approach because we will be using it in future lessons as well.
# + colab={} colab_type="code" id="W1MJODStIu8V"
# Imports
from sklearn.linear_model import SGDClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
# + colab={} colab_type="code" id="kItBIOOCTi6p"
# Preprocessing
def preprocess(df):
# Drop rows with NaN values
df = df.dropna()
# Drop text-based features (we'll learn how to use them in a later lesson)
features_to_drop = ["name", "cabin", "ticket"]
df = df.drop(features_to_drop, axis=1)
# pclass, sex, and embarked are categorical features
categorical_features = ["pclass","embarked","sex"]
df = pd.get_dummies(df, columns=categorical_features)
return df
# + colab={"base_uri": "https://localhost:8080/", "height": 224} colab_type="code" id="QwQHDh4xuYTB" outputId="153ea757-b817-406d-dbde-d1fba88f194b"
# Preprocess the dataset
df = preprocess(df)
df.head()
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="wsGRZNNiUTqj" outputId="c9364be7-3cae-487f-9d96-3210b3129199"
# Split the data into train and test sets
mask = np.random.rand(len(df)) < args.train_size
train_df = df[mask]
test_df = df[~mask]
print ("Train size: {0}, test size: {1}".format(len(train_df), len(test_df)))
# + [markdown] colab_type="text" id="oZKxFmATU95M"
# **Note**: If you have preprocessing steps like standardization, you need to apply them after splitting the data into train and test sets. This is because we cannot learn anything useful from the test set.
# + colab={} colab_type="code" id="cLzL_LJd4vQ-"
# Separate X and y
X_train = train_df.drop(["survived"], axis=1)
y_train = train_df["survived"]
X_test = test_df.drop(["survived"], axis=1)
y_test = test_df["survived"]
# + colab={"base_uri": "https://localhost:8080/", "height": 85} colab_type="code" id="AdTYbV472UNJ" outputId="214a8114-3fd3-407f-cd6e-5f5d07294f50"
# Standardize the training data (mean=0, std=1)
X_scaler = StandardScaler().fit(X_train)
# Apply the scaler to the train and test data (don't standardize the labels y)
standardized_X_train = X_scaler.transform(X_train)
standardized_X_test = X_scaler.transform(X_test)
# Check
print ("mean:", np.mean(standardized_X_train, axis=0)) # mean should be ~0
print ("std:", np.std(standardized_X_train, axis=0)) # std should be 1
# + colab={} colab_type="code" id="7-vm9AZm1_f9"
# Initialize the model
log_reg = SGDClassifier(loss="log", penalty="none", max_iter=args.num_epochs,
random_state=args.seed)
# + colab={"base_uri": "https://localhost:8080/", "height": 102} colab_type="code" id="0e8U9NNluYVp" outputId="c5f22ade-bb8c-479b-d300-98758a82d396"
# Train
log_reg.fit(X=standardized_X_train, y=y_train)
# + colab={"base_uri": "https://localhost:8080/", "height": 102} colab_type="code" id="hA7Oz97NAe8A" outputId="ab8a878a-6012-4727-8cd1-40bc5c69245b"
# Probabilities
pred_test = log_reg.predict_proba(standardized_X_test)
print (pred_test[:5])
# + colab={"base_uri": "https://localhost:8080/", "height": 51} colab_type="code" id="-jZtTd7F6_ps" outputId="d2306e4c-88a4-4ac4-9ad5-879fa461617f"
# Predictions (unstandardized)
pred_train = log_reg.predict(standardized_X_train)
pred_test = log_reg.predict(standardized_X_test)
print (pred_test)
# + [markdown] colab_type="text" id="dM7iYW8ANYjy"
# # Evaluation metrics
# + colab={} colab_type="code" id="uFXbczqu8Rno"
from sklearn.metrics import accuracy_score
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="sEjansj78Rqe" outputId="f5bfbe87-12c9-4aa5-fc61-e615ad4e63d4"
# Accuracy
train_acc = accuracy_score(y_train, pred_train)
test_acc = accuracy_score(y_test, pred_test)
print ("train acc: {0:.2f}, test acc: {1:.2f}".format(train_acc, test_acc))
# + [markdown] colab_type="text" id="WijzY-vDNbE9"
# So far we've used accuracy as our evaluation metric to gauge how well the model performs, but there are many more metrics we can use to evaluate a model.
#
# <img src="https://raw.githubusercontent.com/GokuMohandas/practicalAI/master/images/metrics.jpg" width=400>
# + [markdown] colab_type="text" id="80MwyE0yOr-k"
# The choice of evaluation metric really depends on the application context.
# positive - true, 1, tumor, issue, etc.; negative - false, 0, not tumor, not issue, etc.
#
# $\text{accuracy} = \frac{TP+TN}{TP+TN+FP+FN}$
#
# $\text{recall} = \frac{TP}{TP+FN}$ → (how many actual positives were predicted as positive)
#
# $\text{precision} = \frac{TP}{TP+FP}$ → (of all samples predicted positive, how many were correct)
#
# $F_1 = 2 * \frac{\text{precision } * \text{ recall}}{\text{precision } + \text{ recall}}$
#
# where:
# * TP: number of positives predicted as positive
# * TN: number of negatives predicted as negative
# * FP: number of negatives predicted as positive
# * FN: number of positives predicted as negative
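These formulas can be checked directly from raw confusion-matrix counts (the counts below are made up for illustration):

```python
# Made-up confusion-matrix counts for a binary classifier.
TP, TN, FP, FN = 40, 45, 5, 10

accuracy = (TP + TN) / (TP + TN + FP + FN)
recall = TP / (TP + FN)
precision = TP / (TP + FP)
f1 = 2 * precision * recall / (precision + recall)

print(accuracy)        # 0.85
print(recall)          # 0.8
print(precision)       # ~0.889
print(round(f1, 3))    # 0.842
```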
# + colab={} colab_type="code" id="opmu3hJm9LXA"
import itertools
from sklearn.metrics import classification_report, confusion_matrix
# + colab={} colab_type="code" id="wAzOL8h29m82"
# Plot the confusion matrix
def plot_confusion_matrix(cm, classes):
cmap=plt.cm.Blues
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title("Confusion Matrix")
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=45)
plt.yticks(tick_marks, classes)
plt.grid(False)
fmt = 'd'
thresh = cm.max() / 2.
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i, format(cm[i, j], 'd'),
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
plt.ylabel('True label')
plt.xlabel('Predicted label')
plt.tight_layout()
# + colab={"base_uri": "https://localhost:8080/", "height": 520} colab_type="code" id="KqUVzahQ-5ic" outputId="bff8819e-3d5b-45b9-c221-179c873140b1"
# Confusion matrix
cm = confusion_matrix(y_test, pred_test)
plot_confusion_matrix(cm=cm, classes=["died", "survived"])
print (classification_report(y_test, pred_test))
# + [markdown] colab_type="text" id="iMk7tN1h98x9"
# ๅฝๆไปฌๆๅคงไบไธคไธชๆ ็ญพ๏ผไบๅ็ฑป๏ผ็ๆถๅ๏ผๆไปฌๅฏไปฅ้ๆฉๅจๅพฎ่ง/ๅฎ่งๅฑ้ข่ฎก็ฎ่ฏไผฐๆๆ ๏ผๆฏไธชclasๆ ็ญพ๏ผใๆ้็ญใ ๆด่ฏฆ็ปๅ
ๅฎนๅฏไปฅๅ่[offical docs](http://scikit-learn.org/stable/modules/generated/sklearn.metrics.precision_recall_fscore_support.html).
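The micro/macro distinction can be shown with toy per-class counts (illustration only): macro averages the per-class scores, while micro pools the counts before computing the score:

```python
# Per-class (TP, FP) counts for a toy 3-class problem.
counts = {"a": (8, 2), "b": (1, 1), "c": (6, 2)}

per_class_precision = {c: tp / (tp + fp) for c, (tp, fp) in counts.items()}
macro = sum(per_class_precision.values()) / len(counts)  # average of per-class scores
total_tp = sum(tp for tp, _ in counts.values())
total_fp = sum(fp for _, fp in counts.values())
micro = total_tp / (total_tp + total_fp)                 # pool the counts first

print(per_class_precision)  # {'a': 0.8, 'b': 0.5, 'c': 0.75}
print(round(macro, 3))      # 0.683
print(round(micro, 3))      # 0.75
```

Note how the rare class "b" drags the macro average down, while micro averaging is dominated by the frequent classes.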
# + [markdown] colab_type="text" id="9v6zc1_1PWnz"
# # Inference
# + [markdown] colab_type="text" id="Zl9euDuMPYTN"
# Now let's see if you would have survived the Titanic.
# + colab={"base_uri": "https://localhost:8080/", "height": 80} colab_type="code" id="kX9428-EPUzx" outputId="ef100af7-9861-4900-e9c7-ed6d93c69069"
# Input your own information
X_infer = pd.DataFrame([{"name": "<NAME>", "cabin": "E", "ticket": "E44",
"pclass": 1, "age": 24, "sibsp": 1, "parch": 2,
"fare": 100, "embarked": "C", "sex": "male"}])
X_infer.head()
# + colab={"base_uri": "https://localhost:8080/", "height": 80} colab_type="code" id="c6OAAQoaWxAb" outputId="85eb1c6d-6f53-4bd4-bcc3-90d9ebca74c8"
# Apply preprocessing
X_infer = preprocess(X_infer)
X_infer.head()
# + colab={"base_uri": "https://localhost:8080/", "height": 80} colab_type="code" id="48sj5A0mX5Yw" outputId="d9571238-70ab-427d-f80c-7b13b00efc95"
# Add missing columns
missing_features = set(X_test.columns) - set(X_infer.columns)
for feature in missing_features:
X_infer[feature] = 0
# Reorder the columns to match the training data
X_infer = X_infer[X_train.columns]
X_infer.head()
# + colab={} colab_type="code" id="rP_i8w9IXFiM"
# Standardize
standardized_X_infer = X_scaler.transform(X_infer)
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="7O5PbOAvXTzF" outputId="f1c3597e-1676-476f-e970-168e5c3fca6c"
# Predict
y_infer = log_reg.predict_proba(standardized_X_infer)
classes = {0: "died", 1: "survived"}
_class = np.argmax(y_infer)
print ("Looks like I would've {0} with about {1:.0f}% probability on the Titanic expedition!".format(
classes[_class], y_infer[0][_class]*100.0))
# + [markdown] colab_type="text" id="8PLPFFP67tvL"
# # Interpretability
# + [markdown] colab_type="text" id="jv6LKNXO7uch"
# Which of the features are most influential?
# + colab={"base_uri": "https://localhost:8080/", "height": 68} colab_type="code" id="KTSpxbwy7ugl" outputId="b37bf39c-f35d-4793-a479-6e61179fc5e5"
# Unstandardized coefficients
coef = log_reg.coef_ / X_scaler.scale_
intercept = log_reg.intercept_ - np.sum((coef * X_scaler.mean_))
print (coef)
print (intercept)
# + [markdown] colab_type="text" id="xJgiIupyE0Hd"
# A positive coefficient signifies correlation with the positive class (1 = survived), and a negative coefficient signifies correlation with the negative class (0 = died).
# + colab={"base_uri": "https://localhost:8080/", "height": 51} colab_type="code" id="RKRB0er2C5l-" outputId="39ad0cf3-13b1-4aa8-9a6b-4456b8975a39"
indices = np.argsort(coef)
features = list(X_train.columns)
print ("Features correlated with death:", [features[i] for i in indices[0][:3]])
print ("Features correlated with survival:", [features[i] for i in indices[0][-3:]])
# + [markdown] colab_type="text" id="RhhFw3Kg-4aL"
# ### Proof for unstandardizing coefficients:
#
#
# + [markdown] colab_type="text" id="ER0HFHXj-4h8"
# Note that both our X and y were standardized.
#
# $\mathbb{E}[y] = W_0 + \sum_{j=1}^{k}W_jz_j$
#
# $z_j = \frac{x_j - \bar{x}_j}{\sigma_j}$
#
# $ \hat{y} = \hat{W_0} + \sum_{j=1}^{k}\hat{W_j}z_j $
#
# $\hat{y} = \hat{W_0} + \sum_{j=1}^{k} \hat{W}_j (\frac{x_j - \bar{x}_j}{\sigma_j}) $
#
# $\hat{y} = (\hat{W_0} - \sum_{j=1}^{k} \hat{W}_j\frac{\bar{x}_j}{\sigma_j}) + \sum_{j=1}^{k} (\frac{\hat{w}_j}{\sigma_j})x_j$
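A quick numeric check of this identity for a single feature (toy numbers, not the Titanic coefficients): the standardized model and the unstandardized coefficients must produce identical predictions.

```python
# Toy feature values, and their mean/std used for standardization.
x = [1.0, 2.0, 3.0, 4.0]
mean = sum(x) / len(x)
std = (sum((v - mean) ** 2 for v in x) / len(x)) ** 0.5

W0_hat, W1_hat = 0.3, 1.7              # weights fit on standardized inputs
W1_raw = W1_hat / std                  # unstandardized slope: W_hat / sigma
W0_raw = W0_hat - W1_hat * mean / std  # unstandardized intercept

for v in x:
    z = (v - mean) / std
    y_std = W0_hat + W1_hat * z   # prediction through standardized inputs
    y_raw = W0_raw + W1_raw * v   # prediction on the raw inputs
    assert abs(y_std - y_raw) < 1e-9
print("standardized and unstandardized models agree")
```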
# + [markdown] colab_type="text" id="5yBZLVHwGKSj"
# # K-fold cross validation
# + [markdown] colab_type="text" id="fHyLTMAAGJ_x"
# Cross validation is a resampling-based model-evaluation technique. Instead of making just one train/validation split at the beginning, we use cross validation to create k (usually k=5 or 10) different train/validation splits.
#
# Steps:
# 1. Randomly shuffle the training dataset *train*.
# 2. Split the dataset into k separate folds.
# 3. In each of the k iterations, pick one fold as the validation set and use all the remaining folds as the training set.
# 4. Repeat this process so that every fold has a chance to be part of the training or validation set.
# 5. Randomly initialize the weights and train the model.
# 6. Re-initialize the model in each of the k iterations (but keep the same random initialization), then validate on the validation set.
#
#
# + colab={} colab_type="code" id="6XB6X1b0KcvJ"
from sklearn.model_selection import cross_val_score
# + colab={} colab_type="code" id="UIqKmAEtVWMg"
# K-fold cross-validation
log_reg = SGDClassifier(loss="log", penalty="none", max_iter=args.num_epochs)
scores = cross_val_score(log_reg, standardized_X_train, y_train, cv=10, scoring="accuracy")
print("Scores:", scores)
print("Mean:", scores.mean())
print("Standard Deviation:", scores.std())
# + [markdown] colab_type="text" id="L0aQUomQoni1"
# # TODO
# + [markdown] colab_type="text" id="jCpKSu53EA9-"
# - interaction terms
# - interpreting odds ratio
# - simple example with coordinate descent method (sklearn.linear_model.LogisticRegression)
| notebooks/05_Logistic_Regression.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # OSM - OpenStreetMap map data processing
# - Usage: https://my.oschina.net/u/2306127/blog/3030246
# - Docs: https://imposm.org/docs/imposm.parser/latest/index.html
# - Install:
# ```
# sudo apt install build-essential python-dev protobuf-compiler libprotobuf-dev
# pip install imposm.parser
# ```
# ## API
#
# class imposm.parser.OSMParser(concurrency=None, nodes_callback=None, ways_callback=None, relations_callback=None, coords_callback=None, nodes_tag_filter=None, ways_tag_filter=None, relations_tag_filter=None, marshal_elem_data=False)
# High-level OSM parser.
#
# ### Parameters:
# - concurrency - number of parser processes to start. Defaults to the number of CPUs.
# - xxx_callback - callback functions for coords, nodes, ways and relations. Each callback function gets called with a list of multiple elements. See callback concepts.
# - xxx_filter - functions that can manipulate the tag dictionary. Nodes and relations without tags will not be passed to the callback. See tag filter concepts.
# ### parse(filename)
# Parse the given file. Detects the filetype based on the file suffix. Supports .pbf, .osm and .osm.bz2.
#
# ### parse_pbf_file(filename)
# Parse a PBF file.
#
# ### parse_xml_file(filename)
# Parse an XML file. Supports BZip2-compressed files if the filename ends with .bz2.
# ! pip install imposm.parser
# +
from imposm.parser import OSMParser
# simple class that handles the parsed OSM data.
class HighwayCounter(object):
highways = 0
def ways(self, ways):
# callback method for ways
for osmid, tags, refs in ways:
if 'highway' in tags:
self.highways += 1
# -
# instantiate counter and parser and start parsing
counter = HighwayCounter()
# +
datafile = 'germany.osm.pbf'
p = OSMParser(concurrency=4, ways_callback=counter.ways)
p.parse(datafile)
# done
print(counter.highways)
| gis/osm/imposm.ipynb |
// -*- coding: utf-8 -*-
// ---
// jupyter:
// jupytext:
// text_representation:
// extension: .cpp
// format_name: light
// format_version: '1.5'
// jupytext_version: 1.14.4
// kernelspec:
// display_name: C++14
// language: C++14
// name: xcpp14
// ---
// 
// <center> <h1>C++ backend for the jupyter-leaflet map visualization library</h1> </center>
// <center> <h1>GeoJSON</h1> </center>
// +
#include <iostream>
#include <string>
#include <fstream>
#include "xleaflet/xmap.hpp"
#include "xleaflet/xbasemaps.hpp"
#include "xleaflet/xgeo_json.hpp"
// +
auto black_and_white = xlf::basemap({"OpenStreetMap", "BlackAndWhite"});
auto map = xlf::map::initialize()
.layers({black_and_white})
.center({34.6252978589571, -77.34580993652344})
.zoom(10)
.finalize();
map
// -
// ## Load a local json file
std::ifstream file("geo.json");
nl::json geo_data;
file >> geo_data;
for (auto& feature: geo_data["features"])
{
feature["properties"]["style"] = {
{"weight", 1},
{"fillOpacity", 0.5}
};
}
void print_event_callback(nl::json event)
{
std::cout << event.dump(4);
}
auto geo_json = xlf::geo_json::initialize()
.data(geo_data)
.finalize();
// geo_json.on_hover(print_event_callback);
// geo_json.on_click(print_event_callback);
map.add_layer(geo_json);
| notebooks/GeoJSON.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import math as m
a = float(input("Enter the angle you want: "))
print('The angle {} has a SINE of {:.2f}'.format(a, m.sin(m.radians(a))))
print('The angle {} has a COSINE of {:.2f}'.format(a, m.cos(m.radians(a))))
print('The angle {} has a TANGENT of {:.2f}'.format(a, m.tan(m.radians(a))))
| .ipynb_checkpoints/EX018 - Seno, Cosseno e Tangente-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# %matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from sklearn.metrics import confusion_matrix
from tqdm import tqdm_notebook,trange
import torch
from torch.nn import Parameter
from torch import nn
from torch.nn import functional as F
from torchvision import datasets, transforms
from torchvision.datasets import MNIST
from torch.utils.data import DataLoader
from torch.optim import SGD,Adam
# -
torch.__version__
# +
# Transformer function for image preprocessing
transforms_func = transforms.Compose([transforms.ToTensor(),transforms.Normalize((0.1307,), (0.3081,))])
# mnist train_set
mnist_train = MNIST('./data',train=True,download=True,transform=transforms_func)
# mnist test_set
mnist_test = MNIST('./data',train=False,transform=transforms_func)
# -
train_len = int(0.9*mnist_train.__len__())
valid_len = mnist_train.__len__() - train_len
mnist_train, mnist_valid = torch.utils.data.random_split(mnist_train, lengths=[train_len, valid_len])
print("Size of:")
print("- Training-set:\t\t{}".format(mnist_train.__len__()))
print("- Validation-set:\t{}".format(mnist_valid.__len__()))
print("- Test-set:\t\t{}".format(mnist_test.__len__()))
# +
# Convolutional Layer 1.
filter_size1 = 5 # Convolution filters are 5 x 5 pixels.
num_filters1 = 16 # There are 16 of these filters.
# Convolutional Layer 2.
filter_size2 = 5 # Convolution filters are 5 x 5 pixels.
num_filters2 = 36 # There are 36 of these filters.
# Fully-connected layer.
fc_size = 128 # Number of neurons in fully-connected layer.
# +
# The number of pixels in each dimension of an image.
img_size = (28,28)
# The images are stored in one-dimensional arrays of this length.
img_size_flat = 784
# Tuple with height and width of images used to reshape arrays.
img_shape = (28,28)
# Number of classes, one class for each of 10 digits.
num_classes = 10
# Number of colour channels for the images: 1 channel for gray-scale.
num_channels = 1
# -
def plot_images(images, cls_true, cls_pred=None):
assert len(images) == len(cls_true) == 9
# Create figure with 3x3 sub-plots.
fig, axes = plt.subplots(3, 3)
fig.subplots_adjust(hspace=0.3, wspace=0.3)
for i, ax in enumerate(axes.flat):
# Plot image.
ax.imshow(images[i].reshape(img_shape), cmap='binary')
# Show true and predicted classes.
if cls_pred is None:
xlabel = "True: {0}".format(cls_true[i])
else:
xlabel = "True: {0}, Pred: {1}".format(cls_true[i], cls_pred[i])
# Show the classes as the label on the x-axis.
ax.set_xlabel(xlabel)
# Remove ticks from the plot.
ax.set_xticks([])
ax.set_yticks([])
# Ensure the plot is shown correctly with multiple plots
# in a single Notebook cell.
plt.show()
# +
# Get the first images from the test-set.
images = mnist_train.dataset.train_data[0:9]
# Get the true classes for those images.
cls_true = mnist_train.dataset.train_labels[0:9]
# Plot the images and labels using our helper-function above.
plot_images(images=images, cls_true=cls_true)
# -
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(in_channels=1,out_channels=16,kernel_size=(5,5),stride=1)
self.conv2 = nn.Conv2d(in_channels=16,out_channels=36,kernel_size=(5,5),stride=1)
self.fc1 = nn.Linear(in_features=576,out_features=128)
self.fc2 = nn.Linear(in_features=128,out_features=10)
def forward(self,x):
out = F.relu(F.max_pool2d(self.conv1(x),kernel_size=(2,2)))
out = F.relu(F.max_pool2d(self.conv2(out),kernel_size=(2,2)))
out = out.view(out.size(0),-1)
out = F.relu(self.fc1(out))
        out = F.log_softmax(self.fc2(out), dim=1)
return out
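# The `in_features=576` of `fc1` can be verified by walking the shapes through the network by hand: each valid 5x5 convolution shrinks the spatial size by 4, and each 2x2 max-pool halves it. A pure-Python check of that arithmetic (the comments mirror the layers of the class above):

```python
def conv_out(size, kernel):       # valid convolution, stride 1
    return size - kernel + 1

def pool_out(size, kernel=2):     # non-overlapping max pooling
    return size // kernel

size = 28                            # MNIST input is 28x28
size = pool_out(conv_out(size, 5))   # conv1 + pool: 28 -> 24 -> 12
size = pool_out(conv_out(size, 5))   # conv2 + pool: 12 -> 8 -> 4
flat = 36 * size * size              # 36 output channels of conv2
print(flat)  # 576, matching fc1's in_features
```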
def train(model,device,train_loader,optimizer):
model.train()
correct = 0
for data,target in tqdm_notebook(train_loader,total=train_loader.__len__()):
#data = torch.reshape(data,(-1,784))
data, target = data.to(device), target.to(device)
optimizer.zero_grad()
output = model(data)
loss = F.nll_loss(output, target)
loss.backward()
optimizer.step()
pred = output.max(1, keepdim=True)[1]
correct += pred.eq(target.view_as(pred)).sum().item()
print('Accuracy: {}/{} ({:.0f}%)\n'.format(correct, len(train_loader.dataset),100. * correct / len(train_loader.dataset)))
def test(model, device, test_loader):
model.eval()
test_loss = 0
correct = 0
with torch.no_grad():
for i,(data, target) in tqdm_notebook(enumerate(test_loader),total=test_loader.__len__()):
data, target = data.to(device), target.to(device)
output = model(data)
test_loss += F.nll_loss(output, target, reduction='sum').item() # sum up batch loss
pred = output.max(1, keepdim=True)[1] # get the index of the max log-probability
correct += pred.eq(target.view_as(pred)).sum().item()
test_loss /= len(test_loader.dataset)
print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format(
test_loss, correct, len(test_loader.dataset),
100. * correct / len(test_loader.dataset)))
device = "cuda" if torch.cuda.is_available() else "cpu"
kwargs = {'num_workers': 1, 'pin_memory': True} if device=='cuda' else {}
train_loader = DataLoader(mnist_train,batch_size=64,shuffle=True,**kwargs)
test_loader = DataLoader(mnist_test,batch_size=1024,shuffle=False,**kwargs)
model = Net().to(device)
optimizer = Adam(model.parameters(), lr=1e-3)
epochs = 2
for epoch in range(epochs):
train(model,device,train_loader,optimizer)
test(model,device,test_loader)
| 02_Convolutional_Neural_Network.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # REINFORCE
#
# ---
#
# In this notebook, we will train REINFORCE with OpenAI Gym's Cartpole environment.
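# REINFORCE weights each action's log-probability by the return that followed it. The discounted return-to-go G_t = r_t + gamma * G_{t+1} can be computed in a single backward pass; a pure-Python sketch (note the training loop below uses a simpler variant with one whole-episode return):

```python
def returns_to_go(rewards, gamma=0.99):
    """G_t = r_t + gamma * G_{t+1}, computed right-to-left over the episode."""
    G = 0.0
    out = []
    for r in reversed(rewards):
        G = r + gamma * G
        out.append(G)
    return out[::-1]

print(returns_to_go([0, 0, 1], gamma=0.5))  # [0.25, 0.5, 1.0]
```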
# ### 1. Import the Necessary Packages
# +
import gym
gym.logger.set_level(40) # suppress warnings (please remove if gives error)
import numpy as np
from collections import deque
import matplotlib.pyplot as plt
# %matplotlib inline
import torch
torch.manual_seed(0) # set random seed
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.distributions import Categorical
# -
# ### 2. Define the Architecture of the Policy
# +
env = gym.make('CartPole-v0')
env.seed(0)
print('observation space:', env.observation_space)
print('action space:', env.action_space)
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
debug = True
class Policy(nn.Module):
def __init__(self, s_size=4, h_size=16, a_size=2):
super(Policy, self).__init__()
self.fc1 = nn.Linear(s_size, h_size)
self.fc2 = nn.Linear(h_size, a_size)
def forward(self, x):
x = F.relu(self.fc1(x))
x = self.fc2(x)
return F.softmax(x, dim=1)
def act(self, state):
raw_state = state
state = torch.from_numpy(state).float().unsqueeze(0).to(device)
probs = self.forward(state).cpu()
m = Categorical(probs)
action = m.sample()
if debug:
print(f"raw state: {raw_state}")
print(f"state: {state}")
print(f"action probabilities: {probs}")
print(f"selected action: {action}")
print(f"ln(probability of action): {m.log_prob(action)}")
raise Exception("stop here")
return action.item(), m.log_prob(action)
# -
# ### 3. Train the Agent with REINFORCE
# +
policy = Policy().to(device)
optimizer = optim.Adam(policy.parameters(), lr=1e-2)
def reinforce(n_episodes=2000, max_t=1000, gamma=1.0, print_every=100):
scores_deque = deque(maxlen=100)
scores = []
for i_episode in range(1, n_episodes+1):
saved_log_probs = []
rewards = []
state = env.reset()
for t in range(max_t):
action, log_prob = policy.act(state)
saved_log_probs.append(log_prob)
state, reward, done, _ = env.step(action)
rewards.append(reward)
if done:
break
scores_deque.append(sum(rewards))
scores.append(sum(rewards))
discounts = [gamma**i for i in range(len(rewards))]
R = sum([a*b for a,b in zip(discounts, rewards)])
policy_loss_new = [log_prob * R for log_prob in saved_log_probs]
policy_loss_new = -torch.cat(policy_loss_new).sum()
# policy_loss = []
# for log_prob in saved_log_probs:
# policy_loss.append(-log_prob * R)
# policy_loss = torch.cat(policy_loss).sum()
if debug:
print(f"discounts: {discounts}")
print(f"rewards: {rewards}")
print(f"R: {R}")
print(f"policy_loss new: {policy_loss_new}")
raise Exception("stop here")
optimizer.zero_grad()
policy_loss_new.backward()
optimizer.step()
if i_episode % print_every == 0:
print('Episode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_deque)))
if np.mean(scores_deque)>=195.0:
print('Environment solved in {:d} episodes!\tAverage Score: {:.2f}'.format(i_episode-100, np.mean(scores_deque)))
break
return scores
debug = False
scores = reinforce()
# -
# ### 4. Plot the Scores
fig = plt.figure()
ax = fig.add_subplot(111)
plt.plot(np.arange(1, len(scores)+1), scores)
plt.ylabel('Score')
plt.xlabel('Episode #')
plt.show()
# ### 5. Watch a Smart Agent!
# +
env = gym.make('CartPole-v0')
state = env.reset()
for t in range(1000):
action, _ = policy.act(state)
env.render()
state, reward, done, _ = env.step(action)
if done:
break
env.close()
# -
# ### Testing
import torch
import numpy as np
print(np.dot([1, 2], [1, 2]))
# +
# Future rewards
rewards = np.array([0, 1, 0, 0, 2])
n_rewards = len(rewards)
discounts = np.array([0.9**i for i in range(n_rewards+1)])
R_future_discounted = torch.FloatTensor([rewards[i:].dot(discounts[:-(i+1)]) for i in range(n_rewards)])
R_future_normalized = (R_future_discounted - R_future_discounted.mean())/R_future_discounted.std()
print(f"n_rewards: {n_rewards}")
print(f"rewards: {rewards}, discounts: {discounts}")
print(f"R_future_discounted: {R_future_discounted}")
print(f"R_future_normalized: {R_future_normalized}")
# -
probs = torch.FloatTensor([0.5224, 0.9354, 0.7257, 0.1301, 0.2251])
probs.log().dot(R_future_normalized)
| reinforce/REINFORCE.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Introduction
# <div class="alert alert-info">
#
# **Code not tidied, but should work OK**
#
# </div>
# Dataset originaly obtained from https://github.com/suarasaur/dinosaurs
import numpy as np
import matplotlib.pyplot as plt
import pdb
# # Read and convert data
with open('dinosaurs.csv') as f:
data = [x.strip() for x in f.readlines()]
data[:4]
chars = list(set(''.join(data))) # ['x', 'z', 'j', 'g', ... ]
chars.insert(0, ' ') # use space as not-a-char tag, used for padding
ch2i = {ch:i for i,ch in enumerate(chars)} # {' ': 0, 'x': 1, 'z': 2, 'j': 3, 'g': 4, ... }
i2ch = {i:ch for ch,i in ch2i.items()} # {0: ' ', 1: 'x', 2: 'z', 3: 'j', 4: 'g', ... }
np.random.seed(0)
np.random.shuffle(data)
data[:4]
max_len = len(max(data, key=len)) # length of longest dino name
for i, dino in enumerate(data):
data[i] = dino.ljust(max_len) # pad all names with spaces to same length
data[:4]
vocab_size = len(chars)
# +
[ch2i[x] for x in dino]
# -
indices = np.zeros(shape=[len(data), max_len], dtype=int)
indices
for i, dino_name in enumerate(data):
indices[i] = [ch2i[x] for x in dino_name]
data[234]
''.join([i2ch[x] for x in indices[234]])
onehot = np.zeros(shape=[len(data), max_len, vocab_size], dtype=int)
for i in range(len(indices)):
for j in range(max_len):
onehot[i, j, indices[i,j]] = 1
indices[0, 0]
onehot[0]
''.join([i2ch[np.argmax(x)] for x in onehot[234]])
onehot.shape
# # Neural Network
# <img src="../Udacity_DL_Nanodegree/031%20RNN%20Super%20Basics/MultiMultiRNN01.png" align="left"/>
# <img src="assets/rnn_diag.png"/>
import numpy as np
import matplotlib.pyplot as plt
import pdb
# **Hyperbolic Tangent**
# +
def tanh(x):
return np.tanh(x)
def tanh_der(x):
return 1.0 - np.tanh(x)**2
# -
# **Softmax**
# see e.g. here: https://deepnotes.io/softmax-crossentropy
def softmax(x):
"""Numerically stable softmax"""
max_ = np.max(x, axis=-1, keepdims=True) # shape: (n_batch, 1)
ex = np.exp(x - max_) # shape: (n_batch, n_out)
ex_sum = np.sum(ex, axis=-1, keepdims=True) # shape: (n_batch, 1)
return ex / ex_sum # probabilities shape: (n_batch, n_out)
def cross_entropy(y, y_hat):
"""CE for one-hot targets y, averages over batch."""
assert np.alltrue(y.sum(axis=-1) == 1) # make sure y is one-hot encoded
assert np.alltrue(y.max(axis=-1) == 1)
prob_correct = y_hat[range(len(y_hat)), np.argmax(y, axis=-1)] # pick y_hat for correct class (n_batch,)
return np.average( -np.log(prob_correct) )
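# A quick numeric check of the two helpers: with identical logits the softmax is uniform over n classes, and the cross-entropy is ln(n) no matter which class is correct. The definitions are restated here only to keep the snippet self-contained; they match the cells above:

```python
import numpy as np

def softmax(x):
    ex = np.exp(x - np.max(x, axis=-1, keepdims=True))
    return ex / ex.sum(axis=-1, keepdims=True)

def cross_entropy(y, y_hat):
    p = y_hat[range(len(y_hat)), np.argmax(y, axis=-1)]
    return np.average(-np.log(p))

logits = np.zeros((4, 3))       # identical logits -> uniform probabilities
y_hat = softmax(logits)
y = np.eye(3)[[0, 1, 2, 0]]     # arbitrary one-hot targets
assert np.allclose(y_hat, 1 / 3)
print(cross_entropy(y, y_hat))  # ln(3) ~ 1.0986
```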
def forward(x, Wxh, Whh, Who):
assert x.ndim==3 and x.shape[1:]==(4, 3)
x_t = {}
s_t = {}
z_t = {}
s_t[-1] = np.zeros([len(x), len(Whh)]) # [n_batch, n_hid]
T = x.shape[1]
for t in range(T):
x_t[t] = x[:,t,:]
z_t[t] = s_t[t-1] @ Whh + x_t[t] @ Wxh
s_t[t] = tanh(z_t[t])
z_out = s_t[t] @ Who
y_hat = softmax( z_out )
return y_hat
def backprop(x, y, Wxh, Whh, Who):
assert x.ndim==3 and x.shape[1:]==(4, 3)
assert y.ndim==2 and y.shape[1:]==(1,)
assert len(x) == len(y)
# Init
x_t = {}
s_t = {}
z_t = {}
s_t[-1] = np.zeros([len(x), len(Whh)]) # [n_batch, n_hid]
T = x.shape[1]
# Forward
for t in range(T): # t = [0, 1, 2, 3]
x_t[t] = x[:,t,:] # pick time-step input x_[t].shape = (n_batch, n_in)
z_t[t] = s_t[t-1] @ Whh + x_t[t] @ Wxh
s_t[t] = tanh(z_t[t])
z_out = s_t[t] @ Who
y_hat = softmax( z_out )
# Backward
dWxh = np.zeros_like(Wxh)
dWhh = np.zeros_like(Whh)
dWho = np.zeros_like(Who)
ro = y_hat - y # Backprop through loss funt.
dWho = s_t[t].T @ ro #
ro = ro @ Who.T * tanh_der(z_t[t]) # Backprop into hidden state
for t in reversed(range(T)): # t = [3, 2, 1, 0]
dWxh += x_t[t].T @ ro
dWhh += s_t[t-1].T @ ro
if t != 0: # don't backprop into t=-1
ro = ro @ Whh.T * tanh_der(z_t[t-1]) # Backprop into previous time step
return y_hat, dWxh, dWhh, dWho
def train_rnn(x, y, nb_epochs, learning_rate, Wxh, Whh, Who):
losses = []
for e in range(nb_epochs):
        y_hat, dWxh, dWhh, dWho = backprop(x, y, Wxh, Whh, Who)
Wxh += -learning_rate * dWxh
Whh += -learning_rate * dWhh
Who += -learning_rate * dWho
# Log and print
        loss_train = mse(x, y, Wxh, Whh, Who)  # mse() loss helper assumed defined in an earlier cell
losses.append(loss_train)
if e % (nb_epochs / 10) == 0:
print('loss ', loss_train.round(4))
return losses
# # Gradient Check
def numerical_gradient(x, y, Wxh, Whh, Who):
dWxh = np.zeros_like(Wxh)
dWhh = np.zeros_like(Whh)
dWho = np.zeros_like(Who)
eps = 1e-4
for r in range(len(Wxh)):
for c in range(Wxh.shape[1]):
Wxh_pls = Wxh.copy()
Wxh_min = Wxh.copy()
Wxh_pls[r, c] += eps
Wxh_min[r, c] -= eps
l_pls = mse(x, y, Wxh_pls, Whh, Who)
l_min = mse(x, y, Wxh_min, Whh, Who)
dWxh[r, c] = (l_pls - l_min) / (2*eps)
for r in range(len(Whh)):
for c in range(Whh.shape[1]):
Whh_pls = Whh.copy()
Whh_min = Whh.copy()
Whh_pls[r, c] += eps
Whh_min[r, c] -= eps
l_pls = mse(x, y, Wxh, Whh_pls, Who)
l_min = mse(x, y, Wxh, Whh_min, Who)
dWhh[r, c] = (l_pls - l_min) / (2*eps)
for r in range(len(Who)):
for c in range(Who.shape[1]):
Who_pls = Who.copy()
Who_min = Who.copy()
Who_pls[r, c] += eps
Who_min[r, c] -= eps
l_pls = mse(x, y, Wxh, Whh, Who_pls)
l_min = mse(x, y, Wxh, Whh, Who_min)
dWho[r, c] = (l_pls - l_min) / (2*eps)
return dWxh, dWhh, dWho
| NumpyNN/1020_Char_RNN_Dinosaurs.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Cleaning flights.csv
#
# **NOTE:** This is only one approach to cleaning up this dataset. Notice that any assumptions I've made are well documented.
# +
import pandas as pd
import numpy as np
from scipy import stats
import matplotlib.pylab as plt
# %matplotlib inline
# -
# ### Loading data
data_df = pd.read_csv('assign_wk2/flights.csv', low_memory=False)
# <div class="alert alert-block alert-info">
# <b>pd.read_csv low_memory argument: Why did I include this?</b> <br>
# There are a couple of columns within the dataset that have a mixed data type. I was able to see this when I visually inspected the data file in Sublime first. If you didn't do this, Pandas would have presented a warning informing you of this.<br>
# <b>Absence of dropna(): Yes I could have done that when loading the data.</b> <br>
# However, if I did that when I loaded the dataset, I would have lost several of the columns that I need for this analysis. So, we will need to clean things up after a bit of analysis.
# </div>
data_df.head(10)
data_df.info()
# <div class="alert alert-block alert-info">
# <b>Notice anything missing? Where did the non-null attribute information go?</b> <br>
# Since our dataset is so large, that information is excluded from this view. So we will have to test for it outside of the info() function.
# </div>
# count the number of NaN in each column of the dataset
data_df.isnull().sum()
# converting all the column names to lowercase() - personal preference
data_df.columns = data_df.columns.str.lower()
# ### Imputing the arrival_delay column
# We can start to determine which columns we don't feel will support our analysis. Based on the fact that our dataset has 5.8+ million rows, we can see that the last six columns are missing over 50% of their data. So, I'm going to remove those columns.
# Additionally, our analysis is centered around arrival_delay and the originating airport, so we want to keep columns that might support that analysis. Columns that I'm going to keep at this point are:
# - date information: year, month, day, day_of_week
# - info to uniquely identify the flight: airline, flight_number, origin_airport,destination_airport
# - departure info: departure_time, departure_delay, scheduled_departure, scheduled_time, elapse_time
# - arrival info: scheduled_arrival, arrival_time, arrival_delay
# - canceled/diverted flight info: diverted, cancelled
drop_cols = ['tail_number','taxi_out','wheels_off','air_time','distance','wheels_on','taxi_in','cancellation_reason','air_system_delay','security_delay','airline_delay','late_aircraft_delay','weather_delay']
data_df.drop(drop_cols,axis=1,inplace=True)
data_df.info()
# Now we can address filling in the missing arrival_delay values. <br>
#
# I'm going to define an arrival delay to be based on the scheduled_arrival - arrival_time.
data_df.head(10)
# Now let's take a closer look at only rows the are missing an arrival_delay.
data_df[data_df.arrival_delay.isnull()].head(10)
# <div class="alert alert-block alert-info">
# <b>Index numbers: Why are the index number to the far left non-sequential at this point?</b> <br>
# We asked to only see the rows of data that are missing an arrival_delay value. The index number to the far left is showing the row number (aka position) in the overall dataframe.
# </div>
# Very interesting! If we scroll to the right, we see that a majority of the flights missing an arrival_delay value were canceled. I'm going to contend that a canceled flight can't be delayed and falls outside our intended analysis. So, I'm going to drop rows where the flight was canceled.
data_df.drop(data_df[data_df.cancelled == 1].index, inplace=True)
# We can use value_counts to verify that we only have rows for flights that actually occurred (cancelled = 0).
data_df.cancelled.value_counts()
# So far so good, time to see how many missing arrival_delay values we have at this point.
data_df[data_df.arrival_delay.isnull()].shape
data_df.shape
# Wow!!! That dropped the number of rows with a missing arrival_delay value from over 105K to around 15K, while the overall size of our dataset was only minimally reduced. Things are looking good at this point!
data_df[data_df.arrival_delay.isnull()].head(10)
# Well, it's time to just start making some assumptions and documenting our process. Here is my approach to filling in the missing arrival_delay values.
# 1. if possible, calculate the arrival_delay value from scheduled_arrival and arrival_time
# 2. if a flight leaves early or on-time, it will arrive early by the same number of minutes
# 3. if a flight leaves late by 15% or less of the flight duration, it will make that time up in the air and arrive on time
# 4. if a flight leaves late by more than 15% of the flight duration, it will be late by the departure delay minus 15% of the flight duration.
#
# I'm going to create a UDF and then use a combination of apply() and lambda to fill in the missing arrival_delay values.
def fill_missing_delay(row):
delay = np.NaN
if np.isnan(row.arrival_delay):
if ~np.isnan(row.scheduled_arrival) and ~np.isnan(row.arrival_time):
delay = row.scheduled_arrival - row.arrival_time
elif row.departure_delay <= 0:
delay = np.negative(row.departure_delay)
elif row.departure_delay <= (0.15 * row.scheduled_time):
delay = float(0)
else:
delay = np.negative(row.departure_delay - (0.15 * row.scheduled_time))
else:
delay = row.arrival_delay
return delay
# <div class="alert alert-block alert-info">
# <b>Special Character: What are the '~' used for above?</b> <br>
# '~': is a way to negate a statement. So ~ np.isnan(xxx) means that we are testing that xxx does not equal NaN
# </div>
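# For example, a tiny demo of `~` on the boolean results of `np.isnan` (illustration only, not taken from the flights dataset):

```python
import numpy as np

print(~np.isnan(np.nan))  # False: np.nan IS NaN, then negated
print(~np.isnan(1.0))     # True: 1.0 is not NaN, then negated
```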
data_df.arrival_delay = data_df.apply(lambda x: fill_missing_delay(x), axis = 1)
data_df[data_df.arrival_delay.isnull()].head(10)
data_df[data_df.arrival_delay.isnull()].shape
# Awesome!!! We are down to 1 flight that needs a value. It looks like the issue with this row is the missing value for scheduled_time. Let's see if there are other flights that go between the origination and destination airports that we can use as a baseline.
data_df[(data_df.origin_airport == 'FLL') \
& (data_df.destination_airport == 'LGA') \
& (data_df.airline == 'NK')]
# <div class="alert alert-block alert-info">
# <b>Another Special Character: How about the '\'? What is it for?</b> <br>
# '\': is a line continuation marker and simply means that the code continues on the following line. <br>
# </div>
# Alright, I'm going to assume that the flight duration for our 1 row above is the mean of all the other flights going between FLL and LGA, and fill in the missing scheduled_time for this 1 row. <br>
# <br>
# I'm going to use a couple of intermediate variables to condense the code a bit.
avg_duration = round(data_df[(data_df.origin_airport == 'FLL') \
& (data_df.destination_airport == 'LGA') \
& (data_df.airline == 'NK')].scheduled_time.mean())
row_delay_departure = data_df[data_df.arrival_delay.isnull()].departure_delay.sum()
data_df.arrival_delay.fillna(float(np.negative(row_delay_departure - (0.15 * avg_duration))), inplace=True)
data_df[data_df.arrival_delay.isnull()].shape
# Hooray! We have managed to clean up all of the missing arrival_delay values. Time to proceed with the analysis.
#
# #### Weird originating airport codes
# Since I looked at my data in a text editor prior to loading it, I noticed something interesting with the range of values in the origin_airport column.
data_df.origin_airport.unique()
# Very interesting: why do we have airports with numerical names? This took some research, and the FAA reports that these are smaller community/regional airports. I'm going to limit my analysis to major airports and remove these from the dataset. To do this, I'm going to create a temporary column that contains the length of the originating airport name. Based on visual inspection, I should only have 3 or 5 in this column. Then I'll drop all the rows with a length of 5.
data_df['name_len'] = 0
data_df.name_len = data_df.origin_airport.apply(lambda x: len(str(x)))
data_df.name_len.value_counts()
data_df.drop(data_df[data_df.name_len == 5].index, inplace=True)
# Double check our changes!
data_df.origin_airport.unique()
# No need to keep the name_len column at this point.
data_df.drop(['name_len'],axis=1,inplace=True)
data_df.info()
data_df.shape
# Write the cleaned version of the dataset to a csv
data_df.to_csv('assign_wk2/flights_clean.csv',index=False)
| week2/Assignments/assign_wk2/Clean_Flights_Data.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Choosing an Oil-Production Region
# ### Introduction
# An oil-production company must decide where to drill new wells.
# Oil samples are available for three regions, each with 100,000 deposits where the oil quality and the volume of reserves were measured.
# We will build a machine-learning model that helps determine the region where extraction will bring the greatest profit, and we will analyze the potential profit and risks with the *bootstrap* technique.
#
# Steps for choosing a location:
#
# - In the selected region, find deposits and determine the feature values for each;
# - Build a model and estimate the volume of reserves;
# - Select the deposits with the highest estimated values. The number of deposits depends on the company's budget and the cost of developing a single well;
# - The profit equals the total profit of the selected deposits.
#
# Problem conditions:
#
# • Only linear regression is suitable for training the model (the others are insufficiently predictable).
# • When a region is explored, 500 points are surveyed.
# • The budget for developing the deposits is 10 billion rubles; drilling one well costs 50 million rubles.
# • One barrel of crude yields 4500 rubles of profit.
# • Regions where the risk of losses exceeds 2.5% are not considered. From the remaining regions, the one with the highest mean profit is chosen.
#
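# The conditions above already determine the break-even reserve per well: the budget funds 200 of the 500 explored points, and each unit of `product` (one thousand barrels) yields 4500 * 1000 rubles. A quick back-of-the-envelope check using only the figures stated above:

```python
BUDGET = 10_000_000_000            # development budget, rubles
WELL_COST = 50_000_000             # drilling cost per well, rubles
REVENUE_PER_UNIT = 4_500 * 1_000   # rubles per thousand barrels of product

n_wells = BUDGET // WELL_COST                      # how many wells fit in the budget
break_even = BUDGET / (n_wells * REVENUE_PER_UNIT) # mean reserve per well to break even
print(n_wells, round(break_even, 1))               # 200 wells, ~11.1 thousand barrels per well
```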
#
# Data description
#
# • id - unique deposit identifier;
# • f0, f1, f2 - three point features (what they mean is unimportant, but the features themselves are significant);
# • product - volume of reserves in the deposit (thousands of barrels).
#
# Work plan
# 1. Data preparation.
# 2. Training and testing a model for each region.
# 3. Preparing the profit calculation.
# 4. Estimating the risk and profit for each region.
# 5. Overall conclusion.
# ### 1. Data preparation
# import the libraries
import pandas as pd
import numpy as np
# load the data
df0 = pd.read_csv('/datasets/geo_data_0.csv')
df1 = pd.read_csv('/datasets/geo_data_1.csv')
df2 = pd.read_csv('/datasets/geo_data_2.csv')
# Check the data for each region.
regions = {'Region_0': df0, 'Region_1': df1, 'Region_2': df2}
for reg, data in regions.items():
    print (reg, ' First five rows of the dataset')
    print ()
    print (data.head())
    print ()
    print (reg, ' General info - check for missing values and data types')
    print ()
    data.info()
    print ()
    print (reg, ' Number of duplicate rows:', data.duplicated().sum())
    print ()
# Search for duplicate rows across the databases of the different regions, a check for errors made when the databases were built:
pd.concat([df0,df1,df2]).duplicated().sum()
# Drop the id column: it is not needed in this work.
df0 = df0.drop(columns=['id'])
df1 = df1.drop(columns=['id'])
df2 = df2.drop(columns=['id'])
# #### Conclusion on data preprocessing
#
# No missing values were found. No duplicates were found. No data-type conversion is required.
# Columns irrelevant to this work were removed from the tables.
# ### 2. Model training and validation
# Feature values and their spreads differ significantly across columns, so we first scale the features.
# To prevent validation data from leaking into the training set, we fit the scaler on the training set only and then apply it to each set separately.
# We build the model with the linear regression algorithm.
# Predictions and true answers are stored as Series with aligned indices; this is needed below for the profit calculation.
# We compute the mean reserves per well for each region and the model's root mean squared error (RMSE).
# Additionally, to gauge how homogeneous a region's wells are, we find the range of reserve values and the standard deviation.
# The results are presented as a table.
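# A leakage-free scaling sketch in plain Python (toy, hypothetical numbers): the mean and
# standard deviation come from the training split only and are then reused to transform
# both splits, mirroring the fit/transform pattern used with `StandardScaler` below.

```python
# statistics are computed on the training split only
train = [1.0, 2.0, 3.0]
valid = [2.0, 4.0]
mu = sum(train) / len(train)
sd = (sum((x - mu) ** 2 for x in train) / len(train)) ** 0.5
scale = lambda xs: [(x - mu) / sd for x in xs]
scaled_train = scale(train)  # centered: sums to ~0
scaled_valid = scale(valid)  # transformed with TRAIN statistics only
print(scaled_train, scaled_valid)
```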
# import train_test_split from sklearn
from sklearn.model_selection import train_test_split
# import the scaler
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
# list of features to scale
numeric = ['f0','f1','f2']
# import the linear regression algorithm
from sklearn.linear_model import LinearRegression
# import the MSE function
from sklearn.metrics import mean_squared_error as mse
# +
# to avoid code duplication, define a function model() that splits the dataset, scales the
# features, trains the model, obtains predictions, and computes RMSE and the other quantities
def model(df):
    # split into training and validation sets
    df_train, df_valid = train_test_split(df, test_size=0.25, random_state=12345)
    # feature scaling
    scaler.fit(df_train[numeric]) # fit the scaler on the training set
    # apply the scaler
    df_train[numeric] = scaler.transform(df_train[numeric])
    df_valid[numeric] = scaler.transform(df_valid[numeric])
    # create and train the model
    model = LinearRegression()
    features_train = df_train.drop(columns='product')
    target_train = df_train['product']
    model.fit(features_train, target_train)
    # predictions and true answers
    features_valid = df_valid.drop(columns='product')
    target_valid = df_valid['product'].reset_index(drop=True) # true answers
    pred_valid = pd.Series(model.predict(features_valid)) # predictions
    # mean reserves per well in the region and the model's root mean squared error (RMSE);
    # additionally, to gauge how homogeneous the region's wells are, the range of
    # reserve values and the standard deviation
    mean = round(df['product'].mean(),1) # mean reserves
    rmse = round(mse(target_valid,pred_valid)**0.5,1) # square root of MSE (RMSE)
    ran = round(df['product'].max()-df['product'].min(),1) # range of values
    sd = round(np.std(df['product']),1) # standard deviation
    # the function returns the actual and predicted oil volumes,
    # the mean reserves per well of the region, RMSE,
    # the range of reserve values and the standard deviation
    return target_valid, pred_valid, mean, rmse, ran, sd
# +
import warnings
warnings.simplefilter("ignore")
# apply the model function to each region
target_valid0, pred_valid0, mean0, rmse0, ran0, sd0 = model(df0)
target_valid1, pred_valid1, mean1, rmse1, ran1, sd1 = model(df1)
target_valid2, pred_valid2, mean2, rmse2, ran2, sd2 = model(df2)
# -
# table template
columns1 = ['region','mean_volume', 'RMSE', 'range', 'standard_dev']
line0 = ['region_0', mean0, rmse0, ran0, sd0]
line1 = ['region_1', mean1, rmse1, ran1, sd1]
line2 = ['region_2', mean2, rmse2, ran2, sd2]
data1 = [line0, line1, line2]
# table of the required values
summary1 = pd.DataFrame(data=data1, columns=columns1)
summary1
# #### Analysis of the model's performance
# The model performs exceptionally well for region 1. For regions 0 and 2, the model's RMSE (effectively the prediction error) exceeds 40% of the mean reserves per well. This may be related to the nature of features f0, f1 and f2 and their applicability to a particular region.
# Note that each region's wells are very heterogeneous in reserves, as seen from the value ranges and the standard deviations.
# ### 3. Preparing to calculate profit
# ### 3.1. Key values for the calculations.
# number of points studied during exploration (points of research)
p = 500
# field-development budget, mln rub
b = 10000
# cost of drilling one well (investment per well), mln rub
ipw = 50
# number of wells affordable within the budget
w = int(b/ipw)
w
# profit per thousand barrels (profit per kilo barrel), mln rub
pkb = 4500*1000/1000000
pkb
# acceptable risk of losses, percent
risk_accept = 2.5
# ### 3.2. Minimum mean volume of raw material per field sufficient to justify development.
# mean reserves per well (volume per well) needed to cover the cost of drilling it, thousand barrels
vpw = round (ipw / pkb, 1)
vpw
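# A quick standalone check of the break-even figure, with the values restated from the
# problem statement above (50 mln rub per well, 4.5 mln rub per thousand barrels):

```python
cost_per_well_mln = 50        # drilling cost per well, mln rub
profit_per_kbarrel_mln = 4.5  # profit per thousand barrels, mln rub
break_even = round(cost_per_well_mln / profit_per_kbarrel_mln, 1)
print(break_even)  # 11.1 thousand barrels
```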
# #### Conclusion
# A well must yield on average at least 11.1 thousand barrels of oil to cover its drilling costs.
# ### 3.3. A function computing the profit from a set of selected fields and the model's predictions.
# +
# The function takes the raw-material volumes (thousand barrels) in each well and the number
# of wells to select; it returns the total gross profit (mln rub) from all these wells.
def prof_reg (target, pred, n):
    pred_sorted = pred.sort_values(ascending=False) # predicted volumes sorted in descending order
    target_selected = target[pred_sorted.index].head(n) # the n actual volumes corresponding to the highest predictions
    income = target_selected.sum()*pkb # actual revenue from the n selected wells, mln rub
    ips = ipw*n # cost of drilling n wells (investment per sample), mln rub
    profit = income-ips # gross profit from the n selected wells, mln rub
    return profit
# -
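# The selection logic of the profit function above can be checked on toy numbers (pure
# Python, hypothetical values): wells are ranked by *predicted* volume, but revenue is
# computed from the *actual* volumes of the chosen wells.

```python
preds   = [10.0, 90.0, 50.0]   # predicted volumes, thousand barrels
actuals = [12.0, 80.0, 55.0]   # actual volumes at the same indices
pkb, ipw, n = 4.5, 50, 2       # profit per kbarrel (mln rub), drilling cost (mln rub), wells to pick
top = sorted(range(len(preds)), key=lambda i: preds[i], reverse=True)[:n]
profit = sum(actuals[i] for i in top) * pkb - ipw * n
print(profit)  # (80 + 55) * 4.5 - 100 = 507.5
```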
# ### 4. Risks and profit for each region.
# We apply the bootstrap technique with 1000 samples to obtain the profit distribution, then compute the mean profit, the 95% confidence interval and the risk of losses. As the measure of loss risk we take the percentage of negative profit values.
# The result is presented as a table.
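# A minimal, dependency-light sketch of the bootstrap idea used below (toy data; the real
# loop resamples wells and computes profit rather than the mean):

```python
import random

random.seed(0)
data = [1.0, 2.0, 3.0, 4.0, 5.0]
means = []
for _ in range(1000):
    sample = [random.choice(data) for _ in data]  # resample with replacement
    means.append(sum(sample) / len(sample))
# share of "bad" outcomes, %, analogous to the loss-risk measure below
risk = sum(m < 2.0 for m in means) / len(means) * 100
print(round(sum(means) / len(means), 2), risk)
```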
# import the required functions
from scipy import stats as st
from numpy.random import RandomState # needed to apply the bootstrap method
state = RandomState(12345)
# table template
columns=['region','mean_profit','95%_low', '95%_high', '2.5%_quantile', 'risk_%', 'risk_status']
data=[]
regions = {'region_0':[target_valid0, pred_valid0], 'region_1': [target_valid1, pred_valid1], 'region_2': [target_valid2, pred_valid2]}
for reg, tp in regions.items():
    values = []
    for i in range(1000): # bootstrap technique to obtain the profit distribution
        target_subsample = tp[0].sample(n=p, replace=True, random_state = state) # sample of p wells (p=500)
        pred_subsample = tp[1][target_subsample.index] # predicted volumes for the sampled wells
        values.append(prof_reg(target_subsample, pred_subsample, w)) # actual profit for this sample
    values = pd.Series(values)
    mean = values.mean() # mean profit, mln rub
    ci = st.t.interval(0.95, len(values)-1,loc=mean, scale=values.sem()) # confidence interval
    q = values.quantile(0.025).astype('int64') # 2.5% quantile
    values_n = values[values<0] # negative profit values
    risk = round(len(values_n)/len(values)*100,1) # share of negative profit values, %
    if risk < risk_accept: # check the risk criterion
        risk_status = 'OK'
    else:
        risk_status = 'NG'
    data.append([reg, mean.astype('int64'), ci[0].astype('int64'), ci[1].astype('int64'), q, risk, risk_status])
# results for all regions
summary = pd.DataFrame(data=data, columns=columns)
summary
# choose the region based on mean profit
best = summary[summary['mean_profit']==summary['mean_profit'].max()]
best
# Thus, region 1 is recommended for further development: the total gross profit from the selected wells is 95182 mln rub; with 95% confidence the profit lies within 95052-95312 mln rub; the risk of losses is absent.
# ### 5. Overall conclusion.
# The following work was done:
# The data were checked for preprocessing needs and scaled.
# For each region, models based on the linear regression algorithm were created and trained.
# The RMSE values, the mean reserves per well, and the minimum break-even reserves per well were computed.
# Using the bootstrap technique, the total gross profit from the selected wells in each region was computed together with its 95% confidence interval, and the risk of losses was assessed.
# As a result, region 1 is recommended for further development: the total gross profit from the selected wells is 95182 mln rub; with 95% confidence the profit lies within 95052-95312 mln rub; the risk of losses is absent.
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Subspace-search Variational Quantum Eigensolver
#
# <em> Copyright (c) 2021 Institute for Quantum Computing, Baidu Inc. All Rights Reserved. </em>
#
# ## Overview
#
# - In this tutorial, we will show how to train a quantum neural network (QNN) through Paddle Quantum to find the entire energy spectrum of a quantum system.
#
# - First, import the following packages.
import numpy
from numpy import pi as PI
import paddle
from paddle import matmul
from paddle_quantum.circuit import UAnsatz
from paddle_quantum.utils import random_pauli_str_generator, pauli_str_to_matrix, dagger
# ## Background
#
# - Variational Quantum Eigensolver (VQE) [1-3] is one of the most promising applications for near-term quantum computing. One of its powerful variants is SSVQE [4], which can be used to find the ground state and the **excited states** of a physical system's Hamiltonian. Mathematically, one can interpret it as solving the eigenvalues and eigenvectors of a Hermitian matrix. The set of eigenvalues of the Hamiltonian is called the energy spectrum.
# - Next, we will use a brief example to demonstrate how to solve this problem by training a QNN, that is, to solve the energy spectrum of a given Hamiltonian $H$.
#
# ## Hamiltonian
#
# - For a specific molecule that needs to be analyzed, we need its geometry, charge, and spin multiplicity to obtain the Hamiltonian (in Pauli products form) describing the system. Specifically, through our built-in quantum chemistry toolkit, fermionic-to-qubit mapping technology can be used to output the qubit Hamiltonian.
# - As a simple demonstration of SSVQE, we provide a random 2-qubit Hamiltonian.
N = 2 # Number of qubits
SEED = 14 # Fixed random seed
# +
# Generate random Hamiltonian represented by Pauli string
numpy.random.seed(SEED)
hamiltonian = random_pauli_str_generator(N, terms=10)
print("Random Hamiltonian in Pauli string format = \n", hamiltonian)
# Generate matrix representation of Hamiltonian
H = pauli_str_to_matrix(hamiltonian, N)
# -
# ## Building a quantum neural network
#
# - To implement SSVQE, we first need to design a QNN $U(\theta)$ (parameterized quantum circuit). In this tutorial, we provide a predefined universal quantum circuit template suitable for 2 qubits. Theoretically, this template has enough expressibility to simulate arbitrary 2-qubit unitary operation [5]. The specific implementation requires 3 $CNOT$ gates plus 15 single-qubit rotation gates $\in \{R_y, R_z\}$.
#
# - One can randomly initialize the QNN parameters ${\bf{\vec{\theta }}}$ containing 15 parameters.
# +
THETA_SIZE = 15 # The number of parameters in the quantum neural network
def U_theta(theta, N):
"""
U_theta
"""
# Initialize the quantum neural network according to the number of qubits/network width
cir = UAnsatz(N)
# Call the built-in quantum neural network template
cir.universal_2_qubit_gate(theta, [0, 1])
# Return the circuit of the quantum neural network
return cir
# -
# ## Training model and loss function
#
# - After setting up the Hamiltonian and the quantum neural network architecture, we will further define the parameters to be trained, the loss function and optimization methods. For a detailed inspection of the theory of SSVQE, please refer to the original paper [4].
#
# - By acting the quantum neural network $U(\theta)$ on a set of orthogonal initial states (one can take the computational basis $\{|00\rangle, |01\rangle, |10\rangle, |11 \rangle \}$), we will get the output states $\{\left| {\psi_1 \left( {\bf{\theta }} \right)} \right\rangle, \left| {\psi_2 \left( {\bf{\theta }} \right)} \right\rangle, \left| {\psi_3 \left( {\bf{\theta }} \right)} \right\rangle, \left| {\psi_4 \left( {\bf{\theta }} \right)} \right\rangle \}$.
#
# - Further, the loss function in the SSVQE model generally consists of expectation value of each output quantum state $\left| {\psi_k \left( {\bf{\theta }} \right)} \right\rangle$ given the Hamiltonian $H$. More specifically, it's the weighted summation of the energy expectation value. In this example, the default weight vector is $\vec{w} = [4, 3, 2, 1]$.
#
# - The loss function is defined as:
#
# $$
# \mathcal{L}(\boldsymbol{\theta}) = \sum_{k=1}^{2^n}w_k*\left\langle {\psi_k \left( {\bf{\theta }} \right)} \right|H\left| {\psi_k \left( {\bf{\theta }} \right)} \right\rangle. \tag{1}
# $$
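# As a sanity check of Eq. (1), the sketch below (NumPy only; a random Hermitian $H$ and a
# random unitary standing in for $U(\theta)$, both hypothetical) computes the diagonal of
# $U^\dagger H U$ and its weighted sum. Note the unweighted diagonal always sums to
# $\mathrm{tr}(H)$, whatever the unitary is.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
# random Hermitian "Hamiltonian"
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
H = (A + A.conj().T) / 2
# random unitary via QR decomposition of a random complex matrix
Q, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
# <psi_k|H|psi_k> for the 4 computational basis states = diagonal of Q^dagger H Q
diag = np.real(np.diag(Q.conj().T @ H @ Q))
loss = float(np.array([4, 3, 2, 1]) @ diag)  # weighted sum, as in Eq. (1)
print(loss)
```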
class Net(paddle.nn.Layer):
def __init__(self, shape, dtype='float64'):
super(Net, self).__init__()
# Initialize the theta parameter list and fill the initial value with the uniform distribution of [0, 2*pi]
self.theta = self.create_parameter(shape=shape,
default_initializer=paddle.nn.initializer.Uniform(low=0.0, high=2*PI),
dtype=dtype, is_bias=False)
# Define loss function and forward propagation mechanism
def forward(self, H, N):
# Build quantum neural network
cir = U_theta(self.theta, N)
U = cir.U
# Calculate the loss function
loss_struct = paddle.real(matmul(matmul(dagger(U), H), U))
# Enter the computational basis to calculate the expected value
# which is equivalent to taking the diagonal element of U^dagger*H*U
loss_components = [
loss_struct[0][0],
loss_struct[1][1],
loss_struct[2][2],
loss_struct[3][3]
]
# Weighted summation of loss function
loss = 4 * loss_components[0] + 3 * loss_components[1]\
+ 2 * loss_components[2] + 1 * loss_components[3]
return loss, loss_components, cir
# ## Hyper-parameters
#
# Before training the quantum neural network, we also need to set up several hyper-parameters, mainly the learning rate LR and the number of iterations ITR. Here we set the learning rate LR = 0.3 and the number of iterations ITR = 100. One can adjust these hyper-parameters accordingly and check how they influence the training performance.
ITR = 100 # Set the total number of iterations of training
LR = 0.3 # Set the learning rate
# ## Training process
#
# - After setting all the parameters of SSVQE model, we need to convert all the data into Tensor in the PaddlePaddle, and then train the quantum neural network.
# - We use Adam Optimizer in training, and one can also call other optimizers provided in PaddlePaddle.
# +
paddle.seed(SEED)
# We need to convert numpy.ndarray to Tensor supported in Paddle
hamiltonian = paddle.to_tensor(H)
# Determine the parameter dimension of the network
net = Net(shape=[THETA_SIZE])
# We use Adam optimizer for better performance
# One can change it to SGD or RMSprop.
opt = paddle.optimizer.Adam(learning_rate=LR, parameters=net.parameters())
# Optimization loop
for itr in range(1, ITR + 1):
# Forward propagation calculates the loss function and returns the estimated energy spectrum
loss, loss_components, cir = net(hamiltonian, N)
# Under the dynamic graph mechanism, use back propagation to minimize the loss function
loss.backward()
opt.minimize(loss)
opt.clear_grad()
# Print training results
    if itr % 10 == 0:
        print('iter:', itr, 'loss:', '%.4f' % loss.numpy()[0])
if itr == ITR:
print("\nThe trained circuit:")
print(cir)
# -
# ## Benchmarking
#
# We have now completed the training of the quantum neural network, and we will verify the results by comparing them with theoretical values.
# - The theoretical Hamiltonian eigenvalues are solved by the linear algebra package in NumPy;
# - We compare the energy of each energy level obtained by training QNN with the theoretical value.
# - It can be seen that the training output is very close to the exact value.
# +
print('The estimated ground state energy is: ', loss_components[0].numpy())
print('The theoretical ground state energy: ',
numpy.linalg.eigh(H)[0][0])
print('The estimated 1st excited state energy is: ', loss_components[1].numpy())
print('The theoretical 1st excited state energy: ', numpy.linalg.eigh(H)[0][1])
print('The estimated 2nd excited state energy is: ', loss_components[2].numpy())
print('The theoretical 2nd excited state energy: ', numpy.linalg.eigh(H)[0][2])
print('The estimated 3rd excited state energy is: ', loss_components[3].numpy())
print('The theoretical 3rd excited state energy: ', numpy.linalg.eigh(H)[0][3])
# -
# _______
#
# ## References
#
# [1] <NAME>. et al. A variational eigenvalue solver on a photonic quantum processor. [Nat. Commun. 5, 4213 (2014).](https://www.nature.com/articles/ncomms5213)
#
# [2] <NAME>., <NAME>., <NAME>., <NAME>. & <NAME>. Quantum computational chemistry. [Rev. Mod. Phys. 92, 015003 (2020).](https://journals.aps.org/rmp/abstract/10.1103/RevModPhys.92.015003)
#
# [3] <NAME>. et al. Quantum chemistry in the age of quantum computing. [Chem. Rev. 119, 10856โ10915 (2019).](https://pubs.acs.org/doi/abs/10.1021/acs.chemrev.8b00803)
#
# [4] <NAME>., <NAME>. & <NAME>. Subspace-search variational quantum eigensolver for excited states. [Phys. Rev. Res. 1, 033062 (2019).](https://journals.aps.org/prresearch/pdf/10.1103/PhysRevResearch.1.033062)
#
# [5] <NAME>. & <NAME>. Optimal quantum circuits for general two-qubit gates. [Phys. Rev. A 69, 032315 (2004).](https://journals.aps.org/pra/abstract/10.1103/PhysRevA.69.032315)
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Next Permutation
#
# Given an integer, find the next permutation of it in absolute order. For example, given 48975, the next permutation would be 49578.
#
# ## Solution...
# - scan from the right to find the first digit that is smaller than the digit after it, then swap it with the smallest digit in the tail that is greater than it.
# - after the swap, reverse the tail so it is sorted in ascending order, giving the smallest larger permutation.
#
def next_permutation(num):
    num, n = list(str(num)), len(str(num))
    # find the start of the longest non-increasing suffix (the "tail")
    tail_start = n - 1
    while tail_start > 0 and num[tail_start - 1] >= num[tail_start]:
        tail_start -= 1
    # if the entire list is sorted in descending order, there's no larger permutation
    if tail_start == 0:
        return None
    # find the smallest digit in the tail that is greater than the pivot num[tail_start - 1]
    swap = tail_start
    while swap < n and num[tail_start - 1] < num[swap]:
        swap += 1
    swap -= 1
    # perform the swap
    num[tail_start - 1], num[swap] = num[swap], num[tail_start - 1]
    # reverse the tail so the suffix becomes ascending
    start, end = tail_start, len(num) - 1
    while start < end:
        num[start], num[end] = num[end], num[start]
        start += 1; end -= 1
    return int(''.join(num))
# +
# worst case complexity is O(N)
# -
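# A brute-force cross-check using itertools (exponential time, short inputs only), written
# independently of the implementation above:

```python
from itertools import permutations

def next_perm_brute(num):
    digits = str(num)
    # enumerate all digit permutations and keep those strictly greater than num
    bigger = sorted(int(''.join(p)) for p in set(permutations(digits))
                    if int(''.join(p)) > num)
    return bigger[0] if bigger else None

print(next_perm_brute(48975))  # 49578, matching the example in the prompt
print(next_perm_brute(54321))  # None (already the largest permutation)
```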
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: venv_multimodal2
# language: python
# name: venv_multimodal2
# ---
# +
#import argparse
import datetime
import sys
import json
from collections import defaultdict
from pathlib import Path
from tempfile import mkdtemp
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch import optim
from torch.utils.data import Subset, DataLoader
from torchnet.dataset import TensorDataset, ResampleDataset
import matplotlib.pyplot as plt
import math
import models
import objectives_dev as objectives
#from utils import Logger, Timer, save_model, save_vars, unpack_data
from utils_dev import Logger, Timer, save_model, save_vars, unpack_data, EarlyStopping, vade_kld
# +
#args
experiment = 'test'
model = 'rna_atac_dev' # used to try out the VAE
obj = 'elbo'
K = 1
looser = False
llik_scaling = 0
batch_size = 1024
epochs = 100
n_centroids = 10
latent_dim = 20
num_hidden_layers = 1
hidden_dim = [128, 128]
learn_prior = False
logp = False
print_freq = 0
no_analytics = False
seed = 1
dataSize = []
r_dim = a_dim = []
class params():
def __init__(self,
experiment,
model,
obj,
K,
looser,
llik_scaling,
batch_size,
epochs,
n_centroids,
latent_dim,
num_hidden_layers,
hidden_dim,
learn_prior,
logp,
print_freq,
no_analytics,
seed,
dataSize,
r_dim,
a_dim):
self.experiment = experiment
self.model = model
self.obj = obj
self.K = K
self.looser = looser
self.llik_scaling = llik_scaling
self.batch_size = batch_size
self.epochs = epochs
self.n_centroids = n_centroids
self.latent_dim = latent_dim
self.num_hidden_layers = num_hidden_layers
self.hidden_dim = hidden_dim
self.learn_prior = learn_prior
self.logp = logp
self.print_freq = print_freq
self.no_analytics = no_analytics
self.seed = seed
self.dataSize = dataSize
self.r_dim = r_dim
self.a_dim = a_dim
args = params(experiment,
model,
obj,
K,
looser,
llik_scaling,
batch_size,
epochs,
n_centroids,
latent_dim,
num_hidden_layers,
hidden_dim,
learn_prior,
logp,
print_freq,
no_analytics,
seed,
dataSize,
r_dim,
a_dim)
# -
# random seed
# https://pytorch.org/docs/stable/notes/randomness.html
torch.backends.cudnn.benchmark = True
torch.manual_seed(args.seed)
np.random.seed(args.seed)
device = torch.device("cpu")
# set up run path
#runId = datetime.datetime.now().isoformat()
runId ='test'
experiment_dir = Path('../experiments/' + args.experiment)
experiment_dir.mkdir(parents=True, exist_ok=True)
runPath = mkdtemp(prefix=runId, dir=str(experiment_dir))
print(runPath)
# +
#train_loader = model.getDataLoaders(batch_size=args.batch_size, device=device) #for train only
# -
dataset_path = '../data/Paired-seq/combined/'
r_dataset = torch.load(dataset_path + 'r_dataset.rar')
a_dataset = torch.load(dataset_path + 'a_dataset.rar')
num = 5000
#num = 25845
r_dataset = Subset(r_dataset, list(range(num)))
a_dataset = Subset(a_dataset, list(range(num)))
train_dataset= TensorDataset([
#ResampleDataset(r_dataset),
#ResampleDataset(a_dataset)
r_dataset,
a_dataset
])
train_loader = DataLoader(train_dataset, batch_size=args.batch_size)
#args.r_dim = r_dataset.data.shape[1]
#args.a_dim = a_dataset.data.shape[1]
args.r_dim = r_dataset.dataset.shape[1]
args.a_dim = a_dataset.dataset.shape[1]
r_dataset = a_dataset = train_dataset = None
# load model
modelC = getattr(models, 'VAE_{}'.format(args.model))
model = modelC(args).to(device)
# preparation for training
optimizer = optim.Adam(filter(lambda p: p.requires_grad, model.parameters()),
lr=1e-4, amsgrad=True)
#pre_objective = getattr(objectives, 'elbo_ae')
pre_objective = getattr(objectives, 'm_elbo_naive_ae')
#pretrained_path = '../data/Paired-seq/combined/RNA-seq/'
pretrained_path = '../data/Paired-seq/combined/subset'
def pretrain(epoch, agg):
model.train()
b_loss = 0
for i, dataT in enumerate(train_loader):
#data = unpack_data(dataT, device=device) #unimodal
data = dataT #multimodal
optimizer.zero_grad()
#loss = -objective(model, data, K=args.K)
loss = -pre_objective(model, data, K=args.K)
loss.backward()
optimizer.step()
b_loss += loss.item()
if args.print_freq > 0 and i % args.print_freq == 0:
print("iteration {:04d}: loss: {:6.3f}".format(i, loss.item() / args.batch_size))
agg['train_loss'].append(b_loss / len(train_loader.dataset))
print('====> Epoch: {:03d} Train loss: {:.4f}'.format(epoch, agg['train_loss'][-1]))
with Timer('MM-VAE') as t:
agg = defaultdict(list)
pretrain_epoch = 5
for epoch in range(1, pretrain_epoch + 1):
pretrain(epoch, agg)
save_model(model, pretrained_path + '/model.rar')
save_vars(agg, pretrained_path + '/losses.rar')
print('Loading model {} from {}'.format(model.modelName, pretrained_path))
model.load_state_dict(torch.load(pretrained_path + '/model.rar', map_location=device))
model._pz_params = model._pz_params
rescue = '../experiments/test/2020-04-28T17:13:31.932565bjdxippz'
print('Loading model {} from {}'.format(model.modelName, rescue))
model.load_state_dict(torch.load(rescue + '/model.rar.old', map_location=device))
model._pz_params = model._pz_params
fit = False
model.init_gmm_params(train_loader, fit=fit, var=0.1, device=device)
#model.init_gmm_params_separate(train_loader, device=device)
pre_pi= model._pz_params[0].detach()
pre_mu = model._pz_params[1].detach()
pre_var = model._pz_params[2].detach()
print(pre_pi)
print(pre_mu)
print(pre_var)
print(model._pz_params[0]/sum(model._pz_params[0]))
print(model._pz_params[1])
print(model._pz_params[2])
#training
optimizer = optim.Adam(filter(lambda p: p.requires_grad, model.parameters()),
lr=1e-4, amsgrad=True)
#objective = getattr(objectives, 'elbo_vade')
objective = getattr(objectives, 'm_elbo_naive_vade')
#objective = getattr(objectives, 'm_elbo_vade')
#objective = getattr(objectives, 'm_elbo_vade_warmup')
#objective = getattr(objectives, 'm_elbo_vade_separate')
def train(epoch, agg, W=30):
model.train()
b_loss = 0
adj = 1
#beta = (epoch - 1) / W if epoch <= W else 1
alpha = 100
beta = alpha * (epoch - 1) / W if epoch<=W else alpha
for i, dataT in enumerate(train_loader):
#data = unpack_data(dataT, device=device) #unimodal
data = dataT #multimodal
optimizer.zero_grad()
if objective==getattr(objectives, 'm_elbo_vade_warmup'):
loss = -objective(model, data, beta, K=args.K)
else:
loss = -objective(model, data, adj=adj, K=args.K)
loss.backward()
optimizer.step()
b_loss += loss.item()
if args.print_freq > 0 and i % args.print_freq == 0:
print("iteration {:04d}: loss: {:6.3f}".format(i, loss.item() / args.batch_size))
agg['train_loss'].append(b_loss / len(train_loader.dataset))
print('====> Epoch: {:03d} Train loss: {:.4f}'.format(epoch, agg['train_loss'][-1]))
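# The KL warm-up schedule inside `train` can be sketched in isolation (same formula; the
# defaults W=30 and alpha=100 mirror the values used above):

```python
def warmup_beta(epoch, W=30, alpha=100):
    # ramp linearly from 0 to alpha over the first W epochs, then hold at alpha
    return alpha * (epoch - 1) / W if epoch <= W else alpha

print(warmup_beta(1), warmup_beta(16), warmup_beta(31))  # 0.0 50.0 100
```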
# +
model.train()
b_loss = 0
adj = 1
#beta = (epoch - 1) / W if epoch <= W else 1
alpha = 100
#beta = alpha * (epoch - 1) / W if epoch<=W else alpha
for i, dataT in enumerate(train_loader):
#data = unpack_data(dataT, device=device) #unimodal
data = dataT #multimodal
optimizer.zero_grad()
if objective==getattr(objectives, 'm_elbo_vade_warmup'):
loss = -objective(model, data, beta, K=args.K)
else:
loss = -objective(model, data, adj=adj, K=args.K)
loss.backward()
optimizer.step()
b_loss += loss.item()
if i == 0:
break
# -
from graphviz import Source
from torchviz import make_dot
arch = make_dot(loss)
Source(arch).render('../data/arch.png')
with Timer('MM-VAE') as t:
agg = defaultdict(list)
# initialize the early_stopping object
early_stopping = EarlyStopping(patience=10, verbose=True)
for epoch in range(1, args.epochs + 1):
train(epoch, agg)
#save_model(model, runPath + '/model.rar')
save_vars(agg, runPath + '/losses.rar')
# early_stopping needs the validation loss to check if it has decresed,
# and if it has, it will make a checkpoint of the current model
#validate(epoch, agg)
#early_stopping(agg['val_loss'][-1], model, runPath)
early_stopping(agg['train_loss'][-1], model, runPath)
if early_stopping.early_stop:
print('Early stopping')
break
#test(epoch, agg)
#MMVAE get all data
for i, d in enumerate(train_loader):
if i == 0:
data0 = d[0]
data1 = d[1]
else:
data0 = torch.cat([data0, d[0]], dim=0)
data1 = torch.cat([data1, d[1]], dim=0)
data = [data0.to(device), data1.to(device)]
model.visualize_latent(data, runPath, epoch=1, tsne=True, sampling=False)
#MMVAE get n data
n = 1
for i, d in enumerate(train_loader):
if i == 0:
data0 = d[0]
data1 = d[1]
elif i < n:
data0 = torch.cat([data0, d[0]], dim=0)
data1 = torch.cat([data1, d[1]], dim=0)
data = [data0.to(device), data1.to(device)]
#testing m_elbo_naive_vade
x = data
qz_xs, px_zs, zss = model(x)
n_centroids = model.params.n_centroids
lpx_zs, klds = [], []
model.vaes[0]._qz_x_params
for r, qz_x in enumerate(qz_xs):
    zs = zss[r]
    kld = vade_kld(model, zs, r)
    klds.append(kld)
    for d, px_z in enumerate(px_zs[r]):
        lpx_z = px_z.log_prob(x[d]) * model.vaes[d].llik_scaling
        #lpx_zs.append(lpx_z.view(*px_z.batch_shape[:2], -1).sum(-1).squeeze()) #added squeeze()
        lpx_zs.append(lpx_z.sum(-1))
obj = (1 / len(model.vaes)) * (torch.stack(lpx_zs).sum(0) - torch.stack(klds).sum(0))
2**3
gamma
lgamma
torch.stack(lpx_zs).mean(1)
klds
# +
r = 0
zs = zss[r]
n_centroids = model.params.n_centroids
gamma, lgamma, mu_c, var_c, pi = model.get_gamma(zs)
#mu, logvar = model.vaes[r]._qz_x_params
mu, var = model.vaes[r]._qz_x_params
mu_expand = mu.unsqueeze(2).expand(mu.size(0), mu.size(1), n_centroids)
#logvar_expand = logvar.unsqueeze(2).expand(logvar.size(0), logvar.size(1), n_centroids)
var_expand = var.unsqueeze(2).expand(var.size(0), var.size(1), n_centroids)
#lpz_c = -0.5*torch.sum(gamma*torch.sum(math.log(2*math.pi) + \
#                                       torch.log(var_c) + \
#                                       torch.exp(logvar_expand)/var_c + \
#                                       (mu_expand-mu_c)**2/var_c, dim=1), dim=1) # log p(z|c)
lpz_c = -0.5*torch.sum(gamma*torch.sum(math.log(2*math.pi) +
                                       torch.log(var_c) +
                                       var_expand/var_c +
                                       (mu_expand-mu_c)**2/var_c, dim=1), dim=1) # log p(z|c)
lpc = torch.sum(gamma*torch.log(pi), 1) # log p(c)
lqz_x = -0.5*torch.sum(1+torch.log(var)+math.log(2*math.pi), 1) #see VaDE paper # log q(z|x)
lqc_x = torch.sum(gamma*(lgamma), 1) # log q(c|x)
kld = -lpz_c - lpc + lqz_x + lqc_x
# -
lpz_c
lpc
lqz_x
lqc_x
-lpz_c - lpc + lqz_x + lqc_x
| src/.ipynb_checkpoints/vade_main_notebook-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [default]
# language: python
# name: python3
# ---
# ## 105 - Training Regressions
#
# This example notebook is similar to
# [Notebook 102](102 - Regression Example with Flight Delay Dataset.ipynb).
# In this example, we will demonstrate the use of `DataConversion()` in two
# ways. First, to convert the data type of several columns after the dataset
# has been read in to the Spark DataFrame instead of specifying the data types
# as the file is read in. Second, to convert columns to categorical columns
# instead of iterating over the columns and applying the `StringIndexer`.
#
# This sample demonstrates how to use the following APIs:
# - [`TrainRegressor`
# ](http://mmlspark.azureedge.net/docs/pyspark/TrainRegressor.html)
# - [`ComputePerInstanceStatistics`
# ](http://mmlspark.azureedge.net/docs/pyspark/ComputePerInstanceStatistics.html)
# - [`DataConversion`
# ](http://mmlspark.azureedge.net/docs/pyspark/DataConversion.html)
#
# First, import the pandas package
import pandas as pd
# Next, import the CSV dataset: retrieve the file if needed, save it locally,
# read the data into a pandas dataframe via `read_csv()`, then convert it to
# a Spark dataframe.
#
# Print the schema of the dataframe, and note the columns that are `long`.
dataFile = "On_Time_Performance_2012_9.csv"
import os, urllib
if not os.path.isfile(dataFile):
    urllib.request.urlretrieve("https://mmlspark.azureedge.net/datasets/" + dataFile, dataFile)
flightDelay = spark.createDataFrame(pd.read_csv(dataFile))
# print some basic info
print("records read: " + str(flightDelay.count()))
print("Schema: ")
flightDelay.printSchema()
flightDelay.limit(10).toPandas()
# Use the `DataConversion` transform API to convert the columns listed to
# double.
#
# The `DataConversion` API accepts the following types for the `convertTo`
# parameter:
# * `boolean`
# * `byte`
# * `short`
# * `integer`
# * `long`
# * `float`
# * `double`
# * `string`
# * `toCategorical`
# * `clearCategorical`
# * `date` -- converts a string or long to a date of the format
# "yyyy-MM-dd HH:mm:ss" unless another format is specified by
# the `dateTimeFormat` parameter.
#
# Again, print the schema and note that the columns are now `double`
# instead of long.
from mmlspark import DataConversion
flightDelay = DataConversion(cols=["Quarter", "Month", "DayofMonth", "DayOfWeek",
                                   "OriginAirportID", "DestAirportID",
                                   "CRSDepTime", "CRSArrTime"],
                             convertTo="double") \
    .transform(flightDelay)
flightDelay.printSchema()
flightDelay.limit(10).toPandas()
# Split the dataset into train and test sets.
train, test = flightDelay.randomSplit([0.75, 0.25])
# Create a regressor model and train it on the dataset.
#
# First, use `DataConversion` to convert the columns `Carrier`, `DepTimeBlk`,
# and `ArrTimeBlk` to categorical data. Recall that in Notebook 102, this
# was accomplished by iterating over the columns and converting the strings
# to index values using the `StringIndexer` API. The `DataConversion` API
# simplifies the task by allowing you to specify all columns that will have
# the same end type in a single command.
#
# Create a LinearRegression model using the Limited-memory BFGS solver
# (`l-bfgs`), an `ElasticNet` mixing parameter of `0.3`, and a `Regularization`
# of `0.1`.
#
# Train the model with the `TrainRegressor` API fit on the training dataset.
# +
from mmlspark import TrainRegressor, TrainedRegressorModel
from pyspark.ml.regression import LinearRegression
trainCat = DataConversion(cols=["Carrier", "DepTimeBlk", "ArrTimeBlk"],
                          convertTo="toCategorical") \
    .transform(train)
testCat = DataConversion(cols=["Carrier", "DepTimeBlk", "ArrTimeBlk"],
                         convertTo="toCategorical") \
    .transform(test)
lr = LinearRegression().setSolver("l-bfgs").setRegParam(0.1) \
    .setElasticNetParam(0.3)
model = TrainRegressor(model=lr, labelCol="ArrDelay").fit(trainCat)
# -
# Score the regressor on the test data.
scoredData = model.transform(testCat)
scoredData.limit(10).toPandas()
# Compute model metrics against the entire scored dataset
from mmlspark import ComputeModelStatistics
metrics = ComputeModelStatistics().transform(scoredData)
metrics.toPandas()
# Finally, compute and show statistics on individual predictions in the test
# dataset, demonstrating the usage of `ComputePerInstanceStatistics`
from mmlspark import ComputePerInstanceStatistics
evalPerInstance = ComputePerInstanceStatistics().transform(scoredData)
evalPerInstance.select("ArrDelay", "Scores", "L1_loss", "L2_loss") \
    .limit(10).toPandas()
| notebooks/samples/105 - Regression with DataConversion.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # Network inference of categorical variables: non-sequential data
# +
import sys
import numpy as np
from scipy import linalg
from sklearn.preprocessing import OneHotEncoder
import matplotlib.pyplot as plt
# %matplotlib inline
import inference
import fem
# +
# setting parameter:
np.random.seed(1)
n = 40 # number of positions
m = 3 # number of values at each position
l = int(4*((n*m)**2)) # number of samples
g = 2.
nm = n*m
# -
def itab(n, m):
    i1 = np.zeros(n)
    i2 = np.zeros(n)
    for i in range(n):
        i1[i] = i*m
        i2[i] = (i+1)*m
    return i1.astype(int), i2.astype(int)
# generate coupling matrix w0:
def generate_interactions(n, m, g):
    nm = n*m
    w = np.random.normal(0.0, g/np.sqrt(nm), size=(nm, nm))
    i1tab, i2tab = itab(n, m)
    for i in range(n):
        i1, i2 = i1tab[i], i2tab[i]
        w[i1:i2,:] -= w[i1:i2,:].mean(axis=0)
    for i in range(n):
        i1, i2 = i1tab[i], i2tab[i]
        w[i1:i2,i1:i2] = 0.  # no self-interactions
    for i in range(nm):
        for j in range(nm):
            if j > i: w[i,j] = w[j,i]
    return w
i1tab,i2tab = itab(n,m)
w0 = inference.generate_interactions(n,m,g)
# +
#plt.imshow(w0,cmap='rainbow',origin='lower')
#plt.clim(-0.5,0.5)
#plt.colorbar(fraction=0.045, pad=0.05,ticks=[-0.5,0,0.5])
#plt.show()
#print(w0)
# -
def generate_sequences2(w, n, m, l):
    i1tab, i2tab = itab(n, m)
    # initial s (categorical variables)
    s_ini = np.random.randint(0, m, size=(l, n))  # integer values
    #print(s_ini)
    # onehot encoder
    enc = OneHotEncoder(n_values=m)
    s = enc.fit_transform(s_ini).toarray()
    print(s)
    nrepeat = 500
    for irepeat in range(nrepeat):
        for i in range(n):
            i1, i2 = i1tab[i], i2tab[i]
            h = s.dot(w[i1:i2,:].T)  # h[t,i1:i2]
            h_old = (s[:,i1:i2]*h).sum(axis=1)  # h[t,i0]
            k = np.random.randint(0, m, size=l)
            for t in range(l):
                if np.exp(h[t,k[t]] - h_old[t]) > np.random.rand():
                    s[t,i1:i2] = 0.
                    s[t,i1+k[t]] = 1.
    return s
# 2018.11.07: Tai
def nrgy_tai(s, w):
    l = s.shape[0]
    n, m = 20, 3
    i1tab, i2tab = itab(n, m)
    p = np.zeros((l, n))
    for i in range(n):
        i1, i2 = i1tab[i], i2tab[i]
        h = s.dot(w[i1:i2,:].T)
        #e = (s[:,i1:i2]*h).sum(axis=1)
        #p[:,i] = np.exp(e)
        #p_sum = np.sum(np.exp(h),axis=1)
        #p[:,i] /= p_sum
        p[:,i] = np.exp((s[:,i1:i2]*h).sum(axis=1))/(np.exp(h).sum(axis=1))
    #like = p.sum(axis=1)
    return np.sum(np.log(p), axis=1)
# Vipul:
def nrgy_vp(onehot, w):
    nrgy = onehot*(onehot.dot(w.T))
    # print(nrgy - np.log(2*np.cosh(nrgy)))
    return np.sum(nrgy - np.log(2*np.cosh(nrgy)), axis=1)  # ln prob

# equilibrium
def nrgy(onehot, w):
    nrgy = onehot*(onehot.dot(w.T))
    # print(nrgy - np.log(2*np.cosh(nrgy)))
    return np.sum(nrgy, axis=1)  # - np.log(2*np.cosh(nrgy)),axis=1) #ln prob
# +
# 2018.11.07: equilibrium
def generate_sequences_vp_tai(w, n_positions, n_residues, n_seq):
    n_size = n_residues*n_positions
    n_trial = 10*(n_size)  # monte carlo steps to find the right sequences
    b = np.zeros((n_size))
    trial_seq = np.tile(np.random.randint(0, n_residues, size=(n_positions)), (n_seq, 1))
    print(trial_seq[0])
    enc = OneHotEncoder(n_values=n_residues)
    onehot = enc.fit_transform(trial_seq).toarray()
    old_nrgy = np.sum(onehot*(onehot.dot(w.T)), axis=1)
    for trial in range(n_trial):
        for index in range(n_positions):
            r_trial = np.random.randint(0, n_residues, size=(n_seq))
            mod_seq = trial_seq.copy()
            mod_seq[:,index] = r_trial
            onehot = enc.fit_transform(mod_seq).toarray()
            mod_nrgy = np.sum(onehot*(onehot.dot(w.T)), axis=1)
            seq_change = np.exp((mod_nrgy-old_nrgy)) > np.random.rand(n_seq)
            trial_seq[seq_change,index] = r_trial[seq_change]
            old_nrgy[seq_change] = mod_nrgy[seq_change]
        if trial%(n_size) == 0: print('after', np.mean(old_nrgy))
    print(trial_seq[:5,:10])
    return enc.fit_transform(trial_seq).toarray()
# -
s = generate_sequences_vp_tai(w0,n,m,l)
def generate_sequences_time_series(s_ini, w, n, m):
    i1tab, i2tab = itab(n, m)
    l = s_ini.shape[0]
    # initial s (categorical variables)
    #s_ini = np.random.randint(0,m,size=(l,n)) # integer values
    #print(s_ini)
    # onehot encoder
    enc = OneHotEncoder(n_values=m)
    s = enc.fit_transform(s_ini).toarray()
    #print(s)
    ntrial = 20*m
    for t in range(l-1):
        h = np.sum(s[t,:]*w[:,:], axis=1)
        for i in range(n):
            i1, i2 = i1tab[i], i2tab[i]
            k = np.random.randint(0, m)
            for itrial in range(ntrial):
                k2 = np.random.randint(0, m)
                while k2 == k:
                    k2 = np.random.randint(0, m)
                if np.exp(h[i1+k2] - h[i1+k]) > np.random.rand():
                    k = k2
            s[t+1,i1:i2] = 0.
            s[t+1,i1+k] = 1.
    return s
# +
# generate non-sequences from time series
#l1 = 100
#s_ini = np.random.randint(0,m,size=(l1,n)) # integer values
#s = np.zeros((l,nm))
#for t in range(l):
# np.random.seed(t+10)
# s[t,:] = generate_sequences_time_series(s_ini,w0,n,m)[-1,:]
# -
print(s.shape)
print(s[:10,:10])
# +
## 2018.11.07: for non-sequential data
def fit_additive(s, n, m):
    nloop = 10
    i1tab, i2tab = itab(n, m)
    nm = n*m
    nm1 = nm - m
    w_infer = np.zeros((nm, nm))
    for i in range(n):
        i1, i2 = i1tab[i], i2tab[i]
        # remove column i
        x = np.hstack([s[:,:i1], s[:,i2:]])
        x_av = np.mean(x, axis=0)
        dx = x - x_av
        c = np.cov(dx, rowvar=False, bias=True)
        c_inv = linalg.pinv(c, rcond=1e-15)
        #print(c_inv.shape)
        h = s[:,i1:i2].copy()
        for iloop in range(nloop):
            h_av = h.mean(axis=0)
            dh = h - h_av
            dhdx = dh[:,:,np.newaxis]*dx[:,np.newaxis,:]
            dhdx_av = dhdx.mean(axis=0)
            w = np.dot(dhdx_av, c_inv)
            #w = w - w.mean(axis=0)
            h = np.dot(x, w.T)
            p = np.exp(h)
            p_sum = p.sum(axis=1)
            #p /= p_sum[:,np.newaxis]
            for k in range(m):
                p[:,k] = p[:,k]/p_sum[:]
            h += s[:,i1:i2] - p
        w_infer[i1:i2,:i1] = w[:,:i1]
        w_infer[i1:i2,i2:] = w[:,i1:]
    return w_infer
w2 = fit_additive(s,n,m)
plt.plot([-1,1],[-1,1],'r--')
plt.scatter(w0,w2)
# +
i1tab, i2tab = itab(n, m)
nloop = 5
nm1 = nm - m
w_infer = np.zeros((nm, nm))
wini = np.random.normal(0.0, 1./np.sqrt(nm), size=(nm, nm1))
for i in range(n):
    i1, i2 = i1tab[i], i2tab[i]
    x = np.hstack([s[:,:i1], s[:,i2:]])
    y = s.copy()
    # covariance[ia,ib]
    cab_inv = np.empty((m, m, nm1, nm1))
    eps = np.empty((m, m, l))
    for ia in range(m):
        for ib in range(m):
            if ib != ia:
                eps[ia,ib,:] = y[:,i1+ia] - y[:,i1+ib]
                which_ab = eps[ia,ib,:] != 0.
                xab = x[which_ab]
                # ----------------------------
                xab_av = np.mean(xab, axis=0)
                dxab = xab - xab_av
                cab = np.cov(dxab, rowvar=False, bias=True)
                cab_inv[ia,ib,:,:] = linalg.pinv(cab, rcond=1e-15)
    w = wini[i1:i2,:].copy()
    for iloop in range(nloop):
        h = np.dot(x, w.T)
        for ia in range(m):
            wa = np.zeros(nm1)
            for ib in range(m):
                if ib != ia:
                    which_ab = eps[ia,ib,:] != 0.
                    eps_ab = eps[ia,ib,which_ab]
                    xab = x[which_ab]
                    # ----------------------------
                    xab_av = np.mean(xab, axis=0)
                    dxab = xab - xab_av
                    h_ab = h[which_ab,ia] - h[which_ab,ib]
                    ha = np.divide(eps_ab*h_ab, np.tanh(h_ab/2.), out=np.zeros_like(h_ab), where=h_ab!=0)
                    dhdx = (ha - ha.mean())[:,np.newaxis]*dxab
                    dhdx_av = dhdx.mean(axis=0)
                    wab = cab_inv[ia,ib,:,:].dot(dhdx_av)  # wa - wb
                    wa += wab
            w[ia,:] = wa/m
    w_infer[i1:i2,:i1] = w[:,:i1]
    w_infer[i1:i2,i2:] = w[:,i1:]
#return w_infer
# -
plt.plot([-1,1],[-1,1],'r--')
plt.scatter(w0,w_infer)
#plt.scatter(w0[0:3,3:],w[0:3,:])
| old_versions/1main-v5-MCMC-symmetry-equilibrium-ln4-update1spin-Copy3.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .r
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Racket
# language: racket
# name: racket
# ---
(require RacketFrames)
(define series-integer (new-ISeries (vector 1 2 3 4)
(build-index-from-labels (list 'a 'b 'c 'd))))
((iseries-referencer series-integer) 0)
(iseries-iref series-integer (list 0))
# +
(define columns-integer
(list
(cons 'col1 (new-ISeries (vector 1 2 3 4)
(build-index-from-labels (list 'a 'b 'c 'd))))
(cons 'col2 (new-ISeries (vector 5 6 7 8)
(build-index-from-labels (list 'e 'f 'g 'h))))
(cons 'col3 (new-ISeries (vector 9 10 11 12)
(build-index-from-labels (list 'i 'j 'k 'l))))))
; create new data-frame-integer
(define data-frame-integer (new-data-frame columns-integer))
# -
data-frame-integer
(data-frame-names data-frame-integer)
(set! data-frame-integer (data-frame-rename data-frame-integer 'col1 'col-one))
(data-frame-names data-frame-integer)
(data-frame-series data-frame-integer 'col-one)
(data-frame-head data-frame-integer)
(data-frame-dim data-frame-integer)
(ColumnInfo 'first 'CATEGORICAL)
# +
(define columns-integer
(list
(cons 'col1 (new-ISeries (vector 1 2 3 4 2 ) #f))
(cons 'col2 (new-ISeries (vector 5 6 7 8 6) #f))
(cons 'col3 (new-ISeries (vector 9 10 11 12 17) #f))
(cons 'col4 (new-ISeries (vector 13 14 15 16 18) #f))))
(define columns-categorical
(list
(cons 'col1 (new-CSeries (vector 'a 'b 'c 'd 'e)))
(cons 'col2 (new-CSeries (vector 'e 'f 'g 'h 'i)))
(cons 'col3 (new-CSeries (vector 'j 'k 'l 'm 'n)))))
; create new data-frame-integer
(define data-frame-integer (new-data-frame columns-integer))
; create new data-frame-categorical
(define data-frame-categorical (new-data-frame columns-categorical))
(data-frame-write-tab data-frame-integer (current-output-port))
(displayln "data-frame-groupby")
(data-frame-groupby data-frame-integer (list 'col1))
(data-frame-groupby data-frame-integer (list 'col2))
(data-frame-groupby data-frame-integer (list 'col1 'col2))
# +
(define columns-mixed-5
(list
(cons 'col1 (new-ISeries (vector 1 2 3 4) #f))
(cons 'col2 (new-CSeries (vector 'a 'b 'c 'd)))
(cons 'col3 (new-ISeries (vector 21 22 23 24) #f))))
(define columns-mixed-6
(list
(cons 'col1 (new-ISeries (vector 11 21 31 41) #f))
(cons 'col2 (new-CSeries (vector 'a 'b 'g 'd)))
(cons 'col3 (new-ISeries (vector 22 22 23 24) #f))))
; create new data-frame-mixed-5
(define data-frame-mixed-5 (new-data-frame columns-mixed-5))
; create new data-frame-mixed-6
(define data-frame-mixed-6 (new-data-frame columns-mixed-6))
(data-frame-write-tab data-frame-mixed-5 (current-output-port))
(data-frame-write-tab data-frame-mixed-6 (current-output-port))
(data-frame-write-tab (data-frame-join-left data-frame-mixed-5 data-frame-mixed-6 #:on (list 'col3)) (current-output-port))
(data-frame-write-tab (data-frame-join-inner data-frame-mixed-5 data-frame-mixed-6 #:on (list 'col2)) (current-output-port))
(data-frame-write-tab (data-frame-join-right data-frame-mixed-5 data-frame-mixed-6 #:on (list 'col2)) (current-output-port))
(data-frame-write-tab (data-frame-join-outer data-frame-mixed-5 data-frame-mixed-6 #:on (list 'col2)) (current-output-port))
# +
(define columns-mixed-1
(list
(cons 'col1 (new-ISeries (vector 1 2 3 4) #f))
(cons 'col3 (new-CSeries (vector 'a 'b 'c 'd)))
(cons 'col4 (new-ISeries (vector 21 22 23 24) #f))))
(define columns-mixed-2
(list
(cons 'col1 (new-ISeries (vector 1 2 3 4) #f))
(cons 'col3 (new-CSeries (vector 'e 'f 'g 'h)))
(cons 'col4 (new-ISeries (vector 1 2 3 4) #f))))
; create new data-frame-mixed-1
(define data-frame-mixed-1 (new-data-frame columns-mixed-1))
; create new data-frame-mixed-2
(define data-frame-mixed-2 (new-data-frame columns-mixed-2))
(displayln "Concat Test")
(data-frame-write-tab data-frame-mixed-1 (current-output-port))
(data-frame-write-tab data-frame-mixed-2 (current-output-port))
(displayln "Vertical Concat")
(data-frame-write-tab (data-frame-concat-vertical data-frame-mixed-1 data-frame-mixed-2) (current-output-port))
(displayln "Horizontal Concat")
(data-frame-write-tab (data-frame-concat-horizontal data-frame-mixed-1 data-frame-mixed-2) (current-output-port))
# -
| racketframes/jupyter-notebooks/racketframes-sample.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### Start from scratch: import matplotlib.pyplot as plt.
# Import package
import matplotlib.pyplot as plt
import numpy as np
pop=np.arange(10,50,dtype=int)
life_exp=np.random.randint(100,500,40)
# ### Build a scatter plot, where pop is mapped on the horizontal axis, and life_exp is mapped on the vertical axis.
# Build Scatter plot
plt.scatter(pop,life_exp)
# ### Finish the script with plt.show() to actually display the plot. Do you see a correlation?
#
# Show plot
plt.show()
| Intermediate Python for Data Science/Matplotlib/02-Scatter plot (2).ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/LaFFF2300/Citie/blob/main/Fajardo_Assignment_5.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="4X9PYaty2s0s"
# #Linear Algebra for ChE
# ##Objectives
# - Be familiar with the fundamental matrix operations.
# - Apply the operations to solve problems.
# - Apply matrix algebra in engineering solutions.
# + id="pZP-S29v2zJg"
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib inline
# + [markdown] id="gwkJL2Sk4fa2"
# #Transposition
# One of the fundamental operations in matrix algebra is transposition. The transpose of a matrix is obtained by flipping its elements over its diagonal; with this, the rows and columns of the original matrix are switched. For a matrix $A$, its transpose is denoted $A^T$. For example:
#
# $$
# M=\begin{bmatrix} 4 & 6 & 3 \\ 1 & 2 & 3 \\ 1 & 3 & 5 \end{bmatrix}
# $$
#
# $$
# M^T=\begin{bmatrix} 4 & 1 & 1 \\ 6 & 2 & 3 \\ 3 & 3 & 5 \end{bmatrix}
# $$
# + colab={"base_uri": "https://localhost:8080/"} id="FUAkgwEI4kEI" outputId="9101b748-97dc-4c0a-9ef3-ed8e6a2022da"
A = np.array([
[4, 6, 3],
[1, 2, 3],
[1, 3, 5]
])
A
# + colab={"base_uri": "https://localhost:8080/"} id="Z-62jxrG9P5O" outputId="cea5a16b-6999-42ee-c546-0e630c0af803"
AT1 = np.transpose(A)
AT1
# + colab={"base_uri": "https://localhost:8080/"} id="qVwzPN3h9TPp" outputId="c1aad0b7-d3f3-447a-82c0-a38c82e76a8a"
AT2 = A.T
AT2
# + colab={"base_uri": "https://localhost:8080/"} id="uwAB5Btf9XG9" outputId="b1bed763-9218-4013-b776-6e8ebf4b6803"
np.array_equiv(AT1, AT2)
# + colab={"base_uri": "https://localhost:8080/"} id="1-krCtaE9ZLc" outputId="f5632fed-ba83-40fe-d374-9de0eb76d3c7"
B = np.array([
[2, 7, 4, 8],
[3, 4, 0, -23],
[7, 14, -3, 9]
])
B.shape
# + colab={"base_uri": "https://localhost:8080/"} id="ntYqNi_pAfS3" outputId="de187125-6072-4dae-817b-edaf6af83ff6"
np.transpose(B).shape
# + colab={"base_uri": "https://localhost:8080/"} id="sLZ6lWFaAiOW" outputId="33cf9271-4124-4e56-938f-00f40b073514"
B.T.shape
# + [markdown] id="7zjKoSB-Amlt"
# Try to create your own matrix (you can try non-squares) to test transposition
# + colab={"base_uri": "https://localhost:8080/"} id="VmOq7G5OApP_" outputId="0b30367f-30c4-4c8c-ca36-f3ebc0eead6b"
alexij= np.array ([
[12, 3, 3],
[0, 3, 6],
[11, 5, 9],
[3, 44,22]
])
print('alexij matrix: \n',alexij, '\n')
print('Shape: ',alexij.shape)
# + colab={"base_uri": "https://localhost:8080/"} id="MTga6LIlArgP" outputId="1dcf8af1-7d0e-4ad9-937f-406a0a1e4168"
print('Transpose Shape: ',np.transpose(alexij).shape)
# + colab={"base_uri": "https://localhost:8080/"} id="RiAXTM6mAvMD" outputId="76318c09-99c8-4241-9bc7-ce19777c75a4"
print('Transpose Shape: ',alexij.T.shape)
# + [markdown] id="sI45nhyuAzPM"
# ##Dot Product / Inner product
# + [markdown] id="VBEwfD6cA4Ne"
# If you recall the dot product from laboratory activity before, we will try to implement the same operation with matrices. In matrix dot product we are going to get the sum of products of the vectors by row-column pairs. So if we have two matrices $X$ and $Y$:
#
# $$X = \begin{bmatrix}x_{(0,0)}&x_{(0,1)}\\ x_{(1,0)}&x_{(1,1)}\end{bmatrix}, Y = \begin{bmatrix}y_{(0,0)}&y_{(0,1)}\\ y_{(1,0)}&y_{(1,1)}\end{bmatrix}$$
#
# The dot product will then be computed as:
# $$X \cdot Y= \begin{bmatrix} x_{(0,0)}*y_{(0,0)} + x_{(0,1)}*y_{(1,0)} & x_{(0,0)}*y_{(0,1)} + x_{(0,1)}*y_{(1,1)} \\ x_{(1,0)}*y_{(0,0)} + x_{(1,1)}*y_{(1,0)} & x_{(1,0)}*y_{(0,1)} + x_{(1,1)}*y_{(1,1)}
# \end{bmatrix}$$
#
# So if we assign values to $X$ and $Y$:
# $$X = \begin{bmatrix}1&2\\ 0&1\end{bmatrix}, Y = \begin{bmatrix}-1&0\\ 2&2\end{bmatrix}$$
# + [markdown] id="bLBojNa5A88U"
# $$X \cdot Y= \begin{bmatrix} 1*-1 + 2*2 & 1*0 + 2*2 \\ 0*-1 + 1*2 & 0*0 + 1*2 \end{bmatrix} = \begin{bmatrix} 3 & 4 \\2 & 2 \end{bmatrix}$$
# This could be achieved programmatically using `np.dot()`, `np.matmul()` or the `@` operator.
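# As a quick check, the worked example above can be reproduced in a few lines (a minimal sketch; `X` and `Y` take the values assigned above):

```python
import numpy as np

# the matrices from the worked example above
X = np.array([[1, 2],
              [0, 1]])
Y = np.array([[-1, 0],
              [2, 2]])

# all three forms compute the same matrix dot product
print(np.dot(X, Y))     # [[3 4] [2 2]]
print(np.matmul(X, Y))
print(X @ Y)
```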
# + id="wUjt7kzLA0JA"
A = np.array([
[9,1],
[9,4]
])
M = np.array([
[8,9],
[3,6]
])
# + id="APXCI3JiBFO-"
P= np.array ([
[1,3,5,7,5,9,0],
[2,4,6,8,10,12,13]
])
# + colab={"base_uri": "https://localhost:8080/"} id="6cAqkd6iBG79" outputId="728007be-cc4f-42d4-af6a-7bac65efbf2d"
np.array_equiv(A,M)
# + colab={"base_uri": "https://localhost:8080/"} id="Ju56IFkiBJDh" outputId="e701a140-ad3d-45c9-9bd2-d30cd0d2acf6"
np.dot(A,M)
# + colab={"base_uri": "https://localhost:8080/"} id="zbrA-HALBLET" outputId="bf444453-4e42-4586-c4d1-afcf3d51a41e"
A.dot(M)
# + colab={"base_uri": "https://localhost:8080/"} id="StCP37esBMv9" outputId="475711c4-f42f-4e4c-f9e6-b0bd25a06220"
np.matmul(A,M)
# + [markdown] id="3RnGKebPBRnN"
# In matrix dot products there are additional rules compared with vector dot products. Since vector dot products were just one-dimensional, there were fewer restrictions. Now that we are dealing with rank-2 arrays, we need to consider some rules:
#
# ### Rule 1: The inner dimensions of the two matrices in question must be the same.
#
# So given a matrix $A$ with a shape of $(a,b)$ where $a$ and $b$ are any integers. If we want to do a dot product between $A$ and another matrix $B$, then matrix $B$ should have a shape of $(b,c)$ where $b$ and $c$ are any integers. So for given the following matrices:
#
# $$A = \begin{bmatrix}2&4\\5&-2\\0&1\end{bmatrix}, B = \begin{bmatrix}1&1\\3&3\\-1&-2\end{bmatrix}, C = \begin{bmatrix}0&1&1\\1&1&2\end{bmatrix}$$
#
# So in this case $A$ has a shape of $(3,2)$, $B$ has a shape of $(3,2)$ and $C$ has a shape of $(2,3)$. So the only matrix pairs that are eligible for a dot product are $A \cdot C$ and $B \cdot C$.
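# The shape rule above can be checked before attempting a product. Here is a small helper (`can_dot` is a hypothetical name, not part of NumPy) applied to the matrices from the example:

```python
import numpy as np

def can_dot(a, b):
    # the dot product a @ b is defined only when the inner dimensions match
    return a.shape[1] == b.shape[0]

A = np.array([[2, 4], [5, -2], [0, 1]])    # shape (3, 2)
B = np.array([[1, 1], [3, 3], [-1, -2]])   # shape (3, 2)
C = np.array([[0, 1, 1], [1, 1, 2]])       # shape (2, 3)

print(can_dot(A, C))   # True  -> A . C is eligible
print(can_dot(B, C))   # True  -> B . C is eligible
print(can_dot(A, B))   # False -> inner dimensions 2 and 3 differ
```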
# + colab={"base_uri": "https://localhost:8080/"} id="dp9dFmaHBj0o" outputId="55004268-ea8d-4f6b-95f3-36f560e71a04"
A = np.array([
[2, 3],
[3, 4],
[5, 7],
[7,9]
])
M = np.array([
[1,4],
[5,6],
[7,8],
[-5,-7]
])
P = np.array([
[1,5,7],
[2,4,8]
])
print(A.shape)
print(M.shape)
print(P.shape)
# + colab={"base_uri": "https://localhost:8080/"} id="dUa3YkuHBpO4" outputId="6c5d737a-7ba8-496b-9120-4062c0dfebe8"
A @ P
# + colab={"base_uri": "https://localhost:8080/"} id="WtHkrHM1Bq99" outputId="78ecb4c2-aaf4-4d7b-e8dc-daf8a5988c5e"
M @ P
# + [markdown] id="G5q0iO50Bwp8"
# If you would notice, the shape of the dot product changed, and it is not the same as any of the matrices we used. The shape of a dot product is actually derived from the shapes of the matrices used: recall that for matrix $A$ with a shape of $(a,b)$ and matrix $B$ with a shape of $(b,c)$, $A \cdot B$ will have a shape of $(a,c)$.
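# A quick way to see this shape rule in action (a minimal sketch with arbitrary random matrices, using fresh names to avoid clobbering the matrices above):

```python
import numpy as np

left = np.random.rand(3, 2)    # shape (a, b) = (3, 2)
right = np.random.rand(2, 5)   # shape (b, c) = (2, 5)
print((left @ right).shape)    # (3, 5) -> (a, c)
```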
# + colab={"base_uri": "https://localhost:8080/"} id="MapjVHBFB0jR" outputId="f8a20ba5-2af4-4791-a373-e941b1bd5766"
A @ M.T
# + colab={"base_uri": "https://localhost:8080/"} id="xqC_lzjSB26_" outputId="f670e481-dded-4e97-cd0a-3645e4e7bf85"
A = np.array([
[1,2,3,0]
])
P = np.array([
[1,0,4,-1]
])
print(A.shape)
print(P.shape)
# + colab={"base_uri": "https://localhost:8080/"} id="4jewFtoVB5Sb" outputId="89a59948-7752-47c0-e0bb-2db3899774d8"
P.T @ A
# + [markdown] id="jXqg2qhJB8r3"
# And you can see that if you try to multiply matrices whose inner dimensions do not match (e.g. `A @ P` here), NumPy raises a `ValueError` pertaining to matrix shape mismatch.
# + [markdown] id="dy-qUD0_BVMP"
# ### Rule 2: Dot Product has special properties
#
# Dot products are prevalent in matrix algebra, this implies that it has several unique properties and it should be considered when formulation solutions:
# 1. $A \cdot B \neq B \cdot A$
# 2. $A \cdot (B \cdot C) = (A \cdot B) \cdot C$
# 3. $A\cdot(B+C) = A\cdot B + A\cdot C$
# 4. $(B+C)\cdot A = B\cdot A + C\cdot A$
# 5. $A\cdot I = A$
# 6. $A\cdot \emptyset = \emptyset$
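# A quick numerical sketch of the six listed properties, using small matrices of my own choosing (`I` is the identity matrix and `Z` the zero matrix):

```python
import numpy as np

X = np.array([[1, 2], [3, 4]])
Y = np.array([[0, 1], [1, 0]])
W = np.array([[2, 0], [0, 2]])
I = np.eye(2)          # identity matrix
Z = np.zeros((2, 2))   # null (empty) matrix

print(np.array_equal(X @ Y, Y @ X))                # property 1: generally False
print(np.array_equal(X @ (Y @ W), (X @ Y) @ W))    # property 2: True
print(np.array_equal(X @ (Y + W), X @ Y + X @ W))  # property 3: True
print(np.array_equal((Y + W) @ X, Y @ X + W @ X))  # property 4: True
print(np.array_equal(X @ I, X))                    # property 5: True
print(np.array_equal(X @ Z, Z))                    # property 6: True
```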
# + id="8rDKzNsbCIAW"
A = np.array([
[13,15,17],
[12,14,16],
[15,17,19]
])
B = np.array([
[2,3,4],
[5,6,7],
[7,14,21]
])
C = np.array([
[10,15,25],
[12,16,18],
[3,16,19]
])
# + colab={"base_uri": "https://localhost:8080/"} id="aiPSj5YYCLZX" outputId="2510989d-8242-4e1f-bb70-55bd88b45ccd"
A.dot(np.zeros(A.shape))
# + colab={"base_uri": "https://localhost:8080/"} id="tZ_U6DvkCPFd" outputId="8c88a749-c73b-44bb-f71f-1a684b373c19"
z_mat = np.zeros(A.shape)
z_mat
# + colab={"base_uri": "https://localhost:8080/"} id="JWJjUOYlCRd8" outputId="42abf630-7884-420f-dc8b-265737089509"
a_dot_z = A.dot(np.zeros(A.shape))
a_dot_z
# + colab={"base_uri": "https://localhost:8080/"} id="dp5TS0gyCTI_" outputId="8cf3fdd0-989c-42f1-f439-614e2145e55d"
np.array_equal(a_dot_z,z_mat)
# + colab={"base_uri": "https://localhost:8080/"} id="ZJmLiRZxCVDb" outputId="d4ce8af5-1680-470b-a22b-d46b8d92e7d7"
null_mat = np.empty(A.shape, dtype=float)
null = np.array(null_mat,dtype=float)
print(null)
np.allclose(a_dot_z,null)
# + [markdown] id="_-pRmvwqCGSv"
# ## Determinant
#
#
# A determinant is a scalar value derived from a square matrix. The determinant is a fundamental and important value used in matrix algebra. Although its practical use will not be evident in this laboratory, it will be greatly used in future lessons.
#
# The determinant of some matrix $A$ is denoted as $det(A)$ or $|A|$. So let's say $A$ is represented as:
# $$A = \begin{bmatrix}a_{(0,0)}&a_{(0,1)}\\a_{(1,0)}&a_{(1,1)}\end{bmatrix}$$
# We can compute for the determinant as:
# $$|A| = a_{(0,0)}*a_{(1,1)} - a_{(1,0)}*a_{(0,1)}$$
# So if we have $A$ as:
# $$A = \begin{bmatrix}7&3\\1&9\end{bmatrix}, |A| = (7)(9)-(1)(3) = 60$$
#
# But what about square matrices beyond the shape $(2,2)$? We can approach this problem using several methods, such as co-factor expansion and the minors method. These are covered in the laboratory lecture, but we can carry out the strenuous computation for high-dimensional matrices programmatically in Python using `np.linalg.det()`.
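# As a sanity check on `np.linalg.det()`, the determinant of a $3 \times 3$ matrix can also be computed by co-factor expansion along the first row (a minimal sketch for illustration, not how NumPy computes determinants internally):

```python
import numpy as np

def det3_cofactor(A):
    # expand along the first row: sum of (-1)**j * a[0,j] * det(minor(0, j))
    total = 0.0
    for j in range(3):
        minor = np.delete(np.delete(A, 0, axis=0), j, axis=1)
        minor_det = minor[0, 0]*minor[1, 1] - minor[0, 1]*minor[1, 0]
        total += (-1)**j * A[0, j] * minor_det
    return total

J = np.array([[1, 7, 4],
              [3, 9, 6],
              [5, 2, 8]])
print(det3_cofactor(J))   # matches np.linalg.det(J) up to floating-point rounding
print(np.linalg.det(J))
```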
# + colab={"base_uri": "https://localhost:8080/"} id="Z_1QOWFHChDl" outputId="f8cde0d4-126b-490e-f0a5-18d548da5976"
A = np.array([
[1,3],
[5,9]
])
np.linalg.det(A)
# + colab={"base_uri": "https://localhost:8080/"} id="2UOimMHNClia" outputId="00b824c1-5ef0-41c8-d7df-486a6d36dc69"
J = np.array([
[1,7,4],
[3,9,6],
[5,2,8]
])
np.linalg.det(J)
# + colab={"base_uri": "https://localhost:8080/"} id="wcJXzRG7Cnij" outputId="f8274dc5-9571-4a82-90ec-505b2be19b47"
## Now other mathematics classes would require you to solve this by hand,
## and that is great for practicing your memorization and coordination skills
## but in this class we aim for simplicity and speed so we'll use programming
## but it's completely fine if you want to try to solve this one by hand.
B = np.array([
[1,3,1,6],
[2,3,1,3],
[3,1,3,2],
[5,2,5,4]
])
np.linalg.det(B)
# + [markdown] id="dTLqHT2ICuXy"
# ## Inverse
#
#
#
# The inverse of a matrix is another fundamental operation in matrix algebra. Determining the inverse of a matrix lets us assess its solvability and its characteristics as a system of linear equations (we'll expand on this in the next module). Another use of the inverse matrix is addressing the problem of divisibility between matrices. Although element-wise division exists, division of whole matrices does not; inverse matrices provide a related operation that captures the same concept of "dividing" matrices.
#
# Now to determine the inverse of a matrix we need to perform several steps. So let's say we have a matrix $M$:
# $$M = \begin{bmatrix}1&7\\-3&5\end{bmatrix}$$
# First, we need to get the determinant of $M$.
# $$|M| = (1)(5)-(-3)(7) = 26$$
# Next, we need to reform the matrix into the inverse form:
# $$M^{-1} = \frac{1}{|M|} \begin{bmatrix} m_{(1,1)} & -m_{(0,1)} \\ -m_{(1,0)} & m_{(0,0)}\end{bmatrix}$$
# So that will be:
# $$M^{-1} = \frac{1}{26} \begin{bmatrix} 5 & -7 \\ 3 & 1\end{bmatrix} = \begin{bmatrix} \frac{5}{26} & \frac{-7}{26} \\ \frac{3}{26} & \frac{1}{26}\end{bmatrix}$$
# For higher-dimension matrices you might need to use co-factors, minors, adjugates, and other reduction techniques. To solve this programmatically we can use `np.linalg.inv()`.
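# The $2 \times 2$ worked example above can be checked directly (a sketch of the closed-form $2 \times 2$ inverse; for larger matrices stick with `np.linalg.inv()`):

```python
import numpy as np

def inv2(M):
    # closed-form inverse of a 2x2 matrix: swap the diagonal,
    # negate the off-diagonal, divide by the determinant
    det = M[0, 0]*M[1, 1] - M[1, 0]*M[0, 1]
    return np.array([[ M[1, 1], -M[0, 1]],
                     [-M[1, 0],  M[0, 0]]]) / det

M = np.array([[1, 7],
              [-3, 5]])
print(inv2(M))                                  # [[ 5/26 -7/26] [ 3/26  1/26]]
print(np.allclose(inv2(M), np.linalg.inv(M)))   # True
```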
# + colab={"base_uri": "https://localhost:8080/"} id="hxSTf6-yC1ho" outputId="1bd04fa0-0bdd-442b-b968-df79e2dbb49c"
T = np.array([
[15,-5],
[-3,6]
])
np.array(T @ np.linalg.inv(T), dtype=int)
# + colab={"base_uri": "https://localhost:8080/"} id="VqwoXEECC5Js" outputId="5ebdcfa8-3b87-470e-c47c-a433801af516"
T = np.array([
[15,-5],
[-3,6]
])
J= np.linalg.inv(T)
J
# + colab={"base_uri": "https://localhost:8080/"} id="cdk7h2KhC8z2" outputId="49e7673a-1f76-45cc-dc5c-c1860ac5f095"
J @ T
# + colab={"base_uri": "https://localhost:8080/"} id="rsG0elHHDAKu" outputId="88679216-8d7e-4e6b-f42d-cbc486d21781"
## And now let's test your skills in solving a matrix with high dimensions:
N = np.array([
[33,44,55,11,55,18,13],
[1,14,13,1,12,1,11],
[31,62,52,13,12,12,14],
[13,15,10,8,16,12,15],
[12,14,12,16,12,18,11],
[95,-15,12,40,60,16,-30],
[-2,-5,1,2,1,20,12],
])
N_inv = np.linalg.inv(N)
np.array(N @ N_inv,dtype=int)
# + [markdown] id="sDcLIhvFDDJi"
# To validate whether the matrix you have solved is really the inverse, we follow this dot product property for a matrix $M$:
# $$M\cdot M^{-1} = I$$
# + colab={"base_uri": "https://localhost:8080/"} id="lppbopcBDGnN" outputId="0a982eaa-5fb2-41af-f828-5ac1414c380d"
squad = np.array([
[1.25, 1.50, 1.50],
[2.75, 1.25, 2.90],
[3.30, 3.30, 3.75]
])
weights = np.array([
[0.2, 0.2, 0.6]
])
p_grade = squad @ weights.T
p_grade
# + [markdown] id="eOT6Um9XDKc2"
# ## Activity
# + [markdown] id="-5u9jbu_DSA9"
# ### Task 1
# Prove and implement the remaining 6 matrix multiplication properties. You may create your own
# matrices, whose shapes should not be lower than . In your methodology, create individual
# flowcharts for each property and discuss the property; then present your proofs or the
# validity of your implementation in the results section by comparing your results to the
# corresponding functions from NumPy.
#
# + id="jdYoHHWODKAo"
A = np.array([
[7, 9, 54],
[1, 7, -1],
[7, 13, 11]
])
M = np.array([
[13, 26, 86],
[3, 41, 8],
[97, -4, -2]
])
P = np.array([
[21, 90, -14],
[14, -6, 9],
[29, 1, 37]
])
# + colab={"base_uri": "https://localhost:8080/"} id="KvnU1dfEDJ87" outputId="0ee1f222-52c1-4e04-f729-600053ca90d5"
A @ M
# + colab={"base_uri": "https://localhost:8080/"} id="wzFu1VeWDYfZ" outputId="2276f1b5-655b-447c-8522-6f28ebf54989"
M @ P
# + colab={"base_uri": "https://localhost:8080/"} id="gO_-EC0gDb0g" outputId="3c2abbd8-8515-4de4-d84e-a1bd48b12cd8"
np.array_equiv(A @ M, M @ P)
# + colab={"base_uri": "https://localhost:8080/"} id="dF6ShzXoDeN7" outputId="a384c5c6-a72f-4690-db89-ea855cb4db8d"
A @ (M @ P)
# + colab={"base_uri": "https://localhost:8080/"} id="C7PWTO8-Dgg2" outputId="1d97e39b-282d-455c-c3f3-b2e87e010ad1"
(A @ M) @ P
# + colab={"base_uri": "https://localhost:8080/"} id="7CMpU592Dkl4" outputId="cbdf15ec-7661-421d-831d-09d66204804e"
np.array_equiv(A @( M @P ),(A @ M)@P)
# + colab={"base_uri": "https://localhost:8080/"} id="pGkj3OvaDnrx" outputId="7b307737-272d-4fd5-d008-8428c10029ac"
A @(M + P)
# + colab={"base_uri": "https://localhost:8080/"} id="SH8Mjfp1DtMs" outputId="5df4bb77-9a6e-407f-f31e-8c17951568ce"
A @ M + A @ P
# + colab={"base_uri": "https://localhost:8080/"} id="WXHga3wPDvyj" outputId="36fb0513-8be2-4220-94e6-17cead1731ca"
np.array_equiv(A@(M+P),A@M + A@P)
# + colab={"base_uri": "https://localhost:8080/"} id="etEy0IbrDxwJ" outputId="a0294fd2-1186-4c83-eef8-1943e05b815a"
(M+P)@A
# + colab={"base_uri": "https://localhost:8080/"} id="kA6uoJobD0Gu" outputId="7a4735cf-157e-4d0f-bcdf-f98fd356fdcc"
M@A + P@A
# + colab={"base_uri": "https://localhost:8080/"} id="dq7oH7jeD2kD" outputId="c72393d9-53fa-4b47-bda9-d40b93d741d9"
np.array_equiv((M+P)@A, M@A +P@A)
# + colab={"base_uri": "https://localhost:8080/"} id="eqyaNgkpD4qx" outputId="e9084372-a106-4751-b37a-eeb47c12f094"
A@np.eye(3)
# + colab={"base_uri": "https://localhost:8080/"} id="b3D4u173D65e" outputId="e3e8e6a2-7953-431c-bbd8-fbc8cffe6345"
np.array_equiv(A, A@np.eye(3))
# + colab={"base_uri": "https://localhost:8080/"} id="Asy7001OD9P0" outputId="a51e7ade-8562-4ac1-c911-1ddd40945bb4"
A@np.zeros((3,3))
# + colab={"base_uri": "https://localhost:8080/"} id="2rBQIHUFEAfk" outputId="6e72385a-f7e9-45a5-8464-9bb8ae409666"
np.array_equiv(A@np.zeros((3,3)),np.zeros((3,3)))
# + colab={"base_uri": "https://localhost:8080/"} id="GpF8cCjOEChn" outputId="f1cdbb56-f88f-4d1d-ede3-979caf65fa6c"
A.dot(np.zeros(A.shape))
# + colab={"base_uri": "https://localhost:8080/"} id="p-eYdNaYEEVC" outputId="918b777c-b706-42de-d934-835f13e491f4"
z_mat = np.zeros (A.shape)
z_mat
# + colab={"base_uri": "https://localhost:8080/"} id="Dm8LO0K0EGkA" outputId="d0db4370-2ec7-434b-b84b-279d47e166d8"
i_dot_z = A.dot(np.zeros(A.shape))
i_dot_z
# + colab={"base_uri": "https://localhost:8080/"} id="ke99lf26EJEI" outputId="1f460209-51e7-4a83-e152-7a426072f59f"
np.array_equal(i_dot_z,z_mat)
# + colab={"base_uri": "https://localhost:8080/"} id="oohOuQpnELFR" outputId="19211a98-2231-44a2-e844-4fafb070b998"
null_mat = np.zeros(A.shape, dtype=float)  # np.empty would return uninitialized memory, so build the null matrix with np.zeros
null = np.array(null_mat, dtype=float)
print(null)
np.allclose(i_dot_z , null)
# + [markdown] id="1_za7br4D4VJ"
#
| Fajardo_Assignment_5.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Lists and tuples
#
# Today we will learn about the two most common data structures in Python: lists (`list`) and tuples (`tuple`).
#
# ### Basics
#
# Both lists and tuples are ordered collections that can hold elements of any data type. In most programming languages, the elements of a collection must share a single type. Python's lists and tuples, however, have no such requirement:
l = [1, 2, 'hello', 'world']
print(l)
t = ('Tony', 22)
print(t)
# ### Differences between lists and tuples
# * Lists are dynamic: their length is not fixed, and elements can be freely added, removed, or changed (mutable)
# * Tuples are static: their length is fixed, and they cannot be changed (immutable)
#
# Let's try to modify a list and a tuple:
# +
l = [1, 2, 3, 4]
l[3] = 80
print(l)
t = (1, 2, 3, 4)
t[3] = 8  # raises TypeError: 'tuple' object does not support item assignment
print(t)  # never reached
# -
# ### Think about it
# If we need to "change" a tuple, what should we do?
# ### Negative indexing
# Both lists and tuples support negative indexing: -1 refers to the last element, -2 to the second-to-last element, and so on.
# ```python
# l = [1, 2, 3, 4]
# l[-1]
# 4
#
# t = (1, 2, 3, 4)
# t[-1]
# 4
# ```
#
# ### Slicing
# Both lists and tuples support slicing:
# ```python
# l = [1, 2, 3, 4]
# l[1:3] # returns the sublist with indices 1 to 2
# [2, 3]
#
# tup = (1, 2, 3, 4)
# tup[1:3] # returns the subtuple with indices 1 to 2
# (2, 3)
# ```
#
#
# ### Lists and tuples can be nested arbitrarily
# ```python
# # Here every element of list l1 is itself a list
# l1 = [[1, 2], [3, 4]]
#
# # Here every element of tuple t1 is itself a tuple
# t1 = ((1, 2), (3, 4))
# ```
#
# ### Think about it
# Can a list contain tuples? Can a tuple contain lists?
#
# ### Converting between lists and tuples
# ```python
# list((1, 2, 3))
# [1, 2, 3]
#
# tuple([1, 2, 3])
# (1, 2, 3)
# ```
#
# ### Common built-in functions
# ```python
# l = [3, 2, 3, 7, 8, 1]
# l.count(3) # count how many times 3 appears
# 2
#
# l.index(7) # return the index of the first occurrence of 7
# 3
#
# l.reverse() # reverse the list in place
# l
# [1, 8, 7, 3, 2, 3]
#
# l.sort() # sort the list in place
# l
# [1, 2, 3, 3, 7, 8]
#
# tup = (3, 2, 3, 7, 8, 1)
# tup.count(3)
# 2
# tup.index(7)
# 3
# list(reversed(tup))
# [1, 8, 7, 3, 2, 3]
# sorted(tup)
# [1, 2, 3, 3, 7, 8]
# ```
#
# ### Think about it
# Why does tuple have no tuple.reverse() or tuple.sort() methods?
t = (1, 2, 3)
list(reversed(t))
# ### Performance of lists and tuples
# * Tuples are more lightweight than lists, so overall tuples are slightly faster than lists.
# * Behind the scenes, Python performs resource caching for static data. Normally, because of the garbage collector, when some variables are no longer used Python reclaims the memory they occupy and returns it to the operating system, so other variables and applications can use it.
# * But for some static variables, such as tuples, if they are no longer used and do not take up much space, Python temporarily caches that part of the memory. The next time we create a tuple of the same size, Python does not need to ask the operating system for memory; it can directly allocate the cached memory space, which greatly speeds up the program.
# * The example below times the creation and initialization of a list and a tuple with the same elements. We can see that initializing the tuple is roughly 5 times faster than the list:
# ```bash
# python3 -m timeit 'x=(1,2,3,4,5,6)'
# python3 -m timeit 'x=[1,2,3,4,5,6]'
# ```
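# The same comparison can also be run from inside Python with the `timeit` module (a sketch; the exact ratio depends on the interpreter and machine):

```python
import timeit

# Time constructing a tuple literal vs. a list literal, one million runs each.
tuple_time = timeit.timeit('x = (1, 2, 3, 4, 5, 6)', number=1_000_000)
list_time = timeit.timeit('x = [1, 2, 3, 4, 5, 6]', number=1_000_000)

print(f'tuple: {tuple_time:.3f}s, list: {list_time:.3f}s')
```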
#
# ### Summary
# In short, lists and tuples are both ordered collections that can store any data type. The main differences are the following two points.
# * Lists are dynamic: their length is variable, and elements can be added, removed, or changed at will. Lists take slightly more storage space than tuples and perform slightly worse.
# * Tuples are static: their length is fixed, and elements cannot be added, removed, or changed. Tuples are more lightweight than lists and perform slightly better.
| module1/jupyter/list-tuple.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Scores
# ### Introduction:
#
# This time you will create the data.
#
# ***Exercise based on [Chris Albon](http://chrisalbon.com/) work, the credits belong to him.***
#
# ### Step 1. Import the necessary libraries
# + jupyter={"outputs_hidden": false}
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
# %matplotlib inline
# -
# ### Step 2. Create the DataFrame. It should look like the one below.
# + jupyter={"outputs_hidden": false}
raw_data = {'first_name': ['Jason', 'Molly', 'Tina', 'Jake', 'Amy'],
'last_name': ['Miller', 'Jacobson', 'Ali', 'Milner', 'Cooze'],
'female': [0, 1, 1, 0, 1],
'age': [42, 52, 36, 24, 73],
'preTestScore': [4, 24, 31, 2, 3],
'postTestScore': [25, 94, 57, 62, 70]}
df = pd.DataFrame(raw_data, columns = ['first_name', 'last_name', 'age', 'female', 'preTestScore', 'postTestScore'])
df
# -
# ### Step 3. Create a Scatterplot of preTestScore and postTestScore, with the size of each point determined by age
# #### Hint: Don't forget to place the labels
# + jupyter={"outputs_hidden": false}
plt.scatter(df.preTestScore, df.postTestScore, s=df.age)
#set labels and titles
plt.title("preTestScore x postTestScore")
plt.xlabel('preTestScore')
plt.ylabel('postTestScore')
# -
# ### Step 4. Create a Scatterplot of preTestScore and postTestScore.
# ### This time the size should be 4.5 times the postTestScore and the color determined by sex
# + jupyter={"outputs_hidden": false}
plt.scatter(df.preTestScore, df.postTestScore, s= df.postTestScore * 4.5, c = df.female)
#set labels and titles
plt.title("preTestScore x postTestScore")
plt.xlabel('preTestScore')
plt.ylabel('postTestScore')
# -
# ### BONUS: Create your own question and answer it.
# + jupyter={"outputs_hidden": true}
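# One possible bonus question (an illustrative sketch, not part of the original exercise): by how much did each student's score improve between the pre-test and the post-test, and what is the average improvement?

```python
import pandas as pd

# Rebuild the relevant columns from Step 2 so this cell is self-contained.
raw_data = {'first_name': ['Jason', 'Molly', 'Tina', 'Jake', 'Amy'],
            'preTestScore': [4, 24, 31, 2, 3],
            'postTestScore': [25, 94, 57, 62, 70]}
df = pd.DataFrame(raw_data)

# Improvement per student, and the average across students.
df['improvement'] = df.postTestScore - df.preTestScore
print(df[['first_name', 'improvement']])
print('average improvement:', df.improvement.mean())
```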
| 07_Visualization/Scores/Exercises_with_solutions_code.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + cellView="form" colab={} colab_type="code" id="rQsYkXeIkL6d"
#@title ##### License
# Copyright 2018 The GraphNets Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ============================================================================
# + [markdown] colab_type="text" id="bXBusmrp1vaL"
# # Find the shortest path in a graph
# This notebook and the accompanying code demonstrate how to use the Graph Nets library to learn to predict the shortest path between two nodes in a graph.
#
# The network is trained to label the nodes and edges of the shortest path, given the start and end nodes.
#
# After training, the network's prediction ability is illustrated by comparing its output to the true shortest path. Then the network's ability to generalise is tested, by using it to predict the shortest path in similar but larger graphs.
# + cellView="form" colab={} colab_type="code" id="FlBiBDZjK-Tl"
#@title ### Install the Graph Nets library on this Colaboratory runtime { form-width: "60%", run: "auto"}
#@markdown <br>1. Connect to a local or hosted Colaboratory runtime by clicking the **Connect** button at the top-right.<br>2. Choose "Yes" below to install the Graph Nets library on the runtime machine with the correct dependencies. Note, this works both with local and hosted Colaboratory runtimes.
install_graph_nets_library = "No" #@param ["Yes", "No"]
if install_graph_nets_library.lower() == "yes":
print("Installing Graph Nets library and dependencies:")
print("Output message from command:\n")
# !pip install graph_nets "dm-sonnet<2" "tensorflow_probability<0.9"
else:
print("Skipping installation of Graph Nets library")
# + [markdown] colab_type="text" id="31YqFsfHGab3"
# ### Install dependencies locally
#
# If you are running this notebook locally (i.e., not through Colaboratory), you will also need to install a few more dependencies. Run the following on the command line to install the graph networks library, as well as a few other dependencies:
#
# ```
# pip install graph_nets matplotlib scipy "tensorflow>=1.15,<2" "dm-sonnet<2" "tensorflow_probability<0.9"
# ```
# + [markdown] colab_type="text" id="ntNJc6x_F4u5"
# # Code
# + cellView="form" colab={} colab_type="code" id="tjd3-8PJdK2m"
#@title Imports { form-width: "30%" }
# %tensorflow_version 1.x # For Google Colab only.
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import collections
import itertools
import time
from graph_nets import graphs
from graph_nets import utils_np
from graph_nets import utils_tf
from graph_nets.demos import models
import matplotlib.pyplot as plt
import networkx as nx
import numpy as np
from scipy import spatial
import tensorflow as tf
SEED = 1
np.random.seed(SEED)
tf.set_random_seed(SEED)
# + cellView="form" colab={} colab_type="code" id="TrGithqWUML7"
#@title Helper functions { form-width: "30%" }
# pylint: disable=redefined-outer-name
DISTANCE_WEIGHT_NAME = "distance" # The name for the distance edge attribute.
def pairwise(iterable):
"""s -> (s0,s1), (s1,s2), (s2, s3), ..."""
a, b = itertools.tee(iterable)
next(b, None)
return zip(a, b)
def set_diff(seq0, seq1):
"""Return the set difference between 2 sequences as a list."""
return list(set(seq0) - set(seq1))
def to_one_hot(indices, max_value, axis=-1):
one_hot = np.eye(max_value)[indices]
if axis not in (-1, one_hot.ndim):
one_hot = np.moveaxis(one_hot, -1, axis)
return one_hot
def get_node_dict(graph, attr):
"""Return a `dict` of node:attribute pairs from a graph."""
return {k: v[attr] for k, v in graph.nodes.items()}
def generate_graph(rand,
num_nodes_min_max,
dimensions=2,
theta=1000.0,
rate=1.0):
"""Creates a connected graph.
The graphs are geographic threshold graphs, but with added edges via a
minimum spanning tree algorithm, to ensure all nodes are connected.
Args:
rand: A random seed for the graph generator. Default= None.
num_nodes_min_max: A sequence [lower, upper) number of nodes per graph.
dimensions: (optional) An `int` number of dimensions for the positions.
Default= 2.
theta: (optional) A `float` threshold parameters for the geographic
threshold graph's threshold. Large values (1000+) make mostly trees. Try
20-60 for good non-trees. Default=1000.0.
rate: (optional) A rate parameter for the node weight exponential sampling
distribution. Default= 1.0.
Returns:
The graph.
"""
# Sample num_nodes.
num_nodes = rand.randint(*num_nodes_min_max)
# Create geographic threshold graph.
pos_array = rand.uniform(size=(num_nodes, dimensions))
pos = dict(enumerate(pos_array))
weight = dict(enumerate(rand.exponential(rate, size=num_nodes)))
geo_graph = nx.geographical_threshold_graph(
num_nodes, theta, pos=pos, weight=weight)
# Create minimum spanning tree across geo_graph's nodes.
distances = spatial.distance.squareform(spatial.distance.pdist(pos_array))
i_, j_ = np.meshgrid(range(num_nodes), range(num_nodes), indexing="ij")
weighted_edges = list(zip(i_.ravel(), j_.ravel(), distances.ravel()))
mst_graph = nx.Graph()
mst_graph.add_weighted_edges_from(weighted_edges, weight=DISTANCE_WEIGHT_NAME)
mst_graph = nx.minimum_spanning_tree(mst_graph, weight=DISTANCE_WEIGHT_NAME)
# Put geo_graph's node attributes into the mst_graph.
for i in mst_graph.nodes():
mst_graph.nodes[i].update(geo_graph.nodes[i])
# Compose the graphs.
combined_graph = nx.compose_all((mst_graph, geo_graph.copy()))
# Put all distance weights into edge attributes.
for i, j in combined_graph.edges():
combined_graph.get_edge_data(i, j).setdefault(DISTANCE_WEIGHT_NAME,
distances[i, j])
return combined_graph, mst_graph, geo_graph
def add_shortest_path(rand, graph, min_length=1):
"""Samples a shortest path from A to B and adds attributes to indicate it.
Args:
rand: A random seed for the graph generator. Default= None.
graph: A `nx.Graph`.
min_length: (optional) An `int` minimum number of edges in the shortest
path. Default= 1.
Returns:
The `nx.DiGraph` with the shortest path added.
Raises:
ValueError: All shortest paths are below the minimum length
"""
# Map from node pairs to the length of their shortest path.
pair_to_length_dict = {}
try:
# This is for compatibility with older networkx.
lengths = nx.all_pairs_shortest_path_length(graph).items()
except AttributeError:
# This is for compatibility with newer networkx.
lengths = list(nx.all_pairs_shortest_path_length(graph))
for x, yy in lengths:
for y, l in yy.items():
if l >= min_length:
pair_to_length_dict[x, y] = l
if max(pair_to_length_dict.values()) < min_length:
raise ValueError("All shortest paths are below the minimum length")
# The node pairs which exceed the minimum length.
node_pairs = list(pair_to_length_dict)
# Computes probabilities per pair, to enforce uniform sampling of each
# shortest path lengths.
# The counts of pairs per length.
counts = collections.Counter(pair_to_length_dict.values())
prob_per_length = 1.0 / len(counts)
probabilities = [
prob_per_length / counts[pair_to_length_dict[x]] for x in node_pairs
]
# Choose the start and end points.
i = rand.choice(len(node_pairs), p=probabilities)
start, end = node_pairs[i]
path = nx.shortest_path(
graph, source=start, target=end, weight=DISTANCE_WEIGHT_NAME)
# Creates a directed graph, to store the directed path from start to end.
digraph = graph.to_directed()
# Add the "start", "end", and "solution" attributes to the nodes and edges.
digraph.add_node(start, start=True)
digraph.add_node(end, end=True)
digraph.add_nodes_from(set_diff(digraph.nodes(), [start]), start=False)
digraph.add_nodes_from(set_diff(digraph.nodes(), [end]), end=False)
digraph.add_nodes_from(set_diff(digraph.nodes(), path), solution=False)
digraph.add_nodes_from(path, solution=True)
path_edges = list(pairwise(path))
digraph.add_edges_from(set_diff(digraph.edges(), path_edges), solution=False)
digraph.add_edges_from(path_edges, solution=True)
return digraph
def graph_to_input_target(graph):
"""Returns 2 graphs with input and target feature vectors for training.
Args:
graph: An `nx.DiGraph` instance.
Returns:
The input `nx.DiGraph` instance.
The target `nx.DiGraph` instance.
Raises:
ValueError: unknown node type
"""
def create_feature(attr, fields):
return np.hstack([np.array(attr[field], dtype=float) for field in fields])
input_node_fields = ("pos", "weight", "start", "end")
input_edge_fields = ("distance",)
target_node_fields = ("solution",)
target_edge_fields = ("solution",)
input_graph = graph.copy()
target_graph = graph.copy()
solution_length = 0
for node_index, node_feature in graph.nodes(data=True):
input_graph.add_node(
node_index, features=create_feature(node_feature, input_node_fields))
target_node = to_one_hot(
create_feature(node_feature, target_node_fields).astype(int), 2)[0]
target_graph.add_node(node_index, features=target_node)
solution_length += int(node_feature["solution"])
solution_length /= graph.number_of_nodes()
for receiver, sender, features in graph.edges(data=True):
input_graph.add_edge(
sender, receiver, features=create_feature(features, input_edge_fields))
target_edge = to_one_hot(
create_feature(features, target_edge_fields).astype(int), 2)[0]
target_graph.add_edge(sender, receiver, features=target_edge)
input_graph.graph["features"] = np.array([0.0])
target_graph.graph["features"] = np.array([solution_length], dtype=float)
return input_graph, target_graph
def generate_networkx_graphs(rand, num_examples, num_nodes_min_max, theta):
"""Generate graphs for training.
Args:
rand: A random seed (np.RandomState instance).
num_examples: Total number of graphs to generate.
num_nodes_min_max: A 2-tuple with the [lower, upper) number of nodes per
graph. The number of nodes for a graph is uniformly sampled within this
range.
theta: (optional) A `float` threshold parameters for the geographic
threshold graph's threshold. Default= the number of nodes.
Returns:
input_graphs: The list of input graphs.
target_graphs: The list of output graphs.
graphs: The list of generated graphs.
"""
input_graphs = []
target_graphs = []
graphs = []
for _ in range(num_examples):
graph = generate_graph(rand, num_nodes_min_max, theta=theta)[0]
graph = add_shortest_path(rand, graph)
input_graph, target_graph = graph_to_input_target(graph)
input_graphs.append(input_graph)
target_graphs.append(target_graph)
graphs.append(graph)
return input_graphs, target_graphs, graphs
def create_placeholders(rand, batch_size, num_nodes_min_max, theta):
"""Creates placeholders for the model training and evaluation.
Args:
rand: A random seed (np.RandomState instance).
batch_size: Total number of graphs per batch.
num_nodes_min_max: A 2-tuple with the [lower, upper) number of nodes per
graph. The number of nodes for a graph is uniformly sampled within this
range.
theta: A `float` threshold parameters for the geographic threshold graph's
threshold. Default= the number of nodes.
Returns:
input_ph: The input graph's placeholders, as a graph namedtuple.
target_ph: The target graph's placeholders, as a graph namedtuple.
"""
# Create some example data for inspecting the vector sizes.
input_graphs, target_graphs, _ = generate_networkx_graphs(
rand, batch_size, num_nodes_min_max, theta)
input_ph = utils_tf.placeholders_from_networkxs(input_graphs)
target_ph = utils_tf.placeholders_from_networkxs(target_graphs)
return input_ph, target_ph
def create_feed_dict(rand, batch_size, num_nodes_min_max, theta, input_ph,
target_ph):
"""Creates placeholders for the model training and evaluation.
Args:
rand: A random seed (np.RandomState instance).
batch_size: Total number of graphs per batch.
num_nodes_min_max: A 2-tuple with the [lower, upper) number of nodes per
graph. The number of nodes for a graph is uniformly sampled within this
range.
theta: A `float` threshold parameters for the geographic threshold graph's
threshold. Default= the number of nodes.
input_ph: The input graph's placeholders, as a graph namedtuple.
target_ph: The target graph's placeholders, as a graph namedtuple.
Returns:
feed_dict: The feed `dict` of input and target placeholders and data.
raw_graphs: The `dict` of raw networkx graphs.
"""
inputs, targets, raw_graphs = generate_networkx_graphs(
rand, batch_size, num_nodes_min_max, theta)
input_graphs = utils_np.networkxs_to_graphs_tuple(inputs)
target_graphs = utils_np.networkxs_to_graphs_tuple(targets)
feed_dict = {input_ph: input_graphs, target_ph: target_graphs}
return feed_dict, raw_graphs
def compute_accuracy(target, output, use_nodes=True, use_edges=False):
"""Calculate model accuracy.
Returns the number of correctly predicted shortest path nodes and the number
of completely solved graphs (100% correct predictions).
Args:
target: A `graphs.GraphsTuple` that contains the target graph.
output: A `graphs.GraphsTuple` that contains the output graph.
use_nodes: A `bool` indicator of whether to compute node accuracy or not.
use_edges: A `bool` indicator of whether to compute edge accuracy or not.
Returns:
correct: A `float` fraction of correctly labeled nodes/edges.
solved: A `float` fraction of graphs that are completely correctly labeled.
Raises:
ValueError: Nodes or edges (or both) must be used
"""
if not use_nodes and not use_edges:
raise ValueError("Nodes or edges (or both) must be used")
tdds = utils_np.graphs_tuple_to_data_dicts(target)
odds = utils_np.graphs_tuple_to_data_dicts(output)
cs = []
ss = []
for td, od in zip(tdds, odds):
xn = np.argmax(td["nodes"], axis=-1)
yn = np.argmax(od["nodes"], axis=-1)
xe = np.argmax(td["edges"], axis=-1)
ye = np.argmax(od["edges"], axis=-1)
c = []
if use_nodes:
c.append(xn == yn)
if use_edges:
c.append(xe == ye)
c = np.concatenate(c, axis=0)
s = np.all(c)
cs.append(c)
ss.append(s)
correct = np.mean(np.concatenate(cs, axis=0))
solved = np.mean(np.stack(ss))
return correct, solved
def create_loss_ops(target_op, output_ops):
loss_ops = [
tf.losses.softmax_cross_entropy(target_op.nodes, output_op.nodes) +
tf.losses.softmax_cross_entropy(target_op.edges, output_op.edges)
for output_op in output_ops
]
return loss_ops
def make_all_runnable_in_session(*args):
"""Lets an iterable of TF graphs be output from a session as NP graphs."""
return [utils_tf.make_runnable_in_session(a) for a in args]
class GraphPlotter(object):
def __init__(self, ax, graph, pos):
self._ax = ax
self._graph = graph
self._pos = pos
self._base_draw_kwargs = dict(G=self._graph, pos=self._pos, ax=self._ax)
self._solution_length = None
self._nodes = None
self._edges = None
self._start_nodes = None
self._end_nodes = None
self._solution_nodes = None
self._intermediate_solution_nodes = None
self._solution_edges = None
self._non_solution_nodes = None
self._non_solution_edges = None
self._ax.set_axis_off()
@property
def solution_length(self):
if self._solution_length is None:
self._solution_length = len(self._solution_edges)
return self._solution_length
@property
def nodes(self):
if self._nodes is None:
self._nodes = self._graph.nodes()
return self._nodes
@property
def edges(self):
if self._edges is None:
self._edges = self._graph.edges()
return self._edges
@property
def start_nodes(self):
if self._start_nodes is None:
self._start_nodes = [
n for n in self.nodes if self._graph.nodes[n].get("start", False)
]
return self._start_nodes
@property
def end_nodes(self):
if self._end_nodes is None:
self._end_nodes = [
n for n in self.nodes if self._graph.nodes[n].get("end", False)
]
return self._end_nodes
@property
def solution_nodes(self):
if self._solution_nodes is None:
self._solution_nodes = [
n for n in self.nodes if self._graph.nodes[n].get("solution", False)
]
return self._solution_nodes
@property
def intermediate_solution_nodes(self):
if self._intermediate_solution_nodes is None:
self._intermediate_solution_nodes = [
n for n in self.nodes
if self._graph.nodes[n].get("solution", False) and
not self._graph.nodes[n].get("start", False) and
not self._graph.nodes[n].get("end", False)
]
return self._intermediate_solution_nodes
@property
def solution_edges(self):
if self._solution_edges is None:
self._solution_edges = [
e for e in self.edges
if self._graph.get_edge_data(e[0], e[1]).get("solution", False)
]
return self._solution_edges
@property
def non_solution_nodes(self):
if self._non_solution_nodes is None:
self._non_solution_nodes = [
n for n in self.nodes
if not self._graph.nodes[n].get("solution", False)
]
return self._non_solution_nodes
@property
def non_solution_edges(self):
if self._non_solution_edges is None:
self._non_solution_edges = [
e for e in self.edges
if not self._graph.get_edge_data(e[0], e[1]).get("solution", False)
]
return self._non_solution_edges
def _make_draw_kwargs(self, **kwargs):
kwargs.update(self._base_draw_kwargs)
return kwargs
def _draw(self, draw_function, zorder=None, **kwargs):
draw_kwargs = self._make_draw_kwargs(**kwargs)
collection = draw_function(**draw_kwargs)
if collection is not None and zorder is not None:
try:
# This is for compatibility with older matplotlib.
collection.set_zorder(zorder)
except AttributeError:
# This is for compatibility with newer matplotlib.
collection[0].set_zorder(zorder)
return collection
def draw_nodes(self, **kwargs):
"""Useful kwargs: nodelist, node_size, node_color, linewidths."""
if ("node_color" in kwargs and
isinstance(kwargs["node_color"], collections.Sequence) and
len(kwargs["node_color"]) in {3, 4} and
not isinstance(kwargs["node_color"][0],
(collections.Sequence, np.ndarray))):
num_nodes = len(kwargs.get("nodelist", self.nodes))
kwargs["node_color"] = np.tile(
np.array(kwargs["node_color"])[None], [num_nodes, 1])
return self._draw(nx.draw_networkx_nodes, **kwargs)
def draw_edges(self, **kwargs):
"""Useful kwargs: edgelist, width."""
return self._draw(nx.draw_networkx_edges, **kwargs)
def draw_graph(self,
node_size=200,
node_color=(0.4, 0.8, 0.4),
node_linewidth=1.0,
edge_width=1.0):
# Plot nodes.
self.draw_nodes(
nodelist=self.nodes,
node_size=node_size,
node_color=node_color,
linewidths=node_linewidth,
zorder=20)
# Plot edges.
self.draw_edges(edgelist=self.edges, width=edge_width, zorder=10)
def draw_graph_with_solution(self,
node_size=200,
node_color=(0.4, 0.8, 0.4),
node_linewidth=1.0,
edge_width=1.0,
start_color="w",
end_color="k",
solution_node_linewidth=3.0,
solution_edge_width=3.0):
node_border_color = (0.0, 0.0, 0.0, 1.0)
node_collections = {}
# Plot start nodes.
node_collections["start nodes"] = self.draw_nodes(
nodelist=self.start_nodes,
node_size=node_size,
node_color=start_color,
linewidths=solution_node_linewidth,
edgecolors=node_border_color,
zorder=100)
# Plot end nodes.
node_collections["end nodes"] = self.draw_nodes(
nodelist=self.end_nodes,
node_size=node_size,
node_color=end_color,
linewidths=solution_node_linewidth,
edgecolors=node_border_color,
zorder=90)
# Plot intermediate solution nodes.
if isinstance(node_color, dict):
c = [node_color[n] for n in self.intermediate_solution_nodes]
else:
c = node_color
node_collections["intermediate solution nodes"] = self.draw_nodes(
nodelist=self.intermediate_solution_nodes,
node_size=node_size,
node_color=c,
linewidths=solution_node_linewidth,
edgecolors=node_border_color,
zorder=80)
# Plot solution edges.
node_collections["solution edges"] = self.draw_edges(
edgelist=self.solution_edges, width=solution_edge_width, zorder=70)
# Plot non-solution nodes.
if isinstance(node_color, dict):
c = [node_color[n] for n in self.non_solution_nodes]
else:
c = node_color
node_collections["non-solution nodes"] = self.draw_nodes(
nodelist=self.non_solution_nodes,
node_size=node_size,
node_color=c,
linewidths=node_linewidth,
edgecolors=node_border_color,
zorder=20)
# Plot non-solution edges.
node_collections["non-solution edges"] = self.draw_edges(
edgelist=self.non_solution_edges, width=edge_width, zorder=10)
# Set title as solution length.
self._ax.set_title("Solution length: {}".format(self.solution_length))
return node_collections
# pylint: enable=redefined-outer-name
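# A quick sanity check of the `to_one_hot` helper above (a sketch run as its own cell; the function is restated here so the snippet is self-contained):

```python
import numpy as np

def to_one_hot(indices, max_value, axis=-1):
    # Same logic as the helper above: index rows of an identity matrix.
    one_hot = np.eye(max_value)[indices]
    if axis not in (-1, one_hot.ndim):
        one_hot = np.moveaxis(one_hot, -1, axis)
    return one_hot

# Labels {0, 1} become 2-element one-hot vectors, as used for the
# "solution" node/edge targets in this notebook.
print(to_one_hot(np.array([0, 1, 1]), 2))
# [[1. 0.]
#  [0. 1.]
#  [0. 1.]]
```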
# + cellView="form" colab={} colab_type="code" id="6oEV1OC3UQAc"
#@title Visualize example graphs { form-width: "30%" }
seed = 1 #@param{type: 'integer'}
rand = np.random.RandomState(seed=seed)
num_examples = 15 #@param{type: 'integer'}
# Large values (1000+) make trees. Try 20-60 for good non-trees.
theta = 20 #@param{type: 'integer'}
num_nodes_min_max = (16, 17)
input_graphs, target_graphs, graphs = generate_networkx_graphs(
rand, num_examples, num_nodes_min_max, theta)
num = min(num_examples, 16)
w = 3
h = int(np.ceil(num / w))
fig = plt.figure(40, figsize=(w * 4, h * 4))
fig.clf()
for j, graph in enumerate(graphs):
ax = fig.add_subplot(h, w, j + 1)
pos = get_node_dict(graph, "pos")
plotter = GraphPlotter(ax, graph, pos)
plotter.draw_graph_with_solution()
# + cellView="form" colab={} colab_type="code" id="cY09Bll0vuVj"
#@title Set up model training and evaluation { form-width: "30%" }
# The model we explore includes three components:
# - An "Encoder" graph net, which independently encodes the edge, node, and
# global attributes (does not compute relations etc.).
# - A "Core" graph net, which performs N rounds of processing (message-passing)
# steps. The input to the Core is the concatenation of the Encoder's output
# and the previous output of the Core (labeled "Hidden(t)" below, where "t" is
# the processing step).
# - A "Decoder" graph net, which independently decodes the edge, node, and
# global attributes (does not compute relations etc.), on each
# message-passing step.
#
# Hidden(t) Hidden(t+1)
# | ^
# *---------* | *------* | *---------*
# | | | | | | | |
# Input --->| Encoder | *->| Core |--*->| Decoder |---> Output(t)
# | |---->| | | |
# *---------* *------* *---------*
#
# The model is trained by supervised learning. Input graphs are procedurally
# generated, and output graphs have the same structure with the nodes and edges
# of the shortest path labeled (using 2-element 1-hot vectors). We could have
# predicted the shortest path only by labeling either the nodes or edges, and
# that does work, but we decided to predict both to demonstrate the flexibility
# of graph nets' outputs.
#
# The training loss is computed on the output of each processing step. The
# reason for this is to encourage the model to try to solve the problem in as
# few steps as possible. It also helps make the output of intermediate steps
# more interpretable.
#
# There's no need for a separate evaluation dataset because the inputs are
# never repeated, so the training loss is the measure of performance on graphs
# from the input distribution.
#
# We also evaluate how well the models generalize to graphs which are up to
# twice as large as those on which it was trained. The loss is computed only
# on the final processing step.
#
# Variables with the suffix _tr are training parameters, and variables with the
# suffix _ge are test/generalization parameters.
#
# After around 2000-5000 training iterations the model reaches near-perfect
# performance on graphs with between 8-16 nodes.
tf.reset_default_graph()
seed = 2
rand = np.random.RandomState(seed=seed)
# Model parameters.
# Number of processing (message-passing) steps.
num_processing_steps_tr = 10
num_processing_steps_ge = 10
# Data / training parameters.
num_training_iterations = 10000
theta = 20 # Large values (1000+) make trees. Try 20-60 for good non-trees.
batch_size_tr = 32
batch_size_ge = 100
# Number of nodes per graph sampled uniformly from this range.
num_nodes_min_max_tr = (8, 17)
num_nodes_min_max_ge = (16, 33)
# Data.
# Input and target placeholders.
input_ph, target_ph = create_placeholders(rand, batch_size_tr,
num_nodes_min_max_tr, theta)
# Connect the data to the model.
# Instantiate the model.
model = models.EncodeProcessDecode(edge_output_size=2, node_output_size=2)
# A list of outputs, one per processing step.
output_ops_tr = model(input_ph, num_processing_steps_tr)
output_ops_ge = model(input_ph, num_processing_steps_ge)
# Training loss.
loss_ops_tr = create_loss_ops(target_ph, output_ops_tr)
# Loss across processing steps.
loss_op_tr = sum(loss_ops_tr) / num_processing_steps_tr
# Test/generalization loss.
loss_ops_ge = create_loss_ops(target_ph, output_ops_ge)
loss_op_ge = loss_ops_ge[-1] # Loss from final processing step.
# Optimizer.
learning_rate = 1e-3
optimizer = tf.train.AdamOptimizer(learning_rate)
step_op = optimizer.minimize(loss_op_tr)
# Lets an iterable of TF graphs be output from a session as NP graphs.
input_ph, target_ph = make_all_runnable_in_session(input_ph, target_ph)
# + cellView="form" colab={} colab_type="code" id="WoVdyUTjvzWb"
#@title Reset session { form-width: "30%" }
# This cell resets the Tensorflow session, but keeps the same computational
# graph.
try:
sess.close()
except NameError:
pass
sess = tf.Session()
sess.run(tf.global_variables_initializer())
last_iteration = 0
logged_iterations = []
losses_tr = []
corrects_tr = []
solveds_tr = []
losses_ge = []
corrects_ge = []
solveds_ge = []
# + cellView="form" colab={} colab_type="code" id="wWSqSYyQv0Ur"
#@title Run training { form-width: "30%" }
# You can interrupt this cell's training loop at any time, and visualize the
# intermediate results by running the next cell (below). You can then resume
# training by simply executing this cell again.
# How much time between logging and printing the current results.
log_every_seconds = 20
print("# (iteration number), T (elapsed seconds), "
"Ltr (training loss), Lge (test/generalization loss), "
"Ctr (training fraction nodes/edges labeled correctly), "
"Str (training fraction examples solved correctly), "
"Cge (test/generalization fraction nodes/edges labeled correctly), "
"Sge (test/generalization fraction examples solved correctly)")
start_time = time.time()
last_log_time = start_time
for iteration in range(last_iteration, num_training_iterations):
last_iteration = iteration
feed_dict, _ = create_feed_dict(rand, batch_size_tr, num_nodes_min_max_tr,
theta, input_ph, target_ph)
train_values = sess.run({
"step": step_op,
"target": target_ph,
"loss": loss_op_tr,
"outputs": output_ops_tr
},
feed_dict=feed_dict)
the_time = time.time()
elapsed_since_last_log = the_time - last_log_time
if elapsed_since_last_log > log_every_seconds:
last_log_time = the_time
feed_dict, raw_graphs = create_feed_dict(
rand, batch_size_ge, num_nodes_min_max_ge, theta, input_ph, target_ph)
test_values = sess.run({
"target": target_ph,
"loss": loss_op_ge,
"outputs": output_ops_ge
},
feed_dict=feed_dict)
correct_tr, solved_tr = compute_accuracy(
train_values["target"], train_values["outputs"][-1], use_edges=True)
correct_ge, solved_ge = compute_accuracy(
test_values["target"], test_values["outputs"][-1], use_edges=True)
elapsed = time.time() - start_time
losses_tr.append(train_values["loss"])
corrects_tr.append(correct_tr)
solveds_tr.append(solved_tr)
losses_ge.append(test_values["loss"])
corrects_ge.append(correct_ge)
solveds_ge.append(solved_ge)
logged_iterations.append(iteration)
print("# {:05d}, T {:.1f}, Ltr {:.4f}, Lge {:.4f}, Ctr {:.4f}, Str"
" {:.4f}, Cge {:.4f}, Sge {:.4f}".format(
iteration, elapsed, train_values["loss"], test_values["loss"],
correct_tr, solved_tr, correct_ge, solved_ge))
# + cellView="form" colab={} colab_type="code" id="u0ckrMtj72s-"
#@title Visualize results { form-width: "30%" }
# This cell visualizes the results of training. You can visualize the
# intermediate results by interrupting execution of the cell above, and running
# this cell. You can then resume training by simply executing the above cell
# again.
def softmax_prob_last_dim(x):  # pylint: disable=redefined-outer-name
  # Subtract the row max before exponentiating for numerical stability.
  e = np.exp(x - np.max(x, axis=-1, keepdims=True))
  return e[:, -1] / np.sum(e, axis=-1)
# Plot results curves.
fig = plt.figure(1, figsize=(18, 3))
fig.clf()
x = np.array(logged_iterations)
# Loss.
y_tr = losses_tr
y_ge = losses_ge
ax = fig.add_subplot(1, 3, 1)
ax.plot(x, y_tr, "k", label="Training")
ax.plot(x, y_ge, "k--", label="Test/generalization")
ax.set_title("Loss across training")
ax.set_xlabel("Training iteration")
ax.set_ylabel("Loss (binary cross-entropy)")
ax.legend()
# Correct.
y_tr = corrects_tr
y_ge = corrects_ge
ax = fig.add_subplot(1, 3, 2)
ax.plot(x, y_tr, "k", label="Training")
ax.plot(x, y_ge, "k--", label="Test/generalization")
ax.set_title("Fraction correct across training")
ax.set_xlabel("Training iteration")
ax.set_ylabel("Fraction nodes/edges correct")
# Solved.
y_tr = solveds_tr
y_ge = solveds_ge
ax = fig.add_subplot(1, 3, 3)
ax.plot(x, y_tr, "k", label="Training")
ax.plot(x, y_ge, "k--", label="Test/generalization")
ax.set_title("Fraction solved across training")
ax.set_xlabel("Training iteration")
ax.set_ylabel("Fraction examples solved")
# Plot graphs and results after each processing step.
# The white node is the start, and the black is the end. Other nodes are colored
# from red to purple to blue, where red means the model is confident the node is
# off the shortest path, blue means the model is confident the node is on the
# shortest path, and purplish colors mean the model isn't sure.
max_graphs_to_plot = 6
num_steps_to_plot = 4
node_size = 120
min_c = 0.3
num_graphs = len(raw_graphs)
targets = utils_np.graphs_tuple_to_data_dicts(test_values["target"])
step_indices = np.floor(
np.linspace(0, num_processing_steps_ge - 1,
num_steps_to_plot)).astype(int).tolist()
outputs = list(
zip(*(utils_np.graphs_tuple_to_data_dicts(test_values["outputs"][i])
for i in step_indices)))
h = min(num_graphs, max_graphs_to_plot)
w = num_steps_to_plot + 1
fig = plt.figure(101, figsize=(18, h * 3))
fig.clf()
ncs = []
for j, (graph, target, output) in enumerate(zip(raw_graphs, targets, outputs)):
if j >= h:
break
pos = get_node_dict(graph, "pos")
ground_truth = target["nodes"][:, -1]
# Ground truth.
iax = j * (1 + num_steps_to_plot) + 1
ax = fig.add_subplot(h, w, iax)
plotter = GraphPlotter(ax, graph, pos)
color = {}
for i, n in enumerate(plotter.nodes):
color[n] = np.array([1.0 - ground_truth[i], 0.0, ground_truth[i], 1.0
]) * (1.0 - min_c) + min_c
plotter.draw_graph_with_solution(node_size=node_size, node_color=color)
ax.set_axis_on()
ax.set_xticks([])
ax.set_yticks([])
try:
ax.set_facecolor([0.9] * 3 + [1.0])
except AttributeError:
ax.set_axis_bgcolor([0.9] * 3 + [1.0])
ax.grid(None)
ax.set_title("Ground truth\nSolution length: {}".format(
plotter.solution_length))
# Prediction.
for k, outp in enumerate(output):
iax = j * (1 + num_steps_to_plot) + 2 + k
ax = fig.add_subplot(h, w, iax)
plotter = GraphPlotter(ax, graph, pos)
color = {}
prob = softmax_prob_last_dim(outp["nodes"])
for i, n in enumerate(plotter.nodes):
color[n] = np.array([1.0 - prob[n], 0.0, prob[n], 1.0
]) * (1.0 - min_c) + min_c
plotter.draw_graph_with_solution(node_size=node_size, node_color=color)
ax.set_title("Model-predicted\nStep {:02d} / {:02d}".format(
step_indices[k] + 1, step_indices[-1] + 1))
| graph_nets/demos/shortest_path.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] deletable=false editable=false
# # CMPSC 100: Sandbox
#
# ---
#
# In the computer science world, a "sandbox" is almost exactly what it is in the real world: a place to, effectively, "play."
#
# ## Usage
#
# First and foremost, this space is for *you*.
#
# Use this notebook to try out code presented in class or do self-guided exploration. Your instructor may reference this notebook and ask that you write some code in it to follow along with in-class exercises or live programming.
#
# I also encourage you to use this as a space to take notes as you participate in discussions or practice coding concepts on your own.
#
# ### Reminders
#
# The following key combinations and shortcuts will speed your work in this notebook.
#
# #### With cursor in cell
#
# | Key(s) | Outcome |
# |:-|:-|
# | `Enter` | Adds a new line in the current cell |
# | `Shift` + `Enter` | Runs the current cell and moves to the first cell below |
# | `Ctrl` + `Enter` | Runs the current cell; cursor remains in cell |
# | `Alt` + `Enter` | Runs current cell and inserts new cell below |
#
# #### With cell selected
#
# | Key(s) | Outcome |
# |:-|:-|
# | `A` | Add cell above current cell |
# | `B` | Add cell below current cell |
# | `D` | Delete current cell |
# | `Shift` + `M` | Merge current cell with below |
# | `M` | Changes current cell to Markdown format|
# | `Y` | Changes current cell to code format |
#
#
# ## Helpful readings
#
# The readings below are meant to serve as references for you as you explore practical use of our course's platforms. As always, however, feel free to reach out to your instructors or post questions in our course [Slack](cmpsc-100-00-fa-2020.slack.com).
#
# ### Using Jupyter notebooks
#
# For questions about using a Jupyter Notebook, our colleagues at <NAME> created a [Jupyter Notebook User's Manual](https://jupyter.brynmawr.edu/services/public/dblank/Jupyter%20Notebook%20Users%20Manual.ipynb) which answers basic and advanced questions about the platform.
#
# ### Markdown
#
# I strongly recommend reading the [Mastering Markdown](https://guides.github.com/features/mastering-markdown/) guide to serve as a Markdown reference.
# -
# ## Sandbox
| sandbox/sandbox.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <hr>
# <NAME> - LSCE (Climate and Environment Sciences Laboratory)<br>
# <img align="left" width="40%" src="http://www.lsce.ipsl.fr/Css/img/banniere_LSCE_75.png" ><br><br>
# <hr>
# Updated: 2019/11/13
# #### Load the ferret extension
# %load_ext ferretmagic
# #### Put data from python
# ###### First example: 1D array
import numpy as np
b = {}
b['name']='myvar1'
x=np.linspace(-np.pi*4, np.pi*4, 500)
b['data']=np.sin(x)/x
b.keys()
# A dataset must be opened before putting data into Ferret, so that the list of variables can be retrieved later.
#
# https://github.com/NOAA-PMEL/PyFerret/issues/64
# %%ferret
use levitus_climatology
# %ferret_putdata --axis_pos (0,1,2,3,4,5) b
# +
# %%ferret
set text/font=arial
show data
ppl color 2, 0, 50, 100, 75
ppl color 3, 100, 50, 0, 75
plot/thick=3/color myvar1, myvar1[x=@shf:50]
# -
# ###### Second example: 3D array (XYZ)
# Create a dummy 3D array (XY and a Z axis)
# +
nlons, nlats, dim3 = (145, 73, 10)
lats = np.linspace(-np.pi / 2, np.pi / 2, nlats)
lons = np.linspace(0, 2 * np.pi, nlons)
lons, lats = np.meshgrid(lons, lats, indexing='ij')
wave = 0.75 * (np.sin(2 * lats) ** 8) * np.cos(4 * lons)
mean = 0.5 * np.cos(2 * lats) * ((np.sin(2 * lats)) ** 2 + 2)
lats = np.rad2deg(lats)
lons = np.rad2deg(lons)
data2D = wave + mean
myaxis = np.linspace(1, 1000, dim3)
dataXYZ = np.repeat(np.expand_dims(data2D,axis=-1), dim3, axis=2)
print(dataXYZ.shape)
# -
# Please refer to http://ferret.pmel.noaa.gov/Ferret/documentation/pyferret/data-dictionaries/
import pyferret
data2ferret = {}
data2ferret['name']='myvar2'
data2ferret['axis_names']=('lons', 'lats', 'depth')
data2ferret['axis_units']=('degrees_east', 'degrees_north', 'meters')
data2ferret['axis_types']=(
pyferret.AXISTYPE_LONGITUDE,
pyferret.AXISTYPE_LATITUDE,
pyferret.AXISTYPE_LEVEL
)
data2ferret['axis_coords']=(lons[:,0], lats[0,:], myaxis[:])
data2ferret['data']=dataXYZ
data2ferret.keys()
# %ferret_putdata data2ferret
# %%ferret
show data
shade myvar2[k=1]
# ###### Third example: 3D array (XYT)
# Create a dummy 3D array (XY and a T axis)
dataXYT = np.reshape(dataXYZ, (nlons, nlats, 1, dim3))
print(dataXYT.shape)
import pyferret
data2ferret = {}
data2ferret['name']='myvar3'
data2ferret['axis_names']=('lons', 'lats', '', 'time')
data2ferret['axis_units']=('degrees_east', 'degrees_north', '', '')
data2ferret['axis_types']=(
pyferret.AXISTYPE_LONGITUDE,
pyferret.AXISTYPE_LATITUDE,
pyferret.AXISTYPE_NORMAL,
pyferret.AXISTYPE_ABSTRACT
)
data2ferret['axis_coords']=(lons[:,0], lats[0,:], None, None)
data2ferret['data']=dataXYT
data2ferret.keys()
# %ferret_putdata data2ferret
# %%ferret
show data
shade myvar3[l=1]
| notebooks/ferretmagic_02_PassDataFromPythonToFerret.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/oneoclockc/deeplearning-for-AI/blob/main/chapter11_part03_transformer.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="NZHhiyttCKHJ"
# ## The Transformer architecture
# + [markdown] id="KWzxu5l5CKHL"
# ### Understanding self-attention
# + [markdown] id="d1oCqfOfUBG8"
# self-attention -> can make features *context-aware*
# "I'll **see** you later"
# - I'll **see** this project to its end
# - I **see** what you mean
#
# smart embedding space -> provide different vector representation for a word depending on the other words
#
# + [markdown] id="0peQq1-fEJRB"
# https://github.com/eubinecto/k4ji_ai/issues/42
# + [markdown] id="No5RLZ_TCKHM"
# #### Generalized self-attention: the query-key-value model
# + [markdown] id="ShriIuE7Ycq5"
#
#
# ```
# outputs = sum(input c * pairwise_scores(input a, input b))
# ```
# input a = query
# input b = key
# input c = value
# For each element of the query, compute how related that element is to every key, and use those scores to weight the sum of the values.
#
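The pseudocode above can be made concrete with a minimal NumPy sketch of dot-product self-attention. The function name and shapes here are illustrative only, not the Keras API used later in this notebook:

```python
import numpy as np

def self_attention(query, key, value):
    # Pairwise relevance scores between each query and every key,
    # scaled by sqrt(d) as in scaled dot-product attention.
    scores = query @ key.T / np.sqrt(query.shape[-1])
    # Softmax over keys so each row of weights sums to 1.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output is a weighted sum of the values.
    return weights @ value

x = np.random.rand(4, 8)  # 4 tokens, embedding size 8
out = self_attention(x, x, x)  # self-attention: query = key = value
print(out.shape)  # (4, 8)
```

In self-attention, all three inputs are the same sequence; each output row is a context-aware mixture of the value vectors.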
# + [markdown] id="6bQL8fmkCKHM"
# ### Multi-head attention
# + [markdown] id="PQ4aaMF5CKHM"
# ### The Transformer encoder
# + [markdown] id="SizrpwXGCKHN"
# **Getting the data**
# + id="29cZ7bA_CKHN" outputId="b6f04af9-b545-463b-88cc-2c48217ba5e8" colab={"base_uri": "https://localhost:8080/"}
# !curl -O https://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
# !tar -xf aclImdb_v1.tar.gz
# !rm -r aclImdb/train/unsup
# + [markdown] id="A7Wp6BYFCKHO"
# **Preparing the data**
# + id="GBs1G5qvCKHP" outputId="1da78e06-939c-4b06-f97f-b6be94083aea" colab={"base_uri": "https://localhost:8080/"}
import os, pathlib, shutil, random
from tensorflow import keras
batch_size = 32
base_dir = pathlib.Path("aclImdb")
val_dir = base_dir / "val"
train_dir = base_dir / "train"
for category in ("neg", "pos"):
os.makedirs(val_dir / category)
files = os.listdir(train_dir / category)
random.Random(1337).shuffle(files)
num_val_samples = int(0.2 * len(files))
val_files = files[-num_val_samples:]
for fname in val_files:
shutil.move(train_dir / category / fname,
val_dir / category / fname)
train_ds = keras.utils.text_dataset_from_directory(
"aclImdb/train", batch_size=batch_size
)
val_ds = keras.utils.text_dataset_from_directory(
"aclImdb/val", batch_size=batch_size
)
test_ds = keras.utils.text_dataset_from_directory(
"aclImdb/test", batch_size=batch_size
)
text_only_train_ds = train_ds.map(lambda x, y: x)
# + [markdown] id="PtU3rdXiCKHP"
# **Vectorizing the data**
# + id="gXB7pQdICKHQ"
from tensorflow.keras import layers
max_length = 600
max_tokens = 20000
text_vectorization = layers.TextVectorization(
max_tokens=max_tokens,
output_mode="int",
output_sequence_length=max_length,
)
text_vectorization.adapt(text_only_train_ds)
int_train_ds = train_ds.map(
lambda x, y: (text_vectorization(x), y),
num_parallel_calls=4)
int_val_ds = val_ds.map(
lambda x, y: (text_vectorization(x), y),
num_parallel_calls=4)
int_test_ds = test_ds.map(
lambda x, y: (text_vectorization(x), y),
num_parallel_calls=4)
# + [markdown] id="KIHq08xeCKHQ"
# **Transformer encoder implemented as a subclassed `Layer`**
# + id="TL0ovYkFCKHR"
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
class TransformerEncoder(layers.Layer):
def __init__(self, embed_dim, dense_dim, num_heads, **kwargs):
super().__init__(**kwargs)
self.embed_dim = embed_dim # size of input token vectors
self.dense_dim = dense_dim # size of inner dense layer
self.num_heads = num_heads # number of attention heads
self.attention = layers.MultiHeadAttention(
num_heads=num_heads, key_dim=embed_dim)
self.dense_proj = keras.Sequential(
[layers.Dense(dense_dim, activation="relu"),
layers.Dense(embed_dim),]
)
self.layernorm_1 = layers.LayerNormalization()
self.layernorm_2 = layers.LayerNormalization()
def call(self, inputs, mask=None):
if mask is not None:
mask = mask[:, tf.newaxis, :]
attention_output = self.attention(
inputs, inputs, attention_mask=mask)
proj_input = self.layernorm_1(inputs + attention_output)
proj_output = self.dense_proj(proj_input)
return self.layernorm_2(proj_input + proj_output)
def get_config(self):
config = super().get_config()
config.update({
"embed_dim": self.embed_dim,
"num_heads": self.num_heads,
"dense_dim": self.dense_dim,
})
return config
# + [markdown] id="eh7qpiHXCKHR"
# **Using the Transformer encoder for text classification**
# + id="_Equ8kuhCKHS" outputId="f97b528d-01f4-4d26-c986-04fc49c31388" colab={"base_uri": "https://localhost:8080/"}
vocab_size = 20000
embed_dim = 256
num_heads = 2
dense_dim = 32
inputs = keras.Input(shape=(None,), dtype="int64")
x = layers.Embedding(vocab_size, embed_dim)(inputs)
x = TransformerEncoder(embed_dim, dense_dim, num_heads)(x) ###
x = layers.GlobalMaxPooling1D()(x)
x = layers.Dropout(0.5)(x)
outputs = layers.Dense(1, activation="sigmoid")(x)
model = keras.Model(inputs, outputs)
model.compile(optimizer="rmsprop",
loss="binary_crossentropy",
metrics=["accuracy"])
model.summary()
# + [markdown] id="yfXPQxtYCKHS"
# **Training and evaluating the Transformer encoder based model**
# + id="RXFWYiX8CKHS" outputId="99ec3427-c46d-4300-c810-71f3bc1e9f5a" colab={"base_uri": "https://localhost:8080/"}
callbacks = [
keras.callbacks.ModelCheckpoint("transformer_encoder.keras",
save_best_only=True)
]
model.fit(int_train_ds, validation_data=int_val_ds, epochs=20, callbacks=callbacks)
model = keras.models.load_model(
"transformer_encoder.keras",
custom_objects={"TransformerEncoder": TransformerEncoder})
print(f"Test acc: {model.evaluate(int_test_ds)[1]:.3f}")
# + [markdown] id="P6sAUGg6CKHT"
# #### Using positional encoding to re-inject order information
# + [markdown] id="FIcNyDTBbLtW"
# Add word-position information so the model can access word order.
# + [markdown] id="jOPuaUT1CKHT"
# **Implementing positional embedding as a subclassed layer**
# + id="fwAp24irCKHU"
class PositionalEmbedding(layers.Layer):
def __init__(self, sequence_length, input_dim, output_dim, **kwargs):
super().__init__(**kwargs)
self.token_embeddings = layers.Embedding(
input_dim=input_dim, output_dim=output_dim)
self.position_embeddings = layers.Embedding(
input_dim=sequence_length, output_dim=output_dim)
self.sequence_length = sequence_length
self.input_dim = input_dim
self.output_dim = output_dim
def call(self, inputs):
length = tf.shape(inputs)[-1]
positions = tf.range(start=0, limit=length, delta=1)
embedded_tokens = self.token_embeddings(inputs)
embedded_positions = self.position_embeddings(positions)
return embedded_tokens + embedded_positions
def compute_mask(self, inputs, mask=None):
return tf.math.not_equal(inputs, 0)
def get_config(self):
config = super().get_config()
config.update({
"output_dim": self.output_dim,
"sequence_length": self.sequence_length,
"input_dim": self.input_dim,
})
return config
# + [markdown] id="4Rsq6f_QCKHU"
# #### Putting it all together: A text-classification Transformer
# + [markdown] id="H5z5nbzzCKHU"
# **Combining the Transformer encoder with positional embedding**
# + id="mjMued1kCKHU" outputId="dde3ff69-b513-494c-fe0d-1bda7b48b366" colab={"base_uri": "https://localhost:8080/"}
vocab_size = 20000
sequence_length = 600
embed_dim = 256
num_heads = 2
dense_dim = 32
inputs = keras.Input(shape=(None,), dtype="int64")
x = PositionalEmbedding(sequence_length, vocab_size, embed_dim)(inputs)
x = TransformerEncoder(embed_dim, dense_dim, num_heads)(x)
x = layers.GlobalMaxPooling1D()(x)
x = layers.Dropout(0.5)(x)
outputs = layers.Dense(1, activation="sigmoid")(x)
model = keras.Model(inputs, outputs)
model.compile(optimizer="rmsprop",
loss="binary_crossentropy",
metrics=["accuracy"])
model.summary()
callbacks = [
keras.callbacks.ModelCheckpoint("full_transformer_encoder.keras",
save_best_only=True)
]
model.fit(int_train_ds, validation_data=int_val_ds, epochs=20, callbacks=callbacks)
model = keras.models.load_model(
"full_transformer_encoder.keras",
custom_objects={"TransformerEncoder": TransformerEncoder,
"PositionalEmbedding": PositionalEmbedding})
print(f"Test acc: {model.evaluate(int_test_ds)[1]:.3f}")
# + [markdown] id="m23PLdGmCKHV"
# ### When to use sequence models over bag-of-words models?
| chapter11_part03_transformer.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .jl
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Julia 1.6.2
# language: julia
# name: julia-1.6
# ---
using Nemo
# +
using Nemo
function inv_fact(n)
R=big(1)//1
for i in 1:n
        R *= 1//i  # product as an exact rational (fmpq)
end
return R
end
IF100=inv_fact(100)
println(IF100) #OK
println(typeof(IF100)) #OK
println(IF100.den) #OK
IF100=QQ(IF100)
@show IF100 #OK
println(IF100.den) #WRONG !
# -
dump(IF100)
# Base.show_default(stdout, IF100)
# https://discourse.julialang.org/t/can-i-disable-overloaded-base-show-and-print-the-raw-form-of-an-object/68286/5
| 0021/show_default.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/Idowuilekura/bank_of_portugal_predictive-model-building/blob/master/Data1.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="ORbdgl8e8mGP" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 143} outputId="45ec9b42-3564-4d21-8596-9f754b1f8850"
import pandas as pd
import numpy as np
import seaborn as sns
# %matplotlib inline
import matplotlib.pyplot as plt
import sklearn
import xgboost
from imblearn.over_sampling import SMOTE
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score,KFold,StratifiedKFold
from sklearn.metrics import confusion_matrix, accuracy_score,f1_score,precision_score,roc_auc_score
# + id="cIfvObR6Z7iE" colab_type="code" colab={}
def transform_encode(data):
    """Remove outliers using LocalOutlierFactor: rows flagged as outliers are
    dropped. Removing outliers keeps training time down and prevents them from
    misleading the model."""
from scipy import stats
from sklearn.preprocessing import StandardScaler, LabelEncoder,OneHotEncoder
from sklearn.neighbors import LocalOutlierFactor
    """duration was dropped because it is strongly tied to the target: if
    duration is 0 the customer cannot have subscribed. Dropping it here also
    keeps it from influencing outlier removal, since it is not needed for
    training.
    """
data_1 = data.drop(['duration','y'],axis=1)
numerical_df = data_1.select_dtypes(include=['int','float'])#selecting float and int columns
list_numerical_df_columns = list(numerical_df.columns)
    """LocalOutlierFactor detects outliers by comparing each point's local
    density to that of its neighbours; fit_predict returns -1 for outliers
    and 1 for inliers."""
lof = LocalOutlierFactor()
yhat = lof.fit_predict(numerical_df) #fitting the localoutlier factor model
mask = yhat !=-1
data = data.loc[mask,:]
data_1 = data_1.loc[mask,:] #filtering out rows that are not outliers
for col in list_numerical_df_columns:
data_1[col] = StandardScaler().fit_transform(data_1[[col]]) #scaling the values so it can be on the same range
cat_df = data_1.select_dtypes(include=['object'])
cat_dumm = pd.get_dummies(cat_df) #converting the categorical data to 1 or 0
    """Drop the original categorical columns: they have been one-hot encoded,
    so the raw columns are no longer needed."""
df = data_1.drop(list(cat_df.columns),axis=1)
"""concatenating the dataframe with the encoded categorical columns since we
had dropped the columns earlier"""
df = pd.concat([df,cat_dumm],axis=1)
    # Encode the target variable y as 0/1, rename it to Subscribed, and join it back
df['Subscribed'] = LabelEncoder().fit_transform(data['y'])
return df
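As a standalone illustration of the outlier-masking step inside `transform_encode`, here is a small sketch on synthetic data (the variable names are illustrative) showing how `fit_predict` flags outliers with -1:

```python
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.RandomState(0)
points = rng.normal(size=(100, 2))
points[0] = [10.0, 10.0]  # plant one obvious outlier

lof = LocalOutlierFactor()
yhat = lof.fit_predict(points)  # -1 marks outliers, 1 marks inliers
mask = yhat != -1               # keep only the inlier rows
inliers = points[mask]
print(inliers.shape)
```

The planted point far from the cluster is flagged with -1, so the boolean mask drops it, exactly as `data.loc[mask, :]` does above.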
# + id="yo_Ioz7TLTzE" colab_type="code" colab={}
def reduce_dimension(data,reduction_model):
    """Since there are many columns, reduce their number (while retaining the
    useful information) to cut computation time. Three techniques are used:
    principal component analysis, t-distributed stochastic neighbour embedding
    (t-SNE), and autoencoders."""
data_1 = transform_encode(data)
data = data_1.drop(['Subscribed'],axis=1)
""" importing necessary libraries"""
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE
from keras.models import Model
from keras.layers import Input,Dense
from keras import regularizers
encoding_dim= 20
if reduction_model == 'pca':
pca = PCA(n_components=20) #components to reduce the columns to
        """To justify choosing 20 components: in the cumulative explained
        variance plot below, the curve flattens at about 20 components."""
pca_2 = PCA().fit(data.values)
plt.plot(np.cumsum(pca_2.explained_variance_ratio_))
plt.xlabel('number of components')
plt.ylabel('cummulative explained variance')
reduced_df = pd.DataFrame(pca.fit_transform(data.values),columns = ["principal component{}".format(str(i)) for i in range(1,21)])
elif reduction_model=='tsne':
        """t-SNE is normally limited to 2 or 3 components, so 2 is used here."""
tsne = TSNE(n_components=2,n_iter=300)
reduced_df = pd.DataFrame(tsne.fit_transform(data),columns= ["tsne component{}".format(str(i)) for i in range(1,3)])
else:
# fixed dimensions
input_dim = data.shape[1]
encoding_dim = 20
        """Number of neurons per layer: input_dim for the input, 40 and 30 for
        the two hidden layers, and encoding_dim (20) for the desired output.
        Each encoder layer feeds its output into the next, so each layer's
        input is connected to the previous layer's output, and tanh is used
        as the activation function."""
input_layer = Input(shape=(input_dim,))
encoded_layer_1 = Dense(
40,activation='tanh',activity_regularizer=regularizers.l1(10e-5))(input_layer)
encoded_layer_2 = Dense(30,activation='tanh')(encoded_layer_1)
encoder_layer_3 = Dense(encoding_dim,activation='tanh')(encoded_layer_2)
# create encoder model
encoder = Model(inputs=input_layer,outputs=encoder_layer_3)
        reduced_df = pd.DataFrame(encoder.predict(data))
print(reduced_df.shape)
print(data_1[['Subscribed']].shape)
reduced_df['Subscribed']=data_1['Subscribed'].values
return reduced_df
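To show how the cumulative explained-variance curve is read, here is a self-contained sketch on synthetic data (all sizes and names are illustrative): data generated from a 5-dimensional subspace of 30 features should show the curve flattening after 5 components, just as the real data flattens near 20.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.RandomState(0)
# 200 samples living almost entirely in a 5-dimensional subspace of 30 features.
latent = rng.normal(size=(200, 5))
mixing = rng.normal(size=(5, 30))
X = latent @ mixing + 0.01 * rng.normal(size=(200, 30))

cumvar = np.cumsum(PCA().fit(X).explained_variance_ratio_)
# The first 5 components capture nearly all the variance.
print(cumvar[4] > 0.99)
```

Wherever the cumulative curve flattens is the natural cut-off for `n_components`.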
# + id="Qf7U5ueuwLkG" colab_type="code" colab={}
| Data1.ipynb |