Directory to store Models
import os

if not os.path.exists('./models'):
    os.mkdir('./models')

def position_index(x):
    if x < 4:
        return 1
    if x > 10:
        return 3
    else:
        return 2
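As a quick sanity check, the bucketing can be exercised on a few sample finishing positions (a minimal sketch; the thresholds mirror the `position_index` function above):

```python
def position_index(x):
    # Bucket a finishing position into three clusters:
    # 1 = positions 1-3, 2 = positions 4-10, 3 = positions 11+
    if x < 4:
        return 1
    if x > 10:
        return 3
    return 2

print([position_index(p) for p in [1, 3, 4, 10, 11, 20]])  # → [1, 1, 2, 2, 3, 3]
```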
_____no_output_____
UPL-1.0
beginners/04.ML_Modelling.ipynb
MKulfan/redbull-analytics-hol
Model considering only Drivers
x_d = data[['GP_name', 'quali_pos', 'driver', 'age_at_gp_in_days', 'position',
            'driver_confidence', 'active_driver']]
x_d = x_d[x_d['active_driver'] == 1]

sc = StandardScaler()
le = LabelEncoder()
x_d['GP_name'] = le.fit_transform(x_d['GP_name'])
x_d['driver'] = le.fit_transform(x_d['driver'])
x_d['age_at_gp_in_days'] = sc.fit_transform(x_d[['age_at_gp_in_days']])

X_d = x_d.drop(['position', 'active_driver'], axis=1)
y_d = x_d['position'].apply(lambda x: position_index(x))

# cross validation for different models
models = [LogisticRegression(), DecisionTreeClassifier(), RandomForestClassifier(),
          SVC(), GaussianNB(), KNeighborsClassifier()]
names = ['LogisticRegression', 'DecisionTreeClassifier', 'RandomForestClassifier',
         'SVC', 'GaussianNB', 'KNeighborsClassifier']
model_dict = dict(zip(models, names))

mean_results_dri = []
results_dri = []
name = []
for model in models:
    cv = StratifiedKFold(n_splits=10, random_state=1, shuffle=True)
    result = cross_val_score(model, X_d, y_d, cv=cv, scoring='accuracy')
    mean_results_dri.append(result.mean())
    results_dri.append(result)
    name.append(model_dict[model])
    print(f'{model_dict[model]} : {result.mean()}')

plt.figure(figsize=(15, 10))
plt.boxplot(x=results_dri, labels=name)
plt.xlabel('Models')
plt.ylabel('Accuracy')
plt.title('Model performance comparison (drivers only)')
plt.show()
_____no_output_____
UPL-1.0
beginners/04.ML_Modelling.ipynb
MKulfan/redbull-analytics-hol
Model considering only Constructors
x_c = data[['GP_name', 'quali_pos', 'constructor', 'position',
            'constructor_reliability', 'active_constructor']]
x_c = x_c[x_c['active_constructor'] == 1]

sc = StandardScaler()
le = LabelEncoder()
x_c['GP_name'] = le.fit_transform(x_c['GP_name'])
x_c['constructor'] = le.fit_transform(x_c['constructor'])

X_c = x_c.drop(['position', 'active_constructor'], axis=1)
y_c = x_c['position'].apply(lambda x: position_index(x))

# cross validation for different models
models = [LogisticRegression(), DecisionTreeClassifier(), RandomForestClassifier(),
          SVC(), GaussianNB(), KNeighborsClassifier()]
names = ['LogisticRegression', 'DecisionTreeClassifier', 'RandomForestClassifier',
         'SVC', 'GaussianNB', 'KNeighborsClassifier']
model_dict = dict(zip(models, names))

mean_results_const = []
results_const = []
name = []
for model in models:
    cv = StratifiedKFold(n_splits=10, random_state=1, shuffle=True)
    result = cross_val_score(model, X_c, y_c, cv=cv, scoring='accuracy')
    mean_results_const.append(result.mean())
    results_const.append(result)
    name.append(model_dict[model])
    print(f'{model_dict[model]} : {result.mean()}')

plt.figure(figsize=(15, 10))
plt.boxplot(x=results_const, labels=name)
plt.xlabel('Models')
plt.ylabel('Accuracy')
plt.title('Model performance comparison (Teams only)')
plt.show()
_____no_output_____
UPL-1.0
beginners/04.ML_Modelling.ipynb
MKulfan/redbull-analytics-hol
Model considering both Drivers and Constructors
cleaned_data = data[['GP_name', 'quali_pos', 'constructor', 'driver', 'position',
                     'driver_confidence', 'constructor_reliability',
                     'active_driver', 'active_constructor']]
cleaned_data = cleaned_data[(cleaned_data['active_driver'] == 1) &
                            (cleaned_data['active_constructor'] == 1)]
cleaned_data.to_csv('./data_f1/cleaned_data.csv', index=False)
_____no_output_____
UPL-1.0
beginners/04.ML_Modelling.ipynb
MKulfan/redbull-analytics-hol
Build your X dataset with the next columns:
- GP_name
- quali_pos
- constructor
- driver
- position
- driver_confidence
- constructor_reliability
- active_driver
- active_constructor

to predict the classification cluster (1, 2, 3). Filter the dataset for this "Driver + Constructor" model to all active drivers and constructors. Create a Standard Scaler and a Label Encoder for the different features in order to have a similar scale for all features. Prepare X (the features dataset) and y for the predicted value. In our case, we want to calculate the cluster of the final position for each driver using the "position_index" function.
# Implement X, y
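A possible implementation, following the same pattern as the drivers-only and constructors-only cells above. This is a sketch: the notebook operates on the `data` / `cleaned_data` DataFrame loaded earlier, so a tiny synthetic frame with hypothetical values stands in here to keep the cell self-contained, and scaling `quali_pos` is an illustrative choice:

```python
import pandas as pd
from sklearn.preprocessing import StandardScaler, LabelEncoder

def position_index(x):
    if x < 4:
        return 1
    if x > 10:
        return 3
    return 2

# Tiny stand-in for the notebook's cleaned_data (hypothetical values)
cleaned_data = pd.DataFrame({
    'GP_name': ['monza', 'spa', 'monza', 'spa'],
    'quali_pos': [1, 5, 12, 3],
    'constructor': ['red_bull', 'mercedes', 'haas', 'red_bull'],
    'driver': ['verstappen', 'hamilton', 'schumacher', 'perez'],
    'position': [1, 6, 15, 2],
    'driver_confidence': [0.9, 0.8, 0.4, 0.7],
    'constructor_reliability': [0.95, 0.9, 0.5, 0.95],
    'active_driver': [1, 1, 1, 1],
    'active_constructor': [1, 1, 1, 1],
})

# Keep only active drivers and constructors
x = cleaned_data[(cleaned_data['active_driver'] == 1) &
                 (cleaned_data['active_constructor'] == 1)].copy()

# Label-encode the categorical columns, scale a numeric one
le = LabelEncoder()
for col in ['GP_name', 'constructor', 'driver']:
    x[col] = le.fit_transform(x[col])
sc = StandardScaler()
x[['quali_pos']] = sc.fit_transform(x[['quali_pos']])

# Features X and cluster target y
X = x.drop(['position', 'active_driver', 'active_constructor'], axis=1)
y = x['position'].apply(position_index)
print(X.shape, y.tolist())
```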
_____no_output_____
UPL-1.0
beginners/04.ML_Modelling.ipynb
MKulfan/redbull-analytics-hol
Apply the same list of ML algorithms for cross-validation of the different models, and store the mean accuracy values in order to compare with the previous ML models.
mean_results = []
results = []
name = []

# cross validation for different models
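The loop can mirror the earlier cells. A self-contained sketch on synthetic data (the real notebook would loop over all six classifiers using the combined X, y built above; two models and `make_classification` stand in here so the cell runs on its own):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

# Synthetic 3-class problem standing in for the combined driver+constructor data
X, y = make_classification(n_samples=200, n_features=6, n_informative=4,
                           n_classes=3, random_state=1)

models = [LogisticRegression(max_iter=1000), DecisionTreeClassifier()]
names = ['LogisticRegression', 'DecisionTreeClassifier']

mean_results = []
results = []
name = []
for model, model_name in zip(models, names):
    cv = StratifiedKFold(n_splits=10, random_state=1, shuffle=True)
    result = cross_val_score(model, X, y, cv=cv, scoring='accuracy')
    mean_results.append(result.mean())
    results.append(result)
    name.append(model_name)
    print(f'{model_name} : {result.mean():.3f}')
```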
_____no_output_____
UPL-1.0
beginners/04.ML_Modelling.ipynb
MKulfan/redbull-analytics-hol
Use the same boxplot plotter used in the previous Models
# Implement boxplot
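A boxplot sketch matching the earlier cells, with hypothetical per-fold accuracies so it is self-contained (the notebook would pass its own `results` and `name` lists, and call `plt.show()` instead of saving):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs without a display
import matplotlib.pyplot as plt
import numpy as np

# Hypothetical per-fold accuracies for two models
rng = np.random.default_rng(1)
results = [rng.uniform(0.6, 0.8, 10), rng.uniform(0.5, 0.7, 10)]
name = ['LogisticRegression', 'DecisionTreeClassifier']

plt.figure(figsize=(15, 10))
plt.boxplot(results)
plt.xticks([1, 2], name)  # label each box with its model name
plt.xlabel('Models')
plt.ylabel('Accuracy')
plt.title('Model performance comparison (driver + constructor)')
plt.savefig('comparison.png')  # plt.show() in the notebook
```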
_____no_output_____
UPL-1.0
beginners/04.ML_Modelling.ipynb
MKulfan/redbull-analytics-hol
Comparing the 3 ML Models. Let's see the mean score of our three assumptions.
lr  = [mean_results[0], mean_results_dri[0], mean_results_const[0]]
dtc = [mean_results[1], mean_results_dri[1], mean_results_const[1]]
rfc = [mean_results[2], mean_results_dri[2], mean_results_const[2]]
svc = [mean_results[3], mean_results_dri[3], mean_results_const[3]]
gnb = [mean_results[4], mean_results_dri[4], mean_results_const[4]]
knn = [mean_results[5], mean_results_dri[5], mean_results_const[5]]

font1 = {'family': 'serif', 'color': 'black', 'weight': 'normal', 'size': 18}
font2 = {'family': 'serif', 'color': 'black', 'weight': 'bold', 'size': 12}

x_ax = np.arange(3)
plt.figure(figsize=(30, 15))
bar1 = plt.bar(x_ax,       lr,  width=0.1, align='center', label="Logistic Regression")
bar2 = plt.bar(x_ax + 0.1, dtc, width=0.1, align='center', label="DecisionTree")
bar3 = plt.bar(x_ax + 0.2, rfc, width=0.1, align='center', label="RandomForest")
bar4 = plt.bar(x_ax + 0.3, svc, width=0.1, align='center', label="SVC")
bar5 = plt.bar(x_ax + 0.4, gnb, width=0.1, align='center', label="GaussianNB")
bar6 = plt.bar(x_ax + 0.5, knn, width=0.1, align='center', label="KNN")

plt.text(0.05, 1, 'CV score for combined data', fontdict=font1)
plt.text(1.04, 1, 'CV score only driver data', fontdict=font1)
plt.text(2,    1, 'CV score only team data', fontdict=font1)

# annotate each bar with its percentage score
for bars in [bar1, bar2, bar3, bar4, bar5, bar6]:
    for bar in bars.patches:
        yval = bar.get_height()
        plt.text(bar.get_x() + 0.01, yval + 0.01, f'{round(yval*100, 2)}%', fontdict=font2)

plt.legend(loc='center', bbox_to_anchor=(0.5, -0.10), shadow=False, ncol=6)
plt.show()

end = time.time()
import datetime
str(datetime.timedelta(seconds=(end - start)))
print(str(end - start) + " seconds")
62.024924516677856 seconds
UPL-1.0
beginners/04.ML_Modelling.ipynb
MKulfan/redbull-analytics-hol
Model building (PyTorch): mostly based off this example, https://www.kaggle.com/vadbeg/pytorch-nn-with-embeddings-and-catboost/notebook, plus parts of code from tutorial 5 lab 3.
# import load_data function from
%load_ext autoreload
%autoreload 2

# fix system path
import sys
sys.path.append("/home/jovyan/work")

import random
import numpy as np
import pandas as pd
import torch
from torch.utils.data import Dataset, DataLoader

def set_seed(seed):
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False

set_seed(27)

from src.data.sets import load_sets
X_train, y_train, X_val, y_val, X_test, y_test = load_sets()

X_test.shape
X_train.shape
X_val.shape

# need to convert to tensors
from src.models.pytorch import EmbeddingDataset

train_dataset = EmbeddingDataset(X_train, targets=y_train,
                                 cat_cols_idx=[0], cont_cols_idx=[1, 2, 3, 4])
val_dataset = EmbeddingDataset(X_val, targets=y_val,
                               cat_cols_idx=[0], cont_cols_idx=[1, 2, 3, 4])
test_dataset = EmbeddingDataset(X_test, cat_cols_idx=[0],
                                cont_cols_idx=[1, 2, 3, 4], is_train=False)

print(f'First element of train_dataset: {train_dataset[0]}',
      f'First element of val_dataset: {val_dataset[0]}',
      f'First element of test_dataset: {test_dataset[0]}', sep='\n')

# embedding example
class ClassificationEmbdNN(torch.nn.Module):

    def __init__(self, emb_dims, no_of_cont=None):
        super(ClassificationEmbdNN, self).__init__()
        self.emb_layers = torch.nn.ModuleList([torch.nn.Embedding(x, y)
                                               for x, y in emb_dims])
        no_of_embs = sum([y for x, y in emb_dims])
        self.no_of_embs = no_of_embs
        self.emb_dropout = torch.nn.Dropout(0.2)

        self.no_of_cont = 0
        if no_of_cont:
            self.no_of_cont = no_of_cont
            self.bn_cont = torch.nn.BatchNorm1d(no_of_cont)

        self.fc1 = torch.nn.Linear(in_features=self.no_of_embs + self.no_of_cont,
                                   out_features=208)
        self.dropout1 = torch.nn.Dropout(0.2)
        self.bn1 = torch.nn.BatchNorm1d(208)
        self.act1 = torch.nn.ReLU()

        self.fc2 = torch.nn.Linear(in_features=208, out_features=208)
        self.dropout2 = torch.nn.Dropout(0.2)
        self.bn2 = torch.nn.BatchNorm1d(208)
        self.act2 = torch.nn.ReLU()

        # self.fc3 = torch.nn.Linear(in_features=256, out_features=64)
        # self.dropout3 = torch.nn.Dropout(0.2)
        # self.bn3 = torch.nn.BatchNorm1d(64)
        # self.act3 = torch.nn.ReLU()

        self.fc3 = torch.nn.Linear(in_features=208, out_features=104)
        self.act3 = torch.nn.Softmax()

    def forward(self, x_cat, x_cont=None):
        if self.no_of_embs != 0:
            x = [emb_layer(x_cat[:, i])
                 for i, emb_layer in enumerate(self.emb_layers)]
            x = torch.cat(x, 1)
            x = self.emb_dropout(x)

        if self.no_of_cont != 0:
            x_cont = self.bn_cont(x_cont)
            if self.no_of_embs != 0:
                x = torch.cat([x, x_cont], 1)
            else:
                x = x_cont

        x = self.fc1(x)
        x = self.dropout1(x)
        x = self.bn1(x)
        x = self.act1(x)

        x = self.fc2(x)
        x = self.dropout2(x)
        x = self.bn2(x)
        x = self.act2(x)

        # x = self.fc3(x)
        # x = self.dropout3(x)
        # x = self.bn3(x)
        # x = self.act3(x)

        x = self.fc3(x)
        x = self.act3(x)
        return x

model = ClassificationEmbdNN(emb_dims=[[5742, 252]], no_of_cont=4)

from src.models.pytorch import get_device
device = get_device()
model.to(device)
print(model)

criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.1)

BATCH_SIZE = 300
N_EPOCHS = 10

train_loader = DataLoader(train_dataset, batch_size=BATCH_SIZE)
valid_loader = DataLoader(val_dataset, batch_size=BATCH_SIZE)
next(iter(train_loader))
next(iter(valid_loader))

from tqdm import tqdm_notebook as tqdm

def train_network(model, train_loader, valid_loader, loss_func,
                  optimizer, n_epochs=20, saved_model='model.pt'):
    device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
    model.to(device)

    train_losses = list()
    valid_losses = list()
    valid_loss_min = np.Inf

    for epoch in range(n_epochs):
        train_loss = 0.0
        valid_loss = 0.0
        # train_auc = 0.0
        # valid_auc = 0.0
        train_acc = 0.0
        valid_acc = 0.0

        model.train()
        for batch in tqdm(train_loader):
            optimizer.zero_grad()
            output = model(batch['data'][0].to(device, dtype=torch.long),
                           batch['data'][1].to(device, dtype=torch.float))
            loss = loss_func(output, batch['target'].to(device, dtype=torch.long))
            loss.backward()
            optimizer.step()

            # Calculate global accuracy
            train_acc += (output.argmax(1) == batch['target']).sum().item()
            # train_auc += roc_auc_score(batch['target'].cpu().numpy(),
            #                            output.detach().cpu().numpy(),
            #                            multi_class="ovo")
            train_loss += loss.item() * batch['data'][0].size(0)  #!!!

        model.eval()
        for batch in tqdm(valid_loader):
            output = model(batch['data'][0].to(device, dtype=torch.long),
                           batch['data'][1].to(device, dtype=torch.float))
            loss = loss_func(output, batch['target'].to(device, dtype=torch.long))
            # valid_auc += roc_auc_score(batch['target'].cpu().numpy(),
            #                            output.detach().cpu().numpy(),
            #                            multi_class="ovo")
            valid_loss += loss.item() * batch['data'][0].size(0)  #!!!

            # Calculate global accuracy
            valid_acc += (output.argmax(1) == batch['target']).sum().item()

        # train_loss = np.sqrt(train_loss / len(train_loader.sampler.indices))
        # valid_loss = np.sqrt(valid_loss / len(valid_loader.sampler.indices))
        # train_auc = train_auc / len(train_loader)
        # valid_auc = valid_auc / len(valid_loader)
        # train_losses.append(train_loss)
        # valid_losses.append(valid_loss)

        print('Epoch: {}. Training loss: {:.6f}. Validation loss: {:.6f}'
              .format(epoch, train_loss, valid_loss))
        print('Training correct count: {:.6f}. Validation correct count: {:.6f}'
              .format(train_acc, valid_acc))

        if valid_loss < valid_loss_min:
            # let's save the best weights to use them in prediction
            print('Validation loss decreased ({:.6f} --> {:.6f}). Saving model...'
                  .format(valid_loss_min, valid_loss))
            torch.save(model.state_dict(), saved_model)
            valid_loss_min = valid_loss

    return train_losses, valid_losses

train_losses, valid_losses = train_network(model=model,
                                           train_loader=train_loader,
                                           valid_loader=valid_loader,
                                           loss_func=criterion,
                                           optimizer=optimizer,
                                           n_epochs=N_EPOCHS,
                                           saved_model='../models/embed_3layers.pt')
/opt/conda/lib/python3.7/site-packages/ipykernel_launcher.py:24: TqdmDeprecationWarning: This function will be removed in tqdm==5.0.0 Please use `tqdm.notebook.tqdm` instead of `tqdm.tqdm_notebook`
FTL
notebooks/Model Building.ipynb
Reasmey/adsi_beer_app
Forgot to divide the loss and accuracy by the length of the data set
print('Training Accuracy: {:.2f}%'.format(5926.0/300.0))
print('Validation Accuracy: {:.2f}%'.format(2361.0/300.0))
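The cell above divides the raw correct-prediction counts by 300 (the batch size) rather than by the number of samples in each split, so the printed percentages are only right by coincidence. A sketch of the intended calculation; the split sizes here are assumptions for illustration, and in the notebook they would be `len(train_dataset)` and `len(val_dataset)`:

```python
train_correct, val_correct = 5926.0, 2361.0
# Hypothetical split sizes; replace with len(train_dataset), len(val_dataset)
n_train, n_val = 30000, 7500

# Accuracy = correct predictions / number of samples, expressed as a percentage
print('Training Accuracy: {:.2f}%'.format(100.0 * train_correct / n_train))
print('Validation Accuracy: {:.2f}%'.format(100.0 * val_correct / n_val))
```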
Training Accuracy: 19.75% Validation Accuracy: 7.87%
FTL
notebooks/Model Building.ipynb
Reasmey/adsi_beer_app
Predict with test set
def predict(data_loader, model):
    model.eval()
    device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
    model.to(device)

    with torch.no_grad():
        predictions = None
        for i, batch in enumerate(tqdm(data_loader)):
            output = model(batch['data'][0].to(device, dtype=torch.long),
                           batch['data'][1].to(device, dtype=torch.float)).cpu().numpy()
            if i == 0:
                predictions = output
            else:
                predictions = np.vstack((predictions, output))

    return predictions

model.load_state_dict(torch.load('../models/embed_3layers.pt'))

test_loader = DataLoader(test_dataset, batch_size=BATCH_SIZE)
nn_predictions = predict(test_loader, model)
nn_predictions

test_acc = (nn_predictions.argmax(1) == y_test).sum().item()
test_acc / 300

from sklearn.metrics import roc_auc_score, classification_report

# compute other metrics
roc_auc_score(y_test, nn_predictions, multi_class='ovr', average='macro')

print(y_test)
print(nn_predictions.argmax(1))

def convert_cr_to_dataframe(report_dict: dict) -> pd.DataFrame:
    """
    Converts the dictionary format of the Classification Report (CR)
    to a dataframe for ease of sorting.

    :param report_dict: The dictionary returned by sklearn.metrics.classification_report.
    :return: Returns a dataframe of the same information.
    """
    beer_style = list(report_dict.keys())
    beer_style.remove('accuracy')
    beer_style.remove('macro avg')
    beer_style.remove('weighted avg')

    precision = []
    recall = []
    f1 = []
    support = []
    for key, value in report_dict.items():
        if key not in ['accuracy', 'macro avg', 'weighted avg']:
            precision.append(value['precision'])
            recall.append(value['recall'])
            f1.append(value['f1-score'])
            support.append(value['support'])

    result = pd.DataFrame({'beer_style': beer_style,
                           'precision': precision,
                           'recall': recall,
                           'f1': f1,
                           'support': support})
    return result

from joblib import load
label_encoders = load('../models/label_encoders.joblib')

report_dict = classification_report(
    label_encoders['beer_style'].inverse_transform(y_test),
    label_encoders['beer_style'].inverse_transform(nn_predictions.argmax(1)),
    output_dict=True)

report_df = convert_cr_to_dataframe(report_dict)
print(report_df)
# classification_report(y_test, nn_predictions.argmax(1))

torch.save(model, "../models/model.pt")
_____no_output_____
FTL
notebooks/Model Building.ipynb
Reasmey/adsi_beer_app
* Compare the performance of different portfolio optimizers on problems of different scale.
* The results below mainly compare the performance difference between ``alphamind`` and other optimizers available in ``python``; we use the optimizers in ``cvxopt`` whenever possible, and fall back to ``scipy`` otherwise.
* Since ``scipy`` performs too poorly on ``ashare_ex``, its results on that universe are generally ignored.
* All times are in milliseconds.
* Please set the `DB_URI` environment variable to point to the database.
import os
import timeit

import numpy as np
import pandas as pd
import cvxpy

from alphamind.api import *
from alphamind.portfolio.linearbuilder import linear_builder
from alphamind.portfolio.meanvariancebuilder import mean_variance_builder
from alphamind.portfolio.meanvariancebuilder import target_vol_builder

pd.options.display.float_format = '{:,.2f}'.format
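Every timing below follows the same pattern: run the solver `number` times with `timeit` and normalize to milliseconds. Stripped to its core, with a stand-in workload instead of a real optimizer call, the pattern looks like this (a minimal sketch):

```python
import timeit
import numpy as np

number = 5
a = np.random.rand(200, 200)

# Time `number` runs of the workload and report the mean in milliseconds,
# matching how the benchmark cells below normalize their measurements.
elapsed_ms = timeit.timeit("a @ a", number=number, globals=globals()) / number * 1000
print(f"matrix multiply: {elapsed_ms:.3f} ms")
```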
_____no_output_____
MIT
notebooks/Example 7 - Portfolio Optimizer Performance.ipynb
wangjiehui11235/alpha-mind
0. Data preparation
ref_date = '2018-02-08'

u_names = ['sh50', 'hs300', 'zz500', 'zz800', 'zz1000', 'ashare_ex']
b_codes = [16, 300, 905, 906, 852, None]
risk_model = 'short'
factor = 'EPS'
lb = 0.0
ub = 0.1
data_source = os.environ['DB_URI']
engine = SqlEngine(data_source)

universes = [Universe(u_name) for u_name in u_names]
codes_set = [engine.fetch_codes(ref_date, universe=universe) for universe in universes]
data_set = [engine.fetch_data(ref_date, factor, codes, benchmark=b_code, risk_model=risk_model)
            for codes, b_code in zip(codes_set, b_codes)]
_____no_output_____
MIT
notebooks/Example 7 - Portfolio Optimizer Performance.ipynb
wangjiehui11235/alpha-mind
1. Linear optimization (with linear constraints)
df = pd.DataFrame(columns=u_names, index=['cvxpy', 'alphamind'])
number = 1

for u_name, sample_data in zip(u_names, data_set):
    factor_data = sample_data['factor']
    er = factor_data[factor].values
    n = len(er)

    lbound = np.ones(n) * lb
    ubound = np.ones(n) * ub
    risk_constraints = np.ones((n, 1))
    risk_target = (np.array([1.]), np.array([1.]))

    status, y, x1 = linear_builder(er, lbound, ubound, risk_constraints, risk_target)
    elapsed_time1 = timeit.timeit("linear_builder(er, lbound, ubound, risk_constraints, risk_target)",
                                  number=number, globals=globals()) / number * 1000

    A_eq = risk_constraints.T
    b_eq = np.array([1.])

    w = cvxpy.Variable(n)
    curr_risk_exposure = w * risk_constraints
    constraints = [w >= lbound,
                   w <= ubound,
                   curr_risk_exposure == risk_target[0]]
    objective = cvxpy.Minimize(-w.T * er)
    prob = cvxpy.Problem(objective, constraints)
    prob.solve(solver='ECOS')
    elapsed_time2 = timeit.timeit("prob.solve(solver='ECOS')",
                                  number=number, globals=globals()) / number * 1000

    np.testing.assert_almost_equal(x1 @ er, np.array(w.value).flatten() @ er, 4)

    df.loc['alphamind', u_name] = elapsed_time1
    df.loc['cvxpy', u_name] = elapsed_time2
    alpha_logger.info(f"{u_name} is finished")

df

prob.value
_____no_output_____
MIT
notebooks/Example 7 - Portfolio Optimizer Performance.ipynb
wangjiehui11235/alpha-mind
2. Linear optimization (with L1 constraints)
from cvxpy import pnorm

df = pd.DataFrame(columns=u_names,
                  index=['cvxpy', 'alphamind (clp simplex)',
                         'alphamind (clp interior)', 'alphamind (ecos)'])
turn_over_target = 0.5
number = 1

for u_name, sample_data in zip(u_names, data_set):
    factor_data = sample_data['factor']
    er = factor_data[factor].values
    n = len(er)

    lbound = np.ones(n) * lb
    ubound = np.ones(n) * ub

    if 'weight' in factor_data:
        current_position = factor_data.weight.values
    else:
        current_position = np.ones_like(er) / len(er)

    risk_constraints = np.ones((len(er), 1))
    risk_target = (np.array([1.]), np.array([1.]))

    status, y, x1 = linear_builder(er, lbound, ubound, risk_constraints, risk_target,
                                   turn_over_target=turn_over_target,
                                   current_position=current_position,
                                   method='interior')
    elapsed_time1 = timeit.timeit("""linear_builder(er, lbound, ubound, risk_constraints, risk_target,
                                                    turn_over_target=turn_over_target,
                                                    current_position=current_position,
                                                    method='interior')""",
                                  number=number, globals=globals()) / number * 1000

    w = cvxpy.Variable(n)
    curr_risk_exposure = risk_constraints.T @ w
    constraints = [w >= lbound,
                   w <= ubound,
                   curr_risk_exposure == risk_target[0],
                   pnorm(w - current_position, 1) <= turn_over_target]
    objective = cvxpy.Minimize(-w.T * er)
    prob = cvxpy.Problem(objective, constraints)
    prob.solve(solver='ECOS')
    elapsed_time2 = timeit.timeit("prob.solve(solver='ECOS')",
                                  number=number, globals=globals()) / number * 1000

    status, y, x2 = linear_builder(er, lbound, ubound, risk_constraints, risk_target,
                                   turn_over_target=turn_over_target,
                                   current_position=current_position,
                                   method='simplex')
    elapsed_time3 = timeit.timeit("""linear_builder(er, lbound, ubound, risk_constraints, risk_target,
                                                    turn_over_target=turn_over_target,
                                                    current_position=current_position,
                                                    method='simplex')""",
                                  number=number, globals=globals()) / number * 1000

    status, y, x3 = linear_builder(er, lbound, ubound, risk_constraints, risk_target,
                                   turn_over_target=turn_over_target,
                                   current_position=current_position,
                                   method='ecos')
    elapsed_time4 = timeit.timeit("""linear_builder(er, lbound, ubound, risk_constraints, risk_target,
                                                    turn_over_target=turn_over_target,
                                                    current_position=current_position,
                                                    method='ecos')""",
                                  number=number, globals=globals()) / number * 1000

    np.testing.assert_almost_equal(x1 @ er, np.array(w.value).flatten() @ er, 4)
    np.testing.assert_almost_equal(x2 @ er, np.array(w.value).flatten() @ er, 4)
    np.testing.assert_almost_equal(x3 @ er, np.array(w.value).flatten() @ er, 4)

    df.loc['alphamind (clp interior)', u_name] = elapsed_time1
    df.loc['alphamind (clp simplex)', u_name] = elapsed_time3
    df.loc['alphamind (ecos)', u_name] = elapsed_time4
    df.loc['cvxpy', u_name] = elapsed_time2
    alpha_logger.info(f"{u_name} is finished")

df
_____no_output_____
MIT
notebooks/Example 7 - Portfolio Optimizer Performance.ipynb
wangjiehui11235/alpha-mind
3. Mean-Variance optimization (unconstrained)
from cvxpy import *

df = pd.DataFrame(columns=u_names, index=['cvxpy', 'alphamind'])
number = 1

for u_name, sample_data in zip(u_names, data_set):
    all_styles = risk_styles + industry_styles + ['COUNTRY']
    factor_data = sample_data['factor']
    risk_cov = sample_data['risk_cov'][all_styles].values
    risk_exposure = factor_data[all_styles].values
    special_risk = factor_data.srisk.values
    sec_cov = risk_exposure @ risk_cov @ risk_exposure.T / 10000 + np.diag(special_risk ** 2) / 10000

    er = factor_data[factor].values
    n = len(er)

    bm = np.zeros(n)
    lbound = -np.ones(n) * np.inf
    ubound = np.ones(n) * np.inf

    risk_model = dict(cov=None,
                      factor_cov=risk_cov/10000.,
                      factor_loading=risk_exposure,
                      idsync=(special_risk**2)/10000.)

    status, y, x1 = mean_variance_builder(er, risk_model, bm, lbound, ubound, None, None, lam=1)
    elapsed_time1 = timeit.timeit("""mean_variance_builder(er, risk_model, bm, lbound, ubound,
                                                           None, None, lam=1)""",
                                  number=number, globals=globals()) / number * 1000

    w = cvxpy.Variable(n)
    risk = sum_squares(multiply(special_risk / 100., w)) \
           + quad_form((w.T * risk_exposure).T, risk_cov / 10000.)
    objective = cvxpy.Minimize(-w.T * er + 0.5 * risk)
    prob = cvxpy.Problem(objective)
    prob.solve(solver='ECOS')
    elapsed_time2 = timeit.timeit("prob.solve(solver='ECOS')",
                                  number=number, globals=globals()) / number * 1000

    u1 = -x1 @ er + 0.5 * x1 @ sec_cov @ x1
    x2 = np.array(w.value).flatten()
    u2 = -x2 @ er + 0.5 * x2 @ sec_cov @ x2

    np.testing.assert_array_almost_equal(u1, u2, 4)

    df.loc['alphamind', u_name] = elapsed_time1
    df.loc['cvxpy', u_name] = elapsed_time2
    alpha_logger.info(f"{u_name} is finished")

df
_____no_output_____
MIT
notebooks/Example 7 - Portfolio Optimizer Performance.ipynb
wangjiehui11235/alpha-mind
4. Mean-Variance optimization (box constraints)
df = pd.DataFrame(columns=u_names, index=['cvxpy', 'alphamind'])
number = 1

for u_name, sample_data in zip(u_names, data_set):
    all_styles = risk_styles + industry_styles + ['COUNTRY']
    factor_data = sample_data['factor']
    risk_cov = sample_data['risk_cov'][all_styles].values
    risk_exposure = factor_data[all_styles].values
    special_risk = factor_data.srisk.values
    sec_cov = risk_exposure @ risk_cov @ risk_exposure.T / 10000 + np.diag(special_risk ** 2) / 10000

    er = factor_data[factor].values
    n = len(er)

    bm = np.zeros(n)
    lbound = np.zeros(n)
    ubound = np.ones(n) * 0.1

    risk_model = dict(cov=None,
                      factor_cov=risk_cov/10000.,
                      factor_loading=risk_exposure,
                      idsync=(special_risk**2)/10000.)

    status, y, x1 = mean_variance_builder(er, risk_model, bm, lbound, ubound, None, None)
    elapsed_time1 = timeit.timeit("""mean_variance_builder(er, risk_model, bm, lbound, ubound,
                                                           None, None)""",
                                  number=number, globals=globals()) / number * 1000

    w = cvxpy.Variable(n)
    risk = sum_squares(multiply(special_risk / 100., w)) \
           + quad_form((w.T * risk_exposure).T, risk_cov / 10000.)
    objective = cvxpy.Minimize(-w.T * er + 0.5 * risk)
    constraints = [w >= lbound, w <= ubound]
    prob = cvxpy.Problem(objective, constraints)
    prob.solve(solver='ECOS')
    elapsed_time2 = timeit.timeit("prob.solve(solver='ECOS')",
                                  number=number, globals=globals()) / number * 1000

    u1 = -x1 @ er + 0.5 * x1 @ sec_cov @ x1
    x2 = np.array(w.value).flatten()
    u2 = -x2 @ er + 0.5 * x2 @ sec_cov @ x2

    np.testing.assert_array_almost_equal(u1, u2, 4)

    df.loc['alphamind', u_name] = elapsed_time1
    df.loc['cvxpy', u_name] = elapsed_time2
    alpha_logger.info(f"{u_name} is finished")

df
_____no_output_____
MIT
notebooks/Example 7 - Portfolio Optimizer Performance.ipynb
wangjiehui11235/alpha-mind
5. Mean-Variance optimization (box and linear constraints)
df = pd.DataFrame(columns=u_names, index=['cvxpy', 'alphamind'])
number = 1

for u_name, sample_data in zip(u_names, data_set):
    all_styles = risk_styles + industry_styles + ['COUNTRY']
    factor_data = sample_data['factor']
    risk_cov = sample_data['risk_cov'][all_styles].values
    risk_exposure = factor_data[all_styles].values
    special_risk = factor_data.srisk.values
    sec_cov = risk_exposure @ risk_cov @ risk_exposure.T / 10000 + np.diag(special_risk ** 2) / 10000

    er = factor_data[factor].values
    n = len(er)

    bm = np.zeros(n)
    lbound = np.zeros(n)
    ubound = np.ones(n) * 0.1

    risk_constraints = np.ones((len(er), 1))
    risk_target = (np.array([1.]), np.array([1.]))

    risk_model = dict(cov=None,
                      factor_cov=risk_cov/10000.,
                      factor_loading=risk_exposure,
                      idsync=(special_risk**2)/10000.)

    status, y, x1 = mean_variance_builder(er, risk_model, bm, lbound, ubound,
                                          risk_constraints, risk_target)
    elapsed_time1 = timeit.timeit("""mean_variance_builder(er, risk_model, bm, lbound, ubound,
                                                           risk_constraints, risk_target)""",
                                  number=number, globals=globals()) / number * 1000

    w = cvxpy.Variable(n)
    risk = sum_squares(multiply(special_risk / 100., w)) \
           + quad_form((w.T * risk_exposure).T, risk_cov / 10000.)
    objective = cvxpy.Minimize(-w.T * er + 0.5 * risk)
    curr_risk_exposure = risk_constraints.T @ w
    constraints = [w >= lbound,
                   w <= ubound,
                   curr_risk_exposure == risk_target[0]]
    prob = cvxpy.Problem(objective, constraints)
    prob.solve(solver='ECOS')
    elapsed_time2 = timeit.timeit("prob.solve(solver='ECOS')",
                                  number=number, globals=globals()) / number * 1000

    u1 = -x1 @ er + 0.5 * x1 @ sec_cov @ x1
    x2 = np.array(w.value).flatten()
    u2 = -x2 @ er + 0.5 * x2 @ sec_cov @ x2

    np.testing.assert_array_almost_equal(u1, u2, 4)

    df.loc['alphamind', u_name] = elapsed_time1
    df.loc['cvxpy', u_name] = elapsed_time2
    alpha_logger.info(f"{u_name} is finished")

df
_____no_output_____
MIT
notebooks/Example 7 - Portfolio Optimizer Performance.ipynb
wangjiehui11235/alpha-mind
6. Linear optimization (with quadratic constraint)
df = pd.DataFrame(columns=u_names, index=['cvxpy', 'alphamind'])
number = 1
target_vol = 0.5

for u_name, sample_data in zip(u_names, data_set):
    all_styles = risk_styles + industry_styles + ['COUNTRY']
    factor_data = sample_data['factor']
    risk_cov = sample_data['risk_cov'][all_styles].values
    risk_exposure = factor_data[all_styles].values
    special_risk = factor_data.srisk.values
    sec_cov = risk_exposure @ risk_cov @ risk_exposure.T / 10000 + np.diag(special_risk ** 2) / 10000

    er = factor_data[factor].values
    n = len(er)

    if 'weight' in factor_data:
        bm = factor_data.weight.values
    else:
        bm = np.ones_like(er) / n

    lbound = np.zeros(n)
    ubound = np.ones(n) * 0.1

    risk_constraints = np.ones((n, 1))
    risk_target = (np.array([bm.sum()]), np.array([bm.sum()]))

    risk_model = dict(cov=None,
                      factor_cov=risk_cov/10000.,
                      factor_loading=risk_exposure,
                      idsync=(special_risk**2)/10000.)

    status, y, x1 = target_vol_builder(er, risk_model, bm, lbound, ubound,
                                       risk_constraints, risk_target,
                                       vol_target=target_vol)
    elapsed_time1 = timeit.timeit("""target_vol_builder(er, risk_model, bm, lbound, ubound,
                                                        risk_constraints, risk_target,
                                                        vol_target=target_vol)""",
                                  number=number, globals=globals()) / number * 1000

    w = cvxpy.Variable(n)
    risk = sum_squares(multiply(special_risk / 100., w)) \
           + quad_form((w.T * risk_exposure).T, risk_cov / 10000.)
    objective = cvxpy.Minimize(-w.T * er)
    curr_risk_exposure = risk_constraints.T @ w
    constraints = [w >= lbound,
                   w <= ubound,
                   curr_risk_exposure == risk_target[0],
                   risk <= target_vol * target_vol]
    prob = cvxpy.Problem(objective, constraints)
    prob.solve(solver='ECOS')
    elapsed_time2 = timeit.timeit("prob.solve(solver='ECOS')",
                                  number=number, globals=globals()) / number * 1000

    u1 = -x1 @ er
    x2 = np.array(w.value).flatten()
    u2 = -x2 @ er

    np.testing.assert_array_almost_equal(u1, u2, 4)

    df.loc['alphamind', u_name] = elapsed_time1
    df.loc['cvxpy', u_name] = elapsed_time2
    alpha_logger.info(f"{u_name} is finished")

df
_____no_output_____
MIT
notebooks/Example 7 - Portfolio Optimizer Performance.ipynb
wangjiehui11235/alpha-mind
Based on **Train-AEmodel-GRU2x32-encoding16-AEmodel-DR5-ps-SDSS-QSO-balanced-wandb.ipynb**. To-do
gpu_info = !nvidia-smi
gpu_info = '\n'.join(gpu_info)
if gpu_info.find('failed') >= 0:
    print('Select the Runtime > "Change runtime type" menu to enable a GPU accelerator, ')
    print('and then re-execute this cell.')
else:
    print(gpu_info)

from psutil import virtual_memory
ram_gb = virtual_memory().total / 1e9
print('Your runtime has {:.1f} gigabytes of available RAM\n'.format(ram_gb))

if ram_gb < 20:
    print('To enable a high-RAM runtime, select the Runtime > "Change runtime type"')
    print('menu, and then select High-RAM in the Runtime shape dropdown. Then, ')
    print('re-execute this cell.')
else:
    print('You are using a high-RAM runtime!')
Your runtime has 8.6 gigabytes of available RAM To enable a high-RAM runtime, select the Runtime > "Change runtime type" menu, and then select High-RAM in the Runtime shape dropdown. Then, re-execute this cell.
Apache-2.0
01_(Paula)TrainAE.ipynb
hernanlira/hl_stargaze
Pixel Shuffle. This notebook is a comparison between two best practices: pixel shuffle, and upsampling followed by a convolution. Imports
import os
import glob
import imageio
import pandas as pd
import torch
from torch import nn
import torch.nn.functional as F
from torchsummary import summary
from fastai import *
from fastai.tabular import *
from fastai.vision import *
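Pixel shuffle (depth-to-space) rearranges a `(C·r², H, W)` tensor into `(C, H·r, W·r)`, trading channels for spatial resolution instead of interpolating. A minimal NumPy sketch of the rearrangement that `torch.nn.PixelShuffle` performs, for illustration only (the notebook's actual model lives in `pixelShuffle_model.py`):

```python
import numpy as np

def pixel_shuffle(x, r):
    """Rearrange (C*r*r, H, W) -> (C, H*r, W*r), as torch.nn.PixelShuffle does."""
    c_r2, h, w = x.shape
    c = c_r2 // (r * r)
    x = x.reshape(c, r, r, h, w)      # split channels into (C, r, r) sub-pixel blocks
    x = x.transpose(0, 3, 1, 4, 2)    # interleave: (C, H, r, W, r)
    return x.reshape(c, h * r, w * r)

x = np.arange(16).reshape(4, 2, 2)    # C=1, r=2, H=W=2
print(pixel_shuffle(x, 2))
```

Each output pixel at `(h*r + i, w*r + j)` comes from input channel `i*r + j` at `(h, w)`, which is exactly the sub-pixel convolution layout.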
_____no_output_____
MIT
notebooks/cifar-10/pixelShuffle.ipynb
henriwoodcock/Applying-Modern-Best-Practices-to-Autoencoders
Data
colab = True

if colab:
    from google.colab import drive
    drive.mount('/content/drive', force_remount=True)
    %cp "/content/drive/My Drive/autoencoder-training/data.zip" .
    !unzip -q data.zip
    image_path = "data"
    %cp "/content/drive/My Drive/autoencoder-training/model_layers.py" .
    %cp "/content/drive/My Drive/autoencoder-training/baseline_model.py" .
    %cp "/content/drive/My Drive/autoencoder-training/pixelShuffle_model.py" .
    import pixelShuffle_model
else:
    os.chdir("../")
    image_path = os.getcwd() + "/data"

np.random.seed(3333)
torch.manual_seed(3333)

size = 32
batchsize = 128

#tfms = get_transforms(do_flip = True)
tfms = get_transforms(do_flip=True, flip_vert=True, max_rotate=10, max_zoom=1.1,
                      max_lighting=0.2, max_warp=0.2, p_affine=0, p_lighting=0.75)

src = (ImageImageList.from_folder(image_path)
       .split_by_folder()
       .label_from_func(lambda x: x))

data = (src.transform(tfms, size=size, tfm_y=True)
        .databunch(bs=batchsize)
        .normalize(imagenet_stats, do_y=False))
_____no_output_____
MIT
notebooks/cifar-10/pixelShuffle.ipynb
henriwoodcock/Applying-Modern-Best-Practices-to-Autoencoders
Model
autoencoder = pixelShuffle_model.autoencoder()
learn = Learner(data, autoencoder, loss_func=F.mse_loss)

learn.fit_one_cycle(5)

learn.lr_find()
learn.recorder.plot(suggestion=True)

learn.metrics = [mean_squared_error, mean_absolute_error, r2_score, explained_variance]
learn.fit_one_cycle(10, max_lr=1e-03)
_____no_output_____
MIT
notebooks/cifar-10/pixelShuffle.ipynb
henriwoodcock/Applying-Modern-Best-Practices-to-Autoencoders
Results

Training
learn.show_results(ds_type=DatasetType.Train)
_____no_output_____
MIT
notebooks/cifar-10/pixelShuffle.ipynb
henriwoodcock/Applying-Modern-Best-Practices-to-Autoencoders
Validation
learn.show_results(ds_type=DatasetType.Valid)

torch.save(autoencoder, "/content/drive/My Drive/autoencoder-training/pixelShuffle-Cifar10.pt")
_____no_output_____
MIT
notebooks/cifar-10/pixelShuffle.ipynb
henriwoodcock/Applying-Modern-Best-Practices-to-Autoencoders
y_label = np.argmax(y_data, axis=1)
y_text = ['bed', 'bird', 'cat', 'dog', 'house', 'tree']
y_table = {i:text for i, text in enumerate(y_text)}
y_table_array = np.array([(i, text) for i, text in enumerate(y_text)])

x_train_temp, x_test, y_train_temp, y_test = train_test_split(
    x_2d_data, y_label, test_size=0.2, random_state=42, stratify=y_label)
x_train, x_val, y_train, y_val = train_test_split(
    x_train_temp, y_train_temp, test_size=0.25, random_state=42, stratify=y_train_temp)

x_train.shape, y_train.shape, x_val.shape, y_val.shape, x_test.shape, y_test.shape
np.savez_compressed(path.join(base_path, 'imagenet_6_class_172_train_data_1.npz'), x_data=x_train, y_data=y_train, y_list=y_list) np.savez_compressed(path.join(base_path, 'imagenet_6_class_172_val_data_1.npz'), x_data=x_val, y_data=y_val, y_list=y_list)
_____no_output_____
MIT
make_172_imagenet_6_class_data-Copy1.ipynb
BbChip0103/research_2d_bspl
Control Flow Python if else
def multiply(a, b):
    """Function to multiply"""
    print(a * b)

print(multiply.__doc__)
multiply(5,2)

def func():
    """Function to check i is greater or smaller"""
    i = 10
    if i > 5:
        print("i is greater than 5")
    else:
        print("i is less than or equal to 5")

print(func.__doc__)
func()
Function to check i is greater or smaller i is greater than 5
MIT
03...learn_python.ipynb
ram574/Python-Learning
Nested if
i = 20
if i == 20:
    print("i is 20")
    if i < 15:
        print("i is less than 15")
    if i > 15:
        print("i is greater than 15")
else:
    print("Not present")
_____no_output_____
MIT
03...learn_python.ipynb
ram574/Python-Learning
if-elif-else ladder
def func():
    i = 10
    if i == 10:
        print("i is equal to 10")
    elif i == 15:
        print("Not present")
    elif i == 20:
        print('i am there')
    else:
        print("none")

func()
_____no_output_____
MIT
03...learn_python.ipynb
ram574/Python-Learning
Python for loop
def func():
    var = input("enter number:")
    x = int(var)
    for i in range(x):
        print(i)

func()

## Lists iteration
def func():
    print("List Iteration")
    l = ["tulasi", "ram", "ponaganti"]
    for i in l:
        print(i)

func()

# Iterating over a tuple (immutable)
def func():
    print("\nTuple Iteration")
    t = ("tulasi", "ram", "ponaganti")
    for i in t:
        print(i)

func()

# Iterating over a String
def func():
    print("\nString Iteration")
    s = "tulasi"
    for i in s:
        print(i)

func()

# Iterating over dictionary
def func():
    print("\nDictionary Iteration")
    d = dict()
    d['xyz'] = 123
    d['abc'] = 345
    for i in d:
        print("% s % d" % (i, d[i]))

func()
List Iteration tulasi ram ponaganti Tuple Iteration tulasi ram ponaganti String Iteration t u l a s i Dictionary Iteration xyz 123 abc 345
MIT
03...learn_python.ipynb
ram574/Python-Learning
Python for Loop with Continue Statement
def func():
    for letter in 'tulasiram':
        if letter == 'a':
            continue
        print(letter)

func()
t u l s i r m
MIT
03...learn_python.ipynb
ram574/Python-Learning
Python For Loop with Break Statement
def func():
    for letter in 'tulasiram':
        if letter == 'a':
            break
        print('Current Letter :', letter)

func()
Current Letter : t Current Letter : u Current Letter : l
MIT
03...learn_python.ipynb
ram574/Python-Learning
Python For Loop with Pass Statement
words = ['tulasi','ram','ponaganti']

def func():
    # An empty loop (renamed the loop variable to avoid shadowing the built-in `list`)
    for letter in 'ponaganti':
        pass
    print('Last Letter :', letter)

func()
Last Letter : i
MIT
03...learn_python.ipynb
ram574/Python-Learning
Python range
def func():
    sum = 0
    for i in range(1,5):
        sum = sum + i
        print(sum)

func()

def func():
    i = 5
    for x in range(i):
        i = i + x
        print(i)

func()
5 6 8 11 15
MIT
03...learn_python.ipynb
ram574/Python-Learning
Python for loop with else
for i in range(1, 4):
    print(i)
else:  # Executed because no break in for
    print("No Break\n")

for i in range(1, 4):
    print(i)
    break
else:  # Not executed as there is a break
    print("No Break")

### Using all for loop statements in a small program
def func():
    var = input("enter number:")
    x = int(var)
    for i in range(x):
        option = input("print, skip, or exit")
        if option == "print":
            print(i)
        elif option == 'skip':
            continue
        elif option == 'exit':
            break
    print("Good bye....!")

func()

### Working with lists
def func():
    product_prices = []
    for i in range(5):
        product = input("How much does the product cost?")
        product = float(product)
        product_prices.append(product)
    print(product_prices)
    print("Total price : ", sum(product_prices))
    print("High cost of product :", max(product_prices))
    print("average price of products", sum(product_prices)/len(product_prices))

func()

### Nested for loop
### One to twelve times tables using a for loop
def func():
    for num1 in range(1,13):
        for num2 in range(1,13):
            print(num1, "*", num2, "=", num1*num2)

func()
1 * 1 = 1 1 * 2 = 2 1 * 3 = 3 1 * 4 = 4 1 * 5 = 5 1 * 6 = 6 1 * 7 = 7 1 * 8 = 8 1 * 9 = 9 1 * 10 = 10 1 * 11 = 11 1 * 12 = 12 2 * 1 = 2 2 * 2 = 4 2 * 3 = 6 2 * 4 = 8 2 * 5 = 10 2 * 6 = 12 2 * 7 = 14 2 * 8 = 16 2 * 9 = 18 2 * 10 = 20 2 * 11 = 22 2 * 12 = 24 3 * 1 = 3 3 * 2 = 6 3 * 3 = 9 3 * 4 = 12 3 * 5 = 15 3 * 6 = 18 3 * 7 = 21 3 * 8 = 24 3 * 9 = 27 3 * 10 = 30 3 * 11 = 33 3 * 12 = 36 4 * 1 = 4 4 * 2 = 8 4 * 3 = 12 4 * 4 = 16 4 * 5 = 20 4 * 6 = 24 4 * 7 = 28 4 * 8 = 32 4 * 9 = 36 4 * 10 = 40 4 * 11 = 44 4 * 12 = 48 5 * 1 = 5 5 * 2 = 10 5 * 3 = 15 5 * 4 = 20 5 * 5 = 25 5 * 6 = 30 5 * 7 = 35 5 * 8 = 40 5 * 9 = 45 5 * 10 = 50 5 * 11 = 55 5 * 12 = 60 6 * 1 = 6 6 * 2 = 12 6 * 3 = 18 6 * 4 = 24 6 * 5 = 30 6 * 6 = 36 6 * 7 = 42 6 * 8 = 48 6 * 9 = 54 6 * 10 = 60 6 * 11 = 66 6 * 12 = 72 7 * 1 = 7 7 * 2 = 14 7 * 3 = 21 7 * 4 = 28 7 * 5 = 35 7 * 6 = 42 7 * 7 = 49 7 * 8 = 56 7 * 9 = 63 7 * 10 = 70 7 * 11 = 77 7 * 12 = 84 8 * 1 = 8 8 * 2 = 16 8 * 3 = 24 8 * 4 = 32 8 * 5 = 40 8 * 6 = 48 8 * 7 = 56 8 * 8 = 64 8 * 9 = 72 8 * 10 = 80 8 * 11 = 88 8 * 12 = 96 9 * 1 = 9 9 * 2 = 18 9 * 3 = 27 9 * 4 = 36 9 * 5 = 45 9 * 6 = 54 9 * 7 = 63 9 * 8 = 72 9 * 9 = 81 9 * 10 = 90 9 * 11 = 99 9 * 12 = 108 10 * 1 = 10 10 * 2 = 20 10 * 3 = 30 10 * 4 = 40 10 * 5 = 50 10 * 6 = 60 10 * 7 = 70 10 * 8 = 80 10 * 9 = 90 10 * 10 = 100 10 * 11 = 110 10 * 12 = 120 11 * 1 = 11 11 * 2 = 22 11 * 3 = 33 11 * 4 = 44 11 * 5 = 55 11 * 6 = 66 11 * 7 = 77 11 * 8 = 88 11 * 9 = 99 11 * 10 = 110 11 * 11 = 121 11 * 12 = 132 12 * 1 = 12 12 * 2 = 24 12 * 3 = 36 12 * 4 = 48 12 * 5 = 60 12 * 6 = 72 12 * 7 = 84 12 * 8 = 96 12 * 9 = 108 12 * 10 = 120 12 * 11 = 132 12 * 12 = 144
MIT
03...learn_python.ipynb
ram574/Python-Learning
Python while loop
## Single line statement
def func():
    '''first one'''
    count = 0
    while (count < 5): count = count + 1; print("Tulasi Ram")

print(func.__doc__)
func()

### or
def func():
    '''Second one'''
    count = 0
    while (count < 5):
        count = count + 1
        print("Tulasi Ram")

print(func.__doc__)
func()

def func():
    list = ["ram","tulasi","ponaganti"]
    while list:
        print(list.pop())

func()

def func():
    i = 0
    for i in range(10):
        i += 1
    return i

func()

def func():
    i = 0
    a = ['tulasi','ram','ponaganti']
    while i < len(a):
        if a[i] == 'tulasi' or a[i] == 'ram':
            i += 1
            continue
        print('Current word :', a[i])
        i += 1

func()

def func():
    i = 0
    a = ['tulasi','ram','ponaganti']
    while i < len(a):
        if a[i] == 'ponaganti':
            i += 1
            break
        print('Current word :', a[i])
        i += 1

func()

def func():
    i = 0
    a = ['tulasi','ram','ponaganti']
    while i < len(a):
        if a[i] == 'tulasi':
            i += 1
            pass
        print('Current word :', a[i])
        i += 1

func()

def whileElseFunc():
    i = 0
    while i < 10:
        i += 1
        print(i)
    else:
        print('no break')

whileElseFunc()
1 2 3 4 5 6 7 8 9 10 no break
MIT
03...learn_python.ipynb
ram574/Python-Learning
using break in loops
def func():
    i = 0
    for i in range(10):
        i += 1
        print(i)
        break
    else:
        print('no break')

func()
1
MIT
03...learn_python.ipynb
ram574/Python-Learning
using continue in loops
def func():
    i = 0
    for i in range(10):
        i += 1
        print(i)
        continue
    else:
        for i in range(5):
            i += 1
            print(i)
            break

func()

def func():
    i = 0
    for i in range(10):
        i += 1
        print(i)
        pass
    else:
        for i in range(5):
            i += 1
            print(i)

func()
1 2 3 4 5 6 7 8 9 10 1 2 3 4 5
MIT
03...learn_python.ipynb
ram574/Python-Learning
Looping techniques using enumerate()
def enumearteFunc():
    list = ['tulasi','ram','ponaganti']
    for key in enumerate(list):
        print(key)

enumearteFunc()

def enumearteFunc():
    list = ['tulasi','ram','ponaganti']
    for key, value in enumerate(list):
        print(value)

enumearteFunc()

def zipFunc():
    list1 = ['name', 'firstname', 'lastname']
    list2 = ['ram', 'tulasi', 'ponaganti']
    # using zip() to combine two containers and print values
    for list1, list2 in zip(list1, list2):
        print('What is your {0}? I am {1}.'.format(list1, list2))

zipFunc()
What is your name? I am ram. What is your firstname? I am tulasi. What is your lastname? I am ponaganti.
MIT
03...learn_python.ipynb
ram574/Python-Learning
""" Using iteritem(): iteritems() is used to loop through the dictionary printing the dictionary key-value pair sequentially which is used before Python 3 version Using items(): items() performs the similar task on dictionary as iteritems() but have certain disadvantages when compared with iteritems() """
def itemFunc(): name = {"name": "tulasi", "firstname": "ram"} print("The key value pair using items is : ") for key, value in name.items(): print(key, value) itemFunc()
The key value pair using items is : name tulasi firstname ram
MIT
03...learn_python.ipynb
ram574/Python-Learning
sorting the list items using loop
def sortedFunc():
    list = ['ram','tulasi','ponaganti']
    for i in list:
        print(sorted(i))
        continue
    for i in reversed(list):
        print(i, end=" ")

sortedFunc()
['a', 'm', 'r'] ['a', 'i', 'l', 's', 't', 'u'] ['a', 'a', 'g', 'i', 'n', 'n', 'o', 'p', 't'] ponaganti tulasi ram
MIT
03...learn_python.ipynb
ram574/Python-Learning
Load CNNTracker
# Only need to select one model

# Model 1: CNN tracker for ICA TOF MRA
swc_name = 'cnn_snake'
import sys
sys.path.append(r'U:\LiChen\AICafe\CNNTracker')
from models.centerline_net import CenterlineNet

max_points = 500
prob_thr = 0.85
infer_model = CenterlineNet(n_classes=max_points)
checkpoint_path_infer = r"D:\tensorflow\LiChen\AICafe\CNNTracker\CNNTracker1-1\classification_checkpoints\centerline_net_model_Epoch_29.pkl"
checkpoint = torch.load(checkpoint_path_infer)
net_dict = checkpoint['net_dict']
infer_model.load_state_dict(net_dict)
infer_model.to(device)
infer_model.eval()

# Model 2: CNN tracker for Coronary CTA
swc_name = 'cnn_snake'
max_points = 500
prob_thr = 0.85
infer_model = CenterlineNet(n_classes=max_points)
checkpoint_path_infer = r"D:\tensorflow\LiChen\AICafe\CNNTracker\CNNTracker2-1\classification_checkpoints\centerline_net_model_Epoch_81.pkl"
checkpoint = torch.load(checkpoint_path_infer)
net_dict = checkpoint['net_dict']
infer_model.load_state_dict(net_dict)
infer_model.to(device)
infer_model.eval()

# Model 3: CNN tracker for LATTE
swc_name = 'cnn_snake'
max_points = 500
prob_thr = 0.85
infer_model = CenterlineNet(n_classes=max_points)
checkpoint_path_infer = r"D:\tensorflow\LiChen\AICafe\CNNTracker\CNNTracker4-1\classification_checkpoints\centerline_net_model_Epoch_99.pkl"
checkpoint = torch.load(checkpoint_path_infer)
net_dict = checkpoint['net_dict']
infer_model.load_state_dict(net_dict)
infer_model.to(device)
infer_model.eval()
_____no_output_____
MIT
CNNTracker1-2.ipynb
clatfd/Coronary-Artery-Tracking-via-3D-CNN-Classification
Load datasets
dbname = 'BRAVEAI'
icafe_dir = r'\\DESKTOP2\GiCafe\result/'
seg_model_name = 'LumenSeg2-3'
with open(icafe_dir+'/'+dbname+'/db.list','rb') as fp:
    dblist = pickle.load(fp)
train_list = dblist['train']
val_list = dblist['val']
test_list = dblist['test']
pilist = [pi.split('/')[1] for pi in dblist['test']]
len(pilist)

dbname = 'RotterdanCoronary'
icafe_dir = r'\\DESKTOP2\GiCafe\result/'
pilist = ['0_dataset05_U']
seg_model_name = 'CoronarySeg1-8-5'

dbname = 'UNC'
icafe_dir = r'\\DESKTOP2\GiCafe\result/'
seg_model_name = 'LumenSeg5-1'
with open(icafe_dir+'/'+dbname+'/db.list','rb') as fp:
    dblist = pickle.load(fp)
pilist = [pi.split('/')[1] for pi in dblist['test']]
len(pilist)

dbname = 'HarborViewT1Pre'
icafe_dir = r'\\DESKTOP2\GiCafe\result/'
pilist = ['0_ID%d_U'%i for i in [2,9,10,11,12]]
len(pilist)

# MERGE
dbname = 'CAREIIMERGEGT'
icafe_dir = r'\\DESKTOP2\GiCafe\result/'
seg_model_name = 'LumenSeg6-1'
with open(icafe_dir+'/'+dbname+'/db.list','rb') as fp:
    dblist = pickle.load(fp)
pilist = [pi.split('/')[1] for pi in dblist['test']]
len(pilist)

dbname = 'IPH-Sup-TOF-FullCoverage'
icafe_dir = r'\\DESKTOP2\GiCafe\result/'
seg_model_name = 'LumenSeg7-1'
dblist_name = icafe_dir+'/'+dbname+'/db.list'
with open(dblist_name,'rb') as fp:
    dblist = pickle.load(fp)
pilist = [pi.split('/')[1] for pi in dblist['test']]
len(pilist)

dbname = 'WALLIAI'
icafe_dir = r'\\DESKTOP2\GiCafe\result/'
seg_model_name = 'LumenSeg8-1'
dblist_name = icafe_dir+'/'+dbname+'/db.list'
with open(dblist_name,'rb') as fp:
    dblist = pickle.load(fp)
pilist = [pi.split('/')[1] for pi in dblist['test']]
len(pilist), pilist
_____no_output_____
MIT
CNNTracker1-2.ipynb
clatfd/Coronary-Artery-Tracking-via-3D-CNN-Classification
Tracking
# from s.whole.modelname to swc traces
from iCafePython.connect.ext import extSnake
import SimpleITK as sitk

# redo artery tracing
RETRACE = 1
# redo artery tree constraint
RETREE = 1
# segmentation src
seg_src = 's.whole.'+seg_model_name

# Lumen segmentation threshold.
# A lower value will produce too many noise branches, and neighboring branches will merge into one.
# A higher value will reduce the number of detectable traces.
SEGTHRES = 0.5

# Max search range in merge/branch, unit in mm.
# A higher value will bridge larger gaps and merge parts of broken arteries,
# but will also force noise branches to be merged into the tree.
search_range_thres = 10

# which ves to build the graph from for artery labeling
graph_ves = 'seg_ves_ext_tree2'

DEBUG = 0

for pi in pilist[20:19:-1]:
    print('='*10,'Start processing',pilist.index(pi),'/',len(pilist),pi,'='*10)
    if not os.path.exists(icafe_dir+'/'+dbname+'/'+pi):
        os.mkdir(icafe_dir+'/'+dbname+'/'+pi)
    icafem = iCafe(icafe_dir+'/'+dbname+'/'+pi)

    # select the correct version of s.whole from potentially multiple segmentation versions and save as s.whole
    icafem.loadImg(seg_src)
    icafem.saveImg('s.whole', icafem.I[seg_src], np.float16)
    icafem.loadImg('s.whole')

    # export v.tif for 3d visualization if the icafe project does not have one already
    if 'v' not in icafem.listAvailImgs():
        vimg = copy.copy(icafem.I['s.whole'])
        vimg[vimg<0] = 0
        vimg = (vimg*255).astype(np.uint16)
        icafem.saveImg('v', vimg, np.int16)

    # Tracing
    if RETRACE or not icafem.existPath('seg_ves_ext.swc'):
        if 's.whole' not in icafem.I:
            icafem.loadImg('s.whole')
        seg_ves_snakelist = icafem.constructSkeleton(icafem.I['s.whole']>SEGTHRES)
        # load image
        file_name = icafem.getPath('o')
        re_spacing_img = sitk.GetArrayFromImage(sitk.ReadImage(file_name))
        seg_ves_snakelist = icafem.readSnake('seg_ves')
        seg_ves_ext_snakelist = extSnake(seg_ves_snakelist, infer_model, re_spacing_img, DEBUG=DEBUG)
        icafem.writeSWC('seg_ves_ext', seg_ves_ext_snakelist)
    else:
        seg_ves_ext_snakelist = icafem.readSnake('seg_ves_ext')
        print('read from existing seg ves ext')

    if seg_ves_ext_snakelist.NSnakes == 0:
        print('no snake found in seg ves, abort', pi)
        continue

    if RETREE or not icafem.existPath('seg_ves_ext_tree.swc'):
        if 's.whole' not in icafem.I:
            icafem.loadImg('s.whole')
        if icafem.xml.res is None:
            icafem.xml.setResolution(0.296875)
            icafem.xml.writexml()
        tree_snakelist = seg_ves_ext_snakelist.tree(icafem, search_range=search_range_thres/icafem.xml.res, int_src='o', DEBUG=DEBUG)
        icafem.writeSWC('seg_ves_ext_tree', tree_snakelist)
        tree_snakelist = tree_snakelist.tree(icafem, search_range=search_range_thres/3/icafem.xml.res, int_src='s.whole', DEBUG=DEBUG)
        icafem.writeSWC('seg_ves_ext_tree2', tree_snakelist)
        tree_main_snakelist = tree_snakelist.mainArtTree(dist_thres=10)
        icafem.writeSWC('seg_ves_ext_tree2_main', tree_main_snakelist)
_____no_output_____
MIT
CNNTracker1-2.ipynb
clatfd/Coronary-Artery-Tracking-via-3D-CNN-Classification
Artery labeling
from iCafePython.artlabel.artlabel import ArtLabel
art_label_predictor = ArtLabel()

for pi in pilist[:]:
    print('='*10,'Start processing',pilist.index(pi),'/',len(pilist),pi,'='*10)
    if not os.path.exists(icafe_dir+'/'+dbname+'/'+pi):
        os.mkdir(icafe_dir+'/'+dbname+'/'+pi)
    icafem = iCafe(icafe_dir+'/'+dbname+'/'+pi)

    # generate (simplified, node != 2) graph for GNN art labeling
    G = icafem.generateGraph(graph_ves, None, graphtype='graphsim', mode='test', trim=1)
    if len(G.nodes()) < 5:
        print('too few snakes for artlabeling')
        continue
    icafem.writeGraph(G, graphtype='graphsim')

    # predict landmarks
    pred_landmark, ves_end_pts = art_label_predictor.pred(icafem.getPath('graphsim'), icafem.xml.res)

    # complete graph Gcom for finding the pts in the path
    Gcom = icafem.generateGraph(graph_ves, None, graphtype='graphcom')
    ves_snakelist = findSnakeFromPts(Gcom, G, ves_end_pts)
    print('@@@predict', len(pred_landmark), 'landmarks', ves_snakelist)

    # save landmark and ves
    icafem.xml.landmark = pred_landmark
    icafem.xml.writexml()
    icafem.writeSWC('ves_pred', ves_snakelist)

vimg = vimg[:,:,::-1]
np.max(vimg)
icafem.saveImg('v', vimg, np.float16)

import tifffile
a = tifffile.imread(r"\\DESKTOP2\GiCafe\result\WALLI\47_WALLI-V-09-1-B_M\TH_47_WALLI-V-09-1-B_Mv.tif")
np.max(a[118])
_____no_output_____
MIT
CNNTracker1-2.ipynb
clatfd/Coronary-Artery-Tracking-via-3D-CNN-Classification
Eval
def eval_simple(snakelist):
    snakelist = copy.deepcopy(snakelist)
    _ = snakelist.resampleSnakes(1)
    # ground truth snakelist from icafem.veslist
    all_metic = snakelist.motMetric(icafem.veslist)
    metric_dict = all_metic.metrics(['MOTA','IDF1','MOTP','IDS'])
    #ref_snakelist = icafem.readSnake('ves')
    snakelist.compRefSnakelist(icafem.vessnakelist)
    metric_dict['OV'], metric_dict['OF'], metric_dict['OT'], metric_dict['AI'], \
        metric_dict['UM'], metric_dict['UMS'], metric_dict['ref_UM'], \
        metric_dict['ref_UMS'], metric_dict['mean_diff'] = snakelist.evalCompDist()

    str = ''
    metric_dict_simple = ['MOTA','IDF1','MOTP','IDS','OV']
    for key in metric_dict_simple:
        str += key+'\t'
    str += '\n'
    for key in metric_dict_simple:
        if type(metric_dict[key]) == int:
            str += '%d\t'%metric_dict[key]
        else:
            str += '%.3f\t'%metric_dict[key]
    print(str)
    return metric_dict

# calculate metrics and save in each pi folder
REFEAT = 0
for pi in pilist[:1]:
    print('='*10,'Start processing',pilist.index(pi),'/',len(pilist),pi,'='*10)
    icafem = iCafe(icafe_dir+'/'+dbname+'/'+pi)
    if REFEAT or not icafem.existPath('metric.pickle'):
        print('init metric')
        all_metric_dict = {}
    else:
        print('load metric')
        with open(icafem.getPath('metric.pickle'),'rb') as fp:
            all_metric_dict = pickle.load(fp)
    for vesname in ['seg_ves_ext_tree2_main']:
        #for vesname in ['seg_raw','seg_ves_ext_main','seg_ves_ext_tree2']:
        #comparison methods
        #for vesname in ['frangi_ves','seg_unet','seg_raw','raw_sep','cnn_snake','dcat_snake','seg_ves_ext_tree2_main']:
        if vesname in all_metric_dict:
            continue
        print('-'*10, vesname, '-'*10)
        pred_snakelist = icafem.readSnake(vesname)
        if pred_snakelist.NSnakes == 0:
            print('no snake', pi, vesname)
            continue
        all_metric_dict[vesname] = eval_simple(pred_snakelist.resampleSnakes(1))
    with open(icafem.getPath('metric.pickle'),'wb') as fp:
        pickle.dump(all_metric_dict, fp)

# check feat
pi = pilist[0]
icafem = iCafe(icafe_dir+'/'+dbname+'/'+pi)
with open(icafem.getPath('metric.pickle'),'rb') as fp:
    all_metric_dict = pickle.load(fp)
all_metric_dict

# collect feats from pickle
eval_vesname = {'frangi_ves':'Frangi','seg_unet':'U-Net','seg_raw':'DDT',
    'raw_sep':'iCafe','cnn_snake':'CNN Tracker','dcat_snake':'DCAT','seg_ves_ext_tree2_main':'DOST (ours)',
    'seg_ves':'DOST (initial curve)','seg_ves_ext_main':'DOST (deep snake)','seg_ves_ext_tree2':'DOST tree'}
feats = {}
for vesname in eval_vesname:
    feats[vesname] = {}
for pi in pilist[:]:
    icafem = iCafe(icafe_dir+'/'+dbname+'/'+pi)
    if not icafem.existPath('metric.pickle'):
        continue
    with open(icafem.getPath('metric.pickle'),'rb') as fp:
        all_metric_dict = pickle.load(fp)
    #for vesname in all_metric_dict:
    for vesname in eval_vesname:
        if vesname not in all_metric_dict:
            print('no', vesname, 'in', pi)
            continue
        for metric in all_metric_dict[vesname]:
            if metric not in feats[vesname]:
                feats[vesname][metric] = []
            feats[vesname][metric].append(all_metric_dict[vesname][metric])

sel_metrics = ['OV','AI','MOTA','IDF1','IDS']
print('\t'.join(['']+sel_metrics))
for vesname in feats:
    featstr = eval_vesname[vesname]+'\t'
    for metric in sel_metrics:
        if metric in ['IDS']:
            featstr += '%.1f\t'%np.mean(feats[vesname][metric])
        else:
            featstr += '%.3f\t'%np.mean(feats[vesname][metric])
    print(featstr)
_____no_output_____
MIT
CNNTracker1-2.ipynb
clatfd/Coronary-Artery-Tracking-via-3D-CNN-Classification
Assignment of Day 5
lst1 = [1,5,6,4,1,2,3,5]
lst2 = [1,5,6,5,1,2,3,6]
lst = [1,1,5]

count = 0
r = 0
for x in lst:
    for y in lst1[r:]:
        r += 1
        if (x == y):
            count += 1
            break
        else:
            pass
if (count == 3):
    print("it's a Match")
else:
    print("it's Gone")

count = 0
r = 0
for x in lst:
    for y in lst2[r:]:
        r += 1
        if (x == y):
            count += 1
            break
        else:
            pass
if (count == 3):
    print("it's a Match")
else:
    print("it's Gone")

# Make a function for prime numbers and use filter to filter out all the prime numbers from 1-2500
def prime(num):
    if num > 1:
        for i in range(2, num):
            if (num % i) == 0:
                break
        else:
            return num

number = filter(prime, range(1, 2500))
print(list(number))

# Make a lambda function for capitalizing the whole sentence passed using arguments,
# and map all the sentences in the list with the lambda function
st = ["hey this is sai", "I am in mumbai", "...."]
arr = list(map(lambda s: s.upper(), st))
print(arr)
['HEY THIS IS SAI', 'I AM IN MUMBAI', '....']
Apache-2.0
Assignment day5.ipynb
Raghavstyleking/LetsUpgrade-Python-Essentials
Day and Night Image Classifier

---

The day/night image dataset consists of 200 RGB color images in two categories: day and night. There are equal numbers of each example: 100 day images and 100 night images. We'd like to build a classifier that can accurately label these images as day or night, and that relies on finding distinguishing features between the two types of images!

*Note: All images come from the [AMOS dataset](http://cs.uky.edu/~jacobs/datasets/amos/) (Archive of Many Outdoor Scenes).*

Import resources

Before you get started on the project code, import the libraries and resources that you'll need.
import cv2  # computer vision library
import helpers

import numpy as np
import matplotlib.pyplot as plt
import matplotlib.image as mpimg

%matplotlib inline
_____no_output_____
MIT
1_1_Image_Representation/6_3. Average Brightness.ipynb
georgiagn/CVND_Exercises
Training and Testing Data

The 200 day/night images are separated into training and testing datasets.

* 60% of these images are training images, for you to use as you create a classifier.
* 40% are test images, which will be used to test the accuracy of your classifier.

First, we set some variables to keep track of where our images are stored:

    image_dir_training: the directory where our training image data is stored
    image_dir_test: the directory where our test image data is stored
# Image data directories
image_dir_training = "day_night_images/training/"
image_dir_test = "day_night_images/test/"
_____no_output_____
MIT
1_1_Image_Representation/6_3. Average Brightness.ipynb
georgiagn/CVND_Exercises
Load the datasets

These first few lines of code will load the training day/night images and store all of them in a variable, `IMAGE_LIST`. This list contains the images and their associated label ("day" or "night"). For example, the first image-label pair in `IMAGE_LIST` can be accessed by index: ``` IMAGE_LIST[0][:]```.
# Using the load_dataset function in helpers.py
# Load training data
IMAGE_LIST = helpers.load_dataset(image_dir_training)
_____no_output_____
MIT
1_1_Image_Representation/6_3. Average Brightness.ipynb
georgiagn/CVND_Exercises
Construct a `STANDARDIZED_LIST` of input images and output labels.

This function takes in a list of image-label pairs and outputs a **standardized** list of resized images and numerical labels.
# Standardize all training images
STANDARDIZED_LIST = helpers.standardize(IMAGE_LIST)
_____no_output_____
MIT
1_1_Image_Representation/6_3. Average Brightness.ipynb
georgiagn/CVND_Exercises
Visualize the standardized data

Display a standardized image from STANDARDIZED_LIST.
# Display a standardized image and its label

# Select an image by index
image_num = 0
selected_image = STANDARDIZED_LIST[image_num][0]
selected_label = STANDARDIZED_LIST[image_num][1]

# Display image and data about it
plt.imshow(selected_image)
print("Shape: " + str(selected_image.shape))
print("Label [1 = day, 0 = night]: " + str(selected_label))
Shape: (600, 1100, 3) Label [1 = day, 0 = night]: 1
MIT
1_1_Image_Representation/6_3. Average Brightness.ipynb
georgiagn/CVND_Exercises
Feature Extraction

Create a feature that represents the brightness in an image. We'll be extracting the **average brightness** using HSV colorspace. Specifically, we'll use the V channel (a measure of brightness), add up the pixel values in the V channel, then divide that sum by the area of the image to get the average Value of the image.

RGB to HSV conversion

Below, a test image is converted from RGB to HSV colorspace and each component is displayed in an image.
# Convert an image to HSV colorspace
# Visualize the individual color channels
image_num = 0
test_im = STANDARDIZED_LIST[image_num][0]
test_label = STANDARDIZED_LIST[image_num][1]

# Convert to HSV
hsv = cv2.cvtColor(test_im, cv2.COLOR_RGB2HSV)

# Print image label
print('Label: ' + str(test_label))

# HSV channels
h = hsv[:,:,0]
s = hsv[:,:,1]
v = hsv[:,:,2]

# Plot the original image and the three channels
f, (ax1, ax2, ax3, ax4) = plt.subplots(1, 4, figsize=(20,10))
ax1.set_title('Standardized image')
ax1.imshow(test_im)
ax2.set_title('H channel')
ax2.imshow(h, cmap='gray')
ax3.set_title('S channel')
ax3.imshow(s, cmap='gray')
ax4.set_title('V channel')
ax4.imshow(v, cmap='gray')
Label: 1
MIT
1_1_Image_Representation/6_3. Average Brightness.ipynb
georgiagn/CVND_Exercises
---

Find the average brightness using the V channel

This function takes in a **standardized** RGB image and returns a feature (a single value) that represents the average level of brightness in the image. We'll use this value to classify the image as day or night.
# Find the average Value or brightness of an image
def avg_brightness(rgb_image):
    # Convert image to HSV
    hsv = cv2.cvtColor(rgb_image, cv2.COLOR_RGB2HSV)

    # Add up all the pixel values in the V channel
    sum_brightness = np.sum(hsv[:,:,2])

    # Calculate the average brightness using the area of the image
    # and the sum calculated above
    avg = sum_brightness / (rgb_image.shape[0] * rgb_image.shape[1])

    return avg

# Testing average brightness levels
# Look at a number of different day and night images and think about
# what average brightness value separates the two types of images

# As an example, a "night" image is loaded in and its avg brightness is displayed
image_num = 190
test_im = STANDARDIZED_LIST[image_num][0]

avg = avg_brightness(test_im)
print('Avg brightness: ' + str(avg))
plt.imshow(test_im)
Avg brightness: 35.217
MIT
1_1_Image_Representation/6_3. Average Brightness.ipynb
georgiagn/CVND_Exercises
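The average-brightness feature above is meant to drive a simple threshold classifier. A minimal, dependency-free sketch (pure Python instead of the notebook's cv2/numpy pipeline, using the fact that the HSV V channel of an 8-bit RGB pixel equals max(R, G, B); the threshold of 100 is a hypothetical starting point you would tune by inspecting training images, not a value from the notebook):

```python
def avg_brightness(rgb_image):
    """Average HSV Value over an image given as nested [row][col] lists
    of (r, g, b) tuples. For 8-bit RGB, a pixel's V channel is max(R, G, B)."""
    total, count = 0, 0
    for row in rgb_image:
        for (r, g, b) in row:
            total += max(r, g, b)
            count += 1
    return total / count

def estimate_label(rgb_image, threshold=100):
    """Return 1 (day) if the image is brighter than the threshold, else 0 (night).
    threshold=100 is a hypothetical default to be tuned on the training set."""
    return 1 if avg_brightness(rgb_image) > threshold else 0

bright = [[(200, 180, 150)] * 4] * 4   # synthetic "day" patch
dark = [[(20, 10, 5)] * 4] * 4         # synthetic "night" patch
print(estimate_label(bright), estimate_label(dark))  # 1 0
```

Accuracy is then just the fraction of test images whose estimated label matches the true one, which is how a classifier built on this single feature would be evaluated.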
Lab Three

---

For this lab we're going to be making and using a bunch of functions. Our goals are:

- Switch Case
- Looping
- Making our own functions
- Combining functions
- Structuring solutions
// Give me an example of you using switch case.
String house = "BlueLions";
switch(house){
    case "BlueLions":
        System.out.println("Dimitri");
        break;
    case "BlackEagles":
        System.out.println("Edelgard");
        break;
    case "GoldenDeer":
        System.out.println("Claude");
        break;
}

// Give me an example of you using a for loop
for (int x = 10; x > -1; x--) {
    System.out.println(x);
}

// Give me an example of you using a for each loop
int[] numbers = {1, 2, 3, 4, 5};
for (int number: numbers) {
    System.out.println(number);
}

// Give me an example of you using a while loop
int x = 0;
int stop = 11;
while (x < stop) {
    System.out.println("not 10");
    x++;
}
System.out.println("is 10");

// I want you to write a function that will take in a number and raise it to the power given.
// For example if given the numbers 2 and 3. The math that the function should do is 2^3 and should print out or return 8. Print the output.
int base = 2;
int power = 3;
int newbase = base;
for (int i = 1; i < power; i++){
    newbase = newbase * base;
}
System.out.println(newbase);

// I want you to write a function that will take in a list and see how many times a given number is in the list.
// For example if the array given is [2,3,5,2,3,6,7,8,2] and the number given is 2 the function should print out or return 3. Print the output.
int[] nums = {2,3,5,2,3,6,7,8,2};
int givenNumber = 2;
int counter = 0;
for (int number: nums) {
    if (number == givenNumber){
        counter++;
    }
}
System.out.println(counter);

// Give me a function that gives the answer to the pythagorean theorem.
// I'd like you to reuse the exponent function from above as well as the functions below to make your function.
// If you don't remember the pythagorean theorem the formula is (a^2 + b^2 = c^2). Given a and b as parameters i'd like you to return c.
// If this doesn't make sense look up `Pythagorean Theorem Formula` on google.
int addition(int a, int b) {
    int answer = a + b;
    return answer;
}

int division(int a, int b) {
    int answer = a / b;
    return answer;
}

double pythagorean(int a, int b) {
    int cSquared = addition(a * a, b * b);
    return Math.sqrt(cSquared);
}

System.out.println(pythagorean(3, 4));
_____no_output_____
MIT
JupyterNotebooks/Labs/Lab 3.ipynb
CometSmudge/CMPT-220L-903-21S
class test:
    def __init__(self, a):
        self.a = a
    def display(self):
        print(self.a)

obj = test(10)  # __init__ requires an argument
obj.display()

def f1():
    x = 100
    print(x)
    x += 1

f1()

area = { 'living' : [400, 450],
         'living' : [650, 800],   # duplicate key: the later value wins
         'kitchen' : [300, 250],
         'garage' : [250, 0]}
print(area['living'])

List_1 = [2,6,7,8]
List_2 = [2,6,7,8]
print(List_1[-2] + List_2[2])

d = {0: 'a', 1: 'b', 2: 'c'}
for x, y in d.items():
    print(x, y)

Numbers = [10,5,7,8,9,5]
print(max(Numbers) - min(Numbers))

fo = open("foo.txt", "r+")  # "read+" is not a valid mode
print("Name of the file: ", fo.name)

# Assuming file has following 5 lines
# This is 1st line
# This is 2nd line
# This is 3rd line
# This is 4th line
# This is 5th line
for index in range(5):
    line = fo.readline()
    print("Line No {} - {}".format(index, line))

# Close opened file
fo.close()

x = "abcdef"
for i in x:  # `while i in x` would raise a NameError; iterate with for
    print(i, end=" ")

def cube(x):
    return x * x * x

x = cube(3)
print(x)

print(((True) or (False) and (False) or (False)))

x1 = int('16')
x2 = 8 + 8
x3 = (4**2)
print(x1 is x2 is x3)

Word = 'warrior knights'
A = Word[9:14]
B = Word[-13:-16:-1]
B + A

def to_upper(k):
    return k.upper()

x = ['ab', 'cd']
print(list(map(to_upper, x)))

my_string = "hello world"
k = [(i.upper(), len(i)) for i in my_string]
print(k)

from csv import reader

def explore_data(dataset, start, end, rows_and_columns=False):
    """Explore the elements of a list.

    Print the elements of a list starting from the index 'start' (included)
    up to the index 'end' (excluded).

    Keyword arguments:
    dataset -- list of which we want to see the elements
    start -- index of the first element we want to see, this is included
    end -- index of the stopping element, this is excluded
    rows_and_columns -- this parameter is optional while calling the function.
        It takes binary values, either True or False. If True, print the
        dimensions of the list, else don't.
    """
    dataset_slice = dataset[start:end]
    for row in dataset_slice:
        print(row)
        print('\n')  # adds a new (empty) line between rows

    if rows_and_columns:
        print('Number of rows:', len(dataset))
        print('Number of columns:', len(dataset[0]))

def duplicate_and_unique_movies(dataset, index_):
    """Check the duplicate and unique entries.

    We have a nested list. This function checks whether each row in the list
    is unique or duplicated based on the element at index 'index_'. It prints
    the number of duplicate entries, along with some examples of duplicates.

    Keyword arguments:
    dataset -- two dimensional list which we want to explore
    index_ -- column index at which the element in each row would be checked for duplicacy
    """
    duplicate = []
    unique = []

    for movie in dataset:
        name = movie[index_]
        if name in unique:
            duplicate.append(name)
        else:
            unique.append(name)

    print('Number of duplicate Movies:', len(duplicate))
    print('\n')
    print('Examples of duplicate Movies:', duplicate[:15])

def movies_lang(dataset, index_, lang_):
    """Extract the movies of a particular language.

    Of all the movies available in all languages, this function extracts all
    the movies in a particular language. Once you have extracted the movies,
    call explore_data() to print the first few rows.

    Keyword arguments:
    dataset -- list containing the details of the movies
    index_ -- index which is to be compared for languages
    lang_ -- desired language for which we want to filter out the movies

    Returns:
    movies_ -- list with details of the movies in the selected language
    """
    movies_ = []

    for movie in dataset:  # iterate over the parameter, not the global `movies`
        lang = movie[index_]
        if lang == lang_:
            movies_.append(movie)

    print("Examples of Movies in English Language:")
    explore_data(movies_, 0, 3, True)
    return movies_

def rate_bucket(dataset, rate_low, rate_high):
    """Extract the movies within the specified ratings.

    This function extracts all the movies that have a rating between rate_low
    and rate_high. Once you have extracted the movies, call explore_data() to
    print the first few rows.

    Keyword arguments:
    dataset -- list containing the details of the movies
    rate_low -- lower range of rating
    rate_high -- higher range of rating

    Returns:
    rated_movies -- list of the details of the movies with required ratings
    """
    rated_movies = []

    for movie in dataset:
        vote_avg = float(movie[-4])
        if ((vote_avg >= rate_low) & (vote_avg <= rate_high)):
            rated_movies.append(movie)

    print("Examples of Movies in required rating bucket:")
    explore_data(rated_movies, 0, 3, True)
    return rated_movies

# Read the data file and store it as a list 'movies'
opened_file = open(path, encoding="utf8")
read_file = reader(opened_file)
movies = list(read_file)

# The first row is the header. Extract and store it in 'movies_header'.
movies_header = movies[0]
print("Movies Header:\n", movies_header)

# Subset the movies dataset such that the header is removed from the list and store it back in movies
movies = movies[1:]

# Delete wrong data
# Explore the row #4553. You will see that apart from the id, description, status and title,
# no other information is available. Hence drop this row.
print("Entry at index 4553:")
explore_data(movies, 4553, 4554)
del movies[4553]

# Using explore_data() with appropriate parameters, view the details of the first 5 movies.
print("First 5 Entries:")
explore_data(movies, 0, 5, True)

# Our dataset might have more than one entry for a movie.
# Call duplicate_and_unique_movies() with the index of the name to check the same.
duplicate_and_unique_movies(movies, 13)

# We saw that there are 3 movies for which there are multiple entries.
# Create a dictionary 'reviews_max' that will have the name of the movie as key,
# and the maximum number of reviews as value.
reviews_max = {}

for movie in movies:
    name = movie[13]
    n_reviews = float(movie[12])

    if name in reviews_max and reviews_max[name] < n_reviews:
        reviews_max[name] = n_reviews
    elif name not in reviews_max:
        reviews_max[name] = n_reviews

len(reviews_max)

# Create a list 'movies_clean', which will filter out the duplicate movies and contain
# the rows with the maximum number of reviews for duplicate movies, as stored in 'reviews_max'.
movies_clean = []
already_added = []

for movie in movies:
    name = movie[13]
    n_reviews = float(movie[12])

    if (reviews_max[name] == n_reviews) and (name not in already_added):
        movies_clean.append(movie)
        already_added.append(name)

len(movies_clean)

# Calling movies_lang(), extract all the english movies and store them in movies_en.
movies_en = movies_lang(movies_clean, 3, 'en')

# Call the rate_bucket function to see the movies with rating higher than 8.
high_rated_movies = rate_bucket(movies_en, 8, 10)
_____no_output_____
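The deduplication above follows a common two-pass pattern: first record the maximum review count per name, then keep exactly one row per name matching that maximum. A toy illustration with made-up rows (the names and counts are invented for the sketch):

```python
# Toy version of the "keep the row with the most reviews" pattern.
rows = [
    ("Alien", 100), ("Alien", 250),
    ("Heat", 80),
    ("Up", 40), ("Up", 40),
]

# Pass 1: maximum review count per name.
reviews_max = {}
for name, n_reviews in rows:
    if name not in reviews_max or reviews_max[name] < n_reviews:
        reviews_max[name] = n_reviews

# Pass 2: keep one row per name, the one matching the maximum.
clean = []
already_added = []
for name, n_reviews in rows:
    if reviews_max[name] == n_reviews and name not in already_added:
        clean.append((name, n_reviews))
        already_added.append(name)

print(clean)  # [('Alien', 250), ('Heat', 80), ('Up', 40)]
```

The `already_added` list matters for the `("Up", 40)` case: two rows tie at the maximum, and without it both would be kept.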
MIT
practice_project.ipynb
Abhishekauti21/dsmp-pre-work
Detecting COVID-19 with Chest X Ray using PyTorch

Image classification of Chest X Rays in one of three classes: Normal, Viral Pneumonia, COVID-19

Dataset from [COVID-19 Radiography Dataset](https://www.kaggle.com/tawsifurrahman/covid19-radiography-database) on Kaggle

Importing Libraries
from google.colab import drive
drive.mount('/gdrive')

%matplotlib inline

import os
import shutil
import copy
import random
import torch
import torch.nn as nn
import torchvision
import torch.optim as optim
from torch.optim import lr_scheduler
import numpy as np
import seaborn as sns
import time
from sklearn.metrics import confusion_matrix
from PIL import Image
import matplotlib.pyplot as plt

torch.manual_seed(0)

print('Using PyTorch version', torch.__version__)
Using PyTorch version 1.7.0+cu101
Apache-2.0
Model/Resnet_18.ipynb
reyvnth/COVIDX
Preparing Training and Test Sets
class_names = ['Non-Covid', 'Covid']
root_dir = '/gdrive/My Drive/Research_Documents_completed/Data/Data/'
source_dirs = ['non', 'covid']
_____no_output_____
Apache-2.0
Model/Resnet_18.ipynb
reyvnth/COVIDX
Creating Custom Dataset
class ChestXRayDataset(torch.utils.data.Dataset):
    def __init__(self, image_dirs, transform):
        def get_images(class_name):
            # keep only PNG/JPG files found in the class directory
            images = [x for x in os.listdir(image_dirs[class_name])
                      if x.lower().endswith('png') or x.lower().endswith('jpg')]
            print(f'Found {len(images)} {class_name} examples')
            return images

        self.images = {}
        self.class_names = ['Non-Covid', 'Covid']

        for class_name in self.class_names:
            self.images[class_name] = get_images(class_name)

        self.image_dirs = image_dirs
        self.transform = transform

    def __len__(self):
        return sum([len(self.images[class_name]) for class_name in self.class_names])

    def __getitem__(self, index):
        # sample a class at random, then pick the image at (index mod class size);
        # this balances the classes at the cost of deterministic indexing
        class_name = random.choice(self.class_names)
        index = index % len(self.images[class_name])
        image_name = self.images[class_name][index]
        image_path = os.path.join(self.image_dirs[class_name], image_name)
        image = Image.open(image_path).convert('RGB')
        return self.transform(image), self.class_names.index(class_name)
_____no_output_____
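The class above only has to implement `__len__` and `__getitem__` for a `DataLoader` to drive it. A torch-free stand-in (no files, made-up items) shows the same random-class-plus-modulo-indexing behavior:

```python
import random

# Minimal stand-in for the dataset above: dict of class name -> list of items.
class ToyDataset:
    def __init__(self, images):
        self.class_names = list(images)   # e.g. ['Non-Covid', 'Covid']
        self.images = images

    def __len__(self):
        # total number of items across all classes
        return sum(len(v) for v in self.images.values())

    def __getitem__(self, index):
        # pick a class at random, wrap the index into that class's list
        class_name = random.choice(self.class_names)
        index = index % len(self.images[class_name])
        return self.images[class_name][index], self.class_names.index(class_name)

ds = ToyDataset({'Non-Covid': ['n0', 'n1', 'n2'], 'Covid': ['c0']})
print(len(ds))        # 4
sample, label = ds[5]  # valid even though 5 >= len(ds), thanks to the modulo
print(sample, label)
```

Note that because the class is drawn at random, the same index can return different samples on different calls; that is a deliberate balancing trick in this notebook, not standard `Dataset` behavior.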
Apache-2.0
Model/Resnet_18.ipynb
reyvnth/COVIDX
Image Transformations
train_transform = torchvision.transforms.Compose([
    torchvision.transforms.Resize(size=(224, 224)),
    torchvision.transforms.RandomHorizontalFlip(),
    torchvision.transforms.ToTensor(),
    torchvision.transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                     std=[0.229, 0.224, 0.225])
])

test_transform = torchvision.transforms.Compose([
    torchvision.transforms.Resize(size=(224, 224)),
    torchvision.transforms.ToTensor(),
    torchvision.transforms.Normalize([0.485, 0.456, 0.406],
                                     [0.229, 0.224, 0.225])
])
_____no_output_____
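`Normalize` applies `out = (in - mean) / std` per channel, with the ImageNet statistics used above. A plain-Python check on one hypothetical RGB pixel (already scaled to [0, 1] by `ToTensor`), including the inverse transform used later when plotting:

```python
# Per-channel normalization and its inverse, with the ImageNet stats.
mean = [0.485, 0.456, 0.406]
std = [0.229, 0.224, 0.225]

pixel = [0.5, 0.5, 0.5]  # one made-up RGB pixel in [0, 1]
normalized = [(p - m) / s for p, m, s in zip(pixel, mean, std)]

# Undo the normalization, as done before plotting images further below.
restored = [n * s + m for n, m, s in zip(normalized, mean, std)]

print(normalized)
print(restored)  # back to [0.5, 0.5, 0.5] up to float rounding
```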
Apache-2.0
Model/Resnet_18.ipynb
reyvnth/COVIDX
Prepare DataLoader
train_dirs = {
    'Non-Covid': '/gdrive/My Drive/Research_Documents_completed/Data/Data/non/',
    'Covid': '/gdrive/My Drive/Research_Documents_completed/Data/Data/covid/'
}

train_dataset = ChestXRayDataset(train_dirs, train_transform)

test_dirs = {
    'Non-Covid': '/gdrive/My Drive/Research_Documents_completed/Data/Data/test/non/',
    'Covid': '/gdrive/My Drive/Research_Documents_completed/Data/Data/test/covid/'
}

test_dataset = ChestXRayDataset(test_dirs, test_transform)

batch_size = 25

dl_train = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size, shuffle=True)
dl_test = torch.utils.data.DataLoader(test_dataset, batch_size=batch_size, shuffle=True)

print(dl_train)
print('Number of training batches', len(dl_train))
print('Number of test batches', len(dl_test))
<torch.utils.data.dataloader.DataLoader object at 0x7f3c11961048> Number of training batches 139 Number of test batches 128
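With the default `drop_last=False`, a `DataLoader` yields `ceil(N / batch_size)` batches. The sample counts below are assumptions chosen to be consistent with the printed batch counts (139 and 128), not values stated in the notebook:

```python
import math

def n_batches(n_samples, batch_size):
    # number of batches a DataLoader yields with drop_last=False
    return math.ceil(n_samples / batch_size)

print(n_batches(3475, 25))  # 139, matching the training batch count above
print(n_batches(3200, 25))  # 128, matching the test batch count above
print(n_batches(101, 25))   # 5: the last, partial batch still counts
```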
Apache-2.0
Model/Resnet_18.ipynb
reyvnth/COVIDX
Data Visualization
class_names = train_dataset.class_names

def show_images(images, labels, preds):
    plt.figure(figsize=(30, 20))
    for i, image in enumerate(images):
        plt.subplot(1, 25, i + 1, xticks=[], yticks=[])
        image = image.numpy().transpose((1, 2, 0))
        # undo the normalization so the image displays correctly
        mean = np.array([0.485, 0.456, 0.406])
        std = np.array([0.229, 0.224, 0.225])
        image = image * std + mean
        image = np.clip(image, 0., 1.)
        plt.imshow(image)

        col = 'green'
        if preds[i] != labels[i]:
            col = 'red'

        plt.xlabel(f'{class_names[int(labels[i].numpy())]}')
        plt.ylabel(f'{class_names[int(preds[i].numpy())]}', color=col)
    plt.tight_layout()
    plt.show()

images, labels = next(iter(dl_train))
show_images(images, labels, labels)

images, labels = next(iter(dl_test))
show_images(images, labels, labels)
_____no_output_____
Apache-2.0
Model/Resnet_18.ipynb
reyvnth/COVIDX
Creating the Model
resnet18 = torchvision.models.resnet18(pretrained=True)
print(resnet18)

# Replace the 1000-class ImageNet head with a 2-class head.
resnet18.fc = torch.nn.Linear(in_features=512, out_features=2)
loss_fn = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(resnet18.parameters(), lr=3e-5)
print(resnet18)

def show_preds():
    resnet18.eval()
    images, labels = next(iter(dl_test))
    outputs = resnet18(images)
    _, preds = torch.max(outputs, 1)
    show_images(images, labels, preds)

show_preds()
_____no_output_____
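The swapped-in head is a `Linear(512 -> 2)`, so its parameter count follows directly from `in_features * out_features + out_features` (weights plus biases); a quick arithmetic check:

```python
# Parameter count of the new classification head, Linear(512 -> 2).
in_features, out_features = 512, 2
n_weights = in_features * out_features  # 1024
n_biases = out_features                 # 2
n_params = n_weights + n_biases
print(n_params)  # 1026
```

These 1026 parameters are randomly initialized; everything else in the network starts from the pretrained ImageNet weights.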
Apache-2.0
Model/Resnet_18.ipynb
reyvnth/COVIDX
Training the Model
def train(epochs):
    best_model_wts = copy.deepcopy(resnet18.state_dict())
    b_acc = 0.0
    t_loss = []
    t_acc = []
    avg_t_loss = []
    avg_t_acc = []
    v_loss = []
    v_acc = []
    avg_v_loss = []
    avg_v_acc = []
    ep = []

    print('Starting training..')
    for e in range(0, epochs):
        ep.append(e + 1)
        print('=' * 20)
        print(f'Starting epoch {e + 1}/{epochs}')
        print('=' * 20)

        train_loss = 0.
        total_train = 0
        correct_train = 0

        resnet18.train()  # set model to training phase

        for train_step, (images, labels) in enumerate(dl_train):
            optimizer.zero_grad()
            outputs = resnet18(images)
            loss = loss_fn(outputs, labels)
            loss.backward()
            optimizer.step()
            train_loss += loss.item()

            _, predicted = torch.max(outputs, 1)
            total_train += labels.nelement()
            correct_train += sum((predicted == labels).numpy())
            train_accuracy = correct_train / total_train

            # Average the accumulated loss in a separate variable; dividing
            # train_loss in place every step would shrink the running sum.
            avg_loss = train_loss / (train_step + 1)
            t_loss.append(avg_loss)
            t_acc.append(train_accuracy)

            if train_step % 20 == 0:
                print('Evaluating at step', train_step)
                print(f'Training Loss: {avg_loss:.4f}, Training Accuracy: {train_accuracy:.4f}')

                accuracy = 0.
                val_loss = 0.

                resnet18.eval()  # set model to eval phase
                for val_step, (images, labels) in enumerate(dl_test):
                    outputs = resnet18(images)
                    loss = loss_fn(outputs, labels)
                    val_loss += loss.item()

                    _, preds = torch.max(outputs, 1)
                    accuracy += sum((preds == labels).numpy())

                val_loss /= (val_step + 1)
                accuracy = accuracy / len(test_dataset)
                print(f'Validation Loss: {val_loss:.4f}, Validation Accuracy: {accuracy:.4f}')
                v_loss.append(val_loss)
                v_acc.append(accuracy)

                show_preds()
                resnet18.train()

                if accuracy > b_acc:
                    b_acc = accuracy
                    best_model_wts = copy.deepcopy(resnet18.state_dict())

        # Record per-epoch averages unconditionally, so the lists stay the
        # same length as `ep` and the plots below line up.
        avg_t_loss.append(sum(t_loss) / len(t_loss))
        avg_v_loss.append(sum(v_loss) / len(v_loss))
        avg_t_acc.append(sum(t_acc) / len(t_acc))
        avg_v_acc.append(sum(v_acc) / len(v_acc))

    print('Best validation Accuracy: {:4f}'.format(b_acc))
    print('Training complete..')

    plt.plot(ep, avg_t_loss, 'g', label='Training loss')
    plt.plot(ep, avg_v_loss, 'b', label='Validation loss')
    plt.title('Training and validation loss for each epoch')
    plt.xlabel('Epochs')
    plt.ylabel('Loss')
    plt.legend()
    plt.savefig('/gdrive/My Drive/Research_Documents_completed/Resnet18_completed/resnet18_loss.png')
    plt.show()

    plt.plot(ep, avg_t_acc, 'g', label='Training accuracy')
    plt.plot(ep, avg_v_acc, 'b', label='Validation accuracy')
    plt.title('Training and validation accuracy for each epoch')
    plt.xlabel('Epochs')
    plt.ylabel('Accuracy')
    plt.legend()
    plt.savefig('/gdrive/My Drive/Research_Documents_completed/Resnet18_completed/resnet18_accuracy.png')
    plt.show()

    # Restore the best-scoring weights before saving, so the checkpoint
    # matches the reported best validation accuracy.
    resnet18.load_state_dict(best_model_wts)
    torch.save(resnet18.state_dict(), '/gdrive/My Drive/Research_Documents_completed/Resnet18_completed/resnet18.pt')

%%time
train(epochs=5)
Starting training.. ==================== Starting epoch 1/5 ==================== Evaluating at step 0 Training Loss: 0.8522, Training Accuracy: 0.4800
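The running training accuracy in the loop above is simply correct predictions over total samples, accumulated across batches. A torch-free sketch with two invented batches of (labels, predictions):

```python
# Each tuple is (labels, predictions) for one batch of 4 samples.
batches = [
    ([0, 1, 1, 0], [0, 1, 0, 0]),  # 3 of 4 correct
    ([1, 1, 0, 0], [1, 1, 0, 1]),  # 3 of 4 correct
]

total, correct = 0, 0
for labels, preds in batches:
    total += len(labels)
    correct += sum(p == l for p, l in zip(preds, labels))
    print('running accuracy:', correct / total)  # 0.75 after each batch here
```

Accumulating `correct` and `total` separately, rather than averaging per-batch accuracies, keeps the result exact even when the last batch is smaller than the rest.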
Apache-2.0
Model/Resnet_18.ipynb
reyvnth/COVIDX
Final Results VALIDATION LOSS AND TRAINING LOSS VS EPOCHVALIDATION ACCURACY AND TRAINING ACCURACY VS EPOCHBEST ACCURACY ERROR..
show_preds()
_____no_output_____
Apache-2.0
Model/Resnet_18.ipynb
reyvnth/COVIDX
Plotting Target Pixel Files with Lightkurve

Learning Goals

By the end of this tutorial, you will:
- Learn how to download and plot target pixel files from the data archive using [Lightkurve](https://docs.lightkurve.org).
- Be able to plot the target pixel file background.
- Be able to extract and plot flux from a target pixel file.

Introduction

The [*Kepler*](https://www.nasa.gov/mission_pages/kepler/main/index.html), [*K2*](https://www.nasa.gov/mission_pages/kepler/main/index.html), and [*TESS*](https://tess.mit.edu/) telescopes observe stars for long periods of time, from just under a month to four years. By doing so they observe how the brightnesses of stars change over time.

Pixels around targeted stars are cut out and stored as *target pixel files* at each observing cadence. In this tutorial, we will learn how to use Lightkurve to download and understand the different photometric data stored in a target pixel file, and how to extract flux using basic aperture photometry.

It is useful to read the accompanying tutorial discussing how to use target pixel file products with Lightkurve before starting this tutorial. It is recommended that you also read the tutorial on using *Kepler* light curve products with Lightkurve, which will introduce you to some specifics on how *Kepler*, *K2*, and *TESS* make observations, and how these are displayed as light curves. It also introduces some important terms and concepts that are referred to in this tutorial.

*Kepler* observed a single field in the sky, although not all stars in this field were recorded. Instead, pixels were selected around certain targeted stars. These cutout images are called target pixel files, or TPFs. By combining the amount of flux in the pixels where the star appears, you can make a measurement of the amount of light from a star in that observation.
The pixels chosen to include in this measurement are referred to as an *aperture*.

TPFs are typically the first port of call when studying a star with *Kepler*, *K2*, or *TESS*. They allow us to see where our data is coming from, and identify potential sources of noise or systematic trends. In this tutorial, we will use the *Kepler* mission as the main example, but these tools equally apply to *TESS* and *K2* as well.

Imports

This tutorial requires:
- **[Lightkurve](https://docs.lightkurve.org)** to work with TPF files.
- [**Matplotlib**](https://matplotlib.org/) for plotting.
import lightkurve as lk
import matplotlib.pyplot as plt

%matplotlib inline
_____no_output_____
MIT
docs/source/tutorials/1-getting-started/plotting-target-pixel-files.ipynb
alex-w/lightkurve
1. Downloading a TPF

A TPF contains the original imaging data from which a light curve is derived. Besides the brightness data measured by the charge-coupled device (CCD) camera, a TPF also includes post-processing information such as an estimate of the astronomical background, and a recommended pixel aperture for extracting a light curve.

First, we download a target pixel file. We will use one quarter's worth of *Kepler* data for the star named [Kepler-8](http://www.openexoplanetcatalogue.com/planet/Kepler-8%20b/), a star somewhat larger than the Sun, and the host of a [hot Jupiter planet](https://en.wikipedia.org/wiki/Hot_Jupiter).
search_result = lk.search_targetpixelfile("Kepler-8", author="Kepler", quarter=4, cadence="long")
search_result

tpf = search_result.download()
_____no_output_____
MIT
docs/source/tutorials/1-getting-started/plotting-target-pixel-files.ipynb
alex-w/lightkurve
This TPF contains data for every cadence in the quarter we downloaded. Let's focus on the first cadence for now, which we can select using zero-based indexing as follows:
first_cadence = tpf[0]
first_cadence
_____no_output_____
MIT
docs/source/tutorials/1-getting-started/plotting-target-pixel-files.ipynb
alex-w/lightkurve
2. Flux and Background

At each cadence the TPF has a number of photometry data properties. These are:

- `flux_bkg`: the astronomical background of the image.
- `flux_bkg_err`: the statistical uncertainty on the background flux.
- `flux`: the stellar flux after the background is removed.
- `flux_err`: the statistical uncertainty on the stellar flux after background removal.

These properties can be accessed via a TPF object as follows:
first_cadence.flux.value
_____no_output_____
MIT
docs/source/tutorials/1-getting-started/plotting-target-pixel-files.ipynb
alex-w/lightkurve
And you can plot the data as follows:
first_cadence.plot(column='flux');
_____no_output_____
MIT
docs/source/tutorials/1-getting-started/plotting-target-pixel-files.ipynb
alex-w/lightkurve
Alternatively, if you are working directly with a FITS file, you can access the data in extension 1 (for example, `first_cadence.hdu[1].data['FLUX']`). Note that you can find all of the details on the structure and contents of TPF files in Section 2.3.2 of the [*Kepler* Archive Manual](http://archive.stsci.edu/files/live/sites/mast/files/home/missions-and-data/kepler/_documents/archive_manual.pdf). When plotting data using the `plot()` function, what you are seeing in the TPF is the flux *after* the background has been removed. This background flux typically consists of [zodiacal light](https://en.wikipedia.org/wiki/Zodiacal_light) or earthshine (especially in *TESS* observations). The background is typically smooth and changes on scales much larger than a single TPF. In *Kepler*, the background is estimated for the CCD as a whole, before being extracted from each TPF in that CCD. You can learn more about background removal in Section 4.2 of the [*Kepler* Data Processing Handbook](http://archive.stsci.edu/files/live/sites/mast/files/home/missions-and-data/kepler/_documents/KSCI-19081-002-KDPH.pdf). Now, let's compare the background to the background-subtracted flux to get a sense of scale. We can do this using the `plot()` function's `column` keyword. By default the function plots the flux, but we can change this to plot the background, as well as other data such as the error on each pixel's flux.
fig, axes = plt.subplots(2, 2, figsize=(16, 16))
first_cadence.plot(ax=axes[0, 0], column='FLUX')
first_cadence.plot(ax=axes[0, 1], column='FLUX_BKG')
first_cadence.plot(ax=axes[1, 0], column='FLUX_ERR')
first_cadence.plot(ax=axes[1, 1], column='FLUX_BKG_ERR');
_____no_output_____
MIT
docs/source/tutorials/1-getting-started/plotting-target-pixel-files.ipynb
alex-w/lightkurve
From looking at the color scale on both plots, you may see that the background flux is very low compared to the total flux emitted by a star. This is expected — stars are bright! But these small background corrections become important when looking at the very small scale changes caused by planets or stellar oscillations. Understanding the background is an important part of astronomy with *Kepler*, *K2*, and *TESS*. If the background is particularly bright and you want to see what the TPF looks like with it included, passing the `bkg=True` argument to the `plot()` method will show the TPF with the flux added on top of the background, representing the total flux recorded by the spacecraft.
first_cadence.plot(bkg=True);
_____no_output_____
MIT
docs/source/tutorials/1-getting-started/plotting-target-pixel-files.ipynb
alex-w/lightkurve
In this case, the background is low and the star is bright, so it doesn't appear to make much of a difference.

3. Apertures

As part of the data processing done by the *Kepler* pipeline, each TPF includes a recommended *optimal aperture mask*. This aperture mask is optimized to ensure that the stellar signal has a high signal-to-noise ratio, with minimal contamination from the background. The optimal aperture is stored in the TPF as the `pipeline_mask` property. We can have a look at it by calling it here:
first_cadence.pipeline_mask
_____no_output_____
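An aperture mask is just a Boolean array with the same shape as the TPF cutout. A minimal sketch of building a custom one with NumPy, on a hypothetical 5x5 cutout (the shape and box are made up for illustration):

```python
import numpy as np

# Hypothetical 5x5 cutout: select a 3x3 box around the central pixel.
mask = np.zeros((5, 5), dtype=bool)
mask[1:4, 1:4] = True

print(mask)
print(mask.sum())  # 9 pixels included in the aperture
```

Any array like this can be passed to `plot(aperture_mask=...)` in place of `pipeline_mask`, provided its shape matches the TPF.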
MIT
docs/source/tutorials/1-getting-started/plotting-target-pixel-files.ipynb
alex-w/lightkurve
As you can see, it is a Boolean array detailing which pixels are included. We can plot this aperture over the top of our TPF using the `plot()` function, and passing in the mask to the `aperture_mask` keyword. This will highlight the pixels included in the aperture mask using red hatched lines.
first_cadence.plot(aperture_mask=first_cadence.pipeline_mask);
_____no_output_____
MIT
docs/source/tutorials/1-getting-started/plotting-target-pixel-files.ipynb
alex-w/lightkurve
You don't necessarily have to pass in the `pipeline_mask` to the `plot()` function; it can be any mask you create yourself, provided it is the right shape. An accompanying tutorial explains how to create such custom apertures, and goes into aperture photometry in more detail. For specifics on the selection of *Kepler*'s optimal apertures, read the [*Kepler* Data Processing Handbook](https://archive.stsci.edu/files/live/sites/mast/files/home/missions-and-data/kepler/_documents/KSCI-19081-002-KDPH.pdf), Section 7, *Finding Optimal Apertures in Kepler Data*.

4. Simple Aperture Photometry

Finally, let's learn how to perform simple aperture photometry (SAP) using the provided optimal aperture in `pipeline_mask` and the TPF. Using the full TPF for all cadences in the quarter, we can perform aperture photometry using the `to_lightcurve()` method as follows:
lc = tpf.to_lightcurve()
_____no_output_____
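Conceptually, SAP reduces to summing the flux of the masked pixels at each cadence. A sketch with synthetic data (a made-up 3-cadence, 2x2-pixel cube), not Lightkurve's actual implementation:

```python
import numpy as np

# Synthetic flux cube: 3 cadences of a 2x2-pixel image.
flux = np.array([
    [[1.0, 0.2], [0.3, 0.1]],
    [[1.1, 0.2], [0.3, 0.1]],
    [[0.9, 0.2], [0.3, 0.1]],
])

# Aperture: include only the left column of pixels.
mask = np.array([[True, False], [True, False]])

# Sum the masked pixels at each cadence -> one flux value per cadence.
lightcurve = flux[:, mask].sum(axis=1)
print(lightcurve)  # [1.3 1.4 1.2]
```

The varying bright pixel at [0, 0] is what drives the light curve; the masked-out pixels never contribute.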
MIT
docs/source/tutorials/1-getting-started/plotting-target-pixel-files.ipynb
alex-w/lightkurve
This method returns a `LightCurve` object which details the flux and flux centroid position at each cadence:
lc
_____no_output_____
MIT
docs/source/tutorials/1-getting-started/plotting-target-pixel-files.ipynb
alex-w/lightkurve
Note that this [`KeplerLightCurve`](https://docs.lightkurve.org/api/lightkurve.lightcurve.KeplerLightCurve.html) object has fewer data columns than in light curves downloaded directly from MAST. This is because we are extracting our light curve directly from the TPF using minimal processing, whereas light curves created using the official pipeline include more processing and more columns.We can visualize the light curve as follows:
lc.plot();
_____no_output_____
MIT
docs/source/tutorials/1-getting-started/plotting-target-pixel-files.ipynb
alex-w/lightkurve
This light curve is similar to the SAP light curve we previously encountered in the light curve tutorial.

Note: The background flux can be plotted in a similar way, using the [`get_bkg_lightcurve()`](https://docs.lightkurve.org/api/lightkurve.targetpixelfile.KeplerTargetPixelFile.html#lightkurve.targetpixelfile.KeplerTargetPixelFile.get_bkg_lightcurve) method. This does not require an aperture, but instead sums the flux in the TPF's `FLUX_BKG` column at each timestamp.
bkg = tpf.get_bkg_lightcurve()
bkg.plot();
_____no_output_____
MIT
docs/source/tutorials/1-getting-started/plotting-target-pixel-files.ipynb
alex-w/lightkurve
Inspecting the background in this way is useful to identify signals which appear to be present in the background rather than in the astronomical object under study.

---

Exercises

Some stars, such as the planet-hosting star Kepler-10, have been observed both with *Kepler* and *TESS*. In this exercise, download and plot both the *TESS* and *Kepler* TPFs, along with the optimal apertures. You can do this by either selecting the TPFs from the list returned by [`search_targetpixelfile()`](https://docs.lightkurve.org/api/lightkurve.search.search_targetpixelfile.html), or by using the `mission` keyword argument when searching.

Both *Kepler* and *TESS* produce target pixel file data products, but these can look different across the two missions. *TESS* is focused on brighter stars and has larger pixels, so a star that might occupy many pixels in *Kepler* may only occupy a few in *TESS*.

How do light curves extracted from both of them compare?
#datalist = lk.search_targetpixelfile(...)

#soln:
datalist = lk.search_targetpixelfile("Kepler-10")
datalist

kep = datalist[6].download()
tes = datalist[15].download()

fig, axes = plt.subplots(1, 2, figsize=(14, 6))
kep.plot(ax=axes[0], aperture_mask=kep.pipeline_mask, scale='log')
tes.plot(ax=axes[1], aperture_mask=tes.pipeline_mask)
fig.tight_layout();

lc_kep = kep.to_lightcurve()
lc_tes = tes.to_lightcurve()

fig, axes = plt.subplots(1, 2, figsize=(14, 6), sharey=True)
lc_kep.flatten().plot(ax=axes[0], c='k', alpha=.8)
lc_tes.flatten().plot(ax=axes[1], c='k', alpha=.8);
_____no_output_____
MIT
docs/source/tutorials/1-getting-started/plotting-target-pixel-files.ipynb
alex-w/lightkurve
Copyright 2018 The TensorFlow Authors. [Licensed under the Apache License, Version 2.0](#scrollTo=ByZjmtFgB_Y5).
// #@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" }
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// https://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
_____no_output_____
Apache-2.0
docs/site/tutorials/python_interoperability.ipynb
texasmichelle/swift
Python interoperability

Swift For TensorFlow supports Python interoperability.

You can import Python modules from Swift, call Python functions, and convert values between Swift and Python.
import PythonKit
print(Python.version)
_____no_output_____
Apache-2.0
docs/site/tutorials/python_interoperability.ipynb
texasmichelle/swift
Setting the Python version

By default, when you `import Python`, Swift searches system library paths for the newest version of Python installed. To use a specific Python installation, set the `PYTHON_LIBRARY` environment variable to the `libpython` shared library provided by the installation. For example:

`export PYTHON_LIBRARY="~/anaconda3/lib/libpython3.7m.so"`

The exact filename will differ across Python environments and platforms.

Alternatively, you can set the `PYTHON_VERSION` environment variable, which instructs Swift to search system library paths for a matching Python version. Note that `PYTHON_LIBRARY` takes precedence over `PYTHON_VERSION`.

In code, you can also call the `PythonLibrary.useVersion` function, which is equivalent to setting `PYTHON_VERSION`.
// PythonLibrary.useVersion(2)
// PythonLibrary.useVersion(3, 7)
_____no_output_____
Apache-2.0
docs/site/tutorials/python_interoperability.ipynb
texasmichelle/swift
__Note: you should run `PythonLibrary.useVersion` right after `import Python`, before calling any Python code. It cannot be used to dynamically switch Python versions.__

Set `PYTHON_LOADER_LOGGING=1` to see [debug output for Python library loading](https://github.com/apple/swift/pull/20674#discussion_r235207008).

Basics

In Swift, `PythonObject` represents an object from Python. All Python APIs use and return `PythonObject` instances.

Basic types in Swift (like numbers and arrays) are convertible to `PythonObject`. In some cases (for literals and functions taking `PythonConvertible` arguments), conversion happens implicitly. To explicitly cast a Swift value to `PythonObject`, use the `PythonObject` initializer.

`PythonObject` defines many standard operations, including numeric operations, indexing, and iteration.
// Convert standard Swift types to Python.
let pythonInt: PythonObject = 1
let pythonFloat: PythonObject = 3.0
let pythonString: PythonObject = "Hello Python!"
let pythonRange: PythonObject = PythonObject(5..<10)
let pythonArray: PythonObject = [1, 2, 3, 4]
let pythonDict: PythonObject = ["foo": [0], "bar": [1, 2, 3]]

// Perform standard operations on Python objects.
print(pythonInt + pythonFloat)
print(pythonString[0..<6])
print(pythonRange)
print(pythonArray[2])
print(pythonDict["bar"])

// Convert Python objects back to Swift.
let int = Int(pythonInt)!
let float = Float(pythonFloat)!
let string = String(pythonString)!
let range = Range<Int>(pythonRange)!
let array: [Int] = Array(pythonArray)!
let dict: [String: [Int]] = Dictionary(pythonDict)!

// Perform standard operations.
// Outputs are the same as Python!
print(Float(int) + float)
print(string.prefix(6))
print(range)
print(array[2])
print(dict["bar"]!)
_____no_output_____
Apache-2.0
docs/site/tutorials/python_interoperability.ipynb
texasmichelle/swift
`PythonObject` defines conformances to many standard Swift protocols:

* `Equatable`
* `Comparable`
* `Hashable`
* `SignedNumeric`
* `Strideable`
* `MutableCollection`
* All of the `ExpressibleBy_Literal` protocols

Note that these conformances are not type-safe: crashes will occur if you attempt to use protocol functionality from an incompatible `PythonObject` instance.
let one: PythonObject = 1
print(one == one)
print(one < one)
print(one + one)

let array: PythonObject = [1, 2, 3]
for (i, x) in array.enumerated() {
    print(i, x)
}
_____no_output_____
Apache-2.0
docs/site/tutorials/python_interoperability.ipynb
texasmichelle/swift
To convert tuples from Python to Swift, you must statically know the arity of the tuple.

Call one of the following instance methods:
- `PythonObject.tuple2`
- `PythonObject.tuple3`
- `PythonObject.tuple4`
let pythonTuple = Python.tuple([1, 2, 3])
print(pythonTuple, Python.len(pythonTuple))

// Convert to Swift.
let tuple = pythonTuple.tuple3
print(tuple)
_____no_output_____
Apache-2.0
docs/site/tutorials/python_interoperability.ipynb
texasmichelle/swift
Python builtins

Access Python builtins via the global `Python` interface.
// `Python.builtins` is a dictionary of all Python builtins.
_ = Python.builtins

// Try some Python builtins.
print(Python.type(1))
print(Python.len([1, 2, 3]))
print(Python.sum([1, 2, 3]))
_____no_output_____
Apache-2.0
docs/site/tutorials/python_interoperability.ipynb
texasmichelle/swift
Importing Python modules

Use `Python.import` to import a Python module. It works like the `import` keyword in Python.
let np = Python.import("numpy")
print(np)

// `np.ones` returns an array of ones, so name the variable accordingly.
let ones = np.ones([2, 3])
print(ones)
_____no_output_____
Apache-2.0
docs/site/tutorials/python_interoperability.ipynb
texasmichelle/swift
Use the throwing function `Python.attemptImport` to perform safe importing.
let maybeModule = try? Python.attemptImport("nonexistent_module")
print(maybeModule)
_____no_output_____
Apache-2.0
docs/site/tutorials/python_interoperability.ipynb
texasmichelle/swift
Conversion with `numpy.ndarray`

The following Swift types can be converted to and from `numpy.ndarray`:
- `Array`
- `ShapedArray`
- `Tensor`

Conversion succeeds only if the `dtype` of the `numpy.ndarray` is compatible with the `Element` or `Scalar` generic parameter type.

For `Array`, conversion from `numpy` succeeds only if the `numpy.ndarray` is 1-D.
import TensorFlow

let numpyArray = np.ones([4], dtype: np.float32)
print("Swift type:", type(of: numpyArray))
print("Python type:", Python.type(numpyArray))
print(numpyArray.shape)

// Examples of converting `numpy.ndarray` to Swift types.
let array: [Float] = Array(numpy: numpyArray)!
let shapedArray = ShapedArray<Float>(numpy: numpyArray)!
let tensor = Tensor<Float>(numpy: numpyArray)!

// Examples of converting Swift types to `numpy.ndarray`.
print(array.makeNumpyArray())
print(shapedArray.makeNumpyArray())
print(tensor.makeNumpyArray())

// Examples with different dtypes.
let doubleArray: [Double] = Array(numpy: np.ones([3], dtype: np.float))!
let intTensor = Tensor<Int32>(numpy: np.ones([2, 3], dtype: np.int32))!
_____no_output_____
Apache-2.0
docs/site/tutorials/python_interoperability.ipynb
texasmichelle/swift
Displaying images

You can display images in-line using `matplotlib`, just like in Python notebooks.
// This cell is here to display plots inside a Jupyter Notebook.
// Do not copy it into another environment.
%include "EnableIPythonDisplay.swift"
print(IPythonDisplay.shell.enable_matplotlib("inline"))

let np = Python.import("numpy")
let plt = Python.import("matplotlib.pyplot")

let time = np.arange(0, 10, 0.01)
let amplitude = np.exp(-0.1 * time)
let position = amplitude * np.sin(3 * time)

plt.figure(figsize: [15, 10])
plt.plot(time, position)
plt.plot(time, amplitude)
plt.plot(time, -amplitude)
plt.xlabel("Time (s)")
plt.ylabel("Position (m)")
plt.title("Oscillations")
plt.show()
_____no_output_____
Apache-2.0
docs/site/tutorials/python_interoperability.ipynb
texasmichelle/swift
Example 1: Sandstone Model
# Importing import theano.tensor as T import theano import sys, os sys.path.append("../GeMpy") sys.path.append("../") # Importing GeMpy modules import gempy as GeMpy # Reloading (only for development purposes) import importlib importlib.reload(GeMpy) # Usuful packages import numpy as np import pandas as pn import matplotlib.pyplot as plt # This was to choose the gpu os.environ['CUDA_LAUNCH_BLOCKING'] = '1' # Default options of printin np.set_printoptions(precision = 6, linewidth= 130, suppress = True) #%matplotlib inline %matplotlib inline # Importing the data from csv files and settign extent and resolution geo_data = GeMpy.create_data([696000,747000,6863000,6950000,-20000, 200],[ 50, 50, 50], path_f = os.pardir+"/input_data/a_Foliations.csv", path_i = os.pardir+"/input_data/a_Points.csv") # Assigning series to formations as well as their order (timewise) GeMpy.set_data_series(geo_data, {"EarlyGranite_Series":geo_data.formations[-1], "BIF_Series":(geo_data.formations[0], geo_data.formations[1]), "SimpleMafic_Series":geo_data.formations[2]}, order_series = ["EarlyGranite_Series", "BIF_Series", "SimpleMafic_Series"], verbose=0) GeMpy.data_to_pickle(geo_data, 'sandstone') inter = GeMpy.InterpolatorInput(geo_data) inter.interpolator.tg.n_formation.get_value() import numpy as np np.zeros((100,0)) 100000/1000 GeMpy.plot_data(geo_data) geo_data.formations di = GeMpy.InterpolatorInput(geo_data) di.data.get_formation_number() geo_data_s = GeMpy.select_series(geo_data, ['EarlyGranite_Series']) # Preprocessing data to interpolate: This rescales the coordinates between 0 and 1 for stability issues. # Here we can choose also the drift degree (in new updates I will change it to be possible to change the # grade after compilation). From here we can set also the data type of the operations in case you want to # use the GPU. Verbose is huge. There is a large list of strings that select what you want to print while # the computation. 
data_interp = GeMpy.set_interpolator(geo_data, dtype="float32", verbose=[]) # This cell will go to the backend # Set all the theano shared parameters and return the symbolic variables (the input of the theano function) input_data_T = data_interp.interpolator.tg.input_parameters_list() # Prepare the input data (interfaces, foliations data) to call the theano function. # Also set a few theano shared variables with the lengths of the formation series and so on input_data_P = data_interp.interpolator.data_prep() # Compile the theano function. debugging = theano.function(input_data_T, data_interp.interpolator.tg.whole_block_model(), on_unused_input='ignore', allow_input_downcast=True, profile=True) %%timeit # Solve the model by calling the theano function sol = debugging(input_data_P[0], input_data_P[1], input_data_P[2], input_data_P[3],input_data_P[4], input_data_P[5]) lith = sol[-1,0,:] np.save('sandstone_lith', lith) a = geo_data.grid.grid[:,0].astype(bool) a2 = a.reshape(50,50,50) a2[:,:,0] geo_data.grid.grid 50*50 geo_data.data_to_pickle('sandstone') a2 = a[:2500] g = geo_data.grid.grid h = geo_data.grid.grid[:2500] %%timeit eu(g,h) def squared_euclidean_distances(x_1, x_2): """ Compute the euclidean distances in 3D between all the points in x_1 and x_2 Args: x_1 (theano.tensor.matrix): shape n_points x number dimension x_2 (theano.tensor.matrix): shape n_points x number dimension Returns: theano.tensor.matrix: Distance matrix. 
shape n_points x n_points """ # T.maximum avoids negative numbers, increasing stability sqd = T.sqrt(T.maximum( (x_1**2).sum(1).reshape((x_1.shape[0], 1)) + (x_2**2).sum(1).reshape((1, x_2.shape[0])) - 2 * x_1.dot(x_2.T), 0 )) return sqd x_1 = T.matrix() x_2 = T.matrix() eu = theano.function([x_1, x_2], squared_euclidean_distances(x_1, x_2)) from evtk.hl import gridToVTK import numpy as np # Dimensions nx, ny, nz = 50, 50, 50 lx = geo_data.extent[1]-geo_data.extent[0] ly = geo_data.extent[3]-geo_data.extent[2] lz = geo_data.extent[5]-geo_data.extent[4] dx, dy, dz = lx/nx, ly/ny, lz/nz ncells = nx * ny * nz npoints = (nx + 1) * (ny + 1) * (nz + 1) # Coordinates x = np.arange(0, lx + 0.1*dx, dx, dtype='float64') y = np.arange(0, ly + 0.1*dy, dy, dtype='float64') z = np.arange(0, lz + 0.1*dz, dz, dtype='float64') x += geo_data.extent[0] y += geo_data.extent[2] z += geo_data.extent[4] # Variables litho = sol[-1,0,:].reshape( (nx, ny, nz)) gridToVTK("./sandstone", x, y, z, cellData = {"lithology" : litho},) geo_data.extent[4] # Plot the block model. GeMpy.plot_section(geo_data, 13, block = sol[-1,0,:], direction='x', plot_data = True) geo_res = pn.read_csv('olaqases.vox') geo_res = geo_res.iloc[9:] geo_res['nx 50'].unique(), geo_data.formations ip_addresses = geo_data.interfaces["formation"].unique() ip_dict = dict(zip(ip_addresses, range(1, len(ip_addresses) + 1))) ip_dict['Murchison'] = 0 ip_dict['out'] = 0 ip_dict['SimpleMafic'] = 4 geo_res_num = geo_res['nx 50'].replace(ip_dict) geo_res_num ip_dict (geo_res_num.shape[0]), sol[-1,0,:].shape[0] sol[-1,0, :][7] geo_res_num geo_res_num.as_matrix().astype(int) plt.imshow( geo_res_num.as_matrix().reshape(50, 50, 50)[:, 23, :], origin="bottom", cmap="viridis" ) plt.imshow( sol[-1,0,:].reshape(50, 50, 50)[:, 23, :].T, origin="bottom", cmap="viridis" ) # Plot the block model. 
GeMpy.plot_section(geo_data, 13, block = geo_res_num.as_matrix(), direction='y', plot_data = True) 50*50*50 np.unique(sol[-1,0,:]) # Formation number and formation data_interp.interfaces.groupby('formation number').formation.unique() data_interp.interpolator.tg.u_grade_T.get_value() np.unique(sol) #np.save('SandstoneSol', sol) np.count_nonzero(np.load('SandstoneSol.npy') == sol) sol.shape GeMpy.PlotData(geo_data).plot3D_steno(sol[-1,0,:], 'Sandstone', description='The sandstone model') np.linspace(geo_data.extent[0], geo_data.extent[1], geo_data.resolution[0], retstep=True) np.diff(np.linspace(geo_data.extent[0], geo_data.extent[1], geo_data.resolution[0], retstep=False)).shape (geo_data.extent[1]- geo_data.extent[0])/ geo_data.resolution[0]-4 (geo_data.extent[1]- geo_data.extent[0])/39 # So far this is a simple 3D visualization. I have to adapt it into GeMpy lith0 = sol == 0 lith1 = sol == 1 lith2 = sol == 2 lith3 = sol == 3 lith4 = sol == 4 np.unique(sol) import ipyvolume.pylab as p3 p3.figure(width=800) p3.scatter(geo_data.grid.grid[:,0][lith0], geo_data.grid.grid[:,1][lith0], geo_data.grid.grid[:,2][lith0], marker='box', color = 'blue', size = 0.1 ) p3.scatter(geo_data.grid.grid[:,0][lith1], geo_data.grid.grid[:,1][lith1], geo_data.grid.grid[:,2][lith1], marker='box', color = 'yellow', size = 1 ) p3.scatter(geo_data.grid.grid[:,0][lith2], geo_data.grid.grid[:,1][lith2], geo_data.grid.grid[:,2][lith2], marker='box', color = 'green', size = 1 ) p3.scatter(geo_data.grid.grid[:,0][lith3], geo_data.grid.grid[:,1][lith3], geo_data.grid.grid[:,2][lith3], marker='box', color = 'pink', size = 1 ) p3.scatter(geo_data.grid.grid[:,0][lith4], geo_data.grid.grid[:,1][lith4], geo_data.grid.grid[:,2][lith4], marker='box', color = 'red', size = 1 ) p3.xlim(np.min(geo_data.grid.grid[:,0]),np.min(geo_data.grid.grid[:,0])+2175.0*40) p3.ylim(np.min(geo_data.grid.grid[:,1]),np.max(geo_data.grid.grid[:,1])) 
p3.zlim(np.min(geo_data.grid.grid[:,2]),np.min(geo_data.grid.grid[:,2])+2175.0*40)#np.max(geo_data.grid.grid[:,2])) p3.show() # The profile is of limited use at the moment because everything within a scan is not subdivided debugging.profile.summary()
Function profiling ================== Message: <ipython-input-6-22dcf15bad61>:3 Time in 5 calls to Function.__call__: 1.357155e+01s Time in Function.fn.__call__: 1.357096e+01s (99.996%) Time in thunks: 1.357014e+01s (99.990%) Total compile time: 2.592983e+01s Number of Apply nodes: 95 Theano Optimizer time: 1.642699e+01s Theano validate time: 3.617525e-02s Theano Linker time (includes C, CUDA code generation/compiling): 9.462233e+00s Import time 1.913705e-01s Node make_thunk time 9.450990e+00s Node forall_inplace,cpu,scan_fn}(Elemwise{Maximum}[(0, 0)].0, Subtensor{int64:int64:int8}.0, Subtensor{int64:int64:int8}.0, Subtensor{int64:int64:int8}.0, Subtensor{int64:int64:int8}.0, Subtensor{int64:int64:int8}.0, Subtensor{int64:int64:int8}.0, IncSubtensor{InplaceSet;:int64:}.0, grade of the universal drift, <TensorType(float64, matrix)>, <TensorType(float64, vector)>, Value of the formation, Position of the dips, Rest of the points of the layers, Reference points for every layer, Angle of every dip, Azimuth, Polarity, InplaceDimShuffle{x,x}.0, InplaceDimShuffle{x,x}.0, Elemwise{Composite{((sqr(sqr(i0)) * sqr(i0)) * i0)}}.0, Elemwise{Composite{(sqr(sqr(i0)) * i0)}}.0, Elemwise{Composite{(sqr(i0) * i0)}}.0, Elemwise{mul,no_inplace}.0, Elemwise{neg,no_inplace}.0, Elemwise{mul,no_inplace}.0, Elemwise{true_div,no_inplace}.0, Elemwise{Mul}[(0, 1)].0, Elemwise{mul,no_inplace}.0, Elemwise{Composite{(i0 * Composite{sqr(sqr(i0))}(i1))}}.0, Elemwise{Composite{(((i0 * i1) / sqr(i2)) + i3)}}.0, Reshape{2}.0) time 9.275085e+00s Node Elemwise{Composite{Switch(LT((i0 - Composite{Switch(LT(i0, i1), i0, i1)}(i1, i2)), i3), (i4 - i0), Switch(GE((i0 - Composite{Switch(LT(i0, i1), i0, i1)}(i1, i2)), (i2 - Composite{Switch(LT(i0, i1), i0, i1)}(i1, i2))), (i5 + i0), Switch(LE((i2 - Composite{Switch(LT(i0, i1), i0, i1)}(i1, i2)), i3), (i5 + i0), i0)))}}(Elemwise{Composite{minimum(minimum(minimum(minimum(minimum(i0, i1), i2), i3), i4), i5)}}.0, TensorConstant{1}, Elemwise{add,no_inplace}.0, 
TensorConstant{0}, TensorConstant{-2}, TensorConstant{2}) time 3.851414e-03s Node Elemwise{Composite{Switch(i0, Switch(LT((i1 + i2), i3), i3, (i1 + i2)), Switch(LT(i1, i2), i1, i2))}}(Elemwise{lt,no_inplace}.0, Elemwise{Composite{minimum(minimum(minimum(minimum(minimum(i0, i1), i2), i3), i4), i5)}}.0, Elemwise{Composite{Switch(LT((i0 + i1), i2), i2, (i0 + i1))}}.0, TensorConstant{0}) time 3.796577e-03s Node Elemwise{Composite{minimum(minimum(minimum(minimum(minimum(i0, i1), i2), i3), i4), i5)}}(Elemwise{Composite{Switch(LT((i0 + i1), i2), i2, (i0 + i1))}}.0, Elemwise{sub,no_inplace}.0, Elemwise{Composite{Switch(LT((i0 + i1), i2), i2, (i0 + i1))}}.0, Elemwise{sub,no_inplace}.0, Elemwise{Composite{Switch(LT((i0 + i1), i2), i2, (i0 + i1))}}.0, Elemwise{sub,no_inplace}.0) time 3.589630e-03s Node Elemwise{Composite{(((i0 - maximum(i1, i2)) - i3) + maximum(i4, i5))}}[(0, 0)](Elemwise{Composite{Switch(LT(i0, i1), (i0 + i2), i0)}}.0, Elemwise{Composite{minimum(((i0 + i1) - i2), i3)}}.0, TensorConstant{1}, TensorConstant{1}, Elemwise{Composite{((maximum(i0, i1) - Switch(LT(i2, i3), (i2 + i4), i2)) + i1)}}[(0, 2)].0, TensorConstant{2}) time 3.567696e-03s Time in all call to theano.grad() 0.000000e+00s Time since theano import 74.908s Class --- <% time> <sum %> <apply time> <time per call> <type> <#call> <#apply> <Class name> 99.6% 99.6% 13.517s 2.70e+00s Py 5 1 theano.scan_module.scan_op.Scan 0.2% 99.9% 0.034s 6.76e-03s Py 5 1 theano.tensor.basic.Nonzero 0.1% 100.0% 0.012s 4.31e-05s C 285 57 theano.tensor.elemwise.Elemwise 0.0% 100.0% 0.005s 9.77e-04s C 5 1 theano.tensor.subtensor.AdvancedSubtensor1 0.0% 100.0% 0.001s 2.78e-04s C 5 1 theano.tensor.subtensor.IncSubtensor 0.0% 100.0% 0.000s 2.13e-06s C 40 8 theano.tensor.subtensor.Subtensor 0.0% 100.0% 0.000s 3.59e-06s C 15 3 theano.tensor.basic.Reshape 0.0% 100.0% 0.000s 7.26e-07s C 65 13 theano.tensor.basic.ScalarFromTensor 0.0% 100.0% 0.000s 9.44e-06s C 5 1 theano.tensor.basic.AllocEmpty 0.0% 100.0% 0.000s 1.88e-06s C 20 4 
theano.compile.ops.Shape_i 0.0% 100.0% 0.000s 1.86e-06s C 20 4 theano.tensor.elemwise.DimShuffle 0.0% 100.0% 0.000s 8.58e-07s C 5 1 theano.compile.ops.Rebroadcast ... (remaining 0 Classes account for 0.00%(0.00s) of the runtime) Ops --- <% time> <sum %> <apply time> <time per call> <type> <#call> <#apply> <Op name> 99.6% 99.6% 13.517s 2.70e+00s Py 5 1 forall_inplace,cpu,scan_fn} 0.2% 99.9% 0.034s 6.76e-03s Py 5 1 Nonzero 0.1% 99.9% 0.011s 5.50e-04s C 20 4 Elemwise{mul,no_inplace} 0.0% 100.0% 0.005s 9.77e-04s C 5 1 AdvancedSubtensor1 0.0% 100.0% 0.001s 2.78e-04s C 5 1 IncSubtensor{InplaceSet;:int64:} 0.0% 100.0% 0.001s 1.77e-04s C 5 1 Elemwise{eq,no_inplace} 0.0% 100.0% 0.000s 7.26e-07s C 65 13 ScalarFromTensor 0.0% 100.0% 0.000s 9.44e-06s C 5 1 AllocEmpty{dtype='float64'} 0.0% 100.0% 0.000s 4.32e-06s C 10 2 Subtensor{int64} 0.0% 100.0% 0.000s 1.40e-06s C 30 6 Subtensor{int64:int64:int8} 0.0% 100.0% 0.000s 1.88e-06s C 20 4 Shape_i{0} 0.0% 100.0% 0.000s 3.39e-06s C 10 2 Reshape{1} 0.0% 100.0% 0.000s 2.31e-06s C 10 2 InplaceDimShuffle{x,x} 0.0% 100.0% 0.000s 7.63e-07s C 30 6 Elemwise{le,no_inplace} 0.0% 100.0% 0.000s 1.45e-06s C 15 3 Elemwise{Composite{Switch(LT((i0 + i1), i2), i2, (i0 + i1))}} 0.0% 100.0% 0.000s 4.01e-06s C 5 1 Elemwise{true_div,no_inplace} 0.0% 100.0% 0.000s 4.01e-06s C 5 1 Reshape{2} 0.0% 100.0% 0.000s 3.81e-06s C 5 1 Elemwise{Composite{Switch(LT((i0 - Composite{Switch(LT(i0, i1), i0, i1)}(i1, i2)), i3), (i4 - i0), Switch(GE((i0 - Composite{Switch(LT(i0, i1), i0, i1)}(i1, i2)), (i2 - Composite{Switch(LT(i0, i1), i0, i1)}(i1, i2))), (i5 + i0), Switch(LE((i2 - Composite{Switch(LT(i0, i1), i0, i1)}(i1, i2)), i3), (i5 + i0), i0)))}} 0.0% 100.0% 0.000s 1.21e-06s C 15 3 Elemwise{sub,no_inplace} 0.0% 100.0% 0.000s 3.62e-06s C 5 1 Elemwise{Composite{(i0 * Composite{sqr(sqr(i0))}(i1))}} ... 
(remaining 27 Ops account for 0.00%(0.00s) of the runtime) Apply ------ <% time> <sum %> <apply time> <time per call> <#call> <id> <Apply name> 99.6% 99.6% 13.517s 2.70e+00s 5 93 forall_inplace,cpu,scan_fn}(Elemwise{Maximum}[(0, 0)].0, Subtensor{int64:int64:int8}.0, Subtensor{int64:int64:int8}.0, Subtensor{int64:int64:int8}.0, Subtensor{int64:int64:int8}.0, Subtensor{int64:int64:int8}.0, Subtensor{int64:int64:int8}.0, IncSubtensor{InplaceSet;:int64:}.0, grade of the universal drift, <TensorType(float64, matrix)>, <TensorType(float64, vector)>, Value of the formation, Position of the dips, Rest of the points of the layers, Re input 0: dtype=int64, shape=no shape, strides=no strides input 1: dtype=int64, shape=no shape, strides=no strides input 2: dtype=int64, shape=no shape, strides=no strides input 3: dtype=int64, shape=no shape, strides=no strides input 4: dtype=int64, shape=no shape, strides=no strides input 5: dtype=int64, shape=no shape, strides=no strides input 6: dtype=int64, shape=no shape, strides=no strides input 7: dtype=float64, shape=no shape, strides=no strides input 8: dtype=int64, shape=no shape, strides=no strides input 9: dtype=float64, shape=no shape, strides=no strides input 10: dtype=float64, shape=no shape, strides=no strides input 11: dtype=float64, shape=no shape, strides=no strides input 12: dtype=float32, shape=no shape, strides=no strides input 13: dtype=float32, shape=no shape, strides=no strides input 14: dtype=float32, shape=no shape, strides=no strides input 15: dtype=float32, shape=no shape, strides=no strides input 16: dtype=float32, shape=no shape, strides=no strides input 17: dtype=float32, shape=no shape, strides=no strides input 18: dtype=float64, shape=no shape, strides=no strides input 19: dtype=float64, shape=no shape, strides=no strides input 20: dtype=float64, shape=no shape, strides=no strides input 21: dtype=float64, shape=no shape, strides=no strides input 22: dtype=float64, shape=no shape, strides=no strides input 23: 
dtype=float64, shape=no shape, strides=no strides input 24: dtype=float64, shape=no shape, strides=no strides input 25: dtype=float64, shape=no shape, strides=no strides input 26: dtype=float64, shape=no shape, strides=no strides input 27: dtype=float64, shape=no shape, strides=no strides input 28: dtype=float64, shape=no shape, strides=no strides input 29: dtype=float64, shape=no shape, strides=no strides input 30: dtype=float64, shape=no shape, strides=no strides input 31: dtype=float64, shape=no shape, strides=no strides output 0: dtype=float64, shape=no shape, strides=no strides 0.2% 99.9% 0.034s 6.76e-03s 5 37 Nonzero(Reshape{1}.0) input 0: dtype=float64, shape=no shape, strides=no strides output 0: dtype=int64, shape=no shape, strides=no strides 0.1% 99.9% 0.011s 2.19e-03s 5 27 Elemwise{mul,no_inplace}(<TensorType(float64, matrix)>, Elemwise{eq,no_inplace}.0) input 0: dtype=float64, shape=no shape, strides=no strides input 1: dtype=bool, shape=no shape, strides=no strides output 0: dtype=float64, shape=no shape, strides=no strides 0.0% 100.0% 0.005s 9.77e-04s 5 51 AdvancedSubtensor1(Reshape{1}.0, Subtensor{int64}.0) input 0: dtype=float64, shape=no shape, strides=no strides input 1: dtype=int64, shape=no shape, strides=no strides output 0: dtype=float64, shape=no shape, strides=no strides 0.0% 100.0% 0.001s 2.78e-04s 5 91 IncSubtensor{InplaceSet;:int64:}(AllocEmpty{dtype='float64'}.0, Rebroadcast{0}.0, Constant{1}) input 0: dtype=float64, shape=no shape, strides=no strides input 1: dtype=float64, shape=no shape, strides=no strides input 2: dtype=int64, shape=no shape, strides=no strides output 0: dtype=float64, shape=no shape, strides=no strides 0.0% 100.0% 0.001s 1.77e-04s 5 15 Elemwise{eq,no_inplace}(InplaceDimShuffle{0,x}.0, TensorConstant{(1, 1) of 0}) input 0: dtype=float64, shape=no shape, strides=no strides input 1: dtype=int8, shape=no shape, strides=no strides output 0: dtype=bool, shape=no shape, strides=no strides 0.0% 100.0% 0.000s 9.44e-06s 5 88 
AllocEmpty{dtype='float64'}(Elemwise{Composite{(Switch(LT(maximum(i0, i1), i2), (maximum(i0, i1) + i3), (maximum(i0, i1) - i2)) + i4)}}[(0, 0)].0, Shape_i{0}.0) input 0: dtype=int64, shape=no shape, strides=no strides input 1: dtype=int64, shape=no shape, strides=no strides output 0: dtype=float64, shape=no shape, strides=no strides 0.0% 100.0% 0.000s 5.25e-06s 5 34 Reshape{1}(Elemwise{mul,no_inplace}.0, TensorConstant{(1,) of -1}) input 0: dtype=float64, shape=no shape, strides=no strides input 1: dtype=int64, shape=no shape, strides=no strides output 0: dtype=float64, shape=no shape, strides=no strides 0.0% 100.0% 0.000s 4.63e-06s 5 44 Subtensor{int64}(Nonzero.0, Constant{0}) input 0: dtype=int64, shape=no shape, strides=no strides input 1: dtype=int64, shape=no shape, strides=no strides output 0: dtype=int64, shape=no shape, strides=no strides 0.0% 100.0% 0.000s 4.29e-06s 5 23 Elemwise{mul,no_inplace}(TensorConstant{(1, 1) of 4.0}, InplaceDimShuffle{x,x}.0) input 0: dtype=float64, shape=no shape, strides=no strides input 1: dtype=float64, shape=no shape, strides=no strides output 0: dtype=float64, shape=no shape, strides=no strides 0.0% 100.0% 0.000s 4.01e-06s 5 94 Subtensor{int64}(forall_inplace,cpu,scan_fn}.0, ScalarFromTensor.0) input 0: dtype=float64, shape=no shape, strides=no strides input 1: dtype=int64, shape=no shape, strides=no strides output 0: dtype=float64, shape=no shape, strides=no strides 0.0% 100.0% 0.000s 4.01e-06s 5 58 Reshape{2}(AdvancedSubtensor1.0, TensorConstant{[-1 3]}) input 0: dtype=float64, shape=no shape, strides=no strides input 1: dtype=int64, shape=no shape, strides=no strides output 0: dtype=float64, shape=no shape, strides=no strides 0.0% 100.0% 0.000s 4.01e-06s 5 29 Elemwise{true_div,no_inplace}(TensorConstant{(1, 1) of -14.0}, Elemwise{sqr,no_inplace}.0) input 0: dtype=float64, shape=no shape, strides=no strides input 1: dtype=float64, shape=no shape, strides=no strides output 0: dtype=float64, shape=no shape, strides=no 
strides 0.0% 100.0% 0.000s 3.81e-06s 5 38 Elemwise{Composite{Switch(LT((i0 - Composite{Switch(LT(i0, i1), i0, i1)}(i1, i2)), i3), (i4 - i0), Switch(GE((i0 - Composite{Switch(LT(i0, i1), i0, i1)}(i1, i2)), (i2 - Composite{Switch(LT(i0, i1), i0, i1)}(i1, i2))), (i5 + i0), Switch(LE((i2 - Composite{Switch(LT(i0, i1), i0, i1)}(i1, i2)), i3), (i5 + i0), i0)))}}(Elemwise{Composite{minimum(minimum(minimum(minimum(minimum(i0, i1), i2), i3), i4), i5)}}.0, TensorConstant{1}, Elemwise{add,no_inplace}. input 0: dtype=int64, shape=no shape, strides=no strides input 1: dtype=int64, shape=no shape, strides=no strides input 2: dtype=int64, shape=no shape, strides=no strides input 3: dtype=int8, shape=no shape, strides=no strides input 4: dtype=int64, shape=no shape, strides=no strides input 5: dtype=int64, shape=no shape, strides=no strides output 0: dtype=int64, shape=no shape, strides=no strides 0.0% 100.0% 0.000s 3.62e-06s 5 17 Elemwise{Composite{(i0 * Composite{sqr(sqr(i0))}(i1))}}(TensorConstant{(1, 1) of 15.0}, InplaceDimShuffle{x,x}.0) input 0: dtype=float64, shape=no shape, strides=no strides input 1: dtype=float64, shape=no shape, strides=no strides output 0: dtype=float64, shape=no shape, strides=no strides 0.0% 100.0% 0.000s 3.62e-06s 5 2 Shape_i{0}(Length of interfaces in every series) input 0: dtype=int64, shape=no shape, strides=no strides output 0: dtype=int64, shape=no shape, strides=no strides 0.0% 100.0% 0.000s 3.58e-06s 5 20 Elemwise{Composite{((sqr(sqr(i0)) * sqr(i0)) * i0)}}(InplaceDimShuffle{x,x}.0) input 0: dtype=float64, shape=no shape, strides=no strides output 0: dtype=float64, shape=no shape, strides=no strides 0.0% 100.0% 0.000s 3.39e-06s 5 81 Subtensor{int64:int64:int8}(Length of interfaces in every series, ScalarFromTensor.0, ScalarFromTensor.0, Constant{1}) input 0: dtype=int64, shape=no shape, strides=no strides input 1: dtype=int64, shape=no shape, strides=no strides input 2: dtype=int64, shape=no shape, strides=no strides input 3: dtype=int8, 
shape=no shape, strides=no strides output 0: dtype=int64, shape=no shape, strides=no strides 0.0% 100.0% 0.000s 3.19e-06s 5 8 Elemwise{Composite{(((i0 * i1) / sqr(i2)) + i3)}}(TensorConstant{14.0}, <TensorType(float64, scalar)>, <TensorType(float64, scalar)>, <TensorType(float64, scalar)>) input 0: dtype=float64, shape=no shape, strides=no strides input 1: dtype=float64, shape=no shape, strides=no strides input 2: dtype=float64, shape=no shape, strides=no strides input 3: dtype=float64, shape=no shape, strides=no strides output 0: dtype=float64, shape=no shape, strides=no strides 0.0% 100.0% 0.000s 3.00e-06s 5 77 Elemwise{Composite{(((i0 - maximum(i1, i2)) - i3) + maximum(i4, i5))}}[(0, 0)](Elemwise{Composite{Switch(LT(i0, i1), (i0 + i2), i0)}}.0, Elemwise{Composite{minimum(((i0 + i1) - i2), i3)}}.0, TensorConstant{1}, TensorConstant{1}, Elemwise{Composite{((maximum(i0, i1) - Switch(LT(i2, i3), (i2 + i4), i2)) + i1)}}[(0, 2)].0, TensorConstant{2}) input 0: dtype=int64, shape=no shape, strides=no strides input 1: dtype=int64, shape=no shape, strides=no strides input 2: dtype=int8, shape=no shape, strides=no strides input 3: dtype=int8, shape=no shape, strides=no strides input 4: dtype=int64, shape=no shape, strides=no strides input 5: dtype=int8, shape=no shape, strides=no strides output 0: dtype=int64, shape=no shape, strides=no strides ... (remaining 75 Apply instances account for 0.00%(0.00s) of the runtime) Here are tips to potentially make your code run faster (if you think of new ones, suggest them on the mailing list). Test them first, as they are not guaranteed to always provide a speedup. Sorry, no tip for today.
MIT
Prototype Notebook/Example_1_Sandstone.ipynb
nre-aachen/gempy
Below here, everything is deprecated so far. First we create a GeMpy instance with most parameters at their defaults (except the range, which is given by the project). Then we also fix the extent and the resolution of the domain we want to interpolate. Finally we compile the function, which is only needed once every time we open the project (the Theano developers are working on allowing compiled functions to be loaded, although in our case it is not a big deal). *General note: so far the rescaling factor is calculated for all series at the same time. GeoModeller does it individually for every potential field. I have to look into what this parameter exactly means.* All input data is stored in pandas dataframes under ```self.Data.Interfaces``` and ```self.Data.Foliations```. In case of disconformities, we can define which formations belong to which series using a dictionary. Until Python 3.6, it is important to specify the order of the series explicitly, because otherwise it is random. Now the data frame should have the series column too. The next step is the creation of a grid. So far only regular grids are supported. By default it takes the extent and the resolution given in the `import_data` method.
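The point about series order can be made concrete: before CPython 3.7, a plain `dict` did not guarantee insertion order, which is why `order_series` must be passed explicitly. A minimal sketch independent of GeMpy (the formation names here are only placeholders, not the model's real ones):

```python
from collections import OrderedDict

# OrderedDict preserves insertion order on every Python version,
# so the chronological order of the series is unambiguous.
series = OrderedDict([
    ("EarlyGranite_Series", ("formation_a",)),
    ("BIF_Series", ("formation_b", "formation_c")),
    ("SimpleMafic_Series", ("formation_d",)),
])

order_series = list(series.keys())
print(order_series)
# → ['EarlyGranite_Series', 'BIF_Series', 'SimpleMafic_Series']
```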
# Create a Grid object; so far only regular grids #GeMpy.set_grid(geo_data) #GeMpy.get_grid(geo_data)
_____no_output_____
Plotting raw data The object Plot is created automatically as we call the methods above. This object contains some methods to plot the data and the results. It is possible to plot a 2D projection of the data in a specific direction using the following method. It is also possible to choose the series you want to plot. Additionally, all the keyword arguments of seaborn's lmplot can be used.
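Conceptually, a 2D projection in direction `'y'` just drops the y coordinate of every point and keeps (x, z). A minimal NumPy sketch of that idea (not GeMpy's actual implementation):

```python
import numpy as np

AXIS = {"x": 0, "y": 1, "z": 2}

def project(points, direction):
    """Project (n, 3) points onto the plane normal to `direction`."""
    keep = [i for i in range(3) if i != AXIS[direction]]
    return points[:, keep]

pts = np.array([[0.0, 1.0, 2.0],
                [3.0, 4.0, 5.0]])
print(project(pts, "y"))  # keeps the (x, z) columns
```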
#GeMpy.plot_data(geo_data, 'y', geo_data.series.columns.values[1])
_____no_output_____
Class Interpolator This class takes the data from the class Data and calculates potential fields and the block model. We can pass all the interpolation variables as keyword arguments. I recommend not touching them if you do not know what you are doing; the default values should be good enough. Also, the first time we execute the method we compile the theano function, so it can take a while.
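The interpolation is built on pairwise distances between data points; the Theano cell earlier uses the same algebraic expansion as this self-contained NumPy sketch, where the clamp at zero guards against tiny negative values produced by floating-point cancellation:

```python
import numpy as np

def squared_euclidean_distances(x_1, x_2):
    """Pairwise Euclidean distances between point sets of
    shape (n, 3) and (m, 3); the result has shape (n, m)."""
    sqd = ((x_1 ** 2).sum(1).reshape(-1, 1)
           + (x_2 ** 2).sum(1).reshape(1, -1)
           - 2 * x_1 @ x_2.T)
    # Cancellation can make sqd slightly negative; sqrt would give NaN.
    return np.sqrt(np.maximum(sqd, 0))

a = np.array([[0.0, 0.0, 0.0],
              [3.0, 4.0, 0.0]])
print(squared_euclidean_distances(a, a))  # 3-4-5 triangle: distance 5
```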
%debug geo_data.interpolator.results geo_data.interpolator.tg.c_o_T.get_value(), geo_data.interpolator.tg.a_T.get_value() geo_data.interpolator.compile_potential_field_function() geo_data.interpolator.compute_potential_fields('BIF_Series',verbose = 3) geo_data.interpolator.potential_fields geo_data.interpolator.results geo_data.interpolator.tg.c_resc.get_value()
_____no_output_____
Now we can visualize the individual potential fields as follows: Early granite
GeMpy.plot_potential_field(geo_data,10, n_pf=0)
_____no_output_____
BIF Series
GeMpy.plot_potential_field(geo_data,13, n_pf=1, cmap = "magma", plot_data = True, verbose = 5)
_____no_output_____
Simple Mafic
GeMpy.plot_potential_field(geo_data, 10, n_pf=2)
_____no_output_____
Optimizing the export of lithologies Usually the final result we want is the lithology block. The method `compute_block_model` will compute the block model, updating the attribute `block`. This attribute is a theano shared variable whose raveled 3D array can be retrieved with the method `get_value()`.
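Since `get_value()` returns the block raveled to 1-D, getting sections out of it is plain NumPy reshaping. A toy-sized sketch (2×3×4 instead of the model's 50×50×50):

```python
import numpy as np

nx, ny, nz = 2, 3, 4
raveled = np.arange(nx * ny * nz, dtype=float)  # stand-in for block.get_value()

block = raveled.reshape(nx, ny, nz)  # back to the grid shape
section_y = block[:, 1, :]           # a 2D slice at y-index 1

print(block.shape, section_y.shape)  # (2, 3, 4) (2, 4)
```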
GeMpy.compute_block_model(geo_data) #GeMpy.set_interpolator(geo_data, u_grade = 0, compute_potential_field=True)
_____no_output_____
And again, after computing the model, we can use the method `plot_block_section` of the Plot object to see a 2D section of the model.
GeMpy.plot_section(geo_data, 13, direction='y')
_____no_output_____
Export to vtk. (*Under development*)
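For cell-centered data, `gridToVTK` expects one more coordinate than cells along each axis; the `np.arange(0, l + 0.1*d, d)` pattern used in the export cell below produces exactly that. A quick self-contained check of the spacing logic:

```python
import numpy as np

nx, lx = 50, 10.0
dx = lx / nx
# Stepping past lx by a fraction of dx ensures the endpoint is included:
# nx + 1 node coordinates bound nx cells along this axis.
x = np.arange(0, lx + 0.1 * dx, dx, dtype='float64')
print(len(x))  # 51
```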
"""Export model to VTK Export the geology blocks to VTK for visualisation of the entire 3-D model in an external VTK viewer, e.g. Paraview. ..Note:: Requires pyevtk, available for free on: https://github.com/firedrakeproject/firedrake/tree/master/python/evtk **Optional keywords**: - *vtk_filename* = string : filename of VTK file (default: output_name) - *data* = np.array : data array to export to VKT (default: entire block model) """ vtk_filename = "noddyFunct2" extent_x = 10 extent_y = 10 extent_z = 10 delx = 0.2 dely = 0.2 delz = 0.2 from pyevtk.hl import gridToVTK # Coordinates x = np.arange(0, extent_x + 0.1*delx, delx, dtype='float64') y = np.arange(0, extent_y + 0.1*dely, dely, dtype='float64') z = np.arange(0, extent_z + 0.1*delz, delz, dtype='float64') # self.block = np.swapaxes(self.block, 0, 2) gridToVTK(vtk_filename, x, y, z, cellData = {"geology" : sol})
_____no_output_____