Projected Returns

It's now time to check if your trading signal has the potential to become profitable! We'll start by computing the net returns this portfolio would produce. For simplicity, we'll assume every stock gets an equal dollar amount of investment. This makes it easier to compute a portfolio's returns as the si...
def portfolio_returns(df_long, df_short, lookahead_returns, n_stocks): """ Compute expected returns for the portfolio, assuming equal investment in each long/short stock. Parameters ---------- df_long : DataFrame Top stocks for each ticker and date marked with a 1 df_short : DataFra...
Tests Passed
Apache-2.0
P1_Trading_with_Momentum/project_notebook.ipynb
hemang-75/AI_for_Trading
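The `portfolio_returns` cell above is truncated, but the equal-dollar idea can be sketched in a few lines. This is a hedged reconstruction, not the notebook's exact code: it assumes `df_long` and `df_short` are 1/0 DataFrames (dates × tickers) and that `lookahead_returns` shares their shape.

```python
import pandas as pd

def portfolio_returns_sketch(df_long, df_short, lookahead_returns, n_stocks):
    # Longs earn +return, shorts earn -return; every position gets
    # an equal 1/n_stocks share of the portfolio's dollars.
    return (df_long - df_short) * lookahead_returns / n_stocks
```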
View Data

Time to see how the portfolio did.
expected_portfolio_returns expected_portfolio_returns = portfolio_returns(df_long, df_short, lookahead_returns, 2*top_bottom_n) project_helper.plot_returns(expected_portfolio_returns.T.sum(), 'Portfolio Returns')
Statistical Tests

Annualized Rate of Return
expected_portfolio_returns_by_date = expected_portfolio_returns.T.sum().dropna() portfolio_ret_mean = expected_portfolio_returns_by_date.mean() portfolio_ret_ste = expected_portfolio_returns_by_date.sem() portfolio_ret_annual_rate = (np.exp(portfolio_ret_mean * 12) - 1) * 100 print(""" Mean: {:.6...
Mean: 0.106159
Standard Error: 0.071935
Annualized Rate of Return: 257.48%
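The annualization in the cell above treats the mean as a monthly log return, so compounding 12 of them gives $(e^{12\bar{r}} - 1) \times 100$ percent. A quick check using the mean printed above:

```python
import numpy as np

portfolio_ret_mean = 0.106159  # mean monthly return printed above
annual_rate = (np.exp(portfolio_ret_mean * 12) - 1) * 100
print(round(annual_rate, 2))   # matches the 257.48% shown above
```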
The annualized rate of return allows you to compare the rate of return from this strategy to other quoted rates of return, which are usually quoted on an annual basis.

T-Test

Our null hypothesis ($H_0$) is that the actual mean return from the signal is zero. We'll perform a one-sample, one-sided t-test on the observed ...
from scipy import stats def analyze_alpha(expected_portfolio_returns_by_date): """ Perform a t-test with the null hypothesis being that the expected mean return is zero. Parameters ---------- expected_portfolio_returns_by_date : Pandas Series Expected portfolio returns for each date ...
Tests Passed
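A minimal sketch of what `analyze_alpha` might look like, based only on the description above (not the notebook's exact code): `scipy.stats.ttest_1samp` is two-sided by default, so the p-value is halved to get the one-sided test, which is valid here because the alternative is a positive mean.

```python
import numpy as np
from scipy import stats

def analyze_alpha_sketch(expected_portfolio_returns_by_date):
    # One-sample t-test with H0: mean return == 0
    t_value, p_two_sided = stats.ttest_1samp(expected_portfolio_returns_by_date, 0.0)
    # one-sided p-value for the alternative "mean > 0"
    return t_value, p_two_sided / 2
```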
View Data

Let's see what values we get with our portfolio. After you run this, make sure to answer the question below.
t_value, p_value = analyze_alpha(expected_portfolio_returns_by_date) print(""" Alpha analysis: t-value: {:.3f} p-value: {:.6f} """.format(t_value, p_value))
Alpha analysis:
t-value: 1.476
p-value: 0.073339
4445 ...
import pandas
import requests
import time
from selenium import webdriver
from bs4 import BeautifulSoup

url = "http://results2.xacte.com/#/e/2306/searchable"
response = requests.get(url)
if response.status_code == 200:
    print(response.text)

# https://www.freecodecamp.org/news/how-to-scrape-websites-with-python-and-beau...
MIT
tri-results/how-to-web-scrape.ipynb
kbridge14/how2py
4445 · AARON FIGURA · LONG · M/34 · MANHATTAN BEACH, CA · 5:25:36 · 5:25:36
# aria-hidden=false when the box with info is closed; true when you open up the box. # you'll want to set it to true when viewing all the information per individual """ <md-backdrop class="md-dialog-backdrop md-opaque ng-scope" style="position: fixed;" aria-hidden="true"></md-backdrop> """ soup.find_all(name = "md-row...
Train PhyDNet: We will predict:
- `n_in`: 5 images
- `n_out`: 5 images
- `n_obj`: up to 3 objects
Path.cwd() DATA_PATH = Path.cwd()/'data' ds = MovingMNIST(DATA_PATH, n_in=5, n_out=5, n_obj=[1,2], th=None) train_tl = TfmdLists(range(120), ImageTupleTransform(ds)) valid_tl = TfmdLists(range(120), ImageTupleTransform(ds)) # i=0 # fat_tensor = torch.stack([torch.cat(train_tl[i][0], 0) for i in range(100)]) # m,s = fa...
Apache-2.0
04_train_phydnet.ipynb
shawnwang-tech/moving_mnist
Left: Input, Right: Target
dls.show_batch()
b = dls.one_batch()
explode_types(b)
PhyDNet
phycell = PhyCell(input_shape=(16,16), input_dim=64, F_hidden_dims=[49], n_layers=1, kernel_size=(7,7)) convlstm = ConvLSTM(input_shape=(16,16), input_dim=64, hidden_dims=[128,128,64], n_layers=3, kernel_size=(3,3)) encoder = EncoderRNN(phycell, convlstm) model = StackUnstack(PhyDNet(encoder, sigmoid=False, momen...
A handy callback to add the loss computed inside the model to the target loss.
#export class PHyCallback(Callback): def after_pred(self): self.learn.pred, self.loss_phy = self.pred def after_loss(self): self.learn.loss += self.loss_phy learn = Learner(dls, model, loss_func=mse_loss, metrics=metrics, cbs=[TeacherForcing(10), PHyCallback()], opt_func=ranger)...
Project led by Nikolas Papastavrou. Code developed by Varun Bopardikar. Data analysis conducted by Selina Ho and Hana Ahmed.
import pandas as pd import numpy as np from sklearn.preprocessing import LabelEncoder from sklearn.preprocessing import OneHotEncoder from sklearn.model_selection import train_test_split from sklearn import metrics from datetime import datetime from sklearn.naive_bayes import GaussianNB from sklearn import tree from s...
MIT
dataAnalysis/ETCClassifier.ipynb
mminamina/311-data
Load Data
def gsev(val):
    """
    Return 1 if a number is greater than 7, else 0.
    """
    if val <= 7:
        return 0
    else:
        return 1

df = pd.read_csv('../../fservice.csv')
df['Just Date'] = df['Just Date'].apply(lambda x: datetime.strptime(x, '%Y-%m-%d'))
df['Seven'] = df['ElapsedDays'].apply(gsev)
/Users/varunbopardikar/anaconda3/lib/python3.7/site-packages/IPython/core/interactiveshell.py:3057: DtypeWarning: Columns (10,33) have mixed types. Specify dtype option on import or set low_memory=False. interactivity=interactivity, compiler=compiler, result=result)
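As an aside, the per-row `apply(gsev, ...)` above can be replaced by a single vectorized comparison. A small sketch on toy data (the column names are taken from the cell above):

```python
import pandas as pd

df = pd.DataFrame({'ElapsedDays': [3, 7, 8, 20]})
# 1 when the request took more than 7 elapsed days, else 0
df['Seven'] = (df['ElapsedDays'] > 7).astype(int)
print(df['Seven'].tolist())  # [0, 0, 1, 1]
```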
Parameters
c = ['Anonymous', 'AssignTo', 'RequestType', 'RequestSource', 'CD', 'Direction',
     'ActionTaken', 'APC', 'AddressVerified']
d = ['Latitude', 'Longitude']
Feature Cleaning
#Put desired columns into dataframe, drop nulls.
dfn = df.filter(items = c + d + ['ElapsedDays'] + ['Seven'])
dfn = dfn.dropna()

#Separate data into explanatory and response variables
XCAT = dfn.filter(items = c).values
XNUM = dfn.filter(items = d).values
y = dfn['ElapsedDays'] <= 7

#Encode categorical ...
/Users/varunbopardikar/anaconda3/lib/python3.7/site-packages/sklearn/preprocessing/_encoders.py:415: FutureWarning: The handling of integer data will change in version 0.22. Currently, the categories are determined based on the range [0, max(values)], while in the future they will be determined based on the unique valu...
Algorithms and Hyperparameters
##Used Random Forest in Final Model
gnb = GaussianNB()
dc = tree.DecisionTreeClassifier(criterion = 'entropy', max_depth = 20)
rf = RandomForestClassifier(n_estimators = 50, max_depth = 20)
lr = LogisticRegression()
Validation Set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 0)
X_train, X_val, y_train, y_val = train_test_split(X_train, y_train, test_size = 0.2, random_state = 0)

#Train Model
classifier = rf
classifier.fit(X_train, y_train)

#Test model
y_vpred = classifier.predict(X_val)

#Print Acc...
Accuracy: 0.9385983549336814
Precision, Recall, F1Score: (0.946896616482519, 0.9893259382317161, 0.9676463908853341, None)
Test Set
#Test model
y_tpred = classifier.predict(X_test)

#Print Accuracy Function results
print("Accuracy:", metrics.accuracy_score(y_test, y_tpred))
print("Precision, Recall, F1Score:", metrics.precision_recall_fscore_support(y_test, y_tpred, average = 'binary'))
Accuracy: 0.9387186223709323
Precision, Recall, F1Score: (0.9468199376863904, 0.9895874917412928, 0.9677314319565967, None)
Deep Learning : Simple DNN to Classify Images, and application of TensorBoard.dev
#Importing the necessary libraries
import tensorflow as tf
import keras
import tensorflow.keras.datasets.fashion_mnist as data
import numpy as np
from time import time
import matplotlib.pyplot as plt
MIT
Image_Classification_with_TensorBoard_Application.ipynb
shivtejshete/Computer-Vision
1. Loading Data
#Assigning the raw data from the Keras dataset - Fashion MNIST
raw_data = data

#Loading the dataset into training and validation datasets
(train_image, train_label), (test_image, test_label) = raw_data.load_data()
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/train-labels-idx1-ubyte.gz 32768/29515 [=================================] - 0s 0us/step Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/train-images-idx3-ubyte.gz 26427392/26421880 [=====================...
2. Data Inspection
#checking the input volume shape print("Total Training Images :{}".format(train_image.shape[0])) print("Training Images Shape (ht,wd) :{} X {}".format(train_image.shape[1],train_image.shape[2])) print("Total Testing Images :{}".format(test_image.shape[0])) print("Testing Images Shape (ht,wd) :{} X {}".format(test_imag...
Total Training Images : 60000
Training Images Shape (ht,wd) : 28 X 28
Total Testing Images : 10000
Testing Images Shape (ht,wd) : 28 X 28
3. Rescaling Data
#rescaling the images for better training of Neural Network train_image = train_image/255.0 test_image = test_image/255.0 #Existing Image classes from Fashion MNIST - in original Order class_labels= ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat', 'Sandal', ...
4. Sample Images Visualization
#Visualizing some of the training images
fig, ax = plt.subplots(3, 3, figsize=(10, 10))
for i, img in enumerate(ax.flatten()):
    img.pcolor(train_image[i])
    img.set_title(class_labels[train_label[i]])
plt.tight_layout()
5. Building the Model Architecture
#Defining a very Simple Deep Neural Network with Softmax as activation function of the top layer for multi-class classification model = keras.Sequential() model.add(keras.layers.Flatten()) model.add(keras.layers.Dense(256, activation= 'relu', use_bias= True)) model.add(keras.layers.Dropout(rate= .2)) model.add(keras.la...
6. Defining TensorBoard for Training visualization
#creating a tensorboard object to be called while training the model
tensorboard = keras.callbacks.TensorBoard(log_dir='.../logs',
                                          histogram_freq=1,
                                          batch_size=1000,
                                          write_grads=True,
                                          write_images=True)
/usr/local/lib/python3.6/dist-packages/keras/callbacks/tensorboard_v2.py:92: UserWarning: The TensorBoard callback `batch_size` argument (for histogram computation) is deprecated with TensorFlow 2.0. It will be ignored. warnings.warn('The TensorBoard callback `batch_size` argument ' /usr/local/lib/python3.6/dist-pack...
7. Model Training
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])

# Fitting the model with the tensorboard object as a callback
model.fit(train_image, train_label,
          batch_size=1000,
          epochs=24,
          validation_data=(test_image, test_label),
          callbacks=[tensorboard])

model.summary()
Model: "sequential_9" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= flatten_8 (Flatten) (None, 784) 0 __________________________________...
8. Uploading the logs to TensorBoard.dev
# #checking out the TensorBoard dashboard to analyze training and validation performance with other statistics during the training of model %reload_ext tensorboard !tensorboard dev upload --logdir '.../logs' --name "Deep Learning : Tensorboard" --description "Modeling a very simple Image Classifier based on Fashion MNI...
Live Link: https://tensorboard.dev/experiment/u6hGU2LaQqKn1b1udgL1RA/

9. Making a Sample Prediction
#selection of an image sample = test_image[6] plt.imshow(sample) plt.xlabel(class_labels[test_label[6]]) plt.title(test_label[6]) plt.tight_layout() #Prediction using trained model results= model.predict(test_image[6].reshape(1,28,28)) plt.bar(np.arange(0,10),results[0], tick_label=class_labels, ) plt.xticks(rotation=4...
Dimensional Mechanics Coding Challenge

Problem Statement

"You are given a dictionary (dictionary.txt), containing a list of words, one per line. Imagine you have seven tiles. Each tile is either blank or contains a single lowercase letter (a-z).

Please list all the words from the dictionary that can be produced by usi...
#section 1 import csv import pandas as pd f = open("dictionary.txt","r") text_file= f.read() dictionary= text_file.split("\n") #'valid_words' stores this list. You can view this list of words in the 'wordlist.csv' file present #in the root directory (read instruction on how to access 'wordlist.csv') valid_words=[] f...
The number of words that can be formed, including those where blank tiles are used as wildcards: 29455
MIT
DataWrangling.ipynb
niharikabalachandra/Data-Wrangling-Example
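The filtering cell above is truncated, but the core check — can a word be built from seven tiles where blanks act as wildcards — can be sketched with `collections.Counter`. The function name and the `'_'` blank marker here are illustrative assumptions, not the notebook's exact code:

```python
from collections import Counter

def can_form(word, tiles):
    # tiles: a 7-character string; '_' denotes a blank (wildcard) tile
    blanks = tiles.count('_')
    # letters the word needs beyond what the lettered tiles supply
    missing = sum((Counter(word) - Counter(tiles.replace('_', ''))).values())
    return len(word) <= len(tiles) and missing <= blanks
```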
Video 1 - Linear regression with swyft
import numpy as np
import pylab as plt
from scipy.linalg import inv
from scipy import stats
MIT
notebooks/Video 1 - Linear regression.ipynb
undark-lab/swyft
Linear regression for a second order polynomial

$$y(x) = v_0 + v_1\cdot x + v_2 \cdot x^2$$

$$d_i \sim \mathcal{N}(y(x_i), \sigma = 0.05)\;, \quad \text{with}\quad x_i = 0,\; 0.1,\; 0.2, \;\dots,\; 1.0$$
# Model and reference parameters N = 11 x = np.linspace(0, 1, N) T = np.array([x**0, x**1, x**2]).T v_true = np.array([-0.2, 0., 0.2]) # Mock data SIGMA = 0.05 np.random.seed(42) DATA = T.dot(v_true) + np.random.randn(N)*SIGMA # Linear regression v_lr = inv(T.T.dot(T)).dot(T.T.dot(DATA)) y_lr = T.dot(v_lr) # Fisher e...
v_0 = -0.188 +- 0.038 (-0.200)
v_1 =  0.098 +- 0.177 ( 0.000)
v_2 =  0.079 +- 0.171 ( 0.200)
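The normal-equation solve above, `inv(T.T.dot(T)).dot(T.T.dot(DATA))`, agrees with NumPy's least-squares routine. A self-contained check, reusing the seed, design matrix, and noise level from the cell above:

```python
import numpy as np

N = 11
x = np.linspace(0, 1, N)
T = np.array([x**0, x**1, x**2]).T          # Vandermonde-style design matrix
v_true = np.array([-0.2, 0.0, 0.2])
SIGMA = 0.05
np.random.seed(42)
DATA = T @ v_true + np.random.randn(N) * SIGMA

v_normal = np.linalg.inv(T.T @ T) @ (T.T @ DATA)    # normal equations
v_lstsq, *_ = np.linalg.lstsq(T, DATA, rcond=None)  # SVD-based solver
assert np.allclose(v_normal, v_lstsq)
```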
SWYFT!
import swyft def model(v): y = T.dot(v) return dict(y=y) sim = swyft.Simulator(model, ['v0', 'v1', 'v2'], dict(y=(11,))) def noise(sim, v): d = sim['y'] + np.random.randn(11)*SIGMA return dict(d=d) store = swyft.Store.memory_store(sim) prior = swyft.Prior(lambda u: u*2 - 1, 3) # Uniform(-1, 1) store....
Discrete Choice Models

Fair's Affair data

A survey of women only was conducted in 1974 by *Redbook* asking about extramarital affairs.
%matplotlib inline import numpy as np import pandas as pd from scipy import stats import matplotlib.pyplot as plt import statsmodels.api as sm from statsmodels.formula.api import logit print(sm.datasets.fair.SOURCE) print( sm.datasets.fair.NOTE) dta = sm.datasets.fair.load_pandas().data dta['affair'] = (dta['affairs'] ...
Logit Regression Results ============================================================================== Dep. Variable: affair No. Observations: 6366 Model: Logit Df Residuals: 6357 Meth...
BSD-3-Clause
docs/source2/examples/notebooks/generated/discrete_choice_example.ipynb
GreatWei/pythonStates
How well are we predicting?
affair_mod.pred_table()
The coefficients of the discrete choice model do not tell us much. What we're after is marginal effects.
mfx = affair_mod.get_margeff() print(mfx.summary()) respondent1000 = dta.iloc[1000] print(respondent1000) resp = dict(zip(range(1,9), respondent1000[["occupation", "educ", "occupation_husb", "rate_marriage", "age", "yrs_married", "c...
Logit Marginal Effects ===================================== Dep. Variable: affair Method: dydx At: overall =================================================================================== dy/dx std err ...
`predict` expects a `DataFrame` since `patsy` is used to select columns.
respondent1000 = dta.iloc[[1000]]
affair_mod.predict(respondent1000)
affair_mod.fittedvalues[1000]
affair_mod.model.cdf(affair_mod.fittedvalues[1000])
The "correct" model here is likely the Tobit model. We have a work-in-progress branch "tobit-model" on GitHub, if anyone is interested in censored regression models.

Exercise: Logit vs Probit
fig = plt.figure(figsize=(12,8)) ax = fig.add_subplot(111) support = np.linspace(-6, 6, 1000) ax.plot(support, stats.logistic.cdf(support), 'r-', label='Logistic') ax.plot(support, stats.norm.cdf(support), label='Probit') ax.legend(); fig = plt.figure(figsize=(12,8)) ax = fig.add_subplot(111) support = np.linspace(-6, ...
Compare the estimates of the Logit Fair model above to a Probit model. Does the prediction table look better? Much difference in marginal effects?

Generalized Linear Model Example
print(sm.datasets.star98.SOURCE) print(sm.datasets.star98.DESCRLONG) print(sm.datasets.star98.NOTE) dta = sm.datasets.star98.load_pandas().data print(dta.columns) print(dta[['NABOVE', 'NBELOW', 'LOWINC', 'PERASIAN', 'PERBLACK', 'PERHISP', 'PERMINTE']].head(10)) print(dta[['AVYRSEXP', 'AVSALK', 'PERSPENK', 'PTRATIO', 'P...
Aside: Binomial distribution

Toss a six-sided die 5 times; what's the probability of exactly 2 fours?
stats.binom(5, 1./6).pmf(2)

from scipy.special import comb
comb(5, 2) * (1/6.)**2 * (5/6.)**3

from statsmodels.formula.api import glm
glm_mod = glm(formula, dta, family=sm.families.Binomial()).fit()
print(glm_mod.summary())
Generalized Linear Model Regression Results ================================================================================ Dep. Variable: ['NABOVE', 'NBELOW'] No. Observations: 303 Model: GLM Df Residuals: ...
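The binomial pmf and the explicit combinatorial formula above should agree. A stdlib-only check, using `math.comb` in place of `scipy.special.comb`:

```python
from math import comb

# P(exactly 2 fours in 5 die tosses) = C(5,2) * (1/6)^2 * (5/6)^3
p = comb(5, 2) * (1/6)**2 * (5/6)**3
print(round(p, 6))  # 0.160751
```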
The number of trials
glm_mod.model.data.orig_endog.sum(1)
glm_mod.fittedvalues * glm_mod.model.data.orig_endog.sum(1)
First differences: we hold all explanatory variables constant at their means and manipulate the percentage of low income households to assess its impact on the response variables:
exog = glm_mod.model.data.orig_exog  # get the dataframe

means25 = exog.mean()
print(means25)
means25['LOWINC'] = exog['LOWINC'].quantile(.25)
print(means25)

means75 = exog.mean()
means75['LOWINC'] = exog['LOWINC'].quantile(.75)
print(means75)
Intercept 1.000000 LOWINC 55.460075 PERASIAN 5.896335 PERBLACK 5.636808 PERHISP 34.398080 PCTCHRT 1.175909 PCTYRRND 11.611905 PERMINTE 14...
Again, `predict` expects a `DataFrame` since `patsy` is used to select columns.
resp25 = glm_mod.predict(pd.DataFrame(means25).T)
resp75 = glm_mod.predict(pd.DataFrame(means75).T)
diff = resp75 - resp25
The interquartile first difference for the percentage of low income households in a school district is:
print("%2.4f%%" % (diff[0]*100)) nobs = glm_mod.nobs y = glm_mod.model.endog yhat = glm_mod.mu from statsmodels.graphics.api import abline_plot fig = plt.figure(figsize=(12,8)) ax = fig.add_subplot(111, ylabel='Observed Values', xlabel='Fitted Values') ax.scatter(yhat, y) y_vs_yhat = sm.OLS(y, sm.add_constant(yhat, pre...
Plot fitted values vs Pearson residuals

Pearson residuals are defined to be $$\frac{y - \mu}{\sqrt{\text{var}(\mu)}}$$ where var is typically determined by the family. E.g., binomial variance is $np(1 - p)$.
fig = plt.figure(figsize=(12, 8))
ax = fig.add_subplot(111, title='Residual Dependence Plot',
                     xlabel='Fitted Values', ylabel='Pearson Residuals')
ax.scatter(yhat, stats.zscore(glm_mod.resid_pearson))
ax.axis('tight')
ax.plot([0.0, 1.0], [0.0, 0.0], 'k-');
Histogram of standardized deviance residuals with Kernel Density Estimate overlaid

The definition of the deviance residuals depends on the family. For the Binomial distribution this is $$r_{dev} = \text{sign}\left(Y-\mu\right)\sqrt{2n\left(Y\log\frac{Y}{\mu}+(1-Y)\log\frac{1-Y}{1-\mu}\right)}$$ They can be used to detect ill-fitting ...
resid = glm_mod.resid_deviance resid_std = stats.zscore(resid) kde_resid = sm.nonparametric.KDEUnivariate(resid_std) kde_resid.fit() fig = plt.figure(figsize=(12,8)) ax = fig.add_subplot(111, title="Standardized Deviance Residuals") ax.hist(resid_std, bins=25, density=True); ax.plot(kde_resid.support, kde_resid.density...
QQ-plot of deviance residuals
fig = plt.figure(figsize=(12, 8))
ax = fig.add_subplot(111)
fig = sm.graphics.qqplot(resid, line='r', ax=ax)
Import Libraries
import pandas as pd import matplotlib.pyplot as plt import seaborn as sns from sklearn.model_selection import train_test_split from sklearn.preprocessing import StandardScaler from keras.initializers import glorot_uniform import keras from keras.models import Sequential from keras.layers import Dense from sklearn.metri...
MIT
Model/3-NeuarlNetwork7-Copy1.ipynb
skawns0724/KOSA-Big-Data_Vision
Background

_Credit default_ can be defined as the failure to repay a debt, including interest or principal, on a loan or security by the due date. This can cause losses for lenders, so preventive measures are a must, and early detection of potential default can be one of them. This case study can be categorized as ...
df = pd.read_csv('credit_cards_dataset.csv')
df.head()
The description of each column/variable can be seen below:
- ID: ID of each client
- LIMIT_BAL: Amount of given credit in NT dollars (includes individual and family/supplementary credit)
- SEX: Gender (1=male, 2=female)
- EDUCATION: (1=graduate school, 2=university, 3=high school, 4=others, 5=unknown, 6=unknown)
- MARRIAGE:...
df.describe()
Next, we want to see the correlation between all of the features and the label in the dataset using the Pearson correlation, which is built from the sample covariance below. $$\text{Covariance } (S_{xy}) =\frac{\sum(x_{i}-\bar{x})(y_{i}-\bar{y})}{n-1}$$ The plot below is the correlation between all features (predictor variables) and the label.
# Using Pearson Correlation
plt.figure(figsize=(14, 14))
cor = df.iloc[:, 1:].corr()
x = cor[['default.payment.next.month']]
sns.heatmap(x, annot=True, cmap=plt.cm.Reds)
plt.show()
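Pandas' `corr()` above defaults to the Pearson correlation; the formula can be checked by hand against `np.corrcoef` on toy data:

```python
import numpy as np

x = np.array([1., 2., 3., 4., 5.])
y = np.array([2., 1., 4., 3., 6.])

# Pearson r = covariance / (std_x * std_y), written out explicitly
dx, dy = x - x.mean(), y - y.mean()
r = (dx * dy).sum() / np.sqrt((dx**2).sum() * (dy**2).sum())
assert np.isclose(r, np.corrcoef(x, y)[0, 1])
```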
As we can see in the plot above, the repayment status columns (PAY_0–PAY_6) have a higher correlation with the label (default.payment.next.month) than the other features.

Data Preparation

Data Cleansing

Before implementing the ANN to predict the "credit default customer", we have to check the data, wh...
df.isnull().sum()
After checking the summary of missing values in the dataset, the result shows that the data has no missing values, so it is ready for the next stage.

Splitting Data into Training and Test Data

In this stage, the clean data will be split into 2 categories: train data and test data. The train data will be utiliz...
X = df.iloc[:, 1:24].values
y = df.iloc[:, 24].values

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)
Data Standardization

After splitting, the numeric data will be standardized by scaling it to have a mean of 0 and a variance of 1. $$X_{stand} = \frac{X - \mu}{\sigma}$$
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
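`StandardScaler` implements exactly the $(X - \mu)/\sigma$ formula above. A NumPy-only sketch showing that the standardized columns end up with mean 0 and variance 1:

```python
import numpy as np

X = np.array([[1., 10.], [2., 20.], [3., 30.], [4., 40.]])
# standardize each column: subtract its mean, divide by its std
X_std = (X - X.mean(axis=0)) / X.std(axis=0)

assert np.allclose(X_std.mean(axis=0), 0.0)
assert np.allclose(X_std.std(axis=0), 1.0)
```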
Modelling

In the modelling phase, we create an ANN with 6 hidden layers (50, 60, 40, 30, 20, and 10 neurons respectively, matching the code below) using the _relu_ activation function, and 1 output layer with a single neuron using the _sigmoid_ activation function. Furthermore, we choose the Adam optimizer to optimize the parameters of the model.
hl = 6 # number of hidden layer nohl = [50, 60, 40, 30, 20, 10] # number of neurons in each hidden layer classifier = Sequential() # Hidden Layer for i in range(hl): if i==0: classifier.add(Dense(units=nohl[i], input_dim=X_train.shape[1], kernel_initializer='uniform', activation='...
Below is the summary of the model architecture and its parameter counts.
classifier.summary()
Model: "sequential_1" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= dense_1 (Dense) (None, 50) 1200 __________________________________...
After creating the model architecture, we train the model for a given number of epochs and batch size.
classifier.fit(X_train, y_train, epochs=50, batch_size=512)
Epoch 1/50 21000/21000 [==============================] - 1s 50us/step - loss: 0.5517 - accuracy: 0.7773 Epoch 2/50 21000/21000 [==============================] - 0s 11us/step - loss: 0.4703 - accuracy: 0.7833 Epoch 3/50 21000/21000 [==============================] - 0s 10us/step - loss: 0.4506 - accuracy: 0.8121 0s - ...
Evaluation

In this classification problem, we evaluate the model by looking at how many of its predictions are correct, using a threshold of 50%. The results can be summarized in a confusion matrix. Here is the confusion matrix from the ANN model's predictions on the test set:
y_pred = classifier.predict(X_test) y_pred = (y_pred > 0.5) conf_matr = confusion_matrix(y_test, y_pred) TP = conf_matr[0,0]; FP = conf_matr[0,1]; TN = conf_matr[1,1]; FN = conf_matr[1,0] print('Confusion Matrix : ') print(conf_matr) print() print('True Positive (TP) : ',TP) print('False Positive (FP) : ',FP) print('...
Confusion Matrix :
[[6695  345]
 [1332  628]]

True Positive (TP) :  6695
False Positive (FP) :  345
True Negative (TN) :  628
False Negative (FN) :  1332
in which:
- True Positive (TP) means the model predicts the customer will pay the credit and the prediction is correct.
- False Positive (FP) means the model predicts the customer will pay the credit and the prediction is incorrect.
- True Negative (TN) means the model predicts the customer will not pay the credit and the pred...
acc = (TP+TN)/(TP+TN+FP+FN)
print('By this metric, only ' + str(round(acc*100)) + '% of them are correctly predicted.')
Precision

This metric concerns how many positive predictions are actually correct, calculated by the formula below. $$Precision = \frac{TP}{TP+FP}$$
pre = TP/(TP+FP)
print('From the classification results, the precision shows that ' + str(round(pre*100)) + '% of predicted payers actually pay the credit.')
Hurricane Ike Maximum Water Levels

Compute the maximum water level during Hurricane Ike on a 9 million node triangular mesh storm surge model. Plot the results with Datashader.
import xarray as xr import numpy as np import pandas as pd import fsspec import warnings warnings.filterwarnings("ignore") from dask.distributed import Client, progress, performance_report from dask_kubernetes import KubeCluster
BSD-3-Clause
hurricane_ike_water_levels.ipynb
ocefpaf/hurricane-ike-water-levels
Start a dask cluster to crunch the data
cluster = KubeCluster()
cluster.scale(15);
cluster

import dask; print(dask.__version__)
For demos, I often click in this cell and do "Cell => Run All Above", then wait until the workers appear. This can take several minutes (up to 6!) for instances to spin up and Docker containers to be downloaded. Then I shut down the notebook and run again from the beginning, and the workers will fire up quickly because t...
#cluster.adapt(maximum=15);
client = Client(cluster)
Read the data using the cloud-friendly zarr data format
ds = xr.open_zarr(fsspec.get_mapper('s3://pangeo-data-uswest2/esip/adcirc/ike',
                                    anon=False, requester_pays=True))
#ds = xr.open_zarr(fsspec.get_mapper('gcs://pangeo-data/rsignell/adcirc_test01'))
ds['zeta']
How many GB of sea surface height data do we have?
ds['zeta'].nbytes/1.e9
Take the maximum over the time dimension and pull the result back to the client in case we would like to use it later. This is the computationally intensive step.
%%time
with performance_report(filename="dask-zarr-report.html"):
    max_var = ds['zeta'].max(dim='time').compute()
Visualize data on mesh using HoloViz.org tools
import numpy as np import holoviews as hv import geoviews as gv import cartopy.crs as ccrs import hvplot.xarray import holoviews.operation.datashader as dshade dshade.datashade.precompute = True hv.extension('bokeh') v = np.vstack((ds['x'], ds['y'], max_var)).T verts = pd.DataFrame(v, colum...
_____no_output_____
BSD-3-Clause
hurricane_ike_water_levels.ipynb
ocefpaf/hurricane-ike-water-levels
Extract a time series at a specified lon, lat location. Because Xarray does not yet understand that `x` and `y` are coordinate variables on this triangular mesh, we create our own simple function to find the closest point. If we had many points to look up, we could use a fancier tree algorithm instead.
# find the indices of the points in (x,y) closest to the points in (xi,yi) def nearxy(x,y,xi,yi): ind = np.ones(len(xi),dtype=int) for i in range(len(xi)): dist = np.sqrt((x-xi[i])**2+(y-yi[i])**2) ind[i] = dist.argmin() return ind #just offshore of Galveston lat = 29.2329856 lon = -95.15350...
_____no_output_____
BSD-3-Clause
hurricane_ike_water_levels.ipynb
ocefpaf/hurricane-ike-water-levels
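The "fancier tree algorithm" hinted at above can be sketched with SciPy's `cKDTree` (assuming SciPy is available; `nearxy_tree` is an illustrative name, not part of the notebook):

```python
import numpy as np
from scipy.spatial import cKDTree

def nearxy_tree(x, y, xi, yi):
    # Build a KD-tree once, then answer each query in roughly O(log n),
    # instead of the O(n) brute-force distance scan per query point.
    tree = cKDTree(np.column_stack((x, y)))
    _, ind = tree.query(np.column_stack((xi, yi)))
    return ind

# tiny sanity check on synthetic coordinates
x = np.array([0.0, 1.0, 2.0])
y = np.array([0.0, 1.0, 2.0])
ind = nearxy_tree(x, y, np.array([0.9]), np.array([1.1]))
```

For a handful of stations the brute-force `nearxy` is fine; the tree pays off when querying thousands of locations against the 9-million-node mesh.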
Hello, PyTorch![img](https://pytorch.org/tutorials/_static/pytorch-logo-dark.svg)__This notebook__ will teach you to use the low-level PyTorch core. If you're running this notebook outside the course environment, you can install it [here](https://pytorch.org).__PyTorch feels__ different from TensorFlow/Theano on almost ...
import sys, os if 'google.colab' in sys.modules and not os.path.exists('.setup_complete'): !wget -q https://raw.githubusercontent.com/yandexdataschool/deep_vision_and_graphics/fall21/week01-pytorch_intro/notmnist.py !touch .setup_complete import numpy as np import torch print(torch.__version__) # numpy world x...
X: tensor([[ 0., 1., 2., 3.], [ 4., 5., 6., 7.], [ 8., 9., 10., 11.], [12., 13., 14., 15.]]) X.shape: torch.Size([4, 4]) add 5: tensor([[ 5., 6., 7., 8.], [ 9., 10., 11., 12.], [13., 14., 15., 16.], [17., 18., 19., 20.]]) X*X^T: tensor([[ 14., 38., 62., 86.]...
MIT
week01-pytorch_intro/seminar_pytorch.ipynb
mikita-zhuryk/deep_vision_and_graphics
NumPy and PyTorchAs you can notice, PyTorch allows you to hack stuff much the same way you did with NumPy. No graph declaration, no placeholders, no sessions. This means that you can _see the numeric value of any tensor at any moment of time_. Debugging such code can be done by printing tensors or using any debug...
import matplotlib.pyplot as plt %matplotlib inline t = torch.linspace(-10, 10, steps=10000) # compute x(t) and y(t) as defined above x = t - 1.5 * torch.cos(15 * t) y = t - 1.5 * torch.sin(0.5 * t) plt.plot(x.numpy(), y.numpy())
_____no_output_____
MIT
week01-pytorch_intro/seminar_pytorch.ipynb
mikita-zhuryk/deep_vision_and_graphics
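One interop detail worth knowing: `torch.from_numpy` and `.numpy()` share memory for CPU tensors, so in-place edits on one side are visible on the other. A minimal sketch:

```python
import numpy as np
import torch

# torch.from_numpy shares memory with the source array,
# so in-place edits on the tensor show up in the array.
a = np.arange(4, dtype=np.float32)
t = torch.from_numpy(a)
t += 1                                        # in-place on the tensor...
shared = bool(np.allclose(a, [1, 2, 3, 4]))   # ...visible in the array

# .numpy() goes the other way (also zero-copy for CPU tensors)
b = t.numpy()
```

This is convenient for plotting, but it also means a stray in-place op can silently mutate your NumPy data; use `torch.tensor(a)` when you want a copy.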
If you're done early, try adjusting the formula and seeing how it affects the function. --- Automatic gradientsAny self-respecting DL framework must do your backprop for you. Torch handles this with the `autograd` module.The general pipeline looks like this:* When creating a tensor, you mark it as `requires_grad`: ...
from sklearn.datasets import load_boston boston = load_boston() plt.scatter(boston.data[:, -1], boston.target) NLR_DEGREE = 3 LR = 1e-2 w = torch.rand(NLR_DEGREE + 1, requires_grad=True) x = torch.tensor(boston.data[:, -1] / 10, dtype=torch.float32) x = x.unsqueeze(-1) ** torch.arange(NLR_DEGREE + 1) y = torch.tensor...
_____no_output_____
MIT
week01-pytorch_intro/seminar_pytorch.ipynb
mikita-zhuryk/deep_vision_and_graphics
The gradients are now stored in `.grad` of those variables that require them.
print("dL/dw = \n", w.grad) # print("dL/db = \n", b.grad)
dL/dw = tensor([ -43.1492, -43.7833, -60.1212, -103.8557])
MIT
week01-pytorch_intro/seminar_pytorch.ipynb
mikita-zhuryk/deep_vision_and_graphics
If you compute gradients from multiple losses, they add up at the variables, so it's useful to __zero the gradients__ between iterations.
from IPython.display import clear_output for i in range(int(1e5)): y_pred = x @ w.T # + b loss = torch.mean((y_pred - y)**2) loss.backward() w.data -= LR * w.grad.data # b.data -= LR * b.grad.data # zero gradients w.grad.data.zero_() # b.grad.data.zero_() # the rest of code is j...
_____no_output_____
MIT
week01-pytorch_intro/seminar_pytorch.ipynb
mikita-zhuryk/deep_vision_and_graphics
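The accumulation behaviour is easy to see on a toy variable: two `backward()` calls double the stored gradient until it is zeroed (a minimal sketch, separate from the regression above):

```python
import torch

# Gradients from repeated backward() calls accumulate in .grad,
# which is why training loops zero them between iterations.
w = torch.tensor([2.0], requires_grad=True)

loss = (w ** 2).sum()    # dL/dw = 2w = 4
loss.backward()
first = w.grad.item()    # 4.0

loss = (w ** 2).sum()
loss.backward()
second = w.grad.item()   # 8.0 -- the two gradients added up

w.grad.zero_()           # reset before the next step
```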
__Bonus quest__: try implementing some nonlinear regression. You can try quadratic features or some trigonometry, or a simple neural network. The only difference is that now you have more variables and a more complicated `y_pred`. High-level PyTorch So far we've been dealing with low-level PyTorch API. Whi...
from notmnist import load_notmnist X_train, y_train, X_test, y_test = load_notmnist(letters='AB') X_train, X_test = X_train.reshape([-1, 784]), X_test.reshape([-1, 784]) print("Train size = %i, test_size = %i" % (len(X_train), len(X_test))) for i in [0, 1]: plt.subplot(1, 2, i + 1) plt.imshow(X_train[i].reshap...
_____no_output_____
MIT
week01-pytorch_intro/seminar_pytorch.ipynb
mikita-zhuryk/deep_vision_and_graphics
Let's start with layers. The main abstraction here is __`torch.nn.Module`__:
from torch import nn import torch.nn.functional as F print(nn.Module.__doc__)
Base class for all neural network modules. Your models should also subclass this class. Modules can also contain other Modules, allowing to nest them in a tree structure. You can assign the submodules as regular attributes:: import torch.nn as nn import torch.nn.functional as F c...
MIT
week01-pytorch_intro/seminar_pytorch.ipynb
mikita-zhuryk/deep_vision_and_graphics
There's a vast library of popular layers and architectures already built for ya'.This is a binary classification problem, so we'll train __Logistic Regression__.$$P(y_i | X_i) = \sigma(W \cdot X_i + b) ={ 1 \over {1+e^{- [W \cdot X_i + b]}} }$$
# create a network that stacks layers on top of each other model = nn.Sequential() # add first "dense" layer with 784 input units and 1 output unit. model.add_module('l1', nn.Linear(784, 1)) # add a sigmoid activation to turn the output into a probability # note: layer names must be unique model.add_module('l2', nn....
_____no_output_____
MIT
week01-pytorch_intro/seminar_pytorch.ipynb
mikita-zhuryk/deep_vision_and_graphics
Let's now define a loss function for our model.The natural choice is binary crossentropy (aka logloss, the negative log-likelihood):$$ L = {1 \over N} \sum_{X_i, y_i} - [ y_i \cdot \log P(y_i=1 | X_i) + (1-y_i) \cdot \log (1-P(y_i=1 | X_i)) ]$$
crossentropy_lambda = lambda input, target: -(target * torch.log(input) + (1 - target) * torch.log(1 - input)) crossentropy = crossentropy_lambda(y_predicted, y) loss = crossentropy.mean() assert tuple(crossentropy.size()) == ( 3,), "Crossentropy must be a vector with element per sample" assert tuple(loss.size())...
_____no_output_____
MIT
week01-pytorch_intro/seminar_pytorch.ipynb
mikita-zhuryk/deep_vision_and_graphics
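As a sanity check, the hand-rolled crossentropy should agree with the built-in `F.binary_cross_entropy` on the same inputs (a small sketch with made-up probabilities and targets):

```python
import torch
import torch.nn.functional as F

# hand-rolled binary crossentropy vs. the built-in, on the same inputs
p = torch.tensor([0.9, 0.2, 0.6])   # predicted probabilities
y = torch.tensor([1.0, 0.0, 1.0])   # targets

manual = (-(y * torch.log(p) + (1 - y) * torch.log(1 - p))).mean()
builtin = F.binary_cross_entropy(p, y)   # default reduction='mean'
close = torch.allclose(manual, builtin)
```

In practice prefer `F.binary_cross_entropy_with_logits` on raw scores: it fuses the sigmoid and the log for numerical stability.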
__Note:__ you can also find many such functions in `torch.nn.functional`, just type __`F.`__. __Torch optimizers__When we trained Linear Regression above, we had to manually `.zero_()` gradients on both our variables. Imagine that code for a 50-layer network.Again, to keep it from getting dirty, there's `torch.optim` m...
opt = torch.optim.RMSprop(model.parameters(), lr=0.01) # here's how it's used: opt.zero_grad() # clear gradients loss.backward() # add new gradients opt.step() # change weights # dispose of old variables to avoid bugs later del x, y, y_predicted, loss, y_pred
_____no_output_____
MIT
week01-pytorch_intro/seminar_pytorch.ipynb
mikita-zhuryk/deep_vision_and_graphics
Putting it all together
# create network again just in case model = nn.Sequential() model.add_module('first', nn.Linear(784, 1)) model.add_module('second', nn.Sigmoid()) opt = torch.optim.Adam(model.parameters(), lr=1e-3) history = [] for i in range(100): # sample 256 random images ix = np.random.randint(0, len(X_train), 256) x...
step #0 | mean loss = 0.682 step #10 | mean loss = 0.371 step #20 | mean loss = 0.233 step #30 | mean loss = 0.168 step #40 | mean loss = 0.143 step #50 | mean loss = 0.126 step #60 | mean loss = 0.121 step #70 | mean loss = 0.114 step #80 | mean loss = 0.108 step #90 | mean loss = 0.109
MIT
week01-pytorch_intro/seminar_pytorch.ipynb
mikita-zhuryk/deep_vision_and_graphics
__Debugging tips:__* Make sure your model predicts probabilities correctly. Just print them and see what's inside.* Don't forget the _minus_ sign in the loss function! It's a mistake 99% of people make at some point.* Make sure you zero-out gradients after each step. Seriously:)* In general, PyTorch's error messages are qui...
# use your model to predict classes (0 or 1) for all test samples with torch.no_grad(): predicted_y_test = (model(torch.from_numpy(X_test)) > 0.5).squeeze().numpy() assert isinstance(predicted_y_test, np.ndarray), "please return np array, not %s" % type( predicted_y_test) assert predicted_y_test.shape == y_tes...
Test accuracy: 0.96051
MIT
week01-pytorch_intro/seminar_pytorch.ipynb
mikita-zhuryk/deep_vision_and_graphics
More about PyTorch:* Using torch on GPU and multi-GPU - [link](http://pytorch.org/docs/master/notes/cuda.html)* More tutorials on PyTorch - [link](http://pytorch.org/tutorials/beginner/deep_learning_60min_blitz.html)* PyTorch examples - a repo that implements many cool DL models in PyTorch - [link](https://github.com/...
theta = torch.linspace(- np.pi, np.pi, steps=1000) # compute rho(theta) as per formula above rho = (1 + 0.9 * torch.cos(8 * theta)) * (1 + 0.1 * torch.cos(24 * theta)) * (0.9 + 0.05 * torch.cos(200 * theta)) * (1 + torch.sin(theta)) # Now convert polar (rho, theta) pairs into cartesian (x,y) to plot them. x = rho * t...
_____no_output_____
MIT
week01-pytorch_intro/seminar_pytorch.ipynb
mikita-zhuryk/deep_vision_and_graphics
Task II: The Game of Life (3 points)Now it's time for you to make something more challenging. We'll implement Conway's [Game of Life](http://web.stanford.edu/~cdebs/GameOfLife/) in _pure PyTorch_. While this is still a toy task, implementing game of life this way has one cool benefit: __you'll be able to run it on GPU...
from scipy.signal import correlate2d def np_update(Z): # Count neighbours with convolution filters = np.array([[1, 1, 1], [1, 0, 1], [1, 1, 1]]) N = correlate2d(Z, filters, mode='same') # Apply rules birth = (N == 3) & (Z == 0) survive = ((N == ...
_____no_output_____
MIT
week01-pytorch_intro/seminar_pytorch.ipynb
mikita-zhuryk/deep_vision_and_graphics
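One possible pure-PyTorch counterpart to `np_update`, using `conv2d` for the neighbour count (a sketch under the assumption of a 2-D float grid, not the reference solution):

```python
import torch
import torch.nn.functional as F

def torch_update(Z):
    # Count the 8 neighbours of every cell with a single conv2d call.
    kernel = torch.tensor([[1., 1., 1.],
                           [1., 0., 1.],
                           [1., 1., 1.]]).reshape(1, 1, 3, 3)
    N = F.conv2d(Z.reshape(1, 1, *Z.shape), kernel, padding=1).squeeze()
    # Conway's rules: birth on exactly 3 neighbours,
    # survival on 2 or 3 neighbours.
    birth = (N == 3) & (Z == 0)
    survive = ((N == 2) | (N == 3)) & (Z == 1)
    return (birth | survive).float()

# a "blinker" oscillates between a horizontal and a vertical bar
Z = torch.zeros(5, 5)
Z[2, 1:4] = 1
Z2 = torch_update(Z)
```

Because the whole step is one convolution plus elementwise logic, moving it to GPU is just a matter of putting `Z` and the kernel on `cuda`.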
More fun with Game of Life: [video](https://www.youtube.com/watch?v=C2vgICfQawE) Task III: Going deeper (5 points) Your ultimate task for this week is to build your first neural network [almost] from scratch, in pure PyTorch. This time you will solve the same digit recognition problem, but at a larger scale: * 10 differen...
from notmnist import load_notmnist X_train, y_train, X_test, y_test = load_notmnist(letters='ABCDEFGHIJ') X_train, X_test = X_train.reshape([-1, 784]), X_test.reshape([-1, 784]) %matplotlib inline plt.figure(figsize=[12, 4]) for i in range(20): plt.subplot(2, 10, i+1) plt.imshow(X_train[i].reshape([28, 28])) ...
_____no_output_____
MIT
week01-pytorch_intro/seminar_pytorch.ipynb
mikita-zhuryk/deep_vision_and_graphics
Kili Tutorial: Importing medical data into a frame project In this tutorial, we will show you how to import dicom data into a [Frame Kili project](https://cloud.kili-technology.com/docs/video-interfaces/multi-frames-classification/docsNav). Such projects allow you to annotate volumes of image data. The data we use com...
import os import subprocess import tqdm if 'recipes' in os.getcwd(): os.chdir('..') os.makedirs(os.path.expanduser('~/Downloads'), exist_ok=True)
_____no_output_____
Apache-2.0
recipes/frame_dicom_data.ipynb
ASonay/kili-playground
We will use a small package to help download the file hosted on Google Drive
%%bash pip install gdown gdown https://drive.google.com/uc?id=1q3qswXthFh3xMtAAnePph6vav3N7UtOF -O ~/Downloads/TCGA-LUAD.zip !apt-get install unzip !unzip -o ~/Downloads/TCGA-LUAD.zip -d ~/Downloads/ > /dev/null
_____no_output_____
Apache-2.0
recipes/frame_dicom_data.ipynb
ASonay/kili-playground
Reading data We can then read the dicom files with [pydicom](https://pydicom.github.io/pydicom/stable/).
ASSET_ROOT = os.path.expanduser('~/Downloads/TCGA-LUAD') sorted_files = {} asset_number = 0 for root, dirs, files in os.walk(ASSET_ROOT): if len(files) > 0: file_paths = list(map(lambda path: os.path.join(root, path), files)) sorted_files[f'asset-{asset_number+1}'] = sorted(file_paths, ...
_____no_output_____
Apache-2.0
recipes/frame_dicom_data.ipynb
ASonay/kili-playground
Let's see what is inside the dataset:
!pip install Pillow pydicom from PIL import Image import pydicom def read_dcm_image(path): dicom = pydicom.dcmread(path) image = dicom.pixel_array # Currently, Kili does not support windowing in the application. # This will soon change, but until then we advise you to reduce the range to 256 values. ...
Requirement already satisfied: Pillow in /opt/anaconda3/lib/python3.7/site-packages (8.4.0) Requirement already satisfied: pydicom in /opt/anaconda3/lib/python3.7/site-packages (2.0.0) asset-1 asset-2 asset-3
Apache-2.0
recipes/frame_dicom_data.ipynb
ASonay/kili-playground
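The range reduction mentioned in the comment above can be done with a crude min-max rescale to 8 bits (a sketch; `to_uint8` is an illustrative helper, not part of the notebook, and a real pipeline would apply proper DICOM windowing instead):

```python
import numpy as np

def to_uint8(pixel_array):
    # Rescale an arbitrary-range pixel array into 0..255, one crude way
    # to work around the lack of windowing support mentioned above.
    arr = np.asarray(pixel_array, dtype=np.float64)
    arr = arr - arr.min()
    if arr.max() > 0:
        arr = arr / arr.max()
    return (arr * 255).astype(np.uint8)

# synthetic 2x2 "slice" with a wide intensity range
img = to_uint8(np.array([[0, 500], [1000, 2000]]))
```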
![asset-1](./img/frame_dicom_data_asset-1.png) ![asset-2](./img/frame_dicom_data_asset-2.png) ![asset-3](./img/frame_dicom_data_asset-3.png) Extracting and serving images For each dicom `.dcm` file, let's extract its image content and save it as a `.jpeg` image.
sorted_images = {} for asset_key, files in sorted_files.items(): images = [] for file in tqdm.tqdm(files): print(file) im = read_dcm_image(file) im_file = file.replace('.dcm', '.jpeg') im.save(im_file, format='JPEG') images.append(im_file) sorted_images[asset_key] = i...
100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 201/201 [00:02<00:00, 85.82it/s] 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 227/227 [00:02<00:00, 105.77it/s] 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 329/329 [00:02<00:00, 112.38it/s]
Apache-2.0
recipes/frame_dicom_data.ipynb
ASonay/kili-playground
We now have extracted jpeg images processable by Kili. Creating the project We can now import those assets into a FRAME project! Let's begin by creating a project
## You can also directly create the interface on the application. interface = { "jobRendererWidth": 0.17, "jobs": { "JOB_0": { "mlTask": "OBJECT_DETECTION", "tools": [ "semantic" ], "instruction": "Segment the right class", "required": 1, "isChild": False, "content": { "categories": { ...
/Users/maximeduval/Documents/kili-playground/kili/authentication.py:97: UserWarning: Kili Playground version should match with Kili API version. Please install version: "pip install kili==2.100.0" warnings.warn(message, UserWarning)
Apache-2.0
recipes/frame_dicom_data.ipynb
ASonay/kili-playground
Importing images Finally, let's import the volumes using `appendManyToDataset` (see [link](https://staging.cloud.kili-technology.com/docs/python-graphql-api/python-api/append_many_to_dataset)). The key argument is `json_content_array`, which is a list of lists of strings. Each element is the list of urls or paths point...
subprocess.Popen(f'python -m http.server 8001 --directory {ASSET_ROOT}', shell=True, stdin=None, stdout=None, stderr=None, close_fds=True) ROOT_URL = 'http://localhost:8001/' def files_to_urls(files): return list(map(lambda file: ROOT_URL + file.split('TCGA-LUAD')[1], files)) kili.append_many_to_da...
_____no_output_____
Apache-2.0
recipes/frame_dicom_data.ipynb
ASonay/kili-playground
Or, as mentioned, you can simply provide the paths to your images and call the function as below:
kili.append_many_to_dataset( project_id=project_id, external_id_array=list(map(lambda key: f'local-path-{key}',sorted_images.keys())), json_content_array=list(sorted_images.values()) )
_____no_output_____
Apache-2.0
recipes/frame_dicom_data.ipynb
ASonay/kili-playground
Back to the interface We can see our assets were imported...
ds_size = kili.count_assets(project_id=project_id) print(ds_size) assert ds_size == 6
6
Apache-2.0
recipes/frame_dicom_data.ipynb
ASonay/kili-playground
PEST setupThis notebook reads in the existing MF6 model built using modflow-setup with the script `../scripts/setup_model.py`. This notebook makes extensive use of the `PstFrom` functionality in `pyemu` to set up multipliers on parameters. There are a few custom parameterization steps as well. Observations are also d...
pyemu.__version__
_____no_output_____
CC0-1.0
notebooks_workflow_complete/0.0_PEST_parameterization.ipynb
usgs/neversink_workflow
define locations and other global variables
sim_ws = '../neversink_mf6/' # folder containing the MODFLOW6 files template_ws = '../run_data' # folder to create and write the PEST setup to noptmax0_dir = '../noptmax0_testing/' # folder in which to write noptmax=0 test run version of PST file
_____no_output_____
CC0-1.0
notebooks_workflow_complete/0.0_PEST_parameterization.ipynb
usgs/neversink_workflow
kill the `original` folder (a relic from the mfsetup process)
if os.path.exists(os.path.join(sim_ws,'original')): shutil.rmtree(os.path.join(sim_ws,'original')) run_MF6 = True # option to run MF6 to generate output but not needed if already been run in sim_ws cdir = os.getcwd() # optionally run MF6 to generate model output if run_MF6: os.chdir(sim_ws) os.system('mf6...
_____no_output_____
CC0-1.0
notebooks_workflow_complete/0.0_PEST_parameterization.ipynb
usgs/neversink_workflow
create land surface observations we will need at the end. These will be used as inequality observations (less than) to enforce that heads should not exceed the model top. An option for spatial frequency is set below.
irch_file = f'{sim_ws}/irch.dat' # file with the highest active layer identified id3_file = f'{sim_ws}/idomain_003.dat' # deepest layer idomain - gives the maximum lateral footprint top_file = f'{sim_ws}/top.dat' # the model top top = np.loadtxt(top_file) top[top<-8000] = np.nan plt.imshow(top) plt.colorbar() i...
_____no_output_____
CC0-1.0
notebooks_workflow_complete/0.0_PEST_parameterization.ipynb
usgs/neversink_workflow
make list of indices
with open(os.path.join(sim_ws, 'land_surf_obs-indices.csv'), 'w') as ofp:
    ofp.write('k,i,j,obsname\n')
    # write one row per retained (k, i, j) point
    for pt in keep_points:
        ofp.write('{0},{1},{2},land_surf_obs_{1}_{2}\n'.format(*pt))
_____no_output_____
CC0-1.0
notebooks_workflow_complete/0.0_PEST_parameterization.ipynb
usgs/neversink_workflow
make an observations file
with open(os.path.join(sim_ws, 'land_surf_obs-observations.csv'), 'w') as ofp:
    ofp.write('obsname,obsval\n')
    # the observed value is the model top at each retained point
    for pt in keep_points:
        ofp.write('land_surf_obs_{1}_{2},{3}\n'.format(*pt, top[pt[1], pt[2]]))
_____no_output_____
CC0-1.0
notebooks_workflow_complete/0.0_PEST_parameterization.ipynb
usgs/neversink_workflow
Start setting up the `PstFrom` object to create PEST inputs. Load up the simulation
sim = fp.mf6.MFSimulation.load(sim_ws=sim_ws) m = sim.get_model() # manually create a spatial reference object from the grid.json metadata # this file created by modflow-setup grid_data = json.load(open(os.path.join(sim_ws,'neversink_grid.json'))) sr_model = pyemu.helpers.SpatialReference(delr=grid_data['delr'], ...
2021-03-26 16:08:56.834646 starting: opening PstFrom.log for logging 2021-03-26 16:08:56.835640 starting PstFrom process 2021-03-26 16:08:56.868554 starting: setting up dirs 2021-03-26 16:08:56.869551 starting: removing existing new_d '..\run_data' 2021-03-26 16:08:57.058048 finished: removing existing new_d '..\run_da...
CC0-1.0
notebooks_workflow_complete/0.0_PEST_parameterization.ipynb
usgs/neversink_workflow
we will parameterize:
- pilot points for k, k33, r
- zones for l, k33, r
- constant for R
- sfr conductance by reach
- well pumping
- CHDs

parameterize list-directed well and chd packages
list_tags = {'wel_':[.8,1.2], 'chd_':[.8,1.2]} for tag,bnd in list_tags.items(): lb,ub = bnd filename = os.path.basename(glob.glob(os.path.join(template_ws, '*{}*'.format(tag)))[0]) pf.add_parameters(filenames=filename, par_type = 'grid', upper_bound=ub, lower_bound=lb, par_name_base=ta...
2021-03-26 16:08:58.270602 starting: adding grid type multiplier style parameters for file(s) ['wel_000.dat'] 2021-03-26 16:08:58.271600 starting: loading list ..\run_data\wel_000.dat 2021-03-26 16:08:58.272597 starting: reading list ..\run_data\wel_000.dat 2021-03-26 16:08:58.279579 finished: reading list ..\run_data\...
CC0-1.0
notebooks_workflow_complete/0.0_PEST_parameterization.ipynb
usgs/neversink_workflow
now set up pilot points
k_ub = 152 # ultimate upper bound on K # set up pilot points pp_tags = {'k':[.01,10.,k_ub], 'k33':[.01,10.,k_ub/10]}
_____no_output_____
CC0-1.0
notebooks_workflow_complete/0.0_PEST_parameterization.ipynb
usgs/neversink_workflow