Columns: markdown (stringlengths 0–1.02M) · code (stringlengths 0–832k) · output (stringlengths 0–1.02M) · license (stringlengths 3–36) · path (stringlengths 6–265) · repo_name (stringlengths 6–127)
Frequency — Now compare the frequencies (total number) of car accidents for the state total, the 2020 shut-down period, and 2019, 2018, and 2017 (the 2019, 2018, and 2017 windows covering the same month-day date range as the shut-down period) >
# find total frequencies for state total and in each period ok_freqs = [ok_ttl.shape[0], ok_sd.shape[0], ok_2019.shape[0], ok_2018.shape[0], ok_2017.shape[0] ] # then develop a simple dataframe to compare them ok_freq_data = pd.DataFrame({'Timeframe': ['Tota...
Change Frequency: 4.37%
AAL
modeling_and_analysis/accident_covid_analysis.ipynb
cecann10/covid-impact-car-accidents
Severity — Now compare the severities of car accidents for the state total, the 2020 shut-down period, and 2019, 2018, and 2017 (the same month-day date range as the shut-down period) >
# first find percentages of severity ranks to total accidents for # total and for each period ok_sp_ttl = severity_percent(ok_ttl) ok_sp_sd = severity_percent(ok_sd) ok_sp_2019 = severity_percent(ok_2019) ok_sp_2018 = severity_percent(ok_2018) ok_sp_2017 = severity_percent(ok_2017) # then again create a simple da...
_____no_output_____
AAL
modeling_and_analysis/accident_covid_analysis.ipynb
cecann10/covid-impact-car-accidents
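The helper `severity_percent` is defined elsewhere in the notebook and not shown in this excerpt. A minimal sketch of what it plausibly does, assuming a `Severity` column holding an ordinal rank, might look like:

```python
import pandas as pd

def severity_percent(df):
    # Hypothetical sketch: share of accidents at each severity rank,
    # expressed as a percentage of the frame's total rows.
    counts = df['Severity'].value_counts(normalize=True).sort_index()
    return (counts * 100).round(2)

# toy data for illustration only
toy = pd.DataFrame({'Severity': [2, 2, 3, 4, 2, 3]})
print(severity_percent(toy))
```

The exact column name and rounding are assumptions; the notebook's real helper may differ.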
- New Jersey — First create the periods we'll want to compare for the state >
# full state period nj_ttl = accident_covid[(accident_covid.State == 'NJ')] # shutdown period nj_sd = accident_covid[ (accident_covid.State == 'NJ') & (accident_covid.Shut_Down == 1)] # 2019 nj_2019 = accident_covid[(accident_covid['Date'] > '2019-03-21') & (accident_covid...
_____no_output_____
AAL
modeling_and_analysis/accident_covid_analysis.ipynb
cecann10/covid-impact-car-accidents
Frequency — Now compare the frequencies (total number) of car accidents for the state total, the 2020 shut-down period, and 2019, 2018, and 2017 (the 2019, 2018, and 2017 windows covering the same month-day date range as the shut-down period) >
# find total frequencies for state total and in each period nj_freqs = [nj_ttl.shape[0], nj_sd.shape[0], nj_2019.shape[0], nj_2018.shape[0], nj_2017.shape[0] ] # then develop a simple dataframe to compare them nj_freq_data = pd.DataFrame({'Timeframe': ['Tota...
Change Frequency: 18.74%
AAL
modeling_and_analysis/accident_covid_analysis.ipynb
cecann10/covid-impact-car-accidents
Severity — Now compare the severities of car accidents for the state total, the 2020 shut-down period, and 2019, 2018, and 2017 (the same month-day date range as the shut-down period) >
# first find percentages of severity ranks to total accidents for # total and for each period nj_sp_ttl = severity_percent(nj_ttl) nj_sp_sd = severity_percent(nj_sd) nj_sp_2019 = severity_percent(nj_2019) nj_sp_2018 = severity_percent(nj_2018) nj_sp_2017 = severity_percent(nj_2017) # then again create a simple da...
_____no_output_____
AAL
modeling_and_analysis/accident_covid_analysis.ipynb
cecann10/covid-impact-car-accidents
- Top 20 States Averages — Combining numbers for the top 20 states to see average trends for those states > Frequency
# combine frequency data for the top 20 states top20_freq_data = (ca_freq_data + tx_freq_data + fl_freq_data + sc_freq_data + nc_freq_data + ny_freq_data + pa_freq_data + il_freq_data +...
_____no_output_____
AAL
modeling_and_analysis/accident_covid_analysis.ipynb
cecann10/covid-impact-car-accidents
Now find the change in the 2020 shutdown period compared to the average of the past three years >
# percent change in frequency in 2020 stay-home compared to average past years change = change_freq(top20_freq_data) print("Change Frequency: %.2f%%" % (change))
Change Frequency: 51.77%
AAL
modeling_and_analysis/accident_covid_analysis.ipynb
cecann10/covid-impact-car-accidents
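The helper `change_freq` is also defined outside this excerpt. A plausible sketch, assuming the frequency dataframe carries a `Frequency` column ordered as [Total, 2020 shutdown, 2019, 2018, 2017], is:

```python
import pandas as pd

def change_freq(freq_df):
    # Hypothetical sketch: percent change of the 2020 shutdown count
    # vs. the average of the 2017-2019 counts for the same date range.
    shutdown = freq_df['Frequency'].iloc[1]           # 2020 shutdown period
    past_avg = freq_df['Frequency'].iloc[2:5].mean()  # 2019, 2018, 2017
    return (shutdown - past_avg) / past_avg * 100

demo = pd.DataFrame({'Timeframe': ['Total', 'Shutdown', '2019', '2018', '2017'],
                     'Frequency': [1000, 150, 100, 100, 100]})
print("Change Frequency: %.2f%%" % change_freq(demo))  # 50.00% on this toy data
```

The row order and column names are assumptions inferred from the surrounding cells.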
Severity — For the average severity rank for the top 20 states, we will weight each state by its number of accidents relative to all accidents across the top 20 states
# finding weight for each state to then include in next cell len(accident_covid[accident_covid.State == 'NJ'])/accident_covid.State.value_counts()[:20].sum() # add all state severity dfs together top20_severity_data = (ca_severity_data * 27 + tx_severity_data * 11 + fl_se...
Percent Change in 2020 stay-home period vs. average 2017-2019:
AAL
modeling_and_analysis/accident_covid_analysis.ipynb
cecann10/covid-impact-car-accidents
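The weighting idea above (each state's share of the top-states accident total) can be sketched with toy data; the state names and severity values here are illustrative only:

```python
import pandas as pd

# Hypothetical sketch: derive each state's weight as its share of all
# accidents among the top states, then form a weighted severity average.
accidents = pd.Series(['CA'] * 6 + ['TX'] * 3 + ['NJ'] * 1, name='State')
shares = accidents.value_counts(normalize=True)  # CA 0.6, TX 0.3, NJ 0.1

severity = {'CA': 2.1, 'TX': 2.4, 'NJ': 2.0}     # made-up mean severities
weighted = sum(shares[s] * severity[s] for s in severity)
print(round(weighted, 3))  # 2.18
```

This mirrors the `value_counts()[:20].sum()` ratio computed in the cell above, just on a small synthetic series.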
MAT281 - Classification Models Class objectives* Learn the basic concepts of classification models in Python. Contents* [Classification models](c1)* [Examples with Python](c2) I.- Classification models — Classification models are used to predict categorical values, for example, ...
# libraries import os import numpy as np import pandas as pd from sklearn import datasets from sklearn.model_selection import train_test_split import matplotlib.pyplot as plt import seaborn as sns pd.set_option('display.max_columns', 500) # show more dataframe columns # show matplotlib plots in Jupyt...
_____no_output_____
MIT
labs/08_analisis_supervisado_clasificacion/02_clasificacion.ipynb
gonzalogacitua/MAT281_Portafolio
To see the logistic-regression model graphically, let's fit the model to just two variables: sepal length (cm) and sepal width (cm).
# data from sklearn.linear_model import LogisticRegression X = iris_df[['sepal length (cm)', 'sepal width (cm)']] Y = iris_df['TARGET'] # split dataset X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2, random_state = 2) # print rows of train and test sets print('Splitting data:\n') print...
_____no_output_____
MIT
labs/08_analisis_supervisado_clasificacion/02_clasificacion.ipynb
gonzalogacitua/MAT281_Portafolio
Let's plot our results:
# plot the logistic regression plt.figure(figsize=(12,4)) # dataframe to matrix X = X.values Y = Y.values x_min, x_max = X[:, 0].min() - .5, X[:, 0].max() + .5 y_min, y_max = X[:, 1].min() - .5, X[:, 1].max() + .5 h = .02 # step size in the mesh xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min,...
_____no_output_____
MIT
labs/08_analisis_supervisado_clasificacion/02_clasificacion.ipynb
gonzalogacitua/MAT281_Portafolio
Graphically, we can say the model fits quite well, since the classifications are adequate and the model does not confuse one class with another. On the other hand, there are also numerical values that can help convince us of this: the metrics defined earlier. ...
# metrics from metrics_classification import * from sklearn.metrics import confusion_matrix y_true = list(Y_test) y_pred = list(rlog.predict(X_test)) print('Valores:\n') print('originales: ', y_true) print('predicho: ', y_pred) print('\nMatriz de confusion:\n ') print(confusion_matrix(y_true,y_pred)) # ejemplo...
Valores: originales: [0, 0, 2, 0, 0, 2, 0, 2, 2, 0, 0, 0, 0, 0, 1, 1, 0, 1, 2, 1, 1, 1, 2, 1, 1, 0, 0, 2, 0, 2] predicho: [0, 0, 1, 0, 0, 2, 0, 2, 2, 0, 0, 0, 0, 0, 2, 2, 1, 1, 2, 1, 2, 2, 2, 1, 1, 0, 0, 1, 0, 2] Matriz de confusion: [[13 1 0] [ 0 4 4] [ 0 2 6]] Metricas para los regresores : 'sepal le...
MIT
labs/08_analisis_supervisado_clasificacion/02_clasificacion.ipynb
gonzalogacitua/MAT281_Portafolio
Based on the metrics and the plot, we can conclude that the fit is quite accurate. Another way to convince ourselves that the fit is correct is to analyze the [AUC–ROC curve](https://en.wikipedia.org/wiki/Receiver_operating_characteristic). The AUC-ROC curve is the model-selection metric for ...
from sklearn.metrics import roc_curve from sklearn.metrics import roc_auc_score # graficar curva roc def plot_roc_curve(fpr, tpr): plt.figure(figsize=(9,4)) plt.plot(fpr, tpr, color='orange', label='ROC') plt.plot([0, 1], [0, 1], color='darkblue', linestyle='--') plt.xlabel('False Positive Rate') pl...
_____no_output_____
MIT
labs/08_analisis_supervisado_clasificacion/02_clasificacion.ipynb
gonzalogacitua/MAT281_Portafolio
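Beyond plotting the curve, `roc_auc_score` summarizes it in a single number. A minimal sketch on toy binary data (not the iris setup above):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# synthetic binary problem for illustration only
rng = np.random.RandomState(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + 0.5 * rng.normal(size=200) > 0).astype(int)

clf = LogisticRegression().fit(X, y)
scores = clf.predict_proba(X)[:, 1]   # AUC needs scores/probabilities, not hard labels
auc = roc_auc_score(y, scores)
print(round(auc, 3))
```

Note the design point: feeding hard 0/1 predictions into `roc_auc_score` collapses the curve to a single threshold; probabilities give the full ranking.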
b) Several classification models — There are several classification models we can compare with one another, among which we highlight the following:* [Logistic Regression](https://es.wikipedia.org/wiki/Regresi%C3%B3n_log%C3%ADstica)* [Decision Trees](https://es.wikipedia.org/wiki/%C3%81rbol_de_decis...
from sklearn.datasets import make_moons, make_circles, make_classification from sklearn.preprocessing import StandardScaler from sklearn.linear_model import LogisticRegression from sklearn.svm import SVC from sklearn.tree import DecisionTreeClassifier from sklearn.ensemble import RandomForestClassifier from matplotl...
_____no_output_____
MIT
labs/08_analisis_supervisado_clasificacion/02_clasificacion.ipynb
gonzalogacitua/MAT281_Portafolio
b) Metrics — Since computing the metrics follows the same format, with only the dataset and the model changing, we decide to write a class that automates this process.
from metrics_classification import * class SklearnClassificationModels: def __init__(self,model,name_model): self.model = model self.name_model = name_model @staticmethod def test_train_model(X,y,n_size): X_train, X_test, y_train, y_test = train_test_split(X, y,test_size=n...
_____no_output_____
MIT
labs/08_analisis_supervisado_clasificacion/02_clasificacion.ipynb
gonzalogacitua/MAT281_Portafolio
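The `SklearnClassificationModels` class above is truncated; the same idea, reduced to a minimal sketch (class names and metric choice are my own, not the notebook's), is:

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# One small class fits any sklearn classifier and reports a metric,
# so the comparison loop body never changes between models.
class ModelRunner:
    def __init__(self, model, name):
        self.model, self.name = model, name

    def score(self, X_train, X_test, y_train, y_test):
        self.model.fit(X_train, y_train)
        return self.name, accuracy_score(y_test, self.model.predict(X_test))

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
for runner in [ModelRunner(LogisticRegression(max_iter=500), 'logistic'),
               ModelRunner(DecisionTreeClassifier(random_state=0), 'tree')]:
    name, acc = runner.score(X_tr, X_te, y_tr, y_te)
    print(name, round(acc, 3))
```

The real class in the notebook likely also wraps the train/test split and richer metrics.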
Constant features
import numpy as np import pandas as pd from sklearn.model_selection import train_test_split from sklearn.feature_selection import VarianceThreshold
_____no_output_____
Apache-2.0
Kyoto2006+/1-Constant-Quasi-Constant-Duplicates/1.1-Constant-features.ipynb
theavila/EmployingFS
Read Data
data.head(5)
_____no_output_____
Apache-2.0
Kyoto2006+/1-Constant-Quasi-Constant-Duplicates/1.1-Constant-features.ipynb
theavila/EmployingFS
Train - Test Split
data.convert_dtypes().dtypes # separate dataset into train and test X_train, X_test, y_train, y_test = train_test_split( data.drop(labels=['Label_code'], axis=1), # drop the target data['Label_code'], # just the target test_size=0.2, random_state=0) X_train.shape, X_tes...
_____no_output_____
Apache-2.0
Kyoto2006+/1-Constant-Quasi-Constant-Duplicates/1.1-Constant-features.ipynb
theavila/EmployingFS
Using VarianceThreshold from scikit-learn — The VarianceThreshold transformer from sklearn provides a simple baseline approach to feature selection. It removes all features whose variance doesn't meet a certain threshold. By default, it removes all zero-variance features, i.e., features that have the same value in all samples.
sel = VarianceThreshold(threshold=0.01) sel.fit(X_train) # fit finds the features with zero variance # get_support is a boolean vector that indicates which features are retained # if we sum over get_support, we get the number of features that are not constant # (if necessary, print the result of sel.get_support() to u...
_____no_output_____
Apache-2.0
Kyoto2006+/1-Constant-Quasi-Constant-Duplicates/1.1-Constant-features.ipynb
theavila/EmployingFS
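The mechanics are easiest to see on a tiny matrix with one deliberately constant column (toy data, not the Kyoto set):

```python
import numpy as np
from sklearn.feature_selection import VarianceThreshold

# Toy illustration: column 1 is constant and gets dropped.
X = np.array([[1.0, 7.0, 0.1],
              [2.0, 7.0, 0.9],
              [3.0, 7.0, 0.5]])
sel = VarianceThreshold(threshold=0.0)  # default: drop zero-variance features
X_reduced = sel.fit_transform(X)
print(sel.get_support())   # [ True False  True]
print(X_reduced.shape)     # (3, 2)
```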
We can see that 0 columns / variables are constant; that is, no variable shows a single repeated value across all observations of the training set.
# let's print the constant variable names constant # let's visualise the values of one of the constant variables # as an example X_train['Protocol_code'].unique() # we can do the same for every feature: for col in constant: print(col, X_train[col].unique())
_____no_output_____
Apache-2.0
Kyoto2006+/1-Constant-Quasi-Constant-Duplicates/1.1-Constant-features.ipynb
theavila/EmployingFS
We then use the transform() method of VarianceThreshold to reduce the training and test sets to their non-constant features. Note that VarianceThreshold returns a NumPy array without feature names, so we need to capture the names first and reconstitute the dataframe in a later step.
# capture non-constant feature names feat_names = X_train.columns[sel.get_support()] X_train = sel.transform(X_train) X_test = sel.transform(X_test) X_train.shape, X_test.shape
_____no_output_____
Apache-2.0
Kyoto2006+/1-Constant-Quasi-Constant-Duplicates/1.1-Constant-features.ipynb
theavila/EmployingFS
We now have 23 variables.
# X_train is a NumPy array X_train # reconstitute the dataframe X_train = pd.DataFrame(X_train, columns=feat_names) X_train.head()
_____no_output_____
Apache-2.0
Kyoto2006+/1-Constant-Quasi-Constant-Duplicates/1.1-Constant-features.ipynb
theavila/EmployingFS
In the Kyoto dataset, no features were classified as constant, leaving the original 23 features of the dataset. Standardize Data
from sklearn.preprocessing import StandardScaler scaler = StandardScaler().fit(X_train) X_train = scaler.transform(X_train)
_____no_output_____
Apache-2.0
Kyoto2006+/1-Constant-Quasi-Constant-Duplicates/1.1-Constant-features.ipynb
theavila/EmployingFS
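The cell above shows only the training side; the same fitted scaler must also transform the test set so both live on the train statistics. A small sketch of the pattern:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

X_train = np.array([[1.0], [2.0], [3.0]])
X_test = np.array([[2.0]])

scaler = StandardScaler().fit(X_train)   # mean/std come from train only
X_train_s = scaler.transform(X_train)
X_test_s = scaler.transform(X_test)      # reuse the same statistics on test

print(round(X_train_s.mean(), 6))        # scaled train data is centered at 0
print(round(X_test_s[0, 0], 6))          # 2.0 equals the train mean -> scales to 0
```

Fitting a second scaler on the test set would leak test-set statistics into evaluation.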
Hyperparameter Optimization
import numpy as np import pandas as pd from sklearn.model_selection import GridSearchCV class EstimatorSelectionHelper: def __init__(self, models, params): self.models = models self.params = params self.keys = models.keys() self.grid_searches = {} def fit(self, X, y, *...
_____no_output_____
Apache-2.0
Kyoto2006+/1-Constant-Quasi-Constant-Duplicates/1.1-Constant-features.ipynb
theavila/EmployingFS
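The `EstimatorSelectionHelper` above wraps `GridSearchCV` over several models; the core call it builds on looks like this (a sketch on iris, with an assumed toy parameter grid):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
grid = GridSearchCV(DecisionTreeClassifier(random_state=0),
                    param_grid={'max_depth': [1, 2, 3, 4]},
                    cv=5, scoring='accuracy')
grid.fit(X, y)   # exhaustively cross-validates every grid point
print(grid.best_params_, round(grid.best_score_, 3))
```

The helper class essentially runs one such search per model and collects `cv_results_` into a summary table.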
Classifiers
from sklearn import linear_model from sklearn.naive_bayes import GaussianNB from sklearn.ensemble import RandomForestClassifier from sklearn.neighbors import KNeighborsClassifier from catboost import CatBoostClassifier
_____no_output_____
Apache-2.0
Kyoto2006+/1-Constant-Quasi-Constant-Duplicates/1.1-Constant-features.ipynb
theavila/EmployingFS
Metrics Evaluation
from sklearn.metrics import accuracy_score from sklearn.metrics import confusion_matrix from sklearn.metrics import roc_curve, f1_score from sklearn import metrics from sklearn.model_selection import cross_val_score
_____no_output_____
Apache-2.0
Kyoto2006+/1-Constant-Quasi-Constant-Duplicates/1.1-Constant-features.ipynb
theavila/EmployingFS
Logistic Regression
%%time clf_LR = linear_model.LogisticRegression(n_jobs=-1, random_state=42, C=1).fit(X_train, y_train) pred_y_test = clf_LR.predict(X_test) print('Accuracy:', accuracy_score(y_test, pred_y_test)) f1 = f1_score(y_test, pred_y_test) print('F1 Score:', f1) fpr, tpr, thresholds = roc_curve(y_test, pred_y_test) print('FPR...
Accuracy: 0.4527830397807424 F1 Score: 0.2463502636691646 FPR: 0.5989054991991457 TPR: 0.9503211991434689
Apache-2.0
Kyoto2006+/1-Constant-Quasi-Constant-Duplicates/1.1-Constant-features.ipynb
theavila/EmployingFS
Naive Bayes
%%time clf_NB = GaussianNB(var_smoothing=1e-05).fit(X_train, y_train) pred_y_testNB = clf_NB.predict(X_test) print('Accuracy:', accuracy_score(y_test, pred_y_testNB)) f1 = f1_score(y_test, pred_y_testNB) print('F1 Score:', f1) fpr, tpr, thresholds = roc_curve(y_test, pred_y_testNB) print('FPR:', fpr[1]) print('TPR:',...
Accuracy: 0.9045584619725122 F1 Score: 0.0 FPR: 0.0014682327816337426 TPR: 0.0
Apache-2.0
Kyoto2006+/1-Constant-Quasi-Constant-Duplicates/1.1-Constant-features.ipynb
theavila/EmployingFS
Random Forest
%%time clf_RF = RandomForestClassifier(random_state=0,max_depth=70,n_estimators=100).fit(X_train, y_train) pred_y_testRF = clf_RF.predict(X_test) print('Accuracy:', accuracy_score(y_test, pred_y_testRF)) f1 = f1_score(y_test, pred_y_testRF, average='weighted', zero_division=0) print('F1 Score:', f1) fpr, tpr, thresho...
Accuracy: 0.9058885171899561 F1 Score: 0.8611563563923045 FPR: 1.0 TPR: 1.0
Apache-2.0
Kyoto2006+/1-Constant-Quasi-Constant-Duplicates/1.1-Constant-features.ipynb
theavila/EmployingFS
KNN
%%time clf_KNN = KNeighborsClassifier(algorithm='auto',leaf_size=1,n_neighbors=2,weights='uniform').fit(X_train, y_train) pred_y_testKNN = clf_KNN.predict(X_test) print('accuracy_score:', accuracy_score(y_test, pred_y_testKNN)) f1 = f1_score(y_test, pred_y_testKNN) print('f1:', f1) fpr, tpr, thresholds = roc_curve(y_...
accuracy_score: 0.9058885171899561 f1: 0.0 fpr: 1.0 tpr: 1.0
Apache-2.0
Kyoto2006+/1-Constant-Quasi-Constant-Duplicates/1.1-Constant-features.ipynb
theavila/EmployingFS
CatBoost
%%time clf_CB = CatBoostClassifier(random_state=0,depth=7,iterations=50,learning_rate=0.04).fit(X_train, y_train) pred_y_testCB = clf_CB.predict(X_test) print('Accuracy:', accuracy_score(y_test, pred_y_testCB)) f1 = f1_score(y_test, pred_y_testCB, average='weighted', zero_division=0) print('F1 Score:', f1) fpr, tpr, ...
Accuracy: 0.9058885171899561 F1 Score: 0.8611563563923045 FPR: 1.0 TPR: 1.0
Apache-2.0
Kyoto2006+/1-Constant-Quasi-Constant-Duplicates/1.1-Constant-features.ipynb
theavila/EmployingFS
Model Evaluation
import pandas as pd, numpy as np test_df = pd.read_csv("../Kyoto_Test.csv") test_df.shape # Create feature matrix X and target vector y y_eval = test_df['Label_code'] X_eval = test_df.drop(columns=['Label_code'])
_____no_output_____
Apache-2.0
Kyoto2006+/1-Constant-Quasi-Constant-Duplicates/1.1-Constant-features.ipynb
theavila/EmployingFS
Model Evaluation - Logistic Regression
modelLR = linear_model.LogisticRegression(n_jobs=-1, random_state=42, C=1) modelLR.fit(X_train, y_train) # Predict on the new unseen test data y_evalpredLR = modelLR.predict(X_eval) y_predLR = modelLR.predict(X_test) train_scoreLR = modelLR.score(X_train, y_train) test_scoreLR = modelLR.score(X_test, y_test) print("Tra...
Performance measures for test: -------- Accuracy: 0.4527830397807424 F1 Score: 0.2463502636691646 Precision Score: 0.14151785714285714 Recall Score: 0.9503211991434689 Confusion Matrix: [[ 9015 13461] [ 116 2219]]
Apache-2.0
Kyoto2006+/1-Constant-Quasi-Constant-Duplicates/1.1-Constant-features.ipynb
theavila/EmployingFS
Cross validation - Logistic Regression
from sklearn.model_selection import cross_val_score from sklearn import metrics accuracy = cross_val_score(modelLR, X_eval, y_eval, cv=10, scoring='accuracy') print("Accuracy: %0.5f (+/- %0.5f)" % (accuracy.mean(), accuracy.std() * 2)) f = cross_val_score(modelLR, X_eval, y_eval, cv=10, scoring='f1') print("F1 Score:...
Accuracy: 0.90021 (+/- 0.00142) F1 Score: 0.00064 (+/- 0.00385) Precision: 0.00769 (+/- 0.04615) Recall: 0.00033 (+/- 0.00201)
Apache-2.0
Kyoto2006+/1-Constant-Quasi-Constant-Duplicates/1.1-Constant-features.ipynb
theavila/EmployingFS
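The cross-validation pattern used throughout these cells, shown self-contained on a bundled dataset rather than the Kyoto CSV:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)
# cv=10 yields one held-out accuracy per fold; mean +/- 2*std summarizes them
acc = cross_val_score(LogisticRegression(max_iter=5000), X, y,
                      cv=10, scoring='accuracy')
print("Accuracy: %0.5f (+/- %0.5f)" % (acc.mean(), acc.std() * 2))
```

Swapping `scoring` to `'f1'`, `'precision'`, or `'recall'` reproduces the other lines printed above.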
Model Evaluation - Naive Bayes
modelNB = GaussianNB(var_smoothing=1e-05) modelNB.fit(X_train, y_train) # Predict on the new unseen test data y_evalpredNB = modelNB.predict(X_eval) y_predNB = modelNB.predict(X_test) train_scoreNB = modelNB.score(X_train, y_train) test_scoreNB = modelNB.score(X_test, y_test) print("Training accuracy is ", train_scoreN...
Performance measures for test: -------- Accuracy: 0.9045584619725122 F1 Score: 0.0 Precision Score: 0.0 Recall Score: 0.0 Confusion Matrix: [[22443 33] [ 2335 0]]
Apache-2.0
Kyoto2006+/1-Constant-Quasi-Constant-Duplicates/1.1-Constant-features.ipynb
theavila/EmployingFS
Cross validation - Naive Bayes
from sklearn.model_selection import cross_val_score from sklearn import metrics accuracy = cross_val_score(modelNB, X_eval, y_eval, cv=10, scoring='accuracy') print("Accuracy: %0.5f (+/- %0.5f)" % (accuracy.mean(), accuracy.std() * 2)) f = cross_val_score(modelNB, X_eval, y_eval, cv=10, scoring='f1') print("F1 Score:...
Accuracy: 0.51851 (+/- 0.28039) F1 Score: 0.25979 (+/- 0.02808) Precision: 0.21404 (+/- 0.38891) Recall: 0.86306 (+/- 0.46302)
Apache-2.0
Kyoto2006+/1-Constant-Quasi-Constant-Duplicates/1.1-Constant-features.ipynb
theavila/EmployingFS
Model Evaluation - Random Forest
modelRF = RandomForestClassifier(random_state=0,max_depth=70,n_estimators=100) modelRF.fit(X_train, y_train) # Predict on the new unseen test data y_evalpredRF = modelRF.predict(X_eval) y_predRF = modelRF.predict(X_test) train_scoreRF = modelRF.score(X_train, y_train) test_scoreRF = modelRF.score(X_test, y_test) print(...
Performance measures for test: -------- Accuracy: 0.9058885171899561 F1 Score: 0.8611563563923045 Precision Score: 0.9147454883866614 Recall Score: 0.9058885171899561 Confusion Matrix: [[22476 0] [ 2335 0]]
Apache-2.0
Kyoto2006+/1-Constant-Quasi-Constant-Duplicates/1.1-Constant-features.ipynb
theavila/EmployingFS
Cross validation - Random Forest
from sklearn.model_selection import cross_val_score from sklearn import metrics accuracy = cross_val_score(modelRF, X_eval, y_eval, cv=10, scoring='accuracy') print("Accuracy: %0.5f (+/- %0.5f)" % (accuracy.mean(), accuracy.std() * 2)) f = cross_val_score(modelRF, X_eval, y_eval, cv=10, scoring='f1') print("F1 Score:...
Accuracy: 0.99929 (+/- 0.00060) F1 Score: 0.99631 (+/- 0.00312) Precision: 0.99950 (+/- 0.00154) Recall: 0.99315 (+/- 0.00676)
Apache-2.0
Kyoto2006+/1-Constant-Quasi-Constant-Duplicates/1.1-Constant-features.ipynb
theavila/EmployingFS
Model Evaluation - KNN
modelKNN = KNeighborsClassifier(algorithm='auto',leaf_size=1,n_neighbors=2,weights='uniform') modelKNN.fit(X_train, y_train) # Predict on the new unseen test data y_evalpredKNN = modelKNN.predict(X_eval) y_predKNN = modelKNN.predict(X_test) train_scoreKNN = modelKNN.score(X_train, y_train) test_scoreKNN = modelKNN.scor...
_____no_output_____
Apache-2.0
Kyoto2006+/1-Constant-Quasi-Constant-Duplicates/1.1-Constant-features.ipynb
theavila/EmployingFS
Cross validation - KNN
from sklearn.model_selection import cross_val_score from sklearn import metrics accuracy = cross_val_score(modelKNN, X_eval, y_eval, cv=10, scoring='accuracy') print("Accuracy: %0.5f (+/- %0.5f)" % (accuracy.mean(), accuracy.std() * 2)) f = cross_val_score(modelKNN, X_eval, y_eval, cv=10, scoring='f1') print("F1 Scor...
_____no_output_____
Apache-2.0
Kyoto2006+/1-Constant-Quasi-Constant-Duplicates/1.1-Constant-features.ipynb
theavila/EmployingFS
Model Evaluation - CatBoost
modelCB = CatBoostClassifier(random_state=0,depth=7,iterations=50,learning_rate=0.04) modelCB.fit(X_train, y_train) # Predict on the new unseen test data y_evalpredCB = modelCB.predict(X_eval) y_predCB = modelCB.predict(X_test) train_scoreCB = modelCB.score(X_train, y_train) test_scoreCB = modelCB.score(X_test, y_test)...
_____no_output_____
Apache-2.0
Kyoto2006+/1-Constant-Quasi-Constant-Duplicates/1.1-Constant-features.ipynb
theavila/EmployingFS
Cross validation - CatBoost
from sklearn.model_selection import cross_val_score from sklearn import metrics accuracy = cross_val_score(modelCB, X_eval, y_eval, cv=10, scoring='accuracy') f = cross_val_score(modelCB, X_eval, y_eval, cv=10, scoring='f1') precision = cross_val_score(modelCB, X_eval, y_eval, cv=10, scoring='precision') recall = cros...
_____no_output_____
Apache-2.0
Kyoto2006+/1-Constant-Quasi-Constant-Duplicates/1.1-Constant-features.ipynb
theavila/EmployingFS
Problem 1
import numpy as np A = np.array([[1,2,3],[4,5,6]]) B = np.array([[1,2],[3,4],[5,6]]) C = np.array([[1,2,3],[4,5,6],[7,8,9]]) D = np.array([[1,2],[3,4]]) a = np.dot(A,B) b = np.add(D,D) c = 2*C print("A. AB") print(a) print("\nB. D+D") print(b) print("\nC. 2*C") print(c)
A. AB [[22 28] [49 64]] B. D+D [[2 4] [6 8]] C. 2*C [[ 2 4 6] [ 8 10 12] [14 16 18]]
Apache-2.0
Practical_Lab_Exam_1_.ipynb
Nickamaes/Linear-Algebra_58109
Problem 2
import numpy as np X = np.array([5,3,-1]) print(X) print('\ntype of array', type(X)) print('size of array : ', X.size) print('shape of array: ', X.shape) print('dimension of array:', X.ndim)
[ 5 3 -1] type of array <class 'numpy.ndarray'> size of array : 3 shape of array: (3,) dimension of array: 1
Apache-2.0
Practical_Lab_Exam_1_.ipynb
Nickamaes/Linear-Algebra_58109
Supervised Learning Decision Tree Classification Model Fitting In this notebook, I will be applying the following techniques to predict Titanic survival:1. Decision Tree Classification (with default settings)2. Decision Tree Classification with the following optimal hyperparameters identified by using 5-fold Cross Val...
import pandas as pd import matplotlib.pyplot as plt %matplotlib inline import numpy as np
_____no_output_____
MIT
model-fitting.ipynb
talsiddiqui/Titanic
Now import the train and test data sets. Take a quick glance at the data and get summary statistics for our variables.
train = pd.read_csv("data/new_train.csv", index_col=0) train.head(5) train.describe() test = pd.read_csv("data/new_test.csv", index_col=0) test.head(5) test.describe()
_____no_output_____
MIT
model-fitting.ipynb
talsiddiqui/Titanic
Feature columns from our data:
feature_cols = ['Pclass','LastName','Title','Sex','Age','SibSp','Parch','Fare','Embarked','TicketNumber','OtherName']
_____no_output_____
MIT
model-fitting.ipynb
talsiddiqui/Titanic
Decision Tree Classification (with default settings) Using the new_test and new_train CSV files, predict the Titanic survival with default model settings.
from sklearn import tree model = tree.DecisionTreeClassifier(random_state=42) # Prepare data for model fitting X = train.loc[:, feature_cols] y = train.Survived # Fit a model model.fit(X, y) X_test = test.loc[:, feature_cols] Survived = model.predict(X_test) test["Survived"] = Survived test.drop(['Pclass','LastName','...
_____no_output_____
MIT
model-fitting.ipynb
talsiddiqui/Titanic
Result of the 1st submission is not bad: ![](img/submission_01.png) But I am curious whether it can be improved with better hyperparameters. Decision Tree Classification with optimal hyperparameters identified using 5-fold Cross Validation Let's start by re-importing the train and test data sets so we have a clean...
train = pd.read_csv("data/new_train.csv", index_col=0) test = pd.read_csv("data/new_test.csv", index_col=0) X = train.loc[:, feature_cols] y = train.Survived
_____no_output_____
MIT
model-fitting.ipynb
talsiddiqui/Titanic
max_depth The first parameter we will optimize is `max_depth`. Let's test 40 possible values from 1 to 40 for `max_depth` to identify the ideal value.
from sklearn.model_selection import train_test_split, cross_val_score max_depth = np.arange(1,41,1) scores = [] # split the data set into 80% training data and 20% test data Xtrain, Xtest, ytrain, ytest = train_test_split(X,y,test_size=0.2) for i in max_depth: model = tree.DecisionTreeClassifier(max_depth=i, ran...
Value of max_depth with the best accuracy: 3 Performance on test split: 0.8603351955307262
MIT
model-fitting.ipynb
talsiddiqui/Titanic
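The sweep pattern used here (one cross-validated score per candidate `max_depth`, keep the argmax) can be sketched self-contained on iris instead of the Titanic features:

```python
import numpy as np
from sklearn import tree
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)
depths = np.arange(1, 11)
# mean 5-fold CV accuracy for each candidate depth
scores = [cross_val_score(tree.DecisionTreeClassifier(max_depth=d, random_state=42),
                          X, y, cv=5).mean() for d in depths]
best_depth = int(depths[int(np.argmax(scores))])
print("best max_depth:", best_depth)
```

One caveat worth noting: tuning each hyperparameter independently, as the following cells do, can miss interactions that a joint grid search would catch.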
min_samples_split The second parameter we will optimize is `min_samples_split`. Let's again test 40 possible values but this time, from 2 to 160 for `min_samples_split` to identify the ideal value.
min_samples_split = np.arange(2,160,4) scores = [] # split the data set into 80% training data and 20% test data Xtrain, Xtest, ytrain, ytest = train_test_split(X,y,test_size=0.2) for i in min_samples_split: model = tree.DecisionTreeClassifier(min_samples_split=i, random_state=42) avg_score = np.mean(cross_va...
Value of min_samples_split with the best accuracy: 86 Performance on test split: 0.7988826815642458
MIT
model-fitting.ipynb
talsiddiqui/Titanic
min_samples_leaf The third parameter we will optimize is `min_samples_leaf`. Similar to `max_depth`, let's test 40 possible values from 1 to 40 for `min_samples_leaf` to identify the ideal value.
min_samples_leaf = np.arange(1,41,1) scores = [] # split the data set into 80% training data and 20% test data Xtrain, Xtest, ytrain, ytest = train_test_split(X,y,test_size=0.2) for i in min_samples_leaf: model = tree.DecisionTreeClassifier(min_samples_leaf=i, random_state=42) avg_score = np.mean(cross_val_sc...
Value of min_samples_leaf with the best accuracy: 10 Performance on test split: 0.8324022346368715
MIT
model-fitting.ipynb
talsiddiqui/Titanic
max_features The final parameter we will optimize is `max_features`
max_features = np.arange(1,len(feature_cols),1) scores = [] # split the data set into 80% training data and 20% test data Xtrain, Xtest, ytrain, ytest = train_test_split(X,y,test_size=0.2) for i in max_features: model = tree.DecisionTreeClassifier(max_features=i, random_state=42) avg_score = np.mean(cross_val...
Value of max_features with the best accuracy: 10 Performance on test split: 0.7821229050279329
MIT
model-fitting.ipynb
talsiddiqui/Titanic
Let's take our _best_ optimized hyperparameters and create the _best_ model. Then, fit the model and make predictions.
best_model = tree.DecisionTreeClassifier(max_depth = best_max_depth, max_features = best_max_features, min_samples_split = best_min_samples_split, min_samples_leaf = best_min_samples_leaf, ...
_____no_output_____
MIT
model-fitting.ipynb
talsiddiqui/Titanic
Prepare the predictions for submission.
test.drop(['Pclass','LastName','Title','Sex','Age','SibSp','Parch','Fare','Embarked','TicketNumber','OtherName'], axis=1, inplace=True) test.to_csv("data/submission.csv", sep=',')
_____no_output_____
MIT
model-fitting.ipynb
talsiddiqui/Titanic
The results are an improvement: ![](img/submission_02.png) We can observe a 3.35% improvement in accuracy using optimized hyperparameters. Random Forest A combination of multiple decision tree classifiers creates a very powerful classifier -- Random Forest. Let's predict using it now!
from sklearn.ensemble import RandomForestClassifier train = pd.read_csv("data/new_train.csv", index_col=0) test = pd.read_csv("data/new_test.csv", index_col=0) X = train.loc[:, feature_cols] y = train.Survived
_____no_output_____
MIT
model-fitting.ipynb
talsiddiqui/Titanic
Let's do a multi-dimensional search for ideal hyperparameters.
# hyperparameters n_estimators = np.arange(2,100,4) max_depth = np.arange(1,41,1) max_features = np.arange(1,len(feature_cols),1) # split the data set into 80% training data and 20% test data Xtrain, Xtest, ytrain, ytest = train_test_split(X,y,test_size=0.2) # accuracy train_accuracy = np.zeros((len(n_estimators),len...
_____no_output_____
MIT
model-fitting.ipynb
talsiddiqui/Titanic
#hide !pip install -Uqq fastbook import fastbook fastbook.setup_book() from fastbook import * from fastai.vision.all import * path = Path('gdrive/MyDrive/Colab Notebooks') Path.BASE_PATH = path a = (path/'A').ls().sorted() b = (path/'B').ls().sorted() len(b) img = Image.open(a[100]) img img_t = tensor(img) df = pd.Data...
_____no_output_____
MIT
mdl_german_char_classifier.ipynb
jacKlinc/german_char_recogniser
The cell above shows how the image is simply made of black and white pixels; the darker the pixel, the higher the number in the cell. Pixel Similarity — Find the mean shape of A and compare each new image to that.
# get first 64 for testing a_tensor = [tensor(Image.open(x)) for x in a[:64]] b_tensor = [tensor(Image.open(x)) for x in b[:64]]
_____no_output_____
MIT
mdl_german_char_classifier.ipynb
jacKlinc/german_char_recogniser
To make this into one big matrix a neural network can use we need to stack all the tensors on top of each other.
stacked_a = torch.stack(a_tensor) stacked_b = torch.stack(b_tensor)
_____no_output_____
MIT
mdl_german_char_classifier.ipynb
jacKlinc/german_char_recogniser
The data must be converted from a 0-255 scale to 0-1 for the neural network. The tensors are integers so they must be converted to floats too.
stacked_a = stacked_a.float()/255 stacked_b = stacked_b.float()/255
_____no_output_____
MIT
mdl_german_char_classifier.ipynb
jacKlinc/german_char_recogniser
Find the mean of each matrix.
mean_a = stacked_a.mean(0) mean_b = stacked_b.mean(0)
_____no_output_____
MIT
mdl_german_char_classifier.ipynb
jacKlinc/german_char_recogniser
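The stack-then-mean pattern above, restated in plain NumPy (a sketch with fake 2×2 "images", not the fastai tensors):

```python
import numpy as np

# Three fake 2x2 images, values already scaled to 0-1.
imgs = [np.array([[0, 255], [0, 255]], dtype=float) / 255 for _ in range(3)]
stacked = np.stack(imgs)      # new leading axis: shape (3, 2, 2)
mean_img = stacked.mean(0)    # average over the image axis -> shape (2, 2)
print(stacked.shape, mean_img.shape)
```

Averaging over axis 0 collapses the batch dimension, leaving one "ideal" image of the same height and width.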
Getting the distance from the ideal (mean) character is going to determine how it is classified. There are two methods we can use:1. L1 norm - absolute mean value. This subtracts the mean from each value in the tensor and makes it positive.1. L2 norm - Root Mean Squared Error (RMSE). This subtracts the mean in the tenso...
a0 = stacked_a[0] show_image(a0) dist_a_abs = (a0 - mean_a).abs().mean() dist_a_rmse = ((a0 - mean_a)**2).mean().sqrt() dist_a_abs, dist_a_rmse dist_b_abs = (a0 - mean_b).abs().mean() dist_b_rmse = ((a0 - mean_b)**2).mean().sqrt() dist_b_abs, dist_b_rmse
_____no_output_____
MIT
mdl_german_char_classifier.ipynb
jacKlinc/german_char_recogniser
Using either of the above approaches, the average distance between the "a" example and the ideal "a" is less than its distance to the ideal "b". Either measure can be used to classify. Defining a function for each might be the most convenient way to proceed.
def abs_dist(ex, mean): return (ex - mean).abs().mean((-1, -2))
def rmse_dist(ex, mean): return ((ex - mean)**2).mean().sqrt()
abs_dist(a0, mean_b), rmse_dist(a0, mean_b)
Now the error between an example and the mean can be found easily. Defining a way to compare the error to other characters is the next step.The code below checks if the absolute mean error between an "a" example and the mean of "a" is less than that of a "b" example.
abs_dist(a0, mean_a) < abs_dist(a0, mean_b)
def is_a(ex_a): return (abs_dist(ex_a, mean_a) < abs_dist(ex_a, mean_b)).float()
is_a(a0)
Create Validation SetGather validation data to measure accuracy of classification.
valid_stacked_a = torch.stack([tensor(Image.open(x)) for x in a[65:128]]).float()/255
valid_stacked_b = torch.stack([tensor(Image.open(x)) for x in b[65:128]]).float()/255
valid_stacked_a.shape, valid_stacked_b.shape
accuracy_...
Next we need to find the distance of each value in the validation matrix from the ideal "a". This could be achieved by looping over each value, but that would be inefficient. Luckily, broadcasting exists: when PyTorch operates on tensors of different ranks, it expands the smaller one (```mean_a```) to the size...
valid_a_dist = abs_dist(valid_stacked_a, mean_a)
valid_a_dist, valid_a_dist.shape
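Broadcasting can be illustrated with toy tensors (invented here purely for demonstration, not the character data): subtracting a rank-2 tensor from a rank-3 stack expands the smaller tensor across the leading axis automatically.

```python
import torch

# a rank-3 "stack" of two 2x2 images and a rank-2 "mean" image
stack = torch.tensor([[[1., 2.], [3., 4.]],
                      [[5., 6.], [7., 8.]]])
mean = torch.tensor([[1., 2.], [3., 4.]])

# `mean` is broadcast to shape (2, 2, 2) before subtracting, so the
# distance is computed for every image in one vectorised operation
dist = (stack - mean).abs().mean((-1, -2))
print(dist)  # tensor([0., 4.]): the first image matches the mean exactly
```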
Stochastic Gradient Descent (SGD)Algorithm used to optimise weights and parameters so that the loss function is minimised. Gradient descent wants to find the lowest point on the graph so that the losses are also low.**7 Steps:**1. Init weights1. Predict character1. Calculate loss1. Calculate gradient1. Make step1. Rep...
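The seven steps can be sketched as a minimal loop on a toy quadratic loss (an illustrative sketch only; `f`, `lr`, and the starting value are arbitrary choices, not part of the original notebook):

```python
import torch

# toy loss function standing in for a real model's loss
def f(x): return x**2

xt = torch.tensor(3., requires_grad=True)  # 1. init weights
lr = 0.1                                   # learning rate

for _ in range(50):
    loss = f(xt)        # 2-3. predict and calculate the loss
    loss.backward()     # 4. calculate the gradient
    with torch.no_grad():
        xt -= lr * xt.grad  # 5. make a step against the gradient
        xt.grad.zero_()     # reset the gradient for the next iteration

print(xt.item())  # close to 0, the minimum of x**2
```

Repeating (step 6) drives `xt` towards the minimum; stopping (step 7) happens here after a fixed number of iterations.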
xt = tensor(3.).requires_grad_()
The ```requires_grad_()``` function marks the variable so that when it is used in future it knows that the gradient needs to be calculated.
def f(x): return x**2
yt = f(xt)
yt
Now calculate the gradients.
yt.backward()
The ```.backward()``` function applies the backpropagation algorithm to each layer of the network.
xt.grad
The gradients only tell us the slope of our weights and don't really provide anything actionable to use. This is where steps are used. StepsThis step of the process uses the gradients to attempt to lower the loss function by "jumping" down the curve. The size of the jump is called the learning rate ```lr```, the bigge...
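The effect of the jump size can be seen on the same kind of toy quadratic loss (a sketch for illustration; the helper `final_x` and its values are invented, not part of the original notebook):

```python
import torch

def final_x(lr, steps=20, start=3.0):
    # repeatedly step down the gradient of x**2 with the given learning rate
    x = torch.tensor(start, requires_grad=True)
    for _ in range(steps):
        loss = x**2
        loss.backward()
        with torch.no_grad():
            x -= lr * x.grad
            x.grad.zero_()
    return x.item()

print(final_x(0.1))  # small lr: converges towards the minimum at 0
print(final_x(1.1))  # too-large lr: every jump overshoots and x diverges
```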
def sigmoid(x): return 1 / (1 + torch.exp(-x))
plot_function(torch.sigmoid, title='Sigmoid', min=-4, max=4)
The sigmoid squashes any activation into the range 0-1: values below the midpoint (0.5) map towards 0 and values above it towards 1. This can now be used in the loss function.
def char_loss(preds, targets):
    preds = preds.sigmoid()
    # mean distance from the correct side: 1 - pred where the target is 1, pred where it is 0
    return torch.where(targets == 1, 1 - preds, preds).mean()
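A quick self-contained sanity check of a `torch.where`-based loss of this shape (this version conditions on the targets, mirroring the analogous fastai `mnist_loss`; the example values are invented):

```python
import torch

def where_loss(preds, targets):
    # distance from the correct side: 1 - pred where the target is 1, pred where it is 0
    preds = preds.sigmoid()
    return torch.where(targets == 1, 1 - preds, preds).mean()

# confident, correct predictions give a small loss...
good = where_loss(torch.tensor([4.0, -4.0]), torch.tensor([1, 0]))
# ...while confident, wrong predictions give a large one
bad = where_loss(torch.tensor([-4.0, 4.0]), torch.tensor([1, 0]))
print(good.item(), bad.item())
```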
Mini-BatchesThe loss function has been defined, so we're ready to run gradient descent on the data. Well, maybe not: there are a few more things to iron out. The issue with running gradient descent on all our data is that it would take quite a while; instead we could run it on each item, which would be much faster. The issue wit...
col = range(30)
# bs = batch size
dl = DataLoader(col, bs=5, shuffle=True)
list(dl)
Implementation
path = Path('gdrive/MyDrive/Colab Notebooks/characters')
Path.BASE_PATH = path
fns = get_image_files(path)
fns
failed = verify_images(fns)
failed
Delete failed images
failed.map(Path.unlink);
Data Curation
characters = DataBlock(
    blocks=(ImageBlock, CategoryBlock),
    get_items=get_image_files,
    splitter=RandomSplitter(valid_pct=0.2, seed=42),
    get_y=parent_label)
This defines how the data will be retrieved and organised. The ```splitter``` divides the data into the training and validation sets.
dls = characters.dataloaders(path)
dls.valid.show_batch(max_n=4, nrows=1)
Model Training
characters = characters.new(
    # item_tfms=RandomResizedCrop(224, min_scale=0.5),
    batch_tfms=aug_transforms())
dls = characters.dataloaders(path)
learn = cnn_learner(dls, resnet18, metrics=error_rate)
learn.fine_tune(5)
interp = ClassificationInterpretation.from_learner(learn)
interp.plot_confusion_matrix()
inter...
Model InferenceExport model to app.
learn.export()
path = Path()
path.ls(file_exts='.pkl')
%mv export.pkl gdrive/MyDrive/Colab\ Notebooks/characters
%ls gdrive/MyDrive/Colab\ Notebooks/characters
A/ B/ export.pkl
Managing assignment files Distributing assignments to students and collecting them can be a logistical nightmare. If you are running nbgrader on a server, some of this pain can be relieved by relying on nbgrader's built-in functionality for releasing and collecting assignments on the instructor's side, and fetching an...
.. contents:: Table of Contents
   :depth: 2
BSD-3-Clause-Clear
nbgrader/docs/source/user_guide/managing_assignment_files.ipynb
dechristo/nbgrader
Setting up the exchange After an assignment has been created using `nbgrader assign`, the instructor must actually release that assignment to students. If the class is being taught on a single filesystem, then the instructor may use `nbgrader release` to copy the assignment files to a shared location on the filesystem...
%%file nbgrader_config.py
c = get_config()
c.Exchange.course_id = "example_course"
c.Exchange.root = "/tmp/exchange"
Writing nbgrader_config.py
In the config file, we've specified the "exchange" directory to be `/tmp/exchange`. This directory must exist before running `nbgrader`, and it *must* be readable and writable by all users, so we'll first create it and configure the appropriate permissions:
%%bash
# remove existing directory, so we can start fresh for demo purposes
rm -rf /tmp/exchange
# create the exchange directory, with write permissions for everyone
mkdir /tmp/exchange
chmod ugo+rw /tmp/exchange
Releasing assignments
.. seealso::

    :doc:`creating_and_grading_assignments`
        Details on generating assignments

    :doc:`/command_line_tools/nbgrader-release`
        Command line options for ``nbgrader release``

    :doc:`/command_line_tools/nbgrader-list`
        Command line options for ``nbgrader list``

    :doc:`philosoph...
From the formgrader Using the formgrader extension, you may release assignments by clicking on the "release" button:![](images/manage_assignments5.png)**Note** that for the "release" button to become available, the `course_id` option must be set in `nbgrader_config.py`.Once completed, you will see a pop-up window with...
%%bash
nbgrader release "ps1"
[ReleaseApp | INFO] Source: /Users/jhamrick/project/tools/nbgrader/nbgrader/docs/source/user_guide/release/./ps1
[ReleaseApp | INFO] Destination: /tmp/exchange/example_course/outbound/ps1
[ReleaseApp | INFO] Released as: example_course ps1
Finally, you can verify that the assignment has been appropriately released by running the `nbgrader list` command:
%%bash
nbgrader list
[ListApp | INFO] Released assignments:
[ListApp | INFO] example_course ps1
Note that there should only ever be *one* instructor who runs the `nbgrader release` and `nbgrader collect` commands (and there should probably only be one instructor -- the same instructor -- who runs `nbgrader assign`, `nbgrader autograde` and the formgrader as well). However this does not mean that only one instruct...
.. _fetching-assignments:
Fetching assignments
.. seealso::

    :doc:`/command_line_tools/nbgrader-fetch`
        Command line options for ``nbgrader fetch``

    :doc:`/command_line_tools/nbgrader-list`
        Command line options for ``nbgrader list``

    :doc:`/configuration/config_options`
        Details on ``nbgrader_config.py``
From the student's perspective, they can list what assignments have been released, and then fetch a copy of the assignment to work on. First, we'll create a temporary directory to represent the student's home directory:
%%bash
# remove the fake student home directory if it exists, for demo purposes
rm -rf /tmp/student_home
# create the fake student home directory and switch to it
mkdir /tmp/student_home
If you are not using the default exchange directory (as is the case here), you will additionally need to provide your students with a configuration file that sets the appropriate directory for them:
%%file /tmp/student_home/nbgrader_config.py
c = get_config()
c.Exchange.root = '/tmp/exchange'
c.Exchange.course_id = "example_course"
Writing /tmp/student_home/nbgrader_config.py
From the notebook dashboard
.. warning::

    The "Assignment List" extension is not fully compatible with multiple courses on the same server. Please see :ref:`multiple-classes` for details.

Alternatively, students can fetch assignments using the assignment list notebook server extension. You must have installed the extension by following the i...
![](images/assignment_list_released.png) The image above shows that there has been one assignment released ("ps1") for the class "example_course". To get this assignment, students can click the "Fetch" button (analogous to running `nbgrader fetch ps1 --course example_course`). **Note: this assumes nbgrader is always run...
%%bash
export HOME=/tmp/student_home && cd $HOME
nbgrader list
[ListApp | INFO] Released assignments:
[ListApp | INFO] example_course ps1
They can then fetch an assignment for that class using `nbgrader fetch` and passing the name of the class and the name of the assignment:
%%bash
export HOME=/tmp/student_home && cd $HOME
nbgrader fetch "ps1"
[FetchApp | INFO] Source: /tmp/exchange/example_course/outbound/ps1
[FetchApp | INFO] Destination: /private/tmp/student_home/ps1
[FetchApp | INFO] Fetched as: example_course ps1
Note that running `nbgrader fetch` copies the assignment files from the exchange directory to the local directory, and therefore can be used from any directory:
%%bash
ls -l "/tmp/student_home/ps1"
total 40
-rw-r--r-- 1 jhamrick wheel 5733 Apr 22 15:29 jupyter.png
-rw-r--r-- 1 jhamrick wheel 8126 Apr 22 15:29 problem1.ipynb
-rw-r--r-- 1 jhamrick wheel 2318 Apr 22 15:29 problem2.ipynb
Additionally, the `nbgrader fetch` (as well as `nbgrader submit`) command does not rely on having access to the nbgrader database -- the database is only used by instructors. Submitting assignments
.. seealso::

    :doc:`/command_line_tools/nbgrader-submit`
        Command line options for ``nbgrader submit``

    :doc:`/command_line_tools/nbgrader-list`
        Command line options for ``nbgrader list``

    :doc:`/configuration/config_options`
        Details on ``nbgrader_config.py``
From the notebook dashboard
.. warning::

    The "Assignment List" extension is not fully compatible with multiple courses on the same server. Please see :ref:`multiple-classes` for details.

Alternatively, students can submit assignments using the assignment list notebook server extension. You must have installed the extension by following the ...
After students have worked on the assignment for a while, but before submitting, they can validate that their notebooks pass the tests by clicking the "Validate" button (analogous to running `nbgrader validate`). If any tests fail, they will see a warning:![](images/assignment_list_validate_failed.png) If there are no ...
.. note::

    If the notebook has been released with hidden tests removed from the source version (see :ref:`autograder-tests-cell-hidden-tests`) then this validation is only done against the tests the students can see in the release version.
Once students have validated all the notebooks, they can click the "Submit" button to submit the assignment (analogous to running `nbgrader submit ps1 --course example_course`). Afterwards, it will show up in the list of submitted assignments (and also still in the list of downloaded assignments):![](images/assignment_...
%%bash
cat /tmp/student_home/nbgrader_config.py
c = get_config()
c.Exchange.root = '/tmp/exchange'
c.Exchange.course_id = "example_course"
After working on an assignment, the student can submit their version for grading using `nbgrader submit` and passing the name of the assignment and the name of the class:
%%bash
export HOME=/tmp/student_home && cd $HOME
nbgrader submit "ps1"
[SubmitApp | INFO] Source: /private/tmp/student_home/ps1
[SubmitApp | INFO] Destination: /tmp/exchange/example_course/inbound/jhamrick+ps1+2018-04-22 14:29:40.397476 UTC
[SubmitApp | INFO] Submitted as: example_course ps1 2018-04-22 14:29:40.397476 UTC
Note that "the name of the assignment" really corresponds to "the name of a folder". It just happens that, in our current directory, there is a folder called "ps1":
%%bash
export HOME=/tmp/student_home && cd $HOME
ls -l "/tmp/student_home"
total 8
drwxr-xr-x 3 jhamrick wheel  96 Apr 22 15:29 Library
-rw-r--r-- 1 jhamrick wheel  91 Apr 22 15:29 nbgrader_config.py
drwxr-xr-x 5 jhamrick wheel 160 Apr 22 15:29 ps1