Multi-label prediction with Planet Amazon dataset
%reload_ext autoreload
%autoreload 2
%matplotlib inline
from fastai import *
from fastai.vision import *
Apache-2.0
nbs/dl1/lesson3-planet_20181210.ipynb
cedrickchee/fastai-course-v3
Getting the data

The planet dataset isn't available on the [fastai dataset page](https://course.fast.ai/datasets) due to copyright restrictions. You can download it from Kaggle, however. Let's see how to do this using the [Kaggle API](https://github.com/Kaggle/kaggle-api), as it's going to be pretty useful to you if ...
! pip install kaggle --upgrade
Collecting kaggle
  Downloading https://files.pythonhosted.org/packages/9e/94/5370052b9cbc63a927bda08c4f7473a35d3bb27cc071baa1a83b7f783352/kaggle-1.5.1.1.tar.gz (53kB)
Collecting urllib3<1.23.0,>=1.15 (from kaggle)
  Downloading ht...
Then you need to upload your credentials from Kaggle to your instance. Log in to Kaggle and click on your profile picture on the top left corner, then 'My account'. Scroll down until you find a button named 'Create New API Token' and click on it. This will trigger the download of a file named 'kaggle.json'. Upload this f...
! mkdir -p ~/.kaggle/
! mv kaggle.json ~/.kaggle/
You're all set to download the data from [planet competition](https://www.kaggle.com/c/planet-understanding-the-amazon-from-space). You **first need to go to its main page and accept its rules**, and run the two cells below (uncomment the shell commands to download and unzip the data). If you get a `403 forbidden` erro...
path = Config.data_path()/'planet'
path.mkdir(parents=True, exist_ok=True)
path
! kaggle --version
! kaggle competitions download -c planet-understanding-the-amazon-from-space -f train-jpg.tar.7z -p {path}
! kaggle competitions download -c planet-understanding-the-amazon-from-space -f train_v2.csv -p {path}
! unzip -q ...
To extract the content of this file, we'll need 7zip, so uncomment the following line if you need to install it (or run `sudo apt install p7zip` in your terminal).
! conda install -y -c haasad eidl7zip
Solving environment: done

## Package Plan ##

  environment location: /home/cedric/anaconda3

  added / updated specs:
    - eidl7zip

The following packages will be downloaded:

    package                    |            build
    ---------------------------|-----------------
    eidl7zip-1.0.0             | ...
And now we can unpack the data (uncomment to run - this might take a few minutes to complete).
! 7za -bd -y -so x {path}/train-jpg.tar.7z | tar xf - -C {path}
! ls {path}/train-jpg | head -n10
train_0.jpg train_1.jpg train_10.jpg train_100.jpg train_1000.jpg train_10000.jpg train_10001.jpg train_10002.jpg train_10003.jpg train_10004.jpg ls: write error: Broken pipe
Multiclassification

Contrary to the pets dataset studied in the last lesson, here each picture can have multiple labels. If we take a look at the csv file containing the labels ('train_v2.csv' here), we see that each 'image_name' is associated with several tags separated by spaces.
df = pd.read_csv(path/'train_v2.csv')
df.head()
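Each row of `train_v2.csv` pairs one image with a space-separated tag string, so turning it into per-image label lists is a one-line `str.split`. A small self-contained sketch, with hypothetical rows mimicking the file's layout:

```python
import pandas as pd

# Hypothetical rows in the same shape as train_v2.csv
df = pd.DataFrame({
    'image_name': ['train_0', 'train_1'],
    'tags': ['haze primary', 'agriculture clear primary water'],
})

# Split each space-separated tag string into a list of labels
df['tag_list'] = df['tags'].str.split()

# Count how many labels each image carries
df['n_tags'] = df['tag_list'].apply(len)
print(df[['image_name', 'n_tags']])
```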
To put this in a `DataBunch` while using the [data block API](https://docs.fast.ai/data_block.html), we need to use `ImageMultiDataset` (and not `ImageClassificationDataset`). This will make sure the model created has the proper loss function to deal with the multiple labels.
# This is a set of transformations that works well for satellite images
tfms = get_transforms(flip_vert=True, max_lighting=0.1, max_zoom=1.05, max_warp=0.)
We use parentheses around the data block pipeline below so that we can write a multiline statement without needing to add '\\'.
np.random.seed(42)
src = (ImageItemList.from_csv(path, 'train_v2.csv', folder='train-jpg', suffix='.jpg')
       .random_split_by_pct(0.2)
       .label_from_df(sep=' '))
data = (src.transform(tfms, size=128)
        .databunch().normalize(imagenet_stats))
`show_batch` still works, and shows us the different labels separated by `;`.
data.show_batch(rows=3, figsize=(12,9))
Initial Model

To create a `Learner` we use the same function as in lesson 1. Our base architecture is resnet50, but the metrics are a little bit different: we use `accuracy_thresh` instead of `accuracy`. In lesson 1, we determined the prediction for a given class by picking the final activation that was the biggest, ...
arch = models.resnet50
acc_02 = partial(accuracy_thresh, thresh=0.2)
f_score = partial(fbeta, thresh=0.2)
learn = create_cnn(data, arch, metrics=[acc_02, f_score])
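The idea behind thresholded accuracy can be sketched in plain numpy (this is an illustration of the logic, not fastai's actual `accuracy_thresh` implementation): every sigmoid activation above the threshold counts as a predicted label, and we score the fraction of (sample, class) cells that match the 0/1 targets.

```python
import numpy as np

def accuracy_thresh_np(probs, targets, thresh=0.2):
    """Fraction of (sample, class) pairs where the thresholded
    prediction matches the 0/1 target -- a numpy sketch of the idea,
    not fastai's code."""
    preds = (probs > thresh).astype(int)
    return (preds == targets).mean()

probs = np.array([[0.9, 0.1, 0.3],
                  [0.05, 0.6, 0.25]])
targets = np.array([[1, 0, 1],
                    [0, 1, 0]])
# 5 of the 6 thresholded cells agree with the targets
print(accuracy_thresh_np(probs, targets, thresh=0.2))
```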
We use the LR Finder to pick a good learning rate.
learn.lr_find()
learn.recorder.plot()
Then we can fit the head of our network.
lr = 0.01
learn.fit_one_cycle(5, slice(lr))
learn.save('stage-1-rn50')
...And fine-tune the whole model:
learn.unfreeze()
learn.lr_find()
learn.recorder.plot()
learn.fit_one_cycle(5, slice(1e-5, lr/5))
learn.save('stage-2-rn50')
learn.load('stage-2-rn50')
Use Full Size Images

We used an image size of 128px in the initial model, simply because we wanted to try it out very quickly. Now, let's use the full size images.
data = (src.transform(tfms, size=256)
        .databunch(bs=32).normalize(imagenet_stats))
learn.data = data
data.train_ds[0][0].shape
learn.freeze()
Notice that we are using **transfer learning**. Instead of training from scratch, we simply start from the model we trained on smaller images.
learn.lr_find()
learn.recorder.plot()
LR Finder is complete, type {learner_name}.recorder.plot() to see the graph.
**Training Stage 1 - Freeze**
lr = 1e-3/2
learn.fit_one_cycle(5, slice(lr))
learn.save('stage-1-256-rn50')
learn.recorder.plot_losses()
learn.recorder.plot_lr()
**Training Stage 2 - Unfreeze**
learn.unfreeze()
learn.lr_find()
learn.recorder.plot()
learn.fit_one_cycle(5, slice(1e-5, lr/5))
learn.recorder.plot_losses()
learn.save('stage-2-256-rn50')
You won't really know how you're doing until you submit to Kaggle, since the leaderboard isn't using the same subset as we have for training. But as a guide, 50th place (out of 938 teams) on the private leaderboard was a score of `0.930`.

fin

(We'll look at this section later - please don't ask about it just yet! :) )
# ! kaggle competitions download -c planet-understanding-the-amazon-from-space -f test-jpg.tar.7z -p {path}
# ! 7za -bd -y -so x {path}/test-jpg.tar.7z | tar xf - -C {path}
learn.load('stage-2-256-rn50')
Test

Download test dataset

Use the Kaggle API to download the test dataset:
! kaggle competitions download -c planet-understanding-the-amazon-from-space -f test-jpg.tar.7z -p {path}
! kaggle competitions download -c planet-understanding-the-amazon-from-space -f test-jpg-additional.tar.7z -p {path}
! 7za -bd -y -so x {path}/test-jpg.tar.7z | tar xf - -C {path}
! 7za -bd -y -so x {path}/test-jpg...
Add test data to ImageItemList and ImageDataBunch
type(src)
learn.data = (src.add_test_folder('test-jpg')
              .transform(tfms, size=256)
              .databunch(bs=8).normalize(imagenet_stats))

# Sanity check
len(learn.data.train_ds), len(learn.data.valid_ds), len(learn.data.test_ds)

# Sanity check
len(learn.data.train_dl), len(learn.data.valid_dl), len(learn....
Kaggle Submission

Apply fastai Test-Time Augmentation ([TTA](https://docs.fast.ai/tta.html)) to predict on the test set:
# TTA brings test-time augmentation to the Learner class.
preds = learn.TTA(ds_type=DatasetType.Test)
torch.save(preds, path/'preds-tta-256-rn50.pt')
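TTA blends the plain prediction with the average prediction over augmented versions of each image. A minimal numpy sketch of that weighted-average idea (the `beta` weighting and the pre-computed prediction batches are assumptions for illustration; fastai's `Learner.TTA` handles the augmentation internally):

```python
import numpy as np

def tta_average(pred_batches, beta=0.4):
    """Blend the plain prediction (first entry) with the mean of the
    augmented predictions -- a sketch of the weighted-average idea
    behind TTA, with an assumed `beta`, not fastai's exact formula."""
    plain, augmented = pred_batches[0], np.stack(pred_batches[1:])
    return beta * plain + (1 - beta) * augmented.mean(axis=0)

# One plain prediction plus two predictions on augmented copies
out = tta_average([np.array([0.5]), np.array([0.7]), np.array([0.9])])
print(out)  # 0.4*0.5 + 0.6*0.8 = 0.68
```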
Get final predictions:
final_preds = preds[0]  # note, preds[1] is y, which is the ground truth/target
final_preds.shape

# Sanity check
len(final_preds[1])

# Sanity check
final_preds[0][0]

# PS: I have taken these parts of code from Arunoda's notebook.
def find_tags(pred, thresh, show_probs):
    classes = ''
    for idx, val in enumerate(pr...
Create data frame for Kaggle submission file:
df = pd.DataFrame(columns=['image_name', 'tags'])
for idx in range(len(final_preds)):
    if idx % 1000 == 0:
        print(f'Progress: {idx}')
    image_name, tags = get_row(final_preds, idx, 0.2)
    df.loc[idx] = [image_name, tags]
df.head()

subm_path = path/'subm_fastai_1.0.34_tta_stage2_sz_256_rn50_val_0.2.csv'
...
image_name,tags
file_19658,agriculture haze partly_cloudy primary
test_18775,agriculture bare_ground clear habitation primary road
file_20453,agriculture haze primary
test_23183,clear primary water
test_28867,partly_cloudy primary
test_17746,clear primary
test_11747,agriculture clear primary water
test_21382,cl...
**Upload submission file to Kaggle**

Kaggle allows late submissions to check your score. You can use the following command to do that:
! kaggle competitions submit -c planet-understanding-the-amazon-from-space -f {subm_path} -m "fastai: 1.0.34, train: stage2, sz: 256, arch: resnet50, val split: 0.2, TTA"
100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 2.19M/2.19M [00:00<00:00, 7.36MB/s] Successfully submitted to Planet: Understanding the Amazon from Space
Solving Captchas using TensorFlow
# Import all the packages
import cv2
import pickle
import os.path
import time
import matplotlib.pyplot as plt
import numpy as np
import imutils
from imutils import paths
from sklearn.preprocessing import LabelBinarizer
import tensorflow as tf
from tensorflow.python.framework import ops
from helpers import resize_to_fit...
CC-BY-4.0
notebooks/train_model.ipynb
Apidwalin/python-web-scraping-master
Getting the preprocessed train images and their labels
# Initialize the data and labels
data = []
labels = []

# loop over the input images
for image_file in paths.list_images(LETTER_IMAGES_FOLDER):
    # Load the image and convert it to grayscale
    image = cv2.imread(image_file)
    image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

    # Resize the letter so it fits in a...
CNN Architecture
# Create convolutional neural network
with train_graph.as_default():
    # Layer1 - Convolutional
    conv_layer1 = tf.nn.conv2d(X, W1, strides=[1, 1, 1, 1], padding='SAME', name='conv1')
    relu_layer1 = tf.nn.relu(conv_layer1, name='relu1')
    max_pool_layer1 = tf.nn.max_pool(relu_layer1, ksize=[1, 2, 2, 1], ...
Training the model
ops.reset_default_graph()
tf.set_random_seed(1)

# Initialize all the hyperparameters
seed = 3
num_epochs = 10
minibatch_size = 64
costs = []

# Training the model
with tf.Session(graph=train_graph) as sess:
    # Initialize all variables
    sess.run(tf.global_variables_initializer())

    # If we want to continue tra...
Preprocessing the test images and making predictions
# Load up the model labels (so we can translate model predictions to actual letters)
with open(MODEL_LABELS_FILENAME, "rb") as f:
    lb = pickle.load(f)

# Ignoring the INFO from the tensorflow
tf.logging.set_verbosity(tf.logging.ERROR)

loaded_graph = tf.Graph()

# loop over the image paths
for image_file in test...
Eigendecompositions

Eigenvalues are often one of the most useful notions we will encounter when studying linear algebra; however, as a beginner, it is easy to overlook their importance. Below, we introduce eigendecomposition and try to convey some sense of just why it is so important. Sup...
%matplotlib inline
import numpy as np
from IPython import display
from d2l import mxnet as d2l

np.linalg.eig(np.array([[2, 1], [2, 3]]))
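numpy returns the eigenvalues together with a matrix whose columns are unit eigenvectors, and we can check the decomposition directly by rebuilding the matrix as $W \,\mathrm{diag}(\lambda)\, W^{-1}$:

```python
import numpy as np

A = np.array([[2.0, 1.0], [2.0, 3.0]])
lam, W = np.linalg.eig(A)   # eigenvalues and column eigenvectors

# Rebuild A from its eigendecomposition: A = W diag(lam) W^{-1}
A_rebuilt = W @ np.diag(lam) @ np.linalg.inv(W)
print(np.allclose(A, A_rebuilt))  # True
```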
MIT
scripts/d21-en/mxnet/chapter_appendix-mathematics-for-deep-learning/eigendecomposition.ipynb
lucmertins/CapDeepLearningBook
Note that `numpy` normalizes the eigenvectors to be of length one, whereas we took ours to be of arbitrary length. Additionally, the choice of sign is arbitrary. However, the vectors computed are parallel to the ones we found by hand, with the same eigenvalues.

Decomposing Matrices

Let us continue the previous example one s...
A = np.array([[1.0, 0.1, 0.1, 0.1],
              [0.1, 3.0, 0.2, 0.3],
              [0.1, 0.2, 5.0, 0.5],
              [0.1, 0.3, 0.5, 9.0]])
v, _ = np.linalg.eig(A)
v
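Since the off-diagonal entries here are small relative to the diagonal, each eigenvalue lands close to a diagonal entry, which we can confirm directly:

```python
import numpy as np

# The same near-diagonal matrix as above
A = np.array([[1.0, 0.1, 0.1, 0.1],
              [0.1, 3.0, 0.2, 0.3],
              [0.1, 0.2, 5.0, 0.5],
              [0.1, 0.3, 0.5, 9.0]])

eigvals = np.sort(np.linalg.eigvals(A).real)
diagonal = np.sort(np.diag(A))

# With small off-diagonal entries, each eigenvalue sits
# close to one of the diagonal entries
print(np.max(np.abs(eigvals - diagonal)))
```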
In this way, eigenvalues can be approximated, and the approximations will be fairly accurate when the diagonal is significantly larger than all the other elements. It is a small thing, but with a complex and subtle topic like eigendecomposition, it is good to get any intuitive grasp we can.

A Useful Applic...
np.random.seed(8675309)
k = 5
A = np.random.randn(k, k)
A
Behavior on Random Data

For simplicity in our toy model, we will assume that the data vector we feed in, $\mathbf{v}_{in}$, is a random five-dimensional Gaussian vector. Let us think about what we want to have happen. For context, let's think of a generic ML problem, where we are trying to turn input data, like an image, int...
# Calculate the sequence of norms after repeatedly applying `A`
v_in = np.random.randn(k, 1)
norm_list = [np.linalg.norm(v_in)]
for i in range(1, 100):
    v_in = A.dot(v_in)
    norm_list.append(np.linalg.norm(v_in))

d2l.plot(np.arange(0, 100), norm_list, 'Iteration', 'Value')
The norm is growing uncontrollably! Indeed if we take the list of quotients, we will see a pattern.
# Compute the scaling factor of the norms
norm_ratio_list = []
for i in range(1, 100):
    norm_ratio_list.append(norm_list[i] / norm_list[i - 1])

d2l.plot(np.arange(1, 100), norm_ratio_list, 'Iteration', 'Ratio')
If we look at the last portion of the above computation, we see that the random vector is stretched by a factor of `1.974459321485[...]`, where the portion at the end shifts a little, but the stretching factor is stable.

Relating Back to Eigenvectors

We have seen that eigenvectors and eigenvalues correspond to the amou...
# Compute the eigenvalues
eigs = np.linalg.eigvals(A).tolist()
norm_eigs = [np.absolute(x) for x in eigs]
norm_eigs.sort()
print(f'norms of eigenvalues: {norm_eigs}')
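The experiment above is exactly power iteration: repeatedly applying the matrix and measuring the stretch converges to the largest-magnitude eigenvalue. A minimal sketch on a small symmetric matrix (the helper name is ours, not from the text):

```python
import numpy as np

def dominant_eigenvalue(A, iters=200, seed=0):
    """Estimate the largest-magnitude eigenvalue by power iteration:
    repeatedly apply A to a random vector, renormalizing each step,
    then measure how much the limiting vector is stretched."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(A.shape[0])
    for _ in range(iters):
        w = A @ v
        v = w / np.linalg.norm(w)
    return np.linalg.norm(A @ v)

A = np.array([[2.0, 1.0], [1.0, 2.0]])   # eigenvalues 1 and 3
print(dominant_eigenvalue(A))             # ≈ 3.0
```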
norms of eigenvalues: [0.8786205280381857, 1.2757952665062624, 1.4983381517710659, 1.4983381517710659, 1.974459321485074]
An Observation

We see something a bit unexpected happening here: the number we identified before for the long-term stretching of our matrix $\mathbf{A}$ applied to a random vector is *exactly* (accurate to thirteen decimal places!) the largest eigenvalue of $\mathbf{A}$. This is clearly not a coincidence! But, if we now...
# Rescale the matrix `A`
A /= norm_eigs[-1]

# Do the same experiment again
v_in = np.random.randn(k, 1)
norm_list = [np.linalg.norm(v_in)]
for i in range(1, 100):
    v_in = A.dot(v_in)
    norm_list.append(np.linalg.norm(v_in))

d2l.plot(np.arange(0, 100), norm_list, 'Iteration', 'Value')
We can also plot the ratio between consecutive norms as before and see that indeed it stabilizes.
# Also plot the ratio
norm_ratio_list = []
for i in range(1, 100):
    norm_ratio_list.append(norm_list[i] / norm_list[i - 1])

d2l.plot(np.arange(1, 100), norm_ratio_list, 'Iteration', 'Ratio')
Neural Network

We will illustrate our neural network example on a dataset where the task is to predict whether or not a student will finish their studies, based on certain criteria.

Importing the packages
import matplotlib as mpl
import matplotlib.pyplot as plt
from sklearn.preprocessing import StandardScaler, LabelEncoder
from sklearn.impute import SimpleImputer
from sklearn.model_selection import train_test_split
import pandas as pd
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Den...
Apache-2.0
python/studentperformancemodel.ipynb
camara94/reseau-neurone-tensorflow-2
Creating the dataset
df = pd.read_csv('./../data/StudentsPerformance.csv')
df.head(2)
Data preprocessing
df.describe()

import numpy as np

for col in df.columns:
    # `v == np.nan` is always False (NaN never equals NaN); use pd.isna instead
    missing_val = sum(1 for v in df[col] if pd.isna(v))
    print(f'Column **{col}** has {missing_val} missing value(s)')

df.head(3)

## The role of this method is to transform categorical variables into variab...
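As a side note, comparing a value against `np.nan` with `==` is always `False` (NaN never equals NaN), so a vectorized `isna()` is the reliable way to count missing values. A small sketch on a hypothetical frame standing in for `StudentsPerformance.csv`:

```python
import numpy as np
import pandas as pd

# Hypothetical frame standing in for StudentsPerformance.csv
df = pd.DataFrame({'math score': [72, np.nan, 90],
                   'reading score': [70, 88, np.nan]})

# df.isna() marks missing cells; summing counts them per column
missing_per_column = df.isna().sum()
print(missing_per_column)
```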
The train set size is: 750
The test set size is: 250
The test set size is: (250, 7)
The test set size is: (250,)
Creating our neural network
## Simple model with Sequential
model = Sequential()

## create the hidden layers and the output layer
couche_cachee = Dense(30, input_dim=7, activation='relu')
couche_cachee2 = Dense(20, activation='relu')
couche_cachee3 = Dense(5, activation='relu')
couche_sortie = Dense(1, activation='sigmoid')

model.add( couche_cachee )
model.add( c...
Load dataset and split into training / test

`training.csv` is a mixture of simulated signal and real background. It has the following columns. `test.csv` has the following columns:
train_ada = pandas.read_csv('reference/training.csv', sep=',')
test_ada = pandas.read_csv('reference/test.csv', sep=',', index_col='id')
print("Training full sample columns:", ", ".join(train_ada.columns), "\nShape:", train_ada.shape)
print("Test full sample columns:", ", ".join(test_ada.columns), "\nShape:", test_ad...
Test full sample columns: LifeTime, dira, FlightDistance, FlightDistanceError, IP, IPSig, VertexChi2, pt, DOCAone, DOCAtwo, DOCAthree, IP_p0p2, IP_p1p2, isolationa, isolationb, isolationc, isolationd, isolatione, isolationf, iso, CDF1, CDF2, CDF3, ISO_SumBDT, p0_IsoBDT, p1_IsoBDT, p2_IsoBDT, p0_track_Chi2Dof, p1_track_...
MIT
Addressing Large Hadron Collider Challenges by Machine Learning/Week3/index.ipynb
Mohitkr95/Advanced-ML
Train a simple model using part of the training sample
train, test = train_test_split(train_ada, test_size=0.3, random_state=13)
Let's choose features to train a model
variables = list(set(train_ada.columns) - {'id', 'signal', 'mass', 'production', 'min_ANNmuon'})
print(variables)

%%time
clf = AdaBoostClassifier(n_estimators=120, learning_rate=0.009, random_state=13,
                         base_estimator=DecisionTreeClassifier(max_depth=19, min_samples_leaf=40,
                                                               max_features=10...
Wall time: 49.8 s
Check model quality on half of the training sample
def plot_metrics(y_true, y_pred):
    fpr, tpr, thresholds = roc_curve(y_true, y_pred)
    roc_auc = roc_auc_score(y_true, y_pred)
    plt.plot(fpr, tpr, label='ROC AUC=%f' % roc_auc)
    plt.xlabel("FPR")
    plt.ylabel("TPR")
    plt.legend()
    plt.title("ROC Curve")

y_pred = clf.predict_proba(test[variables])[:, ...
ROC AUC is just a part of the solution; you also have to make sure that:

- the classifier output is not correlated with the mass
- the classifier performs similarly on MC and real data of the normalization channel

Mass correlation check
df_corr_check = pandas.read_csv("reference/check_correlation.csv")
df_corr_check.shape
y_pred = clf.predict(df_corr_check[variables])

def efficiencies(features, thresholds=None, mask=None, bins=30, labels_dict=None,
                 ignored_sideband=0.0, errors=False, grid_columns=2):
    """
    Efficienc...
0.00019410562429501838
MC vs Real difference
df_agreement = pandas.read_csv('reference/check_agreement.csv')

from sklearn.utils.validation import column_or_1d

def get_ks_metric(df_agree, df_test):
    sig_ind = df_agree[df_agree['signal'] == 1].index
    bck_ind = df_agree[df_agree['signal'] == 0].index
    mc_prob = numpy.array(df_test.loc[sig_ind]['prediction...
Let's see if adding some noise can improve the agreement
def add_noise(array, level=0.15, random_seed=34):
    numpy.random.seed(random_seed)
    return level * numpy.random.random(size=array.size) + (1 - level) * array

agreement_probs_noise = add_noise(clf.predict_proba(df_agreement[variables])[:, 1])
ks_noise = compute_ks(
    agreement_probs_noise[df_agreement['signal']....
Check ROC with noise
test.shape
y_pred = add_noise(clf.predict_proba(test[variables])[:, 1])
plot_metrics(test['signal'], y_pred)
test.shape, y_pred.shape
Train the model using the whole training sample
%time clf.fit(train_ada[variables], train_ada['signal'])
Wall time: 1min 16s
Compute prediction and add noise
y_pred = add_noise(clf.predict_proba(test_ada[variables])[:, 1])
Prepare submission file
def save_submission(y_pred, index, filename='result'):
    sep = ','
    filename = '{}.csv.gz'.format(filename)
    pandas.DataFrame({'id': index, 'prediction': y_pred}).to_csv(
        filename, sep=sep, index=False, compression='gzip')
    print("Saved file: ", filename, "\nShape:", (y_pred.shape[0], 2))
...
Saved file: sample_submission.csv.gz Shape: (855819, 2)
Crystallization at a plane no-slip wall
rbm = pystokes.wallBounded.Rbm(radius=b, particles=Np, viscosity=eta)
force = pystokes.forceFields.Forces(particles=Np)

# simulate the resulting system
Tf, Npts = 150, 200
pystokes.utils.simulate(np.concatenate((r,p)), Tf, Npts, rhs, integrator='odeint', filename='crystallization')

# plot the data at specific time ins...
MIT
examples/ex4-crystallization.ipynb
rajeshrinet/pystokes
Crystallization at a plane no-shear interface
rbm = pystokes.interface.Rbm(radius=b, particles=Np, viscosity=eta)
force = pystokes.forceFields.Forces(particles=Np)

# simulate the resulting system
Tf, Npts = 150, 200
pystokes.utils.simulate(np.concatenate((r,p)), Tf, Npts, rhs, integrator='odeint', filename='crystallization')

# plot the data at specific time instan...
Evaluating Validation Set (can't evaluate test set on the fly, but the validation set wasn't used to change any hyperparameters)
from dataset.pycocotools.coco import COCO
from dataset.pycocotools.cocoeval import COCOeval

cocoGt = COCO("/home/data/preprocessed/test-ard-june-sept-rgb-jpeg-split-geo-128/annotations/instances_val-nebraska.json")
cocoDt = cocoGt.loadRes("/home/data/output/resnet_v1_101_coco_fcis_end2end_ohem-nebraska-128-moresamples/val...
loading annotations into memory...
Done (t=2.20s)
creating index...
index created!
Loading and preparing results...
DONE (t=2.73s)
creating index...
index created!
Running per image evaluation...
DONE (t=32.98s).
Accumulating evaluation results...
DONE (t=1.76s).
Average Precision (AP) @[ IoU=0.50:0.95 ...
MIT
fcis/InstanceSegmentation_Sentinel2/fcis_profile_nebraska.ipynb
ecohydro/CropMask_RCNN
Making figures for All Validation Images
plt.ioff()
all_maps = []
all_mars = []
all_ims, all_dets, all_masks, all_configs, all_classes = compare.predict_on_image_names(
    image_names, config,
    model_path_id="/home/data/output/resnet_v1_101_coco_fcis_end2end_ohem-nebraska-128-moresamples/train-nebraska/e2e",
    epoch=1)
for index in range(len(all_ims)):
    coco_anns...
Homework 8

Due Date: Tuesday, October 31st at 11:59 PM

Problem 1: BST Traversal

This problem builds on Problem 1 of Homework 7, in which you wrote a binary search tree.

Part 1

As discussed in lecture, three different ways to do a depth-first traversal are preorder, inorder, and postorder. Here is a reference: [Tree ...
# Part 1
from enum import Enum

class DFSTraversalTypes(Enum):
    PREORDER = 1
    INORDER = 2
    POSTORDER = 3

class Node:
    def __init__(self, value, depth=0, left=None, right=None):
        self.value = value
        self.left = left
        self.right = right
        self.parent = None
        self.depth ...
Writing TreeTraversal.py
MIT
homeworks/HW8/HW8-final.ipynb
xuwd11/cs207_Weidong_Xu
---

Problem 2: Markov Chains

[Markov Chains](https://en.wikipedia.org/wiki/Markov_chain) are widely used to model and predict discrete events. Underlying Markov chains are Markov processes, which make the assumption that the outcome of a future event depends only on the event immediately preceding it. In this exercise,...
# Load CSV file -- hint: you can use np.genfromtxt()
import numpy as np
weather_array = np.genfromtxt('weather.csv', delimiter=',')
print(weather_array)
[[ 0.4 0.3 0.1 0.05 0.1 0.05] [ 0.3 0.4 0.1 0.1 0.08 0.02] [ 0.2 0.3 0.35 0.05 0.05 0.05] [ 0.1 0.2 0.25 0.3 0.1 0.05] [ 0.15 0.2 0.1 0.15 0.3 0.1 ] [ 0.1 0.2 0.35 0.1 0.05 0.2 ]]
Part 2: Create a class called `Markov` that has the following methods:* `load_data(array)`: loads the Numpy 2D array and stores it as a class variable.* `get_prob(previous_day, following_day)`: returns the probability of `following_day` weather given `previous_day` weather. **Note:** `previous_day` and `following_day...
class Markov:
    def __init__(self):
        # implement here
        self.weather_types = ['sunny', 'cloudy', 'rainy', 'snowy', 'windy', 'hailing']

    def load_data(self, array):
        # implement here
        self.weather_array = array
        return self

    def get_prob(self, previous_day, followi...
0.4
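`get_prob` boils down to a row/column lookup in the transition matrix. A minimal standalone sketch using the `weather.csv` matrix printed earlier (row = today's weather, column = tomorrow's, in the order from the problem statement):

```python
import numpy as np

WEATHER = ['sunny', 'cloudy', 'rainy', 'snowy', 'windy', 'hailing']

# Transition matrix from weather.csv (row = today, column = tomorrow)
P = np.array([[0.4, 0.3, 0.1, 0.05, 0.1, 0.05],
              [0.3, 0.4, 0.1, 0.1, 0.08, 0.02],
              [0.2, 0.3, 0.35, 0.05, 0.05, 0.05],
              [0.1, 0.2, 0.25, 0.3, 0.1, 0.05],
              [0.15, 0.2, 0.1, 0.15, 0.3, 0.1],
              [0.1, 0.2, 0.35, 0.1, 0.05, 0.2]])

def get_prob(previous_day, following_day):
    """P(following_day | previous_day): a plain lookup by weather name."""
    return P[WEATHER.index(previous_day), WEATHER.index(following_day)]

print(get_prob('sunny', 'sunny'))  # 0.4, matching the cell output above
```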
---

Problem 3: Iterators

Iterators are a convenient way to walk along your Markov chain.

Part 1: Using your `Markov` class from Problem 2, write `Markov` as an iterator by implementing the `__iter__()` and `__next__()` methods.

Remember:
* `__iter__()` should return the iterator object and should be implicitly called ...
class Markov:
    def __init__(self):
        # implement here
        self.weather_types = ['sunny', 'cloudy', 'rainy', 'snowy', 'windy', 'hailing']
        self.weather = None

    def load_data(self, array):
        # implement here
        self.weather_array = array
        return self

    def get_prob...
cloudy sunny windy snowy rainy
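The iterator protocol the problem asks for can be sketched on a tiny two-state chain (the class and state names here are our own illustration, not the homework solution): `__iter__` returns the object itself, and each `__next__` samples the following day from the current day's transition row.

```python
import random

class WeatherChain:
    """Minimal iterator sketch: each next() samples tomorrow's weather
    from the current day's transition row (names are hypothetical)."""
    def __init__(self, matrix, states, start, seed=0):
        self.matrix, self.states = matrix, states
        self.current = start
        self.rng = random.Random(seed)

    def __iter__(self):
        return self  # the iterator object is the chain itself

    def __next__(self):
        row = self.matrix[self.states.index(self.current)]
        self.current = self.rng.choices(self.states, weights=row)[0]
        return self.current

chain = WeatherChain([[0.7, 0.3], [0.4, 0.6]], ['sunny', 'rainy'], 'sunny')
week = [next(chain) for _ in range(7)]
print(week)
```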
Part 2: We want to predict what weather will be like in a week for 5 different cities.Now that we have our `Markov` iterator, we can try to predict what the weather will be like in seven days from now.Given each city's current weather in the dictionary `city_weather` (see below), simulate what the weather will be like...
city_weather = {
    'New York': 'rainy',
    'Chicago': 'snowy',
    'Seattle': 'rainy',
    'Boston': 'hailing',
    'Miami': 'windy',
    'Los Angeles': 'cloudy',
    'San Fransisco': 'windy'
}

from collections import Counter

def predict(weather, days=7):
    m_it = iter(Markov().load_data(weather_array).set_weathe...
{'New York': 'cloudy', 'Chicago': 'cloudy', 'Seattle': 'sunny', 'Boston': 'sunny', 'Miami': 'cloudy', 'Los Angeles': 'cloudy', 'San Fransisco': 'cloudy'}
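The simulate-and-vote idea above can be sketched compactly: run the 7-step walk many times and keep the most common final state (the 2-state matrix here is hypothetical, for illustration only):

```python
import random
from collections import Counter

def most_likely_in_a_week(matrix, states, start, trials=500, seed=1):
    """Run many 7-step random walks and return the most common final
    state -- the simulate-and-vote idea, sketched on a toy chain."""
    rng = random.Random(seed)
    finals = []
    for _ in range(trials):
        state = start
        for _ in range(7):
            row = matrix[states.index(state)]
            state = rng.choices(states, weights=row)[0]
        finals.append(state)
    return Counter(finals).most_common(1)[0][0]

P = [[0.9, 0.1], [0.5, 0.5]]  # 'sunny' is strongly sticky in this toy chain
print(most_likely_in_a_week(P, ['sunny', 'rainy'], 'rainy'))
```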
Germany: LK Saalekreis (Sachsen-Anhalt)

* Homepage of project: https://oscovida.github.io
* [Execute this Jupyter Notebook using myBinder](https://mybinder.org/v2/gh/oscovida/binder/master?filepath=ipynb/Germany-Sachsen-Anhalt-LK-Saalekreis.ipynb)
import datetime
import time

start = datetime.datetime.now()
print(f"Notebook executed on: {start.strftime('%d/%m/%Y %H:%M:%S%Z')} {time.tzname[time.daylight]}")

%config InlineBackend.figure_formats = ['svg']
from oscovida import *

overview(country="Germany", subregion="LK Saalekreis");

# load the data
cases, deaths, r...
CC-BY-4.0
ipynb/Germany-Sachsen-Anhalt-LK-Saalekreis.ipynb
RobertRosca/oscovida.github.io
Explore the data in your web browser

- If you want to execute this notebook, [click here to use myBinder](https://mybinder.org/v2/gh/oscovida/binder/master?filepath=ipynb/Germany-Sachsen-Anhalt-LK-Saalekreis.ipynb)
- and wait (~1 to 2 minutes)
- Then press SHIFT+RETURN to advance code cell to code cell
- See http://jupyte...
print(f"Download of data from Johns Hopkins university: cases at {fetch_cases_last_execution()} and "
      f"deaths at {fetch_deaths_last_execution()}.")
# to force a fresh download of data, run "clear_cache()"
print(f"Notebook execution took: {datetime.datetime.now()-start}")
_____no_output_____
CC-BY-4.0
ipynb/Germany-Sachsen-Anhalt-LK-Saalekreis.ipynb
RobertRosca/oscovida.github.io
Text-to-Speech with Tacotron+WaveRNNThis is an English female voice TTS demo using an open source project [fatchord/WaveRNN](https://github.com/fatchord/WaveRNN).For other deep-learning Colab notebooks, visit [tugstugi/dl-colab-notebooks](https://github.com/tugstugi/dl-colab-notebooks). Install fatchord/WaveRNN
import os import time from os.path import exists, join, basename, splitext git_repo_url = 'https://github.com/fatchord/WaveRNN.git' project_name = splitext(basename(git_repo_url))[0] if not exists(project_name): !git clone -q {git_repo_url} !cd {project_name} && pip install -q -r requirements.txt import sys sys...
_____no_output_____
MIT
Deep-Learning-Notebooks/notebooks/fatchordWaveRNN.ipynb
deepraj1729/Resources-and-Guides
Sentence to synthesize
SENTENCE = 'Supporters say they expect the law to be blocked in court but hope that the appeals process will bring it before the Supreme Court.'
_____no_output_____
MIT
Deep-Learning-Notebooks/notebooks/fatchordWaveRNN.ipynb
deepraj1729/Resources-and-Guides
Synthetize
!rm -rf {project_name}/quick_start/*.wav !cd {project_name} && python quick_start.py --input_text "{SENTENCE}" wavs = !ls {project_name}/quick_start/*.wav display(Audio(wavs[0], rate=22050))
Initialising WaveRNN Model... Trainable Parameters: 4.234M Loading Weights: "quick_start/voc_weights/latest_weights.pyt" Initialising Tacotron Model... Trainable Parameters: 11.088M Loading Weights: "quick_start/tts_weights/latest_weights.pyt" +---------+---------------+-----------------+----------------+------...
MIT
Deep-Learning-Notebooks/notebooks/fatchordWaveRNN.ipynb
deepraj1729/Resources-and-Guides
Pathfinder Application (Polarisation and Light)Author: R. Mitchell (email: s1432329@sms.ed.ac.uk) FeedbackQuestions, comments, suggestions, or requests for functionality are welcome and can be sent to the email address above. This tool will continue development on an 'as-required' basis (i.e. I will add features when ...
# Run this cell! %matplotlib notebook from pathfinder.runnable.pol_and_light import generate_controls from IPython.display import display controls = generate_controls() display(controls)
_____no_output_____
MIT
Polarisation and Light.ipynb
refmitchell/pathfinder
Data Wrangling, Cleaning of Data, Exploration of Data to make it consistent for Analysis
%matplotlib inline # importing required libraries import os import subprocess import stat import numpy as np import pandas as pd import seaborn as sns import matplotlib.pyplot as plt from datetime import datetime sns.set(style="white") # getting absolute path till the raw data file abs_path = os.getcwd()[:-15] raw_data...
_____no_output_____
MIT
DataPreparation/DataPreparation.ipynb
ranjith283/Data-analysis-by-python
Get the data
from tsflex.utils.data import load_empatica_data df_tmp, df_acc, df_gsr, df_ibi = load_empatica_data(["tmp", "acc", "gsr", "ibi"]) from pandas.tseries.frequencies import to_offset data = [df_tmp, df_acc, df_gsr, df_ibi] for df in data: print("Time-series:", df.columns.values) print(df.shape) try: ...
Time-series: ['TMP'] (30200, 1) Irregular sampling rate Time-series: ['ACC_x' 'ACC_y' 'ACC_z'] (241620, 3) Irregular sampling rate Time-series: ['EDA'] (30204, 1) Irregular sampling rate Time-series: ['IBI'] (1230, 1) Irregular sampling rate
MIT
examples/tsfresh_integration.ipynb
predict-idlab/tsflex
Look at the data
import plotly.graph_objects as go from plotly.subplots import make_subplots fig = make_subplots( rows=len(data), cols=1, shared_xaxes=True, subplot_titles=[df.columns.values[0].split('_')[0] for df in data], vertical_spacing=0.1, ) for plot_idx, df in enumerate(data, 1): # Select first minute of data...
_____no_output_____
MIT
examples/tsfresh_integration.ipynb
predict-idlab/tsflex
These visualizations indicate that some preprocessing might be necessary for the signals (some sort of clipping) tsflex processing This is roughly identical to the processing of notebook containing the example code of the paper.
import pandas as pd; import numpy as np; from scipy.signal import savgol_filter from tsflex.processing import SeriesProcessor, SeriesPipeline # Create the processing functions def clip_data(sig: pd.Series, min_val=None, max_val=None) -> np.ndarray: return np.clip(sig, a_min=min_val, a_max=max_val) def smv(*sigs)...
_____no_output_____
MIT
examples/tsfresh_integration.ipynb
predict-idlab/tsflex
tsflex feature extraction with [tsfresh](https://github.com/blue-yonder/tsfresh) integration
# !pip install tsfresh
_____no_output_____
MIT
examples/tsfresh_integration.ipynb
predict-idlab/tsflex
> Useful links; > [List of all tsfresh features](https://tsfresh.readthedocs.io/en/latest/text/list_of_features.html) > [More detailed documentation of the tsfresh features](https://tsfresh.readthedocs.io/en/latest/api/tsfresh.feature_extraction.htmlmodule-tsfresh.feature_extraction.feature_calculators) > [More deta...
# This wrapper handles tsfresh its feature extraction settings from tsflex.features.integrations import tsfresh_settings_wrapper # This wrappers handles tsfresh its combiner functions from tsflex.features.integrations import tsfresh_combiner_wrapper from tsflex.features import FeatureCollection, MultipleFeatureDescript...
_____no_output_____
MIT
examples/tsfresh_integration.ipynb
predict-idlab/tsflex
Using tsfresh feature extraction settings
# Import some preset feature extraction setting from tsfresh from tsfresh.feature_extraction import MinimalFCParameters, EfficientFCParameters
_____no_output_____
MIT
examples/tsfresh_integration.ipynb
predict-idlab/tsflex
Calculate the features for a tsfresh feature extraction setting. Note that;* `tsfresh_settings_wrapper` transforms this feature extraction settings object to a list of features that you can directly pass as the `function` argument of tsflex `MultipleFeatureDescriptors`.
simple_feats = MultipleFeatureDescriptors( functions=tsfresh_settings_wrapper(MinimalFCParameters()), series_names=["ACC_SMV", "EDA", "TMP"], windows=["5min", "2.5min"], strides=["2.5min"], ) feature_collection = FeatureCollection(simple_feats) feature_collection features_df = feature_collection.calcula...
_____no_output_____
MIT
examples/tsfresh_integration.ipynb
predict-idlab/tsflex
Extract a lot more tsfresh features (& customize the settings, i.e., remove the slower functions)
slow_funcs = [ "matrix_profile", "number_cwt_peaks", "augmented_dickey_fuller", "partial_autocorrelation", "agg_linear_trend", "lempel_ziv_complexity", "benford_correlation", "ar_coefficient", "permutation_entropy", "friedrich_coefficients", ] settings = EfficientFCParameters() for f in slow_funcs: del settings[f]...
_____no_output_____
MIT
examples/tsfresh_integration.ipynb
predict-idlab/tsflex
Plot the EDA features
import plotly.graph_objects as go from plotly.subplots import make_subplots fig = make_subplots( rows=2, cols=1, shared_xaxes=True, subplot_titles=['Raw EDA data', 'EDA features'], vertical_spacing=0.1 ) fig.add_trace( go.Scattergl(x=df_gsr.index[::4*5], y=df_gsr['EDA'].values[::4*5], name='EDA', mod...
_____no_output_____
MIT
examples/tsfresh_integration.ipynb
predict-idlab/tsflex
Using simple tsfresh features Integrates natively :)
# Import some simple funtions from tsfresh.feature_extraction.feature_calculators import ( abs_energy, absolute_sum_of_changes, cid_ce, variance_larger_than_standard_deviation, ) from tsflex.features import FeatureCollection, FuncWrapper, MultipleFeatureDescriptors simple_feats = MultipleFeatureDescri...
_____no_output_____
MIT
examples/tsfresh_integration.ipynb
predict-idlab/tsflex
Plot the EDA features
import plotly.graph_objects as go from plotly.subplots import make_subplots fig = make_subplots( rows=2, cols=1, shared_xaxes=True, subplot_titles=['Raw EDA data', 'EDA features'], vertical_spacing=0.1, ) fig.add_trace( go.Scattergl(x=df_gsr.index[::4*5], y=df_gsr['EDA'].values[::4*5], name='EDA', mo...
_____no_output_____
MIT
examples/tsfresh_integration.ipynb
predict-idlab/tsflex
Using combiner tsfresh features
# Import all combiner funcs from tsfresh.feature_extraction.feature_calculators import ( agg_autocorrelation, augmented_dickey_fuller, cwt_coefficients, fft_aggregated, fft_coefficient, index_mass_quantile, linear_trend, partial_autocorrelation, spkt_welch_density, symmetry_looki...
_____no_output_____
MIT
examples/tsfresh_integration.ipynb
predict-idlab/tsflex
Calculate the features for some of tsfresh its combiner functions. Note that;* `param` is now passed to `tsfresh_combiner_wrapper` instead of the combiner function itself* combiner functions that require a `pd.Series` (with a `pd.DatetimeIndex`) are also handled by this wrapper
from tsflex.features import FeatureCollection, MultipleFeatureDescriptors combiner_feats = MultipleFeatureDescriptors( functions=[ tsfresh_combiner_wrapper(index_mass_quantile, param=[{"q": v} for v in [0.15, 0.5, 0.75]]), tsfresh_combiner_wrapper(linear_trend, param=[{"attr": v} for v in ["interce...
_____no_output_____
MIT
examples/tsfresh_integration.ipynb
predict-idlab/tsflex
Plot the EDA features
import plotly.graph_objects as go from plotly.subplots import make_subplots fig = make_subplots( rows=2, cols=1, shared_xaxes=True, subplot_titles=['Raw EDA data', 'EDA features'], vertical_spacing=0.1, ) fig.add_trace( go.Scattergl(x=df_gsr.index[::4*5], y=df_gsr['EDA'].values[::4*5], name='EDA', mo...
_____no_output_____
MIT
examples/tsfresh_integration.ipynb
predict-idlab/tsflex
Training differentially private pipelines We start by importing the required libraries and modules and collecting the data that we need from the [Adult dataset](https://archive.ics.uci.edu/ml/datasets/adult).
import warnings import numpy as np from sklearn.decomposition import PCA from sklearn.linear_model import LogisticRegression from sklearn.pipeline import Pipeline from sklearn.preprocessing import StandardScaler from diffprivlib import models X_train = np.loadtxt("https://archive.ics.uci.edu/ml/machine-learning-datab...
_____no_output_____
MIT
notebooks/pipeline.ipynb
Bhaskers-Blu-Org1/differential-privacy-library
Pipeline with no privacy To begin, let's train and test a scikit-learn pipeline without any privacy guarantees. We first use `StandardScaler` to normalise the data to zero mean and unit variance, then use `PCA` to reduce the dimensionality of the system, and then use `LogisticRegression` as a classifier.
pipe = Pipeline([ ('scaler', StandardScaler()), ('pca', PCA(2)), ('lr', LogisticRegression(solver="lbfgs")) ])
_____no_output_____
MIT
notebooks/pipeline.ipynb
Bhaskers-Blu-Org1/differential-privacy-library
We now train the model, and save the test accuracy as a baseline.
pipe.fit(X_train, y_train) baseline = pipe.score(X_test, y_test) print("Non-private test accuracy: %.2f%%" % (baseline * 100))
Non-private test accuracy: 80.30%
MIT
notebooks/pipeline.ipynb
Bhaskers-Blu-Org1/differential-privacy-library
Differentially private pipeline Using `diffprivlib`, we can now train a differentially private pipeline. We use the same components as in our pipeline above, but with each component satisfying differential privacy. We decide on the `bounds` and `data_norm` parameters by trial and error for this example. In practice, t...
epsilons = np.logspace(-3, 0, 500) dp_pipe = Pipeline([ ('scaler', models.StandardScaler(bounds=([17, 1, 0, 0, 1], [90, 160, 10000, 4356, 99]))), ('pca', models.PCA(2, data_norm=5, centered=True)), ('lr', models.LogisticRegression(data_norm=5)) ])
_____no_output_____
MIT
notebooks/pipeline.ipynb
Bhaskers-Blu-Org1/differential-privacy-library
Let's now train the pipeline across a range of epsilons.
pipe_accuracy = [] for epsilon in epsilons: _eps = epsilon / 3 dp_pipe.set_params(scaler__epsilon=_eps, pca__epsilon=_eps, lr__epsilon=_eps) dp_pipe.fit(X_train, y_train) pipe_accuracy.append(dp_pipe.score(X_test, y_test))
_____no_output_____
MIT
notebooks/pipeline.ipynb
Bhaskers-Blu-Org1/differential-privacy-library
Let's save the results so they can be used later.
import pickle pickle.dump((epsilons, baseline, pipe_accuracy), open("pipeline_accuracy_500.p", "wb" ) )
_____no_output_____
MIT
notebooks/pipeline.ipynb
Bhaskers-Blu-Org1/differential-privacy-library
Results We can now plot the results, showing that non-private accuracy is matched from approximately `epsilon = 0.1`.
import matplotlib.pyplot as plt import pickle epsilons, baseline, pipe_accuracy = pickle.load(open("pipeline_accuracy_500.p", "rb")) plt.semilogx(epsilons, pipe_accuracy, label="Differentially private pipeline", zorder=10) plt.plot(epsilons, np.ones_like(epsilons) * baseline, dashes=[2,2], label="Non-private pipeline...
_____no_output_____
MIT
notebooks/pipeline.ipynb
Bhaskers-Blu-Org1/differential-privacy-library
Using Google Colab with GitHub [Google Colaboratory](http://colab.research.google.com) is designed to integrate cleanly with GitHub, allowing both loading notebooks from github and saving notebooks to github. Loading Public Notebooks Directly from GitHubColab can load public github notebooks directly, with no requir...
_____no_output_____
MIT
notebooks/colab-github-demo.ipynb
wilaiphorn/PatientExploreR
--- model kaggle_ 0.48665 - 768/1257 (61%)
model1 = sm.OLS.from_formula("log_duration ~ \ scale(sqrt_log_dist)*C(vendor_id)\ + scale(sqrt_log_dist)*C(work)\ + C(weekday)\ + C(hour)\ + scale(sqrt_log_dist)*scale(weath...
_____no_output_____
MIT
Mk/2.fitting.ipynb
Romanism/dss-project-taxi
cross validation
score, result_set = cross_validater("log_duration ~ \ scale(sqrt_log_dist)*C(vendor_id)\ + scale(sqrt_log_dist)*C(work)\ + C(weekday)\ + C(hour)\ + scale(sqrt_log_dist)*scal...
_____no_output_____
MIT
Mk/2.fitting.ipynb
Romanism/dss-project-taxi
--- Kaggle
test = pd.read_csv("edited_test.csv") test['sqrt_log_dist'] = test['dist'].apply(lambda x: np.sqrt(np.log1p(x))) # ν…ŒμŠ€νŠΈ 데이터λ₯Ό 톡해 yκ°’ 예츑 y_hat = result.predict(test) y_hat = y_hat.apply(lambda x: int(round(np.exp(x)))) ans = pd.concat([test['id'], y_hat], axis=1) ans.rename(columns={'id':'id' , 0:'trip_duration'}, inplace=...
_____no_output_____
MIT
Mk/2.fitting.ipynb
Romanism/dss-project-taxi
0.48665 - 768/1257 (61%) --- --- dist
a = taxi.pivot_table("log_duration", "sqrt_log_dist", aggfunc='mean') a.plot() results = pd.DataFrame(columns = ["R-square", "AIC", "BIC", "Cond.No.", "Pb(Fstatics)", "Pb(omnibus)", "Pb(jb)", "Dub-Wat","Remarks"]) model1 = sm.OLS.from_formula("log_duration ~ scale(sqrt_log_dist)", data = taxi) result1 = model1.fit() s...
_____no_output_____
MIT
Mk/2.fitting.ipynb
Romanism/dss-project-taxi
아웃라이어λ₯Ό μ œκ±°ν•œ μƒνƒœμ—μ„œλŠ” cbrtκ°€ 더 μ’‹μŒ work
a = taxi.pivot_table("trip_duration", "work", aggfunc='mean') a.plot() results = pd.DataFrame(columns = ["R-square", "AIC", "BIC", "Cond.No.", "Pb(Fstatics)", "Pb(omnibus)", "Pb(jb)", "Dub-Wat","Remarks"]) model1 = sm.OLS.from_formula("log_duration ~ scale(sqrt_log_dist) + C(work)", data = taxi) result1 = model1.fit()...
_____no_output_____
MIT
Mk/2.fitting.ipynb
Romanism/dss-project-taxi
weather_event
a = taxi.pivot_table("log_duration", "weather_event", aggfunc='mean') a.plot() results = pd.DataFrame(columns = ["R-square", "AIC", "BIC", "Cond.No.", "Pb(Fstatics)", "Pb(omnibus)", "Pb(jb)", "Dub-Wat","Remarks"]) model1 = sm.OLS.from_formula("log_duration ~ scale(sqrt_log_dist) + C(weather_event)", data = taxi) resul...
_____no_output_____
MIT
Mk/2.fitting.ipynb
Romanism/dss-project-taxi
weekday
a = taxi.pivot_table("log_duration", "weekday", aggfunc='mean') a.plot() # origin data model model = sm.OLS.from_formula("log_duration ~ scale(weekday) +scale(weekday**2) +scale(weekday**3) + scale(weekday**4) +scale(weekday**5) +scale(weekday**6)+scale(weekday**7) + scale(weekday**8) + scale(weekday**9)", data = taxi)...
_____no_output_____
MIT
Mk/2.fitting.ipynb
Romanism/dss-project-taxi
month
# origin data model model = sm.OLS.from_formula("log_duration ~ scale(month) +scale(month**2) +scale(month**3) + scale(month**4) +scale(month**5) +scale(month**6)+scale(month**7) + scale(month**8) + scale(month**9)", data = taxi) result2 = model.fit_regularized(alpha=0.01, L1_wt=1) print(result2.params) results = pd.Da...
_____no_output_____
MIT
Mk/2.fitting.ipynb
Romanism/dss-project-taxi
day
a = taxi.pivot_table("log_duration", "day", aggfunc='mean') a.plot() # origin data model model = sm.OLS.from_formula("log_duration ~ scale(day) +scale(day**2) +scale(day**3) + scale(day**4) +scale(day**5) +scale(day**6)+scale(day**7) + scale(day**8) + scale(day**9)", data = taxi) result2 = model.fit_regularized(alpha=0...
_____no_output_____
MIT
Mk/2.fitting.ipynb
Romanism/dss-project-taxi