There are many other operations. ...but you will find more power in *pandas* for this. Copy and "deep copy": when objects are passed between functions, you want to avoid excessive memory copying when it is not necessary (technical term: pass by reference).
A = array([[1, 2], [3, 4]])
A
B = A  # now B refers to the same array data as A
B
A == B  # check this
# changing B affects A
B[0,0] = 10
B
A
_____no_output_____
MIT
pydata/03_numerics.ipynb
andreamelloncelli/cineca-lectures
If we want to avoid this behavior and get a completely independent object `B` copied from `A`, we need to do a so-called "deep copy" using the function `copy`.
B = copy(A)
# now, if we modify B, A is not affected
B[0,0] = -5
B
A
_____no_output_____
MIT
pydata/03_numerics.ipynb
andreamelloncelli/cineca-lectures
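The assignment-versus-copy behavior above can be sketched compactly (a minimal sketch using `import numpy as np` style, whereas the notebook's cells assume `from numpy import *`):

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])

# Plain assignment: B refers to the same underlying data as A,
# so writing through B is visible through A as well.
B = A
B[0, 0] = 10
print(A[0, 0])  # 10

# A deep copy is an independent array: modifying C leaves A alone.
C = np.copy(A)
C[0, 0] = -5
print(A[0, 0])  # still 10
print(C[0, 0])  # -5
```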
Iterating over array elements
> Vectorization describes the absence of any explicit looping, indexing, etc., in the code - these things are taking place, of course, just "behind the scenes" (in optimized, pre-compiled C code). (source: the NumPy website)

Generally, we want to avoid iterating over the elements of arrays ...
M.dtype
M
M2 = M.astype(bool)
M2
M3 = M.astype(str)
M3
_____no_output_____
MIT
pydata/03_numerics.ipynb
andreamelloncelli/cineca-lectures
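The vectorization advice above can be made concrete with a small sketch: an explicit Python loop and the equivalent vectorized expression compute the same result, but the vectorized form delegates the loop to compiled code.

```python
import numpy as np

v = np.arange(5)

# Explicit loop over elements (slow for large arrays)
squared_loop = np.empty_like(v)
for i in range(len(v)):
    squared_loop[i] = v[i] ** 2

# Vectorized form: the loop happens in optimized, pre-compiled C code
squared_vec = v ** 2

print(squared_vec)  # [ 0  1  4  9 16]
```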
Versions
%reload_ext version_information
%version_information numpy
_____no_output_____
MIT
pydata/03_numerics.ipynb
andreamelloncelli/cineca-lectures
Uploading an image with graphical annotations stored in a CSV file
======================
We'll be using standard Python tools to parse the CSV file and create an XML document describing cell nuclei for BisQue.

Make sure you have the bisque api installed:
> pip install bisque-api
import os
import csv
from datetime import datetime
try:
    from lxml import etree
except ImportError:
    import xml.etree.ElementTree as etree
_____no_output_____
BSD-3-Clause
BisQue_Graphical_Annotations.ipynb
benlazarine/bisque-notebooks
Include BisQue API
from bqapi import BQSession
from bqapi.util import save_blob
_____no_output_____
BSD-3-Clause
BisQue_Graphical_Annotations.ipynb
benlazarine/bisque-notebooks
Define some paths
path = '.'
path_img = os.path.join(path, 'BisQue_CombinedSubtractions.lsm')
path_csv = os.path.join(path, 'BisQue_CombinedSubtractions.csv')
_____no_output_____
BSD-3-Clause
BisQue_Graphical_Annotations.ipynb
benlazarine/bisque-notebooks
Parse CSV file and load nuclei positions
------------------------------------------
We'll create a list of XYZT coordinates with confidence.
# x, y, z, t, confidence
coords = []
with open(path_csv, 'r') as csvfile:
    reader = csv.reader(csvfile)
    h = next(reader)  # skip the header row
    for r in reader:
        c = (r[0], r[1], r[2], r[4])
        print(c)
        coords.append(c)
_____no_output_____
BSD-3-Clause
BisQue_Graphical_Annotations.ipynb
benlazarine/bisque-notebooks
Initialize an authenticated session
--------------
Initialize a BisQue session using simple user credentials.
root = 'https://bisque.cyverse.org'
user = 'demo'
pswd = 'iplant'
session = BQSession().init_local(user, pswd, bisque_root=root, create_mex=False)
_____no_output_____
BSD-3-Clause
BisQue_Graphical_Annotations.ipynb
benlazarine/bisque-notebooks
Create XML image descriptor
---------------------------
We'll provide a suggested path in the remote user's directory.
path_on_bisque = 'demo/nuclei_%s/%s' % (datetime.now().strftime('%Y%m%dT%H%M%S'), os.path.basename(path_img))
resource = etree.Element('image', name=path_on_bisque)
print(etree.tostring(resource, pretty_print=True))
_____no_output_____
BSD-3-Clause
BisQue_Graphical_Annotations.ipynb
benlazarine/bisque-notebooks
Upload the image
-----------------
# use the import service (/import/transfer) to upload the image
r = etree.XML(session.postblob(path_img, xml=resource)).find('./')
if r is None or r.get('uri') is None:
    print('Upload failed')
else:
    print('Uploaded ID: %s, URL: %s\n' % (r.get('resource_uniq'), r.get('uri')))
    print(etree.tostring(r, pretty_p...
_____no_output_____
BSD-3-Clause
BisQue_Graphical_Annotations.ipynb
benlazarine/bisque-notebooks
Add graphical annotations
------------------------------
We'll create point annotations as XML attached to the image we just uploaded into BisQue.
g = etree.SubElement(r, 'gobject', type='My nuclei')
for c in coords:
    p = etree.SubElement(g, 'point')
    etree.SubElement(p, 'vertex', x=c[0], y=c[1], z=c[2])
    etree.SubElement(p, 'tag', name='confidence', value=c[3])
print(etree.tostring(r, pretty_print=True))
_____no_output_____
BSD-3-Clause
BisQue_Graphical_Annotations.ipynb
benlazarine/bisque-notebooks
Save graphical annotations to the system
------------------------------------------
After storing, all annotations become searchable.
url = session.service_url('data_service')
r = session.postxml(url, r)
if r is None or r.get('uri') is None:
    print('Adding annotations failed')
else:
    print('Image ID: %s, URL: %s' % (r.get('resource_uniq'), r.get('uri')))
_____no_output_____
BSD-3-Clause
BisQue_Graphical_Annotations.ipynb
benlazarine/bisque-notebooks
Copyright © 2017-2021 ABBYY Production LLC
#@title # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distribut...
_____no_output_____
ECL-2.0
NeoML/docs/en/Python/tutorials/KMeans.ipynb
NValerij/neoml
*k*-means clustering
[Download the tutorial as a Jupyter notebook](https://github.com/neoml-lib/neoml/blob/master/NeoML/docs/en/Python/tutorials/KMeans.ipynb)
In this tutorial, we will use the NeoML implementation of the *k*-means clustering algorithm to clusterize a randomly generated dataset. The tutorial includes the fol...
import numpy as np np.random.seed(451) n_dots = 128 n_clusters = 4 centers = np.array([(-2., -2.), (-2., 2.), (2., -2.), (2., 2.)]) X = np.zeros(shape=(n_dots, 2), dtype=np.float32) y = np.zeros(shape=(n_dots,), dtype=np.int32) for i in range(n_dots): # Ch...
_____no_output_____
ECL-2.0
NeoML/docs/en/Python/tutorials/KMeans.ipynb
NValerij/neoml
Cluster the data
Now we'll create a `neoml.Clustering.KMeans` object that represents the clustering algorithm, and feed the data into it.
import neoml

kmeans = neoml.Clustering.KMeans(max_iteration_count=1000,
                                 cluster_count=n_clusters,
                                 thread_count=4)
y_pred, centers_pred, vars_pred = kmeans.clusterize(X)
_____no_output_____
ECL-2.0
NeoML/docs/en/Python/tutorials/KMeans.ipynb
NValerij/neoml
Before going further let's take a look at the returned data.
print('y_pred')
print(' ', type(y_pred))
print(' ', y_pred.shape)
print(' ', y_pred.dtype)
print('centers_pred')
print(' ', type(centers_pred))
print(' ', centers_pred.shape)
print(' ', centers_pred.dtype)
print('vars_pred')
print(' ', type(vars_pred))
print(' ', vars_pred.shape)
print(' ', vars_pred.dtype)
y_pred
  <class 'numpy.ndarray'>
  (128,)
  int32
centers_pred
  <class 'numpy.ndarray'>
  (4, 2)
  float64
vars_pred
  <class 'numpy.ndarray'>
  (4, 2)
  float64
ECL-2.0
NeoML/docs/en/Python/tutorials/KMeans.ipynb
NValerij/neoml
As you can see, the `y_pred` array contains the cluster index of each object, while `centers_pred` and `vars_pred` contain the centers and variances of each cluster. Visualize the results In this section we'll draw both clusterizations: ground truth and predicted.
%matplotlib inline import matplotlib.pyplot as plt colors = { 0: 'r', 1: 'g', 2: 'b', 3: 'y' } # Create figure with 2 subplots fig, axs = plt.subplots(ncols=2) fig.set_size_inches(10, 5) # Show ground truth axs[0].set_title('Ground truth') axs[0].scatter(X[:, 0], X[:, 1], marker='o', c=list(map(colo...
_____no_output_____
ECL-2.0
NeoML/docs/en/Python/tutorials/KMeans.ipynb
NValerij/neoml
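As a sanity check on the returned arrays, each cluster center can be recomputed as the mean of that cluster's members using plain NumPy. This is a sketch with small synthetic stand-ins for the tutorial's `X` and `y_pred`:

```python
import numpy as np

# Hypothetical stand-ins for the tutorial's X and y_pred
X = np.array([[0., 0.], [0., 2.], [4., 0.], [4., 2.]], dtype=np.float32)
y_pred = np.array([0, 0, 1, 1], dtype=np.int32)

# The mean of each cluster's members reproduces its center
centers = np.array([X[y_pred == k].mean(axis=0) for k in np.unique(y_pred)])
print(centers)  # [[0. 1.], [4. 1.]]
```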
BS: 32 MNIST
![](../ckpts/nb/grads_300_bs_16_dataset_mnist.png)
![](../ckpts/nb/grads_200_bs_16_dataset_mnist.png)
![](../ckpts/nb/grads_100_bs_16_dataset_mnist.png)
![](../ckpts/nb/grads_50_bs_16_dataset_mnist.png)

BS: 128 MNIST
![](../ckpts/nb/grads_300_bs_128_dataset_mnist.png)
![](../ckpts/nb/grads_200_bs_128_dataset_m...
if sdirs_algo=='pca': sdirs0, _ = pca_transform(sdirs0, n0) # sdirs1, _ = pca_transform(sdirs1, n1) else: sdirs0, _ = np.linalg.qr(sdirs0) # sdirs1, _ = np.linalg.qr(sdirs1) sdirs0.shape, sdirs1.shape sdirs = [[t.Tensor(sdirs0[:, _].reshape(output_size, input_size)), t.Tensor(sdirs1[:,_].reshape(output_...
_____no_output_____
MIT
LBGM/nb/11. RandomGradientSamplingMNIST.ipynb
sidsrini12/FURL_Sim
pretraining
trainloader = get_trainloader(dataset, 256, False) testloader = get_testloader(dataset, 256, False) model = resnet.resnet18(num_channels=1, num_classes=output_size).to(device) model.load_state_dict(t.load('../ckpts/init/{}_resnet18.init'.format(dataset))) correcti = 0 x_test = 0 for idx, (data, labels) in enumerate(te...
_____no_output_____
MIT
LBGM/nb/11. RandomGradientSamplingMNIST.ipynb
sidsrini12/FURL_Sim
w/o gradient approximation
model = resnet.resnet18(num_channels=1, num_classes=output_size).to(device) model.load_state_dict(t.load('../ckpts/init/{}_resnet18.init'.format(dataset))) xb_train, yb_train = [], [] xb_test, yb_test =[], [] for _ in tqdm(range(1, epochs+1), leave=False): xb_train.append(_) correcti = 0 for idx, (data, la...
_____no_output_____
MIT
LBGM/nb/11. RandomGradientSamplingMNIST.ipynb
sidsrini12/FURL_Sim
gradient approximation using all directions
model = resnet.resnet18(num_channels=1, num_classes=output_size).to(device) model.load_state_dict(t.load('../ckpts/init/{}_resnet18.init'.format(dataset))) xa_train, ya_train = [], [] xa_test, ya_test = [], [] for _ in tqdm(range(1, epochs+1), leave=False): start = time.time() xa_train.append(_) xa_test.ap...
_____no_output_____
MIT
LBGM/nb/11. RandomGradientSamplingMNIST.ipynb
sidsrini12/FURL_Sim
gradient approximation using n directions
n = 1 model = resnet.resnet18(num_channels=1, num_classes=output_size).to(device) model.load_state_dict(t.load('../ckpts/init/{}_resnet18.init'.format(dataset))) xe_train, ye_train = [], [] xe_test, ye_test = [], [] for _ in tqdm(range(1, epochs+1), leave=False): start = time.time() xe_train.append(_) xe_t...
clf_resnet18_mnist_algo_pca_bs_16_sgd_vs_sgd_approx_random_grad_sampling
MIT
LBGM/nb/11. RandomGradientSamplingMNIST.ipynb
sidsrini12/FURL_Sim
Dependencies
import os import sys import cv2 import shutil import random import warnings import numpy as np import pandas as pd import seaborn as sns import matplotlib.pyplot as plt from tensorflow import set_random_seed from sklearn.utils import class_weight from sklearn.model_selection import train_test_split from sklearn.metrics...
/opt/conda/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:516: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_qint8 = np.dtype([("qint8", np.int8, 1)]) /opt/conda/lib/python3.6/sit...
MIT
Model backlog/EfficientNet/EfficientNetB5/169 - EfficientNetB5 - Reg - Big Classifier Max.ipynb
ThinkBricks/APTOS2019BlindnessDetection
Load data
hold_out_set = pd.read_csv('../input/aptos-data-split/hold-out.csv') X_train = hold_out_set[hold_out_set['set'] == 'train'] X_val = hold_out_set[hold_out_set['set'] == 'validation'] test = pd.read_csv('../input/aptos2019-blindness-detection/test.csv') print('Number of train samples: ', X_train.shape[0]) print('Number o...
Number of train samples: 2929 Number of validation samples: 733 Number of test samples: 1928
MIT
Model backlog/EfficientNet/EfficientNetB5/169 - EfficientNetB5 - Reg - Big Classifier Max.ipynb
ThinkBricks/APTOS2019BlindnessDetection
Model parameters
# Model parameters FACTOR = 4 BATCH_SIZE = 8 * FACTOR EPOCHS = 20 WARMUP_EPOCHS = 5 LEARNING_RATE = 1e-4 * FACTOR WARMUP_LEARNING_RATE = 1e-3 * FACTOR HEIGHT = 224 WIDTH = 224 CHANNELS = 3 ES_PATIENCE = 5 RLROP_PATIENCE = 3 DECAY_DROP = 0.5 LR_WARMUP_EPOCHS_1st = 2 LR_WARMUP_EPOCHS_2nd = 5 STEP_SIZE = len(X_train) // B...
_____no_output_____
MIT
Model backlog/EfficientNet/EfficientNetB5/169 - EfficientNetB5 - Reg - Big Classifier Max.ipynb
ThinkBricks/APTOS2019BlindnessDetection
Pre-process images
train_base_path = '../input/aptos2019-blindness-detection/train_images/' test_base_path = '../input/aptos2019-blindness-detection/test_images/' train_dest_path = 'base_dir/train_images/' validation_dest_path = 'base_dir/validation_images/' test_dest_path = 'base_dir/test_images/' # Making sure directories don't exist...
_____no_output_____
MIT
Model backlog/EfficientNet/EfficientNetB5/169 - EfficientNetB5 - Reg - Big Classifier Max.ipynb
ThinkBricks/APTOS2019BlindnessDetection
Data generator
datagen=ImageDataGenerator(rescale=1./255, rotation_range=360, horizontal_flip=True, vertical_flip=True) train_generator=datagen.flow_from_dataframe( dataframe=X_train, directory=train_dest...
_____no_output_____
MIT
Model backlog/EfficientNet/EfficientNetB5/169 - EfficientNetB5 - Reg - Big Classifier Max.ipynb
ThinkBricks/APTOS2019BlindnessDetection
Model
def create_model(input_shape): input_tensor = Input(shape=input_shape) base_model = EfficientNetB5(weights=None, include_top=False, input_tensor=input_tensor) base_model.load_weights('../input/efficientnet-keras-weights-b0b5/efficientnet-b5_im...
_____no_output_____
MIT
Model backlog/EfficientNet/EfficientNetB5/169 - EfficientNetB5 - Reg - Big Classifier Max.ipynb
ThinkBricks/APTOS2019BlindnessDetection
Train top layers
model = create_model(input_shape=(HEIGHT, WIDTH, CHANNELS)) for layer in model.layers: layer.trainable = False for i in range(-7, 0): model.layers[i].trainable = True cosine_lr_1st = WarmUpCosineDecayScheduler(learning_rate_base=WARMUP_LEARNING_RATE, total_steps=TOT...
Epoch 1/5 - 56s - loss: 3.6864 - acc: 0.2126 - val_loss: 2.2876 - val_acc: 0.2898 Epoch 2/5 - 42s - loss: 2.0355 - acc: 0.2989 - val_loss: 1.7932 - val_acc: 0.2739 Epoch 3/5 - 42s - loss: 1.3178 - acc: 0.3542 - val_loss: 1.9237 - val_acc: 0.2653 Epoch 4/5 - 42s - loss: 1.3226 - acc: 0.3797 - val_loss: 1.8302 - val_...
MIT
Model backlog/EfficientNet/EfficientNetB5/169 - EfficientNetB5 - Reg - Big Classifier Max.ipynb
ThinkBricks/APTOS2019BlindnessDetection
Fine-tune the complete model
for layer in model.layers: layer.trainable = True es = EarlyStopping(monitor='val_loss', mode='min', patience=ES_PATIENCE, restore_best_weights=True, verbose=1) cosine_lr_2nd = WarmUpCosineDecayScheduler(learning_rate_base=LEARNING_RATE, total_steps=TOTAL_STEPS_2nd, ...
_____no_output_____
MIT
Model backlog/EfficientNet/EfficientNetB5/169 - EfficientNetB5 - Reg - Big Classifier Max.ipynb
ThinkBricks/APTOS2019BlindnessDetection
Model loss graph
fig, (ax1, ax2) = plt.subplots(2, 1, sharex='col', figsize=(20, 14)) ax1.plot(history['loss'], label='Train loss') ax1.plot(history['val_loss'], label='Validation loss') ax1.legend(loc='best') ax1.set_title('Loss') ax2.plot(history['acc'], label='Train accuracy') ax2.plot(history['val_acc'], label='Validation accurac...
_____no_output_____
MIT
Model backlog/EfficientNet/EfficientNetB5/169 - EfficientNetB5 - Reg - Big Classifier Max.ipynb
ThinkBricks/APTOS2019BlindnessDetection
Model Evaluation
Confusion Matrix
Original thresholds
labels = ['0 - No DR', '1 - Mild', '2 - Moderate', '3 - Severe', '4 - Proliferative DR'] def plot_confusion_matrix(train, validation, labels=labels): train_labels, train_preds = train validation_labels, validation_preds = validation fig, (ax1, ax2) = plt.subplots(1, 2, sharex='col', figsize=(24, 7)) tra...
_____no_output_____
MIT
Model backlog/EfficientNet/EfficientNetB5/169 - EfficientNetB5 - Reg - Big Classifier Max.ipynb
ThinkBricks/APTOS2019BlindnessDetection
Quadratic Weighted Kappa
def evaluate_model(train, validation): train_labels, train_preds = train validation_labels, validation_preds = validation print("Train Cohen Kappa score: %.3f" % cohen_kappa_score(train_preds, train_labels, weights='quadratic')) print("Validation Cohen Kappa score: %.3f" % cohen_kappa_score(val...
Train Cohen Kappa score: 0.960 Validation Cohen Kappa score: 0.902 Complete set Cohen Kappa score: 0.949
MIT
Model backlog/EfficientNet/EfficientNetB5/169 - EfficientNetB5 - Reg - Big Classifier Max.ipynb
ThinkBricks/APTOS2019BlindnessDetection
Apply model to test set and output predictions
def apply_tta(model, generator, steps=10): step_size = generator.n//generator.batch_size preds_tta = [] for i in range(steps): generator.reset() preds = model.predict_generator(generator, steps=step_size) preds_tta.append(preds) return np.mean(preds_tta, axis=0) preds = apply_t...
_____no_output_____
MIT
Model backlog/EfficientNet/EfficientNetB5/169 - EfficientNetB5 - Reg - Big Classifier Max.ipynb
ThinkBricks/APTOS2019BlindnessDetection
Predictions class distribution
fig = plt.subplots(sharex='col', figsize=(24, 8.7))
sns.countplot(x="diagnosis", data=results, palette="GnBu_d").set_title('Test')
sns.despine()
plt.show()

results.to_csv('submission.csv', index=False)
display(results.head())
_____no_output_____
MIT
Model backlog/EfficientNet/EfficientNetB5/169 - EfficientNetB5 - Reg - Big Classifier Max.ipynb
ThinkBricks/APTOS2019BlindnessDetection
4.5.1 load and save NDarray
import numpy as np
import tensorflow as tf

x = tf.ones(3)
x
np.save('x.npy', x)
x2 = np.load('x.npy')
x2
y = tf.zeros(4)
np.save('xy.npy', [x, y])
x2, y2 = np.load('xy.npy', allow_pickle=True)
(x2, y2)
mydict = {'x': x, 'y': y}
np.save('mydict.npy', mydict)
mydict2 = np.load('mydict.npy', allow_pickle=True)
mydict2
_____no_output_____
Apache-2.0
ch4_DL_computation/4.5 load and save.ipynb
gunpowder78/Dive-into-DL-TensorFlow2.0
4.5.2 load and save model parameters
X = tf.random.normal((2,20))
X

class MLP(tf.keras.Model):
    def __init__(self):
        super().__init__()
        self.flatten = tf.keras.layers.Flatten()  # Flatten collapses every dimension except the first (batch_size)
        self.dense1 = tf.keras.layers.Dense(units=256, activation=tf.nn.relu)
        self.dense2 = tf.keras.layers.Dense(un...
_____no_output_____
Apache-2.0
ch4_DL_computation/4.5 load and save.ipynb
gunpowder78/Dive-into-DL-TensorFlow2.0
Exp 43 analysis
See `./informercial/Makefile` for experimental details.
import os
import numpy as np
from IPython.display import Image
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import seaborn as sns
sns.set_style('ticks')
matplotlib.rcParams.update({'font.size': 16})
matplotlib.rc('axes', titlesize=16)
from inf...
_____no_output_____
MIT
notebooks/exp43_analysis.ipynb
CoAxLab/infomercial
Load and process data
data_path = "/Users/qualia/Code/infomercial/data/"
exp_name = "exp43"
best_params = load_checkpoint(os.path.join(data_path, f"{exp_name}_best.pkl"))
sorted_params = load_checkpoint(os.path.join(data_path, f"{exp_name}_sorted.pkl"))
best_params
_____no_output_____
MIT
notebooks/exp43_analysis.ipynb
CoAxLab/infomercial
Performance of best parameters
env_name = 'BanditOneHigh2-v0' num_episodes = 20*100 # Run w/ best params result = meta_bandit( env_name=env_name, num_episodes=num_episodes, lr=best_params["lr"], tie_threshold=best_params["tie_threshold"], seed_value=19, save="exp43_best_model.pkl" ) # Plot run episodes = result["episodes"]...
Best arm: 0, last arm: 0
MIT
notebooks/exp43_analysis.ipynb
CoAxLab/infomercial
Sensitivity to parameter choices
total_Rs = [] ties = [] lrs = [] trials = list(sorted_params.keys()) for t in trials: total_Rs.append(sorted_params[t]['total_E']) ties.append(sorted_params[t]['tie_threshold']) lrs.append(sorted_params[t]['lr']) # Init plot fig = plt.figure(figsize=(10, 18)) grid = plt.GridSpec(4, 1, wspace=0.3, hspa...
_____no_output_____
MIT
notebooks/exp43_analysis.ipynb
CoAxLab/infomercial
Distributions of parameters
# Init plot
fig = plt.figure(figsize=(5, 6))
grid = plt.GridSpec(2, 1, wspace=0.3, hspace=0.8)

plt.subplot(grid[0, 0])
plt.hist(ties, color="black")
plt.xlabel("tie threshold")
plt.ylabel("Count")
_ = sns.despine()

plt.subplot(grid[1, 0])
plt.hist(lrs, color="black")
plt.xlabel("lr")
plt.ylabel("Count")
_ = sns.despi...
_____no_output_____
MIT
notebooks/exp43_analysis.ipynb
CoAxLab/infomercial
Distribution of total reward
# Init plot
fig = plt.figure(figsize=(5, 2))
grid = plt.GridSpec(1, 1, wspace=0.3, hspace=0.8)

plt.subplot(grid[0, 0])
plt.hist(total_Rs, color="black", bins=50)
plt.xlabel("Total reward")
plt.ylabel("Count")
plt.xlim(0, 10)
_ = sns.despine()
_____no_output_____
MIT
notebooks/exp43_analysis.ipynb
CoAxLab/infomercial
Lambda School Data Science
*Unit 2, Sprint 2, Module 1*
---
Decision Trees Assignment
- [ ] [Sign up for a Kaggle account](https://www.kaggle.com/), if you don't already have one. Go to our Kaggle InClass competition website. You will be given the URL in Slack. Go to the Rules page. Accept the rules of the competition. N...
import sys # If you're on Colab: if 'google.colab' in sys.modules: DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Kaggle-Challenge/master/data/' !pip install category_encoders==2.* !pip install pandas-profiling==2.* # If you're working locally: else: DATA_PATH = '../data/' impor...
_____no_output_____
MIT
module1-decision-trees/LS_DS_221_assignment.ipynb
SwetaSengupta/DS-Unit-2-Kaggle-Challenge
# Importing some python libraries.
import numpy as np
from numpy.random import randn, rand

import matplotlib.pyplot as pl
from matplotlib.pyplot import plot

import seaborn as sns
%matplotlib inline

# Fixing figure sizes
from pylab import rcParams
rcParams['figure.figsize'] = 10, 5

sns.set_palette('Reds_r')
_____no_output_____
MIT
ipython_notebooks/reactions.ipynb
kgourgou/stochastic-simulations-class
Reaction Network Homework
In this homework, we will study a very simple set of reactions by modelling it in three different ways. First, we shall employ an ODE model called the **Reaction Rate Equation**. Then, we will solve the **Chemical Langevin Equation** and, finally, we will simulate the exact model by "sol...
# Solution of the RRE
def x(t, x0=3, a=10.0, mu=1.0):
    return (x0 - a/mu)*np.exp(-t*mu) + a/mu
_____no_output_____
MIT
ipython_notebooks/reactions.ipynb
kgourgou/stochastic-simulations-class
We note that there is a stationary solution, $x(t)=a/\mu$. From the exponential in the solution, we can see that this is an attracting fixed point.
t = np.linspace(0, 3)
x0list = np.array([0.5, 1, 15])
sns.set_palette("Reds", n_colors=3)

for x0 in x0list:
    pl.plot(t, x(t, x0), linewidth=4)

pl.title('Population numbers for different initial conditions.', fontsize=20)
pl.xlabel('Time', fontsize=20)
_____no_output_____
MIT
ipython_notebooks/reactions.ipynb
kgourgou/stochastic-simulations-class
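A quick numerical check, reusing the closed-form solution `x` defined earlier, confirms that trajectories from different initial conditions all approach the fixed point $a/\mu = 10$:

```python
import numpy as np

# Closed-form solution of the RRE (same as defined above)
def x(t, x0=3, a=10.0, mu=1.0):
    return (x0 - a/mu)*np.exp(-t*mu) + a/mu

# At large t, every trajectory is essentially at the fixed point a/mu = 10
for x0 in [0.5, 1.0, 15.0]:
    print(abs(x(20.0, x0=x0) - 10.0))  # all vanishingly small
```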
Chemical Langevin Equation
Next, we will model the system by using the CLE. For our particular birth/death process, this will be $$dX_t=(a-\mu\cdot X_t)dt+(\sqrt{a}-\sqrt{\mu\cdot X_t})dW.$$ To solve this, we shall use the Euler-Maruyama scheme from the previous homework. We fix a positive $\Delta t$. Then, the scheme s...
def EM(xinit,T,Dt=0.1,a=1,mu=2): ''' Returns the solution of the CLE with parameters a, mu Arguments ========= xinit : real, initial condition. Dt : real, stepsize of the Euler-Maruyama. T : real, final time to reach. a : real, parameter of...
_____no_output_____
MIT
ipython_notebooks/reactions.ipynb
kgourgou/stochastic-simulations-class
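The Euler-Maruyama iteration described above can be sketched as follows. This is a minimal illustration, not the notebook's own `EM` function; the name `em_path` and the seeded generator are assumptions, and the drift and noise terms follow the CLE exactly as written above:

```python
import numpy as np

def em_path(xinit, T, Dt=0.01, a=10.0, mu=1.0, rng=None):
    # Euler-Maruyama for dX = (a - mu*X) dt + (sqrt(a) - sqrt(mu*X)) dW
    rng = np.random.default_rng(0) if rng is None else rng
    n = int(T / Dt)
    x = np.empty(n)
    x[0] = xinit
    for i in range(1, n):
        dW = rng.normal(0.0, np.sqrt(Dt))          # Brownian increment
        drift = (a - mu * x[i-1]) * Dt
        noise = (np.sqrt(a) - np.sqrt(max(mu * x[i-1], 0.0))) * dW
        x[i] = x[i-1] + drift + noise
    return x

path = em_path(xinit=0.5, T=10.0)
print(path[-1])  # close to the stationary value a/mu = 10
```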
Similarly to the previous case, here is a run with multiple initial conditions.
T = 10 # final time to reach Dt = 0.01 # time-step for EM # Set the palette to reds with ten colors sns.set_palette('Reds',10) def plotPaths(T,Dt): n = int(T/Dt) t = np.linspace(0,T,n) xinitlist = np.linspace(10,15,10) for x0 in xinitlist : path = EM(xinit=x0,T=T,Dt=Dt,a=10.0,mu=1.0) ...
Paths decay towards 10.0004633499 The stationary point is 1.0
MIT
ipython_notebooks/reactions.ipynb
kgourgou/stochastic-simulations-class
We notice that the asymptotic behavior of the CLE is the same as that of the RRE. The only notable difference is the initial random kicks in the paths, due to the stochasticity.

Chemical Master Equation
Finally, we shall simulate the system exactly by using the Stochastic Simulation Algorithm (SSA).
def SSA(xinit, nsteps, a=10.0, mu=1.0): ''' Using SSA to exactly simulate the death/birth process starting from xinit and for nsteps. a and mu are parameters of the propensities. Returns ======= path : array-like, the path generated. tpath:...
_____no_output_____
MIT
ipython_notebooks/reactions.ipynb
kgourgou/stochastic-simulations-class
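A compact SSA (Gillespie) loop for the birth/death process, matching the propensities $a$ (birth) and $\mu x$ (death), might look like the sketch below. The function name `ssa_sketch` and the seeded generator are illustrative assumptions, not the notebook's own `SSA`:

```python
import numpy as np

def ssa_sketch(xinit, nsteps, a=10.0, mu=1.0, rng=None):
    # Gillespie's algorithm: birth at rate a, death at rate mu*x
    rng = np.random.default_rng(0) if rng is None else rng
    path = np.empty(nsteps)
    tpath = np.empty(nsteps)
    x, t = xinit, 0.0
    for i in range(nsteps):
        birth, death = a, mu * x
        total = birth + death
        t += rng.exponential(1.0 / total)   # waiting time to next reaction
        if rng.random() < birth / total:    # pick which reaction fires
            x += 1
        else:
            x -= 1
        path[i], tpath[i] = x, t
    return path, tpath

path, tpath = ssa_sketch(xinit=1, nsteps=1000)
print(path[-1])  # fluctuates around the stationary mean a/mu = 10
```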
Now that we have SSA setup, we can run multiple paths and compare the results to the previous cases.
# Since the paths below are not really related,
# let's use a more interesting palette for the plot.
sns.set_palette('hls', 1)

for _ in range(1):
    path, tpath = SSA(xinit=1, nsteps=100)
    # Since this is the path of a jump process,
    # I'm switching from "plot" to "step"
    # to get the figure right. :)
    ...
_____no_output_____
MIT
ipython_notebooks/reactions.ipynb
kgourgou/stochastic-simulations-class
We can see three chains above, all starting from $X_0=1$, and simulated with the SSA.
npaths = 1
nsteps = 30000
path = np.zeros([npaths, nsteps])
for i in range(npaths):
    path[i, :], tpath = SSA(xinit=1, nsteps=nsteps)

# Time-average of the path after a burn-in period
skip = 20000
sum(path[0, skip:nsteps-1]*tpath[skip:nsteps-1])/sum(tpath[skip:nsteps-1])
_____no_output_____
MIT
ipython_notebooks/reactions.ipynb
kgourgou/stochastic-simulations-class
Copyright 2018 The TensorFlow Authors.
#@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under...
_____no_output_____
Apache-2.0
site/ko/r1/tutorials/keras/overfit_and_underfit.ipynb
justaverygoodboy/docs-l10n
Overfitting and underfitting Run in Google Colab | View source on GitHub Note: This document was translated by the TensorFlow community. Because community translations are best-effort, there is no guarantee that they exactly match the [official English documentation](https://www.tensorflow.org/?hl=en) or reflect its latest state. If you can improve this translation, please send a pull request to the [tensorflow/docs](https://github.com/tensorflow/docs) GitHub repository. To volunteer for document translation or review...
import tensorflow.compat.v1 as tf from tensorflow import keras import numpy as np import matplotlib.pyplot as plt print(tf.__version__)
_____no_output_____
Apache-2.0
site/ko/r1/tutorials/keras/overfit_and_underfit.ipynb
justaverygoodboy/docs-l10n
Download the IMDB dataset Rather than using an embedding as in the previous notebook, here we will multi-hot encode the sentences. This model will quickly overfit to the training set; it was chosen to demonstrate when overfitting happens and how to fight it. Multi-hot encoding turns integer sequences into vectors of 0s and 1s. Concretely, this means turning the sequence `[3, 5]` into a 10,000-dimensional vector that is all zeros except for indices 3 and 5, which are ones.
NUM_WORDS = 10000

(train_data, train_labels), (test_data, test_labels) = keras.datasets.imdb.load_data(num_words=NUM_WORDS)

def multi_hot_sequences(sequences, dimension):
    # Create an all-zeros matrix of shape (len(sequences), dimension)
    results = np.zeros((len(sequences), dimension))
    for i, word_indices in enumerate(seq...
_____no_output_____
Apache-2.0
site/ko/r1/tutorials/keras/overfit_and_underfit.ipynb
justaverygoodboy/docs-l10n
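The multi-hot encoding described above is easy to sketch with plain NumPy for a single sequence: `[3, 5]` becomes a vector of zeros with ones at indices 3 and 5 (the helper name `multi_hot` is illustrative):

```python
import numpy as np

def multi_hot(sequence, dimension=10000):
    # Vector of zeros with 1.0 at every index appearing in the sequence
    result = np.zeros(dimension)
    result[sequence] = 1.0
    return result

v = multi_hot([3, 5])
print(v[3], v[5], v.sum())  # 1.0 1.0 2.0
```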
Let's look at one of the resulting multi-hot vectors. The word indices are sorted by frequency, so, as the plot shows, 1-values are denser near index 0:
plt.plot(train_data[0])
_____no_output_____
Apache-2.0
site/ko/r1/tutorials/keras/overfit_and_underfit.ipynb
justaverygoodboy/docs-l10n
Demonstrate overfitting The simplest way to prevent overfitting is to reduce the size of the model, i.e. the number of learnable parameters in the model (which is determined by the number of layers and the number of units per layer). In deep learning, the number of learnable parameters in a model is often referred to as the model's "capacity". Intuitively, a model with more parameters has more "memorization capacity" and can learn a perfect dictionary-like mapping between training samples and their targets, a mapping without any generalization power. This is useless when making predictions on previously unseen data...
baseline_model = keras.Sequential([
    # `input_shape` is only required here so that `.summary` works
    keras.layers.Dense(16, activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
    keras.layers.Dense(16, activation=tf.nn.relu),
    keras.layers.Dense(1, activation=tf.nn.sigmoid)
])

baseline_model.compile(optimizer='adam',
_____no_output_____
Apache-2.0
site/ko/r1/tutorials/keras/overfit_and_underfit.ipynb
justaverygoodboy/docs-l10n
Create a smaller model Let's create a model with fewer hidden units to compare against the baseline model we just created:
smaller_model = keras.Sequential([ keras.layers.Dense(4, activation=tf.nn.relu, input_shape=(NUM_WORDS,)), keras.layers.Dense(4, activation=tf.nn.relu), keras.layers.Dense(1, activation=tf.nn.sigmoid) ]) smaller_model.compile(optimizer='adam', loss='binary_crossentropy', met...
_____no_output_____
Apache-2.0
site/ko/r1/tutorials/keras/overfit_and_underfit.ipynb
justaverygoodboy/docs-l10n
Train the model using the same data:
smaller_history = smaller_model.fit(train_data, train_labels, epochs=20, batch_size=512, validation_data=(test_data, test_labels), verbose=2...
_____no_output_____
Apache-2.0
site/ko/r1/tutorials/keras/overfit_and_underfit.ipynb
justaverygoodboy/docs-l10n
Create a bigger model We can build a much bigger model and see how quickly it starts to overfit. Let's benchmark a network that has much more capacity than the problem warrants:
bigger_model = keras.models.Sequential([ keras.layers.Dense(512, activation=tf.nn.relu, input_shape=(NUM_WORDS,)), keras.layers.Dense(512, activation=tf.nn.relu), keras.layers.Dense(1, activation=tf.nn.sigmoid) ]) bigger_model.compile(optimizer='adam', loss='binary_crossentropy', ...
_____no_output_____
Apache-2.0
site/ko/r1/tutorials/keras/overfit_and_underfit.ipynb
justaverygoodboy/docs-l10n
And, again, train the model using the same data:
bigger_history = bigger_model.fit(train_data, train_labels,
                                  epochs=20,
                                  batch_size=512,
                                  validation_data=(test_data, test_labels),
                                  verbose=2)
_____no_output_____
Apache-2.0
site/ko/r1/tutorials/keras/overfit_and_underfit.ipynb
justaverygoodboy/docs-l10n
Plot the training and validation loss The solid lines show the training loss, and the dashed lines show the validation loss (remember: a lower validation loss indicates a better model). Here the smaller network begins overfitting later than the baseline model (after 6 epochs rather than 4) and its performance degrades much more slowly once it starts overfitting.
def plot_history(histories, key='binary_crossentropy'): plt.figure(figsize=(16,10)) for name, history in histories: val = plt.plot(history.epoch, history.history['val_'+key], '--', label=name.title()+' Val') plt.plot(history.epoch, history.history[key], color=val[0].get_color(), ...
_____no_output_____
Apache-2.0
site/ko/r1/tutorials/keras/overfit_and_underfit.ipynb
justaverygoodboy/docs-l10n
The larger network begins overfitting almost immediately, after just one epoch, and overfits much more severely. The more capacity the network has, the quicker it can model the training data (resulting in a low training loss), but the more susceptible it is to overfitting (resulting in a large difference between the training and validation loss). Strategies Add weight regularization You may be familiar with Occam's Razor: given two explanations for something, the explanation most likely to be correct is the "simplest" one, the one that makes the fewest assumptions. This also applies to the models learned by neural networks: given some training data and a ne...
l2_model = keras.models.Sequential([ keras.layers.Dense(16, kernel_regularizer=keras.regularizers.l2(0.001), activation=tf.nn.relu, input_shape=(NUM_WORDS,)), keras.layers.Dense(16, kernel_regularizer=keras.regularizers.l2(0.001), activation=tf.nn.relu), keras.l...
_____no_output_____
Apache-2.0
site/ko/r1/tutorials/keras/overfit_and_underfit.ipynb
justaverygoodboy/docs-l10n
```l2(0.001)``` means that every coefficient in the layer's weight matrix will add ```0.001 * weight_coefficient_value**2``` to the network's total loss. Note that this penalty is only added at training time, so the network's loss will be much higher during training than at test time. Let's check the impact of L2 regularization:
plot_history([('baseline', baseline_history), ('l2', l2_model_history)])
_____no_output_____
Apache-2.0
site/ko/r1/tutorials/keras/overfit_and_underfit.ipynb
justaverygoodboy/docs-l10n
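The penalty described above can be computed by hand for a toy weight matrix (a sketch with made-up weights; Keras accumulates this per layer during training):

```python
import numpy as np

l2_coef = 0.001
weights = np.array([[0.5, -1.0],
                    [2.0,  0.0]])  # hypothetical layer weights

# Each weight w contributes l2_coef * w**2 to the total loss:
# 0.001 * (0.25 + 1.0 + 4.0 + 0.0) = 0.00525
penalty = l2_coef * np.sum(weights ** 2)
print(penalty)
```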
As you can see, the L2-regularized model is much more resistant to overfitting than the baseline model, even though both have the same number of parameters. Add dropout Dropout is one of the most effective and most widely used regularization techniques for neural networks, developed by Hinton and his students at the University of Toronto. Applied to a layer, dropout randomly "drops out" (i.e. sets to zero) some of the layer's output features during training. Say a given layer would normally return the vector [0.2, 0.5, 1.3, 0.8, 1.1] for a given input sample during training; after applying dropout, this vector...
dpt_model = keras.models.Sequential([ keras.layers.Dense(16, activation=tf.nn.relu, input_shape=(NUM_WORDS,)), keras.layers.Dropout(0.5), keras.layers.Dense(16, activation=tf.nn.relu), keras.layers.Dropout(0.5), keras.layers.Dense(1, activation=tf.nn.sigmoid) ]) dpt_model.compile(optimizer='adam', ...
_____no_output_____
Apache-2.0
site/ko/r1/tutorials/keras/overfit_and_underfit.ipynb
justaverygoodboy/docs-l10n
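The drop-and-rescale behaviour described above can be simulated directly. A numpy sketch (the vector and random mask are illustrative; Keras applies the equivalent "inverted dropout" internally):

```python
import numpy as np

rng = np.random.default_rng(0)
layer_output = np.array([0.2, 0.5, 1.3, 0.8, 1.1])
rate = 0.5

# Training time: zero out each unit independently with probability `rate`
mask = rng.random(layer_output.shape) >= rate
dropped = layer_output * mask

# Inverted dropout: scale the survivors by 1/(1 - rate) at train time,
# so no rescaling is needed at test time
scaled = dropped / (1 - rate)
print(dropped, scaled)
```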
Gambling 101You are participating in a lottery game. A deck of cards numbered from 1-50 is shuffled and 5 cards are drawn out and laid out. You are given a coin. For each card, you toss the coin and pick it up if it says heads, otherwise you don't pick it up. The sum of the cards is what you win.The lottery ticket cos...
#my_input1 = input("Enter the 1st input here :" ) #my_input2 = input("Enter the 2nd input here :" ) #my_input3 = input("Enter the 3rd input here :" ) #my_input4 = input("Enter the 4th input here :" ) #my_input5 = input("Enter the 5th input here :" ) c = input("cost of lottery ticket here :" ) import ast,sys input_str ...
_____no_output_____
MIT
Inferential Coding Practice.ipynb
anushka-DS/Inferential-Statistics
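A quick expected-value sketch for the game described above (back-of-the-envelope reasoning, not the graded solution): each of the 5 drawn cards is kept with probability 1/2, and a card uniform on 1..50 is worth 25.5 on average.

```python
# Expected winnings = 5 cards * P(keep) * E[card value]
expected_card = (1 + 50) / 2            # 25.5
expected_winnings = 5 * 0.5 * expected_card

def worth_playing(cost):
    # Play only if the expected winnings exceed the ticket price
    return expected_winnings > cost

print(expected_winnings)     # 63.75
print(worth_playing(50.0))   # True
print(worth_playing(100.0))  # False
```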
Generating normal distributionGenerate an array of real numbers representing a normal distribution. You will be given the mean and standard deviation as input. You have to generate 10 such numbers.**Hint:** You can use numpy's np.random here (see https://pynative.com/python-random-seed/ on how seeding works). To keep...
import numpy as np seed=int(input()) mean=float(input()) std_dev=float(input()) np.random.seed(seed) s = np.random.normal(mean, std_dev, 10) print(s)
1 0 0.1 [ 0.16243454 -0.06117564 -0.05281718 -0.10729686 0.08654076 -0.23015387 0.17448118 -0.07612069 0.03190391 -0.02493704]
MIT
Inferential Coding Practice.ipynb
anushka-DS/Inferential-Statistics
Confidence IntervalsFor a given column in a dataframe, you have to calculate the 90 percent confidence interval for its mean value. (You can find the Z* value for 90 percent confidence in previous segments.) The input will have the column name. The output should have the confidence interval printed as a tuple.**Note:** Do...
import pandas as pd import numpy as np df=pd.read_csv("https://media-doselect.s3.amazonaws.com/generic/N9LKLvBAx1y14PLoBdL0yRn3/Admission_Predict.csv") col=input() mean = df[col].mean() sd = df[col].std() n = len(df) Zstar=1.65 se = sd/np.sqrt(n) lcb = mean - Zstar * se ucb = mean + Zstar * se print((round(lcb,2),roun...
GRE Score (315.86, 317.75)
MIT
Inferential Coding Practice.ipynb
anushka-DS/Inferential-Statistics
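The same interval calculation can be exercised without downloading the CSV. A self-contained sketch on synthetic data (the sample below stands in for a column of the Admission_Predict.csv dataframe):

```python
import numpy as np

# Synthetic stand-in for a dataframe column
rng = np.random.default_rng(42)
sample = rng.normal(loc=100, scale=15, size=400)

z_star = 1.65                                   # z* for 90% confidence, as above
se = sample.std(ddof=1) / np.sqrt(len(sample))  # standard error of the mean
lcb = sample.mean() - z_star * se
ucb = sample.mean() + z_star * se
print((round(lcb, 2), round(ucb, 2)))
```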
College admissionsThe probability that a college will accept a student's application is x.Consider that m students have applied to college. You have to find the probability that at most n students are accepted by the college.The input will contain three lines with x, m and n respectively.The output should be rounded o...
#probability of accepting an application x=float(input()) #number of applicants m=int(input()) #find the probability that at most n applications are accepted n=int(input()) #write your code here import scipy.stats as ss dist=ss.binom(m,x) sum=0.0 for i in range(0,n+1): sum=sum+dist.pmf(i) print(round(sum,4))
_____no_output_____
MIT
Inferential Coding Practice.ipynb
anushka-DS/Inferential-Statistics
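The pmf-summing loop above is just the binomial CDF evaluated at n. A standard-library-only sketch of the same sum, with hypothetical inputs (x = 0.3, m = 20 applicants, at most n = 5 accepted):

```python
from math import comb

x, m, n = 0.3, 20, 5  # hypothetical inputs

# P(X <= n) for X ~ Binomial(m, x), summed term by term
prob = sum(comb(m, i) * x**i * (1 - x)**(m - i) for i in range(n + 1))
print(round(prob, 4))
```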
Tossing a coinGiven that you are tossing a coin n times, you have to find the probability of getting heads at most m times.The input will have two lines containing n and m respectively.**Sample Input:** 10 and 2 (on two lines) **Sample Output:** 0.0547
import scipy.stats as ss #number of trials n=int(input()) # find the probability of getting at most m heads m=int(input()) dist=ss.binom(n,0.5) sum=0.0 for i in range(0,m+1): sum=sum+dist.pmf(i) print(round(sum,4)) #you can also use the following #round(dist.cdf(m),2)
_____no_output_____
MIT
Inferential Coding Practice.ipynb
anushka-DS/Inferential-Statistics
Combination TheoryYou are given a list of n natural numbers. You select m numbers from the list at random. Find the probability that at least one of the selected alphabets is "x" where x is a number given to you as input.The first line of input will contain a list of numbers. The second line will contain m and the thi...
import ast,sys input_str = sys.stdin.read() input_list = ast.literal_eval(input_str) nums=input_list[0] #m numbers are chosen m=int(input_list[1]) #find probability of getting at least one x x=int(input_list[2]) from itertools import combinations num = 0 den = 0 for c in combinations(nums,m): den=den+1 if x...
_____no_output_____
MIT
Inferential Coding Practice.ipynb
anushka-DS/Inferential-Statistics
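The brute-force count in the cell above can be checked on a small hypothetical input: choosing m = 2 numbers from [1, 2, 3, 4, 5], the probability that x = 3 is among them is C(4,1)/C(5,2) = 2/5, matching the closed form m/n.

```python
from itertools import combinations
from fractions import Fraction

nums, m, x = [1, 2, 3, 4, 5], 2, 3  # hypothetical input

favourable = 0
total = 0
for c in combinations(nums, m):
    total += 1
    if x in c:
        favourable += 1

prob = Fraction(favourable, total)
print(prob)  # 2/5
```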
Rolling the diceA die is rolled n times. You have to find the probability that a number i is rolled at least j times (to four decimal places). The input will contain the integers n, i and j in three lines respectively. You can assume that j < n and 0 < i < 7.The output should be rounded off to four decimal places.**Sample I...
import scipy.stats as ss n=int(input()) i=int(input()) j=int(input()) dist=ss.binom(n,1/6) print(round(1-dist.cdf(j-1),4))
_____no_output_____
MIT
Inferential Coding Practice.ipynb
anushka-DS/Inferential-Statistics
Lego StackYou are given a row of Lego Blocks consisting of n blocks. All the blocks given have a square base whose side length is known. You need to stack the blocks over each other and create a vertical tower. Block-1 can go over Block-2 only if sideLength(Block-2) > sideLength(Block-1). From the row of Lego blocks, you...
import ast,sys input_str = sys.stdin.read() sides = ast.literal_eval(input_str)#list of side lengths l=len(sides) diff = [(sides[i]-sides[i+1]) for i in range(l-1)] i = 0 while (i<l-1 and diff[i]>=0) : i += 1 while (i<l-1 and diff[i]<=0) : i += 1 if (i==l-1) : print("Possible") else : print("Impossible") #to unders...
_____no_output_____
MIT
Inferential Coding Practice.ipynb
anushka-DS/Inferential-Statistics
Ensemble Learning Initial Imports
import warnings warnings.filterwarnings('ignore') import numpy as np import pandas as pd from pathlib import Path from collections import Counter from sklearn.metrics import balanced_accuracy_score from sklearn.metrics import confusion_matrix from imblearn.metrics import classification_report_imbalanced
_____no_output_____
ADSL
Starter_Code/credit_risk_ensemble.ipynb
eriklarson33/HW_11-Risky_Business
Read the CSV and Perform Basic Data Cleaning
# Load the data file_path = Path('Resources/LoanStats_2019Q1.csv') df = pd.read_csv(file_path) # Preview the data df.head() df.shape df.info() pd.set_option('display.max_rows', None) # or 1000 df.nunique(axis=0) df['recoveries'].value_counts() df['pymnt_plan'].value_counts() # Drop all unnecessary columns with only a...
_____no_output_____
ADSL
Starter_Code/credit_risk_ensemble.ipynb
eriklarson33/HW_11-Risky_Business
Split the Data into Training and Testing
# Create our features X = df_encoded.drop(columns=['loan_status']) # Create our target y = df_encoded['loan_status'].to_frame('loan_status') y.head() X.describe() # Check the balance of our target values y['loan_status'].value_counts() # Split the X and y into X_train, X_test, y_train, y_test from sklearn.model_selecti...
_____no_output_____
ADSL
Starter_Code/credit_risk_ensemble.ipynb
eriklarson33/HW_11-Risky_Business
Data Pre-ProcessingScale the training and testing data using the `StandardScaler` from `sklearn`. Remember that when scaling the data, you only scale the features data (`X_train` and `X_test`).
X_train.columns X_train.head() X_train.shape
_____no_output_____
ADSL
Starter_Code/credit_risk_ensemble.ipynb
eriklarson33/HW_11-Risky_Business
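A minimal sketch of the fit-on-train, transform-both pattern described above (the toy arrays below stand in for `X_train` / `X_test`):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

X_train_toy = np.array([[1.0, 200.0], [2.0, 400.0], [3.0, 600.0]])
X_test_toy = np.array([[2.0, 300.0]])

scaler = StandardScaler().fit(X_train_toy)    # learn mean/std from training data only
X_train_scaled = scaler.transform(X_train_toy)
X_test_scaled = scaler.transform(X_test_toy)  # reuse the training statistics

print(X_train_scaled.mean(axis=0))            # columns now have mean ~0
```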
NEW SOLUTION: USE SCIKIT-LEARN'S ColumnTransformer():OneHotEncoder versus GetDummies:https://www.quora.com/When-would-you-choose-to-use-pandas-get_dummies-vs-sklearn-OneHotEncoderBoth options are equally handy but the major difference is that OneHotEncoder is a transformer class, so it can be fitted to data. Once fitt...
from sklearn.preprocessing import StandardScaler, OneHotEncoder from sklearn.compose import make_column_transformer ohe = OneHotEncoder() sc = StandardScaler() ct = make_column_transformer( (sc, ['loan_amnt', 'int_rate', 'installment', 'annual_inc', 'dti', 'delinq_2yrs', 'inq_last_6mths', 'open_acc', 'pub...
<class 'numpy.ndarray'>
ADSL
Starter_Code/credit_risk_ensemble.ipynb
eriklarson33/HW_11-Risky_Business
Ensemble LearnersIn this section, you will compare two ensemble algorithms to determine which algorithm results in the best performance. You will train a Balanced Random Forest Classifier and an Easy Ensemble Classifier. For each algorithm, be sure to complete the following steps:1. Train the model using the trainin...
# Resample the training data with the BalancedRandomForestClassifier from imblearn.ensemble import BalancedRandomForestClassifier brf = BalancedRandomForestClassifier(n_estimators=1000, random_state=1) brf.fit(X_train_scaled, y_train) # Predict y_pred_rf = brf.predict(X_test_scaled) # Calculated the balanced accuracy s...
_____no_output_____
ADSL
Starter_Code/credit_risk_ensemble.ipynb
eriklarson33/HW_11-Risky_Business
Visualizing the Features by Importance to the model: (Top 20)
importance_df = pd.DataFrame(sorted(zip(brf.feature_importances_, X.columns), reverse=True)) importance_df.set_index(importance_df[1], inplace=True) importance_df.drop(columns=1, inplace=True) importance_df.rename(columns={0:'Feature Importances'}, inplace=True) importance_sorted = importance_df.sort_values(by='Feature...
_____no_output_____
ADSL
Starter_Code/credit_risk_ensemble.ipynb
eriklarson33/HW_11-Risky_Business
Easy Ensemble Classifierhttps://imbalanced-learn.org/stable/references/generated/imblearn.ensemble.EasyEnsembleClassifier.html
# Create an instance of an Easy Ensemble Classifier: from imblearn.ensemble import EasyEnsembleClassifier eec = EasyEnsembleClassifier(n_estimators=1000, random_state=1) # Train the Classifier eec.fit(X_train_scaled, y_train) # Predict y_pred_eec = eec.predict(X_test_scaled) # Calculated the balanced accuracy score ba...
pre rec spe f1 geo iba sup high_risk 0.09 0.88 0.95 0.17 0.91 0.82 104 low_risk 1.00 0.95 0.88 0.97 0.91 0.83 17101 avg / total 0.99 0.95 0.88 0.97 0.91 0...
ADSL
Starter_Code/credit_risk_ensemble.ipynb
eriklarson33/HW_11-Risky_Business
Codenation - Data ScienceAuthor: Leonardo Simões Challenge 7 - Discover the best math scores in ENEM 2016You must create a model to predict the math exam score of those who took ENEM 2016. For this, you will use Python, Pandas, Sklearn, and Regression. DetailsThe context of the challenge revolves around the ...
import numpy as np import pandas as pd import matplotlib.pyplot as plt from sklearn.linear_model import LinearRegression from sklearn.preprocessing import StandardScaler from sklearn.metrics import mean_squared_error, mean_absolute_error
_____no_output_____
MIT
Desafio 7 (enem-2)/main.ipynb
leosimoes/Codenation-AceleraDev-DataScience
Data analysis Reading the training (train) and test (test) files.
df_train = pd.read_csv('train.csv') df_train.head() df_test = pd.read_csv('test.csv') df_test.head()
_____no_output_____
MIT
Desafio 7 (enem-2)/main.ipynb
leosimoes/Codenation-AceleraDev-DataScience
Before manipulating the dataframes, the math exam score column should be separated from the training set, and the registration number column from the test set.
train_y = df_train['NU_NOTA_MT'].fillna(0) n_insc = df_test['NU_INSCRICAO'].values
_____no_output_____
MIT
Desafio 7 (enem-2)/main.ipynb
leosimoes/Codenation-AceleraDev-DataScience
Ideally, the test and training files would have the same columns, except for the one to be predicted. So, first check how many columns each has, and then drop the columns that do not belong to both.
len(df_test.columns) len(df_train.columns) colunas_intersecao = np.intersect1d(df_train.columns.values, df_test.columns.values) colunas_intersecao df_train = df_train[colunas_intersecao] df_train.head() df_test = df_test[colunas_intersecao] df_test.head() df_train.info()
<class 'pandas.core.frame.DataFrame'> RangeIndex: 13730 entries, 0 to 13729 Data columns (total 47 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 CO_PROVA_CH 13730 non-null object 1 CO_PROVA_CN 13730 non-null object ...
MIT
Desafio 7 (enem-2)/main.ipynb
leosimoes/Codenation-AceleraDev-DataScience
At an earlier point I used all the numeric columns for prediction, but that proved less effective than using only the score columns.
colunas_numericas = df_train.select_dtypes(include=['float64', 'int64']).columns colunas_numericas colunas_notas = ['NU_NOTA_CH', 'NU_NOTA_CN', 'NU_NOTA_COMP1', 'NU_NOTA_COMP2','NU_NOTA_COMP3', 'NU_NOTA_COMP4','NU_NOTA_COMP5','NU_NOTA_LC', 'NU_NOTA_REDACAO'] df_train = df_train[colunas_notas].fillna(0...
_____no_output_____
MIT
Desafio 7 (enem-2)/main.ipynb
leosimoes/Codenation-AceleraDev-DataScience
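The fit/predict step that produces `y_teste` in the next cell is not shown above. A toy sketch of that step (synthetic arrays stand in for the grade columns and the math scores):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Synthetic stand-ins for the grade columns (features) and the math score (target)
X_train_toy = np.array([[500.0, 480.0], [600.0, 620.0], [450.0, 470.0]])
y_train_toy = np.array([510.0, 615.0, 455.0])
X_test_toy = np.array([[550.0, 560.0]])

model = LinearRegression().fit(X_train_toy, y_train_toy)
y_pred = model.predict(X_test_toy)  # plays the role of y_teste
print(y_pred.shape)  # (1,)
```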
After predicting the scores for the test file, they were saved to a CSV file.
answer = pd.DataFrame() answer['NU_INSCRICAO'] = n_insc answer['NU_NOTA_MT'] = y_teste answer.head() answer.to_csv('answer.csv', index=False)
_____no_output_____
MIT
Desafio 7 (enem-2)/main.ipynb
leosimoes/Codenation-AceleraDev-DataScience
IntroductionIf you've had any experience with the python scientific stack, you've probably come into contact with, or at least heard of, the [pandas][1] data analysis library. Before the introduction of pandas, if you were to ask anyone what language to learn as a budding data scientist, most would've likely said the ...
%matplotlib inline import matplotlib.pyplot as plt import numpy as np from IPython.display import set_matplotlib_formats set_matplotlib_formats('retina')
_____no_output_____
MIT
08 - The matplotlib Ecosystem/0802 - Pandas.ipynb
croach/mastering-matplotlib
What is pandas?Pandas is a library created by [Wes McKinney][1] that provides several data structures that make working with data fast, efficient, and easy. Chief among them is the `DataFrame`, which takes on R's `data.frame` data type, and in many scenarios, bests it. It also provides a simple wrapper around the `pyp...
import pandas as pd
_____no_output_____
MIT
08 - The matplotlib Ecosystem/0802 - Pandas.ipynb
croach/mastering-matplotlib
Load in Some DataIn the next cell, we'll use the `read_csv` function to load in the [Census Income][1] dataset from the [UCI Machine Learning Repository][2]. Incidentally, this is the exact same dataset that we used in our Exploratory Data Analysis (EDA) example in chapter 2, so we'll get to see some examples of how w...
import pandas as pd # Download and read in the data from the UCI Machine Learning Repository df = pd.read_csv('http://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.data', header=None, names=('age', 'workclass', 'fnlwgt...
_____no_output_____
MIT
08 - The matplotlib Ecosystem/0802 - Pandas.ipynb
croach/mastering-matplotlib
Plotting With pandasJust like we did in our EDA example from chapter 2, we can once again create a simple histogram from our data. This time though, notice that we simply call the `hist` command on the column that contains the education level to plot our data.
df.education_num.hist(bins=16);
_____no_output_____
MIT
08 - The matplotlib Ecosystem/0802 - Pandas.ipynb
croach/mastering-matplotlib
And, remember, pandas isn't doing anything magical here, it's just providing a very simple wrapper around the `pyplot` module. At the end of the day, the code above is simply calling the `pyplot.hist` function to create the histogram. So, we can interact with the plot that it produces the same way we would any other pl...
df.education_num.hist(bins=16) # Remove the empty bar from the histogram that's below the # education_num's minimum value. plt.xlim(df.education_num.min(), df.education_num.max());
_____no_output_____
MIT
08 - The matplotlib Ecosystem/0802 - Pandas.ipynb
croach/mastering-matplotlib
Well, that looks better, but we're still stuck with many of the same problems that we had in the original EDA lesson. You'll notice that most of the x-ticks don't actually line up with their bars, and there's a good reason for that. Remember, in that lesson, we discussed how a histogram was meant to be used with contin...
df.education.value_counts().plot(kind='bar', width=1);
_____no_output_____
MIT
08 - The matplotlib Ecosystem/0802 - Pandas.ipynb
croach/mastering-matplotlib
Now, rather than passing in the plot type with the `kind` parameter, we could've also just called the `bar` function from the `plot` object, like we do in the next cell.
df.education.value_counts().plot.bar(width=1);
_____no_output_____
MIT
08 - The matplotlib Ecosystem/0802 - Pandas.ipynb
croach/mastering-matplotlib
Ok, so that's a pretty good introduction to the simple interface that pandas provides to the matplotlib library, but it doesn't stop there. Pandas also provides a handful of more complex plotting functions in the `pandas.tools.plotting` module. So, let's import another dataset and take a look at an example of what's av...
df = pd.read_csv('https://raw.githubusercontent.com/pydata/pandas/master/pandas/tests/data/iris.csv')
_____no_output_____
MIT
08 - The matplotlib Ecosystem/0802 - Pandas.ipynb
croach/mastering-matplotlib
We'll need a color map, essentially just a dictionary mapping each species to a unique color, so we'll put one together in the next cell. Fortunately, pandas makes it easy to get the species names by simply calling the `unique` function on the `Name` column.
names = df.Name.unique() colors = ['red', 'green', 'blue'] cmap = dict(zip(names, colors))
_____no_output_____
MIT
08 - The matplotlib Ecosystem/0802 - Pandas.ipynb
croach/mastering-matplotlib
Now, before we take a look at one of the functions from the `plotting` module, let's quickly take a look at one of the [changes that was made to matplotlib in version 1.5][1] to accommodate labeled data, like a pandas `DataFrame` for example. The code in the next cell, creates a scatter plot using the `pyplot.scatter` ...
plt.scatter(x='PetalLength', y='PetalWidth', data=df, c=df.Name.apply(lambda name: cmap[name]));
_____no_output_____
MIT
08 - The matplotlib Ecosystem/0802 - Pandas.ipynb
croach/mastering-matplotlib
Now, we're ready to take a look at one of the functions that pandas provides us, and, for comparison's sake, let's take a look at our old friend, the scatterplot matrix. In the next cell, we'll import the `scatter_matrix` function from the `pandas.tools.plotting` module and run it on the Iris dataset.
from pandas.tools.plotting import scatter_matrix scatter_matrix(df, figsize=(10,8), c=df.Name.apply(lambda name: cmap[name]), s=40);
_____no_output_____
MIT
08 - The matplotlib Ecosystem/0802 - Pandas.ipynb
croach/mastering-matplotlib
Interpreting Tree Models You'll need to install the `treeinterpreter` library.
# !pip install treeinterpreter import sklearn import tensorflow as tf import numpy as np import pandas as pd from sklearn.datasets import fetch_california_housing from sklearn.model_selection import train_test_split from sklearn.tree import DecisionTreeRegressor, export_graphviz from sklearn.ensemble import RandomFore...
The scikit-learn version is 1.0.1.
Apache-2.0
03-tabular/treeinterpreters.ipynb
munnm/XAI-for-practitioners