TASK 4, Part 2**Instructions**The advertising campaign mentioned in Task 2 was successful; as such, the team wants more information on the average prices. Use a pivot table to look at the average prices for different room types within each neighbourhood. https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas....
# YOUR CODE FOR TASK 4, PART 2 #Your code here #Pivot = ... Pivot = df_brooklyn.pivot_table(index=['neighbourhood'], values=['price'], columns=['room_type'], aggfunc='mean', fill_value=0) Pivot
_____no_output_____
MIT
Regressions/AirBnB Price prediction/.ipynb_checkpoints/Apprentice_Challenge_2021_Answers-Copy1-checkpoint.ipynb
shobhit009/Machine_Learning_Projects
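A minimal sketch of the pivot-table step above, using a hypothetical mini listings frame (the neighbourhood names and prices below are made up; only the column names match the task):

```python
import pandas as pd

# Hypothetical stand-in for the Brooklyn subset used in the task.
df = pd.DataFrame({
    "neighbourhood": ["Williamsburg", "Williamsburg", "Bushwick", "Bushwick"],
    "room_type": ["Entire home/apt", "Private room", "Entire home/apt", "Private room"],
    "price": [200, 80, 150, 60],
})

# Average price per room type within each neighbourhood; fill_value=0
# covers (neighbourhood, room_type) pairs with no listings.
pivot = df.pivot_table(index="neighbourhood", columns="room_type",
                       values="price", aggfunc="mean", fill_value=0)
print(pivot)
```

Passing `aggfunc="mean"` as a string avoids needing a NumPy import for `np.mean`.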
TASK 5, Part 1**Instructions**The Airbnb analysts want to know the factors influencing the price. Before proceeding with correlation analysis, you need to perform some feature engineering tasks, such as converting the categorical columns and dropping descriptive columns.1. Encode the categorical variable `room_type` and ...
# YOUR CODE FOR TASK 5, PART 1 # encode the columns room_type and neighbourhood # df_brooklyn_rt = ... df_brooklyn_rt = pd.get_dummies(df_brooklyn, columns=['room_type','neighbourhood']) df_brooklyn_rt #drop the descriptive columns from the dataframe # df_brooklyn_rt = ... df_brooklyn_rt = df_brooklyn_rt.drop(['name'...
df is correct
MIT
Regressions/AirBnB Price prediction/.ipynb_checkpoints/Apprentice_Challenge_2021_Answers-Copy1-checkpoint.ipynb
shobhit009/Machine_Learning_Projects
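The encoding step can be sketched on a tiny hypothetical frame (column names follow the task; the rows are invented):

```python
import pandas as pd

# Two toy listings with the categorical and descriptive columns from the task.
df = pd.DataFrame({
    "name": ["Cozy loft", "Sunny room"],          # descriptive, to be dropped
    "room_type": ["Entire home/apt", "Private room"],
    "neighbourhood": ["Bushwick", "Williamsburg"],
    "price": [150, 60],
})

# One-hot encode the categoricals, then drop the descriptive text column.
encoded = pd.get_dummies(df, columns=["room_type", "neighbourhood"])
encoded = encoded.drop(["name"], axis=1)
print(encoded.columns.tolist())
```

`get_dummies` names the new columns `<original>_<category>`, e.g. `room_type_Private room`.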
TASK 5, Part 2**Instructions**We will now study the correlation of the features in the dataset with `price`. Use the pandas `DataFrame.corr()` method to find the pairwise correlation of all columns in the dataframe. Function syntax: new_dataframe = dataframe.corr(). Visuali...
# YOUR CODE FOR TASK 5, PART 2 # create a correlation matrix # corr = ... corr = df_brooklyn_rt.corr() # plot the heatmap # sns.heatmap(...) sns.heatmap(corr, xticklabels=corr.columns, yticklabels=corr.columns)
_____no_output_____
MIT
Regressions/AirBnB Price prediction/.ipynb_checkpoints/Apprentice_Challenge_2021_Answers-Copy1-checkpoint.ipynb
shobhit009/Machine_Learning_Projects
Multicollinearity occurs when your data includes multiple attributes that are correlated not just to your target variable, but also to each other. **Based on the correlation matrix, answer the following:** 1. Which columns would you drop to prevent multicollinearity? Sample Answer: brooklyn_whole or number_of_reviews 2. Wh...
## RUN THIS CELL AS-IS TO CHECK IF YOUR OUTPUTS ARE CORRECT. IF THEY ARE NOT, ## THE APPROPRIATE OBJECTS WILL BE LOADED IN TO ENSURE THAT YOU CAN CONTINUE ## WITH THE ASSESSMENT. task_5_part_2_check = data_load_files.TASK_5_PART_2_OUTPUT corr_shape_check = (corr.shape == task_5_part_2_check.shape) if corr_shape...
df is correct
MIT
Regressions/AirBnB Price prediction/.ipynb_checkpoints/Apprentice_Challenge_2021_Answers-Copy1-checkpoint.ipynb
shobhit009/Machine_Learning_Projects
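What `DataFrame.corr()` returns can be seen on a small numeric frame (values are hypothetical; `price` is made an exact linear function of `sqft` so their correlation is 1.0, the kind of pair the multicollinearity question asks you to spot):

```python
import pandas as pd

df = pd.DataFrame({
    "price": [100, 200, 300, 400],
    "sqft": [50, 100, 150, 200],   # perfectly collinear with price
    "reviews": [4, 1, 3, 2],
})

# Pairwise Pearson correlation of all columns; the result is square,
# symmetric, and has 1.0 on the diagonal.
corr = df.corr()
print(corr.round(2))
```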
TASK 6Property hosts are expected to set their own prices for their listings. Although Airbnb provides some general guidance, there are currently no services which help hosts price their properties using a range of data points. Airbnb pricing is important to get right, particularly in big cities like New York where there ...
#Your code here #X = #Y = #Solution X = df_brooklyn_rt.drop(['price', 'last_review', 'brooklyn_whole','price_category'], axis = 1) Y = df_brooklyn_rt['price'] #Your code here #Please don't change the test_size value it should remain 0.2 #X_train, X_test, Y_train, Y_test = train_test_split(<....your X value here...>,...
dfs are correct
MIT
Regressions/AirBnB Price prediction/.ipynb_checkpoints/Apprentice_Challenge_2021_Answers-Copy1-checkpoint.ipynb
shobhit009/Machine_Learning_Projects
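The cell above uses scikit-learn's `train_test_split` with `test_size=0.2`. A NumPy-only sketch of the same idea (toy arrays, fixed seed playing the role of `random_state`):

```python
import numpy as np

rng = np.random.default_rng(42)
X = np.arange(20).reshape(10, 2)   # 10 toy samples, 2 features
Y = np.arange(10)

# Shuffle the row indices, then carve off 20% as the test set --
# the same idea as train_test_split(X, Y, test_size=0.2).
idx = rng.permutation(len(X))
n_test = int(len(X) * 0.2)
test_idx, train_idx = idx[:n_test], idx[n_test:]
X_train, X_test = X[train_idx], X[test_idx]
Y_train, Y_test = Y[train_idx], Y[test_idx]
print(X_train.shape, X_test.shape)  # (8, 2) (2, 2)
```

Shuffling before splitting matters: without it, a sorted dataset would put all of one class or price range into the test set.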
Task 6 Part 2**Instructions**Training the modelWe use scikit-learn’s LinearRegression to train our model. Using the fit() method, we will pass the training datasets X_train and Y_train as arguments to the linear regression model. Testing the modelThe model has learnt about the dataset. We will now use the trained mode...
#Run this cell lin_model = LinearRegression() #Training the model #Your code here #lin_model.fit(X_argument, Y_argument) #Solution lin_model.fit(X_train, Y_train) #Testing the model #Your code here #y_test_predict = lin_model.predict(...X test Dataset...) #Solution y_test_predict = lin_model.predict(X_test) #Run th...
Model evaluation is correct
MIT
Regressions/AirBnB Price prediction/.ipynb_checkpoints/Apprentice_Challenge_2021_Answers-Copy1-checkpoint.ipynb
shobhit009/Machine_Learning_Projects
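`LinearRegression().fit` solves an ordinary least-squares problem; the fit/predict pair can be sketched with NumPy alone. The data below is invented so that y = 3x + 2 exactly, letting OLS recover the coefficients:

```python
import numpy as np

X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([2.0, 5.0, 8.0, 11.0])       # exactly 3*x + 2

# Append a column of ones so the intercept is estimated too, then solve
# the least-squares system -- the same problem lin_model.fit(X, y) solves.
A = np.hstack([X, np.ones((len(X), 1))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
slope, intercept = coef                    # slope ~ 3.0, intercept ~ 2.0

# Equivalent of lin_model.predict(X):
y_pred = A @ coef
print(slope, intercept)
```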
Task 6 Part 3**Instructions**Now we will compare the actual output values for X_test with the predicted values using a bar chart.- 1(pt) Create a new dataframe lr_pred_df using the Y_test and y_test_predict- 1(pt) Use the first 20 records from the dataframe lr_pred_df and plot a bar graph showing a comparison of actual and...
#Actual Vs Predicted for Linear Regression #Your code here #lr_pred_df = #Solution lr_pred_df = pd.DataFrame({ 'actual_values': np.array(Y_test).flatten(), 'y_test_predict': y_test_predict.flatten()}) #Your code here #lr_pred_df.plot() #Solution lr_pred_df = lr_pred_df.head(20) plt = lr_pred_df.plot...
df is correct
MIT
Regressions/AirBnB Price prediction/.ipynb_checkpoints/Apprentice_Challenge_2021_Answers-Copy1-checkpoint.ipynb
shobhit009/Machine_Learning_Projects
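Building the comparison frame can be sketched with hypothetical targets and predictions (25 rows, so that `head(20)` actually truncates; the offset of 0.5 is an arbitrary stand-in for model error):

```python
import numpy as np
import pandas as pd

Y_test = pd.Series(np.arange(25, dtype=float))     # pretend test targets
y_test_predict = Y_test.to_numpy() + 0.5           # pretend predictions

# Side-by-side frame of actual vs predicted values, as in the task.
lr_pred_df = pd.DataFrame({
    "actual_values": np.array(Y_test).flatten(),
    "y_test_predict": y_test_predict.flatten(),
})

first20 = lr_pred_df.head(20)   # only the first 20 records are plotted
# first20.plot(kind="bar") would draw the comparison bar chart
print(first20.shape)
```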
Lab: Connect to Db2 database on Cloud using Python IntroductionThis notebook illustrates how to access a DB2 database on Cloud using Python by following the steps below:1. Import the `ibm_db` Python library1. Enter the database connection credentials1. Create the database connection1. Close the database connection__No...
import ibm_db
_____no_output_____
MIT
5. Databases_SQL/1-1-Connecting-v4-py.ipynb
naquech/IBM_Watson_Studio
When the command above completes, the `ibm_db` library is loaded in your notebook. Identify the database connection credentialsConnecting to dashDB or DB2 database requires the following information:* Driver Name* Database name * Host DNS name or IP address * Host port* Connection protocol* User ID (or username)* User...
#Replace the placeholder values with your actual Db2 hostname, username, and password: dsn_hostname = "YourDb2Hostname" # e.g.: "dashdb-txn-sbox-yp-dal09-04.services.dal.bluemix.net" dsn_uid = "YourDb2Username" # e.g. "abc12345" dsn_pwd = "YourDb2Password" # e.g. "7dBZ3wWt9XN6$o0J" dsn_driver = "{IBM DB2 O...
_____no_output_____
MIT
5. Databases_SQL/1-1-Connecting-v4-py.ipynb
naquech/IBM_Watson_Studio
Create the DB2 database connectionThe ibm_db API uses the IBM Data Server Driver for ODBC and CLI APIs to connect to IBM DB2 and Informix. Let's build the DSN connection string using the credentials you entered above.
#DO NOT MODIFY THIS CELL. Just RUN it with Shift + Enter #Create the dsn connection string dsn = ( "DRIVER={0};" "DATABASE={1};" "HOSTNAME={2};" "PORT={3};" "PROTOCOL={4};" "UID={5};" "PWD={6};").format(dsn_driver, dsn_database, dsn_hostname, dsn_port, dsn_protocol, dsn_uid, dsn_pwd) #print...
DRIVER=DATABASE=BLUDB;HOSTNAME=dashdb-txn-sbox-yp-dal09-03.services.dal.bluemix.net;PORT=50000;PROTOCOL=TCPIP;UID=wvb91528;PWD=tm^1nlbn4dj3j04b;
MIT
5. Databases_SQL/1-1-Connecting-v4-py.ipynb
naquech/IBM_Watson_Studio
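The DSN assembly is plain string formatting and can be run standalone. All values below are dummies, not real connection details (the driver name is a typical placeholder, not taken from the notebook):

```python
# Dummy credentials purely for illustrating the string assembly.
dsn_driver = "{IBM DB2 ODBC DRIVER}"
dsn_database = "BLUDB"
dsn_hostname = "example.host"
dsn_port = "50000"
dsn_protocol = "TCPIP"
dsn_uid = "user"
dsn_pwd = "secret"

# Adjacent string literals concatenate, giving one semicolon-separated DSN.
dsn = (
    "DRIVER={0};"
    "DATABASE={1};"
    "HOSTNAME={2};"
    "PORT={3};"
    "PROTOCOL={4};"
    "UID={5};"
    "PWD={6};"
).format(dsn_driver, dsn_database, dsn_hostname,
         dsn_port, dsn_protocol, dsn_uid, dsn_pwd)
print(dsn)
```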
Now establish the connection to the database
#DO NOT MODIFY THIS CELL. Just RUN it with Shift + Enter #Create database connection try: conn = ibm_db.connect(dsn, "", "") print ("Connected to database: ", dsn_database, "as user: ", dsn_uid, "on host: ", dsn_hostname) except: print ("Unable to connect: ", ibm_db.conn_errormsg() )
Connected to database: BLUDB as user: wvb91528 on host: dashdb-txn-sbox-yp-dal09-03.services.dal.bluemix.net
MIT
5. Databases_SQL/1-1-Connecting-v4-py.ipynb
naquech/IBM_Watson_Studio
Congratulations if you were able to connect successfully. Otherwise, check the error and try again.
#Retrieve Metadata for the Database Server server = ibm_db.server_info(conn) print ("DBMS_NAME: ", server.DBMS_NAME) print ("DBMS_VER: ", server.DBMS_VER) print ("DB_NAME: ", server.DB_NAME) #Retrieve Metadata for the Database Client / Driver client = ibm_db.client_info(conn) print ("DRIVER_NAME: ", clien...
DRIVER_NAME: libdb2.a DRIVER_VER: 11.01.0404 DATA_SOURCE_NAME: BLUDB DRIVER_ODBC_VER: 03.51 ODBC_VER: 03.01.0000 ODBC_SQL_CONFORMANCE: EXTENDED APPL_CODEPAGE: 1208 CONN_CODEPAGE: 1208
MIT
5. Databases_SQL/1-1-Connecting-v4-py.ipynb
naquech/IBM_Watson_Studio
Close the ConnectionWe free all resources by closing the connection. Remember that it is always important to close connections so that we can avoid unused connections taking up resources.
ibm_db.close(conn)
_____no_output_____
MIT
5. Databases_SQL/1-1-Connecting-v4-py.ipynb
naquech/IBM_Watson_Studio
import numpy as np import pandas as pd import seaborn as sns sns.set() np.__version__ def fetch_financial_data(company='AMZN'): import pandas_datareader.data as web return web.DataReader(name=company, data_source='stooq') google = fetch_financial_data(company='GOOGL') google google.info() pd.set_option('prec...
_____no_output_____
MIT
Pandas121.ipynb
mariuszkr33/dw_matrix
Setup environment
from monai.utils import first, set_determinism from monai.transforms import ( AsDiscrete, AsDiscreted, EnsureChannelFirstd, Compose, CropForegroundd, LoadImaged, Orientationd, RandCropByPosNegLabeld, ScaleIntensityRanged, Spacingd, EnsureTyped, EnsureType, Invertd, ) ...
_____no_output_____
Apache-2.0
tutorials/segmentation/spleen_segmentation_3d.ipynb
YipengHu/MPHY0043
Setup imports
# Copyright 2020 MONAI Consortium # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # http://www.apache.org/licenses/LICENSE-2.0 # Unless required by applicable law or agreed to in writing, s...
_____no_output_____
Apache-2.0
tutorials/segmentation/spleen_segmentation_3d.ipynb
YipengHu/MPHY0043
Setup data directoryYou can specify a directory with the `MONAI_DATA_DIRECTORY` environment variable. This allows you to save results and reuse downloads. If not specified, a temporary directory will be used.
directory = os.environ.get("MONAI_DATA_DIRECTORY") root_dir = tempfile.mkdtemp() if directory is None else directory print(root_dir)
_____no_output_____
Apache-2.0
tutorials/segmentation/spleen_segmentation_3d.ipynb
YipengHu/MPHY0043
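The environment-variable fallback pattern in the cell above is pure standard library and works outside MONAI too:

```python
import os
import tempfile

# Prefer a user-specified directory; otherwise fall back to a fresh
# temporary directory (which the notebook later removes in cleanup).
directory = os.environ.get("MONAI_DATA_DIRECTORY")
root_dir = tempfile.mkdtemp() if directory is None else directory
print(root_dir)
```

Keeping the original `directory` value around (rather than overwriting it) is what lets the cleanup cell at the end decide whether `root_dir` is safe to delete.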
Download datasetDownloads and extracts the dataset. The dataset comes from http://medicaldecathlon.com/.
resource = "https://msd-for-monai.s3-us-west-2.amazonaws.com/Task09_Spleen.tar" md5 = "410d4a301da4e5b2f6f86ec3ddba524e" compressed_file = os.path.join(root_dir, "Task09_Spleen.tar") data_dir = os.path.join(root_dir, "Task09_Spleen") if not os.path.exists(data_dir): download_and_extract(resource, compressed_file, ...
_____no_output_____
Apache-2.0
tutorials/segmentation/spleen_segmentation_3d.ipynb
YipengHu/MPHY0043
Set MSD Spleen dataset path
train_images = sorted( glob.glob(os.path.join(data_dir, "imagesTr", "*.nii.gz"))) train_labels = sorted( glob.glob(os.path.join(data_dir, "labelsTr", "*.nii.gz"))) data_dicts = [ {"image": image_name, "label": label_name} for image_name, label_name in zip(train_images, train_labels) ] train_files, val_f...
_____no_output_____
Apache-2.0
tutorials/segmentation/spleen_segmentation_3d.ipynb
YipengHu/MPHY0043
Set deterministic training for reproducibility
set_determinism(seed=0)
_____no_output_____
Apache-2.0
tutorials/segmentation/spleen_segmentation_3d.ipynb
YipengHu/MPHY0043
Setup transforms for training and validationHere we use several transforms to augment the dataset:1. `LoadImaged` loads the spleen CT images and labels from NIfTI format files.1. `EnsureChannelFirstd` adds a channel dimension, since the original data has none, to construct a "channel first" shape.1. `Spacingd` adjusts the spacing ...
train_transforms = Compose( [ LoadImaged(keys=["image", "label"]), EnsureChannelFirstd(keys=["image", "label"]), Spacingd(keys=["image", "label"], pixdim=( 1.5, 1.5, 2.0), mode=("bilinear", "nearest")), Orientationd(keys=["image", "label"], axcodes="RAS"), ScaleIn...
_____no_output_____
Apache-2.0
tutorials/segmentation/spleen_segmentation_3d.ipynb
YipengHu/MPHY0043
Check transforms in DataLoader
check_ds = Dataset(data=val_files, transform=val_transforms) check_loader = DataLoader(check_ds, batch_size=1) check_data = first(check_loader) image, label = (check_data["image"][0][0], check_data["label"][0][0]) print(f"image shape: {image.shape}, label shape: {label.shape}") # plot the slice [:, :, 80] plt.figure("c...
_____no_output_____
Apache-2.0
tutorials/segmentation/spleen_segmentation_3d.ipynb
YipengHu/MPHY0043
Define CacheDataset and DataLoader for training and validationHere we use CacheDataset to accelerate the training and validation process; it is about 10x faster than the regular Dataset. To achieve the best performance, set `cache_rate=1.0` to cache all the data; if memory is not enough, set a lower value. Users can also set `cache_...
train_ds = CacheDataset( data=train_files, transform=train_transforms, cache_rate=1.0, num_workers=4) # train_ds = monai.data.Dataset(data=train_files, transform=train_transforms) # use batch_size=2 to load images and use RandCropByPosNegLabeld # to generate 2 x 4 images for network training train_loader = Dat...
_____no_output_____
Apache-2.0
tutorials/segmentation/spleen_segmentation_3d.ipynb
YipengHu/MPHY0043
Create Model, Loss, Optimizer
# standard PyTorch program style: create UNet, DiceLoss and Adam optimizer device = torch.device("cpu") model = UNet( dimensions=3, in_channels=1, out_channels=2, channels=(16, 32, 64, 128, 256), strides=(2, 2, 2, 2), num_res_units=2, norm=Norm.BATCH, ).to(device) loss_function = DiceLoss(to...
_____no_output_____
Apache-2.0
tutorials/segmentation/spleen_segmentation_3d.ipynb
YipengHu/MPHY0043
Execute a typical PyTorch training process
max_epochs = 600 val_interval = 2 best_metric = -1 best_metric_epoch = -1 epoch_loss_values = [] metric_values = [] post_pred = Compose([EnsureType(), AsDiscrete(argmax=True, to_onehot=True, n_classes=2)]) post_label = Compose([EnsureType(), AsDiscrete(to_onehot=True, n_classes=2)]) for epoch in range(max_epochs): ...
_____no_output_____
Apache-2.0
tutorials/segmentation/spleen_segmentation_3d.ipynb
YipengHu/MPHY0043
Plot the loss and metric
plt.figure("train", (12, 6)) plt.subplot(1, 2, 1) plt.title("Epoch Average Loss") x = [i + 1 for i in range(len(epoch_loss_values))] y = epoch_loss_values plt.xlabel("epoch") plt.plot(x, y) plt.subplot(1, 2, 2) plt.title("Val Mean Dice") x = [val_interval * (i + 1) for i in range(len(metric_values))] y = metric_values ...
_____no_output_____
Apache-2.0
tutorials/segmentation/spleen_segmentation_3d.ipynb
YipengHu/MPHY0043
Check best model output with the input image and label
model.load_state_dict(torch.load( os.path.join(root_dir, "best_metric_model.pth"))) model.eval() with torch.no_grad(): for i, val_data in enumerate(val_loader): roi_size = (160, 160, 160) sw_batch_size = 4 val_outputs = sliding_window_inference( val_data["image"].to(device), ...
_____no_output_____
Apache-2.0
tutorials/segmentation/spleen_segmentation_3d.ipynb
YipengHu/MPHY0043
Evaluation on original image spacings
val_org_transforms = Compose( [ LoadImaged(keys=["image", "label"]), EnsureChannelFirstd(keys=["image", "label"]), Spacingd(keys=["image"], pixdim=( 1.5, 1.5, 2.0), mode="bilinear"), Orientationd(keys=["image"], axcodes="RAS"), ScaleIntensityRanged( ke...
_____no_output_____
Apache-2.0
tutorials/segmentation/spleen_segmentation_3d.ipynb
YipengHu/MPHY0043
Cleanup data directoryRemove the directory if a temporary one was used.
if directory is None: shutil.rmtree(root_dir)
_____no_output_____
Apache-2.0
tutorials/segmentation/spleen_segmentation_3d.ipynb
YipengHu/MPHY0043
Install Earth Engine API and geemapInstall the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://github.com/giswqs/geemap). The **geemap** Python package is built upon the [ipyleaflet](https://...
# Installs geemap package import subprocess try: import geemap except ImportError: print('geemap package not installed. Installing ...') subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap']) # Checks whether this notebook is running on Google Colab try: import google.colab import gee...
_____no_output_____
MIT
FeatureCollection/set_properties.ipynb
c11/earthengine-py-notebooks
Create an interactive map The default basemap is `Google Satellite`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py#L13) can be added using the `Map.add_basemap()` function.
Map = emap.Map(center=[40,-100], zoom=4) Map.add_basemap('ROADMAP') # Add Google Map Map
_____no_output_____
MIT
FeatureCollection/set_properties.ipynb
c11/earthengine-py-notebooks
Add Earth Engine Python script
# Add Earth Engine dataset # Make a feature and set some properties. feature = ee.Feature(ee.Geometry.Point([-122.22599, 37.17605])) \ .set('genus', 'Sequoia').set('species', 'sempervirens') # Get a property from the feature. species = feature.get('species') print(species.getInfo()) # Set a new property. feature = ...
_____no_output_____
MIT
FeatureCollection/set_properties.ipynb
c11/earthengine-py-notebooks
Display Earth Engine data layers
Map.addLayerControl() # This line is not needed for ipyleaflet-based Map. Map
_____no_output_____
MIT
FeatureCollection/set_properties.ipynb
c11/earthengine-py-notebooks
DEMO_TOY_IMAGES Simple illustration of GLFM pipeline, replicating the example of the IBP linear-Gaussian model in (Griffiths and Ghahramani, 2011).
# --------------------------------------------- # Import necessary libraries # --------------------------------------------- import numpy as np # import numpy matrix for calculus with matrices import matplotlib.pyplot as plt # import plotting library import time # import time to be able to measure iteration spee...
Print inferred latent features...
MIT
demos/python/demo_toy_images.ipynb
ferjorosa/test-glfm
Codecademy Completion This problem will be used for verifying that you have completed the Python course on http://www.codecademy.com/.Here are the steps to do this verification:1. Go to the page on http://www.codecademy.com/ that shows your percent completion.2. Take a screen shot of that page.3. Name the file `codeca...
from IPython.display import Image Image(filename='codecademy.png', width='100%')
_____no_output_____
MIT
assignments/assignment01/Codecademy.ipynb
edwardd1/phys202-2015-work
*Copyright 2020 Google LLC**Licensed under the Apache License, Version 2.0 (the "License")*
# Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the L...
_____no_output_____
Apache-2.0
retrain_detection_qat_tf1.ipynb
KeithAzzopardi1998/tutorials
Retrain a detection model for Edge TPU with quant-aware training (TF 1.12) This notebook uses a set of TensorFlow training scripts to perform transfer-learning on a quantization-aware object detection model and then convert it for compatibility with the [Edge TPU](https://coral.ai/products/).Specifically, this tutoria...
! pip uninstall tensorflow -y ! pip install tensorflow==1.12 import tensorflow as tf print(tf.__version__)
_____no_output_____
Apache-2.0
retrain_detection_qat_tf1.ipynb
KeithAzzopardi1998/tutorials
Clone the model and training repos
! git clone https://github.com/tensorflow/models.git ! cd models && git checkout f788046ca876a8820e05b0b48c1fc2e16b0955bc ! git clone https://github.com/google-coral/tutorials.git ! cp -r tutorials/docker/object_detection/scripts/* models/research/
_____no_output_____
Apache-2.0
retrain_detection_qat_tf1.ipynb
KeithAzzopardi1998/tutorials
Import dependencies For details, see https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/installation.md
! apt-get install -y python python-tk ! pip install Cython contextlib2 pillow lxml jupyter matplotlib # Get protoc 3.0.0, rather than the old version already in the container ! wget https://www.github.com/google/protobuf/releases/download/v3.0.0/protoc-3.0.0-linux-x86_64.zip ! unzip protoc-3.0.0-linux-x86_64.zip -d pro...
_____no_output_____
Apache-2.0
retrain_detection_qat_tf1.ipynb
KeithAzzopardi1998/tutorials
Just to verify everything is correctly set up:
! python object_detection/builders/model_builder_test.py
_____no_output_____
Apache-2.0
retrain_detection_qat_tf1.ipynb
KeithAzzopardi1998/tutorials
Convert training data to TFRecord To train with different images, read [how to configure your own training data](https://coral.ai/docs/edgetpu/retrain-detection/configure-your-own-training-data).
! ./prepare_checkpoint_and_dataset.sh --network_type mobilenet_v1_ssd --train_whole_model false
_____no_output_____
Apache-2.0
retrain_detection_qat_tf1.ipynb
KeithAzzopardi1998/tutorials
Perform transfer-learning The following script takes several hours to finish in Colab. (You can shorten it by reducing the number of steps, but that reduces the final accuracy.)If you didn't already select "Run all" then you should run all remaining cells now. That will ensure the rest of the notebook completes while you are away,...
%env NUM_TRAINING_STEPS=500 %env NUM_EVAL_STEPS=100 # If you're retraining the whole model, we suggest these values: # %env NUM_TRAINING_STEPS=50000 # %env NUM_EVAL_STEPS=2000 ! ./retrain_detection_model.sh --num_training_steps $NUM_TRAINING_STEPS --num_eval_steps $NUM_EVAL_STEPS
_____no_output_____
Apache-2.0
retrain_detection_qat_tf1.ipynb
KeithAzzopardi1998/tutorials
As training progresses, you can see new checkpoint files appear in the `models/research/learn_pet/train/` directory. Compile for the Edge TPU
! ./convert_checkpoint_to_edgetpu_tflite.sh --checkpoint_num $NUM_TRAINING_STEPS ! curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add - ! echo "deb https://packages.cloud.google.com/apt coral-edgetpu-stable main" | sudo tee /etc/apt/sources.list.d/coral-edgetpu.list ! sudo apt-get update !...
_____no_output_____
Apache-2.0
retrain_detection_qat_tf1.ipynb
KeithAzzopardi1998/tutorials
Download the files:
from google.colab import files files.download('output_tflite_graph_edgetpu.tflite') files.download('labels.txt')
_____no_output_____
Apache-2.0
retrain_detection_qat_tf1.ipynb
KeithAzzopardi1998/tutorials
Initial Processing & EDABy: Aditya Mengani, Ognjen Sosa, Sanjay Elangovan, Song Park, Sophia Skowronski
'''Importing basic data analysis packages''' import numpy as np import pandas as pd import csv import warnings import os import time import math warnings.filterwarnings('ignore') '''Plotting packages''' import matplotlib.pyplot as plt %matplotlib inline import seaborn as sns sns.set(font_scale=1.3)
_____no_output_____
MIT
1_4_EDA.ipynb
aditya-mengani/network_analysis_cbase_p1_ml
Function: memory reduction of dataframe
def reduce_mem_usage(df, verbose=True): numerics = ['int16', 'int32', 'int64', 'float16', 'float32', 'float64'] start_mem = df.memory_usage().sum() / 1024**2 for col in df.columns: col_type = df[col].dtypes if col_type in numerics: c_min = df[col].min() c_max = df...
_____no_output_____
MIT
1_4_EDA.ipynb
aditya-mengani/network_analysis_cbase_p1_ml
Load DataUse `tar -xvzf 20200908_bulk_export.tar.gz` to unzip the Crunchbase export (for Windows).Check out a summary of the data from the Crunchbase export here.
########################### # Pledge 1% Company UUIDs # ########################### print('='*100) p1 = pd.read_csv('~/Desktop/w207/Project/Data/p1.csv') print('PLEDGE 1%/p1 cols: {}\nSHAPE: {}'.format(p1.columns.to_list(), p1.shape)) p1 = reduce_mem_usage(p1) ################# # Organizations # ################# pri...
_____no_output_____
MIT
1_4_EDA.ipynb
aditya-mengani/network_analysis_cbase_p1_ml
Explore Degree Data
df2 = pd.merge(df.copy(),degrees.copy(),how='outer',on='uuid') pledge1_2 = df2[df2['p1_tag'] == 1].sort_values('p1_date') pledge1_2.head(10) # Exclude rows that have NaN institution_uuid pledge1_2_degrees = pledge1_2[~pledge1_2['institution_name'].isna()] df2_degrees = df2[~df2['institution_name'].isna()] # Create cou...
_____no_output_____
MIT
1_4_EDA.ipynb
aditya-mengani/network_analysis_cbase_p1_ml
Can't create plots with degree data because degree data is missing for all P1 companies...:(
# Barplots _, ax = plt.subplots(nrows=1, ncols=2, figsize=(12, 12), sharey=True) #sns.barplot(x='p1_tag', y='institution_name', data=pledge1_2_degrees, orient='h', ax=ax[0]) sns.barplot(x='count', y='institution_name', data=df2_degrees, orient='h', ax=ax[1]) # Labels ax[0].set_title('Pledge Companies by Employee Count...
_____no_output_____
MIT
1_4_EDA.ipynb
aditya-mengani/network_analysis_cbase_p1_ml
Project DescriptionThis is a chatbot that provides live daily news headlines from the Wall Street Journal, with three main categories of news: political, deals, and economy. First, it gives you a brief view of the titles after you ask for a certain type of news; then it can provide more details on a specific news item based o...
import string import random import nltk import requests import json news_request = requests.get(url='https://newsapi.org/v2/top-headlines?sources=the-wall-street-journal&apiKey=df21f07e419c41feb602fb9ba2a8456c') news_dict = news_request.json()['articles'] def is_question(input_string): """Check if the input is a ...
_____no_output_____
MIT
Wu, You-Final Project-vF.ipynb
fionacandicewu/cogs18-finalproject-chatbot
Matrices: lists of listsA matrix-like structure can be created with lists of lists:
A = [[1,2,3],[4,5,6],[7,8,9]] print(A) print(A[1][2]) # This would be wrong #print(A[1,2]) # Adding line breaks makes it clearer. A = [ [1,2,3], [4,5,6], [7,8,9] ] print(A) print(A[1][2]) # But... it is not a matrix! A[1].append(100) print(A) def matrizea_idatzi(M): ilara = len(M) zutabe = len(M[0]) ...
1 2 3 4 5 6 100 7 8 9
MIT
Gardenkiak/Programazioa/.ipynb_checkpoints/ZerrendenZerrendak-checkpoint.ipynb
mpenagar/Konputaziorako-Sarrera
Be careful with very compact literal expressions...
print([0]*3) print([[0]*3]) print([[0]*3]*3) A = [[0]*3]*3 matrizea_idatzi(A) A[1][1] = 5 matrizea_idatzi(A) print(A[0] is A[1], A[0] is A[2], A[1] is A[2])
True True True
MIT
Gardenkiak/Programazioa/.ipynb_checkpoints/ZerrendenZerrendak-checkpoint.ipynb
mpenagar/Konputaziorako-Sarrera
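The pitfall shown above, that `[[0]*3]*3` repeats a reference to one inner list, can be demonstrated side by side with the safe list-comprehension form:

```python
# [[0]*3]*3 repeats a *reference* to the same inner list three times,
# so mutating one "row" mutates them all.
A = [[0] * 3] * 3
A[1][1] = 5
print(A)             # [[0, 5, 0], [0, 5, 0], [0, 5, 0]]
print(A[0] is A[1])  # True: all three rows are the same object

# A list comprehension builds a fresh inner list on every iteration.
B = [[0] * 3 for _ in range(3)]
B[1][1] = 5
print(B)             # [[0, 0, 0], [0, 5, 0], [0, 0, 0]]
print(B[0] is B[1])  # False: the rows are independent
```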
Match Analysis
import numpy as np import pandas as pd import seaborn as sns import matplotlib.pyplot as plt %matplotlib inline
_____no_output_____
MIT
KKR VS MI/Match Analysis KKR VS MI.ipynb
tacklesta/WPL
Data Cleaning and Exploring
matches = pd.read_csv("matches.csv" , index_col = "id") matches = matches.iloc[:,:-3] matches.head() matches.shape matches.winner.unique()
_____no_output_____
MIT
KKR VS MI/Match Analysis KKR VS MI.ipynb
tacklesta/WPL
Taking in consideration only KKR VS MI matches
KM =matches[np.logical_or(np.logical_and(matches['team1']=='Kolkata Knight Riders',matches['team2']=='Mumbai Indians'), np.logical_and(matches['team2']=='Kolkata Knight Riders',matches['team1']=='Mumbai Indians'))] KM.head() KM.shape KM.season.unique() KM.isnull().sum() KM.describe().iloc[:,...
_____no_output_____
MIT
KKR VS MI/Match Analysis KKR VS MI.ipynb
tacklesta/WPL
HEAD TO HEAD
KM.groupby("winner")["winner"].count() sns.countplot(KM["winner"]) plt.text(-0.09,17,str(KM['winner'].value_counts()['Mumbai Indians']),size=20,color='white') plt.text(0.95,4,str(KM['winner'].value_counts()['Kolkata Knight Riders']),size=20,color='white') plt.xlabel('Winner',fontsize=15) plt.ylabel('No. of Matches',fon...
Season wise winner of matches between KKR and MI :
MIT
KKR VS MI/Match Analysis KKR VS MI.ipynb
tacklesta/WPL
Winning Percentage
Winning_Percentage = KM['winner'].value_counts()/len(KM['winner']) print(" MI winning percentage against KKR(overall) : {}%".format(int(round(Winning_Percentage[0]*100)))) print("KKR winning percentage against MI(overall) : {}%".format(int(round(Winning_Percentage[1]*100))))
MI winning percentage against KKR(overall) : 76% KKR winning percentage against MI(overall) : 24%
MIT
KKR VS MI/Match Analysis KKR VS MI.ipynb
tacklesta/WPL
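The winning-percentage computation above leans on `value_counts()` sorting by frequency, so index 0 is the more frequent winner. A self-contained sketch with an invented win tally (14 vs 5, not the real match data):

```python
import pandas as pd

# Hypothetical winner column: 19 matches, MI wins 14, KKR wins 5.
winner = pd.Series(["Mumbai Indians"] * 14 + ["Kolkata Knight Riders"] * 5)

# value_counts() returns counts sorted descending; dividing by the number
# of matches turns counts into win fractions.
pct = winner.value_counts() / len(winner)
mi_pct = int(round(pct["Mumbai Indians"] * 100))
kkr_pct = int(round(pct["Kolkata Knight Riders"] * 100))
print(mi_pct, kkr_pct)
```

Indexing `pct` by team name, rather than by position 0 and 1 as the original cell does, is more robust if the frequency order ever changes.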
Performance Based Analysis
def performance( team_name , given_df ): for value in given_df.groupby('winner'): if value[0] == team_name: total_win_by_runs = sum(list(value[1]['win_by_runs'])) total_win_by_wickets = sum(list(value[1]['win_by_wickets'])) if 0 in list(value[1]['win_by_runs...
Number of times given team win while defending : 8 Number of times given team win while chasing : 11 Average runs by which a given team wins while defending : 40.0 Average wickets by which a given team wins while chasing : 6.0
MIT
KKR VS MI/Match Analysis KKR VS MI.ipynb
tacklesta/WPL
Toss Analysis
Toss_Decision_based_Winner = pd.DataFrame(KM.groupby(['toss_winner',"toss_decision","winner"])["winner"].count()) print(" No of times toss winning decision leading to match winning : ") Toss_Decision_based_Winner Toss_Decision = pd.DataFrame(KM.groupby(['toss_winner',"toss_decision"])["toss_decision"].count()) print ...
_____no_output_____
MIT
KKR VS MI/Match Analysis KKR VS MI.ipynb
tacklesta/WPL
From the above analysis we can see that both teams mostly prefer chasing the score after winning the toss
sns.set(style='whitegrid') plt.figure(figsize = (18,9)) sns.countplot(KM['toss_winner'],hue=KM['winner']) plt.title('Match Winner vs Toss Winner statistics for both team',fontsize=15) plt.yticks(fontsize=15) plt.xticks(fontsize=15) plt.xlabel('Toss winner',fontsize=15) plt.ylabel('Match Winner',fontsize=15) plt.legend(...
_____no_output_____
MIT
KKR VS MI/Match Analysis KKR VS MI.ipynb
tacklesta/WPL
Toss-decision-based analysis of both teams separately:
KKR = KM[KM["toss_winner"]=="Kolkata Knight Riders"] MI = KM[KM["toss_winner"]=="Mumbai Indians"] sns.set(style='whitegrid') plt.figure(figsize = (18,9)) sns.countplot(KKR['toss_decision'],hue=KKR['winner']) plt.title('Match Winner vs Toss Winner statistics for KKR',fontsize=15) plt.yticks(fontsize=15) plt.xticks(fonts...
Man of the match :
MIT
KKR VS MI/Match Analysis KKR VS MI.ipynb
tacklesta/WPL
Recent Year Performance Analysis
cond1 = KM["season"] == 2015 cond2 = KM["season"] == 2016 cond3 = KM["season"] == 2017 cond4 = KM["season"] == 2018 cond5 = KM["season"] == 2019 final = KM[cond1 | cond2 | cond3 | cond4 | cond5] final final.shape player = pd.DataFrame(final.player_of_match.value_counts()) print("Man of the match :") player plt.figure(...
_____no_output_____
MIT
KKR VS MI/Match Analysis KKR VS MI.ipynb
tacklesta/WPL
Interpolation & Fitting 1. Libraries
import matplotlib.pyplot as plt import numpy as np import statsmodels.api as sm from matplotlib.cm import colors from scipy.interpolate import interp1d, lagrange from scipy.optimize import curve_fit from statsmodels.nonparametric.kernel_regression import KernelReg
_____no_output_____
MIT
plotting/ex_interpolation.ipynb
nathanielng/python-snippets
2. Calculations 2.1 Original data
# Original data points x = np.array([1.0, 2.0, 3.0, 4.0, 5.0]) y = np.array([0.1, 1.2, 3.0, 4.2, 3.8]) # Extra data points for drawing the curves x1 = np.linspace(-0.9, 6.7, 50) x2 = np.linspace(x.min(), x.max(), 50)
_____no_output_____
MIT
plotting/ex_interpolation.ipynb
nathanielng/python-snippets
2.2 Calculate interpolating functions
lg = lagrange(x, y) linear = interp1d(x, y, kind='linear') spline0 = interp1d(x, y, kind='zero') spline1 = interp1d(x, y, kind='slinear') spline2 = interp1d(x, y, kind='quadratic') spline3 = interp1d(x, y, kind='cubic')
_____no_output_____
MIT
plotting/ex_interpolation.ipynb
nathanielng/python-snippets
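For intuition, the Lagrange interpolant can also be evaluated directly from its basis-polynomial definition. A pure-NumPy sketch (illustrative only; `scipy.interpolate.lagrange` above instead returns a polynomial object):

```python
import numpy as np

def lagrange_eval(x_train, y_train, x_new):
    """Evaluate the Lagrange interpolating polynomial at x_new.

    L(x) = sum_i y_i * prod_{j != i} (x - x_j) / (x_i - x_j)
    """
    x_new = np.asarray(x_new, dtype=float)
    result = np.zeros_like(x_new)
    for i, (xi, yi) in enumerate(zip(x_train, y_train)):
        basis = np.ones_like(x_new)  # the i-th basis polynomial
        for j, xj in enumerate(x_train):
            if j != i:
                basis *= (x_new - xj) / (xi - xj)
        result += yi * basis
    return result

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([0.1, 1.2, 3.0, 4.2, 3.8])
# The interpolant passes exactly through every data point.
print(np.allclose(lagrange_eval(x, y, x), y))  # True
```

Between and especially outside the data points, this single high-degree polynomial can oscillate strongly, which is what the first plot below contrasts against the piecewise splines.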
3. Plots - Lagrange vs Splines
fig, (ax0, ax1) = plt.subplots(2, 1, figsize=(7,13)) ax0.plot(x, y, 'bo') ax0.plot(x1, lg(x1), label='Lagrange') ax0.plot(x2, linear(x2), label='linear') ax0.legend(loc='best', frameon=False) ax1.plot(x, y, 'bo') ax1.plot(x2, lg(x2), label='Lagrange', color='black') ax1.plot(x2, spline0(x2), label='spline (0th order)'...
_____no_output_____
MIT
plotting/ex_interpolation.ipynb
nathanielng/python-snippets
4. Comparison - Lagrange, LOWESS, Kernel, Cubic Splines 4.1 Function to fit the data
def get_interpolated_data(x_train, y_train, x_new, kind, frac=0.1): if kind == 'lagrange': fn = lagrange(x_train, y_train) x_pred = x_new y_pred = fn(x_new) elif kind == 'lowess': xy = sm.nonparametric.lowess(y_train, x_train, frac=frac) x_pred = xy[:, 0] y_pred =...
_____no_output_____
MIT
plotting/ex_interpolation.ipynb
nathanielng/python-snippets
4.2 Data
n_pts = 10 n_all = 50 x = np.linspace(0, 2*np.pi, n_pts) y = np.sin(x) + 0.1*(np.random.uniform(0, 1, n_pts) - 0.5) x_actual = np.linspace(0, 2*np.pi, n_all) y_actual = np.sin(x_actual) x2 = np.linspace(x.min(), x.max(), 50)
_____no_output_____
MIT
plotting/ex_interpolation.ipynb
nathanielng/python-snippets
4.3 Plots
fig, axs = plt.subplots(2, 2, figsize=(12,8)) kinds = ['lagrange', 'lowess', 'kernel', 'cubic'] cmap = plt.get_cmap("tab10") i = 0 for row in range(2): for col in range(2): kind = kinds[i] x_p, y_p = get_interpolated_data(x, y, x2, kind) axs[row][col].plot(x, y, 'bo') axs[row][col]....
_____no_output_____
MIT
plotting/ex_interpolation.ipynb
nathanielng/python-snippets
Deep Learning for Everyone (모두를 위한 딥러닝) week 2: TensorFlow for logistic classifiers. Jimin Sun
import tensorflow as tf
/usr/local/anaconda3/lib/python3.6/importlib/_bootstrap.py:219: RuntimeWarning: compiletime version 3.5 of module 'tensorflow.python.framework.fast_tensor_util' does not match runtime version 3.6 return f(*args, **kwds)
MIT
week-2/index.ipynb
PluVian/deep-learning-study-2017-winter
Example from lab video
x_data = [[1,2], [2,3], [3,1], [4,4], [5,3], [6,2]] y_data = [[0],[0],[0],[1],[1],[1]] X = tf.placeholder(tf.float32, shape = [None, 2]) Y = tf.placeholder(tf.float32, shape = [None, 1]) # Watch the shapes! W = tf.Variable(tf.random_normal([2,1]), name = 'weight') # 2 values coming in, 1 value going out. b = tf.Variable(tf.random_normal([1]), ...
_____no_output_____
MIT
week-2/index.ipynb
PluVian/deep-learning-study-2017-winter
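The hypothesis above is the logistic classifier $H(X) = \mathrm{sigmoid}(XW + b)$. A NumPy sketch of the same forward pass and cross-entropy cost (plain arrays standing in for the TensorFlow placeholders and variables; weights fixed at zero for illustration):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x_data = np.array([[1, 2], [2, 3], [3, 1], [4, 4], [5, 3], [6, 2]], dtype=float)
y_data = np.array([[0], [0], [0], [1], [1], [1]], dtype=float)

W = np.zeros((2, 1))  # stands in for tf.Variable(tf.random_normal([2, 1]))
b = np.zeros(1)

hypothesis = sigmoid(x_data @ W + b)          # shape (6, 1); all 0.5 when W = b = 0
predicted = (hypothesis > 0.5).astype(float)  # cast booleans to 0. / 1.
accuracy = np.mean(predicted == y_data)
# cross-entropy cost, matching the TF graph above
cost = -np.mean(y_data * np.log(hypothesis) + (1 - y_data) * np.log(1 - hypothesis))
print(round(cost, 4))  # 0.6931 (= log 2 at W = b = 0)
```

Gradient descent then moves `W` and `b` to drive this cost down, which is exactly what the `train` op does in the Session below.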
What is 'reduce_mean'?* tf.reduce_mean is an operation that computes the mean (the same functionality as np.mean!)* The difference between the two is that a numpy operation can be used anywhere in Python, whereas a tensorflow operation only works inside a tensorflow **Session**.* But why **reduce**_mean?> The key here is the word reduce, a concept from functional programming, which makes it possible for ...
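The "reduce" naming refers to collapsing one or more axes of a tensor down to a smaller result. `np.mean` behaves the same way, which a quick sketch makes concrete (`tf.reduce_mean` takes an analogous `axis` argument):

```python
import numpy as np

m = np.array([[1., 2.],
              [3., 4.]])

print(np.mean(m))          # 2.5        - reduce over all axes -> scalar
print(np.mean(m, axis=0))  # [2. 3.]    - collapse rows: one mean per column
print(np.mean(m, axis=1))  # [1.5 3.5]  - collapse columns: one mean per row
```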
with tf.Session() as sess: sess.run(tf.global_variables_initializer()) for step in range(10001): cost_val, _ = sess.run([cost, train], feed_dict = {X: x_data, Y: y_data}) if step % 1000 == 0: print(step, cost_val) # Accuracy h, c, a = sess.run([hypothesis, predicted...
0 0.585211 1000 0.387248 2000 0.313725 3000 0.261586 4000 0.223239 5000 0.194146 6000 0.17146 7000 0.153347 8000 0.13859 9000 0.126357 10000 0.116064 Hypothesis: [[ 0.0268787 ] [ 0.16871433] [ 0.21446182] [ 0.86153394] [ 0.93646842] [ 0.97216052]] Correct (Y): [[ 0.] [ 0.] [ 0.] [ 1.] [ 1.] [ 1.]] Accu...
MIT
week-2/index.ipynb
PluVian/deep-learning-study-2017-winter
Let's apply this classifier to another example! The same dataset from JaeYoung's example last week :) Where to get it: https://www.kaggle.com/c/uci-wine-quality-dataset/data
import numpy as np import pandas as pd data = pd.read_csv('winequality-data.csv', dtype = 'float32', header=0) data.head()
_____no_output_____
MIT
week-2/index.ipynb
PluVian/deep-learning-study-2017-winter
Binary Classifier Here, you can spot the **'quality'** column, where the quality of wine is classified into 7 categories.
data['quality'].unique()
_____no_output_____
MIT
week-2/index.ipynb
PluVian/deep-learning-study-2017-winter
We'll start from a binary classifier, so we label * wines of quality 3.0, 4.0, 5.0 as class 0,* and wines of quality 6.0, 7.0, 8.0, 9.0 as class 1.
data.info() # You can easily check if there are any missing values in your data. grade = data['quality'] > 5.0 data['grade'] = grade.astype(np.float32) data.head() # new column 'grade' is added at the end y_data = data.values[:,[-1]] x_data = data.values[:,:-3] # columns quality, id, grade are excluded y_data.shape, x...
0 nan 100 nan 200 nan 300 nan 400 nan 500 nan 600 nan 700 nan 800 nan 900 nan 1000 nan Hypothesis: [[ nan] [ nan] [ nan] ..., [ nan] [ nan] [ nan]] Correct (Y) : [[ 0.] [ 0.] [ 0.] ..., [ 0.] [ 0.] [ 0.]] Accuracy : 0.335375
MIT
week-2/index.ipynb
PluVian/deep-learning-study-2017-winter
What happened? The cost went to NaN: data normalization was needed! There are many ways to normalize data (i.e., rescale it to $[0,1]$), but this time we'll use **min-max normalization**. The idea is to apply the formula below.$$min\_max(x_{ij}) = \dfrac{x_{ij} - min_{1 \leq i \leq n}(x_{ij})}{(max_{1 \leq i \leq n}(x_{ij})-min_{1 \leq i \leq...
def min_max_normalized(data): col_max = np.max(data, axis=0) # axis=0: columns, axis=1: rows col_min = np.min(data, axis=0) return np.divide(data - col_min, col_max - col_min) x_data = min_max_normalized(x_data) with tf.Session() as sess: sess.run(tf.global_variables_initializer()) feed = {X: x_data, ...
0 1.53646 2000 0.611673 4000 0.596641 6000 0.585185 8000 0.576219 10000 0.569024 12000 0.563115 14000 0.558161 16000 0.553931 18000 0.550264 20000 0.547041 Hypothesis: [[ 0.83623403] [ 0.87260079] [ 0.53597355] ..., [ 0.79522443] [ 0.75276494] [ 0.38663006]] Correct (Y) : [[ 1.] [ 1.] [ 1.] ..., [ 1.] ...
MIT
week-2/index.ipynb
PluVian/deep-learning-study-2017-winter
Even after 20,000 steps, the accuracy doesn't look high enough $(\approx 70\%)$. Why? Let's plot the data and see if we can find a reason there.
%matplotlib inline import matplotlib.pyplot as plt import seaborn as sns data.head() data.describe()
_____no_output_____
MIT
week-2/index.ipynb
PluVian/deep-learning-study-2017-winter
*The variances of most columns seem extremely low.*
sns.FacetGrid(data, hue = 'grade', size=6).map(plt.scatter, 'free.sulfur.dioxide', 'total.sulfur.dioxide').add_legend() plt.show() sns.FacetGrid(data, hue = 'grade', size=6).map(plt.scatter, 'fixed.acidity', 'residual.sugar').add_legend() plt.show() sns.FacetGrid(data, hue = 'grade', size=6).map(plt.scatter, 'density',...
_____no_output_____
MIT
week-2/index.ipynb
PluVian/deep-learning-study-2017-winter
The data itself doesn't really seem linearly separable :( This kind of problem leads us to the next part of the lecture: Neural Networks! Multiclass classifier Example from lab video A quick review of the example dealt with in the lecture.
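In the multiclass setup, the integer labels are one-hot encoded before the softmax cross-entropy is computed. As a quick illustration, a NumPy equivalent of the one-hot step used in the code below:

```python
import numpy as np

def one_hot(labels, num_classes):
    """labels: shape (n, 1) integer array -> shape (n, num_classes)."""
    # indexing the identity matrix picks out one unit row per label
    return np.eye(num_classes)[labels.reshape(-1)]

y = np.array([[0], [2], [1], [2]])
print(one_hot(y, 3))
# [[1. 0. 0.]
#  [0. 0. 1.]
#  [0. 1. 0.]
#  [0. 0. 1.]]
```

(`tf.one_hot` on shape-(n, 1) labels produces an extra axis, which is why the lab code follows it with a reshape.)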
xy = np.loadtxt('data-04-zoo.csv', delimiter=',', dtype=np.float32) xy x_data = xy[:, 0:-1] y_data = xy[:, [-1]] # nb_classes = 7 # For cases when you don't want to set a specific number to 'nb_classes', # or if you don't exactly know the number of categories , # this might be a more generalized method. np.unique(y_...
_____no_output_____
MIT
week-2/index.ipynb
PluVian/deep-learning-study-2017-winter
Back to our wine quality dataset!
x_data = data.values[:,:-3] x_data = min_max_normalized(x_data) x_data y_data = (data.values[:,[-3]]).astype(np.int32) # set y back to the 'quality' column x_data.shape, y_data.shape num_class = len(data['quality'].unique()) num_class X = tf.placeholder(tf.float32, [None, 11]) Y = tf.placeholder(tf.int32, [None, 1]) Y_one_hot = tf...
_____no_output_____
MIT
week-2/index.ipynb
PluVian/deep-learning-study-2017-winter
It seems that the classifier has assigned most data points to classes 5 and 6, since these two make up 75% of all labels. :(
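Checking the label distribution makes the imbalance explicit; a small sketch with `np.unique` (toy labels standing in for the wine `quality` column, in the same spirit as the `nb_classes` trick above):

```python
import numpy as np

# toy stand-in for the wine 'quality' labels
labels = np.array([5, 6, 5, 6, 5, 7, 6, 5, 4, 6, 5, 6])
values, counts = np.unique(labels, return_counts=True)
fractions = counts / counts.sum()
for v, f in zip(values, fractions):
    print(v, round(f, 2))
# A classifier that always predicts the majority class already reaches
# max(fractions) accuracy - the baseline any model has to beat.
```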
alist = ['a1', 'a2', 'a3'] blist = ['b1', 'b2', 'b3'] print(set(zip(alist, blist)))
{('a2', 'b2'), ('a3', 'b3'), ('a1', 'b1')}
MIT
week-2/index.ipynb
PluVian/deep-learning-study-2017-winter
Magic Methods Below you'll find the same code from the previous exercise, except two more methods have been added: an `__add__` method and a `__repr__` method. Your task is to fill out the code and get all of the unit tests to pass. You'll find the code cell with the unit tests at the bottom of this Jupyter notebook. As in p...
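As a warm-up before the Gaussian exercise, a minimal toy class (hypothetical, not part of the exercise) showing what `__add__` and `__repr__` do:

```python
class Money:
    """Tiny illustration of the __add__ and __repr__ magic methods."""

    def __init__(self, amount):
        self.amount = amount

    def __add__(self, other):
        # called by the + operator: Money(2) + Money(3)
        return Money(self.amount + other.amount)

    def __repr__(self):
        # called when the object is echoed in a cell or passed to repr()
        return f"Money({self.amount})"

total = Money(2) + Money(3)
print(repr(total))  # Money(5)
```

The Gaussian class below works the same way: `__add__` returns a new `Gaussian` combining the two distributions, and `__repr__` returns a readable string describing the instance.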
import math import matplotlib.pyplot as plt class Gaussian(): """ Gaussian distribution class for calculating and visualizing a Gaussian distribution. Attributes: mean (float) representing the mean value of the distribution stdev (float) representing the standard deviation of the dist...
_____no_output_____
FTL
Software_Engineering_Practices/magic_methods.ipynb
tthoraldson/MachineLearningNanodegree
FastTreeSHAP in Census Income Data This notebook contains usage examples and detailed comparisons of FastTreeSHAP v1, FastTreeSHAP v2 and the original TreeSHAP on **binary classification** problems using scikit-learn, XGBoost and LightGBM. It also contains the discussions of automatic algorithm selection. It may take a few mi...
import numpy as np import pandas as pd from sklearn.ensemble import RandomForestClassifier from sklearn.metrics import roc_auc_score, accuracy_score import xgboost as xgb import lightgbm as lgb import fasttreeshap import os import time
_____no_output_____
BSD-2-Clause
notebooks/FastTreeSHAP_Census_Income.ipynb
linkedin/FastTreeSHAP
Pre-process training and testing data
# source of data: https://archive.ics.uci.edu/ml/datasets/census+income train = pd.read_csv("../data/adult_data.txt", sep = ",\s+", header = None, engine = "python") test = pd.read_csv("../data/adult_test.txt", sep = ",\s+", header = None, skiprows = 1, engine = "python") label_train = train[14].map({"<=50K": 0, ">50K"...
_____no_output_____
BSD-2-Clause
notebooks/FastTreeSHAP_Census_Income.ipynb
linkedin/FastTreeSHAP
Train a random forest model using scikit-learn and compute SHAP values
n_estimators = 200 # number of trees in random forest model max_depth = 8 # maximum depth of any trees in random forest model # train a random forest model rf_model = RandomForestClassifier(n_estimators = n_estimators, max_depth = max_depth, random_state = 0) rf_model.fit(train, label_train) print("AUC on testing set...
_____no_output_____
BSD-2-Clause
notebooks/FastTreeSHAP_Census_Income.ipynb
linkedin/FastTreeSHAP
Compute SHAP values via different versions of TreeSHAP
num_sample = 10000 # number of samples to be explained n_jobs = -1 # number of parallel threads (-1 means utilizing all available cores) # compute SHAP values via FastTreeSHAP v0 (i.e., original TreeSHAP) # parallel computing is not enabled in original TreeSHAP in SHAP package, but here we enable it for a fair compar...
_____no_output_____
BSD-2-Clause
notebooks/FastTreeSHAP_Census_Income.ipynb
linkedin/FastTreeSHAP
Compare running times of different versions of TreeSHAP in computing SHAP values
# compute SHAP values/SHAP interaction values via TreeSHAP algorithm with version "algorithm_version" # (parallel on "n_jobs" threads) def run_fasttreeshap(model, sample, interactions, algorithm_version, n_jobs, num_round, num_sample, shortcut = False): shap_explainer = fasttreeshap.TreeExplainer( model, al...
_____no_output_____
BSD-2-Clause
notebooks/FastTreeSHAP_Census_Income.ipynb
linkedin/FastTreeSHAP
Compute SHAP interaction values via different versions of TreeSHAP
num_sample = 100 # number of samples to be explained n_jobs = -1 # number of parallel threads (-1 means utilizing all available cores) # compute SHAP interaction values via FastTreeSHAP v0 (i.e., original TreeSHAP) # parallel computing is not enabled in original TreeSHAP in SHAP package, but here we enable it for a f...
_____no_output_____
BSD-2-Clause
notebooks/FastTreeSHAP_Census_Income.ipynb
linkedin/FastTreeSHAP
Compare running times of different versions of TreeSHAP in computing SHAP interaction values
num_sample = 100 # number of samples to be explained num_round = 3 # number of rounds to record mean and standard deviation of running time n_jobs = -1 # number of parallel threads (-1 means utilizing all available cores) # run FastTreeSHAP v0 (i.e., original TreeSHAP) multiple times and record its average running t...
_____no_output_____
BSD-2-Clause
notebooks/FastTreeSHAP_Census_Income.ipynb
linkedin/FastTreeSHAP
Train an XGBoost model and compute SHAP values
n_estimators = 200 # number of trees in XGBoost model max_depth = 8 # maximum depth of any trees in XGBoost model # train an XGBoost model xgb_model = xgb.XGBClassifier( max_depth = max_depth, n_estimators = n_estimators, learning_rate = 0.1, n_jobs = -1, use_label_encoder = False, eval_metric = "logloss", ra...
_____no_output_____
BSD-2-Clause
notebooks/FastTreeSHAP_Census_Income.ipynb
linkedin/FastTreeSHAP
Compute SHAP values via different versions of TreeSHAP
num_sample = 10000 # number of samples to be explained n_jobs = -1 # number of parallel threads (-1 means utilizing all available cores) # compute SHAP values via "shortcut" (i.e., original TreeSHAP in XGBoost package) # by default, parallel computing on all available cores is enabled in "shortcut" shap_explainer = f...
_____no_output_____
BSD-2-Clause
notebooks/FastTreeSHAP_Census_Income.ipynb
linkedin/FastTreeSHAP
Compare running times of different versions of TreeSHAP in computing SHAP values
num_sample = 10000 # number of samples to be explained num_round = 3 # number of rounds to record mean and standard deviation of running time n_jobs = -1 # number of parallel threads (-1 means utilizing all available cores) # run "shortcut" version of TreeSHAP multiple times and record its average running time # by ...
_____no_output_____
BSD-2-Clause
notebooks/FastTreeSHAP_Census_Income.ipynb
linkedin/FastTreeSHAP
Compute SHAP interaction values via different versions of TreeSHAP
num_sample = 100 # number of samples to be explained n_jobs = -1 # number of parallel threads (-1 means utilizing all available cores) # compute SHAP interaction values via "shortcut" (i.e., original TreeSHAP in XGBoost package) # by default, parallel computing on all available cores is enabled in "shortcut" shap_exp...
_____no_output_____
BSD-2-Clause
notebooks/FastTreeSHAP_Census_Income.ipynb
linkedin/FastTreeSHAP
Compare running times of different versions of TreeSHAP in computing SHAP interaction values
num_sample = 100 # number of samples to be explained num_round = 3 # number of rounds to record mean and standard deviation of running time n_jobs = -1 # number of parallel threads (-1 means utilizing all available cores) # run "shortcut" version of TreeSHAP multiple times and record its average running time # by de...
_____no_output_____
BSD-2-Clause
notebooks/FastTreeSHAP_Census_Income.ipynb
linkedin/FastTreeSHAP
Train a LightGBM model and compute SHAP values
n_estimators = 500 # number of trees in LightGBM model max_depth = 8 # maximum depth of any trees in LightGBM model # train a LightGBM model lgb_model = lgb.LGBMClassifier( max_depth = max_depth, n_estimators = n_estimators, learning_rate = 0.1, n_jobs = -1, random_state = 0) lgb_model.fit(train, label_train) pri...
_____no_output_____
BSD-2-Clause
notebooks/FastTreeSHAP_Census_Income.ipynb
linkedin/FastTreeSHAP
Compute SHAP values via different versions of TreeSHAP
num_sample = 10000 # number of samples to be explained n_jobs = -1 # number of parallel threads (-1 means utilizing all available cores) # compute SHAP values via "shortcut" (i.e., original TreeSHAP in LightGBM package) # by default, parallel computing on all available cores is enabled in "shortcut" shap_explainer = ...
_____no_output_____
BSD-2-Clause
notebooks/FastTreeSHAP_Census_Income.ipynb
linkedin/FastTreeSHAP
Compare running times of different versions of TreeSHAP in computing SHAP values
num_sample = 10000 # number of samples to be explained num_round = 3 # number of rounds to record mean and standard deviation of running time n_jobs = -1 # number of parallel threads (-1 means utilizing all available cores) # run "shortcut" version of TreeSHAP multiple times and record its average running time # by ...
_____no_output_____
BSD-2-Clause
notebooks/FastTreeSHAP_Census_Income.ipynb
linkedin/FastTreeSHAP
Compute SHAP interaction values via different versions of TreeSHAP
num_sample = 100 # number of samples to be explained n_jobs = -1 # number of parallel threads (-1 means utilizing all available cores) # compute SHAP interaction values via FastTreeSHAP v0 (i.e., original TreeSHAP in SHAP package) # parallel computing is not enabled in original TreeSHAP in SHAP package, but here we e...
_____no_output_____
BSD-2-Clause
notebooks/FastTreeSHAP_Census_Income.ipynb
linkedin/FastTreeSHAP
Compare running times of different versions of TreeSHAP in computing SHAP interaction values
num_sample = 100 # number of samples to be explained num_round = 3 # number of rounds to record mean and standard deviation of running time n_jobs = -1 # number of parallel threads (-1 means utilizing all available cores) # run FastTreeSHAP v0 (i.e., original TreeSHAP) multiple times and record its average running t...
_____no_output_____
BSD-2-Clause
notebooks/FastTreeSHAP_Census_Income.ipynb
linkedin/FastTreeSHAP
Deep dive into automatic algorithm selection The default value of the argument `algorithm` in the class `TreeExplainer` is `auto`, indicating that the TreeSHAP algorithm is automatically selected from `"v0"`, `"v1"` and `"v2"` according to the number of samples to be explained and the constraint on the allocated memor...
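The exact selection logic lives inside the fasttreeshap package. Purely as an illustration of the trade-off described above, here is a hypothetical selection rule of that shape; the function name, thresholds, and inputs are made up for the sketch and are not the library's actual logic:

```python
def choose_algorithm(n_samples, v2_memory_bytes, memory_budget_bytes,
                     v2_sample_threshold=100):
    """Hypothetical sketch of an 'auto'-style selection rule.

    Idea: v2 trades memory for speed, so pick it only when its per-tree
    precomputation fits in the memory budget and there are enough samples
    to amortize it; otherwise fall back to v1. (v0 is omitted here; the
    real rule may also consider it.)
    """
    if v2_memory_bytes <= memory_budget_bytes and n_samples >= v2_sample_threshold:
        return "v2"
    return "v1"

print(choose_algorithm(n_samples=10000, v2_memory_bytes=2**20,
                       memory_budget_bytes=2**30))  # v2: fits and amortizes
print(choose_algorithm(n_samples=10000, v2_memory_bytes=2**40,
                       memory_budget_bytes=2**30))  # v1: v2 would not fit
```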
n_estimators = 200 # number of trees in random forest model max_depth = 8 # maximum depth of any trees in random forest model # train a random forest model rf_model = RandomForestClassifier(n_estimators = n_estimators, max_depth = max_depth, random_state = 0) rf_model.fit(train, label_train) # estimated memory usage ...
_____no_output_____
BSD-2-Clause
notebooks/FastTreeSHAP_Census_Income.ipynb
linkedin/FastTreeSHAP