<a href="https://colab.research.google.com/github/healthonrails/annolid/blob/main/docs/tutorials/Train_networks_tutorial_v1.0.1.ipynb" target="_blank"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>

# Annolid (a segmentation-based multiple animal tracking package)

In this tutorial, we will show how to track multiple animals by detection and instance segmentation. The instance segmentation on images and video is based on [YOLACT](https://github.com/dbolya/yolact).

## Runtime environment in Google Colab

Colab is Google's free hosted version of Jupyter notebook. This notebook has only been tested inside Colab; it might not run on your workstation, especially if you are running Windows.

### How to use the free GPU?

- Click **Runtime** > **Change Runtime Type**
- Choose **GPU** > **Save**

## YOLACT (You Only Look At CoefficienTs)

A simple, fully convolutional model for real-time instance segmentation. This is the code for the papers:

- [YOLACT: Real-time Instance Segmentation](https://arxiv.org/abs/1904.02689)
- [YOLACT++: Better Real-time Instance Segmentation](https://arxiv.org/abs/1912.06218)

## Reference

```
@inproceedings{yolact-iccv2019,
  author    = {Daniel Bolya and Chong Zhou and Fanyi Xiao and Yong Jae Lee},
  title     = {YOLACT: {Real-time} Instance Segmentation},
  booktitle = {ICCV},
  year      = {2019},
}
```

```
@misc{yolact-plus-arxiv2019,
  title         = {YOLACT++: Better Real-time Instance Segmentation},
  author        = {Daniel Bolya and Chong Zhou and Fanyi Xiao and Yong Jae Lee},
  year          = {2019},
  eprint        = {1912.06218},
  archivePrefix = {arXiv},
  primaryClass  = {cs.CV}
}
```

This notebook was inspired by and modified from https://colab.research.google.com/drive/1ncRxvmNR-iTtQCscj2UFSGV8ZQX_LN0M and
https://www.immersivelimit.com/tutorials/yolact-with-google-colab

# Install required packages

```
# Cython needs to be installed before pycocotools
!pip install cython
!pip install opencv-python pillow pycocotools matplotlib
!pip install -U PyYAML
!nvidia-smi
# DCNv2 will not work if PyTorch is greater than 1.4.0
!pip install torchvision==0.5.0
!pip install torch==1.4.0
```

## Clone and install Annolid from GitHub

```
# The root folder
%cd /content
# Clone the repo
!git clone --recurse-submodules https://github.com/healthonrails/annolid.git
# Install annolid
%cd /content/annolid/
!pip install -e .
```

### Please uncomment the following cell and update opencv-python in Colab if you see errors like the following.

```
File "eval.py", line 804, in cleanup_and_exit
    cv2.destroyAllWindows()
cv2.error: OpenCV(4.1.2) /io/opencv/modules/highgui/src/window.cpp:645: error: (-2:Unspecified error) The function is not implemented. Rebuild the library with Windows, GTK+ 2.x or Cocoa support. If you are on Ubuntu or Debian, install libgtk2.0-dev and pkg-config, then re-run cmake or configure script in function 'cvDestroyAllWindows'
```

```
#!pip install -U opencv-python
```

## DCNv2

If you want to use YOLACT++, compile the deformable convolutional layers (from DCNv2). Note: make sure you have selected the GPU runtime as described in the previous cell.

```
# Change to the DCNv2 directory
%cd /content/annolid/annolid/segmentation/yolact/external/DCNv2
# Build DCNv2
!python setup.py build develop
```

## Download Pretrained Weights

We need to download pre-trained weights for inference. The creator of the GitHub repo shared them on Google Drive. We're going to use **gdown** to easily access the Drive file from Colab.
```
# Make sure we're in the yolact folder
%cd /content/annolid/annolid/segmentation/
# Create a new directory for the pre-trained weights
!mkdir -p /content/annolid/annolid/segmentation/yolact/weights
# Download the file
!gdown --id 1ZPu1YR2UzGHQD0o1rEqy-j5bmEm3lbyP -O ./yolact/weights/yolact_plus_resnet50_54_800000.pth
```

## Train a model

You can let the model train for a few hours. Clicking the stop button for the cell will interrupt the training and save an interrupted model. You can then evaluate the model with a saved checkpoint in the yolact/weights folder.

```
# Check the GPU available for your run
!nvidia-smi
```

## Download an example dataset

```
# https://drive.google.com/file/d/1_kkJ8rlnoMymj0uw8VZkVEuFIMfLDwPh/view?usp=sharing
!gdown --id 1_kkJ8rlnoMymj0uw8VZkVEuFIMfLDwPh -O /content/novelctrl_coco_dataset.zip
```

```
%%shell
#rm -rf /content/novelctrl_coco_dataset//
#rm -rf /content/annolid/annolid/datasets/novelctrl_coco_dataset
unzip /content/novelctrl_coco_dataset.zip -d /content/novelctrl_coco_dataset/
# The following command is only for the demo dataset with relative
# paths to the dataset in the dataset data.yaml file.
mv /content/novelctrl_coco_dataset/novelctrl_coco_dataset/ /content/annolid/annolid/datasets/
```

Note: please change the paths highlighted with ** ** in the example config data.yaml file to your dataset's location.
```yaml
DATASET:
  name: 'novelctrl'
  train_info: '**/Users/xxxxxx/xxxxxx/novelctrl_coco_dataset/**train/annotations.json'
  train_images: '**/Users/xxxxxx/xxxxxx/novelctrl_coco_dataset/**train'
  valid_info: '**/Users/xxxxxx/xxxxxx/novelctrl_coco_dataset/**valid/annotations.json'
  valid_images: '**/Users/xxxxxx/xxxxxx/novelctrl_coco_dataset/**valid'
  class_names: ['nose', 'left_ear', 'right_ear', 'centroid', 'tail_base', 'tail_end', 'grooming', 'rearing', 'object_investigation', 'left_lateral', 'right_lateral', 'wall_1', 'wall_2', 'wall_3', 'wall_4', 'object_1', 'object_2', 'object_3', 'object_4', 'object_5', 'object_6', 'mouse']

YOLACT:
  name: 'novelctrl'
  dataset: 'dataset_novelctrl_coco'
  max_size: 512
```

## Start Tensorboard

```
# Start tensorboard for visualization of training
%cd /content/annolid/annolid/segmentation/yolact/
%load_ext tensorboard
%tensorboard --logdir /content/annolid/runs/logs

%cd /content/annolid/annolid/segmentation/yolact/
!python train.py --config=../../datasets/novelctrl_coco_dataset/data.yaml --batch_size=16 --keep_latest_interval=100
# --resume=weights/novelctrl_244_2202_interrupt.pth --start_iter=244
```

## Mount your Google Drive

```
from google.colab import drive
drive.mount('/content/drive')
```

## Save a trained model to your Google Drive

Please replace your_model_xxxx.pth with the filename of your trained model.

```
# e.g.
!cp /content/annolid/annolid/segmentation/yolact/weights/your_model_xxxx.pth /content/drive/My\ Drive
# Optional
#!cp /content/annolid/annolid/segmentation/yolact/weights/resnet50-19c8e357.pth /content/drive/My\ Drive
```

## Download the original video and test it with your trained model

```
# https://drive.google.com/file/d/1_UDnyYKQplOLMzv1hphk1gVqepmLeyGD/view?usp=sharing
!gdown --id 1_UDnyYKQplOLMzv1hphk1gVqepmLeyGD -O /content/novelctrl.mkv

!python eval.py --trained_model=weights/novelctrl_xxxx_xxxx.pth --config=../../datasets/novelctrl_coco/data.yaml --score_threshold=0.15 --top_k=9 --video_multiframe=1 --video=/content/novelctrl.mkv:novelctrl_tracked.mkv --display_mask=False --mot
```

The output results are located at /content/annolid/annolid/segmentation/yolact/results:

- tracking_results.csv
- xxxx_video_tracked.mp4
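The tracking results CSV can be explored with pandas. The column names used below (frame_number, instance_name, cx, cy, segmentation_area) are hypothetical — this notebook does not document the real schema of tracking_results.csv — so this is only a minimal sketch on a toy in-memory file, not the actual output format.

```python
import io
import pandas as pd

# Hypothetical miniature of tracking_results.csv; these column names are
# illustrative only, not the real Annolid output schema.
toy_csv = io.StringIO(
    "frame_number,instance_name,cx,cy,segmentation_area\n"
    "0,mouse,120.5,80.2,1532\n"
    "1,mouse,121.0,80.9,1540\n"
)
df = pd.read_csv(toy_csv)

# Summarise per-instance statistics, e.g. mean mask area across frames
per_instance = df.groupby("instance_name")["segmentation_area"].mean()
```

The same groupby pattern would apply to the real file once you know its columns.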
### Model Selection

In order to select the best model for our data, I am going to look at scores for a variety of scikit-learn models and compare them using visual tools from Yellowbrick.

### Imports

```
%matplotlib inline
import warnings
warnings.filterwarnings('ignore')

import os
import pandas as pd
from sklearn.metrics import f1_score
from sklearn.pipeline import Pipeline
from sklearn.svm import LinearSVC, NuSVC, SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import OneHotEncoder, LabelEncoder
from sklearn.naive_bayes import GaussianNB, BernoulliNB, MultinomialNB
from sklearn.linear_model import LogisticRegressionCV, LogisticRegression, SGDClassifier
from sklearn.ensemble import BaggingClassifier, ExtraTreesClassifier, RandomForestClassifier

from yellowbrick.classifier import ClassificationReport
```

### About the Data

I am using Home Mortgage Disclosure Act (HMDA) data for the year 2013, a sample of 25,000 observations. https://www.consumerfinance.gov/data-research/hmda/explore

Let's load the data:

```
dataset = pd.read_csv("/Users/tamananaheeme/Desktop/fixtures/hmda2013sample.csv")
dataset.head(5)

dataset['action_taken'] = dataset.action_taken_name.apply(lambda x: 1 if x in ['Loan purchased by the institution', 'Loan originated'] else 0)
pd.crosstab(dataset['action_taken_name'], dataset['action_taken'], margins=True)

print(dataset.describe())

features = ['tract_to_msamd_income', 'population', 'minority_population', 'number_of_owner_occupied_units', 'number_of_1_to_4_family_units', 'loan_amount_000s', 'hud_median_family_income', 'applicant_income_000s']
target = ['action_taken']

# Extract our X and y data
X = dataset[features]
y = dataset[target]
print(X.shape, y.shape)

# Fill in missing values with the column means
X.fillna(X.mean(), inplace=True)
X.describe()
```

### Modeling and Evaluation

```
def score_model(X, y, estimator, **kwargs):
    """
    Test various estimators.
""" model = Pipeline([ ('estimator', estimator) ]) # Instantiate the classification model and visualizer model.fit(X, y, **kwargs) expected = y predicted = model.predict(X) # Compute and return F1 (harmonic mean of precision and recall) print("{}: {}".format(estimator.__class__.__name__, f1_score(expected, predicted))) # Try them all! models = [ GaussianNB(), BernoulliNB(), MultinomialNB(), LogisticRegression(solver='lbfgs', max_iter=4000), LogisticRegressionCV(cv=3, max_iter=4000), BaggingClassifier(), ExtraTreesClassifier(n_estimators=100), RandomForestClassifier(n_estimators=100) ] for model in models: score_model(X, y, model) ``` ### Visual Model Evaluation ``` def visualize_model(X, y, estimator): """ Test various estimators. """ model = Pipeline([ ('estimator', estimator) ]) # Instantiate the classification model and visualizer visualizer = ClassificationReport( model, classes=[1,0], cmap="greens", size=(600, 360) ) visualizer.fit(X, y) visualizer.score(X, y) visualizer.poof() for model in models: visualize_model(X, y, model) ```
# Antarctic Circumnavigation Expedition Cruise Track data processing

## GLONASS and Trimble GPS data

Follow the steps as described here: http://epic.awi.de/48174/

Import relevant packages

```
import pandas as pd
import csv
import MySQLdb
import datetime
import math
import numpy as np
```

### STEP 1 - Extract data from database

Import data from a database table into a dataframe

```
def get_data_from_database(query, db_connection):
    dataframe = pd.read_sql(query, con=db_connection)
    return dataframe
```

**GPS data**

```
query_trimble = 'select * from ship_data_gpggagpsfix where device_id=63 order by date_time;'

password = input()
db_connection = MySQLdb.connect(host='localhost', user='ace', passwd=password, db='ace2016', port=3306)

gpsdb_df = get_data_from_database(query_trimble, db_connection)
#gpsdb_df_opt = optimise_dataframe(gpsdb_df_opt)
```

Preview data

```
gpsdb_df.sample(5)
```

Number of data points output

```
len(gpsdb_df)
```

Dates and times covered by the Trimble data set:

```
print("Start date:", gpsdb_df['date_time'].min())
print("End date:", gpsdb_df['date_time'].max())
```

Output the data into daily files (to be able to plot them to visually screen the obvious outliers).
```
gpsdb_df.dtypes

gpsdb_df['date_time_day'] = gpsdb_df['date_time'].dt.strftime('%Y-%m-%d')
gpsdb_df.sample(5)

days = gpsdb_df.groupby('date_time_day')
for day in days.groups:
    path = '/home/jen/ace_trimble_gps_' + str(day) + '.csv'
    days.get_group(day).to_csv(path, index=False)
```

**GLONASS data**

```
query_glonass = 'select * from ship_data_gpggagpsfix where device_id=64;'

password = input()
db_connection = MySQLdb.connect(host='localhost', user='ace', passwd=password, db='ace2016', port=3306)

glonassdb_df = get_data_from_database(query_glonass, db_connection)
```

Preview data

```
glonassdb_df.sample(5)
```

Number of data points output

```
len(glonassdb_df)
```

Dates and times covered by the GLONASS data set:

```
print("Start date:", glonassdb_df['date_time'].min())
print("End date:", glonassdb_df['date_time'].max())
```

Output the data into daily files (to be able to plot them to visually screen the obvious outliers).

```
glonassdb_df['date_time_day'] = glonassdb_df['date_time'].dt.strftime('%Y-%m-%d')
glonassdb_df.sample(5)

days = glonassdb_df.groupby('date_time_day')
for day in days.groups:
    path = '/home/jen/ace_glonass_' + str(day) + '.csv'
    days.get_group(day).to_csv(path, index=False)
```

### STEP 2 - Visual inspection

### Trimble GPS

Data, in the form of the daily CSV files, were imported into QGIS mapping software in order to manually inspect the data points visually. This was done at a resolution of 1:100,000. There were no obvious outlying points.
Number of points flagged as outliers: 0

The following sections were identified as unusual and have been classified in the table below:

| Start date and time (UTC) | End date and time (UTC) | Potential problem |
|---------|-----------|----------|
| 2016-12-21 10:15:46.620 | 2016-12-21 10:16:35.620 | Overlapping track |
| 2016-12-21 07:37:16.470 | 2016-12-21 07:40:02.470 | Overlapping track |
| 2016-12-22 04:43:10.010 | 2016-12-22 04:43:14.010 | Overlapping track |
| 2016-12-22 09:38:40.270 | 2016-12-22 09:38:41.270 | Strange gap |
| 2016-12-23 00:15:22.060 | 2016-12-23 00:15:33.060 | Overlapping track |
| 2016-12-23 04:57:57.310 | 2016-12-23 04:57:58.310 | Large gap |
| 2016-12-23 05:05:05.310 | 2016-12-23 05:11:43.540 | Overlapping track |
| 2016-12-23 05:30:01.560 | 2016-12-23 05:30:04.560 | Large gap |
| 2016-12-25 06:04:50.900 | 2016-12-25 06:09:52.980 | Overlapping track |
| 2016-12-29 12:57:59.390 | 2016-12-29 14:57:34.730 | Strange diversion |
| 2016-12-30 04:53:24.840 | 2016-12-30 08:51:12.670 | Strange diversion, missing data |
| 2016-12-31 00:51:37.580 | 2016-12-31 00:51:39.580 | |
| 2017-01-01 11:54:29.820 | 2017-01-01 11:54:41.820 | Overlapping track |
| 2017-01-01 22:51:51.680 | 2017-01-01 22:54:41.700 | Strange deflection |
| 2017-01-01 22:56:05.700 | 2017-01-01 22:59:57.700 | Strange deflection |
| 2017-01-04 00:34:30.360 | 2017-01-04 00:34:33.360 | Large move |
| 2017-01-13 08:26:40.850 | 2017-01-13 08:30:36.840 | Strange deflection |
| 2017-03-11 06:47:22.300 | 2017-03-11 08:21:40.970 | Strange deflection |
| 2017-03-16 17:51:36.490 | 2017-03-16 18:12:21.470 | Gap with jump |
| 2017-03-17 15:29:07.620 | 2017-03-17 15:29:10.620 | Gap with jump |
| 2017-03-18 04:07:31.290 | 2017-03-18 04:07:32.290 | Gap with jump |
| 2017-03-18 12:43:22.100 | 2017-03-18 12:43:42.100 | Overlapping track |
| 2017-03-18 13:01:04.100 | 2017-03-18 19:09:28.440 | Large gap with large time difference |
| 2017-03-24 10:13:24.720 | 2017-03-24 10:13:25.720 | Gap with jump |
| 2017-03-25 10:07:08.990 | 2017-03-25 10:07:15.990 | Gap with jump |
| 2017-03-26 22:25:49.940 | 2017-03-26 22:25:50.940 | Gap with jump |

These points have not been flagged but will be returned to later in the processing.

### GLONASS

Data, in the form of the daily CSV files, were imported into QGIS mapping software in order to manually inspect the data points visually. This was done at a resolution of 1:100,000. There were no obvious outlying points.

Number of points flagged as outliers: 0

The following sections were identified as unusual and have been classified in the table below:

| Start date and time (UTC) | End date and time (UTC) | Potential problem |
|---------|-----------|----------|
| 2017-04-06 15:26:30 | 2017-04-06 15:26:32 | Gap with jump |
| 2017-04-06 18:22:07 | 2017-04-06 18:22:09 | Gap with jump |
| 2017-04-06 21:41:21 | 2017-04-06 21:41:45 | Strange deflection |
| 2017-04-07 15:25:06 | 2017-04-07 15:25:13 | Strange deflection |
| 2017-04-08 04:12:44 | 2017-04-08 04:13:27 | Strange deflection |
| 2017-04-08 05:01:34 | 2017-04-08 05:01:49 | Strange deflection |
| 2017-04-08 18:42:12 | 2017-04-08 18:43:04 | Strange deflection |
| 2017-04-08 18:56:02 | 2017-04-08 18:57:27 | Strange deflection |
| 2017-04-08 19:00:26 | 2017-04-08 19:01:03 | Strange deflection |
| 2017-04-08 19:05:31 | 2017-04-08 19:05:46 | Strange deflection |
| 2017-04-10 02:43:40 | 2017-04-10 02:44:25 | Strange deflection |

### STEP 3 - Motion data correction

### STEP 4 - Automated data filtering

Each data point will be compared with the one before and after to automatically filter out points that are out of the conceivable range of the ship's movement.
The second of two consecutive points is flagged as "likely incorrect" when any of the following cases occur:
- speed between two points >= 20 knots
- acceleration between two points >= 1 ms^-2
- change in direction between two points >= 5 degrees

```
# Test
df_test = gpsdb_df.head(10000)
df_test.head(5)

def get_location(datetime, position_df):
    """Create a tuple of the date_time, latitude and longitude of a location in a dataframe from a given date_time."""
    latitude = position_df[position_df.date_time == datetime].latitude.item()
    longitude = position_df[position_df.date_time == datetime].longitude.item()
    location = (datetime, latitude, longitude)
    return location

def calculate_distance(origin, destination):
    """Calculate the haversine or great-circle distance in metres between two points with latitudes and longitudes, where they are known as the origin and destination."""
    datetime1, lat1, lon1 = origin
    datetime2, lat2, lon2 = destination
    radius = 6371  # km

    dlat = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1)
    a = math.sin(dlat / 2) * math.sin(dlat / 2) + math.cos(math.radians(lat1)) \
        * math.cos(math.radians(lat2)) * math.sin(dlon / 2) * math.sin(dlon / 2)
    c = 2 * math.atan2(math.sqrt(a), math.sqrt(1 - a))
    d = radius * c  # Distance in km
    d_m = d * 1000  # Distance in metres

    return d_m

def knots_two_points(origin, destination):
    """Calculate the speed in knots between two locations, which are tuples containing date_time, latitude and longitude."""
    distance = calculate_distance(origin, destination)

    datetime1_timestamp, lat1, lon1 = origin
    datetime2_timestamp, lat2, lon2 = destination

    datetime1 = datetime1_timestamp.timestamp()
    datetime2 = datetime2_timestamp.timestamp()
    seconds = abs(datetime1 - datetime2)

    conversion = 3600 / 1852  # convert 1 ms-1 to knots (nautical miles per hour; 1 nm = 1852 metres)

    # Guard against a zero time difference before dividing
    if seconds > 0:
        speed_knots = (distance / seconds) * conversion
        return speed_knots
    else:
        return "N/A"

destination = (pd.Timestamp('2016-12-21 06:59:14.440000'), -35.8380768456667, 18.0307165761667)
origin = (pd.Timestamp('2016-12-21 06:59:14.440000'), -35.8380768456667, 18.0307165761667)

knots_two_points(origin, destination)

def set_utc(date_time):
    """Set the timezone to be UTC."""
    utc = datetime.timezone(datetime.timedelta(0))
    date_time = date_time.replace(tzinfo=utc)
    return date_time

def analyse_speed(position_df):
    """Analyse the cruise track to ensure each point lies within a reasonable distance and direction from the previous point."""
    total_data_points = len(position_df)

    earliest_date_time = position_df['date_time'].min()
    latest_date_time = position_df['date_time'].max()

    current_date = earliest_date_time
    previous_position = get_location(earliest_date_time, position_df)
    datetime_previous, latitude_previous, longitude_previous = previous_position

    count_speed_errors = 0
    line_number = -1

    for position in position_df.itertuples():
        line_number += 1
        if line_number == 0:
            continue

        current_position = position[2:5]
        row_index = position[0]

        speed_knots = knots_two_points(previous_position, current_position)

        error_message = ""
        if speed_knots == "N/A":
            error_message = "No speed?"
            position_df.at[row_index, 'measureland_qualifier_flag_speed'] = 9
            print(row_index)
        elif speed_knots >= 20:
            error_message += "** Too fast **"
            position_df.at[row_index, 'measureland_qualifier_flag_speed'] = 4
            count_speed_errors += 1
        elif speed_knots < 20:
            position_df.at[row_index, 'measureland_qualifier_flag_speed'] = 1

        if error_message != "":
            print("Error {} {} ({:.4f}, {:.4f}) speed: {} knots".format(error_message, current_position[0], current_position[1], current_position[2], speed_knots))

        previous_position = current_position

    return count_speed_errors

def calculate_bearing(origin, destination):
    """Calculate the direction turned between two points."""
    datetime1, lat1, lon1 = origin
    datetime2, lat2, lon2 = destination

    dlon = math.radians(lon2 - lon1)
    bearing = math.atan2(math.sin(dlon) * math.cos(math.radians(lat2)),
                         math.cos(math.radians(lat1)) * math.sin(math.radians(lat2))
                         - math.sin(math.radians(lat1)) * math.cos(math.radians(lat2)) * math.cos(dlon))
    bearing_degrees = math.degrees(bearing)

    return bearing_degrees

def calculate_bearing_difference(current_bearing, previous_bearing):
    """Calculate the difference between two bearings, based on bearings between 0 and 360."""
    difference = current_bearing - previous_bearing
    while difference < -180:
        difference += 360
    while difference > 180:
        difference -= 360
    return difference

def analyse_course(position_df):
    """Analyse the change in the course between two points regarding the bearing and acceleration - these features need information from previous points."""
    total_data_points = len(position_df)

    earliest_date_time = position_df['date_time'].min()
    current_date = earliest_date_time
    previous_position = get_location(earliest_date_time, position_df)
    datetime_previous, latitude_previous, longitude_previous = previous_position

    previous_bearing = 0
    previous_speed_knots = 0
    count_bearing_errors = 0
    count_acceleration_errors = 0
    line_number = -1

    for position in position_df.itertuples():
        line_number += 1
        if line_number == 0:
            continue

        current_position = position[2:5]
        row_index = position[0]

        # Calculate bearing and change in bearing
        current_bearing = calculate_bearing(previous_position, current_position)
        difference_in_bearing = calculate_bearing_difference(current_bearing, previous_bearing)

        # Calculate acceleration between two points
        current_speed_knots = knots_two_points(previous_position, current_position)
        time_difference = (current_position[0] - previous_position[0]).total_seconds()
        speed_difference_metres_per_sec = (current_speed_knots - previous_speed_knots) * (1852 / 3600)  # convert knots to ms-1
        if time_difference > 0:
            acceleration = speed_difference_metres_per_sec / time_difference
        else:
            acceleration = 0

        # Print errors where data do not meet requirements
        error_message_bearing = ""
        error_message_acceleration = ""

        if difference_in_bearing == "N/A":
            error_message_bearing = "No bearing?"
            position_df.at[row_index, 'measureland_qualifier_flag_course'] = 9
            print(row_index)
        elif difference_in_bearing >= 5:
            error_message_bearing = "** Turn too tight **"
            position_df.at[row_index, 'measureland_qualifier_flag_course'] = 4
            print(row_index)
            count_bearing_errors += 1

        if error_message_bearing != "":
            print("Error: {} {} ({:.4f}, {:.4f}) bearing change: {} degrees".format(error_message_bearing, current_position[0], current_position[1], current_position[2], difference_in_bearing))

        if acceleration == "N/A":
            error_message_acceleration = "No acceleration"
            position_df.at[row_index, 'measureland_qualifier_flag_acceleration'] = 9
            print(row_index)
        elif acceleration > 1:
            count_acceleration_errors += 1
            error_message_acceleration = "** Acceleration too quick **"
            position_df.at[row_index, 'measureland_qualifier_flag_acceleration'] = 4
            print(row_index)

        if error_message_acceleration != "":
            print("Error: {} {} ({:.4f}, {:.4f}) acceleration: {} ms-2".format(error_message_acceleration, current_position[0], current_position[1], current_position[2], acceleration))

        previous_position = current_position
        previous_bearing = current_bearing
        previous_speed_knots = current_speed_knots

    return (count_bearing_errors, count_acceleration_errors)

analyse_speed(df_test)
analyse_course(df_test)
df_test.head()

#df_test.loc[df_test['id'] == 613, 'latitude'] = 99
#df_test.at[1, 'latitude'] = 999
```
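The speed criterion can be sketched end-to-end on two synthetic fixes: haversine distance between the points, then metres per second converted to knots via 3600/1852, compared against the 20-knot threshold. The coordinates and the one-minute gap below are made up for illustration; the function mirrors calculate_distance above.

```python
import math
from datetime import datetime, timedelta

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two lat/lon points."""
    r = 6371000.0  # Earth radius in metres
    dlat = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1)
    a = (math.sin(dlat / 2) ** 2
         + math.cos(math.radians(lat1)) * math.cos(math.radians(lat2))
         * math.sin(dlon / 2) ** 2)
    return 2 * r * math.atan2(math.sqrt(a), math.sqrt(1 - a))

# Two synthetic fixes one minute apart, 0.005 degrees of latitude apart (~556 m)
t1 = datetime(2016, 12, 21, 7, 0, 0)
t2 = t1 + timedelta(seconds=60)
d = haversine_m(-35.8380, 18.0307, -35.8330, 18.0307)

# 1 ms-1 = 3600/1852 knots (1 nautical mile = 1852 metres)
speed_knots = (d / (t2 - t1).total_seconds()) * 3600 / 1852
flagged = speed_knots >= 20   # the filtering threshold used above
```

At ~556 m per minute the ship is doing about 18 knots, so this pair would pass the check; halving the time gap would trip the flag.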
# EuroSciPy 2018: NumPy tutorial

(https://github.com/gertingold/euroscipy-numpy-tutorial)

## Let's do some slicing

```
mylist = list(range(10))
print(mylist)
```

Use slicing to produce the following outputs:

    [2, 3, 4, 5]
    [0, 1, 2, 3, 4]
    [6, 7, 8, 9]
    [0, 2, 4, 6, 8]
    [9, 8, 7, 6, 5, 4, 3, 2, 1, 0]
    [7, 5, 3]

## Matrices and lists of lists

```
matrix = [[0, 1, 2],
          [3, 4, 5],
          [6, 7, 8]]
```

Get the second row by slicing twice.

Try to get the second column by slicing. ~~Do not use a list comprehension!~~

## Getting started

Import the NumPy package

## Create an array

```
np.lookfor('create array')
help(np.array)  # remember: Shift + Tab gives a pop-up help
```

Press Shift + Tab when your cursor is in a Code cell. This will open a pop-up with some helping text. What happens when Shift + Tab is pressed a second time?

The variable `matrix` contains a list of lists. Turn it into an `ndarray` and assign it to the variable `myarray`. Verify that its type is correct.

For practicing purposes, arrays can conveniently be created with the `arange` method.

```
myarray1 = np.arange(6)
myarray1

def array_attributes(a):
    for attr in ('ndim', 'size', 'itemsize', 'dtype', 'shape', 'strides'):
        print('{:8s}: {}'.format(attr, getattr(a, attr)))

array_attributes(myarray1)
```

## Data types

Use `np.array()` to create arrays containing
* floats
* complex numbers
* booleans
* strings

and check the `dtype` attribute.

Do you understand what is happening in the following statement?

```
np.arange(1, 160, 10, dtype=np.int8)
```

## Strides/Reshape

```
myarray2 = myarray1.reshape(2, 3)
myarray2
array_attributes(myarray2)
myarray3 = myarray1.reshape(3, 2)
array_attributes(myarray3)
```

## Views

Set the first entry of `myarray1` to a new value, e.g. 42. What happened to `myarray2`?

What happens when a matrix is transposed?

```
a = np.arange(9).reshape(3, 3)
a
a.T
```

Check the strides!
```
a.strides
a.T.strides
```

## View versus copy

identical object

```
a = np.arange(4)
b = a
id(a), id(b)
```

view: a different object working on the same data

```
b = a[:]
id(a), id(b)
a[0] = 42
a, b
```

an independent copy

```
a = np.arange(4)
b = np.copy(a)
id(a), id(b)
a[0] = 42
a, b
```

## Some array creation routines

### numerical ranges

`arange(`*start*, *stop*, *step*`)`, *stop* is not included in the array

```
np.arange(5, 30, 5)
```

`arange` resembles `range`, but also works for floats

Create the array [1, 1.1, 1.2, 1.3, 1.4, 1.5]

`linspace(`*start*, *stop*, *num*`)` determines the step to produce *num* equally spaced values, *stop* is included by default

Create the array [1., 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9, 2.]

For equally spaced values on a logarithmic scale, use `logspace`.

```
np.logspace(-2, 2, 5)
np.logspace(0, 4, 9, base=2)
```

### Application

```
import matplotlib.pyplot as plt
%matplotlib inline

x = np.linspace(0, 10, 100)
y = np.cos(x)
plt.plot(x, y)
```

### Homogeneous data

```
np.zeros((4, 4))
```

Create a 4x4 array with integer zeros

```
np.ones((2, 3, 3))
```

Create a 3x3 array filled with tens

### Diagonal elements

```
np.diag([1, 2, 3, 4])
```

`diag` has an optional argument `k`. Try to find out what its effect is.

Replace the 1d array by a 2d array. What does `diag` do?

```
np.info(np.eye)
```

Create the 3x3 array

```
[[2, 1, 0],
 [1, 2, 1],
 [0, 1, 2]]
```

### Random numbers

What is the effect of np.random.seed?
```
np.random.seed()
np.random.rand(5, 2)
np.random.seed(1234)
np.random.rand(5, 2)

data = np.random.rand(20, 20)
plt.imshow(data, cmap=plt.cm.hot, interpolation='none')
plt.colorbar()

casts = np.random.randint(1, 7, (100, 3))
plt.hist(casts, np.linspace(0.5, 6.5, 7))
```

## Indexing and slicing

### 1d arrays

```
a = np.arange(10)
```

Create the array [7, 8, 9]

Create the array [2, 4, 6, 8]

Create the array [9, 8, 7, 6, 5, 4, 3, 2, 1, 0]

### Higher dimensions

```
a = np.arange(40).reshape(5, 8)
```

Create the array [[21, 22, 23], [29, 30, 31], [37, 38, 39]]

Create the array [3, 11, 19, 27, 35]

Create the array [11, 12, 13]

Create the array [[8, 11, 14], [24, 27, 30]]

## Fancy indexing ‒ Boolean mask

```
a = np.arange(40).reshape(5, 8)
a % 3 == 0
a[a % 3 == 0]
a[(1, 1, 2, 2, 3, 3), (3, 4, 2, 5, 3, 4)]
```

## Axes

Create an array and calculate the sum over all elements

Now calculate the sum along axis 0

... and now along axis 1

Identify the axes in the following array

```
a = np.arange(24).reshape(2, 3, 4)
a
```

## Axes in more than two dimensions

Create a three-dimensional array

Produce a two-dimensional array by cutting along axis 0

... and axis 1

... and axis 2

What do you get by simply using the index `[0]`?

What do you get by using `[..., 0]`?

## Exploring numerical operations

```
a = np.arange(4)
b = np.arange(4, 8)
a, b
a + b
a * b
```

Operations are elementwise. Check this by multiplying two 2d arrays... and now do a real matrix multiplication

## Application: Random walk

```
length_of_walk = 10000
realizations = 5
angles = 2*np.pi*np.random.rand(length_of_walk, realizations)
x = np.cumsum(np.cos(angles), axis=0)
y = np.cumsum(np.sin(angles), axis=0)
plt.plot(x, y)
plt.axis('scaled')
plt.plot(np.hypot(x, y))
plt.plot(np.mean(x**2+y**2, axis=1))
plt.axis('scaled')
```

## Let's check the speed

```
%%timeit
a = np.arange(1000000)
a**2
```

```
%%timeit
xvals = range(1000000)
[xval**2 for xval in xvals]
```

```
%%timeit
a = np.arange(100000)
np.sin(a)
```

```
import math
```

```
%%timeit
xvals = range(100000)
[math.sin(xval) for xval in xvals]
```

## Broadcasting

```
a = np.arange(12).reshape(3, 4)
a
a + 1
a + np.arange(4)
a + np.arange(3)
np.arange(3)
np.arange(3).reshape(3, 1)
a + np.arange(3).reshape(3, 1)
```

Create a multiplication table for the numbers from 1 to 10 starting from two appropriately chosen 1d arrays.

As an alternative to `reshape` one can add additional axes with `np.newaxis` or `None`:

```
a = np.arange(5)
b = a[:, np.newaxis]
```

Check the shapes.

## Functions of two variables

```
x = np.linspace(-40, 40, 200)
y = x[:, np.newaxis]
z = np.sin(np.hypot(x-10, y)) + np.sin(np.hypot(x+10, y))
plt.imshow(z, cmap='viridis')

x, y = np.mgrid[-10:10:0.1, -10:10:0.1]
x
y
plt.imshow(np.sin(x*y))

x, y = np.mgrid[-10:10:50j, -10:10:50j]
x
y
plt.imshow(np.arctan2(x, y))
```

It is natural to use broadcasting. Check out what happens when you replace `mgrid` by `ogrid`.

## Linear Algebra in NumPy

```
a = np.arange(4).reshape(2, 2)
eigenvalues, eigenvectors = np.linalg.eig(a)
eigenvalues
eigenvectors
```

Explore whether the eigenvectors are the rows or the columns. Try out `eigvals` and other methods offered by `linalg` which you are interested in.

## Application: identify entry closest to ½

Create a 2d array containing random numbers and generate a vector containing, for each row, the entry closest to one-half.
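A possible solution sketch for this final exercise (one approach under the assumption that any correct solution is acceptable, not the tutorial's official answer): compute per-row argmin of the distance to 0.5, then pick those entries with fancy indexing.

```python
import numpy as np

# Reproducible random 2d array, in the tutorial's own seeding style
np.random.seed(1234)
a = np.random.rand(5, 8)

# Column index of the entry nearest 0.5, for each row
cols = np.abs(a - 0.5).argmin(axis=1)

# Fancy indexing with a row-index vector picks one entry per row
closest = a[np.arange(a.shape[0]), cols]
```

This combines the axis-wise reductions and fancy indexing introduced earlier in the tutorial.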
```
'''Check that the simulated kappa_gmf values are properly simulated, and also check them as a function of Z'''

%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import os
import h5py
import matplotlib.lines as mlines

from fancy import Uhecr
from fancy.interfaces.stan import uv_to_coord
from fancy.plotting.allskymap_cartopy import AllSkyMapCartopy as AllSkyMap

ptypes_list = ["p", "He", "N", "Si", "Fe"]
# ptypes_list = ["p", "He", "N"]
seed = 19990308

uhecr_file = "../../data/UHECRdata.h5"
uhecr = Uhecr()

# read kappa_gmf values for each particle type
kappa_gmf_list = []
coord_gal_list = []
coord_rand_list = []
coord_true_list = []
kappa_gmf_rand_list = []

gmf_model = "PT11"
detector = "TA2015"

with h5py.File(uhecr_file, "r") as f:
    detector_dict = f[detector]
    kappa_gmf_dict = detector_dict["kappa_gmf"]
    gmf_model_dict = kappa_gmf_dict[gmf_model]
    for ptype in ptypes_list:
        gmf_model_dict_ptype = gmf_model_dict[ptype]
        coord_true_list.append(gmf_model_dict_ptype["omega_true"][()])
        coord_rand_list.append(gmf_model_dict_ptype["omega_rand"][()])
        coord_gal_list.append(gmf_model_dict_ptype["omega_gal"][()])
        kappa_gmf_list.append(gmf_model_dict_ptype["kappa_gmf"][()])
        kappa_gmf_rand_list.append(gmf_model_dict_ptype["kappa_gmf_rand"][()])

    glon = detector_dict['glon'][()]
    glat = detector_dict['glat'][()]
    uhecr_coord = uhecr.get_coordinates(glon, glat)

# for ptype in ptypes_list:
#     sim_output_file = "../../output/{0}_sim_{1}_{2}_{3}_{4}.h5".format(
#         "joint_gmf", "SBG_23", "TA2015", seed, ptype)
#     with h5py.File(sim_output_file, "r") as f:
#         uhecr_coord_list.append(uv_to_coord(f["uhecr/unit_vector"][()]))
#         kappa_gmf_list.append(f["uhecr/kappa_gmf"][()])
#         coord_gmf_list.append(f["plotvars/omega_rand_kappa_gmf"][()])
#         coord_defl_list.append(f["plotvars/omega_defl_kappa_gmf"][()])
#     # print(f["uhecr"].keys(), type(uhecr_coord))

# kappa_gmf
len(kappa_gmf_list[0])

plt.style.use("minimalist")
skymap = AllSkyMap(lon_0=180)
skymap.set_gridlines(label_fmt="default") ptype = "N" ptype_idx = np.argwhere([p == ptype for p in ptypes_list])[0][0] # print(ptype_idx) skymap.scatter(np.rad2deg(coord_true_list[ptype_idx][:, 0]), np.rad2deg(coord_true_list[ptype_idx][:, 1]), color="k", alpha=1, marker="+", s=20.0) skymap.scatter(np.rad2deg(coord_rand_list[ptype_idx][:, :, 0]), np.rad2deg(coord_rand_list[ptype_idx][:, :, 1]), color="b", alpha=0.1, s=5.0, lw=0) skymap.scatter(np.rad2deg(coord_gal_list[ptype_idx][:, :, 0]), np.rad2deg(coord_gal_list[ptype_idx][:, :, 1]), color="r", alpha=0.1, s=5.0, lw=0) # skymap.scatter(180. - 177.14668051, 49.59823616, color="g", marker="o", s=200) # skymap.scatter(180. - uhecr_coord.galactic.l.deg, uhecr_coord.galactic.b.deg, color="purple", alpha=1, marker="+", s=50.0) handles = [mlines.Line2D([], [], color='k', marker='+', lw=0, markersize=6, alpha=1, label="Original UHECR"), mlines.Line2D([], [], color='b', marker='o', lw=0, markersize=3, alpha=0.3, label="Randomized UHECR"), mlines.Line2D([], [], color='r', marker='o', lw=0, markersize=3, alpha=0.3, label="Deflected UHECR")] skymap.legend(handles=handles, bbox_to_anchor=(1.25, 0.27), fontsize=14) # skymap.title("Deflection skymap with data - {0}".format(ptype)) skymap.save(f"defl_skymap_{detector}_{ptype}_{gmf_model}.png") plt.style.use("minimalist") skymap = AllSkyMap(lon_0 = 180) skymap.set_gridlines(label_fmt="default", zorder=1) # ptype = "p" # for sel_uhecr_idx in np.arange(0, 72, 8): # print(sel_uhecr_idx) sel_uhecr_idx = 24 sel_uhecr_lon, sel_uhecr_lat = 180-np.rad2deg(coord_true_list[ptype_idx][sel_uhecr_idx, 0]), np.rad2deg(coord_true_list[ptype_idx][sel_uhecr_idx, 1]) ptype_idx = np.argwhere([p == ptype for p in ptypes_list])[0][0] # print(ptype_idx) skymap.scatter(np.rad2deg(coord_true_list[ptype_idx][:, 0]), np.rad2deg(coord_true_list[ptype_idx][:, 1]), color="k", alpha=1, marker="+", s=20.0) skymap.scatter(np.rad2deg(coord_rand_list[ptype_idx][:, :, 0]), np.rad2deg(coord_rand_list[ptype_idx][:, 
:, 1]), color="b", alpha=0.1, s=5.0, lw=0) skymap.scatter(np.rad2deg(coord_gal_list[ptype_idx][:, :, 0]), np.rad2deg(coord_gal_list[ptype_idx][:, :, 1]), color="r", alpha=0.1, s=5.0, lw=0) # skymap.scatter(180. - uhecr_coord.galactic.l.deg, uhecr_coord.galactic.b.deg, color="purple", alpha=1, marker="+", s=50.0) skymap.scatter(np.rad2deg(coord_rand_list[ptype_idx][sel_uhecr_idx, :, 0]), np.rad2deg(coord_rand_list[ptype_idx][sel_uhecr_idx, :, 1]), color="g", alpha=0.1, s=5.0, lw=0) skymap.scatter(np.rad2deg(coord_gal_list[ptype_idx][sel_uhecr_idx, :, 0]), np.rad2deg(coord_gal_list[ptype_idx][sel_uhecr_idx, :, 1]), color="g", alpha=0.8, s=5.0, lw=0) for i in range(100): skymap.geodesic(np.rad2deg(coord_rand_list[ptype_idx][sel_uhecr_idx, i, 0]), np.rad2deg(coord_rand_list[ptype_idx][sel_uhecr_idx, i, 1]), np.rad2deg(coord_gal_list[ptype_idx][sel_uhecr_idx, i, 0]), np.rad2deg(coord_gal_list[ptype_idx][sel_uhecr_idx, i, 1]), alpha=0.05, color="k") handles = [mlines.Line2D([], [], color='k', marker='+', lw=0, markersize=6, alpha=1, label="Original UHECR"), mlines.Line2D([], [], color='b', marker='o', lw=0, markersize=3, alpha=0.3, label="Randomized UHECR"), mlines.Line2D([], [], color='r', marker='o', lw=0, markersize=3, alpha=0.3, label="Deflected UHECR")] # sel_uhecr_handles = [mlines.Line2D([], [], color='g', marker='o', lw=0, # markersize=4, alpha=0.3, label="UHECR Coordinate (Gal):\n ({0:.1f}$^\circ$, {1:.1f}$^\circ$)".format(sel_uhecr_lon, sel_uhecr_lat))] legend1 = skymap.legend(handles=handles, bbox_to_anchor=(1.25, 0.27)) # legend2 = skymap.legend(handles=sel_uhecr_handles, bbox_to_anchor=(1.25, 1.05)) skymap.ax.add_artist(legend1) # skymap.ax.add_artist(legend2) # skymap.title("Deflection skymap with data - {0}".format(ptype)) # skymap.save("skymap_banana.png") # evaluate dot product between deflected and randomized vector # then obtain the effective thetea_rms for each (true) UHECR from astropy.coordinates import SkyCoord from astropy import units as u from 
fancy.interfaces.stan import coord_to_uv from scipy.optimize import root ptype = "p" '''Integral of Fischer distribution used to evaluate kappa_d''' def fischer_int(kappa, cos_thetaP): '''Integral of vMF function over all angles''' return (1. - np.exp(-kappa * (1 - cos_thetaP))) / (1. - np.exp(-2.*kappa)) def fischer_int_eq_P(cos_thetaP, kappa, P): '''Equation to find roots for''' return fischer_int(kappa, cos_thetaP) - P ptype_idx = np.argwhere([p == ptype for p in ptypes_list])[0][0] # print(kappa_gmf_list[ptype_idx]) cos_theta_arr = np.zeros(len(kappa_gmf_list[ptype_idx])) for i, kappa_gmf in enumerate(kappa_gmf_list[ptype_idx]): sol = root(fischer_int_eq_P, x0=1, args=(kappa_gmf, 0.683)) cos_theta = sol.x[0] cos_theta_arr[i] = cos_theta thetas = np.rad2deg(np.arccos(cos_theta_arr)) # print(np.rad2deg(np.arccos(cos_theta_arr))) skymap = AllSkyMap(lon_0 = 180) skymap.set_gridlines(label_fmt="default") ang_err = 1.7 skymap.scatter(np.rad2deg(coord_true_list[ptype_idx][:, 0]), np.rad2deg(coord_true_list[ptype_idx][:, 1]), color="k", alpha=1, marker="+", s=10.0) for i, (lon, lat) in enumerate(coord_true_list[ptype_idx][:]): skymap.tissot(np.rad2deg(lon), np.rad2deg(lat), ang_err, color="b", alpha=0.2) skymap.tissot(np.rad2deg(lon), np.rad2deg(lat), thetas[i], color="r", alpha=0.05) handles = [mlines.Line2D([], [], color='k', marker='+', lw=0, markersize=8, alpha=1, label="Original UHECR"), mlines.Line2D([], [], color='b', marker='o', lw=0, markersize=8, alpha=0.3, label="Angular Uncertainty Circle"), mlines.Line2D([], [], color='r', marker='o', lw=0, markersize=8, alpha=0.3, label="Deflection Circle")] skymap.legend(handles=handles, bbox_to_anchor=(1.3, 0.27)) # skymap.title("Deflection skymap with data - {0}".format(ptype)) # skymap.scatter(np.rad2deg(coord_true_list[ptype_idx][:, 0]), np.rad2deg(coord_true_list[ptype_idx][:, 1]), color="k", alpha=1, marker="+", s=20.0) # skymap.scatter(np.rad2deg(coord_rand_list[ptype_idx][:, :, 0]), 
np.rad2deg(coord_rand_list[ptype_idx][:, :, 1]), color="b", alpha=0.05, s=4.0) # evaluate dot product between deflected and randomized vector # then obtain the effective thetea_rms for each (true) UHECR from astropy.coordinates import SkyCoord from astropy import units as u from fancy.interfaces.stan import coord_to_uv from scipy.optimize import root ptype = "N" '''Integral of Fischer distribution used to evaluate kappa_d''' def fischer_int(kappa, cos_thetaP): '''Integral of vMF function over all angles''' return (1. - np.exp(-kappa * (1 - cos_thetaP))) / (1. - np.exp(-2.*kappa)) def fischer_int_eq_P(cos_thetaP, kappa, P): '''Equation to find roots for''' return fischer_int(kappa, cos_thetaP) - P ptype_idx = np.argwhere([p == ptype for p in ptypes_list])[0][0] # print(kappa_gmf_list[ptype_idx]) cos_theta_arr = np.zeros(len(kappa_gmf_list[ptype_idx])) for i, kappa_gmf in enumerate(kappa_gmf_list[ptype_idx]): sol = root(fischer_int_eq_P, x0=1, args=(kappa_gmf, 0.683)) cos_theta = sol.x[0] cos_theta_arr[i] = cos_theta thetas = np.rad2deg(np.arccos(cos_theta_arr)) # print(np.rad2deg(np.arccos(cos_theta_arr))) skymap = AllSkyMap(lon_0 = 180) skymap.set_gridlines(label_fmt="default") ang_err = 1.7 skymap.scatter(np.rad2deg(coord_true_list[ptype_idx][:, 0]), np.rad2deg(coord_true_list[ptype_idx][:, 1]), color="k", alpha=1, marker="+", s=10.0) for i, (lon, lat) in enumerate(coord_true_list[ptype_idx][:]): skymap.tissot(np.rad2deg(lon), np.rad2deg(lat), ang_err, color="b", alpha=0.2) skymap.tissot(np.rad2deg(lon), np.rad2deg(lat), thetas[i], color="r", alpha=0.05) sel_uhecr_idx = 24 skymap.tissot(np.rad2deg(coord_true_list[ptype_idx][sel_uhecr_idx, 0]), np.rad2deg(coord_true_list[ptype_idx][sel_uhecr_idx, 1]), ang_err, color="b", alpha=0.2) skymap.tissot(np.rad2deg(coord_true_list[ptype_idx][sel_uhecr_idx, 0]), np.rad2deg(coord_true_list[ptype_idx][sel_uhecr_idx, 1]), thetas[sel_uhecr_idx], color="g", alpha=0.3) handles = [mlines.Line2D([], [], color='k', marker='+', lw=0, 
markersize=8, alpha=1, label="Original UHECR"), mlines.Line2D([], [], color='b', marker='o', lw=0, markersize=8, alpha=0.3, label="Angular Uncertainty Circle"), mlines.Line2D([], [], color='r', marker='o', lw=0, markersize=8, alpha=0.3, label="Deflection Circle")] # sel_uhecr_handles = [mlines.Line2D([], [], color='g', marker='o', lw=0, # markersize=4, alpha=0.3, label="UHECR Coordinate (Gal):\n ({0:.1f}$^\circ$, {1:.1f}$^\circ$)\n".format(sel_uhecr_lon, sel_uhecr_lat))] legend1 = skymap.legend(handles=handles, bbox_to_anchor=(1.3, 0.27)) # legend2 = skymap.legend(handles=sel_uhecr_handles, bbox_to_anchor=(0.9, 1.05)) skymap.ax.add_artist(legend1) # skymap.ax.add_artist(legend2) # skymap.title("Deflection skymap with data - {0}".format(ptype)) # skymap.save("GMFPlots_N_defl_circles_selUHECR.png") # skymap.scatter(np.rad2deg(coord_true_list[ptype_idx][:, 0]), np.rad2deg(coord_true_list[ptype_idx][:, 1]), color="k", alpha=1, marker="+", s=20.0) # skymap.scatter(np.rad2deg(coord_rand_list[ptype_idx][:, :, 0]), np.rad2deg(coord_rand_list[ptype_idx][:, :, 1]), color="b", alpha=0.05, s=4.0) len(thetas) # somehow the circles that cover all sky (fior N) disappeared, need to invesitagte why ptype_idx = np.argwhere([p == "p" for p in ptypes_list])[0][0] # plt.hist(kappa_gmf_list[ptype_idx], bins=np.logspace(-2, 3, 10)) plt.hist(kappa_gmf_list[ptype_idx], bins=np.linspace(1e-2, 1e2, 10)) # plt.xscale("log") colors = ["b", "r", "g"] fig, ax = plt.subplots(figsize=(6,4)) # plt.hist(kappa_gmf_list[ptype_idx], bins=np.logspace(-2, 3, 10)) for i, ptype in enumerate(["p", "N", "Fe"]): ptype_idx = np.argwhere([p == ptype for p in ptypes_list])[0][0] # print(kappa_gmf_list[ptype_idx]) not_zero_idces = np.argwhere(kappa_gmf_list[ptype_idx] > 1e-9) cos_theta_arr = np.zeros_like(kappa_gmf_list[ptype_idx]) for j, kappa_gmf in enumerate(kappa_gmf_list[ptype_idx]): if kappa_gmf < 1e-9: kappa_gmf = 0 sol = root(fischer_int_eq_P, x0=1, args=(kappa_gmf, 0.683)) cos_theta = sol.x[0] 
cos_theta_arr[j] = cos_theta thetas = np.rad2deg(np.arccos(cos_theta_arr)) thetas = thetas[not_zero_idces] # print(thetas) ax.hist(thetas, bins=np.linspace(0, 180, 25), alpha=0.4, color=colors[i], label=ptype, weights=np.ones_like(thetas) / len(thetas) / 25) # plt.xscale("log") ax.set_xlim([0, 130]) ax.set_xticks(np.arange(0, 130, 30, dtype=int)) ax.set_yticks([0, 0.01, 0.02, 0.03]) # for tick in ax.xaxis.get_major_ticks(): # tick.label.set_fontsize(12) # ax.set_yticklabels() ax.legend() ax.tick_params(axis="x", labelsize=14) ax.tick_params(axis="y", labelsize=14) ax.set_xlabel(r"$\sigma_{{\omega + \rm{GMF}}}$ / degrees", fontsize=14) ax.set_ylabel(r"$P (\sigma_{{\omega + \rm{GMF}}} | \hat{\omega}, \omega_{g})$", fontsize=16) fig.tight_layout() fig.savefig("kappa_defl_histogram.png", dpi=400) # plt.hist(kappa_gmf_rand_list[ptype_idx][sel_uhecr_idx, :], # bins=np.logspace(-2, 2, 30), # alpha=0.4, color=colors[i], label=ptype) # plt.xscale("log") # plt.legend() colors = ["k"] fig, ax = plt.subplots(figsize=(6,4)) # plt.hist(kappa_gmf_list[ptype_idx], bins=np.logspace(-2, 3, 10)) for i, ptype in enumerate(["p"]): ptype_idx = np.argwhere([p == ptype for p in ptypes_list])[0][0] # print(kappa_gmf_list[ptype_idx]) cos_theta_arr = np.zeros_like(kappa_gmf_rand_list[ptype_idx][sel_uhecr_idx, :]) for j, kappa_gmf in enumerate(kappa_gmf_rand_list[ptype_idx][sel_uhecr_idx, :]): sol = root(fischer_int_eq_P, x0=1, args=(kappa_gmf, 0.683)) cos_theta = sol.x[0] cos_theta_arr[j] = cos_theta thetas = np.rad2deg(np.arccos(cos_theta_arr)) ax.hist(thetas, bins=np.linspace(0, 30, 20), alpha=0.4, color=colors[i], label=ptype, weights=np.ones_like(thetas) / len(thetas) / 20) # plt.xscale("log") ax.set_xlim([0, 30]) ax.set_xticks(np.arange(0, 35, 5, dtype=int)) ax.set_yticks([0, 0.005, 0.01, 0.015]) ax.tick_params(axis="x", labelsize=14) ax.tick_params(axis="y", labelsize=14) # for tick in ax.xaxis.get_major_ticks(): # tick.label.set_fontsize(12) # ax.set_yticklabels() # ax.legend() 
ax.set_xlabel(r"$\sigma_{{\omega + \rm{GMF}}}$ / degrees", fontsize=14) ax.set_ylabel(r"$P (\sigma_{{\omega + \rm{GMF}}} | \hat{\omega}, \omega_{g})$", fontsize=16) fig.tight_layout() fig.savefig(f"kappa_defl_{ptype}_histogram.png", dpi=400) ```
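The κ-to-containment-angle conversion used repeatedly above can be checked in isolation: for a given concentration κ, find the angle containing probability P = 0.683 by root-finding the cumulative vMF integral. This sketch mirrors the notebook's `fischer_int` helper; the κ values at the end are illustrative only:

```python
import numpy as np
from scipy.optimize import root

def fischer_int(kappa, cos_thetaP):
    '''Integral of the vMF distribution over angles up to theta_P'''
    return (1. - np.exp(-kappa * (1. - cos_thetaP))) / (1. - np.exp(-2. * kappa))

def containment_angle(kappa, P=0.683):
    '''Opening angle (degrees) containing probability P for concentration kappa'''
    sol = root(lambda c: fischer_int(kappa, c) - P, x0=1.)
    # clip guards arccos against tiny numerical overshoot outside [-1, 1]
    return np.rad2deg(np.arccos(np.clip(sol.x[0], -1., 1.)))

# stronger concentration -> smaller containment circle
print(containment_angle(100.), containment_angle(1.))
```

This is the same computation the plotting cells above perform element-wise over `kappa_gmf_list`, just wrapped as a reusable function.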
#Grade prediction using Parallel Random Forest

Grading is a necessary tool for measuring the outcome of the studying process. An unsuccessful learning process may depend on many factors such as student attention, the teacher, and background knowledge. In many academic institutions, prerequisite subjects are required before registering for certain subjects. In this project, we attempt to investigate the relationship between prerequisite grades and the grade of a particular subject. We have acquired anonymous enrolment records of computer science students from the registrar of Thammasat University. The records have been collected continuously for 4 years and consist of 28,272 records.

```
import pandas as pd
import numpy as np
print pd.__version__
%matplotlib inline
pd.set_option('display.max_rows', 100)
pd.set_option('display.max_columns', 500)
df_csv = pd.read_csv('CS_table_No2_No4_new.csv',delimiter=";",
           skip_blank_lines = True, error_bad_lines=False)
df_csv=df_csv.dropna()
print df_csv.head() #get first 5 records
print df_csv.tail() #get last 5 records
```

#Data mining process

The normal procedure for data mining is the following:

* data cleaning
* data transformation
* training
* evaluation

##Drop unnecessary data

Some columns are unnecessary for the prediction. We can drop them by using df.drop()

```python
df_csv.drop(["column_name"])
```

```
column_name=df_csv.columns.values
print column_name
for i in column_name:
    print "datatype of column {} is {}.".format(i,df_csv[i].dtype)
df_csv.drop(['CAMPUSID','CAMPUSNAME','COURSENAME','CURRIC','SECTIONGROUP','CREDIT'], axis=1, inplace=True)
column_name=df_csv.columns.values
print column_name
for i in column_name:
    print "datatype of column {} is {}.".format(i,df_csv[i].dtype)
```

##Data cleaning

There are some records containing NaN and invalid data. We need to fix or remove them.
``` df=df_csv.copy() myhist=df["GRADE"].value_counts() print myhist myhist.plot(kind='bar') ``` ##converting category to numeric data ``` grade_cn={'':0, 'U#':1, 'U':2, 'S#':3, 'S':4, 'W':5, 'F':6, 'D':7, 'D+':8, 'C':9, 'C+':10, 'B':11, 'B+':12, 'A':13} grade_list=['', 'U#', 'U', 'S#', 'S', 'W', 'F', 'D', 'D+', 'C', 'C+', 'B', 'B+', 'A'] [grade_cn[i] for i in grade_list] df2=df.copy() df2 = df2.fillna(0) df2 = df2.replace(grade_list, [grade_cn[i] for i in grade_list] ) myhist=df["GRADE"].value_counts() print myhist myhist2=df2["GRADE"].value_counts() print myhist2 df2["TERM"]=10*df2["ACADYEAR"]+df2["SEMESTER"] df3=df2.copy() df3.drop(["ACADYEAR","SEMESTER"],axis=1, inplace=True) print df3.head() print df3.tail() ``` We need to create a new dataframe that has pattern, where pattern is a set of grades from studied subjects. ``` #df4=df3[0:20].copy() df4=df3 #create a new data frame df5=df4.copy() for c in df5.COURSEID.value_counts().index: df5[c]=pd.Series(np.zeros(df5.shape[0],dtype=int),index=df5.index) #reindex the columns cname=df5.columns[0:4].values.tolist() + sorted(df5.columns[4:].values) df5=df5.reindex_axis(cname, axis=1) df5.COURSEID.value_counts().plot(kind='bar') %%timeit -n1 for i, ri in df5.iterrows(): dfx=df4[ (df4["STUDENTID"] == ri["STUDENTID"]) & (df4["TERM"] < ri["TERM"]) ] if(dfx.shape[0]): for j,rj in dfx.iterrows(): df5.loc[i,rj["COURSEID"]]=rj["GRADE"] df5.to_pickle("df5.pkl") #df = pd.read_pickle("df5.pkl") df6 = pd.read_pickle("df5.pkl") df6 import pickle with open('grade_cn.pkl', 'wb') as pickleFile: pickle.dump(grade_cn, pickleFile, pickle.HIGHEST_PROTOCOL) import pickle with open('grade_cn.pkl', 'rb') as pickleFile: _grade_cn = pickle.load(pickleFile) print grade_cn print _grade_cn df6.to_csv("df6.csv") abc=df6["COURSEID"].value_counts() len(abc) m20=abc[abc[:]>=20] print m20 len(m20) mm=abc[abc[:]<20] len(mm) df6[df6["COURSEID"].isin(mm.index)].shape[0] df6[df6["COURSEID"].isin(m20.index)].shape[0] 278+27994 df6.shape[0] 
df7=df6[df6["COURSEID"].isin(m20.index)] df7.to_csv('df7.csv') for m in m20.index: dfx=df6[df6["COURSEID"].isin([m])] dfx.to_csv("df_%s.csv"%m) df7.COURSEID.value_counts().plot(kind='bar') ```
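The `iterrows` double loop used to build the prior-grade pattern rescans the whole frame for every record, which is why it needs `%%timeit -n1`. The same "grades from earlier terms" feature can be built with a pivot plus a per-student shifted cumulative sum. A hedged Python 3 sketch on invented toy records (column names mirror the notebook's; unlike the loop, this aggregates one row per student and term rather than per enrolment record):

```python
import pandas as pd

# toy enrolment records (invented for illustration)
df = pd.DataFrame({
    "STUDENTID": [1, 1, 1, 2, 2],
    "TERM":      [571, 571, 572, 571, 572],
    "COURSEID":  ["CS101", "MA101", "CS201", "CS101", "CS201"],
    "GRADE":     [13, 11, 9, 7, 8],
})

# wide format: one row per (student, term), one column per course
wide = df.pivot_table(index=["STUDENTID", "TERM"], columns="COURSEID",
                      values="GRADE", fill_value=0)

# shift by one term within each student, then accumulate:
# each row now holds the grades of all courses taken in earlier terms
prior = (wide.groupby(level="STUDENTID").shift(1)
             .fillna(0)
             .groupby(level="STUDENTID").cumsum())
print(prior)
```

The key step is `shift(1)` inside the student group, which excludes the current term's own grades before the running sum, matching the strict `TERM < ri["TERM"]` condition in the loop above.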
## 8.5 Optimization of Basic Blocks ### 8.5.1 > Construct the DAG for the basic block > ``` d = b * c e = a + b b = b * c a = e - d ``` ``` +--+--+ | - | a +-+++-+ | | +---+ +---+ | | +--v--+ +--v--+ e | + | | * | d,b +-+++-+ +-+-+-+ | | | | +---+ +--+ +--+ +---+ | | | | v v v v a0 b0 c ``` ### 8.5.2 > Simplify the three-address code of Exercise 8.5.1, assuming > a) Only $a$ is live on exit from the block. ``` d = b * c e = a + b a = e - d ``` > b) $a$, $b$, and $c$ are live on exit from the block. ``` e = a + b b = b * c a = e - b ``` ### 8.5.3 > Construct the DAG for the code in block $B_6$ of Fig. 8.9. Do not forget to include the comparison $i \le 10$. ``` +-----+ | []= | a[t0] ++-+-++ | | | +--------+ | +-------+ | | | v +--v--+ v +-----+ a t6 | * | 1.0 | <= | +-+-+-+ +-+-+-+ | | | | +------+ +--+ +---+ +---+ | | | | v +-----+ +-----+ v 88 t5 | - | i | + | 10 +-+++-+ +-++-++ | | | | +-----------------+ | | | | | +-----+ +----+-------+ v v i0 1 ``` ### 8.5.4 > Construct the DAG for the code in block $B_3$ of Fig. 8.9. ``` +-----+ | []= | a[t4] ++-+-++ | | | +--------+ | +--------+ | | | v +--v--+ v a0 t4 | - | 0.0 +-+-+-+ | | +---+ +---+ | | +--v--+ v +-----+ t3 | * | 88 | <= | +-+-+-+ +---+-+ | | | | +-----+ +---+ +---+ +---+ | | | | v +--v--+ +--v--+ v 8 t2 | + | j | + | 10 +-+-+-+ +-+-+-+ | | | | +---+ +--------+ +---+ | || | +--v--+ vv v t1 | * | j0 1 +-+-+-+ | | +-----+ +---+ v v 10 i ``` ### 8.5.5 > Extend Algorithm 8.7 to process three-statements of the form > a) `a[i] = b` > b) `a = b[i]` > c) `a = *b` > d) `*a = b` 1. ... 2. set $a$ to "not live" and "no next use". 3. set $b$ (and $i$) to "live" and the next uses of $b$ (and $i$) to statement ### 8.5.6 > Construct the DAG for the basic block > ``` a[i] = b *p = c d = a[j] e = *p *p = a[i] ``` > on the assumption that > a) `p` can point anywhere. Calculate `a[i]` twice. > b) `p` can point only to `b` or `d`. 
``` +-----+ +-----+ | *= | *p e | =* | +--+--+ +--+--+ | | | | | | +--v--+ +-----+ +--v--+ a[i] | []= | d | []= | *p0 | *= | ++-+-++ ++-+-++ +--+--+ | | | | | | | | | | | | | | | | | | | | | | | | | | | | +------------------------+ | | | | | | | | | +------------------------------+ | | | | | | | | | | | | | +------------------------+ | | | | | | | | | <--+ +------+ +----+ | | v v v v v v d a i j b c ``` ### 8.5.7 > Revise the DAG-construction algorithm to take advantage of such situations, and apply your algorithm to the code of Exercise 8.5.6. ``` a[i] = b d = a[j] e = c *p = b ``` ### 8.5.8 > Suppose a basic block is formed from the C assignment statements > ``` x = a + b + c + d + e + f; y = a + c + e; ``` > a) Give the three-address statements for this block. ``` t1 = a + b t2 = t1 + c t3 = t2 + d t4 = t3 + e t5 = t4 + f x = t5 t6 = a + c t7 = t6 + e y = t7 ``` > b) Use the associative and commutative laws to modify the block to use the fewest possible number of instructions, assuming both `x` and `y` are live on exit from the block. ``` t1 = a + c t2 = t1 + e y = t2 t3 = t2 + b t4 = t3 + d t5 = t4 + f x = t5 ```
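The hand-drawn DAGs in these exercises can be produced mechanically with local value numbering: each `(op, left, right)` triple is looked up in a table, so a repeated computation such as `b * c` reuses the existing node instead of creating a new one. A minimal Python sketch of that core idea (an illustration only, not the book's full Algorithm 8.7 — for example, it does not remove stale labels or canonicalize commutative operands):

```python
def build_dag(block):
    """block: list of (dst, op, arg1, arg2) three-address statements."""
    nodes = {}     # (op, id(left), id(right)) -> interior node
    var2node = {}  # variable -> node currently holding its value

    def node_for(name):
        # a not-yet-assigned name gets a fresh leaf for its initial value
        return var2node.setdefault(name, {"op": name, "labels": [name]})

    for dst, op, a, b in block:
        left, right = node_for(a), node_for(b)
        key = (op, id(left), id(right))
        # common-subexpression check: reuse an existing node if possible
        n = nodes.setdefault(key, {"op": op, "left": left,
                                   "right": right, "labels": []})
        n["labels"].append(dst)
        var2node[dst] = n
    return var2node

# Exercise 8.5.1 block: d = b*c; e = a+b; b = b*c; a = e-d
dag = build_dag([("d", "*", "b", "c"),
                 ("e", "+", "a", "b"),
                 ("b", "*", "b", "c"),
                 ("a", "-", "e", "d")])

# b*c is built once, so d and b end up labeling the same node
assert dag["d"] is dag["b"]
```

Running it on the 8.5.1 block reproduces the drawn DAG: the `*` node carries the labels `d` and `b`, exactly as in the `d,b` node of the diagram.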
*Accompanying code examples of the book "Introduction to Artificial Neural Networks and Deep Learning: A Practical Guide with Applications in Python" by [Sebastian Raschka](https://sebastianraschka.com). All code examples are released under the [MIT license](https://github.com/rasbt/deep-learning-book/blob/master/LICENSE). If you find this content useful, please consider supporting the work by buying a [copy of the book](https://leanpub.com/ann-and-deeplearning).* Other code examples and content are available on [GitHub](https://github.com/rasbt/deep-learning-book). The PDF and ebook versions of the book are available through [Leanpub](https://leanpub.com/ann-and-deeplearning). ``` %load_ext watermark %watermark -a 'Sebastian Raschka' -v -p torch ``` # Model Zoo -- Using PyTorch Dataset Loading Utilities for Custom Datasets (MNIST) This notebook provides an example for how to load an image dataset, stored as individual PNG files, using PyTorch's data loading utilities. For a more in-depth discussion, please see the official - [Data Loading and Processing Tutorial](http://pytorch.org/tutorials/beginner/data_loading_tutorial.html) - [torch.utils.data](http://pytorch.org/docs/master/data.html) API documentation In this example, we are using the cropped version of the **Street View House Numbers (SVHN) Dataset**, which is available at http://ufldl.stanford.edu/housenumbers/. To execute the following examples, you need to download the 2 ".mat" files - [train_32x32.mat](http://ufldl.stanford.edu/housenumbers/train_32x32.mat) (ca. 182 Mb, 73,257 images) - [test_32x32.mat](http://ufldl.stanford.edu/housenumbers/test_32x32.mat) (ca. 
65 Mb, 26,032 images)

## Imports

```
import pandas as pd
import numpy as np
import os

import torch
from torch.utils.data import Dataset
from torch.utils.data import DataLoader
from torchvision import transforms

import matplotlib.pyplot as plt
from PIL import Image
import scipy.io as sio
import imageio
```

## Dataset

The following function will convert the images from ".mat" into individual ".png" files. In addition, we will create CSV files containing the image paths and associated class labels.

```
def make_pngs(main_dir, mat_file, label):

    if not os.path.exists(main_dir):
        os.mkdir(main_dir)

    sub_dir = os.path.join(main_dir, label)
    if not os.path.exists(sub_dir):
        os.mkdir(sub_dir)

    data = sio.loadmat(mat_file)

    X = np.transpose(data['X'], (3, 0, 1, 2))
    y = data['y'].flatten()

    with open(os.path.join(main_dir, '%s_labels.csv' % label), 'w') as out_f:
        for i, img in enumerate(X):
            file_path = os.path.join(sub_dir, str(i) + '.png')
            imageio.imwrite(os.path.join(file_path), img)
            out_f.write("%d.png,%d\n" % (i, y[i]))

make_pngs(main_dir='svhn_cropped',
          mat_file='train_32x32.mat',
          label='train')

make_pngs(main_dir='svhn_cropped',
          mat_file='test_32x32.mat',
          label='test')

df = pd.read_csv('svhn_cropped/train_labels.csv', header=None, index_col=0)
df.head()

df = pd.read_csv('svhn_cropped/test_labels.csv', header=None, index_col=0)
df.head()
```

## Implementing a Custom Dataset Class

Now, we implement a custom `Dataset` for reading the images. The `__getitem__` method will

1. read a single image from disk based on an `index` (more on batching later)
2. perform a custom image transformation (if a `transform` argument is provided in the `__init__` constructor)
3. return a single image and its corresponding label

```
class SVHNDataset(Dataset):
    """Custom Dataset for loading cropped SVHN images"""

    def __init__(self, csv_path, img_dir, transform=None):

        df = pd.read_csv(csv_path, index_col=0, header=None)
        self.img_dir = img_dir
        self.csv_path = csv_path
        self.img_names = df.index.values
        self.y = df[1].values
        self.transform = transform

    def __getitem__(self, index):
        img = Image.open(os.path.join(self.img_dir,
                                      self.img_names[index]))

        if self.transform is not None:
            img = self.transform(img)

        label = self.y[index]
        return img, label

    def __len__(self):
        return self.y.shape[0]
```

Now that we have created our custom Dataset class, let us add some custom transformations via the `transforms` utilities from `torchvision`; we

1. normalize the images (here: dividing by 255)
2. convert the image arrays into PyTorch tensors

Then, we initialize a Dataset instance for the training images using the 'svhn_cropped/train_labels.csv' label file (we omit the test set, but the same concepts apply). Finally, we initialize a `DataLoader` that allows us to read from the dataset.

```
# Note that transforms.ToTensor()
# already divides pixels by 255.
internally
custom_transform = transforms.Compose([#transforms.Grayscale(),
                                       #transforms.Lambda(lambda x: x/255.),
                                       transforms.ToTensor()])

train_dataset = SVHNDataset(csv_path='svhn_cropped/train_labels.csv',
                            img_dir='svhn_cropped/train',
                            transform=custom_transform)

test_dataset = SVHNDataset(csv_path='svhn_cropped/test_labels.csv',
                           img_dir='svhn_cropped/test',
                           transform=custom_transform)

BATCH_SIZE=128

train_loader = DataLoader(dataset=train_dataset,
                          batch_size=BATCH_SIZE,
                          shuffle=True,
                          num_workers=4)

test_loader = DataLoader(dataset=test_dataset,
                         batch_size=BATCH_SIZE,
                         shuffle=False,
                         num_workers=4)
```

That's it, now we can iterate over an epoch using the train_loader as an iterator and use the features and labels from the training dataset for model training:

## Iterating Through the Custom Dataset

```
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
torch.manual_seed(0)

num_epochs = 2
for epoch in range(num_epochs):

    for batch_idx, (x, y) in enumerate(train_loader):

        print('Epoch:', epoch+1, end='')
        print(' | Batch index:', batch_idx, end='')
        print(' | Batch size:', y.size()[0])

        x = x.to(device)
        y = y.to(device)
        break
```

Just to make sure that the batches are being loaded correctly, let's print out the dimensions of the last batch:

```
x.shape
```

As we can see, each batch consists of 128 images, just as specified. However, one thing to keep in mind is that PyTorch uses a different image layout (which is more efficient when working with CUDA); here, the image axes are "num_images x channels x height x width" (NCHW) instead of "num_images x height x width x channels" (NHWC):

To visually check that the images coming out of the data loader are intact, let's swap the axes to NHWC and convert an image from a Torch Tensor to a NumPy array so that we can visualize the image via `imshow`:

```
one_image = x[99].permute(1, 2, 0)
one_image.shape

# note that imshow also works fine with scaled
# images in [0, 1] range.
plt.imshow(one_image.to(torch.device('cpu'))); %watermark -iv ```
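The NCHW-to-NHWC axis swap done above with `permute` has a direct NumPy analogue in `transpose`; a small sketch with a dummy batch (the shapes are illustrative, matching the 128-image batches used above):

```python
import numpy as np

batch_nchw = np.zeros((128, 3, 32, 32))        # num_images x channels x height x width
batch_nhwc = batch_nchw.transpose(0, 2, 3, 1)  # num_images x height x width x channels
print(batch_nhwc.shape)                        # (128, 32, 32, 3)

one_image = batch_nchw[99].transpose(1, 2, 0)  # single image: CHW -> HWC
print(one_image.shape)                         # (32, 32, 3)
```

Like `permute`, `transpose` returns a view with reordered strides rather than copying the data.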
# Using `matplotlib` to display inline images

In this notebook we will explore using `matplotlib` to display images in our notebooks, and work towards developing a reusable function to display 2D, 3D, color, and label overlays for SimpleITK images.

We will also look at the subtleties of working with image filters that require the input images to be overlapping.

```
import matplotlib.pyplot as plt
%matplotlib inline

import SimpleITK as sitk

# Download data to work on
%run update_path_to_download_script
from downloaddata import fetch_data as fdata
```

SimpleITK has a built-in `Show` method which saves the image to disk and launches a user-configurable program (defaults to ImageJ) to display the image.

```
img1 = sitk.ReadImage(fdata("cthead1.png"))
sitk.Show(img1, title="cthead1")

img2 = sitk.ReadImage(fdata("VM1111Shrink-RGB.png"))
sitk.Show(img2, title="Visible Human Head")

nda = sitk.GetArrayViewFromImage(img1)
plt.imshow(nda)

nda = sitk.GetArrayViewFromImage(img2)
ax = plt.imshow(nda)

def myshow(img):
    nda = sitk.GetArrayViewFromImage(img)
    plt.imshow(nda)

myshow(img2)

myshow(sitk.Expand(img2, [10]*5))
```

This image does not appear bigger. There are numerous improvements that we can make:

- support 3d images
- include a title
- use physical pixel size for axis labels
- show the image as gray values

```
def myshow(img, title=None, margin=0.05, dpi=80):
    nda = sitk.GetArrayViewFromImage(img)
    spacing = img.GetSpacing()

    if nda.ndim == 3:
        # fastest dim, either component or x
        c = nda.shape[-1]

        # if the number of components is 3 or 4, consider it an RGB image
        if not c in (3,4):
            nda = nda[nda.shape[0]//2,:,:]

    elif nda.ndim == 4:
        c = nda.shape[-1]

        if not c in (3,4):
            raise RuntimeError("Unable to show 3D-vector Image")

        # take a z-slice
        nda = nda[nda.shape[0]//2,:,:,:]

    ysize = nda.shape[0]
    xsize = nda.shape[1]

    # Make a figure big enough to accommodate an axis of xpixels by ypixels
    # as well as the ticklabels, etc...
    figsize = (1 + margin) * ysize / dpi, (1 + margin) * xsize / dpi

    fig = plt.figure(figsize=figsize, dpi=dpi)
    # Make the axis the right size...
    ax = fig.add_axes([margin, margin, 1 - 2*margin, 1 - 2*margin])

    extent = (0, xsize*spacing[1], ysize*spacing[0], 0)

    t = ax.imshow(nda, extent=extent, interpolation=None)

    if nda.ndim == 2:
        t.set_cmap("gray")

    if(title):
        plt.title(title)

myshow(sitk.Expand(img2, [2,2]), title="Big Visible Human Head")
```

## Tips and Tricks for Visualizing Segmentations

We start by loading a segmented image. As the segmentation is just an image with integral data, we can display the labels as we would any other image.

```
img1_seg = sitk.ReadImage(fdata("2th_cthead1.png"))
myshow(img1_seg, "Label Image as Grayscale")
```

We can also map the scalar label image to a color image as shown below.

```
myshow(sitk.LabelToRGB(img1_seg), title="Label Image as RGB")
```

Most filters which take multiple images as arguments require that the images occupy the same physical space. That is, the pixel you are operating on must refer to the same location. Luckily for us, our image and labels do occupy the same physical space, allowing us to overlay the segmentation onto the original image.

```
myshow(sitk.LabelOverlay(img1, img1_seg), title="Label Overlaid")
```

We can also overlay the labels as contours.

```
myshow(sitk.LabelOverlay(img1, sitk.LabelContour(img1_seg), 1.0))
```

## Tips and Tricks for 3D Image Visualization

Now let's move on to visualizing real MRI images with segmentations. The Surgical Planning Laboratory at Brigham and Women's Hospital provides a wonderful Multi-modality MRI-based Atlas of the Brain that we can use. Please note, what is done here is for convenience and is not the common way images are displayed for radiological work.
``` img_T1 = sitk.ReadImage(fdata("nac-hncma-atlas2013-Slicer4Version/Data/A1_grayT1.nrrd")) img_T2 = sitk.ReadImage(fdata("nac-hncma-atlas2013-Slicer4Version/Data/A1_grayT2.nrrd")) img_labels = sitk.ReadImage(fdata("nac-hncma-atlas2013-Slicer4Version/Data/hncma-atlas.nrrd")) myshow(img_T1) myshow(img_T2) myshow(sitk.LabelToRGB(img_labels)) size = img_T1.GetSize() myshow(img_T1[:,size[1]//2,:]) slices =[img_T1[size[0]//2,:,:], img_T1[:,size[1]//2,:], img_T1[:,:,size[2]//2]] myshow(sitk.Tile(slices, [3,1]), dpi=20) nslices = 5 slices = [ img_T1[:,:,s] for s in range(0, size[2], size[0]//(nslices+1))] myshow(sitk.Tile(slices, [1,0])) ``` Let's create a version of the show methods which allows the selection of slices to be displayed. ``` def myshow3d(img, xslices=[], yslices=[], zslices=[], title=None, margin=0.05, dpi=80): size = img.GetSize() img_xslices = [img[s,:,:] for s in xslices] img_yslices = [img[:,s,:] for s in yslices] img_zslices = [img[:,:,s] for s in zslices] maxlen = max(len(img_xslices), len(img_yslices), len(img_zslices)) img_null = sitk.Image([0,0], img.GetPixelID(), img.GetNumberOfComponentsPerPixel()) img_slices = [] d = 0 if len(img_xslices): img_slices += img_xslices + [img_null]*(maxlen-len(img_xslices)) d += 1 if len(img_yslices): img_slices += img_yslices + [img_null]*(maxlen-len(img_yslices)) d += 1 if len(img_zslices): img_slices += img_zslices + [img_null]*(maxlen-len(img_zslices)) d +=1 if maxlen != 0: if img.GetNumberOfComponentsPerPixel() == 1: img = sitk.Tile(img_slices, [maxlen,d]) #TO DO check in code to get Tile Filter working with vector images else: img_comps = [] for i in range(0,img.GetNumberOfComponentsPerPixel()): img_slices_c = [sitk.VectorIndexSelectionCast(s, i) for s in img_slices] img_comps.append(sitk.Tile(img_slices_c, [maxlen,d])) img = sitk.Compose(img_comps) myshow(img, title, margin, dpi) myshow3d(img_T1,yslices=range(50,size[1]-50,20), zslices=range(50,size[2]-50,20), dpi=30) 
myshow3d(img_T2,yslices=range(50,size[1]-50,30), zslices=range(50,size[2]-50,20), dpi=30) myshow3d(sitk.LabelToRGB(img_labels),yslices=range(50,size[1]-50,20), zslices=range(50,size[2]-50,20), dpi=30) ``` We next visualize the T1 image with an overlay of the labels. ``` # Why doesn't this work? The images do overlap in physical space. myshow3d(sitk.LabelOverlay(img_T1,img_labels),yslices=range(50,size[1]-50,20), zslices=range(50,size[2]-50,20), dpi=30) ``` Two ways to solve our problem: (1) resample the labels onto the image grid (2) resample the image onto the label grid. The difference between the two from a computation standpoint depends on the grid sizes and on the interpolator used to estimate values at non-grid locations. Note interpolating a label image with an interpolator that can generate non-label values is problematic as you may end up with an image that has more classes/labels than your original. This is why we only use the nearest neighbor interpolator when working with label images. ``` # Option 1: Resample the label image using the identity transformation resampled_img_labels = sitk.Resample(img_labels, img_T1, sitk.Transform(), sitk.sitkNearestNeighbor, 0.0, img_labels.GetPixelID()) # Overlay onto the T1 image, requires us to rescale the intensity of the T1 image to [0,255] and cast it so that it can # be combined with the color overlay (we use an alpha blending of 0.5). myshow3d(sitk.LabelOverlay(sitk.Cast(sitk.RescaleIntensity(img_T1), sitk.sitkUInt8),resampled_img_labels, 0.5), yslices=range(50,size[1]-50,20), zslices=range(50,size[2]-50,20), dpi=30) # Option 2: Resample the T1 image using the identity transformation resampled_T1 = sitk.Resample(img_T1, img_labels, sitk.Transform(), sitk.sitkLinear, 0.0, img_T1.GetPixelID()) # Overlay onto the T1 image, requires us to rescale the intensity of the T1 image to [0,255] and cast it so that it can # be combined with the color overlay (we use an alpha blending of 0.5). 
myshow3d(sitk.LabelOverlay(sitk.Cast(sitk.RescaleIntensity(resampled_T1), sitk.sitkUInt8), img_labels, 0.5),
         yslices=range(50,size[1]-50,20), zslices=range(50,size[2]-50,20), dpi=30)
```

Why are the two displays above different? (Hint: in the calls to the "myshow3d" function the indexes of the y and z slices are the same.)

### There and back again

In some cases you may want to work with the intensity values or labels outside of SimpleITK, for example when you implement an algorithm in Python and don't care about the physical spacing of things (you are actually assuming the volume is isotropic). How do you get back to the physical space?

```
def my_algorithm(image_as_numpy_array):
    # res is the image result of your algorithm, has the same grid size as the original image
    res = image_as_numpy_array
    return res

# There (run your algorithm), and back again (convert the result into a SimpleITK image)
res_image = sitk.GetImageFromArray(my_algorithm(sitk.GetArrayFromImage(img_T1)))

# Now let's add the original image to the processed one. Why doesn't this work?
res_image + img_T1
```

When we converted the SimpleITK image to a numpy array we lost the spatial information (origin, spacing, direction cosine matrix). Converting the numpy array back to a SimpleITK image simply sets these components to reasonable defaults. We can easily restore the original values to deal with this issue.

```
res_image.CopyInformation(img_T1)
res_image + img_T1
```

The ``myshow`` and ``myshow3d`` functions are really useful. They have been copied into a "myshow.py" file so that they can be imported into other notebooks.
## Logistic Regression ``` import numpy as np import pandas as pd from sklearn.model_selection import train_test_split,KFold from sklearn.utils import shuffle from sklearn.metrics import confusion_matrix,accuracy_score,precision_score,\ recall_score,roc_curve,auc import expectation_reflection as ER from sklearn.linear_model import LogisticRegression from sklearn.linear_model import SGDClassifier from sklearn.model_selection import GridSearchCV import matplotlib.pyplot as plt import seaborn as sns from sklearn.preprocessing import MinMaxScaler from function import split_train_test,make_data_balance np.random.seed(1) ``` First of all, the processed data are imported. ``` #data_list = ['1paradox','2peptide','3stigma'] #data_list = np.loadtxt('data_list.txt',dtype='str') data_list = np.loadtxt('data_list_30sets.txt',dtype='str') #data_list = ['9coag'] print(data_list) def read_data(data_id): data_name = data_list[data_id] print('data_name:',data_name) Xy = np.loadtxt('../classification_data/%s/data_processed_knn_sqrt.dat'%data_name) X = Xy[:,:-1] y = Xy[:,-1] #print(np.unique(y,return_counts=True)) X,y = make_data_balance(X,y) print(np.unique(y,return_counts=True)) X, y = shuffle(X, y, random_state=1) X_train,X_test,y_train,y_test = train_test_split(X,y,test_size=0.5,random_state = 1) sc = MinMaxScaler() X_train = sc.fit_transform(X_train) X_test = sc.transform(X_test) return X_train,X_test,y_train,y_test def measure_performance(X_train,X_test,y_train,y_test): #model = LogisticRegression(max_iter=100) model = SGDClassifier(loss='log',max_iter=1000,tol=0.001) # 'log' for logistic regression, 'hinge' for SVM # regularization penalty space #penalty = ['l1','l2'] penalty = ['elasticnet'] # solver #solver=['saga'] #solver=['liblinear'] # regularization hyperparameter space #C = np.logspace(0, 4, 10) #C = [0.001,0.1,1.0,10.0,100.0] alpha = [0.001,0.01,0.1,1.0,10.,100.] 
# l1_ratio #l1_ratio = [0.1,0.5,0.9] l1_ratio = [0.,0.2,0.4,0.6,0.8,1.0] # Create hyperparameter options #hyperparameters = dict(penalty=penalty,solver=solver,C=C,l1_ratio=l1_ratio) #hyper_parameters = dict(penalty=penalty,solver=solver,C=C) hyper_parameters = dict(penalty=penalty,alpha=alpha,l1_ratio=l1_ratio) # Create grid search using cross validation clf = GridSearchCV(model, hyper_parameters, cv=4, iid='deprecated') # Fit grid search best_model = clf.fit(X_train, y_train) # View best hyperparameters #print('Best Penalty:', best_model.best_estimator_.get_params()['penalty']) #print('Best C:', best_model.best_estimator_.get_params()['C']) #print('Best alpha:', best_model.best_estimator_.get_params()['alpha']) #print('Best l1_ratio:', best_model.best_estimator_.get_params()['l1_ratio']) # best hyper parameters print('best_hyper_parameters:',best_model.best_params_) # performance: y_test_pred = best_model.best_estimator_.predict(X_test) acc = accuracy_score(y_test,y_test_pred) #print('Accuracy:', acc) p_test_pred = best_model.best_estimator_.predict_proba(X_test) # prob of [0,1] p_test_pred = p_test_pred[:,1] # prob of 1 fp,tp,thresholds = roc_curve(y_test, p_test_pred, drop_intermediate=False) roc_auc = auc(fp,tp) #print('AUC:', roc_auc) precision = precision_score(y_test,y_test_pred) #print('Precision:',precision) recall = recall_score(y_test,y_test_pred) #print('Recall:',recall) f1_score = 2*precision*recall/(precision+recall) return acc,roc_auc,precision,recall,f1_score n_data = len(data_list) roc_auc = np.zeros(n_data) ; acc = np.zeros(n_data) precision = np.zeros(n_data) ; recall = np.zeros(n_data) f1_score = np.zeros(n_data) #data_id = 0 for data_id in range(n_data): X_train,X_test,y_train,y_test = read_data(data_id) acc[data_id],roc_auc[data_id],precision[data_id],recall[data_id],f1_score[data_id] =\ measure_performance(X_train,X_test,y_train,y_test) print(data_id,acc[data_id],roc_auc[data_id],precision[data_id],recall[data_id],f1_score[data_id]) 
print('acc_mean:',acc.mean()) print('roc_mean:',roc_auc.mean()) print('precision:',precision.mean()) print('recall:',recall.mean()) print('f1_score:',f1_score.mean()) np.savetxt('result_knn_sqrt_LR.dat',(roc_auc,acc,precision,recall,f1_score),fmt='%f') ```
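The `np.savetxt` call above stacks the tuple of 1-D metric arrays as rows, one metric per row. A minimal, self-contained sketch of that layout (the arrays and filename below are illustrative, not the notebook's actual results):

```python
import numpy as np

# Hypothetical metric arrays for three datasets (illustration only).
roc_auc = np.array([0.90, 0.85, 0.88])
acc = np.array([0.82, 0.80, 0.84])
precision = np.array([0.81, 0.78, 0.83])
recall = np.array([0.80, 0.79, 0.82])
f1_score = 2 * precision * recall / (precision + recall)

# Saving a tuple of 1-D arrays writes one metric per row.
np.savetxt('result_demo.dat', (roc_auc, acc, precision, recall, f1_score), fmt='%f')

# Reading the file back therefore yields a (5, n_data) array,
# with rows in the same order as the saved tuple.
results = np.loadtxt('result_demo.dat')
roc_auc_loaded = results[0]  # row 0 is roc_auc, row 1 is acc, etc.
print(results.shape)  # (5, 3)
```

This makes it easy to reload the scores of each metric without re-running the grid search.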
![Astrofisica Computacional](../logo.PNG)

---
## 01. Function Interpolation

Eduard Larrañaga (ealarranaga@unal.edu.co)

---

## Interpolation

### Summary

This notebook presents some of the techniques used to interpolate a function.

---

## Interpolation

Astrophysical data (experimental and synthetic) usually consist of a set of discrete values of the form $(x_j, f_j)$, representing the value of a function $f(x)$ for a finite set of arguments $\{ x_0, x_1, x_2, ..., x_{n} \}$. However, it is often necessary to know the value of the function at additional points (not belonging to the given set). **Interpolation** is the method that provides these values.

By **interpolation** we mean defining a function $g(x)$, using the known discrete information, such that $g(x_j) = f(x_j)$ and such that it approximates the value of the function $f$ at any point $x \in [x_{min}, x_{max}]$, where $x_{min} = \min \{ x_j \}$ and $x_{max} = \max \{ x_j \}$.

On the other hand, **extrapolation** would correspond to approximating the value of the function $f$ at a point $x \notin [x_{min}, x_{max}]$. However, that case will not be analyzed here.

---

## Simple Polynomial Interpolation

The simplest interpolation method is called **Polynomial Interpolation** and consists of finding a polynomial $p_n(x)$ of degree $n$ that passes through the $N = n+1$ points $x_j$, taking the values $p(x_j) = f(x_j)$, where $j=0,1,2,...,n$.
The polynomial is written in the general form

$p_n(x) = a_0 + a_1 x + a_2 x^2 + \cdots + a_n x^n$

where the $a_i$ are $n+1$ real constants determined by the conditions

$\left( \begin{array}{ccccc} 1&x_0^1&x_0^2&\cdots&x_0^n\\ \vdots&\vdots&\vdots&\vdots&\vdots\\ \vdots&\vdots&\vdots&\vdots&\vdots\\ 1&x_n^1&x_n^2&\cdots&x_n^n\\ \end{array} \right) \left(\begin{array}{c} a_0\\ \vdots\\ \vdots\\ a_n \end{array}\right) = \left(\begin{array}{c} f(x_0)\\ \vdots\\ \vdots\\ f(x_n) \end{array}\right)$

The solution of this system is easy to obtain in the cases of linear ($n=1$) and quadratic ($n=2$) interpolation, but it may be difficult to find for large values of $n$.

---

### Linear Interpolation

The linear interpolation ($n=1$) of a function $f(x)$ on an interval $[x_i,x_{i+1}]$ requires knowing only two points. Solving the resulting linear system gives the interpolating polynomial

\begin{equation}
p_1(x) = f(x_i) + \frac{f(x_{i+1}) - f(x_i)}{x_{i+1} - x_i} (x-x_i) + \mathcal{O}(\Delta x^2)
\end{equation}

where $\Delta x = x_{i+1} - x_i$.

Linear interpolation provides a polynomial with second-order accuracy that can be differentiated once, but this derivative is not continuous at the endpoints of the interpolation interval, $x_i$ and $x_{i+1}$.

#### Example.
Piecewise Linear Interpolation

Below, a set of data points is read from a .txt file and linearly interpolated between each pair of points (*piecewise interpolation*).

```
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline

# Reading the data
data = np.loadtxt('data_points.txt', comments='#', delimiter=',')
x = data[:,0]
f = data[:,1]

plt.figure()
plt.scatter(x,f)
plt.xlabel(r'$x$')
plt.ylabel(r'$f(x)$')
plt.show()

data.shape

def linearInterpolation(x1, x2, f1, f2, x):
    p1 = f1 + ((f2-f1)/(x2-x1))*(x-x1)
    return p1

N = len(x)

plt.figure(figsize=(7,5))
plt.scatter(x, f, color='black')

for i in range(N-1):
    x_interval = np.linspace(x[i],x[i+1],3)
    # Note that the number 3 in the line above indicates the number of
    # points interpolated in each interval
    # (including the extreme points of the interval)
    y_interval = linearInterpolation(x[i], x[i+1], f[i], f[i+1], x_interval)
    plt.plot(x_interval, y_interval,'r')

plt.title(r'Linear Piecewise Interpolation')
plt.xlabel(r'$x$')
plt.ylabel(r'$p_1(x)$')
plt.show()
```

---

### Quadratic Interpolation

Quadratic interpolation ($n=2$) requires information from three points. For example, one can take the three points $x_i$, $x_{i+1}$ and $x_{i+2}$ to interpolate the function $f(x)$ in the range $[x_{i},x_{i+1}]$. Solving the corresponding system of linear equations gives the polynomial

$p_2(x) = \frac{(x-x_{i+1})(x-x_{i+2})}{(x_i - x_{i+1})(x_i - x_{i+2})} f(x_i) + \frac{(x-x_{i})(x-x_{i+2})}{(x_{i+1} - x_{i})(x_{i+1} - x_{i+2})} f(x_{i+1}) + \frac{(x-x_i)(x-x_{i+1})}{(x_{i+2} - x_i)(x_{i+2} - x_{i+1})} f(x_{i+2}) + \mathcal{O}(\Delta x^3)$,

where $\Delta x = \max \{ x_{i+2}-x_{i+1},x_{i+1}-x_i \}$.

In this case, the interpolated polynomial can be differentiated twice but, although its first derivative is continuous, the second derivative is not continuous at the endpoints of the interval.

#### Example.
Piecewise Quadratic Interpolation

Below, a set of data points is read from a .txt file and interpolated quadratically on sub-intervals (*quadratic piecewise interpolation*).

```
import numpy as np
import matplotlib.pyplot as plt

# Reading the data
data = np.loadtxt('data_points.txt', comments='#', delimiter=',')
x = data[:,0]
f = data[:,1]

def quadraticInterpolation(x1, x2, x3, f1, f2, f3, x):
    p2 = (((x-x2)*(x-x3))/((x1-x2)*(x1-x3)))*f1 +\
         (((x-x1)*(x-x3))/((x2-x1)*(x2-x3)))*f2 +\
         (((x-x1)*(x-x2))/((x3-x1)*(x3-x2)))*f3
    return p2

N = len(x)

plt.figure(figsize=(7,5))
plt.scatter(x, f, color='black')

for i in range(N-2):
    x_interval = np.linspace(x[i],x[i+1],6) # 6 interpolated points in each interval
    y_interval = quadraticInterpolation(x[i], x[i+1], x[i+2], f[i], f[i+1], f[i+2], x_interval)
    plt.plot(x_interval, y_interval,'r')

plt.title(r'Quadratic Polynomial Piecewise Interpolation')
plt.xlabel(r'$x$')
plt.ylabel(r'$p_2(x)$')
plt.show()
```

**Note:** Because of the way the quadratic interpolation is carried out, the last interval is left without information. In this region one can extend the interpolation from the next-to-last interval, or interpolate a linear polynomial instead.

---

## Lagrange Interpolation

**Lagrange Interpolation** also seeks a polynomial of degree $n$ using $n+1$ points, but uses an alternative method to find the coefficients. To understand the idea, we rewrite the linear polynomial found before in the form

\begin{equation}
p_1(x) = \frac{x-x_{i+1}}{x_i - x_{i+1}} f(x_i) + \frac{x-x_i}{x_{i+1}-x_i} f(x_{i+1}) + \mathcal{O}(\Delta x^2),
\end{equation}

or equivalently,

\begin{equation}
p_1(x) = \sum_{j=i}^{i+1} f(x_j) L_{1j}(x) + \mathcal{O}(\Delta x^2)
\end{equation}

where we have introduced the *Lagrange coefficients*

\begin{equation}
L_{1j}(x) = \frac{x-x_k}{x_j-x_k}\bigg|_{k\ne j}.
\end{equation}

Note that these coefficients ensure that the polynomial passes through the known points, i.e.
$p_1(x_i) = f(x_i)$ and $p_1(x_{i+1}) = f(x_{i+1})$.

**Lagrange interpolation** generalizes these expressions to a polynomial of degree $n$ that passes through the $n+1$ known points,

\begin{equation}
p_n (x) = \sum_{j=0}^{n} f(x_j) L_{nj}(x) + \mathcal{O}(\Delta x^{n+1})\,,
\label{eq:LagrangeInterpolation}
\end{equation}

where the Lagrange coefficients generalize to

\begin{equation}
L_{nj}(x) = \prod_{k\ne j}^{n} \frac{x-x_k}{x_j - x_k}\,.
\end{equation}

Again, note that these coefficients ensure that the polynomial passes through the known points, $p(x_j) = f(x_j)$.

```
# %load lagrangeInterpolation
'''
Eduard Larrañaga
Computational Astrophysics
2020

Lagrange Interpolation Method
'''

import numpy as np

# Lagrange Coefficients
def L(x, xi, j):
    '''
    ------------------------------------------
    L(x, xi, j)
    ------------------------------------------
    Returns the Lagrange coefficient for the
    interpolation evaluated at points x

    Receives as arguments:
    x : array of points where the interpolated
        polynomial will be evaluated
    xi : array of N data points
    j : index of the coefficient to be calculated
    ------------------------------------------
    '''
    # Number of points
    N = len(xi)
    prod = 1
    for k in range(N):
        if (k != j):
            prod = prod * (x - xi[k])/(xi[j] - xi[k])
    return prod


# Interpolated Polynomial
def p(x, xi, fi):
    '''
    ------------------------------------------
    p(x, xi, fi)
    ------------------------------------------
    Returns the values of the Lagrange
    interpolated polynomial in a set of
    points defined by x

    x : array of points where the interpolated
        polynomial will be evaluated
    xi : array of N data points
    fi : values of the function to be interpolated
    ------------------------------------------
    '''
    # Number of points
    N = len(xi)
    summ = 0
    for j in range(N):
        summ = summ + fi[j]*L(x, xi, j)
    return summ


import numpy as np
import matplotlib.pyplot as plt
#import lagrangeInterpolation as lagi
import sys

# Reading the data
data = np.loadtxt('data_points.txt',
comments='#', delimiter=',')
x = data[:,0]
f = data[:,1]

N = len(x)

# Degree of the polynomial to be interpolated piecewise
n = 3

# Check if the number of points is enough to interpolate such a polynomial
if n>=N:
    print('\nThere are not enough points to interpolate this polynomial.')
    print(f'Using {N:.0f} points it is possible to interpolate polynomials up to order n={N-1:.0f}')
    sys.exit()

plt.figure(figsize=(7,5))
plt.title(f'Lagrange Polynomial Piecewise Interpolation n={n:.0f}')
plt.scatter(x, f, color='black')

# Piecewise Interpolation Loop
for i in range(N-n):
    xi = x[i:i+n+1]
    fi = f[i:i+n+1]
    x_interval = np.linspace(x[i],x[i+1],3*n)
    y_interval = p(x_interval,xi,fi)
    plt.plot(x_interval, y_interval,'r')

plt.xlabel(r'$x$')
plt.ylabel(r'$p_n(x)$')
plt.show()
```

Note that the last $n$ points are not interpolated. What can be done?

```
import numpy as np
import matplotlib.pyplot as plt
#import lagrangeInterpolation as lagi
import sys

# Reading the data
data = np.loadtxt('data_points.txt', comments='#', delimiter=',')
x = data[:,0]
f = data[:,1]

N = len(x)

# Degree of the polynomial to be interpolated piecewise
n = 6

# Check if the number of points is enough to interpolate such a polynomial
if n>=N:
    print('\nThere are not enough points to interpolate this polynomial.')
    print(f'Using {N:.0f} points it is possible to interpolate polynomials up to order n={N-1:.0f}')
    sys.exit()

plt.figure(figsize=(7,5))
plt.title(f'Lagrange Polynomial Piecewise Interpolation n={n:.0f}')
plt.scatter(x, f, color='black')

# Piecewise Interpolation Loop
for i in range(N-n):
    xi = x[i:i+n+1]
    fi = f[i:i+n+1]
    x_interval = np.linspace(x[i],x[i+1],3*n)
    y_interval = p(x_interval,xi,fi)
    plt.plot(x_interval, y_interval,'r')

# Piecewise Interpolation for the final N-n points,
# using a lower degree polynomial
while n>1:
    m = n-1
    for i in range(N-n,N-m):
        xi = x[i:i+m+1]
        fi = f[i:i+m+1]
        x_interval = np.linspace(x[i],x[i+1],3*m)
        y_interval = p(x_interval,xi,fi)
        plt.plot(x_interval, y_interval,'r')
    n = n-1

plt.xlabel(r'$x$')
plt.ylabel(r'$p_n(x)$')
plt.show()
```

### Runge's Phenomenon

Why interpolate piecewise on sub-intervals? When a large number of data points is available, it is possible to interpolate a polynomial of high degree. However, the behavior of the interpolated polynomial may not be the expected one (especially at the ends of the interpolation interval) due to the existence of uncontrolled oscillations. This behavior is called Runge's phenomenon.

For example, for a data set with $20$ points it is possible to interpolate a polynomial of order $n=19$:

```
import numpy as np
import matplotlib.pyplot as plt
import lagrangeInterpolation as lagi
import sys

# Reading the data
data = np.loadtxt('data_points.txt', comments='#', delimiter=',')
x = data[:,0]
f = data[:,1]

N = len(x)

# Higher degree polynomial to be interpolated
n = N-1

plt.figure(figsize=(7,5))
plt.title(f'Lagrange Polynomial Piecewise Interpolation n={n:.0f}')
plt.scatter(x, f, color='black')

# Interpolation of the higher degree polynomial
x_int = np.linspace(x[0],x[N-1],3*n)
y_int = lagi.p(x_int,x,f)
plt.plot(x_int, y_int,'r')

plt.xlabel(r'$x$')
plt.ylabel(r'$p_n(x)$')
plt.show()
```

However, it is clear that the behavior of the interpolated polynomial is not good at the ends of the considered interval. For this reason, it is highly advisable to use low-degree polynomial interpolation on sub-intervals.

---

## Piecewise Cubic Hermite Interpolation

Hermite interpolation is a particular case of polynomial interpolation that uses a set of points at which both the value of the function, $f(x_j)$, and its derivative, $f'(x_j)$, are known. By incorporating the first derivative, polynomials of high degree can be interpolated while controlling the undesired oscillations. Additionally, since the first derivative is known, fewer points are needed to perform the interpolation.
Within this type of interpolation, the most widely used is that of third-order polynomials. In this way, on an interval $[x_i , x_{i+1}]$, one needs to know (or evaluate) the values of $f(x_i)$, $f(x_{i+1})$, $f'(x_i)$ and $f'(x_{i+1})$ to obtain the interpolated cubic Hermite polynomial,

\begin{equation}
H_3(x) = f(x_i)\psi_0(z) + f(x_{i+1})\psi_0(1-z)+ f'(x_i)(x_{i+1} - x_{i})\psi_1(z) - f'(x_{i+1})(x_{i+1}-x_i)\psi_1 (1-z),
\end{equation}

where

\begin{equation}
z = \frac{x-x_i}{x_{i+1}-x_i}
\end{equation}

and

\begin{align}
\psi_0(z) =& 2z^3 - 3z^2 + 1 \\
\psi_1(z) =& z^3-2z^2+z\,\,.
\end{align}

Note that with this formulation it is possible to interpolate a third-order polynomial on an interval with only two points. Thus, when working with a set of many points, one can interpolate a cubic polynomial between each pair of data points, even in the last sub-interval!

```
# %load HermiteInterpolation
'''
Eduard Larrañaga
Computational Astrophysics
2020

Hermite Interpolation Method
'''

import numpy as np

# Hermite Coefficients
def psi0(z):
    '''
    ------------------------------------------
    psi0(z)
    ------------------------------------------
    Returns the Hermite coefficient Psi_0
    for the interpolation

    Receives as arguments: z
    ------------------------------------------
    '''
    psi_0 = 2*z**3 - 3*z**2 + 1
    return psi_0

def psi1(z):
    '''
    ------------------------------------------
    psi1(z)
    ------------------------------------------
    Returns the Hermite coefficient Psi_1
    for the interpolation

    Receives as arguments: z
    ------------------------------------------
    '''
    psi_1 = z**3 - 2*z**2 + z
    return psi_1


# Interpolated Polynomial
def H3(x, xi, fi, dfidx):
    '''
    ------------------------------------------
    H3(x, xi, fi, dfidx)
    ------------------------------------------
    Returns the values of the Cubic Hermite
    interpolated polynomial in a set of
    points defined by x

    x : array of points where the interpolated
        polynomial will be evaluated
    xi : array of 2 data points
    fi : array of values
         of the function at xi
    dfidx : array of values of the derivative
            of the function at xi
    ------------------------------------------
    '''
    # variable z in the interpolation (note: xi[1] - xi[0], not xi[1] - x[0])
    z = (x - xi[0])/(xi[1] - xi[0])
    h1 = psi0(z) * fi[0]
    h2 = psi0(1-z)*fi[1]
    h3 = psi1(z)*(xi[1] - xi[0])*dfidx[0]
    h4 = psi1(1-z)*(xi[1] - xi[0])*dfidx[1]
    H = h1 + h2 + h3 - h4
    return H


import numpy as np
import matplotlib.pyplot as plt
import HermiteInterpolation as heri

def Derivative(x, f):
    '''
    ------------------------------------------
    Derivative(x, f)
    ------------------------------------------
    This function returns the numerical
    derivative of a discretely-sampled function,
    using one-sided derivatives at the extreme
    points of the interval and a second-order
    accurate derivative at the middle points.
    The data points may be evenly or unevenly
    spaced.
    ------------------------------------------
    '''
    # Number of points
    N = len(x)
    dfdx = np.zeros([N, 2])
    dfdx[:,0] = x

    # Derivative at the extreme points
    dfdx[0,1] = (f[1] - f[0])/(x[1] - x[0])
    dfdx[N-1,1] = (f[N-1] - f[N-2])/(x[N-1] - x[N-2])

    # Derivative at the middle points
    for i in range(1,N-1):
        h1 = x[i] - x[i-1]
        h2 = x[i+1] - x[i]
        dfdx[i,1] = h1*f[i+1]/(h2*(h1+h2)) - (h1-h2)*f[i]/(h1*h2) -\
                    h2*f[i-1]/(h1*(h1+h2))
    return dfdx

# Loading the data
data = np.loadtxt('data_points.txt', comments='#', delimiter=',')
x = data[:,0]
f = data[:,1]

N = len(x)

# Calling the derivative function and choosing only the second column
dfdx = Derivative(x,f)[:,1]

plt.figure(figsize=(7,5))
plt.title('Cubic Hermite Polynomial Piecewise Interpolation')
plt.scatter(x, f, color='black')

# Piecewise Hermite Interpolation Loop
for i in range(N-1):
    xi = x[i:i+2]
    fi = f[i:i+2]
    dfidx = dfdx[i:i+2]
    x_interval = np.linspace(x[i],x[i+1],4)
    y_interval = heri.H3(x_interval, xi, fi, dfidx)
    plt.plot(x_interval, y_interval,'r')

plt.xlabel(r'$x$')
plt.ylabel(r'$H_3(x)$')
plt.show()
```
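As a quick sanity check (not part of the original notebook), the cubic Hermite formula reproduces any cubic exactly when exact endpoint derivatives are supplied. A minimal, self-contained sketch re-implementing the basis functions above:

```python
import numpy as np

def psi0(z):
    # Hermite basis function Psi_0
    return 2*z**3 - 3*z**2 + 1

def psi1(z):
    # Hermite basis function Psi_1
    return z**3 - 2*z**2 + z

def hermite3(x, xi, fi, dfidx):
    # Cubic Hermite interpolant on the interval [xi[0], xi[1]]
    z = (x - xi[0]) / (xi[1] - xi[0])
    dx = xi[1] - xi[0]
    return (fi[0]*psi0(z) + fi[1]*psi0(1 - z)
            + dfidx[0]*dx*psi1(z) - dfidx[1]*dx*psi1(1 - z))

# f(x) = x**3 with exact derivatives at the endpoints of [0, 1]
xi = np.array([0.0, 1.0])
fi = xi**3
dfi = 3*xi**2

x = np.linspace(0.0, 1.0, 11)
err = np.max(np.abs(hermite3(x, xi, fi, dfi) - x**3))
print(err)  # effectively zero: a cubic is reproduced exactly
```

This kind of check is a useful way to verify an interpolation routine before applying it to real data.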
# Introduction # Most of the techniques we've seen in this course have been for numerical features. The technique we'll look at in this lesson, *target encoding*, is instead meant for categorical features. It's a method of encoding categories as numbers, like one-hot or label encoding, with the difference that it also uses the *target* to create the encoding. This makes it what we call a **supervised** feature engineering technique. ``` import pandas as pd autos = pd.read_csv("../input/fe-course-data/autos.csv") ``` # Target Encoding # A **target encoding** is any kind of encoding that replaces a feature's categories with some number derived from the target. A simple and effective version is to apply a group aggregation from Lesson 3, like the mean. Using the *Automobiles* dataset, this computes the average price of each vehicle's make: ``` autos["make_encoded"] = autos.groupby("make")["price"].transform("mean") autos[["make", "price", "make_encoded"]].head(10) ``` This kind of target encoding is sometimes called a **mean encoding**. Applied to a binary target, it's also called **bin counting**. (Other names you might come across include: likelihood encoding, impact encoding, and leave-one-out encoding.) # Smoothing # An encoding like this presents a couple of problems, however. First are *unknown categories*. Target encodings create a special risk of overfitting, which means they need to be trained on an independent "encoding" split. When you join the encoding to future splits, Pandas will fill in missing values for any categories not present in the encoding split. These missing values you would have to impute somehow. Second are *rare categories*. When a category only occurs a few times in the dataset, any statistics calculated on its group are unlikely to be very accurate. In the *Automobiles* dataset, the `mercurcy` make only occurs once. 
The "mean" price we calculated is just the price of that one vehicle, which might not be very representative of any Mercuries we might see in the future. Target encoding rare categories can make overfitting more likely.

A solution to these problems is to add **smoothing**. The idea is to blend the *in-category* average with the *overall* average. Rare categories get less weight on their category average, while missing categories just get the overall average.

In pseudocode:
```
encoding = weight * in_category + (1 - weight) * overall
```
where `weight` is a value between 0 and 1 calculated from the category frequency.

An easy way to determine the value for `weight` is to compute an **m-estimate**:
```
weight = n / (n + m)
```
where `n` is the total number of times that category occurs in the data. The parameter `m` determines the "smoothing factor". Larger values of `m` put more weight on the overall estimate.

<figure style="padding: 1em;">
<img src="https://i.imgur.com/1uVtQEz.png" width=500 alt="">
<figcaption style="text-align: center; font-style: italic"><center>
</center></figcaption>
</figure>

In the *Automobiles* dataset there are three cars with the make `chevrolet`. If you chose `m=2.0`, then the `chevrolet` category would be encoded with 60% of the average Chevrolet price plus 40% of the overall average price.

```
chevrolet = 0.6 * 6000.00 + 0.4 * 13285.03
```

When choosing a value for `m`, consider how noisy you expect the categories to be. Does the price of a vehicle vary a great deal within each make? Would you need a lot of data to get good estimates? If so, it could be better to choose a larger value for `m`; if the average price for each make were relatively stable, a smaller value could be okay.
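The m-estimate above is easy to compute by hand with a groupby. A minimal sketch on a toy frame (the data and the choice `m=2.0` are illustrative, not the lesson's dataset):

```python
import pandas as pd

# Toy data: make "a" occurs three times, make "b" only once.
df = pd.DataFrame({
    "make": ["a", "a", "a", "b"],
    "price": [10.0, 20.0, 30.0, 100.0],
})

m = 2.0
overall = df["price"].mean()  # 40.0
stats = df.groupby("make")["price"].agg(["mean", "count"])

# weight = n / (n + m); blend the in-category mean with the overall mean
weight = stats["count"] / (stats["count"] + m)
smoothed = weight * stats["mean"] + (1 - weight) * overall

df["make_encoded"] = df["make"].map(smoothed)
print(smoothed["a"])  # 3/5 * 20 + 2/5 * 40 = 28.0
```

Note how the rare make "b" (n=1, weight=1/3) is pulled much closer to the overall average than the more common make "a".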
<blockquote style="margin-right:auto; margin-left:auto; background-color: #ebf9ff; padding: 1em; margin:24px;"> <strong>Use Cases for Target Encoding</strong><br> Target encoding is great for: <ul> <li><strong>High-cardinality features</strong>: A feature with a large number of categories can be troublesome to encode: a one-hot encoding would generate too many features and alternatives, like a label encoding, might not be appropriate for that feature. A target encoding derives numbers for the categories using the feature's most important property: its relationship with the target. <li><strong>Domain-motivated features</strong>: From prior experience, you might suspect that a categorical feature should be important even if it scored poorly with a feature metric. A target encoding can help reveal a feature's true informativeness. </ul> </blockquote> # Example - MovieLens1M # The [*MovieLens1M*](https://www.kaggle.com/grouplens/movielens-20m-dataset) dataset contains one-million movie ratings by users of the MovieLens website, with features describing each user and movie. This hidden cell sets everything up: ``` import matplotlib.pyplot as plt import numpy as np import pandas as pd import seaborn as sns import warnings plt.style.use("seaborn-whitegrid") plt.rc("figure", autolayout=True) plt.rc( "axes", labelweight="bold", labelsize="large", titleweight="bold", titlesize=14, titlepad=10, ) warnings.filterwarnings('ignore') df = pd.read_csv("../input/fe-course-data/movielens1m.csv") df = df.astype(np.uint8, errors='ignore') # reduce memory footprint print("Number of Unique Zipcodes: {}".format(df["Zipcode"].nunique())) ``` With over 3000 categories, the `Zipcode` feature makes a good candidate for target encoding, and the size of this dataset (over one-million rows) means we can spare some data to create the encoding. We'll start by creating a 25% split to train the target encoder. 
``` X = df.copy() y = X.pop('Rating') X_encode = X.sample(frac=0.25) y_encode = y[X_encode.index] X_pretrain = X.drop(X_encode.index) y_train = y[X_pretrain.index] ``` The `category_encoders` package in `scikit-learn-contrib` implements an m-estimate encoder, which we'll use to encode our `Zipcode` feature. ``` from category_encoders import MEstimateEncoder # Create the encoder instance. Choose m to control noise. encoder = MEstimateEncoder(cols=["Zipcode"], m=5.0) # Fit the encoder on the encoding split. encoder.fit(X_encode, y_encode) # Encode the Zipcode column to create the final training data X_train = encoder.transform(X_pretrain) ``` Let's compare the encoded values to the target to see how informative our encoding might be. ``` plt.figure(dpi=90) ax = sns.distplot(y, kde=False, norm_hist=True) ax = sns.kdeplot(X_train.Zipcode, color='r', ax=ax) ax.set_xlabel("Rating") ax.legend(labels=['Zipcode', 'Rating']); ``` The distribution of the encoded `Zipcode` feature roughly follows the distribution of the actual ratings, meaning that movie-watchers differed enough in their ratings from zipcode to zipcode that our target encoding was able to capture useful information. # Your Turn # [**Apply target encoding**](https://www.kaggle.com/kernels/fork/14393917) to features in *Ames* and investigate a surprising way that target encoding can lead to overfitting. --- *Have questions or comments? Visit the [Learn Discussion forum](https://www.kaggle.com/learn-forum/221677) to chat with other Learners.*
## Deploy an ONNX model to an IoT Edge device using ONNX Runtime and Azure Machine Learning

![End-to-end pipeline with ONNX Runtime](https://github.com/manashgoswami/byoc/raw/master/ONNXRuntime-AML.png)

```
!python -m pip install --upgrade pip
!pip install azureml-core azureml-contrib-iot azure-mgmt-containerregistry azure-cli
!az extension add --name azure-cli-iot-ext

import os
print(os.__file__)

# Check core SDK version number
import azureml.core as azcore

print("SDK version:", azcore.VERSION)
```

## 1. Setup the Azure Machine Learning Environment

### 1.1 AML Workspace: use an existing config

```
# Initialize Workspace
from azureml.core import Workspace

ws = Workspace.from_config()
```

### 1.2 AML Workspace: create a new workspace

```
# Initialize Workspace
from azureml.core import Workspace

### Change this cell from markdown to code and run it if you need to create a workspace
### Update the values for your workspace below
ws = Workspace.create(subscription_id="<subscription-id goes here>",
                      resource_group="<resource group goes here>",
                      name="<name of the AML workspace>",
                      location="<location>")

ws.write_config()
```

### 1.3 AML Workspace: initialize an existing workspace

Download the `config.json` file for your AML Workspace from the Azure portal.

```
# Initialize Workspace
from azureml.core import Workspace

## existing AML Workspace in config.json
ws = Workspace.from_config('config.json')
print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep = '\n')
```

## 2.
Setup the trained model to use in this example

### 2.1 Register the trained model in the workspace from the ONNX Model Zoo

```
import urllib.request

onnx_model_url = "https://onnxzoo.blob.core.windows.net/models/opset_8/tiny_yolov2/tiny_yolov2.tar.gz"
urllib.request.urlretrieve(onnx_model_url, filename="tiny_yolov2.tar.gz")

!tar xvzf tiny_yolov2.tar.gz

from azureml.core.model import Model

model = Model.register(workspace = ws,
                       model_path = "./tiny_yolov2/Model.onnx",
                       model_name = "Model.onnx",
                       tags = {"data": "Imagenet", "model": "object_detection", "type": "TinyYolo"},
                       description = "real-time object detection model from ONNX model zoo")
```

### 2.2 Load the model from your workspace model registry

For example, this could be the ONNX model exported from your training experiment.

```
from azureml.core.model import Model

model = Model(name='Model.onnx', workspace=ws)
```

## 3. Create the application container image

This container is the IoT Edge module that will be deployed on the UP<sup>2</sup> device.

1. The container uses a pre-built base image for ONNX Runtime.
2. It includes a `score.py` script, which must define a `run()` and an `init()` function. `init()` is the entrypoint that reads the camera frames from /device/video0. The `run()` function is a dummy module to satisfy AML SDK checks.
3. An `amlpackage_inference.py` script, used to process the input frame and run the inference session, and
4. the ONNX model and the label file used by ONNX Runtime.

```
%%writefile score.py
# Copyright (c) Microsoft. All rights reserved.
# Licensed under the MIT license. See LICENSE file in the project root for
# full license information.
import sys import time import io import csv # Imports for inferencing import onnxruntime as rt from amlpackage_inference import run_onnx import numpy as np import cv2 # Imports for communication w/IOT Hub from iothub_client import IoTHubModuleClient, IoTHubClientError, IoTHubTransportProvider from iothub_client import IoTHubMessage, IoTHubMessageDispositionResult, IoTHubError from azureml.core.model import Model # Imports for the http server from flask import Flask, request import json # Imports for storage import os # from azure.storage.blob import BlockBlobService, PublicAccess, AppendBlobService import random import string import csv from datetime import datetime from pytz import timezone import time import json class HubManager(object): def __init__( self, protocol=IoTHubTransportProvider.MQTT): self.client_protocol = protocol self.client = IoTHubModuleClient() self.client.create_from_environment(protocol) # set the time until a message times out self.client.set_option("messageTimeout", MESSAGE_TIMEOUT) # Forwards the message received onto the next stage in the process. def forward_event_to_output(self, outputQueueName, event, send_context): self.client.send_event_async( outputQueueName, event, send_confirmation_callback, send_context) def send_confirmation_callback(message, result, user_context): """ Callback received when the message that we're forwarding is processed. 
""" print("Confirmation[%d] received for message with result = %s" % (user_context, result)) def get_tinyyolo_frame_from_encode(msg): """ Formats jpeg encoded msg to frame that can be processed by tiny_yolov2 """ #inp = np.array(msg).reshape((len(msg),1)) #frame = cv2.imdecode(inp.astype(np.uint8), 1) frame = cv2.cvtColor(msg, cv2.COLOR_BGR2RGB) # resize and pad to keep input frame aspect ratio h, w = frame.shape[:2] tw = 416 if w > h else int(np.round(416.0 * w / h)) th = 416 if h > w else int(np.round(416.0 * h / w)) frame = cv2.resize(frame, (tw, th)) pad_value=114 top = int(max(0, np.round((416.0 - th) / 2))) left = int(max(0, np.round((416.0 - tw) / 2))) bottom = 416 - top - th right = 416 - left - tw frame = cv2.copyMakeBorder(frame, top, bottom, left, right, cv2.BORDER_CONSTANT, value=[pad_value, pad_value, pad_value]) frame = np.ascontiguousarray(np.array(frame, dtype=np.float32).transpose(2, 0, 1)) # HWC -> CHW frame = np.expand_dims(frame, axis=0) return frame def run(msg): # this is a dummy function required to satisfy AML-SDK requirements. return msg def init(): # Choose HTTP, AMQP or MQTT as transport protocol. Currently only MQTT is supported. 
PROTOCOL = IoTHubTransportProvider.MQTT DEVICE = 0 # when device is /dev/video0 LABEL_FILE = "labels.txt" MODEL_FILE = "Model.onnx" global MESSAGE_TIMEOUT # setting for IoT Hub Manager MESSAGE_TIMEOUT = 1000 LOCAL_DISPLAY = "OFF" # flag for local display on/off, default OFF # Create the IoT Hub Manager to send message to IoT Hub print("trying to make IOT Hub manager") hub_manager = HubManager(PROTOCOL) if not hub_manager: print("Took too long to make hub_manager, exiting program.") print("Try restarting IotEdge or this module.") sys.exit(1) # Get Labels from labels file labels_file = open(LABEL_FILE) labels_string = labels_file.read() labels = labels_string.split(",") labels_file.close() label_lookup = {} for i, val in enumerate(labels): label_lookup[val] = i # get model path from within the container image model_path=Model.get_model_path(MODEL_FILE) # Loading ONNX model print("loading model to ONNX Runtime...") start_time = time.time() ort_session = rt.InferenceSession(model_path) print("loaded after", time.time()-start_time,"s") # start reading frames from video endpoint cap = cv2.VideoCapture(DEVICE) while cap.isOpened(): _, _ = cap.read() ret, img_frame = cap.read() if not ret: print('no video RESETTING FRAMES TO 0 TO RUN IN LOOP') cap.set(cv2.CAP_PROP_POS_FRAMES, 0) continue """ Handles incoming inference calls for each fames. Gets frame from request and calls inferencing function on frame. Sends result to IOT Hub. 
""" try: draw_frame = img_frame start_time = time.time() # pre-process the frame to flatten, scale for tiny-yolo frame = get_tinyyolo_frame_from_encode(img_frame) # run the inference session for the given input frame objects = run_onnx(frame, ort_session, draw_frame, labels, LOCAL_DISPLAY) # LOOK AT OBJECTS AND CHECK PREVIOUS STATUS TO APPEND num_objects = len(objects) print("NUMBER OBJECTS DETECTED:", num_objects) print("PROCESSED IN:",time.time()-start_time,"s") if num_objects > 0: output_IOT = IoTHubMessage(json.dumps(objects)) hub_manager.forward_event_to_output("inferenceoutput", output_IOT, 0) continue except Exception as e: print('EXCEPTION:', str(e)) continue ``` ### 3.1 Include the dependent packages required by the application scripts ``` from azureml.core.conda_dependencies import CondaDependencies myenv = CondaDependencies() myenv.add_pip_package("azure-iothub-device-client") myenv.add_pip_package("numpy") myenv.add_pip_package("opencv-python") myenv.add_pip_package("requests") myenv.add_pip_package("pytz") myenv.add_pip_package("onnx") with open("myenv.yml", "w") as f: f.write(myenv.serialize_to_string()) ``` ### 3.2 Build the custom container image with the ONNX Runtime + OpenVINO base image This step uses pre-built container images with ONNX Runtime and the different HW execution providers. A complete list of base images are located [here](https://github.com/microsoft/onnxruntime/tree/master/dockerfiles#docker-containers-for-onnx-runtime). 
``` from azureml.core.image import ContainerImage from azureml.core.model import Model openvino_image_config = ContainerImage.image_configuration(execution_script = "score.py", runtime = "python", dependencies=["labels.txt", "amlpackage_inference.py"], conda_file = "myenv.yml", description = "TinyYolo ONNX Runtime inference container", tags = {"demo": "onnx"}) # Use the ONNX Runtime + OpenVINO base image for Intel MovidiusTM USB sticks openvino_image_config.base_image = "mcr.microsoft.com/azureml/onnxruntime:latest-openvino-myriad" # For the Intel Movidius VAD-M PCIe card use this: # openvino_image_config.base_image = "mcr.microsoft.com/azureml/onnxruntime:latest-openvino-vadm" openvino_image = ContainerImage.create(name = "name-of-image", # this is the model object models = [model], image_config = openvino_image_config, workspace = ws) # Alternative: Re-use an image that you have already built from the workspace image registry # openvino_image = ContainerImage(name = "<name-of-image>", workspace = ws) openvino_image.wait_for_creation(show_output = True) if openvino_image.creation_state == 'Failed': print("Image build log at: " + openvino_image.image_build_log_uri) if openvino_image.creation_state != 'Failed': print("Image URI at: " +openvino_image.image_location) ``` ## 4. 
Deploy to the UP<sup>2</sup> device using Azure IoT Edge

### 4.1 Log in with the Azure subscription to provision the IoT Hub and the IoT Edge device

```
!az login
!az account set --subscription $ws.subscription_id

# confirm the account
!az account show
```

### 4.2 Specify the IoT Edge device details

```
# Parameter list to configure the IoT Hub and the IoT Edge device

# Pick a name for what you want to call the module you deploy to the camera
module_name = "module-name-here"

# Resource group in Azure
resource_group_name = ws.resource_group
iot_rg = resource_group_name

# Azure region where your services will be provisioned
iot_location = "location-here"

# Azure IoT Hub name
iot_hub_name = "name-of-IoT-Hub"

# Pick a name for your camera
iot_device_id = "name-of-IoT-Edge-device"

# Pick a name for the deployment configuration
iot_deployment_id = "Inference Module from AML"
```

### 4.2a Optional: Provision the IoT Hub, create the IoT Edge device and set up the Intel UP<sup>2</sup> AI Vision Developer Kit

```
!az iot hub create --resource-group $resource_group_name --name $iot_hub_name --sku S1

# Register an IoT Edge device (create a new entry in the IoT Hub)
!az iot hub device-identity create --hub-name $iot_hub_name --device-id $iot_device_id --edge-enabled

!az iot hub device-identity show-connection-string --hub-name $iot_hub_name --device-id $iot_device_id
```

The following steps need to be executed in the device terminal:

1. Open the IoT Edge configuration file on the UP<sup>2</sup> device to update the IoT Edge device *connection string*: `sudo nano /etc/iotedge/config.yaml`

       provisioning:
         source: "manual"
         device_connection_string: "<ADD DEVICE CONNECTION STRING HERE>"

2. To update the DPS TPM provisioning configuration:

       provisioning:
         source: "dps"
         global_endpoint: "https://global.azure-devices-provisioning.net"
         scope_id: "{scope_id}"
         attestation:
           method: "tpm"
           registration_id: "{registration_id}"

3. Save and close the file: `CTRL + X`, `Y`, `Enter`.
4.
After entering the provisioning information in the configuration file, restart the *iotedge* daemon: `sudo systemctl restart iotedge`

5. We will show the object detection results from the camera connected (`/dev/video0`) to the UP<sup>2</sup> on the display. Update your .profile file (`nano ~/.profile`) and add the following line to the end of the file: __xhost +__

### 4.3 Construct the deployment file

```
# create the registry uri
container_reg = ws.get_details()["containerRegistry"]
reg_name = container_reg.split("/")[-1]
container_url = "\"" + openvino_image.image_location + "\","
subscription_id = ws.subscription_id
print('{}'.format(openvino_image.image_location), "<-- this is the URI configured in the IoT Hub for the device")
print('{}'.format(reg_name))
print('{}'.format(subscription_id))

from azure.mgmt.containerregistry import ContainerRegistryManagementClient
from azure.mgmt import containerregistry

client = ContainerRegistryManagementClient(ws._auth, subscription_id)
result = client.registries.list_credentials(resource_group_name, reg_name, custom_headers=None, raw=False)
username = result.username
password = result.passwords[0].value
```

#### Create the `deployment.json` with the AML image registry details

We have provided a sample deployment template for this reference implementation.
```
file = open('./AML-deployment.template.json')
contents = file.read()
contents = contents.replace('__AML_MODULE_NAME', module_name)
contents = contents.replace('__AML_REGISTRY_NAME', reg_name)
contents = contents.replace('__AML_REGISTRY_USER_NAME', username)
contents = contents.replace('__AML_REGISTRY_PASSWORD', password)
contents = contents.replace('__AML_REGISTRY_IMAGE_LOCATION', openvino_image.image_location)

with open('./deployment.json', 'wt', encoding='utf-8') as output_file:
    output_file.write(contents)
```

### 4.4 Push the *deployment* to the IoT Edge device

```
print("Pushing deployment to IoT Edge device")
print("Set the deployment")
!az iot edge set-modules --device-id $iot_device_id --hub-name $iot_hub_name --content deployment.json
```

### 4.5 Monitor IoT Hub Messages

```
!az iot hub monitor-events --hub-name $iot_hub_name -y
```

## 5. CLEANUP

```
!rm score.py deployment.json myenv.yml
```
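Before pushing the deployment, it can help to verify that every `__AML_*` placeholder in the template was actually substituted. A minimal sketch of such a check (the token names follow the replacements above; the toy template and module names here are illustrative, not part of the original notebook):

```python
import re

def unresolved_tokens(contents):
    """Return any __AML_* placeholder tokens still present in the rendered template."""
    return re.findall(r"__AML_[A-Z_]+", contents)

# Toy template standing in for AML-deployment.template.json
template = '{"module": "__AML_MODULE_NAME", "image": "__AML_REGISTRY_IMAGE_LOCATION"}'
rendered = (template
            .replace("__AML_MODULE_NAME", "tinyyolo-module")
            .replace("__AML_REGISTRY_IMAGE_LOCATION", "myregistry.azurecr.io/name-of-image:1"))

assert unresolved_tokens(template) == ["__AML_MODULE_NAME", "__AML_REGISTRY_IMAGE_LOCATION"]
assert unresolved_tokens(rendered) == []  # safe to write deployment.json
```

An `assert unresolved_tokens(contents) == []` before writing `deployment.json` fails fast if a new placeholder is added to the template but not to the replacement list.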
```
%matplotlib inline
import json, re
import fiona
from shapely.geometry import MultiPolygon, shape
import pandas as pd
import geopandas as gp
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
import matplotlib.cm as cm
from matplotlib.collections import PatchCollection
from matplotlib.colors import Normalize
from mpl_toolkits.basemap import Basemap
from descartes import PolygonPatch
from datetime import datetime
from dateutil.parser import parse
from pysal.esda.mapclassify import Natural_Breaks as nb
from itertools import chain

matplotlib.style.use('ggplot')
```

#### Create Dublin Electoral Division basemap from CSO boundary files (already converted to geojson)

```
with open("../data/raw/dublin_ed.geojson", "r") as f:
    dub = json.load(f)

# Creating new geojson for geometries and basic id info
# Including Ward names to incorporate 1961 data
dub2 = {'type': dub['type'], 'crs': dub['crs'], 'features': []}
for feat in dub['features']:
    prop = feat
    prop['properties'] = {"ed": feat['properties']['EDNAME'],
                          "geogid": feat['properties']['GEOGID'],
                          "osied": feat['properties']['OSIED'],
                          "ward": re.sub(r"\s[A-Z]$", "", feat['properties']['EDNAME'])}
    dub2['features'].append(prop)

with open("../data/processed/dublin_ed_base.geojson", "w") as f:
    json.dump(dub2, f, indent=2, sort_keys=True)

# Create Geopandas data frame from dublin data
dub_gp = gp.GeoDataFrame.from_file("../data/processed/dublin_ed_base.geojson")
dub_gp.set_index("geogid")
dub_gp.geometry.total_bounds
dub_gp.plot(legend=True)

# Import census tenure data into dataframe
with open("../data/interim/ten_71_81_91_02_11.json", "r") as f:
    tenure = json.load(f)

ten_df = pd.DataFrame.from_dict(tenure)
ten_df[ten_df['year'] == 1971].describe()

for t in tenure:
    t['year'] = str(t['year'])

dub_df = pd.DataFrame.from_dict(sorted(({k: t[k] for k in t if k in ['area', 'geogid', "year"] or k.endswith("units")}
                                        for t in tenure if t['geogid'] == "C02"), key=lambda x: x['year']))
dub_df.set_index("year")
dub_df.loc[dub_df['owner_occ_units'].isnull(), 'owner_occ_units'] = dub_df['owner_mortg_units'] + dub_df['owner_no_mortg_units']
dub_df.loc[dub_df['private_rent_units'].isnull(), 'private_rent_units'] = dub_df['rent_furn_units'] + dub_df['rent_unfurn_units']

ax = plt.figure(figsize=(7, 4), dpi=300).add_subplot(111)
dub_df.plot(ax=ax, x="year", y=['rent_furn_units', 'rent_unfurn_units', "private_rent_units"])
#bx = plt.figure(figsize=(7,4), dpi=300).add_subplot(111)
dub_df.plot(x="year", y=['owner_occ_units', 'owner_mortg_units', "owner_no_mortg_units", "private_rent_units"])

dub71 = pd.DataFrame.from_dict({k: t[k] for k in t if k in ['area', 'geogid', "year"] or k.endswith("units")}
                               for t in tenure if t['geogid'] and t['geogid'].startswith("E") and t['year'] == "2002")
dub71.set_index("geogid")
dub71.describe()

bds = dub_gp.total_bounds
extra = 0.01
ll = (bds[0], bds[1])
ur = (bds[2], bds[3])
coords = list(chain(ll, ur))
w, h = coords[2] - coords[0], coords[3] - coords[1]

m = Basemap(
    projection='tmerc',
    lon_0=-2., lat_0=49.,
    ellps='WGS84',
    llcrnrlon=coords[0] - extra * w,
    llcrnrlat=coords[1] - extra + 0.01 * h,
    urcrnrlon=coords[2] + extra * w,
    urcrnrlat=coords[3] + extra + 0.01 * h,
    lat_ts=0,
    resolution='i',
    suppress_ticks=True)
m.readshapefile(
    '../data/external/census2011DED/Census2011_Electoral_Divisions_generalised20m',
    'Dublin City',
    color='none',
    zorder=2)

# Calculate Jenks natural breaks for density
breaks = nb(
    dub71[dub71['rent_furn_units'].notnull()].rent_unfurn_units.values,
    initial=300,
    k=5)
# the notnull method lets us match indices when joining
jb = pd.DataFrame({'jenks_bins': breaks.yb}, index=dub71[dub71['rent_furn_units'].notnull()].index)
dub71 = dub71.join(jb)
dub71.jenks_bins.fillna(-1, inplace=True)

jenks_labels = ["<= %0.f units (%s EDs)" % (b, c) for b, c in zip(breaks.bins, breaks.counts)]
#jenks_labels.insert(0, 'No plaques (%s wards)' % len(df_map[df_map['density_km'].isnull()]))
jenks_labels

#fig = plt.figure(figsize=(20,15), dpi=300)#.add_subplot(111)
plt.clf()
fig = plt.figure()
ax = fig.add_subplot(111, axisbg='w', frame_on=False)

# use a blue colour ramp - we'll be converting it to a map using cmap()
cmap = plt.get_cmap('Blues')

# draw wards with grey outlines
patches = []
d71 = pd.merge(dub_gp, dub71)
d71['patches'] = d71.geometry.map(lambda x: PolygonPatch(x, ec='#555555', lw=.2, alpha=1., zorder=4))
pc = PatchCollection(d71['patches'], match_original=True)

# impose our colour map onto the patch collection
norm = Normalize()
pc.set_facecolor(cmap(norm(d71['jenks_bins'].values)))
ax.add_collection(pc)
d71.plot(column="jenks_bins", colormap="Blues", categorical=True, alpha=1.0)

# Add a colour bar
cb = colorbar_index(ncolors=len(jenks_labels), cmap=cmap, shrink=0.5, labels=jenks_labels)
cb.ax.tick_params(labelsize=10)

# this will set the image width to 722px at 100dpi
plt.tight_layout()
fig.set_size_inches(12, 8)
#plt.savefig('data/london_plaques.png', dpi=100, alpha=True)
plt.show()

# Convenience functions for working with colour ramps and bars
def colorbar_index(ncolors, cmap, labels=None, **kwargs):
    """
    This is a convenience function to stop you making off-by-one errors
    Takes a standard colour ramp, and discretizes it,
    then draws a colour bar with correctly aligned labels
    """
    cmap = cmap_discretize(cmap, ncolors)
    mappable = cm.ScalarMappable(cmap=cmap)
    mappable.set_array([])
    mappable.set_clim(-0.5, ncolors + 0.5)
    colorbar = plt.colorbar(mappable, **kwargs)
    colorbar.set_ticks(np.linspace(0, ncolors, ncolors))
    colorbar.set_ticklabels(range(ncolors))
    if labels:
        colorbar.set_ticklabels(labels)
    return colorbar

def cmap_discretize(cmap, N):
    """
    Return a discrete colormap from the continuous colormap cmap.

    cmap: colormap instance, eg. cm.jet.
    N: number of colors.
    Example
        x = resize(arange(100), (5,100))
        djet = cmap_discretize(cm.jet, 5)
        imshow(x, cmap=djet)
    """
    if type(cmap) == str:
        cmap = plt.get_cmap(cmap)
    colors_i = np.concatenate((np.linspace(0, 1., N), (0., 0., 0., 0.)))
    colors_rgba = cmap(colors_i)
    indices = np.linspace(0, 1., N + 1)
    cdict = {}
    for ki, key in enumerate(('red', 'green', 'blue')):
        cdict[key] = [(indices[i], colors_rgba[i - 1, ki], colors_rgba[i, ki]) for i in range(N + 1)]
    return matplotlib.colors.LinearSegmentedColormap(cmap.name + "_%d" % N, cdict, 1024)

[(t['geogid'], t['area'], t['year']) for t in tenure if t['geogid'] == "E02146"]

dub_gp['geometry'].iloc[6:8].values
ax.collections
```
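pysal's `Natural_Breaks` is only needed for the Jenks classification step above. If pysal is unavailable, a rough stand-in using quantile break points can be sketched with NumPy alone — note this is an approximation, not Jenks optimization, and the variable names here are illustrative rather than taken from the notebook:

```python
import numpy as np

def quantile_breaks(values, k=5):
    """Classify values into k classes using quantile break points
    (a crude substitute for Jenks natural breaks)."""
    values = np.asarray(values, dtype=float)
    # upper edges of the first k-1 classes; the top class is open-ended
    edges = np.quantile(values, np.linspace(0, 1, k + 1)[1:-1])
    bins = np.digitize(values, edges, right=True)  # class index 0..k-1
    return bins, edges

units = [3, 7, 8, 12, 15, 21, 30, 42, 55, 80]
bins, edges = quantile_breaks(units, k=5)
assert bins.min() == 0 and bins.max() == 4  # all five classes used
assert len(edges) == 4                      # k-1 interior break points
```

The resulting `bins` array plays the same role as `breaks.yb` in the Jenks version, so it can be joined back onto the dataframe in the same way.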
# About this kernel + efficientnet_b3 + CurricularFace + Mish() activation + Ranger (RAdam + Lookahead) optimizer + margin = 0.9 ## Imports ``` import sys sys.path.append('../input/shopee-competition-utils') sys.path.insert(0,'../input/pytorch-image-models') import numpy as np import pandas as pd import torch from torch import nn from torch.utils.data import Dataset, DataLoader import albumentations from albumentations.pytorch.transforms import ToTensorV2 from custom_scheduler import ShopeeScheduler from custom_activation import replace_activations, Mish from custom_optimizer import Ranger import math import cv2 import timm import os import random import gc from sklearn.preprocessing import LabelEncoder from sklearn.model_selection import GroupKFold from sklearn.neighbors import NearestNeighbors from tqdm.notebook import tqdm ``` ## Config ``` class CFG: DATA_DIR = '../input/shopee-product-matching/train_images' TRAIN_CSV = '../input/shopee-product-matching/train.csv' # data augmentation IMG_SIZE = 512 MEAN = [0.485, 0.456, 0.406] STD = [0.229, 0.224, 0.225] SEED = 2021 # data split N_SPLITS = 5 TEST_FOLD = 0 VALID_FOLD = 1 EPOCHS = 8 BATCH_SIZE = 8 NUM_WORKERS = 4 DEVICE = 'cuda:0' CLASSES = 6609 SCALE = 30 MARGIN = 0.9 MODEL_NAME = 'efficientnet_b3' MODEL_PATH = f'{MODEL_NAME}_curricular_face_epoch_{EPOCHS}_bs_{BATCH_SIZE}_margin_{MARGIN}.pt' FC_DIM = 512 SCHEDULER_PARAMS = { "lr_start": 1e-5, "lr_max": 1e-5 * 32, "lr_min": 1e-6, "lr_ramp_ep": 5, "lr_sus_ep": 0, "lr_decay": 0.8, } ``` ## Augmentations ``` def get_train_transforms(): return albumentations.Compose( [ albumentations.Resize(CFG.IMG_SIZE,CFG.IMG_SIZE,always_apply=True), albumentations.HorizontalFlip(p=0.5), albumentations.VerticalFlip(p=0.5), albumentations.Rotate(limit=120, p=0.8), albumentations.RandomBrightness(limit=(0.09, 0.6), p=0.5), albumentations.Normalize(mean=CFG.MEAN, std=CFG.STD), ToTensorV2(p=1.0), ] ) def get_valid_transforms(): return albumentations.Compose( [ 
albumentations.Resize(CFG.IMG_SIZE,CFG.IMG_SIZE,always_apply=True), albumentations.Normalize(mean=CFG.MEAN, std=CFG.STD), ToTensorV2(p=1.0) ] ) def get_test_transforms(): return albumentations.Compose( [ albumentations.Resize(CFG.IMG_SIZE,CFG.IMG_SIZE,always_apply=True), albumentations.Normalize(mean=CFG.MEAN, std=CFG.STD), ToTensorV2(p=1.0) ] ) ``` ## Reproducibility ``` def seed_everything(seed): random.seed(seed) os.environ['PYTHONHASHSEED'] = str(seed) np.random.seed(seed) torch.manual_seed(seed) torch.cuda.manual_seed(seed) torch.backends.cudnn.deterministic = True torch.backends.cudnn.benchmark = True # set True to be faster seed_everything(CFG.SEED) ``` ## Dataset ``` class ShopeeDataset(torch.utils.data.Dataset): """for training """ def __init__(self,df, transform = None): self.df = df self.root_dir = CFG.DATA_DIR self.transform = transform def __len__(self): return len(self.df) def __getitem__(self,idx): row = self.df.iloc[idx] img_path = os.path.join(self.root_dir,row.image) image = cv2.imread(img_path) image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB) label = row.label_group if self.transform: augmented = self.transform(image=image) image = augmented['image'] return { 'image' : image, 'label' : torch.tensor(label).long() } class ShopeeImageDataset(torch.utils.data.Dataset): """for validating and test """ def __init__(self,df, transform = None): self.df = df self.root_dir = CFG.DATA_DIR self.transform = transform def __len__(self): return len(self.df) def __getitem__(self,idx): row = self.df.iloc[idx] img_path = os.path.join(self.root_dir,row.image) image = cv2.imread(img_path) image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB) label = row.label_group if self.transform: augmented = self.transform(image=image) image = augmented['image'] return image,torch.tensor(1) ``` ## Curricular Face + NFNet-L0 ``` ''' credit : https://github.com/HuangYG123/CurricularFace/blob/8b2f47318117995aa05490c05b455b113489917e/head/metrics.py#L70 ''' def l2_norm(input, axis = 1): norm 
= torch.norm(input, 2, axis, True) output = torch.div(input, norm) return output class CurricularFace(nn.Module): def __init__(self, in_features, out_features, s = 30, m = 0.50): super(CurricularFace, self).__init__() print('Using Curricular Face') self.in_features = in_features self.out_features = out_features self.m = m self.s = s self.cos_m = math.cos(m) self.sin_m = math.sin(m) self.threshold = math.cos(math.pi - m) self.mm = math.sin(math.pi - m) * m self.kernel = nn.Parameter(torch.Tensor(in_features, out_features)) self.register_buffer('t', torch.zeros(1)) nn.init.normal_(self.kernel, std=0.01) def forward(self, embbedings, label): embbedings = l2_norm(embbedings, axis = 1) kernel_norm = l2_norm(self.kernel, axis = 0) cos_theta = torch.mm(embbedings, kernel_norm) cos_theta = cos_theta.clamp(-1, 1) # for numerical stability with torch.no_grad(): origin_cos = cos_theta.clone() target_logit = cos_theta[torch.arange(0, embbedings.size(0)), label].view(-1, 1) sin_theta = torch.sqrt(1.0 - torch.pow(target_logit, 2)) cos_theta_m = target_logit * self.cos_m - sin_theta * self.sin_m #cos(target+margin) mask = cos_theta > cos_theta_m final_target_logit = torch.where(target_logit > self.threshold, cos_theta_m, target_logit - self.mm) hard_example = cos_theta[mask] with torch.no_grad(): self.t = target_logit.mean() * 0.01 + (1 - 0.01) * self.t cos_theta[mask] = hard_example * (self.t + hard_example) cos_theta.scatter_(1, label.view(-1, 1).long(), final_target_logit) output = cos_theta * self.s return output, nn.CrossEntropyLoss()(output,label) class ShopeeModel(nn.Module): def __init__( self, n_classes = CFG.CLASSES, model_name = CFG.MODEL_NAME, fc_dim = CFG.FC_DIM, margin = CFG.MARGIN, scale = CFG.SCALE, use_fc = True, pretrained = True): super(ShopeeModel,self).__init__() print('Building Model Backbone for {} model'.format(model_name)) self.backbone = timm.create_model(model_name, pretrained=pretrained) if 'efficientnet' in model_name: final_in_features = 
self.backbone.classifier.in_features self.backbone.classifier = nn.Identity() self.backbone.global_pool = nn.Identity() elif 'resnet' in model_name: final_in_features = self.backbone.fc.in_features self.backbone.fc = nn.Identity() self.backbone.global_pool = nn.Identity() elif 'resnext' in model_name: final_in_features = self.backbone.fc.in_features self.backbone.fc = nn.Identity() self.backbone.global_pool = nn.Identity() elif 'nfnet' in model_name: final_in_features = self.backbone.head.fc.in_features self.backbone.head.fc = nn.Identity() self.backbone.head.global_pool = nn.Identity() self.pooling = nn.AdaptiveAvgPool2d(1) self.use_fc = use_fc if use_fc: self.dropout = nn.Dropout(p=0.0) self.fc = nn.Linear(final_in_features, fc_dim) self.bn = nn.BatchNorm1d(fc_dim) self._init_params() final_in_features = fc_dim self.final = CurricularFace(final_in_features, n_classes, s=scale, m=margin) def _init_params(self): nn.init.xavier_normal_(self.fc.weight) nn.init.constant_(self.fc.bias, 0) nn.init.constant_(self.bn.weight, 1) nn.init.constant_(self.bn.bias, 0) def forward(self, image, label): feature = self.extract_feat(image) logits = self.final(feature,label) return logits def extract_feat(self, x): batch_size = x.shape[0] x = self.backbone(x) x = self.pooling(x).view(batch_size, -1) if self.use_fc: x = self.dropout(x) x = self.fc(x) x = self.bn(x) return x ``` ## Engine ``` def train_fn(model, data_loader, optimizer, scheduler, i): model.train() fin_loss = 0.0 tk = tqdm(data_loader, desc = "Epoch" + " [TRAIN] " + str(i+1)) for t,data in enumerate(tk): for k,v in data.items(): data[k] = v.to(CFG.DEVICE) optimizer.zero_grad() _, loss = model(**data) loss.backward() optimizer.step() fin_loss += loss.item() tk.set_postfix({'loss' : '%.6f' %float(fin_loss/(t+1)), 'LR' : optimizer.param_groups[0]['lr']}) scheduler.step() return fin_loss / len(data_loader) def eval_fn(model, data_loader, i): model.eval() fin_loss = 0.0 tk = tqdm(data_loader, desc = "Epoch" + " [VALID] " + 
str(i+1)) with torch.no_grad(): for t,data in enumerate(tk): for k,v in data.items(): data[k] = v.to(CFG.DEVICE) _, loss = model(**data) fin_loss += loss.item() tk.set_postfix({'loss' : '%.6f' %float(fin_loss/(t+1))}) return fin_loss / len(data_loader) def read_dataset(): df = pd.read_csv(CFG.TRAIN_CSV) df['matches'] = df.label_group.map(df.groupby('label_group').posting_id.agg('unique').to_dict()) df['matches'] = df['matches'].apply(lambda x: ' '.join(x)) gkf = GroupKFold(n_splits=CFG.N_SPLITS) df['fold'] = -1 for i, (train_idx, valid_idx) in enumerate(gkf.split(X=df, groups=df['label_group'])): df.loc[valid_idx, 'fold'] = i labelencoder= LabelEncoder() df['label_group'] = labelencoder.fit_transform(df['label_group']) train_df = df[df['fold']!=CFG.TEST_FOLD].reset_index(drop=True) train_df = train_df[train_df['fold']!=CFG.VALID_FOLD].reset_index(drop=True) valid_df = df[df['fold']==CFG.VALID_FOLD].reset_index(drop=True) test_df = df[df['fold']==CFG.TEST_FOLD].reset_index(drop=True) train_df['label_group'] = labelencoder.fit_transform(train_df['label_group']) return train_df, valid_df, test_df def precision_score(y_true, y_pred): y_true = y_true.apply(lambda x: set(x.split())) y_pred = y_pred.apply(lambda x: set(x.split())) intersection = np.array([len(x[0] & x[1]) for x in zip(y_true, y_pred)]) len_y_pred = y_pred.apply(lambda x: len(x)).values precision = intersection / len_y_pred return precision def recall_score(y_true, y_pred): y_true = y_true.apply(lambda x: set(x.split())) y_pred = y_pred.apply(lambda x: set(x.split())) intersection = np.array([len(x[0] & x[1]) for x in zip(y_true, y_pred)]) len_y_true = y_true.apply(lambda x: len(x)).values recall = intersection / len_y_true return recall def f1_score(y_true, y_pred): y_true = y_true.apply(lambda x: set(x.split())) y_pred = y_pred.apply(lambda x: set(x.split())) intersection = np.array([len(x[0] & x[1]) for x in zip(y_true, y_pred)]) len_y_pred = y_pred.apply(lambda x: len(x)).values len_y_true = 
y_true.apply(lambda x: len(x)).values f1 = 2 * intersection / (len_y_pred + len_y_true) return f1 def get_valid_embeddings(df, model): model.eval() image_dataset = ShopeeImageDataset(df,transform=get_valid_transforms()) image_loader = torch.utils.data.DataLoader( image_dataset, batch_size=CFG.BATCH_SIZE, pin_memory=True, num_workers = CFG.NUM_WORKERS, drop_last=False ) embeds = [] with torch.no_grad(): for img,label in tqdm(image_loader): img = img.to(CFG.DEVICE) label = label.to(CFG.DEVICE) feat,_ = model(img,label) image_embeddings = feat.detach().cpu().numpy() embeds.append(image_embeddings) del model image_embeddings = np.concatenate(embeds) print(f'Our image embeddings shape is {image_embeddings.shape}') del embeds gc.collect() return image_embeddings def get_valid_neighbors(df, embeddings, KNN = 50, threshold = 0.36): model = NearestNeighbors(n_neighbors = KNN, metric = 'cosine') model.fit(embeddings) distances, indices = model.kneighbors(embeddings) predictions = [] for k in range(embeddings.shape[0]): idx = np.where(distances[k,] < threshold)[0] ids = indices[k,idx] posting_ids = ' '.join(df['posting_id'].iloc[ids].values) predictions.append(posting_ids) df['pred_matches'] = predictions df['f1'] = f1_score(df['matches'], df['pred_matches']) df['recall'] = recall_score(df['matches'], df['pred_matches']) df['precision'] = precision_score(df['matches'], df['pred_matches']) del model, distances, indices gc.collect() return df, predictions ``` # Training ``` def run_training(): train_df, valid_df, test_df = read_dataset() train_dataset = ShopeeDataset(train_df, transform = get_train_transforms()) train_dataloader = torch.utils.data.DataLoader( train_dataset, batch_size = CFG.BATCH_SIZE, pin_memory = True, num_workers = CFG.NUM_WORKERS, shuffle = True, drop_last = True ) print(train_df['label_group'].nunique()) model = ShopeeModel() model = replace_activations(model, torch.nn.SiLU, Mish()) model.to(CFG.DEVICE) optimizer = Ranger(model.parameters(), lr = 
CFG.SCHEDULER_PARAMS['lr_start']) #optimizer = torch.optim.Adam(model.parameters(), lr = config.SCHEDULER_PARAMS['lr_start']) scheduler = ShopeeScheduler(optimizer,**CFG.SCHEDULER_PARAMS) best_valid_f1 = 0. for i in range(CFG.EPOCHS): avg_loss_train = train_fn(model, train_dataloader, optimizer, scheduler, i) valid_embeddings = get_valid_embeddings(valid_df, model) valid_df, valid_predictions = get_valid_neighbors(valid_df, valid_embeddings) valid_f1 = valid_df.f1.mean() valid_recall = valid_df.recall.mean() valid_precision = valid_df.precision.mean() print(f'Valid f1 score = {valid_f1}, recall = {valid_recall}, precision = {valid_precision}') if valid_f1 > best_valid_f1: best_valid_f1 = valid_f1 print('Valid f1 score improved, model saved') torch.save(model.state_dict(),CFG.MODEL_PATH) run_training() ``` ## Best threshold Search ``` train_df, valid_df, test_df = read_dataset() print("Searching best threshold...") search_space = np.arange(10, 50, 1) model = ShopeeModel() model.eval() model = replace_activations(model, torch.nn.SiLU, Mish()) model.load_state_dict(torch.load(CFG.MODEL_PATH)) model = model.to(CFG.DEVICE) valid_embeddings = get_valid_embeddings(valid_df, model) best_f1_valid = 0. best_threshold = 0. for i in search_space: threshold = i / 100 valid_df, valid_predictions = get_valid_neighbors(valid_df, valid_embeddings, threshold=threshold) valid_f1 = valid_df.f1.mean() valid_recall = valid_df.recall.mean() valid_precision = valid_df.precision.mean() print(f"threshold = {threshold} -> f1 score = {valid_f1}, recall = {valid_recall}, precision = {valid_precision}") if (valid_f1 > best_f1_valid): best_f1_valid = valid_f1 best_threshold = threshold print("Best threshold =", best_threshold) print("Best f1 score =", best_f1_valid) BEST_THRESHOLD = best_threshold print("Searching best knn...") search_space = np.arange(40, 80, 2) best_f1_valid = 0. 
best_knn = 0 for knn in search_space: valid_df, valid_predictions = get_valid_neighbors(valid_df, valid_embeddings, KNN=knn, threshold=BEST_THRESHOLD) valid_f1 = valid_df.f1.mean() valid_recall = valid_df.recall.mean() valid_precision = valid_df.precision.mean() print(f"knn = {knn} -> f1 score = {valid_f1}, recall = {valid_recall}, precision = {valid_precision}") if (valid_f1 > best_f1_valid): best_f1_valid = valid_f1 best_knn = knn print("Best knn =", best_knn) print("Best f1 score =", best_f1_valid) BEST_KNN = best_knn test_embeddings = get_valid_embeddings(test_df,model) test_df, test_predictions = get_valid_neighbors(test_df, test_embeddings, KNN = BEST_KNN, threshold = BEST_THRESHOLD) test_f1 = test_df.f1.mean() test_recall = test_df.recall.mean() test_precision = test_df.precision.mean() print(f'Test f1 score = {test_f1}, recall = {test_recall}, precision = {test_precision}') ```
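The row-wise precision/recall/F1 used throughout the threshold and KNN search treat each row's matches as a space-separated set of posting ids. The same computation can be checked without pandas on toy strings (a minimal sketch; the ids `a`, `b`, `c` are made up for illustration):

```python
def row_f1(true_str, pred_str):
    """F1 between two space-separated id strings, mirroring the f1_score metric above."""
    y_true, y_pred = set(true_str.split()), set(pred_str.split())
    inter = len(y_true & y_pred)
    return 2 * inter / (len(y_true) + len(y_pred))

# Perfect prediction scores 1.0
assert row_f1("a b", "a b") == 1.0
# One of two true ids found, plus one false positive: 2*1/(2+2) = 0.5
assert row_f1("a b", "a c") == 0.5
```

Because each posting id always matches itself, a prediction that returns only the query row still gets partial credit, which is why the threshold search rarely drives F1 to zero.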
``` import pandas as pd import numpy as np import datetime import matplotlib.pyplot as plt import math import seaborn as sns import multiprocessing from multiprocessing import Process from sklearn.preprocessing import StandardScaler from sklearn.preprocessing import MinMaxScaler from sklearn.model_selection import train_test_split from sklearn.ensemble import RandomForestRegressor from sklearn import metrics from sklearn.linear_model import LinearRegression from sklearn.neighbors import KNeighborsRegressor class Result: def __init__(self, dataset_type, random_state, test_size, algorithm_name, fit_time, figure, r_square, mea, mse, rmse, k): self.dataset_type = dataset_type self.random_state = random_state self.test_size = test_size self.algorithm_name = algorithm_name self.fit_time = fit_time self.figure = figure self.r_square = r_square self.mea = mea self.mse = mse self.rmse = rmse self.k = k def print_object(self): print(f""" Algorithm: {self.algorithm_name} Dataset: {self.dataset_type} Test Size: {self.test_size} Random State: {self.random_state} Fit Time: {self.fit_time} Figure Path: {self.figure} R Square: {self.r_square} MEA: {self.mea} MSE: {self.mse} RMSE: {self.rmse} K: {self.k}\n\n """) results1 = multiprocessing.Manager().list() results2 = multiprocessing.Manager().list() results3 = multiprocessing.Manager().list() def runInParallel(*fns): proc = [] for fn in fns: p = Process(target=fn) p.start() proc.append(p) for p in proc: p.join() def standard_scaler(dataset): columns = dataset.columns.to_list() scaler = StandardScaler() scaled = scaler.fit_transform(dataset.to_numpy()) return pd.DataFrame(scaled, columns=columns) def min_max_scaler(dataset): columns = dataset.columns.to_list() scaler = MinMaxScaler() scaled = scaler.fit_transform(dataset.to_numpy()) return pd.DataFrame(scaled, columns=columns) data1 = pd.read_csv('process.csv') data2 = standard_scaler(data1) data3 = min_max_scaler(data1) def plotGraph(y_test, y_pred, regressorName, figName): 
plt.figure(figsize=(10,10)) plt.scatter(y_test,y_pred, c='crimson') plt.yscale('log') plt.xscale('log') p1 = max(max(y_pred), max(y_test)) p2 = min(min(y_pred), min(y_test)) plt.plot([p1, p2], [p1, p2], 'b-') plt.xlabel('True Values', fontsize=15) plt.ylabel('Predictions', fontsize=15) plt.axis('equal') plt.savefig(figName, format="svg") plt.close() def split_test(rand, size, dataset): y = dataset['new_cases'].values x = dataset.drop(['new_cases'], axis=1).values return train_test_split(x, y, random_state=rand, test_size=size) def liner_test(name, rand, size, X_train, X_test, y_train, y_test): start = datetime.datetime.now() regressor = LinearRegression() regressor.fit(X_train, y_train) y_pred = regressor.predict(X_test) end = datetime.datetime.now() figureName = f"LN-{name}-{rand}-{size}" plotGraph(y_test, y_pred, 'Linear Regression', figureName) r_square = (1 - ((1 - metrics.r2_score(y_test, y_pred)) * (len(y_test) - 1)) / (len(y_test) - X_train.shape[1] - 1)) mea = metrics.mean_absolute_error(y_test, y_pred) mse = metrics.mean_squared_error(y_test, y_pred) rmse = np.sqrt(metrics.mean_squared_error(y_test, y_pred)) res = Result(name, rand, size, "LinearRegression", (end-start), figureName, r_square, mea, mse, rmse, 0) results1.append(res) def rand_forest(name, rand, size, X_train, X_test, y_train, y_test): start = datetime.datetime.now() rf = RandomForestRegressor(n_estimators = 1000 - rand, random_state = rand + 10) rf.fit(X_train, y_train); y_pred = rf.predict(X_test) end = datetime.datetime.now() figureName = f"RF-{name}-{rand}-{size}" plotGraph(y_test, y_pred, 'Random Forest', figureName) r_square = (1 - ((1 - metrics.r2_score(y_test, y_pred)) * (len(y_test) - 1)) / (len(y_test) - X_train.shape[1] - 1)) mea = metrics.mean_absolute_error(y_test, y_pred) mse = metrics.mean_squared_error(y_test, y_pred) rmse = np.sqrt(metrics.mean_squared_error(y_test, y_pred)) res = Result(name, rand, size, "RandomForest", (end-start), figureName, r_square, mea, mse, rmse, 0) 
    results2.append(res)

def knn_regression(name, rand, size, X_train, X_test, y_train, y_test):
    start = datetime.datetime.now()
    for K in range(20):
        K = K + 1
        clf = KNeighborsRegressor(n_neighbors=K)
        clf.fit(X_train, y_train)
        y_pred = clf.predict(X_test)
        end = datetime.datetime.now()
        figureName = f"KNN-{name}-{rand}-{size}-{K}"  # was mislabeled with an "RF-" prefix
        plotGraph(y_test, y_pred, f"KNN with K {K}", figureName)
        r_square = (1 - ((1 - metrics.r2_score(y_test, y_pred)) * (len(y_test) - 1)) / (len(y_test) - X_train.shape[1] - 1))
        mea = metrics.mean_absolute_error(y_test, y_pred)
        mse = metrics.mean_squared_error(y_test, y_pred)
        rmse = np.sqrt(metrics.mean_squared_error(y_test, y_pred))
        res = Result(name, rand, size, "KNN", (end - start), figureName, r_square, mea, mse, rmse, K)
        results3.append(res)

def test1():
    start = datetime.datetime.now()
    first_time = True
    for i in range(0, 11):
        if first_time:
            rand = 0
            size = 0.05
            first_time = False
        else:
            rand = i * 5
            size = i * 0.05
        X_train, X_test, y_train, y_test = split_test(rand, size, data1)
        liner_test('Original', rand, size, X_train, X_test, y_train, y_test)
        rand_forest('Original', rand, size, X_train, X_test, y_train, y_test)
        knn_regression('Original', rand, size, X_train, X_test, y_train, y_test)
    end = datetime.datetime.now()
    print("\nTest1 Time: ", end - start)

def test2():
    start = datetime.datetime.now()
    first_time = True
    for i in range(0, 11):
        if first_time:
            rand = 0
            size = 0.05
            first_time = False
        else:
            rand = i * 5
            size = i * 0.05
        X_train, X_test, y_train, y_test = split_test(rand, size, data2)
        liner_test('Standard Scaler', rand, size, X_train, X_test, y_train, y_test)
        rand_forest('Standard Scaler', rand, size, X_train, X_test, y_train, y_test)
        knn_regression('Standard Scaler', rand, size, X_train, X_test, y_train, y_test)
    end = datetime.datetime.now()
    print("\nTest2 Time: ", end - start)

def test3():
    start = datetime.datetime.now()
    first_time = True
    for i in range(0, 11):
        if first_time:
            rand = 0
            size = 0.05
            first_time = False
        else:
            rand = i * 5
            size = i * 0.05
        rand = int(rand)
        X_train, X_test, y_train, y_test = split_test(rand, size, data3)
        liner_test('MinMax Scaler', rand, size, X_train, X_test, y_train, y_test)
        rand_forest('MinMax Scaler', rand, size, X_train, X_test, y_train, y_test)
        knn_regression('MinMax Scaler', rand, size, X_train, X_test, y_train, y_test)
    end = datetime.datetime.now()
    print("\nTest3 Time: ", end - start)

start = datetime.datetime.now()
print(f"Parallel start at {start}")
runInParallel(test1, test2, test3)
end = datetime.datetime.now()
print("\n\nParallel time: ", end - start)

for item in results2:
    results3.append(item)
for item in results1:
    results3.append(item)

results = list(results3)

results.sort(key=lambda x: x.r_square, reverse=True)
for i in range(0, 11):
    print(f"Position {i}:")
    results[i].print_object()

results.sort(key=lambda x: x.mea, reverse=False)
for i in range(0, 11):
    print(f"Position {i}:")
    results[i].print_object()

results.sort(key=lambda x: x.mse, reverse=False)
for i in range(0, 11):
    print(f"Position {i}:")
    results[i].print_object()

results.sort(key=lambda x: x.rmse, reverse=False)
for i in range(0, 11):
    print(f"Position {i}:")
    results[i].print_object()
```
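The inline expression `(1 - ((1 - r2) * (n - 1)) / (n - k - 1))` that each test function computes is the adjusted R², which penalizes R² for the number of predictors. A minimal, self-contained check (the `y` arrays below are made up, not the notebook's data):

```python
import numpy as np
from sklearn.metrics import r2_score

def adjusted_r2(y_true, y_pred, n_features):
    # 1 - (1 - R^2) * (n - 1) / (n - k - 1), matching the inline formula above
    n = len(y_true)
    r2 = r2_score(y_true, y_pred)
    return 1 - (1 - r2) * (n - 1) / (n - n_features - 1)

y_true = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y_pred = np.array([1.1, 1.9, 3.2, 3.8, 5.0])

adj = adjusted_r2(y_true, y_pred, n_features=2)
```

With k > 0 predictors the adjusted value always sits below the plain R², so adding useless features cannot inflate the score — which is why it is the fairer metric for comparing models trained on different feature sets.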
# North and South Chickamauga Creek USGS Stations Streamflow Data for HSPF Model

## Shuvashish Roy

### *We will use the hydrofunctions package for flow data retrieval*

```
pip show hydrofunctions

import hydrofunctions as hf
import pandas as pd
%matplotlib inline

hf.__version__
pd.__version__

sites = ['03566535', '03567500']  # USGS gage ids
start = '2017-01-01'
end = '2021-01-01'
service = 'dv'

# Request our data.
data = hf.NWIS(sites, service, start, end, file='graphing-1.parquet')  # you can also do it in csv
data  # Verify that the data request went fine.
```

### *See the precipitation and discharge columns of these two USGS gages from the dataset*

```
import pandas as pd
Q1 = pd.read_parquet('graphing-1.parquet')
Q1.head()
```

### *Extract only the Discharge data*

```
Q = data.df('00060')  # 'q' or '00060' for discharge data
Q.head(5)
Q.columns = ["NORTH CHICKAMAUGA", "SOUTH CHICKAMAUGA"]
Q.index.names = ['Date']
Q.head()
```

### *Plot the daily discharge data*

```
Q.plot()  # or data.df('discharge').plot() or data.df('00060').plot()
data.df('00045').plot()  # precipitation plot
```

### *Let's plot weekly, monthly, or quarterly data*

```
ax = Q.resample('W').sum().plot()  # use 'M' or 'Q' for monthly and quarterly data plots
ax.set_ylabel('Discharge (cfs)')
ax.set_xlabel('Date (UTC)')
ax.set_title('Weekly Flow')
```

### Let's check the annual trend

```
ax = Q.resample('D').sum().rolling(365).sum().plot()  # 365-day rolling sum
ax.set_ylim(0, None);
```

#### In the North Chickamauga stream, discharge increased until early 2019 and decreased after that. Each point is the sum of discharge over the previous 365 days.

### *Get the total discharge of the North and South Chickamauga streams, which can later be used as the upstream boundary condition for the TN River EFDC model*

```
Q.loc[:, 'Total'] = Q.loc[:, 'NORTH CHICKAMAUGA'] + Q.loc[:, 'SOUTH CHICKAMAUGA']
Q  # check the Total flow

ax = Q.resample('M').sum().plot()
ax.set_ylabel('Discharge (cfs)')
ax.set_xlabel('Date (UTC)')
ax.set_title('Monthly Flow')

Q.groupby(Q.index.month).mean().plot();

df = Q
df.loc[:, 'Month'] = df.index.month
df
Q
```

# Let's draw some box plots

```
import matplotlib.pyplot as plt

ax = df.boxplot(column=['NORTH CHICKAMAUGA'], by='Month', showfliers=False)
plt.suptitle("North Chickamauga")
ax.set_title('')

ax = df.boxplot(column=['SOUTH CHICKAMAUGA'], by='Month', showfliers=False)
plt.suptitle("")
ax.set_title('South Chickamauga')
```
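The `resample`/`rolling` pattern used throughout can be checked on a small synthetic daily series (values invented for illustration, not USGS data):

```python
import numpy as np
import pandas as pd

# 14 days of fake daily discharge: 0, 1, ..., 13
idx = pd.date_range("2020-01-01", periods=14, freq="D")
q = pd.Series(np.arange(14, dtype=float), index=idx)

weekly = q.resample("W").sum()   # weekly totals, as in the weekly flow plot
rolling7 = q.rolling(7).sum()    # 7-day rolling sum (the notebook uses 365)
```

`resample` bins by calendar period (here weeks ending Sunday), while `rolling` slides a fixed-length window over every row, which is why the rolling series starts with NaNs until the window fills.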
<center> <img src="https://gitlab.com/ibm/skills-network/courses/placeholder101/-/raw/master/labs/module%201/images/IDSNlogo.png" width="300" alt="cognitiveclass.ai logo" /> </center>

# **Space X Falcon 9 First Stage Landing Prediction**

## Assignment: Machine Learning Prediction

Estimated time needed: **60** minutes

Space X advertises Falcon 9 rocket launches on its website with a cost of 62 million dollars; other providers cost upward of 165 million dollars each. Much of the savings comes from Space X reusing the first stage. Therefore, if we can determine whether the first stage will land, we can determine the cost of a launch. This information can be used if an alternate company wants to bid against Space X for a rocket launch. In this lab, you will create a machine learning pipeline to predict if the first stage will land given the data from the preceding labs.

![](https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-DS0701EN-SkillsNetwork/api/Images/landing\_1.gif)

Several examples of an unsuccessful landing are shown here:

![](https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-DS0701EN-SkillsNetwork/api/Images/crash.gif)

Most unsuccessful landings are planned; Space X performs a controlled landing in the ocean.

## Objectives

Perform exploratory data analysis and determine training labels:

*   Create a column for the class
*   Standardize the data
*   Split into training data and test data
*   Find the best hyperparameters for SVM, classification trees, and logistic regression
*   Find the method that performs best using test data

***

## Import Libraries and Define Auxiliary Functions

We will import the following libraries for the lab

```
# Pandas is a software library written for the Python programming language for data manipulation and analysis.
import pandas as pd
# NumPy is a library for the Python programming language, adding support for large, multi-dimensional arrays and matrices, along with a large collection of high-level mathematical functions to operate on these arrays
import numpy as np
# Matplotlib is a plotting library for python and pyplot gives us a MatLab like plotting framework. We will use this in our plotter function to plot data.
import matplotlib.pyplot as plt
# Seaborn is a Python data visualization library based on matplotlib. It provides a high-level interface for drawing attractive and informative statistical graphics
import seaborn as sns
# Preprocessing allows us to standardize our data
from sklearn import preprocessing
# Allows us to split our data into training and testing data
from sklearn.model_selection import train_test_split
# Allows us to test parameters of classification algorithms and find the best one
from sklearn.model_selection import GridSearchCV
# Logistic Regression classification algorithm
from sklearn.linear_model import LogisticRegression
# Support Vector Machine classification algorithm
from sklearn.svm import SVC
# Decision Tree classification algorithm
from sklearn.tree import DecisionTreeClassifier
# K Nearest Neighbors classification algorithm
from sklearn.neighbors import KNeighborsClassifier
```

This function is to plot the confusion matrix.
```
def plot_confusion_matrix(y, y_predict):
    "this function plots the confusion matrix"
    from sklearn.metrics import confusion_matrix

    cm = confusion_matrix(y, y_predict)
    ax = plt.subplot()
    sns.heatmap(cm, annot=True, ax=ax)  # annot=True to annotate cells
    ax.set_xlabel('Predicted labels')
    ax.set_ylabel('True labels')
    ax.set_title('Confusion Matrix')
    ax.xaxis.set_ticklabels(['did not land', 'land'])
    ax.yaxis.set_ticklabels(['did not land', 'landed'])
```

## Load the dataframe

Load the data

```
data = pd.read_csv("https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBM-DS0321EN-SkillsNetwork/datasets/dataset_part_2.csv")
# If you were unable to complete the previous lab correctly you can uncomment and load this csv
# data = pd.read_csv('https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-DS0701EN-SkillsNetwork/api/dataset_part_2.csv')
data.head()

X = pd.read_csv('https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBM-DS0321EN-SkillsNetwork/datasets/dataset_part_3.csv')
# If you were unable to complete the previous lab correctly you can uncomment and load this csv
# X = pd.read_csv('https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-DS0701EN-SkillsNetwork/api/dataset_part_3.csv')
X.head(100)
```

## TASK 1

Create a NumPy array from the column <code>Class</code> in <code>data</code> by applying the method <code>to_numpy()</code>, then assign it to the variable <code>Y</code> (only one bracket: df\['name of column']).

```
Y = data['Class'].to_numpy()
Y
```

## TASK 2

Standardize the data in <code>X</code>, then reassign it to the variable <code>X</code> using the transform provided below.

```
# students get this
transform = preprocessing.StandardScaler()
```

We split the data into training and testing data using the function <code>train_test_split</code>.
The training data is further divided into validation data, a second set used during training; the models are then trained and hyperparameters are selected using the function <code>GridSearchCV</code>.

## TASK 3

Use the function train_test_split to split the data X and Y into training and test data. Set the parameter test_size to 0.2 and random_state to 2. The training data and test data should be assigned to the following labels: <code>X_train, X_test, y_train, y_test</code>

```
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.2, random_state=2)
```

We can see we only have 18 test samples.

```
y_test.shape
```

## TASK 4

Create a logistic regression object then create a GridSearchCV object <code>logreg_cv</code> with cv = 10. Fit the object to find the best parameters from the dictionary <code>parameters</code>.

```
parameters = {"C": [0.01, 0.1, 1], 'penalty': ['l2'], 'solver': ['lbfgs']}  # l1 lasso, l2 ridge
lr = LogisticRegression()
logreg_cv = GridSearchCV(lr, parameters, cv=10, refit=True)
logreg_cv.fit(X_train, y_train)
```

We output the <code>GridSearchCV</code> object for logistic regression. We display the best parameters using the data attribute <code>best_params\_</code> and the accuracy on the validation data using the data attribute <code>best_score\_</code>.
```
print("tuned hyperparameters :(best parameters) ", logreg_cv.best_params_)
print("accuracy :", logreg_cv.best_score_)
```

## TASK 5

Calculate the accuracy on the test data using the method <code>score</code>:

```
from sklearn.metrics import classification_report, confusion_matrix, f1_score, precision_score, recall_score, roc_auc_score

glm_acc = logreg_cv.score(X_test, y_test)
print("Test set accuracy: {:.1%}".format(glm_acc))

glm_probs = logreg_cv.predict_proba(X_test)[:, 1]
glm_auc = roc_auc_score(y_test, glm_probs)
print("Test set AUC: {:.3}".format(glm_auc))
```

Let's look at the confusion matrix:

```
yhat = logreg_cv.predict(X_test)
plot_confusion_matrix(y_test, yhat)
```

Examining the confusion matrix, we see that logistic regression can distinguish between the different classes. We see that the major problem is false positives.

## TASK 6

Create a support vector machine object then create a <code>GridSearchCV</code> object <code>svm_cv</code> with cv = 10. Fit the object to find the best parameters from the dictionary <code>parameters</code>.

```
parameters = {'kernel': ('linear', 'rbf', 'poly', 'sigmoid'),
              'C': np.logspace(-3, 3, 5),
              'gamma': np.logspace(-3, 3, 5)}
svm = SVC(probability=True, random_state=1)
svm_cv = GridSearchCV(svm, parameters, cv=10)
svm_cv.fit(X_train, y_train)

print("tuned hyperparameters :(best parameters) ", svm_cv.best_params_)
print("accuracy :", svm_cv.best_score_)
```

## TASK 7

Calculate the accuracy on the test data using the method <code>score</code>:

We can plot the confusion matrix

```
yhat = svm_cv.predict(X_test)
plot_confusion_matrix(y_test, yhat)
```

## TASK 8

Create a decision tree classifier object then create a <code>GridSearchCV</code> object <code>tree_cv</code> with cv = 10. Fit the object to find the best parameters from the dictionary <code>parameters</code>.
```
parameters = {'criterion': ['gini', 'entropy'],
              'splitter': ['best', 'random'],
              'max_depth': [2*n for n in range(1, 10)],
              'max_features': ['auto', 'sqrt'],
              'min_samples_leaf': [1, 2, 4],
              'min_samples_split': [2, 5, 10]}
tree = DecisionTreeClassifier()
# The GridSearchCV object must be created and fitted before best_params_ is available
tree_cv = GridSearchCV(tree, parameters, cv=10)
tree_cv.fit(X_train, y_train)

print("tuned hyperparameters :(best parameters) ", tree_cv.best_params_)
print("accuracy :", tree_cv.best_score_)
```

## TASK 9

Calculate the accuracy of tree_cv on the test data using the method <code>score</code>:

We can plot the confusion matrix

```
yhat = tree_cv.predict(X_test)
plot_confusion_matrix(y_test, yhat)
```

## TASK 10

Create a k nearest neighbors object then create a <code>GridSearchCV</code> object <code>knn_cv</code> with cv = 10. Fit the object to find the best parameters from the dictionary <code>parameters</code>.

```
parameters = {'n_neighbors': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10],
              'algorithm': ['auto', 'ball_tree', 'kd_tree', 'brute'],
              'p': [1, 2]}
KNN = KNeighborsClassifier()
# As above, create and fit the GridSearchCV object before reading its attributes
knn_cv = GridSearchCV(KNN, parameters, cv=10)
knn_cv.fit(X_train, y_train)

print("tuned hyperparameters :(best parameters) ", knn_cv.best_params_)
print("accuracy :", knn_cv.best_score_)
```

## TASK 11

Calculate the accuracy of knn_cv on the test data using the method <code>score</code>:

We can plot the confusion matrix

```
yhat = knn_cv.predict(X_test)
plot_confusion_matrix(y_test, yhat)
```

## TASK 12

Find the method that performs best:

## Authors

<a href="https://www.linkedin.com/in/joseph-s-50398b136/?utm_medium=Exinfluencer&utm_source=Exinfluencer&utm_content=000026UJ&utm_term=10006555&utm_id=NA-SkillsNetwork-Channel-SkillsNetworkCoursesIBMDS0321ENSkillsNetwork26802033-2021-01-01">Joseph Santarcangelo</a> has a PhD in Electrical Engineering; his research focused on using machine learning, signal processing, and computer vision to determine how videos impact human cognition. Joseph has been working for IBM since he completed his PhD.
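The comparison the last task asks for reduces to scoring each tuned model on the held-out test split and taking the maximum. A self-contained sketch on synthetic data (the dataset and the two candidate models here are stand-ins, not the lab's Falcon 9 data):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Synthetic stand-in for the lab's feature matrix and labels
X, y = make_classification(n_samples=200, n_features=10, random_state=2)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=2)

candidates = {
    "LogisticRegression": GridSearchCV(LogisticRegression(max_iter=1000), {"C": [0.01, 0.1, 1]}, cv=10),
    "KNN": GridSearchCV(KNeighborsClassifier(), {"n_neighbors": [3, 5, 7]}, cv=10),
}

# Fit each search on the training split, then score on the untouched test split
test_acc = {name: cv.fit(X_train, y_train).score(X_test, y_test) for name, cv in candidates.items()}
best_method = max(test_acc, key=test_acc.get)
```

Note the comparison uses test accuracy, not `best_score_`: the latter is a cross-validation score on the training split and would reward a model that overfits the search itself.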
## Change Log | Date (YYYY-MM-DD) | Version | Changed By | Change Description | | ----------------- | ------- | ------------- | ----------------------- | | 2021-08-31 | 1.1 | Lakshmi Holla | Modified markdown | | 2020-09-20 | 1.0 | Joseph | Modified Multiple Areas | Copyright © 2020 IBM Corporation. All rights reserved.
``` # While in argo environment: Import necessary packages for this notebook import numpy as np from matplotlib import pyplot as plt import xarray as xr import pandas as pd from pandas.plotting import register_matplotlib_converters register_matplotlib_converters() %matplotlib inline import glob ``` !python -m pip install "dask[complete]" ``` float_id = '9094' # '9094' '9099' '7652' '9125' rootdir = '../data/raw/LowRes/' fd = xr.open_mfdataset(rootdir + float_id + 'SOOCNQC.nc') JULD = pd.to_datetime(fd.JULD.values) ``` #reads float data file_folder = "../data/raw/WGfloats/" #file_folder = "../../data/raw/LowRes" float_number = "5904468" #7900918 #9094 files = sorted(glob.glob(file_folder+"/*"+float_number+"*.nc")) print(files) #files = sorted(glob.glob(file_folder+"/*.nc")) fd = xr.open_mfdataset(file_folder+"/*"+float_number+"*.nc") JULD = pd.to_datetime(fd.JULD.values) ``` fd ``` #help(xr.open_mfdataset) rootdir + float_id + 'SOOCNQC.nc' #Data/LowRes/9099SOOCNQC.nc #fd ``` # HELPER FUNCTIONS #define a function that smooths using a boxcar filter (running mean) # not sure this function is actually used in the notebook?? 
def smooth(y, box_pts): box = np.ones(box_pts)/box_pts y_smooth = np.convolve(y, box, mode='same') return y_smooth #interpolate the data onto the standard depth grid given by x_int def interpolate(x_int, xvals, yvals): yvals_int = [] for n in range(0, len(yvals)): # len(yvals) = profile number yvals_int.append(np.interp(x_int, xvals[n, :], yvals[n, :])) #convert the interpolated data from a list to numpy array return np.asarray(yvals_int) # calculate the vertically integrated data column inventory using the composite trapezoidal rule def integrate(zi, data, depth_range): n_profs = len(data) zi_start = abs(zi - depth_range[0]).argmin() # find location of start depth zi_end = abs(zi - depth_range[1]).argmin() # find location of end depth zi_struct = np.ones((n_profs, 1)) * zi[zi_start : zi_end] # add +1 to get the 200m value data = data[:, zi_start : zi_end] # add +1 to get the 200m value col_inv = [] for n in range(0, len(data)): col_inv.append(np.trapz(data[n,:][~np.isnan(data[n,:])], zi_struct[n,:][~np.isnan(data[n,:])])) return col_inv #fd #(float data) #fd.Pressure.isel(N_PROF=0).values # Interpolate nitrate and poc zi = np.arange(0, 1600, 5) # 5 = 320 depth intervals between 0m to 1595m nitr_int = interpolate(zi, fd.Pressure[:, ::-1], fd.Nitrate[:, ::-1]) # interpolate nitrate values across zi depth intervals for all 188 profiles # Integrate nitrate and poc - total nitrate in upper 200m upperlim=25 lowerlim=200 nitr = np.array(integrate(zi, nitr_int, [upperlim, lowerlim])) # integrate interpolated nitrate values between 25m-200m print(nitr) #nitr.shape # Find winter maximum and summer minimum upper ocean nitrate levels def find_extrema(data, date_range, find_func): # Find indices of float profiles in the date range date_mask = (JULD > date_range[0]) & (JULD < date_range[1]) # Get the index where the data is closest to the find_func index = np.where(data[date_mask] == find_func(data[date_mask]))[0][0] # Get the average data for the month of the extrema 
month_start = JULD[date_mask][index].replace(day = 1) # .replace just changes the day of max/min to 1 month_dates = (JULD > month_start) & (JULD < month_start + pd.Timedelta(days = 30)) # ^ not sure why this is needed? or what it does? - it is not used later on... #month_avg = np.mean(data[date_mask]) #average whole winter or summer values # ^ but it should be just the month of max/min nitrate, # not the average for the whole season?... month_mask = (JULD.month[date_mask] == month_start.month) month_avg = np.mean(data[date_mask][month_mask]) return month_avg, JULD[date_mask][index], data[date_mask][index] years = [2015, 2016, 2017, 2018] nitr_extrema = [] nitr_ancp = [] for y in years: winter_range = [pd.datetime(y, 8, 1), pd.datetime(y, 12, 1)] #4 months summer_range = [pd.datetime(y, 12, 1), pd.datetime(y + 1, 4, 1)] #4 months # Find maximum winter and minimum summer nitrate avg_max_nitr, max_nitr_date, max_nitr = find_extrema(nitr, winter_range, np.max) avg_min_nitr, min_nitr_date, min_nitr = find_extrema(nitr, summer_range, np.min) # Convert to annual nitrate drawdown redfield_ratio = 106.0/16.0 #106C:16NO3- # Nitrate units: umol/kg --> divide by 1000 to convert to mol/kg nitr_drawdown = (avg_max_nitr - avg_min_nitr)/1000.0 * redfield_ratio nitr_ancp.append(nitr_drawdown) nitr_extrema.append(((max_nitr, max_nitr_date), (min_nitr, min_nitr_date))) print(y, max_nitr_date, max_nitr, avg_max_nitr) print(y, min_nitr_date, min_nitr, avg_min_nitr) # plot ANCP for chosen float over specified time period fig, ax = plt.subplots(figsize = (10, 5)) ax.plot(years, nitr_ancp) ax.set_ylabel('ANCP [mol/m$^2$]', size = 12) ax.set_xticks(years) ax.set_xticklabels(['2015', '2016', '2017', '2018']) ax.set_title('ANCP for Float ' + float_id) # Find winter maximum and summer minimum upper ocean nitrate levels def find_extrema(data, date_range, find_func): # Find indices of float profiles in the date range date_mask = (JULD > date_range[0]) & (JULD < date_range[1]) # Get the index 
where the data is closest to the find_func index = np.where(data[date_mask] == find_func(data[date_mask]))[0][0] # Get the average data for the month of the extrema month_start = JULD[date_mask][index].replace(day = 1) # .replace just changes the day of max/min to 1 month_dates = (JULD > month_start) & (JULD < month_start + pd.Timedelta(days = 30)) # ^ not sure why this is needed? or what it does? - it is not used later on... month_avg = np.mean(data[date_mask]) #average whole winter or summer values # ^ but it should be just the month of max/min nitrate, # not the average for the whole season?... # month_mask = (JULD.month[date_mask] == month_start.month) # month_avg = np.mean(data[date_mask][month_mask]) return month_avg, JULD[date_mask][index], data[date_mask][index] years = [2015, 2016, 2017, 2018] nitr_extrema = [] nitr_ancp = [] for y in years: winter_range = [pd.datetime(y, 8, 1), pd.datetime(y, 12, 1)] summer_range = [pd.datetime(y, 12, 1), pd.datetime(y + 1, 4, 1)] # Find maximum winter and minimum summer nitrate avg_max_nitr, max_nitr_date, max_nitr = find_extrema(nitr, winter_range, np.max) avg_min_nitr, min_nitr_date, min_nitr = find_extrema(nitr, summer_range, np.min) # Convert to annual nitrate drawdown redfield_ratio = 106.0/16.0 #106C:16NO3- # Nitrate units: umol/kg --> divide by 1000 to convert to mol/kg nitr_drawdown = (avg_max_nitr - avg_min_nitr)/1000.0 * redfield_ratio nitr_ancp.append(nitr_drawdown) nitr_extrema.append(((max_nitr, max_nitr_date), (min_nitr, min_nitr_date))) print(y, max_nitr_date, max_nitr, avg_max_nitr) print(y, min_nitr_date, min_nitr, avg_min_nitr) # plot ANCP for chosen float over specified time period fig, ax = plt.subplots(figsize = (10, 5)) ax.plot(years, nitr_ancp) ax.set_ylabel('ANCP [mol/m$^2$]', size = 12) ax.set_xticks(years) ax.set_xticklabels(['2015', '2016', '2017', '2018']) ax.set_title('ANCP for Float ' + float_id) # Plot values of integrated nitrate (mol/m2) fig, ax = plt.subplots(figsize = (20, 5)) # 
Integrate nitrate and poc between given depth range zi_range = [25, 200] nitr_v = np.array(integrate(zi, nitr_int, zi_range))/1000.0 # Function to mark the maximum/minimum values of the data for summer and winter def add_extrema(ax, ydata, extrema): for i in range(len(years)): y = years[i] winter_range = [pd.datetime(y, 8, 1), pd.datetime(y, 12, 1)] summer_range = [pd.datetime(y, 12, 1), pd.datetime(y + 1, 4, 1)] plt.axvspan(winter_range[0], winter_range[1], color='grey', alpha=0.1) plt.axvspan(summer_range[0], summer_range[1], color='y', alpha=0.1) (nmax, dmax), (nmin, dmin) = extrema[i] nitr_vmax = ydata[JULD == dmax] nitr_vmin = ydata[JULD == dmin] ax.plot([dmax], nitr_vmax, color = 'g', marker='o', markersize=8) ax.plot([dmin], nitr_vmin, color = 'r', marker='o', markersize=8) return ax #ax = plt.subplot(2, 1, 1) ax.plot(JULD, nitr_v) add_extrema(ax, nitr_v, nitr_extrema) ax.set_ylabel('Nitrate [mol/$m^2$]') ax.set_title('Integrated Nitrate (' + str(zi_range[0]) + '-' + str(zi_range[1]) + 'm)') ax.set_ylim([4.5, ax.get_ylim()[1]]) ```
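The `integrate()` helper above is a column inventory computed with the composite trapezoidal rule. On a constant profile the answer is just concentration times layer thickness, which makes a handy sanity check (the 10 umol/kg value is made up for illustration):

```python
import numpy as np

zi = np.arange(0, 1600, 5)          # the notebook's 5 m depth grid
profile = np.full(zi.shape, 10.0)   # constant 10 umol/kg tracer profile

# Same slicing as integrate(): nearest grid indices for the 25-200 m layer
z0 = np.abs(zi - 25).argmin()
z1 = np.abs(zi - 200).argmin()

# np.trapz was renamed np.trapezoid in NumPy 2.0; support both
trapezoid = getattr(np, "trapezoid", None) or np.trapz
inventory = trapezoid(profile[z0:z1], zi[z0:z1])  # 10 * (195 - 25) = 1700
```

As the notebook's own "add +1" comment notes, the slice end is exclusive, so the integral actually runs 25-195 m; extending the end index by one would include the 200 m level.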
``` %matplotlib inline %run notebook_setup ndim = 15 import time import emcee import numpy as np import pymc3 as pm from pymc3.step_methods.hmc import quadpotential as quad np.random.seed(41) with pm.Model() as model: pm.Normal("x", shape=ndim) potential = quad.QuadPotentialDiag(np.ones(ndim)) step_kwargs = dict() step_kwargs["model"] = model step_kwargs["step_scale"] = 1.0 * model.ndim ** 0.25 step_kwargs["adapt_step_size"] = False step = pm.NUTS(potential=potential, **step_kwargs) start = time.time() trace = pm.sample(tune=0, draws=10000, step=step, cores=1) time_pymc3 = time.time() - start samples_pymc3 = np.array(trace.get_values("x", combine=False)) samples_pymc3 = np.moveaxis(samples_pymc3, 0, 1) tau_pymc3 = emcee.autocorr.integrated_time(samples_pymc3) neff_pymc3 = np.prod(samples_pymc3.shape[:2]) / tau_pymc3 teff_pymc3 = time_pymc3 / neff_pymc3 teff_pymc3 np.random.seed(1234) import exoplanet as xo with model: func = xo.get_theano_function_for_var(model.logpt) def logprob(theta): point = model.bijection.rmap(theta) args = xo.get_args_for_theano_function(point) return func(*args) x0 = np.random.randn(ndim) nwalkers = 36 x0 = np.random.randn(nwalkers, ndim) emcee_sampler = emcee.EnsembleSampler(nwalkers, ndim, logprob) state = emcee_sampler.run_mcmc(x0, 2000, progress=True) emcee_sampler.reset() strt = time.time() emcee_sampler.run_mcmc(state, 20000, progress=True) time_emcee = time.time() - strt samples_emcee = emcee_sampler.get_chain() tau_emcee = emcee.autocorr.integrated_time(samples_emcee) neff_emcee = np.prod(samples_emcee.shape[:2]) / tau_emcee teff_emcee = time_emcee / neff_emcee teff_emcee def get_function(x): n_t, n_w = x.shape f = np.zeros(n_t) for k in range(n_w): f += emcee.autocorr.function_1d(x[:, k]) f /= n_w return f fig, axes = plt.subplots(2, 2, figsize=(10, 8)) ax = axes[0, 0] t_emcee = np.linspace(0, time_emcee, samples_emcee.shape[0]) m = t_emcee < 5 ax.plot(t_emcee[m], samples_emcee[m, 5, 0], lw=0.75) ax.annotate( "emcee", xy=(0, 1), 
xycoords="axes fraction", ha="left", va="top", fontsize=16, xytext=(5, -5), textcoords="offset points", ) ax.set_ylabel(r"$\theta_1$") ax.set_xlim(0, 5) ax.set_ylim(-4, 4) ax.set_xticklabels([]) ax = axes[1, 0] t_pymc3 = np.linspace(0, time_pymc3, samples_pymc3.shape[0]) m = t_pymc3 < 5 ax.plot(t_pymc3[m], samples_pymc3[m, 0, 0], lw=0.75) ax.annotate( "pymc3", xy=(0, 1), xycoords="axes fraction", ha="left", va="top", fontsize=16, xytext=(5, -5), textcoords="offset points", ) ax.set_ylabel(r"$\theta_1$") ax.set_xlabel("walltime [s]") ax.set_xlim(0, 5) ax.set_ylim(-4, 4) ax = axes[0, 1] f_emcee = get_function(samples_emcee[:, :, 0]) scale = 1e3 * time_emcee / np.prod(samples_emcee.shape[:2]) ax.plot(scale * np.arange(len(f_emcee)), f_emcee) ax.axhline(0, color="k", lw=0.5) ax.annotate( "emcee", xy=(1, 1), xycoords="axes fraction", ha="right", va="top", fontsize=16, xytext=(-5, -5), textcoords="offset points", ) val = 2 * scale * tau_emcee[0] max_x = 1.5 * val ax.axvline(val, color="k", lw=3, alpha=0.3) ax.annotate( "{0:.1f} ms".format(val), xy=(val / max_x, 1), xycoords="axes fraction", ha="right", va="top", fontsize=12, alpha=0.3, xytext=(-5, -5), textcoords="offset points", ) ax.set_xlim(0, max_x) ax.set_xticklabels([]) ax.set_yticks([]) ax.set_ylabel("autocorrelation") ax = axes[1, 1] f_pymc3 = get_function(samples_pymc3[:, :, 0]) scale = 1e3 * time_pymc3 / np.prod(samples_pymc3.shape[:2]) ax.plot(scale * np.arange(len(f_pymc3)), f_pymc3) ax.axhline(0, color="k", lw=0.5) ax.annotate( "pymc3", xy=(1, 1), xycoords="axes fraction", ha="right", va="top", fontsize=16, xytext=(-5, -5), textcoords="offset points", ) val = 2 * scale * tau_pymc3[0] ax.axvline(val, color="k", lw=3, alpha=0.3) ax.annotate( "{0:.1f} ms".format(val), xy=(val / max_x, 1), xycoords="axes fraction", ha="left", va="top", fontsize=12, alpha=0.3, xytext=(5, -5), textcoords="offset points", ) ax.set_xlim(0, max_x) ax.set_yticks([]) ax.set_xlabel("walltime lag [ms]") ax.set_ylabel("autocorrelation") 
fig.subplots_adjust(hspace=0.05, wspace=0.15) fig.savefig("gaussians.pdf", bbox_inches="tight"); ```
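The metric used throughout this comparison, walltime per effective sample, only requires an estimate of the integrated autocorrelation time. Here is a pure-NumPy sketch of that estimate, using the same FFT autocorrelation and automated-windowing idea as `emcee.autocorr.integrated_time`; the AR(1) test chain, with known autocorrelation time τ = (1 + ρ)/(1 − ρ), is an illustrative assumption, not part of the benchmark above.

```python
import numpy as np

def integrated_time(x, c=5.0):
    """Integrated autocorrelation time of a 1-D chain (FFT acf + Sokal window)."""
    n = len(x)
    x = x - x.mean()
    # FFT-based autocorrelation function, zero-padded to avoid wraparound
    f = np.fft.rfft(x, n=2 * n)
    acf = np.fft.irfft(f * np.conjugate(f))[:n]
    acf /= acf[0]                          # normalize so acf[0] == 1
    taus = 2.0 * np.cumsum(acf) - 1.0      # tau estimate as a function of window size
    m = np.arange(n) < c * taus            # automated windowing condition
    window = np.argmin(m) if not m.all() else n - 1
    return taus[window]

# AR(1) chain with lag-1 correlation rho: true tau = (1 + rho) / (1 - rho) = 19
rng = np.random.default_rng(0)
rho, n = 0.9, 200_000
eps = rng.normal(scale=np.sqrt(1 - rho**2), size=n)
x = np.empty(n)
x[0] = eps[0]
for i in range(1, n):
    x[i] = rho * x[i - 1] + eps[i]

tau = integrated_time(x)
n_eff = n / tau   # effective number of independent samples
```

Dividing walltime by `n_eff` gives the cost per effective sample, which is the quantity compared between the two samplers above.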
<h1 id="i1"> Data 512 - A6 : Predicting Earthquakes : Final Project</h1>

Gautam Moogimane <br>
University of Washington - Fall 2018

<h2 id="i1">I. Introduction </h2>

Having stayed in Japan for a long time, I've become accustomed to waking up at odd hours with everything shaking around me. Tremors and earthquakes are so common that it's rare for a week to go by without one. Considering the amount of damage these natural disasters are capable of, it is somewhat surprising that we are no closer to predicting them than we were 100 years ago. With this analysis, I aim to use machine learning to analyze historic earthquake data from around the world, and hopefully come up with some observations about future earthquakes.

Statistically, hundreds of earthquakes happen every day around the globe, with magnitudes ranging from 2 to 4 on the Richter scale. The larger ones, between 5 and 7 on the scale, occur every week or so, while the biggest, at 7.5 and above, are seen about once a year. The 2011 Tōhoku earthquake, with its epicenter off the coast of northeastern Japan, had a magnitude of about 9.0 and was strong enough to make buildings in Tokyo, roughly 100 miles from the hardest-hit areas, sway for around 30 seconds. Electricity went down, trains were stalled on the tracks, traffic came to a standstill, and thousands of people took to the streets, walking many miles to get back home. A paper I read on the economic damage caused by this earthquake estimated it at around 360 billion dollars, roughly 6% of Japan's GDP for that year.

The aim of this study is not to break new ground in earthquake prediction, but to see how appropriate or accurate machine learning can be in this situation. Statistics deals with probabilities, and machine learning, which applies these concepts on a computer, can only predict with a certain amount of accuracy, based on the data it is trained on.
To accurately predict earthquakes, one needs to be able to say, with a degree of certainty, what the exact location, time of occurrence, and magnitude will be. Considering the data we have, and how spread out the locations are, achieving this with machine learning would be an unrealistic goal at this point in time. Instead, if, given a certain location (a country with latitudes and longitudes) and time (a year, rather than the exact second), we can come up with information about the magnitude of the next earthquake or earthquakes based on what happened in the past, that would be a good start.

From a human-centered ethics perspective, the system needs to ensure that any inherent bias in the data does not trickle down into the output it generates. From a personal standpoint, I need to ensure that my experience in Japan does not make me expect (or not expect) certain results from the model, thereby skewing the predictions in a particular direction.

<h2 id="i2">II. Background </h2>

There has been plenty of work done in this space; some interesting work that I came across is detailed below. <br>

[Machine learning predicts earthquakes in a lab](https://www.cam.ac.uk/research/news/machine-learning-used-to-predict-earthquakes-in-a-lab-setting) <br>

This is a fascinating read about a team of researchers from the University of Cambridge, Los Alamos National Laboratory, and Boston University who were able to identify signals that precede the arrival of an earthquake under controlled settings. They set up steel blocks to replicate the physical forces at work under the earth's surface, and found particular acoustic signals being generated by the faults just before an earthquake. A machine learning algorithm fed this signal was able to identify instances when the fault was under stress and about to cause an earthquake.
<br>
An obvious drawback is that the actual forces under the earth's surface are a lot different from those found under controlled settings, so the success they have had in a lab has not yet been replicated in real settings. But the results are encouraging. <br>

Another interesting read is the research done in the Hindu Kush region to predict the magnitude of an earthquake using machine learning. <br>

[Hindukush earthquake magnitude prediction](https://www.researchgate.net/publication/307951466_Earthquake_magnitude_prediction_in_Hindukush_region_using_machine_learning_techniques) <br>

They compute eight seismic indicators mathematically, and then apply four machine learning algorithms (a pattern recognition neural network, a recurrent neural network, a random forest, and a linear programming boost ensemble classifier) to model relationships between the indicators and future earthquake occurrences.

<h2 id="i3">III. Data</h2>

The data is from the Significant Earthquake Database, and contains historic data from 2150 BC to the present day, covering earthquakes from all over the world. The data is constantly updated to reflect the latest events.

As per the description from the 'data.nodc.noaa.gov' site where the data is hosted:

> The Significant Earthquake Database is a global listing of over 5,700 earthquakes from 2150 BC to the present. A significant earthquake is classified as one that meets at least one of the following criteria:
> caused deaths, caused moderate damage (approximately 1 million dollars or more), magnitude 7.5 or greater, Modified Mercalli Intensity (MMI) X or greater, or the earthquake generated a tsunami.
> The database provides information on the date and time of occurrence, latitude and longitude, focal depth, magnitude, maximum MMI intensity, and socio-economic data such as the total number of casualties, injuries, houses destroyed, and houses damaged, and dollar damage estimates.
> References, political geography, and additional comments are also provided for each earthquake. If the earthquake was associated with a tsunami or volcanic eruption, it is flagged and linked to the related tsunami event or significant volcanic eruption.

The fields that are key from the perspective of our analysis are detailed in the table below <br>

| Field Name | Datatype | Description |
| :-------------: | :-------------: | :-------------: |
| Year | Integer | The year of occurrence |
| Focal Depth | Integer | Depth of the earthquake focus (hypocenter) |
| Eq_Primary | Float | Magnitude of the earthquake |
| Country | String | Name of the country |
| Location_Name | String | Specific location in the country |
| Latitude | Float | Coordinates of the exact location |
| Longitude | Float | Coordinates of the exact location |

The data can be downloaded as a tab-separated file, and can be imported into Excel or Python for analysis. <br>
We start by ingesting the data from the link provided below <br>
[National Geophysical Data Center / World Data Service (NGDC/WDS): Significant Earthquake Database. National Geophysical Data Center, NOAA](http://dx.doi.org/10.7289/V5TD9V7K)

### Step-1 Ingesting the data

```
import pandas as pd
import numpy as np

# the data is contained in a tab-separated file
raw_data = pd.read_csv('earthquake_historical.tsv', sep='\t')

# Describe the data
raw_data.describe()
```

### Step-2 Preparing the data

First thing to check is what the EQ fields other than EQ_PRIMARY contain, and whether at any point they contain different values.
<br>
We intend to use EQ_PRIMARY, but need to make sure the other fields do not contain additional information.<br>
Also dropping rows where EQ_PRIMARY or EQ_MAG_MS is null

```
# drop empty rows
comp = raw_data.dropna(subset=['EQ_PRIMARY', 'EQ_MAG_MS'])
comp = comp[['I_D','EQ_PRIMARY','EQ_MAG_MS']]

# compare the values of EQ_PRIMARY and EQ_MAG_MS
comp['Result'] = np.where(comp['EQ_PRIMARY'] == comp['EQ_MAG_MS'], 'TRUE', 'FALSE')

# find the ratio of rows that match to total rows
(len(comp[comp.Result == 'TRUE']))/(len(comp))
```

About 25% of the values differ between the two columns. I suspect the different EQ columns are just recording values from different sources. Another point to check is whether there are rows where EQ_PRIMARY is null while data exists in the other 6 EQ magnitude columns.

```
# filter the data to rows where EQ_PRIMARY is null
temp = raw_data[pd.isnull(raw_data['EQ_PRIMARY'])]

# check for cases where the other magnitude columns are not null
temp[(temp.EQ_MAG_MS.notnull()) | (temp.EQ_MAG_MW.notnull()) | (temp.EQ_MAG_MB.notnull()) |
     (temp.EQ_MAG_ML.notnull()) | (temp.EQ_MAG_MFA.notnull()) | (temp.EQ_MAG_UNK.notnull())]
```

Turns out there are no such cases.<br>
We will continue to stick with EQ_PRIMARY for our analysis.

The fields we are mainly concerned with are 'YEAR','MONTH','DAY','EQ_PRIMARY','COUNTRY','LATITUDE' and 'LONGITUDE'. Creating a new dataframe with these fields

```
eq_df = raw_data[['YEAR','MONTH','DAY','EQ_PRIMARY','COUNTRY','LATITUDE','LONGITUDE']]

# renaming EQ_PRIMARY to MAGNITUDE
eq_df = eq_df.rename(columns={"EQ_PRIMARY":"MAGNITUDE"})

# rows where the magnitude is missing can be deleted, as they are of no use to us
eq_df = eq_df[eq_df.MAGNITUDE.notnull()]
eq_df.head()
```

<h2 id="i4">IV. Methods</h2>

### Visualizing the data

Using the Basemap Python package along with matplotlib, we can plot the different latitudes and longitudes on a world map, and color code the points based on magnitude.
``` # prepare the data for projection on a world map # extract latitudes,longitudes and magnitude from the dataframe as a list eq_df['LATITUDE'] = lat = pd.to_numeric(eq_df['LATITUDE'], errors='coerce').tolist() eq_df['LONGITUDE'] = lon = pd.to_numeric(eq_df['LONGITUDE'], errors='coerce').tolist() eq_df['MAGNITUDE'] = mag = pd.to_numeric(eq_df['MAGNITUDE'], errors='coerce').tolist() # some rows have null data, these can be manually filled in eq_df.loc[1759,'LATITUDE'] = 40.9006 eq_df.loc[1759,'LONGITUDE'] = 174.8860 eq_df.loc[2261,'LATITUDE'] = 0.7893 eq_df.loc[3256,'LATITUDE'] = 0.7893 eq_df.loc[3341,'LATITUDE'] = 0.8893 eq_df.loc[3382,'LATITUDE'] = 0.8693 eq_df.loc[3437,'LATITUDE'] = 1.3733 # save the cleaned data eq_df.to_csv('clean_earthquake.csv') from mpl_toolkits.basemap import Basemap import matplotlib.pyplot as plt from pylab import rcParams %matplotlib inline # set the size of the plot in width and height rcParams['figure.figsize'] = (16,12) # lon_0 is central longitude of projection. # resolution = 'l' means use low resolution, which loads faster and works for a large map. eq_map = Basemap(projection='robin', lon_0=0, lat_0=0, resolution='l', area_thresh=1500.0) eq_map.drawcoastlines() eq_map.fillcontinents(color='gray') eq_map.drawmapboundary(fill_color='white') x,y = eq_map(lon, lat) # ro stands for red color and 'o' marker eq_map.plot(x, y, 'ro', markersize=4) plt.title("Map of all Earthquakes") plt.show() # for a slightly different perspective, lets visualize different colors based on the magnitude import matplotlib.lines as mlines def color(magnitude): # Returns green for small earthquakes, yellow for medium earthquakes, and red for large earthquakes. 
if magnitude < 4.0: return ('go') elif magnitude < 6.0: return ('yo') else: return ('ro') # specify the labels for the legend green_line = mlines.Line2D([], [], color='green', marker='o', markersize=5, label='Magnitude less than 4') yellow_line = mlines.Line2D([], [], color='yellow', marker='o', markersize=5, label='Magnitude between 4 and 6') red_line = mlines.Line2D([], [], color='red', marker='o', markersize=5, label='Magnitude greater than 6') rcParams['figure.figsize'] = (16,12) # lon_0 is central longitude of projection. # resolution = 'l' means use low resolution, which loads faster and works for a large map. eq_map_col = Basemap(projection='robin', lon_0=0, lat_0=0, resolution='l', area_thresh=1500.0) eq_map_col.drawcoastlines() eq_map_col.fillcontinents(color='gray') eq_map_col.drawmapboundary(fill_color='white') # go through each value and plot with color according to the magnitude for lo, la, ma in zip(lon, lat, mag): x,y = eq_map_col(lo, la) col = color(ma) eq_map_col.plot(x, y, col, markersize=4) plt.title("Map of all Earthquakes") plt.legend(handles=[green_line,yellow_line,red_line],bbox_to_anchor=(1, 1)) plt.show() ``` ### Running algorithms on the Data We start by trying to fit a linear regression model on the data, which will use a linear combination of the input variables that can best map to the target value . In this instance, I do not think this would be a great fit, as intuitively, it does not seem that there is a linear relationship between Year, latitude and longitude with the magnitude of an earthquake. 
```
# for running machine learning algorithms, let's concentrate on the features
# YEAR, LATITUDE and LONGITUDE, with MAGNITUDE as the label
eqml_df = eq_df[['YEAR','LATITUDE','LONGITUDE','MAGNITUDE']]

X = eq_df[['YEAR','LATITUDE','LONGITUDE']]
y = eq_df['MAGNITUDE']

# splitting the data into test and train sets using sklearn modules
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=28)
print(X_train.shape, X_test.shape, y_train.shape, y_test.shape)

from sklearn.linear_model import LinearRegression

# initialize a linear regression model
regression_model = LinearRegression()

# fit the model using the training data above
regression_model.fit(X_train, y_train)

# after fitting, check the values predicted by the model on the test data
regression_model.predict(X_test)

# prediction score for this model on the test data
regression_model.score(X_test, y_test)
```

As expected, the score is very low: the predicted values do not map well to the magnitudes in our target variable.

The next algorithm is a decision tree regressor, which works like a binary tree with a root and internal nodes. It proceeds down a path, answering 'if this, then that' questions along the way, which ultimately leads to a result. It works well on both categorical and numerical data, and is fast. Its drawbacks include overfitting, especially when there are many features in the model and the tree runs deep.

```
from sklearn import tree

mod_tree = tree.DecisionTreeRegressor()
mod_tree.fit(X_train, y_train)
mod_tree.predict(X_test)

# prediction score for this model on the test data
mod_tree.score(X_test, y_test)
```

Surprisingly, the score isn't great here either, with the default settings of max_depth=None and min_samples_split=2.
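The overfitting behaviour mentioned above is easy to demonstrate on synthetic data. Below is a sketch (on made-up noisy data, not the earthquake dataset): an unconstrained `DecisionTreeRegressor` memorises the training set almost perfectly while generalising much worse than a depth-limited tree.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

# Synthetic 1-D regression problem: a sine wave plus noise
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(500, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.5, size=500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# An unlimited-depth tree vs a shallow, depth-limited one
deep = DecisionTreeRegressor(random_state=0).fit(X_tr, y_tr)
shallow = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X_tr, y_tr)

# The deep tree memorises the training noise, so its train/test R^2 gap is large
train_gap_deep = deep.score(X_tr, y_tr) - deep.score(X_te, y_te)
train_gap_shallow = shallow.score(X_tr, y_tr) - shallow.score(X_te, y_te)
```

The deep tree scores essentially 1.0 on its own training data while the gap to its test score is much larger than the shallow tree's, which is the overfitting pattern described above.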
A random forest is an aggregation of multiple decision trees that combines all their results into one final result. This makes it a lot more robust than a single decision tree, and can help reduce the errors due to bias and variance that can creep into decision tree models. It randomly selects a subset of the features for each of its decision trees, which, when combined, tend to cover all the features. It is also fairly resistant to overfitting.

```
from sklearn.ensemble import RandomForestRegressor

# initialize the model
reg = RandomForestRegressor(random_state=28)

# fit the parameters
reg.fit(X_train, y_train)
reg.predict(X_test)
reg.score(X_test, y_test)
```

This score is a significant improvement over the previous algorithms, and we can try to tune it further by using cross-validation to optimize the values of the hyperparameters.

```
from sklearn.model_selection import GridSearchCV

# initialize the parameters in multiples of 10
parameters = {'n_estimators': [1, 10, 100, 500, 1000]}

# initialize the grid-search cross-validation object
grid_obj = GridSearchCV(reg, parameters)
grid_fit = grid_obj.fit(X_train, y_train)

# find the best fit from the parameters
best_fit = grid_fit.best_estimator_
best_fit.score(X_test, y_test)
```

This model gives us a fairly decent score of around 45%, which we will use for the findings below.

<h2 id="i4">V. Findings </h2>

In the following section, we try to answer some of the research questions we had before the analysis began.

### What would the approximate magnitude of an earthquake in the US in 2019 be?

```
# predicting with the best-fit model, passing the year 2019 and the approximate
# central latitude and longitude of the US (note the western longitude is negative)
x1 = [[2019, 37.09, -95.71]]
best_fit.predict(x1)
```

So, according to the model, we can expect an earthquake of magnitude roughly 6 in the US next year.

### In what year can we expect a large earthquake in the US (magnitude > 7.0)?
To answer this question, we train the model with Year as the output, while magnitude and country coordinates act as the features.

```
# training the model with year as the output label
X = eq_df[['MAGNITUDE','LATITUDE','LONGITUDE']]
y = eq_df['YEAR']

# training and fitting the same model as previously
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=28)
reg = RandomForestRegressor(random_state=30)
reg.fit(X_train, y_train)
grid_obj = GridSearchCV(reg, parameters)
grid_fit = grid_obj.fit(X_train, y_train)
best_fit = grid_fit.best_estimator_
best_fit.score(X_test, y_test)

# the score is a lot lower than with Magnitude as the label
x2 = [[7.0, 37.09, -95.71]]
best_fit.predict(x2)
```

The problem with this prediction is that the Year column is fed in as an integer rather than a datetime. Trying to convert the column to datetime raises an error, since Python does not handle negative (BC) dates well.

### Which country has historically had the most earthquakes?

```
eq_df.groupby('COUNTRY').size().sort_values(ascending=False)
```

Grouping the dataset by country, counting, and ordering by descending count gives us the list above. China leads the pack with 559 occurrences, which was slightly surprising, but I suspect it has something to do with how territories were historically divided between China and Japan.

<h2 id="i5">VI. Conclusion </h2>

Overall, this was a learning experience for me personally, where I could experiment with different machine learning algorithms and see how well they worked on historical earthquake data. <br>
What makes predicting earthquakes so challenging, in spite of detailed historical data, is that they don't seem to follow any pattern precisely.
Fault lines around the world do tend to slip in a somewhat cyclical pattern, but any prediction made from this is likely to give a window of ±20 years at best, rather than a precise time. <br>
This is best illustrated by the 1981 prediction in which scientists said the California fault line could trigger an earthquake within the next few months; in order to monitor the forces before and after the event, stations and monitoring equipment were set up all over the area at significant expense. The earthquake actually occurred in 2001, 20 years after it was expected. <br>
However, machine learning opens a new perspective on this challenge, and I believe that if we can fine-tune our algorithms to achieve high accuracy on the data, it can lead to more accurate predictions; these might not pinpoint an occurrence down to the exact second, but could shrink the uncertainty window significantly. That would be a huge step forward. <br>
From a human design perspective, the solution must give end users enough time to act on its warnings. There is currently a system in Japan that can sound an alarm just before an earthquake strikes; the lead time is the time it takes the waves to travel from the epicenter to the location of the alarm, usually less than 30 seconds. While better than nothing, that window is too small for people to really act on, providing perhaps just enough time to duck under a sturdy table. <br>
The most promising research, in my opinion, comes from mimicking the actual underground forces in a lab setting to produce acoustic signals that can then be used to train a machine learning algorithm. The signals produced just before a fault line releases its energy have a particular sound signature that can be used to identify and predict future occurrences.
This method was highlighted in one of the papers referenced above, in the background section. <br> [Machine learning predicts earthquakes in a lab](https://www.cam.ac.uk/research/news/machine-learning-used-to-predict-earthquakes-in-a-lab-setting) <br> There is a lot of exciting ongoing research on this topic, and I do hope one of these days we manage to make a breakthrough, and end one of the long standing mysteries of our planet. <h2 id="i7">VII. References </h2> [Machine learning predicts earthquakes in a lab](https://www.cam.ac.uk/research/news/machine-learning-used-to-predict-earthquakes-in-a-lab-setting) <br> [Hindukush earthquake magnitude prediction](https://www.researchgate.net/publication/307951466_Earthquake_magnitude_prediction_in_Hindukush_region_using_machine_learning_techniques) <br> [Analysis of soil radon data using decision trees](https://www.sciencedirect.com/science/article/pii/S0969804303000940)<br> [Using Neural Networks to predict earthquake magnitude](https://www.sciencedirect.com/science/article/pii/S0893608009000926)<br> https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6033417/ <br> https://earthquake.usgs.gov/data/data.php#eq <br> https://arxiv.org/pdf/1702.05774.pdf <br> https://www.scientificamerican.com/article/can-artificial-intelligence-predict-earthquakes/
# Pre-Calculus Notebook

```
# Begin with some useful imports
import numpy as np
import pylab
from fractions import Fraction
from IPython.display import display, Math
from sympy import init_printing, N, pi, sqrt, symbols, Symbol
from sympy.abc import x, y, theta, phi

init_printing(use_latex='mathjax')  # allow sympy to pretty-print math
%matplotlib inline
```

## Trig Functions

This notebook will introduce basic trigonometric functions. Given an angle, θ, there are two units used to indicate the measure of the angle: degrees and radians.

### First make a simple function to convert from degrees to radians

```
def deg_to_rad(angle, as_expr=False):
    """
    Convert degrees to radians

    param: angle: degrees as a float or numpy array of values that can be cast to a float
    param: as_expr: if True, return a sympy expression as a multiple of 'π' (Default = False)

    The formula for this is: 2πθ/360°

    Note: it's usually bad form to allow a function argument to change the output type. Oh, well.
    """
    radians = (angle * 2 * np.pi) / 360.0
    if as_expr:
        # note: this requires sympy
        radians = 2*pi*angle/360
    return radians

# Test our function
# Note: normally you would print results, but with sympy expressions,
# the `display` function of jupyter notebooks is used to print using MathJax.
display(deg_to_rad(210, as_expr=True))
display(deg_to_rad(360, as_expr=True))
```

### Now, let's make a list of angles around a unit circle and convert them to radians

```
base_angles = [0, 30, 45, 60]
unit_circle_angles = []
for i in range(4):
    unit_circle_angles.extend(np.array(base_angles)+(i*90))
unit_circle_angles.append(360)

# make lists of the angles converted to radians (as floats and as sympy expressions)
unit_circle_angles_rad = [deg_to_rad(x, False) for x in unit_circle_angles]
unit_circle_angles_rad_expr = [deg_to_rad(x, True) for x in unit_circle_angles]

unit_circle_angles_rad
unit_circle_angles_rad_expr

# show the list of angles in degrees
unit_circle_angles
```

### So, what are radians?

Radians are a unit of angle measure (similar to degrees). Radians correspond to the arc length that an angle sweeps out on the unit circle. A complete circle with a radius of 1 (a unit circle) has a circumference of $2\pi$ (from the formula for circumference: $C = 2\pi r$).

Let's import some things to help us create a plot to see this in action.
```
from bokeh.plotting import figure, show, output_notebook
output_notebook()
```

If you were to map out our angle values on the circle drawn on a cartesian coordinate grid, it might look like this:

```
p = figure(plot_width=600, plot_height=600, match_aspect=True)

# draw angle lines
x = np.cos(unit_circle_angles_rad)
y = np.sin(unit_circle_angles_rad)
origin = np.zeros(x.shape)
for i in range(len(unit_circle_angles_rad)):
    #print(x[i], y[i])
    p.line((0, x[i]), (0, y[i]))

# draw unit circle
p.circle(0, 0, radius=1.0, color="white", line_width=0.5, line_color='black',
         alpha=0.0, line_alpha=1.0)
show(p)

import bokeh
bokeh.__version__

from bokeh.plotting import figure, output_file, show

output_file("circle_v1.html")
p = figure(plot_width=400, plot_height=400, match_aspect=True, tools='save')
p.circle(0, 0, radius=1.0, fill_alpha=0)
p.line((0, 1), (0, 0))
p.line((0, -1), (0, 0))
p.line((0, 0), (0, 1))
p.line((0, 0), (0, -1))
#p.rect(0, 0, 2, 2, fill_alpha=0)
show(p)

import sys, IPython, bokeh
print("Python: ", sys.version)
print("IPython: ", IPython.__version__)
print("Bokeh: ", bokeh.__version__)

%reload_ext version_information
%version_information

from bokeh.plotting import figure, output_file, show
from bokeh.layouts import layout

p1 = figure(match_aspect=True, title="Circle touches all 4 sides of square")
#p1.rect(0, 0, 300, 300, line_color='black')
p1.circle(x=0, y=0, radius=10, line_color='black', fill_color='grey',
          radius_units='data')

def draw_test_figure(aspect_scale=1, width=300, height=300):
    p = figure(
        plot_width=width,
        plot_height=height,
        match_aspect=True,
        aspect_scale=aspect_scale,
        title="Aspect scale = {0}".format(aspect_scale),
        toolbar_location=None)
    p.circle([-1, +1, +1, -1], [-1, -1, +1, +1])
    return p

aspect_scales = [0.25, 0.5, 1, 2, 4]
p2s = [draw_test_figure(aspect_scale=i) for i in aspect_scales]

sizes = [(100, 400), (200, 400), (400, 200), (400, 100)]
p3s = [draw_test_figure(width=a, height=b) for (a, b) in sizes]

layout = layout(children=[[p1],
p2s, p3s]) output_file("aspect.html") show(layout) ```
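As a quick sanity check on the conversion function from earlier, the inverse mapping round-trips exactly. This is a sketch; `rad_to_deg` is a hypothetical helper, not defined elsewhere in this notebook.

```python
import numpy as np

def deg_to_rad(angle):
    # Same formula as above: 2πθ/360
    return (angle * 2 * np.pi) / 360.0

def rad_to_deg(radians):
    # Inverse conversion: 360·r / 2π (hypothetical helper for this check)
    return (radians * 360.0) / (2 * np.pi)

# Round-tripping the unit-circle angles recovers the original degrees
angles = np.array([0, 30, 45, 60, 90, 180, 270, 360], dtype=float)
roundtrip = rad_to_deg(deg_to_rad(angles))
```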
```
# importing the packages
import pandas as pd
import numpy as np
import random
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
import joblib  # for saving the algorithm and preprocessing objects
from sklearn.linear_model import LinearRegression

# uploading the dataset
df = pd.read_csv('pollution_us_2000_2016.csv')
df.head()
df.columns

# dropping all the unnecessary features
df.drop(['Unnamed: 0','State Code', 'County Code', 'Site Num', 'Address',
         'County', 'City', 'NO2 Units', 'O3 Units', 'SO2 Units', 'CO Units',
         'NO2 1st Max Hour', 'O3 1st Max Hour', 'SO2 1st Max Hour',
         'CO 1st Max Hour'], axis=1, inplace=True)
df.shape
df.describe()

# IQR range
Q1 = df.quantile(0.25)
Q3 = df.quantile(0.75)
IQR = Q3 - Q1
print(IQR)

# removing outliers
df = df[~((df < (Q1 - 1.5 * IQR)) | (df > (Q3 + 1.5 * IQR))).any(axis=1)]
df.shape

# encoding dates
df.insert(loc=1, column='Year', value=df['Date Local'].apply(lambda year: year.split('-')[0]))
df.drop('Date Local', axis=1, inplace=True)
df['Year'] = df['Year'].astype('int')

# filling the first NaN values with the means by state
for i in df.columns[2:]:
    df[i] = df[i].fillna(df.groupby('State')[i].transform('mean'))

df[df["State"]=='Missouri']['NO2 AQI'].plot(kind='density', subplots=True,
                                            layout=(1, 2), sharex=False, figsize=(10, 4));
plt.scatter(df[df['State']=='Missouri']['Year'], df[df['State']=='Missouri']['NO2 AQI']);

# grouped dataset
dfG = df.groupby(['State', 'Year']).mean().reset_index()
dfG.shape
dfG.describe()
plt.scatter(dfG[dfG['State']=='Missouri']['Year'], dfG[dfG['State']=='Missouri']['NO2 AQI']);

# function for inserting a row
def Insert_row_(row_number, df, row_value):
    # Slice the upper half of the dataframe
    df1 = df[0:row_number]
    # Store the lower half of the dataframe
    df2 = df[row_number:]
    # Insert the row into the upper-half dataframe
    df1.loc[row_number] = row_value
    # Concat the two dataframes
    df_result = pd.concat([df1, df2])
    # Reassign the index labels
    df_result.index = [*range(df_result.shape[0])]
    # Return the updated dataframe
    return df_result

# all the years
year_list = df['Year'].unique()
print(year_list)

# all the states
state_list = df['State'].unique()
print(state_list)

# add rows for the missing years, with NaN values
for state in state_list:
    year_diff = set(year_list).difference(list(dfG[dfG['State']==state]['Year']))
    for i in year_diff:
        row_value = [state, i, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan,
                     np.nan, np.nan, np.nan, np.nan, np.nan, np.nan]
        dfG = Insert_row_(random.randint(1, 494), dfG, row_value)

# fill NaN values with the means by state
for i in dfG.columns[2:]:
    dfG[i] = dfG[i].fillna(dfG.groupby('State')[i].transform('mean'))

total_AQI = dfG['NO2 AQI'] + dfG['SO2 AQI'] + dfG['CO AQI'] + dfG['O3 AQI']
dfG.insert(loc=len(dfG.columns), column='Total_AQI', value=total_AQI)
dfG.head()

plt.scatter(dfG[dfG['State']=='Missouri']['Year'], dfG[dfG['State']=='Missouri']['Total_AQI']);
dfG[dfG["State"]=='Missouri']['Total_AQI'].plot(kind='density', subplots=True,
                                                layout=(1, 2), sharex=False, figsize=(10, 4));

joblib.dump(dfG, "./processed_data.joblib", compress=True)
testing_Data = joblib.load("./processed_data.joblib")

states = list(testing_Data['State'].unique())
print(states)

def state_data(state, data, df):
    t = df[df['State']==state].sort_values(by='Year')
    clf = LinearRegression()
    clf.fit(t[['Year']], t[data])
    years = np.arange(2017, 2020, 1)
    tt = pd.DataFrame({'Year': years, data: clf.predict(years.reshape(-1, 1))})
    pd.concat([t, tt], sort=False).set_index('Year')[data].plot(color='red')
    t.set_index('Year')[data].plot(figsize=(15, 5), xticks=(np.arange(2000, 2020, 1)))
    print(clf.predict(years.reshape(-1, 1)))

state_data('Missouri', 'NO2 AQI', testing_Data)
```
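The per-state forecast in `state_data` above is a one-variable linear trend, so the same extrapolation can be sketched with `numpy.polyfit` alone. The yearly AQI values below are made-up illustrative numbers, not taken from the real dataset.

```python
import numpy as np

# Hypothetical yearly mean AQI for one state: a downward trend plus noise
years = np.arange(2000, 2017)
rng = np.random.default_rng(1)
aqi = 40.0 - 0.8 * (years - 2000) + rng.normal(scale=1.0, size=years.size)

# Fit AQI ~ slope * year + intercept, as LinearRegression does with one feature
slope, intercept = np.polyfit(years, aqi, deg=1)

# Extrapolate to 2017-2019, mirroring clf.predict(years.reshape(-1, 1)) above
future = np.arange(2017, 2020)
forecast = slope * future + intercept
```

With one feature, `LinearRegression` and a degree-1 polynomial fit produce the same straight line, so this is a lightweight way to check what the model is actually doing.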
# Modeling and Simulation in Python Chapter 5: Design Copyright 2017 Allen Downey License: [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0) ``` # If you want the figures to appear in the notebook, # and you want to interact with them, use # %matplotlib notebook # If you want the figures to appear in the notebook, # and you don't want to interact with them, use # %matplotlib inline # If you want the figures to appear in separate windows, use # %matplotlib qt5 # To switch from one to another, you have to select Kernel->Restart %matplotlib inline from modsim import * ``` ### SIR implementation We'll use a `State` object to represent the number or fraction of people in each compartment. ``` init = State(S=89, I=1, R=0) init ``` To convert from number of people to fractions, we divide through by the total. ``` init /= sum(init) init ``` `make_system` creates a `System` object with the given parameters. ``` def make_system(beta, gamma): """Make a system object for the SIR model. beta: contact rate in days gamma: recovery rate in days returns: System object """ init = State(S=89, I=1, R=0) init /= sum(init) t0 = 0 t_end = 7 * 14 return System(init=init, t0=t0, t_end=t_end, beta=beta, gamma=gamma) ``` Here's an example with hypothetical values for `beta` and `gamma`. ``` tc = 3 # time between contacts in days tr = 4 # recovery time in days beta = 1 / tc # contact rate in per day gamma = 1 / tr # recovery rate in per day system = make_system(beta, gamma) ``` The update function takes the state during the current time step and returns the state during the next time step. ``` def update1(state, system): """Update the SIR model. 
state: State with variables S, I, R system: System with beta and gamma returns: State object """ s, i, r = state infected = system.beta * i * s recovered = system.gamma * i s -= infected i += infected - recovered r += recovered return State(S=s, I=i, R=r) ``` To run a single time step, we call it like this: ``` state = update1(init, system) state ``` Now we can run a simulation by calling the update function for each time step. ``` def run_simulation(system, update_func): """Runs a simulation of the system. system: System object update_func: function that updates state returns: State object for final state """ state = system.init for t in linrange(system.t0, system.t_end-1): state = update_func(state, system) return state ``` The result is the state of the system at `t_end` ``` run_simulation(system, update1) ``` **Exercise** Suppose the time between contacts is 4 days and the recovery time is 5 days. After 14 weeks, how many students, total, have been infected? Hint: what is the change in `S` between the beginning and the end of the simulation? ``` # Solution goes here ``` ### Using Series objects If we want to store the state of the system at each time step, we can use one `TimeSeries` object for each state variable. ``` def run_simulation(system, update_func): """Runs a simulation of the system. Add three Series objects to the System: S, I, R system: System object update_func: function that updates state """ S = TimeSeries() I = TimeSeries() R = TimeSeries() state = system.init t0 = system.t0 S[t0], I[t0], R[t0] = state for t in linrange(system.t0, system.t_end-1): state = update_func(state, system) S[t+1], I[t+1], R[t+1] = state system.S = S system.I = I system.R = R ``` Here's how we call it. ``` tc = 3 # time between contacts in days tr = 4 # recovery time in days beta = 1 / tc # contact rate in per day gamma = 1 / tr # recovery rate in per day system = make_system(beta, gamma) run_simulation(system, update1) ``` And then we can plot the results. 
``` def plot_results(S, I, R): """Plot the results of a SIR model. S: TimeSeries I: TimeSeries R: TimeSeries """ plot(S, '--', color='blue', label='Susceptible') plot(I, '-', color='red', label='Infected') plot(R, ':', color='green', label='Recovered') decorate(xlabel='Time (days)', ylabel='Fraction of population') ``` Here's what they look like. ``` plot_results(system.S, system.I, system.R) savefig('chap05-fig01.pdf') ``` ### Using a DataFrame Instead of making three `TimeSeries` objects, we can use one `DataFrame`. We have to use `loc` to indicate which row we want to assign the results to. But then Pandas does the right thing, matching up the state variables with the columns of the `DataFrame`. ``` def run_simulation(system, update_func): """Runs a simulation of the system. Add a DataFrame to the System: results system: System object update_func: function that updates state """ frame = DataFrame(columns=system.init.index) frame.loc[system.t0] = system.init for t in linrange(system.t0, system.t_end-1): frame.loc[t+1] = update_func(frame.loc[t], system) system.results = frame ``` Here's how we run it, and what the result looks like. ``` tc = 3 # time between contacts in days tr = 4 # recovery time in days beta = 1 / tc # contact rate in per day gamma = 1 / tr # recovery rate in per day system = make_system(beta, gamma) run_simulation(system, update1) system.results.head() ``` We can extract the results and plot them. ``` frame = system.results plot_results(frame.S, frame.I, frame.R) ``` **Exercise** Suppose the time between contacts is 4 days and the recovery time is 5 days. Simulate this scenario for 14 days and plot the results. ``` # Solution goes here ``` ### Metrics Given the results, we can compute metrics that quantify whatever we are interested in, like the total number of sick students, for example. ``` def calc_total_infected(system): """Fraction of population infected during the simulation. system: System object with results. 
returns: fraction of population """ frame = system.results return frame.S[system.t0] - frame.S[system.t_end] ``` Here's an example. ``` system.beta = 0.333 system.gamma = 0.25 run_simulation(system, update1) print(system.beta, system.gamma, calc_total_infected(system)) ``` **Exercise:** Write functions that take a `System` object as a parameter, extract the `results` object from it, and compute the other metrics mentioned in the book: 1. The fraction of students who are sick at the peak of the outbreak. 2. The day the outbreak peaks. 3. The fraction of students who are sick at the end of the semester. Hint: If you have a `TimeSeries` called `I`, you can compute the largest value of the series like this: I.max() And the index of the largest value like this: I.idxmax() You can read about these functions in the `Series` [documentation](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.html). ``` # Solution goes here # Solution goes here # Solution goes here # Solution goes here # Solution goes here # Solution goes here ``` ### What if? We can use this model to evaluate "what if" scenarios. For example, this function models the effect of immunization by moving some fraction of the population from S to R before the simulation starts. ``` def add_immunization(system, fraction): """Immunize a fraction of the population. Moves the given fraction from S to R. system: System object fraction: number from 0 to 1 """ system.init.S -= fraction system.init.R += fraction ``` Let's start again with the system we used in the previous sections. ``` tc = 3 # time between contacts in days tr = 4 # recovery time in days beta = 1 / tc # contact rate in per day gamma = 1 / tr # recovery rate in per day system = make_system(beta, gamma) system.beta, system.gamma ``` And run the model without immunization. ``` run_simulation(system, update1) calc_total_infected(system) ``` Now with 10% immunization.
``` system2 = make_system(beta, gamma) add_immunization(system2, 0.1) run_simulation(system2, update1) calc_total_infected(system2) ``` 10% immunization leads to a drop in infections of 16 percentage points. Here's what the time series looks like for S, with and without immunization. ``` plot(system.results.S, '-', label='No immunization') plot(system2.results.S, 'g--', label='10% immunization') decorate(xlabel='Time (days)', ylabel='Fraction susceptible') savefig('chap05-fig02.pdf') ``` Now we can sweep through a range of values for the fraction of the population who are immunized. ``` immunize_array = linspace(0, 1, 11) for fraction in immunize_array: system = make_system(beta, gamma) add_immunization(system, fraction) run_simulation(system, update1) print(fraction, calc_total_infected(system)) ``` This function does the same thing and stores the results in a `Sweep` object. ``` def sweep_immunity(immunize_array): """Sweeps a range of values for immunity. immunize_array: array of fraction immunized returns: Sweep object """ sweep = SweepSeries() for fraction in immunize_array: system = make_system(beta, gamma) add_immunization(system, fraction) run_simulation(system, update1) sweep[fraction] = calc_total_infected(system) return sweep ``` Here's how we run it. ``` immunize_array = linspace(0, 1, 21) infected_sweep = sweep_immunity(immunize_array) ``` And here's what the results look like. ``` plot(infected_sweep) decorate(xlabel='Fraction immunized', ylabel='Total fraction infected', title='Fraction infected vs. immunization rate', legend=False) savefig('chap05-fig03.pdf') ``` If 40% of the population is immunized, less than 4% of the population gets sick. ### Logistic function To model the effect of a hand-washing campaign, I'll use a [generalized logistic function](https://en.wikipedia.org/wiki/Generalised_logistic_function), which is a convenient function for modeling curves that have a generally sigmoid shape. 
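Written out (using the standard parameter names, which match the `logistic` function defined in the next section), the generalized logistic function is

$\text{logistic}(x) = A + \dfrac{K - A}{\left(C + Q\, e^{-B(x - M)}\right)^{1/\nu}}$

where $A$ and $K$ set the lower and upper bounds, $B$ the steepness, $M$ the location of the transition, and $C$, $Q$, $\nu$ adjust its shape.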
The parameters of the GLF correspond to various features of the curve in a way that makes it easy to find a function that has the shape you want, based on data or background information about the scenario. ``` def logistic(x, A=0, B=1, C=1, M=0, K=1, Q=1, nu=1): """Computes the generalized logistic function. A: controls the lower bound B: controls the steepness of the transition C: not all that useful, AFAIK M: controls the location of the transition K: controls the upper bound Q: shift the transition left or right nu: affects the symmetry of the transition returns: float or array """ exponent = -B * (x - M) denom = C + Q * exp(exponent) return A + (K-A) / denom ** (1/nu) ``` The following array represents the range of possible spending. ``` spending = linspace(0, 1200, 21) spending ``` `compute_factor` computes the reduction in `beta` for a given level of campaign spending. `M` is chosen so the transition happens around \$500. `K` is the maximum reduction in `beta`, 20%. `B` is chosen by trial and error to yield a curve that seems feasible. ``` def compute_factor(spending): """Reduction factor as a function of spending. spending: dollars from 0 to 1200 returns: fractional reduction in beta """ return logistic(spending, M=500, K=0.2, B=0.01) ``` Here's what it looks like. ``` percent_reduction = compute_factor(spending) * 100 plot(spending, percent_reduction) decorate(xlabel='Hand-washing campaign spending (USD)', ylabel='Percent reduction in infection rate', title='Effect of hand washing on infection rate', legend=False) savefig('chap05-fig04.pdf') ``` **Exercise:** Modify the parameters `M`, `K`, and `B`, and see what effect they have on the shape of the curve. Read about the [generalized logistic function on Wikipedia](https://en.wikipedia.org/wiki/Generalised_logistic_function). Modify the other parameters and see what effect they have.
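As a starting point for that exercise, here is a small self-contained check (it re-defines `logistic` with `math.exp`, so it runs without `modsim`): at `x = M` the curve sits halfway between `A` and `K` (when `C = Q = nu = 1`), and a larger `B` makes the transition steeper.

```python
from math import exp

def logistic(x, A=0, B=1, C=1, M=0, K=1, Q=1, nu=1):
    """Generalized logistic function, same form as above."""
    return A + (K - A) / (C + Q * exp(-B * (x - M))) ** (1 / nu)

# At x = M the exponent is 0, so the denominator is C + Q = 2
# and the value is halfway between A and K.
print(logistic(500, M=500, K=0.2, B=0.01))  # → 0.1

# Larger B -> steeper transition: the curve is closer to K at x = 600.
for B in (0.005, 0.01, 0.05):
    print(B, logistic(600, M=500, K=0.2, B=B))
```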
### Hand washing Now we can model the effect of a hand-washing campaign by modifying `beta` ``` def add_hand_washing(system, spending): """Modifies system to model the effect of hand washing. system: System object spending: campaign spending in USD """ factor = compute_factor(spending) system.beta *= (1 - factor) ``` Let's start with the same values of `beta` and `gamma` we've been using. ``` tc = 3 # time between contacts in days tr = 4 # recovery time in days beta = 1 / tc # contact rate in per day gamma = 1 / tr # recovery rate in per day beta, gamma ``` Now we can sweep different levels of campaign spending. ``` spending_array = linspace(0, 1200, 13) for spending in spending_array: system = make_system(beta, gamma) add_hand_washing(system, spending) run_simulation(system, update1) print(spending, system.beta, calc_total_infected(system)) ``` Here's a function that sweeps a range of spending and stores the results in a `Sweep` object. ``` def sweep_hand_washing(spending_array): """Run simulations with a range of spending. spending_array: array of dollars from 0 to 1200 returns: Sweep object """ sweep = SweepSeries() for spending in spending_array: system = make_system(beta, gamma) add_hand_washing(system, spending) run_simulation(system, update1) sweep[spending] = calc_total_infected(system) return sweep ``` Here's how we run it. ``` spending_array = linspace(0, 1200, 20) infected_sweep = sweep_hand_washing(spending_array) ``` And here's what it looks like. ``` plot(infected_sweep) decorate(xlabel='Hand-washing campaign spending (USD)', ylabel='Total fraction infected', title='Effect of hand washing on total infections', legend=False) savefig('chap05-fig05.pdf') ``` Now let's put it all together to make some public health spending decisions. ### Optimization Suppose we have \$1200 to spend on any combination of vaccines and a hand-washing campaign. 
``` num_students = 90 budget = 1200 price_per_dose = 100 max_doses = int(budget / price_per_dose) dose_array = linrange(max_doses) max_doses ``` We can sweep through a range of doses from 0 to `max_doses`, model the effects of immunization and the hand-washing campaign, and run simulations. For each scenario, we compute the fraction of students who get sick. ``` for doses in dose_array: fraction = doses / num_students spending = budget - doses * price_per_dose system = make_system(beta, gamma) add_immunization(system, fraction) add_hand_washing(system, spending) run_simulation(system, update1) print(doses, system.init.S, system.beta, calc_total_infected(system)) ``` The following function wraps that loop and stores the results in a `Sweep` object. ``` def sweep_doses(dose_array): """Runs simulations with different doses and campaign spending. dose_array: range of values for number of vaccinations returns: Sweep object with total number of infections """ sweep = SweepSeries() for doses in dose_array: fraction = doses / num_students spending = budget - doses * price_per_dose system = make_system(beta, gamma) add_immunization(system, fraction) add_hand_washing(system, spending) run_simulation(system, update1) sweep[doses] = calc_total_infected(system) return sweep ``` Now we can compute the number of infected students for each possible allocation of the budget. ``` infected_sweep = sweep_doses(dose_array) ``` And plot the results. ``` plot(infected_sweep) decorate(xlabel='Doses of vaccine', ylabel='Total fraction infected', title='Total infections vs. doses', legend=False) savefig('chap05-fig06.pdf') ``` **Exercise:** Suppose the price of the vaccine drops to $50 per dose. How does that affect the optimal allocation of the spending? **Exercise:** Suppose we have the option to quarantine infected students. For example, a student who feels ill might be moved to an infirmary, or a private dorm room, until they are no longer infectious.
How might you incorporate the effect of quarantine in the SIR model? ``` # Solution goes here ```
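One possible direction (a sketch only, not the book's solution): add a fourth compartment `Q` for quarantined students, who no longer infect anyone but still take time to recover. This version uses plain tuples instead of `modsim`'s `State` so it runs standalone; the rates `q_rate` and `q_recover` are made-up parameters for illustration.

```python
def update_sirq(state, beta, gamma, q_rate, q_recover):
    """One daily step of an SIRQ sketch.

    state: (s, i, q, r) tuple of population fractions
    beta, gamma: contact and recovery rates, as in the SIR model
    q_rate: fraction of infected students quarantined per day (assumed)
    q_recover: recovery rate of quarantined students (assumed)
    """
    s, i, q, r = state
    infected = beta * i * s      # quarantined students don't spread infection
    recovered = gamma * i
    quarantined = q_rate * i
    released = q_recover * q
    s -= infected
    i += infected - recovered - quarantined
    q += quarantined - released
    r += recovered + released
    return (s, i, q, r)

state = (89/90, 1/90, 0, 0)
for _ in range(98):              # 14 weeks of daily steps
    state = update_sirq(state, beta=1/3, gamma=1/4, q_rate=0.2, q_recover=1/4)
print(state)
```

With these numbers, quarantine raises the effective removal rate above the infection rate, so the outbreak dies out; the four fractions always sum to 1 because every flow out of one compartment flows into another.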
## Dependencies ``` import os import cv2 import shutil import random import warnings import numpy as np import pandas as pd import seaborn as sns import matplotlib.pyplot as plt from sklearn.utils import class_weight from sklearn.model_selection import train_test_split from sklearn.metrics import confusion_matrix, cohen_kappa_score from keras import backend as K from keras.models import Model from keras.utils import to_categorical from keras import optimizers, applications from keras.preprocessing.image import ImageDataGenerator from keras.callbacks import EarlyStopping, ReduceLROnPlateau, Callback, LearningRateScheduler from keras.layers import Dense, Dropout, GlobalAveragePooling2D, Input # Set seeds to make the experiment more reproducible. from tensorflow import set_random_seed def seed_everything(seed=0): random.seed(seed) os.environ['PYTHONHASHSEED'] = str(seed) np.random.seed(seed) set_random_seed(seed) seed = 0 seed_everything(seed) %matplotlib inline sns.set(style="whitegrid") warnings.filterwarnings("ignore") ``` ## Load data ``` hold_out_set = pd.read_csv('../input/aptos-data-split/hold-out.csv') X_train = hold_out_set[hold_out_set['set'] == 'train'] X_val = hold_out_set[hold_out_set['set'] == 'validation'] test = pd.read_csv('../input/aptos2019-blindness-detection/test.csv') print('Number of train samples: ', X_train.shape[0]) print('Number of validation samples: ', X_val.shape[0]) print('Number of test samples: ', test.shape[0]) # Preprocess data X_train["id_code"] = X_train["id_code"].apply(lambda x: x + ".png") X_val["id_code"] = X_val["id_code"].apply(lambda x: x + ".png") test["id_code"] = test["id_code"].apply(lambda x: x + ".png") display(X_train.head()) ``` # Model parameters ``` # Model parameters BATCH_SIZE = 8 EPOCHS = 40 WARMUP_EPOCHS = 5 LEARNING_RATE = 1e-4 WARMUP_LEARNING_RATE = 1e-3 HEIGHT = 224 WIDTH = 224 CHANNELS = 3 ES_PATIENCE = 5 RLROP_PATIENCE = 3
DECAY_DROP = 0.5 ``` # Pre-process images ``` train_base_path = '../input/aptos2019-blindness-detection/train_images/' test_base_path = '../input/aptos2019-blindness-detection/test_images/' train_dest_path = 'base_dir/train_images/' validation_dest_path = 'base_dir/validation_images/' test_dest_path = 'base_dir/test_images/' # Making sure directories don't exist if os.path.exists(train_dest_path): shutil.rmtree(train_dest_path) if os.path.exists(validation_dest_path): shutil.rmtree(validation_dest_path) if os.path.exists(test_dest_path): shutil.rmtree(test_dest_path) # Creating train, validation and test directories os.makedirs(train_dest_path) os.makedirs(validation_dest_path) os.makedirs(test_dest_path) def crop_image(img, tol=7): if img.ndim ==2: mask = img>tol return img[np.ix_(mask.any(1),mask.any(0))] elif img.ndim==3: gray_img = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY) mask = gray_img>tol check_shape = img[:,:,0][np.ix_(mask.any(1),mask.any(0))].shape[0] if (check_shape == 0): # image is too dark so that we crop out everything return img # return original image else: img1=img[:,:,0][np.ix_(mask.any(1),mask.any(0))] img2=img[:,:,1][np.ix_(mask.any(1),mask.any(0))] img3=img[:,:,2][np.ix_(mask.any(1),mask.any(0))] img = np.stack([img1,img2,img3],axis=-1) return img def circle_crop(img): img = crop_image(img) height, width, depth = img.shape x = int(width/2) y = int(height/2) r = np.amin((x,y)) circle_img = np.zeros((height, width), np.uint8) cv2.circle(circle_img, (x,y), int(r), 1, thickness=-1) img = cv2.bitwise_and(img, img, mask=circle_img) img = crop_image(img) return img def preprocess_image(base_path, save_path, image_id, HEIGHT, WIDTH, sigmaX=10): image = cv2.imread(base_path + image_id) image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB) image = circle_crop(image) image = cv2.resize(image, (HEIGHT, WIDTH)) image = cv2.addWeighted(image, 4, cv2.GaussianBlur(image, (0,0), sigmaX), -4 , 128) cv2.imwrite(save_path + image_id, image) # Pre-process train set for
i, image_id in enumerate(X_train['id_code']): preprocess_image(train_base_path, train_dest_path, image_id, HEIGHT, WIDTH) # Pre-process validation set for i, image_id in enumerate(X_val['id_code']): preprocess_image(train_base_path, validation_dest_path, image_id, HEIGHT, WIDTH) # Pre-process test set for i, image_id in enumerate(test['id_code']): preprocess_image(test_base_path, test_dest_path, image_id, HEIGHT, WIDTH) ``` # Data generator ``` datagen=ImageDataGenerator(rescale=1./255, rotation_range=360, horizontal_flip=True, vertical_flip=True, fill_mode='constant', cval=0.) train_generator=datagen.flow_from_dataframe( dataframe=X_train, directory=train_dest_path, x_col="id_code", y_col="diagnosis", class_mode="raw", batch_size=BATCH_SIZE, target_size=(HEIGHT, WIDTH), seed=seed) valid_generator=datagen.flow_from_dataframe( dataframe=X_val, directory=validation_dest_path, x_col="id_code", y_col="diagnosis", class_mode="raw", batch_size=BATCH_SIZE, target_size=(HEIGHT, WIDTH), seed=seed) test_generator=datagen.flow_from_dataframe( dataframe=test, directory=test_dest_path, x_col="id_code", batch_size=1, class_mode=None, shuffle=False, target_size=(HEIGHT, WIDTH), seed=seed) ``` # Model ``` def create_model(input_shape): input_tensor = Input(shape=input_shape) base_model = applications.ResNet50(weights=None, include_top=False, input_tensor=input_tensor) base_model.load_weights('../input/resnet50/resnet50_weights_tf_dim_ordering_tf_kernels_notop.h5') x = GlobalAveragePooling2D()(base_model.output) x = Dropout(0.5)(x) x = Dense(2048, activation='relu')(x) x = Dropout(0.5)(x) final_output = Dense(1, activation='linear', name='final_output')(x) model = Model(input_tensor, final_output) return model ``` # Train top layers ``` model = create_model(input_shape=(HEIGHT, WIDTH, CHANNELS)) for layer in model.layers: layer.trainable = False for i in range(-5, 0): model.layers[i].trainable = True metric_list = ["accuracy"] optimizer = optimizers.Adam(lr=WARMUP_LEARNING_RATE)
model.compile(optimizer=optimizer, loss='mean_squared_error', metrics=metric_list) model.summary() STEP_SIZE_TRAIN = train_generator.n//train_generator.batch_size STEP_SIZE_VALID = valid_generator.n//valid_generator.batch_size history_warmup = model.fit_generator(generator=train_generator, steps_per_epoch=STEP_SIZE_TRAIN, validation_data=valid_generator, validation_steps=STEP_SIZE_VALID, epochs=WARMUP_EPOCHS, verbose=1).history ``` # Fine-tune the complete model (1st step) ``` for i in range(-15, 0): model.layers[i].trainable = True es = EarlyStopping(monitor='val_loss', mode='min', patience=ES_PATIENCE, restore_best_weights=True, verbose=1) rlrop = ReduceLROnPlateau(monitor='val_loss', mode='min', patience=RLROP_PATIENCE, factor=DECAY_DROP, min_lr=1e-6, verbose=1) callback_list = [es, rlrop] optimizer = optimizers.Adam(lr=LEARNING_RATE) model.compile(optimizer=optimizer, loss='mean_squared_error', metrics=metric_list) model.summary() history_finetunning = model.fit_generator(generator=train_generator, steps_per_epoch=STEP_SIZE_TRAIN, validation_data=valid_generator, validation_steps=STEP_SIZE_VALID, epochs=int(EPOCHS*0.8), callbacks=callback_list, verbose=1).history ``` # Fine-tune the complete model (2nd step) ``` optimizer = optimizers.SGD(lr=LEARNING_RATE, momentum=0.9, nesterov=True) model.compile(optimizer=optimizer, loss='mean_squared_error', metrics=metric_list) history_finetunning_2 = model.fit_generator(generator=train_generator, steps_per_epoch=STEP_SIZE_TRAIN, validation_data=valid_generator, validation_steps=STEP_SIZE_VALID, epochs=int(EPOCHS*0.2), callbacks=callback_list, verbose=1).history ``` # Model loss graph ``` history = {'loss': history_finetunning['loss'] + history_finetunning_2['loss'], 'val_loss': history_finetunning['val_loss'] + history_finetunning_2['val_loss'], 'acc': history_finetunning['acc'] + history_finetunning_2['acc'], 'val_acc': history_finetunning['val_acc'] + history_finetunning_2['val_acc']} sns.set_style("whitegrid") fig, 
(ax1, ax2) = plt.subplots(2, 1, sharex='col', figsize=(20, 14)) ax1.plot(history['loss'], label='Train loss') ax1.plot(history['val_loss'], label='Validation loss') ax1.legend(loc='best') ax1.set_title('Loss') ax2.plot(history['acc'], label='Train accuracy') ax2.plot(history['val_acc'], label='Validation accuracy') ax2.legend(loc='best') ax2.set_title('Accuracy') plt.xlabel('Epochs') sns.despine() plt.show() # Create empty arrays to keep the predictions and labels df_preds = pd.DataFrame(columns=['label', 'pred', 'set']) train_generator.reset() valid_generator.reset() # Add train predictions and labels for i in range(STEP_SIZE_TRAIN + 1): im, lbl = next(train_generator) preds = model.predict(im, batch_size=train_generator.batch_size) for index in range(len(preds)): df_preds.loc[len(df_preds)] = [lbl[index], preds[index][0], 'train'] # Add validation predictions and labels for i in range(STEP_SIZE_VALID + 1): im, lbl = next(valid_generator) preds = model.predict(im, batch_size=valid_generator.batch_size) for index in range(len(preds)): df_preds.loc[len(df_preds)] = [lbl[index], preds[index][0], 'validation'] df_preds['label'] = df_preds['label'].astype('int') ``` # Threshold optimization ``` def classify(x): if x < 0.5: return 0 elif x < 1.5: return 1 elif x < 2.5: return 2 elif x < 3.5: return 3 return 4 def classify_opt(x): if x <= (0 + best_thr_0): return 0 elif x <= (1 + best_thr_1): return 1 elif x <= (2 + best_thr_2): return 2 elif x <= (3 + best_thr_3): return 3 return 4 def find_best_threshold(df, label, label_col='label', pred_col='pred', do_plot=True): score = [] thrs = np.arange(0, 1, 0.01) for thr in thrs: preds_thr = [label if ((pred >= label and pred < label+1) and (pred < (label+thr))) else classify(pred) for pred in df[pred_col]] score.append(cohen_kappa_score(df[label_col].astype('int'), preds_thr)) score = np.array(score) pm = score.argmax() best_thr, best_score = thrs[pm], score[pm].item() print('Label %s: thr=%.2f, Kappa=%.3f' % (label, best_thr,
best_score)) plt.rcParams["figure.figsize"] = (20, 5) plt.plot(thrs, score) plt.vlines(x=best_thr, ymin=score.min(), ymax=score.max()) plt.show() return best_thr # Best threshold for label 3 best_thr_3 = find_best_threshold(df_preds, 3) # Best threshold for label 2 best_thr_2 = find_best_threshold(df_preds, 2) # Best threshold for label 1 best_thr_1 = find_best_threshold(df_preds, 1) # Best threshold for label 0 best_thr_0 = find_best_threshold(df_preds, 0) # Classify predictions df_preds['predictions'] = df_preds['pred'].apply(lambda x: classify(x)) # Apply optimized thresholds to the predictions df_preds['predictions_opt'] = df_preds['pred'].apply(lambda x: classify_opt(x)) train_preds = df_preds[df_preds['set'] == 'train'] validation_preds = df_preds[df_preds['set'] == 'validation'] ``` # Model Evaluation ## Confusion Matrix ### Original thresholds ``` labels = ['0 - No DR', '1 - Mild', '2 - Moderate', '3 - Severe', '4 - Proliferative DR'] def plot_confusion_matrix(train, validation, labels=labels): train_labels, train_preds = train validation_labels, validation_preds = validation fig, (ax1, ax2) = plt.subplots(1, 2, sharex='col', figsize=(24, 7)) train_cnf_matrix = confusion_matrix(train_labels, train_preds) validation_cnf_matrix = confusion_matrix(validation_labels, validation_preds) train_cnf_matrix_norm = train_cnf_matrix.astype('float') / train_cnf_matrix.sum(axis=1)[:, np.newaxis] validation_cnf_matrix_norm = validation_cnf_matrix.astype('float') / validation_cnf_matrix.sum(axis=1)[:, np.newaxis] train_df_cm = pd.DataFrame(train_cnf_matrix_norm, index=labels, columns=labels) validation_df_cm = pd.DataFrame(validation_cnf_matrix_norm, index=labels, columns=labels) sns.heatmap(train_df_cm, annot=True, fmt='.2f', cmap="Blues",ax=ax1).set_title('Train') sns.heatmap(validation_df_cm, annot=True, fmt='.2f', cmap=sns.cubehelix_palette(8),ax=ax2).set_title('Validation') plt.show() plot_confusion_matrix((train_preds['label'], train_preds['predictions']), 
(validation_preds['label'], validation_preds['predictions'])) ``` ### Optimized thresholds ``` plot_confusion_matrix((train_preds['label'], train_preds['predictions_opt']), (validation_preds['label'], validation_preds['predictions_opt'])) ``` ## Quadratic Weighted Kappa ``` def evaluate_model(train, validation): train_labels, train_preds = train validation_labels, validation_preds = validation print("Train Cohen Kappa score: %.3f" % cohen_kappa_score(train_preds, train_labels, weights='quadratic')) print("Validation Cohen Kappa score: %.3f" % cohen_kappa_score(validation_preds, validation_labels, weights='quadratic')) print("Complete set Cohen Kappa score: %.3f" % cohen_kappa_score(np.append(train_preds, validation_preds), np.append(train_labels, validation_labels), weights='quadratic')) print(" Original thresholds") evaluate_model((train_preds['label'], train_preds['predictions']), (validation_preds['label'], validation_preds['predictions'])) print(" Optimized thresholds") evaluate_model((train_preds['label'], train_preds['predictions_opt']), (validation_preds['label'], validation_preds['predictions_opt'])) ``` ## Apply model to test set and output predictions ``` def apply_tta(model, generator, steps=10): step_size = generator.n//generator.batch_size preds_tta = [] for i in range(steps): generator.reset() preds = model.predict_generator(generator, steps=step_size) preds_tta.append(preds) return np.mean(preds_tta, axis=0) preds = apply_tta(model, test_generator) predictions = [classify(x) for x in preds] predictions_opt = [classify_opt(x) for x in preds] results = pd.DataFrame({'id_code':test['id_code'], 'diagnosis':predictions}) results['id_code'] = results['id_code'].map(lambda x: str(x)[:-4]) results_opt = pd.DataFrame({'id_code':test['id_code'], 'diagnosis':predictions_opt}) results_opt['id_code'] = results_opt['id_code'].map(lambda x: str(x)[:-4]) # Cleaning created directories if os.path.exists(train_dest_path): shutil.rmtree(train_dest_path) if 
os.path.exists(validation_dest_path): shutil.rmtree(validation_dest_path) if os.path.exists(test_dest_path): shutil.rmtree(test_dest_path) ``` # Predictions class distribution ``` fig, (ax1, ax2) = plt.subplots(1, 2, sharex='col', figsize=(24, 8.7)) sns.countplot(x="diagnosis", data=results, palette="GnBu_d", ax=ax1).set_title('Test') sns.countplot(x="diagnosis", data=results_opt, palette="GnBu_d", ax=ax2).set_title('Test optimized') sns.despine() plt.show() val_kappa = cohen_kappa_score(validation_preds['label'], validation_preds['predictions'], weights='quadratic') val_opt_kappa = cohen_kappa_score(validation_preds['label'], validation_preds['predictions_opt'], weights='quadratic') results_name = 'submission.csv' results_opt_name = 'submission_opt.csv' # if val_kappa > val_opt_kappa: # results_name = 'submission.csv' # results_opt_name = 'submission_opt.csv' # else: # results_name = 'submission_norm.csv' # results_opt_name = 'submission.csv' results.to_csv(results_name, index=False) display(results.head()) results_opt.to_csv(results_opt_name, index=False) display(results_opt.head()) ```
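The evaluation above relies on scikit-learn's `cohen_kappa_score(..., weights='quadratic')`. As a dependency-free sanity check of what that metric computes, here is a minimal pure-Python quadratic weighted kappa, written directly from the metric's definition (a sketch; `sklearn` remains the reference implementation):

```python
def quadratic_weighted_kappa(y_true, y_pred, n_classes=5):
    """Quadratic weighted kappa for ordinal labels 0..n_classes-1."""
    n = len(y_true)
    # Observed confusion matrix
    O = [[0] * n_classes for _ in range(n_classes)]
    for t, p in zip(y_true, y_pred):
        O[t][p] += 1
    hist_true = [sum(row) for row in O]
    hist_pred = [sum(O[i][j] for i in range(n_classes)) for j in range(n_classes)]
    num = den = 0.0
    for i in range(n_classes):
        for j in range(n_classes):
            w = (i - j) ** 2 / (n_classes - 1) ** 2     # quadratic disagreement weight
            expected = hist_true[i] * hist_pred[j] / n  # counts under independence
            num += w * O[i][j]
            den += w * expected
    return 1.0 - num / den

print(quadratic_weighted_kappa([0, 1, 2, 3, 4], [0, 1, 2, 3, 4]))  # perfect agreement -> 1.0
print(quadratic_weighted_kappa([0, 1, 2, 3], [1, 2, 3, 4]))        # off by one: mildly penalized
```

Because the weights grow quadratically with the distance between true and predicted grade, off-by-one errors cost far less than confusing, say, grade 0 with grade 4, which is why the threshold optimization above tunes the regression cut-points against this metric.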
# Covid-19 Prediction from Lung CT This notebook utilizes a trained **classifier** to recognize **Covid-19** positive patients from their lung **CT** scans, in order to support the physician’s decision process with a quantitative approach. ``` from esmlib import * ``` ## Select an exam folder With the following code you can select the folder that contains the **.dcm** file images. ``` path_exam = askdirectory() ``` Read some interesting **metadata** about the patient exam: ``` get_metadata(path_exam) ``` ## Interactive Slider for visualizing the exam folder ``` def dicom_animation(x): plt.figure(1, figsize=(12,10)) plt.imshow(m[x], cmap=plt.cm.gray) return x files = sorted(os.listdir(path_exam)) m = np.zeros((len(files), 410, 450)) for n in range(0,len(files)): pathtemp = path_exam + "\\" + files[n] img = load_and_preprocess_dcm(pathtemp) m[n, : , : ] = img ``` Move the slider to scroll through the images. ``` interact(dicom_animation, x=(0, len(m)-1)) ``` ## Automatic selection of three scans ``` lista, idx_sel = select_paths(path_exam) lista_img = select_and_show_3(lista, idx_sel) ``` ### If the selected scans are not good enough, you can specify in the following code the indices of the scans to select ``` a = input("Insert the index number of the first scan: ") b = input("Insert the index number of the second scan: ") c = input("Insert the index number of the third scan: ") idx_sel = [int(a), int(b), int(c)] lista_img = select_and_show_3(lista, idx_sel) ``` ## Get the mask of the three images ``` mask_list = [] for img in lista_img: first_mask = mask_kmeans(img) final_mask = fill_mask(first_mask) mask_list.append(final_mask) show_3_mask(mask_list, idx_sel) ``` ### If the selected masks are not good enough, you can specify in the following code the index of the mask to modify ``` n_mask = input("Insert the index number of the scan you want to replace: ") ``` With the following code, you can *manually draw* the lung mask (ROI): ``` %matplotlib qt roi1, mask =
manually_roi_select(lista_img, mask_list, n_mask) ``` Show the manually selected mask: ``` %matplotlib inline show_roi_selected(roi1, mask, n_mask, lista_img) ``` #### If you want to modify another mask, just repeat the process Finally, the masks considered are: ``` show_3_mask(mask_list, idx_sel) ``` ## Feature extraction We extract the **Haralick features** from the equalized and filtered masked images selected above. ``` feat = features_and_plot(lista_img, mask_list, printing=True) ``` ## Covid-19 Prediction Run the classifier on the extracted features to get the prediction for the patient: ``` covid_predict(feat) ```
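`features_and_plot` comes from the notebook's own `esmlib`, whose internals are not shown here. To illustrate the kind of computation behind Haralick texture features, here is a minimal gray-level co-occurrence matrix (GLCM) with two classic statistics, contrast and homogeneity (a sketch over horizontally adjacent pixel pairs only, not `esmlib`'s actual implementation):

```python
def glcm(image, levels):
    """Normalized co-occurrence matrix for horizontally adjacent pixel pairs."""
    counts = [[0] * levels for _ in range(levels)]
    total = 0
    for row in image:
        for a, b in zip(row, row[1:]):
            counts[a][b] += 1
            total += 1
    return [[c / total for c in row] for row in counts]

def contrast(P):
    """High when adjacent pixels differ a lot in gray level."""
    n = len(P)
    return sum((i - j) ** 2 * P[i][j] for i in range(n) for j in range(n))

def homogeneity(P):
    """High when the image is locally uniform (mass near the GLCM diagonal)."""
    n = len(P)
    return sum(P[i][j] / (1 + (i - j) ** 2) for i in range(n) for j in range(n))

img = [[0, 0, 1, 1],
       [0, 0, 1, 1],
       [2, 2, 3, 3],
       [2, 2, 3, 3]]
P = glcm(img, levels=4)
print(contrast(P), homogeneity(P))
```

Real pipelines (e.g. `mahotas.features.haralick` or scikit-image's GLCM utilities) aggregate such statistics over several pixel offsets and angles, which is presumably what `esmlib` does internally.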
# Monte Carlo Simulations with Python (Part 1) [Patrick Hanbury](https://towardsdatascience.com/monte-carlo-simulations-with-python-part-1-f5627b7d60b0) - Notebook author: Israel Oliveira [\[e-mail\]](mailto:'Israel%20Oliveira%20'<prof.israel@gmail.com>) ``` %load_ext watermark import numpy as np import math import random from matplotlib import pyplot as plt from IPython.display import clear_output, display, Markdown, Latex, Math from matplotlib.patches import Polygon import scipy.integrate as integrate from decimal import Decimal import pandas as pd PI = math.pi e = math.e # Run this cell before closing. %watermark %watermark --iversion %watermark -b -r -g ``` We want: $I_{ab} = \int\limits_{a}^{b} f(x) dx ~~~(1)$ so we could achieve that with the average value of $f$: $\hat{f}_{ab} = \frac{1}{b-a} \int\limits_{a}^{b} f(x) dx ~~~(2)$ ``` def func(x): return (x - 3) * (x - 5) * (x - 7) + 85 a, b = 2, 9 # integral limits x = np.linspace(0, 10) y = func(x) fig, ax = plt.subplots() ax.plot(x, y, 'r', linewidth=2) ax.set_ylim(bottom=0) # Make the shaded region ix = np.linspace(a, b) iy = func(ix) verts = [(a, 0), *zip(ix, iy), (b, 0)] poly = Polygon(verts, facecolor='0.9', edgecolor='0.5') ax.add_patch(poly) ax.text(0.5 * (a + b), 30, r"$\int_a^b f(x)\mathrm{d}x$", horizontalalignment='center', fontsize=20) fig.text(0.9, 0.05, '$x$') fig.text(0.1, 0.9, '$y$') ax.spines['right'].set_visible(False) ax.spines['top'].set_visible(False) ax.xaxis.set_ticks_position('bottom') ax.set_xticks((a, b)) ax.set_xticklabels(('$a$', '$b$')) ax.set_yticks([]) plt.axhline(90, xmin=0.225, xmax=0.87) plt.show() ``` With $(1)$ and $(2)$: $\hat{f}_{ab} = \frac{1}{b-a} I $ $I = (b-a)\hat{f}_{ab} ~~~(3)$ Sampling $f(\cdot)$, it is possible to calculate an approximate value for $\hat{f}_{ab}$ (with a random variable $\mathbf{x}$): $\mathbf{F}_{ab} =
\{ f(\mathbf{x}) ~|~ \mathbf{x} ~\in~ [a, b] \}$ The expectation for $\mathbf{F}_{ab}$ is: $E[\mathbf{F}_{ab}] = \hat{f}_{ab}$ and concluding with $I = E[\mathbf{F}_{ab}](b-a)$ So, how can we calculate $E[\mathbf{F}_{ab}]$? With $N$ uniform samples of $x~\in~[a, b]$. If $N$ is large enough and $\mathbf{x}$ is uniform on $[a, b]$: $ E[\mathbf{F}_{ab}] = \frac{1}{N} \sum\limits_{i=1}^{N} f(\mathbf{x}_i) ~|~ \mathbf{x}_i ~\in~ [a, b]$ and $I = E[\mathbf{F}_{ab}](b-a) = \lim\limits_{N \rightarrow \infty} \frac{b-a}{N} \sum\limits_{i=1}^{N} f(\mathbf{x}_i) ~|~ \mathbf{x}_i ~\in~ [a, b] ~~~(4)$ This is the *Crude Monte Carlo*. #### Example 1: Calculate: $I = \int\limits_{0}^{+\infty} \frac{e^{-x}}{(x-1)^2 + 1} dx ~~~(5)$ ``` def get_rand_number(min_value, max_value): """ This function gets a random number from a uniform distribution between the two input values [min_value, max_value] inclusively Args: - min_value (float) - max_value (float) Return: - Random number in this range (float) """ width = max_value - min_value choice = random.uniform(0,1) return min_value + width*choice def f_of_x(x): """ This is the main function we want to integrate over. Args: - x (float) : input to function Return: - output of function f(x) (float) """ return (e**(-1*x))/(1+(x-1)**2) lower_bound = 0 upper_bound = 5 def crude_monte_carlo(num_samples=10000, lower_bound = 0, upper_bound = 5): """ This function performs the Crude Monte Carlo for our specific function f(x) on the range x=0 to x=5. Notice that this bound is sufficient because f(x) is already close to 0 by x = 5. Args: - num_samples (int) : number of samples Return: - Crude Monte Carlo estimation (float) """ sum_of_samples = 0 for i in range(num_samples): x = get_rand_number(lower_bound, upper_bound) sum_of_samples += f_of_x(x) return (upper_bound - lower_bound) * float(sum_of_samples/num_samples) display(Math(r'I \approx {:.4f}, ~N = 10^4'.format(crude_monte_carlo()))) display(Math(r'\left .
f(a) \right |_{a=0} \approx '+r'{:.4f} '.format(f_of_x(lower_bound)))) display(Math(r'\left . f(b) \right |_{b=5} \approx '+r'{:.4f} '.format(f_of_x(upper_bound)))) ``` Why $b=5$? $ \lim\limits_{x \rightarrow +\infty} \frac{e^{-x}}{(x-1)^2 + 1} \rightarrow 0 $ We could consider $0.0004 ~\approx~ 0$. If $b = 10$? ``` upper_bound = 10 display(Math(r'\left . f(b) \right |_{b=5} \approx '+r'{:.6f}'.format(f_of_x(upper_bound))+'= 10^{-6}')) display(Math(r'I \approx {:.4f}, ~N = 10^5 '.format(crude_monte_carlo(num_samples=100000, upper_bound = 10)))) plt.figure() def func(x): return f_of_x(x) a, b = 0, 5 # integral limits x = np.linspace(0, 6) y = func(x) fig, ax = plt.subplots() ax.plot(x, y, 'r', linewidth=2) ax.set_ylim(bottom=0) # Make the shaded region ix = np.linspace(a, b) iy = func(ix) verts = [(a, 0), *zip(ix, iy), (b, 0)] poly = Polygon(verts, facecolor='0.9', edgecolor='0.5') ax.add_patch(poly) ax.text(0.2 * (a + b), 0.05, r"$\int_a^b f(x)\mathrm{d}x$", horizontalalignment='center', fontsize=20) fig.text(0.9, 0.05, '$x$') fig.text(0.1, 0.9, '$y$') ax.spines['right'].set_visible(False) ax.spines['top'].set_visible(False) ax.xaxis.set_ticks_position('bottom') ax.set_xticks((a, b)) ax.set_xticklabels(('$a$', '$b$')) ax.set_yticks([]) ax.axhline(0.2*crude_monte_carlo(),color='b', xmin=0.051, xmax=0.81) iy = iy*0+0.2*crude_monte_carlo() verts = [(a, 0), *zip(ix, iy), (b, 0)] poly = Polygon(verts, facecolor='0.94', edgecolor='0.99') ax.add_patch(poly) ax.text(5,0.2*crude_monte_carlo()+0.03 , r"$\hat{f}_{ab}$", horizontalalignment='center', fontsize=20) plt.show() ``` Comparing with [Integration (scipy.integrate)](https://docs.scipy.org/doc/scipy/reference/tutorial/integrate.html). 
```
results = integrate.quad(lambda x: f_of_x(x), lower_bound, upper_bound)

tmp = (Decimal(results[1]).as_tuple().digits[0],
       Decimal(results[1]).as_tuple().exponent + len(Decimal(results[1]).as_tuple().digits) - 1)
display(Math(r'I_{\text{SciPy}} = ' + r'{:.4f}, ~e \approx {}'.format(results[0], tmp[0]) + r'\cdot 10^{' + '{}'.format(tmp[1]) + r'}'))

diff = []
for _ in range(100):
    diff.append(crude_monte_carlo(num_samples=100000, upper_bound=10) - results[0])

df = pd.DataFrame([abs(x) for x in diff], columns=['$I- I_{\text{SciPy}}$'])
display(df.describe())
df.plot(grid=True)

df = pd.DataFrame([abs(x)/results[0] for x in diff], columns=['$(I- I_{\text{SciPy}})/I_{\text{SciPy}}$'])
display(df.describe())
df.plot(grid=True)
```

Confirm the estimated error with the variance.

```
def get_crude_MC_variance(num_samples=10000, upper_bound=5):
    """
    This function returns the variance of the Crude Monte Carlo.
    Note that the input number of samples does not necessarily need to
    correspond to the number of samples used in the Monte Carlo simulation.

    Args:
    - num_samples (int)

    Return:
    - Variance for Crude Monte Carlo approximation of f(x) (float)
    """
    int_max = upper_bound  # this is the max of our integration range

    # get the average of squares
    running_total = 0
    for i in range(num_samples):
        x = get_rand_number(0, int_max)
        running_total += f_of_x(x)**2
    sum_of_sqs = running_total*int_max / num_samples

    # get square of average
    running_total = 0
    for i in range(num_samples):
        x = get_rand_number(0, int_max)
        running_total += f_of_x(x)  # note: `+=`, not `=`
    sq_ave = (int_max*running_total/num_samples)**2

    return sum_of_sqs - sq_ave


s1 = get_crude_MC_variance()
"{:.4f}".format(s1)

s2 = get_crude_MC_variance(100000, 10)
"{:.4f}".format(s2)

math.sqrt(s1 / 10000)

df.describe().loc['mean'].to_list()[0]
```

### Importance Sampling

```
# this is the template of our weight function g(x)
def g_of_x(x, A, lamda):
    e = 2.71828
    return A*math.pow(e, -1*lamda*x)


def inverse_G_of_r(r, lamda):
    return (-1 * math.log(float(r)))/lamda


def get_IS_variance(lamda, num_samples):
    """
    This function calculates the variance of a Monte Carlo
    using importance sampling.

    Args:
    - lamda (float) : lambda value of g(x) being tested

    Return:
    - Variance
    """
    A = lamda
    int_max = 5

    # get sum of squares
    running_total = 0
    for i in range(num_samples):
        x = get_rand_number(0, int_max)
        running_total += (f_of_x(x)/g_of_x(x, A, lamda))**2
    sum_of_sqs = running_total / num_samples

    # get squared average
    running_total = 0
    for i in range(num_samples):
        x = get_rand_number(0, int_max)
        running_total += f_of_x(x)/g_of_x(x, A, lamda)
    sq_ave = (running_total/num_samples)**2

    return sum_of_sqs - sq_ave


# get variance as a function of lambda by testing many
# different lambdas
test_lamdas = [i*0.05 for i in range(1, 61)]
variances = []

for i, lamda in enumerate(test_lamdas):
    print(f"lambda {i+1}/{len(test_lamdas)}: {lamda}")
    A = lamda
    variances.append(get_IS_variance(lamda, 10000))
    clear_output(wait=True)

optimal_lamda = test_lamdas[np.argmin(np.asarray(variances))]
IS_variance = variances[np.argmin(np.asarray(variances))]

print(f"Optimal Lambda: {optimal_lamda}")
print(f"Optimal Variance: {IS_variance}")
print(f"Error: {(IS_variance/10000)**0.5}")


def importance_sampling_MC(lamda, num_samples):
    A = lamda

    running_total = 0
    for i in range(num_samples):
        r = get_rand_number(0, 1)
        running_total += f_of_x(inverse_G_of_r(r, lamda=lamda))/g_of_x(inverse_G_of_r(r, lamda=lamda), A, lamda)
    approximation = float(running_total/num_samples)
    return approximation


# run simulation
num_samples = 10000
approx = importance_sampling_MC(optimal_lamda, num_samples)
variance = get_IS_variance(optimal_lamda, num_samples)
error = (variance/num_samples)**0.5

# display results
print(f"Importance Sampling Approximation: {approx}")
print(f"Variance: {variance}")
print(f"Error: {error}")

display(Math(r'(I_{IS} - I_{\text{SciPy}})/I_{\text{SciPy}} = '+'{:.4}\%'.format(100*abs((approx-results[0])/results[0]))))
```
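The loop-based estimator above translates naturally into vectorized NumPy. This is a sketch of my own (not from the original article) of the same Crude Monte Carlo, with the standard error $\sigma/\sqrt{N}$ computed from the sample itself rather than from a second sampling pass:

```python
import numpy as np

def crude_mc(f, a, b, n=100_000, seed=None):
    """Crude Monte Carlo: I ~= (b - a) * mean(f(x)) with x ~ U[a, b]."""
    rng = np.random.default_rng(seed)
    fx = f(rng.uniform(a, b, n))            # one vectorized batch of samples
    est = (b - a) * fx.mean()
    # standard error of the estimate shrinks like 1/sqrt(n)
    se = (b - a) * fx.std(ddof=1) / np.sqrt(n)
    return est, se

f = lambda x: np.exp(-x) / ((x - 1) ** 2 + 1)
est, se = crude_mc(f, 0, 5, n=200_000, seed=0)
print(f"I ~ {est:.4f} +/- {se:.4f}")
```

With $N = 2\cdot 10^5$ the estimate lands within a few standard errors of the SciPy value (about 0.70), and the reported error shrinks as $1/\sqrt{N}$, which matches the spread seen in the `df.describe()` tables above.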
``` import sys; sys.path.append('../rrr') from multilayer_perceptron import * from figure_grid import * from local_linear_explanation import * from toy_colors import generate_dataset, imgshape, ignore_rule1, ignore_rule2, rule1_score, rule2_score import lime import lime.lime_tabular ``` # Toy Color Dataset This is a simple, two-class image classification dataset with two independent ways a model could learn to distinguish between classes. The first is whether all four corner pixels are the same color, and the second is whether the top-middle three pixels are all different colors. Images in class 1 satisfy both conditions and images in class 2 satisfy neither. See `color_dataset_generator` for more details. We will train a multilayer perceptron to classify these images, explore which rule(s) it implicitly learns, and constrain it to use only one rule (or neither). Let's first load our dataset: ``` X, Xt, y, yt = generate_dataset(cachefile='../data/toy-colors.npz') E1 = np.array([ignore_rule2 for _ in range(len(y))]) E2 = np.array([ignore_rule1 for _ in range(len(y))]) print(X.shape, Xt.shape, y.shape, yt.shape, E1.shape, E2.shape) ``` ## Understanding the Dataset Let's just examine images from each class quickly and verify that in class 1, the corners are all the same color and the top-middle three pixels are all different (none of which should hold true in class 2): ``` plt.subplot(121) plt.title('Class 1') image_grid(X[np.argwhere(y == 0)[:9]], (5,5,3), 3) plt.subplot(122) plt.title('Class 2') image_grid(X[np.argwhere(y == 1)[:9]], (5,5,3), 3) plt.show() ``` Great. 
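For intuition, the two class-1 conditions are easy to state directly. Here is a small self-contained check of my own that uses integer color ids in place of the dataset's actual flattened 5×5×3 RGB encoding, and assumes "top-middle three pixels" means positions (0,1), (0,2), (0,3); it illustrates the rules, not `generate_dataset` itself:

```python
import numpy as np

def rule1(img):
    """Class-1 condition A: all four corner pixels share a color."""
    return len({img[0, 0], img[0, -1], img[-1, 0], img[-1, -1]}) == 1

def rule2(img):
    """Class-1 condition B: the top-middle three pixels are all different."""
    return len({img[0, 1], img[0, 2], img[0, 3]}) == 3

rng = np.random.default_rng(0)
img = rng.integers(0, 4, size=(5, 5))                  # 4 possible colors
img[0, 0] = img[0, -1] = img[-1, 0] = img[-1, -1] = 2  # force rule 1
img[0, 1:4] = [0, 1, 3]                                # force rule 2
```

Breaking any one corner (or repeating a top-middle color) flips the corresponding rule, which is exactly the redundancy the classifiers below can exploit.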
## Explaining and learning diverse classifiers Now let's see if we can train our model to implicitly learn each rule: ``` def explain(model, title='', length=4): plt.title(title) explanation_grid(model.grad_explain(Xt[:length*length]), imgshape, length) # Train a model without any constraints mlp_plain = MultilayerPerceptron() mlp_plain.fit(X, y) mlp_plain.score(Xt, yt) # Train a model constrained to use the first rule mlp_rule1 = MultilayerPerceptron(l2_grads=1000) mlp_rule1.fit(X, y, E1) mlp_rule1.score(Xt, yt) # Train a model constrained to use the second rule mlp_rule2 = MultilayerPerceptron(l2_grads=1000) mlp_rule2.fit(X, y, E2) mlp_rule2.score(Xt, yt) # Visualize largest weights with figure_grid(1,3, rowwidth=8) as g: g.next() explain(mlp_plain, 'No annotations') g.next() explain(mlp_rule1, '$A$ penalizing top middle') g.next() explain(mlp_rule2, '$A$ penalizing corners') ``` Notice that when we explicitly penalize corners or the top middle, the model appears to learn the _other_ rule perfectly. We haven't identified the pixels it does treat as significant in any way, but they are significant, so the fact that they show up in the explanations means that the explanation is probably an accurate reflection of the model's implicit logic. When we don't have any annotations, the model does identify the top-middle pixels occasionally, suggesting it defaults to learning a heavily but not completely corner-weighted combination of the rules. What happens when we forbid it from using either rule? ``` mlp_neither = MultilayerPerceptron(l2_grads=1e6) mlp_neither.fit(X, y, E1 + E2) mlp_neither.score(Xt, yt) explain(mlp_neither, '$A$ biased against all relevant features') plt.show() ``` As we might expect, accuracy goes down and we start identifying random pixels as significant. ## Find-another-explanation Let's now pretend we have no knowledge of what $A$ _should_ be for this dataset. Can we still train models that use diverse rules just by examining explanations? 
``` A1 = mlp_plain.largest_gradient_mask(X) mlp_fae1 = MultilayerPerceptron(l2_grads=1000) mlp_fae1.fit(X, y, A1) mlp_fae1.score(Xt, yt) explain(mlp_fae1, '$A$ biased against first model') plt.show() ``` Excellent. When we train a model to have small gradients where the $A=0$ model has large ones, we reproduce the top middle rule, though in some cases we learn a hybrid of the two. Now let's train another model to be different from either one. Note: I'm going to iteratively increase the L2 penalty until I get explanation divergence. I'm doing this manually now but it could easily be automated. ``` A2 = mlp_fae1.largest_gradient_mask(X) mlp_fae2 = MultilayerPerceptron(l2_grads=1e6) mlp_fae2.fit(X, y, A1 + A2) mlp_fae2.score(Xt, yt) explain(mlp_fae2, '$A$ biased against models 1 and 2') plt.show() ``` When we run this twice, we get low accuracy and random gradient placement. Let's visualize this all together: ``` gridsize = (2,3) plt.subplot2grid(gridsize, (0,0)) explain(mlp_plain, r'$M_{0.67}\left[ f_X|\theta_0 \right]$', 4) plt.subplot2grid(gridsize, (0,1)) explain(mlp_fae1, r'$M_{0.67}\left[ f_X|\theta_1 \right]$', 4) plt.subplot2grid(gridsize, (0,2)) explain(mlp_fae2, r'$M_{0.67}\left[ f_X|\theta_2 \right]$', 4) plt.subplot2grid(gridsize, (1,0), colspan=3) plt.axhline(1, color='red', ls='--') test_scores = [mlp_plain.score(Xt, yt), mlp_fae1.score(Xt, yt), mlp_fae2.score(Xt, yt)] train_scores = [mlp_plain.score(X, y), mlp_fae1.score(X, y), mlp_fae2.score(X, y)] plt.plot([0,1,2], train_scores, marker='^', label='Train', alpha=0.5, color='blue', markersize=10) plt.plot([0,1,2], test_scores, marker='o', label='Test', color='blue') plt.xlim(-0.5, 2.5) plt.ylim(0.4, 1.05) plt.ylabel(' Accuracy') plt.xlabel('Find-another-explanation iteration') plt.legend(loc='best', fontsize=10) plt.xticks([0,1,2]) plt.show() ``` So this more or less demonstrates the find-another-explanation method on the toy color dataset. 
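The mechanics of the explanation penalty are easier to see in a stripped-down setting. Below is a self-contained logistic-regression analogue of my own (not the notebook's `MultilayerPerceptron`): for a linear model the input gradient is just the weight vector, so penalizing gradients on annotated features (the nonzero entries of $A$) reduces to an L2 penalty on those weights, and the model shifts onto a redundant un-penalized feature:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
y = rng.integers(0, 2, n).astype(float)
s = 2 * y - 1  # +/-1 signal so no intercept is needed
# two redundant "rules": either feature alone predicts the label
X = np.column_stack([
    s + 0.3 * rng.normal(size=n),   # feature 0 ("rule 1")
    s + 0.3 * rng.normal(size=n),   # feature 1 ("rule 2")
    rng.normal(size=(n, 3)),        # distractor features
])

def fit(X, y, mask=None, l2_grads=0.0, lr=0.1, steps=5000):
    """Logistic regression with an input-gradient penalty on masked features.
    For a linear model the input gradient equals w, so the penalty gradient
    is simply l2_grads * mask * w."""
    w = np.zeros(X.shape[1])
    m = np.zeros(X.shape[1]) if mask is None else np.asarray(mask, float)
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        g = X.T @ (p - y) / len(y) + l2_grads * m * w
        w -= lr * g
    return w

w_plain = fit(X, y)                                    # uses both features
w_anti0 = fit(X, y, mask=[1, 0, 0, 0, 0], l2_grads=5)  # forbidden feature 0
```

After training, `w_anti0` carries almost no weight on the penalized feature 0 and leans on feature 1 instead, which is the one-dimensional version of the "penalize corners, learn top-middle" behaviour above.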
## Transitions between rules Separately, I ran a script to train many MLPs on this dataset, all biased against using corners, but with varying numbers of annotations in $A$ and varying L2 penalties. Let's see if we can find any transition behavior between these two rules: ``` import pickle n_vals = pickle.load(open('../data/color_n_vals.pkl', 'rb')) n_mlps = pickle.load(open('../data/color_n_mlps.pkl', 'rb')) l2_vals = pickle.load(open('../data/color_l2_vals.pkl', 'rb')) l2_mlps = pickle.load(open('../data/color_l2_mlps.pkl', 'rb')) def realize(mlp_params): return [MultilayerPerceptron.from_params(p) for p in mlp_params] l2_rule1_scores = [rule1_score(mlp, Xt[:1000]) for mlp in realize(l2_mlps)] l2_rule2_scores = [rule2_score(mlp, Xt[:1000]) for mlp in realize(l2_mlps)] l2_acc_scores = [mlp.score(Xt[:1000], yt[:1000]) for mlp in realize(l2_mlps)] n_rule1_scores = [rule1_score(mlp, Xt[:1000]) for mlp in realize(n_mlps)] n_rule2_scores = [rule2_score(mlp, Xt[:1000]) for mlp in realize(n_mlps)] n_acc_scores = [mlp.score(Xt[:1000], yt[:1000]) for mlp in realize(n_mlps)] plt.figure(figsize=(8,4)) plt.subplot(121) plt.plot(l2_vals, l2_rule1_scores, 'o', label='Corners', marker='^') plt.plot(l2_vals, l2_rule2_scores, 'o', label='Top mid.') plt.plot(l2_vals, l2_acc_scores, label='Accuracy') plt.title('Effect of $\lambda_1$ on implicit rule (full $A$)') plt.ylabel(r'Mean % $M_{0.67}\left[f_X\right]$ in corners / top middle') plt.ylim(0,1.1) plt.xscale("log") plt.yticks([]) plt.xlim(0,1000) plt.legend(loc='best', fontsize=10) plt.xlabel(r'$\lambda_1$ (explanation L2 penalty)') plt.subplot(122) plt.plot(n_vals, n_rule1_scores, 'o', label='Corners', marker='^') plt.plot(n_vals, n_rule2_scores, 'o', label='Top mid.') plt.plot(n_vals, n_acc_scores, label='Accuracy') plt.xscale('log') plt.ylim(0,1.1) plt.xlim(0,10000) plt.legend(loc='best', fontsize=10) plt.title('Effect of $A$ on implicit rule ($\lambda_1=1000$)') plt.xlabel('Number of annotations (nonzero rows of $A$)') 
plt.tight_layout() plt.show() ``` Cool. So we can definitely see a clear transition effect between rules. ## Comparison with LIME Although we have some pretty clear evidence that gradient explanations are descriptive for our MLP on this simple dataset, let's make sure LIME produces similar results. We'll also do a very basic benchmark to see how long each of the respective methods take. ``` explainer = lime.lime_tabular.LimeTabularExplainer( Xt, feature_names=list(range(len(Xt[0]))), class_names=[0,1]) import time t1 = time.clock() lime_explanations = [ explainer.explain_instance(Xt[i], mlp_plain.predict_proba, top_labels=1) for i in range(25) ] t2 = time.clock() input_grads = mlp_plain.input_gradients(Xt[:25]) t3 = time.clock() print('LIME took {:.6f}s/example'.format((t2-t1)/25.)) print('grads took {:.6f}s/example, which is {:.0f}x faster'.format((t3-t2)/25., (t2-t1)/float(t3-t2))) preds = mlp_plain.predict(Xt[:25]) lime_exps = [LocalLinearExplanation.from_lime(Xt[i], preds[i], lime_explanations[i]) for i in range(25)] grad_exps = [LocalLinearExplanation(Xt[i], preds[i], input_grads[i]) for i in range(25)] plt.subplot(121) plt.title('LIME', fontsize=16) explanation_grid(lime_exps, imgshape, 3) plt.subplot(122) plt.title(r'$M_{0.67}\left[f_X\right]$', fontsize=16) explanation_grid(grad_exps, imgshape, 3) plt.show() ``` So our explanation methods agree somewhat closely, which is good to see. Also, gradients are significantly faster. ## Learning from less data Do explanations allow our model to learn with less data? Separately, we trained many models on increasing fractions of the dataset with different annotations; some penalizing the corners/top-middle and some penalizing everything but the corners/top-middle. 
Let's see how each version of the model performs: ``` import pickle data_counts = pickle.load(open('../data/color_data_counts.pkl', 'rb')) normals_by_count = pickle.load(open('../data/color_normals_by_count.pkl', 'rb')) pro_r1s_by_count = pickle.load(open('../data/color_pro_r1s_by_count.pkl', 'rb')) pro_r2s_by_count = pickle.load(open('../data/color_pro_r2s_by_count.pkl', 'rb')) anti_r1s_by_count = pickle.load(open('../data/color_anti_r1s_by_count.pkl', 'rb')) anti_r2s_by_count = pickle.load(open('../data/color_anti_r2s_by_count.pkl', 'rb')) def score_all(ms): return [m.score(Xt,yt) for m in realize(ms)] def realize(mlp_params): return [MultilayerPerceptron.from_params(p) for p in mlp_params] sc_normal = score_all(normals_by_count) sc_pro_r1 = score_all(pro_r1s_by_count) sc_pro_r2 = score_all(pro_r2s_by_count) sc_anti_r1 = score_all(anti_r1s_by_count) sc_anti_r2 = score_all(anti_r2s_by_count) from matplotlib import ticker def plot_A(A): plt.gca().set_xticks([]) plt.gca().set_yticks([]) plt.imshow((A[0].reshape(5,5,3) * 255).astype(np.uint8), interpolation='none') for i in range(5): for j in range(5): if A[0].reshape(5,5,3)[i][j][0]: plt.text(j,i+0.025,'1',ha='center',va='center',fontsize=8) else: plt.text(j,i+0.025,'0',ha='center',va='center',color='white',fontsize=8) gridsize = (4,9) plt.figure(figsize=(10,5)) cs=3 plt.subplot2grid(gridsize, (0,2*cs)) plt.title('Pro-Rule 1') plot_A(~E2) plt.subplot2grid(gridsize, (1,2*cs)) plt.title('Pro-Rule 2') plot_A(~E1) plt.subplot2grid(gridsize, (2,2*cs)) plt.title('Anti-Rule 1') plot_A(E2) plt.subplot2grid(gridsize, (3,2*cs)) plt.title('Anti-Rule 2') plot_A(E1) plt.subplot2grid(gridsize, (0,0), rowspan=4, colspan=cs) plt.title('Learning Rule 1 with $A$') plt.errorbar(data_counts, sc_normal, label=r'Normal', lw=2) plt.errorbar(data_counts, sc_pro_r1, label=r'Pro-Rule 1', marker='H') plt.errorbar(data_counts, sc_anti_r2, label=r'Anti-Rule 2', marker='^') plt.xscale('log') plt.ylim(0.5,1) plt.ylabel('Test Accuracy') 
plt.xlabel('# Training Examples') plt.legend(loc='best', fontsize=10) plt.gca().xaxis.set_major_formatter(ticker.FormatStrFormatter("%d")) plt.gca().set_xticks([10,100,1000]) plt.subplot2grid(gridsize, (0,cs), rowspan=4, colspan=cs) plt.title('Learning Rule 2 with $A$') plt.gca().set_yticklabels([]) plt.errorbar(data_counts, sc_normal, label=r'Normal', lw=2) plt.errorbar(data_counts, sc_pro_r2, label=r'Pro-Rule 2', marker='H') plt.errorbar(data_counts, sc_anti_r1, label=r'Anti-Rule 1', marker='^') plt.xscale('log') plt.ylim(0.5,1) plt.xlabel('# Training Examples') plt.legend(loc='best', fontsize=10) plt.gca().xaxis.set_major_formatter(ticker.FormatStrFormatter("%d")) plt.show() def improvement_over_normal(scores, cutoff): norm = data_counts[next(i for i,val in enumerate(sc_normal) if val > cutoff)] comp = data_counts[next(i for i,val in enumerate(scores) if val > cutoff)] return norm / float(comp) def print_improvement(name, scores, cutoff): print('Extra data for normal model to reach {:.2f} accuracy vs. {}: {:.2f}'.format( cutoff, name, improvement_over_normal(scores, cutoff))) print_improvement('Anti-Rule 2', sc_anti_r2, 0.8) print_improvement('Anti-Rule 2', sc_anti_r2, 0.9) print_improvement('Anti-Rule 2', sc_anti_r2, 0.95) print_improvement('Anti-Rule 2', sc_anti_r2, 0.99) print('') print_improvement('Pro-Rule 1', sc_pro_r1, 0.8) print_improvement('Pro-Rule 1', sc_pro_r1, 0.9) print_improvement('Pro-Rule 1', sc_pro_r1, 0.95) print_improvement('Pro-Rule 1', sc_pro_r1, 0.99) print('') print_improvement('Pro-Rule 2', sc_pro_r2, 0.9) print_improvement('Pro-Rule 2', sc_pro_r2, 0.95) print_improvement('Pro-Rule 2', sc_pro_r2, 0.97) print('') print_improvement('Anti-Rule 1', sc_anti_r1, 0.7) print_improvement('Anti-Rule 1', sc_anti_r1, 0.8) print_improvement('Anti-Rule 1', sc_anti_r1, 0.9) ``` Generally, we learn better classifiers with less data using explanations (especially in the Pro-Rule 1 case, where we provide the most information). 
Biasing against the top-middle or against everything but the corners / top-middle tends to give us more accurate classifiers. Biasing against the corners, however, gives us _lower_ accuracy until we obtain more examples. This may be because it's an inherently harder rule to learn; there are only 4 ways that all corners can match, but $4*3*2=24$ ways the top-middle pixels can differ. ## Investigating cutoffs We chose a 0.67 cutoff for most of our training a bit arbitrarily, so let's just investigate that briefly: ``` def M(input_gradients, cutoff=0.67): return np.array([np.abs(e) > cutoff*np.abs(e).max() for e in input_gradients]).astype(int).ravel() grads = mlp_plain.input_gradients(Xt) grads2 = mlp_rule1.input_gradients(Xt) cutoffs = np.linspace(0,1,100) cutoff_pcts = np.array([M(grads, c).sum() / float(len(grads.ravel())) for c in cutoffs]) cutoff_pcts2 = np.array([M(grads2, c).sum() / float(len(grads2.ravel())) for c in cutoffs]) plt.plot(cutoffs, cutoff_pcts, label='$A=0$') plt.plot(cutoffs, cutoff_pcts2, label='$A$ against corners') plt.legend(loc='best') plt.xlabel('Cutoff') plt.ylabel('Mean fraction of qualifying gradient entries') plt.yticks(np.linspace(0,1,21)) plt.yscale('log') plt.axhline(0.06, ls='--', c='red') plt.axvline(0.67, ls='--', c='blue') plt.title('-- Toy color dataset- --\n# qualifying entries falls exponentially\n0.67 cutoff takes the top ~6%') plt.show() ``` On average, the number of elements we keep falls exponentially without any clear kink in the curve, so perhaps our arbitrariness is justified, though it's problematic that it exists in the first place.
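The cutoff behaviour is easy to reproduce on stand-in data. This is a self-contained re-statement of the same thresholding rule as the `M` helper above, run on random gradients rather than the MLP's, just to show how the kept fraction shrinks as the cutoff rises:

```python
import numpy as np

def qualifying_fraction(grads, cutoff):
    """Fraction of entries with |grad| above cutoff * (per-example max |grad|)."""
    thresh = cutoff * np.abs(grads).max(axis=1, keepdims=True)
    return (np.abs(grads) > thresh).mean()

rng = np.random.default_rng(0)
grads = rng.normal(size=(100, 75))  # stand-in for flattened 5x5x3 input gradients
for c in (0.0, 0.33, 0.67, 0.9):
    print(c, qualifying_fraction(grads, c))
```

Because the qualifying sets are nested (anything kept at a high cutoff is kept at a lower one), the fraction is monotonically decreasing in the cutoff; the shape of the falloff, not its monotonicity, is what the plot above interrogates.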
``` import os import re import torch import pickle import pandas as pd import numpy as np from tqdm.auto import tqdm tqdm.pandas() ``` # 1. Pre-processing ### Create a combined dataframe > This creates a dataframe containing the image IDs & labels for both original images provided by the Bristol Myers Squibb pharmaceutical company, and the augmentations generated for each original image. ``` train_df = pd.read_csv('../../../../../../../../../Downloads/train/train_labels.csv') ``` ### InChI pre-processing > This, firstly, splits the first part of the InChI string (the chemical formula) into sequences of text and numbers. Secondly, this splits the second part of the InChI string (the other layers) into sequences of text and numbers. ``` def split_inchi_formula(formula: str) -> str: """ This function splits the chemical formula (in the first layer of InChI) into its separate element and number components. :param formula: chemical formula, e.g. C13H20OS :type formula: string :return: split chemical formula :rtype: string """ string = '' # for each chemical element in the formula for i in re.findall(r"[A-Z][^A-Z]*", formula): # return each separate element, i.e. text elem = re.match(r"\D+", i).group() # return each separate number num = i.replace(elem, "") # add either the element or both element and number (space-separated) to the string if num == "": string += f"{elem} " else: string += f"{elem} {str(num)} " return string.rstrip(' ') def split_inchi_layers(layers: str) -> str: """ This function splits the layers (following the first layer of InChI) into separate element and number components. :param layers: layer string, e.g. c1-9(2)8-15-13-6-5-10(3)7-12(13)11(4)14/h5-7,9,11,14H,8H2,1-4H3 :type layers: string :return: split layer info :rtype: string """ string = '' # for each layer in layers for i in re.findall(r"[a-z][^a-z]*", layers): # get the character preceding the layer info elem = i[0] # get the number string succeeding the character num = i.replace(elem, "").replace("/", "") num_string = '' # for each number string for j in re.findall(r"[0-9]+[^0-9]*", num): # get the list of numbers num_list = list(re.findall(r'\d+', j)) # get the first number _num = num_list[0] # add the number string to the overall result if j == _num: num_string += f"{_num} " else: extra = j.replace(_num, "") num_string += f"{_num} {' '.join(list(extra))} " string += f"/{elem} {num_string}" return string.rstrip(' ') ``` ### Tokenize texts and predict captions > This tokenizes each text by converting it to a sequence of characters. Backward compatibility is also maintained, i.e. sequence to text conversion. Image caption predictions also take place within the Tokenizer class. ``` class Tokenizer(object): def __init__(self): # string to integer mapping self.stoi = {} # integer to string mapping self.itos = {} def __len__(self) -> int: """ This method returns the length of the token:index map. :return: length of map :rtype: int """ # return the length of the map return len(self.stoi) def fit_on_texts(self, texts: list) -> None: """ This method creates a vocabulary of all tokens contained in the provided texts, and updates the mapping of token to index, and index to token. 
:param texts: list of texts :type texts: list """ # create a storage for all tokens vocab = set() # add tokens from each text to vocabulary for text in texts: vocab.update(text.split(' ')) # sort the vocabulary in alphabetical order vocab = sorted(vocab) # add start, end and pad for sentence vocab.append('<sos>') vocab.append('<eos>') vocab.append('<pad>') # update the string to integer mapping, where integer is the index of the token for i, s in enumerate(vocab): self.stoi[s] = i # reverse the previous vocabulary to create integer to string mapping self.itos = {item[1]: item[0] for item in self.stoi.items()} def text_to_sequence(self, text: str) -> list: """ This method converts the given text to a list of its individual tokens, including start and end of string symbols. :param text: input textual data :type text: str :return: list of tokens :rtype: list """ # storage to append symbols to sequence = [] # add the start of string symbol to storage sequence.append(self.stoi['<sos>']) # add each token in text to storage for s in text.split(' '): sequence.append(self.stoi[s]) # add the end of string symbol to storage sequence.append(self.stoi['<eos>']) return sequence def texts_to_sequences(self, texts: list) -> list: """ This method converts each text in the provided list into sequences of characters. Each sequence is appended to a list and the said list is returned. :param texts: a list of input texts :type texts: list :return: a list of sequences :rtype: list """ # storage to append sequences to sequences = [] # for each text do for text in texts: # convert the text to a list of characters sequence = self.text_to_sequence(text) # append the lists of characters to an aggregated list storage sequences.append(sequence) return sequences def sequence_to_text(self, sequence: list) -> str: """ This method converts the sequence of characters back into text. 
:param sequence: list of characters :type sequence: list :return: text :rtype: str """ # join the characters with no space in between return ''.join(list(map(lambda i: self.itos[i], sequence))) def sequences_to_texts(self, sequences: list) -> list: """ This method converts each provided sequence into text and returns all texts inside a list. :param sequences: list of character sequences :type sequences: list :return: list of texts :rtype: list """ # storage for texts texts = [] # convert each sequence to text and append to storage for sequence in sequences: text = self.sequence_to_text(sequence) texts.append(text) return texts def predict_caption(self, sequence: list) -> str: """ This method predicts the caption by adding each symbol in sequence to a resulting string. This keeps happening up until the end of sentence or padding is met. :param sequence: list of characters :type sequence: list :return: image caption :rtype: string """ # storage for the final caption caption = '' # for each index in a sequence of symbols for i in sequence: # if symbol is the end of sentence or padding, break if i == self.stoi['<eos>'] or i == self.stoi['<pad>']: break # otherwise, add the symbol to the final caption caption += self.itos[i] return caption def predict_captions(self, sequences: list) -> list: """ This method predicts the captions for each sequence in a list of sequences. 
:param sequences: list of sequences :type sequences: list :return: list of final image captions :rtype: list """ # storage for captions captions = [] # for each sequence, do for sequence in sequences: # predict the caption per sequence caption = self.predict_caption(sequence) # append to the storage of captions captions.append(caption) return captions # split the InChI string with the backslash delimiter train_df['InChI_chemical_formula'] = train_df['InChI'].apply(lambda x: x.split('/')[1]) ``` ### Pre-process > This performs all preprocessing steps, mainly: (1) converting InChI string to space separated list of elements, (2) tokenizing the InChI string by creating lists of elements, and (3) computing the actual lengths of each such list. The results are returned in `train_df`. ``` # split the InChI string into the chemical formula part and the other layers part train_df['InChI_text'] = ( train_df['InChI_chemical_formula'].apply(split_inchi_formula) + ' ' + train_df['InChI'].apply(lambda x: '/'.join(x.split('/')[2:])).apply(split_inchi_layers).values + ' ' + train_df['InChI'].apply(lambda x: x[x.find('/h'):]).apply(split_inchi_layers).values ) # adjust for cases where hydrogen was not found and NaN was returned for idx in range(len(train_df['InChI_text'])): if '/h' not in train_df.loc[idx, 'InChI']: train_df.loc[idx, 'InChI_text'] = ( split_inchi_formula(train_df.loc[idx, 'InChI_chemical_formula']) + ' ' + split_inchi_layers('/'.join(train_df.loc[idx, 'InChI'].split('/')[2:]) ) ) # save the train_df in a separate csv train_df.to_csv('../../../data/train_df.csv') # create a tokenizer class tokenizer = Tokenizer() # create a vocabulary of all InChI tokens tokenizer.fit_on_texts(train_df['InChI_text'].values) # save the tokenizer torch.save(tokenizer, '../../../data/tokenizer.pth') # store all sequence lengths lengths = [] # creates a progress bar around the iterable tk = tqdm(train_df['InChI_text'].values, total=len(train_df)) # for each text, i.e. 
InChI string, in the iterable, do for text in tk: # convert text to sequence of characters seq = tokenizer.text_to_sequence(text) # update the caption length (reduced by 2 for <end> and <pad>) and append to the aggregated storage length = len(seq) - 2 lengths.append(length) # write down the lengths in the dataframe train_df['InChI_length'] = lengths # save as a pickle file train_df.to_pickle('../../../data/train.pkl') print('Saved the train dataframe as a pickle file.') ```
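As a quick sanity check of the formula splitting, here is a re-typed, minimal equivalent of `split_inchi_formula` above (self-contained, so it can be run outside the notebook; the regex logic is the same, only the string assembly differs):

```python
import re

def split_formula(formula: str) -> str:
    """Split a chemical formula such as 'C13H20OS' into 'C 13 H 20 O S'."""
    parts = []
    # one token per element: an uppercase letter plus everything up to the next one
    for token in re.findall(r"[A-Z][^A-Z]*", formula):
        elem = re.match(r"\D+", token).group()  # element symbol (non-digits)
        num = token[len(elem):]                 # trailing count, possibly empty
        parts.append(elem if num == "" else f"{elem} {num}")
    return " ".join(parts)

print(split_formula("C13H20OS"))  # -> C 13 H 20 O S
```

Tokens without a count (like the O and S here) are emitted bare, which is why the tokenizer's vocabulary can treat element symbols and counts as separate tokens.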
``` import functools import pathlib import numpy as np import matplotlib.pyplot as plt import shapely.geometry import skimage.draw import tensorflow as tf import pydicom import pymedphys import pymedphys._dicom.structure as dcm_struct # Put all of the DICOM data here, file structure doesn't matter: data_path_root = pathlib.Path.home().joinpath('.data/dicom-ct-and-structures') dcm_paths = list(data_path_root.rglob('**/*.dcm')) dcm_headers = [] for dcm_path in dcm_paths: dcm_headers.append(pydicom.read_file( dcm_path, force=True, specific_tags=['SOPInstanceUID', 'SOPClassUID'])) ct_image_paths = { header.SOPInstanceUID: path for header, path in zip(dcm_headers, dcm_paths) if header.SOPClassUID.name == "CT Image Storage" } structure_set_paths = { header.SOPInstanceUID: path for header, path in zip(dcm_headers, dcm_paths) if header.SOPClassUID.name == "RT Structure Set Storage" } # names = set() # for uid, path in structure_set_paths.items(): # dcm = pydicom.read_file( # path, force=True, specific_tags=['StructureSetROISequence']) # for item in dcm.StructureSetROISequence: # names.add(item.ROIName) names_map = { 'BB': "bite_block", 'Bladder': "bladder", "Bladder_obj": None, "Bowel": 'bowel', "Bowel_obj": None, "Box Adapter": None, "BoxAdaptor": None, "Brain": "brain", "Brainstem": "brainstem", "brainstem": "brainstem", "Bulla Lt": "bulla_left", "L bulla": "bulla_left", "Bulla Rt": "bulla_right", "Bulla L": "bulla_left", "Bulla Left": "bulla_left", "Bulla R": "bulla_right", "Bulla Right": "bulla_right", "R bulla": "bulla_right", "CTV": None, "CTV Eval": None, "CTV thyroids": None, "CTVCT": None, "CTVMRI": None, "CTVSmall": None, "CTVeval": None, "CTVnew": None, "Chiasm": "chiasm", "Colon": "colon", "colon": "colon", "Colon_obj": None, "Cord": "spinal_cord", "SPINAL CORD": "spinal_cord", "Spinal Cord": "spinal_cord", "Cord PRV": None, "Couch Edge": None, "Couch Foam Half Couch": None, "Couch Outer Half Couch": None, "GTV": None, "24.000Gy": None, "15.000Gy_AH": None, 
"15.000Gy_NC": None, "15.000Gy_v": None, "30.000Gy_AH": None, "30.000Gy_NC": None, "30.000Gy_v": None, "95%_Large": None, "95.00%_SMALL": None, "BowelObj_Large": None, "BowelObj_small": None, "AdrenalGTV": None, "Bone_or": None, "BrainObj": None, "CTV1": None, "CTV_LN": None, "CTV_obj": None, "CTV_uncropped": None, "CTVmargin": None, "CTVmargin_eval": None, "CTVobj": None, "CTVobjnew": None, "CTVoptimise": None, "CTVoptimisenew": None, "Cauda equina": "cauda_equina", "GTV LN": None, "GTV thyroids": None, "GTV+SCAR": None, "GTV-2": None, "GTV/scar": None, "GTVCT": None, "GTVMRI": None, "GTV_Combined": None, "GTVcombined": None, "GTVobj": None, "GTVoptimise": None, "Heart": "heart", "Heart/GVs": None, "INGUINALobj": None, "Implant": None, "Implant_Avoid": None, "InguinalLn": None, "Kidney Lt": "kidney_left", "Lkidney": "kidney_left", "Kidney Rt": "kidney_right", "Rkidney": "kidney_right", "LN": None, "LN GTV": None, "LN Mandibular": None, "LN Retropharyngeal": None, "LNCTV": None, "LNeval": None, "Lacrimal Lt": "lacrimal_left", "Lacrimal Rt": "lacrimal_right", "Larynx": "larynx", "Larynx/trachea": None, "Liver": "liver", "Lung Lt": "lung_left", "Lung Left": "lung_left", "Lung Rt": "lung_right", "Lung_Combined": None, "Lung_L": "lung_left", "Lung_R": "lung_right", "Lung Right": "lung_right", "Oesophagus": "oesophagus", "Esophagus": 'oesophagus', "esophagus": 'oesophagus', "OD": "lens_right", "OD Lens": "lens_right", "Lens OD": "lens_right", "ODlens": "lens_right", "OS": "lens_left", "OS lens": "lens_left", "Lens OS": "lens_left", "OSlens": "lens_left", "OpPathPRV": None, "L optic N": "optic_nerve_left", "OpticNLeft": "optic_nerve_left", "LopticN": "optic_nerve_left", "OpticL": "optic_nerve_left", "Loptic": "optic_nerve_left", "OpticNRight": "optic_nerve_right", "OpticR": "optic_nerve_right", "R optic N": "optic_nerve_right", "Roptic": "optic_nerve_right", "RopticN": "optic_nerve_right", "PTV": None, "PTV LN eval": None, "PTV Prostate": None, "PTV bladder": None, "PTV 
crop": None, "PTV eval": None, "PTV nodes": None, "PTV thyroids": None, "PTV thyroid eval": None, "PTV uncropped": None, "PTV+2cm": None, "PTV+4cm": None, "PTV_Combined": None, "PTV_INGUINAL": None, 'Pituitary': 'pituitary', "Prostate": 'prostate', "prostate": 'prostate', "Rectum": 'rectum', "OpticPathway": 'optic_pathway', "Small Bowel": "small_bowel", "Spleen": "spleen", "Stomach": "stomach", "Thyroid": "thyroid", "Tongue": "tongue", "tongue": "tongue", "Trachea": "trachea", "trachea": "trachea", "Urethra": "urethra", "Vacbag": "vacuum_bag", "vacbag": "vacuum_bag", "patient": "patient", "testicles": "testicles" } ignore_list = [ 'CTV start', 'CTV_Combined', 'ColonObj_large', 'ColonObj_small', 'CordPRV', 'CORDprv', 'Couch Foam Full Couch', 'Couch Outer Full Couch', 'Couch Parts Full Couch', 'LnCTV', 'LnGTV', 'Mand Ln', 'OpPathway', 'PTV Ln', 'PTV Ln PreSc', 'PTV total combined', 'PTVCombined_Large', 'PTVCombined_Small', 'PTVLarge', 'PTVSmall', 'PTV_Eval', 'PTV_LN', 'PTV_LN_15/2', 'PTV_LNeval', 'PTV_eval', 'PTV_eval_small', 'PTV_obj', 'PTVcombined', 'PTVcombined_15/2', 'PTVcombined_Eval', 'PTVeval', 'PTVeval_combined', 'PTVnew', 'PTVobj', 'PTVobjnew', 'PTVoptimise', 'PTVoptimisenew', 'PTVpituitary', 'PTVprimary', 'PTVsmooth', 'Patient small', 'Patient-bolus', 'R prescap Ln', 'Rectal_Syringe', 'RectumObj_large', 'RectumObj_small', 'Rectum_obj', 'RetroLn', 'CTVcombined', 'CTVlns', 'CTVprimary', 'CombinedLung', 'Cord-PTV', 'External', 'GTVscar', 'HeartCTV', 'HeartGTV', 'HeartPTV', 'Inguinal LnCTV', 'Inguinal+2cm', 'InguinalPTV_eval', 'Kidneys (Combined)', 'LN CTV', 'LN_PTV_eval', 'LN_ring', 'Lung', 'Lung total', 'LungGTV1', 'LungGTV2', 'LungGTV3', 'LungGTV4', 'LungGTV5', 'LungGTVMIP', 'LungPTV', 'MandiblePTV_eval', 'Mandible_ring', 'Nasal PTVeval', 'Nasal_ring', 'ODnew', 'OR_ Bone', 'OR_Metal', 'OR_Tissue', 'PTV Eval', 'PTVLn', 'PTV_Combined_eval', 'PTV_Distal', 'PTV_Distal_Crop', 'PTV_LN_Inguinal', 'PTV_LN_Popliteal', 'PTV_LN_Smooth', 'PTV_Sup', 'PTVdistal_eval', 
'PTVsubSIB2mm', 'Popliteal LnCTV', 'Popliteal+2cm', 'SC_Olap', 'SC_Olap2', 'SC_Olapnew', 'SC_Olapnew2', 'SIB', 'Scar', 'Scar marker', 'Skin Spare', 'Skin Sparing', 'Skin spare', 'Small Bowel Replan', 'Small Bowel replan', 'SmallPTV', 'Small_PTV_Combined', 'Structure1', 'Structure2', 'Structure3', 'Structure4', 'Structure5', 'Syringe fill', 'TEST', 'Tissue_or', 'Tracheaoesophagus', 'Urethra/vulva', 'Urinary System', 'bolus_5mm', 'bowel_obj', 'brain-PTV', 'brain-ptv', 'combined PTV', 'ctv cropped', 'lungs', 'p', 'patient & bolus', 'patient no bolus', 'patient&Bolus', 'patient&bolus', 'patient-bolus', 'patient_1', 'patientbol', 'skin spare', 'urethra_PRV', 'whole lung' ] for key in ignore_list: names_map[key] = None # mapped_names = set(names_map.keys()) # print(mapped_names.difference(names)) # names.difference(mapped_names) set([item for key, item in names_map.items()]).difference({None}) # structure_uid = list(structure_set_paths.items())[0][0] structure_uid = '1.2.840.10008.5.1.4.1.1.481.3.1574822743' structure_set_path = structure_set_paths[structure_uid] structure_set_path structure_set = pydicom.read_file( structure_set_path, force=True, specific_tags=['ROIContourSequence', 'StructureSetROISequence']) number_to_name_map = { roi_sequence_item.ROINumber: names_map[roi_sequence_item.ROIName] for roi_sequence_item in structure_set.StructureSetROISequence if names_map[roi_sequence_item.ROIName] is not None } number_to_name_map contours_by_ct_uid = {} for roi_contour_sequence_item in structure_set.ROIContourSequence: try: structure_name = number_to_name_map[roi_contour_sequence_item.ReferencedROINumber] except KeyError: continue for contour_sequence_item in roi_contour_sequence_item.ContourSequence: ct_uid = contour_sequence_item.ContourImageSequence[0].ReferencedSOPInstanceUID try: _ = contours_by_ct_uid[ct_uid] except KeyError: contours_by_ct_uid[ct_uid] = dict() try: contours_by_ct_uid[ct_uid][structure_name].append(contour_sequence_item.ContourData) except 
KeyError: contours_by_ct_uid[ct_uid][structure_name] = [contour_sequence_item.ContourData] # ct_uid = list(contours_by_ct_uid.keys())[50] ct_uid = '1.2.840.113704.1.111.2804.1556591059.12956' ct_path = ct_image_paths[ct_uid] dcm_ct = pydicom.read_file(ct_path, force=True) dcm_ct.file_meta.TransferSyntaxUID = pydicom.uid.ImplicitVRLittleEndian def get_image_transformation_parameters(dcm_ct): # From Matthew Coopers work in ../old/data_generator.py position = dcm_ct.ImagePositionPatient spacing = [x for x in dcm_ct.PixelSpacing] + [dcm_ct.SliceThickness] orientation = dcm_ct.ImageOrientationPatient dx, dy, *_ = spacing Cx, Cy, *_ = position Ox, Oy = orientation[0], orientation[4] return dx, dy, Cx, Cy, Ox, Oy contours_by_ct_uid[ct_uid].keys() organ = 'urethra' original_contours = contours_by_ct_uid[ct_uid][organ] def reduce_expanded_mask(expanded_mask, img_size, expansion): return np.mean(np.mean( tf.reshape(expanded_mask, (img_size, expansion, img_size, expansion)), axis=1), axis=2) def calculate_aliased_mask(contours, dcm_ct, expansion=5): dx, dy, Cx, Cy, Ox, Oy = get_image_transformation_parameters(dcm_ct) ct_size = np.shape(dcm_ct.pixel_array) x_grid = np.arange(Cx, Cx + ct_size[0]*dx*Ox, dx*Ox) y_grid = np.arange(Cy, Cy + ct_size[1]*dy*Oy, dy*Oy) new_ct_size = np.array(ct_size) * expansion expanded_mask = np.zeros(new_ct_size) for xyz in contours: x = np.array(xyz[0::3]) y = np.array(xyz[1::3]) z = xyz[2::3] assert len(set(z)) == 1 r = (((y - Cy) / dy * Oy)) * expansion + (expansion - 1) * 0.5 c = (((x - Cx) / dx * Ox)) * expansion + (expansion - 1) * 0.5 expanded_mask = np.logical_or(expanded_mask, skimage.draw.polygon2mask(new_ct_size, np.array(list(zip(r, c))))) mask = reduce_expanded_mask(expanded_mask, ct_size[0], expansion) mask = 2 * mask - 1 return x_grid, y_grid, mask def get_contours_from_mask(x_grid, y_grid, mask): cs = plt.contour(x_grid, y_grid, mask, [0]); contours = [ path.vertices for path in cs.collections[0].get_paths() ] plt.close() return 
contours x_grid, y_grid, mask_with_aliasing = calculate_aliased_mask(original_contours, dcm_ct) _, _, mask_without_aliasing = calculate_aliased_mask(original_contours, dcm_ct, expansion=1) contours_with_aliasing = get_contours_from_mask(x_grid, y_grid, mask_with_aliasing) contours_without_aliasing = get_contours_from_mask(x_grid, y_grid, mask_without_aliasing) plt.figure(figsize=(10,10)) for xyz in original_contours: x = np.array(xyz[0::3]) y = np.array(xyz[1::3]) plt.plot(x, y) plt.axis('equal') plt.figure(figsize=(10,10)) for contour in contours_with_aliasing: plt.plot(contour[:,0], contour[:,1]) plt.plot(contour[:,0], contour[:,1]) plt.axis('equal') plt.figure(figsize=(10,10)) for contour in contours_with_aliasing: plt.plot(contour[:,0], contour[:,1]) plt.plot(contour[:,0], contour[:,1]) for xyz in original_contours: x = np.array(xyz[0::3]) y = np.array(xyz[1::3]) plt.plot(x, y) plt.axis('equal') plt.figure(figsize=(10,10)) for contour in contours_without_aliasing: plt.plot(contour[:,0], contour[:,1]) plt.plot(contour[:,0], contour[:,1]) plt.axis('equal') plt.figure(figsize=(10,10)) for contour in contours_without_aliasing: plt.plot(contour[:,0], contour[:,1]) plt.plot(contour[:,0], contour[:,1]) for xyz in original_contours: x = np.array(xyz[0::3]) y = np.array(xyz[1::3]) plt.plot(x, y) plt.axis('equal') ```
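The anti-aliased masks plotted above come from rasterising each contour on a grid `expansion` times finer than the CT grid and then mean-pooling back down, which is what `reduce_expanded_mask` does with its reshape-and-mean. A minimal NumPy-only sketch of that reduction (the helper name `block_average` is hypothetical; the notebook's version uses `tf.reshape`):

```python
import numpy as np

def block_average(expanded_mask, img_size, expansion):
    """Mean-pool an (img_size*expansion)^2 mask back to img_size^2.

    Each CT pixel becomes the mean of its expansion x expansion sub-pixels,
    giving fractional (anti-aliased) coverage values.
    """
    m = expanded_mask.reshape(img_size, expansion, img_size, expansion)
    return m.mean(axis=(1, 3))

# A 2x2 image rasterised at 3x resolution: one fully covered pixel,
# one partially covered pixel, two empty pixels.
expanded = np.zeros((6, 6))
expanded[:3, :3] = 1.0   # top-left pixel fully inside the contour
expanded[:3, 3] = 1.0    # top-right pixel: 3 of its 9 sub-pixels covered
mask = block_average(expanded, img_size=2, expansion=3)
print(mask)  # [[1.0, 1/3], [0.0, 0.0]]
```

The fractional values at contour edges are exactly what lets `plt.contour(..., [0])` recover a smooth boundary after the `2 * mask - 1` rescaling.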
``` import numpy as np import tensorflow as tf import matplotlib.pyplot as plt from keras.datasets import mnist (x_train, y_train), _ = mnist.load_data() x_train = x_train / 255.0 x_train = np.expand_dims(x_train, axis=3) print(x_train.shape) print(y_train.shape) num_classes = 10 plt.imshow(np.squeeze(x_train[10])) plt.show() print(y_train[10]) '''def generator(z, y, reuse=False, verbose=True): with tf.variable_scope("generator", reuse=reuse): # Concatenate noise and conditional one-hot variable inputs = tf.concat([z, y], 1) # FC layer fc1 = tf.layers.dense(inputs=inputs, units= 7 * 7 * 128, activation=tf.nn.leaky_relu) reshaped = tf.reshape(fc1, shape=[-1, 7, 7, 128]) upconv1 = tf.layers.conv2d_transpose(inputs=reshaped, filters=32, kernel_size=[5,5], kernel_initializer=tf.truncated_normal_initializer(stddev=0.1), strides=[2,2], activation=tf.nn.leaky_relu, padding='same', name='upscore1') upconv2 = tf.layers.conv2d_transpose(inputs=upconv1, filters=1, kernel_size=[3,3], kernel_initializer=tf.truncated_normal_initializer(stddev=0.1), strides=[2,2], activation=None, padding='same', name='upscore2') prob = tf.nn.sigmoid(upconv2) if verbose: print("\nGenerator:") print(inputs) print(fc1) print(reshaped) print(upconv1) print(upconv2) return prob def discriminator(x, y, reuse=False, verbose=True): with tf.variable_scope("discriminator", reuse=reuse): conv1 = tf.layers.conv2d(inputs=x, filters=64, kernel_size=[5,5], kernel_initializer=tf.truncated_normal_initializer(stddev=0.1), strides=[1,1], activation=tf.nn.leaky_relu, padding='same', name='Conv1') pool1 = tf.layers.max_pooling2d(inputs=conv1, pool_size=[2,2], strides=[2,2], padding='same', name='Pool1') conv2 = tf.layers.conv2d(inputs=pool1, filters=32, kernel_size=[3,3], kernel_initializer=tf.truncated_normal_initializer(stddev=0.1), strides=[2,2], activation=tf.nn.leaky_relu, padding='same', name='Conv2') pool2 = tf.layers.max_pooling2d(inputs=conv2, pool_size=[2,2], strides=[2,2], padding='same', name='Pool2') 
flattened = tf.layers.flatten(pool2) concatened = tf.concat([flattened, y], 1) fc1 = tf.layers.dense(inputs=concatened, units=256, activation=tf.nn.leaky_relu) fc2 = tf.layers.dense(inputs=fc1, units=1, activation=None) prob = tf.nn.sigmoid(fc2) if verbose: print("\nDiscriminator:") print(conv1) print(pool1) print(conv2) print(pool2) print(flattened) print(concatened) print(fc1) print(fc2) return prob, fc2''' def sample_Z(batch_size, img_size): # Sample noise for generator return np.random.uniform(-1., 1., size=[batch_size, img_size]) def one_hot(batch_size, num_classes, labels): assert(batch_size == len(labels)) y_one_hot = np.zeros(shape=[batch_size, num_classes]) y_one_hot[np.arange(batch_size), labels] = 1 return y_one_hot def generator(z, y, reuse=False, verbose=True): with tf.variable_scope("generator", reuse=reuse): inputs = tf.concat([z, y], 1) fc1 = tf.layers.dense(inputs=inputs, units=256, activation=tf.nn.leaky_relu) fc2 = tf.layers.dense(inputs=fc1, units=784, activation=None) logits = tf.nn.sigmoid(fc2) if verbose: print("\nGenerator:") print(inputs) print(fc1) print(fc2) return logits def discriminator(x, y, reuse=False, verbose=True): with tf.variable_scope("discriminator", reuse=reuse): inputs = tf.concat([x, y], 1) fc1 = tf.layers.dense(inputs=inputs, units=256, activation=tf.nn.leaky_relu) fc2 = tf.layers.dense(inputs=fc1, units=1, activation=None) prob = tf.nn.sigmoid(fc2) if verbose: print("\nDiscriminator:") print(inputs) print(fc1) print(fc2) return prob, fc2 tf.reset_default_graph() # Discriminator input #X = tf.placeholder(tf.float32, shape=[None, x_train.shape[1], x_train.shape[2], 1], name='X') X = tf.placeholder(tf.float32, shape=[None, x_train.shape[1] * x_train.shape[2] * 1], name='X') # Generator noise input Z_dim = 100 Z = tf.placeholder(tf.float32, shape=[None, Z_dim], name='Z') # Generator conditional Y = tf.placeholder(tf.float32, shape=[None, num_classes], name='Y') # Print shapes print("Inputs:") print("Discriminator input: " + 
str(X)) print("Conditional variable: " + str(Y)) print("Generator input noise: " + str(Z)) # Networks gen_sample = generator(Z, Y) D_real, D_logit_real = discriminator(X, Y) D_fake, D_logit_fake = discriminator(gen_sample, Y, reuse=True, verbose=False) ``` ## Theoretical remark ### Binary cross entropy loss \begin{equation*} L(\theta) = - \frac{1}{n} \sum_{i=1}^n [y_i log(p_i) + (1 - y_i) log(1 - p_i)] \end{equation*} - Discriminator final probability is 1 => REAL IMAGE - Discriminator final probability is 0 => FAKE IMAGE Log values: - Log(1) => Loss would be 0 - Log(0+) => Loss would be to - ∞ ### Generator: Maximize D(G(z)) ### Discriminator: Maximize D(x) AND minimize D(G(z)) ``` # Losses have minus sign because I have to maximize them D_loss = - tf.reduce_mean( tf.log(D_real) + tf.log(1. - D_fake) ) G_loss = - tf.reduce_mean( tf.log(D_fake) ) # Optimizers D_var = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope='discriminator') D_optimizer = tf.train.AdamOptimizer(learning_rate=0.0005).minimize(D_loss, var_list=D_var) G_var = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope='generator') G_optimizer = tf.train.AdamOptimizer(learning_rate=0.0005).minimize(G_loss, var_list=G_var) ``` ## Training of the generator and discriminator network ``` batch_size = 64 with tf.Session() as sess: # Run the initializer saver = tf.train.Saver() sess.run(tf.global_variables_initializer()) # Epochs for n in range(30): print("Epoch " + str(n+1)) for i in range(len(x_train) // batch_size): X_tmp = np.reshape(x_train[i*batch_size:(i+1)*batch_size], (batch_size, -1)) #X_tmp = x_train[i*batch_size:(i+1)*batch_size] sampled_noise = sample_Z(batch_size, Z_dim) one_hot_sampled = one_hot(batch_size, num_classes, y_train[i*batch_size:(i+1)*batch_size]) _, D_loss_val = sess.run([D_optimizer, D_loss], feed_dict={X: X_tmp, Y: one_hot_sampled, Z: sampled_noise}) _, G_loss_val = sess.run([G_optimizer, G_loss], feed_dict={Y: one_hot_sampled, Z: sampled_noise}) if i % 300 == 0: 
print(str(D_loss_val) + " " + str(G_loss_val)) save_path = saver.save(sess, "./checkpoints/model.ckpt") ``` ## Generate images with constraint (Y) ``` generate_number = 9 with tf.Session() as sess: saver = tf.train.Saver() sess.run(tf.global_variables_initializer()) saver.restore(sess, "./checkpoints/model.ckpt") sampled_noise = sample_Z(1, Z_dim) one_hot_sampled = one_hot(1, num_classes, [generate_number]) generated = sess.run(gen_sample, feed_dict={Y: one_hot_sampled, Z: sampled_noise}) img_generated = np.reshape(generated, (28, 28)) plt.imshow(img_generated) plt.show() fig, axes = plt.subplots(4, 4) fig.subplots_adjust(hspace=0.1) with tf.Session() as sess: saver = tf.train.Saver() sess.run(tf.global_variables_initializer()) saver.restore(sess, "./checkpoints/model.ckpt") for i, ax in enumerate(axes.flat): generate_number = int(i / 4) sampled_noise = sample_Z(1, Z_dim) one_hot_sampled = one_hot(1, num_classes, [generate_number]) generated = sess.run(gen_sample, feed_dict={Y: one_hot_sampled, Z: sampled_noise}) img_generated = np.reshape(generated, (28, 28)) ax.imshow(img_generated) ax.set_xticks([]) ax.set_yticks([]) plt.show() ```
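The minus signs in the loss definitions earlier turn the minimax objectives into quantities an optimizer can minimise. A small NumPy sketch of those same two expressions, evaluated on hypothetical discriminator probabilities (outside TensorFlow):

```python
import numpy as np

def d_loss(d_real, d_fake):
    # Discriminator wants D(x) -> 1 and D(G(z)) -> 0;
    # the minus sign converts the maximisation into a loss.
    return -np.mean(np.log(d_real) + np.log(1.0 - d_fake))

def g_loss(d_fake):
    # Generator wants D(G(z)) -> 1 (the non-saturating form used above).
    return -np.mean(np.log(d_fake))

# A discriminator that separates real from fake well has a lower loss
print(d_loss(np.array([0.99]), np.array([0.01])))
# The generator's loss shrinks as it fools the discriminator
print(g_loss(np.array([0.9])), g_loss(np.array([0.1])))
```

Note the `log(0+)` behaviour from the theoretical remark: if the discriminator assigns a generated sample probability near zero, the generator loss blows up, which is why the probabilities are produced by a sigmoid and never exactly 0 or 1 in practice.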
## Accessing High Resolution Electricity Access (HREA) data with the Planetary Computer STAC API

The HREA project aims to provide open access to new indicators of electricity access and reliability across the world. Leveraging VIIRS satellite imagery with computational methods, these high-resolution data provide new tools to track progress towards reliable and sustainable energy access across the world.

This notebook provides an example of accessing HREA data using the Planetary Computer STAC API.

### Environment setup

This notebook works with or without an API key, but you will be given more permissive access to the data with an API key. The Planetary Computer Hub is pre-configured to use your API key.

```
import matplotlib.colors as colors
import matplotlib.pyplot as plt
import planetary_computer as pc
import rasterio
import rioxarray
from pystac_client import Client
from rasterio.plot import show
```

### Selecting a region and querying the API

The HREA dataset covers all of Africa as well as Ecuador. Let's pick an area of interest that covers Djibouti and query the Planetary Computer API for data coverage for the year 2019.

```
area_of_interest = {
    "type": "Polygon",
    "coordinates": [
        [
            [41.693115234375, 10.865675826639414],
            [43.275146484375, 10.865675826639414],
            [43.275146484375, 12.554563528593656],
            [41.693115234375, 12.554563528593656],
            [41.693115234375, 10.865675826639414],
        ]
    ],
}

catalog = Client.open("https://planetarycomputer.microsoft.com/api/stac/v1")

search = catalog.search(
    collections=["hrea"], intersects=area_of_interest, datetime="2019-12-31"
)

# Check how many items were returned; there could be more pages of results as well
items = [pc.sign(item) for item in search.get_items()]
print(f"Returned {len(items)} Items")
```

We found 3 items for our search. We'll grab just the one for Djibouti and see what data assets are available on it.
```
(item,) = [x for x in items if "Djibouti" in x.id]

data_assets = [
    f"{key}: {asset.title}"
    for key, asset in item.assets.items()
    if "data" in asset.roles
]

print(*data_assets, sep="\n")
```

### Plotting the data

Let's pick the variable `light-composite`, and read in the entire GeoTIFF to plot.

```
light_comp_asset = item.assets["light-composite"]
data_array = rioxarray.open_rasterio(light_comp_asset.href)

fig, ax = plt.subplots(1, 1, figsize=(14, 7), dpi=100)
show(
    data_array,
    ax=ax,
    norm=colors.PowerNorm(1, vmin=0.01, vmax=1.4),
    cmap="magma",
    title="Djibouti (2019)",
)
plt.axis("off")
plt.show()
```

### Read a window

Cloud Optimized GeoTIFFs (COGs) allow us to efficiently download and read sections of a file, rather than the entire file, when only part of the region is required. The COGs are stored on disk with an internal set of windows. You can read sections of any shape and size, but reading them in the file-defined window size is most efficient.

Let's read the same asset, but this time only request the second window.
```
# Reading only the second window of the file, as an example
i_window = 2

with rasterio.open(light_comp_asset.href) as src:
    windows = list(src.block_windows())
    print("Available windows:", *windows, sep="\n")
    _, window = windows[i_window]

section = data_array.rio.isel_window(window)

fig, xsection = plt.subplots(1, 1, figsize=(14, 7))
show(
    section,
    ax=xsection,
    norm=colors.PowerNorm(1, vmin=0.01, vmax=1.4),
    cmap="magma",
    title="Reading a single window",
)
plt.axis("off")
plt.show()
```

### Zoom in on a region within the retrieved window

Let's plot the region around the city of Dikhil, situated within that second data window, around this bounding box (in x/y coordinates, which is longitude / latitude):

```
(42.345868941491204, 11.079694223371735, 42.40420227530527, 11.138027557181712)
```

```
fig, xsection = plt.subplots(1, 1, figsize=(14, 7))
show(
    section.sel(
        x=slice(42.345868941491204, 42.40420227530527),
        y=slice(11.138027557181712, 11.079694223371735),
    ),
    ax=xsection,
    norm=colors.PowerNorm(1, vmin=0.01, vmax=1.4),
    cmap="magma",
    title="Dikhil (2019)",
)
plt.axis("off")
plt.show()
```

### Plot change over time

The HREA dataset goes back several years. Let's search again for the same area, but this time over a longer temporal span.

```
search = catalog.search(
    collections=["hrea"], intersects=area_of_interest, datetime="2012-12-31/2019-12-31"
)

items = [
    pc.sign(item).to_dict()
    for item in search.get_items()
    if "Djibouti" in item.id
]
print(f"Returned {len(items)} Items:")
```

We got 8 items this time, each corresponding to a single year. To plot the change of light intensity over time, we'll open the same asset on each of these year-items and read in the window with Dikhil. Since we're using multiple items, we'll use `stackstac` to stack them together into a single DataArray for us.
```
import stackstac

bounds_latlon = (
    42.345868941491204,
    11.079694223371735,
    42.40420227530527,
    11.138027557181712,
)

dikhil = (
    stackstac.stack(items, assets=["light-composite"], bounds_latlon=bounds_latlon)
    .squeeze()
    .compute()
    .quantile(0.9, dim=["y", "x"])
)

fig, ax = plt.subplots(figsize=(12, 6))
dikhil.plot(ax=ax)
ax.set(title="Dikhil composite light output", ylabel="Annual light output, normalized");
```
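The final `.quantile(0.9, dim=["y", "x"])` step collapses each annual composite to a single bright-end value per year. A self-contained NumPy sketch of that reduction, using seeded random data as a hypothetical stand-in for the stacked `(time, y, x)` DataArray:

```python
import numpy as np

# Hypothetical stand-in for the stacked array: 8 annual light composites.
rng = np.random.default_rng(0)
light = rng.random((8, 20, 30))  # (time, y, x)

# The 0.9 quantile over the spatial dims leaves one value per year,
# mirroring .quantile(0.9, dim=["y", "x"]) above.
per_year = np.quantile(light, 0.9, axis=(1, 2))
print(per_year.shape)  # (8,)
```

Using a high quantile rather than the mean keeps the time series focused on the brightest (electrified) pixels instead of being dominated by dark background.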
``` import os os.environ['CUDA_VISIBLE_DEVICES'] = "0" import numpy as np from matplotlib import pyplot as plt import seaborn as sns import pandas as pd from tqdm.auto import tqdm import torch from torch import nn import gin import pickle import io from sparse_causal_model_learner_rl.trainable.gumbel_switch import WithInputSwitch, sample_from_logits_simple gin.enter_interactive_mode() from sparse_causal_model_learner_rl.loss.losses import fit_loss from sparse_causal_model_learner_rl.metrics.context_rewrite import context_rewriter from sparse_causal_model_learner_rl.visual.learner_visual import graph_for_matrices ckpt = '/home/sergei/ray_results/5x5_1f1c1k_obs_rec_nonlin_gnn_gumbel_siamese_l2_kc_dec_stop_after_completes/main_fcn_d66d7_00000_0_2021-02-01_22-09-57/checkpoint_0/checkpoint' class LinearModel(nn.Module): def __init__(self, input_shape): super(LinearModel, self).__init__() self.layer = nn.Linear(in_features=10, out_features=1, bias=True) def forward(self, x): return self.layer(x) import ray ray.init(address='10.90.38.7:6379', ignore_reinit_error=True) # https://github.com/pytorch/pytorch/issues/16797 class CPU_Unpickler(pickle.Unpickler): def find_class(self, module, name): if module == 'torch.storage' and name == '_load_from_bytes': return lambda b: torch.load(io.BytesIO(b), map_location='cpu') else: return super().find_class(module, name) with open(ckpt, 'rb') as f: learner = pickle.load(f)#CPU_Unpickler(f).load()#pickle.load(f) learner.collect_steps() ctx = learner._context ox = ctx['obs_x'] oy = ctx['obs_y'] ax = ctx['action_x'] obs = ctx['obs'] obs_ns = None obs_s = None def siamese_feature_discriminator_l2(obs, decoder, margin=1.0, **kwargs): def loss(y_true, y_pred): """L2 norm for the distance, no flat.""" delta = y_true - y_pred delta = delta.pow(2) delta = delta.flatten(start_dim=1) delta = delta.sum(1) return delta # original inputs order batch_dim = obs.shape[0] # random permutation for incorrect inputs idxes = 
torch.randperm(batch_dim).to(obs.device) obs_shuffled = obs[idxes] idxes_orig = torch.arange(start=0, end=batch_dim).to(obs.device) target_incorrect = (idxes == idxes_orig).to(obs.device) delta_obs_obs_shuffled = (obs - obs_shuffled).pow(2).flatten(start_dim=1).max(1).values # distance_shuffle = loss(obs, obs_shuffled) distance_f = loss(decoder(obs), decoder(obs_shuffled)) global obs_ns, obs_s obs_ns = obs obs_s = obs_shuffled # print(torch.nn.ReLU()(margin - distance_f), torch.where) return {'loss': torch.where(~target_incorrect, torch.nn.ReLU()(margin - distance_f), distance_f).mean(), 'metrics': {'distance_plus': distance_f[~target_incorrect].mean().item(), 'distance_minus': distance_f[target_incorrect].mean().item(), 'delta_obs_obs_shuffled': delta_obs_obs_shuffled.detach().cpu().numpy(), 'same_input_frac': (1.*target_incorrect).mean().item()} } siamese_feature_discriminator_l2(**ctx) #plt.hist(np.log(siamese_feature_discriminator_l2(**ctx)['metrics']['delta_obs_obs_shuffled'])) delta = obs_s - obs_ns from collections import Counter #learner.collect_steps() learner.env.engine plt.hist(delta.abs().flatten(start_dim=1).max(1).values.cpu().numpy()) opt = torch.optim.Adam(params=learner.decoder.parameters(), lr=1e-3) from causal_util.collect_data import EnvDataCollector learner.env = learner.create_env() learner.collector = EnvDataCollector(learner.env) learner.env.engine losses = [] dplus = [] for _ in tqdm(range(1000)): learner.collect_steps() ctx = learner._context for _ in range(5): opt.zero_grad() l = siamese_feature_discriminator_l2(**ctx, margin=1.0) loss = l['loss'] loss.backward() losses.append(loss.item()) dplus.append(l['metrics']['distance_plus']) opt.step() plt.figure(figsize=(15, 5)) plt.subplot(1, 2, 1) plt.plot(losses) plt.yscale('log') plt.subplot(1, 2, 2) plt.yscale('log') plt.plot(dplus) ```
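The `torch.where` branch in `siamese_feature_discriminator_l2` implements a contrastive hinge: features of the same observation are pulled together (loss = distance), while features of shuffled, different observations are pushed at least `margin` apart (loss = relu(margin - distance)). A NumPy sketch of just that branching, with hypothetical distances:

```python
import numpy as np

def siamese_l2_hinge(dist, same_pair, margin=1.0):
    """Contrastive hinge loss, as in siamese_feature_discriminator_l2.

    same_pair marks pairs built from the same observation (pulled together);
    the remaining pairs are pushed at least `margin` apart.
    """
    push_apart = np.maximum(0.0, margin - dist)
    return np.where(same_pair, dist, push_apart).mean()

dist = np.array([0.0, 2.0, 0.0, 2.0])
same = np.array([True, True, False, False])
print(siamese_l2_hinge(dist, same))  # (0 + 2 + 1 + 0) / 4 = 0.75
```

Note that the notebook's `target_incorrect = (idxes == idxes_orig)` flags the rare shuffled pairs that happen to land back on the same index, which is why the metrics also track `same_input_frac`.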
# [ATM 623: Climate Modeling](../index.ipynb)

[Brian E. J. Rose](http://www.atmos.albany.edu/facstaff/brose/index.html), University at Albany

# Lecture 16: Modeling the seasonal cycle of surface temperature

### About these notes:

This document uses the interactive [`Jupyter notebook`](https://jupyter.org) format. The notes can be accessed in several different ways:

- The interactive notebooks are hosted on `github` at https://github.com/brian-rose/ClimateModeling_courseware
- The latest versions can be viewed as static web pages [rendered on nbviewer](http://nbviewer.ipython.org/github/brian-rose/ClimateModeling_courseware/blob/master/index.ipynb)
- A complete snapshot of the notes as of May 2017 (end of spring semester) is [available on Brian's website](http://www.atmos.albany.edu/facstaff/brose/classes/ATM623_Spring2017/Notes/index.html). [Also here is a legacy version from 2015](http://www.atmos.albany.edu/facstaff/brose/classes/ATM623_Spring2015/Notes/index.html).

Many of these notes make use of the `climlab` package, available at https://github.com/brian-rose/climlab

```
# Ensure compatibility with Python 2 and 3
from __future__ import print_function, division
```

## Contents

1. [The observed seasonal cycle from NCEP Reanalysis data](#section1)
2. [Analytical toy model of the seasonal cycle](#section2)
3. [Exploring the amplitude of the seasonal cycle with an EBM](#section3)
4. [The seasonal cycle for a planet with 90º obliquity](#section4)

____________
<a id='section1'></a>
## 1. The observed seasonal cycle from NCEP Reanalysis data
____________

Look at the observed seasonal cycle in the NCEP reanalysis data.

Read in the necessary data from the online server.
The catalog is here: <http://www.esrl.noaa.gov/psd/thredds/dodsC/Datasets/ncep.reanalysis.derived/catalog.html>

```
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import xarray as xr
import climlab
from climlab import constants as const

# Disable interactive plotting (use explicit display calls to show figures)
plt.ioff()

datapath = "http://ramadda.atmos.albany.edu:8080/repository/opendap/latest/Top/Users/BrianRose/CESM_runs/"
endstr = "/entry.das"
ncep_url = "http://www.esrl.noaa.gov/psd/thredds/dodsC/Datasets/ncep.reanalysis.derived/"

ncep_Ts = xr.open_dataset(ncep_url + "surface_gauss/skt.sfc.mon.1981-2010.ltm.nc",
                          decode_times=False)
lat_ncep = ncep_Ts.lat
lon_ncep = ncep_Ts.lon
Ts_ncep = ncep_Ts.skt
print(Ts_ncep.shape)
```

Load the topography file from CESM, just so we can plot the continents.

```
topo = xr.open_dataset(datapath + 'som_input/USGS-gtopo30_1.9x2.5_remap_c050602.nc' + endstr,
                       decode_times=False)
lat_cesm = topo.lat
lon_cesm = topo.lon
```

Make two maps: one of annual mean surface temperature, another of the seasonal range (max minus min).
```
maxTs = Ts_ncep.max(dim='time')
minTs = Ts_ncep.min(dim='time')
meanTs = Ts_ncep.mean(dim='time')

fig = plt.figure(figsize=(16, 6))

ax1 = fig.add_subplot(1, 2, 1)
cax1 = ax1.pcolormesh(lon_ncep, lat_ncep, meanTs, cmap=plt.cm.seismic)
cbar1 = plt.colorbar(cax1)
ax1.set_title('Annual mean surface temperature (degC)', fontsize=14)

ax2 = fig.add_subplot(1, 2, 2)
cax2 = ax2.pcolormesh(lon_ncep, lat_ncep, maxTs - minTs)
cbar2 = plt.colorbar(cax2)
ax2.set_title('Seasonal temperature range (degC)', fontsize=14)

for ax in [ax1, ax2]:
    ax.contour(lon_cesm, lat_cesm, topo.variables['LANDFRAC'][:], [0.5], colors='k');
    ax.axis([0, 360, -90, 90])
    ax.set_xlabel('Longitude', fontsize=14); ax.set_ylabel('Latitude', fontsize=14)

fig
```

Make a contour plot of the zonal mean temperature as a function of time.

```
Tmax = 65; Tmin = -Tmax; delT = 10
clevels = np.arange(Tmin, Tmax + delT, delT)

fig_zonobs, ax = plt.subplots(figsize=(10, 6))
cax = ax.contourf(np.arange(12) + 0.5, lat_ncep,
                  Ts_ncep.mean(dim='lon').transpose(),
                  levels=clevels, cmap=plt.cm.seismic,
                  vmin=Tmin, vmax=Tmax)
ax.set_xlabel('Month', fontsize=16)
ax.set_ylabel('Latitude', fontsize=16)
cbar = plt.colorbar(cax)
ax.set_title('Zonal mean surface temperature (degC)', fontsize=20)

fig_zonobs
```

____________
<a id='section2'></a>
## 2. Analytical toy model of the seasonal cycle
____________

What factors determine the above pattern of seasonal temperatures? How large is the winter-to-summer variation in temperature? What is its phasing relative to the seasonal variations in insolation?

We will start to examine this in a very simple zero-dimensional EBM.

Suppose the seasonal cycle of insolation at a point is

$$ Q = Q^* \sin\omega t + Q_0$$

where $\omega = 2\pi ~ \text{year}^{-1}$, $Q_0$ is the annual mean insolation, and $Q^*$ is the amplitude of the seasonal variations.
Here $\omega ~ t=0$ is spring equinox, $\omega~t = \pi/2$ is summer solstice, $\omega~t = \pi$ is fall equinox, and $\omega ~t = 3 \pi/2$ is winter solstice.

Now suppose the temperature is governed by

$$ C \frac{d T}{d t} = Q - (A + B~T) $$

so that we have a simple model

$$ C \frac{d T}{d t} = Q^* \sin\omega t + Q_0 - (A + B~T) $$

We want to ask two questions:

1. **What is the amplitude of the seasonal temperature variation?**
2. **When does the temperature maximum occur?**

We will look for an oscillating solution

$$ T(t) = T_0 + T^* \sin(\omega t - \Phi) $$

where $\Phi$ is an unknown phase shift and $T^*$ is the unknown amplitude of seasonal temperature variations.

### The annual mean:

Integrate over one year to find

$$ \overline{T} = T_0 $$

$$ Q_0 = A + B ~ \overline{T} $$

so that

$$ T_0 = \frac{Q_0 - A}{B} $$

### The seasonal problem

Now we need to solve for $T^*$ and $\Phi$.

Take the derivative

$$ \frac{d T}{dt} = T^* \omega \cos(\omega t - \Phi) $$

and plug into the model equation to get

\begin{align*}
C~ T^* \omega \cos(\omega t - \Phi) &= Q^* \sin\omega t + Q_0 \\
& - \left( A + B~(T_0 + T^* \sin(\omega t - \Phi) )\right)
\end{align*}

Subtracting out the annual mean leaves us with

$$ C~ T^* \omega \cos(\omega t - \Phi) = Q^* \sin\omega t - B ~ T^* \sin(\omega t - \Phi) $$

### Zero heat capacity: the radiative equilibrium solution

It's instructive to first look at the case with $C=0$, which means that the system is not capable of storing heat, and the temperature must always be in radiative equilibrium with the insolation.

In this case we would have

$$ Q^* \sin\omega t = B ~ T^* \sin(\omega t - \Phi) $$

which requires that the phase shift is

$$ \Phi = 0 $$

and the amplitude is

$$ T^* = \frac{Q^*}{B} $$

With no heat capacity, there can be no phase shift! The temperature goes up and down in lockstep with the insolation. As we will see, the amplitude of the temperature variations is maximum in this limit.
As a practical example: at 45ºN the amplitude of the seasonal insolation cycle is about 180 W m$^{-2}$ (see the [Insolation notes](Lecture11 -- Insolation.ipynb) -- the difference between insolation at summer and winter solstice is about 360 W m$^{-2}$, which we divide by two to get the amplitude of seasonal variations).

We will follow our previous EBM work and take $B = 2$ W m$^{-2}$ K$^{-1}$. This would give a seasonal temperature amplitude of 90ºC!

This highlights the important role of heat capacity in buffering the seasonal variations in sunlight.

### Non-dimensional heat capacity parameter

We can rearrange the seasonal equation to give

$$ \frac{C~\omega}{B} \cos(\omega t - \Phi) + \sin(\omega t - \Phi) = \frac{Q^*}{B~T^*} \sin\omega t $$

The heat capacity appears in our equation through the non-dimensional ratio

$$ \tilde{C} = \frac{C~\omega}{B} $$

This parameter measures the efficiency of heat storage versus damping of energy anomalies through longwave radiation to space in our system.

We will now use the trigonometric identities

\begin{align*}
\cos(\omega t - \Phi) &= \cos\omega t \cos\Phi + \sin\omega t \sin\Phi \\
\sin(\omega t - \Phi) &= \sin\omega t \cos\Phi - \cos\omega t \sin\Phi
\end{align*}

to express our equation as

\begin{align*}
\frac{Q^*}{B~T^*} \sin\omega t = &\tilde{C} \cos\omega t \cos\Phi \\
+ &\tilde{C} \sin\omega t \sin\Phi \\
+ &\sin\omega t \cos\Phi \\
- &\cos\omega t \sin\Phi
\end{align*}

Now gathering together all terms in $\cos\omega t$ and $\sin\omega t$:

$$ \cos\omega t \left( \tilde{C} \cos\Phi - \sin\Phi \right) = \sin\omega t \left( \frac{Q^*}{B~T^*} - \tilde{C} \sin\Phi - \cos\Phi \right) $$

### Solving for the phase shift

The equation above must be true for all $t$, which means that the sum of terms in each set of parentheses must be zero.
We therefore have an equation for the phase shift

$$ \tilde{C} \cos\Phi - \sin\Phi = 0 $$

which means that the phase shift is

$$ \Phi = \arctan \tilde{C} $$

### Solving for the amplitude

The other equation is

$$ \frac{Q^*}{B~T^*} - \tilde{C} \sin\Phi - \cos\Phi = 0 $$

or

$$ \frac{Q^*}{B~T^*} - \cos\Phi \left( 1+ \tilde{C}^2 \right) = 0 $$

which we solve for $T^*$ to get

$$ T^* = \frac{Q^*}{B} \frac{1}{\left( 1+ \tilde{C}^2 \right) \cos\left(\arctan \tilde{C} \right) } $$

### Shallow water limit:

In the low heat capacity limit,

$$ \tilde{C} \ll 1 $$

the phase shift is

$$ \Phi \approx \tilde{C} $$

and the amplitude is

$$ T^* = \frac{Q^*}{B} \left( 1 - \tilde{C} \right) $$

Notice that for a system with very little heat capacity, the **phase shift approaches zero** and the **amplitude approaches its maximum value** $T^* = \frac{Q^*}{B}$.

In the shallow water limit the temperature maximum will occur just slightly after the insolation maximum, and the seasonal temperature variations will be large.

### Deep water limit:

Suppose instead we have an infinitely large heat reservoir (e.g. very deep ocean mixed layer). In the limit $\tilde{C} \rightarrow \infty$, the phase shift tends toward

$$ \Phi \rightarrow \frac{\pi}{2} $$

so the warming is nearly perfectly out of phase with the insolation -- peak temperature would occur at fall equinox.

But the amplitude in this limit is very small!

$$ T^* \rightarrow 0 $$

### What values of $\tilde{C}$ are realistic?

We need to evaluate

$$ \tilde{C} = \frac{C~\omega}{B} $$

for reasonable values of $C$ and $B$.

$B$ is the longwave radiative feedback in our system: a measure of how efficiently a warm anomaly is radiated away to space. We have previously chosen $B = 2$ W m$^{-2}$ K$^{-1}$.

$C$ is the heat capacity of the whole column, a number in J m$^{-2}$ K$^{-1}$.
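Before plugging in specific heat capacities, the amplitude and phase formulas just derived are easy to evaluate numerically. A short sketch using the $Q^* = 180$ W m$^{-2}$, $B = 2$ W m$^{-2}$ K$^{-1}$ values from the 45ºN example earlier:

```python
import numpy as np

def seasonal_response(C_tilde, Qstar=180.0, B=2.0):
    """Amplitude T* (degC) and phase lag Phi (degrees) from the formulas above."""
    Phi = np.arctan(C_tilde)
    Tstar = (Qstar / B) / ((1.0 + C_tilde**2) * np.cos(Phi))
    return float(Tstar), float(np.rad2deg(Phi))

# Zero heat capacity: radiative equilibrium, full 90 degC amplitude, no lag.
print(seasonal_response(0.0))   # (90.0, 0.0)

# C_tilde = 1: a 45 degree lag (about 1.5 months) and a reduced amplitude.
print(seasonal_response(1.0))
```

As expected from the deep water limit, increasing `C_tilde` further pushes the lag toward 90 degrees while the amplitude falls toward zero.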
#### Heat capacity of the atmosphere

Integrating from the surface to the top of the atmosphere, we can write

$$ C_a = \int_0^{p_s} c_p \frac{dp}{g} $$

where $c_p = 10^3$ J kg$^{-1}$ K$^{-1}$ is the specific heat at constant pressure for a unit mass of air, and $dp/g$ is a mass element. This gives $C_a \approx 10^7$ J m$^{-2}$ K$^{-1}$.

#### Heat capacity of a water surface

As we wrote [back in Lecture 2](Lecture02 -- Solving the zero-dimensional EBM.ipynb), the heat capacity for a well-mixed column of water is

$$ C_w = c_w \rho_w H_w $$

where $c_w = 4 \times 10^3$ J kg$^{-1}$ $^\circ$C$^{-1}$ is the specific heat of water, $\rho_w = 10^3$ kg m$^{-3}$ is the density of water, and $H_w$ is the depth of the water column.

Setting $C_w = C_a$ gives $H_w = C_a / (c_w \rho_w) = 2.5$ m. **The heat capacity of the entire atmosphere is thus equivalent to 2.5 meters of water.**

#### $\tilde{C}$ for a dry land surface

A dry land surface has very little heat capacity, and $C$ is actually dominated by the atmosphere. So we can take $C = C_a = 10^7$ J m$^{-2}$ K$^{-1}$ as a reasonable lower bound.

Taking $B = 2$ W m$^{-2}$ K$^{-1}$ and $\omega = 2\pi ~ \text{year}^{-1} = 2 \times 10^{-7} \text{ s}^{-1}$, our lower bound on $\tilde{C}$ is thus

$$ \tilde{C} = 1 $$

#### $\tilde{C}$ for a 100 meter ocean mixed layer

Setting $H_w = 100$ m gives $C_w = 4 \times 10^8$ J m$^{-2}$ K$^{-1}$. Then our non-dimensional parameter is

$$ \tilde{C} = 40 $$

### The upshot: $\tilde{C}$ is closer to the deep water limit

Even for a dry land surface, $\tilde{C}$ is not small. This means that there is always going to be a substantial phase shift in the timing of the peak temperatures, and a reduction in the seasonal amplitude.

### Plot the full solution for a range of water depths

```
omega = 2*np.pi / const.seconds_per_year
omega
B = 2.
Hw = np.linspace(0., 100.)
Ctilde = const.cw * const.rho_w * Hw * omega / B
amp = 1./((Ctilde**2+1)*np.cos(np.arctan(Ctilde)))
Phi = np.arctan(Ctilde)

color1 = 'b'
color2 = 'r'

fig = plt.figure(figsize=(8,6))
ax1 = fig.add_subplot(111)
ax1.plot(Hw, amp, color=color1)
ax1.set_xlabel('water depth (m)', fontsize=14)
ax1.set_ylabel('Seasonal amplitude ($Q^* / B$)', fontsize=14, color=color1)
for tl in ax1.get_yticklabels():
    tl.set_color(color1)
ax2 = ax1.twinx()
ax2.plot(Hw, np.rad2deg(Phi), color=color2)
ax2.set_ylabel('Seasonal phase shift (degrees)', fontsize=14, color=color2)
for tl in ax2.get_yticklabels():
    tl.set_color(color2)
ax1.set_title('Dependence of seasonal cycle phase and amplitude on water depth', fontsize=16)
ax1.grid()
ax1.plot([2.5, 2.5], [0, 1], 'k-');
fig
```

The blue line shows the amplitude of the seasonal cycle of temperature, expressed as a fraction of its maximum value $\frac{Q^*}{B}$ (the value that would occur if the system had zero heat capacity, so that temperatures were always in radiative equilibrium with the instantaneous insolation). The red line shows the phase lag (in degrees) of the temperature cycle relative to the insolation cycle.

The vertical black line indicates 2.5 meters of water, which is the heat capacity of the atmosphere and thus our effective lower bound on total column heat capacity.

### The seasonal phase shift

Even for the driest surfaces the phase shift is about 45º and the amplitude is half of its theoretical maximum. For most wet surfaces the cycle is damped out and delayed further.

Of course we are already familiar with this phase shift from our day-to-day experience. Our calendar says that summer "begins" at the solstice and lasts until the equinox.
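The 45º phase shift for a dry surface corresponds to a lag of about a month and a half. This can be verified directly (a standalone sketch using the same physical constants as these notes, but without the `const` module):

```python
import numpy as np

cw, rho_w = 4e3, 1e3             # specific heat (J kg-1 K-1) and density (kg m-3) of water
B = 2.0                          # longwave feedback, W m-2 K-1
seconds_per_year = 3.15e7
omega = 2 * np.pi / seconds_per_year

# dry land (atmosphere-equivalent depth) versus a deep ocean mixed layer
for Hw in (2.5, 100.0):
    Ctilde = cw * rho_w * Hw * omega / B
    phase_days = np.arctan(Ctilde) / omega / 86400.0   # phase lag in days
    amp = 1.0 / np.sqrt(1.0 + Ctilde**2)               # amplitude as a fraction of Q*/B
    print(Hw, round(Ctilde, 1), round(phase_days), round(amp, 2))
```

For 2.5 m the lag comes out near 45 days with roughly 70% of the maximum amplitude; for 100 m the lag approaches a quarter of a year and the amplitude is only a few percent, consistent with the plot above.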
```
fig, ax = plt.subplots()
years = np.linspace(0,2)
Harray = np.array([0., 2.5, 10., 50.])
for Hw in Harray:
    Ctilde = const.cw * const.rho_w * Hw * omega / B
    Phi = np.arctan(Ctilde)
    ax.plot(years, np.sin(2*np.pi*years - Phi)/np.cos(Phi)/(1+Ctilde**2), label=Hw)
ax.set_xlabel('Years', fontsize=14)
ax.set_ylabel('Seasonal amplitude ($Q^* / B$)', fontsize=14)
ax.set_title('Solution of toy seasonal model for several different water depths', fontsize=14)
ax.legend();
ax.grid()
fig
```

The blue curve in this figure is in phase with the insolation.

____________
<a id='section3'></a>
## 3. Exploring the amplitude of the seasonal cycle with an EBM
____________

Something important is missing from this toy model: heat transport! The amplitude of the seasonal cycle of insolation increases toward the poles, but the seasonal temperature variations are partly mitigated by heat transport from lower, warmer latitudes. Our 1D diffusive EBM is the appropriate tool for exploring this further.

We are looking at the 1D (zonally averaged) energy balance model with diffusive heat transport. The equation is

$$ C \frac{\partial T_s}{\partial t} = (1-\alpha) ~ Q - \left( A + B~T_s \right) + \frac{D}{\cos\phi} \frac{\partial}{\partial \phi} \left( \cos\phi ~ \frac{\partial T_s}{\partial \phi} \right) $$

with the albedo given by

$$ \alpha(\phi) = \alpha_0 + \alpha_2 P_2(\sin\phi) $$

and we will use `climlab.EBM_seasonal` to solve this model numerically.

One handy feature of `climlab` process code: the function `integrate_years()` automatically calculates the time averaged temperature. So if we run it for exactly one year, we get the annual mean temperature (and many other diagnostics) saved in the dictionary `timeave`.

We will look at the seasonal cycle of temperature in three different models with different heat capacities (which we express through an equivalent depth of water in meters).
All other parameters will be [as chosen in Lecture 15](Lecture15 -- Diffusive energy balance model.ipynb) (which focussed on tuning the EBM to the annual mean energy budget). ``` # for convenience, set up a dictionary with our reference parameters param = {'A':210, 'B':2, 'a0':0.354, 'a2':0.25, 'D':0.6} param # We can pass the entire dictionary as keyword arguments using the ** notation model1 = climlab.EBM_seasonal(**param) print( model1) ``` Notice that this model has an insolation subprocess called `DailyInsolation`, rather than `AnnualMeanInsolation`. These should be fairly self-explanatory. ``` # We will try three different water depths water_depths = np.array([2., 10., 50.]) num_depths = water_depths.size Tann = np.empty( [model1.lat.size, num_depths] ) models = [] for n in range(num_depths): ebm = climlab.EBM_seasonal(water_depth=water_depths[n], **param) models.append(ebm) models[n].integrate_years(20., verbose=False ) models[n].integrate_years(1., verbose=False) Tann[:,n] = np.squeeze(models[n].timeave['Ts']) ``` All models should have the same annual mean temperature: ``` lat = model1.lat fig, ax = plt.subplots() ax.plot(lat, Tann) ax.set_xlim(-90,90) ax.set_xlabel('Latitude') ax.set_ylabel('Temperature (degC)') ax.set_title('Annual mean temperature in the EBM') ax.legend( water_depths ) fig ``` There is no automatic function in the `climlab` code to keep track of minimum and maximum temperatures (though we might add that in the future!) Instead we'll step through one year "by hand" and save all the temperatures. 
``` num_steps_per_year = int(model1.time['num_steps_per_year']) Tyear = np.empty((lat.size, num_steps_per_year, num_depths)) for n in range(num_depths): for m in range(num_steps_per_year): models[n].step_forward() Tyear[:,m,n] = np.squeeze(models[n].Ts) ``` Make a figure to compare the observed zonal mean seasonal temperature cycle to what we get from the EBM with different heat capacities: ``` fig = plt.figure( figsize=(16,10) ) ax = fig.add_subplot(2,num_depths,2) cax = ax.contourf(np.arange(12)+0.5, lat_ncep, Ts_ncep.mean(dim='lon').transpose(), levels=clevels, cmap=plt.cm.seismic, vmin=Tmin, vmax=Tmax) ax.set_xlabel('Month') ax.set_ylabel('Latitude') cbar = plt.colorbar(cax) ax.set_title('Zonal mean surface temperature - observed (degC)', fontsize=20) for n in range(num_depths): ax = fig.add_subplot(2,num_depths,num_depths+n+1) cax = ax.contourf(4*np.arange(num_steps_per_year), lat, Tyear[:,:,n], levels=clevels, cmap=plt.cm.seismic, vmin=Tmin, vmax=Tmax) cbar1 = plt.colorbar(cax) ax.set_title('water depth = %.0f m' %models[n].param['water_depth'], fontsize=20 ) ax.set_xlabel('Days of year', fontsize=14 ) ax.set_ylabel('Latitude', fontsize=14 ) fig ``` Which one looks more realistic? Depends a bit on where you look. But overall, the observed seasonal cycle matches the 10 meter case best. The effective heat capacity governing the seasonal cycle of the zonal mean temperature is closer to 10 meters of water than to either 2 or 50 meters. 
### Making an animation of the EBM solutions

Let's animate the seasonal cycle of insolation and temperature in our models with the three different water depths.

```
def initial_figure(models):
    fig, axes = plt.subplots(1, len(models), figsize=(15,4))
    lines = []
    for n in range(len(models)):
        ax = axes[n]
        c1 = 'b'
        Tsline = ax.plot(lat, models[n].Ts, c1)[0]
        ax.set_title('water depth = %.0f m' %models[n].param['water_depth'], fontsize=20 )
        ax.set_xlabel('Latitude', fontsize=14 )
        if n == 0:
            ax.set_ylabel('Temperature', fontsize=14, color=c1 )
        ax.set_xlim([-90,90])
        ax.set_ylim([-60,60])
        for tl in ax.get_yticklabels():
            tl.set_color(c1)
        ax.grid()
        c2 = 'r'
        ax2 = ax.twinx()
        Qline = ax2.plot(lat, models[n].insolation, c2)[0]
        if n == 2:
            ax2.set_ylabel('Insolation (W m$^{-2}$)', color=c2, fontsize=14)
        for tl in ax2.get_yticklabels():
            tl.set_color(c2)
        ax2.set_xlim([-90,90])
        ax2.set_ylim([0,600])
        lines.append([Tsline, Qline])
    return fig, axes, lines

def animate(step, models, lines):
    for n, ebm in enumerate(models):
        ebm.step_forward()
        # The rest of this is just updating the plot
        lines[n][0].set_ydata(ebm.Ts)
        lines[n][1].set_ydata(ebm.insolation)
    return lines

# Plot initial data
fig, axes, lines = initial_figure(models)
fig

# Some imports needed to make and display animations
from IPython.display import HTML
from matplotlib import animation

num_steps = int(models[0].time['num_steps_per_year'])
ani = animation.FuncAnimation(fig, animate, frames=num_steps, interval=80, fargs=(models, lines))

HTML(ani.to_html5_video())
```

____________
<a id='section4'></a>
## 4. The seasonal cycle for a planet with 90º obliquity
____________

The EBM code uses our familiar `insolation.py` code to calculate insolation, and therefore it's easy to set up a model with different orbital parameters. Here is an example with **very** different orbital parameters: 90º obliquity. We looked at the distribution of insolation by latitude and season for this type of planet in the last homework.
``` orb_highobl = {'ecc':0., 'obliquity':90., 'long_peri':0.} print( orb_highobl) model_highobl = climlab.EBM_seasonal(orb=orb_highobl, **param) print( model_highobl.param['orb']) ``` Repeat the same procedure to calculate and store temperature throughout one year, after letting the models run out to equilibrium. ``` Tann_highobl = np.empty( [lat.size, num_depths] ) models_highobl = [] for n in range(num_depths): model = climlab.EBM_seasonal(water_depth=water_depths[n], orb=orb_highobl, **param) models_highobl.append(model) models_highobl[n].integrate_years(40., verbose=False ) models_highobl[n].integrate_years(1., verbose=False) Tann_highobl[:,n] = np.squeeze(models_highobl[n].timeave['Ts']) Tyear_highobl = np.empty([lat.size, num_steps_per_year, num_depths]) for n in range(num_depths): for m in range(num_steps_per_year): models_highobl[n].step_forward() Tyear_highobl[:,m,n] = np.squeeze(models_highobl[n].Ts) ``` And plot the seasonal temperature cycle same as we did above: ``` fig = plt.figure( figsize=(16,5) ) Tmax_highobl = 125; Tmin_highobl = -Tmax_highobl; delT_highobl = 10 clevels_highobl = np.arange(Tmin_highobl, Tmax_highobl+delT_highobl, delT_highobl) for n in range(num_depths): ax = fig.add_subplot(1,num_depths,n+1) cax = ax.contourf( 4*np.arange(num_steps_per_year), lat, Tyear_highobl[:,:,n], levels=clevels_highobl, cmap=plt.cm.seismic, vmin=Tmin_highobl, vmax=Tmax_highobl ) cbar1 = plt.colorbar(cax) ax.set_title('water depth = %.0f m' %models[n].param['water_depth'], fontsize=20 ) ax.set_xlabel('Days of year', fontsize=14 ) ax.set_ylabel('Latitude', fontsize=14 ) fig ``` Note that the temperature range is much larger than for the Earth-like case above (but same contour interval, 10 degC). Why is the temperature so uniform in the north-south direction with 50 meters of water? 
To see the reason, let's plot the annual mean insolation at 90º obliquity, alongside the present-day annual mean insolation:

```
lat2 = np.linspace(-90, 90, 181)
days = np.linspace(1.,50.)/50 * const.days_per_year
Q_present = climlab.solar.insolation.daily_insolation( lat2, days )
Q_highobl = climlab.solar.insolation.daily_insolation( lat2, days, orb_highobl )
Q_present_ann = np.mean( Q_present, axis=1 )
Q_highobl_ann = np.mean( Q_highobl, axis=1 )

fig, ax = plt.subplots()
ax.plot( lat2, Q_present_ann, label='Earth' )
ax.plot( lat2, Q_highobl_ann, label='90deg obliquity' )
ax.grid()
ax.legend(loc='lower center')
ax.set_xlabel('Latitude', fontsize=14 )
ax.set_ylabel('W m$^{-2}$', fontsize=14 )
ax.set_title('Annual mean insolation for two different obliquities', fontsize=16)
fig
```

This comparison is a bit misleading, though, because our model prescribes an increase in albedo from the equator to the pole. So the absorbed shortwave gradients look even more different.

<div class="alert alert-success">
[Back to ATM 623 notebook home](../index.ipynb)
</div>

____________
## Version information
____________

```
%load_ext version_information
%version_information numpy, xarray, climlab
```

____________
## Credits

The author of this notebook is [Brian E. J. Rose](http://www.atmos.albany.edu/facstaff/brose/index.html), University at Albany. It was developed in support of [ATM 623: Climate Modeling](http://www.atmos.albany.edu/facstaff/brose/classes/ATM623_Spring2015/), a graduate-level course in the [Department of Atmospheric and Environmental Sciences](http://www.albany.edu/atmos/index.php).

Development of these notes and the [climlab software](https://github.com/brian-rose/climlab) is partially supported by the National Science Foundation under award AGS-1455071 to Brian Rose. Any opinions, findings, conclusions or recommendations expressed here are mine and do not necessarily reflect the views of the National Science Foundation.
____________
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import sklearn as sk

from sklearn import linear_model
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error
from sklearn.preprocessing import MinMaxScaler

# The dataset comes from https://www.kaggle.com/aungpyaeap/fish-market
# Measurements of several common commercial fish species
# length 1 = Body height
# length 2 = Total Length
# length 3 = Diagonal Length
fish_data = pd.read_csv("datasets/Fish.csv", delimiter=',')
print(fish_data)

# Select two variables
x_label = 'Length1'
y_label = 'Weight'
data = fish_data[[x_label, y_label]]
print(data)

# Set the sizes of the validation and test sets
val_test_size = round(0.2*len(data))
print(val_test_size)

# Generate a unique seed
my_code = "Грушин"
seed_limit = 2 ** 32
my_seed = int.from_bytes(my_code.encode(), "little") % seed_limit

# Create the training, validation and test sets
random_state = my_seed
train_val, test = train_test_split(data, test_size=val_test_size, random_state=random_state)
train, val = train_test_split(train_val, test_size=val_test_size, random_state=random_state)
print(len(train), len(val), len(test))

# Convert the data to the format expected by the sklearn library
train_x = np.array(train[x_label]).reshape(-1,1)
train_y = np.array(train[y_label]).reshape(-1,1)

val_x = np.array(val[x_label]).reshape(-1,1)
val_y = np.array(val[y_label]).reshape(-1,1)

test_x = np.array(test[x_label]).reshape(-1,1)
test_y = np.array(test[y_label]).reshape(-1,1)

# Plot the training data
plt.plot(train_x, train_y, 'o')
plt.show()

# Create a linear regression model and fit it on the training set.
model1 = linear_model.LinearRegression()
model1.fit(train_x, train_y)

# The fit results: the values a and b in y = ax + b
print(model1.coef_, model1.intercept_)

a = model1.coef_[0]
b = model1.intercept_
print(a, b)

# Add the fitted line to the plot
x = np.linspace(min(train_x), max(train_x), 100)
y = a * x + b

plt.plot(train_x, train_y, 'o')
plt.plot(x, y)
plt.show()

# Evaluate on the validation set
val_predicted = model1.predict(val_x)
mse1 = mean_squared_error(val_y, val_predicted)
print(mse1)

# The raw error is hard to interpret, so let's first normalize the values
scaler_x = MinMaxScaler()
scaler_x.fit(train_x)
scaled_train_x = scaler_x.transform(train_x)

scaler_y = MinMaxScaler()
scaler_y.fit(train_y)
scaled_train_y = scaler_y.transform(train_y)

plt.plot(scaled_train_x, scaled_train_y, 'o')
plt.show()

# Build the model and show the results for the normalized data
model2 = linear_model.LinearRegression()
model2.fit(scaled_train_x, scaled_train_y)

a = model2.coef_[0]
b = model2.intercept_

x = np.linspace(min(scaled_train_x), max(scaled_train_x), 100)
y = a * x + b

plt.plot(scaled_train_x, scaled_train_y, 'o')
plt.plot(x, y)
plt.show()

# Evaluate on the validation set
scaled_val_x = scaler_x.transform(val_x)
scaled_val_y = scaler_y.transform(val_y)

val_predicted = model2.predict(scaled_val_x)
mse2 = mean_squared_error(scaled_val_y, val_predicted)
print(mse2)

# Build a linear regression model with L1 regularization and show the results for the normalized data.
model3 = linear_model.Lasso(alpha=0.01)
model3.fit(scaled_train_x, scaled_train_y)

a = model3.coef_[0]
b = model3.intercept_

x = np.linspace(min(scaled_train_x), max(scaled_train_x), 100)
y = a * x + b

plt.plot(scaled_train_x, scaled_train_y, 'o')
plt.plot(x, y)
plt.show()

# Evaluate on the validation set
scaled_val_x = scaler_x.transform(val_x)
scaled_val_y = scaler_y.transform(val_y)

val_predicted = model3.predict(scaled_val_x)
mse3 = mean_squared_error(scaled_val_y, val_predicted)
print(mse3)

# You can experiment with the alpha parameter to reduce the error

# Build a linear regression model with L2 regularization and show the results for the normalized data
model4 = linear_model.Ridge(alpha=0.01)
model4.fit(scaled_train_x, scaled_train_y)

a = model4.coef_[0]
b = model4.intercept_

x = np.linspace(min(scaled_train_x), max(scaled_train_x), 100)
y = a * x + b

plt.plot(scaled_train_x, scaled_train_y, 'o')
plt.plot(x, y)
plt.show()

# Evaluate on the validation set
scaled_val_x = scaler_x.transform(val_x)
scaled_val_y = scaler_y.transform(val_y)

val_predicted = model4.predict(scaled_val_x)
mse4 = mean_squared_error(scaled_val_y, val_predicted)
print(mse4)

# You can experiment with the alpha parameter to reduce the error

# Build a linear regression model with ElasticNet regularization and show the results for the normalized data
model5 = linear_model.ElasticNet(alpha=0.01, l1_ratio = 0.01)
model5.fit(scaled_train_x, scaled_train_y)

a = model5.coef_[0]
b = model5.intercept_

x = np.linspace(min(scaled_train_x), max(scaled_train_x), 100)
y = a * x + b

plt.plot(scaled_train_x, scaled_train_y, 'o')
plt.plot(x, y)
plt.show()

# Evaluate on the validation set
scaled_val_x = scaler_x.transform(val_x)
scaled_val_y = scaler_y.transform(val_y)

val_predicted = model5.predict(scaled_val_x)
mse5 = mean_squared_error(scaled_val_y, val_predicted)
print(mse5)

# You can experiment with the values of the parameters
alpha and l1_ratio to reduce the error

# Print the errors of the models fit on the normalized data
print(mse2, mse3, mse4, mse5)

# The minimum is reached by the second model, so compute the final error on the test set
scaled_test_x = scaler_x.transform(test_x)
scaled_test_y = scaler_y.transform(test_y)

test_predicted = model2.predict(scaled_test_x)
mse_test = mean_squared_error(scaled_test_y, test_predicted)
print(mse_test)

# Repeat the data selection, normalization and analysis of the 4 models
# (plain linear regression, L1 regularization, L2 regularization, ElasticNet regularization)
# for x = Length2 and y = Width.
x_label = 'Length2'
y_label = 'Weight'
data = fish_data[[x_label, y_label]]
print(data)

val_test_size = round(0.2*len(data))
print(val_test_size)

random_state = my_seed
train_val, test = train_test_split(data, test_size=val_test_size, random_state=random_state)
train, val = train_test_split(train_val, test_size=val_test_size, random_state=random_state)
print(len(train), len(val), len(test))

train_x = np.array(train[x_label]).reshape(-1,1)
train_y = np.array(train[y_label]).reshape(-1,1)

val_x = np.array(val[x_label]).reshape(-1,1)
val_y = np.array(val[y_label]).reshape(-1,1)

test_x = np.array(test[x_label]).reshape(-1,1)
test_y = np.array(test[y_label]).reshape(-1,1)

plt.plot(train_x, train_y, 'o')
plt.show()

model1 = linear_model.LinearRegression().fit(train_x, train_y)
print(model1.coef_, model1.intercept_)

a = model1.coef_[0]
b = model1.intercept_
print(a, b)

x = np.linspace(min(train_x), max(train_x), 100)
y = a * x + b

plt.plot(train_x, train_y, 'o')
plt.plot(x, y)
plt.show()

val_predicted = model1.predict(val_x)
mse1 = mean_squared_error(val_y, val_predicted)
print(mse1)

scaler_x = MinMaxScaler().fit(train_x)
scaled_train_x = scaler_x.transform(train_x)

scaler_y = MinMaxScaler().fit(train_y)
scaled_train_y = scaler_y.transform(train_y)
plt.plot(scaled_train_x, scaled_train_y, 'o') plt.show() model2 = linear_model.LinearRegression().fit(scaled_train_x, scaled_train_y) a = model2.coef_[0] b = model2.intercept_ x = np.linspace(min(scaled_train_x), max(scaled_train_x), 100) y = a * x + b plt.plot(scaled_train_x, scaled_train_y, 'o') plt.plot(x, y) plt.show() scaled_val_x = scaler_x.transform(val_x) scaled_val_y = scaler_y.transform(val_y) val_predicted = model2.predict(scaled_val_x) mse2 = mean_squared_error(scaled_val_y, val_predicted) print(mse2) model3 = linear_model.Lasso(alpha=0.01).fit(scaled_train_x, scaled_train_y) a = model3.coef_[0] b = model3.intercept_ x = np.linspace(min(scaled_train_x), max(scaled_train_x), 100) y = a * x + b plt.plot(scaled_train_x, scaled_train_y, 'o') plt.plot(x, y) plt.show() scaled_val_x = scaler_x.transform(val_x) scaled_val_y = scaler_y.transform(val_y) val_predicted = model3.predict(scaled_val_x) mse3 = mean_squared_error(scaled_val_y, val_predicted) print(mse3) model4 = linear_model.Ridge(alpha=0.01).fit(scaled_train_x, scaled_train_y) a = model4.coef_[0] b = model4.intercept_ x = np.linspace(min(scaled_train_x), max(scaled_train_x), 100) y = a * x + b plt.plot(scaled_train_x, scaled_train_y, 'o') plt.plot(x, y) plt.show() scaled_val_x = scaler_x.transform(val_x) scaled_val_y = scaler_y.transform(val_y) val_predicted = model4.predict(scaled_val_x) mse4 = mean_squared_error(scaled_val_y, val_predicted) print(mse4) model5 = linear_model.ElasticNet(alpha=0.01, l1_ratio = 0.01) model5.fit(scaled_train_x, scaled_train_y) a = model5.coef_[0] b = model5.intercept_ x = np.linspace(min(scaled_train_x), max(scaled_train_x), 100) y = a * x + b plt.plot(scaled_train_x, scaled_train_y, 'o') plt.plot(x, y) plt.show() scaled_val_x = scaler_x.transform(val_x) scaled_val_y = scaler_y.transform(val_y) val_predicted = model5.predict(scaled_val_x) mse5 = mean_squared_error(scaled_val_y, val_predicted) print(mse5) print(mse2, mse3, mse4, mse5) scaled_test_x = 
scaler_x.transform(test_x) scaled_test_y = scaler_y.transform(test_y) test_predicted = model2.predict(scaled_test_x) mse_test = mean_squared_error(scaled_test_y, test_predicted) print(mse_test) ```
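For intuition about what `Ridge` is minimizing above, the L2-regularized fit for a single feature can be reproduced directly from the regularized normal equations (a pure-numpy sketch; with alpha = 0 it reduces to ordinary least squares, and the intercept is left unpenalized, matching sklearn's convention):

```python
import numpy as np

def ridge_fit(x, y, alpha=0.0):
    """Solve min ||y - (a*x + b)||^2 + alpha * a^2 via the normal equations."""
    X = np.column_stack([x, np.ones_like(x)])   # design matrix with an intercept column
    penalty = alpha * np.diag([1.0, 0.0])       # penalize the slope only, not the intercept
    a, b = np.linalg.solve(X.T @ X + penalty, X.T @ y)
    return a, b

# Noise-free line: with alpha=0 the exact coefficients are recovered
x = np.array([0.0, 1.0, 2.0, 3.0])
y = 2.0 * x + 1.0
print(ridge_fit(x, y))             # close to (2.0, 1.0)
print(ridge_fit(x, y, alpha=10.0)) # slope is shrunk toward zero
```

Increasing `alpha` shrinks the slope, which is exactly the bias that makes regularization useful on noisy data.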
<h1>2b. Machine Learning using tf.estimator </h1> In this notebook, we will create a machine learning model using tf.estimator and evaluate its performance. The dataset is rather small (7700 samples), so we can do it all in-memory. We will also simply pass the raw data in as-is. ``` import tensorflow as tf import pandas as pd import numpy as np import shutil print(tf.__version__) ``` Read data created in the previous chapter. ``` # In CSV, label is the first column, after the features, followed by the key CSV_COLUMNS = ['fare_amount', 'pickuplon','pickuplat','dropofflon','dropofflat','passengers', 'key'] FEATURES = CSV_COLUMNS[1:len(CSV_COLUMNS) - 1] LABEL = CSV_COLUMNS[0] df_train = pd.read_csv('https://raw.githubusercontent.com/nhanle83/training-data-analyst/master/courses/machine_learning/deepdive/03_tensorflow/taxi-train.csv', header = None, names = CSV_COLUMNS) df_valid = pd.read_csv('https://raw.githubusercontent.com/nhanle83/training-data-analyst/master/courses/machine_learning/deepdive/03_tensorflow/taxi-valid.csv', header = None, names = CSV_COLUMNS) df_test = pd.read_csv('https://raw.githubusercontent.com/nhanle83/training-data-analyst/master/courses/machine_learning/deepdive/03_tensorflow/taxi-test.csv', header = None, names = CSV_COLUMNS) ``` <h2> Train and eval input functions to read from Pandas Dataframe </h2> ``` def make_train_input_fn(df, num_epochs): return tf.compat.v1.estimator.inputs.pandas_input_fn( x = df, y = df[LABEL], batch_size = 128, num_epochs = num_epochs, shuffle = True, queue_capacity = 1000 ) def make_eval_input_fn(df): return tf.compat.v1.estimator.inputs.pandas_input_fn( x = df, y = df[LABEL], batch_size = 128, shuffle = False, queue_capacity = 1000 ) ``` Our input function for predictions is the same except we don't provide a label ``` def make_prediction_input_fn(df): return tf.compat.v1.estimator.inputs.pandas_input_fn( x = df, y = None, batch_size = 128, shuffle = False, queue_capacity = 1000 ) ``` ### Create feature columns 
for estimator ``` def make_feature_cols(): input_columns = [tf.feature_column.numeric_column(k) for k in FEATURES] return input_columns ``` <h3> Linear Regression with tf.Estimator framework </h3> ``` tf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.INFO) OUTDIR = 'taxi_trained' shutil.rmtree(OUTDIR, ignore_errors = True) # start fresh each time model = tf.estimator.LinearRegressor( feature_columns = make_feature_cols(), model_dir = OUTDIR) model.train(input_fn = make_train_input_fn(df_train, num_epochs = 10)) ``` Evaluate on the validation data (we should defer using the test data to after we have selected a final model). ``` def print_rmse(model, df): metrics = model.evaluate(input_fn = make_eval_input_fn(df)) print('RMSE on dataset = {}'.format(np.sqrt(metrics['average_loss']))) print_rmse(model, df_valid) ``` This is nowhere near our benchmark (RMSE of $6 or so on this data), but it serves to demonstrate what TensorFlow code looks like. Let's use this model for prediction. ``` predictions = model.predict(input_fn = make_prediction_input_fn(df_test)) for items in predictions: print(items) ``` This explains why the RMSE was so high -- the model essentially predicts the same amount for every trip. Would a more complex model help? Let's try using a deep neural network. The code to do this is quite straightforward as well. <h3> Deep Neural Network regression </h3> ``` #tf.compat.v1.logging.set_verbosity(tf.logging.INFO) shutil.rmtree(OUTDIR, ignore_errors = True) # start fresh each time model = tf.estimator.DNNRegressor(hidden_units = [32, 8, 2], feature_columns = make_feature_cols(), model_dir = OUTDIR) model.train(input_fn = make_train_input_fn(df_train, num_epochs = 100)); print_rmse(model, df_valid) ``` We are not beating our benchmark with either model ... what's up? Well, we may be using TensorFlow for Machine Learning, but we are not yet using it well. That's what the rest of this course is about! 
But, for the record, let's say we had to choose between the two models. We'd choose the one with the lower validation error. Finally, we'd measure the RMSE on the test data with this chosen model. <h2> Benchmark dataset </h2> Let's do this on the benchmark dataset. ``` from google.cloud import bigquery import numpy as np import pandas as pd def create_query(phase, EVERY_N): """ phase: 1 = train 2 = valid """ base_query = """ SELECT (tolls_amount + fare_amount) AS fare_amount, EXTRACT(DAYOFWEEK FROM pickup_datetime) * 1.0 AS dayofweek, EXTRACT(HOUR FROM pickup_datetime) * 1.0 AS hourofday, pickup_longitude AS pickuplon, pickup_latitude AS pickuplat, dropoff_longitude AS dropofflon, dropoff_latitude AS dropofflat, passenger_count*1.0 AS passengers, CONCAT(CAST(pickup_datetime AS STRING), CAST(pickup_longitude AS STRING), CAST(pickup_latitude AS STRING), CAST(dropoff_latitude AS STRING), CAST(dropoff_longitude AS STRING)) AS key FROM `nyc-tlc.yellow.trips` WHERE trip_distance > 0 AND fare_amount >= 2.5 AND pickup_longitude > -78 AND pickup_longitude < -70 AND dropoff_longitude > -78 AND dropoff_longitude < -70 AND pickup_latitude > 37 AND pickup_latitude < 45 AND dropoff_latitude > 37 AND dropoff_latitude < 45 AND passenger_count > 0 """ if EVERY_N == None: if phase < 2: # Training query = "{0} AND ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 4)) < 2".format(base_query) else: # Validation query = "{0} AND ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 4)) = {1}".format(base_query, phase) else: query = "{0} AND ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), {1})) = {2}".format(base_query, EVERY_N, phase) return query query = create_query(2, 100000) df = bigquery.Client().query(query).to_dataframe() print_rmse(model, df) ``` RMSE on benchmark dataset is <b>9.41</b> (your results will vary because of random seeds). 
This is not only way more than our original benchmark of 6.00, but it doesn't even beat our distance-based rule's RMSE of 8.02.

Fear not -- you have learned how to write a TensorFlow model, but not yet all the things that you will have to do to make your ML model performant. We will do this in the next chapters. In this chapter though, we will get our TensorFlow model ready for these improvements.

In a software sense, the rest of the labs in this chapter will be about refactoring the code so that we can improve it.

## Challenge Exercise

Create a neural network that is capable of finding the volume of a cylinder given the radius of its base (r) and its height (h). Assume that the radius and height of the cylinder are both in the range 0.5 to 2.0. Simulate the necessary training dataset.
<p>
Hint (highlight to see): <p style='color:white'>
The input features will be r and h and the label will be $\pi r^2 h$
Create random values for r and h and compute V. Your dataset will consist of r, h and V.
Then, use a DNN regressor.
Make sure to generate enough data.
</p>

Copyright 2017 Google Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
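The data-simulation part of the challenge can be sketched in a few lines (numpy only; the array layout and sample count are illustrative -- feeding the result into a `DNNRegressor` then follows the same pattern as the taxi model above):

```python
import numpy as np

# Simulate the cylinder dataset: features r, h in [0.5, 2.0], label V = pi * r^2 * h
rng = np.random.RandomState(42)
n = 50000                          # "enough data" for a DNN to learn the cubic surface
r = rng.uniform(0.5, 2.0, n)
h = rng.uniform(0.5, 2.0, n)
V = np.pi * r**2 * h               # the label the network should learn to predict
dataset = np.column_stack([r, h, V])
print(dataset.shape, V.min(), V.max())
```

Because the label is a smooth deterministic function of the two features, any residual error of the trained network is purely approximation error, which makes this a clean exercise in capacity and data volume.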
```
import numpy as np
import matplotlib.pyplot as plt

from sklearn.datasets import make_blobs

X, y = make_blobs(n_samples=50, centers=2, cluster_std=0.5, random_state=4)
y = 2 * y - 1

plt.scatter(X[y == -1, 0], X[y == -1, 1], marker='o', label="-1 class")
plt.scatter(X[y == +1, 0], X[y == +1, 1], marker='x', label="+1 class")
plt.xlabel("x1")
plt.ylabel("x2")
plt.legend()
plt.title("train data")
plt.show()

from sklearn.svm import SVC

model = SVC(kernel='linear', C=1e10).fit(X, y)

model.n_support_
model.support_
model.support_vectors_
y[model.support_]

xmin = X[:, 0].min()
xmax = X[:, 0].max()
ymin = X[:, 1].min()
ymax = X[:, 1].max()
xx = np.linspace(xmin, xmax, 10)
yy = np.linspace(ymin, ymax, 10)
X1, X2 = np.meshgrid(xx, yy)
Z = np.empty(X1.shape)
for (i, j), val in np.ndenumerate(X1):
    x1 = val
    x2 = X2[i, j]
    p = model.decision_function([[x1, x2]])
    Z[i, j] = p[0]
levels = [-1, 0, 1]
linestyles = ['dashed', 'solid', 'dashed']
plt.scatter(X[y == -1, 0], X[y == -1, 1], marker='o', label="-1 class")
plt.scatter(X[y == +1, 0], X[y == +1, 1], marker='x', label="+1 class")
plt.contour(X1, X2, Z, levels, colors='k', linestyles=linestyles)
plt.scatter(model.support_vectors_[:, 0], model.support_vectors_[:, 1], s=300, alpha=0.3)

x_new = [10, 2]
plt.scatter(x_new[0], x_new[1], marker='^', s=100)
plt.text(x_new[0] + 0.03, x_new[1] + 0.08, "test data")

plt.xlabel("x1")
plt.ylabel("x2")
plt.legend()
plt.title("SVM")
plt.show()

x_new = [10, 2]
model.decision_function([x_new])
model.coef_.dot(x_new) + model.intercept_

# dual_coef_ = a_i * y_i
model.dual_coef_

model.dual_coef_[0][0] * model.support_vectors_[0].dot(x_new) + \
model.dual_coef_[0][1] * model.support_vectors_[1].dot(x_new) + \
model.intercept_

# iris example
from sklearn.datasets import load_iris

iris = load_iris()
idx = np.in1d(iris.target, [0, 1])
X = iris.data[idx, :2]
y = (2 * iris.target[idx] - 1).astype(int)

model = SVC(kernel='linear', C=1e10).fit(X, y)

xmin = X[:, 0].min()
xmax = X[:, 0].max()
ymin = X[:, 1].min()
ymax = X[:, 1].max()
xx =
np.linspace(xmin, xmax, 10) yy = np.linspace(ymin, ymax, 10) X1, X2 = np.meshgrid(xx, yy) Z = np.empty(X1.shape) for (i, j), val in np.ndenumerate(X1): x1 = val x2 = X2[i, j] p = model.decision_function([[x1, x2]]) Z[i, j] = p[0] levels = [-1, 0, 1] linestyles = ['dashed', 'solid', 'dashed'] plt.scatter(X[y == -1, 0], X[y == -1, 1], marker='o', label="-1 class") plt.scatter(X[y == +1, 0], X[y == +1, 1], marker='x', label="+1 class") plt.contour(X1, X2, Z, levels, colors='k', linestyles=linestyles) plt.scatter(model.support_vectors_[:, 0], model.support_vectors_[:, 1], s=300, alpha=0.3) plt.xlabel("x1") plt.ylabel("x2") plt.legend() plt.title("SVM") plt.show() from sklearn.datasets import load_iris iris = load_iris() idx = np.in1d(iris.target, [1, 2]) X = iris.data[idx, 2:] y = (2 * iris.target[idx] - 3).astype(np.int) model = SVC(kernel='linear', C=10).fit(X, y) xmin = X[:, 0].min() xmax = X[:, 0].max() ymin = X[:, 1].min() ymax = X[:, 1].max() xx = np.linspace(xmin, xmax, 10) yy = np.linspace(ymin, ymax, 10) X1, X2 = np.meshgrid(xx, yy) Z = np.empty(X1.shape) for (i, j), val in np.ndenumerate(X1): x1 = val x2 = X2[i, j] p = model.decision_function([[x1, x2]]) Z[i, j] = p[0] levels = [-1, 0, 1] linestyles = ['dashed', 'solid', 'dashed'] plt.scatter(X[y == -1, 0], X[y == -1, 1], marker='o', label="-1 class") plt.scatter(X[y == +1, 0], X[y == +1, 1], marker='x', label="+1 class") plt.contour(X1, X2, Z, levels, colors='k', linestyles=linestyles) plt.scatter(model.support_vectors_[:, 0], model.support_vectors_[:, 1], s=200, alpha=0.3) plt.xlabel("x1") plt.ylabel("x2") plt.legend() plt.title("C=10") plt.show() from sklearn.datasets import load_digits digits = load_digits() N = 2 M = 5 np.random.seed(0) fig = plt.figure(figsize=(9, 5)) plt.subplots_adjust(top=1, bottom=0, hspace=0, wspace=0.05) klist = np.random.choice(range(len(digits.data)), N * M) for i in range(N): for j in range(M): k = klist[i * M + j] ax = fig.add_subplot(N, M, i * M + j + 1) 
ax.imshow(digits.images[k], cmap=plt.cm.bone) ax.grid(False) ax.xaxis.set_ticks([]) ax.yaxis.set_ticks([]) plt.title(digits.target[k]) plt.tight_layout() plt.show() from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(digits.data, digits.target, test_size=0.4, random_state=0) from sklearn.svm import SVC svc = SVC(kernel='linear').fit(X_train, y_train) N = 2 M = 5 np.random.seed(4) fig = plt.figure(figsize=(9, 5)) plt.subplots_adjust(top=1, bottom=0, hspace=0, wspace=0.05) klist = np.random.choice(range(len(y_test)), N * M) for i in range(N): for j in range(M): k = klist[i * M + j] ax = fig.add_subplot(N, M, i * M + j + 1) ax.imshow(X_test[k:(k + 1), :].reshape(8,8), cmap=plt.cm.bone) ax.grid(False) ax.xaxis.set_ticks([]) ax.yaxis.set_ticks([]) plt.title("%d => %d" % (y_test[k], svc.predict(X_test[k:(k + 1), :])[0])) plt.tight_layout() plt.show() from sklearn.metrics import classification_report, accuracy_score y_pred_train = svc.predict(X_train) y_pred_test = svc.predict(X_test) print(classification_report(y_train, y_pred_train)) print(classification_report(y_test, y_pred_test)) ```
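The identity exercised above — that `decision_function` equals both the primal form $w \cdot x + b$ and the dual support-vector expansion $\sum_i a_i y_i (x_i \cdot x) + b$ — can be checked end-to-end in a compact, self-contained sketch (same `make_blobs` setup as above; the test point and tolerance are arbitrary choices):

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.svm import SVC

X, y = make_blobs(n_samples=50, centers=2, cluster_std=0.5, random_state=4)
y = 2 * y - 1  # relabel the two classes as -1 / +1

model = SVC(kernel='linear', C=1e10).fit(X, y)
x_new = np.array([10.0, 2.0])  # arbitrary test point

# Three equivalent ways to compute the signed decision value:
f_sklearn = model.decision_function([x_new])[0]
f_primal = model.coef_[0].dot(x_new) + model.intercept_[0]
# dual_coef_ holds a_i * y_i for each support vector
f_dual = model.dual_coef_[0].dot(model.support_vectors_.dot(x_new)) + model.intercept_[0]

print(f_sklearn, f_primal, f_dual)
assert np.allclose([f_primal, f_dual], f_sklearn)
```

Using the full `support_vectors_` matrix, rather than indexing entries `[0]` and `[1]` as the cell above does, keeps the check valid regardless of how many support vectors the fit produces.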
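The Faster R-CNN notebook below leans heavily on pairwise IoU between boxes (its `get_iou_matrix`, which uses the same `(x1, x2, y1, y2)` corner layout). A minimal standalone sketch of that computation for a single box pair — illustrative only, not the notebook's code:

```python
def iou(box_a, box_b):
    """IoU of two axis-aligned boxes given as (x1, x2, y1, y2)."""
    ax1, ax2, ay1, ay2 = box_a
    bx1, bx2, by1, by2 = box_b
    # Intersection rectangle, clamped to zero width/height when disjoint
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

print(iou((0, 2, 0, 2), (0, 2, 0, 2)))  # identical boxes -> 1.0
print(iou((0, 1, 0, 1), (2, 3, 2, 3)))  # disjoint boxes  -> 0.0
print(iou((0, 2, 0, 2), (1, 3, 1, 3)))  # 1x1 overlap, union 7 -> 1/7
```

The notebook's `get_iou_matrix` vectorizes the same arithmetic over all anchor/ground-truth pairs with PyTorch tensors.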
### Set up google colab and unzip train data ``` # Set up google drive in google colab #save this change to master from google.colab import drive drive.mount('/content/drive') # Unzip training data from drive !unzip -q 'drive/My Drive/VOCdevkit.zip' ``` ### Import Libraries ``` import random import os import math import xml.etree.ElementTree as ET import torch import torch.nn as nn import torch.nn.functional as F import torchvision.models as models from torchvision import transforms import numpy as np import matplotlib.pyplot as plt from torch.utils.data import Dataset, DataLoader, random_split from PIL import Image,ImageFilter, ImageEnhance import cv2 import torch.optim as optim from torchvision.ops import nms from tqdm import tqdm import time ``` ### Define Model Hyper Parameters ``` select_classes = {'aeroplane', 'bicycle','boat','bus', 'dog','train','motorbike'} # Only training for these classes anchor_scale = torch.FloatTensor([8,16,32]) anchor_ratio = torch.FloatTensor([0.5,1,2]) conversion_scale = 16 input_image_height = 800 input_image_width = 800 num_anchors_sample = 256 # Non Maximum Suppression hyper parameters nms_threshold = 0.7 # only consider anchors with IOU < nms_threshold nms_num_train_pre = 12000 # Select top nms_num_train_pre anchors to apply nms on nms_num_train_post = 2000 # number of proposal regions after NMS nms_num_test_pre = 6000 # For test nms_num_test_post = 300 # For test nms_min_size = 16 # only consider anchors as valid while applying nms if ht and wt < nms_min_size #proposal target (pt) hyper parameters pt_n_sample = 128 #Number of samples to sample from roi pt_pos_ratio = 0.25 #the number of positive examples out of the n_samples pt_pos_iou_threshold = 0.5 #Min overlap required between roi and gt for considering positive label (object) pt_neg_iou_threshold = 0.5 # below this value marks as negative #ROI pooling roi_size = (7,7) # RPN loss rpn_loss_lambda = 1 # same is used in the decider loss # Number of validation images 
num_valid_img = 150 device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") print(device) ``` ### Handle Training Data ##### Get all classes and create label encoding ``` # Getting all the classes and creating label encoding all_labels = [] for out in sorted(os.listdir('VOCdevkit/VOC2007/Annotations/')): tree = ET.parse('VOCdevkit/VOC2007/Annotations/' + out) for obj in tree.findall('object'): lab = (obj.find('name').text) # all_labels.append(lab) if (lab in (select_classes)): all_labels.append(lab) distict_labels = list(set(all_labels)) distict_labels = sorted(distict_labels) #label 0 is set for background lab_to_val = {j:i+1 for i,j in enumerate(distict_labels)} val_to_lab = {i+1:j for i,j in enumerate(distict_labels)} num_classes = len(distict_labels) + 1 print(np.unique(np.array(all_labels), return_counts=True)[0]) print(np.unique(np.array(all_labels), return_counts=True)[1]) print(num_classes) ``` ##### Data augmentation ``` # Random blur on training image def random_blur(img): if random.random() < 0.5: return img rad = random.choice([1,2]) img = img.filter(ImageFilter.BoxBlur(radius=rad)) return img # Random brightness, contrast, satutration and hue def random_color(img): if random.random() < 0.1: return img img = transforms.ColorJitter(brightness=(0.5,2.0), contrast=(0.5,2.0), saturation=(0.5,2.0), hue=(-0.25,0.25))(img) return img # Random horizontal flip def random_flip(img, gt_box): if random.random() < 0.5: return img,gt_box img = transforms.RandomHorizontalFlip(p=1)(img) temp = (gt_box[:,1]).copy() gt_box[:,1] = img.size[0] - gt_box[:,0] #x1 gt_box[:,0] = img.size[0] - temp #x2 return img, gt_box # Random crop on image def random_crop(img, gt_box, labels): if random.random() < 0.5: return img,gt_box,labels width, height = img.size select_w = random.uniform(0.6*width, width) select_h = random.uniform(0.6*height, height) start_x = random.uniform(0,width - select_w) start_y = random.uniform(0,height - select_h) left = start_x upper = 
start_y right = start_x + select_w bottom = start_y + select_h gt_box_copy = gt_box.copy() gt_box_copy[gt_box_copy[:,0] < left, 0] = left gt_box_copy[gt_box_copy[:,1] > right, 1] = right gt_box_copy[gt_box_copy[:,2] < upper, 2] = upper gt_box_copy[gt_box_copy[:,3] > bottom, 3] = bottom final_gt_box = [] final_labels = [] for i in range((gt_box_copy.shape[0])): if (((gt_box_copy[i,1] - gt_box_copy[i,0])/(gt_box[i,1]-gt_box[i,0])) < 0.5): continue if (((gt_box_copy[i,3] - gt_box_copy[i,2])/(gt_box[i,3] - gt_box[i,2])) < 0.5): continue final_gt_box.append(gt_box_copy[i]) final_labels.append(labels[i]) if len(final_gt_box) == 0: return img,gt_box,labels final_gt_box = np.array(final_gt_box) final_gt_box[:,0] = final_gt_box[:,0] - left final_gt_box[:,1] = final_gt_box[:,1] - left final_gt_box[:,2] = final_gt_box[:,2] - upper final_gt_box[:,3] = final_gt_box[:,3] - upper return img.crop((left, upper, right, bottom)), final_gt_box, final_labels ``` ##### Create Pytorch Dataset and Dataloader ``` class pascal_voc_data(Dataset): def __init__(self, img_dir,desc_dir,type_list, isTrain, transform = None): super().__init__() self.img_dir = img_dir self.desc_dir = desc_dir self.type_list = type_list self.isTrain = isTrain self.transform = transform self.img_names = [] self.img_descs = [] for img in sorted(os.listdir(img_dir)): if img[:-4] in self.type_list: self.img_names.append(img) for desc in sorted(os.listdir(desc_dir)): if desc[:-4] in self.type_list: self.img_descs.append(desc) self.img_names = [os.path.join(img_dir, img_name) for img_name in self.img_names] self.img_descs = [os.path.join(desc_dir, img_desc) for img_desc in self.img_descs] # self.img_names = sorted(os.listdir(img_dir)) # self.img_descs = sorted(os.listdir(desc_dir)) # self.img_names = [os.path.join(img_dir, img_name) for img_name in self.img_names] # self.img_descs = [os.path.join(desc_dir, img_desc) for img_desc in self.img_descs] self.loc_gts = [] self.loc_labels = [] self.final_img_names = [] for 
img_idx,img_desc in enumerate(self.img_descs): tree = ET.parse(img_desc) gt = [] loc_lab = [] for obj in tree.findall('object'): if ((obj.find('name').text) not in (select_classes)): continue lab = lab_to_val[(obj.find('name').text)] loc1 = int(obj.find('bndbox').find('xmin').text) loc2 = int(obj.find('bndbox').find('xmax').text) loc3 = int(obj.find('bndbox').find('ymin').text) loc4 = int(obj.find('bndbox').find('ymax').text) # if ht or width is less than 10, ignore the gt box if ((loc2 - loc1) < 10 ) or ((loc4 - loc3) < 10): continue gt.append([int(loc1),int(loc2),int(loc3),int(loc4)]) loc_lab.append(lab) if (len(gt) == 0): continue self.loc_gts.append(gt) self.loc_labels.append(loc_lab) self.final_img_names.append(self.img_names[img_idx]) self.img_names = self.final_img_names def __len__(self): return len(self.img_names) def __getitem__(self,idx): img_name = self.img_names[idx] img = Image.open(img_name) arr_loc_gts = np.array(self.loc_gts[idx]) label = self.loc_labels[idx] if self.isTrain: img = random_blur(img) img = random_color(img) img,arr_loc_gts = random_flip(img,arr_loc_gts) img,arr_loc_gts,label = random_crop(img,arr_loc_gts,label) img_h_pre = img.size[1] img_w_pre = img.size[0] if self.transform: img = self.transform(img) img_h_post = img.shape[1] img_w_post = img.shape[2] height_ratio = img_h_post/img_h_pre width_ratio = img_w_post/img_w_pre arr_loc_gts[:,0] = arr_loc_gts[:,0]*width_ratio arr_loc_gts[:,1] = arr_loc_gts[:,1]*width_ratio arr_loc_gts[:,2] = arr_loc_gts[:,2]*height_ratio arr_loc_gts[:,3] = arr_loc_gts[:,3]*height_ratio gts = (arr_loc_gts).tolist() return img, gts,label ''' While using pretrained models - Pytorch torchvision documentation - https://pytorch.org/docs/master/torchvision/models.html The images have to be loaded in to a range of [0, 1] and then normalized using mean = [0.485, 0.456, 0.406] and std = [0.229, 0.224, 0.225] ''' transform = transforms.Compose( [transforms.Resize((input_image_height,input_image_width)), 
transforms.ToTensor(), transforms.Normalize((0.485, 0.456, 0.406), (0.229, 0.224, 0.225))]) inv_normalize = transforms.Normalize( mean=[-0.485/0.229, -0.456/0.224, -0.406/0.225], std=[1/0.229, 1/0.224, 1/0.225] ) ``` ##### Split into train and validations ``` file = open('drive/My Drive/val.txt', "r") valid_images = file.read().split('\n') valid_images = valid_images[:-1] train_images = [] for img in os.listdir('VOCdevkit/VOC2007/JPEGImages/'): if img[:-4] in valid_images: continue train_images.append(img[:-4]) train_dataset = pascal_voc_data('VOCdevkit/VOC2007/JPEGImages/', 'VOCdevkit/VOC2007/Annotations/',train_images , True,transform) valid_dataset = pascal_voc_data('VOCdevkit/VOC2007/JPEGImages/', 'VOCdevkit/VOC2007/Annotations/',valid_images ,False,transform) print(len(train_dataset)) print(len(valid_dataset)) ``` ##### Validation class images ``` test_loader = DataLoader(valid_dataset, batch_size = len(valid_dataset)) a,b,c = iter(test_loader).next() print(torch.unique(c[0], return_counts=True)) train_loader = DataLoader(train_dataset, batch_size=1) valid_loader = DataLoader(valid_dataset, batch_size=1,shuffle=True) ``` ### Visualise input image with box co-ordinates provided in format - x0,x1,y0,y1 ``` # Given input image, draw rectangles as specified by gt_box and pred_box and display def visualize_tensor(img, gt_box, pred_box,save_image='',tb_writer=None): plt.figure(figsize=(5,5)) transform_img = inv_normalize(img[0]).permute(1,2,0).to('cpu').numpy() transform_img = transform_img.copy() for box in gt_box: x0, x1, y0, y1 = box cv2.rectangle(transform_img, (int(x0),int(y0)), (int(x1),int(y1)), color=(0, 255, 255), thickness=2) for box in pred_box: x0, x1, y0, y1 = box cv2.rectangle(transform_img, (int(x0), int(y0)), (int(x1), int(y1)), color=(255, 0, 0), thickness=2) if tb_writer: # grid = torchvision.utils.make_grid(transform_img) tb_writer.add_image(save_image, transform_img, dataformats='HWC') elif save_image == '': plt.imshow(transform_img) plt.show() 
else: plt.imshow(transform_img) plt.savefig(save_image + '.png') ``` ### Faster RCNN backbone used - VGG16 model * Top 10 layers (top 4 conv layers) are not trained ``` # Base VGG16 model def Faster_RCNN_vgg16(num_freeze_top): vgg16 = models.vgg16(pretrained=True) vgg_feature_extracter = vgg16.features[:-1] vgg_classifier = vgg16.classifier[:-1] # Freeze learning of top few conv layers for layer in vgg_feature_extracter[:num_freeze_top]: for param in layer.parameters(): param.requires_grad = False return vgg_feature_extracter.to(device), vgg_classifier.to(device) ``` ### Code for anchor generation on an image * For train and test, only anchors lying inside the image boundary are considered ``` ''' RCNN Paper For anchors, we use 3 scales with box areas of (128*128) || (256*256) , and (512*512) pixels, and 3 aspect ratios of 1:1, 1:2, and 2:1. ''' # At point x,y from feature map, return 9 anchors on the input image scale def generate_anchor_at_point(x,y): anchor_positions = torch.zeros((len(anchor_scale) * len(anchor_ratio),4)) ctr_x = (x*2+1)*(conversion_scale/2) # for x = 0, centre is 8 || for x = 1, centre is 24 ctr_y = (y*2+1)*(conversion_scale/2) for ratio_idx in range(len(anchor_ratio)): for scale_idx in range(len(anchor_scale)): current = len(anchor_scale)*ratio_idx + scale_idx ratio = anchor_ratio[ratio_idx] scale = anchor_scale[scale_idx] h = conversion_scale*scale*torch.sqrt(ratio) w = conversion_scale*scale*torch.sqrt(1.0/ratio) anchor_positions[current,0] = ctr_x - w/2 anchor_positions[current,1] = ctr_x + w/2 anchor_positions[current,2] = ctr_y - h/2 anchor_positions[current,3] = ctr_y + h/2 return anchor_positions # For features of scale (x,y) , generate all the anchor boxes # input is height,width # returns output on x*y*9,4 def generate_anchors(x,y): anchor_positions = torch.zeros((x*y,len(anchor_scale) * len(anchor_ratio),4)) for ctr_x in range(x): for ctr_y in range(y): current = ctr_x*y + ctr_y anchors = generate_anchor_at_point(ctr_x, ctr_y) 
anchor_positions[current] = anchors return anchor_positions.reshape(-1,4) ''' RCNN paper - During training, we ignore all cross-boundary anchors so they do not contribute to the loss - During testing, however, we still apply the fully convolutional RPN to the entire image. This may generate crossboundary proposal boxes, which we clip to the image boundary ''' ''' For this code, cross boundary anchors are ignored at this step both in train and test ''' def get_valid_anchors(anchor_positions): valid_anchors_idx = torch.where((anchor_positions[:,0] >= 0) & (anchor_positions[:,1] <= input_image_width) & (anchor_positions[:,2] >= 0) & (anchor_positions[:,3] <= input_image_height) )[0] anchor_positions = anchor_positions[valid_anchors_idx] return anchor_positions, valid_anchors_idx # valid_anchors,valid_anchors_idx = get_valid_anchors(anchor_positions) # print(valid_anchors) ``` ### Getting intersection over union between 2 sets of boxes ``` # Get Intersection over Union between all the boxes in anchor_positions and gt_boxes def get_iou_matrix(anchor_positions, gt_boxes): iou_matrix = torch.zeros((len(anchor_positions), len(gt_boxes))) for idx,box in enumerate(gt_boxes): if isinstance(box,torch.Tensor): gt = torch.cat([box]*len(anchor_positions)).view(1,-1,4)[0] else: gt = torch.FloatTensor([box]*len(anchor_positions)) max_x = torch.max(gt[:,0],anchor_positions[:,0]) min_x = torch.min(gt[:,1],anchor_positions[:,1]) max_y = torch.max(gt[:,2],anchor_positions[:,2]) min_y = torch.min(gt[:,3],anchor_positions[:,3]) invalid_roi_idx = (min_x < max_x) | (min_y < max_y) roi_area = (min_x - max_x)*(min_y - max_y) roi_area[invalid_roi_idx] = 0 total_area = (gt[:,1] - gt[:,0])*(gt[:,3] - gt[:,2]) + \ (anchor_positions[:,1] - anchor_positions[:,0])*(anchor_positions[:,3]-anchor_positions[:,2]) - \ roi_area iou = roi_area/(total_area + 1e-6) iou_matrix[:,idx] = iou return iou_matrix ``` ### After IOU calculation between anchors and gt boxes * Assign +1/-1/0 labels to anchors based on 
IOU * Sample 128 positive and 128 negative anchors for training * If less than 128 positive anchors, pad with negative ``` ''' RCNN paper: assign a positive label to two kinds of anchors: (i) the anchor/anchors with the highest Intersection-overUnion (IoU) overlap with a ground-truth box, (ii) an anchor that has an IoU overlap higher than 0.7 with assign a negative label to a non-positive anchor if its IoU ratio is lower than 0.3 Anchors that are neither positive nor negative do not contribute to the training objective. ''' def get_max_iou_data(iou_matrix): gt_max_value = iou_matrix.max(axis=0)[0] # for each gt box, this is the max iou #There is a possiblilty that corresponding to one gt box, there are multiple anchors with same iou (max value) all_gt_max = torch.where(iou_matrix == gt_max_value)[0] # For each anchor box, this is the max iou with any of the gt_box anchor_max_value = torch.max(iou_matrix, axis=1)[0] anchor_max = torch.argmax(iou_matrix, axis=1) return all_gt_max, anchor_max_value,anchor_max # 1 - positive || 0 - negative || -1 - ignore def get_anchor_labels(anchor_positions, all_gt_max, anchor_max_value): anchor_labels = torch.zeros(anchor_positions.shape[0]) anchor_labels.fill_(-1.0) # for each anchor box, if iou with any of the gt_box is less than threshold, mark as 0 anchor_labels[anchor_max_value < 0.3] = 0 # If corresponding to any gt_box, the anchor has max iou -> mark as 1 anchor_labels[all_gt_max] = 1.0 # If for any anchor box, iou is greater than threshold for any of the gt_box, mark as 1 anchor_labels[anchor_max_value > 0.7] = 1.0 return anchor_labels ''' RCNN paper randomly sample 256 anchors in an image to compute the loss function of a mini-batch, where the sampled positive and negative anchors have a ratio of up to 1:1. If there are fewer than 128 positive samples in an image, we pad the mini-batch with negative ones. 
''' def sample_anchors_for_train(anchor_labels): pos_anchor_labels = torch.where(anchor_labels == 1)[0] num_pos = min(num_anchors_sample/2, len(pos_anchor_labels)) pos_idx = np.random.choice(pos_anchor_labels, int(num_pos), replace=False) neg_anchor_labels = torch.where(anchor_labels == 0)[0] num_neg = num_anchors_sample - num_pos neg_idx = np.random.choice(neg_anchor_labels, int(num_neg), replace=False) anchor_labels[:] = -1 anchor_labels[pos_idx] = 1 anchor_labels[neg_idx] = 0 return anchor_labels ``` ### bbox regression training data calculation * delta_anchor_gt - Get delta between anchor_positions and gt_boxes * correct_anchor_positions - Given delta and anchor_positions, correct the anchors ``` ''' x,y,w,h are ctr_x, ctr_y, width and height for GT box dx = (x - x_{a})/w_{a} dy = (y - y_{a})/h_{a} dw = log(w/ w_a) dh = log(h/ h_a) ''' # Get delta between anchor_positions and gt_boxes def delta_anchor_gt(anchor_positions, gt_boxes , anchor_max): anchor_gt_map = gt_boxes[anchor_max] anchor_height = anchor_positions[:,3] - anchor_positions[:,2] # y2-y1 anchor_width = anchor_positions[:,1] - anchor_positions[:,0] # x2-x1 anchor_ctr_y = anchor_positions[:,2] + anchor_height/2 # y1 + h/2 anchor_ctr_x = anchor_positions[:,0] + anchor_width/2 # x1 + w/2 gt_height = anchor_gt_map[:,3] - anchor_gt_map[:,2] # y2-y1 gt_width = anchor_gt_map[:,1] - anchor_gt_map[:,0] # x2-x1 gt_ctr_y = anchor_gt_map[:,2] + gt_height/2 # y1 + h/2 gt_ctr_x = anchor_gt_map[:,0] + gt_width/2 # x1 + w/2 dx = (gt_ctr_x - anchor_ctr_x)/anchor_width dy = (gt_ctr_y - anchor_ctr_y)/anchor_height dw = torch.log(gt_width/anchor_width) dh = torch.log(gt_height/anchor_height) delta = torch.zeros_like(anchor_positions) delta[:,0] = dx delta[:,1] = dy delta[:,2] = dw delta[:,3] = dh return delta # Given delta and anchor_positions, correct the anchors def correct_anchor_positions(anchor_positions, delta): ha = anchor_positions[:,3] - anchor_positions[:,2] # y2-y1 wa = anchor_positions[:,1] - 
anchor_positions[:,0] # x2-x1 ya = anchor_positions[:,2] + ha/2 # y1 + h/2 xa = anchor_positions[:,0] + wa/2 # x1 + w/2 dx = delta[:,0] dy = delta[:,1] dw = delta[:,2] dh = delta[:,3] x = dx*wa + xa y = dy*ha + ya w = torch.exp(dw)*wa h = torch.exp(dh)*ha correct_anchor_positions = torch.zeros_like(anchor_positions) correct_anchor_positions[:,0] = x - w/2 correct_anchor_positions[:,1] = x + w/2 correct_anchor_positions[:,2] = y - h/2 correct_anchor_positions[:,3] = y + h/2 return correct_anchor_positions ``` ### Faster RCNN region proposal network definition ``` # Region Proposal Network class Faster_RCNN_rpn(nn.Module): def __init__(self,extracter): super().__init__() self.extracter = extracter self.conv1 = nn.Conv2d(512, 512, 3, 1, 1) #class_conv1 checks corresponding to 1 point in feature map, 18 outputs. # 9 anchors * (2) || anchor has object or not self.class_conv = nn.Conv2d(512, 2*len(anchor_scale)*len(anchor_ratio), 1, 1, 0) #reg_conv1 checks corresponding to 1 point in feature map, 36 outputs.
# 9 anchors * (4) || anchor delta wrt ground truth boxes self.reg_conv = nn.Conv2d(512, 4*len(anchor_scale)*len(anchor_ratio), 1 ,1 , 0) self.conv1.weight.data.normal_(0, 0.01) self.conv1.bias.data.zero_() self.class_conv.weight.data.normal_(0, 0.01) self.class_conv.bias.data.zero_() self.reg_conv.weight.data.normal_(0, 0.01) self.reg_conv.bias.data.zero_() def forward(self,x): # input and output features of CNN are (nSamples x nChannels x Height x Width) features = self.extracter(x) conv1_out = F.relu(self.conv1(features)) class_out = self.class_conv(conv1_out) reg_out = self.reg_conv(conv1_out) return features, class_out, reg_out ``` ### Loss calculation - The same loss function is used for decider and RPN layer ``` # Loss for Region Proposal Network # Same loss is used for decider network def RPN_loss(locs_preditct, class_predict, final_RPN_locs,final_RPN_class): final_RPN_locs = final_RPN_locs.to(device) final_RPN_class = final_RPN_class.long().to(device) # Cross entropy loss (check if the target is background or foreground) # Only consider labels with values 1 or 0.
ignore -1 class_loss = F.cross_entropy(class_predict, final_RPN_class, ignore_index=-1) #smooth L1 regression loss (calculate the loss in predicted locations of foreground) ''' Smooth L1 loss uses a squared term if the absolute element-wise error falls below 1 and an L1 term otherwise ''' foreground_class_idx = (final_RPN_class > 0) locs_preditct = locs_preditct[foreground_class_idx] final_RPN_locs = final_RPN_locs[foreground_class_idx] loc_loss = F.smooth_l1_loss(locs_preditct, final_RPN_locs) / (sum(foreground_class_idx)+1e-6) rpn_loss = class_loss + rpn_loss_lambda*loc_loss return rpn_loss ``` ### Non Maximum Suppression * A custom implementation is included but not used * The PyTorch nms method gives a ~5-6x speedup ``` # Apply non max suppression on anchors def non_max_suppression(correct_anchor_positions, class_score, img_h, img_w, isTrain): if isTrain: nms_pre = nms_num_train_pre nms_post = nms_num_train_post else : nms_pre = nms_num_test_pre nms_post = nms_num_test_post # Clip the anchors to image dimensions correct_anchor_positions[correct_anchor_positions[:,0] < 0,0] = 0 # x1 correct_anchor_positions[correct_anchor_positions[:,1] > img_w,1] = img_w # x2 correct_anchor_positions[correct_anchor_positions[:,2] < 0,2] = 0 # y1 correct_anchor_positions[correct_anchor_positions[:,3] > img_h,3] = img_h # y2 # Only keep anchors with height and width > nms_min_size anchor_width = correct_anchor_positions[:,1] - correct_anchor_positions[:,0] anchor_height = correct_anchor_positions[:,3] - correct_anchor_positions[:,2] keep_idx = (anchor_height > nms_min_size) & (anchor_width > nms_min_size) correct_anchor_positions = correct_anchor_positions[keep_idx] class_score = class_score[keep_idx] # Get the index of sorted class scores in descending order and select top nms_pre idx sorted_class_scores = torch.argsort(class_score, descending=True) pre_nms_idx = sorted_class_scores[:nms_pre] correct_anchor_positions = correct_anchor_positions[pre_nms_idx] class_score = 
class_score[pre_nms_idx] # Implementation for non max suppression ''' sorted_class_scores = torch.argsort(class_score, descending=True).to('cpu') keep_anchors = [] Apply NMS while len(sorted_class_scores) > 1: current = sorted_class_scores[0] keep_anchors.append(current) iou_matrix = get_iou_matrix(correct_anchor_positions[sorted_class_scores[1:]],correct_anchor_positions[current].reshape(1,-1,4)[0]) sorted_class_scores = sorted_class_scores[np.where(iou_matrix < nms_threshold)[0] + 1] if (len(sorted_class_scores) == 1): keep_anchors.append(sorted_class_scores[0]) ''' ''' using pytorch standard nms function as it gives 5-6 times speedup ''' change_format = torch.zeros_like(correct_anchor_positions) change_format[:,0] = correct_anchor_positions[:,0] change_format[:,1] = correct_anchor_positions[:,2] change_format[:,2] = correct_anchor_positions[:,1] change_format[:,3] = correct_anchor_positions[:,3] keep_anchors = nms(change_format.to('cpu'), class_score.clone().detach().to('cpu'), nms_threshold) keep_anchors = keep_anchors[:nms_post] correct_anchor_positions = correct_anchor_positions[keep_anchors] return correct_anchor_positions ``` ### Assign classes to output of Region Proposal Network ``` def assign_classification_anchors(extracted_roi, gt_boxes, gt_labels, isTrain): # calculate iou of rois and gt boxes iou_matrix = get_iou_matrix(extracted_roi, gt_boxes) # for each ROI, find gt with max iou and corresponding value gt_roi_argmax = iou_matrix.argmax(axis=1) gt_roi_max = iou_matrix.max(axis=1)[0] #If a particular ROI has max overlap with a gt_box, assign label of gt_box to roi assign_labels = gt_labels[gt_roi_argmax] num_pos = pt_n_sample*pt_pos_ratio pos_idx = torch.where(gt_roi_max > pt_pos_iou_threshold)[0] if isTrain: pos_idx = np.random.choice(pos_idx, int(min(len(pos_idx), num_pos)), replace=False) num_neg = pt_n_sample - len(pos_idx) neg_idx = torch.where(gt_roi_max < pt_neg_iou_threshold)[0] if isTrain: neg_idx = np.random.choice(neg_idx, 
int(min(len(neg_idx), num_neg)), replace=False) keep_idx = np.append(pos_idx, neg_idx) assign_labels[neg_idx] = 0 assign_labels = assign_labels[keep_idx] extracted_roi = extracted_roi[keep_idx] gt_roi_argmax = gt_roi_argmax[keep_idx] return assign_labels, extracted_roi,gt_roi_argmax, keep_idx ``` ### ROI Pooling Layer ``` def ROI_pooling(extracted_roi, feature,ROI_pooling_layer): extracted_roi = extracted_roi/16.0 out = [] for roi in extracted_roi: x1 = int(roi[0]) x2 = int(roi[1]+1) y1 = int(roi[2]) y2 = int(roi[3]+1) out.append(ROI_pooling_layer(feature[:,:,y1:y2,x1:x2])) out = torch.cat(out) return out ``` ### Faster RCNN decider layer definition ``` class Faster_RCNN_decider(nn.Module): def __init__(self,classifier): super().__init__() self.flatten = nn.Flatten() self.classifier = classifier self.class_lin = nn.Linear(4096, num_classes) self.reg_lin = nn.Linear(4096, num_classes*4) def forward(self, x): x = self.flatten(x) x = self.classifier(x) decider_class = self.class_lin(x) decider_loc = self.reg_lin(x) return decider_class, decider_loc ``` ### Pipeline (train/validation) for a single image ``` img_anchors_all = generate_anchors(int(input_image_height/conversion_scale), int(input_image_width/conversion_scale)).to(device) def single_image_pipeline(input_image, gt_box, label,isTrain): input_image = input_image.to(device) gt_box = torch.FloatTensor(gt_box).to(device) label = torch.FloatTensor(label).to(device) # Generate CNN features for input image # Genearate region proposals predictions features, class_out, reg_out = rpn(input_image) locs_preditct = reg_out.permute(0,2,3,1).reshape(1,-1,4)[0] class_predict = class_out.permute(0,2,3,1).reshape(1,-1,2)[0] class_score = class_out.reshape(1,features.shape[2],features.shape[3],9,2)[:,:,:,:,1].reshape(1,-1)[0].detach() # For training region proposal network, generate anchors on the image. 
    img_anchors_valid, img_anchors_valid_idx = get_valid_anchors(img_anchors_all.clone())
    iou_anchors_gt = get_iou_matrix(img_anchors_valid, gt_box)
    all_gt_max, anchor_max_value, anchor_max = get_max_iou_data(iou_anchors_gt)
    anchor_labels = get_anchor_labels(img_anchors_valid, all_gt_max, anchor_max_value)
    anchor_labels = sample_anchors_for_train(anchor_labels)
    delta = delta_anchor_gt(img_anchors_valid, gt_box, anchor_max)

    final_RPN_locs = torch.zeros_like(img_anchors_all)
    final_RPN_locs[img_anchors_valid_idx] = delta
    final_RPN_class = torch.zeros(img_anchors_all.shape[0])
    final_RPN_class.fill_(-1)
    final_RPN_class[img_anchors_valid_idx] = anchor_labels

    # Loss for the RPN layer
    loss1 = RPN_loss(locs_preditct, class_predict, final_RPN_locs, final_RPN_class).to(device)

    # Based on the bbox output of the RPN, correct the generated anchors
    corrected_anchors = correct_anchor_positions(img_anchors_all.to(device), locs_preditct).detach()

    # Apply NMS to the region proposals
    extracted_rois = non_max_suppression(corrected_anchors, class_score,
                                         input_image.shape[2], input_image.shape[3], isTrain)
    final_decider_class, extracted_roi_samples, gt_roi_argmax, idx = assign_classification_anchors(
        extracted_rois, gt_box, label, isTrain)
    final_decider_locs = delta_anchor_gt(extracted_roi_samples, gt_box, gt_roi_argmax)

    # Apply ROI pooling to the extracted ROIs
    pooled_features = ROI_pooling(extracted_roi_samples, features, ROI_pooling_layer)
    decider_class, decider_loc = decider(pooled_features)
    decider_loc = decider_loc.reshape(pooled_features.shape[0], -1, 4)  # 128*21*4
    decider_loc = decider_loc[torch.arange(0, pooled_features.shape[0]), final_decider_class.long()]  # 128*4

    # Loss for the decider layer
    loss2 = RPN_loss(decider_loc, decider_class, final_decider_locs, final_decider_class).to(device)

    with torch.no_grad():
        decider_loc_no_grad = decider_loc.clone().to(device)
        # Correct the ROIs based on the bbox output of the decider layers
        corrected_roi = correct_anchor_positions(extracted_roi_samples, decider_loc_no_grad).detach()

    return loss1, loss2, decider_class, corrected_roi
```

#### Test model for input image

```
def test_model(input_image):
    rpn.eval()
    decider.eval()
    input_image = input_image.to(device)
    features, class_out, reg_out = rpn(input_image)
    locs_preditct = reg_out.permute(0, 2, 3, 1).reshape(1, -1, 4)[0]
    class_score = class_out.reshape(1, features.shape[2], features.shape[3], 9, 2)[:, :, :, :, 1].reshape(1, -1)[0].detach()
    corrected_anchors = correct_anchor_positions(img_anchors_all.to(device), locs_preditct).detach()
    extracted_rois = non_max_suppression(corrected_anchors, class_score,
                                         input_image.shape[2], input_image.shape[3], False)
    pooled_features = ROI_pooling(extracted_rois, features, ROI_pooling_layer)
    decider_class, decider_loc = decider(pooled_features)

    with torch.no_grad():
        decider_loc_no_grad = decider_loc.clone().to(device)
        corrected_roi = correct_anchor_positions(extracted_rois, decider_loc_no_grad).detach()

    # Clip the corrected ROIs to the image boundaries
    corrected_roi[corrected_roi[:, 0] < 0, 0] = 0                                    # x1
    corrected_roi[corrected_roi[:, 1] > input_image_width, 1] = input_image_width    # x2
    corrected_roi[corrected_roi[:, 2] < 0, 2] = 0                                    # y1
    corrected_roi[corrected_roi[:, 3] > input_image_height, 3] = input_image_height  # y2
    print(corrected_roi)

    decoder_conf = decider_class.softmax(dim=1).max(dim=1)[0]
    decoder_conf = decoder_conf.detach()
    decoder_conf = decoder_conf[decider_class.argmax(axis=1) != 0]

    # Greedy NMS over the remaining boxes, in descending confidence order
    keep_anchors = []
    sorted_class_scores = torch.argsort(decoder_conf, descending=True)
    while len(sorted_class_scores) > 1:
        current = sorted_class_scores[0]
        keep_anchors.append(current.item())
        iou_matrix = get_iou_matrix(corrected_roi[sorted_class_scores[1:]],
                                    corrected_roi[current].reshape(1, -1, 4)[0])
        sorted_class_scores = sorted_class_scores[np.where(iou_matrix < 0.2)[0] + 1]
    if (len(sorted_class_scores) == 1):
        keep_anchors.append(sorted_class_scores[0].item())

    for pred in decider_class.argmax(axis=1)[decider_class.argmax(axis=1) != 0][keep_anchors]:
        print(val_to_lab[int(pred)], end=', ')
    print('')
    for pred in decoder_conf[keep_anchors]:
        print(pred.item(), end=', ')
    visualize_tensor(input_image, extracted_rois[decider_class.argmax(axis=1) != 0][keep_anchors], [])
```

### Use a single image pipeline multiple times for training or batch testing

##### Define variables, models, load saved models or training steps

```
load_model = ''
loss1_hist = []
loss2_hist = []
loss_hist = []
valid_loss1_hist = []
valid_loss2_hist = []
valid_loss_hist = []
epoch_start = 0
best_valid_score = 10000

vgg_feature_extractor, vgg_classifier = Faster_RCNN_vgg16(num_freeze_top=10)
rpn = Faster_RCNN_rpn(vgg_feature_extractor).to(device)
ROI_pooling_layer = nn.AdaptiveMaxPool2d(roi_size).to(device)
decider = Faster_RCNN_decider(vgg_classifier).to(device)

all_params = list(list(rpn.parameters()) + list(decider.parameters()))
optimizer = optim.Adam(all_params, lr=0.00005)

if load_model != '':
    print('loading model ... ')
    checkpoint = torch.load(load_model, map_location=device)
    rpn.load_state_dict(checkpoint['rpn_state_dict'])
    decider.load_state_dict(checkpoint['decider_state_dict'])
    optimizer.load_state_dict(checkpoint['optimizer_state_dict'])
    loss1_hist = checkpoint['loss1_hist']
    loss2_hist = checkpoint['loss2_hist']
    loss_hist = checkpoint['loss_hist']
    valid_loss1_hist = checkpoint['valid_loss1_hist']
    valid_loss2_hist = checkpoint['valid_loss2_hist']
    valid_loss_hist = checkpoint['valid_loss_hist']
    epoch_start = checkpoint['epoch_start']
    best_valid_score = checkpoint['best_valid_score']
    rpn.train()
    decider.train()
    print('model loaded ...')

running_count = 0
running_net_loss = 0
running_loss1 = 0
running_loss2 = 0
loss_avg_step = 20
train_visualise_step = 150
```

##### Faster RCNN train/validation code (supported batch size is 1)

```
print("Start epoch - " + str(epoch_start))
print("best_valid_score - " + str(best_valid_score))

for epoch in range(epoch_start, epoch_start + 50):
    train_loader = DataLoader(train_dataset, batch_size=1, shuffle=True)
    for i, data in enumerate(train_loader, 0):
        img, gt_box, labels = data
        loss1, loss2, pred_class, pred_box = single_image_pipeline(img.to(device), gt_box, labels, True)
        net_loss = loss1 + loss2

        optimizer.zero_grad()
        net_loss.backward()
        optimizer.step()

        loss1 = loss1.detach()
        loss2 = loss2.detach()
        pred_class = pred_class.detach()
        pred_box = pred_box.detach()
        net_loss = net_loss.detach()

        if not ((math.isnan(net_loss)) or (math.isnan(loss1)) or (math.isnan(loss2))):
            running_count += 1
            running_net_loss += net_loss.data
            running_loss1 += loss1.data
            running_loss2 += loss2.data

        if ((i + 1) % (loss_avg_step) == 0):
            loss1_hist.append(running_loss1 / (running_count + 1e-6))
            loss2_hist.append(running_loss2 / (running_count + 1e-6))
            loss_hist.append(running_net_loss / (running_count + 1e-6))
            print(str(epoch) + '--' + str(i) + '--- ' + str(running_net_loss / (running_count + 1e-6)))
            running_count = 0
            running_net_loss = 0
            running_loss1 = 0
            running_loss2 = 0

        if ((i + 1) % (train_visualise_step)) == 0:
            print("Train Data Visualise")
            fig, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(30, 5))
            ax1.plot(loss1_hist)
            ax1.title.set_text('Train RPN Loss')
            ax2.plot(loss2_hist)
            ax2.title.set_text('Train Decider Loss')
            ax3.plot(loss_hist)
            ax3.title.set_text('Train Net Loss')
            visualize_tensor(img, pred_box[pred_class.argmax(axis=1) != 0], gt_box)
            for pred in pred_class.argmax(axis=1)[pred_class.argmax(axis=1) != 0]:
                print(val_to_lab[int(pred)], end=', ')
            print('')
            plt.show()

            ''' Validation Code start '''
            valid_loader = DataLoader(valid_dataset, batch_size=1, shuffle=True)
            rpn.eval()
            decider.eval()
            print("-------------------------------")
            print("Evaluating valid sets")
            # test_model(iter(valid_loader).next()[0])
            valid_loss1 = 0
            valid_loss2 = 0
            valid_net_loss = 0
            valid_run_count = 0
            for valid_idx, valid_data in (enumerate(valid_loader, 0)):
                img, gt_box, labels = valid_data
                loss1, loss2, pred_class, pred_box = single_image_pipeline(img.to(device), gt_box, labels, False)
                net_loss = loss1 + loss2
                if not ((math.isnan(net_loss)) or (math.isnan(loss1)) or (math.isnan(loss2))):
                    valid_loss1 += loss1.data
                    valid_loss2 += loss2.data
                    valid_net_loss += net_loss.data
                    valid_run_count += 1

            valid_loss1_hist.append(valid_loss1 / (valid_run_count + 1e-6))
            valid_loss2_hist.append(valid_loss2 / (valid_run_count + 1e-6))
            valid_loss_hist.append(valid_net_loss / (valid_run_count + 1e-6))

            print("-------------------------------")
            print("Validation Data Visualise -- ")
            print("-------------------------------")
            fig, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(30, 5))
            ax1.plot(valid_loss1_hist)
            ax1.title.set_text('Valid RPN Loss')
            ax2.plot(valid_loss2_hist)
            ax2.title.set_text('Valid Decider Loss')
            ax3.plot(valid_loss_hist)
            ax3.title.set_text('Valid Net Loss')
            test_model(img)
            for pred in pred_class.argmax(axis=1)[pred_class.argmax(axis=1) != 0]:
                print(val_to_lab[int(pred)], end=', ')
            print('')
            plt.show()
            print(valid_run_count)
            print("-------------------------------")
            rpn.train()
            decider.train()
            ''' Validation Code end '''

            PATH = 'drive/My Drive/saved_models_faster_rcnn/current.pt'
            print(str(epoch) + '--' + str(i) + ' saving model ' + PATH)
            torch.save({
                'rpn_state_dict': rpn.state_dict(),
                'decider_state_dict': decider.state_dict(),
                'optimizer_state_dict': optimizer.state_dict(),
                'loss1_hist': loss1_hist,
                'loss2_hist': loss2_hist,
                'loss_hist': loss_hist,
                'valid_loss1_hist': valid_loss1_hist,
                'valid_loss2_hist': valid_loss2_hist,
                'valid_loss_hist': valid_loss_hist,
                'epoch_start': epoch,
                'best_valid_score': best_valid_score
            }, PATH)

            valid_net_loss = valid_net_loss / (valid_run_count + 1e-6)
            if (valid_net_loss < best_valid_score):
                best_valid_score = valid_net_loss
                print("-------------------------------")
                print("Found new Best :)")
                print("-------------------------------")
                PATH = 'drive/My Drive/saved_models_faster_rcnn/best.pt'
                print(str(epoch) + '--' + str(i) + ' saving model ' + PATH)
                torch.save({
                    'rpn_state_dict': rpn.state_dict(),
                    'decider_state_dict': decider.state_dict(),
                    'optimizer_state_dict': optimizer.state_dict(),
                    'loss1_hist': loss1_hist,
                    'loss2_hist': loss2_hist,
                    'loss_hist': loss_hist,
                    'valid_loss1_hist': valid_loss1_hist,
                    'valid_loss2_hist': valid_loss2_hist,
                    'valid_loss_hist': valid_loss_hist,
                    'epoch_start': epoch,
                    'best_valid_score': best_valid_score
                }, PATH)
```

#### Test model on a single image

```
img = Image.open('drive/My Drive/saved_models/bike.jpg')
img = transform(img)
img = img.unsqueeze(0)
test_model(img)
```
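The `test_model` loop above implements greedy non-max suppression by hand: boxes are visited in descending confidence order, and any box whose IoU with an already-kept box reaches 0.2 is dropped. A minimal standalone sketch of that idea in plain Python (names and the `(x1, y1, x2, y2)` tuple layout are illustrative, not from the notebook):

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_threshold=0.2):
    """Greedy NMS: keep the highest-scoring box, drop boxes that overlap it."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        current = order.pop(0)
        keep.append(current)
        order = [i for i in order if iou(boxes[current], boxes[i]) < iou_threshold]
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 10, 10), (20, 20, 30, 30)]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores))  # [0, 2]: the second box overlaps the first and is dropped
```

The tensorized version in `test_model` does the same thing, only with `torch.argsort` for the ordering and `get_iou_matrix` for the overlap test.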
# LeetCode #804. Unique Morse Code Words

## Question

https://leetcode.com/problems/unique-morse-code-words/

International Morse Code defines a standard encoding where each letter is mapped to a series of dots and dashes, as follows: "a" maps to ".-", "b" maps to "-...", "c" maps to "-.-.", and so on. For convenience, the full table for the 26 letters of the English alphabet is given below:

[".-","-...","-.-.","-..",".","..-.","--.","....","..",".---","-.-",".-..","--","-.","---",".--.","--.-",".-.","...","-","..-","...-",".--","-..-","-.--","--.."]

Now, given a list of words, each word can be written as a concatenation of the Morse code of each letter. For example, "cba" can be written as "-.-..--...", which is the concatenation "-.-." + "-..." + ".-". We'll call such a concatenation the transformation of a word.

Return the number of different transformations among all words we have.

Example:

    Input: words = ["gin", "zen", "gig", "msg"]
    Output: 2
    Explanation: The transformation of each word is:
    "gin" -> "--...-."
    "zen" -> "--...-."
    "gig" -> "--...--."
    "msg" -> "--...--."
    There are 2 different transformations, "--...-." and "--...--.".

Note:

- The length of words will be at most 100.
- Each words[i] will have length in range [1, 12].
- words[i] will only consist of lowercase letters.

## My Solution

```
def uniqueMorseRepresentations(words):
    morse = [".-","-...","-.-.","-..",".","..-.","--.","....","..",".---","-.-",".-..","--","-.","---",".--.","--.-",".-.","...","-","..-","...-",".--","-..-","-.--","--.."]

    from string import ascii_lowercase
    alphabet = list(ascii_lowercase)

    morse_dic = {}
    for i, a in enumerate(alphabet):
        morse_dic[a] = morse[i]

    res = list()
    for word in words:
        temp = ""
        for w in word:
            temp += morse_dic[w]
        res.append(temp)

    return len(set(res))

# test code
words = ["gin", "zen", "gig", "msg"]
uniqueMorseRepresentations(words)
```

## My Result

__Runtime__: 16 ms, faster than 94.77% of Python online submissions for Unique Morse Code Words.
__Memory Usage__: 11.8 MB, less than 35.71% of Python online submissions for Unique Morse Code Words.

## @lee215's Solution

```
def uniqueMorseRepresentations(words):
    d = [".-", "-...", "-.-.", "-..", ".", "..-.", "--.", "....", "..", ".---", "-.-", ".-..", "--", "-.", "---", ".--.", "--.-", ".-.", "...", "-", "..-", "...-", ".--", "-..-", "-.--", "--.."]
    return len({''.join(d[ord(i) - ord('a')] for i in w) for w in words})

# test code
words = ["gin", "zen", "gig", "msg"]
uniqueMorseRepresentations(words)
```

## @lee215's Result

__Runtime__: 24 ms, faster than 51.63% of Python online submissions for Unique Morse Code Words.

__Memory Usage__: 11.9 MB, less than 28.57% of Python online submissions for Unique Morse Code Words.
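Both solutions build the same letter-to-code mapping. A third compact variant (a sketch, not from either submission) uses `str.translate`, whose translation tables may map a character's ordinal to a whole replacement string:

```python
MORSE = [".-","-...","-.-.","-..",".","..-.","--.","....","..",".---","-.-",
         ".-..","--","-.","---",".--.","--.-",".-.","...","-","..-","...-",
         ".--","-..-","-.--","--.."]
# Map each lowercase letter's code point to its Morse code string.
TABLE = {ord(c): code for c, code in zip("abcdefghijklmnopqrstuvwxyz", MORSE)}

def unique_morse(words):
    # Translate each word into its Morse transformation, then count distinct ones.
    return len({w.translate(TABLE) for w in words})

print(unique_morse(["gin", "zen", "gig", "msg"]))  # 2
```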
*Use the links below to view this notebook in Jupyter Notebook Viewer (nbviewer.jupyter.org) or run it in Google Colab (colab.research.google.com).*

<table class="tfo-notebook-buttons" align="left">
  <td>
    <a target="_blank" href="https://nbviewer.org/github/rickiepark/nlp-with-pytorch/blob/master/chapter_1/PyTorch_Basics.ipynb"><img src="https://jupyter.org/assets/share.png" width="60" />View in Jupyter Notebook Viewer</a>
  </td>
  <td>
    <a target="_blank" href="https://colab.research.google.com/github/rickiepark/nlp-with-pytorch/blob/master/chapter_1/PyTorch_Basics.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
  </td>
</table>

# Encoding Samples and Targets

## TF Representation

```
import matplotlib.pyplot as plt
from sklearn.feature_extraction.text import CountVectorizer
import seaborn as sns

corpus = ['Time flies like an arrow.', 'Fruit flies like a banana.']
one_hot_vectorizer = CountVectorizer(binary=True)
one_hot = one_hot_vectorizer.fit_transform(corpus).toarray()
vocab = one_hot_vectorizer.get_feature_names()
sns.heatmap(one_hot, annot=True, cbar=False,
            xticklabels=vocab, yticklabels=['Sentence 1', 'Sentence 2'])
# plt.savefig('1-04.png', dpi=300)
plt.show()
```

## TF-IDF Representation

```
from sklearn.feature_extraction.text import TfidfVectorizer
import seaborn as sns

tfidf_vectorizer = TfidfVectorizer()
tfidf = tfidf_vectorizer.fit_transform(corpus).toarray()
sns.heatmap(tfidf, annot=True, cbar=False,
            xticklabels=vocab, yticklabels=['Sentence 1', 'Sentence 2'])
# plt.savefig('1-05.png', dpi=300)
plt.show()
```

$IDF(w) = \text{log} \left(\dfrac{N+1}{N_w+1}\right)+1$

For 'flies' and 'like' in the first sentence, TF = 1, so $\text{TF-IDF}=1\times\text{log}\left(\dfrac{2+1}{2+1}\right)+1=1$. For the words 'an', 'arrow', and 'time', $N_w=1$, so $\text{TF-IDF}=1\times\text{log}\left(\dfrac{2+1}{1+1}\right)+1=1.4054651081081644$.

After L2 normalization, 'flies' and 'like' become $\dfrac{1}{\sqrt{2\times1^2+3\times1.4054651081081644^2}}=0.3552$, and 'an', 'arrow', and 'time' become $\dfrac{1.4054651081081644}{\sqrt{2\times1^2+3\times1.4054651081081644^2}}=0.4992$.

# PyTorch Basics

```
import torch
import numpy as np

torch.manual_seed(1234)
```

## Tensors

* A scalar is a single number.
* A vector is an array of numbers.
* A matrix is a 2-D array of numbers.
* A tensor is an N-D array of numbers.

#### Creating tensors

You can create a tensor by specifying its size. Here we create a tensor with two rows and three columns.

```
def describe(x):
    print("Type: {}".format(x.type()))
    print("Shape: {}".format(x.shape))
    print("Values: \n{}".format(x))

describe(torch.Tensor(2, 3))
describe(torch.randn(2, 3))
```

It is common to create a random tensor of a specific size.

```
x = torch.rand(2, 3)
describe(x)
```

You can also create tensors filled with ones or zeros.

```
describe(torch.zeros(2, 3))
x = torch.ones(2, 3)
describe(x)
x.fill_(5)
describe(x)
```

You can change a tensor's values after it has been initialized. Note: operations that end with an underscore (`_`) are in-place operations.

```
x = torch.Tensor(3, 4).fill_(5)
print(x.type())
print(x.shape)
print(x)
```

You can create a tensor from a list of lists.

```
x = torch.Tensor([[1, 2,],
                  [2, 4,]])
describe(x)
```

You can create a tensor from a NumPy array.

```
npy = np.random.rand(2, 3)
describe(torch.from_numpy(npy))
print(npy.dtype)
```

#### Tensor types

The FloatTensor has been the default tensor that we have been creating all along.

```
import torch

x = torch.arange(6).view(2, 3)
describe(x)

x = torch.FloatTensor([[1, 2, 3], [4, 5, 6]])
describe(x)

x = x.long()
describe(x)

x = torch.tensor([[1, 2, 3], [4, 5, 6]], dtype=torch.int64)
describe(x)

x = x.float()
describe(x)

x = torch.randn(2, 3)
describe(x)

describe(torch.add(x, x))
describe(x + x)

x = torch.arange(6)
describe(x)

x = x.view(2, 3)
describe(x)

describe(torch.sum(x, dim=0))
describe(torch.sum(x, dim=1))
describe(torch.transpose(x, 0, 1))

import torch
x = torch.arange(6).view(2, 3)
describe(x)
describe(x[:1, :2])
describe(x[0, 1])

indices = torch.LongTensor([0, 2])
describe(torch.index_select(x, dim=1, index=indices))

indices = torch.LongTensor([0, 0])
describe(torch.index_select(x, dim=0, index=indices))

row_indices = torch.arange(2).long()
col_indices = torch.LongTensor([0, 1])
describe(x[row_indices, col_indices])
```

Indexing operations use LongTensor, which corresponds to NumPy's `int64` type.

```
x = torch.LongTensor([[1, 2, 3],
                      [4, 5, 6],
                      [7, 8, 9]])
describe(x)
print(x.dtype)
print(x.numpy().dtype)
```

You can convert a FloatTensor to a LongTensor.

```
x = torch.FloatTensor([[1, 2, 3],
                       [4, 5, 6],
                       [7, 8, 9]])
x = x.long()
describe(x)
```

### Special tensor initialization

You can create a vector of increasing numbers.

```
x = torch.arange(0, 10)
print(x)
```

Sometimes you need an integer-based array for indexing.

```
x = torch.arange(0, 10).long()
print(x)
```

## Operations

Linear algebra with tensors is the foundation of modern deep learning. PyTorch's `view` method lets you freely reshape a tensor while preserving the order of its elements.

```
x = torch.arange(0, 20)
print(x.view(1, 20))
print(x.view(2, 10))
print(x.view(4, 5))
print(x.view(5, 4))
print(x.view(10, 2))
print(x.view(20, 1))
```

You can use a view to add a dimension of size 1, which lets you take advantage of broadcasting when operating with other tensors.

```
x = torch.arange(12).view(3, 4)
y = torch.arange(4).view(1, 4)
z = torch.arange(3).view(3, 1)
print(x)
print(y)
print(z)
print(x + y)
print(x + z)
```

`unsqueeze` and `squeeze` add and remove dimensions of size 1.

```
x = torch.arange(12).view(3, 4)
print(x.shape)

x = x.unsqueeze(dim=1)
print(x.shape)

x = x.squeeze()
print(x.shape)
```

All standard math operations are supported (for example, `add`).

```
x = torch.rand(3, 4)
print("x: \n", x)
print("--")
print("torch.add(x, x): \n", torch.add(x, x))
print("--")
print("x+x: \n", x + x)
```

A method name ending with the `_` character denotes an in-place operation.

```
x = torch.arange(12).reshape(3, 4)
print(x)
print(x.add_(x))
```

There are many operations that reduce a dimension, such as `sum`.

```
x = torch.arange(12).reshape(3, 4)
print("x: \n", x)
print("---")
print("Sum along rows (dim=0): \n", x.sum(dim=0))
print("---")
print("Sum along columns (dim=1): \n", x.sum(dim=1))
```

#### Indexing, slicing, joining, and mutating

```
x = torch.arange(6).view(2, 3)
print("x: \n", x)
print("---")
print("x[:2, :2]: \n", x[:2, :2])
print("---")
print("x[0][1]: \n", x[0][1])
print("---")
print("Assign 8 to [0][1]")
x[0][1] = 8
print(x)
```

You can select elements of a tensor with `index_select`.
```
x = torch.arange(9).view(3, 3)
print(x)
print("---")
indices = torch.LongTensor([0, 2])
print(torch.index_select(x, dim=0, index=indices))
print("---")
indices = torch.LongTensor([0, 2])
print(torch.index_select(x, dim=1, index=indices))
```

NumPy-style indexing is also available.

```
x = torch.arange(9).view(3, 3)
indices = torch.LongTensor([0, 2])
print(x[indices])
print("---")
print(x[indices, :])
print("---")
print(x[:, indices])
```

Tensors can be concatenated. First, concatenate along rows.

```
x = torch.arange(6).view(2, 3)
describe(x)
describe(torch.cat([x, x], dim=0))
describe(torch.cat([x, x], dim=1))
describe(torch.stack([x, x]))
```

You can concatenate along columns.

```
x = torch.arange(9).view(3, 3)
print(x)
print("---")
new_x = torch.cat([x, x, x], dim=1)
print(new_x.shape)
print(new_x)
```

You can stack tensors to join them along a new 0th dimension.

```
x = torch.arange(9).view(3, 3)
print(x)
print("---")
new_x = torch.stack([x, x, x])
print(new_x.shape)
print(new_x)
```

#### Linear algebra tensor functions

Transposing swaps the dimensions of two axes, for example rows and columns.

```
x = torch.arange(0, 12).view(3, 4)
print("x: \n", x)
print("---")
print("x.transpose(1, 0): \n", x.transpose(1, 0))
```

A 3D tensor can represent a batch of sequences, where each item in a sequence has a feature vector. When working with sequence models, it is common to swap the batch and sequence dimensions to make indexing into a sequence easier. Note: transpose swaps two axes; `permute` can handle multiple axes (explained in the next cell).

```
batch_size = 3
seq_size = 4
feature_size = 5

x = torch.arange(batch_size * seq_size * feature_size).view(batch_size, seq_size, feature_size)
print("x.shape: \n", x.shape)
print("x: \n", x)
print("-----")
print("x.transpose(1, 0).shape: \n", x.transpose(1, 0).shape)
print("x.transpose(1, 0): \n", x.transpose(1, 0))
```

`permute` is a generalized version of transpose.

```
batch_size = 3
seq_size = 4
feature_size = 5

x = torch.arange(batch_size * seq_size * feature_size).view(batch_size, seq_size, feature_size)
print("x.shape: \n", x.shape)
print("x: \n", x)
print("-----")
print("x.permute(1, 0, 2).shape: \n", x.permute(1, 0, 2).shape)
print("x.permute(1, 0, 2): \n", x.permute(1, 0, 2))
```

Matrix multiplication is `mm`.
```
torch.randn(2, 3, requires_grad=True)

x1 = torch.arange(6).view(2, 3).float()
describe(x1)

x2 = torch.ones(3, 2)
x2[:, 1] += 1
describe(x2)

describe(torch.mm(x1, x2))

x = torch.arange(0, 12).view(3, 4).float()
print(x)

x2 = torch.ones(4, 2)
x2[:, 1] += 1
print(x2)

print(x.mm(x2))
```

See the [PyTorch math operations documentation](https://pytorch.org/docs/stable/torch.html#math-operations) for more details!

## Computing Gradients

```
x = torch.tensor([[2.0, 3.0]], requires_grad=True)
z = 3 * x
print(z)
```

The short example below gives a glimpse of gradient computation. We create a tensor and multiply it by 3, then produce a scalar output with `sum()`, because a loss function needs a scalar value. We then call `backward()` on the loss to compute the rate of change with respect to the input. Because the scalar was made with `sum()`, each element of `z` and `x` contributes independently to the loss scalar, so the rate of change of the output with respect to `x` is the constant 3 that `x` was multiplied by.

```
x = torch.tensor([[2.0, 3.0]], requires_grad=True)
print("x: \n", x)
print("---")
z = 3 * x
print("z = 3*x: \n", z)
print("---")
loss = z.sum()
print("loss = z.sum(): \n", loss)
print("---")
loss.backward()
print("after calling loss.backward(), x.grad: \n", x.grad)
```

### Example: computing a conditional gradient

$$ \text{Find the gradient of } f(x) \text{ at } x=1 $$
$$ {} $$
$$ f(x)=\left\{
\begin{array}{ll}
\sin(x) & \text{if } x>0 \\
\cos(x) & \text{otherwise} \\
\end{array}
\right.$$

```
def f(x):
    if (x.data > 0).all():
        return torch.sin(x)
    else:
        return torch.cos(x)

x = torch.tensor([1.0], requires_grad=True)
y = f(x)
y.backward()
print(x.grad)
```

This works for larger vectors too, but the output must be a scalar.

```
x = torch.tensor([1.0, 0.5], requires_grad=True)
y = f(x)
# This raises an error!
y.backward()
print(x.grad)
```

Let's make a scalar output.

```
x = torch.tensor([1.0, 0.5], requires_grad=True)
y = f(x)
y.sum().backward()
print(x.grad)
```

There is an issue, though: this function does not handle the mixed-sign cases correctly.

```
x = torch.tensor([1.0, -1], requires_grad=True)
y = f(x)
y.sum().backward()
print(x.grad)

x = torch.tensor([-0.5, -1], requires_grad=True)
y = f(x)
y.sum().backward()
print(x.grad)
```

This is because the boolean check and the sine/cosine computation are not applied element-wise. The common fix is masking.
```
def f2(x):
    mask = torch.gt(x, 0).float()
    return mask * torch.sin(x) + (1 - mask) * torch.cos(x)

x = torch.tensor([1.0, -1], requires_grad=True)
y = f2(x)
y.sum().backward()
print(x.grad)

def describe_grad(x):
    if x.grad is None:
        print("No gradient information")
    else:
        print("Gradient: \n{}".format(x.grad))
        print("Gradient function: {}".format(x.grad_fn))

import torch
x = torch.ones(2, 2, requires_grad=True)
describe(x)
describe_grad(x)
print("--------")

y = (x + 2) * (x + 5) + 3
describe(y)
z = y.mean()
describe(z)
describe_grad(x)
print("--------")

z.backward(create_graph=True, retain_graph=True)
describe_grad(x)
print("--------")

x = torch.ones(2, 2, requires_grad=True)
y = x + 2
y.grad_fn
```

### CUDA Tensors

PyTorch operations can run on a GPU or a CPU, and PyTorch provides several operations for working with both devices. (If you are running in Colab, change the runtime type to GPU.)

```
print(torch.cuda.is_available())

x = torch.rand(3, 3)
describe(x)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(device)

x = torch.rand(3, 3).to(device)
describe(x)
print(x.device)

cpu_device = torch.device("cpu")

# This raises an error!
y = torch.rand(3, 3)
x + y

y = y.to(cpu_device)
x = x.to(cpu_device)
x + y

if torch.cuda.is_available():  # only if a GPU is available
    a = torch.rand(3, 3).to(device='cuda:0')  # CUDA tensor
    print(a)

    b = torch.rand(3, 3).cuda()
    print(b)

    print(a + b)

    a = a.cpu()  # raises an error
    print(a + b)
```

### Exercises

Some operations needed for the exercises are not covered in this notebook. See the [PyTorch documentation](https://pytorch.org/docs/)! (The answers are at the bottom.)

#### Exercise 1

Create a 2D tensor and then add a dimension of size 1 at dimension 0.

#### Exercise 2

Remove the extra dimension you just added to the previous tensor.

#### Exercise 3

Create a random tensor of shape 5x3 in the interval [3, 7).

#### Exercise 4

Create a tensor with values from a normal distribution (mean=0, std=1).

#### Exercise 5

Retrieve the indexes of all the nonzero elements in the tensor `torch.Tensor([1, 1, 1, 0, 1])`.

#### Exercise 6

Create a random tensor of size (3,1) and then horizontally stack four copies together.

#### Exercise 7

Return the batch matrix-matrix product of two three-dimensional matrices (`a=torch.rand(3,4,5)`, `b=torch.rand(3,5,4)`).

#### Exercise 8

Return the batch matrix-matrix product of a 3D matrix (`a=torch.rand(3,4,5)`) and a 2D matrix (`b=torch.rand(5,4)`).

The answers are below..

The answers are further below..

#### Exercise 1

Create a 2D tensor and then add a dimension of size 1 at dimension 0.
```
a = torch.rand(3, 3)
a = a.unsqueeze(0)
print(a)
print(a.shape)
```

#### Exercise 2

Remove the extra dimension you just added to the previous tensor.

```
a = a.squeeze(0)
print(a.shape)
```

#### Exercise 3

Create a random tensor of shape 5x3 in the interval [3, 7).

```
3 + torch.rand(5, 3) * 4
```

#### Exercise 4

Create a tensor with values from a normal distribution (mean=0, std=1).

```
a = torch.rand(3, 3)
a.normal_(mean=0, std=1)
```

#### Exercise 5

Retrieve the indexes of all the nonzero elements in the tensor `torch.Tensor([1, 1, 1, 0, 1])`.

```
a = torch.Tensor([1, 1, 1, 0, 1])
torch.nonzero(a)
```

#### Exercise 6

Create a random tensor of size (3,1) and then horizontally stack four copies together.

```
a = torch.rand(3, 1)
a.expand(3, 4)
```

#### Exercise 7

Return the batch matrix-matrix product of two three-dimensional matrices (`a=torch.rand(3,4,5)`, `b=torch.rand(3,5,4)`).

```
a = torch.rand(3, 4, 5)
b = torch.rand(3, 5, 4)
torch.bmm(a, b)
```

#### Exercise 8

Return the batch matrix-matrix product of a 3D matrix (`a=torch.rand(3,4,5)`) and a 2D matrix (`b=torch.rand(5,4)`).

```
a = torch.rand(3, 4, 5)
b = torch.rand(5, 4)
torch.bmm(a, b.unsqueeze(0).expand(a.size(0), *b.size()))
```
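The Exercise 8 answer manually expands `b` into a batch before calling `bmm`. As a side note (an alternative not shown in the notebook), `torch.matmul` broadcasts a 2D operand over the batch dimension automatically and produces the same result:

```python
import torch

torch.manual_seed(0)
a = torch.rand(3, 4, 5)
b = torch.rand(5, 4)

# Explicit version from the exercise answer: expand b to shape (3, 5, 4) first.
expanded = torch.bmm(a, b.unsqueeze(0).expand(a.size(0), *b.size()))

# matmul treats b as if it were batched, broadcasting it over dim 0.
broadcast = torch.matmul(a, b)

print(torch.allclose(expanded, broadcast))  # True
```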
#1. Install Dependencies

First install the libraries needed to execute recipes; this only needs to be done once, then click play.

```
!pip install git+https://github.com/google/starthinker
```

#2. Get Cloud Project ID

To run this recipe [requires a Google Cloud Project](https://github.com/google/starthinker/blob/master/tutorials/cloud_project.md); this only needs to be done once, then click play.

```
CLOUD_PROJECT = 'PASTE PROJECT ID HERE'

print("Cloud Project Set To: %s" % CLOUD_PROJECT)
```

#3. Get Client Credentials

To read and write to various endpoints requires [downloading client credentials](https://github.com/google/starthinker/blob/master/tutorials/cloud_client_installed.md); this only needs to be done once, then click play.

```
CLIENT_CREDENTIALS = 'PASTE CREDENTIALS HERE'

print("Client Credentials Set To: %s" % CLIENT_CREDENTIALS)
```

#4. Enter CM To Sheets Parameters

Move an existing CM report into a Sheet tab.

1. Specify an account id.
1. Specify either report name or report id to move a report.
1. The most recent valid file will be moved to the sheet.
1. Schema is pulled from the official CM specification.

Modify the values below for your use case; this can be done multiple times, then click play.

```
FIELDS = {
  'auth_read': 'user',  # Credentials used for reading data.
  'account': '',
  'report_id': '',
  'report_name': '',
  'sheet': '',
  'tab': '',
}

print("Parameters Set To: %s" % FIELDS)
```

#5. Execute CM To Sheets

This does NOT need to be modified unless you are changing the recipe, click play.
```
from starthinker.util.project import project
from starthinker.script.parse import json_set_fields, json_expand_includes

USER_CREDENTIALS = '/content/user.json'

TASKS = [
  {
    'dcm': {
      'auth': 'user',
      'report': {
        'account': {'field': {'name': 'account', 'kind': 'integer', 'order': 2, 'default': ''}},
        'report_id': {'field': {'name': 'report_id', 'kind': 'integer', 'order': 3, 'default': ''}},
        'name': {'field': {'name': 'report_name', 'kind': 'string', 'order': 4, 'default': ''}}
      },
      'out': {
        'sheets': {
          'sheet': {'field': {'name': 'sheet', 'kind': 'string', 'order': 5, 'default': ''}},
          'tab': {'field': {'name': 'tab', 'kind': 'string', 'order': 6, 'default': ''}},
          'range': 'A1'
        }
      }
    }
  }
]

json_set_fields(TASKS, FIELDS)
json_expand_includes(TASKS)

project.initialize(_recipe={ 'tasks': TASKS }, _project=CLOUD_PROJECT, _user=USER_CREDENTIALS, _client=CLIENT_CREDENTIALS, _verbose=True)
project.execute()
```
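The recipe works because `json_set_fields` walks the `TASKS` structure and swaps every `{'field': {...}}` placeholder for the matching value from `FIELDS`, falling back to the field's `default`. Below is a simplified sketch of that substitution idea; the actual StarThinker implementation differs, and `set_fields` here is an illustrative stand-in, not the library function:

```python
def set_fields(node, values):
    """Recursively replace {'field': {...}} placeholders with concrete values."""
    if isinstance(node, dict):
        for key, val in node.items():
            if isinstance(val, dict) and set(val) == {'field'}:
                spec = val['field']
                # Use the user-supplied value if present, else the declared default.
                node[key] = values.get(spec['name'], spec.get('default'))
            else:
                set_fields(val, values)
    elif isinstance(node, list):
        for item in node:
            set_fields(item, values)

tasks = [{'dcm': {'report': {'account': {'field': {'name': 'account', 'kind': 'integer', 'default': ''}}}}}]
set_fields(tasks, {'account': 12345})
print(tasks)  # the placeholder under 'account' is now the literal value 12345
```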
``` # Exploratory Data Analysis and Plotting Libraries import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns # Scikit-Learn Models from sklearn.linear_model import LogisticRegression from sklearn.neighbors import KNeighborsClassifier from sklearn.ensemble import RandomForestClassifier # Model Evaluations from sklearn.model_selection import train_test_split, cross_val_score from sklearn.model_selection import RandomizedSearchCV, GridSearchCV from sklearn.metrics import confusion_matrix, classification_report from sklearn.metrics import precision_score, recall_score, f1_score from sklearn.metrics import plot_roc_curve df = pd.read_csv("heart.csv") df.shape # (rows, columns) df.target.value_counts().plot(kind='bar', figsize=(10, 6), color=['pink', 'blue']) plt.title('Frequency of Heart Disease') plt.xlabel('1 = Disease, 0 = No Disease') plt.ylabel('Amount') plt.xticks(rotation=0); df.describe() # sex: (1 = male; 0 = female) df.sex.value_counts() pd.crosstab(df.target, df.sex).plot(kind='bar', figsize=(10, 6), color=['pink', 'blue']) plt.title('Heart Disease Frequency for Sex') plt.xlabel('0 = No Disease, 1 = Disease') plt.ylabel('Amount') plt.legend(['Female', 'Male']); plt.xticks(rotation=0); df.age.plot.hist(); pd.crosstab(df.cp, df.target) pd.crosstab(df.cp, df.target).plot(kind='bar', figsize=(10, 6), color=['blue', 'pink']) plt.title('Heart Disease Frequency for Chest Pain Type') plt.xlabel('Chest Pain Type') plt.ylabel('Amount') plt.legend(['No Disease', 'Disease']); plt.xticks(rotation=0); df.corr() ix = df.corr().sort_values('target', ascending=True).index df_sorted = df.loc[:, ix] corrmat = df_sorted.corr() fig, ax = plt.subplots(figsize=(20, 25)) g = sns.heatmap(corrmat, vmin=-1.0, vmax=1.0, square=True, annot=True, fmt='.2f', annot_kws={'size': 12}, cbar_kws= {'shrink': 0.65}, linecolor='black', center=0, linewidths=.25, cmap='BrBG') g.tick_params(labelsize=15) df.head() X = df.drop('target', axis=1) y = df.target X y 
np.random.seed(33) X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2) print(X_train.shape, y_train.shape, X_test.shape, y_test.shape) # Put models in dictionary models = {'Logistic Regression': LogisticRegression(solver='lbfgs', max_iter=1000), 'KNN': KNeighborsClassifier(), 'Random Forest': RandomForestClassifier()} # Create a function to fit and score models def fit_and_score(models, X_train, X_test, y_train, y_test): """ Fits and evaluates given machine learning models. models: a dictionary of different Scikit-Learn machine learning models X_train: training data (no labels) X_test: testing data (no labels) y_train: training labels y_test: test labels """ # Set random seed np.random.seed(33) # Make dictionary to keep model scores model_scores = {} # Loop through models for name, model in models.items(): # Fit the model to the data model.fit(X_train, y_train) # Evaluate the model and append its score to model_scores model_scores[name] = model.score(X_test, y_test) return model_scores model_scores = fit_and_score(models, X_train, X_test, y_train, y_test) model_scores # Tuning KNN train_scores = [] test_scores = [] # Create a list of different values for n_neighbors neighbors = range(1, 21) # Create KNN instance knn = KNeighborsClassifier() # Loop through different n_neighbors for i in neighbors: knn.set_params(n_neighbors = i) # Fit the algorithm knn.fit(X_train, y_train) # Update the training scores list train_scores.append(knn.score(X_train, y_train)) # Update the testing scores list test_scores.append(knn.score(X_test, y_test)) train_scores test_scores plt.plot(neighbors, train_scores, label='Train Score') plt.plot(neighbors, test_scores, label='Test Score') plt.xticks(np.arange(1, 21, 1)) plt.xlabel('Number of Neighbors') plt.ylabel('Model Score') plt.legend() print(f'Maximum KNN score on the test data: {max(test_scores)*100:.2f}%') rf_grid = {'n_estimators': np.arange(100, 102, 1), 'max_depth': np.arange(5, 10, 1), 'min_samples_split': 
np.arange(10, 15, 1), 'min_samples_leaf': np.arange(10, 15, 1), 'n_jobs': [-1]} gs_rf = GridSearchCV(RandomForestClassifier(), param_grid=rf_grid, cv=5, verbose=True) # Fit random hyperparameter search model gs_rf.fit(X_train, y_train) gs_rf.best_params_ gs_rf.score(X_test, y_test) y_preds = gs_rf.predict(X_test) y_preds y_test plot_roc_curve(gs_rf, X_test, y_test); print(confusion_matrix(y_test, y_preds)) sns.set(font_scale=1.5) # Increase font size def plot_conf_mat(y_test, y_preds): """ Plots a confusion matrix using Seaborn's heatmap() """ fig, ax = plt.subplots(figsize=(3, 3)) ax = sns.heatmap(confusion_matrix(y_test, y_preds), annot=True, # Annotate the boxes cbar=False) plt.xlabel("Predicted label") # predictions go on the x-axis plt.ylabel("True label") # true labels go on the y-axis plot_conf_mat(y_test, y_preds) print(classification_report(y_test, y_preds)) best_params = gs_rf.best_params_ best_params clf = RandomForestClassifier(n_estimators=best_params['n_estimators'], min_samples_split=best_params['min_samples_split'], min_samples_leaf=best_params['min_samples_leaf'], max_depth=best_params['max_depth'], n_jobs=best_params['n_jobs']) cv_acc = cross_val_score(clf, X, y, scoring='accuracy') cv_acc cv_acc = np.mean(cv_acc) cv_acc cv_precision = cross_val_score(clf, X, y, scoring='precision') cv_precision = np.mean(cv_precision) cv_precision cv_recall = cross_val_score(clf, X, y, scoring='recall') cv_recall = np.mean(cv_recall) cv_recall cv_f1 = cross_val_score(clf, X, y, scoring='f1') cv_f1 = np.mean(cv_f1) cv_f1 cv_metrics = pd.DataFrame({'Accuracy': cv_acc, 'Precision': cv_precision, 'Recall': cv_recall, 'F1': cv_f1}, index=[0]) cv_metrics.T.plot.bar(title='Cross-Validated Classification Metrics', legend=False); best_params = gs_rf.best_params_ best_params # Different parameters for our RandomForestClassifier model rf_grid = {'n_estimators': [102], 'max_depth': [6], 'min_samples_split': [14], 'min_samples_leaf': [13], 'n_jobs': [-1], 'max_features': 
range(1, 14),} # Create hyperparameter search gs_rf = GridSearchCV(RandomForestClassifier(), param_grid=rf_grid, cv=5, verbose=True) # Fit hyperparameter search model gs_rf.fit(X_train, y_train) best_params = gs_rf.best_params_ best_params model = RandomForestClassifier(max_depth=best_params["max_depth"], n_estimators=best_params["n_estimators"], max_features=best_params["max_features"], min_samples_leaf=best_params["min_samples_leaf"], min_samples_split=best_params['min_samples_split'], n_jobs=-1, random_state=33) model.fit(X_train, y_train) model.score(X_test, y_test) model.feature_importances_ X.columns feat_importance = [] for feature in zip(X.columns, model.feature_importances_): feat_importance.append(feature) importance = pd.DataFrame(feat_importance,columns=["Feature", "Importance"]) print(importance.sort_values(by=["Importance"],ascending=False)) from sklearn.ensemble import AdaBoostClassifier clf = AdaBoostClassifier(n_estimators=100, base_estimator=model, learning_rate=1) clf.fit(X_train, y_train) clf.score(X_test, y_test) from sklearn.ensemble import GradientBoostingClassifier clf = GradientBoostingClassifier(n_estimators=100, learning_rate=1.0, max_depth=5) clf.fit(X_train, y_train) clf.score(X_test, y_test) !pip install xgboost from xgboost import XGBClassifier clf = XGBClassifier() clf.fit(X_train, y_train) clf.score(X_test, y_test) import pickle clf = AdaBoostClassifier(n_estimators=100, base_estimator=model, learning_rate=1) clf.fit(X_train, y_train) pickle.dump(clf, open('./heart.pkl','wb')) input_data = input("Enter comm seperated values : ") input_data = input_data.split(',') input_data = map(float , input_data) input_data = np.asarray(list(input_data)) input_data = input_data.reshape(1,-1) prediction =model.predict(input_data) if prediction[0] == 0: print('Congratulation! 
You do not have heart disease') else: print('The person has heart disease') ```
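The cell above writes the fitted classifier to `heart.pkl` but never shows the reload side. A minimal sketch of the pickle round trip, using an in-memory buffer and a hypothetical `DummyModel` stand-in (the real fitted `AdaBoostClassifier` is not available here; 13 feature columns is an assumption based on the usual heart-disease dataset):

```python
import io
import pickle

class DummyModel:
    """Hypothetical stand-in for the fitted classifier saved as heart.pkl."""
    def predict(self, rows):
        # Always predicts class 0; a real model would apply its learned rules.
        return [0 for _ in rows]

# Serialize the model, as the notebook does with open('./heart.pkl', 'wb').
buf = io.BytesIO()
pickle.dump(DummyModel(), buf)

# Later (e.g. in a serving script), deserialize and predict.
buf.seek(0)
loaded = pickle.load(buf)
row = [0.0] * 13  # assumption: 13 feature columns, as in the UCI heart dataset
print(loaded.predict([row]))  # → [0]
```

With the real file, the buffer would be replaced by `open('./heart.pkl', 'rb')`.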
``` import pymongo uri = "mongodb://user1:MongoPassWd1@localhost:27017/" client = pymongo.MongoClient(uri, 27017) db = client["DjangoServer"] col = db["dangdangBook"] col.estimated_document_count() from bson import json_util import json import datetime from bson import ObjectId class JSONEncoder(json.JSONEncoder): """Handle ObjectId, which cannot be converted to JSON directly""" def default(self, obj): if isinstance(obj, ObjectId): return str(obj) if isinstance(obj, datetime.datetime): return datetime.datetime.strftime(obj, '%Y-%m-%d %H:%M:%S') return json.JSONEncoder.default(self, obj) ret_all = col.find({'book_name': '小王子'}) lst = [] for ret in ret_all: ret = JSONEncoder(ensure_ascii=False).encode(ret) lst.append(ret) with open('b.json', 'w', encoding='utf8') as json_file: json.dump(lst, json_file, ensure_ascii=False) lst = list(col.find()) import pandas as pd df = pd.DataFrame(lst) df.to_pickle('a.pkl') import pandas as pd b = pd.read_pickle('a.pkl') c = b[:10] c from bs4 import BeautifulSoup import requests headers = { 'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.87 Safari/537.36', 'Cookie': "dest_area=country_id%3D9000%26province_id%3D111%26city_id%20%3D0%26district_id%3D0%26town_id%3D0; __permanent_id=20210512154623746232757573324420947; __visit_id=20210512154623747290487860872431240; __out_refer=1620805584%7C!%7Cwww.google.com.hk%7C!%7C; secret_key=c678ec4c9955d819a97c84a7606fda28; ddscreen=2; permanent_key=202105121557215846416956186de617; alipay_request_from=https://login.dangdang.com/signin.aspx?returnurl=http%253A//product.dangdang.com/25105726.html; __rpm=p_25105726...1620806240994%7Clogin_page.login_nolocal_mobile_div..1620806284555; USERNUM=7afn2DlXXMRAh7wXO8lg6w==; login.dangdang.com=.AYH=2021051215572205227690999&.ASPXAUTH=incqod/9NNx+A93OUEXKrFCQf9h2PIRXNf/GKJOOytGgahjg4CbwQA==; 
dangdang.com=email=MTgzMzI2NTY1MDAzNjI4N0BkZG1vYmlscGhvbmVfX3VzZXIuY29t&nickname=&display_id=2489274153678&customerid=LHcFQVLu5mnRLjof+Erviw==&viptype=MkzDXh7F+lA=&show_name=183%2A%2A%2A%2A6500; ddoy=email=1833265650036287%40ddmobilphone__user.com&nickname=&agree_date=1&validatedflag=0&uname=18332656500&utype=1&.ALFG=on&.ALTM=1620806284; sessionID=pc_b425763bb9cb47e97dc946f42fec14f95f8629044985e6e025d8ce00a91597fc; __dd_token_id=20210512155804775128630933e71fe8; LOGIN_TIME=1620806285979; __trace_id=20210512155822842100828995796889730; pos_6_start=1620806302879; pos_6_end=1620806303248", # After logging in to Bilibili, copy the SESSDATA field from the cookie; it is valid for 1 month } col2 = db["dangdang1"] for i, cur_row in b.iterrows(): url = cur_row['url'] print(url) r = requests.get(url, headers=headers) soup = BeautifulSoup(r.text, 'html.parser') cat = "" res = soup.select('#breadcrumb > a:nth-child(3)') if (len(res)): cat = res[0].text # cat = soup.select('#breadcrumb > a:nth-child(3)')[0].text print(cat, i) cur_row['cat'] = cat # print(i, cat, cur_row) col2.insert_one(dict(cur_row)) # Test a single page url = "http://product.dangdang.com/20093876.html" headers = { 'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.87 Safari/537.36', 'Cookie': "dest_area=country_id%3D9000%26province_id%3D111%26city_id%20%3D0%26district_id%3D0%26town_id%3D0; __permanent_id=20210512154623746232757573324420947; __visit_id=20210512154623747290487860872431240; __out_refer=1620805584%7C!%7Cwww.google.com.hk%7C!%7C; secret_key=c678ec4c9955d819a97c84a7606fda28; ddscreen=2; permanent_key=202105121557215846416956186de617; alipay_request_from=https://login.dangdang.com/signin.aspx?returnurl=http%253A//product.dangdang.com/25105726.html; __rpm=p_25105726...1620806240994%7Clogin_page.login_nolocal_mobile_div..1620806284555; USERNUM=7afn2DlXXMRAh7wXO8lg6w==; login.dangdang.com=.AYH=2021051215572205227690999&.ASPXAUTH=incqod/9NNx+A93OUEXKrFCQf9h2PIRXNf/GKJOOytGgahjg4CbwQA==; 
dangdang.com=email=MTgzMzI2NTY1MDAzNjI4N0BkZG1vYmlscGhvbmVfX3VzZXIuY29t&nickname=&display_id=2489274153678&customerid=LHcFQVLu5mnRLjof+Erviw==&viptype=MkzDXh7F+lA=&show_name=183%2A%2A%2A%2A6500; ddoy=email=1833265650036287%40ddmobilphone__user.com&nickname=&agree_date=1&validatedflag=0&uname=18332656500&utype=1&.ALFG=on&.ALTM=1620806284; sessionID=pc_b425763bb9cb47e97dc946f42fec14f95f8629044985e6e025d8ce00a91597fc; __dd_token_id=20210512155804775128630933e71fe8; LOGIN_TIME=1620806285979; __trace_id=20210512155822842100828995796889730; pos_6_start=1620806302879; pos_6_end=1620806303248", # After logging in to Bilibili, copy the SESSDATA field from the cookie; it is valid for 1 month } r = requests.get(url, headers=headers) soup = BeautifulSoup(r.text, 'html.parser') a = soup.select('#breadcrumb > a:nth-child(1)') # Save the data to a pkl file import pymongo uri = "mongodb://user1:MongoPassWd1@localhost:27017/" client = pymongo.MongoClient(uri, 27017) db = client["DjangoServer"] col = db["dangdang1"] col.estimated_document_count() lst = list(col.find()) import pandas as pd df = pd.DataFrame(lst) df.to_pickle('a2.pkl') # Read the pkl back into a DataFrame import pandas as pd b2 = pd.read_pickle('a2.pkl') c = b2[:10] c ```
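The custom `JSONEncoder` above teaches `json` how to serialize `ObjectId` and `datetime` values it cannot handle natively. The same pattern can be shown self-contained for `datetime` alone, with no `bson` dependency (the sample document is made up):

```python
import datetime
import json

class DateTimeEncoder(json.JSONEncoder):
    """Serialize datetime objects, which json cannot convert by default."""
    def default(self, obj):
        if isinstance(obj, datetime.datetime):
            return obj.strftime('%Y-%m-%d %H:%M:%S')
        # Fall back to the base class, which raises TypeError for unknown types.
        return json.JSONEncoder.default(self, obj)

doc = {'title': 'The Little Prince', 'fetched': datetime.datetime(2021, 5, 12, 15, 46)}
print(json.dumps(doc, cls=DateTimeEncoder, ensure_ascii=False))
# → {"title": "The Little Prince", "fetched": "2021-05-12 15:46:00"}
```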
# Sklearn compatible Grid Search for classification Grid search is an in-processing technique that can be used for fair classification or fair regression. For classification it reduces fair classification to a sequence of cost-sensitive classification problems, returning the deterministic classifier with the lowest empirical error subject to fair classification constraints among the candidates searched. The code for grid search wraps the source class `fairlearn.reductions.GridSearch` available in the https://github.com/fairlearn/fairlearn library, licensed under the MIT License, Copyright Microsoft Corporation. ``` %matplotlib inline import matplotlib.pyplot as plt import numpy as np import pandas as pd from sklearn.linear_model import LogisticRegression from sklearn.metrics import accuracy_score from sklearn.model_selection import GridSearchCV, train_test_split from sklearn.preprocessing import OneHotEncoder from aif360.sklearn.inprocessing import GridSearchReduction from aif360.sklearn.datasets import fetch_adult from aif360.sklearn.metrics import disparate_impact_ratio, average_odds_error, generalized_fpr from aif360.sklearn.metrics import generalized_fnr, difference ``` ### Loading data Datasets are formatted as separate `X` (# samples x # features) and `y` (# samples x # labels) DataFrames. The index of each DataFrame contains protected attribute values per sample. Datasets may also load a `sample_weight` object to be used with certain algorithms/metrics. All of this makes aif360 compatible with scikit-learn objects.
For example, we can easily load the Adult dataset from UCI with the following line: ``` X, y, sample_weight = fetch_adult() X.head() # there is one unused category ('Never-worked') that was dropped during dropna X.workclass.cat.remove_unused_categories(inplace=True) ``` We can then map the protected attributes to integers, ``` X.index = pd.MultiIndex.from_arrays(X.index.codes, names=X.index.names) y.index = pd.MultiIndex.from_arrays(y.index.codes, names=y.index.names) ``` and the target classes to 0/1, ``` y = pd.Series(y.factorize(sort=True)[0], index=y.index) ``` split the dataset, ``` (X_train, X_test, y_train, y_test) = train_test_split(X, y, train_size=0.7, random_state=1234567) ``` We use Pandas for one-hot encoding for easy reference to columns associated with protected attributes, information necessary for grid search reduction. ``` X_train, X_test = pd.get_dummies(X_train), pd.get_dummies(X_test) X_train.head() ``` The protected attribute information is also replicated in the labels: ``` y_train.head() ``` ### Running metrics With the data in this format, we can easily train a scikit-learn model and get predictions for the test data: ``` y_pred = LogisticRegression(solver='lbfgs').fit(X_train, y_train).predict(X_test) lr_acc = accuracy_score(y_test, y_pred) print(lr_acc) ``` We can assess how close the predictions are to equality of odds. `average_odds_error()` computes the (unweighted) average of the absolute values of the true positive rate (TPR) difference and false positive rate (FPR) difference, i.e.: $$ \tfrac{1}{2}\left(|FPR_{D = \text{unprivileged}} - FPR_{D = \text{privileged}}| + |TPR_{D = \text{unprivileged}} - TPR_{D = \text{privileged}}|\right) $$ ``` lr_aoe = average_odds_error(y_test, y_pred, prot_attr='sex') print(lr_aoe) ``` ### Grid Search Choose a base model for the candidate classifiers. Base models should implement a fit method that can take a sample weight as input. For details refer to the docs. 
``` estimator = LogisticRegression(solver='lbfgs') ``` Determine the columns associated with the protected attribute(s). Grid search can handle more than one attribute, but it is computationally expensive. A similar method with less computational overhead is exponentiated gradient reduction, detailed at [examples/sklearn/demo_exponentiated_gradient_reduction_sklearn.ipynb](sklearn/demo_exponentiated_gradient_reduction_sklearn.ipynb). ``` prot_attr_cols = [colname for colname in X_train if "sex" in colname] ``` Search for the best classifier and observe test accuracy. Other options for `constraints` include "DemographicParity", "TruePositiveRateDifference", and "ErrorRateRatio". ``` np.random.seed(0) # needed for reproducibility grid_search_red = GridSearchReduction(prot_attr_cols=prot_attr_cols, estimator=estimator, constraints="EqualizedOdds", grid_size=20, drop_prot_attr=False) grid_search_red.fit(X_train, y_train) gs_acc = grid_search_red.score(X_test, y_test) print(gs_acc) # Check if accuracy is comparable assert abs(lr_acc-gs_acc)<0.03 gs_aoe = average_odds_error(y_test, grid_search_red.predict(X_test), prot_attr='sex') print(gs_aoe) # Check if average odds error improved assert gs_aoe<lr_aoe ``` Instead of passing in a value for `constraints`, we can also pass a `fairlearn.reductions.moment` object in for `constraints_moment`. You could use a predefined moment as we do below or create a custom moment using the fairlearn library. ``` import fairlearn.reductions as red np.random.seed(0) # needed for reproducibility grid_search_red = GridSearchReduction(prot_attr_cols=prot_attr_cols, estimator=estimator, constraints_moment=red.EqualizedOdds(), grid_size=20, drop_prot_attr=False) grid_search_red.fit(X_train, y_train) grid_search_red.score(X_test, y_test) average_odds_error(y_test, grid_search_red.predict(X_test), prot_attr='sex') ```
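As a sanity check on the metric used throughout this notebook, the average odds error can also be computed by hand from group-wise rates. A pure-Python sketch on toy arrays (the data and the `average_odds_error_by_hand` helper are illustrative, not part of aif360):

```python
def rates(y_true, y_pred):
    # Returns (FPR, TPR) for binary labels 0/1.
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return fp / (fp + tn), tp / (tp + fn)

def average_odds_error_by_hand(y_true, y_pred, group):
    # 0.5 * (|FPR_unpriv - FPR_priv| + |TPR_unpriv - TPR_priv|)
    g0 = [(t, p) for t, p, g in zip(y_true, y_pred, group) if g == 0]
    g1 = [(t, p) for t, p, g in zip(y_true, y_pred, group) if g == 1]
    fpr0, tpr0 = rates([t for t, _ in g0], [p for _, p in g0])
    fpr1, tpr1 = rates([t for t, _ in g1], [p for _, p in g1])
    return 0.5 * (abs(fpr0 - fpr1) + abs(tpr0 - tpr1))

y_true = [0, 0, 1, 1, 0, 0, 1, 1]
y_pred = [0, 1, 1, 1, 0, 0, 1, 0]
group  = [0, 0, 0, 0, 1, 1, 1, 1]
print(average_odds_error_by_hand(y_true, y_pred, group))  # → 0.5
```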
# "Reverse mode autodiff" > "Understand how machine learning frameworks implement autodiff" - toc:true - branch: master - badges: true - comments: true - author: Saibaba Telukunta - categories: [autodiff, autograd, gradient, partial-derivative, chain-rule] Introduction -- Gradient descent (GD) is the main workhorse of deep neural networks. When you work with higher-level machine learning frameworks like Pytorch or Tensorflow, it is hidden under the layers and you do not get to understand or appreciate how it works. The gradients that GD needs are computed using something called reverse mode AD (called autodiff in the rest of the doc). When you start using the low-level libraries in pytorch or tensorflow you end up using constructs like `forward`, `backward`, `tape` and so on. Following is my attempt to demystify autodiff starting from the ground up, as well as to gain an understanding of these terms. This article proceeds as below: * Starts from a hard-coded gradient calculation for a loss function in estimating a parameter for a model. * Adds reverse mode autodiff functionality to demonstrate how it can generically calculate the (partial) derivative of a loss function. * Pytorch's built-in autograd is used to demonstrate the same features. * For comparison, the same functionality is implemented using tensorflow. **Sample problem** The problem being tackled is to estimate the exponent of a function $y = f(x) = x^\theta$ from a set of samples $(x_i, y_i)$.
Let's use the RMSE as the loss function and find the $\theta$ that minimizes it: RMSE loss = $L(\theta) = \sqrt{\frac{\sum_{i=1}^n (x_i^\theta - y_i)^2}{n}}$ So, its gradient: $\frac{\partial L}{\partial \theta} = \frac{1}{2} \frac{\frac{1}{n} \sum_{i=1}^n 2 (x_i^\theta - y_i) ln(x_i) x_i^\theta}{\sqrt{\frac{\sum_{i=1}^n (x_i^\theta - y_i)^2}{n}}}$ When the above gradient is used in batch gradient descent, $\theta$ is updated with: $\theta^{new} = \theta^{old} - lr * \frac{\partial L}{\partial \theta}$ In stochastic gradient descent (SGD), the gradient is calculated and the current $\theta$ is updated for each of the training samples, so it simplifies to: RMSE loss for sample, i = $L_i(\theta) = \sqrt{(x_i^\theta - y_i)^2}$ $\frac{\partial L_i}{\partial \theta} = \frac{1}{2} \frac{2 (x_i^\theta - y_i) ln(x_i) x_i^\theta}{\sqrt{(x_i^\theta - y_i)^2}}$ (Let's ignore for now that we really do not need to take the square root in this case since there is only one sample). $\theta^{new} = \theta^{old} - lr * \frac{\partial L_i}{\partial \theta}$ First let's create some sample data from the actual model. ``` import torch def sample(count, theta): xs = torch.rand(count) * 10 ys = xs.pow(theta) return xs, ys x_train, y_train = sample(10, 1.5) ``` So the actual value of $\theta$ is 1.5 and we are going to estimate it from the samples. Following is a utility function to plot the losses and the exponent discovered in each epoch.
``` import numpy as np import matplotlib.pyplot as plt %matplotlib inline def plot_loss_and_exponent(loss_history, theta_history): fig = plt.figure(figsize=(15, 10)) ax1 = fig.add_subplot(121) ax2 = fig.add_subplot(122) for ax in [ax1, ax2]: ax.spines["top"].set_visible(False) ax.spines["right"].set_visible(False) ax.spines["left"].set_visible(False) ax.spines["bottom"].set_visible(False) ax.grid(color='b', linestyle='--', linewidth=0.5, alpha=0.3) ax.tick_params(direction='out', color='b', width='2') ax1.set_title('Loss(RMSE)') ax2.set_title('Exponent') ax1.plot(np.arange(len(loss_history)), loss_history) ax2.plot(np.arange(len(theta_history)), theta_history) ``` Naive solution with hand coded gradient descent --- First we build our own function that calculates gradient descent directly using above formula. ``` def dLoss(x, y, theta_hat): n = len(x) w = (x.pow(theta_hat) - y) * torch.log(x) * x.pow(theta_hat) num = torch.sum(w)/n z = (x.pow(theta_hat) - y).pow(2) denom = torch.sqrt(sum(z)/n) return num/denom ``` **Batch GD** Here all data is used per update of $\theta$. ``` import torch from torch.autograd import Variable import numpy as np def rmse(y, y_hat): return torch.sqrt(torch.mean((y-y_hat).pow(2))) # fancy name for function evaluation def forward(x, theta): return x.pow(theta.repeat(x.size(0))) # Calculate gradient of loss w.r.t. # Its semantics are not exactly that of pytorch or tensorflow, but keeping this name as it is the closest. 
def backward(x_train, y_train, theta_hat): return dLoss(x_train, y_train, theta_hat) def optimizer_step(theta_hat, lr, grad): return theta_hat - lr * grad lr = 5e-6 n_epoch = 5000 theta_hat = torch.FloatTensor([4.0]) loss_history = [] theta_history = [] for epoch in range(n_epoch): y_hat = forward(x_train, theta_hat) loss = rmse(y_train, y_hat) grad = backward(x_train, y_train, theta_hat) loss_history.append(loss) theta_history.append(theta_hat) theta_hat = optimizer_step(theta_hat, lr, grad) plot_loss_and_exponent(loss_history, theta_history) ``` **Stochastic Gradient Descent (SGD)** Here only one sample is used per update of $\theta$. ``` # SGD import random def rmse(y, y_hat): return torch.sqrt((y-y_hat).pow(2)) def forward(x, theta): return x.pow(theta) def backward(x_train, y_train, theta_hat): return dLoss(x_train, y_train, theta_hat) def optimizer_step(theta_hat, lr, grad): return theta_hat - lr * grad lr = 5e-6 n_epoch = 2000 theta_hat = torch.FloatTensor([4.0]) loss_history = [] theta_history = [] for epoch in range(n_epoch): # Ideally we need to shuffle the training samples in each epoch to let SGD converge close to the global minimum. # It does not matter in our case as they are not sorted in any order. # In general, in a classification problem the training examples might be sorted by label, and to # ensure the IID condition we need to shuffle them. for xi, yi in zip(x_train, y_train): y_hat = forward(xi, theta_hat) loss = rmse(torch.tensor([yi]), torch.tensor([y_hat])) grad = backward(torch.tensor([xi]), torch.tensor([yi]), theta_hat) loss_history.append(loss) theta_history.append(theta_hat) theta_hat = optimizer_step(theta_hat, lr, grad) plot_loss_and_exponent(loss_history, theta_history) ``` Home grown reverse mode autodiff (AD) -- As can be seen from the above example, implementing the gradient by hand for every kind of loss function is suboptimal, error prone, and boring. Can it be automated?
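Before automating, it is worth noting that hand-derived gradients like `dLoss` can be checked against a central finite difference. A pure-Python sketch for the single-sample loss above (the sample values are illustrative):

```python
import math

def loss(theta, x, y):
    # Single-sample RMSE loss from the introduction: sqrt((x**theta - y)**2)
    return math.sqrt((x ** theta - y) ** 2)

def dloss(theta, x, y):
    # Analytic gradient from the formula above (the 1/2 and 2 cancel)
    diff = x ** theta - y
    return diff * math.log(x) * x ** theta / math.sqrt(diff ** 2)

x, y, theta = 3.0, 9.0, 1.7
eps = 1e-6
numeric = (loss(theta + eps, x, y) - loss(theta - eps, x, y)) / (2 * eps)
print(abs(numeric - dloss(theta, x, y)) < 1e-4)  # → True
```

If the analytic and numeric values disagree, the derivation (or its implementation) has a bug.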
Let's walk through an example to see how this can be done using something called reverse mode AD. Let's use the SGD RMSE loss function from the introduction above: $L_i(\theta) = \sqrt{(x_i^\theta - y_i)^2}$ $\frac{\partial L_i}{\partial \theta} = \frac{1}{2} \frac{2 (x_i^\theta - y_i) ln(x_i) x_i^\theta}{\sqrt{(x_i^\theta - y_i)^2}}$ The goal of autodiff is to create a generic mechanism to evaluate the gradient of any function without writing code for it for every expression. Reverse mode AD is one way to do it, and it depends heavily on partial derivatives and the chain rule. Forward mode AD is another mechanism and we do not talk about it in this article. To be able to automatically calculate derivatives of arbitrary expressions, we have to first define some primitive expressions and then build complex ones out of them. There are a lot of function compositions in the above RMSE equation. Breaking it into simpler functions starting from primitives and using function composition: $v_0 = \theta$ constant $v_1(v_0) = x^{v_0}$ $v_2 = y_i$ constant function $v_3(v_1, v_2) = v_1 - v_2$ $v_4(v_3) = v_3^2$ $v_5(v_4) = \sqrt{v_4}$ $L_i = v_5 = L(v_5)$ (for simplifying the expressions below). So, if you are calculating the (total) derivative of $L$ w.r.t. $\theta$, how do you do it by hand?
We use the chain rule as follows: $\begin{aligned}\frac{\partial L}{\partial \theta} &= \frac{\partial L}{\partial v_5} \frac{\partial v_5}{\partial \theta} \\ &= \frac{\partial L}{\partial v_5} (\frac{\partial v_5}{\partial v_4} \frac{\partial v_4}{\partial \theta}) \\ &= \frac{\partial L}{\partial v_5} (\frac{\partial v_5}{\partial v_4} (\frac{\partial v_4}{\partial v_3} \frac{\partial v_3}{\partial \theta} )) \\ &= \frac{\partial L}{\partial v_5} (\frac{\partial v_5}{\partial v_4} (\frac{\partial v_4}{\partial v_3} (\frac{\partial v_3}{\partial v_1} \frac{\partial v_1}{\partial \theta} + \frac{\partial v_3}{\partial v_2} \frac{\partial v_2}{\partial \theta}))) \\ &= \dots \\ \end{aligned}$ The most important aspect of reverse mode AD is realizing that we can rewrite the above as: $\begin{aligned}\frac{\partial L}{\partial \theta} &= ((((\frac{\partial L}{\partial v_5}) \frac{\partial v_5}{\partial v_4}) \frac{\partial v_4}{\partial v_3}) \frac{\partial v_3}{\partial v_1} \frac{\partial v_1}{\partial \theta}) + ((((\frac{\partial L}{\partial v_5}) \frac{\partial v_5}{\partial v_4}) \frac{\partial v_4}{\partial v_3}) \frac{\partial v_3}{\partial v_2} \frac{\partial v_2}{\partial \theta}) \end{aligned}$ Execution happens from the innermost parenthesis and proceeds outwards. If we imagine that each function ($v_i$) knows how to find the derivative of its output w.r.t. its input, the expansion above shows a pattern that can be leveraged to compute the derivative: Imagine that each operation/function is implemented as a class. For example, consider $v_i$. Its class implements a function, say `backward`, that takes in the input $\frac{\partial L}{\partial v_i}$. For each of its inputs, it calculates the derivative of its output with respect to that input, multiplies it by the incoming gradient, and returns the result. For example, consider $v_5$.
The `backward` function takes $\frac{\partial L}{\partial v_5}$ as input, computes the derivative $\frac{\partial v_5}{\partial v_4} = \frac{1}{2} \frac{1}{\sqrt{v_4}}$ and returns the product $\frac{\partial L}{\partial v_4} = \frac{\partial L}{\partial v_5} * \frac{\partial v_5}{\partial v_4}$. If a function like $v_3$ takes 2 inputs, it computes the partial derivative w.r.t. each of the inputs and returns the list of updated derivatives. For example, consider $v_3$. It takes $\frac{\partial L}{\partial v_3}$ and returns (1) $\frac{\partial L}{\partial v_1}$ and (2) $\frac{\partial L}{\partial v_2}$. Now we need a driver function, say `backprop`, that orchestrates this sequence of `backward` calls starting from the target ($L = v_5$) and does a BFS on the expression to propagate down the derivatives until leaves are reached ($v_0$ in the above example). Also, when it receives multiple outputs on calling a function (like `backward` on $v_3$ above), it correctly propagates the derivatives to the corresponding input variables (like to $v_1$ and $v_2$ respectively). ``` import json import torch # Instead of creating a separate Expr class for each operation, # we leverage python operator overloading and create only one class Var. The downside is that now we have only # one class but with different derivative calculation based on the operation performed. So, we store the # derivative calculation as a lambda function in the derivatives variable. # Note that I have intentionally not abstracted too much in the Var class - I do not want to hide the necessary
class Var: def __init__(self, name, value): self.value = value self.name = name self.derivatives = [] self.grad_value = None self.is_leaf = False self.is_const = False def __str__(self): return "name=" + self.name + "; value = " + str(self.value) def __mul__(self, v2): v1 = self v = Var(v1.name + "*" + v2.name, v1.value * v2.value) v.derivatives.append( (v1, lambda og : og * v2.value) ) v.derivatives.append ( (v2, lambda og : og * v1.value) ) return v def __add__(self, v2): v1 = self v = Var(v1.name + "+" + v2.name, v1.value + v2.value) v.derivatives.append ( (v1, lambda og : og ) ) v.derivatives.append ( (v2, lambda og : og ) ) return v def __sub__(self, v2): v1 = self v = Var(v1.name + "-" + v2.name, v1.value - v2.value) v.derivatives.append ( (v1, lambda og : og ) ) v.derivatives.append ( (v2, lambda og : -og ) ) return v def __pow__(self, v2): v1 = self v = Var(v1.name + "-" + v2.name, v1.value ** v2.value) v.derivatives.append( (v1, lambda og : og * v2.value * (v1.value ** (v2.value-1)))) v.derivatives.append ( (v2, lambda og : og * v.value * torch.log(torch.tensor([v1.value]))) ) return v def backward(self, output_gradient = None): if output_gradient is None: output_gradient = 1.0 if self.is_const: self.grad_value = 0 return self if not self.is_leaf: return [(var, fn(output_gradient)) for var, fn in self.derivatives] else: if self.grad_value is None: self.grad_value = output_gradient else: self.grad_value += output_gradient return self def sin(v): r = Var("sin(" + v.name + ")", torch.sin(torch.tensor([v.value]))) r.derivatives.append ((v, lambda og : og*torch.cos(torch.tensor([v.value])))) return r def sqrt(v): r = Var("sqrt(" + v.name + ")", torch.sqrt(torch.tensor([v.value]))) r.derivatives.append ((v, lambda og : og*0.5/r.value)) return r def dx1(x1, x2): return x2 + torch.cos(torch.tensor([x1])) def dx2(x1, x2): return x1 # logic of backprop could be hosted in the Var.backward def backprop(loss): gradients = loss.backward() q = [g for g in gradients] 
while len(q) > 0: var, input_gradient = q.pop() gradients = var.backward(input_gradient) if not var.is_leaf: for g in gradients: q.append(g) ``` **Some example exeuctions to check correct working** ``` x1 = Var("x1", 0.50) x1.is_leaf = True x2 = Var("x2", 4.2) x2.is_leaf = True y = (x1 * x2) + sin(x1) print('Expected dx1 should be', dx1(x1.value, x2.value)) print('Expected dx2 should be', dx2(x1.value, x2.value)) backprop(y) print(x1.grad_value) print(x2.grad_value) x = Var("x", 3.0) one = Var("1", 1.0) x.is_leaf = True one.is_leaf = True one.is_const = True y = (x * x * x) + (x * x) + one backprop(y) print(x.grad_value) print(one.grad_value) theta = Var("theta", 4.0) theta.is_leaf = True x = Var("x", 3.0) x.is_leaf = True x.is_const = True y = x ** theta backprop(y) print(theta.grad_value) # verify with: https://www.wolframalpha.com/input/?i=derivative+of+3%5Ex+where+x+%3D4 x = Var("x", 4.0) x.is_leaf = True y = sqrt(x) backprop(y) print(x.grad_value) theta = Var("theta", 4.0) theta.is_leaf = True two = Var("two", 2.0) two.is_leaf = True two.is_const = True def rmse(y, y_hat): return sqrt((y-y_hat) ** two) def forward(x, theta): return x ** theta y = Var("y", 2.828) y_hat = Var("y_hat", 2.3) print(rmse(y, y_hat)) ``` Finally, let's apply this to the SGD of our exponent estimator: ``` import random two = Var("two", 2.0) two.is_leaf = True two.is_const = True def loss(y, y_hat): return sqrt((y-y_hat) ** two) def forward(x, theta): return x ** theta def backward(err): backprop(err) return def optimizer_step(theta, lr, grad): return theta - lr * grad lr = Var("lr", 5e-6) lr.is_leaf = True lr.is_const = True theta = Var("theta", 4.0) theta.is_leaf = True n_epoch = 5000 loss_history = [] theta_history = [] for epoch in range(n_epoch): temp = list(zip(x_train, y_train)) random.shuffle(temp) a, b = zip(*temp) for xi, yi in zip(a, b): var_xi = Var("xi", xi) var_xi.is_leaf = True var_xi.is_const = True y_hat = forward(var_xi, theta) var_yi = Var("yi", yi) var_yi.is_leaf = 
True var_yi.is_const = True err = loss(var_yi, y_hat) backward(err) loss_history.append(err.value) theta_history.append(theta.value) var_grad = Var("grad", theta.grad_value) var_grad.is_leaf = True var_grad.is_const = True theta = optimizer_step(theta, lr, var_grad) theta.name = "theta" theta.is_leaf = True plot_loss_and_exponent(loss_history, theta_history) ``` Not bad for a demo version of the code! Plain Pytorch autograd for autodiff -- Let's use pytorch's autograd directly for a comparison. ``` import torch from torch.autograd import Variable import numpy as np def rmse(y, y_hat): return torch.sqrt(torch.mean((y - y_hat).pow(2))) def forward(x, e): return x.pow(e.repeat(x.size(0))) learning_rate = 5e-6 n_epoch = 5000 x = Variable(x_train, requires_grad=False) y = Variable(y_train, requires_grad=False) theta_hat = Variable(torch.FloatTensor([4]), requires_grad=True) loss_history = [] theta_history = [] for epoch in range(n_epoch): y_hat = forward(x, theta_hat) loss = rmse(y, y_hat) loss_history.append(loss.data.item()) theta_history.append(theta_hat.data.item()) # compute dLoss/dx for every parameter x with requires_grad=True loss.backward() theta_hat.data -= learning_rate * theta_hat.grad.data theta_hat.grad.data.zero_() plot_loss_and_exponent(loss_history, theta_history) ``` Pytorch with built-in optimizer --- Instead of hand writing code to update the parameters, we can use built-in optimizers (with additional features like momentum that are not used in this article). 
``` import torch from torch.autograd import Variable import numpy as np def rmse(y, y_hat): return torch.sqrt(torch.mean((y - y_hat).pow(2))) def forward(x, e): return x.pow(e.repeat(x.size(0))) learning_rate = 5e-6 n_epoch = 5000 x = Variable(x_train, requires_grad=False) y = Variable(y_train, requires_grad=False) theta_hat = Variable(torch.FloatTensor([4]), requires_grad=True) # Using momentum = 0 to compare results identically opt = torch.optim.SGD([theta_hat], lr=learning_rate, momentum=0) loss_history = [] theta_history = [] for i in range(n_epoch): opt.zero_grad() y_hat = forward(x, theta_hat) loss = rmse(y, y_hat) loss_history.append(loss.data.item()) theta_history.append(theta_hat.data.item()) loss.backward() # Updates the value of theta using the gradient theta.grad opt.step() plot_loss_and_exponent(loss_history, theta_history) ``` Tensorflow --- Now let's try the same using tensorflow autograd functions for comparison sake. ``` import tensorflow as tf def rmse(y, y_hat): return tf.sqrt(tf.reduce_mean(tf.square((y - y_hat)))) def forward(x, e): return tf.pow(x, e) n_epoch = 5000 learning_rate = 5e-6 x = tf.Variable(x_train) y = tf.Variable(y_train) theta_hat = tf.Variable(4.0, name='theta_hat') opt = tf.keras.optimizers.SGD(learning_rate, momentum=0.0) loss_history = [] theta_history = [] for i in range(n_epoch): with tf.GradientTape() as tape: y_hat = forward(x, theta_hat) loss = rmse(y, y_hat) grads = tape.gradient(loss, [theta_hat]) opt.apply_gradients(zip(grads, [theta_hat])) loss_history.append(loss) theta_history.append(theta_hat.numpy()) plot_loss_and_exponent(loss_history, theta_history) ``` References -- * https://jmlr.org/papers/v18/17-468.html * https://rufflewind.com/2016-12-30/reverse-mode-automatic-differentiation * http://neuralnetworksanddeeplearning.com/chap2.html * The exponent estimator problem is taken from this: https://towardsdatascience.com/pytorch-vs-tensorflow-spotting-the-difference-25c75777377b
``` import pandas as pd import numpy as np def load_datasets(): file = '../Data-files/1352S_101F.xlsx' sheets = ['整数-101F', 'top5F', 'top3F', '3F'] tables = {name: pd.read_excel(file, sheet_name=name) for name in sheets} for i in tables.values(): i.index += 1 return tables def to_otus(raw_tables): for name, table in raw_tables.items(): table = table.drop(columns='Status') table.index = table['SampleID'] table = table.drop(columns='SampleID') # negative values treated as zeros raw_tables[name] = table.applymap(lambda x: x if x >= 0 else 0) return raw_tables def make_meta(otus_with_status): sources_ix = np.random.rand(1352) < 0.9 sinks_ix = ~sources_ix meta = pd.DataFrame(data={'Env': otus_with_status['Status'].tolist(), 'SourceSink': 'Source'}, index=otus_with_status['SampleID']) meta.iloc[sinks_ix, 1] = 'Sink' meta['id'] = 0 # assign sink ids in place (avoid chained indexing, which writes to a copy) meta.loc[meta['SourceSink'] == 'Sink', 'id'] = range(1, sum(sinks_ix) + 1) return meta def plot_roc(series): #... return None # Define functions def FEAST_post_proc(df): # df = pd.read_csv('../tmp/F3_source_contributions_matrix.txt', sep='\t').T df = df.T df['GroupID'] = df.index.to_series().apply(lambda x: x.split('_')[1] if x != 'Unknown' else x) df = df.groupby(by='GroupID').sum().T df['Env'] = df.index.to_series().apply(lambda x: x.split('_')[1]) df['SampleID'] = df.index.tolist() return df def JSD_post_proc(df, meta): # probas = softmax(1-distance) for each sample df = 1 - df.T df['Env'] = meta.loc[meta['SourceSink']=='Source', 'Env'].tolist() softmax = lambda x: np.exp(x) / np.exp(x).sum() df = df.groupby(by='Env').mean().apply(softmax, axis=0) df = df.T df.rename(mapper={'Env': '1', 'died': 'died', 'survived': 'survived'}, axis=1, inplace=True) df['Env'] = meta.loc[meta['SourceSink']=='Sink', 'Env'].tolist() df['SampleID'] = df.index.tolist() return df ``` ``` # Prepare names = ["F5_top", "F3_top", "F101", "F3"] meta = pd.read_csv('../tmp/meta.csv') res_JSD = {name: '../Tmp/'+name+'.jsd.distance.csv' for name in names} res_FEAST = {name: '../Tmp/'+name+'_source_contributions_matrix.txt' for name in names} pred_JSD = {name: '../DLMER-Bio/'+name+'.JSD.predictions.csv' for name in names} pred_FEAST = {name: '../DLMER-Bio/'+name+'.FEAST.predictions.csv' for name in names} DLMER_in_JSD = {name: '../DLMER-Bio/'+name+'.JSD.DLMER.in.txt' for name in names} DLMER_in_FEAST = {name: '../DLMER-Bio/'+name+'.FEAST.DLMER.in.txt' for name in names} # Read res res_JSD_dfs = {name: pd.read_csv(f, index_col=0) for name, f in res_JSD.items()} res_FEAST_dfs = {name: pd.read_csv(f, sep='\t') for name, f in res_FEAST.items()} JSD_post_proc(res_JSD_dfs['F101'], meta) ```
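One caveat about the `softmax` lambda in `JSD_post_proc`: `np.exp` overflows for large inputs (not an issue for the bounded 1 - distance values here, but worth knowing). The standard fix is to subtract the maximum before exponentiating; a pure-Python sketch:

```python
import math

def stable_softmax(xs):
    # Subtracting the max prevents overflow in exp() for large inputs,
    # without changing the result (the shift cancels in the ratio).
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

print(stable_softmax([1000.0, 1001.0]))  # finite probabilities; a naive exp() would overflow
```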
# TicTacToe

### Main File

NOTE: Add in Sections as they are completed

### Completed Sections
1. TODO:
2. TODO:
3. TODO:
4. TODO:
5. Complete
6. TODO:
7. TODO:
8. TODO:

## Sections
1. Selection of the first player. (e.g. Would you like to play first?)
2. Assignment of "O" or "X" for a user. (e.g. Please choose 'O' or 'X' for your turn.)
3. Computer selects a position at random.
4. If the computer can select a position based on an algorithm (e.g. the MiniMax algorithm or a rule-based approach), you will earn 20 extra points.
5. At each run, the program should display the board (3x3) as shown below. Placement is made based on the pre-assigned positional number.
6. Your program prints an appropriate message if there is a winner and then exits.
7. Your program saves all the moves between players in a file called tictactoe.txt (X:5 O:2 X:1 O:9 etc...)
8. Your program should handle incorrect inputs (e.g. input validation) and continue to play without an error.

```
# place all imports at the top of the file here
import random
import logging

# The board maps the pre-assigned positional numbers 1-9 to the mark placed
# there; ' ' means the cell is still free. A dict (rather than a list) lets
# the helpers below iterate over board.keys().
board = {pos: ' ' for pos in range(1, 10)}

# Logging for the game moves (section 7)
logging.basicConfig(filename='ttt_game_log.txt',
                    level=logging.DEBUG,
                    format='%(asctime)s - %(levelname)s - %(message)s')


class TicTacToe:
    '''Super class for playing tic tac toe'''

    def __init__(self, player, computer):
        '''defines the player, the computer, and their choices of X or O'''
        self.player = player
        self.computer = computer

    @staticmethod
    def spaceIsFree(pos):
        '''Check if a space is free on the board'''
        return board[pos] == ' '

    @staticmethod
    def checkTie():
        '''Checks the board for a tie (no free space left)'''
        for key in board.keys():
            if board[key] == ' ':
                return False
        return True

    @staticmethod
    def checkWin(board, mark):
        '''Checks for a winner using the board (positions 1-9)'''
        i = board
        return ((i[1] == i[2] == i[3] == mark) or  # 1. top_horizon
                (i[4] == i[5] == i[6] == mark) or  # 2. mid_horizon
                (i[7] == i[8] == i[9] == mark) or  # 3. bottom_horizon
                (i[1] == i[4] == i[7] == mark) or  # 4. left_vertical
                (i[2] == i[5] == i[8] == mark) or  # 5. middle_vertical
                (i[3] == i[6] == i[9] == mark) or  # 6. right_vertical
                (i[1] == i[5] == i[9] == mark) or  # 7. tl_br
                (i[3] == i[5] == i[7] == mark))    # 8. tr_bl

    def minimax(self, board, isMaximizing):
        '''criteria for minimax: the computer maximizes the score,
        the player minimizes it'''
        if TicTacToe.checkWin(board, self.computer):
            return 100
        elif TicTacToe.checkWin(board, self.player):
            return -100
        elif TicTacToe.checkTie():
            return 0

        if isMaximizing:
            bestScore = -100
            for key in board.keys():
                if board[key] == ' ':
                    board[key] = self.computer
                    score = self.minimax(board, False)  # recurse as the minimizer
                    board[key] = ' '
                    if score > bestScore:
                        bestScore = score
            return bestScore
        else:
            bestScore = 100
            for key in board.keys():
                if board[key] == ' ':
                    board[key] = self.player
                    score = self.minimax(board, True)  # recurse as the maximizer
                    board[key] = ' '
                    if score < bestScore:
                        bestScore = score
            return bestScore
```

### Part 1 - Player Selection

```
# TODO: who goes first (the coin flip in Part 2 decides)
print('''Welcome to Tic Tac Toe.
Our Version by Steph & Thad
To Play Please Select either X's or O's.
For X's press 1
For O's press 2''')
```

### Part 2 - Piece Selection

```
import random

# A single coin flip decides who goes first
if random.randint(0, 1) == 1:
    print('Heads - the player goes first!')
else:
    print('Tails - the computer goes first!')

# The user then picks a piece; the computer takes the other one
choice = input("Press 1 for X or 2 for O: ")
player = 'X' if choice == '1' else 'O'
computer = 'O' if player == 'X' else 'X'
```

### Part 3 - Computer selects randomly

```
# Random move in place of the minimax algorithm unless minimax works
import random

def comp_move(board):
    '''Picks a random free position for the computer's turn'''
    free = [pos for pos in board.keys() if board[pos] == ' ']
    return random.choice(free) if free else None
```

### Part 4 - Minimax **(optional)**

```
def Minimax(board, isMaximizing, player, computer):
    '''Standalone version of the minimax criteria used by the class above'''
    if TicTacToe.checkWin(board, computer):
        return 100
    elif TicTacToe.checkWin(board, player):
        return -100
    elif TicTacToe.checkTie():
        return 0

    if isMaximizing:
        bestScore = -100
        for key in board.keys():
            if board[key] == ' ':
                board[key] = computer
                score = Minimax(board, False, player, computer)
                board[key] = ' '
                if score > bestScore:
                    bestScore = score
        return bestScore
    else:
        bestScore = 100
        for key in board.keys():
            if board[key] == ' ':
                board[key] = player
                score = Minimax(board, True, player, computer)
                board[key] = ' '
                if score < bestScore:
                    bestScore = score
        return bestScore
```

### Part 5 - Complete - Display Board
- Must Display every turn

```
# prints the board
# Finished
def getBoard(board):
    '''Displays the board with the pre-assigned positional numbers, starting
    in the top left (1) and finishing in the bottom right (9)'''
    cells = [board[pos] for pos in range(1, 10)]
    print("""   'A'    'B'    'C'
 ____________________
|      |      |      |
|  {0}   |  {1}   |  {2}   |  '1'
|______|______|______|
|      |      |      |
|  {3}   |  {4}   |  {5}   |  '2'
|______|______|______|
|      |      |      |
|  {6}   |  {7}   |  {8}   |  '3'
|______|______|______|""".format(*cells))
```

### Part 7 - Program Logs All Files
- Logs all moves in a file (tictactoe.txt)

### Part 8 - Handle input error and continue

```
# TODO: full input validation; for now, catch a bad input and continue
try:
    raise Exception("Doesn't work")
except Exception as err:
    print('Invalid input ({}), please try again.'.format(err))
```
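As a compact, self-contained illustration of the minimax idea targeted in Part 4, here is a sketch under assumed marks ('X' for the computer as maximizer, 'O' for the player as minimizer), independent of the class above. It searches the game tree, undoing each trial move, and picks the winning move in a small position:

```python
def wins(b, m):
    # the eight winning lines on a board numbered 1-9
    lines = [(1, 2, 3), (4, 5, 6), (7, 8, 9),
             (1, 4, 7), (2, 5, 8), (3, 6, 9),
             (1, 5, 9), (3, 5, 7)]
    return any(b[x] == b[y] == b[z] == m for x, y, z in lines)

def minimax(b, maximizing, computer='X', player='O'):
    # terminal positions: +100 computer win, -100 player win, 0 tie
    if wins(b, computer):
        return 100
    if wins(b, player):
        return -100
    if all(v != ' ' for v in b.values()):
        return 0
    mark = computer if maximizing else player
    scores = []
    for pos in [p for p, v in b.items() if v == ' ']:
        b[pos] = mark                                        # try the move...
        scores.append(minimax(b, not maximizing, computer, player))
        b[pos] = ' '                                         # ...and undo it
    return max(scores) if maximizing else min(scores)

def best_move(b, computer='X', player='O'):
    best, best_score = None, -101
    for pos in [p for p, v in b.items() if v == ' ']:
        b[pos] = computer
        score = minimax(b, False, computer, player)
        b[pos] = ' '
        if score > best_score:
            best, best_score = pos, score
    return best

# X (the computer) holds 1 and 2, O holds 4 and 5: X should complete the top row.
board = {p: ' ' for p in range(1, 10)}
board[1] = board[2] = 'X'
board[4] = board[5] = 'O'
print(best_move(board))  # → 3
```

Undoing each trial move (`b[pos] = ' '`) keeps the search on a single shared board, which is what the notebook's version attempts but never completes.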
Copyright (c) Microsoft Corporation. All rights reserved. Licensed under the MIT License. # 06. Distributed CNTK using custom docker images In this tutorial, you will train a CNTK model on the [MNIST](http://yann.lecun.com/exdb/mnist/) dataset using a custom docker image and distributed training. ## Prerequisites * Understand the [architecture and terms](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture) introduced by Azure Machine Learning * Go through the [00.configuration.ipynb]() notebook to: * install the AML SDK * create a workspace and its configuration file (`config.json`) ``` # Check core SDK version number import azureml.core print("SDK version:", azureml.core.VERSION) ``` ## Diagnostics Opt-in diagnostics for better experience, quality, and security of future releases. ``` from azureml.telemetry import set_diagnostics_collection set_diagnostics_collection(send_diagnostics = True) ``` ## Initialize workspace Initialize a [Workspace](https://review.docs.microsoft.com/en-us/azure/machine-learning/service/concept-azure-machine-learning-architecture?branch=release-ignite-aml#workspace) object from the existing workspace you created in the Prerequisites step. `Workspace.from_config()` creates a workspace object from the details stored in `config.json`. ``` from azureml.core.workspace import Workspace ws = Workspace.from_config() print('Workspace name: ' + ws.name, 'Azure region: ' + ws.location, 'Subscription id: ' + ws.subscription_id, 'Resource group: ' + ws.resource_group, sep = '\n') ``` ## Create a remote compute target You will need to create a [compute target](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture#compute-target) to execute your training script on. In this tutorial, you create an [Azure Batch AI](https://docs.microsoft.com/azure/batch-ai/overview) cluster as your training compute resource. 
This code creates a cluster for you if it does not already exist in your workspace. **Creation of the cluster takes approximately 5 minutes.** If the cluster is already in your workspace this code will skip the cluster creation process. ``` from azureml.core.compute import ComputeTarget, AmlCompute from azureml.core.compute_target import ComputeTargetException # choose a name for your cluster cluster_name = "gpucluster" try: compute_target = ComputeTarget(workspace=ws, name=cluster_name) print('Found existing compute target.') except ComputeTargetException: print('Creating a new compute target...') compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_NC6', max_nodes=6) # create the cluster compute_target = ComputeTarget.create(ws, cluster_name, compute_config) compute_target.wait_for_completion(show_output=True) # Use the 'status' property to get a detailed status for the current cluster. print(compute_target.status.serialize()) ``` ## Upload training data For this tutorial, we will be using the MNIST dataset. First, let's download the dataset. We've included the `install_mnist.py` script to download the data and convert it to a CNTK-supported format. Our data files will get written to a directory named `'mnist'`. ``` import install_mnist install_mnist.main('mnist') ``` To make the data accessible for remote training, you will need to upload the data from your local machine to the cloud. AML provides a convenient way to do so via a [Datastore](https://docs.microsoft.com/azure/machine-learning/service/how-to-access-data). The datastore provides a mechanism for you to upload/download data, and interact with it from your remote compute targets. Each workspace is associated with a default datastore. In this tutorial, we will upload the training data to this default datastore, which we will then mount on the remote compute for training in the next section. 
```
ds = ws.get_default_datastore()
print(ds.datastore_type, ds.account_name, ds.container_name)
```

The following code will upload the training data to the path `./mnist` on the default datastore.

```
ds.upload(src_dir='./mnist', target_path='./mnist')
```

Now let's get a reference to the path on the datastore with the training data. We can do so using the `path` method. In the next section, we can then pass this reference to our training script's `--data_dir` argument.

```
path_on_datastore = 'mnist'
ds_data = ds.path(path_on_datastore)
print(ds_data)
```

## Train model on the remote compute

Now that we have the cluster ready to go, let's run our distributed training job.

### Create a project directory

Create a directory that will contain all the necessary code from your local machine that you will need access to on the remote resource. This includes the training script, and any additional files your training script depends on.

```
import os

project_folder = './cntk-distr'
os.makedirs(project_folder, exist_ok=True)
```

Copy the training script `cntk_distr_mnist.py` into this project directory.

```
import shutil

shutil.copy('cntk_distr_mnist.py', project_folder)
```

### Create an experiment

Create an [experiment](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture#experiment) to track all the runs in your workspace for this distributed CNTK tutorial.

```
from azureml.core import Experiment

experiment_name = 'cntk-distr'
experiment = Experiment(ws, name=experiment_name)
```

### Create an Estimator

The AML SDK's base Estimator enables you to easily submit custom scripts for both single-node and distributed runs. You should use this generic estimator for training code using frameworks such as sklearn or CNTK that don't have corresponding custom estimators. For more information on using the generic estimator, refer [here](https://docs.microsoft.com/azure/machine-learning/service/how-to-train-ml-models).
```
from azureml.train.estimator import Estimator

script_params = {
    '--num_epochs': 20,
    '--data_dir': ds_data.as_mount(),
    '--output_dir': './outputs'
}

estimator = Estimator(source_directory=project_folder,
                      compute_target=compute_target,
                      entry_script='cntk_distr_mnist.py',
                      script_params=script_params,
                      node_count=2,
                      process_count_per_node=1,
                      distributed_backend='mpi',
                      pip_packages=['cntk-gpu==2.6'],
                      custom_docker_base_image='microsoft/mmlspark:gpu-0.12',
                      use_gpu=True)
```

We would like to train our model using a [pre-built Docker container](https://hub.docker.com/r/microsoft/mmlspark/). To do so, specify the name of the docker image to the argument `custom_docker_base_image`. You can only provide images available in public docker repositories such as Docker Hub using this argument. To use an image from a private docker repository, use the constructor's `environment_definition` parameter instead. Finally, we provide the `cntk-gpu` package to `pip_packages` to install CNTK 2.6 on our custom image.

The above code specifies that we will run our training script on `2` nodes, with one worker per node. In order to run distributed CNTK, which uses MPI, you must provide the argument `distributed_backend='mpi'`.

### Submit job

Run your experiment by submitting your estimator object. Note that this call is asynchronous.

```
run = experiment.submit(estimator)
print(run.get_details())
```

### Monitor your run

You can monitor the progress of the run with a Jupyter widget. Like the run submission, the widget is asynchronous and provides live updates every 10-15 seconds until the job completes.

```
from azureml.widgets import RunDetails

RunDetails(run).show()
```

Alternatively, you can block until the script has completed training before running more code.

```
run.wait_for_completion(show_output=True)
```
``` from time import time from importlib import reload import pandas as pd import sklearn import numpy as np import matplotlib.pyplot as plt from sklearn.linear_model import LinearRegression, ElasticNet, LogisticRegression from sklearn.ensemble import ExtraTreesClassifier, ExtraTreesRegressor, GradientBoostingClassifier, AdaBoostClassifier, BaggingClassifier from sklearn.cross_decomposition import PLSRegression from sklearn.model_selection import train_test_split, cross_val_score, GridSearchCV, RandomizedSearchCV from sklearn.metrics import mean_squared_error, accuracy_score import xgboost as xgb import titanic.preprocess reload(titanic.preprocess) preprocess = titanic.preprocess test = pd.read_csv("C:\\Data\\kaggle\\titanic\\test.csv") train = pd.read_csv("C:\\Data\\kaggle\\titanic\\train.csv") all_df = pd.concat([train, test], axis=0, sort=False) ##### Estimate Age all_df['Surname'] = preprocess.get_surnames(all_df) all_df['Title'] = preprocess.get_titles(all_df) # Get families families = preprocess._get_families(all_df) #all_df['mean_fare'] = preprocess.mean_family_fare(all_df, families) # get Class, sex and title dummies. 
all_df = pd.get_dummies(all_df, columns=['Pclass', 'Sex', 'Title', 'Embarked']) #all_df["Fare"] = all_df["Fare"].fillna(np.nanmean(all_df['Fare'])) all_df['AgeUnknown'] = np.where(all_df['Age'] != all_df['Age'], 1, 0) all_df['CabinUnknown'] = np.where(all_df['Cabin'] != all_df['Cabin'], 1, 0) all_df['CabinKnown'] = np.where(all_df['Cabin'] == all_df['Cabin'], 1, 0) all_df['FamilySize'] = all_df['Parch'] + all_df['SibSp'] + 1 all_df['Alone'] = np.where(all_df['FamilySize'] == 1, 1, 0) all_df['Couple'] = np.where(all_df['FamilySize'] == 2, 1, 0) all_df['SmallFamily'] = np.where(all_df['FamilySize'].isin((3,4)), 1, 0) all_df['Family'] = np.where(all_df['FamilySize'] > 4, 1, 0) # split df into two for those where age is known and unknown # known will be training/testing, unknown will be predicts known, unknown = preprocess.age_split(all_df) columns = ['Pclass_1', 'Pclass_2','Pclass_3', 'Sex_female', 'Sex_male', 'Title_Master', 'Title_Miss', 'Title_Mr', 'Title_Mrs', 'Title_Rev', 'Parch', 'SibSp'] X = known[columns] y = known['Age'] train_X, test_X, train_y, test_y = train_test_split(X, y, test_size=0.2) clf = ExtraTreesRegressor(bootstrap=False, criterion='mse', max_depth=None, max_features=12, min_samples_split=30, n_estimators=1000) clf.fit(train_X, train_y) unknown['Age'] = clf.predict(unknown[columns]).astype(int) all_df = pd.concat([known, unknown], axis=0, sort=False) all_df.head(40) adult_died, adult_unknown, adult_survived, child_died, child_unknown, child_survived = preprocess.rs(all_df, families) all_df['AdultDied'] = np.where(adult_died, 1, 0) all_df['AdultUnknown'] = np.where(adult_unknown, 1, 0) all_df['AdultSurvived'] = np.where(adult_survived, 1, 0) all_df['ChildDied'] = np.where(child_died, 1, 0) all_df['ChildUnknown'] = np.where(child_unknown, 1, 0) all_df['ChildSurvived'] = np.where(child_survived, 1, 0) train = all_df[all_df['Survived'].isin((0,1))] test = all_df[all_df['Survived'] != all_df['Survived']] columns = ['Age', 'SibSp', 'Parch', 'Pclass_1', 
'Pclass_2', 'Pclass_3', 'Sex_male', 'Sex_female', 'AdultDied', 'AdultSurvived', 'ChildDied', 'ChildSurvived', 'Embarked_C', 'Embarked_Q', 'Embarked_S', 'AgeUnknown', 'ChildUnknown', 'Alone', 'Couple', 'SmallFamily', 'Family', 'CabinKnown', 'CabinUnknown'] X = train[columns] y = train['Survived'] for clf in [LogisticRegression, GradientBoostingClassifier, AdaBoostClassifier, ExtraTreesClassifier, BaggingClassifier]: scores = [] for i in range(25): X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2) clfi = clf() clfi.fit(X_train, y_train) y_pred = clfi.predict(X_test) scores.append(accuracy_score(y_pred, y_test)) print(sum(scores) / 25) def report(results, n_top=3): for i in range(1, n_top + 1): candidates = np.flatnonzero(results['rank_test_score'] == i) for candidate in candidates: print("Model with rank: {0}".format(i)) print("Mean validation score: {0:.3f} (std: {1:.3f})".format( results['mean_test_score'][candidate], results['std_test_score'][candidate])) print("Parameters: {0}".format(results['params'][candidate])) print("") clf = GradientBoostingClassifier() param_grid = {'loss' : ['deviance'], 'learning_rate' : [0.01, 0.1], 'n_estimators' : [100, 300, 500], 'subsample' : [0.4, 0.5, 0.6], 'max_depth':[None], 'max_features':[None], 'min_samples_split':[10,30]} grid_search = GridSearchCV(clf, param_grid=param_grid, cv=5, iid=False, verbose=1, n_jobs=-1) grid_search.fit(X, y) report(grid_search.cv_results_) clf = LogisticRegression(solver='newton-cg', penalty='l2') clf.fit(X_train, y_train) y_pred = clf.predict(X_test) print(accuracy_score(y_pred, y_test)) for col, coef in zip(columns, *clf.coef_): print(col, coef) # Create the parameter grid: gbm_param_grid gbm_param_grid = { 'n_estimators': range(8, 500), 'max_depth': range(1, 20), 'learning_rate': [.3, .4, .45, .5, .55, .6], 'colsample_bytree': [.8, .9, 1] } # Instantiate the regressor: gbm gbm = xgb.XGBClassifier(n_estimators=10) # Perform random search: grid_mse xgb_random = 
RandomizedSearchCV(param_distributions=gbm_param_grid,
                   estimator = gbm, scoring = "accuracy",
                   verbose = 1, n_iter = 50, cv = 5, n_jobs=-1)

# Fit the randomized search to the data
xgb_random.fit(X, y)

# Print the best parameters and the best accuracy
print("Best parameters found: ", xgb_random.best_params_)
print("Best accuracy found: ", xgb_random.best_score_)

report(xgb_random.cv_results_)

# RandomizedSearchCV has no coef_ attribute; inspect the best estimator instead
xgb_random.best_estimator_

# Create submission
params = {'learning_rate': 0.01,
          'loss': 'deviance',
          'max_depth': None,
          'max_features': None,
          'min_samples_split': 30,
          'n_estimators': 300,
          'subsample': 0.4}

clf = GradientBoostingClassifier(**params)
clf.fit(X, y)

PassengerId = test['PassengerId']
# No scaler was fitted earlier in this notebook, so use the raw columns
test = test[columns]

gdb_prediction = clf.predict(test)

output = pd.DataFrame({ 'PassengerId' : PassengerId, 'Survived': gdb_prediction })
output.to_csv('GDB3.csv', index=False)

X.columns
```
<a href="https://colab.research.google.com/github/skojaku/cidre/blob/second-edit/examples/example.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # About this notebook In this notebook, we apply CIDRE to a network with communities and demonstrate how to use CIDRE and visualize the detected groups. ## Preparation ### Install CIDRE package First, we install `cidre` package with `pip`: ``` !pip install cidre ``` ### Loading libraries Next, we load some libraries ``` import sys import numpy as np from scipy import sparse import pandas as pd import cidre import networkx as nx ``` # Example 1 We first present an example of a small artificial network, which can be loaded by ``` # Data path edge_file = "https://raw.githubusercontent.com/skojaku/cidre/main/data/synthe/edge-table.csv" node_file = "https://raw.githubusercontent.com/skojaku/cidre/main/data/synthe/node-table.csv" # Load node_table = pd.read_csv(node_file) A, node_labels = cidre.utils.read_edge_list(edge_file) # Visualization nx.draw(nx.from_scipy_sparse_matrix(A), linewidths = 1, edge_color="#8d8d8d", edgecolors="b") ``` ## About this network We constructed this synthetic network by generating a network using a stochastic block model (SBM) composed of two blocks and then adding excessive citation edges among uniformly randomly selected pairs of nodes. Each block corresponds to a community, i.e., a group of nodes that are densely connected with each other within it but sparsely connected with those in the opposite group. Such communities overshadow anomalous groups in networks. ## Community detection with graph-tool Let's pretend that we do not know that the network is composed of two communities plus additional edges. To run CIDRE, we first need to find the communities. 
We use the [graph-tool package](https://graph-tool.skewed.de/) to do this, which can be installed by

```bash
conda install -c conda-forge graph-tool
```

or in the `Colaboratory` platform:

```
%%capture
!echo "deb http://downloads.skewed.de/apt bionic main" >> /etc/apt/sources.list
!apt-key adv --keyserver keys.openpgp.org --recv-key 612DEFB798507F25
!apt-get update
!apt-get install python3-graph-tool python3-cairo python3-matplotlib
```

Now, let's detect communities by fitting the degree-corrected stochastic block model (dcSBM) to the network and consider each detected block as a community.

```
import graph_tool.all as gt

def detect_community(A, K = None, **params):
    """Detect communities using the graph-tool package

    :param A: adjacency matrix
    :type A: scipy.csr_sparse_matrix
    :param K: Maximum number of communities. If K = None, the number of communities is automatically determined by graph-tool.
    :type K: int or None
    :param params: parameters passed to graph_tool.gt.minimize_blockmodel_dl
    """

    def to_graph_tool_format(adj, membership=None):
        g = gt.Graph(directed=True)
        r, c, v = sparse.find(adj)
        nedges = v.size
        edge_weights = g.new_edge_property("double")
        g.edge_properties["weight"] = edge_weights
        g.add_edge_list(
            np.hstack([np.transpose((r, c)), np.reshape(v, (nedges, 1))]),
            eprops=[edge_weights],
        )
        return g

    G = to_graph_tool_format(A)

    states = gt.minimize_blockmodel_dl(
        G,
        state_args=dict(eweight=G.ep.weight),
        multilevel_mcmc_args = {"B_max": A.shape[0] if K is None else K},
        **params
    )
    b = states.get_blocks()
    return np.unique(np.array(b.a), return_inverse = True)[1]

group_membership = detect_community(A)
```

## Detecting anomalous groups in the network

Now, we feed the network and its community structure to CIDRE. To do this, we create a `cidre.Cidre` object and input `group_membership` along with some key parameters to `cidre.Cidre`.
```
alg = cidre.Cidre(group_membership = group_membership, alpha = 0.05, min_edge_weight = 1)
```

- `alpha` (default 0.01) is the statistical significance level.
- `min_edge_weight` is the threshold of the edge weight, i.e., the edges with weight less than this value will be removed.

Then, we input the network to `cidre.Cidre.detect`.

```
groups = alg.detect(A, threshold=0.15)
```

`groups` is a list of `Group` instances. A `Group` instance represents a group of nodes detected by CIDRE and contains information about the type of each member node (i.e., donor and recipient). We can get the donor nodes of a group, for example `groups[0]`, by

```
groups[0].donors
```

The keys and values of this dict object are the IDs of the nodes and their donor scores, respectively. The recipients and their recipient scores can be obtained by

```
groups[0].recipients
```

## Visualization

The `cidre` package provides an API to visualize small groups. To use this API, we first need to import some additional libraries.

```
import seaborn as sns
import matplotlib.pyplot as plt
```

Then, plot the group by

```
# The following three lines are purely for visual enhancement, i.e., changing the saturation of the colors and font size.
sns.set_style("white")
sns.set(font_scale=1.2)
sns.set_style("ticks")

# Set the figure size
width, height = 5, 5
fig, ax = plt.subplots(figsize=(width, height))

# Plot a citation group
cidre.DrawGroup().draw(groups[0], ax = ax)
```

# Example 2

Let's apply CIDRE to a much larger empirical citation network, i.e., the citation network of journals in 2013.
``` # Data path edge_file = "https://raw.githubusercontent.com/skojaku/cidre/main/data/journal-citation/edge-table-2013.csv" node_file = "https://raw.githubusercontent.com/skojaku/cidre/main/data/journal-citation/community-label.csv" # Load node_table = pd.read_csv(node_file) A, node_labels = cidre.utils.read_edge_list(edge_file) ``` ## About this network This network is a citation network of journals in 2013 constructed from Microsoft Academic Graph. Each edge is weighted by the number of citations made to the papers in the prior two years. The following are basic statistics of this network. ``` print("Number of nodes: %d" % A.shape[0]) print("Number of edges: %d" % A.sum()) print("Average degree: %.2f" % (A.sum()/A.shape[0])) print("Max in-degree: %d" % np.max(A.sum(axis = 0))) print("Max out-degree: %d" % np.max(A.sum(axis = 1))) print("Maximum edge weight: %d" % A.max()) print("Minimum edge weight: %d" % np.min(A.data)) ``` ## Communities [In our paper](https://www.nature.com/articles/s41598-021-93572-3), we identified the communities of journals using [graph-tool](https://graph-tool.skewed.de/). `node_table` contains the community membership of each journal, from which we prepare `group_membership` array as follows. 
```
# Get the group membership
node2com = dict(zip(node_table["journal_id"], node_table["community_id"]))
group_membership = [node2com[node_labels[i]] for i in range(A.shape[0])]
```

## Detecting anomalous groups in the network

As is demonstrated in the first example, we detect the anomalous groups in the network by

```
alg = cidre.Cidre(group_membership = group_membership, alpha = 0.01, min_edge_weight = 10)
groups = alg.detect(A, threshold=0.15)

print("The number of journals in the largest group: %d" % np.max([group.size() for group in groups]))
print("Number of groups detected: %d" % len(groups))
```

[In our paper](https://www.nature.com/articles/s41598-021-93572-3), we omitted the groups that have fewer than 50 within-group citations because we expect that anomalous citation groups contain sufficiently many within-group citations, i.e.,

```
groups = [group for group in groups if group.get_num_edges() >= 50]
```

where `group.get_num_edges()` gives the sum of the weights of the non-self-loop edges within the group.

## Visualization

Let us visualize the groups detected by CIDRE. For expository purposes, we sample three groups uniformly at random to visualize.

```
groups_sampled = [groups[i] for i in np.random.choice(len(groups), 3, replace = False)]

import seaborn as sns
import matplotlib.pyplot as plt

sns.set_style("white")
sns.set(font_scale=1.2)
sns.set_style("ticks")

fig, axes = plt.subplots(ncols = 3, figsize=(6 * 3, 5))
for i in range(3):
    cidre.DrawGroup().draw(groups_sampled[i], ax = axes.flat[i])
```

The numbers beside the nodes are the IDs of the journals in the network. To show the journals' names, we do the following.
First, we load the node labels and make a dictionary that maps the ID of each node to its label:

```
df = pd.read_csv("https://raw.githubusercontent.com/skojaku/cidre/main/data/journal-citation/journal_names.csv")
journalid2label = dict(zip(df.journal_id.values, df.name.values))  # Dictionary from MAG journal ID to the journal name

id2label = {k: journalid2label[v] for k, v in node_labels.items()}  # This is a dictionary from ID to label, i.e., {ID: journal_name}
```

Then, give `id2label` to `cidre.DrawGroup.draw`, i.e.,

```
sns.set_style("white")
sns.set(font_scale=1.2)
sns.set_style("ticks")

fig, axes = plt.subplots(ncols = 3, figsize=(9 * 3, 5))
for i in range(3):
    plotter = cidre.DrawGroup()
    plotter.font_size = 12  # Font size
    plotter.label_node_margin = 0.7  # Margin between labels and node
    plotter.draw(groups_sampled[i], node_labels = id2label, ax = axes.flat[i])
```
# Fine-Tuning a BERT Model and Create a Text Classifier In the previous section, we've already performed the Feature Engineering to create BERT embeddings from the `reviews_body` text using the pre-trained BERT model, and split the dataset into train, validation and test files. To optimize for Tensorflow training, we saved the files in TFRecord format. Now, let’s fine-tune the BERT model to our Customer Reviews Dataset and add a new classification layer to predict the `star_rating` for a given `review_body`. ![BERT Training](img/bert_training.png) As mentioned earlier, BERT’s attention mechanism is called a Transformer. This is, not coincidentally, the name of the popular BERT Python library, “Transformers,” maintained by a company called HuggingFace. We will use a variant of BERT called [**DistilBert**](https://arxiv.org/pdf/1910.01108.pdf) which requires less memory and compute, but maintains very good accuracy on our dataset. ``` import time import random import pandas as pd from glob import glob import argparse import json import subprocess import sys import os import tensorflow as tf from transformers import DistilBertTokenizer from transformers import TFDistilBertForSequenceClassification from transformers import DistilBertConfig %store -r max_seq_length try: max_seq_length except NameError: print("++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++") print("[ERROR] Please run the notebooks in the PREPARE section before you continue.") print("++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++") print(max_seq_length) def select_data_and_label_from_record(record): x = { "input_ids": record["input_ids"], "input_mask": record["input_mask"], # 'segment_ids': record['segment_ids'] } y = record["label_ids"] return (x, y) def file_based_input_dataset_builder(channel, input_filenames, pipe_mode, is_training, drop_remainder): # For training, we want a lot of parallel reading and shuffling. 
    # For eval, we want no shuffling and parallel reading doesn't matter.
    if pipe_mode:
        print("***** Using pipe_mode with channel {}".format(channel))
        from sagemaker_tensorflow import PipeModeDataset

        dataset = PipeModeDataset(channel=channel, record_format="TFRecord")
    else:
        print("***** Using input_filenames {}".format(input_filenames))
        dataset = tf.data.TFRecordDataset(input_filenames)

    dataset = dataset.repeat(100)
    dataset = dataset.prefetch(tf.data.experimental.AUTOTUNE)

    name_to_features = {
        "input_ids": tf.io.FixedLenFeature([max_seq_length], tf.int64),
        "input_mask": tf.io.FixedLenFeature([max_seq_length], tf.int64),
        # "segment_ids": tf.io.FixedLenFeature([max_seq_length], tf.int64),
        "label_ids": tf.io.FixedLenFeature([], tf.int64),
    }

    def _decode_record(record, name_to_features):
        """Decodes a record to a TensorFlow example."""
        return tf.io.parse_single_example(record, name_to_features)

    dataset = dataset.apply(
        tf.data.experimental.map_and_batch(
            lambda record: _decode_record(record, name_to_features),
            batch_size=8,
            drop_remainder=drop_remainder,
            num_parallel_calls=tf.data.experimental.AUTOTUNE,
        )
    )

    # cache() returns a new dataset, so the result must be reassigned to take effect
    dataset = dataset.cache()

    if is_training:
        dataset = dataset.shuffle(seed=42, buffer_size=10, reshuffle_each_iteration=True)

    return dataset


train_data = "./data-tfrecord/bert-train"
train_data_filenames = glob("{}/*.tfrecord".format(train_data))
print("train_data_filenames {}".format(train_data_filenames))

train_dataset = file_based_input_dataset_builder(
    channel="train", input_filenames=train_data_filenames, pipe_mode=False, is_training=True, drop_remainder=False
).map(select_data_and_label_from_record)

validation_data = "./data-tfrecord/bert-validation"
validation_data_filenames = glob("{}/*.tfrecord".format(validation_data))
print("validation_data_filenames {}".format(validation_data_filenames))

validation_dataset = file_based_input_dataset_builder(
    channel="validation",
    input_filenames=validation_data_filenames,
    pipe_mode=False,
    is_training=False,
    drop_remainder=False,
).map(select_data_and_label_from_record)

test_data = "./data-tfrecord/bert-test"
test_data_filenames = glob("{}/*.tfrecord".format(test_data))
print(test_data_filenames)

test_dataset = file_based_input_dataset_builder(
    channel="test", input_filenames=test_data_filenames, pipe_mode=False, is_training=False, drop_remainder=False
).map(select_data_and_label_from_record)
```

# Specify Manual Hyper-Parameters

```
epochs = 1
steps_per_epoch = 10
validation_steps = 10
test_steps = 10

freeze_bert_layer = True
learning_rate = 3e-5
epsilon = 1e-08
```

# Load Pretrained BERT Model

https://huggingface.co/transformers/pretrained_models.html

```
CLASSES = [1, 2, 3, 4, 5]

config = DistilBertConfig.from_pretrained(
    "distilbert-base-uncased",
    num_labels=len(CLASSES),
    id2label={0: 1, 1: 2, 2: 3, 3: 4, 4: 5},
    label2id={1: 0, 2: 1, 3: 2, 4: 3, 5: 4},
)
print(config)

transformer_model = TFDistilBertForSequenceClassification.from_pretrained("distilbert-base-uncased", config=config)

input_ids = tf.keras.layers.Input(shape=(max_seq_length,), name="input_ids", dtype="int32")
input_mask = tf.keras.layers.Input(shape=(max_seq_length,), name="input_mask", dtype="int32")

embedding_layer = transformer_model.distilbert(input_ids, attention_mask=input_mask)[0]
X = tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(50, return_sequences=True, dropout=0.1, recurrent_dropout=0.1))(
    embedding_layer
)
X = tf.keras.layers.GlobalMaxPool1D()(X)
X = tf.keras.layers.Dense(50, activation="relu")(X)
X = tf.keras.layers.Dropout(0.2)(X)
X = tf.keras.layers.Dense(len(CLASSES), activation="softmax")(X)

model = tf.keras.Model(inputs=[input_ids, input_mask], outputs=X)

for layer in model.layers[:3]:
    layer.trainable = not freeze_bert_layer
```

# Setup the Custom Classifier Model Here

```
# The final Dense layer applies a softmax, so the model outputs probabilities,
# not logits; from_logits must therefore be False here.
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False)
metric = tf.keras.metrics.SparseCategoricalAccuracy("accuracy")
optimizer = tf.keras.optimizers.Adam(learning_rate=learning_rate, epsilon=epsilon)
model.compile(optimizer=optimizer, loss=loss, metrics=[metric]) model.summary() callbacks = [] log_dir = "./tmp/tensorboard/" tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=log_dir) callbacks.append(tensorboard_callback) history = model.fit( train_dataset, shuffle=True, epochs=epochs, steps_per_epoch=steps_per_epoch, validation_data=validation_dataset, validation_steps=validation_steps, callbacks=callbacks, ) print("Trained model {}".format(model)) ``` # Evaluate on Holdout Test Dataset ``` test_history = model.evaluate(test_dataset, steps=test_steps, callbacks=callbacks) print(test_history) ``` # Save the Model ``` tensorflow_model_dir = "./tmp/tensorflow/" !mkdir -p $tensorflow_model_dir model.save(tensorflow_model_dir, include_optimizer=False, overwrite=True) !ls -al $tensorflow_model_dir !saved_model_cli show --all --dir $tensorflow_model_dir # !saved_model_cli run --dir $tensorflow_model_dir --tag_set serve --signature_def serving_default \ # --input_exprs 'input_ids=np.zeros((1,64));input_mask=np.zeros((1,64))' ``` # Predict with Model ``` import pandas as pd import numpy as np from transformers import DistilBertTokenizer tokenizer = DistilBertTokenizer.from_pretrained("distilbert-base-uncased") sample_review_body = "This product is terrible." encode_plus_tokens = tokenizer.encode_plus( sample_review_body, padding='max_length', max_length=max_seq_length, truncation=True, return_tensors="tf" ) # The id from the pre-trained BERT vocabulary that represents the token. (Padding of 0 will be used if the # of tokens is less than `max_seq_length`) input_ids = encode_plus_tokens["input_ids"] # Specifies which tokens BERT should pay attention to (0 or 1). Padded `input_ids` will have 0 in each of these vector elements. 
input_mask = encode_plus_tokens["attention_mask"] outputs = model.predict(x=(input_ids, input_mask)) prediction = [{"label": config.id2label[item.argmax()], "score": item.max().item()} for item in outputs] print("") print('Predicted star_rating "{}" for review_body "{}"'.format(prediction[0]["label"], sample_review_body)) ``` # Release Resources ``` %%html <p><b>Shutting down your kernel for this notebook to release resources.</b></p> <button class="sm-command-button" data-commandlinker-command="kernelmenu:shutdown" style="display:none;">Shutdown Kernel</button> <script> try { els = document.getElementsByClassName("sm-command-button"); els[0].click(); } catch(err) { // NoOp } </script> %%javascript try { Jupyter.notebook.save_checkpoint(); Jupyter.notebook.session.delete(); } catch(err) { // NoOp } ```
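Circling back to the prediction cell: `encode_plus` returns the `input_ids` padded with 0 up to `max_seq_length` and an `attention_mask` with 1 for real tokens and 0 for padding. The sketch below reproduces just that padding/masking logic in pure Python so the two tensors are easy to inspect; the `encode_with_mask` helper and the toy vocabulary are illustrative assumptions, not the real WordPiece tokenizer.

```python
# A minimal, pure-Python sketch of what `tokenizer.encode_plus` produces:
# token ids padded with 0 up to `max_seq_length`, plus an attention mask
# with 1 for real tokens and 0 for padding. The toy vocabulary is an
# assumption for illustration only.
def encode_with_mask(text, vocab, max_seq_length=64):
    # Map each whitespace-separated token to an id; unknown words get the [UNK] id.
    ids = [vocab.get(tok, vocab["[UNK]"]) for tok in text.lower().split()]
    ids = ids[:max_seq_length]                 # truncation
    mask = [1] * len(ids)                      # attend only to real tokens
    pad = max_seq_length - len(ids)
    return ids + [0] * pad, mask + [0] * pad   # 0-padding for both vectors

toy_vocab = {"[UNK]": 100, "this": 2023, "product": 4031, "is": 2003, "terrible.": 6659}
input_ids, input_mask = encode_with_mask("This product is terrible.", toy_vocab, max_seq_length=8)
print(input_ids)   # [2023, 4031, 2003, 6659, 0, 0, 0, 0]
print(input_mask)  # [1, 1, 1, 1, 0, 0, 0, 0]
```

Padded positions carry a 0 in the mask, which is exactly why the model can ignore them during attention.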
``` import numpy as np import pandas as pd import matplotlib.pyplot as plt %matplotlib inline ``` ### Loading Data ``` data = pd.read_csv('Coffee.csv') ``` ### Finding out the any missing values across columns **We can see there are no missing values** ``` pd.isnull(data).sum() ``` ### Looking at some part of data ``` data.head() data['No_of_Packet'] =data['No_of_Packet'].astype('category').cat.rename_categories(['one packet', 'two packets','three or more packets']) ``` ### Note: We can see that all of our categorical features are numeric ### Let's make it as categorical and assign proper categories to features, #### I am mapping categories form the word file given to us ``` data['Price_per_Packet'] =data['Price_per_Packet'].astype('category').cat.rename_categories(['less than 6,50 DM', '6,50 DM to 8,50 DM','more than 8,50 DM']) data['Brand'] =data['Brand'].astype('category').cat.rename_categories(['Jacobs Krönung', 'Jacobs other', 'Aldi', 'Aldi other', 'Eduscho Gala', 'Eduscho other', 'Tchibo Feine Milde', 'Tchibo other', 'Andere Kaffeemarken']) data['Age'] =data['Age'].astype('category').cat.rename_categories(['less than 24 years', '25 to 39 years', '40 to 49 years', '50 to 59 years', '60 years or more']) data['SEC'] =data['SEC'].astype('category').cat.rename_categories(['upper class', 'upper middle class', 'middle class', 'lower middle class', 'lower class']) data['Income'] =data['Income'].astype('category').cat.rename_categories(['less than 1499 DM', '1500 to 2499 DM', '2500 to 3499 DM', '3500 DM or more']) data['Price_Conscious'] =data['Price_Conscious'].astype('category').cat.rename_categories(['not at all', 'a little', 'price-conscious', 'distinctly price-conscious']) data['Education'] =data['Education'].astype('category').cat.rename_categories(['nine-year elementary school', 'intermediate high school', 'high-school / university']) data['Loyalty'] =data['Loyalty'].astype('category').cat.rename_categories(['Loyal', 'Not Loyal']) 
data['IDNo'].value_counts().plot(kind='kde', color='red')
```

## Data points are right-skewed and have positive kurtosis

```
cat = data.dtypes[data.dtypes == "category"].index
data[cat].describe()
```

-------------------------------------------------
#### Q1. Which brands of coffee are more popular? Given a brand, are all variants equally preferred?

<font color='green'>**We can see from the graph and summary statistics below that *Andere Kaffeemarken* is the most popular brand. The brands are not all equally preferred.** </font>

-------------------------------------------------

```
brand_plot = data['Brand'].value_counts().plot.bar(grid=True, alpha=0.3, rot=80, title="Popularity of brand")
brand_plot.set_xlabel("Brands")
brand_plot.set_ylabel("Counts")
data['Brand'].value_counts()
```

-------------------------------------------------
#### Q2. What are the prices of different brands of coffee?

<font color='green'>**We can see from the graph and summary statistics below that:** </font>

* <font color='Blue'>*Aldi other* </font> is the cheapest brand.
* <font color='Blue'>*Eduscho other* </font> is the most expensive brand, because proportionally it dominates over <font color='Blue'>*Tchibo other* </font>
* <font color='Blue'>*Andere Kaffeemarken* </font> is neither cheap nor expensive, because most of its data points have a price per packet of **6,50 DM to 8,50 DM**

<font color='Red'>**Inference:**<font color='Gray'> *Andere Kaffeemarken* is a more economical brand, hence the popular one

-------------------------------------------------

```
df2 = data.groupby(['Brand', 'Price_per_Packet'])['Brand'].count().unstack('Price_per_Packet')
price_plot = df2.plot(kind='bar', stacked=False, grid=True, alpha=0.7, rot=80, title="Prices of different Brands")
price_plot.set_xlabel("Brands")
price_plot.set_ylabel("Counts")
pd.DataFrame(data.groupby(['Brand', 'Price_per_Packet'])['Brand'].count().unstack('Price_per_Packet'))
```

-------------------------------------------------
#### Q3.
How frequently does a household buy coffee? How many packets of coffee are bought at a time?

-------------------------------------------------

```
data['Days_between_Purchase'].describe()
```

#### You can see from the summary above that max is 741, min is 1, and mean = 15. Hence *Days_between_Purchase* has outliers; let's find and treat them

```
plt.figure(figsize=(10,4))
plt.subplot(121)
data['Days_between_Purchase'].plot(kind='box')
plt.title('Box plot for \n Days between purchase(Outliers)')
plt.subplot(122)
data['Days_between_Purchase'].plot(kind='kde')
plt.title('Frequency Distribution of \n days between purchase(Outliers)')

q75, q25 = np.percentile(data['Days_between_Purchase'], [75, 25])
iqr = q75 - q25
# Note: these names shadow the built-in min/max functions; they are kept
# because the cells below reference them.
min = q25 - (iqr * 1.5)
max = q75 + (iqr * 1.5)
print(min)
print(max)
```

#### We only have outliers above the upper fence, not below the lower one, so we can ignore the min value

```
print("Number of outliers:", data[data['Days_between_Purchase'] > max].shape[0])

plt.figure(figsize=(10,4))
plt.subplot(121)
data['Days_between_Purchase'][data.Days_between_Purchase < max].plot(kind='box')
plt.title('Box plot for \n Days between purchase(WithOut Outliers)')
plt.subplot(122)
data['Days_between_Purchase'][data.Days_between_Purchase < max].plot(kind='kde')
plt.title('Frequency Distribution of \n days between purchase(WithOut Outliers)')

data['Days_between_Purchase'][data.Days_between_Purchase < max].describe()

plt.figure(figsize=(10,5))
plt.subplot(121)
data[data.Days_between_Purchase > 35]['Income'].value_counts().plot(kind='bar', alpha=.7, color="Green", rot=100)
plt.ylabel('Counts')
plt.xlabel("Income")
plt.subplot(122)
data[data.Days_between_Purchase < 35]['Income'].value_counts().plot(kind='bar', alpha=.7, color="Green", rot=100)
plt.ylabel('Counts')
plt.xlabel("Income")
```

#### Q4.a How frequently does a household buy coffee?

The buying pattern varies a lot, but about 30% of households buy coffee within 5-7 days.
Another 25% of households buy within 9-13 days.

#### Q4.b How many packets of coffee are bought at a time?

Most households purchased one packet of coffee at a time.

```
plt.figure(figsize=(16,5))
plt.subplot(121)
pd.qcut(data['Days_between_Purchase'][data.Days_between_Purchase < max], 6).value_counts().plot(kind='bar', alpha=.7, color="Green", rot=360)
plt.ylabel('Counts')
plt.xlabel("Days Interval")
plt.title('Frequency of buying Coffee in Days')
plt.subplot(122)
data['No_of_Packet'].value_counts().plot(kind='bar', alpha=.6, color="Orange", rot=360)
plt.ylabel('Counts')
plt.xlabel("No Of Packets")
plt.title('Frequency of buying Coffee Packets')
```

-----------------------
### What are the factors that have an impact on a household’s coffee purchase pattern? Does brand preference depend on household size? Does purchase depend on a person’s income or education level?

#### Does brand preference depend on household size?

No: the popular, more economical brand is in highest demand across all household sizes.

```
#plt.figure(figsize=(50,21))
df2 = data.groupby(['Brand', 'Household_Sz']).size().unstack('Household_Sz').sort_index()
df2.plot(kind='bar', stacked=False, alpha=.9, colormap='RdYlBu_r', rot=100, title='Brand Preference on Household Size')
plt.xlabel('Brands')
plt.ylabel('Counts')
```

#### Does purchase depend on a person’s income or education level?

Not much.

```
plt.figure(figsize=(16,5))
plt.subplot(121)
data['Education'].value_counts().plot(kind='bar', alpha=0.5, color="Red", rot=300)  # no
plt.xlabel("Education")
plt.ylabel("Counts")
plt.title("Buying Frequency by Education")
plt.subplot(122)
data['Income'].value_counts().plot(kind='bar', alpha=.6, color='pink', rot=300)  # no
plt.xlabel("Income")
plt.ylabel("Counts")
plt.title("Buying Frequency by Income")
```

#### What are the factors that have an impact on a household’s coffee purchase pattern?
**Impact, in decreasing order:**

* Age
* Price_Conscious
* SEC
* Loyalty
* Price_per_Packet

```
plt.figure(figsize=(16,5))
plt.subplot(131)
data['SEC'].value_counts().plot(kind='bar', color='gray', alpha=.7, rot=300)  # yes
plt.xlabel("Socio Economic Level")
plt.ylabel("Counts")
plt.title("Buying Frequency by Socio Economic Level")
plt.subplot(132)
data['Price_Conscious'].value_counts().plot(kind='bar', color='Green', alpha=.7, rot=300)  # yes
plt.xlabel("Price Consciousness")
plt.ylabel("Counts")
plt.title("Buying Frequency by Price Consciousness")
plt.subplot(133)
data['Loyalty'].value_counts().plot(kind='bar', color='Lime', alpha=.7, rot=300)  # yes
plt.xlabel("Loyalty")
plt.ylabel("Counts")
plt.title("Buying Frequency by Loyalty")

plt.figure(figsize=(16,5))
plt.subplot(131)
data['Age'].value_counts().plot(kind='bar', color='Orange', alpha=.7, rot=300)  # yes
plt.xlabel("Age")
plt.ylabel("Counts")
plt.title("Buying Frequency by Age")
plt.subplot(132)
data['Price_per_Packet'].value_counts().plot(kind='bar', color='Purple', alpha=.7, rot=300)  # yes
plt.xlabel("Price Per Packet")
plt.ylabel("Counts")
plt.title("Buying Frequency by Price Per Packet")
plt.subplot(133)
```

### Are there any variables/columns you think will eventually be irrelevant for the study? Why?

* The household identification number (`IDNo`): it is just a unique number assigned to each household.

### Can you possibly identify some outliers?

* As we saw previously, `Days_between_Purchase` has outliers.

### For better analysis, do you think a few brand variants need to be clubbed together?

Yes; it will be useful for coffee makers for better decision making.

### Can you do some probability estimates? Say, what is the probability that a randomly chosen household belongs to the top two social levels? What is the probability that households belonging to the top two social levels buy more expensive coffee?
P(a randomly chosen household belongs to the top two social levels) ≈ 0.34; P(households in the top two social levels buy more expensive coffee) ≈ 0.89.

### You are also expected to identify a few (between 2 and 4) relevant business questions, aligning data against each question or pointing out if any relevant data is not available.

### How does loyalty play a part in determining coffee purchasing patterns? Can the group take a shot at analysing how loyalty is reflected in various socio-economic classes and/or educational backgrounds?

```
data.groupby(['Loyalty','SEC']).size().unstack('Loyalty').plot(kind='bar', stacked=False, alpha=.7, rot=300)
plt.xlabel("Socio Economic Level")
plt.ylabel("Counts")
plt.title("Buying Frequency by Socio Economic Level with Loyalty")

data.groupby(['Loyalty','Education']).count().unstack('Education')['No_of_Packet'].plot(kind='bar', stacked=False, alpha=.7, rot=360)
plt.legend(loc=1, fontsize='x-small', title='Education')
plt.ylabel("Counts")
plt.title("Buying Frequency by Education with Loyalty")

data.groupby(['Loyalty','Brand']).count().unstack('Brand')['No_of_Packet'].T.apply(lambda x: x / float(x.sum())).plot(kind='bar', stacked=False, alpha=.7)
plt.legend(bbox_to_anchor=(2,1))
plt.legend(loc=2, fontsize='x-small', title='Loyalty')
plt.ylabel("Counts")
plt.title("Buying Frequency by Brand with Loyalty")

data.groupby(['Loyalty','Education']).count().unstack('Loyalty')['No_of_Packet'].apply(lambda x: x / float(x.sum()))
data.dtypes
```

# Let's Model the Data

```
from sklearn.model_selection import train_test_split
from sklearn.cluster import KMeans

# Keep only the data points with more than 35 days between purchases
dataForModeling = data[data.Days_between_Purchase > 35]

features = ['No_of_Packet', 'Price_per_Packet', 'Days_between_Purchase',
            'Age', 'SEC', 'Income', 'Household_Sz',
            'Price_Conscious', 'Education', 'Loyalty', 'Brand']

estimators = [('k_means_8', KMeans(n_clusters=8)),
              ('k_means_3', KMeans(n_clusters=3)),
              ('k_means_bad_init', KMeans(n_clusters=3, n_init=1, init='random'))]
```
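The `estimators` list configures three `KMeans` variants but never fits them. As a reference for what `fit` will do under the hood, here is a minimal pure-Python sketch of Lloyd's algorithm on 1-D toy data; the `kmeans_1d` helper, the fixed initial centers, and the data are illustrative assumptions (no `n_init` restarts, no vectorization).

```python
# Lloyd's algorithm in miniature: alternate between assigning each point to
# its nearest center and moving each center to the mean of its cluster.
def kmeans_1d(points, centers, n_iter=10):
    for _ in range(n_iter):
        # Assignment step: attach each point to its nearest center.
        clusters = [[] for _ in centers]
        for p in points:
            nearest = min(range(len(centers)), key=lambda i: abs(p - centers[i]))
            clusters[nearest].append(p)
        # Update step: move each center to the mean of its cluster
        # (an empty cluster keeps its old center).
        centers = [sum(c) / len(c) if c else centers[i] for i, c in enumerate(clusters)]
    return centers, clusters

centers, clusters = kmeans_1d([1, 2, 3, 10, 11, 12], centers=[0.0, 5.0])
print(centers)   # [2.0, 11.0]
print(clusters)  # [[1, 2, 3], [10, 11, 12]]
```

The `k_means_bad_init` entry above (one random restart) exists precisely because this procedure only finds a local optimum: a poor initial `centers` can converge to a worse clustering.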
``` import pandas as pd import numpy as np from tqdm import tqdm import seaborn as sns import matplotlib.pyplot as plt import re from nltk.corpus import stopwords stop = list(set(stopwords.words('english'))) f_train = open('../data/train_14k_split_conll.txt','r',encoding='utf8') line_train = f_train.readlines() f_val = open('../data/dev_3k_split_conll.txt','r',encoding='utf8') line_val = f_val.readlines() f_test = open('../data/Hindi_test_unalbelled_conll_updated.txt','r',encoding='utf8') line_test = f_test.readlines() def get_data(lines): doc_count = 0 i = 0 #global train_word, train_word_type, train_doc_sentiment, train_uid, train_doc_id train_word = [] train_word_type = [] train_doc_sentiment = [] train_uid = [] train_doc_id = [] while i < len(lines): ''' if i < len(lines) and lines[i].split('\t')[0] in ['http','https']: i += 1 while i < len(lines) and lines[i].split('\t')[0] != '/': i += 1 i += 1 ''' if i < len(lines) and lines[i].split('\t')[0] in ['@']: i += 2 if i < len(lines): line = lines[i] line = line.replace('\n','').strip().lower() if line.split('\t')[0] == 'meta': doc_count += 1 uid = int(line.split('\t')[1]) sentiment = line.split('\t')[2] #if doc_count == 1575: # print (uid) i += 1 elif len(line.split('\t')) >= 2: if line.split('\t')[0] not in stop: train_uid.append(uid) train_doc_sentiment.append(sentiment) train_word.append(line.split('\t')[0]) train_word_type.append(line.split('\t')[1]) train_doc_id.append(doc_count) i += 1 else: i += 1 train_df = pd.DataFrame() train_df['doc_id'] = train_doc_id train_df['word'] = train_word train_df['word_type'] = train_word_type train_df['uid'] = train_uid train_df['sentiment'] = train_doc_sentiment return train_df def get_data_test(lines): doc_count = 0 i = 0 #global train_word, train_word_type, train_doc_sentiment, train_uid, train_doc_id train_word = [] train_word_type = [] train_uid = [] train_doc_id = [] while i < len(lines): ''' if i < len(lines) and lines[i].split('\t')[0] in ['http','https']: i += 1 
while i < len(lines) and lines[i].split('\t')[0] != '/': i += 1 i += 1 ''' if i < len(lines) and lines[i].split('\t')[0] in ['@']: i += 2 if i < len(lines): line = lines[i] line = line.replace('\n','').strip().lower() if line.split('\t')[0] == 'meta': doc_count += 1 uid = int(line.split('\t')[1]) #if doc_count == 1575: # print (uid) i += 1 elif len(line.split('\t')) >= 2: if line.split('\t')[0] not in stop: train_uid.append(uid) train_word.append(line.split('\t')[0]) train_word_type.append(line.split('\t')[1]) train_doc_id.append(doc_count) i += 1 else: i += 1 train_df = pd.DataFrame() train_df['doc_id'] = train_doc_id train_df['word'] = train_word train_df['word_type'] = train_word_type train_df['uid'] = train_uid return train_df train_df = get_data(line_train) val_df = get_data(line_val) test_df = get_data_test(line_test) train_df.shape, val_df.shape, test_df.shape train_df.uid.nunique(), val_df.uid.nunique(), test_df.uid.nunique() train_df = train_df[train_df.word != 'http'] train_df = train_df[train_df.word != 'https'] train_df = train_df[train_df.word_type != 'o'] print (train_df.shape) val_df = val_df[val_df.word != 'http'] val_df = val_df[val_df.word != 'https'] val_df = val_df[val_df.word_type != 'o'] print (val_df.shape) test_df = test_df[test_df.word != 'http'] test_df = test_df[test_df.word != 'https'] test_df = test_df[test_df.word_type != 'o'] print (test_df.shape) train_df.word = train_df.word.apply(lambda x: re.sub("[^a-zA-Z0-9]", "",x)) train_df = train_df[train_df.word.str.len() >= 3] print (train_df.shape) val_df.word = val_df.word.apply(lambda x: re.sub("[^a-zA-Z0-9]", "",x)) val_df = val_df[val_df.word.str.len() >= 3] print (val_df.shape) test_df.word = test_df.word.apply(lambda x: re.sub("[^a-zA-Z0-9]", "",x)) test_df = test_df[test_df.word.str.len() >= 3] print (test_df.shape) all_words = set(pd.concat([train_df[['word']],val_df[['word']],test_df[['word']]], axis=0).word) print ("Total number of words {}".format(len(all_words))) all_words = 
pd.concat([train_df[['word']],val_df[['word']],test_df[['word']]], axis=0) all_words = all_words.word.value_counts().reset_index() all_words.columns = ['word','tot_count'] top_words = all_words[all_words.tot_count >= 2] print (top_words.shape) print (top_words.head(5)) all_bert_words = pd.DataFrame(['[PAD]','[UNK]','[CLS]','[SEP]'],columns=['word']) all_bert_words = pd.concat([all_bert_words,top_words[['word']].drop_duplicates().reset_index(drop=True)],axis=0) all_bert_words.to_csv("../bert_vocab.txt",index=False,header=False) train_texts = train_df.groupby(['uid'],sort=True)['word'].apply(lambda x: " ".join(x)).reset_index() train_texts = pd.merge(train_texts,train_df[['uid','sentiment']],how='left').drop_duplicates().reset_index(drop=True) train_texts.columns = ['uid','text','sentiment'] #train_texts.text = train_texts.text.apply(lambda x: '[CLS] ' + x + ' [SEP]') val_texts = val_df.groupby(['uid'],sort=True)['word'].apply(lambda x: " ".join(x)).reset_index().reset_index(drop=True) val_texts = pd.merge(val_texts,val_df[['uid','sentiment']],how='left').drop_duplicates() val_texts.columns = ['uid','text','sentiment'] #val_texts.text = val_texts.text.apply(lambda x: '[CLS] ' + x + ' [SEP]') test_texts = test_df.groupby(['uid'],sort=True)['word'].apply(lambda x: " ".join(x)).reset_index().reset_index(drop=True) test_texts.columns = ['uid','text'] #test_texts.text = test_texts.text.apply(lambda x: '[CLS] ' + x + ' [SEP]') train_texts.head(5) test_texts.head(5) all_texts = pd.concat([train_texts[['uid','text','sentiment']],val_texts[['uid','text','sentiment']]],axis=0) all_texts.sentiment.value_counts() from transformers import BertTokenizer, BertConfig, BertModel import torch import math import torch.nn as nn import torch.nn.functional as F from keras.preprocessing.sequence import pad_sequences from keras.utils import to_categorical tokenizer = BertTokenizer.from_pretrained('../bert_vocab.txt') max_len = 20 n_output = all_texts.sentiment.nunique() #number of possible 
outputs senti_dict = {'negative':0,'neutral':1,'positive':2} in_senti_dict = {0:'negative',1:'neutral',2:'positive'} all_texts.sentiment = all_texts.sentiment.apply(lambda x: senti_dict[x]) train_tokens = [] for text in tqdm(all_texts.text.values.tolist()): train_tokens += [tokenizer.encode(text,add_special_tokens=False)] test_tokens = [] for text in tqdm(test_texts.text.values.tolist()): test_tokens += [tokenizer.encode(text,add_special_tokens=False)] train_tokens_padded = [] train_attention_mask = [] train_seg_ids = [] for tokens in tqdm(train_tokens): tokens = tokens[:max_len] token_len = len(tokens) one_mask = [1]*token_len zero_mask = [0]*(max_len-token_len) padded_input = tokens + zero_mask attention_mask = one_mask + zero_mask segments = [] first_sep = True current_segment_id = 0 for token in tokens: segments.append(current_segment_id) if token == 3: current_segment_id = 1 segments = segments + [0] * (max_len - len(tokens)) train_tokens_padded += [padded_input] train_attention_mask += [attention_mask] train_seg_ids += [segments] test_tokens_padded = [] test_attention_mask = [] test_seg_ids = [] for tokens in tqdm(test_tokens): tokens = tokens[:max_len] token_len = len(tokens) one_mask = [1]*token_len zero_mask = [0]*(max_len-token_len) padded_input = tokens + zero_mask attention_mask = one_mask + zero_mask segments = [] first_sep = True current_segment_id = 0 for token in tokens: segments.append(current_segment_id) if token == 102: current_segment_id = 1 segments = segments + [0] * (max_len - len(tokens)) test_tokens_padded += [padded_input] test_attention_mask += [attention_mask] test_seg_ids += [segments] print (train_tokens_padded[0], train_attention_mask[0], train_seg_ids[0]) train_output = torch.LongTensor(to_categorical(all_texts.sentiment)) train_tokens_padded = torch.LongTensor(np.asarray(train_tokens_padded)) train_attention_mask = torch.LongTensor(np.asarray(train_attention_mask)) train_seg_ids = torch.LongTensor(np.asarray(train_seg_ids)) 
test_tokens_padded = torch.LongTensor(np.asarray(test_tokens_padded)) test_attention_mask = torch.LongTensor(np.asarray(test_attention_mask)) test_seg_ids = torch.LongTensor(np.asarray(test_seg_ids)) print (train_tokens_padded.shape, train_attention_mask.shape, train_seg_ids.shape, test_tokens_padded.shape, test_attention_mask.shape, test_seg_ids.shape) dev_tokens_padded = train_tokens_padded[train_texts.shape[0]:] train_tokens_padded = train_tokens_padded[:train_texts.shape[0]] dev_attention_mask = train_attention_mask[train_texts.shape[0]:] train_attention_mask = train_attention_mask[:train_texts.shape[0]] dev_seg_ids = train_seg_ids[train_texts.shape[0]:] train_seg_ids = train_seg_ids[:train_texts.shape[0]] dev_output = train_output[train_texts.shape[0]:] train_output = train_output[:train_texts.shape[0]] from torch.utils.data import DataLoader, TensorDataset from sklearn.metrics import f1_score, accuracy_score batch_size = 128 train_data = TensorDataset(train_tokens_padded, train_output) val_data = TensorDataset(dev_tokens_padded, dev_output) #dataloader train_loader = DataLoader(train_data, batch_size=batch_size, shuffle=True) val_loader = DataLoader(val_data, batch_size=batch_size, shuffle=True) class TransformerModel(nn.Module): def __init__(self, ntoken, ninp, nhead, nhid, nlayers, nout, dropout=0.5): super(TransformerModel, self).__init__() from torch.nn import TransformerEncoder, TransformerEncoderLayer self.model_type = 'Transformer' self.src_mask = None self.pos_encoder = PositionalEncoding(ninp, dropout) encoder_layers = TransformerEncoderLayer(ninp, nhead, nhid, dropout) self.transformer_encoder = TransformerEncoder(encoder_layers, nlayers) self.encoder = nn.Embedding(ntoken, ninp) self.ninp = ninp self.decoder = nn.Linear(ninp, nout) self.init_weights() def _generate_square_subsequent_mask(self, sz): mask = (torch.triu(torch.ones(sz, sz)) == 1).transpose(0, 1) mask = mask.float().masked_fill(mask == 0, float('-inf')).masked_fill(mask == 1, 
float(0.0)) return mask def init_weights(self): initrange = 0.1 self.encoder.weight.data.uniform_(-initrange, initrange) self.decoder.bias.data.zero_() self.decoder.weight.data.uniform_(-initrange, initrange) def forward(self, src): if self.src_mask is None or self.src_mask.size(0) != len(src): device = src.device mask = self._generate_square_subsequent_mask(len(src)).to(device) self.src_mask = mask src = self.encoder(src) * math.sqrt(self.ninp) #print (src.shape) src = self.pos_encoder(src) output = self.transformer_encoder(src, self.src_mask) output = self.decoder(torch.mean(output,1)) return output class PositionalEncoding(nn.Module): def __init__(self, d_model, dropout=0.1, max_len=max_len): super(PositionalEncoding, self).__init__() self.dropout = nn.Dropout(p=dropout) pe = torch.zeros(max_len, d_model) position = torch.arange(0, max_len, dtype=torch.float).unsqueeze(1) div_term = torch.exp(torch.arange(0, d_model, 2).float() * (-math.log(10000.0) / d_model)) pe[:, 0::2] = torch.sin(position * div_term) pe[:, 1::2] = torch.cos(position * div_term) pe = pe.unsqueeze(0) #.transpose(0, 1) self.register_buffer('pe', pe) def forward(self, x): #print (self.pe.shape) x = x + self.pe return self.dropout(x) device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') ntokens = tokenizer.vocab_size # the size of vocabulary emsize = 200 # embedding dimension nhid = 200 # the dimension of the feedforward network model in nn.TransformerEncoder nlayers = 4 # the number of nn.TransformerEncoderLayer in nn.TransformerEncoder nhead = 8 # the number of heads in the multiheadattention models dropout = 0.2 # the dropout value nout = all_texts.sentiment.nunique() model = TransformerModel(ntokens, emsize, nhead, nhid, nlayers, nout, dropout) epochs = 20 model print ("Total number of parameters to learn {}".format(sum(p.numel() for p in model.parameters() if p.requires_grad))) optimizer = torch.optim.Adam(model.parameters(),lr=.001) criterion = torch.nn.CrossEntropyLoss() 
model = model.to(device)
criterion = criterion.to(device)

def binary_accuracy(preds, y):
    """
    Returns accuracy per batch, i.e. if you get 8/10 right, this returns 0.8, NOT 8
    """
    # Take the argmax of the predicted probabilities and the one-hot labels
    if device == 'cpu':
        rounded_preds = preds.detach().numpy().argmax(1)
        rounded_correct = y.detach().numpy().argmax(1)
    else:
        rounded_preds = preds.detach().cpu().numpy().argmax(1)
        rounded_correct = y.detach().cpu().numpy().argmax(1)
    return accuracy_score(rounded_correct, rounded_preds)

def f1_torch(preds, y):
    """
    Returns the macro F1 score per batch
    """
    if device == 'cpu':
        rounded_preds = preds.detach().numpy().argmax(1)
        rounded_correct = y.detach().numpy().argmax(1)
    else:
        rounded_preds = preds.detach().cpu().numpy().argmax(1)
        rounded_correct = y.detach().cpu().numpy().argmax(1)
    return f1_score(rounded_correct, rounded_preds, average='macro')

def train(model, train_loader, optimizer, criterion):
    global predictions, labels, loss
    epoch_loss = 0
    epoch_acc = 0
    f1_scores = 0
    model.train()
    counter = 0
    for tokens, labels in tqdm(train_loader):
        counter += 1
        optimizer.zero_grad()
        logits = model(tokens)
        # CrossEntropyLoss applies log-softmax internally, so it must be fed
        # the raw logits; softmax is applied only for the metrics below.
        loss = criterion(logits, torch.max(labels, 1)[1])
        predictions = torch.softmax(logits, dim=-1)
        acc = binary_accuracy(predictions, labels)
        f1_score_batch = f1_torch(predictions, labels)
        loss.backward()
        optimizer.step()
        epoch_loss += loss.item()
        epoch_acc += acc
        f1_scores += f1_score_batch
    return epoch_loss / counter, epoch_acc / counter, f1_scores / counter

def evaluate(model, val_loader, criterion):
    epoch_loss = 0
    epoch_acc = 0
    f1_scores = 0
    model.eval()
    counter = 0
    with torch.no_grad():
        for tokens, labels in tqdm(val_loader):
            counter += 1
            logits = model(tokens)
            # As in train(): raw logits go to the loss, probabilities to the metrics.
            loss = criterion(logits, torch.max(labels, 1)[1])
            predictions = torch.softmax(logits, dim=-1)
acc = binary_accuracy(predictions, labels) f1_score_batch = f1_torch(predictions, labels) epoch_loss += loss.item() epoch_acc += acc f1_scores += f1_score_batch return epoch_loss / counter, epoch_acc / counter, f1_scores/counter import time def epoch_time(start_time, end_time): elapsed_time = end_time - start_time elapsed_mins = int(elapsed_time / 60) elapsed_secs = int(elapsed_time - (elapsed_mins * 60)) return elapsed_mins, elapsed_secs best_valid_loss = 999 for epoch in range(epochs): start_time = time.time() train_loss, train_acc, train_f1 = train(model, train_loader, optimizer, criterion) valid_loss, valid_acc, valid_f1 = evaluate(model, val_loader, criterion) end_time = time.time() epoch_mins, epoch_secs = epoch_time(start_time, end_time) if valid_loss < best_valid_loss: best_valid_loss = valid_loss torch.save(model.state_dict(), '../models/model_transformer.pt') print(f'Epoch: {epoch+1:02} | Epoch Time: {epoch_mins}m {epoch_secs}s') print(f'\tTrain Loss: {train_loss:.3f} | Train Acc: {train_acc*100:.2f}% | Train F1: {train_f1*100:.2f}%') print(f'\t Val. Loss: {valid_loss:.3f} | Val. Acc: {valid_acc*100:.2f}% | Val F1: {valid_f1*100:.2f}%') ```
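The training loop above combines per-epoch wall-clock timing with best-model checkpointing on validation loss. Stripped of the PyTorch specifics, the bookkeeping reduces to the sketch below; the loss values are made-up numbers for illustration, and the `saved` list stands in for `torch.save(...)`.

```python
import time

def epoch_time(start_time, end_time):
    # Split an elapsed duration in seconds into whole minutes and seconds.
    elapsed_time = end_time - start_time
    elapsed_mins = int(elapsed_time / 60)
    elapsed_secs = int(elapsed_time - (elapsed_mins * 60))
    return elapsed_mins, elapsed_secs

best_valid_loss = float('inf')   # a safer starting sentinel than a magic 999
saved = []                       # stands in for torch.save(model.state_dict(), ...)

for epoch, valid_loss in enumerate([0.91, 0.74, 0.80, 0.69]):
    if valid_loss < best_valid_loss:
        best_valid_loss = valid_loss
        saved.append((epoch, valid_loss))   # checkpoint only on improvement

print(saved)               # [(0, 0.91), (1, 0.74), (3, 0.69)]
print(epoch_time(0, 125))  # (2, 5)
```

Checkpointing on the best validation loss, rather than on the last epoch, is what protects the saved model from late-epoch overfitting.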
``` import pyspark import pandas as pd import csv from pyspark.ml.feature import ChiSqSelector from pyspark.ml.linalg import Vectors from pyspark import SparkContext ,SparkConf from pyspark.ml import Pipeline from pyspark.ml.feature import OneHotEncoder, StringIndexer from pyspark import SQLContext as sql from pyspark.ml.feature import StringIndexer,HashingTF, IDF, Tokenizer import matplotlib.pyplot as plt conf=SparkConf() context=SparkContext(conf=conf) sql_context=sql(context) from pyspark.ml.linalg import Vectors from pyspark.ml.feature import OneHotEncoderEstimator, StringIndexer, VectorAssembler from pyspark.ml.feature import VectorAssembler,Normalizer from pyspark.ml.classification import LinearSVC col_names = ["duration","protocol_type","service","flag","src_bytes", "dst_bytes","land","wrong_fragment","urgent","hot","num_failed_logins", "logged_in","num_compromised","root_shell","su_attempted","num_root", "num_file_creations","num_shells","num_access_files","num_outbound_cmds", "is_host_login","is_guest_login","count","srv_count","serror_rate", "srv_serror_rate","rerror_rate","srv_rerror_rate","same_srv_rate", "diff_srv_rate","srv_diff_host_rate","dst_host_count","dst_host_srv_count", "dst_host_same_srv_rate","dst_host_diff_srv_rate","dst_host_same_src_port_rate", "dst_host_srv_diff_host_rate","dst_host_serror_rate","dst_host_srv_serror_rate", "dst_host_rerror_rate","dst_host_srv_rerror_rate","label"] pdd=pd.read_csv("/home/salman/Documents/KDDTrain.csv",names=col_names) pdd df=sql_context.createDataFrame(pdd) from pyspark.mllib.regression import LabeledPoint from pyspark.ml.classification import OneVsRest from pyspark.ml.classification import DecisionTreeClassifier from pyspark.ml.feature import StringIndexer, VectorIndexer for col_name in pdd.columns: if pdd[col_name].dtypes == 'object' : unique_cat = len(pdd[col_name].unique()) print("Feature '{col_name}' has {unique_cat} categories".format(col_name=col_name, unique_cat=unique_cat)) 
categorical_columns = ['protocol_type', 'service', 'duration', 'dst_host_srv_rerror_rate', 'label']
df_categorical_values = df[categorical_columns]

# Label-encode one column with StringIndexer
stringIndexer = StringIndexer(inputCol="dst_host_srv_rerror_rate", outputCol="labeled")
model_string = stringIndexer.fit(df_categorical_values)
indexed = model_string.transform(df_categorical_values)
indexed.show(3000)

# One-hot encode the indexed column
encoder = OneHotEncoder(inputCol="labeled", outputCol="features")
encoded = encoder.transform(indexed)

# Note: this assembler is defined but never used below. The original referenced
# a "labelVec" column that does not exist; inputCols must name existing columns.
VA = VectorAssembler(inputCols=["labeled", "features"], outputCol="assembled",
                     handleInvalid="skip")

selector = ChiSqSelector(numTopFeatures=4, featuresCol="features",
                         outputCol="selectedFeatures", labelCol="labeled")
print("ChiSqSelector output with top %d features selected" % selector.getNumTopFeatures())

Normalized = Normalizer(inputCol="features", outputCol="features_norm", p=1.0)

result = selector.fit(encoded).transform(encoded)
result.show()

labelIndexer = StringIndexer(inputCol="labeled", outputCol="indexedLabel").fit(result)
featureIndexer = VectorIndexer(inputCol="features", outputCol="indexedFeatures",
                               maxCategories=4).fit(result)
dt = DecisionTreeClassifier(labelCol="indexedLabel", featuresCol="indexedFeatures")
pipeline = Pipeline(stages=[labelIndexer, featureIndexer, dt])

(trainingData, testData) = result.randomSplit([0.7, 0.3])
model = pipeline.fit(trainingData)
predictions = model.transform(testData)
p = predictions.toPandas()
p.head(3000)

# Class-index groups. Note: 5.0 appears in both the DOS and Probe groups,
# so the DOS branch below wins for that index.
normal = [0.0]
attackdos = [1.0, 5.0, 7.0, 9.0, 10.0]
attackprobe = [5.0, 4.0, 3.0, 2.0, 6.0]
attackr2l = [8.0, 11.0]
attacku2r = [12.0, 13.0]

def replace():
    """Replace numeric prediction indices with class names, in place."""
    for x in p['prediction']:
        if x == 0.0:
            p['prediction'].replace(to_replace=normal, value='normal', inplace=True)
        elif x in attackdos:
            p['prediction'].replace(to_replace=attackdos, value=' ATTACK DOS', inplace=True)
        elif x in attackprobe:
            p['prediction'].replace(to_replace=attackprobe, value=' ATTACK PROBE', inplace=True)
        elif x in attackr2l:
            p['prediction'].replace(to_replace=attackr2l, value=' ATTACK R2L', inplace=True)
        elif x in attacku2r:
            p['prediction'].replace(to_replace=attacku2r, value=' ATTACK U2L', inplace=True)

if __name__ == '__main__':
    replace()

p.head(1000)
p["prediction"].value_counts()

# Counts read off the value_counts() output above
normal_count = 7952
attackdos_count = 5825
attackprobe_count = 1217
total = normal_count + attackdos_count + attackprobe_count

%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('seaborn-whitegrid')
import numpy as np

fig = plt.figure()
ax = plt.axes()
# The original plotted np.sin of an undefined variable `attac`;
# plot each curve over its own range instead.
normal_x = np.linspace(0, 7952)
ax.plot(normal_x, np.sin(normal_x), label='Normal')
attackdos_x = np.linspace(0, 5825)
ax.plot(attackdos_x, np.sin(attackdos_x), label='Attack Dos')
attackprobe_x = np.linspace(0, 1217)
ax.plot(attackprobe_x, np.sin(attackprobe_x), label='Attack Probe')
plt.legend()

def counter():
    """Count normal vs. attack predictions. (The original returned inside the
    loop after the first element, which cut the count short.)"""
    a = 0
    b = 0
    for x in p['prediction']:
        if x == 'normal':
            a = a + 1
        else:
            b = b + 1
    return a, b

if __name__ == '__main__':
    counter()

counter()

from pyspark.ml.evaluation import MulticlassClassificationEvaluator
evaluator = MulticlassClassificationEvaluator(
    labelCol="indexedLabel", predictionCol="prediction", metricName="accuracy")
accuracy = evaluator.evaluate(predictions)
print("Test Error = %g " % (1.0 - accuracy))
print("accuracy= %g " % (accuracy))

%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('seaborn-whitegrid')
import numpy as np

fig = plt.figure()
ax = plt.axes()
acc = np.linspace(0, 0.961113)
ax.plot(acc, np.sin(acc), label='Accuracy')
err = np.linspace(0, 0.0388867)
ax.plot(err, np.sin(err), linestyle='--', label='Error')
plt.legend()

import seaborn as sns
```
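A more idiomatic way to map prediction indices to class names is a single vectorized pass with `pandas.Series.map`, instead of repeatedly calling `replace` inside a loop. A minimal sketch; the index-to-class sets below are illustrative, since the real indices depend on how `StringIndexer` ordered the labels in a given run:

```python
import pandas as pd

# Illustrative index-to-class sets (assumed, not taken from a real run)
normal = {0.0}
attack_dos = {1.0, 5.0, 7.0, 9.0, 10.0}
attack_probe = {2.0, 3.0, 4.0, 6.0}
attack_r2l = {8.0, 11.0}
attack_u2r = {12.0, 13.0}

def label_for(x):
    """Map one numeric prediction index to its class name."""
    if x in normal:
        return 'normal'
    if x in attack_dos:
        return 'ATTACK DOS'
    if x in attack_probe:
        return 'ATTACK PROBE'
    if x in attack_r2l:
        return 'ATTACK R2L'
    if x in attack_u2r:
        return 'ATTACK U2R'
    return 'unknown'

# Hypothetical prediction column standing in for predictions.toPandas()
p = pd.DataFrame({'prediction': [0.0, 1.0, 3.0, 8.0, 12.0]})
p['prediction'] = p['prediction'].map(label_for)
print(p['prediction'].tolist())
```

`map` touches each row exactly once, whereas a `replace` call inside a row loop re-scans the whole column for every row.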
# Exploratory Data Analysis ## Loading Imports and Data ``` !python -m spacy download en_core_web_lg !pip install squarify # Base import pandas as pd import numpy as np from collections import Counter # Plotting import squarify import matplotlib.pyplot as plt import seaborn as sns # NLP Libraries import spacy from spacy.tokenizer import Tokenizer from sklearn.neighbors import NearestNeighbors from sklearn.feature_extraction.text import TfidfVectorizer nlp = spacy.load('en_core_web_lg') df = pd.read_csv("cannabis.csv") print(df.shape) df df.info() ``` ## Data Cleaning/Wrangling ``` df[df['Effects']=='None'].value_counts() df['Description'].isnull().value_counts() df[df['Description'].isnull()] df = df.dropna() df.shape df[df['Effects']=='None'].value_counts() df = df[df.Effects != 'None'] df.shape ``` # Natural Language Processing ``` # Instantiate Tokenizer tokenizer = Tokenizer(nlp.vocab) # Create description tokens tokens = [] for doc in tokenizer.pipe(df['Description'], batch_size=500): doc_tokens = [token.text for token in doc] tokens.append(doc_tokens) df['Description_Tokens'] = tokens print(df['Description_Tokens'].head()) def count(docs): word_counts = Counter() appears_in = Counter() total_docs = len(docs) for doc in docs: word_counts.update(doc) appears_in.update(set(doc)) temp = zip(word_counts.keys(), word_counts.values()) wc = pd.DataFrame(temp, columns = ['word', 'count']) wc['rank'] = wc['count'].rank(method='first', ascending=False) total = wc['count'].sum() wc['pct_total'] = wc['count'].apply(lambda x: x / total) wc = wc.sort_values(by='rank') wc['cul_pct_total'] = wc['pct_total'].cumsum() t2 = zip(appears_in.keys(), appears_in.values()) ac = pd.DataFrame(t2, columns=['word', 'appears_in']) wc = ac.merge(wc, on='word') wc['appears_in_pct'] = wc['appears_in'].apply(lambda x: x / total_docs) return wc.sort_values(by='rank') wc = count(df['Description_Tokens']) wc.head() tokens = [] """ Update those tokens w/o stopwords""" for doc in 
nlp.pipe(df['Description'], batch_size=500):
    doc_tokens = []
    for token in doc:
        # Keep lower-cased lemmas, skipping stop words, punctuation, and spaces
        # (the original compared with `is not ' '`, an identity check; use !=)
        if (token.is_stop == False) & (token.is_punct == False) & (token.text != ' '):
            doc_tokens.append(token.lemma_.lower())
    tokens.append(doc_tokens)

df['Description_Tokens'] = tokens
df['Description_Tokens']

wc = count(df['Description_Tokens'])
wc_top20 = wc[wc['rank'] <= 20]

squarify.plot(sizes=wc_top20['pct_total'], label=wc_top20['word'], alpha=.8)
plt.axis('off')
plt.show()

wc_top20['word']

# Instantiate vectorizer
tfidf = TfidfVectorizer(stop_words='english',
                        #ngram_range=(1, 2),
                        max_features=5000)

# Create a vocabulary and tf-idf score per document
dtm = tfidf.fit_transform(df['Description'])

# Get feature names to use as dataframe column headers
dtm = pd.DataFrame(dtm.todense(), columns=tfidf.get_feature_names())

nn = NearestNeighbors(n_neighbors=4, algorithm='kd_tree')
nn.fit(dtm)

# View Feature Matrix as DataFrame
print(dtm.shape)
dtm.head()

ideal = ["""Indica,Relaxed,Pain,Stress,Ache,Sleep"""]

# Query ideal description
new = tfidf.transform(ideal)
new

nn.kneighbors(new.todense())

print(df['Strain'][727])
print(df['Flavor'][727])
print(df['Effects'][727])
print(df['Description'][727])

print(df['Strain'][617])
print(df['Flavor'][617])
print(df['Effects'][617])
print(df['Description'][617])

print(df['Strain'][2168])
print(df['Flavor'][2168])
print(df['Effects'][2168])
print(df['Description'][2168])

print(df['Strain'][1249])
print(df['Flavor'][1249])
print(df['Effects'][1249])
print(df['Description'][1249])

import pickle

Pkl_Filename = "nn_model.pkl"
with open(Pkl_Filename, 'wb') as file:
    pickle.dump(nn, file)
```
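The cell above pickles only the fitted `NearestNeighbors` model; to serve recommendations later, the fitted `TfidfVectorizer` would need to be persisted as well, since new queries must be transformed into the same feature space. A minimal sketch of the save/load round trip, on toy data rather than the strain matrix:

```python
import pickle
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Toy feature matrix standing in for the tf-idf document-term matrix
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [5.0, 5.0]])
nn = NearestNeighbors(n_neighbors=2, algorithm='kd_tree').fit(X)

# Save, then reload exactly as a downstream app would
with open('nn_demo.pkl', 'wb') as f:
    pickle.dump(nn, f)
with open('nn_demo.pkl', 'rb') as f:
    nn_loaded = pickle.load(f)

# The reloaded model answers queries just like the original
dist, idx = nn_loaded.kneighbors(np.array([[0.1, 0.1]]))
print(idx[0][0])  # row 0 of X is the nearest neighbor to the query
```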
# 2-22: Intro to scikit-learn

<img src="https://www.cityofberkeley.info/uploadedImages/Public_Works/Level_3_-_Transportation/DSC_0637.JPG" style="width: 500px; height: 275px;" />

---

**Regression** is useful for predicting a value that varies on a continuous scale from a set of features. This lab will introduce the regression methods available in the scikit-learn extension to scipy, focusing on ordinary least squares linear regression, Ridge regression, and LASSO.

*Estimated Time: 45 minutes*

---

### Table of Contents

1 - [The Test-Train-Validation Split](#section 1)<br>
2 - [Linear Regression](#section 2)<br>
3 - [Ridge Regression](#section 3)<br>
4 - [LASSO Regression](#section 4)<br>
5 - [Choosing a Model](#section 5)<br>

**Dependencies:**

```
import numpy as np
from datascience import *
import datetime as dt
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline

from sklearn.model_selection import train_test_split
from sklearn.linear_model import Ridge, Lasso, LinearRegression
from sklearn.model_selection import KFold
```

## The Data: Bike Sharing

In your time at Cal, you've probably passed by one of the many bike sharing stations around campus. Bike sharing systems have become more and more popular as traffic and concerns about global warming rise. This lab's data describes one such bike sharing system in Washington D.C., from [UC Irvine's Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/Bike+Sharing+Dataset).

```
bike = Table().read_table('data/Bike-Sharing-Dataset/day.csv')

# reformat the date column to integers representing the day of the year, 001-366
bike['dteday'] = pd.to_datetime(bike['dteday']).strftime('%j')

# get rid of the index column
bike = bike.drop(0)

bike.show(4)
```

Take a moment to get familiar with the data set. In data science, you'll often hear rows referred to as **records** and columns as **features**.
Before you continue, make sure you can answer the following: - How many records are in this data set? - What does each record represent? - What are the different features? - How is each feature represented? What values does it take, and what are the data types of each value? Use Table methods and check the UC Irvine link for more information. ``` # explore the data set here ``` --- ## 1. The Test-Train-Validation Split <a id='section 1'></a> When we train a model on a data set, we run the risk of [**over-fitting**](http://scikit-learn.org/stable/auto_examples/model_selection/plot_underfitting_overfitting.html). Over-fitting happens when a model becomes so complex that it makes very accurate predictions for the data it was trained on, but it can't generalize to make good predictions on new data. We can reduce the risk of overfitting by using a **test-train split**. 1. Randomly divide our data set into two smaller sets: one for training and one for testing 2. Train the data on the training set, changing our model along the way to increase accuracy 3. Test the data's predictions using the test set. Scikit-learn's [`test_train_split`](http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html) function will help here. First, separate your data into two parts: a Table containing the features used to make our prediction, and an array of the true values. To start, let's predict the *total number of riders* (y) using *every feature that isn't a rider count* (X). Note: for the function to work, X can't be a Table. Save X as a pandas DataFrame by calling `.to_df()` on the feature Table. ``` # the features used to predict riders X = bike.drop('casual', 'registered', 'cnt') X = X.to_df() # the number of riders y = bike['cnt'] ``` Next, set the random seed using `np.random.seed(...)`. This will affect the way numpy pseudo-randomly generates the numbers it uses to decide how to split the data into training and test sets. 
Any seed number is fine; the important thing is to document the number you used in case we need to recreate this pseudorandom split in the future.

Then, call `train_test_split` on your X and y. Also set the parameters `train_size=` and `test_size=` to set aside 80% of the data for training and 20% for testing.

```
# set the random seed
np.random.seed(10)

# split the data
# train_test_split returns 4 values: X_train, X_test, y_train, y_test
X_train, X_test, y_train, y_test = train_test_split(X, y,
                                                    train_size=0.80, test_size=0.20)
```

### The Validation Set

Our test data should only be used once: after our model has been selected, trained, and tweaked. Unfortunately, it's possible that in the process of tweaking our model, we could still overfit it to the training data and only find out when we return a poor test data score. What then?

A **validation set** can help here. By trying your trained models on a validation set, you can (hopefully) weed out models that don't generalize well.

Call `train_test_split` again, this time on your X_train and y_train. We want to set aside 25% of the data to go to our validation set, and keep the remaining 75% for our training set.

Note: This means that out of the original data, 20% is for testing, 20% is for validation, and 60% is for training.

```
# split the data
# Returns 4 values: X_train, X_validate, y_train, y_validate
X_train, X_validate, y_train, y_validate = train_test_split(X_train, y_train,
                                                            train_size=0.75, test_size=0.25)
```

## 2. Linear Regression (Ordinary Least Squares) <a id='section 2'></a>

Now, we're ready to start training models and making predictions. We'll start with a **linear regression** model. [Scikit-learn's linear regression](http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LinearRegression.html#sklearn.linear_model.LinearRegression.score) is built around scipy's ordinary least squares, which you used in the last lab.

The syntax for each scikit-learn model is very similar:

1. Create a model by calling its constructor function. For example, `LinearRegression()` makes a linear regression model.
2. Train the model on your training data by calling `.fit(train_X, train_y)` on the model.

Create a linear regression model in the cell below.

```
# create a model
lin_reg = LinearRegression(normalize=True)

# fit the model
lin_model = lin_reg.fit(X_train, y_train)
```

With the model fit, you can look at the best-fit slope for each feature using `.coef_`, and you can get the intercept of the regression line with `.intercept_`.

```
print(lin_model.coef_)
print(lin_model.intercept_)
```

Now, let's get a sense of how good our model is. We can do this by looking at the difference between the predicted values and the actual values, also called the error. We can see this graphically using a scatter plot.

- Call `.predict(X)` on your linear regression model, using your training X and training y, to return a list of predicted rider counts. Save it to a variable `lin_pred`.
- Using a scatter plot (`plt.scatter(...)`), plot the predicted values against the actual values (`y_train`)

```
# predict the number of riders
lin_pred = lin_model.predict(X_train)

# plot predicted vs. actual values on a scatter plot
plt.scatter(y_train, lin_pred)
plt.title('Linear Model (OLS)')
plt.xlabel('actual value')
plt.ylabel('predicted value')
plt.show()
```

Question: what should our scatter plot look like if our model was 100% accurate?

**ANSWER:** All points would fall on a line with a slope of one: the predicted value would always equal the actual value.

We can also get a sense of how well our model is doing by calculating the **root mean squared error**. The root mean squared error (RMSE) represents the average difference between the predicted and the actual values.
To get the RMSE:

- subtract each predicted value from its corresponding actual value (the errors)
- square each error (this prevents negative errors from cancelling positive errors)
- average the squared errors
- take the square root of the average (this gets the error back in the original units)

Write a function `rmse` that calculates the root mean squared error of a predicted set of values.

```
def rmse(pred, actual):
    return np.sqrt(np.mean((pred - actual) ** 2))
```

Now calculate the root mean squared error for your linear model.

```
rmse(lin_pred, y_train)
```

## 3. Ridge Regression <a id='section 3'></a>

Now that you've gone through the process for OLS linear regression, it's easy to do the same for [**Ridge Regression**](http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.Ridge.html). In this case, the constructor function that makes the model is `Ridge()`.

```
# make and fit a Ridge regression model
ridge_reg = Ridge()
ridge_model = ridge_reg.fit(X_train, y_train)

# use the model to make predictions
ridge_pred = ridge_model.predict(X_train)

# plot the predictions
plt.scatter(y_train, ridge_pred)
plt.title('Ridge Model')
plt.xlabel('actual values')
plt.ylabel('predicted values')
plt.show()

# calculate the rmse for the Ridge model
rmse(ridge_pred, y_train)
```

Note: the documentation for Ridge regression shows it has lots of **hyperparameters**: values we can choose when the model is made. Now that we've tried it using the defaults, look at the [documentation](http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.Ridge.html) and try changing some parameters to see if you can get a lower RMSE (`alpha` might be a good one to try).

## 4. LASSO Regression <a id='section 4'></a>

Finally, we'll try using [LASSO regression](http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.Lasso.html). The constructor function to make the model is `Lasso()`. You may get a warning message saying the objective did not converge.
The model will still work, but to get convergence try increasing the number of iterations (`max_iter=`) when you construct the model. ``` # create and fit the model lasso_reg = Lasso(max_iter=10000) lasso_model = lasso_reg.fit(X_train, y_train) # use the model to make predictions lasso_pred = lasso_model.predict(X_train) # plot the predictions plt.scatter(y_train, lasso_pred) plt.title('LASSO Model') plt.xlabel('actual values') plt.ylabel('predicted values') plt.show() # calculate the rmse for the LASSO model rmse(lasso_pred, y_train) ``` Note: LASSO regression also has many tweakable hyperparameters. See how changing them affects the accuracy! Question: How do these three models compare on performance? What sorts of things could we do to improve performance? **ANSWER:** All three models have very similar accuracy, around 900 RMSE for each. We could try changing which features we use or adjust the hyperparameters. --- ## 5. Choosing a model <a id='section 5'></a> ### Validation Once you've tweaked your models' hyperparameters to get the best possible accuracy on your training sets, we can compare your models on your validation set. Make predictions on `X_validate` with each one of your models, then calculate the RMSE for each set of predictions. ``` # make predictions for each model lin_vpred = lin_model.predict(X_validate) ridge_vpred = ridge_model.predict(X_validate) lasso_vpred = lasso_model.predict(X_validate) # calculate RMSE for each set of validation predictions print("linear model rmse: ", rmse(lin_vpred, y_validate)) print("Ridge rmse: ", rmse(ridge_vpred, y_validate)) print("LASSO rmse: ", rmse(lasso_vpred, y_validate)) ``` How do the RMSEs for the validation data compare to those for the training data? Why? Did the model that performed best on the training set also do best on the validation set? **YOUR ANSWER:** The RMSE for the validation set tends to be larger than for the training set, simply because the models were fit to the training data. 
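The hyperparameter tuning suggested earlier can be done systematically: fit one model per candidate `alpha`, score each on the validation set, and keep the best. A minimal sketch with synthetic stand-in data (the data and alpha grid are illustrative, not the bike data set):

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

def rmse(pred, actual):
    return np.sqrt(np.mean((pred - actual) ** 2))

# Synthetic regression data standing in for the bike table
rng = np.random.RandomState(10)
X = rng.rand(200, 5)
y = X @ np.array([3.0, -2.0, 0.5, 0.0, 1.0]) + rng.normal(0, 0.1, 200)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, train_size=0.75,
                                            test_size=0.25, random_state=10)

# Fit one Ridge model per candidate alpha; keep the lowest validation RMSE
best_alpha, best_rmse = None, np.inf
for alpha in [0.01, 0.1, 1.0, 10.0, 100.0]:
    score = rmse(Ridge(alpha=alpha).fit(X_tr, y_tr).predict(X_val), y_val)
    if score < best_rmse:
        best_alpha, best_rmse = alpha, score

print(best_alpha, round(best_rmse, 3))
```

Scoring on the validation set, not the training set, is what keeps this sweep from simply rewarding the most flexible model.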
### Predicting the Test Set Finally, select one final model to make predictions for your test set. This is often the model that performed best on the validation data. ``` # make predictions for the test set using one model of your choice final_pred = lin_model.predict(X_test) # calculate the rmse for the final predictions print('Test set rmse: ', rmse(final_pred, y_test)) ``` Coming up this semester: how to select your models, model parameters, and features to get the best performance. --- Notebook developed by: Keeley Takimoto Data Science Modules: http://data.berkeley.edu/education/modules
# Author: Faique Ali

## Task 01 : Prediction Using Supervised ML

<p> Using Linear Regression, predict the percentage score of a student based on their number of study hours. </p>

# Imports

```
import pandas as pd
from pandas import DataFrame
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn import metrics
%matplotlib inline
```

# Constants

```
DATA_SOURCE_LINK = 'http://bit.ly/w-data'
COL_HRS = 'Hours'
COL_SCORES = 'Scores'
SCATTER_PLT_TITLE = 'Scatter Plot showing Students Nr. of Hours Studied VS Scored Marks'
SCATTER_PLT_XLABEL = 'HOURS'
SCATTER_PLT_YLABEL = 'SCORES'
```

## Step 1: Gather the Data

```
data = pd.read_csv(DATA_SOURCE_LINK)

# Show first 5 rows
data.head()

# Checking for null values in the data set
data.info()
```

## Step 2: Visualizing the Data

#### Splitting Hours and Scores into two variables

```
X = DataFrame(data, columns=[COL_HRS])
y = DataFrame(data, columns=[COL_SCORES])
```

#### Creating a scatter plot

```
plt.figure(figsize=(9, 6))

# Specifying plot type
plt.scatter(X, y, s=120, alpha=0.7)

# Adding labels to the plot
plt.title(SCATTER_PLT_TITLE)
plt.xlabel(SCATTER_PLT_XLABEL)
plt.ylabel(SCATTER_PLT_YLABEL)
# plt.show()
```

## Step 3: Training the Model on Training Data

```
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0, test_size=0.2)

# Creating a model object
regression = LinearRegression()
regression.fit(X_train, y_train)
```

#### Slope coefficient (Theta_1)

```
regression.coef_
```

#### Intercept

The intercept here is a positive value. It indicates that a student with `0` hours of preparation would still score about `2` marks.

```
regression.intercept_
```

## Step 4: Testing outcomes on Test Data

```
# Making predictions on testing data
predicted_scores = regression.predict(X_test)
predicted_scores

# Convert numpy nd array to 1d array
predicted_scores = predicted_scores.flatten()

# Extracting Hrs column from X_test
nr_of_hrs = X_test['Hours'].to_numpy()

# Displaying our test results in a data frame
test_df = pd.DataFrame(data={'Nr.of Hrs': nr_of_hrs, 'Scored Marks': predicted_scores})
test_df
```

## Step 5: Plotting Regression Line

```
plt.figure(figsize=(9, 6))

# Specifying plot type
plt.scatter(X, y, s=120, alpha=0.85)

# To plot regression line
plt.plot(X, regression.predict(X), color='red', linewidth=2, alpha=0.9)

# Adding labels to the plot
plt.title(SCATTER_PLT_TITLE)
plt.xlabel(SCATTER_PLT_XLABEL)
plt.ylabel(SCATTER_PLT_YLABEL)
# plt.show()
```

## Step 6: Comparing & Evaluating our Model

```
# Extracting Scores column from y_test
actual_scores = y_test['Scores'].to_numpy()

# Displaying actual vs. predicted results in a data frame
act_vs_pred_df = pd.DataFrame(data={'Actual Scores': actual_scores,
                                    'Predicted Scores': predicted_scores})
act_vs_pred_df
```

### Estimating training and testing accuracy or $R^{2}$

```
print('Training Accuracy:', regression.score(X_train, y_train))
print('Testing Accuracy:', regression.score(X_test, y_test))
```

### Evaluating model

```
print('Mean Squared Error:', metrics.mean_squared_error(y_test, predicted_scores))
print('Mean Absolute Error:', metrics.mean_absolute_error(y_test, predicted_scores))
```

# `Making Predictions`

<b>What will be the predicted score if a student studies for 9.25 hrs/day?</b>

```
hrs = 9.25
pred = regression.predict([[hrs]])
print(f'Nr of hrs studied: {hrs}')
print(f'Obtained marks/score: {pred[0][0]}')
```
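For a single feature, the slope and intercept found by `LinearRegression` can be cross-checked against the closed-form least-squares formulas, slope = cov(x, y) / var(x) and intercept = mean(y) - slope * mean(x). A sketch on a tiny made-up hours/scores sample (not the downloaded data set):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Tiny synthetic hours/scores stand-in for the real data
hours = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
scores = np.array([12.0, 22.0, 31.0, 42.0, 52.0])

# Closed-form simple linear regression
slope = np.cov(hours, scores, ddof=0)[0, 1] / np.var(hours)
intercept = scores.mean() - slope * hours.mean()

# Cross-check against scikit-learn's fit
reg = LinearRegression().fit(hours.reshape(-1, 1), scores)
print(slope, reg.coef_[0])        # both 10.0 for this sample
print(intercept, reg.intercept_)  # both 1.8 for this sample
```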
# Analysis of SHEMAT-Suite models

The geological models created with gempy were exported as SHEMAT-Suite input files. SHEMAT-Suite (https://git.rwth-aachen.de/SHEMAT-Suite/SHEMAT-Suite-open) [1] is a code for solving coupled heat transport in porous media. It is written in Fortran and uses a finite-differences scheme on a hexahedral grid.

In this example, we will load a heat transport simulation from the base POC model we created in "Geological model creation and gravity simulation". We will demonstrate methods contained in OpenWF for loading the result file, displaying the parameters it contains, and visualizing these parameters. Finally, we will calculate the conductive heat flow and plot it.

```
# import some libraries
#from scipy.stats import hmean
#import pandas as pd
import numpy as np
#import glob
import h5py
import os, sys
sys.path.append('../../')

import OpenWF.postprocessing as pp

import matplotlib.pyplot as plt
#from mpl_toolkits.axes_grid1 import make_axes_locatable
%matplotlib inline

import seaborn as sns
sns.set_style('ticks')
sns.set_context('talk')
```

Simulation results from SHEMAT-Suite can be written in different file formats, such as VTK (Visualization Toolkit), HDF5 (Hierarchical Data Format), or PLT (TecPlot). In addition, optional outputs can be produced, such as ASCII files comparing simulated values to measured ones, a status log of the simulation, and other meta-files. Full coverage of the possible output files is provided in the SHEMAT-Suite wiki (https://git.rwth-aachen.de/SHEMAT-Suite/SHEMAT-Suite-open/-/wikis/home).

In this tutorial, we will work with HDF5 and VTK files. The majority of methods in OpenWF are tailored towards HDF5 files, which are smaller than their VTK relatives. However, there exists a powerful visualization library for Python which builds upon VTK, called pyvista. We will briefly showcase its capabilities at the end of this tutorial.
## Load HDF5 file

From the base POC model, we created a SHEMAT-Suite input file, which was then executed with the compiled SHEMAT-Suite code. As basic information: we look at conductive heat transport, i.e. no fluid flow, and heat transport is described by Fourier's law of heat conduction $q = - \lambda \nabla T$. At the base of the model, the heat flow boundary condition is set to 72 mW/m$^2$.

OpenWF has a built-in method for loading HDF5 files, though reading a file is a one-liner using the library ``h5py``.

```python
fid = h5py.File('../../models/SHEMAT-Suite_output/SHEMAT_PCT_base_model_final.h5')
```

The file can be loaded in different modes, among others 'r' for read, 'a' for append, 'w' for write. The ``read_hdf`` method in OpenWF also lets the user choose the mode in which to load the HDF5 file.

```
model_path = '../../models/SHEMAT-Suite_output/SHEMAT_PCT_base_model_temp_final.h5'
fid = pp.read_hdf_file(model_path, write=False)
```

To check the parameters stored in the HDF5 file, you can query the loaded h5py file for its keys, i.e. the "labels" of the data boxes stored in the HDF5 file.

```
fid.keys()
```

As some of these acronyms may mean little to new users, we implemented a method, specifically for SHEMAT-Suite generated HDF5 files, that presents information about the stored parameters:

```
pp.available_parameters(fid)
```

The postprocessing module in OpenWF has methods for quickly displaying the parameters of each model dimension in a 2D slice. For instance, we will look at a profile through the model parallel to the y direction, thus showing a cross-section of the model. It shows the interfaces between different geological units as lines, and the specified parameter as a colored contour field.

```
# plot slice temperature
fig = plt.figure(figsize=[15,7])
pp.plot_slice(model_path, parameter='temp', direction='y', cell_number=25, model_depth=6500.)
# plot slice fluid density
fig = plt.figure(figsize=[15,7])
pp.plot_slice(model_path, parameter='rhof', direction='y', cell_number=25, model_depth=6500)
```

## Heat flow estimation

SHEMAT-Suite does not provide the heat flow in HDF5 files; it does, however, store it in the VTK output. To also have the heat flow for HDF5 files, we provide a method for calculating it. In the future, it may be written directly into HDF5 files by SHEMAT-Suite. By default, the method calculates the conductive heat flow in all model dimensions; the user can specify if only one direction should be yielded.

```
qx, qy, qz = pp.calc_cond_hf(fid)

print(qz.shape)
```

This is a good point to talk about the dimensions and corresponding directions in the HDF5 file. We see above that qz has three dimensions: one with 60 entries, one with 50 entries, and one with 100 entries. This is also the cell discretization in the z, y, and x directions. That is, in an HDF5 file from SHEMAT-Suite, the dimensions are ordered [z,y,x], so here qz[z,y,x]. The three variables now contain the heat flow in the x, y, and z direction for each cell in the model.

Let's have a look at a horizontal slice through the model center and the heat flow in the z-direction. Remembering the notation for directions, and seeing that we have 60 cells in the z-direction, the horizontal slice would be ``qz[29,:,:]``, i.e. all entries in the x- and y-direction at index 29 of the z-direction. In the HDF5 file, we further count from the bottom, so ``qz[0,:,:]`` is the deepest slice and ``qz[-1,:,:]`` the shallowest.
```
# Get the model dimensions in x, y, z
x = fid['x'][0,0,:]
y = fid['y'][0,:,0]
z = fid['z'][:,0,0]

cell_number = 29
ui_cs = fid['uindex'][cell_number,:,:]

fig = plt.figure(figsize=[15,7])
cs = plt.contourf(x, y, qz[cell_number]*1000, 20, cmap='viridis')
plt.contour(x, y, ui_cs, colors='k')
plt.colorbar(cs, label='HF mW/m$^2$')
plt.xlabel('X [m]')
plt.ylabel('Y [m]');
```

Besides calculating the heat flow in each cell, we implemented a method to calculate it over a specified interval, e.g. over the depth interval of -4000 m to -3000 m, thus providing the average heat flow over this depth interval.

```
deeper = -4000
shallower = -3000
mid_depth = deeper - (deeper - shallower) / 2

qz_int = pp.calc_cond_hf_over_interval(fid, depth_interval=[deeper, shallower], model_depth=6500)

fig = plt.figure(figsize=[15,7])
cs = plt.contourf(x, y, qz_int*1000, 20, cmap='viridis')
plt.contour(x, y, ui_cs, colors='k')
plt.colorbar(cs, label='HF mW/m$^2$')
plt.title(f"Heat flow, at {mid_depth} m a.s.l. depth, over a {np.abs(deeper-shallower)} m depth interval")
plt.xlabel('X [m]')
plt.ylabel('Y [m]');
```

## VTK and pyvista

Pyvista ( https://docs.pyvista.org/ ) is a Python library for working with 3D meshes that provides an interface to VTK files.

```
import pyvista as pv

pv.set_plot_theme("document")
simulation = pv.read('../../models/SHEMAT-Suite_output/SHEMAT_PCT_base_model_temp_final.vtk')
```

This line loads the VTK file. For information about its content, we can simply call the variable:

```
simulation

simulation.plot();
```

The VTK file has a couple of scalar values stored (seen in the table of data arrays). We can switch the active scalars to temperature, for example, using:

```
simulation.set_active_scalars('temp')
simulation.plot();
```
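Fourier's law from the introduction, $q = -\lambda \nabla T$, is easy to verify on a toy grid. The sketch below is a plain NumPy finite-difference version of the vertical component; it mimics, but is not, the actual `calc_cond_hf` implementation, and the grid size, conductivity, and geotherm are made up:

```python
import numpy as np

nz, ny, nx = 6, 4, 5
dz = 100.0                 # vertical cell size in m (assumed)
lam = 2.5                  # thermal conductivity in W/(m K), assumed uniform
elev = np.arange(nz) * dz  # index 0 = bottom, matching the HDF5 layout

# Linear geotherm: 10 degC at the top, increasing by 30 K/km with depth,
# broadcast to the full 3D grid
temp = (10.0 + 0.03 * (elev[-1] - elev))[:, None, None] * np.ones((1, ny, nx))

# Fourier's law q_z = -lambda * dT/dz via finite differences along z (axis 0)
qz = -lam * np.gradient(temp, dz, axis=0)
print(qz[0, 0, 0] * 1000)  # 75.0 mW/m^2 for this linear geotherm
```

With a linear temperature profile, the finite-difference gradient is exact, so every cell carries the same 75 mW/m$^2$ upward heat flow.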
**Chapter 19 – Training and Deploying TensorFlow Models at Scale** _This notebook contains all the sample code in chapter 19._ <table align="left"> <td> <a target="_blank" href="https://colab.research.google.com/github/ageron/handson-ml2/blob/master/19_training_and_deploying_at_scale.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> </td> </table> # Setup First, let's import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures. We also check that Python 3.5 or later is installed (although Python 2.x may work, it is deprecated so we strongly recommend you use Python 3 instead), as well as Scikit-Learn ≥0.20 and TensorFlow ≥2.0. ``` # Python ≥3.5 is required import sys assert sys.version_info >= (3, 5) # Scikit-Learn ≥0.20 is required import sklearn assert sklearn.__version__ >= "0.20" try: # %tensorflow_version only exists in Colab. %tensorflow_version 2.x !echo "deb http://storage.googleapis.com/tensorflow-serving-apt stable tensorflow-model-server tensorflow-model-server-universal" > /etc/apt/sources.list.d/tensorflow-serving.list !curl https://storage.googleapis.com/tensorflow-serving-apt/tensorflow-serving.release.pub.gpg | apt-key add - !apt update && apt-get install -y tensorflow-model-server !pip install -q -U tensorflow-serving-api IS_COLAB = True except Exception: IS_COLAB = False # TensorFlow ≥2.0 is required import tensorflow as tf from tensorflow import keras assert tf.__version__ >= "2.0" if not tf.config.list_physical_devices('GPU'): print("No GPU was detected. 
CNNs can be very slow without a GPU.") if IS_COLAB: print("Go to Runtime > Change runtime and select a GPU hardware accelerator.") # Common imports import numpy as np import os # to make this notebook's output stable across runs np.random.seed(42) tf.random.set_seed(42) # To plot pretty figures %matplotlib inline import matplotlib as mpl import matplotlib.pyplot as plt mpl.rc('axes', labelsize=14) mpl.rc('xtick', labelsize=12) mpl.rc('ytick', labelsize=12) # Where to save the figures PROJECT_ROOT_DIR = "." CHAPTER_ID = "deploy" IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID) os.makedirs(IMAGES_PATH, exist_ok=True) def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300): path = os.path.join(IMAGES_PATH, fig_id + "." + fig_extension) print("Saving figure", fig_id) if tight_layout: plt.tight_layout() plt.savefig(path, format=fig_extension, dpi=resolution) ``` # Deploying TensorFlow models to TensorFlow Serving (TFS) We will use the REST API or the gRPC API. ## Save/Load a `SavedModel` ``` (X_train_full, y_train_full), (X_test, y_test) = keras.datasets.mnist.load_data() X_train_full = X_train_full[..., np.newaxis].astype(np.float32) / 255. X_test = X_test[..., np.newaxis].astype(np.float32) / 255. 
X_valid, X_train = X_train_full[:5000], X_train_full[5000:] y_valid, y_train = y_train_full[:5000], y_train_full[5000:] X_new = X_test[:3] np.random.seed(42) tf.random.set_seed(42) model = keras.models.Sequential([ keras.layers.Flatten(input_shape=[28, 28, 1]), keras.layers.Dense(100, activation="relu"), keras.layers.Dense(10, activation="softmax") ]) model.compile(loss="sparse_categorical_crossentropy", optimizer=keras.optimizers.SGD(lr=1e-2), metrics=["accuracy"]) model.fit(X_train, y_train, epochs=10, validation_data=(X_valid, y_valid)) np.round(model.predict(X_new), 2) model_version = "0001" model_name = "my_mnist_model" model_path = os.path.join(model_name, model_version) model_path !rm -rf {model_name} tf.saved_model.save(model, model_path) for root, dirs, files in os.walk(model_name): indent = ' ' * root.count(os.sep) print('{}{}/'.format(indent, os.path.basename(root))) for filename in files: print('{}{}'.format(indent + ' ', filename)) !saved_model_cli show --dir {model_path} !saved_model_cli show --dir {model_path} --tag_set serve !saved_model_cli show --dir {model_path} --tag_set serve \ --signature_def serving_default !saved_model_cli show --dir {model_path} --all ``` Let's write the new instances to a `npy` file so we can pass them easily to our model: ``` np.save("my_mnist_tests.npy", X_new) input_name = model.input_names[0] input_name ``` And now let's use `saved_model_cli` to make predictions for the instances we just saved: ``` !saved_model_cli run --dir {model_path} --tag_set serve \ --signature_def serving_default \ --inputs {input_name}=my_mnist_tests.npy np.round([[1.1739199e-04, 1.1239604e-07, 6.0210604e-04, 2.0804715e-03, 2.5779348e-06, 6.4079795e-05, 2.7411186e-08, 9.9669880e-01, 3.9654213e-05, 3.9471846e-04], [1.2294615e-03, 2.9207937e-05, 9.8599273e-01, 9.6755642e-03, 8.8930705e-08, 2.9156188e-04, 1.5831805e-03, 1.1311053e-09, 1.1980456e-03, 1.1113169e-07], [6.4066830e-05, 9.6359509e-01, 9.0598064e-03, 2.9872139e-03, 5.9552520e-04, 
3.7478798e-03, 2.5074568e-03, 1.1462728e-02, 5.5553433e-03, 4.2495009e-04]], 2) ``` ## TensorFlow Serving Install [Docker](https://docs.docker.com/install/) if you don't have it already. Then run: ```bash docker pull tensorflow/serving export ML_PATH=$HOME/ml # or wherever this project is docker run -it --rm -p 8500:8500 -p 8501:8501 \ -v "$ML_PATH/my_mnist_model:/models/my_mnist_model" \ -e MODEL_NAME=my_mnist_model \ tensorflow/serving ``` Once you are finished using it, press Ctrl-C to shut down the server. Alternatively, if `tensorflow_model_server` is installed (e.g., if you are running this notebook in Colab), then the following 3 cells will start the server: ``` os.environ["MODEL_DIR"] = os.path.split(os.path.abspath(model_path))[0] %%bash --bg nohup tensorflow_model_server \ --rest_api_port=8501 \ --model_name=my_mnist_model \ --model_base_path="${MODEL_DIR}" >server.log 2>&1 !tail server.log import json input_data_json = json.dumps({ "signature_name": "serving_default", "instances": X_new.tolist(), }) repr(input_data_json)[:1500] + "..." 
``` Now let's use TensorFlow Serving's REST API to make predictions: ``` import requests SERVER_URL = 'http://localhost:8501/v1/models/my_mnist_model:predict' response = requests.post(SERVER_URL, data=input_data_json) response.raise_for_status() # raise an exception in case of error response = response.json() response.keys() y_proba = np.array(response["predictions"]) y_proba.round(2) ``` ### Using the gRPC API ``` from tensorflow_serving.apis.predict_pb2 import PredictRequest request = PredictRequest() request.model_spec.name = model_name request.model_spec.signature_name = "serving_default" input_name = model.input_names[0] request.inputs[input_name].CopyFrom(tf.make_tensor_proto(X_new)) import grpc from tensorflow_serving.apis import prediction_service_pb2_grpc channel = grpc.insecure_channel('localhost:8500') predict_service = prediction_service_pb2_grpc.PredictionServiceStub(channel) response = predict_service.Predict(request, timeout=10.0) response ``` Convert the response to a tensor: ``` output_name = model.output_names[0] outputs_proto = response.outputs[output_name] y_proba = tf.make_ndarray(outputs_proto) y_proba.round(2) ``` Or to a NumPy array if your client does not include the TensorFlow library: ``` output_name = model.output_names[0] outputs_proto = response.outputs[output_name] shape = [dim.size for dim in outputs_proto.tensor_shape.dim] y_proba = np.array(outputs_proto.float_val).reshape(shape) y_proba.round(2) ``` ## Deploying a new model version ``` np.random.seed(42) tf.random.set_seed(42) model = keras.models.Sequential([ keras.layers.Flatten(input_shape=[28, 28, 1]), keras.layers.Dense(50, activation="relu"), keras.layers.Dense(50, activation="relu"), keras.layers.Dense(10, activation="softmax") ]) model.compile(loss="sparse_categorical_crossentropy", optimizer=keras.optimizers.SGD(lr=1e-2), metrics=["accuracy"]) history = model.fit(X_train, y_train, epochs=10, validation_data=(X_valid, y_valid)) model_version = "0002" model_name = 
"my_mnist_model" model_path = os.path.join(model_name, model_version) model_path tf.saved_model.save(model, model_path) for root, dirs, files in os.walk(model_name): indent = ' ' * root.count(os.sep) print('{}{}/'.format(indent, os.path.basename(root))) for filename in files: print('{}{}'.format(indent + ' ', filename)) ``` **Warning**: You may need to wait a minute before the new model is loaded by TensorFlow Serving. ``` import requests SERVER_URL = 'http://localhost:8501/v1/models/my_mnist_model:predict' response = requests.post(SERVER_URL, data=input_data_json) response.raise_for_status() response = response.json() response.keys() y_proba = np.array(response["predictions"]) y_proba.round(2) ``` # Deploy the model to Google Cloud AI Platform Follow the instructions in the book to deploy the model to Google Cloud AI Platform, download the service account's private key and save it to the `my_service_account_private_key.json` in the project directory. Also, update the `project_id`: ``` project_id = "onyx-smoke-242003" import googleapiclient.discovery os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "my_service_account_private_key.json" model_id = "my_mnist_model" model_path = "projects/{}/models/{}".format(project_id, model_id) model_path += "/versions/v0001/" # if you want to run a specific version ml_resource = googleapiclient.discovery.build("ml", "v1").projects() def predict(X): input_data_json = {"signature_name": "serving_default", "instances": X.tolist()} request = ml_resource.predict(name=model_path, body=input_data_json) response = request.execute() if "error" in response: raise RuntimeError(response["error"]) return np.array([pred[output_name] for pred in response["predictions"]]) Y_probas = predict(X_new) np.round(Y_probas, 2) ``` # Using GPUs ``` tf.test.is_gpu_available() tf.test.gpu_device_name() tf.test.is_built_with_cuda() from tensorflow.python.client.device_lib import list_local_devices devices = list_local_devices() devices ``` # Distributed 
Training ``` keras.backend.clear_session() tf.random.set_seed(42) np.random.seed(42) def create_model(): return keras.models.Sequential([ keras.layers.Conv2D(filters=64, kernel_size=7, activation="relu", padding="same", input_shape=[28, 28, 1]), keras.layers.MaxPooling2D(pool_size=2), keras.layers.Conv2D(filters=128, kernel_size=3, activation="relu", padding="same"), keras.layers.Conv2D(filters=128, kernel_size=3, activation="relu", padding="same"), keras.layers.MaxPooling2D(pool_size=2), keras.layers.Flatten(), keras.layers.Dense(units=64, activation='relu'), keras.layers.Dropout(0.5), keras.layers.Dense(units=10, activation='softmax'), ]) batch_size = 100 model = create_model() model.compile(loss="sparse_categorical_crossentropy", optimizer=keras.optimizers.SGD(lr=1e-2), metrics=["accuracy"]) model.fit(X_train, y_train, epochs=10, validation_data=(X_valid, y_valid), batch_size=batch_size) keras.backend.clear_session() tf.random.set_seed(42) np.random.seed(42) distribution = tf.distribute.MirroredStrategy() # Change the default all-reduce algorithm: #distribution = tf.distribute.MirroredStrategy( # cross_device_ops=tf.distribute.HierarchicalCopyAllReduce()) # Specify the list of GPUs to use: #distribution = tf.distribute.MirroredStrategy(devices=["/gpu:0", "/gpu:1"]) # Use the central storage strategy instead: #distribution = tf.distribute.experimental.CentralStorageStrategy() #resolver = tf.distribute.cluster_resolver.TPUClusterResolver() #tf.tpu.experimental.initialize_tpu_system(resolver) #distribution = tf.distribute.experimental.TPUStrategy(resolver) with distribution.scope(): model = create_model() model.compile(loss="sparse_categorical_crossentropy", optimizer=keras.optimizers.SGD(lr=1e-2), metrics=["accuracy"]) batch_size = 100 # must be divisible by the number of workers model.fit(X_train, y_train, epochs=10, validation_data=(X_valid, y_valid), batch_size=batch_size) model.predict(X_new) ``` Custom training loop: ``` keras.backend.clear_session() 
tf.random.set_seed(42) np.random.seed(42) K = keras.backend distribution = tf.distribute.MirroredStrategy() with distribution.scope(): model = create_model() optimizer = keras.optimizers.SGD() with distribution.scope(): dataset = tf.data.Dataset.from_tensor_slices((X_train, y_train)).repeat().batch(batch_size) input_iterator = distribution.make_dataset_iterator(dataset) @tf.function def train_step(): def step_fn(inputs): X, y = inputs with tf.GradientTape() as tape: Y_proba = model(X) loss = K.sum(keras.losses.sparse_categorical_crossentropy(y, Y_proba)) / batch_size grads = tape.gradient(loss, model.trainable_variables) optimizer.apply_gradients(zip(grads, model.trainable_variables)) return loss per_replica_losses = distribution.experimental_run(step_fn, input_iterator) mean_loss = distribution.reduce(tf.distribute.ReduceOp.SUM, per_replica_losses, axis=None) return mean_loss n_epochs = 10 with distribution.scope(): input_iterator.initialize() for epoch in range(n_epochs): print("Epoch {}/{}".format(epoch + 1, n_epochs)) for iteration in range(len(X_train) // batch_size): print("\rLoss: {:.3f}".format(train_step().numpy()), end="") print() batch_size = 100 # must be divisible by the number of workers model.fit(X_train, y_train, epochs=10, validation_data=(X_valid, y_valid), batch_size=batch_size) ``` ## Training across multiple servers A TensorFlow cluster is a group of TensorFlow processes running in parallel, usually on different machines, and talking to each other to complete some work, for example training or executing a neural network. Each TF process in the cluster is called a "task" (or a "TF server"). It has an IP address, a port, and a type (also called its role or its job). The type can be `"worker"`, `"chief"`, `"ps"` (parameter server) or `"evaluator"`: * Each **worker** performs computations, usually on a machine with one or more GPUs. 
* The **chief** performs computations as well, but it also handles extra work such as writing TensorBoard logs or saving checkpoints. There is a single chief in a cluster. If no chief is specified, then the first worker is the chief.
* A **parameter server** (ps) only keeps track of variable values; it is usually on a CPU-only machine.
* The **evaluator** obviously takes care of evaluation. There is usually a single evaluator in a cluster.

The set of tasks that share the same type is often called a "job". For example, the "worker" job is the set of all workers.

To start a TensorFlow cluster, you must first specify it. This means defining all the tasks (IP address, TCP port, and type). For example, the following cluster specification defines a cluster with 3 tasks (2 workers and 1 parameter server). It's a dictionary with one key per job, and the values are lists of task addresses:

```
{
    "worker": ["my-worker0.example.com:9876", "my-worker1.example.com:9876"],
    "ps": ["my-ps0.example.com:9876"]
}
```

Every task in the cluster may communicate with every other task in the cluster, so make sure to configure your firewall to authorize all communications between these machines on these ports (it's usually simpler if you use the same port on every machine).

When a task is started, it needs to be told which one it is: its type and index (the task index is also called the task id). A common way to specify everything at once (both the cluster spec and the current task's type and id) is to set the `TF_CONFIG` environment variable before starting the program. It must be a JSON-encoded dictionary containing a cluster specification (under the `"cluster"` key), and the type and index of the task to start (under the `"task"` key).
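The cluster-spec dictionary can be exercised in plain Python. Here, `task_address` is a hypothetical helper (not part of TensorFlow) that simply looks up a task's host:port by job name and index, to make the structure concrete:

```python
# The same cluster spec as above: one key per job, each mapping to a
# list of "host:port" task addresses.
cluster_spec = {
    "worker": ["my-worker0.example.com:9876", "my-worker1.example.com:9876"],
    "ps": ["my-ps0.example.com:9876"],
}

def task_address(spec, task_type, task_index):
    """Return the host:port of the task with the given type and index."""
    return spec[task_type][task_index]

print(task_address(cluster_spec, "worker", 1))  # my-worker1.example.com:9876
print(task_address(cluster_spec, "ps", 0))      # my-ps0.example.com:9876
```

A (type, index) pair is exactly the identity each task receives when it starts, which is why it is enough to locate the task in the spec.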
For example, the following `TF_CONFIG` environment variable defines a simple cluster with 2 workers and 1 parameter server, and specifies that the task to start is the first worker: ``` import os import json os.environ["TF_CONFIG"] = json.dumps({ "cluster": { "worker": ["my-work0.example.com:9876", "my-work1.example.com:9876"], "ps": ["my-ps0.example.com:9876"] }, "task": {"type": "worker", "index": 0} }) print("TF_CONFIG='{}'".format(os.environ["TF_CONFIG"])) ``` Some platforms (e.g., Google Cloud ML Engine) automatically set this environment variable for you. Then you would write a short Python script to start a task. The same script can be used on every machine, since it will load the `TF_CONFIG` variable, which will tell it which task to start: ``` import tensorflow as tf resolver = tf.distribute.cluster_resolver.TFConfigClusterResolver() worker0 = tf.distribute.Server(resolver.cluster_spec(), job_name=resolver.task_type, task_index=resolver.task_id) ``` Another way to specify the cluster specification is directly in Python, rather than through an environment variable: ``` cluster_spec = tf.train.ClusterSpec({ "worker": ["127.0.0.1:9901", "127.0.0.1:9902"], "ps": ["127.0.0.1:9903"] }) ``` You can then start a server simply by passing it the cluster spec and indicating its type and index. 
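The resolver-based script above works because all the information it needs is in `TF_CONFIG`. As a minimal sketch, here is the same information recovered by hand — this is essentially what `TFConfigClusterResolver` exposes via `cluster_spec()`, `task_type`, and `task_id`:

```python
import json
import os

# Set TF_CONFIG as a worker task would see it (same cluster as above).
os.environ["TF_CONFIG"] = json.dumps({
    "cluster": {"worker": ["127.0.0.1:9901", "127.0.0.1:9902"],
                "ps": ["127.0.0.1:9903"]},
    "task": {"type": "worker", "index": 0},
})

# Parse it back: the cluster spec plus this task's own identity.
tf_config = json.loads(os.environ["TF_CONFIG"])
cluster = tf_config["cluster"]
task_type = tf_config["task"]["type"]
task_index = tf_config["task"]["index"]
own_address = cluster[task_type][task_index]
print(task_type, task_index, own_address)  # worker 0 127.0.0.1:9901
```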
Let's start the two remaining tasks (remember that in general you would only start a single task per machine; we are starting 3 tasks on the localhost just for the purpose of this code example): ``` #worker1 = tf.distribute.Server(cluster_spec, job_name="worker", task_index=1) ps0 = tf.distribute.Server(cluster_spec, job_name="ps", task_index=0) os.environ["TF_CONFIG"] = json.dumps({ "cluster": { "worker": ["127.0.0.1:9901", "127.0.0.1:9902"], "ps": ["127.0.0.1:9903"] }, "task": {"type": "worker", "index": 1} }) print(repr(os.environ["TF_CONFIG"])) distribution = tf.distribute.experimental.MultiWorkerMirroredStrategy() keras.backend.clear_session() tf.random.set_seed(42) np.random.seed(42) os.environ["TF_CONFIG"] = json.dumps({ "cluster": { "worker": ["127.0.0.1:9901", "127.0.0.1:9902"], "ps": ["127.0.0.1:9903"] }, "task": {"type": "worker", "index": 1} }) #CUDA_VISIBLE_DEVICES=0 with distribution.scope(): model = create_model() model.compile(loss="sparse_categorical_crossentropy", optimizer=keras.optimizers.SGD(lr=1e-2), metrics=["accuracy"]) import tensorflow as tf from tensorflow import keras import numpy as np # At the beginning of the program (restart the kernel before running this cell) distribution = tf.distribute.experimental.MultiWorkerMirroredStrategy() (X_train_full, y_train_full), (X_test, y_test) = keras.datasets.mnist.load_data() X_train_full = X_train_full[..., np.newaxis] / 255. X_test = X_test[..., np.newaxis] / 255. 
X_valid, X_train = X_train_full[:5000], X_train_full[5000:] y_valid, y_train = y_train_full[:5000], y_train_full[5000:] X_new = X_test[:3] n_workers = 2 batch_size = 32 * n_workers dataset = tf.data.Dataset.from_tensor_slices((X_train[..., np.newaxis], y_train)).repeat().batch(batch_size) def create_model(): return keras.models.Sequential([ keras.layers.Conv2D(filters=64, kernel_size=7, activation="relu", padding="same", input_shape=[28, 28, 1]), keras.layers.MaxPooling2D(pool_size=2), keras.layers.Conv2D(filters=128, kernel_size=3, activation="relu", padding="same"), keras.layers.Conv2D(filters=128, kernel_size=3, activation="relu", padding="same"), keras.layers.MaxPooling2D(pool_size=2), keras.layers.Flatten(), keras.layers.Dense(units=64, activation='relu'), keras.layers.Dropout(0.5), keras.layers.Dense(units=10, activation='softmax'), ]) with distribution.scope(): model = create_model() model.compile(loss="sparse_categorical_crossentropy", optimizer=keras.optimizers.SGD(lr=1e-2), metrics=["accuracy"]) model.fit(dataset, steps_per_epoch=len(X_train)//batch_size, epochs=10) # Hyperparameter tuning # Only talk to ps server config_proto = tf.ConfigProto(device_filters=['/job:ps', '/job:worker/task:%d' % tf_config['task']['index']]) config = tf.estimator.RunConfig(session_config=config_proto) # default since 1.10 strategy.num_replicas_in_sync ```
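The `batch_size = 32 * n_workers` line above scales the global batch with the number of replicas: under data-parallel training, each replica processes `global_batch / n_replicas` examples per step. A quick sketch of that arithmetic (the 55,000-example figure assumes the MNIST split used earlier in this notebook, i.e. 60,000 training images minus the 5,000 held out for validation):

```python
# Global batch size grows with the number of replicas (workers/GPUs).
n_replicas = 2            # e.g., what strategy.num_replicas_in_sync reports
per_replica_batch = 32
global_batch = per_replica_batch * n_replicas

# steps_per_epoch must be derived from the *global* batch size.
n_train = 55000           # MNIST train set after the 5,000-example validation split
steps_per_epoch = n_train // global_batch
print(global_batch, steps_per_epoch)  # 64 859
```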
Lambda School Data Science *Unit 2, Sprint 3, Module 4* --- # Model Interpretation - Visualize and interpret **partial dependence plots** - Explain individual predictions with **shapley value plots** ### Setup Run the code cell below. You can work locally (follow the [local setup instructions](https://lambdaschool.github.io/ds/unit2/local/)) or on Colab. Libraries: - category_encoders - matplotlib - numpy - pandas - [**pdpbox**](https://github.com/SauceCat/PDPbox) - plotly - scikit-learn - scipy.stats - [**shap**](https://github.com/slundberg/shap) - xgboost ``` %%capture import sys # If you're on Colab: if 'google.colab' in sys.modules: DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Applied-Modeling/master/data/' !pip install category_encoders==2.* !pip install pdpbox !pip install shap # If you're working locally: else: DATA_PATH = '../data/' # Ignore this warning: https://github.com/dmlc/xgboost/issues/4300 # xgboost/core.py:587: FutureWarning: Series.base is deprecated and will be removed in a future version import warnings warnings.filterwarnings(action='ignore', category=FutureWarning, module='xgboost') ``` # Visualize and interpret partial dependence plots ## Overview Partial dependence plots show the relationship between 1-2 individual features and the target — how predictions partially depend on the isolated features. It's explained well by [PDPbox library documentation](https://pdpbox.readthedocs.io/en/latest/): >**The common headache**: When using black box machine learning algorithms like random forest and boosting, it is hard to understand the relations between predictors and model outcome. For example, in terms of random forest, all we get is the feature importance. Although we can know which feature is significantly influencing the outcome based on the importance calculation, it really sucks that we don’t know in which direction it is influencing. And in most of the real cases, the effect is non-monotonic. 
We need some powerful tools to help understanding the complex relations between predictors and model prediction. Let's also look at an [animation by Christoph Molnar](https://twitter.com/ChristophMolnar/status/1066398522608635904), author of [_Interpretable Machine Learning_](https://christophm.github.io/interpretable-ml-book/pdp.html#examples): > Partial dependence plots show how a feature affects predictions of a Machine Learning model on average. > 1. Define grid along feature > 2. Model predictions at grid points > 3. Line per data instance -> ICE (Individual Conditional Expectation) curve > 4. Average curves to get a PDP (Partial Dependence Plot) To demonstrate, we'll use a Lending Club dataset, to predict interest rates. (Like [this example](https://rrherr-project2-example.herokuapp.com/).) ``` import pandas as pd # Stratified sample, 10% of expired Lending Club loans, grades A-D # Source: https://www.lendingclub.com/info/download-data.action history = pd.read_csv(DATA_PATH+'lending-club/lending-club-subset.csv') history['issue_d'] = pd.to_datetime(history['issue_d'], infer_datetime_format=True) # Just use 36 month loans history = history[history.term==' 36 months'] # Index & sort by issue date history = history.set_index('issue_d').sort_index() # Clean data, engineer feature, & select subset of features history = history.rename(columns= {'annual_inc': 'Annual Income', 'fico_range_high': 'Credit Score', 'funded_amnt': 'Loan Amount', 'title': 'Loan Purpose'}) history['Interest Rate'] = history['int_rate'].str.strip('%').astype(float) history['Monthly Debts'] = history['Annual Income'] / 12 * history['dti'] / 100 columns = ['Annual Income', 'Credit Score', 'Loan Amount', 'Loan Purpose', 'Monthly Debts', 'Interest Rate'] history = history[columns] history = history.dropna() # Test on the last 10,000 loans, # Validate on the 10,000 before that, # Train on the rest test = history[-10000:] val = history[-20000:-10000] train = history[:-20000] # Assign to X, y 
target = 'Interest Rate' features = history.columns.drop('Interest Rate') X_train = train[features] y_train = train[target] X_val = val[features] y_val = val[target] X_test = test[features] y_test = test[target] # The target has some right skew, but it's not too bad %matplotlib inline import seaborn as sns sns.distplot(y_train); y_train.describe() ``` ### Fit Linear Regression model ``` import category_encoders as ce from sklearn.linear_model import LinearRegression from sklearn.pipeline import make_pipeline from sklearn.preprocessing import StandardScaler lr = make_pipeline( ce.TargetEncoder(), LinearRegression() ) lr.fit(X_train, y_train) print('Linear Regression R^2', lr.score(X_val, y_val)) ``` ### Explaining Linear Regression ``` coefficients = lr.named_steps['linearregression'].coef_ pd.Series(coefficients, features) ``` ### Fit Gradient Boosting model ``` from sklearn.metrics import r2_score from xgboost import XGBRegressor gb = make_pipeline( ce.OrdinalEncoder(), XGBRegressor(n_estimators=200, objective='reg:squarederror', n_jobs=-1) ) gb.fit(X_train, y_train) y_pred = gb.predict(X_val) print('Gradient Boosting R^2', r2_score(y_val, y_pred)) ``` ### Explaining Gradient Boosting??? Linear models have coefficients, but trees do not. Instead, to see the relationship between individual feature(s) and the target, we can use partial dependence plots. ## Follow Along ### Partial Dependence Plots with 1 feature PDPbox - [Gallery](https://github.com/SauceCat/PDPbox#gallery) - [API Reference: pdp_isolate](https://pdpbox.readthedocs.io/en/latest/pdp_isolate.html) - [API Reference: pdp_plot](https://pdpbox.readthedocs.io/en/latest/pdp_plot.html) ``` # Later, when you save matplotlib images to include in blog posts or web apps, # increase the dots per inch (double it), so the text isn't so fuzzy import matplotlib.pyplot as plt plt.rcParams['figure.dpi'] = 72 ``` ### Partial Dependence Plots with 2 features See interactions! 
PDPbox - [Gallery](https://github.com/SauceCat/PDPbox#gallery) - [API Reference: pdp_interact](https://pdpbox.readthedocs.io/en/latest/pdp_interact.html) - [API Reference: pdp_interact_plot](https://pdpbox.readthedocs.io/en/latest/pdp_interact_plot.html) Be aware of a bug in PDPBox version <= 0.20 with some versions of matplotlib: - With the `pdp_interact_plot` function, `plot_type='contour'` gets an error, but `plot_type='grid'` works - This will be fixed in the next release of PDPbox: https://github.com/SauceCat/PDPbox/issues/40 ``` from pdpbox.pdp import pdp_interact, pdp_interact_plot features = ['Annual Income', 'Credit Score'] interaction = pdp_interact( model=gb, dataset=X_val, model_features=X_val.columns, features=features ) pdp_interact_plot(interaction, plot_type='grid', feature_names=features); ``` ### BONUS: 3D with Plotly! Just for your future reference, here's how you can make it 3D! (Like [this example](https://rrherr-project2-example.herokuapp.com/).) ``` # First, make the 2D plot above. Then ... pdp = interaction.pdp.pivot_table( values='preds', columns=features[0], index=features[1] )[::-1] # Slice notation to reverse index order so y axis is ascending pdp = pdp.drop(columns=[1000.0, 751329.0]) import plotly.graph_objs as go surface = go.Surface( x=pdp.columns, y=pdp.index, z=pdp.values ) layout = go.Layout( scene=dict( xaxis=dict(title=features[0]), yaxis=dict(title=features[1]), zaxis=dict(title=target) ) ) fig = go.Figure(surface, layout) fig.show() ``` ### BONUS: PDPs with categorical features Just for your future reference, here's a bonus example to demonstrate partial dependence plots with categorical features. 1. I recommend you use Ordinal Encoder or Target Encoder, outside of a pipeline, to encode your data first. (If there is a natural ordering, then take the time to encode it that way, instead of random integers.) Then use the encoded data with pdpbox. 2. 
There's some extra work to get readable category names on your plot, instead of integer category codes. ``` # Fit a model on Titanic data import category_encoders as ce import seaborn as sns from sklearn.ensemble import RandomForestClassifier df = sns.load_dataset('titanic') df.age = df.age.fillna(df.age.median()) df = df.drop(columns='deck') df = df.dropna() target = 'survived' features = df.columns.drop(['survived', 'alive']) X = df[features] y = df[target] # Use Ordinal Encoder, outside of a pipeline encoder = ce.OrdinalEncoder() X_encoded = encoder.fit_transform(X) model = RandomForestClassifier(n_estimators=100, random_state=42, n_jobs=-1) model.fit(X_encoded, y) # Use Pdpbox %matplotlib inline import matplotlib.pyplot as plt from pdpbox import pdp feature = 'sex' pdp_dist = pdp.pdp_isolate(model=model, dataset=X_encoded, model_features=features, feature=feature) pdp.pdp_plot(pdp_dist, feature); # Look at the encoder's mappings encoder.mapping pdp.pdp_plot(pdp_dist, feature) # Manually change the xticks labels plt.xticks([1, 2], ['male', 'female']); # Let's automate it feature = 'sex' for item in encoder.mapping: if item['col'] == feature: feature_mapping = item['mapping'] feature_mapping = feature_mapping[feature_mapping.index.dropna()] category_names = feature_mapping.index.tolist() category_codes = feature_mapping.values.tolist() pdp.pdp_plot(pdp_dist, feature) # Automatically change the xticks labels plt.xticks(category_codes, category_names); features = ['sex', 'age'] interaction = pdp_interact( model=model, dataset=X_encoded, model_features=X_encoded.columns, features=features ) pdp_interact_plot(interaction, plot_type='grid', feature_names=features); pdp = interaction.pdp.pivot_table( values='preds', columns=features[0], # First feature on x axis index=features[1] # Next feature on y axis )[::-1] # Reverse the index order so y axis is ascending pdp = pdp.rename(columns=dict(zip(category_codes, category_names))) plt.figure(figsize=(10,8)) 
sns.heatmap(pdp, annot=True, fmt='.2f', cmap='viridis') plt.title('Partial Dependence of Titanic survival, on sex & age'); ``` # Explain individual predictions with shapley value plots ## Overview We’ll use TreeExplainer from an awesome library called [SHAP](https://github.com/slundberg/shap), for “additive explanations” — we can explain individual predictions by seeing how the features add up! <img src="https://raw.githubusercontent.com/slundberg/shap/master/docs/artwork/shap_header.png" width="800" /> ### Regression example We're coming full circle, with the NYC Apartment Rent dataset! Remember this code you wrote for your first assignment? ```python # Arrange X features matrix & y target vector features = ['bedrooms', 'bathrooms'] target = 'price' X = df[features] y = df[target] # Fit model from sklearn.linear_model import LinearRegression model = LinearRegression() model.fit(X, y) def predict(bedrooms, bathrooms): y_pred = model.predict([[bedrooms, bathrooms]]) estimate = y_pred[0] bed_coef = model.coef_[0] bath_coef = model.coef_[1] # Format with $ and comma separators. No decimals. result = f'Rent for a {bedrooms}-bed, {bathrooms}-bath apartment in NYC is estimated at ${estimate:,.0f}.' explanation = f' In this model, each bedroom adds ${bed_coef:,.0f} & each bathroom adds ${bath_coef:,.0f}.' return result + explanation ``` Let’s do something similar, but with a tuned Random Forest and Shapley Values. 
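Before wiring up the real model, here is the "additive" idea in miniature, with made-up numbers (none of these values come from the dataset): the prediction for one observation equals the explainer's baseline plus the sum of that observation's Shapley values, one per feature.

```python
import numpy as np

# Hypothetical values for one apartment, for illustration only.
base_value = 3500.0                                   # e.g., average predicted rent
shap_values = np.array([600.0, -150.0, 80.0, -30.0])  # bedrooms, bathrooms, lon, lat

# SHAP's additivity property: baseline + per-feature contributions = prediction.
prediction = base_value + shap_values.sum()
print(prediction)  # 4000.0
```

This is the identity the force plots below visualize: each arrow is one feature's Shapley value pushing the prediction away from the baseline.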
``` import numpy as np import pandas as pd # Read New York City apartment rental listing data df = pd.read_csv(DATA_PATH+'apartments/renthop-nyc.csv') assert df.shape == (49352, 34) # Remove the most extreme 1% prices, # the most extreme .1% latitudes, & # the most extreme .1% longitudes df = df[(df['price'] >= np.percentile(df['price'], 0.5)) & (df['price'] <= np.percentile(df['price'], 99.5)) & (df['latitude'] >= np.percentile(df['latitude'], 0.05)) & (df['latitude'] < np.percentile(df['latitude'], 99.95)) & (df['longitude'] >= np.percentile(df['longitude'], 0.05)) & (df['longitude'] <= np.percentile(df['longitude'], 99.95))] # Do train/test split # Use data from April & May 2016 to train # Use data from June 2016 to test df['created'] = pd.to_datetime(df['created'], infer_datetime_format=True) cutoff = pd.to_datetime('2016-06-01') train = df[df.created < cutoff] test = df[df.created >= cutoff] # Assign to X, y features = ['bedrooms', 'bathrooms', 'longitude', 'latitude'] target = 'price' X_train = train[features] y_train = train[target] X_test = test[features] y_test = test[target] from scipy.stats import randint, uniform from sklearn.ensemble import RandomForestRegressor from sklearn.model_selection import RandomizedSearchCV param_distributions = { 'n_estimators': randint(50, 500), 'max_depth': [5, 10, 15, 20, None], 'max_features': uniform(0, 1), } search = RandomizedSearchCV( RandomForestRegressor(random_state=42), param_distributions=param_distributions, n_iter=5, cv=2, scoring='neg_mean_absolute_error', verbose=10, return_train_score=True, n_jobs=-1, random_state=42 ) search.fit(X_train, y_train); print('Best hyperparameters', search.best_params_) print('Cross-validation MAE', -search.best_score_) model = search.best_estimator_ ``` ## Follow Along #### [Dan Becker explains Shapley Values:](https://www.kaggle.com/dansbecker/shap-values) >You've seen (and used) techniques to extract general insights from a machine learning model. 
But what if you want to break down how the model works for an individual prediction? > >SHAP Values (an acronym from SHapley Additive exPlanations) break down a prediction to show the impact of each feature. > >There is some complexity to the technique ... We won't go into that detail here, since it isn't critical for using the technique. [This blog post](https://towardsdatascience.com/one-feature-attribution-method-to-supposedly-rule-them-all-shapley-values-f3e04534983d) has a longer theoretical explanation. ``` # Get an individual observation to explain. # For example, the 0th row from the test set. row = X_test.iloc[[0]] row # What was the actual rent for this apartment? y_test.iloc[[0]] # What does the model predict for this apartment? model.predict(row) # Why did the model predict this? # Look at a Shapley Values Force Plot import shap explainer = shap.TreeExplainer(model) shap_values = explainer.shap_values(row) shap.initjs() shap.force_plot( base_value=explainer.expected_value, shap_values=shap_values, features=row ) ``` ### Define the predict function ``` def predict(bedrooms, bathrooms, longitude, latitude): # Make dataframe from the inputs df = pd.DataFrame( data=[[bedrooms, bathrooms, longitude, latitude]], columns=['bedrooms', 'bathrooms', 'longitude', 'latitude'] ) # Get the model's prediction pred = model.predict(df)[0] # Calculate shap values explainer = shap.TreeExplainer(model) shap_values = explainer.shap_values(df) # Get series with shap values, feature names, & feature values feature_names = df.columns feature_values = df.values[0] shaps = pd.Series(shap_values[0], zip(feature_names, feature_values)) # Print results result = f'${pred:,.0f} estimated rent for this NYC apartment. 
\n\n' result += f'Starting from baseline of ${explainer.expected_value:,.0f} \n' result += shaps.to_string() print(result) # Show shapley values force plot shap.initjs() return shap.force_plot( base_value=explainer.expected_value, shap_values=shap_values, features=df ) predict(3, 1.5, -73.9425, 40.7145) # What if it was a 2 bedroom? predict(2, 1.5, -73.9425, 40.7145) # What if it was a 1 bedroom? predict(1, 1.5, -73.9425, 40.7145) ``` ### BONUS: Classification example Just for your future reference, here's a bonus example for a classification problem. This uses Lending Club data, historical and current. The goal: Predict if peer-to-peer loans are charged off or fully paid. Decide which loans to invest in. ``` import pandas as pd # Stratified sample, 10% of expired Lending Club loans, grades A-D # Source: https://www.lendingclub.com/info/download-data.action history = pd.read_csv(DATA_PATH+'lending-club/lending-club-subset.csv') history['issue_d'] = pd.to_datetime(history['issue_d'], infer_datetime_format=True) # Current loans available for manual investing, June 17, 2019 # Source: https://www.lendingclub.com/browse/browse.action current = pd.read_csv(DATA_PATH+'../data/lending-club/primaryMarketNotes_browseNotes_1-RETAIL.csv') # Transform earliest_cr_line to an integer: # How many days the earliest credit line was open, before the loan was issued. # For current loans available for manual investing, assume the loan will be issued today. 
history['earliest_cr_line'] = pd.to_datetime(history['earliest_cr_line'], infer_datetime_format=True) history['earliest_cr_line'] = history['issue_d'] - history['earliest_cr_line'] history['earliest_cr_line'] = history['earliest_cr_line'].dt.days current['earliest_cr_line'] = pd.to_datetime(current['earliest_cr_line'], infer_datetime_format=True) current['earliest_cr_line'] = pd.Timestamp.today() - current['earliest_cr_line'] current['earliest_cr_line'] = current['earliest_cr_line'].dt.days # Transform earliest_cr_line for the secondary applicant history['sec_app_earliest_cr_line'] = pd.to_datetime(history['sec_app_earliest_cr_line'], infer_datetime_format=True, errors='coerce') history['sec_app_earliest_cr_line'] = history['issue_d'] - history['sec_app_earliest_cr_line'] history['sec_app_earliest_cr_line'] = history['sec_app_earliest_cr_line'].dt.days current['sec_app_earliest_cr_line'] = pd.to_datetime(current['sec_app_earliest_cr_line'], infer_datetime_format=True, errors='coerce') current['sec_app_earliest_cr_line'] = pd.Timestamp.today() - current['sec_app_earliest_cr_line'] current['sec_app_earliest_cr_line'] = current['sec_app_earliest_cr_line'].dt.days # Engineer features for issue date year & month history['issue_d_year'] = history['issue_d'].dt.year history['issue_d_month'] = history['issue_d'].dt.month current['issue_d_year'] = pd.Timestamp.today().year current['issue_d_month'] = pd.Timestamp.today().month # Calculate percent of each loan repaid history['percent_paid'] = history['total_pymnt'] / history['funded_amnt'] # Train on the historical data. 
# For the target, use `loan_status` ('Fully Paid' or 'Charged Off') target = 'loan_status' X = history.drop(columns=target) y = history[target] # Do train/validate/test 3-way split from sklearn.model_selection import train_test_split X_trainval, X_test, y_trainval, y_test = train_test_split( X, y, test_size=20000, stratify=y, random_state=42) X_train, X_val, y_train, y_val = train_test_split( X_trainval, y_trainval, test_size=20000, stratify=y_trainval, random_state=42) print('X_train shape', X_train.shape) print('y_train shape', y_train.shape) print('X_val shape', X_val.shape) print('y_val shape', y_val.shape) print('X_test shape', X_test.shape) print('y_test shape', y_test.shape) # Save the ids for later, so we can look up actual results, # to compare with predicted results train_id = X_train['id'] val_id = X_val['id'] test_id = X_test['id'] # Use Python sets to compare the historical columns & current columns common_columns = set(history.columns) & set(current.columns) just_history = set(history.columns) - set(current.columns) just_current = set(current.columns) - set(history.columns) # For features, use only the common columns shared by the historical & current data. features = list(common_columns) X_train = X_train[features] X_val = X_val[features] X_test = X_test[features] def wrangle(X): X = X.copy() # Engineer new feature for every feature: is the feature null? 
for col in X: X[col+'_NULL'] = X[col].isnull() # Convert percentages from strings to floats X['int_rate'] = X['int_rate'].str.strip('%').astype(float) X['revol_util'] = X['revol_util'].str.strip('%').astype(float) # Convert employment length from string to float X['emp_length'] = X['emp_length'].str.replace(r'\D','').astype(float) # Create features for three employee titles: teacher, manager, owner X['emp_title'] = X['emp_title'].str.lower() X['emp_title_teacher'] = X['emp_title'].str.contains('teacher', na=False) X['emp_title_manager'] = X['emp_title'].str.contains('manager', na=False) X['emp_title_owner'] = X['emp_title'].str.contains('owner', na=False) # Get length of free text fields X['title'] = X['title'].str.len() X['desc'] = X['desc'].str.len() X['emp_title'] = X['emp_title'].str.len() # Convert sub_grade from string "A1"-"D5" to numbers sub_grade_ranks = {'A1': 1.1, 'A2': 1.2, 'A3': 1.3, 'A4': 1.4, 'A5': 1.5, 'B1': 2.1, 'B2': 2.2, 'B3': 2.3, 'B4': 2.4, 'B5': 2.5, 'C1': 3.1, 'C2': 3.2, 'C3': 3.3, 'C4': 3.4, 'C5': 3.5, 'D1': 4.1, 'D2': 4.2, 'D3': 4.3, 'D4': 4.4, 'D5': 4.5} X['sub_grade'] = X['sub_grade'].map(sub_grade_ranks) # Drop some columns X = X.drop(columns='id') # Always unique X = X.drop(columns='url') # Always unique X = X.drop(columns='member_id') # Always null X = X.drop(columns='grade') # Duplicative of sub_grade X = X.drop(columns='zip_code') # High cardinality # Only use these features which had nonzero permutation importances in earlier models features = ['acc_open_past_24mths', 'addr_state', 'all_util', 'annual_inc', 'annual_inc_joint', 'avg_cur_bal', 'bc_open_to_buy', 'bc_util', 'collections_12_mths_ex_med', 'delinq_amnt', 'desc_NULL', 'dti', 'dti_joint', 'earliest_cr_line', 'emp_length', 'emp_length_NULL', 'emp_title', 'emp_title_NULL', 'emp_title_owner', 'fico_range_high', 'funded_amnt', 'home_ownership', 'inq_last_12m', 'inq_last_6mths', 'installment', 'int_rate', 'issue_d_month', 'issue_d_year', 'loan_amnt', 'max_bal_bc', 
'mo_sin_old_il_acct', 'mo_sin_old_rev_tl_op', 'mo_sin_rcnt_rev_tl_op', 'mort_acc', 'mths_since_last_major_derog_NULL', 'mths_since_last_record', 'mths_since_recent_bc', 'mths_since_recent_inq', 'num_actv_bc_tl', 'num_actv_rev_tl', 'num_op_rev_tl', 'num_rev_tl_bal_gt_0', 'num_tl_120dpd_2m_NULL', 'open_rv_12m_NULL', 'open_rv_24m', 'pct_tl_nvr_dlq', 'percent_bc_gt_75', 'pub_rec_bankruptcies', 'purpose', 'revol_bal', 'revol_bal_joint', 'sec_app_earliest_cr_line', 'sec_app_fico_range_high', 'sec_app_open_acc', 'sec_app_open_act_il', 'sub_grade', 'term', 'title', 'title_NULL', 'tot_coll_amt', 'tot_hi_cred_lim', 'total_acc', 'total_bal_il', 'total_bc_limit', 'total_cu_tl', 'total_rev_hi_lim'] X = X[features] # Reset index X = X.reset_index(drop=True) # Return the wrangled dataframe return X X_train = wrangle(X_train) X_val = wrangle(X_val) X_test = wrangle(X_test) print('X_train shape', X_train.shape) print('X_val shape', X_val.shape) print('X_test shape', X_test.shape) import category_encoders as ce from sklearn.impute import SimpleImputer from sklearn.pipeline import make_pipeline from xgboost import XGBClassifier processor = make_pipeline( ce.OrdinalEncoder(), SimpleImputer(strategy='median') ) X_train_processed = processor.fit_transform(X_train) X_val_processed = processor.transform(X_val) eval_set = [(X_train_processed, y_train), (X_val_processed, y_val)] model = XGBClassifier(n_estimators=1000, n_jobs=-1) model.fit(X_train_processed, y_train, eval_set=eval_set, eval_metric='auc', early_stopping_rounds=10) # THIS CELL ISN'T ABOUT THE NEW OBJECTIVES FOR TODAY # BUT IT IS IMPORTANT FOR YOUR SPRINT CHALLENGE from sklearn.metrics import roc_auc_score X_test_processed = processor.transform(X_test) class_index = 1 y_pred_proba = model.predict_proba(X_test_processed)[:, class_index] print(f'Test ROC AUC for class {class_index}:') print(roc_auc_score(y_test, y_pred_proba)) # Ranges from 0-1, higher is better ``` #### Look at predictions vs actuals ``` df = pd.DataFrame({ 
'id': test_id, 'pred_proba': y_pred_proba, 'status_group': y_test }) df = df.merge( history[['id', 'issue_d', 'sub_grade', 'percent_paid', 'term', 'int_rate']], how='left' ) df.head() fully_paid = df['status_group'] == 'Fully Paid' charged_off = ~fully_paid right = (fully_paid) == (df['pred_proba'] > 0.50) wrong = ~right ``` #### Loan was fully paid, model's prediction was right ``` df[fully_paid & right].sample(n=10, random_state=1).sort_values(by='pred_proba') # To explain the prediction for test observation with index #3094, # first, get all of the features for that observation row = X_test.iloc[[3094]] row ``` #### Explain individual predictions with shapley value plots ``` # STUDY/PRACTICE THIS CELL FOR THE SPRINT CHALLENGE import shap explainer = shap.TreeExplainer(model) row_processed = processor.transform(row) shap_values = explainer.shap_values(row_processed) shap.initjs() shap.force_plot( base_value=explainer.expected_value, shap_values=shap_values, features=row, link='logit' # For classification, this shows predicted probabilities ) ``` #### Make a function to explain predictions Goal Output: ``` The model predicts this loan is Fully Paid, with 74% probability. Top 3 reasons for prediction: 1. dti is 10.97. 2. term is 36 months. 3. total_acc is 45.0. Top counter-argument against prediction: - sub_grade is 4.2. <INSERT SHAPLEY VALUE FORCE PLOT HERE> ``` ``` feature_names = row.columns feature_values = row.values[0] shaps = pd.Series(shap_values[0], zip(feature_names, feature_values)) pros = shaps.sort_values(ascending=False)[:3].index cons = shaps.sort_values(ascending=True)[:3].index print('Top 3 reasons for fully paid:') for i, pro in enumerate(pros, start=1): feature_name, feature_value = pro print(f'{i}. {feature_name} is {feature_value}.') print('\n') print('Cons:') for i, con in enumerate(cons, start=1): feature_name, feature_value = con print(f'{i}. 
{feature_name} is {feature_value}.') def explain(row_number): positive_class = 'Fully Paid' positive_class_index = 1 # Get & process the data for the row row = X_test.iloc[[row_number]] row_processed = processor.transform(row) # Make predictions (includes predicted probability) pred = model.predict(row_processed)[0] pred_proba = model.predict_proba(row_processed)[0, positive_class_index] pred_proba *= 100 if pred != positive_class: pred_proba = 100 - pred_proba # Show prediction & probability print(f'The model predicts this loan is {pred}, with {pred_proba:.0f}% probability.') # Get shapley additive explanations shap_values = explainer.shap_values(row_processed) # Get top 3 "pros & cons" for fully paid feature_names = row.columns feature_values = row.values[0] shaps = pd.Series(shap_values[0], zip(feature_names, feature_values)) pros = shaps.sort_values(ascending=False)[:3].index cons = shaps.sort_values(ascending=True)[:3].index # Show top 3 reason for prediction print('\n') print('Top 3 reasons for prediction:') evidence = pros if pred == positive_class else cons for i, info in enumerate(evidence, start=1): feature_name, feature_value = info print(f'{i}. {feature_name} is {feature_value}.') # Show top 1 counter-argument against prediction print('\n') print('Top counter-argument against prediction:') evidence = cons if pred == positive_class else pros feature_name, feature_value = evidence[0] print(f'- {feature_name} is {feature_value}.') # Show Shapley Values Force Plot shap.initjs() return shap.force_plot( base_value=explainer.expected_value, shap_values=shap_values, features=row, link='logit' # For classification, this shows predicted probabilities ) explain(3094) ``` #### Look at more examples You can choose an example from each quadrant of the confusion matrix, and get an explanation for the model's prediction. 
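The quadrant logic above can be sanity-checked on a tiny, made-up frame (the column names mirror `df` above, but the rows are invented purely for illustration):

```python
import pandas as pd

# Invented mini-frame mirroring the prediction frame built above
df = pd.DataFrame({
    'status_group': ['Fully Paid', 'Charged Off', 'Fully Paid', 'Charged Off'],
    'pred_proba':   [0.90, 0.20, 0.30, 0.80],
})

fully_paid = df['status_group'] == 'Fully Paid'
charged_off = ~fully_paid
right = fully_paid == (df['pred_proba'] > 0.50)
wrong = ~right

# The four masks partition the test set into confusion-matrix quadrants
sizes = {
    'fully paid & right': len(df[fully_paid & right]),
    'fully paid & wrong': len(df[fully_paid & wrong]),
    'charged off & right': len(df[charged_off & right]),
    'charged off & wrong': len(df[charged_off & wrong]),
}
print(sizes)
```

Every row falls in exactly one quadrant, so the four counts always sum to `len(df)`.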
#### Loan was charged off, model's prediction was right ``` df[charged_off & right].sample(n=10, random_state=1).sort_values(by='pred_proba') explain(8383) ``` #### Loan was fully paid, model's prediction was wrong ``` df[fully_paid & wrong].sample(n=10, random_state=1).sort_values(by='pred_proba') explain(18061) explain(6763) ``` #### Loan was charged off, model's prediction was wrong ``` df[charged_off & wrong].sample(n=10, random_state=1).sort_values(by='pred_proba') explain(19883) ``` ## References #### Partial Dependence Plots - [Kaggle / Dan Becker: Machine Learning Explainability — Partial Dependence Plots](https://www.kaggle.com/dansbecker/partial-plots) - [Christoph Molnar: Interpretable Machine Learning — Partial Dependence Plots](https://christophm.github.io/interpretable-ml-book/pdp.html) + [animated explanation](https://twitter.com/ChristophMolnar/status/1066398522608635904) - [pdpbox repo](https://github.com/SauceCat/PDPbox) & [docs](https://pdpbox.readthedocs.io/en/latest/) #### Shapley Values - [Kaggle / Dan Becker: Machine Learning Explainability — SHAP Values](https://www.kaggle.com/learn/machine-learning-explainability) - [Christoph Molnar: Interpretable Machine Learning — Shapley Values](https://christophm.github.io/interpretable-ml-book/shapley.html) - [SHAP repo](https://github.com/slundberg/shap) & [docs](https://shap.readthedocs.io/en/latest/) ## Recap You learned about three types of model explanations during the past 1.5 lessons: #### 1. Global model explanation: all features in relation to each other - Feature Importances: _Default, fastest, good for first estimates_ - Drop-Column Importances: _The best in theory, but much too slow in practice_ - Permutation Importances: _A good compromise!_ #### 2. Global model explanation: individual feature(s) in relation to target - Partial Dependence plots #### 3. 
Individual prediction explanation - Shapley Values _Note that the coefficients from a linear model give you all three types of explanations!_ ## Challenge Complete these tasks for your project, and document your work. - Continue to iterate on your project: data cleaning, exploratory visualization, feature engineering, modeling. - Make at least 1 partial dependence plot to explain your model. - Make at least 1 Shapley force plot to explain an individual prediction. - **Share at least 1 visualization (of any type) on Slack!** If you aren't ready to make these plots with your own dataset, you can practice these objectives with any dataset you've worked with previously. Example solutions are available for Partial Dependence Plots with the Tanzania Waterpumps dataset, and Shapley force plots with the Titanic dataset. (These datasets are available in the data directory of this repository.) Please be aware that **multi-class classification** will result in multiple Partial Dependence Plots (one for each class), and multiple sets of Shapley Values (one for each class).
<div align="center"> <font size="5">__cta-lstchain: Notebook for training and storing Random Forests for LST1 data analysis__</font> <font size="4"> To run this notebook you will need the latest version of cta-lstchain: git clone https://github.com/cta-observatory/cta-lstchain <br> <br> **If you already have ctapipe installed in a conda environment:** <br><br> source activate cta-dev <br> python setup.py install <br> <font size="4"> **If you don't have ctapipe installed:**</font> <br><br> conda env create -f environment.yml <br> source activate cta-dev <br> python setup.py install You will also need the data files from **cta-lstchain-extra:** git clone https://github.com/misabelber/cta-lstchain-extra <font size="4"> **Some imports...** ``` import os import numpy as np import pandas as pd import matplotlib.pyplot as plt import joblib from lstchain.reco import reco_dl1_to_dl2 from lstchain.visualization import plot_dl2 from lstchain.reco import utils %matplotlib inline plt.rcParams['figure.figsize'] = (20, 15) plt.rcParams['font.size'] = 14 ``` <font size="4"> **Get event DL1 files for training.** <br> We need two files, one for **gammas** and one for **protons**. These gammas are pointlike. ``` PATH_EVENTS = "/home/queenmab/DATA/LST1/Events" gammafile = PATH_EVENTS+"/gamma_events_point.h5" protonfile = PATH_EVENTS+"/proton_events.h5" if not os.path.exists(gammafile): PATH_EVENTS = "../../cta-lstchain-extra/reco/sample_data/dl1/" gammafile = PATH_EVENTS+"gamma_events_point_tiny.h5" protonfile = PATH_EVENTS+"proton_events_tiny.h5" ``` <font size="4"> We read the files as pandas dataframes: ``` df_gammas = pd.read_hdf(gammafile) df_proton = pd.read_hdf(protonfile) df_gammas.keys() df_gammas ``` <font size="4"> From all the previous information, we choose certain features to train the Random Forests: ``` features = ['intensity', 'time_gradient', 'width', 'length', 'wl', 'phi', 'psi'] ``` <font size="4"> Now we must split the data into train and test sets. 
Gamma events will train energy and direction reconstruction, and gamma/hadron separation, but protons are only used for separation. ``` train,test = reco_dl1_to_dl2.split_traintest(df_gammas,0.5) test = pd.concat([test, df_proton], ignore_index=True) #Protons are only for testing when training Energy/Direction reco. ``` <font size="4"> **Train the Reconstruction:** <br> We train two Random Forest Regressors, from scikit-learn, to reconstruct the energy and "disp" of the events. ``` RFreg_Energy, RFreg_Disp = reco_dl1_to_dl2.trainRFreco(train,features) ``` <font size="4"> We can now predict the **energy** and **disp** of the test events, and from **disp**, calculate the reconstructed direction. ``` #Predict energy and direction for the test set: test['e_rec'] = RFreg_Energy.predict(test[features]) test['disp_rec'] = RFreg_Disp.predict(test[features]) test['src_x_rec'],test['src_y_rec'] = utils.disp_to_pos(test['disp_rec'], test['x'], test['y'], test['psi']) test.keys() ``` <font size="4"> We use these test events with reconstructed energy and direction to **train the gamma/hadron separation.** <br> We add these two features to the list of features for training: ``` features_sep = list(features) features_sep.append('e_rec') features_sep.append('disp_rec') ``` <font size="4"> We again split the set into **train** and **test**. ``` train,test = reco_dl1_to_dl2.split_traintest(test,0.75) ``` <font size="4"> **Train the gamma/hadron classifier:** <br> Now we train a scikit-learn **RandomForestClassifier** which will separate events into two classes: 0 for **gammas** and 1 for **protons**. We call this parameter **hadroness**. 
``` #Now we can train the Random Forest Classifier RFcls_GH = reco_dl1_to_dl2.trainRFsep(train,features_sep) ``` <font size="4"> Predict the hadroness of the test events: ``` test['hadro_rec'] = RFcls_GH.predict(test[features_sep]) ``` <font size="4"> **Save the Random Forests:** <br> We can save these trained RFs into files, to apply them later on any set of data: ``` fileE = "RFreg_Energy.sav" fileD = "RFreg_Disp.sav" fileH = "RFcls_GH.sav" joblib.dump(RFreg_Energy, fileE) joblib.dump(RFreg_Disp, fileD) joblib.dump(RFcls_GH, fileH) ``` <font size="4"> **Now we can plot some results...** <br><br> **Distribution of features** ``` plot_dl2.plot_features(test) ``` <font size="4"> **Energy reconstruction** ``` plot_dl2.plot_e(test) ``` <font size="4"> **Disp reconstruction** ``` plot_dl2.plot_disp(test) ``` <font size="4"> **Source position in camera coordinates** ``` plot_dl2.plot_pos(test) ``` <font size="4"> **Importance of features for Gamma/Hadron separation** ``` plot_dl2.plot_importances(RFcls_GH,features_sep) ``` <font size="4"> **ROC curve** ``` plot_dl2.plot_ROC(RFcls_GH,test,features_sep,-1) ``` <font size="4"> Mono analysis is very limited, especially at low energies, where it's much more difficult to separate gammas from hadrons without stereo information. If we discard low-energy events, we can see that the performance improves. <br> <br> **For example, we can cut at 500 GeV:** ``` cut = 500. e_cut = np.log10(cut) test_cut = test[test['e_rec']>e_cut] plot_dl2.plot_e(test_cut) plot_dl2.plot_disp(test_cut) plot_dl2.plot_pos(test_cut) plot_dl2.plot_ROC(RFcls_GH,test_cut,features_sep,e_cut) ``` <font size="4"> - The reconstruction is still in a very early stage, with close to no optimization of the Random Forests and rather poor performance. - Also, the statistics are very low; we must retrieve more events for training the RFs. - We've been focusing on having code that works and can be easily used by anyone. 
- **Now it's time to look forward to having competitive mono reconstruction!**
# Napa County Model Build the model for Napa County, California. The goal of the model is to predict the total burn area of wildfires in Napa County for a given month. ``` import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns import statsmodels.api as sm from catboost import Pool, CatBoostRegressor from sklearn.metrics import mean_squared_error, mean_absolute_percentage_error, median_absolute_error from sklearn.preprocessing import OneHotEncoder, StandardScaler from sklearn.linear_model import LinearRegression, Lasso def evaluate(train_pred, train_actual, test_pred, test_actual): # sklearn metrics expect (y_true, y_pred) train_rmse = np.sqrt(mean_squared_error(train_actual, train_pred)) train_mape = mean_absolute_percentage_error(train_actual, train_pred) train_mae = median_absolute_error(train_actual, train_pred) test_rmse = np.sqrt(mean_squared_error(test_actual, test_pred)) test_mape = mean_absolute_percentage_error(test_actual, test_pred) test_mae = median_absolute_error(test_actual, test_pred) metric_df = pd.DataFrame({'Train': [train_rmse, train_mape, train_mae], 'Test': [test_rmse, test_mape, test_mae]}, index=['Root Mean Squared Error', 'Mean Absolute Percent Error', 'Median Absolute Error']) return metric_df ``` Read in the data and prep Napa County ``` fire_data = pd.read_csv("./Data/clean_fire_data.csv", parse_dates=["MONTH"]) fire_data['FIPS_CODE'] = fire_data['FIPS_CODE'].astype(str).str.zfill(5) fire_data['STATE_CODE'] = fire_data['FIPS_CODE'].str.slice(0, 2) weather_data = pd.read_csv("./Data/cleaned_weather_data.csv", parse_dates=['MONTH']) weather_data['STATE_CODE'] = weather_data['STATE_CODE'].astype(str).str.zfill(2) min_month = min(fire_data['MONTH']) max_month = max(fire_data['MONTH']) indx = pd.date_range(start=min_month, end=max_month, freq='MS') complete_data = pd.DataFrame(np.dstack(np.meshgrid(np.unique(fire_data['FIPS_CODE']), indx)).reshape(-1, 2), columns=['FIPS_CODE', 'MONTH']) complete_data['MONTH'] = 
pd.to_datetime(complete_data['MONTH']) fips_data = fire_data.groupby(by='FIPS_CODE').agg({'COUNTY_AREA': min, 'STATE_CODE': min}).reset_index() full_data = pd.merge(complete_data, fire_data[['FIPS_CODE', 'MONTH', 'TOTAL_BURN_AREA']], how='left', on=['FIPS_CODE', 'MONTH']).fillna(0) full_data = pd.merge(full_data, fips_data, how='inner', on=['FIPS_CODE']) final_data = pd.merge(full_data, weather_data, how='inner', on=['MONTH', 'STATE_CODE']) final_data['MONTH_VALUE'] = final_data['MONTH'].dt.month final_data.sort_values(by=['FIPS_CODE', 'MONTH'], inplace=True) cols = ["TOTAL_BURN_AREA", "PRCP", "TMIN", "TMAX"] lags = [1, 2, 3, 4, 5, 6, 11, 12, 13, 23, 24, 25] for c in cols: for l in lags: final_data[f"{c}_lag{l}"] = final_data.groupby('FIPS_CODE')[c].shift(l) final_data.dropna(inplace=True) ``` ### Linear Model ``` y_column = ['TOTAL_BURN_AREA'] x_columns = ['COUNTY_AREA', 'FIPS_CODE', 'PRCP', 'TMAX', 'TOTAL_BURN_AREA_lag1', 'TOTAL_BURN_AREA_lag12'] model_data = final_data[y_column + x_columns] ohe = OneHotEncoder(drop='first', sparse=False).fit(model_data[['FIPS_CODE']]) model_data[ohe.get_feature_names_out()] = ohe.transform(model_data[['FIPS_CODE']]) # Scale only the numeric inputs; FIPS_CODE is a string ID num_columns = [c for c in x_columns if c != 'FIPS_CODE'] model_data[num_columns] = StandardScaler().fit_transform(model_data[num_columns]) train_data = model_data.groupby(by='FIPS_CODE').apply(lambda x: x[:-24]) test_data = model_data.groupby(by='FIPS_CODE').apply(lambda x: x[-24:]) # Drop the raw FIPS_CODE string; its one-hot columns carry that information X_Train, y_train = train_data.drop(['TOTAL_BURN_AREA', 'FIPS_CODE'], axis=1), train_data['TOTAL_BURN_AREA'] X_Test, y_test = test_data.drop(['TOTAL_BURN_AREA', 'FIPS_CODE'], axis=1), test_data['TOTAL_BURN_AREA'] model = LinearRegression().fit(X_Train, y_train) train_preds = model.predict(X_Train) test_preds = model.predict(X_Test) evaluate(train_preds, y_train, test_preds, y_test) ``` ### Lasso Regression Use the same train and test data as Linear Regression ``` model = Lasso(alpha=0.1).fit(X_Train, y_train) train_preds = model.predict(X_Train) test_preds = model.predict(X_Test) evaluate(train_preds, y_train, test_preds, 
y_test) ``` Adjust the Lasso regularization parameter from 0.1 to 0.5 ``` model = Lasso(alpha=0.5).fit(X_Train, y_train) train_preds = model.predict(X_Train) test_preds = model.predict(X_Test) evaluate(train_preds, y_train, test_preds, y_test) final_data.columns ``` ### CatBoost Model - Regression Component Only We want to see the raw performance of our input variables ``` y_column = ['TOTAL_BURN_AREA'] x_columns = ['FIPS_CODE', 'COUNTY_AREA', 'STATE', 'PRCP', 'TMAX', 'TMIN', 'PRCP_lag1', 'PRCP_lag2', 'PRCP_lag3', 'PRCP_lag4', 'PRCP_lag5', 'PRCP_lag6', 'PRCP_lag11', 'PRCP_lag12', 'PRCP_lag13', 'PRCP_lag23', 'PRCP_lag24', 'PRCP_lag25', 'TMIN_lag1', 'TMIN_lag2', 'TMIN_lag3', 'TMIN_lag4', 'TMIN_lag5', 'TMIN_lag6', 'TMIN_lag11', 'TMIN_lag12', 'TMIN_lag13', 'TMIN_lag23', 'TMIN_lag24', 'TMIN_lag25', 'TMAX_lag1', 'TMAX_lag2', 'TMAX_lag3', 'TMAX_lag4', 'TMAX_lag5', 'TMAX_lag6', 'TMAX_lag11', 'TMAX_lag12', 'TMAX_lag13', 'TMAX_lag23', 'TMAX_lag24', 'TMAX_lag25'] model_data = final_data[y_column + x_columns] train_data = model_data.groupby(by='FIPS_CODE').apply(lambda x: x[:-24]) test_data = model_data.groupby(by='FIPS_CODE').apply(lambda x: x[-24:]) X_Train, y_train = train_data.drop(['TOTAL_BURN_AREA'], axis=1), train_data['TOTAL_BURN_AREA'] X_Test, y_test = test_data.drop(['TOTAL_BURN_AREA'], axis=1), test_data['TOTAL_BURN_AREA'] # Leave all parameters - except has_time as defaults train_pool = Pool(X_Train, label=y_train, cat_features=['FIPS_CODE', 'STATE']) model = CatBoostRegressor(iterations=1000, verbose=0, has_time=True, random_seed=42, min_data_in_leaf=1, max_depth=6).fit(train_pool) train_preds = model.predict(X_Train) test_preds = model.predict(X_Test) evaluate(train_preds, y_train, test_preds, y_test) # Increase the number of iterations train_pool = Pool(X_Train, label=y_train, cat_features=['FIPS_CODE', 'STATE']) model = CatBoostRegressor(iterations=1500, verbose=0, has_time=True, random_seed=42, min_data_in_leaf=1, max_depth=6).fit(train_pool) train_preds = 
model.predict(X_Train) test_preds = model.predict(X_Test) evaluate(train_preds, y_train, test_preds, y_test) ``` Reduce the number of input variables using feature importance. Take only features which contribute at least 1% importance ``` pd.DataFrame(zip(X_Train.columns, model.feature_importances_), columns=['Column', 'Importance']).sort_values('Importance', ascending=False).tail(15) y_column = ['TOTAL_BURN_AREA'] x_columns = ['FIPS_CODE', 'COUNTY_AREA', 'STATE', 'PRCP', 'TMAX', 'TMIN', 'PRCP_lag1', 'PRCP_lag2', 'PRCP_lag4', 'PRCP_lag11', 'PRCP_lag12', 'PRCP_lag13', 'PRCP_lag23','PRCP_lag25', 'TMIN_lag3', 'TMIN_lag6', 'TMIN_lag12', 'TMIN_lag23', 'TMIN_lag24', 'TMAX_lag1', 'TMAX_lag5', 'TMAX_lag6', 'TMAX_lag11', 'TMAX_lag12', 'TMAX_lag23', 'TMAX_lag24', 'TMAX_lag25'] model_data = final_data[y_column + x_columns] train_data = model_data.groupby(by='FIPS_CODE').apply(lambda x: x[:-24]) test_data = model_data.groupby(by='FIPS_CODE').apply(lambda x: x[-24:]) X_Train, y_train = train_data.drop(['TOTAL_BURN_AREA'], axis=1), train_data['TOTAL_BURN_AREA'] X_Test, y_test = test_data.drop(['TOTAL_BURN_AREA'], axis=1), test_data['TOTAL_BURN_AREA'] train_pool = Pool(X_Train, label=y_train, cat_features=['FIPS_CODE', 'STATE']) model = CatBoostRegressor(iterations=1500, verbose=0, has_time=True, random_seed=42, min_data_in_leaf=1, max_depth=6).fit(train_pool) train_preds = model.predict(X_Train) test_preds = model.predict(X_Test) evaluate(train_preds, y_train, test_preds, y_test) final_data.columns ``` ### CatBoost Model - Include AR Terms Using the regression terms from the previous model, we now add the AR terms for total burn area ``` y_column = ['TOTAL_BURN_AREA'] x_columns = ['TOTAL_BURN_AREA_lag1', 'TOTAL_BURN_AREA_lag2', 'TOTAL_BURN_AREA_lag3', 'TOTAL_BURN_AREA_lag4', 'TOTAL_BURN_AREA_lag5', 'TOTAL_BURN_AREA_lag6', 'TOTAL_BURN_AREA_lag11', 'TOTAL_BURN_AREA_lag12', 'TOTAL_BURN_AREA_lag13', 'TOTAL_BURN_AREA_lag23', 'TOTAL_BURN_AREA_lag24', 'TOTAL_BURN_AREA_lag25', 'FIPS_CODE', 
'COUNTY_AREA', 'STATE', 'PRCP', 'TMAX', 'TMIN', 'PRCP_lag1', 'PRCP_lag2', 'PRCP_lag4', 'PRCP_lag11', 'PRCP_lag12', 'PRCP_lag13', 'PRCP_lag23','PRCP_lag25', 'TMIN_lag3', 'TMIN_lag6', 'TMIN_lag12', 'TMIN_lag23', 'TMIN_lag24', 'TMAX_lag1', 'TMAX_lag5', 'TMAX_lag6', 'TMAX_lag11', 'TMAX_lag12', 'TMAX_lag23', 'TMAX_lag24', 'TMAX_lag25'] model_data = final_data[y_column + x_columns] train_data = model_data.groupby(by='FIPS_CODE').apply(lambda x: x[:-24]) test_data = model_data.groupby(by='FIPS_CODE').apply(lambda x: x[-24:]) X_Train, y_train = train_data.drop(['TOTAL_BURN_AREA'], axis=1), train_data['TOTAL_BURN_AREA'] X_Test, y_test = test_data.drop(['TOTAL_BURN_AREA'], axis=1), test_data['TOTAL_BURN_AREA'] train_pool = Pool(X_Train, label=y_train, cat_features=['FIPS_CODE', 'STATE']) model = CatBoostRegressor(iterations=1500, verbose=0, has_time=True, random_seed=42, min_data_in_leaf=10, max_depth=5).fit(train_pool) train_preds = model.predict(X_Train) test_preds = model.predict(X_Test) evaluate(train_preds, y_train, test_preds, y_test) ``` Reduce the number of features using feature importance; remove any inputs which have importance less than 1 ``` pd.DataFrame(zip(X_Train.columns, model.feature_importances_), columns=['Column', 'Importance']).sort_values('Importance', ascending=False).tail(15) y_column = ['TOTAL_BURN_AREA'] x_columns = ['TOTAL_BURN_AREA_lag1', 'TOTAL_BURN_AREA_lag2', 'TOTAL_BURN_AREA_lag3', 'TOTAL_BURN_AREA_lag5', 'TOTAL_BURN_AREA_lag11', 'TOTAL_BURN_AREA_lag12', 'TOTAL_BURN_AREA_lag13', 'TOTAL_BURN_AREA_lag23', 'TOTAL_BURN_AREA_lag24', 'TOTAL_BURN_AREA_lag25', 'FIPS_CODE', 'COUNTY_AREA', 'PRCP', 'TMAX', 'PRCP_lag1', 'PRCP_lag4', 'PRCP_lag11', 'PRCP_lag12', 'PRCP_lag23', 'TMIN_lag3', 'TMIN_lag6', 'TMIN_lag12', 'TMIN_lag23', 'TMIN_lag24', 'TMAX_lag6', 'TMAX_lag11', 'TMAX_lag12', 'TMAX_lag23', 'TMAX_lag24', 'TMAX_lag25'] model_data = final_data[y_column + x_columns] train_data = model_data.groupby(by='FIPS_CODE').apply(lambda x: x[:-24]) test_data = 
model_data.groupby(by='FIPS_CODE').apply(lambda x: x[-24:]) X_Train, y_train = train_data.drop(['TOTAL_BURN_AREA'], axis=1), train_data['TOTAL_BURN_AREA'] X_Test, y_test = test_data.drop(['TOTAL_BURN_AREA'], axis=1), test_data['TOTAL_BURN_AREA'] train_pool = Pool(X_Train, label=y_train, cat_features=['FIPS_CODE']) model = CatBoostRegressor(iterations=1500, verbose=0, has_time=True, random_seed=42, min_data_in_leaf=10, max_depth=5).fit(train_pool) train_preds = model.predict(X_Train) test_preds = model.predict(X_Test) evaluate(train_preds, y_train, test_preds, y_test) ``` Check feature importances again to make sure all features have importance > 1 ``` pd.DataFrame(zip(X_Train.columns, model.feature_importances_), columns=['Column', 'Importance']).sort_values('Importance', ascending=False).tail(15) ``` ### Compute Shapley Explanations for the Final Model ``` import shap shap.initjs() samps = X_Train.sample(50000) explainer = shap.TreeExplainer(model) shap_vals = explainer(samps) fig = shap.summary_plot(shap_values=shap_vals, features=samps, max_display=40) shap.plots.bar(shap_vals, max_display=31) sample = final_data.iloc[np.where((final_data['FIPS_CODE'] == '06055') & (final_data['MONTH'] == '2015-07-01'))][y_column + x_columns].drop(['TOTAL_BURN_AREA'], axis=1) sample exp = explainer(sample) shap.plots.bar(exp[0], max_display=31) fcst_train = X_Train.copy().reset_index(drop=True) fcst_test = X_Test.copy().reset_index(drop=True) fcst_train['Preds'] = np.clip(train_preds, a_min=0, a_max=None) fcst_test['Preds'] = np.clip(test_preds, a_min=0, a_max=None) napa_train_preds = fcst_train[fcst_train['FIPS_CODE'] == '06055']['Preds'] elko_train_preds = fcst_train[fcst_train['FIPS_CODE'] == '32007']['Preds'] napa_test_preds = fcst_test[fcst_test['FIPS_CODE'] == '06055']['Preds'] elko_test_preds = fcst_test[fcst_test['FIPS_CODE'] == '32007']['Preds'] plt.plot(final_data.loc[final_data['FIPS_CODE'] == '06055']['MONTH'][:-24], napa_train_preds.values, color='blue', zorder=2, 
label='Train Prediction') plt.plot(final_data.loc[final_data['FIPS_CODE'] == '06055']['MONTH'][:-24], final_data.loc[final_data['FIPS_CODE'] == '06055']['TOTAL_BURN_AREA'][:-24], color='red', zorder=1, label='Train Actual') plt.plot(final_data.loc[final_data['FIPS_CODE'] == '06055']['MONTH'][-24:], napa_test_preds.values, color='green', zorder=2, label='Test Prediction', alpha=0.6) plt.plot(final_data.loc[final_data['FIPS_CODE'] == '06055']['MONTH'][-24:], final_data.loc[final_data['FIPS_CODE'] == '06055']['TOTAL_BURN_AREA'][-24:], color='purple', zorder=1, label='Test Actual') plt.grid() plt.title("Napa County Prediction vs. Actual") plt.legend() plt.ylabel("Total Burn Area") plt.xlabel('Month') plt.tight_layout() plt.ylim(0, 10000) plt.savefig("./img/NapaPredPlot.png") plt.show() plt.plot(final_data.loc[final_data['FIPS_CODE'] == '32007']['MONTH'][:-24], elko_train_preds.values, color='blue', zorder=2, label='Train Prediction', alpha=0.5) plt.plot(final_data.loc[final_data['FIPS_CODE'] == '32007']['MONTH'][:-24], final_data.loc[final_data['FIPS_CODE'] == '32007']['TOTAL_BURN_AREA'][:-24], color='red', zorder=1, label='Train Actual', alpha=1) plt.plot(final_data.loc[final_data['FIPS_CODE'] == '32007']['MONTH'][-24:], elko_test_preds.values, color='green', zorder=2, label='Test Prediction', alpha=0.6) plt.plot(final_data.loc[final_data['FIPS_CODE'] == '32007']['MONTH'][-24:], final_data.loc[final_data['FIPS_CODE'] == '32007']['TOTAL_BURN_AREA'][-24:], color='purple', zorder=1, label='Test Actual') plt.grid() plt.title("Elko County Prediction vs. Actual") plt.legend() plt.ylabel("Total Burn Area") plt.xlabel('Month') plt.tight_layout() plt.savefig("./img/ElkoPredPlot.png") # plt.ylim(0, 150000) plt.show() ```
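The per-county holdout split used throughout this notebook (`groupby` plus positional slicing to hold out the last 24 months of each county) can be sketched on a toy frame. The data below is made up, and `group_keys=False` is a small tweak over the cells above that keeps the result with a flat index:

```python
import pandas as pd

# Toy frame: two "counties", five ordered months each
df = pd.DataFrame({'FIPS_CODE': ['a'] * 5 + ['b'] * 5,
                   'value': list(range(10))})

holdout = 2  # last N rows per group held out (24 in the notebook)
train = df.groupby('FIPS_CODE', group_keys=False).apply(lambda x: x[:-holdout])
test = df.groupby('FIPS_CODE', group_keys=False).apply(lambda x: x[-holdout:])

print(len(train), len(test))
print(test['value'].tolist())
```

Because each group is already sorted by month, this is a time-based split per county rather than a random one, which avoids leaking future observations into training.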
# Thomas Robbins ### GEOS 505 Final Project Nov. 15 submission #### In conjunction with the abstract that was previously submitted for this project, the work below is the submission requirement for Nov. 15th, 2018. ##### Below are snippets of code that represent the data chosen for this project, along with brief explanations of the code and work to be performed. ``` import pandas as pd import csv import matplotlib.pyplot as plt import numpy as np ``` #### First, I will import the required csv file with concrete test data. This file will be initialized as 'data'. ``` data = pd.read_csv('Mix_Design_V2.0.csv') data[0:8] ``` ### Next, I separate out the particular values associated with the 7-Day compressive strength tests. These values are initialized into a separate variable. ``` day7 = data['7-Day'].values day7[0:5] ``` #### Here, I noticed the NaN values need to be removed to properly run statistical programs on the file. Conversion from NaN to numeric values was performed. ``` Data = data.fillna(0) day7 = Data['7-Day'].values Data[0:8] day7[0:10] day7.max() ``` #### Next, graphical representation was required to verify that the values made sense and are consistent. ``` plt.plot(day7) ``` ### To display clear and concise graphical representations, the following styling and labeling will be applied. ``` fig1 = plt.figure(figsize=(3.0, 2.0), dpi = 150) plt.rcParams.update({'font.size':8}) plt.plot(day7,label= '7-Day strength') plt.ylabel('Strength in psi', fontname = "Times New Roman", color = 'blue') plt.xlabel('Sample number', fontname = "Times New Roman", color = 'Blue') plt.title('7-Day Strength Results', fontsize = 10, color = 'Blue') plt.legend(loc=0) ``` #### The graph now shows the 7-Day compressive strength for each sample; the next step is to determine the values for the other samples. Graphical representation will be performed similarly with the following test results. 
```
day10 = Data['10-Day'].values
day28 = Data['28-Day'].values
day56 = Data['56-Day'].values
```

#### Once the data is successfully presented graphically, statistical information can be extracted from it. The intent of this project is to perform statistics on the provided data set and potentially make future predictions. The variables in this data set are numerous, and there are often nan values in empty placeholders. The data does have enough information that future strength properties may be inferred given the variance of input parameters such as temperature, air content, and moisture content. Most of the test data shows an actual strength much larger than the design strength. Noting this, probabilistic analysis may be performed to determine the probability of producing a concrete sample whose strength is less than the design strength. To do this, Monte Carlo simulation may be used: random selection of the input parameters generates a most likely output in the form of a probability density function.

#### Generating random numbers between given max and min values for Monte Carlo simulation may be performed as follows.

```
import random

def rollDice():
    roll = random.randint(1, 100)
    return roll

# Now, just to test our dice, let's roll the dice 100 times.
x = 0
while x < 100:
    result = rollDice()
    # print(result)
    x += 1
```

### Using the method shown above, simulations will be run over several input parameters provided within the data set. Potentially, for a given sample, a prediction could be made on the strength of that sample. Verification can be made since the 56-day test results are included with the test data. If the simulations prove sufficiently accurate in their predictions, further modeling will be performed.
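As a concrete illustration of the Monte Carlo idea described above, the sketch below draws random input parameters and estimates the probability of a sample falling below the design strength. The parameter ranges, the linear strength model, and the 4000 psi design strength are all made-up placeholders here; the real simulation would use ranges taken from the mix-design data set.

```python
import random

random.seed(42)

DESIGN_STRENGTH = 4000  # psi -- placeholder design target

def simulate_sample():
    """Draw one random parameter set and return a toy predicted strength (psi)."""
    temperature = random.uniform(50, 90)    # deg F (assumed range)
    air_content = random.uniform(2.0, 8.0)  # percent (assumed range)
    w_c_ratio = random.uniform(0.35, 0.55)  # water/cement ratio (assumed range)
    # Toy linear model: strength drops with air content and w/c ratio.
    return 9000 - 400 * air_content - 8000 * w_c_ratio + 5 * temperature

n_trials = 10_000
below_design = sum(simulate_sample() < DESIGN_STRENGTH for _ in range(n_trials))
print("P(strength < design) ~", below_design / n_trials)
```

Repeating this for each sample's observed inputs, the predicted distribution could then be checked against the recorded 56-day strengths.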
<a href="https://colab.research.google.com/github/ayan59dutta/MLCC-Exercises/blob/master/Preliminaries/creating_and_manipulating_tensors.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> #### Copyright 2017 Google LLC. ``` # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ``` # Creating and Manipulating Tensors **Learning Objectives:** * Initialize and assign TensorFlow `Variable`s * Create and manipulate tensors * Refresh your memory about addition and multiplication in linear algebra (consult an introduction to matrix [addition](https://en.wikipedia.org/wiki/Matrix_addition) and [multiplication](https://en.wikipedia.org/wiki/Matrix_multiplication) if these topics are new to you) * Familiarize yourself with basic TensorFlow math and array operations ``` # from __future__ import print_function import tensorflow as tf try: tf.contrib.eager.enable_eager_execution() print("TF imported with eager execution!") except ValueError: print("TF already imported with eager execution!") ``` ## Vector Addition You can perform many typical mathematical operations on tensors ([TF API](https://www.tensorflow.org/api_guides/python/math_ops)). The code below creates the following vectors (1-D tensors), all having exactly six elements: * A `primes` vector containing prime numbers. * A `ones` vector containing all `1` values. * A vector created by performing element-wise addition over the first two vectors. 
* A vector created by doubling the elements in the `primes` vector.

```
primes = tf.constant([2, 3, 5, 7, 11, 13], dtype=tf.int32)
print("primes:", primes)

ones = tf.ones([6], dtype=tf.int32)
print("ones:", ones)

just_beyond_primes = tf.add(primes, ones)
print("just_beyond_primes:", just_beyond_primes)

twos = tf.constant([2, 2, 2, 2, 2, 2], dtype=tf.int32)
primes_doubled = primes * twos
print("primes_doubled:", primes_doubled)
```

Printing a tensor returns not only its value, but also its shape (discussed in the next section) and the type of value stored in the tensor. Calling the `numpy` method of a tensor returns the value of the tensor as a numpy array:

```
some_matrix = tf.constant([[1, 2, 3], [4, 5, 6]], dtype=tf.int32)
print(some_matrix)
print("\nvalue of some_matrix is:\n", some_matrix.numpy())
```

### Tensor Shapes

Shapes are used to characterize the size and number of dimensions of a tensor. The shape of a tensor is expressed as a `list`, with the `i`th element representing the size along dimension `i`. The length of the list then indicates the rank of the tensor (i.e., the number of dimensions). For more information, see the [TensorFlow documentation](https://www.tensorflow.org/programmers_guide/tensors#shape).

A few basic examples:

```
# A scalar (0-D tensor).
scalar = tf.zeros([])

# A vector with 3 elements.
vector = tf.zeros([3])

# A matrix with 2 rows and 3 columns.
matrix = tf.zeros([2, 3])

print('scalar has shape', scalar.get_shape(), 'and value:\n', scalar.numpy())
print('vector has shape', vector.get_shape(), 'and value:\n', vector.numpy())
print('matrix has shape', matrix.get_shape(), 'and value:\n', matrix.numpy())
```

### Broadcasting

In mathematics, you can only perform element-wise operations (e.g. *add* and *equals*) on tensors of the same shape. In TensorFlow, however, you may perform operations on tensors that would traditionally have been incompatible.
TensorFlow supports **broadcasting** (a concept borrowed from numpy), where the smaller array in an element-wise operation is enlarged to have the same shape as the larger array. For example, via broadcasting: * If an operand requires a size `[6]` tensor, a size `[1]` or a size `[]` tensor can serve as an operand. * If an operation requires a size `[4, 6]` tensor, any of the following sizes can serve as an operand: * `[1, 6]` * `[6]` * `[]` * If an operation requires a size `[3, 5, 6]` tensor, any of the following sizes can serve as an operand: * `[1, 5, 6]` * `[3, 1, 6]` * `[3, 5, 1]` * `[1, 1, 1]` * `[5, 6]` * `[1, 6]` * `[6]` * `[1]` * `[]` **NOTE:** When a tensor is broadcast, its entries are conceptually **copied**. (They are not actually copied for performance reasons. Broadcasting was invented as a performance optimization.) The full broadcasting ruleset is well described in the easy-to-read [numpy broadcasting documentation](http://docs.scipy.org/doc/numpy-1.10.1/user/basics.broadcasting.html). The following code performs the same tensor arithmetic as before, but instead uses scalar values (instead of vectors containing all `1`s or all `2`s) and broadcasting. ``` primes = tf.constant([2, 3, 5, 7, 11, 13], dtype=tf.int32) print("primes:", primes) one = tf.constant(1, dtype=tf.int32) print("one:", one) just_beyond_primes = tf.add(primes, one) print("just_beyond_primes:", just_beyond_primes) two = tf.constant(2, dtype=tf.int32) primes_doubled = primes * two print("primes_doubled:", primes_doubled) ``` ### Exercise #1: Arithmetic over vectors. Perform vector arithmetic to create a "just_under_primes_squared" vector, where the `i`th element is equal to the `i`th element in `primes` squared, minus 1. For example, the second element would be equal to `3 * 3 - 1 = 8`. Make use of either the `tf.multiply` or `tf.pow` ops to square the value of each element in the `primes` vector. ``` # Write your code for Task 1 here. 
just_under_primes_squared = tf.subtract(tf.pow(primes, 2), one) print("just_under_primes_squared:", just_under_primes_squared) ``` ### Solution Click below for a solution. ``` # Task: Square each element in the primes vector, then subtract 1. def solution(primes): primes_squared = tf.multiply(primes, primes) neg_one = tf.constant(-1, dtype=tf.int32) just_under_primes_squared = tf.add(primes_squared, neg_one) return just_under_primes_squared def alternative_solution(primes): primes_squared = tf.pow(primes, 2) one = tf.constant(1, dtype=tf.int32) just_under_primes_squared = tf.subtract(primes_squared, one) return just_under_primes_squared primes = tf.constant([2, 3, 5, 7, 11, 13], dtype=tf.int32) just_under_primes_squared = solution(primes) print("just_under_primes_squared:", just_under_primes_squared) ``` ## Matrix Multiplication In linear algebra, when multiplying two matrices, the number of *columns* of the first matrix must equal the number of *rows* in the second matrix. - It is **_valid_** to multiply a `3x4` matrix by a `4x2` matrix. This will result in a `3x2` matrix. - It is **_invalid_** to multiply a `4x2` matrix by a `3x4` matrix. ``` # A 3x4 matrix (2-d tensor). x = tf.constant([[5, 2, 4, 3], [5, 1, 6, -2], [-1, 3, -1, -2]], dtype=tf.int32) # A 4x2 matrix (2-d tensor). y = tf.constant([[2, 2], [3, 5], [4, 5], [1, 6]], dtype=tf.int32) # Multiply `x` by `y`; result is 3x2 matrix. matrix_multiply_result = tf.matmul(x, y) print(matrix_multiply_result) ``` ## Tensor Reshaping With tensor addition and matrix multiplication each imposing constraints on operands, TensorFlow programmers must frequently reshape tensors. You can use the `tf.reshape` method to reshape a tensor. For example, you can reshape a 8x2 tensor into a 2x8 tensor or a 4x4 tensor: ``` # Create an 8x2 matrix (2-D tensor). 
matrix = tf.constant( [[1, 2], [3, 4], [5, 6], [7, 8], [9, 10], [11, 12], [13, 14], [15, 16]], dtype=tf.int32) reshaped_2x8_matrix = tf.reshape(matrix, [2, 8]) reshaped_4x4_matrix = tf.reshape(matrix, [4, 4]) print("Original matrix (8x2):") print(matrix.numpy()) print("Reshaped matrix (2x8):") print(reshaped_2x8_matrix.numpy()) print("Reshaped matrix (4x4):") print(reshaped_4x4_matrix.numpy()) ``` You can also use `tf.reshape` to change the number of dimensions (the "rank") of the tensor. For example, you could reshape that 8x2 tensor into a 3-D 2x2x4 tensor or a 1-D 16-element tensor. ``` # Create an 8x2 matrix (2-D tensor). matrix = tf.constant( [[1, 2], [3, 4], [5, 6], [7, 8], [9, 10], [11, 12], [13, 14], [15, 16]], dtype=tf.int32) reshaped_2x2x4_tensor = tf.reshape(matrix, [2, 2, 4]) one_dimensional_vector = tf.reshape(matrix, [16]) print("Original matrix (8x2):") print(matrix.numpy()) print("Reshaped 3-D tensor (2x2x4):") print(reshaped_2x2x4_tensor.numpy()) print("1-D vector:") print(one_dimensional_vector.numpy()) ``` ### Exercise #2: Reshape two tensors in order to multiply them. The following two vectors are incompatible for matrix multiplication: * `a = tf.constant([5, 3, 2, 7, 1, 4])` * `b = tf.constant([4, 6, 3])` Reshape these vectors into compatible operands for matrix multiplication. Then, invoke a matrix multiplication operation on the reshaped tensors. ``` # Write your code for Task 2 here. a = tf.constant([5, 3, 2, 7, 1, 4], dtype=tf.int32) b = tf.constant([4, 6, 3], dtype=tf.int32) a_reshaped = tf.reshape(a, [6, 1]) b_reshaped = tf.reshape(b, [1, 3]) product_a_b = tf.matmul(a_reshaped, b_reshaped) print('Tensor a:') print(a.numpy()) print('Tensor b:') print(b.numpy()) print('Reshaped Tensor a:') print(a_reshaped.numpy()) print('Reshaped Tensor b:') print(b_reshaped.numpy()) print('Matrix multiplication product:') print(product_a_b.numpy()) ``` ### Solution Click below for a solution. 
Remember, when multiplying two matrices, the number of *columns* of the first matrix must equal the number of *rows* in the second matrix.

One possible solution is to reshape `a` into a 2x3 matrix and reshape `b` into a 3x1 matrix, resulting in a 2x1 matrix after multiplication:

```
# Task: Reshape two tensors in order to multiply them
a = tf.constant([5, 3, 2, 7, 1, 4])
b = tf.constant([4, 6, 3])

reshaped_a = tf.reshape(a, [2, 3])
reshaped_b = tf.reshape(b, [3, 1])
c = tf.matmul(reshaped_a, reshaped_b)

print("reshaped_a (2x3):")
print(reshaped_a.numpy())
print("reshaped_b (3x1):")
print(reshaped_b.numpy())
print("reshaped_a x reshaped_b (2x1):")
print(c.numpy())
```

An alternative solution would be to reshape `a` into a 6x1 matrix and `b` into a 1x3 matrix, resulting in a 6x3 matrix after multiplication.

## Variables, Initialization and Assignment

So far, all the operations we performed were on static values (`tf.constant`); calling `numpy()` always returned the same result. TensorFlow allows you to define `Variable` objects, whose values can be changed.

When creating a variable, you can set an initial value explicitly, or you can use an initializer (like a distribution):

```
# Create a scalar variable with the initial value 3.
v = tf.contrib.eager.Variable([3])

# Create a vector variable of shape [1, 4], with random initial values,
# sampled from a normal distribution with mean 1 and standard deviation 0.35.
w = tf.contrib.eager.Variable(tf.random_normal([1, 4], mean=1.0, stddev=0.35)) print("v:", v.numpy()) print("w:", w.numpy()) ``` To change the value of a variable, use the `assign` op: ``` v = tf.contrib.eager.Variable([3]) print(v.numpy()) tf.assign(v, [7]) print(v.numpy()) v.assign([5]) print(v.numpy()) ``` When assigning a new value to a variable, its shape must be equal to its previous shape: ``` v = tf.contrib.eager.Variable([[1, 2, 3], [4, 5, 6]]) print(v.numpy()) try: print("Assigning [7, 8, 9] to v") v.assign([7, 8, 9]) except ValueError as e: print("Exception:", e) ``` There are many more topics about variables that we didn't cover here, such as loading and storing. To learn more, see the [TensorFlow docs](https://www.tensorflow.org/programmers_guide/variables). ### Exercise #3: Simulate 10 rolls of two dice. Create a dice simulation, which generates a `10x3` 2-D tensor in which: * Columns `1` and `2` each hold one throw of one six-sided die (with values 1–6). * Column `3` holds the sum of Columns `1` and `2` on the same row. For example, the first row might have the following values: * Column `1` holds `4` * Column `2` holds `3` * Column `3` holds `7` You'll need to explore the [TensorFlow API reference](https://www.tensorflow.org/api_docs/python/tf) to solve this task. ``` # Write your code for Task 3 here. die1 = tf.contrib.eager.Variable(tf.random_uniform([10, 1], minval=1, maxval=7, dtype=tf.int32, seed=None, name='die1')) die2 = tf.contrib.eager.Variable(tf.random_uniform([10, 1], minval=1, maxval=7, dtype=tf.int32, seed=None, name='die2')) sum_d1_d2 = tf.add(die1, die2) dice_sim = tf.concat(values=[die1, die2, sum_d1_d2], axis=1) print(dice_sim.numpy()) ``` ### Solution Click below for a solution. We're going to place dice throws inside two separate 10x1 matrices, `die1` and `die2`. 
The summation of the dice rolls will be stored in `dice_sum`, then the resulting 10x3 matrix will be created by *concatenating* the three 10x1 matrices together into a single matrix. Alternatively, we could have placed dice throws inside a single 10x2 matrix, but adding different columns of the same matrix would be more complicated. We also could have placed dice throws inside two 1-D tensors (vectors), but doing so would require transposing the result. ``` # Task: Simulate 10 throws of two dice. Store the results in a 10x3 matrix. die1 = tf.contrib.eager.Variable( tf.random_uniform([10, 1], minval=1, maxval=7, dtype=tf.int32)) die2 = tf.contrib.eager.Variable( tf.random_uniform([10, 1], minval=1, maxval=7, dtype=tf.int32)) dice_sum = tf.add(die1, die2) resulting_matrix = tf.concat(values=[die1, die2, dice_sum], axis=1) print(resulting_matrix.numpy()) ```
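For comparison, the same simulation can be written in a few lines of plain NumPy. This is not part of the TensorFlow exercise itself and assumes NumPy is available in the runtime:

```python
import numpy as np

# Two columns of die rolls (values 1-6) plus their row-wise sum,
# mirroring the 10x3 matrix built with TensorFlow above.
rng = np.random.default_rng(0)
die1 = rng.integers(1, 7, size=(10, 1))
die2 = rng.integers(1, 7, size=(10, 1))
resulting_matrix = np.concatenate([die1, die2, die1 + die2], axis=1)
print(resulting_matrix)
```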
### Imports and Helper Functions ``` import numpy as np import dask_xgboost as dxgb_gpu import dask import dask_cudf from dask.delayed import delayed from dask.distributed import Client, wait import xgboost as xgb import cudf from cudf.dataframe import DataFrame from collections import OrderedDict import gc from glob import glob import os import subprocess cmd = "hostname --all-ip-addresses" process = subprocess.Popen(cmd.split(), stdout=subprocess.PIPE) output, error = process.communicate() IPADDR = str(output.decode()).split()[0] cmd = "../utils/dask-setup.sh 0" process = subprocess.Popen(cmd.split(), stdout=subprocess.PIPE) output, error = process.communicate() cmd = "../utils/dask-setup.sh cudf 8 8786 8787 8790 " + str(IPADDR) + " MASTER" process = subprocess.Popen(cmd.split(), stdout=subprocess.PIPE) output, error = process.communicate() print(output.decode()) import dask from dask.delayed import delayed from dask.distributed import Client, wait _client = IPADDR + str(":8786") client = dask.distributed.Client(_client) client # to download data for this notebook, visit https://rapidsai.github.io/demos/datasets/mortgage-data and update the following paths accordingly acq_data_path = "/path/to/mortgage/acq" perf_data_path = "/path/to/mortgage/perf" col_names_path = "/path/to/mortgage/names.csv" start_year = 2000 end_year = 2016 # end_year is inclusive part_count = 16 # the number of data files to train against def initialize_rmm_pool(): from librmm_cffi import librmm_config as rmm_cfg rmm_cfg.use_pool_allocator = True #rmm_cfg.initial_pool_size = 2<<30 # set to 2GiB. 
Default is 1/2 total GPU memory import cudf return cudf._gdf.rmm_initialize() def initialize_rmm_no_pool(): from librmm_cffi import librmm_config as rmm_cfg rmm_cfg.use_pool_allocator = False import cudf return cudf._gdf.rmm_initialize() client.run(initialize_rmm_pool) def run_dask_task(func, **kwargs): task = func(**kwargs) return task def process_quarter_gpu(year=2000, quarter=1, perf_file=""): ml_arrays = run_dask_task(delayed(run_gpu_workflow), quarter=quarter, year=year, perf_file=perf_file) return client.compute(ml_arrays, optimize_graph=False, fifo_timeout="0ms") def null_workaround(df, **kwargs): for column, data_type in df.dtypes.items(): if str(data_type) == "category": df[column] = df[column].astype('int32').fillna(-1) if str(data_type) in ['int8', 'int16', 'int32', 'int64', 'float32', 'float64']: df[column] = df[column].fillna(-1) return df def run_gpu_workflow(quarter=1, year=2000, perf_file="", **kwargs): names = gpu_load_names() acq_gdf = gpu_load_acquisition_csv(acquisition_path= acq_data_path + "/Acquisition_" + str(year) + "Q" + str(quarter) + ".txt") acq_gdf = acq_gdf.merge(names, how='left', on=['seller_name']) acq_gdf.drop_column('seller_name') acq_gdf['seller_name'] = acq_gdf['new'] acq_gdf.drop_column('new') perf_df_tmp = gpu_load_performance_csv(perf_file) gdf = perf_df_tmp everdf = create_ever_features(gdf) delinq_merge = create_delinq_features(gdf) everdf = join_ever_delinq_features(everdf, delinq_merge) del(delinq_merge) joined_df = create_joined_df(gdf, everdf) testdf = create_12_mon_features(joined_df) joined_df = combine_joined_12_mon(joined_df, testdf) del(testdf) perf_df = final_performance_delinquency(gdf, joined_df) del(gdf, joined_df) final_gdf = join_perf_acq_gdfs(perf_df, acq_gdf) del(perf_df) del(acq_gdf) final_gdf = last_mile_cleaning(final_gdf) return final_gdf def gpu_load_performance_csv(performance_path, **kwargs): """ Loads performance data Returns ------- GPU DataFrame """ cols = [ "loan_id", "monthly_reporting_period", 
"servicer", "interest_rate", "current_actual_upb", "loan_age", "remaining_months_to_legal_maturity", "adj_remaining_months_to_maturity", "maturity_date", "msa", "current_loan_delinquency_status", "mod_flag", "zero_balance_code", "zero_balance_effective_date", "last_paid_installment_date", "foreclosed_after", "disposition_date", "foreclosure_costs", "prop_preservation_and_repair_costs", "asset_recovery_costs", "misc_holding_expenses", "holding_taxes", "net_sale_proceeds", "credit_enhancement_proceeds", "repurchase_make_whole_proceeds", "other_foreclosure_proceeds", "non_interest_bearing_upb", "principal_forgiveness_upb", "repurchase_make_whole_proceeds_flag", "foreclosure_principal_write_off_amount", "servicing_activity_indicator" ] dtypes = OrderedDict([ ("loan_id", "int64"), ("monthly_reporting_period", "date"), ("servicer", "category"), ("interest_rate", "float64"), ("current_actual_upb", "float64"), ("loan_age", "float64"), ("remaining_months_to_legal_maturity", "float64"), ("adj_remaining_months_to_maturity", "float64"), ("maturity_date", "date"), ("msa", "float64"), ("current_loan_delinquency_status", "int32"), ("mod_flag", "category"), ("zero_balance_code", "category"), ("zero_balance_effective_date", "date"), ("last_paid_installment_date", "date"), ("foreclosed_after", "date"), ("disposition_date", "date"), ("foreclosure_costs", "float64"), ("prop_preservation_and_repair_costs", "float64"), ("asset_recovery_costs", "float64"), ("misc_holding_expenses", "float64"), ("holding_taxes", "float64"), ("net_sale_proceeds", "float64"), ("credit_enhancement_proceeds", "float64"), ("repurchase_make_whole_proceeds", "float64"), ("other_foreclosure_proceeds", "float64"), ("non_interest_bearing_upb", "float64"), ("principal_forgiveness_upb", "float64"), ("repurchase_make_whole_proceeds_flag", "category"), ("foreclosure_principal_write_off_amount", "float64"), ("servicing_activity_indicator", "category") ]) print(performance_path) return cudf.read_csv(performance_path, 
names=cols, delimiter='|', dtype=list(dtypes.values()), skiprows=1) def gpu_load_acquisition_csv(acquisition_path, **kwargs): """ Loads acquisition data Returns ------- GPU DataFrame """ cols = [ 'loan_id', 'orig_channel', 'seller_name', 'orig_interest_rate', 'orig_upb', 'orig_loan_term', 'orig_date', 'first_pay_date', 'orig_ltv', 'orig_cltv', 'num_borrowers', 'dti', 'borrower_credit_score', 'first_home_buyer', 'loan_purpose', 'property_type', 'num_units', 'occupancy_status', 'property_state', 'zip', 'mortgage_insurance_percent', 'product_type', 'coborrow_credit_score', 'mortgage_insurance_type', 'relocation_mortgage_indicator' ] dtypes = OrderedDict([ ("loan_id", "int64"), ("orig_channel", "category"), ("seller_name", "category"), ("orig_interest_rate", "float64"), ("orig_upb", "int64"), ("orig_loan_term", "int64"), ("orig_date", "date"), ("first_pay_date", "date"), ("orig_ltv", "float64"), ("orig_cltv", "float64"), ("num_borrowers", "float64"), ("dti", "float64"), ("borrower_credit_score", "float64"), ("first_home_buyer", "category"), ("loan_purpose", "category"), ("property_type", "category"), ("num_units", "int64"), ("occupancy_status", "category"), ("property_state", "category"), ("zip", "int64"), ("mortgage_insurance_percent", "float64"), ("product_type", "category"), ("coborrow_credit_score", "float64"), ("mortgage_insurance_type", "float64"), ("relocation_mortgage_indicator", "category") ]) print(acquisition_path) return cudf.read_csv(acquisition_path, names=cols, delimiter='|', dtype=list(dtypes.values()), skiprows=1) def gpu_load_names(**kwargs): """ Loads names used for renaming the banks Returns ------- GPU DataFrame """ cols = [ 'seller_name', 'new' ] dtypes = OrderedDict([ ("seller_name", "category"), ("new", "category"), ]) return cudf.read_csv(col_names_path, names=cols, delimiter='|', dtype=list(dtypes.values()), skiprows=1) ``` ### GPU ETL and Feature Engineering Functions ``` def create_ever_features(gdf, **kwargs): everdf = gdf[['loan_id', 
'current_loan_delinquency_status']] everdf = everdf.groupby('loan_id', method='hash').max() del(gdf) everdf['ever_30'] = (everdf['max_current_loan_delinquency_status'] >= 1).astype('int8') everdf['ever_90'] = (everdf['max_current_loan_delinquency_status'] >= 3).astype('int8') everdf['ever_180'] = (everdf['max_current_loan_delinquency_status'] >= 6).astype('int8') everdf.drop_column('max_current_loan_delinquency_status') return everdf def create_delinq_features(gdf, **kwargs): delinq_gdf = gdf[['loan_id', 'monthly_reporting_period', 'current_loan_delinquency_status']] del(gdf) delinq_30 = delinq_gdf.query('current_loan_delinquency_status >= 1')[['loan_id', 'monthly_reporting_period']].groupby('loan_id', method='hash').min() delinq_30['delinquency_30'] = delinq_30['min_monthly_reporting_period'] delinq_30.drop_column('min_monthly_reporting_period') delinq_90 = delinq_gdf.query('current_loan_delinquency_status >= 3')[['loan_id', 'monthly_reporting_period']].groupby('loan_id', method='hash').min() delinq_90['delinquency_90'] = delinq_90['min_monthly_reporting_period'] delinq_90.drop_column('min_monthly_reporting_period') delinq_180 = delinq_gdf.query('current_loan_delinquency_status >= 6')[['loan_id', 'monthly_reporting_period']].groupby('loan_id', method='hash').min() delinq_180['delinquency_180'] = delinq_180['min_monthly_reporting_period'] delinq_180.drop_column('min_monthly_reporting_period') del(delinq_gdf) delinq_merge = delinq_30.merge(delinq_90, how='left', on=['loan_id'], type='hash') delinq_merge['delinquency_90'] = delinq_merge['delinquency_90'].fillna(np.dtype('datetime64[ms]').type('1970-01-01').astype('datetime64[ms]')) delinq_merge = delinq_merge.merge(delinq_180, how='left', on=['loan_id'], type='hash') delinq_merge['delinquency_180'] = delinq_merge['delinquency_180'].fillna(np.dtype('datetime64[ms]').type('1970-01-01').astype('datetime64[ms]')) del(delinq_30) del(delinq_90) del(delinq_180) return delinq_merge def join_ever_delinq_features(everdf_tmp, 
delinq_merge, **kwargs): everdf = everdf_tmp.merge(delinq_merge, on=['loan_id'], how='left', type='hash') del(everdf_tmp) del(delinq_merge) everdf['delinquency_30'] = everdf['delinquency_30'].fillna(np.dtype('datetime64[ms]').type('1970-01-01').astype('datetime64[ms]')) everdf['delinquency_90'] = everdf['delinquency_90'].fillna(np.dtype('datetime64[ms]').type('1970-01-01').astype('datetime64[ms]')) everdf['delinquency_180'] = everdf['delinquency_180'].fillna(np.dtype('datetime64[ms]').type('1970-01-01').astype('datetime64[ms]')) return everdf def create_joined_df(gdf, everdf, **kwargs): test = gdf[['loan_id', 'monthly_reporting_period', 'current_loan_delinquency_status', 'current_actual_upb']] del(gdf) test['timestamp'] = test['monthly_reporting_period'] test.drop_column('monthly_reporting_period') test['timestamp_month'] = test['timestamp'].dt.month test['timestamp_year'] = test['timestamp'].dt.year test['delinquency_12'] = test['current_loan_delinquency_status'] test.drop_column('current_loan_delinquency_status') test['upb_12'] = test['current_actual_upb'] test.drop_column('current_actual_upb') test['upb_12'] = test['upb_12'].fillna(999999999) test['delinquency_12'] = test['delinquency_12'].fillna(-1) joined_df = test.merge(everdf, how='left', on=['loan_id'], type='hash') del(everdf) del(test) joined_df['ever_30'] = joined_df['ever_30'].fillna(-1) joined_df['ever_90'] = joined_df['ever_90'].fillna(-1) joined_df['ever_180'] = joined_df['ever_180'].fillna(-1) joined_df['delinquency_30'] = joined_df['delinquency_30'].fillna(-1) joined_df['delinquency_90'] = joined_df['delinquency_90'].fillna(-1) joined_df['delinquency_180'] = joined_df['delinquency_180'].fillna(-1) joined_df['timestamp_year'] = joined_df['timestamp_year'].astype('int32') joined_df['timestamp_month'] = joined_df['timestamp_month'].astype('int32') return joined_df def create_12_mon_features(joined_df, **kwargs): testdfs = [] n_months = 12 for y in range(1, n_months + 1): tmpdf = joined_df[['loan_id', 
'timestamp_year', 'timestamp_month', 'delinquency_12', 'upb_12']] tmpdf['josh_months'] = tmpdf['timestamp_year'] * 12 + tmpdf['timestamp_month'] tmpdf['josh_mody_n'] = ((tmpdf['josh_months'].astype('float64') - 24000 - y) / 12).floor() tmpdf = tmpdf.groupby(['loan_id', 'josh_mody_n'], method='hash').agg({'delinquency_12': 'max','upb_12': 'min'}) tmpdf['delinquency_12'] = (tmpdf['max_delinquency_12']>3).astype('int32') tmpdf['delinquency_12'] +=(tmpdf['min_upb_12']==0).astype('int32') tmpdf.drop_column('max_delinquency_12') tmpdf['upb_12'] = tmpdf['min_upb_12'] tmpdf.drop_column('min_upb_12') tmpdf['timestamp_year'] = (((tmpdf['josh_mody_n'] * n_months) + 24000 + (y - 1)) / 12).floor().astype('int16') tmpdf['timestamp_month'] = np.int8(y) tmpdf.drop_column('josh_mody_n') testdfs.append(tmpdf) del(tmpdf) del(joined_df) return cudf.concat(testdfs) def combine_joined_12_mon(joined_df, testdf, **kwargs): joined_df.drop_column('delinquency_12') joined_df.drop_column('upb_12') joined_df['timestamp_year'] = joined_df['timestamp_year'].astype('int16') joined_df['timestamp_month'] = joined_df['timestamp_month'].astype('int8') return joined_df.merge(testdf, how='left', on=['loan_id', 'timestamp_year', 'timestamp_month'], type='hash') def final_performance_delinquency(gdf, joined_df, **kwargs): merged = null_workaround(gdf) joined_df = null_workaround(joined_df) merged['timestamp_month'] = merged['monthly_reporting_period'].dt.month merged['timestamp_month'] = merged['timestamp_month'].astype('int8') merged['timestamp_year'] = merged['monthly_reporting_period'].dt.year merged['timestamp_year'] = merged['timestamp_year'].astype('int16') merged = merged.merge(joined_df, how='left', on=['loan_id', 'timestamp_year', 'timestamp_month'], type='hash') merged.drop_column('timestamp_year') merged.drop_column('timestamp_month') return merged def join_perf_acq_gdfs(perf, acq, **kwargs): perf = null_workaround(perf) acq = null_workaround(acq) return perf.merge(acq, how='left', 
on=['loan_id'], type='hash') def last_mile_cleaning(df, **kwargs): drop_list = [ 'loan_id', 'orig_date', 'first_pay_date', 'seller_name', 'monthly_reporting_period', 'last_paid_installment_date', 'maturity_date', 'ever_30', 'ever_90', 'ever_180', 'delinquency_30', 'delinquency_90', 'delinquency_180', 'upb_12', 'zero_balance_effective_date','foreclosed_after', 'disposition_date','timestamp' ] for column in drop_list: df.drop_column(column) for col, dtype in df.dtypes.iteritems(): if str(dtype)=='category': df[col] = df[col].cat.codes df[col] = df[col].astype('float32') df['delinquency_12'] = df['delinquency_12'] > 0 df['delinquency_12'] = df['delinquency_12'].fillna(False).astype('int32') for column in df.columns: df[column] = df[column].fillna(-1) return df.to_arrow(index=False) ``` ## Process the data using the functions ### Dask + cuDF multi-year ``` %%time # NOTE: The ETL calculates additional features which are then dropped before creating the XGBoost DMatrix. # This can be optimized to avoid calculating the dropped features. 
gpu_dfs = [] gpu_time = 0 quarter = 1 year = start_year count = 0 while year <= end_year: for file in glob(os.path.join(perf_data_path + "/Performance_" + str(year) + "Q" + str(quarter) + "*")): gpu_dfs.append(process_quarter_gpu(year=year, quarter=quarter, perf_file=file)) count += 1 quarter += 1 if quarter == 5: year += 1 quarter = 1 wait(gpu_dfs) client.run(cudf._gdf.rmm_finalize) client.run(initialize_rmm_no_pool) ``` ### GPU Machine Learning ``` dxgb_gpu_params = { 'nround': 100, 'max_depth': 8, 'max_leaves': 2**8, 'alpha': 0.9, 'eta': 0.1, 'gamma': 0.1, 'learning_rate': 0.1, 'subsample': 1, 'reg_lambda': 1, 'scale_pos_weight': 2, 'min_child_weight': 30, 'tree_method': 'gpu_hist', 'n_gpus': 1, 'distributed_dask': True, 'loss': 'ls', 'objective': 'gpu:reg:linear', 'max_features': 'auto', 'criterion': 'friedman_mse', 'grow_policy': 'lossguide', 'verbose': True } %%time gpu_dfs = [delayed(DataFrame.from_arrow)(gpu_df) for gpu_df in gpu_dfs[:part_count]] gpu_dfs = [gpu_df for gpu_df in gpu_dfs] wait(gpu_dfs) tmp_map = [(gpu_df, list(client.who_has(gpu_df).values())[0]) for gpu_df in gpu_dfs] new_map = {} for key, value in tmp_map: if value not in new_map: new_map[value] = [key] else: new_map[value].append(key) del(tmp_map) gpu_dfs = [] for list_delayed in new_map.values(): gpu_dfs.append(delayed(cudf.concat)(list_delayed)) del(new_map) gpu_dfs = [(gpu_df[['delinquency_12']], gpu_df[delayed(list)(gpu_df.columns.difference(['delinquency_12']))]) for gpu_df in gpu_dfs] gpu_dfs = [(gpu_df[0].persist(), gpu_df[1].persist()) for gpu_df in gpu_dfs] gpu_dfs = [dask.delayed(xgb.DMatrix)(gpu_df[1], gpu_df[0]) for gpu_df in gpu_dfs] gpu_dfs = [gpu_df.persist() for gpu_df in gpu_dfs] gc.collect() wait(gpu_dfs) %%time labels = None bst = dxgb_gpu.train(client, dxgb_gpu_params, gpu_dfs, labels, num_boost_round=dxgb_gpu_params['nround']) ```
## Import modules ``` import numpy as np import matplotlib.pyplot as plt import os import time from sklearn.linear_model import SGDRegressor, LinearRegression from sklearn.preprocessing import StandardScaler from sklearn.pipeline import make_pipeline from sklearn.metrics import mean_squared_error ``` ## Utils ``` def load_matrix(path: str, verbose: bool = True, num_to_print: int = 3) -> np.ndarray: matrix = np.loadtxt(path, delimiter=",") if verbose: print("Loaded matrix from", os.path.basename(path)) print("Shape:", matrix.shape) num_to_print = min(num_to_print, len(matrix)) print("Example:\n", matrix[:num_to_print], end="\n\n") return matrix %cd ../../ ``` ## Generate data ``` !python demo/gen_data.py --help !python demo/gen_data.py --num-train 10000 --num-eval 1000 --num-features 50 --seed 42 --out-dir data ``` ## Look at data ``` !ls data x_train = load_matrix("data/x_train.csv") y_train = load_matrix("data/y_train.csv") x_eval = load_matrix("data/x_eval.csv") y_eval = load_matrix("data/y_eval.csv") ``` ## Run Linear regression ``` !./build/bin/gradient_descent -h ``` ### SGD ``` !cat examples/serial_config.cfg !./build/bin/gradient_descent --config examples/serial_config.cfg y_pred_sgd = load_matrix("output/serial_pred.csv", verbose=False) print("MSE:", mean_squared_error(y_pred_sgd.squeeze(), y_eval)) ``` ### Parallel SGD ``` !cat examples/parallel_config.cfg !./build/bin/gradient_descent --config examples/parallel_config.cfg y_pred_parallel_sgd = load_matrix("output/parallel_pred.csv", verbose=False) print("MSE:", mean_squared_error(y_pred_parallel_sgd, y_eval)) ``` ### Sklearn ``` regressor = make_pipeline(StandardScaler(), LinearRegression()) # regressor = make_pipeline( # StandardScaler(), # SGDRegressor(alpha=0.01, learning_rate="constant", eta0=0.001), # ) # start = time.perf_counter_ns() regressor.fit(x_train, y_train.squeeze()) # end = time.perf_counter_ns() # print("SGD:", (end - start) / 1e6, "ms") y_pred = regressor.predict(x_eval) print("MSE:", 
mean_squared_error(y_pred, y_eval)) ``` ## Cost decay comparison ### Prepare data ``` parallel_cost = load_matrix("output/parallel_cost.csv", verbose=False) serial_cost = load_matrix("output/serial_cost.csv", verbose=False) num_parallel = parallel_cost.shape[0] num_serial = serial_cost.shape[0] serial_idxs = np.arange(num_serial) parallel_idxs = np.linspace(0, num_serial, num_parallel) ``` ### Plot ``` plt.figure(figsize=(18, 9)) plt.plot(parallel_idxs, parallel_cost, color="red", label="parallel") plt.plot(serial_idxs, serial_cost, color="blue", label="serial") plt.legend() plt.xlabel("Epoch") plt.ylabel("Cost") plt.show() ``` ## Benchmark ``` !python demo/benchmark.py -h !cat examples/benchmark_config.json !python demo/benchmark.py build/bin/gradient_descent \ --serial-config examples/serial_config.cfg \ --parallel-config examples/parallel_config.cfg \ --config examples/benchmark_config.json ```
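The cost-decay comparison above stretches the few parallel-SGD checkpoints over the serial epoch axis with `np.linspace`; that alignment can be checked in isolation with made-up cost arrays (a sketch, not the notebook's real outputs):

```python
import numpy as np

# made-up cost histories: the serial run logs every epoch,
# the parallel run logs far fewer checkpoints
serial_cost = np.linspace(10.0, 1.0, 100)
parallel_cost = np.linspace(10.0, 1.0, 10)

num_serial = serial_cost.shape[0]
num_parallel = parallel_cost.shape[0]

serial_idxs = np.arange(num_serial)
# spread the parallel checkpoints evenly across the serial epoch axis
parallel_idxs = np.linspace(0, num_serial, num_parallel)

# both curves now cover the same x-range, so they can share one plot
print(float(parallel_idxs[0]), float(parallel_idxs[-1]))  # 0.0 100.0
```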
``` import pandas as pd import numpy as np %matplotlib inline if 'bigDataFrame' in globals(): print("Exist, do nothing!") else: print("Read data.") bigDataFrame = pd.read_pickle("../output/bigDataFrame.pkl") bigDataFrame.rename(columns={"PM2.5": "PM25"}, inplace=True) bigDataFrame.head() pollutedPlaces = bigDataFrame['2015-01-01 00:00:00':'2015-12-31 23:00:00'].idxmax() print(pollutedPlaces) pollutedPlaces = set([x[1] for x in pollutedPlaces]) pollutedPlaces #reducedDataFrame = bigDataFrame['2015-01-01 00:00:00':'2015-12-31 23:00:00'].loc[(slice(None),pollutedPlaces), :] reducedDataFrame = bigDataFrame['2015-01-01 00:00:00':'2015-12-31 23:00:00'].loc[(slice(None), slice(None)), :] hours = len(reducedDataFrame.index.get_level_values("Hour").unique()) def C6H6qual (value): if (value < 0.0): return np.NaN elif (value >= 0.0 and value <= 5.0): return "1 Very good" elif (value > 5.0 and value <= 10.0): return "2 Good" elif (value > 10.0 and value <= 15.0): return "3 Moderate" elif (value > 15.0 and value <= 20.0): return "4 Sufficient" elif (value > 20.0 and value <= 50.0): return "5 Bad" elif (value > 50.0): return "6 Very bad" else: return value def COqual (value): if (value < 0.0): return np.NaN elif (value >= 0.0 and value <= 2.0): return "1 Very good" elif (value > 2.0 and value <= 6.0): return "2 Good" elif (value > 6.0 and value <= 10.0): return "3 Moderate" elif (value > 10.0 and value <= 14.0): return "4 Sufficient" elif (value > 14.0 and value <= 20.0): return "5 Bad" elif (value > 20.0): return "6 Very bad" else: return value def NO2qual (value): if (value < 0.0): return np.NaN elif (value >= 0.0 and value <= 40.0): return "1 Very good" elif (value > 40.0 and value <= 100.0): return "2 Good" elif (value > 100.0 and value <= 150.0): return "3 Moderate" elif (value > 150.0 and value <= 200.0): return "4 Sufficient" elif (value > 200.0 and value <= 400.0): return "5 Bad" elif (value > 400.0): return "6 Very bad" else: return value def O3qual (value): if (value < 
0.0): return np.NaN elif (value >= 0.0 and value <= 30.0): return "1 Very good" elif (value > 30.0 and value <= 70.0): return "2 Good" elif (value > 70.0 and value <= 120.0): return "3 Moderate" elif (value > 120.0 and value <= 160.0): return "4 Sufficient" elif (value > 160.0 and value <= 240.0): return "5 Bad" elif (value > 240.0): return "6 Very bad" else: return value def PM10qual (value): if (value < 0.0): return np.NaN elif (value >= 0.0 and value <= 20.0): return "1 Very good" elif (value > 20.0 and value <= 60.0): return "2 Good" elif (value > 60.0 and value <= 100.0): return "3 Moderate" elif (value > 100.0 and value <= 140.0): return "4 Sufficient" elif (value > 140.0 and value <= 200.0): return "5 Bad" elif (value > 200.0): return "6 Very bad" else: return value def PM25qual (value): if (value < 0.0): return np.NaN elif (value >= 0.0 and value <= 12.0): return "1 Very good" elif (value > 12.0 and value <= 36.0): return "2 Good" elif (value > 36.0 and value <= 60.0): return "3 Moderate" elif (value > 60.0 and value <= 84.0): return "4 Sufficient" elif (value > 84.0 and value <= 120.0): return "5 Bad" elif (value > 120.0): return "6 Very bad" else: return value def SO2qual (value): if (value < 0.0): return np.NaN elif (value >= 0.0 and value <= 50.0): return "1 Very good" elif (value > 50.0 and value <= 100.0): return "2 Good" elif (value > 100.0 and value <= 200.0): return "3 Moderate" elif (value > 200.0 and value <= 350.0): return "4 Sufficient" elif (value > 350.0 and value <= 500.0): return "5 Bad" elif (value > 500.0): return "6 Very bad" else: return value descriptiveFrame = pd.DataFrame() for pollutant in bigDataFrame.columns: reducedDataFrame[pollutant+".desc"] = reducedDataFrame[pollutant].apply(lambda x: globals()[pollutant+"qual"](x)) tmpseries = reducedDataFrame.groupby(level="Station")[pollutant+".desc"].value_counts(dropna = False).apply(lambda x: (x/float(hours))*100) descriptiveFrame = pd.concat([descriptiveFrame, tmpseries], axis=1) 
qualities = sorted(descriptiveFrame.index.get_level_values(1).unique().tolist()) for quality in qualities: reducedDataFrame.loc[(reducedDataFrame[["C6H6.desc", "CO.desc", "NO2.desc", "O3.desc", "PM10.desc", "PM25.desc", "SO2.desc"]] == quality).any(axis=1),"overall"] = quality descriptiveFrame.columns = bigDataFrame.columns overall = reducedDataFrame.groupby(level="Station")["overall"].value_counts(dropna = False).apply(lambda x: (x/float(hours))*100) descriptiveFrame = pd.concat([descriptiveFrame, overall], axis=1) descriptiveFrame.rename(columns={0: "overall"}, inplace=True) worstPlace = descriptiveFrame.xs('6 Very bad', level=1)["overall"].idxmax() bestPlace = descriptiveFrame.xs('1 Very good', level=1)["overall"].idxmax() descriptiveFrame.xs(worstPlace, level=0) descriptiveFrame.xs(bestPlace, level=0) worstPlace, bestPlace ``` # Part for Tricity and Kashubia ``` stations = pd.read_excel("../input/Metadane_wer20160914.xlsx") coolStation = [u'Gdańsk', u'Gdynia', u'Sopot', u'Kościerzyna'] selectedStations = stations[stations[u'Miejscowość'].isin(coolStation)] stationCodes = set(list(selectedStations[u'Kod stacji'].values) + list(selectedStations[u'Stary Kod stacji'].values)) reducedDataFrame = bigDataFrame['2015-01-01 01:00:00':'2016-01-01 00:00:00'].loc[(slice(None),stationCodes), :] ```
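The seven `*qual` functions above all repeat the same bin-and-label pattern; an equivalent, more compact formulation (a sketch using `pd.cut`, not what the notebook actually runs) for the PM10 thresholds would be:

```python
import numpy as np
import pandas as pd

# PM10 thresholds and labels taken from PM10qual above
bins = [0.0, 20.0, 60.0, 100.0, 140.0, 200.0, np.inf]
labels = ["1 Very good", "2 Good", "3 Moderate",
          "4 Sufficient", "5 Bad", "6 Very bad"]

values = pd.Series([10.0, 75.0, 250.0, -5.0])  # negative readings are invalid
quality = pd.cut(values.where(values >= 0), bins=bins,
                 labels=labels, include_lowest=True)
print(quality.tolist())
```

`values.where(values >= 0)` maps invalid negative readings to NaN, mirroring the `return np.NaN` branch of the hand-written functions.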
# Longest Palindromic Subsequence In this notebook, you'll be tasked with finding the length of the *Longest Palindromic Subsequence* (LPS) given a string of characters. As an example: * With an input string, `ABBDBCACB` * The LPS is `BCACB`, which has `length = 5` In this notebook, we'll focus on finding an optimal solution to the LPS task, using dynamic programming. There will be some similarities to the Longest Common Subsequence (LCS) task, which is outlined in detail in a previous notebook. It is recommended that you start with that notebook before trying out this task. ### Hint **Storing pre-computed values** The LPS algorithm depends on looking at one string and comparing letters to one another. Similar to how you compared two strings in the LCS (Longest Common Subsequence) task, you can compare the characters in just *one* string with one another, using a matrix to store the results of matching characters. For a string of length n, you can create an `n x n` matrix to store the solution to subproblems. In this case, the subproblem is the length of the longest palindromic subsequence, up to a certain point in the string (up to the end of a certain substring). It may be helpful to try filling up a matrix on paper before you start your code solution. If you get stuck with this task, you may look at some example matrices below (see the section titled **Example matrices**), before consulting the complete solution code. 
``` # imports for printing a matrix, nicely import pprint pp = pprint.PrettyPrinter() # complete LPS solution def lps(input_string): n = len(input_string) # create a lookup table to store results of subproblems L = [[0 for x in range(n)] for x in range(n)] # strings of length 1 have LPS length = 1 for i in range(n): L[i][i] = 1 # consider all substrings for s_size in range(2, n+1): for start_idx in range(n-s_size+1): end_idx = start_idx + s_size - 1 if s_size == 2 and input_string[start_idx] == input_string[end_idx]: # match with a substring of length 2 L[start_idx][end_idx] = 2 elif input_string[start_idx] == input_string[end_idx]: # general match case L[start_idx][end_idx] = L[start_idx+1][end_idx-1] + 2 else: # no match case, taking the max of two values L[start_idx][end_idx] = max(L[start_idx][end_idx-1], L[start_idx+1][end_idx]); # debug line # pp.pprint(L) return L[0][n-1] # value in top right corner of matrix def test_function(test_case): string = test_case[0] solution = test_case[1] output = lps(string) print(output) if output == solution: print("Pass") else: print("Fail") string = "TACOCAT" solution = 7 test_case = [string, solution] test_function(test_case) string = 'BANANA' solution = 5 test_case = [string, solution] test_function(test_case) string = 'BANANO' solution = 3 test_case = [string, solution] test_function(test_case) ``` ### Example matrices Example LPS Subproblem matrix 1: ``` input_string = 'BANANO' LPS subproblem matrix: B A N A N O B [[1, 1, 1, 3, 3, 3], A [0, 1, 1, 3, 3, 3], N [0, 0, 1, 1, 3, 3], A [0, 0, 0, 1, 1, 1], N [0, 0, 0, 0, 1, 1], O [0, 0, 0, 0, 0, 1]] LPS length: 3 ``` Example LPS Subproblem matrix 2: ``` input_string = 'TACOCAT' LPS subproblem matrix: T A C O C A T T [[1, 1, 1, 1, 3, 5, 7], A [0, 1, 1, 1, 3, 5, 5], C [0, 0, 1, 1, 3, 3, 3], O [0, 0, 0, 1, 1, 1, 1], C [0, 0, 0, 0, 1, 1, 1], A [0, 0, 0, 0, 0, 1, 1], T [0, 0, 0, 0, 0, 0, 1]] LPS length: 7 ``` Note: The lower diagonal values will remain 0 in all cases. 
### The matrix rules You can efficiently fill up this matrix one cell at a time. Each grid cell only depends on the values in the grid cells directly below it, directly to its left, and on the bottom-left diagonal. The rules are as follows: * Start with an `n x n` matrix where n is the number of characters in a given string; the diagonal should all have the value 1 for the base case, the rest can be zeros. * As you traverse your string: * If there is a match, fill that grid cell with the value to the bottom-left of that cell *plus* two. * If there is not a match, take the *maximum* value from either directly to the left or the bottom cell, and carry that value over to the non-match cell. * After completely filling the matrix, **the top-right cell will hold the final LPS length**. ### Complexity What was the complexity of this? In the solution, we are looping over the elements of our `input_string` using two nested `for` loops; each is of $O(N)$, so together they become $O(N^2)$. This behavior dominates our optimized solution.
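For short strings, the dynamic-programming result can be cross-checked against an exhaustive search over all subsequences (a verification sketch; it is exponential in the string length, so only usable for small inputs):

```python
from itertools import combinations

def lps_brute_force(s):
    """Length of the longest palindromic subsequence, by exhaustive search."""
    best = 0
    for size in range(1, len(s) + 1):
        for idxs in combinations(range(len(s)), size):
            sub = ''.join(s[i] for i in idxs)
            if sub == sub[::-1]:
                best = max(best, size)
    return best

print(lps_brute_force('TACOCAT'))  # 7
print(lps_brute_force('BANANO'))   # 3
```

Both values agree with the test cases used for the dynamic-programming solution above.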
# Pool-based Active Learning - Getting Started The main purpose of this tutorial is to ease new users into our library `scikit-activeml`. `scikit-activeml` is a library that implements the most important query strategies. It is built upon the well-known machine learning framework `scikit-learn`, which makes it user-friendly. For better understanding, we show an exemplary active learning cycle here. Let's start by importing the relevant packages from both `scikit-learn` and `scikit-activeml`. ``` import numpy as np import matplotlib.pyplot as plt from sklearn.linear_model import LogisticRegression from sklearn.datasets import make_classification from skactiveml.classifier import SklearnClassifier from skactiveml.pool import UncertaintySampling from skactiveml.utils import unlabeled_indices, labeled_indices, MISSING_LABEL from skactiveml.visualization import plot_decision_boundary, plot_utilities import warnings warnings.filterwarnings("ignore") ``` ## Data Set Generation We generate a data set of 100 data points with two clusters from the `make_classification` method of `scikit-learn`. This method also returns the true labels of each data point. In practice, however, we do not know these labels unless we ask an oracle. The labels are stored in `y_true`, which acts as an oracle. ``` X, y_true = make_classification(n_features=2, n_redundant=0, random_state=0) bound = [[min(X[:, 0]), min(X[:, 1])], [max(X[:, 0]), max(X[:, 1])]] plt.scatter(X[:, 0], X[:, 1], c=y_true, cmap='jet') plt.xlabel('Feature 1') plt.ylabel('Feature 2') plt.title('Data set'); ``` ## Classification Our goal is to classify the data points into two classes. To do so, we introduce a vector `y` to store the labels that we acquire from the oracle (`y_true`). As shown below, the vector `y` is unlabeled at the beginning. ``` y = np.full(shape=y_true.shape, fill_value=MISSING_LABEL) print(y) ``` There are many easy-to-use classification algorithms in `scikit-learn`. 
In this example, we use the logistic regression classifier. Details of other classifiers can be accessed from here: https://scikit-activeml.readthedocs.io/en/latest/api/classifier.html. As `scikit-learn` classifiers cannot cope with missing labels, we need to wrap these with the `SklearnClassifier`. ``` clf = SklearnClassifier(LogisticRegression(), classes=np.unique(y_true)) ``` ## Query Strategy The query strategies are the central part of our library. In this example, we use uncertainty sampling with entropy to determine the most uncertain data points. All implemented strategies can be accessed from here: https://scikit-activeml.readthedocs.io/en/latest/api/pool.html. ``` qs = UncertaintySampling(method='entropy', random_state=42) ``` ## Active Learning Cycle In this example, we perform 20 iterations of the active learning cycle (`n_cycles=20`). In each iteration, we select one unlabeled sample to be labeled (`batch_size=1`). The sample is selected by an index (`query_idx`). The label is acquired by assigning the label from `y_true` to `y`. Then we retrain our classifier. We continue until we reach the 20 labeled data points. Below, we see the implementation of an active learning cycle. The first figure shows the decision boundary after acquiring the label of two data points. The second figure shows the decision boundary with 10 acquired labels, which shows significant improvement compared to the first figure. The last figure shows the decision boundary after acquiring labels for 20 data points. Finally, we use the accuracy score as a performance measure, which shows the accuracy of our classifier in the specified iterations. 
``` n_cycles = 20 y = np.full(shape=y_true.shape, fill_value=MISSING_LABEL) clf.fit(X, y) for c in range(n_cycles): query_idx = qs.query(X=X, y=y, clf=clf, batch_size=1) y[query_idx] = y_true[query_idx] clf.fit(X, y) # plotting unlbld_idx = unlabeled_indices(y) lbld_idx = labeled_indices(y) if len(lbld_idx) in [2, 10, 20]: print(f'After {len(lbld_idx)} iterations:') print(f'The accuracy score is {clf.score(X,y_true)}.') plot_utilities(qs, X=X, y=y, clf=clf, feature_bound=bound) plot_decision_boundary(clf, feature_bound=bound) plt.scatter(X[unlbld_idx,0], X[unlbld_idx,1], c='gray') plt.scatter(X[:,0], X[:,1], c=y, cmap='jet') plt.show() ```
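The entropy criterion behind `UncertaintySampling` can be illustrated with plain `scikit-learn` (a sketch of the selection rule only, not the `skactiveml` implementation): the candidate whose predicted class distribution is closest to uniform is queried.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# four labeled points, symmetric about x = 0
X_lab = np.array([[-2.0], [-1.5], [1.5], [2.0]])
y_lab = np.array([0, 0, 1, 1])
X_pool = np.array([[-3.0], [0.0], [3.0]])  # unlabeled candidates

clf = LogisticRegression().fit(X_lab, y_lab)
proba = clf.predict_proba(X_pool)

# Shannon entropy of the predicted class distribution per candidate
entropy = -np.sum(proba * np.log(proba + 1e-12), axis=1)
query_idx = int(np.argmax(entropy))
print(query_idx)  # the candidate on the decision boundary (x = 0) is queried
```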
# Centerpartiets budgetmotion 2022 https://www.riksdagen.se/sv/dokument-lagar/dokument/motion/centerpartiets-budgetmotion-2022_H9024121 ``` import pandas as pd import requests pd.options.mode.chained_assignment = None multiplier = 1_000_000 docs = [ {'utgiftsområde': 1, 'dok_id': 'H9024141'}, {'utgiftsområde': 2, 'dok_id': 'H9024140'}, {'utgiftsområde': 3, 'dok_id': 'H9024142'}, {'utgiftsområde': 4, 'dok_id': 'H9024143'}, {'utgiftsområde': 5, 'dok_id': 'H9024144'}, {'utgiftsområde': 6, 'dok_id': 'H9024145'}, {'utgiftsområde': 7, 'dok_id': 'H9024146'}, {'utgiftsområde': 8, 'dok_id': 'H9024147'}, {'utgiftsområde': 9, 'dok_id': 'H9024128'}, {'utgiftsområde': 10, 'dok_id': 'H9024148'}, {'utgiftsområde': 11, 'dok_id': 'H9024149'}, {'utgiftsområde': 12, 'dok_id': 'H9024150'}, {'utgiftsområde': 13, 'dok_id': 'H9024127'}, {'utgiftsområde': 14, 'dok_id': 'H9024129'}, {'utgiftsområde': 15, 'dok_id': 'H9024125'}, {'utgiftsområde': 16, 'dok_id': 'H9024126'}, {'utgiftsområde': 17, 'dok_id': 'H9024130'}, {'utgiftsområde': 18, 'dok_id': 'H9024122'}, {'utgiftsområde': 19, 'dok_id': 'H9024123'}, {'utgiftsområde': 20, 'dok_id': 'H9024124'}, {'utgiftsområde': 21, 'dok_id': 'H9024136'}, {'utgiftsområde': 22, 'dok_id': 'H9024135'}, {'utgiftsområde': 23, 'dok_id': 'H9024134'}, {'utgiftsområde': 24, 'dok_id': 'H9024133'}, {'utgiftsområde': 25, 'dok_id': 'H9024132'}] def find_matching_table(tables, loc=None): if loc: return tables[loc] for table in tables: if table.columns.shape == (5,): break return table def fetch_table(url, area): tables = pd.read_html(url, encoding='utf8', header=2) cols = ['Anslag', 'Namn', '2022', '2023', '2024'] loc = 3 if area in [14, 24] else None df = find_matching_table(tables, loc) df.columns = cols df = df.dropna(how='all') df = df[~df.Anslag.str.startswith('Summa', na=False)] df['Utgiftsområde'] = area return df tables = [] for doc in docs: url = f'http://data.riksdagen.se/dokument/{doc["dok_id"]}.html' table = fetch_table(url, area=doc['utgiftsområde']) 
tables.append(table) df = pd.concat(tables, sort=False) df = df.dropna(how='all') for col in ['2022', '2023', '2024']: df[col] = df[col].astype(str) df[col] = df[col].str.split('.', expand=True)[0] df[col] = df[col].str.replace('±0', '0', regex=False) df[col] = df[col].str.replace(r'\s+', '', regex=True) df[col] = df[col].str.replace('−', '-', regex=False) df[col] = df[col].astype(int) * multiplier df.to_csv('../data/budgetmotion-2022-c.csv', index=False) ```
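The chain of string replacements above normalizes the motion's table cells (decimals, the '±0' token, thousands separators and the Unicode minus '−') before the integer cast; it can be exercised on a small hypothetical sample:

```python
import pandas as pd

multiplier = 1_000_000  # amounts in the tables are given in millions of SEK

cells = pd.Series(['±0', '1 000', '−250', '42.0'])
cells = cells.astype(str)
cells = cells.str.split('.', expand=True)[0]       # drop decimals
cells = cells.str.replace('±0', '0', regex=False)  # '±0' means no change
cells = cells.str.replace(r'\s+', '', regex=True)  # strip thousands separators
cells = cells.str.replace('−', '-', regex=False)   # Unicode minus -> ASCII
amounts = cells.astype(int) * multiplier
print(amounts.tolist())  # [0, 1000000000, -250000000, 42000000]
```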
## Plotting of profile results ``` #!/usr/bin/env python # -*- coding: utf-8 -*- # common import os import os.path as op # pip import numpy as np import pandas as pd import math import xarray as xr import matplotlib.pyplot as plt from matplotlib import gridspec # DEV: override installed teslakit import sys sys.path.insert(0, op.join(os.path.abspath(''), '..', '..', '..')) # teslakit from teslakit.database import Database, hyswan_db # interactive widgets from ipywidgets import interact, interact_manual, interactive, HBox, Layout, VBox from ipywidgets import widgets from natsort import natsorted, ns from moviepy.editor import * from IPython.display import display, Image, Video sys.path.insert(0, op.join(os.getcwd(),'..')) # bluemath swash module (bluemath.DD.swash path_swash='/media/administrador/HD/Dropbox/Guam/wrapswash-1d' sys.path.append(path_swash) from lib.wrap import SwashProject, SwashWrap from lib.plots import SwashPlot from lib.io import SwashIO from lib.MDA import * from lib.RBF import * def Plot_profile(profile): colors=['royalblue','crimson','gold','darkmagenta','darkgreen','darkorange','mediumpurple','coral','pink','lightgreen','darkgreen','darkorange'] fig=plt.figure(figsize=[17,4]) gs1=gridspec.GridSpec(1,1) ax=fig.add_subplot(gs1[0]) ax.plot(profile.Distance_profile, -profile.Elevation,linewidth=3,color=colors[prf],alpha=0.7,label='Profile: ' + str(prf)) s=np.where(profile.Elevation<0)[0][0] ax.plot(profile.Distance_profile[s],-profile.Elevation[s],'s',color=colors[prf],markersize=10) s=np.argmin(profile.Elevation.values) ax.plot(profile.Distance_profile[s],-profile.Elevation[s],'d',color=colors[prf],markersize=10) ax.plot([0,1500],[0,0],':',color='plum',alpha=0.7) ax.set_xlabel(r'Distance (m)', fontsize=14) ax.set_ylabel(r'Elevation (m)', fontsize=14) ax.legend() ax.set_xlim([0,np.nanmax(profile.Distance_profile)]) ax.set_ylim(-profile.Elevation[0], -np.nanmin(profile.Elevation)+3) def get_bearing(lat1,lon1,lat2,lon2): dLon = np.deg2rad(lon2) - 
np.deg2rad(lon1); y = math.sin(dLon) * math.cos(np.deg2rad(lat2)); x = math.cos(np.deg2rad(lat1))*math.sin(np.deg2rad(lat2)) - math.sin(np.deg2rad(lat1))*math.cos(np.deg2rad(lat2))*math.cos(dLon); brng = np.rad2deg(math.atan2(y, x)); if brng < 0: brng+= 360 return brng # -------------------------------------- # Teslakit database p_data = r'/media/administrador/HD/Dropbox/Guam/teslakit/data' # p_data=r'/Users/laurac/Dropbox/Guam/teslakit/data' db = Database(p_data) # set site db.SetSite('GUAM') #Define profile to run prf=11 # sl=0 #Sea level p_out = os.path.join(p_data, 'sites', 'GUAM','HYSWASH') if not os.path.exists(p_out): os.mkdir(p_out) p_dataset = op.join(p_out, 'dataset_prf'+str(prf)+'.pkl') p_subset = op.join(p_out, 'subset_prf'+str(prf)+'.pkl') p_waves = op.join(p_out, 'waves_prf'+str(prf)+'.pkl') ds_output = op.join(p_out, 'reconstruction_p{0}.nc'.format(prf)) # Create the project directory p_proj = op.join(p_out, 'projects') # swash projects main directory n_proj = 'Guam_prf_{0}'.format(prf) # project name sp = SwashProject(p_proj, n_proj) sw = SwashWrap(sp) si = SwashIO(sp) sm = SwashPlot(sp) ``` ### Set profile and load data ``` min_depth=-20 profiles=xr.open_dataset('/media/administrador/HD/Dropbox/Guam/bati guam/Profiles_Guam_curt.nc') profile=profiles.sel(profile=prf) profile['Orientation']=get_bearing(profile.Lat[0],profile.Lon[0],profile.Lat[-1],profile.Lon[-1]) s=np.where(profile.Elevation>min_depth)[0] profile = xr.Dataset({'Lon': (['number_points'],np.flipud(profile.Lon[s])), 'Lat': (['number_points'],np.flipud(profile.Lat[s])), 'Elevation': (['number_points'],-np.flipud(profile.Elevation[s])), 'Distance_profile': (['number_points'],(profile.Distance_profile[s])), 'Rep_coast_distance': (profile.Rep_coast_distance), 'Orientation': (profile.Orientation), }, coords={'number_points': range(len(s)), }) print(profile) Plot_profile(profile) ``` ### Cut profile after maximum ``` extra_positions=[25,28,7,25,-30,-9,-10,15,-22,12,20,18] #Cut profile at 
maximum + extra_positions s=np.argmin(profile.Elevation.values) profile=profile.isel(number_points=range(s+extra_positions[prf])) print(profile) Plot_profile(profile) profile.to_netcdf(path=os.path.join(p_out,'Prf_'+str(prf)+'.nc')) ``` ### Load waves <span style="font-family: times, Times New Roman; font-size:12pt; color:black;"> In the following cell, the input paths are defined. The user must specify the path to the hydraulic boundary conditions, considering the wind forcing and sea conditions. For simplicity, those files must be in NetCDF format. </span> * `profile` <span style="font-family: times, Times New Roman; font-size:12pt; color:black;"> : profile id </span><br> * `sl`: <span style="font-family: times, Times New Roman; font-size:12pt; color:black;"> sea level (sl) with respect to msl</span> * `orientation`: <span style="font-family: times, Times New Roman; font-size:12pt; color:black;"> angle (º) between the</span> * `waves` <span style="font-family: times, Times New Roman; font-size:12pt; color:black;"> : significant wave height (Hs) and peak period (Tp). 
</span><br> * `wind`: <span style="font-family: times, Times New Roman; font-size:12pt; color:black;"> wind speed (W), wind direction (B) </span> ``` tp_lim=3 SIM=pd.read_pickle(os.path.join(db.paths.site.SIMULATION.nearshore,'Simulations_profile_'+str(prf))) print(len(SIM['Tp'][np.where(SIM['Tp']<=tp_lim)[0]])) SIM['Tp'][np.where(SIM['Tp']<=tp_lim)[0]]=tp_lim ``` ## Keep 200 years of simulation for MDA ``` SIM=SIM.loc[np.where(SIM.time<(SIM.time[0]+len(np.unique(SIM.time))/5))[0],:] SIM=SIM.reset_index().drop(columns=['index']) SIM ``` ### Load SLR ``` SLR=xr.open_dataset(db.paths.site.CLIMATE_CHANGE.slr_nc) sl=np.tile(SLR.SLR.values, (100,1))[:len(SIM)] #We repeat the level 100 times (10 simulations of 1000 years) SIM['slr1']=sl[:,0] SIM['slr2']=sl[:,1] SIM['slr3']=sl[:,2] SIM fig=plt.figure(figsize=[18,9]) vars=['Hs','Tp','Dir','level','wind_dir','wind_speed'] units=[' (m)',' (s)',' (º)',' (m)',' (º)',' (m/s)'] gs1=gridspec.GridSpec(2,3) for a in range(len(vars)): ax=fig.add_subplot(gs1[a]) ax.hist(SIM[vars[a]][:int(len(SIM)/100)],50,density=True,color='navy') #Plot a small fraction of data ax.set_xlabel(vars[a] + units[a],fontsize=13) ``` ## **1. Clustering and selection method MDA** <span style="font-family: times, Times New Roman; font-size:12pt; color:black; text-align: justify"> The high computational cost of propagating the entire hindcast dataset requires statistical tools to reduce the set of data to a number of representative cases to perform hybrid downscaling. The maximum dissimilarity algorithm (MDA) defined in the work of Camus et al., 2011, is implemented for this purpose.<br> <br> Given a data sample $X=\{x_{1},x_{2},…,x_{N}\}$ consisting of $N$ $n$-dimensional vectors, a subset of $M$ vectors $\{v_{1},…,v_{M}\}$ representing the diversity of the data is obtained by applying this algorithm. The selection starts by initializing the subset with one vector $\{v_{1}\}$ transferred from the data sample. 
The rest of the $M-1$ elements are selected iteratively, calculating the dissimilarity between each remaining data point in the database and the elements of the subset, and transferring the most dissimilar one to the subset. The process finishes when the algorithm reaches $M$ iterations. The MDA will be applied to two data sets: on the one hand, to the aggregated spectrum parameters and, on the other hand, to the spectrum partitions, differentiating seas and swells. In this way, the wind sea will have two added dimensions, wind speed and wind direction. <br> Representative variables for the different hydraulic boundary conditions:<br> <br> (1) Waves $H_{s}$, $T_p$<br> (2) Wind $W_{x}$<br> </span><br> ### **1.1 Data preprocessing** <span style="font-family: times, Times New Roman; font-size:12pt; color:black; text-align: justify"> Select $H_{s}$, $T_p$, $W$, $W_{dir}$ from the waves dataset and project the wind velocity onto the cross-shore profile (Wx). While the wave direction cannot be modelled, SWASH does allow the wind direction to be modelled; to reduce the number of MDA dimensions, working with the projected wind is desirable. 
</span><br> ``` # Dataset # waves = waves.squeeze() SIM = pd.DataFrame( { 'time': SIM.time.values, 'hs': SIM.Hs.values, 'tp': SIM.Tp.values, 'w':SIM.wind_speed.values, 'wdir': SIM.wind_dir.values, 'sl':SIM.level.values, 'slr1':SIM.slr1.values, 'slr2':SIM.slr2.values, 'slr3':SIM.slr3.values } ) #TEST WIND PROJECTION # rel_beta=np.nanmin([np.abs(xds_states.wdir- profile.Orientation.values),np.abs(xds_states.wdir+360-profile.Orientation.values)],axis=0) # rel_beta[np.where(rel_beta>=90)[0]]=np.nan # rad_beta = (rel_beta*np.pi)/180 # xds_states['wx'] = xds_states.w.values*np.cos(rad_beta) # #Test that wind projection is right # pos_test=86281 # print('Orientation: ' + str(profile.Orientation.values)) # print('Wind speed: ' + str(xds_states.w[pos_test])) # print('Wind dir: ' + str(xds_states.wdir[pos_test])) # print('Relative beta: ' + str(rel_beta[pos_test])) # print('Wind projected: ' + str(xds_states.wx[pos_test])) # Project wind direction over bathymetry orientation # Check wind is correctly projected SIM = proy_wind(profile.Orientation.values, SIM) SIM=SIM.drop(columns=['time', 'w', 'wdir']).dropna().reset_index() SIM dataset = pd.DataFrame( { 'hs': np.tile(SIM.hs,4), 'tp': np.tile(SIM.tp,4), 'wx': np.tile(SIM.wx,4), 'level':np.concatenate((SIM.sl,SIM.slr1+SIM.sl,SIM.slr2+SIM.sl,SIM.slr3+SIM.sl)) } ) # dataset=np.column_stack((np.tile(SIM.hs,4),np.tile(SIM.tp,4),np.tile(SIM.wx,4),np.concatenate((SIM.sl,SIM.slr1+SIM.sl,SIM.slr2+SIM.sl,SIM.slr3+SIM.sl)))) # print(len(dataset)) # dataset fig=plt.figure(figsize=[25,6]) gs1=gridspec.GridSpec(1,1) ax=fig.add_subplot(gs1[0]) ax.plot(SIM.slr3+SIM.sl,linewidth=0.4) ax.plot(SIM.slr2+SIM.sl,linewidth=0.4) ax.plot(SIM.slr1+SIM.sl,linewidth=0.4) ax.plot(SIM.sl,linewidth=0.4) ``` ### **1.2 MDA algorithm** ``` # dataset = SIM.drop(columns=['time', 'sl', 'w', 'wdir']) # data = np.array(dataset.dropna())[:,:] # subset, scalar and directional indexes ix_scalar = [0, 1, 2, 3] # hs, tp, wx, level ix_directional = [] # n_subset = 1500 
# subset size # MDA algorithm out = MaxDiss_Simplified_NoThreshold( np.array(dataset), n_subset, ix_scalar, ix_directional ) subset = pd.DataFrame({'hs':out[:, 0],'tp':out[:, 1],'wx':out[:, 2],'level':out[:, 3]}) print(subset.info()) # store dataset and subset SIM.to_pickle(p_dataset) subset.to_pickle(p_subset) # Plot subset-dataset fig = scatter_mda(dataset.loc[np.arange(1,len(dataset),100),:], subset, names = ['Hs(m)', 'Tp(s)', 'Wx(m/s)', 'Level (m)'], figsize=(15,15)) fig.savefig(op.join(p_out, 'mda_profile_'+str(prf)+'.png'),facecolor='w') ``` ## **2. Numerical model SWASH** ### **2.1 Data preprocessing** <span style="font-family: times, Times New Roman; font-size:12pt; color:black;"> In this section, the computational grid is defined from the bathymetric data and, optionally, wave dissipation characteristics due to the bottom friction or vegetation. The input grids will be considered uniform and rectangular, with the computational grid covering the whole bathymetric region. <br> #### **2.1.1 Cross-shore profile** <span style="font-family: times, Times New Roman; font-size:12pt; color:black;"> Model boundaries should be far enough from the area of interest and away from steep topography to avoid unrealistic frictional or numerical dispersion effects but close enough to remain computationally feasible </span> <span style="font-family: times, Times New Roman; font-size:11pt; color:black; background:whitesmoke"> kh < 5. </span> </span> <span style="font-family: times, Times New Roman; font-size:12pt; color:black;"> As a recommendation, the area of interest should be kept at least two wave lengths away from the boundary. In the following cells, different input choices for defining the cross-shore profile will be given. </span> * `dxL` <span style="font-family: times, Times New Roman; font-size:12pt; color:black;"> : number of nodes per wavelength. 
This command sets the grid resolution from the number of nodes desired per wavelength in 1 m depth (assuming that at the beach, due to the infragravity waves, the water column can reach 1 m height). </span><br><br> * `dxinp`: <span style="font-family: times, Times New Roman; font-size:12pt; color:black;"> The resolution of the bathymetric grid is not the same as that of the computational grid. It is advised to avoid extremely steep bottom slopes or sharp obstacles as much as possible. </span> <span style="font-family: times, Times New Roman; font-size:12pt; color:black;"> Land points are defined as negative while wet points are defined as positive. </span> ``` # Import depth FILE sp.dxL = 30 # nº nodes per wavelength sp.dxinp = np.abs(profile.Distance_profile.values[0]- profile.Distance_profile.values[1]) sp.depth=profile.Elevation.values fig = sm.plot_depthfile() ``` #### **2.1.2 Friction** <p style="font-family: times, Times New Roman; font-size:12pt; color:black;"> With this option the user can activate the bottom friction controlled by the Manning formula. As the friction coefficient may vary over the computational region, the friction data can be read from a file or defined by specifying the start and end points of the frictional area (e.g. a reef). </p> ``` # Set a constant friction between two points of the profile sp.friction_file = False sp.friction = True sp.Cf = 0.02 # manning frictional coefficient (m^-1/3 s) sp.cf_ini = 0 # first point along the profile sp.cf_fin = sp.dxinp*np.where(profile.Elevation<0)[0][0]+5 # last point along the profile fig = sm.plot_depthfile() plt.savefig(op.join(p_out, 'profile_'+str(prf)+'.png'),facecolor='w') ``` ### **2.2 Boundary conditions**<br> <p style="font-family: times, Times New Roman; font-size:12pt; color:black;"> The boundaries of the computational grid in SWASH are either land, beach or water. 
The wave condition is imposed on the west boundary of the computational domain, so that the wave propagation is pointing eastward. To simulate entering waves without reflections at the wavemaker boundary, a weakly-reflective boundary condition allowing outgoing waves is adopted. For this test case, a time series synthesized from parametric information (wave height, period, etc.) will be given as the wavemaker. Here, the wavemaker must be defined as irregular unidirectional waves by means of a 1D spectrum. Both the initial water level and velocity components are set to zero. </p> ``` # Set the simulation period and grid resolution sp.tendc = 3600 # simulation period (SEC) sp.warmup = 0.15 * sp.tendc # spin-up time (s) (default 15%) ``` #### **2.2.1 Sea state**<br> <p style="font-family: times, Times New Roman; font-size:12pt; color:black;"> The input wave forcing is set as a 1D Jonswap spectrum with $\gamma$ parameter 3. As the water level is a deterministic variable, it can be included considering different discrete values in the range of low-high tide. 
</p> <p style="font-family: times, Times New Roman; font-size:12pt; font-style:italic; font-weight:bold; color:royalblue;"> Water level </p> ``` low_level = 0 high_level = 0 step = 0.5 wl = np.arange(low_level, high_level+step, step) # water level (m) print('Water levels: {0}'.format(wl)) subset ``` <p style="font-family: times, Times New Roman; font-size:12pt; font-style:italic; font-weight:bold; color:royalblue;"> Jonswap spectrum </p> ``` dir_wx = np.full([len(subset.wx)],0) dir_wx = np.where(subset.wx<=0,180,0) print(dir_wx) # Define JONSWAP spectrum by means of the following spectral parameters sp.gamma = 10 waves = pd.DataFrame( { "forcing": ['Jonswap'] * n_subset, "WL": subset.level, "Hs": subset.hs, "Tp": subset.tp, 'Wx': np.abs(subset.wx), 'Wdir': dir_wx, "gamma": np.full([n_subset],sp.gamma), "warmup": np.full([n_subset],sp.warmup) } ) waves # Create wave series and save 'waves.bnd' file sp.deltat = 0.5 # delta time over which the wave series is defined series = sw.make_waves_series(waves) ``` #### **2.2.2 Wind** <p style="font-family: times, Times New Roman; font-size:12pt; color:black;"> The user can optionally specify wind speed, direction and wind drag assuming constant values in the domain. As the test case is using cartesian coordinates, please set the direction where the wind comes from. </p> ``` # Define wind parameters sp.wind = True sp.Wdir = waves.Wdir # wind direction at 10 m height (º) sp.Vel = waves.Wx # wind speed at 10 m height (m/s) sp.Ca = 0.0026 # dimensionless coefficient (default 0.002) ``` ### **2.3. 
Run** <span style="font-family: times, Times New Roman; font-size:12pt; color:black;"> In the following, a series of predefined options have been chosen: <br></span> * <span style="font-family: times, Times New Roman; font-size:12pt; color:black;font-weight:bold; background:khaki;">Grid resolution</span> <span style="font-family: times, Times New Roman; font-size:12pt; color:black;"> is determined through a number of points per wavelength criteria: Courant number for numerical stability, number of points per wavelength, and manual upper and lower limits for grid cell sizes.<br></span> * <span style="font-family: times, Times New Roman; font-size:12pt; color:black;">The default value for the maximum </span><span style="font-family: times, Times New Roman; font-size:12pt; color:black; background:khaki; font-weight:bold;">wave breaking steepness</span> <span style="font-family: times, Times New Roman; font-size:12pt; color:black;">parameter is $ \alpha = 0.6$<br></span> * <span style="font-family: times, Times New Roman; font-size:12pt; color:black;"> For high, nonlinear waves, or wave interaction with structures with steep slopes (e.g. jetties, quays), a Courant number of 0.5 is advised. Here, a dynamically adjusted </span><span style="font-family: times, Times New Roman; color:black; font-size:12pt; background:khaki; font-weight:bold;">time step</span> <span style="font-family: times, Times New Roman; font-size:12pt; color:black;"> controlled by a Courant number range of (0.1 - 0.5) is implemented<br></span> <span style="font-family: times, Times New Roman; font-size:12pt; color:black;"> User parameters:<br></span> * `Nonhydrostatic` <span style="font-family: times, Times New Roman; font-size:12pt; color:black;">to include the non-hydrostatic pressure in the shallow water equations. The hydrostatic pressure assumption can be made in case of propagation of long waves, such as large-scale ocean circulations, tides and storm surges. 
This assumption does not hold in case of propagation of short waves, flows over a steep bottom, unstable stratified flows, and other small-scale applications where vertical acceleration is dominant </span> * `vert` <span style="font-family: times, Times New Roman; font-size:12pt; color:black;">this command sets the number of vertical layers when the run is in multi-layered mode </span><br><br> ``` # Create swash wrap sp.Nonhydrostatic = True # True or False sp.vert = 1 # vertical layers sp.delttbl = 1 # time between output fields (s) sw = SwashWrap(sp) waves = sw.build_cases(waves) waves.to_pickle(p_waves) # Run cases sw.run_cases() ## TARGET: Extract output from files #do_extract=1 #if do_extract==1: # target = sw.metaoutput(waves) # target = target.rename({'dim_0': 'case'}) # print(target) # target.to_netcdf(op.join(p_out, 'xds_out_prf'+str(prf)+'.nc')) #else: # target = xr.open_dataset(op.join(p_out, 'xds_out_prf'+str(prf)+'.nc')) #df_target = pd.DataFrame({'ru2':target.Ru2.values, 'q': target.q.values}) #print(df_target) #df_dataset=subset #df_dataset['Ru2']=target.Ru2.values #df_dataset['q']=target.q.values #print(df_dataset) ```
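The nodes-per-wavelength criterion behind `sp.dxL` in Section 2.1.1 can be sketched independently of the SWASH wrapper. Below is a minimal Python sketch (an illustration, not the wrapper's actual code; `wavenumber` and `grid_spacing` are hypothetical helper names) that solves the linear dispersion relation $\omega^2 = gk\tanh(kh)$ with Newton's method and derives the implied grid spacing:

```python
# Illustrative sketch (not part of the SWASH wrapper; `wavenumber` and
# `grid_spacing` are hypothetical helpers): the dxL criterion turns a number
# of nodes per wavelength into a grid spacing dx = L / dxL, where the
# wavelength L follows from the dispersion relation w^2 = g k tanh(k h).
import math

def wavenumber(T, h, g=9.81, n_iter=30):
    """Solve w^2 = g k tanh(k h) for k with Newton's method,
    starting from the deep-water guess k = w^2 / g."""
    w = 2.0 * math.pi / T
    k = w * w / g
    for _ in range(n_iter):
        t = math.tanh(k * h)
        f = g * k * t - w * w                   # residual of the relation
        fp = g * t + g * k * h * (1.0 - t * t)  # derivative w.r.t. k
        k -= f / fp
    return k

def grid_spacing(T, h, dxL):
    """Grid spacing implied by dxL nodes per wavelength at depth h."""
    L = 2.0 * math.pi / wavenumber(T, h)
    return L / dxL

# e.g. a Tp = 10 s wave at the 1 m reference depth with dxL = 30 nodes:
dx = grid_spacing(T=10, h=1.0, dxL=30)  # roughly 1 m (L is about 31 m here)
```

In the notebook `sp.dxL = 30` plays this role; the wrapper itself may compute the spacing differently.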
# Dropout regularization with gluon ``` import mxnet as mx import numpy as np from mxnet import gluon from tqdm import tqdm_notebook as tqdm ``` ## Context ``` ctx = mx.cpu() ``` ## The MNIST Dataset ``` batch_size = 64 num_inputs = 784 num_outputs = 10 def transform(data, label): return data.astype(np.float32) / 255, label.astype(np.float32) train_data = gluon.data.DataLoader(dataset=gluon.data.vision.MNIST(train=True, transform=transform), batch_size=batch_size, shuffle=True) test_data = gluon.data.DataLoader(dataset=gluon.data.vision.MNIST(train=False, transform=transform), batch_size=batch_size, shuffle=False) ``` ## Define the model ``` num_hidden = 256 net = gluon.nn.Sequential() with net.name_scope(): ########################### # Adding first hidden layer ########################### net.add(gluon.nn.Dense(units=num_hidden, activation="relu")) ########################### # Adding dropout with rate .5 to the first hidden layer ########################### net.add(gluon.nn.Dropout(rate=0.5)) ########################### # Adding second hidden layer ########################### net.add(gluon.nn.Dense(units=num_hidden, activation="relu")) ########################### # Adding dropout with rate .5 to the second hidden layer ########################### net.add(gluon.nn.Dropout(rate=0.5)) ########################### # Adding the output layer ########################### net.add(gluon.nn.Dense(units=num_outputs)) ``` ## Parameter initialization ``` net.collect_params().initialize(mx.init.Xavier(magnitude=2.24), ctx=ctx) softmax_cross_entropy = gluon.loss.SoftmaxCrossEntropyLoss() trainer = gluon.Trainer(net.collect_params(), 'sgd', {'learning_rate': .1}) ``` ## Evaluation ``` def evaluate_accuracy(data_iterator, net, mode='train'): acc = mx.metric.Accuracy() for i, (data, label) in enumerate(data_iterator): data = data.as_in_context(ctx).reshape([-1, 784]) label = label.as_in_context(ctx) if mode == 'train': with mx.autograd.train_mode(): output = net(data) else: with 
mx.autograd.predict_mode(): output = net(data) predictions = mx.nd.argmax(output, axis=1) acc.update(preds=predictions, labels=label) return acc.get()[1] ``` ## Training ``` epochs = 10 smoothing_constant = .01 for e in tqdm(range(epochs)): for i, (data, label) in tqdm(enumerate(train_data)): data = data.as_in_context(ctx).reshape([-1, 784]) label = label.as_in_context(ctx) with mx.autograd.record(): output = net(data) loss = softmax_cross_entropy(output, label) loss.backward() trainer.step(data.shape[0]) ########################## # Keep a moving average of the losses ########################## curr_loss = mx.nd.mean(loss).asscalar() moving_loss = (curr_loss if ((i == 0) and (e == 0)) else (1 - smoothing_constant) * moving_loss + (smoothing_constant) * curr_loss) test_accuracy = evaluate_accuracy(test_data, net, mode='test') train_accuracy = evaluate_accuracy(train_data, net, mode='train') print("Epoch %s. Loss: %s, Train_acc %s, Test_acc %s" % (e, moving_loss, train_accuracy, test_accuracy)) ``` ## Predict for the first batch ``` for i, (data, label) in enumerate(test_data): data = data[0].as_in_context(ctx).reshape([-1, 784]) label = label[0].as_in_context(ctx) with mx.autograd.record(train_mode=False): output = net(data) predictions = mx.nd.argmax(output, axis=1) print(predictions) break ``` ## Testing if the accuracy is calculated correctly ``` test_accuracy = evaluate_accuracy(test_data, net, mode='test') test_accuracy test_accuracy = evaluate_accuracy(test_data, net, mode='test') test_accuracy ```
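The train/predict distinction that `evaluate_accuracy` toggles with `mx.autograd.train_mode()` and `mx.autograd.predict_mode()` comes down to how the `Dropout` blocks behave in each mode. A framework-independent NumPy sketch of inverted dropout (an illustration of the idea, not gluon's actual implementation):

```python
# NumPy illustration of inverted dropout -- the behavior a Dropout layer
# switches between training and prediction. This is a sketch of the idea,
# not gluon's actual implementation.
import numpy as np

np.random.seed(0)  # reproducible mask for the demonstration

def dropout(x, rate, training):
    """Zero each activation with probability `rate` during training and
    rescale survivors by 1/(1-rate); act as the identity at test time."""
    if not training or rate == 0.0:
        return x
    mask = (np.random.rand(*x.shape) >= rate) / (1.0 - rate)
    return x * mask

x = np.ones((4, 256))
train_out = dropout(x, rate=0.5, training=True)   # mix of 0.0 and 2.0
test_out = dropout(x, rate=0.5, training=False)   # identical to x
```

Because of the 1/(1-rate) rescaling, the expected activation is unchanged, which is why no extra correction is needed at prediction time.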
# Update rules ``` import numpy as np import matplotlib.pyplot as plt import mpl_toolkits.mplot3d.axes3d as p3 import matplotlib.animation as animation from IPython.display import HTML from matplotlib import cm from matplotlib.colors import LogNorm def sgd(f, df, x0, y0, lr, steps): x = np.zeros(steps + 1) y = np.zeros(steps + 1) x[0] = x0 y[0] = y0 for i in range(steps): (dx, dy) = df(x[i], y[i]) x[i + 1] = x[i] - lr * dx y[i + 1] = y[i] - lr * dy z = f(x, y) return [x, y, z] def nesterov(f, df, x0, y0, lr, steps, momentum): x = np.zeros(steps + 1) y = np.zeros(steps + 1) x[0] = x0 y[0] = y0 dx_v = 0 dy_v = 0 for i in range(steps): (dx_ahead, dy_ahead) = df(x[i] + momentum * dx_v, y[i] + momentum * dy_v) dx_v = momentum * dx_v - lr * dx_ahead dy_v = momentum * dy_v - lr * dy_ahead x[i + 1] = x[i] + dx_v y[i + 1] = y[i] + dy_v z = f(x, y) return [x, y, z] def adagrad(f, df, x0, y0, lr, steps): x = np.zeros(steps + 1) y = np.zeros(steps + 1) x[0] = x0 y[0] = y0 dx_cache = 0 dy_cache = 0 for i in range(steps): (dx, dy) = df(x[i], y[i]) dx_cache += dx ** 2 dy_cache += dy ** 2 x[i + 1] = x[i] - lr * dx / (1e-8 + np.sqrt(dx_cache)) y[i + 1] = y[i] - lr * dy / (1e-8 + np.sqrt(dy_cache)) z = f(x, y) return [x, y, z] def rmsprop(f, df, x0, y0, lr, steps, decay_rate): x = np.zeros(steps + 1) y = np.zeros(steps + 1) x[0] = x0 y[0] = y0 dx_cache = 0 dy_cache = 0 for i in range(steps): (dx, dy) = df(x[i], y[i]) dx_cache = decay_rate * dx_cache + (1 - decay_rate) * dx ** 2 dy_cache = decay_rate * dy_cache + (1 - decay_rate) * dy ** 2 x[i + 1] = x[i] - lr * dx / (1e-8 + np.sqrt(dx_cache)) y[i + 1] = y[i] - lr * dy / (1e-8 + np.sqrt(dy_cache)) z = f(x, y) return [x, y, z] def adam(f, df, x0, y0, lr, steps, beta1, beta2): # adam with bias correction x = np.zeros(steps + 1) y = np.zeros(steps + 1) x[0] = x0 y[0] = y0 dx_v = 0 dy_v = 0 dx_cache = 0 dy_cache = 0 for i in range(steps): (dx, dy) = df(x[i], y[i]) dx_v = beta1 * dx_v + (1 - beta1) * dx dx_v_hat = dx_v / (1 - beta1 ** (i 
+ 1)) dx_cache = beta2 * dx_cache + (1 - beta2) * dx ** 2 dx_cache_hat = dx_cache / (1 - beta2 ** (i + 1)) dy_v = beta1 * dy_v + (1 - beta1) * dy dy_v_hat = dy_v / (1 - beta1 ** (i + 1)) dy_cache = beta2 * dy_cache + (1 - beta2) * dy ** 2 dy_cache_hat = dy_cache / (1 - beta2 ** (i + 1)) x[i + 1] = x[i] - lr * dx_v_hat / (1e-8 + np.sqrt(dx_cache_hat)) y[i + 1] = y[i] - lr * dy_v_hat / (1e-8 + np.sqrt(dy_cache_hat)) z = f(x, y) return [x, y, z] def update_lines(num, dataLines, lines): for line, data in zip(lines, dataLines): # NOTE: there is no .set_data() for 3 dim data... line.set_data(data[0:2, :num]) line.set_3d_properties(data[2, :num]) line.set_marker('o') line.set_markevery([-1]) return lines def create_and_save_animation(func_title, f, df, params={}, plot_params={}): x0 = params.get('x0', 0) y0 = params.get('y0', 0) lr = params.get('lr', .1) steps = params.get('steps', 8) momentum = params.get('momentum', .9) decay_rate = params.get('decay_rate', .9) beta1 = params.get('beta1', .9) beta2 = params.get('beta2', .999) # sgd params x0_sgd = params.get('x0_sgd', x0) y0_sgd = params.get('y0_sgd', y0) lr_sgd = params.get('lr_sgd', lr) # nesterov params x0_nesterov = params.get('x0_nesterov', x0) y0_nesterov = params.get('y0_nesterov', y0) lr_nesterov = params.get('lr_nesterov', lr) # adagrad params x0_adagrad = params.get('x0_adagrad', x0) y0_adagrad = params.get('y0_adagrad', y0) lr_adagrad = params.get('lr_adagrad', lr) # rmsprop params x0_rmsprop = params.get('x0_rmsprop', x0) y0_rmsprop = params.get('y0_rmsprop', y0) lr_rmsprop = params.get('lr_rmsprop', lr) # adam params x0_adam = params.get('x0_adam', x0) y0_adam = params.get('y0_adam', y0) lr_adam = params.get('lr_adam', lr) azim = plot_params.get('azim', -29) elev = plot_params.get('elev', 49) rotation = plot_params.get('rotation', -7) # attaching 3D axis to the figure fig = plt.figure(figsize=(12, 8)) ax = p3.Axes3D(fig, azim=azim, elev=elev) # plot the surface x = np.arange(-6.5, 6.5, 0.1) y = 
np.arange(-6.5, 6.5, 0.1) x, y = np.meshgrid(x, y) z = f(x, y) ax.plot_surface(x, y, z, rstride=1, cstride=1, norm = LogNorm(), cmap = cm.jet) ax.set_title(func_title, rotation=rotation) # lines to plot in 3D sgd_data = sgd(f, df, x0_sgd, y0_sgd, lr_sgd, steps) nesterov_data = nesterov(f, df, x0_nesterov, y0_nesterov, lr_nesterov, steps, momentum) adagrad_data = adagrad(f, df, x0_adagrad, y0_adagrad, lr_adagrad, steps) rmsprop_data = rmsprop(f, df, x0_rmsprop, y0_rmsprop, lr_rmsprop, steps, decay_rate) adam_data = adam(f, df, x0_adam, y0_adam, lr_adam, steps, beta1, beta2) data = np.array([sgd_data, nesterov_data, adagrad_data, rmsprop_data, adam_data]) # NOTE: Can't pass empty arrays into 3d version of plot() lines = [ax.plot(dat[0, 0:1], dat[1, 0:1], dat[2, 0:1])[0] for dat in data] ax.legend(lines, ['SGD', 'Nesterov Momentum', 'Adagrad', 'RMSProp', 'Adam']) ax.set_xlabel("x") ax.set_ylabel("y") ax.set_zlabel("z") plt.rcParams['animation.html'] = 'html5' line_ani = animation.FuncAnimation(fig, update_lines, steps+2, fargs=(data, lines), interval=500, blit=False, repeat=False) plt.close() line_ani.save(f'optimization_{func_title}.gif', writer='imagemagick',fps=500/100) return line_ani func_title = 'sphere_function' def f(x, y): return x ** 2 + y ** 2 def df(x, y): return (2 * x, 2 * y) create_and_save_animation(func_title, f, df, params={ 'steps': 15, 'lr': .2, 'x0_sgd': -4, 'y0_sgd': -4, 'x0_nesterov': -4.2, 'y0_nesterov': -3.8, 'x0_adagrad': -4, 'y0_adagrad': 4, 'x0_rmsprop': -4.2, 'y0_rmsprop': 3.8, 'x0_adam': -4, 'y0_adam': 4.2, }, plot_params={ 'azim': 15, 'elev': 60, 'rotation': -7 }) func_title = 'himmelblau_function' def f(x, y): return (x ** 2 + y - 11) ** 2 + (x + y ** 2 - 7) ** 2 def df(x, y): return (4 * x * (x ** 2 + y - 11) + 2 * (x + y ** 2 - 7), 2 * (x ** 2 + y - 11) + 4 * y * (x + y ** 2 - 7)) create_and_save_animation(func_title, f, df, params={ 'steps': 25, 'lr': .005, 'x0': 0, 'y0': -3, 'lr_adagrad': .5, 'lr_rmsprop': .5, 'lr_adam': .5 }, 
plot_params={ 'azim': -29, 'elev': 70, 'rotation': 17 }) ```
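As a quick sanity check of the bias-corrected Adam rule above, a standalone re-implementation of the same update (scalar form; `adam_path` is a hypothetical helper name) should drive the sphere function toward its minimum:

```python
# Standalone re-implementation of the bias-corrected Adam update, used as a
# sanity check on the sphere function f(x, y) = x^2 + y^2. `adam_path` is a
# hypothetical helper, not part of the animation code above.
import math

def f(x, y):
    return x ** 2 + y ** 2

def df(x, y):
    return (2 * x, 2 * y)

def adam_path(x0, y0, lr=0.2, steps=50, beta1=0.9, beta2=0.999, eps=1e-8):
    x, y = x0, y0
    m_x = m_y = v_x = v_y = 0.0
    for t in range(1, steps + 1):
        dx, dy = df(x, y)
        m_x = beta1 * m_x + (1 - beta1) * dx         # first moment
        m_y = beta1 * m_y + (1 - beta1) * dy
        v_x = beta2 * v_x + (1 - beta2) * dx ** 2    # second moment
        v_y = beta2 * v_y + (1 - beta2) * dy ** 2
        # bias correction, as in the adam() function above
        m_x_hat, m_y_hat = m_x / (1 - beta1 ** t), m_y / (1 - beta1 ** t)
        v_x_hat, v_y_hat = v_x / (1 - beta2 ** t), v_y / (1 - beta2 ** t)
        x -= lr * m_x_hat / (eps + math.sqrt(v_x_hat))
        y -= lr * m_y_hat / (eps + math.sqrt(v_y_hat))
    return x, y

x_final, y_final = adam_path(-4.0, 4.0)  # ends much closer to (0, 0)
```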
### Interest Rate Regression Model using Linear Regression+RandomCV - Five models were created. Three Interest Rate Regression Models and two Loan Default Classification Models - This notebook is a deep dive into the Linear Regression Model created to predict Loan Interest Rates. - We will begin by exploring the top features that affect Loan Interest Rates - We will then split the data into train/test sets before training the model - The evaluation metrics will be the R2 and Mean Absolute Error (MAE) #### Import Packages and Data ``` import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns import warnings warnings.filterwarnings("ignore", category=FutureWarning) pd.options.display.max_columns = None pd.options.display.max_rows = None pd.options.display.float_format = '{:.2f}'.format data = pd.read_csv("lending-club-subset.csv") data.head() data.shape ``` Choose meaningful features needed to run a correlation matrix. This will be used to observe the top features that affect Interest Rates ``` data = data[[ 'loan_amnt' , 'funded_amnt' , 'funded_amnt_inv' , 'term' , 'int_rate' , 'installment' , 'grade' , 'sub_grade' , 'emp_title' , 'emp_length' , 'home_ownership' , 'annual_inc' , 'verification_status' , 'issue_d' , 'loan_status' , 'purpose' , 'addr_state' , 'dti' , 'delinq_2yrs' , 'fico_range_low' , 'fico_range_high' , 'inq_last_6mths' , 'mths_since_last_delinq' , 'mths_since_last_record' , 'open_acc' , 'pub_rec' , 'revol_bal' , 'revol_util' , 'total_acc' , 'initial_list_status' , 'acc_open_past_24mths' , 'mort_acc' , 'pub_rec_bankruptcies' , 'tax_liens' , 'earliest_cr_line' ]] # remove % sign and set to float data['int_rate'] = data['int_rate'].str.replace('%', '') data['int_rate'] = data['int_rate'].astype(float) data['revol_util'] = data['revol_util'].str.replace('%', '') data['revol_util'] = data['revol_util'].astype(float) data.head() data.isnull().sum() ``` ### Feature Exploration Plotting Correlation matrix is a way to 
understand which features are correlated with each other and with the target variable, which in this case is the Loan Interest Rate ``` # Create correlation matrix corr = data.corr() plt.figure(figsize = (10, 8)) sns.heatmap(corr) plt.show() # Print the correlation values for all features with respect to interest rates corr_int_rate = corr[['int_rate']] corr_int_rate # Top 10 positive features top_10_pos = corr_int_rate[corr_int_rate['int_rate'] > 0].sort_values(by=['int_rate'],ascending=False) top_10_pos # Top 10 negative features top_10_neg = corr_int_rate[corr_int_rate['int_rate']< 0].sort_values(by=['int_rate'],ascending=False) top_10_neg ``` ### Train/Test Split Here we will split the data into train and test sets. The test set will be 30% of the data set ``` # Split Data from sklearn.model_selection import train_test_split train, test = train_test_split(data, test_size=0.30, random_state=42) train.shape, test.shape #Create Train/Test sets target = 'int_rate' features = train.columns.drop(['int_rate' ,'revol_bal' ,'loan_status' ,'funded_amnt' ,'grade' ,'sub_grade' ,'issue_d' ,'installment' , 'fico_range_high' , 'funded_amnt_inv']) # Here we are removing features that are not known prior to loan disbursement X_train = train[features] y_train = train[target] X_test = test[features] y_test = test[target] ``` ### Train Linear Regression Model - Next, we will build a Linear Regression Model using Sklearn's pipeline. - The evaluation metrics will be R2 and Mean-Absolute-Error(MAE). - We will print out the K Best features selected by the SelectKBest Method. 
``` from sklearn.feature_selection import f_regression, SelectKBest from sklearn.linear_model import LinearRegression from sklearn.metrics import mean_absolute_error,r2_score from sklearn.preprocessing import StandardScaler import category_encoders as ce from sklearn.pipeline import make_pipeline from sklearn.impute import SimpleImputer pipeline = make_pipeline( ce.OrdinalEncoder(), SimpleImputer(), SelectKBest(score_func=f_regression), LinearRegression() ) pipeline.fit(X_train, y_train) y_pred = pipeline.predict(X_test) mae = mean_absolute_error(y_test, y_pred) r2 = r2_score(y_test, y_pred) #Print Results print(f'Test MAE: {mae:,.2f}% \n') print(f'Test R2: {r2:,.2f} \n') # Get features chosen by the selectkbest features = pipeline.named_steps['selectkbest'].get_support() selected_feats = [] for count, item in enumerate(features): if item: selected_feats.append(count) list(train.columns[selected_feats]) ```
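The two evaluation metrics printed above, R2 and MAE, are simple to compute by hand. A minimal NumPy sketch of their definitions (illustrative values, not the lending-club data):

```python
# Hand-rolled versions of the two metrics reported above (illustrative
# values, not the lending-club data).
import numpy as np

def mae(y_true, y_pred):
    """Mean absolute error: average magnitude of the residuals."""
    return float(np.mean(np.abs(y_true - y_pred)))

def r2(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return float(1.0 - ss_res / ss_tot)

y_true = np.array([10.0, 12.0, 14.0, 16.0])
y_pred = np.array([11.0, 12.0, 13.0, 17.0])
print(mae(y_true, y_pred))  # 0.75
print(r2(y_true, y_pred))   # 0.85
```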
#### MTA Project mvp ###### Project Goal - Generate data to facilitate micro-targeted advertising based on time of day and demographics. - I am focusing on the Canal St. station with one month of data, but the methodology and code can easily be applied to other stations and time frames. ###### Milestone Reached - Designed and implemented methodology to reliably structure data to access traffic data by time intervals as small as four hours, by line name and station entrance. - While this is not direct demographic information, the line routes and time of day, in combination, have the potential for demographic targeting - The data resulting from this analysis and processing for the Canal St. station for one month is the last cell in this notebook. ###### Key Tasks Completed - Examined and purged data resulting from turnstile audits and resets - Identified, analyzed and purged data with negative net entries or exits - Identified, analyzed and as appropriate purged data with anomalously large entries/exits. Chose 1,500 per time interval per unique turnstile identifier as the cut-off - Identified, analyzed and purged data with 0 entries/exits. This data indicated a station/line closure on certain days - Analyzed data to confirm consistency of time intervals used by the system within each turnstile unit and line name - Performed analysis to confirm that each unique combination of turnstile identifiers CA, UNIT, and SCP is associated with a single line name. Looking at a map of NYC subway routes, line names clearly match specific entrances. 
Refer to the next-to-last cell - Please refer to comments on notebook cells for more detail on specific tasks and process ``` reset -fs import pandas as pd import sqlalchemy as alchem from sqlalchemy import inspect from sqlalchemy import create_engine import seaborn as sns import matplotlib from matplotlib import pyplot as plt engine = create_engine("sqlite:///sep_21.db") engine.connect() """ In the next few cells, I just experiment with sqlalchemy and syntax. Here I'm creating a variable to define my column selections and use it as part of a string literal in a query """ columns_to_select = ['CA', 'UNIT', 'SCP', 'STATION', 'DATE', 'TIME', 'DESC', 'ENTRIES', 'EXITS'] my_select = 'SELECT ' for name in columns_to_select: my_select = my_select + name +', ' my_select = my_select[:-2] my_select """ Just a test to put a query in a dataframe """ test_df = pd.read_sql(f'{my_select}\ FROM mta_data WHERE STATION = "CANAL ST";', engine) test_df.sort_values("DATE").tail() test_df.info() test_df.columns """ Looking at recovery audits with a sqlalchemy query """ pd.read_sql('SELECT *\ FROM mta_data WHERE STATION = "CANAL ST" \ AND NOT DESC = "REGULAR" \ ;', \ engine) """ Examining data around audit anomalies with a sqlalchemy query """ pd.read_sql('SELECT *\ FROM mta_data \ WHERE STATION = "CANAL ST" \ AND TIME = "21:00:00" \ AND DATE = "10/28/2021"\ ;', \ engine) pd.read_sql('SELECT *\ FROM mta_data WHERE STATION = "CANAL ST" \ AND TIME = "20:00:00" \ AND DATE = "09/25/2021"\ ;', \ engine) pd.read_sql('SELECT *\ FROM mta_data \ WHERE STATION = "CANAL ST" AND DESC = "REGULAR" \ GROUP BY CA,UNIT,SCP;', \ engine) pd.read_sql('SELECT *, COUNT(ENTRIES)\ FROM mta_data \ WHERE STATION = "CANAL ST" AND DESC = "REGULAR" \ GROUP BY LINENAME, DATE, TIME \ ORDER BY LINENAME;', engine).drop('DIVISION', axis = 1) """ At this point I have conducted some exploration and analysis with sqlalchemy queries. I pull data into a dataframe, excluding any rows where DESC is not "REGULAR". 
This excludes all rows with audit resets. """ df = pd.read_sql('SELECT *\ FROM mta_data \ WHERE STATION = "CANAL ST" AND DESC = "REGULAR" '\ , engine).drop('DIVISION', axis = 1) df df.info() """ Create columns for previous entries and exits with shift """ df[['prev_ent', 'prev_ex']] = df[['ENTRIES', 'EXITS']].shift(1) df """ Calculate columns for net entries and exits for each time interval and turnstile unit """ df['net_ent'] = df.ENTRIES - df.prev_ent df['net_ex'] = df.EXITS - df.prev_ex df """ Drop first row with Nan """ df.drop(0, inplace = True) df # df['net_entries'] = df.ENTRIES.diff() # df['net_exits'] = df.EXITS.diff() # df df[df.net_ent < 0] """ Examine negative net entries """ df[df.net_ent < 0] """ Creating a mask to do some investigating around the time intervals (beginning of a new day) when net entries are negative. """ mask = ((df['net_ent'] < 0) & ((df['TIME'] == '01:00:00')| (df['TIME'] == '00:00:00')) ) df[mask] """ creating a mask to investigate time intervals other than midnight and 1am for negative net entries """ mask = ((df['net_ent'] < 0) & (df['TIME'] != '01:00:00')& (df['TIME'] != '00:00:00') ) df[mask] """ mask to investigate specific dates with anomalous entries """ mask = ((df['LINENAME'] == 'ACE') & ((df['DATE'] == '09/25/2021')| (df['DATE'] == '09/24/2021')) ) df[mask] mask2 = ((df['LINENAME'] == '1') & (df['DATE'] == '09/21/2021')) df[mask2] """ based on the examinations in cells above, negative entries are clearly bad data. So dropping all rows with negative net entries. """ df.drop(df[df['net_ent']< 0].index, inplace = True) df mask = ((df['LINENAME'] == 'ACE') & ((df['DATE'] == '09/29/2021')| (df['DATE'] == '09/30/2021')) ) df[mask] """ Performing same analysis as above for net_exits """ df[df.net_ex < 0] """ And dropping all rows with negative net exits. 
Then just a check to make sure all rows with negative net entries or exits have been dropped """ df.drop(df[df.net_ex < 0].index, inplace = True) print(df[df.net_ent < 0]) print('-'*40 + '\n') print(df[df.net_ex < 0]) """ looking at the spread of net_entries as a start for identifying more anomalous data. """ df.net_ent.nlargest(100) df.sort_values('net_ent').tail(10) df.net_ent.describe() """ Didn't have great success in trying to figure out Seaborn """ plt.figure(figsize = (10, 5)) sns.displot(df.net_ent, bins = 50) plt.show() """ Decided to generate my own cumulative distribution data with parameters I can set. I then plot with matplotlib """ cum_count = [] for i in range(0, 3000, 10): cum_count.append(df[df.net_ent > i].count()[0]) print(cum_count) # plt.figure(figsize = (20, 25), dpi = 60) # sns.displot(cum_count, bins = 10) # plt.show() f, axs = plt.subplots(1,1, figsize = (15, 10)) plt.xticks(range(0, 10000, 500)) axs.hist(cum_count, bins = 10) plt.show() df[df.net_ent > 2000].count()[0] """ Based on the above analysis, it seems reasonable to assume any data for net entries greater than 1500 in a four hour time interval at a unique CA/UNIT/SCP turnstile is anomalous and should be dropped """ df.drop(df[df.net_ent > 1500].index, inplace = True) """ Just a check to see what the distribution looks like after I drop rows with greater than 1500 net entries """ cum_count = [] for i in range(0, 3000, 10): cum_count.append(df[df.net_ent > i].count()[0]) print(cum_count) # plt.figure(figsize = (20, 25), dpi = 60) # sns.displot(cum_count, bins = 10) # plt.show() f, axs = plt.subplots(1,1, figsize = (15, 10)) plt.xticks(range(0, 10000, 500)) axs.hist(cum_count, bins = 10) plt.show() df[df.net_ent > 720].count()[0] """ Net_exit anomalies have been addressed by dropping the net_entries anomalies. 
So no further adjustment necessary """ cum_count = [] for i in range(0, 3000, 10): cum_count.append(df[df.net_ex > i].count()[0]) print(cum_count) f, axs = plt.subplots(1,1, figsize = (15, 10)) plt.xticks(range(0, 10000, 500)) axs.hist(cum_count, bins = 10) plt.show() df[df.net_ex > 900].count()[0] """ Now I'm checking for days when there were 0 entries. By examining the head and tail of the data, it appears all 0 entries are associated with the JNQRZ6W line and corresponding portion of the canal st. station complex. It's reasonable to assume that these 0 entries are due to a station or line closure. So these data points should be dropped """ print(df[df.ENTRIES == 0].head(10)) print(df[df.ENTRIES == 0].tail(10)) df.drop(df[df.ENTRIES == 0].index, inplace = True) """ Performing the same analysis as above for days with 0 exits. It again appears all 0 exits are associated with the JNQRZ6W line and corresponding portion of the canal st. station complex and on the same days as 0 entries thus reinforcing the hypothesis that there was some sort of line/station closure. So these data points should be dropped as well """ print(df[df.EXITS == 0].head(10)) print(df[df.EXITS == 0].tail(10)) df.drop(df[df.EXITS == 0].index, inplace = True) """ Analyzing time interval consistency by line name. This is good news since there are no anomalous time intervals within each line. The time intervals for the ACE lines are slightly different than the others, but for the purposes of my study, this does not present an issue """ time_set_ACE = set(df[df.LINENAME == 'ACE'].TIME) time_set_J = set(df[df.LINENAME == 'JNQRZ6W'].TIME) time_set_1 = set(df[df.LINENAME == '1'].TIME) print(time_set_ACE) print(time_set_J) print(time_set_1) """ Add columns: > calculate net_entry + net_exit. This is a good proxy for total traffic around the station > calculate net_entry - net_exit. 
This is an indication of traffic direction and perhaps a hint at demographics > date_time for later analysis, potentially against other data such as weather or covid news etc > day of week for later analysis based on day of week """ df['n_ent + n_ex'] = df.net_ent + df.net_ex df['n_ent - n_ex'] = df.net_ent - df.net_ex df['date_time'] = pd.to_datetime(df.DATE + " " + df.TIME, format="%m/%d/%Y %H:%M:%S") df['weekday'] = df['date_time'].dt.day_name() df """ Based on some of the analysis above, I noticed that unique combinations of turnstile identifiers CA, UNIT, SCP are associated with a single line name. The code below confirms that this is the case. I loop over the dataframe, isolating each line name at this station, then generate a string combination of CA, UNIT, SCP for each row and store that in a set for that line name. I end up with a set, for each line name, of unique combinations of CA, UNIT, and SCP. I then check for intersections between these sets, which are empty, so this tells me that I can isolate traffic data by line name and time interval. When I look at a NYC subway map, the line names clearly match up to different entrances of the Canal st. complex. So this allows me to analyze data based on time intervals as small as 4 hours by entrance/line. This is a very good start at micro-targeted advertising. 
""" stiles_id_ace = set() stiles_id_j = set() stiles_id_1 = set() for row in df.itertuples(index = True): if (row.LINENAME == 'ACE'): stiles_id_ace.add(getattr(row, "CA") + getattr(row, "UNIT") + getattr(row, "SCP") ) print(stiles_id_ace) print('-'*30 + '\n') for row in df.itertuples(index = True): if (row.LINENAME == 'JNQRZ6W'): stiles_id_j.add(getattr(row, "CA") + getattr(row, "UNIT") + getattr(row, "SCP") ) print(stiles_id_j) print('-'*30 + '\n') for row in df.itertuples(index = True): if (row.LINENAME == '1'): stiles_id_1.add(getattr(row, "CA") + getattr(row, "UNIT") + getattr(row, "SCP") ) print(stiles_id_1) print('-'*30 + '\n') print(stiles_id_ace.intersection(stiles_id_j)) print(stiles_id_ace.intersection(stiles_id_1)) print(stiles_id_j.intersection(stiles_id_1)) """ The data, grouped by line name, is now in a form that will facilitate at least some micro-targeted advertising. """ df.groupby(['LINENAME', 'DATE', 'TIME']).head(130) ```
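The shift-diff-and-filter cleaning applied in the MTA notebook above can be condensed into a few pandas lines. A minimal sketch on synthetic counters (illustrative data; `groupby(...).diff()` is one way to avoid carrying a cumulative counter across turnstile boundaries, which the negative-diff filter above also catches):

```python
# Minimal sketch of the counter-diff cleaning on synthetic data
# (illustrative values, not MTA data).
import pandas as pd

df = pd.DataFrame({
    "turnstile": ["A"] * 4 + ["B"] * 4,
    # cumulative counters; A contains a reset (drop to 5), B a huge jump
    "entries":   [100, 400, 5, 300, 1000, 1200, 9000, 9100],
})

# net entries per interval, computed within each turnstile so counters
# from different units never mix
df["net_ent"] = df.groupby("turnstile")["entries"].diff()

# drop per-group first rows (NaN), resets (negative) and outliers (>1500)
clean = df.dropna(subset=["net_ent"])
clean = clean[(clean.net_ent >= 0) & (clean.net_ent <= 1500)]
print(sorted(clean.net_ent))  # [100.0, 200.0, 295.0, 300.0]
```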
## Deliverable 3. Create a Travel Itinerary Map. ``` # Dependencies and Setup import pandas as pd import requests import gmaps # Import API key import sys sys.path.append('../') from config import g_key # Configure gmaps gmaps.configure(api_key=g_key) # 1. Read the WeatherPy_vacation.csv into a DataFrame. vacation_df = pd.read_csv("../Vacation_Search/WeatherPy_vacation.csv") vacation_df.head() # 2. Using the template add the city name, the country code, the weather description and maximum temperature for the city. info_box_template = """ <dl> <dt>Hotel Name</dt><dd>{Hotel Name}</dd> <dt>City</dt><dd>{City}</dd> <dt>Country</dt><dd>{Country}</dd> <dt>Current Weather</dt><dd>{Current Description} and {Max Temp} °F</dd> </dl> """ # 3a. Get the data from each row and add it to the formatting template and store the data in a list. hotel_info = [info_box_template.format(**row) for index, row in vacation_df.iterrows()] # 3b. Get the latitude and longitude from each row and store in a new DataFrame. locations = vacation_df[["Lat", "Lng"]] # 4a. Add a marker layer for each city to the map. marker_layer = gmaps.marker_layer(locations, info_box_content = hotel_info) fig = gmaps.figure() fig.add_layer(marker_layer) # 4b. Display the figure fig # From the map above pick 4 cities and create a vacation itinerary route to travel between the four cities. # 5. Create DataFrames for each city by filtering the 'vacation_df' using the loc method. # Hint: The starting and ending city should be the same city. vacation_start = vacation_df.loc[vacation_df["City"] == "Pacific Grove"] vacation_end = vacation_df.loc[vacation_df["City"] == "Pacific Grove"] vacation_stop1 = vacation_df.loc[vacation_df["City"] == "Half Moon Bay"] vacation_stop2 = vacation_df.loc[vacation_df["City"] == "Eureka"] vacation_stop3 = vacation_df.loc[vacation_df["City"] == "Truckee"] vacation_start vacation_end vacation_stop1 vacation_stop2 vacation_stop3 # 6. 
Get the latitude-longitude pairs as tuples from each city DataFrame using the to_numpy function and list indexing. start = vacation_start.to_numpy()[0, 5], vacation_start.to_numpy()[0, 6] end = vacation_end.to_numpy()[0, 5], vacation_end.to_numpy()[0, 6] stop1 = vacation_stop1.to_numpy()[0, 5], vacation_stop1.to_numpy()[0, 6] stop2 = vacation_stop2.to_numpy()[0, 5], vacation_stop2.to_numpy()[0, 6] stop3 = vacation_stop3.to_numpy()[0, 5], vacation_stop3.to_numpy()[0, 6] # 7. Create a direction layer map using the start and end latitude-longitude pairs, # and stop1, stop2, and stop3 as the waypoints. The travel_mode should be "DRIVING", "BICYCLING", or "WALKING". fig = gmaps.figure() vacation_itinerary = gmaps.directions_layer( start, end, waypoints = [stop1, stop2, stop3], travel_mode = "DRIVING" ) fig.add_layer(vacation_itinerary) fig # 8. To create a marker layer map between the four cities. # Combine the four city DataFrames into one DataFrame using the concat() function. itinerary_df = pd.concat([vacation_start, vacation_stop1, vacation_stop2, vacation_stop3],ignore_index=True) itinerary_df itinerary_df.reset_index(drop = True) # 9 Using the template add city name, the country code, the weather description and maximum temperature for the city. info_box_template = """ <dl> <dt>Hotel Name</dt><dd>{Hotel Name}</dd> <dt>City</dt><dd>{City}</dd> <dt>Country</dt><dd>{Country}</dd> <dt>Current Weather</dt><dd>{Current Description} and {Max Temp} °F</dd> </dl> """ # 10a Get the data from each row and add it to the formatting template and store the data in a list. hotel_info = [info_box_template.format(**row) for index, row in itinerary_df.iterrows()] # 10b. Get the latitude and longitude from each row and store in a new DataFrame. locations = itinerary_df[["Lat", "Lng"]] # 11a. Add a marker layer for each city to the map. marker_layer = gmaps.marker_layer(locations, info_box_content = hotel_info) fig = gmaps.figure() fig.add_layer(marker_layer) # 11b. 
Display the figure fig ```
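The `info_box_template.format(**row)` idiom used twice above relies on `str.format` accepting a mapping whose keys match the `{...}` field names, even when those names contain spaces. A minimal standalone illustration with toy values (not real hotel or weather data):

```python
# str.format(**mapping): each {Field Name} is filled from the matching dict key.
# Keys with spaces work because ** unpacking only requires string keys.
template = "<dt>Hotel Name</dt><dd>{Hotel Name}</dd><dd>{Max Temp} °F</dd>"
row = {"Hotel Name": "Lovers Point Inn", "Max Temp": 68.2}
info_box = template.format(**row)
print(info_box)
```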
# Using Biopython's PDB Header parser to get missing residues Previously, this had to be run with a development version of Biopython that I got working [here](https://github.com/fomightez/BernBiopython). Now current Biopython has the essential functionality about missing residues in structure files, and so this notebook can be run here where the current Biopython is installed. You may also be interested in the notebook entitled, 'Using Biopython's PDB module to list resolved residues and construct fit commands'. Think of this notebook as complementing that one. Depending on what you are trying to do (or use as the source of information), that one may be better suited. ``` #get structures import os files_needed = ["6AGB.pdb","6AH3.pdb"] for file_needed in files_needed: if not os.path.isfile(file_needed): #os.system(f"curl -OL https://files.rcsb.org/download/{file_needed}.gz") #version of next line that works outside Jupyter/IPython !curl -OL https://files.rcsb.org/download/{file_needed}.gz # gives more feedback in Jupyter # os.system(f"gunzip {file_needed}.gz") #version of next line that works outside Jupyter/IPython !gunzip {file_needed}.gz from Bio.PDB import * h =parse_pdb_header('6AGB.pdb') h['has_missing_residues'] from Bio.PDB import * h =parse_pdb_header('6AGB.pdb') h['missing_residues'] # Missing residue positions for specific chains from Bio.PDB import * from collections import defaultdict h =parse_pdb_header('6AGB.pdb') #parse per chain chains_of_interest = ["F","G"] # make a dictionary for each chain of interest with value of a list.
The list will be the list of residues later missing_per_chain = defaultdict(list) # go through missing residues and populate each chain's list for residue in h['missing_residues']: if residue["chain"] in chains_of_interest: missing_per_chain[residue["chain"]].append(residue["ssseq"]) #print(missing_per_chain) print('') print("Missing from chain 'G':\n{}".format(missing_per_chain['G'])) print('\n\n') for chain in missing_per_chain: print(chain,missing_per_chain[chain]) # Missing residue positions for ALL chains from Bio.PDB import * from collections import defaultdict # extract information on chains in structure structure = PDBParser().get_structure('6AGB', '6AGB.pdb') chains = [each.id for each in structure.get_chains()] h =parse_pdb_header('6AGB.pdb') # make a dictionary for each chain of interest with value of a list. The list will be the list of residue positions later missing_per_chain = defaultdict(list) # go through missing residues and populate each chain's list for residue in h['missing_residues']: if residue["chain"] in chains: missing_per_chain[residue["chain"]].append(residue["ssseq"]) print(missing_per_chain) print('') print("Missing from chain 'K':\n{}".format(missing_per_chain['K'])) ``` ---- #### Compare two structures ``` # Does 6AH3 have any residues missing that 6AGB doesn't, for chains shared between them? from Bio.PDB import * from collections import defaultdict # extract information on chains in structure structure = PDBParser().get_structure('6AGB', '6AGB.pdb') # USING CHAIN LISTING FROM THAT BECAUSE ONLY CARE ABOUT ONES SHARED chains = [each.id for each in structure.get_chains()] h =parse_pdb_header('6AH3.pdb') # make a dictionary for each chain of interest with value of a list. 
The list will be the list of residue positions later missing_per_chainh3 = defaultdict(list) # go through missing residues and populate each chain's list for residue in h['missing_residues']: if residue["chain"] in chains: missing_per_chainh3[residue["chain"]].append(residue["ssseq"]) print("Missing from chains in 6AH3:\n{}".format(missing_per_chainh3)) print('') print("Missing from chain 'K':\n{}".format(missing_per_chainh3['K'])) print ('') same_result = missing_per_chainh3 == missing_per_chain print("Same residues missing for chains shared by 6AGB and 6AH3?:\n{}".format(same_result)) print ('') print("Chain by chain accounting of whether missing same residues between 6AGB and 6AH3?:") differing_chains = [] for chain in chains: print(chain) print (missing_per_chainh3[chain] == missing_per_chain[chain]) if missing_per_chainh3[chain] != missing_per_chain[chain]: differing_chains.append(chain) print("\n\nFurther details on those chains where not the same residues missing:") for chain in differing_chains: print("chain '{}' in 6AH3 has more missing residues than 6AGB:\n{}".format(chain,len(missing_per_chainh3[chain]) > len(missing_per_chain[chain]) )) print("\ntotal # residues missing in chain '{}' of 6AH3: {}".format(chain,len(missing_per_chainh3[chain]) )) print("total # residues missing in chain '{}' of 6AGB: {}\n".format(chain,len(missing_per_chain[chain]) )) ``` Chain D is the only one with differences. The one in 6AGB (without the substrate) has more residues total, as it has fewer missing. (**Note that depending on how the gaps are distributed, the chain with the most residues may not have more information than the other for a region you are particularly interested in. It is just a rough metric and you should look into details.
The 'Protein' tab at [PDBsum](https://www.ebi.ac.uk/thornton-srv/databases/cgi-bin/pdbsum/GetPage.pl?pdbcode=index.html) is particularly useful for comparing missing residues in specific regions of proteins in structures.**) Python's set math can be used to look into some of the specific missing residues. Here that is done to provide some insight into the parts to examine for Chain D: ``` # How does Chain D compare specifically: print("total missing in 6AGB:",len(missing_per_chain['D'])) print("total missing in 6AH3:",len(missing_per_chainh3['D'])) A= set(missing_per_chainh3['D']) B= set(missing_per_chain['D']) print("These are missing in 6AH3 Chain D but present in 6AGB:",A-B) # set math based on https://stackoverflow.com/a/1830195/8508004 print("These are present in 6AH3 Chain D but missing in 6AGB:",B-A) # set math based on https://stackoverflow.com/a/1830195/8508004 ``` Note that there are none present in 6AH3 Chain D but missing in 6AGB and so the result is an empty set for this case. ----
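Beyond the two differences computed above, the remaining set operations follow the same pattern; a small self-contained sketch with made-up residue numbers (not taken from a real structure):

```python
# Toy missing-residue sets for one chain in two hypothetical structures
A = {390, 391, 392, 640}   # missing in the first structure
B = {390, 391, 392}        # missing in the second

print(A - B)   # missing only in the first
print(B - A)   # missing only in the second (empty set here)
print(A & B)   # missing in both
print(A ^ B)   # missing in exactly one of the two
```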
# Software Carpentry # Welcome to Binder This is where we will do all our Python, Shell and Git live coding. ## Jupyter Lab Let's quickly familiarise ourselves with the environment ... - the overall environment (i.e. your entire browser tab) is called: *Jupyter Lab* it contains menus, tabs, toolbars and a file browser - Jupyter Lab allows you to *launch* files and applications into the *Work Area*. Right now you probably have two tabs in the *Work Area* - this document and another tab called *Launcher*. ## Jupyter Notebooks - this document is called an: *Interactive Python Notebook* (or Notebook for short) Notebooks are documents (files) made up of a sequence of *cells* that contain either code (Python in our case) or documentation (text, or formatted text called *markdown*). ### Cells The three types of Cells are: - *Markdown* - formatted text like this cell (with *italics*, **bold** and tables etc ...) - *Raw* - like the following cell, and - *Code* - (in our case) Python code Cells can be modified by simply clicking inside them and typing. See if you can change the cell below by replacing *exciting* with something more exciting. #### Executing Cells Both *markdown* and *code* cells can be executed. Executing a *markdown* cell causes the *formatted* version of the cell to be displayed. Executing a *code* cell causes the code to run and any results are displayed below the cell. Any cell can be executed by pressing the play icon at the top of the document while the cell is highlighted. You can also press **CTRL-ENTER** to execute the active cell. Go ahead and make some more changes to the cells above and execute them - what happens when you execute a *Raw* cell?
Now add a couple of cells of your own ... #### Code Cells Code cells allow us to write (in our case Python) and run our code and see the results right inside the notebook. The next cell is a code cell that contains Python code to add five numbers. Try executing the cell and if it gets the right result - try some more/different numbers ``` 1 + 2 + 3 + 4 + 5 ``` ## Let's save our work so far and push our changes to GitHub ### Saving By pressing the save icon on the document (or File -> Save Notebook) we can save our work to our Binder environment. ### But what about Version Control and Git (wasn't that in the Workshop syllabus) Since our Binder environment will disappear when we are no longer using it, we need our new version to be saved somewhere permanent. Luckily we have a GitHub repository already connected to our current environment - however there are a couple of steps required to make our GitHub repository match the copy of the repository inside our Binder environment. #### Git likes to know who we are ... otherwise it keeps complaining it cannot track who made what commits (changes). To tell Git who you are, we need to do the following: - Launch a Terminal session (File -> New -> Terminal, or you can use the *Launcher* tab) - At the command prompt, type: ``` git-setup ``` This operation only needs to be done once per binder session. #### Add your changed files to Git's list of files to track - at the same terminal prompt type: ``` git add . ``` #### Tell Git to commit (record) this state as a version - at the same terminal prompt type: ``` git commit -m "save change made inside binder" ``` at this point Git has added an additional version of your files to your repository inside your current Binder environment. However, your repository on GitHub remains unchanged (you might like to go check).
#### Tell Git to push the new commit (version) to GitHub - again at the same prompt type: ``` git push ``` once you supply the correct GitHub username and password, all your changes will be *pushed*. Go check out your repository on github.com ...
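The add/commit steps above can be rehearsed end to end in a throwaway repository; this is a sketch you can run anywhere (the email, name, file and commit message are just stand-ins, and `git push` is left out because it needs your GitHub credentials):

```shell
# Practise the add/commit cycle in a temporary repository
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.email "student@example.com"   # stand-in for what git-setup does
git config user.name "Student"
echo "hello" > notes.txt
git add .
git commit -q -m "save change made inside binder"
git log --oneline -1   # shows the commit you just recorded
```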
# Sorting Objects in Instance Catalogs _Bryce Kalmbach_ This notebook provides a series of commands that take a Twinkles Phosim Instance Catalog and create different pandas dataframes for different types of objects in the catalog. It first separates the full sets of objects in the Instance Catalogs before picking out the sprinkled strongly lensed systems for further analysis. The complete object dataframes contain: * Stars: All stars in the Instance Catalog * Galaxies: All bulge and disk components of galaxies in the Instance Catalog * AGN: All AGN in the Instance Catalog * SNe: The supernovae that are present in the Instance Catalog Then there are sprinkled strongly lensed systems dataframes containing: * Sprinkled AGN galaxies: The images of the lensed AGNs * Lens Galaxies: These are the foreground galaxies in the lens system. * **(Not Default)** Sprinkled AGN Host galaxies: While these were turned off in Run 1 of Twinkles, the original motivation for this notebook was to find these objects in a catalog to help development of lensed hosts at the DESC 2017 SLAC Collaboration Meeting Hack Day. ## Requirements If you already have an instance catalog from Twinkles on hand, all you need now are: * Pandas * Numpy ``` import pandas as pd import numpy as np ``` ### Parsing the Instance Catalog Here we run through the instance catalog and store which rows belong to which class of object. This is necessary since the catalog objects do not all have the same number of properties so we cannot just import them all and then sort within a dataframe.
``` filename = 'twinkles_phosim_input_230.txt' i = 0 not_star_rows = [] not_galaxy_rows = [] not_agn_rows = [] not_sne_rows = [] with open(filename, 'r') as f: for line in f: new_str = line.split(' ') #Skip through the header if len(new_str) < 4: not_star_rows.append(i) not_galaxy_rows.append(i) not_agn_rows.append(i) not_sne_rows.append(i) i+=1 continue if new_str[5].startswith('starSED'): #star_rows.append(i) not_galaxy_rows.append(i) not_agn_rows.append(i) not_sne_rows.append(i) elif new_str[5].startswith('galaxySED'): #galaxy_rows.append(i) not_star_rows.append(i) not_agn_rows.append(i) not_sne_rows.append(i) elif new_str[5].startswith('agnSED'): #agn_rows.append(i) not_star_rows.append(i) not_galaxy_rows.append(i) not_sne_rows.append(i) elif new_str[5].startswith('spectra_files'): #sne_rows.append(i) not_star_rows.append(i) not_galaxy_rows.append(i) not_agn_rows.append(i) i += 1 ``` ### Populating Dataframes Now we load the dataframes for the overall sets of objects. ``` df_star = pd.read_csv(filename, delimiter=' ', header=None, names = ['prefix', 'uniqueId', 'raPhosim', 'decPhoSim', 'phoSimMagNorm', 'sedFilepath', 'redshift', 'shear1', 'shear2', 'kappa', 'raOffset', 'decOffset', 'spatialmodel', 'internalExtinctionModel', 'galacticExtinctionModel', 'galacticAv', 'galacticRv'], skiprows=not_star_rows) df_star[:3] df_galaxy = pd.read_csv(filename, delimiter=' ', header=None, names=['prefix', 'uniqueId', 'raPhoSim', 'decPhoSim', 'phoSimMagNorm', 'sedFilepath', 'redshift', 'shear1', 'shear2', 'kappa', 'raOffset', 'decOffset', 'spatialmodel', 'majorAxis', 'minorAxis', 'positionAngle', 'sindex', 'internalExtinctionModel', 'internalAv', 'internalRv', 'galacticExtinctionModel', 'galacticAv', 'galacticRv'], skiprows=not_galaxy_rows) df_galaxy[:3] df_agn = pd.read_csv(filename, delimiter=' ', header=None, names=['prefix', 'uniqueId', 'raPhoSim', 'decPhoSim', 'phoSimMagNorm', 'sedFilepath', 'redshift', 'shear1', 'shear2', 'kappa', 'raOffset', 'decOffset', 
'spatialmodel', 'internalExtinctionModel', 'galacticExtinctionModel', 'galacticAv', 'galacticRv'], skiprows = not_agn_rows) df_agn[:3] df_sne = pd.read_csv(filename, delimiter=' ', header=None, names=['prefix', 'uniqueId', 'raPhoSim', 'decPhoSim', 'phoSimMagNorm', 'shorterFileNames', 'redshift', 'shear1', 'shear2', 'kappa', 'raOffset', 'decOffset', 'spatialmodel', 'internalExtinctionModel', 'galacticExtinctionModel', 'galacticAv', 'galacticRv'], skiprows = not_sne_rows) df_sne[:3] ``` ### Sort out sprinkled Strong Lensing Systems Now we will pick out the pieces of strongly lensed systems that were sprinkled into the instance catalogs for the Twinkles project. #### Lensed AGN We start with the Lensed AGN. In Twinkles Instance Catalogs the lensed AGN have larger `uniqueId`s than normal since we added information about the systems into the `uniqueId`s. We use this to find them in the AGN dataframe. ``` sprinkled_agn = df_agn[df_agn['uniqueId'] > 20000000000] ``` Below we see a pair of lensed images from a double. ``` sprinkled_agn[:2] ``` Now we will extract the extra information we have stored in the `uniqueId`. This information is the Twinkles System number in our custom OM10 catalog in the `data` directory in Twinkles and the Twinkles Image Number which identifies which image in that particular system refers to that line in the catalog. 
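Before decoding the real IDs below, the packing just described can be illustrated as a round trip on made-up values (the component offset of 28 for AGN and the four trailing decimal digits holding `system*4 + image` follow this notebook; the base galaxy number here is invented):

```python
# Round-trip sketch of the Twinkles uniqueId packing (illustrative values only)
AGN_OFFSET = 28  # component offset used for AGN in this notebook

def encode_agn_id(base_galaxy, system, image):
    """Append 4 decimal digits (system*4 + image), then shift and offset."""
    twinkles_num = base_galaxy * 10000 + system * 4 + image
    return (twinkles_num << 10) + AGN_OFFSET

def decode_agn_id(unique_id):
    """Invert the encoding: undo offset and shift, then split trailing digits."""
    twinkles_num = (unique_id - AGN_OFFSET) >> 10
    trailing = twinkles_num % 10000        # equivalent to the str()[-4:] trick
    return twinkles_num // 10000, trailing // 4, trailing % 4

uid = encode_agn_id(base_galaxy=2139, system=24, image=1)
print(uid, decode_agn_id(uid))
```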
``` # This step undoes the step in CatSim that gives each component of a galaxy a different offset twinkles_nums = [] for agn_id in sprinkled_agn['uniqueId']: twinkles_ids = np.right_shift(agn_id-28, 10) twinkles_nums.append(twinkles_ids) #This parses the information added in the last 4 digits of the unshifted ID twinkles_system_num = [] twinkles_img_num = [] for lens_system in twinkles_nums: lens_system = str(lens_system) twinkles_id = lens_system[-4:] twinkles_id = int(twinkles_id) twinkles_base = twinkles_id // 4 twinkles_img = twinkles_id % 4 twinkles_system_num.append(twinkles_base) twinkles_img_num.append(twinkles_img) ``` We once again look at the two images we showed earlier. We see that they are image 0 and image 1 from Twinkles System 24. ``` print(twinkles_system_num[:2], twinkles_img_num[:2]) ``` We now add this information into our sprinkled AGN dataframe and reset the indices. ``` sprinkled_agn = sprinkled_agn.reset_index(drop=True) sprinkled_agn['twinkles_system'] = twinkles_system_num sprinkled_agn['twinkles_img_num'] = twinkles_img_num sprinkled_agn.iloc[:2, [1, 2, 3, -2, -1]] ``` The last step is to now add a column with the lens galaxy `uniqueId` for each system so that we can cross-reference between the lensed AGN and the lens galaxy dataframe we will create next. We start by finding the `uniqueId`s for the lens galaxies. ``` #The lens galaxy ids do not have the extra 4 digits at the end so we remove them #and then do the shift back to the `uniqueID`. lens_gal_ids = np.left_shift(np.array(twinkles_nums) // 10000, 10) + 26 sprinkled_agn['lens_galaxy_uID'] = lens_gal_ids ``` We now see that the same system has the same lens galaxy `uniqueId` as we expect. ``` sprinkled_agn.iloc[:2, [1, 2, 3, -3, -2, -1]] ``` #### Lens Galaxies Now we will create a dataframe with the Lens Galaxies.
``` lens_gal_locs = [] for idx in lens_gal_ids: lens_gal_locs.append(np.where(df_galaxy['uniqueId'] == idx)[0]) lens_gals = df_galaxy.iloc[np.unique(lens_gal_locs)] lens_gals = lens_gals.reset_index(drop=True) ``` We now have the lens galaxies in their own dataframe that can be joined on the lensed AGN dataframe by the `uniqueId`. ``` lens_gals[:1] ``` And we can check how many systems there are by checking the length of this dataframe. ``` len(lens_gals) ``` Showing that we have 198 systems in the Twinkles field! #### Lensed AGN Host Galaxies (Not in Twinkles 1 catalogs) In Twinkles 1 catalogs we do not have host galaxies around our lensed AGN, but in the future we will want to be able to include this. We experimented with this at the [2017 DESC SLAC Collaboration Meeting Hack Day](https://confluence.slac.stanford.edu/display/LSSTDESC/SLAC+2017+-+Friday+Hack+Day) since Nan Li, Matt Wiesner and others are working on adding lensed hosts into images. Therefore, I have included the capacity to find the host galaxies here for future use. To start we once again cut based on the `uniqueId` which will be larger than a normal galaxy. ``` host_gals = df_galaxy[df_galaxy['uniqueId'] > 170000000000] host_gals[:2] ``` Then like the lensed AGN we add in the info from the longer Ids and the lens galaxy info along with resetting the index.
``` twinkles_gal_nums = [] for gal_id in host_gals['uniqueId']: twinkles_ids = np.right_shift(gal_id-26, 10) twinkles_gal_nums.append(twinkles_ids) host_twinkles_system_num = [] host_twinkles_img_num = [] for host_gal in twinkles_gal_nums: host_gal = str(host_gal) host_twinkles_id = host_gal[-4:] host_twinkles_id = int(host_twinkles_id) host_twinkles_base = host_twinkles_id // 4 host_twinkles_img = host_twinkles_id % 4 host_twinkles_system_num.append(host_twinkles_base) host_twinkles_img_num.append(host_twinkles_img) host_lens_gal_ids = np.left_shift(np.array(twinkles_gal_nums) // 10000, 10) + 26 host_gals = host_gals.reset_index(drop=True) host_gals['twinkles_system'] = host_twinkles_system_num host_gals['twinkles_img_num'] = host_twinkles_img_num host_gals['lens_galaxy_uID'] = host_lens_gal_ids host_gals.iloc[:2, [1, 2, 3, -3, -2, -1]] ``` Notice that there are different numbers of sprinkled AGN and host galaxy entries. ``` len(sprinkled_agn), len(host_gals) ``` This is because some host galaxies have both bulge and disk components, but not all do. The example we have been using does have both components and thus we have four entries for the doubly lensed system in the host galaxy dataframe. ``` host_gals[host_gals['lens_galaxy_uID'] == 21393434].iloc[:, [1, 2, 3, -3, -2, -1]] ``` ## Final Thoughts The main point of being able to break up the instance catalogs like this is for validation and future development. Being able to find the sprinkled input for Twinkles images helps us validate what appears in our output catalogs. Storing this input in pandas dataframes makes it easy to find and compare against the output catalogs that are accessed using tools in the DESC Monitor. In addition, this is a useful tool for future development like the creation of lensed images for the AGN host galaxies that we hope to add in the next iteration of Twinkles.
### CS/GTO: step 1 and 2 ``` show_plots=False import numpy as np from scipy.linalg import eig, eigh, eigvals, eigvalsh from scipy.optimize import minimize import matplotlib import matplotlib.pyplot as plt matplotlib.use('Qt5Agg') %matplotlib qt5 import pandas as pd # # extend path by location of the dvr package # import sys sys.path.append('../../Python_libs') from captools import simple_traj_der, five_point_derivative from GTO_basis import GBR from jolanta import Jolanta_3D amu_to_au=1822.888486192 au2cm=219474.63068 au2eV=27.211386027 Angs2Bohr=1.8897259886 # # Jolanata parameters a, b, c: # # CS-DVR: # bound state: -7.17051 eV # resonance (3.1729556 - 0.16085j) eV # jparam=(0.028, 1.0, 0.028) ``` * Create a valence set $[\alpha_0, \alpha_0/s, \alpha_0/s^2, ..., \alpha_N]$ * Diagonalize **H** to compare with $E_0^{DVR}$ * Add diffuse functions $[\alpha_N/s_{df}, ...]$ * Diagonalize **H** again ``` sets=['GTO_unc', 'GTO_DZ', 'GTO_TZ'] bas = sets[0] nval=10 a0=17.0 s=2 ndf=4 sdf=1.5 if bas == 'GTO_unc': contract = (0,0) elif bas == 'GTO_DZ': contract = (1,1) # one contracted, one uncontracted function elif bas == 'GTO_TZ': contract = (1,2) # one contracted, two uncontracted function else: print('No such basis.') fname='Traj_' + bas + '.csv' print(fname) ``` ### Valence set Compare the bound state with DVR: $E_0 = -7.17051$ eV ``` alpha_val=[a0] for i in range(nval-1): alpha_val.append(alpha_val[-1]/s) Val = GBR(alpha_val, jparam, contract=contract, diffuse=(0,0)) S, T, V = Val.STV() Es, cs = eigh(T+V, b=S) print(f'E0 = {Es[0]*au2eV:.6f} Emax = {Es[-1]*au2eV:.6f}') Val.print_exp() if show_plots: scale=10 xmax=15 xs=np.linspace(0.1,xmax,200) Vs=Jolanta_3D(xs, jparam) plt.cla() plt.plot(xs,Vs*au2eV, '-', color="blue") for i in range(len(Es)): ys=Val.eval_vector(cs[:,i], xs) plt.plot(xs,scale*ys**2+Es[i]*au2eV, '-') plt.ylim(-8,10) plt.show() ``` ### Extend the basis by a diffuse set ``` Bas = GBR(alpha_val, jparam, contract=contract, diffuse=(ndf,sdf)) S, T, V = 
Bas.STV() Es, cs = eigh(T+V, b=S) nEs = len(Es) print(f'E0 = {Es[0]*au2eV:.6f} Emax = {Es[-1]*au2eV:.6f}') if show_plots: Emax=10 # eV plt.cla() plt.plot(xs,Vs*au2eV, '-', color="blue") for i, E in enumerate(Es): ys=Bas.eval_vector(cs[:,i], xs) plt.plot(xs,scale*ys**2+E*au2eV, '-') if E*au2eV > Emax: break plt.ylim(-10,Emax+1) plt.show() ``` *** ### CS Example for testing library. With: `nval=10, a0=17.0, s=2, ndf=4, sdf=1.5, theta = 10 deg` 1. `(-7.171756, 0.000583)` 2. `(0.327329, -0.171381)` 3. `(1.228884, -0.692745)` 4. `(3.165013, -0.147167)` 5. `(3.431676, -1.806220)` ``` if True: theta=8.0/180.0*np.pi print("theta = %f" % (theta)) H_theta = Bas.H_theta(theta+0.00005, 1.00000) energies = eigvals(H_theta, b=S) energies.sort() energies*=au2eV for e in energies: print("(%f, %f)" % (e.real, e.imag)) ``` ### $\theta$-run ``` n_keep=nEs theta_min=0 theta_max=16 n_theta=101 thetas=np.linspace(theta_min, theta_max, num=n_theta) run_data = np.zeros((n_theta,n_keep), complex) # array used to collect all theta-run data for i, tdeg in enumerate(thetas): theta = tdeg/180.0*np.pi H_theta = Bas.H_theta(theta, 1.0) energies = au2eV * eigvals(H_theta, b=S) energies.sort() run_data[i,:] = energies[0:n_keep] print(i+1, end=" ") if (i+1)%10==0: print() ``` Raw $\eta$ trajectories ``` plt.cla() for i in range(0, n_keep): plt.plot(run_data[:,i].real, run_data[:,i].imag, 'o') plt.xlim(0,8) plt.ylim(-2,0.5) plt.show() ``` Get the resonance trajectory by naive nearest follow We start with the energy nearest `follow` ``` #follow=3.7 follow=3.0 es=np.zeros(n_theta,complex) for j in range(0,n_theta): i = np.argmin(abs(run_data[j,:]-follow)) es[j] = run_data[j,i] follow = es[j] plt.cla() plt.ylim(-1,0.0) plt.plot(es.real, es.imag, 'o-') plt.show() abs_der = np.abs(simple_traj_der(thetas, es)) plt.cla() plt.plot(thetas, np.log10(abs_der), 'o-') plt.xlabel(r'$\theta$ [deg.]') plt.ylabel(r'$\log \vert dE/d \theta{} \vert $') plt.show() df = pd.DataFrame({"theta": thetas, "ReE":es.real, 
"ImE":es.imag, "der": abs_der}) fname = 'Traj_' + bas + '.csv' df.to_csv(fname, index=False) ``` Energy, $\theta_{opt} $, and stability for $\alpha=1$ ``` j_opt = np.argmin(abs_der) if j_opt == len(thetas)-1: j_opt -= 1 theta_opt = thetas[j_opt] Eres=es[j_opt] print('theta_opt', thetas[j_opt]) print('E_res', Eres) print('dE/dtheta', abs_der[j_opt]) print('delta-E down', es[j_opt-1]-es[j_opt]) print('delta-E up ', es[j_opt+1]-es[j_opt]) ``` ### Don't optimize $\eta=\alpha\,\exp(i\theta)$ **For GTOs the primitive numerical derivative is unstable.** ``` def objective(eta, dx, E_near, verbose=False): """ Input: eta: (alpha, theta); z = alpha*exp(i*theta) dx: numerical derivatives using z +/- dx or idx E_near: select the eigenvalue nearest to this energy Output: dE/dz along Re(z), dE/dz along Im(z) for the state near E_near This requires four single points. """ a, t = eta z0 = a*np.exp(1j*t) es = np.zeros(2, complex) zs = np.array([z0-dx, z0+dx]) for i, z in enumerate(zs): a, t = np.abs(z), np.angle(z) es[i] = near_E(a, t, E_near) dEdz_re = (es[1] - es[0])/(2*dx) zs = [z0-1j*dx, z0+1j*dx] for i, z in enumerate(zs): a, t = np.abs(z), np.angle(z) es[i] = near_E(a, t, E_near) dEdz_im = (es[1] - es[0])/(2j*dx) if verbose: print(dEdz_re) print(dEdz_im) return np.abs(0.5*(dEdz_re + dEdz_im)) def ECS(alpha, theta): """ diag H(alpha*exp(i*theta)) and return eigenvalues """ H_theta = Bas.H_theta(theta, alpha) return eigvals(H_theta, b=S) def near_E(alpha, theta, E_near): """ diag H(alpha*exp(i*theta)) and return eigenvalue near E_near """ Es = ECS(alpha, theta) j = np.argmin(np.abs(Es - E_near)) return Es[j] alpha_curr=1.0 theta_curr=theta_opt/180.0*np.pi dx=0.001 E_find = Eres/au2eV objective((alpha_curr, theta_curr), dx, E_find, verbose=True) args=(dx, E_find) p0=[alpha_curr, theta_curr] print('p0=', p0) print('Eres0=', near_E(alpha_curr, theta_curr, E_find)*au2eV) print('f0=', objective(p0, dx, E_find)) res=minimize(objective, p0, args=args, method='Nelder-Mead') 
print(res.message) print(res.x) print(res.fun) Eopt=near_E(res.x[0], res.x[1], E_find)*au2eV print('Eres=', Eopt) print('Energy change=', Eopt-Eres) ```
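The nearest-eigenvalue bookkeeping used throughout this notebook (`near_E`, and the `argmin` follow in the θ-run) can be isolated in a toy sketch, here with a 2×2 upper-triangular matrix whose eigenvalues (2 + t and -1) are known in advance (this is not the Jolanta Hamiltonian):

```python
import numpy as np

def nearest_eig(M, e_ref):
    """Return the eigenvalue of M closest to e_ref (the argmin trick above)."""
    es = np.linalg.eigvals(M)
    return es[np.argmin(np.abs(es - e_ref))]

# Track the eigenvalue that starts near 2 as the parameter t is scanned;
# for this triangular matrix the eigenvalues are exactly 2 + t and -1.
follow = 2.0
track = []
for t in np.linspace(0.0, 1.0, 5):
    M = np.array([[2.0 + t, 0.1],
                  [0.0, -1.0]])
    follow = nearest_eig(M, follow)   # update the reference each step
    track.append(follow)
print(track)
```

Updating `follow` at every step is what keeps the tracker on one trajectory even when another eigenvalue drifts closer to the original starting guess.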
``` from bs4 import BeautifulSoup from splinter import Browser from pprint import pprint import pymongo import pandas as pd import requests !which chromedriver executable_path = {'executable_path': 'chromedriver'} browser = Browser("chrome", **executable_path) url = ('https://mars.nasa.gov/news/') browser.visit(url) #response = requests.get(url) url_html = browser.html soup = BeautifulSoup(url_html, 'html.parser') print(soup.prettify()) #get the newest title from website element = soup.select_one('ul.item_list li.slide') element news_titles = element.find('div', class_="content_title").get_text() news_titles #get paragraph from website news_p = element.find('div', class_="article_teaser_body").get_text() print(news_p) #get the current feature mars image website_url = 'https://www.jpl.nasa.gov/spaceimages/?search=&category=Mars' core_url = 'https://www.jpl.nasa.gov' #image_url = "/spaceimages/images/largesize/PIA17838_hires.jpg" #fina_url = core_url + imag_url #fina_url #Visit the url for JPL Featured Space Image here browser.visit(website_url) #Use splinter to navigate the site and find the image url for the current Featured Mars Image and #assign the url string to a variable called featured_image_url element_full_image = browser.find_by_id("full_image") element_full_image.click() browser.is_element_present_by_text('more info', wait_time = 1) element_more_info = browser.links.find_by_partial_text('more info') element_more_info.click() #Make sure to find the image url to the full size .jpg image image_html = browser.html image_soup = BeautifulSoup(image_html, 'html.parser') #Make sure to save a complete url string for this image image_url_final = image_soup.select_one('figure.lede a img').get('src') image_url_final final_url = core_url + image_url_final final_url mars_info = 'https://space-facts.com/mars/' table = pd.read_html(mars_info) table[0] mars_info_df = table[0] mars_info_df.columns = ["Info", "Value"] mars_info_df.set_index(["Info"], inplace=True) 
mars_info_df info_html = mars_info_df.to_html() info_html = info_html.replace("\n","") info_html cerberus_url = 'https://astrogeology.usgs.gov/search/map/Mars/Viking/cerberus_enhanced' response = requests.get(cerberus_url) soup = BeautifulSoup(response.text, 'html.parser') cerberus_image = soup.find_all('div', class_="wide-image-wrapper") print(cerberus_image) for image in cerberus_image: picture = image.find('li') full_image = picture.find('a')['href'] print(full_image) cerberus_title = soup.find('h2', class_='title').text print(cerberus_title) cerberus_hem = {"Title": cerberus_title, "url": full_image} print(cerberus_hem) schiaparelli_url = 'https://astrogeology.usgs.gov/search/map/Mars/Viking/schiaparelli_enhanced' response = requests.get(schiaparelli_url) soup = BeautifulSoup(response.text, 'html.parser') schiaparelli_image = soup.find_all('div', class_="wide-image-wrapper") print(schiaparelli_image) for image in schiaparelli_image: picture = image.find('li') full_image2 = picture.find('a')['href'] print(full_image2) schiaparelli_title = soup.find('h2', class_='title').text print(schiaparelli_title) schiaparelli_hem = {"Title": schiaparelli_title, "url": full_image2} print(schiaparelli_hem) syrtis_url = 'https://astrogeology.usgs.gov/search/map/Mars/Viking/syrtis_major_enhanced' response = requests.get(syrtis_url) soup = BeautifulSoup(response.text, 'html.parser') syrtis_image = soup.find_all('div', class_="wide-image-wrapper") print(syrtis_image) for image in syrtis_image: picture = image.find('li') full_image3 = picture.find('a')['href'] print(full_image3) syrtis_title = soup.find('h2', class_='title').text print(syrtis_title) syrtis_hem = {"Title": syrtis_title, "url": full_image3} print(syrtis_hem) valles_url = 'https://astrogeology.usgs.gov/search/map/Mars/Viking/valles_marineris_enhanced' response = requests.get(valles_url) soup = BeautifulSoup(response.text, 'html.parser') valles_image = soup.find_all('div', class_="wide-image-wrapper") print(valles_image)
for image in valles_image: picture = image.find('li') full_image4 = picture.find('a')['href'] print(full_image4) valles_title = soup.find('h2', class_='title').text print(valles_title) valles_hem = {"Title": valles_title, "url": full_image4} print(valles_hem) mars_hemispheres = [{"Title": cerberus_title, "url": full_image}, {"Title": schiaparelli_title, "url": full_image2}, {"Title": syrtis_title, "url": full_image3}, {"Title": valles_title, "url": full_image4}] mars_hemispheres ```
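The four hemisphere cells above repeat the same scrape with only the URL and variable names changed. They can be collapsed into one parsing helper plus a loop; this is a sketch, and `extract_hemisphere` is a name introduced here for illustration:

```python
from bs4 import BeautifulSoup


def extract_hemisphere(html):
    # Parse one hemisphere page: pull the full-size image link out of the
    # wide-image-wrapper block and the page title out of the h2.title element.
    soup = BeautifulSoup(html, 'html.parser')
    wrapper = soup.find('div', class_='wide-image-wrapper')
    image_link = wrapper.find('li').find('a')['href']
    title = soup.find('h2', class_='title').text
    return {"Title": title, "url": image_link}


# Usage against the live pages (requires network access):
# import requests
# base = 'https://astrogeology.usgs.gov/search/map/Mars/Viking/'
# slugs = ['cerberus_enhanced', 'schiaparelli_enhanced',
#          'syrtis_major_enhanced', 'valles_marineris_enhanced']
# mars_hemispheres = [extract_hemisphere(requests.get(base + s).text) for s in slugs]
```

Keeping the parsing in one function also means a change in the page layout only needs to be fixed in one place.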
# Cloud-based machine learning

Thus far, we have looked at building and fitting ML models “locally.” True, the notebooks have been located in the cloud themselves, but the models with all of their predictive and classification power are stuck in those notebooks. To use these models, you would have to load data into your notebooks and get the results there.

In practice, we want those models accessible from a number of locations. And while the management of production ML models has a lifecycle all its own, one part of that is making models accessible from the web. One way to do so is to develop them using third-party cloud tools, such as [Microsoft Azure ML Studio](https://studio.azureml.net) (not to be confused with Microsoft Azure Machine Learning service, which provides end-to-end lifecycle management for ML models).

Alternatively, we can develop and deploy a function that can be accessed by other programs over the web—a web service—that runs within Azure ML Studio, and we can do so entirely from a Python notebook. In this section, we will use the [`azureml`](https://github.com/Azure/Azure-MachineLearning-ClientLibrary-Python) package to deploy an Azure ML web service directly from within a Python notebook (or other Python environment).

> <font color=red>**Note:**</font> The `azureml` package presently works only with Python 2. If your notebook is not currently running Python 2, change it in the menu at the top of the notebook by clicking **Kernel > Change kernel > Python 2**.

## Create and connect to an Azure ML Studio workspace

The `azureml` package is installed by default with Azure Notebooks, so we don't have to worry about that. It uses an Azure ML Studio workspace ID and authorization token to connect your notebook to the workspace; you will obtain the ID and token by following these steps:

1. Open [Azure ML Studio](https://studio.azureml.net) in a new browser tab and sign in with a Microsoft account. Azure ML Studio is free and does not require an Azure subscription. Once signed in with your Microsoft account (the same credentials you’ve used for Azure Notebooks), you're in your “workspace.”

2. On the left pane, click **Settings**.

   ![Settings button](https://github.com/Microsoft/AzureNotebooks/blob/master/Samples/images/azure-ml-studio-settings.png?raw=true)<br/><br/>

3. On the **Name** tab, the **Workspace ID** field contains your workspace ID. Copy that ID into the `workspace_id` value in the code cell in Step 5 of the notebook below.

   ![Location of workspace ID](https://github.com/Microsoft/AzureNotebooks/blob/master/Samples/images/azure-ml-studio-workspace-id.png?raw=true)<br/><br/>

4. Click the **Authorization Tokens** tab, and then copy either token into the `authorization_token` value in the code cell in Step 5 of the notebook.

   ![Location of authorization token](https://github.com/Microsoft/AzureNotebooks/blob/master/Samples/images/azure-ml-studio-tokens.png?raw=true)<br/><br/>

5.
Run the code cell below; if it runs without error, you're ready to continue.

```
from azureml import Workspace

# Replace the values with those from your own Azure ML Studio instance; see Prerequisites.
# The workspace_id is a string of hexadecimal characters; the token is a long string of random characters.
workspace_id = 'deff291c5b7b42f1b5fdc60669aa8f8b'
authorization_token = 'token'

ws = Workspace(workspace_id, authorization_token)
```

## Explore forest fire data

Let’s look at a meteorological dataset collected by Cortez and Morais for 2007 to study the burned area of forest fires in the northeast region of Portugal.

> P. Cortez and A. Morais. A Data Mining Approach to Predict Forest Fires using Meteorological Data. In J. Neves, M. F. Santos and J. Machado Eds., New Trends in Artificial Intelligence, Proceedings of the 13th EPIA 2007 - Portuguese Conference on Artificial Intelligence, December, Guimaraes, Portugal, pp. 512-523, 2007. APPIA, ISBN-13 978-989-95618-0-9.

The dataset contains the following features:

- **`X`**: x-axis spatial coordinate within the Montesinho park map: 1 to 9
- **`Y`**: y-axis spatial coordinate within the Montesinho park map: 2 to 9
- **`month`**: month of the year: "1" to "12" (January to December)
- **`day`**: day of the week: "1" to "7" (Sunday to Saturday)
- **`FFMC`**: FFMC index from the FWI system: 18.7 to 96.20
- **`DMC`**: DMC index from the FWI system: 1.1 to 291.3
- **`DC`**: DC index from the FWI system: 7.9 to 860.6
- **`ISI`**: ISI index from the FWI system: 0.0 to 56.10
- **`temp`**: temperature in degrees Celsius: 2.2 to 33.30
- **`RH`**: relative humidity in %: 15.0 to 100
- **`wind`**: wind speed in km/h: 0.40 to 9.40
- **`rain`**: outside rain in mm/m2: 0.0 to 6.4
- **`area`**: the burned area of the forest (in ha): 0.00 to 1090.84

Let's load the dataset and visualize the burned area in relation to the temperature in that region.

```
import pandas as pd

df = pd.DataFrame(pd.read_csv('./Data/forestfires.csv'))

%matplotlib inline
from ggplot import *
ggplot(aes(x='temp', y='area'), data=df) + geom_line() + geom_point()
```

Intuitively, the hotter the weather, the more hectares burned in forest fires.

## Transfer your data to Azure ML Studio

We have our data, but how do we get it into Azure ML Studio in order to use it there? That is where the `azureml` package comes in.
It enables us to load data and models into Azure ML Studio from an Azure Notebook (or any Python environment).

The first code cell of this notebook is what establishes the connection with *your* Azure ML Studio account.

Now that you have your notebook talking to Azure ML Studio, you can export your data to it:

```
# from azureml import DataTypeIds

# dataset = ws.datasets.add_from_dataframe(
#     dataframe=df,
#     data_type_id=DataTypeIds.GenericCSV,
#     name='Forest Fire Data 2',
#     description='Paulo Cortez and Aníbal Morais (Univ. Minho) @ 2007'
# )
```

After running the code above, you can see the dataset listed in the **Datasets** section of the Azure Machine Learning Studio workspace. (**Note**: You might need to switch between browser tabs and refresh the page in order to see the dataset.)

![image.png](attachment:image.png)<br/>

It is also straightforward to list the datasets available in the workspace and transfer datasets from the workspace to the notebook:

```
print('\n'.join([i.name for i in ws.datasets if not i.is_example]))  # only list user-created datasets
```

You can also interact with and examine the dataset in Azure ML Studio directly from your notebook:

```
# Read some more of the metadata
ds = ws.datasets['Forest Fire Data 2']
print(ds.name)
print(ds.description)
print(ds.family_id)
print(ds.data_type_id)
print(ds.created_date)
print(ds.size)

# Read the contents
df2 = ds.to_dataframe()
df2.head()
```

## Create your model

We're now back into familiar territory: prepping data for the model and fitting the model. To keep it interesting, we'll use the scikit-learn `train_test_split()` function with a slight change of parameters to select 75 percent of the data points for training and 25 percent for validation (testing).

```
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(
    df[['wind', 'rain', 'month', 'RH']],
    df['temp'],
    test_size=0.25,
    random_state=42
)
```

Did you see what we did there? Rather than select all of the variables for the model, we were more selective and chose just wind speed, rainfall, month, and relative humidity in order to predict temperature.

Fit scikit-learn's `DecisionTreeRegressor` model using the training data. This algorithm is a combination of the linear regression and decision tree classification that you worked with in Section 6.

```
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import r2_score

regressor = DecisionTreeRegressor(random_state=42)
regressor.fit(X_train, y_train)
y_test_predictions = regressor.predict(X_test)
print('R^2 for true vs. predicted test set forest temperature: {:0.2f}'.format(r2_score(y_test, y_test_predictions)))

# Play around with this algorithm.
# Can you get better results changing the variables you select for the training and test data?
# What if you look at different variables for the response?
```

## Deploy your model as a web service

This is the important part. Once deployed as a web service, your model can be accessed from anywhere. This means that rather than refit a model every time you need a new prediction for a business or humanitarian use case, you can send the data to the pre-fitted model and get back a prediction.
First, deploy the model as a predictive web service. To do so, create a wrapper function that takes input data as an argument and calls `predict()` with your trained model and this input data, returning the results.

```
from azureml import services

@services.publish(workspace_id, authorization_token)
@services.types(wind=float, rain=float, month=int, RH=float)
@services.returns(float)
# The name of your web service is set to this function's name
def forest_fire_predictor(wind, rain, month, RH):
    # predict() expects a 2D array of samples, so wrap the feature values in a list
    return regressor.predict([[wind, rain, month, RH]])

# Hold onto information about your web service so
# you can call it within the notebook later
service_url = forest_fire_predictor.service.url
api_key = forest_fire_predictor.service.api_key
help_url = forest_fire_predictor.service.help_url
service_id = forest_fire_predictor.service.service_id
```

You can also go to the **Web Services** section of your Azure ML Studio workspace to see the predictive web service running there.

## Consuming the web service

Next, consume the web service. To see if this works, try it here from the notebook session in which the web service was created. Just call the predictor directly:

```
forest_fire_predictor.service(5.4, 0.2, 9, 22.1)
```

At any later time, you can use the stored API key and service URL to call the service. In the example below, data can be packaged in JavaScript Object Notation (JSON) format and sent to the web service.

```
import urllib2
import json

data = {"Inputs": {
            "input1": {
                "ColumnNames": ["wind", "rain", "month", "RH"],
                "Values": [["5.4", "0.2", "9", "22.1"]]
            }
        },  # Specified feature values
        "GlobalParameters": {}
       }

body = json.dumps(data)
headers = {'Content-Type': 'application/json', 'Authorization': ('Bearer ' + api_key)}
req = urllib2.Request(service_url, body, headers)

try:
    response = urllib2.urlopen(req)
    result = json.loads(response.read())  # load JSON-formatted string response as dictionary
    print(result['Results']['output1']['value']['Values'][0][0])  # get the returned prediction
except urllib2.HTTPError, error:
    print("The request failed with status code: " + str(error.code))
    print(error.info())
    print(json.loads(error.read()))

from sklearn.datasets import load_iris

'''
Iris plants dataset

Data Set Characteristics:
    Number of Instances: 150 (50 in each of three classes)
    Number of Attributes: 4 numeric, predictive attributes and the class
    Attribute Information:
        sepal length in cm
        sepal width in cm
        petal length in cm
        petal width in cm
        class: Iris-Setosa, Iris-Versicolour, Iris-Virginica
    Missing Attribute Values: None
    Class Distribution: 33.3% for each of 3 classes.
    Creator: R.A. Fisher
'''

'''
The famous Iris database, first used by Sir R.A. Fisher. The dataset is taken from Fisher's paper.
Note that it's the same as in R, but not as in the UCI Machine Learning Repository, which has two
wrong data points.

This is perhaps the best known database to be found in the pattern recognition literature. Fisher's
paper is a classic in the field and is referenced frequently to this day. (See Duda & Hart, for
example.) The data set contains 3 classes of 50 instances each, where each class refers to a type
of iris plant. One class is linearly separable from the other 2; the latter are NOT linearly
separable from each other.
'''

# For photos of the flowers: https://www.jianshu.com/p/147c62ad4d2f

iris_data_raw = load_iris()
X, y = load_iris(return_X_y=True)
print(list(iris_data_raw.target_names))

iris_df = pd.DataFrame(iris_data_raw.data, columns=iris_data_raw.feature_names)
iris_df.head()

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.2, random_state=42)

from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score

# For more details about ensemble methods and gradient boosting, see:
# https://sklearn.apachecn.org/docs/master/12.html
gbc = GradientBoostingClassifier(random_state=42)
gbc.fit(X_train, y_train)
y_prediction_gbc = gbc.predict(X_test)

# Prediction accuracy
print('accuracy_score for true vs. predicted test set iris class: {:0.2f}'.format(accuracy_score(y_test, y_prediction_gbc)))

# (actual value, predicted value)
print('actual', 'predicted')
zip(y_test, y_prediction_gbc)

'''
As you can see, we were able to achieve a perfect prediction using one of the most powerful models
in sklearn, and the best part is... we did not even do a parameter tuning step (if we did, we could
expect potentially significantly better results, i.e. on harder problems or regression).

sklearn is highly optimized with reasonable default parameters / priors; when applying the right
model to the right data, most of the time it works right off the shelf.
'''

'''
Now you can try to follow the same steps from above and deploy the gbc model.
'''
```

You have now created your own ML web service. Let's now see how you can also interact with existing ML web services for even more sophisticated applications.

### Exercise

Try this same process of training and hosting a model through Azure ML Studio with the Pima Indians Diabetes dataset (in CSV format in your data folder). The dataset has nine columns; use any of the eight features you see fit to try and predict the ninth column, Outcome (1 = diabetes, 0 = no diabetes).

> **Takeaway**: In this part, you explored fitting a model and deploying it as a web service. You did this by using now-familiar tools in an Azure Notebook to build a model relating variables surrounding forest fires and then posting that as a function in Azure ML Studio. From there, you saw how you and others can access the pre-fitted models to make predictions on new data from anywhere on the web.
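As a starting point for the exercise, here is a sketch of the local modeling step (before any Azure ML Studio deployment). The file path `./Data/diabetes.csv` and the column names `Glucose`, `BMI`, `Age`, and `Outcome` are assumptions about your copy of the Pima dataset; adjust them to match:

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score


def fit_diabetes_model(df, features, target='Outcome'):
    # Same 75/25 split used for the forest-fire model, but with a classifier,
    # since Outcome is a binary label (1 = diabetes, 0 = no diabetes).
    X_train, X_test, y_train, y_test = train_test_split(
        df[features], df[target], test_size=0.25, random_state=42)
    clf = DecisionTreeClassifier(random_state=42)
    clf.fit(X_train, y_train)
    return accuracy_score(y_test, clf.predict(X_test))


# Usage (path assumed; see your data folder):
# df = pd.read_csv('./Data/diabetes.csv')
# print(fit_diabetes_model(df, ['Glucose', 'BMI', 'Age']))
```

From there, the fitted classifier can be wrapped and published the same way as `forest_fire_predictor` above.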
# Mumbai House Price Prediction - Supervised Machine Learning - Regression Problem

## Data Preprocessing

# The main goal of this project is to predict the price of houses in Mumbai using their features.

# Import Libraries

```
# Importing necessary libraries
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
import numpy as np
from scipy import stats
import re
```

# Load dataset

```
# Load the dataset
df = pd.read_csv('house_scrape.csv')
df.head(5)

df = df.drop(['type_of_sale'], axis=1)
df.shape

duplicate = df[df.duplicated()]
duplicate

# Drop the duplicate entries in the dataset.
df = df.drop_duplicates()

# As the number of rows has changed, we need to reset the index.
df = df.reset_index()
df.head()

# Drop the extra index column.
df = df.drop(labels='index', axis=1)
df.head()
```

# Exploratory Data Analysis

```
df.shape
df.info()
# We have 3 numeric variables and 5 categorical variables.
# The price column is in lakhs.

df.describe()
# Observe the 75% and max values: they show a huge difference.

sns.pairplot(df)
plt.show()
# area_insqft and price(L) have a slightly linear correlation with some outliers.

# Value count of each feature
def value_count(df):
    for var in df.columns:
        print(df[var].value_counts())
        print("--------------------------------")

value_count(df)

# Correlation heatmap
sns.heatmap(df.corr(), cmap="coolwarm", annot=True)
plt.show()
```

# Prepare Data for Machine Learning Model

# Data Cleaning

```
df.isnull().sum()  # find how much data is missing
df.isnull().mean()*100  # % of missing values

# Visualize missing values using a heatmap to get an idea of where values are missing
plt.figure(figsize=(16, 9))
sns.heatmap(df.isnull())
```

# Handling the null values of Sale_type

```
df.loc[df['construction_status'] == 'Under Construction', 'Sale_type'] = 'new'
df
# Here we can fill null values of Sale_type with help of the construction_status column:
# we set 'new' in Sale_type wherever the status is 'Under Construction'.

df1 = df['Sale_type']
df1

df = df.drop(['Sale_type'], axis=1)
df.head()
# We can drop Sale_type because we will concatenate the cleaned column back into df.

df1 = df1.fillna(method='ffill')
df1.isnull().sum()
# To handle the rest of the null values in Sale_type we used the ffill() method.

df = pd.concat([df, df1], axis=1)
df.head()
```

# Handling the null values of Bathroom

```
# We need to extract the numeric value from the string first
df["Bathroom"] = df['Bathroom'].str.extract('(\d+)', expand=False)

# Convert Bathroom from object type to numeric
df["Bathroom"] = pd.to_numeric(df["Bathroom"])

df2 = df['Bathroom']
df2

df = df.drop(['Bathroom'], axis=1)
df.head()
# We can drop Bathroom because we will concatenate the cleaned column back into df.

df2 = df2.fillna(method='bfill')
df2.isnull().sum()
# To handle the rest of the null values in Bathroom we used the bfill() method.

df = pd.concat([df, df2], axis=1)
df.head()

df.isnull().sum()  # check for null values
# Our data has no null values now, so we can proceed with the rest of the preprocessing.

# Convert rate_persqft from object type to numeric.
# We got the error "cannot convert str 'Price' at position 604", so we replace 'Price' with the rate/sqft value.
df["rate_persqft"] = df["rate_persqft"].replace("Price", 8761)
df["rate_persqft"] = pd.to_numeric(df["rate_persqft"])

# Now we can check the description of the data again
df.describe()

sns.heatmap(df.corr(), annot=True)
plt.show()
```

# Finding outliers and removing them

```
# Function to create a histogram, Q-Q plot and boxplot
# for Q-Q plots
import scipy.stats as stats

def diagnostic_plots(df, variable):
    # The function takes a dataframe (df) and
    # the variable of interest as arguments

    # define figure size
    plt.figure(figsize=(16, 4))

    # histogram
    plt.subplot(1, 3, 1)
    sns.distplot(df[variable], bins=30)
    plt.title('Histogram')

    # Q-Q plot
    plt.subplot(1, 3, 2)
    stats.probplot(df[variable], dist="norm", plot=plt)
    plt.ylabel('Variable quantiles')

    # boxplot
    plt.subplot(1, 3, 3)
    sns.boxplot(y=df[variable])
    plt.title('Boxplot')

    plt.show()

num_var = ["BHK", "price(L)", "rate_persqft", "area_insqft", "Bathroom"]
for var in num_var:
    print("******* {} *******".format(var))
    diagnostic_plots(df, var)
# Here we observe outliers using the histogram, Q-Q plot and boxplot.
# Some features have outliers that we will remove to balance our dataset so the other variables don't get affected.

df3 = df[['price(L)', 'rate_persqft', 'area_insqft']].copy()
df3
# Here we make a new dataframe to remove the outliers of the selected features and then concatenate it with the previous dataframe.

df = df.drop(['price(L)', 'rate_persqft', 'area_insqft'], axis=1)
df.head()
# Drop the columns so we can concatenate the cleaned features later.

z_scores = stats.zscore(df3)
abs_z_scores = np.abs(z_scores)
filtered_entries = (abs_z_scores < 3).all(axis=1)
df3 = df3[filtered_entries]
# Using the z-score to remove the outliers from the selected features

df3
# This is our new dataframe with the outliers removed.

sns.boxplot(x=df3['price(L)'])
# Comparing with the box plots above, the outliers have been removed; the ones remaining are relevant to our data.

sns.boxplot(x=df3['rate_persqft'])
sns.boxplot(x=df3['area_insqft'])

df = pd.concat([df, df3], axis=1)
df.head()
# Concatenate back into our previous dataframe

df.isnull().sum()
# After removing the outliers we get some NA values

df = df.dropna()
df
# We can drop those rows and reset the index so we get an aligned dataset

df = df.reset_index()  # reset the index
df = df.drop(labels='index', axis=1)
df.head()  # drop the extra index column that we don't need
```

# Categorical variable encoding

# Encoding construction_status

```
for cat_var in ["Under Construction", "Ready to move"]:
    df["construction_status" + cat_var] = np.where(df['construction_status'] == cat_var, 1, 0)
df.shape
```

# Encoding Sale_type

```
for cat_var in ["new", "resale"]:
    df["Sale_type" + cat_var] = np.where(df['Sale_type'] == cat_var, 1, 0)
df.shape
```

# Encoding Location

```
# Here we select only the locations with a count of at least 50
location_value_count = df['location'].value_counts()
location_value_count

location_get_50 = location_value_count[location_value_count >= 50].index
location_get_50

for cat_var in location_get_50:
    df['location_' + cat_var] = np.where(df['location'] == cat_var, 1, 0)
df.shape
df.head()
```

# Drop categorical variables

```
df = df.drop(["location", "construction_status", 'Sale_type'], axis=1)
df.shape
df.head()

df.to_csv('final_house_scrape.csv', index=False)
```
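The three encoding cells above build the dummy columns by hand with `np.where`. The same result can be produced in one call with `pandas.get_dummies`; this is a sketch using the notebook's column names, with `prefix_sep=''` so the generated names match the manual ones (e.g. `construction_statusUnder Construction`):

```python
import pandas as pd


def encode_categoricals(df, columns):
    # One-hot encode the given columns and drop the originals,
    # mirroring the manual np.where encoding above.
    return pd.get_dummies(df, columns=columns, prefix=columns,
                          prefix_sep='', dtype=int)
```

Note that unlike the manual loops, `get_dummies` also drops the original categorical columns, so the final `df.drop([...])` step is no longer needed; the rare-location filtering (count of at least 50) would still have to be applied beforehand.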
```
import os
import glob
import math
import time
from joblib import Parallel, delayed
import pandas as pd
import numpy as np
import scipy as sc
from sklearn.model_selection import KFold
import lightgbm as lgb
import warnings
import matplotlib.pyplot as plt
import matplotlib
from sklearn.model_selection import train_test_split
from torch.utils.data import TensorDataset, DataLoader
import torch
import torch.nn as nn
import random
import seaborn as sns; sns.set_theme()
import torch.nn.functional as F
from sklearn.linear_model import LinearRegression
from sklearn.linear_model import Ridge
from sklearn.linear_model import Lasso
import statsmodels.api as sm
import pylab as pl
from matplotlib.pyplot import figure
from IPython import display
from pandas.plotting import scatter_matrix
from sklearn.decomposition import PCA
from sklearn.metrics import r2_score
import umap
from sklearn import svm
from lightgbm import LGBMClassifier
from numpy import std
from numpy import mean
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import RepeatedStratifiedKFold
from matplotlib import cm
from sklearn.metrics import confusion_matrix

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
warnings.filterwarnings('ignore')
pd.set_option('max_columns', 300)

train_test_seurat = pd.read_csv('./integrate.csv')
train_test_seurat = train_test_seurat.T
train_test_seurat_std = train_test_seurat.std()
column_names = list(train_test_seurat.columns)

# Remove zero-variance columns
columns_remove = []
for i in range(train_test_seurat.shape[1]):
    if train_test_seurat_std[i] == 0:
        columns_remove.append(column_names[i])
train_test_seurat = train_test_seurat.drop(columns_remove, axis=1)
train_test_seurat[columns_remove[0]] = train_test_seurat.iloc[:, 0]

clusters = pd.read_csv('./clusters.csv')
train_test_seurat['class'] = clusters

train = pd.read_csv('./MLR_Project_train.csv')
test = pd.read_csv('./MLR_Project_test.csv')
target = pd.DataFrame(pd.concat([train['TARGET'], test['TARGET']]))
target = target.rename(index=lambda s: 'Sample' + str(int(s)+1))

train_test_seurat['TARGET'] = 1.0
for i in range(train_test_seurat.shape[0]):
    train_test_seurat['TARGET'][i] = target['TARGET'][i]
train_test_seurat

train_seurat = train_test_seurat.iloc[:90000, :]
test_seurat = train_test_seurat.iloc[90000:, :]

pca = PCA(n_components=2)
pca.fit(train_seurat.iloc[:, :-2])
print(pca.explained_variance_ratio_)
train_pca = pca.fit_transform(train_seurat.iloc[:, :-2])

reducer = umap.UMAP()
train_umap = reducer.fit_transform(train_seurat.iloc[:, :-2])

new_inferno = cm.get_cmap('inferno', 5)
figure, ax = plt.subplots(1, 2, figsize=(15, 7))
color_map = (train_seurat['TARGET'] - np.min(train_seurat['TARGET'])) / (np.max(train_seurat['TARGET']) - np.min(train_seurat['TARGET']))*2 - 1
# ax.scatter(train_pca[:, 0], train_pca[:, 1], c=np.sign(train['TARGET']), cmap=matplotlib.colors.ListedColormap(colors))
im_0 = ax[0].scatter(train_pca[:, 0], train_pca[:, 1], c=color_map, cmap='Reds', alpha=1)
im_1 = ax[1].scatter(train_umap[:, 0], train_umap[:, 1], c=color_map, cmap='Reds', alpha=1)
ax[0].set_title('PCA visualization of Train data', fontsize=15)
ax[0].set_xlabel('component 1', fontsize=10)
ax[0].set_ylabel('component 2', fontsize=10)
ax[1].set_title('UMAP visualization of Train data', fontsize=15)
ax[1].set_xlabel('component 1', fontsize=10)
ax[1].set_ylabel('component 2', fontsize=10)
plt.colorbar(im_1)
plt.show()

# Assign each training sample to a decile group of the target
train_seurat['group'] = 1.0
quantiles = [train_seurat['TARGET'].quantile(x/10) for x in range(1, 10)]
quantiles

for i in range(train_seurat.shape[0]):
    for j in range(9):
        if train_seurat['TARGET'][i] < quantiles[j]:
            train_seurat['group'][i] = j
            break
        train_seurat['group'][i] = 9
train_seurat

colors = ['red', 'green', 'blue', 'orange', 'pink', 'yellow', 'greenyellow', 'gold', 'cyan', 'black']
new_inferno = cm.get_cmap('inferno', 5)
figure, ax = plt.subplots(1, 2, figsize=(15, 7))
color_map = (train_seurat['TARGET'] - np.min(train_seurat['TARGET'])) / (np.max(train_seurat['TARGET']) - np.min(train_seurat['TARGET']))*2 - 1
im_0 = ax[0].scatter(train_pca[:, 0], train_pca[:, 1], c=(np.sign(train_seurat['TARGET'])+1)/2, cmap=matplotlib.colors.ListedColormap(colors))
# im_0 = ax[0].scatter(train_pca[:, 0], train_pca[:, 1], c=color_map, cmap='Reds', alpha=1)
im_1 = ax[1].scatter(train_umap[:, 0], train_umap[:, 1], c=(np.sign(train_seurat['TARGET'])+1)/2, cmap=matplotlib.colors.ListedColormap(colors))
ax[0].set_title('PCA visualization of Train data', fontsize=15)
ax[0].set_xlabel('component 1', fontsize=10)
ax[0].set_ylabel('component 2', fontsize=10)
ax[1].set_title('UMAP visualization of Train data', fontsize=15)
ax[1].set_xlabel('component 1', fontsize=10)
ax[1].set_ylabel('component 2', fontsize=10)
plt.colorbar(im_1)
plt.show()

colors = ['red', 'green', 'blue', 'orange', 'pink', 'yellow', 'greenyellow', 'gold', 'cyan', 'black']
new_inferno = cm.get_cmap('inferno', 5)
figure, ax = plt.subplots(1, 2, figsize=(15, 7))
im_0 = ax[0].scatter(train_pca[:, 0], train_pca[:, 1], c=train_seurat['group'], cmap=matplotlib.colors.ListedColormap(colors))
im_1 = ax[1].scatter(train_umap[:, 0], train_umap[:, 1], c=train_seurat['group'], cmap=matplotlib.colors.ListedColormap(colors))
ax[0].set_title('PCA visualization of Train data', fontsize=15)
ax[0].set_xlabel('component 1', fontsize=10)
ax[0].set_ylabel('component 2', fontsize=10)
ax[1].set_title('UMAP visualization of Train data', fontsize=15)
ax[1].set_xlabel('component 1', fontsize=10)
ax[1].set_ylabel('component 2', fontsize=10)
plt.colorbar(im_1)
plt.show()
```

## 1.3 Show the maximum return of train and test

```
train_max = np.sum(train['TARGET'][train['TARGET'] > 0])
test_max = np.sum(test['TARGET'][test['TARGET'] > 0])
print('Maximum return of training set:', train_max)
print('Maximum return of testing set:', test_max)

reg = Ridge(alpha=0.5).fit(pd.DataFrame(train_seurat.iloc[:, :]), train['TARGET'])
pred = reg.predict(pd.DataFrame(train_seurat.iloc[:, :]))
pred_test = reg.predict(pd.DataFrame(test_seurat.iloc[:, :]))
train_res = np.sum(train['TARGET'][pred > 0])
test_res = np.sum(test['TARGET'][pred_test > 0])
print(f'Train Ridge selection percentage return: {train_res/train_max*100}%')
print(f'Test Ridge selection percentage return: {test_res/test_max*100}%')
```

### 1.3.1 Remove the Unnamed columns in dataframe

```
train = train.loc[:, ~train.columns.str.contains('^Unnamed')]
test = test.loc[:, ~test.columns.str.contains('^Unnamed')]
```

## 1.4 Naive random selection experiment

```
train_random = 0
for j in range(10000):
    ind = np.random.randint(2, size=train.shape[0])
    train_random = train_random + sum(train['TARGET'][ind > 0])
print('Result of train naive random selection:', train_random/10000)

test_random = 0
for j in range(10000):
    ind = np.random.randint(2, size=test.shape[0])
    test_random = test_random + sum(test['TARGET'][ind > 0])
print('Result of test naive random selection:', test_random/10000)

print(f'Train naive random selection percentage return: {(train_random/10000)/train_max*100}%')
print(f'Test naive random selection percentage return: {(test_random/10000)/test_max*100}%')
```

## 1.5 Get data shape

```
# print('Train shape:', train.shape)
# print('Test shape:', test.shape)
```

## 3.1 Remove extreme Target values

```
# train = train.sort_values(by=['TARGET'])

# # Remove samples with extremely large target values and samples with extremely negative values
# num_remove = 10
# train_remove = train.iloc[num_remove:-num_remove, :]

# # train.shape
```

## 3.2 Remove extreme features values

```
# for i in range(train.shape[1]-1):
#     train_remove = train_remove.sort_values(by=[str(i)])
#     num_remove = 5
#     train_remove = train_remove.iloc[num_remove:-num_remove, :]

# train_remove.shape

# figure, ax = plt.subplots(1, 1, figsize=(35, 7))
# train_remove_mean_values = train_remove.mean()
# train_remove_std_values = train_remove.std()
# train_remove_std = (train_remove - train_remove.mean())/train_remove.std()

# train_remove_std.boxplot()

# ax.set_xlabel('Features', fontsize=20)
# ax.set_ylabel('Values', fontsize=20)
# ax.set_title('Train standardized features variability', fontsize=25)
# figure.savefig("features_std_dis.png", bbox_inches='tight', dpi=600)
```

### The devil is in the details: we need to standardize the test dataset using the train mean and train standard deviation

```
# test_std = (test - train_remove_mean_values)/train_remove_std_values
```

## 5.5 Autoencoder Resnet model

```
input_features = train_seurat.to_numpy()
output_features = pd.DataFrame(train['TARGET']).to_numpy()

####### Standardize each training sample (row) individually
for i in range(input_features.shape[0]):
    input_features[i, :] = (input_features[i, :] - np.mean(input_features[i, :]))/np.std(input_features[i, :])
#######

input_ = np.zeros((input_features.shape[0], 9, 6))
output_ = np.zeros((output_features.shape[0], 9, 6))
for i in range(input_.shape[0]):
    input_[i, :, :] = np.reshape(input_features[i, :], (9, 6))
input_features = input_
# input_features = torch.tensor(input_features, 1)

X_test = test_seurat.to_numpy()
Y_test = pd.DataFrame(test['TARGET']).to_numpy()

####### Standardize each test sample (row) individually
for i in range(X_test.shape[0]):
    X_test[i, :] = (X_test[i, :] - np.mean(X_test[i, :]))/np.std(X_test[i, :])
#######

X_test_ = np.zeros((X_test.shape[0], 9, 6))
for i in range(X_test_.shape[0]):
    X_test_[i, :, :] = np.reshape(X_test[i, :], (9, 6))
X_test = X_test_

X_train, X_val, Y_train, Y_val = train_test_split(input_features, output_features, test_size=0.2, random_state=42)

#####
# To calculate returns
train_data, val_data = train_test_split(train, test_size=0.2, random_state=42)
test_data = test
#####

auto_train_max = np.sum(train_data['TARGET'][train_data['TARGET'] > 0])
auto_val_max = np.sum(val_data['TARGET'][val_data['TARGET'] > 0])
auto_test_max = np.sum(test['TARGET'][test['TARGET'] > 0])

print('Train X shape:', X_train.shape)
print('Validation X shape:', X_val.shape)
print('Test X shape:', X_test.shape)
print('Train Y shape:', Y_train.shape)
print('Val Y shape:', Y_val.shape)
print('Test Y shape:', Y_test.shape)
print('train_max:', auto_train_max)
print('val_max:', auto_val_max)
print('test_max:', auto_test_max)

train_input = torch.from_numpy(X_train)
train_output = torch.from_numpy(Y_train)
val_input = torch.from_numpy(X_val)
val_output = torch.from_numpy(Y_val)
test_input = torch.from_numpy(X_test)
test_output = torch.from_numpy(Y_test)

train_input = torch.unsqueeze(train_input, 1)
val_input = torch.unsqueeze(val_input, 1)
test_input = torch.unsqueeze(test_input, 1)

train_input = train_input.float()
train_output = train_output.float()
val_input = val_input.float()
val_output = val_output.float()
test_input = test_input.float()
test_output = test_output.float()

input_feature = train_input.shape[1]
output_feature = 1
# print('input_feature:', input_feature)
# print('output_feature:', output_feature)

train_input = train_input.to(device)
train_output = train_output.to(device)
val_input = val_input.to(device)
val_output = val_output.to(device)
test_input = test_input.to(device)
test_output = test_output.to(device)

def seed_everything(seed=1234):
    random.seed(seed)
    os.environ['PYTHONHASHSEED'] = str(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed(seed)
    torch.backends.cudnn.deterministic = True

seed_everything()

# Autoencoder model
# Base model
class Autoencoder(nn.Module):
    def __init__(self):
        super(Autoencoder, self).__init__()
        self.conv1 = nn.Conv2d(1, 16, 2)
        self.conv2 = nn.Conv2d(16, 32, 2)
        self.maxpool = nn.MaxPool2d(2)
        self.linear = nn.Linear(96, 1)
        self.linear2 = nn.Linear(input_feature//2, input_feature//4)
        self.linear3 = nn.Linear(input_feature//4, input_feature//16)
        self.linear4 = nn.Linear(input_feature//16, input_feature//16)
        self.linear5 = nn.Linear(input_feature//16, input_feature//16)
        self.linear6 = nn.Linear(input_feature//16+input_feature, input_feature//16)
        self.batchnorm_1 = nn.BatchNorm1d(input_feature//2)
        self.batchnorm_2 = nn.BatchNorm1d(input_feature//4)
        self.batchnorm_3 =
nn.BatchNorm1d(input_feature//16) self.relu = nn.ReLU() self.dropout = nn.Dropout(0.15) self.softmax = nn.Softmax() def forward(self, x_): x = self.conv1(x_) x = self.maxpool(x) x = self.conv2(x) x = torch.flatten(x, 1) x = self.relu(x) output = self.linear(x) return output.float() batch_size = 100000 train_ds = TensorDataset(train_input, train_output) train_dl = DataLoader(train_ds, batch_size= batch_size, shuffle=False) %matplotlib inline def fit(num_epochs, model, loss_fn, train_input, train_output, val_input, val_output, test_input, test_output, model_path): best_loss = float('inf') train_pred_output = [] val_pred_output = [] train_error = [] val_error = [] test_error = [] epochs = [] train_returns = [] val_returns = [] test_returns = [] train_sum = [] val_sum = [] test_sum = [] for epoch in range(num_epochs): for x,y in train_dl: model = model.train() opt.zero_grad() pred = model(x) y = torch.reshape(y, (y.shape[0], 1)) loss = loss_fn(pred, y) loss.backward() opt.step() if epoch % 50 == 0: model = model.eval() train_pred = model(train_input) train_pred_index = (torch.sign(train_pred)+1)//2 train_output = torch.reshape(train_output, (train_output.shape[0], 1)) train_loss = loss_fn(train_output, train_pred) # train_loss = loss_fn(train_pred, train_output.long().squeeze()) train_loss = train_loss.cpu().detach().numpy() val_pred = model(val_input) val_pred_index = (torch.sign(val_pred)+1)//2 val_output = torch.reshape(val_output, (val_output.shape[0], 1)) val_loss = loss_fn(val_output, val_pred) # val_loss = loss_fn(val_pred, val_output.long().squeeze()) val_loss = val_loss.cpu().detach().numpy() test_pred = model(test_input) test_pred_index = (torch.sign(test_pred)+1)//2 test_output = torch.reshape(test_output, (test_output.shape[0], 1)) test_loss = loss_fn(test_output, test_pred) # test_loss = loss_fn(test_pred, test_output.long().squeeze()) test_loss = test_loss.cpu().detach().numpy() epochs.append(epoch) train_error.append(math.log(train_loss+1)) 
val_error.append(math.log(val_loss+1)) test_error.append(math.log(test_loss+1)) # figure, ax = plt.subplots(1, 2, figsize = (20, 7)) # ax = ax.flatten() # figure, ax = plt.subplots(1, 4, figsize = (22, 5)) # ax = ax.flatten() # plt.grid(False) # train_conf = confusion_matrix(train_output, train_pred_index) # g1 = sns.heatmap(train_conf, cmap="YlGnBu",cbar=False, ax=ax[0], annot = True) # g1.set_ylabel('True Target') # g1.set_xlabel('Predict Target') # g1.set_title('Train dataset') # plt.grid(False) # val_conf = confusion_matrix(val_output, val_pred_index) # g2 = sns.heatmap(val_conf, cmap="YlGnBu",cbar=False, ax=ax[1], annot = True) # g2.set_ylabel('True Target') # g2.set_xlabel('Predict Target') # g2.set_title('Val dataset') # plt.grid(False) # test_conf = confusion_matrix(test_output, test_pred_index) # g3 = sns.heatmap(test_conf, cmap="YlGnBu",cbar=False, ax=ax[2], annot = True) # g3.set_ylabel('True Target') # g3.set_xlabel('Predict Target') # g3.set_title('Test dataset') train_pred_np = train_pred_index.cpu().detach().numpy() train_output_np = train_output.cpu().detach().numpy() val_pred_np = val_pred_index.cpu().detach().numpy() val_output_np = val_output.cpu().detach().numpy() test_pred_np = test_pred_index.cpu().detach().numpy() test_output_np = test_output.cpu().detach().numpy() # train_max_value = max(max(train_output_np), max(train_pred_np)) # train_min_value = min(min(train_output_np), min(train_pred_np)) # val_max_value = max(max(val_output_np), max(val_pred_np)) # val_min_value = min(min(val_output_np), min(val_pred_np)) # test_max_value = max(max(test_output_np), max(test_pred_np)) # test_min_value = min(min(test_output_np), min(test_pred_np)) # ax[0].scatter(train_output_np, train_pred_np, s = 20, alpha=0.3, c='blue') # ax[1].scatter(val_output_np, val_pred_np, s = 20, alpha=0.3, c='red') # ax[2].scatter(test_output_np, test_pred_np, s = 20, alpha=0.3, c='green') # ax[0].plot(epochs, train_error, c='blue') # ax[0].plot(epochs, val_error, c='red') # 
ax[0].plot(epochs, test_error, c='green') # ax[0].set_title('Errors vs Epochs', fontsize=15) # ax[0].set_xlabel('Epoch', fontsize=10) # ax[0].set_ylabel('Errors', fontsize=10) # ax[0].legend(['train', 'valid', 'test']) # ax[0].set_xlim([train_min_value, train_max_value]) # ax[0].set_ylim([train_min_value, train_max_value]) # ax[0].set_title('Trainig data', fontsize=15) # ax[0].set_xlabel('Target', fontsize=10) # ax[0].set_ylabel('Prediction', fontsize=10) # ax[0].plot([train_min_value, train_max_value], [train_min_value, train_max_value], 'k-') # ax[1].set_xlim([val_min_value, val_max_value]) # ax[1].set_ylim([val_min_value, val_max_value]) # ax[1].set_title('Validation data', fontsize=15) # ax[1].set_xlabel('Target', fontsize=10) # ax[1].set_ylabel('Prediction', fontsize=10) # ax[1].plot([val_min_value, val_max_value], [val_min_value, val_max_value], 'k-') # ax[2].set_xlim([test_min_value, test_max_value]) # ax[2].set_ylim([test_min_value, test_max_value]) # ax[2].set_title('Testing data', fontsize=15) # ax[2].set_xlabel('Target', fontsize=10) # ax[2].set_ylabel('Prediction', fontsize=10) # ax[2].plot([test_min_value, test_max_value], [test_min_value, test_max_value], 'k-') # ax[3].plot(epochs, train_error, c='blue') # ax[3].plot(epochs, val_error, c='red') # ax[3].plot(epochs, test_error, c='green') # ax[3].set_title('Training and Validation error', fontsize=15) # ax[3].set_xlabel('Epochs', fontsize=10) # ax[3].set_ylabel('MSE error', fontsize=10) # display.clear_output(wait=True) # display.display(pl.gcf()) # print('Epoch ', epoch, 'Train_loss: ', train_loss*1000, ' Validation_loss: ', val_loss*100, ' Test_loss: ', test_loss*100) # print(train_pred_np.shape, train_pred_np) # print(train_pred, train_pred_np) train_pred_np = np.squeeze(train_pred_np) val_pred_np = np.squeeze(val_pred_np) test_pred_np = np.squeeze(test_pred_np) train_res = np.sum(train_data['TARGET'][train_pred_np>0]) train_output_check = np.squeeze(train_output_np) train_check = 
np.sum(train_data['TARGET'][train_output_check>0]) val_res = np.sum(val_data['TARGET'][val_pred_np>0]) val_output_check = np.squeeze(val_output_np) val_check = np.sum(val_data['TARGET'][val_output_check>0]) test_res = np.sum(test_data['TARGET'][test_pred_np>0]) test_output_check = np.squeeze(test_output_np) test_check = np.sum(test_data['TARGET'][test_output_check>0]) # train_returns.append(train_res) # val_returns.append(val_res) # test_returns.append(test_res) # ax[1].plot(epochs, train_returns, c='blu`e') # ax[1].plot(epochs, val_returns, c='red') # ax[1].plot(epochs, test_returns, c='green') # ax[1].legend(['train', 'valid', 'test']) # ax[1].set_title('Return vs Epochs', fontsize=15) # ax[1].set_xlabel('Epoch', fontsize=10) # ax[1].set_ylabel('Returns', fontsize=10) # display.clear_output(wait=True) # display.display(pl.gcf()) train_sum.append(train_res) val_sum.append(val_res) test_sum.append(test_res) # print(f'Checks: {train_check/auto_train_max*100}%, {val_check/auto_val_max*100}%, {test_check/auto_test_max*100}%') # print(f'Maximum sum train return {train_res}, Total train return: {auto_train_max}, Maximum train percentage return: {train_res/auto_train_max*100}%') # print(f'Maximum sum train return {val_res}, Total train return: {auto_val_max}, Maximum train percentage return: {val_res/auto_val_max*100}%') # print(f'Maximum sum test return {test_res}, Total test return: {auto_test_max}, Maximum test percentage return: {test_res/auto_test_max*100}%') # print('Epoch:', epoch, 'Train loss:', train_loss, 'Val loss:', val_loss, 'Test loss:', test_loss) print(f'Epoch: {epoch}, Train loss: {train_loss}, Train return: {train_res/auto_train_max*100}%, Val loss: {val_loss}, Val return: {val_res/auto_val_max*100}%, Test loss: {test_loss}, Test return: {test_res/auto_test_max*100}%') if val_loss < best_loss: torch.save(model.state_dict(), model_path) best_loss = val_loss # train_pred_output.append([train_pred.cpu().detach().numpy(), 
train_output.cpu().detach().numpy()])
            # val_pred_output.append([val_pred.cpu().detach().numpy(), val_output.cpu().detach().numpy()])
    return train_sum, val_sum, test_sum

num_epochs = 20000
learning_rate = 0.001
loss_fn = F.mse_loss
# loss_fn = nn.CrossEntropyLoss()
seed_everything()
model = Autoencoder()
model = model.to(device)
opt = torch.optim.SGD(model.parameters(), lr=learning_rate, momentum=0.9)

# fit() returns three lists, so unpack three values here; the original
# six-value unpacking (train_conf_1 etc.) would raise a ValueError.
train_sum_1, val_sum_1, test_sum_1 = fit(num_epochs, model, loss_fn, train_input, train_output, val_input, val_output, test_input, test_output, 'model_path_cnn')

# fig.savefig("auto_encoder.png", bbox_inches='tight', dpi=600)
# model = Autoencoder_model()
# model.load_state_dict(torch.load(model_path))
# model.eval()
```
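For reference, the preprocessing used above — z-scoring each sample across its own features and folding the flat vector into a 2-D grid for the `Conv2d` layers — can be isolated into a small helper. This is a sketch, not the notebook's code: `standardize_and_reshape` is a hypothetical name, and the `(9, 6)` shape assumes 54 input features as in section 5.5.

```python
import numpy as np

def standardize_and_reshape(features, shape=(9, 6)):
    """Per-sample z-score each row, then reshape the flat feature
    vector into a 2-D grid suitable for a Conv2d input channel."""
    out = np.empty((features.shape[0], *shape))
    for i, row in enumerate(features):
        row = (row - row.mean()) / row.std()  # per-sample, not per-feature
        out[i] = row.reshape(shape)
    return out

# Toy input: 4 samples with 54 features each.
X = np.random.default_rng(0).normal(size=(4, 54))
grids = standardize_and_reshape(X)
print(grids.shape)  # (4, 9, 6)
```

Each resulting grid has mean ≈ 0 and standard deviation ≈ 1, matching the loop in the notebook.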
``` from scipy.stats import ranksums, sem import numpy as np from statannot import add_stat_annotation import copy import os import matplotlib.pyplot as plt import matplotlib save_dir = os.path.join("/analysis/fabiane/documents/publications/patch_individual_filter_layers/MIA_revision") plt.style.use('ggplot') matplotlib.use("pgf") matplotlib.rcParams.update({ "pgf.texsystem": "pdflatex", 'font.family': 'serif', 'font.size':8, 'text.usetex': True, 'pgf.rcfonts': False, }) def get_runtime(filename): all_runtime_lines = ! grep "Total time elapsed:" $filename all_iter_lines = ! grep "output_dir = " $filename times = [] iterations = [] for runtime_line in all_runtime_lines: time = 0 # convert runtime for idx, inner_line in enumerate(runtime_line.split(":")[1:4]): inner_line = inner_line.strip().split("\\")[0] if idx == 0: # remove time symbol and convert to seconds time += int(inner_line.split("h")[0]) * 3600 elif idx == 1: # remove time symbol and convert to seconds time += int(inner_line.split("m")[0]) * 60 elif idx == 2: # remove time symbol time += int(inner_line.split("s")[0]) times.append(time) # UKB baseline full set didn't finish reporting for the last of the 10 runs # it finished running but did not print out its running time. # Therefore, we ran the model again in a separate file for that run. 
if len(times) == 9: failed_time = 5 * 3600 + 42 * 60 + 49 # taken from individual run of final iteration times.append(failed_time) # convert iterations assert(len(all_iter_lines) == 1) iter_line = all_iter_lines[0].split("\"")[2] all_checkpoints = !ls -l $iter_line for checkpoint in all_checkpoints: if checkpoint.endswith("FINAL.h5"): iterations.append(int(checkpoint.split("_")[-2])) #assert(len(times) == 50 or len(times) == 10) return times, iterations # cleanup script for old runs #!destination="/ritter/share/projects/Methods/Eitel_local_filter/experiments_submission/models/MS/full_set/10xrandom_splits/experiment_r3/backup" find "/ritter/share/projects/Methods/Eitel_local_filter/experiments_submission/models/MS/full_set/10xrandom_splits/experiment_r3/" -type f -newermt "2021-01-01 00:00" -not -newermt "2021-01-22 15:55" -exec bash -c ' dirname=$(dirname {}); mkdir -p "${destination}/${dirname}"; echo ! mv {} ${destination}/${dirname}/' \;; times, iterations = get_runtime("ADNI_experiment-20_percent-10xrandom_sampling-random_search-Copy1.ipynb") filename_list = { "ADNI_small" : [ "ADNI_baseline-20_percent-10xrandom_sampling-random_search.ipynb", "ADNI_LiuPatches-20_percent-10xrandom_sampling_random_search.ipynb", "ADNI_experiment-20_percent-10xrandom_sampling-random_search-Copy1.ipynb" ], "UKB_small" : [ "UKB_sex_baseline-20_percent-10xrandom_sampling_random_search-Copy1.ipynb", "UKB_sex_LiuPatches-20_percent-10xrandom_sampling_random_search.ipynb", "UKB_sex_experiment-20_percent-10xrandom_sampling-random_search-Copy1.ipynb" ], "MS_small" : [ "MS_baseline-full_set-10xrandom_splits-random_search-Copy2.ipynb", "MS_LiuPatches-full_set-10xrandom_splits.ipynb", "MS_experiment-full_set-10xrandom_splits-random_search-Copy1.ipynb" ], "ADNI_big" : [ "ADNI_baseline-full_set-10xrandom_sampling-random_search.ipynb", "ADNI_LiuPatches-full_set-10xrandom_sampling.ipynb", "ADNI_experiment-full_set-10xrandom_sampling-random_search.ipynb" ], "UKB_big" : [ 
"UKB_sex_baseline-full_set-10xrandom_sampling-random_search-Copy1.ipynb", "UKB_sex_LiuPatches-full_set-10xrandom_sampling_random_search.ipynb", "UKB_sex_experiment-full_set-10xrandom_sampling-random_search-Copy1.ipynb" ], } """fig = plt.Figure() axs = [] for i, experiment in enumerate(filename_list): print(experiment) time_base, iter_base = get_runtime(filename_list[experiment][0]) time_pif, iter_pif = get_runtime(filename_list[experiment][1]) # run statistical test test_time = ranksums(time_base, time_pif) test_iter = ranksums(iter_base, iter_pif) print(f"Avg time baseline in seconds: {np.mean(time_base)}") print(f"Avg time PIF in seconds: {np.mean(time_pif)}") print(test_time) print(f"Avg number of iterations baseline: {np.mean(iter_base)}") print(f"Avg number of iterations PIF: {np.mean(iter_pif)}") print(test_iter) # plot results i *= 3 ax = plt.bar([i, i+1], [np.mean(time_base), np.mean(time_pif)], color=["tab:blue", "tab:orange"], label=["Baseline", "PIF"]) axs.append(ax) plt.errorbar(x=[i, i+1], y=[np.mean(time_base), np.mean(time_pif)], yerr=[sem(time_base), sem(time_pif)], color="black", ls="none", label="_Errorbar") x1, x2 = i, i+1 y, h, col = np.max(time_base) + 70, 2, 'k' #plt.plot([x1, x1, x2, x2], [y, y+h, y+h, y], lw=1.5, c=col) if test_time.pvalue < 0.05: plt.text((x1+x2)*.5, y+h, "*", ha='center', va='bottom', color=col) #else: # plt.text((x1+x2)*.5, y+h, "ns", ha='center', va='bottom', color=col) leg = plt.legend(axs, ["Baseline", "PIF"]) leg.legendHandles[0].set_color('tab:blue') leg.legendHandles[1].set_color('tab:orange') plt.xticks(np.arange(0.5, 13, step=3), ["ADNI small", "ADNI big", "UKB small", "UKB big", "VIMS"]) plt.ylabel("Seconds") plt.title("Runtime in seconds") plt.show()""" fig, axes = plt.subplots(1, 2, figsize=(12, 6)) fig.set_size_inches(w=6.1, h=3.1) def inc_y(y): if y > 1000: y += 2500 else: y += 23 return y def sub_plot(i, ax, base_data, patch_data, pif_data, test_base_patch, test_base_pif, test_patch_pif): i *= 4 ax.bar([i, 
i+1, i+2], [np.mean(base_data), np.mean(patch_data), np.mean(pif_data)], color=["tab:gray", "tab:blue", "tab:orange"], ) #label=["Baseline", "PIF"]) ax.errorbar(x=[i, i+1, i+2], y=[np.mean(base_data), np.mean(patch_data), np.mean(pif_data)], yerr=[sem(base_data), sem(patch_data), sem(pif_data)], color="black", ls="none") # define coords for significance labels y, col = np.mean(patch_data), 'k' if y > 360: y += 800 h = 500 else: y += 5 h = 2 # test between baseline and patch based x1, x2 = i, i+1 if test_base_patch.pvalue < 0.001: ax.text((x1+x2)*.5, y+h/2, "**", ha='center', va='bottom', color=col) ax.plot([x1, x1, x2, x2], [y, y+h, y+h, y], lw=1.5, c=col) elif test_base_patch.pvalue < 0.01: ax.text((x1+x2)*.5, y+h/2, "*", ha='center', va='bottom', color=col) ax.plot([x1, x1, x2, x2], [y, y+h, y+h, y], lw=1.5, c=col) # test between baseline and PIF x1, x2 = i, i+2 if test_base_pif.pvalue < 0.001: y = inc_y(y) ax.text((x1+x2)*.5, y+h/2, "**", ha='center', va='bottom', color=col) ax.plot([x1, x1, x2, x2], [y, y+h, y+h, y], lw=1.5, c=col) elif test_base_pif.pvalue < 0.01: y = inc_y(y) ax.text((x1+x2)*.5, y+h/2, "*", ha='center', va='bottom', color=col) ax.plot([x1, x1, x2, x2], [y, y+h, y+h, y], lw=1.5, c=col) # test between patch-based and PIF x1, x2 = i+1, i+2 if test_patch_pif.pvalue < 0.001: y = inc_y(y) ax.text((x1+x2)*.5, y+h/2, "**", ha='center', va='bottom', color=col) ax.plot([x1, x1, x2, x2], [y, y+h, y+h, y], lw=1.5, c=col) elif test_patch_pif.pvalue < 0.05: y = inc_y(y) ax.text((x1+x2)*.5, y+h/2, "*", ha='center', va='bottom', color=col) ax.plot([x1, x1, x2, x2], [y, y+h, y+h, y], lw=1.5, c=col) for i, experiment in enumerate(filename_list): print(experiment) time_base, iter_base = get_runtime(filename_list[experiment][0]) time_patch, iter_patch = get_runtime(filename_list[experiment][1]) time_pif, iter_pif = get_runtime(filename_list[experiment][2]) # run statistical test test_time_base_pif = ranksums(time_base, time_pif) test_time_base_patch = 
ranksums(time_base, time_patch) test_time_patch_pif = ranksums(time_patch, time_pif) test_iter_base_pif = ranksums(iter_base, iter_pif) test_iter_base_patch = ranksums(iter_base, iter_patch) test_iter_patch_pif = ranksums(iter_patch, iter_pif) print(f"Avg time baseline in seconds: {np.mean(time_base)}") print(f"Avg time patch-based in seconds: {np.mean(time_patch)}") print(f"Avg time PIF in seconds: {np.mean(time_pif)}") print("Test time base vs patch ", test_time_base_patch) print("Test time base vs pif ", test_time_base_pif) print("Test time patch vs pif ", test_time_patch_pif) print(f"Avg number of iterations baseline: {np.mean(iter_base)}") print(f"Avg number of iterations patch-based: {np.mean(iter_patch)}") print(f"Avg number of iterations PIF: {np.mean(iter_pif)}") print("Test iter base vs patch ", test_iter_base_patch) print("Test iter base vs pif ", test_iter_base_pif) print("Test iter patch vs pif ", test_iter_patch_pif) # plot run time results sub_plot(i, axes[0], time_base, time_patch, time_pif, test_time_base_patch, test_time_base_pif, test_time_patch_pif) # plot training iters results sub_plot(i, axes[1], iter_base, iter_patch, iter_pif, test_iter_base_patch, test_iter_base_pif, test_iter_patch_pif) for ax_idx, ax in enumerate(axes): ax.set_xticks(np.arange(1, 20, step=4)) #ax.set_xticklabels(["ADNI small", "ADNI big", "UKB small", "UKB big", "VIMS"], rotation=45) ax.set_xticklabels(["ADNI", "UKB", "VIMS", "ADNI", "UKB"], rotation=45) ax.annotate('Small', (0.22,0), (0, -42), color="gray", xycoords='axes fraction', textcoords='offset points', va='top') ax.annotate('Big', (0.74,0), (0, -42), color="gray", xycoords='axes fraction', textcoords='offset points', va='top') trans = ax.get_xaxis_transform() #ax.annotate('Big', (0.7,0), (0, -30), xycoords=trans, textcoords='offset points', ha='center', va='top') #ax.annotate('Neonatal', xy=(1, -.1), xycoords=trans, ha="center", va="top") ax.plot([-.4,-.4,10,10],[-.20,-.20-0.03,-.20-0.03,-.20], color="gray", 
transform=trans, clip_on=False) # line small ax.plot([12,12,19,19],[-.20,-.20-0.03,-.20-0.03,-.20], color="gray", transform=trans, clip_on=False) # line big if ax_idx == 0: ax.set_ylabel("Seconds") ax.set_title("Run time in seconds") handles, labels = ax.get_legend_handles_labels() leg = ax.legend(["Baseline", "Patch-based", "PIF"]) leg.legendHandles[0] = matplotlib.patches.Rectangle(xy=(-0, -0), width=20, height=7, angle=0) leg.legendHandles[0].set_color('tab:gray') leg.legendHandles[1] = matplotlib.patches.Rectangle(xy=(-0, -0), width=20, height=7, angle=0) leg.legendHandles[1].set_color('tab:blue') leg.legendHandles[2] = matplotlib.patches.Rectangle(xy=(-0, -0), width=20, height=7, angle=0) leg.legendHandles[2].set_color('tab:orange') ax.legend(leg.legendHandles, ["Baseline", "Patch-based", "PIF"], loc="upper left") else: ax.set_ylabel("Iterations") ax.set_title("Number of iterations") #leg = plt.legend(axes, ["Baseline", "PIF"]) #plt.show() fig.savefig(os.path.join(save_dir, "Training_speed_comparison.pgf"), bbox_inches='tight', dpi=250) #fig.show() ```
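The hour/minute/second conversion buried inside `get_runtime` can be sketched as a standalone helper. This assumes a log line shaped like `Total time elapsed: 5h 42m 49s`; the real notebook logs apparently contain extra colons (hence the `split(":")[1:4]` slicing above), so treat the exact format here as an assumption.

```python
def hms_to_seconds(line):
    """Convert an 'Xh Ym Zs' runtime report into total seconds,
    mirroring the field-by-field conversion in get_runtime()."""
    fragment = line.split(":", 1)[1]  # e.g. ' 5h 42m 49s'
    total = 0
    # Pair each field with its scale: hours, minutes, seconds.
    for part, scale in zip(fragment.split(), (3600, 60, 1)):
        total += int(part.rstrip("hms")) * scale
    return total

print(hms_to_seconds("Total time elapsed: 5h 42m 49s"))  # 20569
```

20569 seconds is exactly the hard-coded fallback used for the UKB run whose timing line was missing (5 × 3600 + 42 × 60 + 49).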
```
import forecasting as fc
import pandas as pd
import numpy as np
import warnings
warnings.simplefilter('ignore')

revenues = pd.read_csv('Final Tables by Tax Source/final_data_revenue.csv')
revenues['Date'] = pd.to_datetime(revenues['Date'])
revenues = revenues.iloc[:, 1:]
```
1. Create the forecasts DataFrame, containing all forecasts on a rolling monthly basis for each tax source.
```
forecasts = pd.DataFrame()
for i in revenues.Category.unique():
    df = revenues[revenues['Category'] == i].set_index('Date')
    df['Rolling Total'] = df['Monthly Total'].rolling(12).sum()
    result = fc.forecast(df, steps = 24)
    forecasts = pd.concat([result, forecasts])

len(list(revenues.Category.unique()))

forecasts.index = list(forecasts.reset_index()['index'][:24])*22
forecasts['Date'] = forecasts.index
forecasts['Year'] = forecasts['Date'].dt.year
forecasts['Month'] = forecasts['Date'].dt.month
```
2. Create the 2020 Revenue Budget, with a separate DataFrame showing all entries for December 2020.
```
forecast_2020 = forecasts[(forecasts['Year'] == 2020) & (forecasts['Month'] == 12)].sort_values(by = 'Category')
forecast_2020['Percentage Method'] = forecast_2020['Percentage Method'].fillna(forecast_2020['Difference Method'])
forecast_2020['Difference Method'] = forecast_2020['Difference Method'].astype(int)
forecast_2020['Percentage Method'] = forecast_2020['Percentage Method'].astype(int)
```
2.1. Select the lower estimate of the two methods. Other determinations can be made; it is ultimately a matter of professional judgement based on the algorithm results.
```
result_2020 = np.where(forecast_2020['Difference Method'].ge(forecast_2020['Percentage Method']),
                       forecast_2020['Percentage Method'],
                       forecast_2020['Difference Method'])
forecast_2020['Final Result'] = result_2020
forecast_2020
```
2.2 Apply the Final Forecast column to the entire forecasts DataFrame.
```
result_series = np.where(forecasts['Difference Method'].ge(forecasts['Percentage Method']),
                         forecasts['Percentage Method'],
                         forecasts['Difference Method'])
forecasts['Final Result'] = result_series
forecasts['Percentage Method'] = forecasts['Percentage Method'].fillna(forecasts['Difference Method'])
forecasts.columns = ['Category', 'Difference Method', 'Percentage Method', 'Date', 'Year', 'Month', 'Rolling Total']
forecasts.to_csv('forecast.csv')
```
3. Add rolling values for prior years.
```
rolling_total = pd.DataFrame()
for i in revenues.Category.unique():
    df = revenues[revenues['Category'] == i].set_index('Date')
    df['Rolling Total'] = df['Monthly Total'].rolling(12).sum()
    rolling_total = pd.concat([df, rolling_total])

forecasting = pd.concat([rolling_total, forecasts])
forecasting['Date'] = forecasting.index
forecasting.tail(15)
```
3.2 Create a plot of the total for all net federal revenues (more plots and dashboarding to come).
```
import seaborn as sns

forecast_plot = forecasting.groupby(['Date'])['Rolling Total'].sum().reset_index()
sns.lineplot(x = forecast_plot['Date'][12:], y = forecast_plot['Rolling Total'])
forecasting.to_csv('forecasting.csv')
```
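The `np.where(...ge(...), ...)` selection used above is simply an elementwise minimum of the two forecast columns. A toy illustration with hypothetical values:

```python
import numpy as np
import pandas as pd

# Hypothetical forecasts from the two competing methods.
df = pd.DataFrame({
    "Difference Method": [120, 95, 300],
    "Percentage Method": [110, 100, 280],
})

# Where the difference-method estimate is >= the percentage-method
# estimate, keep the percentage estimate; otherwise keep the
# difference estimate -- i.e. always take the lower of the two.
df["Final Result"] = np.where(
    df["Difference Method"].ge(df["Percentage Method"]),
    df["Percentage Method"],
    df["Difference Method"],
)
print(df["Final Result"].tolist())  # [110, 95, 280]
```

The same result could be written more directly as `df[["Difference Method", "Percentage Method"]].min(axis=1)`; the `np.where` form makes the conservative-estimate rule explicit.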
``` # Import dependencies pandas, # requests, gmaps, census, and finally config's census_key and google_key # Declare a variable "c" and set it to the census with census_key. # https://github.com/datamade/census # We're going to use the default year 2016, however feel free to use another year. # Run a census search to retrieve data on estimate of male, female, population, and unemployment count for each zip code. # https://api.census.gov/data/2013/acs5/variables.html # Show the output of census_data # Create a variable census_pd and set it to a dataframe made with the census_data's list of dictionaries # Rename census_pd with appropriate columns "Male", "Female", "Population", "Unemployment Count", and "Zipcode" # Show the first 5 rows of census_pd # Create a new variable calc_census_pd and set it to census_pd # Calculate the % of male to female ratio and add them as new columns Male % and Female %. # Calculate the unemployment rate based on population # Show the first 5 rows of calc_census_pd # Get the correlation coefficients of calc_census_pd ``` ### Critical Thinking: From the above correlation table. What does the unemployment rate tell you about its correlation with the number of males or females? #### ANSWER: ``` # Use the describe function to get a quick glance at calc_census_pd. ``` ### Do you see anything strange about male % or female % in the describe above? #### ANSWER: ``` # Create two variables called "male_zipcode_outliers" and "female_zipcode_outliers" # Set them to queries where male or female is are the outliers based on the described data from previous task. # Example: anything greater than 0.95 is an outlier # Show all rows for either "male_zipcode_outliers" and "female_zipcode_outliers" ``` ### What is a possible cause of some outliers with larger populations? Hint: Look up the zipcode for larger population of either male or female outliers. What information do these zipcodes have in common? 
### ANSWER: # Heatmap of population ``` # Create a variable "zip_lng_lat_data" and using pandas import the zip_codes_states.csv from Resources folder. # https://www.gaslampmedia.com/download-zip-code-latitude-longitude-city-state-county-csv/ # HINT: When loading zipcodes they may turn into integers and lose their 0's. # To correct this check out dtype in the documentation: # https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html # Show the first 5 rows of zip_lng_lat_data # Get the longitude and latitude based calc_census_pd by merging them on their zip code columns. # Show the first 5 rows of merged_table # Configure gmaps with API key # Define locations as a dataframe of latitude and longitude from merged_table. # HINT: You'll need to drop the NaN before storing into locations or population # Define population as the population from merged_table # HINT: You'll need to drop the NaN before storing into locations or population # Create a population Heatmap layer # Note you may need to run the following in your terminal to show the gmaps figure. # jupyter nbextension enable --py --sys-prefix widgetsnbextension # jupyter nbextension enable --py --sys-prefix gmaps # Recommended settings for heatmap layer: max_intensity=2000000 and point radius = 1 # Adjust heat_layer setting to help with heatmap dissipating on zoom ``` ### What is a downfall of using zip codes for mapping? ### ANSWER: https://gis.stackexchange.com/questions/5114/obtaining-up-to-date-list-of-us-zip-codes-with-latitude-and-longitude-geocodes ZIP codes are a habitually abused geography. It's understandable that people want to use them because they are so visible and well-known but they aren't well suited to any use outside the USPS. ZIP codes aren't associated with polygons, they are associated with carrier routes and the USPS doesn't like to share those. Some ZIP codes are points e.g. a ZIP code may be associated with a post office, a university or a large corporate complex. 
They are used to deliver mail. The Census Bureau creates ZIP Code Tabulation Areas (ZCTA) based on their address database. If it's appropriate to your work you could try taking the centroid of the ZCTAs. ZCTA Geography from the 2010 Census is available on the Census Bureau website.
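The leading-zero pitfall flagged in the `dtype` hint above is easy to reproduce with an in-memory CSV. This is a sketch: the `zip_code` column name is assumed to match the zip_codes_states.csv layout.

```python
import io
import pandas as pd

# New England ZIP codes start with 0, e.g. Cambridge, MA.
csv_text = "zip_code,latitude,longitude\n02139,42.36,-71.10\n"

# Default parsing infers an integer column: '02139' becomes 2139,
# and the leading zero is lost.
naive = pd.read_csv(io.StringIO(csv_text))
print(naive["zip_code"].iloc[0])   # 2139

# Forcing a string dtype preserves the ZIP code exactly, so it can be
# merged against a string Zipcode column without silent mismatches.
zips = pd.read_csv(io.StringIO(csv_text), dtype={"zip_code": str})
print(zips["zip_code"].iloc[0])    # 02139
```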
# Goals

### Train a classifier using resnet18 on the natural-images dataset
### Understand what lies inside the resnet18 network

# What is resnet

## Readings on resnet

1) Points from https://towardsdatascience.com/an-overview-of-resnet-and-its-variants-5281e2f56035
 - The core idea of ResNet is introducing a so-called “identity shortcut connection” that skips one or more layers
 - A deeper model should not produce a training error higher than its shallower counterpart
 - Solves the problem of vanishing gradients as network depth increases
 - https://medium.com/@anishsingh20/the-vanishing-gradient-problem-48ae7f501257

2) Points from https://medium.com/@14prakash/understanding-and-implementing-architectures-of-resnet-and-resnext-for-state-of-the-art-image-cf51669e1624
 - Won 1st place in the ILSVRC 2015 classification competition with a top-5 error rate of 3.57% (an ensemble model)
 - Efficiently trained networks with 100 layers and even 1000 layers
 - Replacing the VGG-16 layers in Faster R-CNN with ResNet-101 yielded a relative improvement of 28%

3) Read more here
 - https://arxiv.org/abs/1512.03385
 - https://d2l.ai/chapter_convolutional-modern/resnet.html
 - https://cv-tricks.com/keras/understand-implement-resnets/
 - https://mc.ai/resnet-architecture-explained/

# Table of Contents

## [0. Install](#0)
## [1. Load experiment with resnet base architecture](#1)
## [2. Visualize resnet](#2)
## [3. Train the classifier](#3)
## [4. Run inference on trained classifier](#5)

<a id='0'></a>
# Install Monk

- git clone https://github.com/Tessellate-Imaging/monk_v1.git
- cd monk_v1/installation/Linux && pip install -r requirements_cu9.txt
- (Select the requirements file as per OS and CUDA version)

```
!git clone https://github.com/Tessellate-Imaging/monk_v1.git

# Select the requirements file as per OS and CUDA version
!cd monk_v1/installation/Linux/ && pip install -r requirements_cu9.txt
```

## Dataset - Natural Images Classification
- https://www.kaggle.com/prasunroy/natural-images

```
!
wget --load-cookies /tmp/cookies.txt "https://docs.google.com/uc?export=download&confirm=$(wget --save-cookies /tmp/cookies.txt --keep-session-cookies --no-check-certificate 'https://docs.google.com/uc?export=download&id=1sbQ_KaEDd7kRrTvna-4odLqxM2G0QT0Z' -O- | sed -rn 's/.*confirm=([0-9A-Za-z_]+).*/\1\n/p')&id=1sbQ_KaEDd7kRrTvna-4odLqxM2G0QT0Z" -O natural-images.zip && rm -rf /tmp/cookies.txt ! unzip -qq natural-images.zip ``` # Imports ``` # Monk import os import sys sys.path.append("monk_v1/monk/"); #Using mxnet-gluon backend from gluon_prototype import prototype ``` <a id='1'></a> # Load experiment with resnet base architecture ## Creating and managing experiments - Provide project name - Provide experiment name - For a specific dataset, create a single project - Inside each project multiple experiments can be created - Every experiment can have different hyper-parameters attached to it ``` gtf = prototype(verbose=1); gtf.Prototype("Project", "resnet-intro"); ``` ### This creates files and directories as per the following structure workspace | |--------Project | | |-----resnet-intro | |-----experiment-state.json | |-----output | |------logs (All training logs and graphs saved here) | |------models (all trained models saved here) ## Set dataset and select the model ## Quick mode training - Using Default Function - dataset_path - model_name - freeze_base_network - num_epochs ## Sample Dataset folder structure natural-images | |-----train |------aeroplane | |------img1.jpg |------img2.jpg |------.... (and so on) |------car | |------img1.jpg |------img2.jpg |------.... (and so on) |------.... (and so on) | | |-----val |------aeroplane | |------img1.jpg |------img2.jpg |------.... (and so on) |------car | |------img1.jpg |------img2.jpg |------.... (and so on) |------.... 
(and so on) ``` gtf.Default(dataset_path="natural-images/train", model_name="resnet18_v1", freeze_base_network=False, num_epochs=5); ``` ## From the summary above - Model Params Model name: resnet18 Num of potentially trainable layers: 41 Num of actual trainable layers: 41 <a id='2'></a> # Visualize resnet ``` gtf.Visualize_With_Netron(data_shape=(3, 224, 224), port=8082); ``` ## resnet block - 1 - Creating networks and blocks using monk from scratch will be dealt with in a different roadmap series ``` from IPython.display import Image Image(filename='imgs/resnet18_v1_block1_mxnet.png') ``` ## Properties - This block has 2 branches - The first branch is the identity branch: it passes the input straight through as its output (the residual shortcut) - The second branch has these layers - conv_3x3 -> batchnorm -> relu -> conv_3x3 -> batchnorm - The branches are added elementwise, so both branches need to have the same-sized output - The final layer of this block is relu ## resnet block - 2 - Creating networks and blocks using monk from scratch will be dealt with in a different roadmap series ``` from IPython.display import Image Image(filename='imgs/resnet18_v1_block2_mxnet.png') ``` ## Properties - This block has 2 branches - The first branch has these layers - conv_1x1 -> batchnorm - The second branch has these layers - conv_3x3 -> batchnorm -> relu -> conv_3x3 -> batchnorm - The branches are added elementwise, so both branches need to have the same-sized output - The final layer of this block is relu ## resnet fully connected chain ``` from IPython.display import Image Image(filename='imgs/resnet18_v1_block_fc_mxnet.png') ``` ## resnet Network - Creating networks and blocks using monk from scratch will be dealt with in a different roadmap series ``` from IPython.display import Image Image(filename='imgs/resnet18_v1_mxnet.png') ``` ## Properties - This network - has 5 type-1 blocks - has 3 type-2 blocks - after these blocks comes the type-3 (fc) block <a id='3'></a> # Train the classifier ``` #Start Training 
gtf.Train(); #Read the training summary generated once you run the cell and training is completed ``` <a id='4'></a> # Run inference on trained classifier ``` gtf = prototype(verbose=1); gtf.Prototype("Project", "resnet-intro", eval_infer=True); output = gtf.Infer(img_name = "natural-images/test/test1.jpg"); from IPython.display import Image Image(filename='natural-images/test/test1.jpg') output = gtf.Infer(img_name = "natural-images/test/test2.jpg"); from IPython.display import Image Image(filename='natural-images/test/test2.jpg') output = gtf.Infer(img_name = "natural-images/test/test3.jpg"); from IPython.display import Image Image(filename='natural-images/test/test3.jpg') ```
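The type-1 block described earlier (a conv branch added elementwise to an identity shortcut, followed by a final relu) can be sketched in plain numpy. This is an illustrative sketch only, not Monk's or mxnet's internal implementation; the linear map `branch` is a hypothetical stand-in for the conv/batchnorm stack:

```python
import numpy as np

def relu(x):
    # elementwise rectified linear unit
    return np.maximum(x, 0.0)

def residual_block(x, branch_fn):
    # type-1 resnet block: branch output is added elementwise to the
    # identity shortcut, then passed through a final relu
    return relu(branch_fn(x) + x)

# toy "branch": a fixed linear map standing in for conv->bn->relu->conv->bn
w = np.array([[0.5, 0.0],
              [0.0, 0.5]])
branch = lambda x: w @ x

x = np.array([1.0, -2.0])
out = residual_block(x, branch)
print(out)  # relu([0.5, -1.0] + [1.0, -2.0]) = relu([1.5, -3.0]) = [1.5, 0.0]
```

The key point is that if the branch learns to output zero, the block degenerates to the identity map, which is why a deeper model should not train worse than its shallower counterpart.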
# Speech Identity Inference Let's check if the pretrained model can really identify speakers. ``` import os import numpy as np import pandas as pd from sklearn import metrics from tqdm.notebook import tqdm from IPython.display import Audio from matplotlib import pyplot as plt %matplotlib inline import tensorflow as tf import tensorflow_io as tfio import tensorflow_addons as tfa from train_speech_id_model import BaseSpeechEmbeddingModel from create_audio_tfrecords import AudioTarReader, PersonIdAudio sr = 48000 m = BaseSpeechEmbeddingModel() m.summary() # cp-0090.ckpt: auc = 0.9525 # cp-0110.ckpt: auc = 0.9533 chkpt = 'temp/cp-0110.ckpt' m.load_weights(chkpt) m.compile( optimizer=tf.keras.optimizers.Adam(0.0006), loss=tfa.losses.TripletSemiHardLoss() ) # m.save('speech-id-model-110') # changing the corpus to other languages allows evaluating how the model transfers between languages dev_dataset = tfrecords_audio_dataset = tf.data.TFRecordDataset( 'data/cv-corpus-7.0-2021-07-21-en.tar.gz_dev.tfrecords.gzip', compression_type='GZIP', # 'data/cv-corpus-7.0-2021-07-21-en.tar.gz_test.tfrecords.gzip', compression_type='GZIP', num_parallel_reads=4 ).map(PersonIdAudio.deserialize_from_tfrecords) samples = [x for x in dev_dataset.take(2500)] # decode audio samples = [(tfio.audio.decode_mp3(x[0])[:, 0], x[1]) for x in samples] # is the audio decoded correctly? Audio(samples[10][0], rate=sr) # compute the embeddings embeddings = [] for audio_data, person_id in tqdm(samples): cur_emb = m.predict( tf.expand_dims(audio_data, axis=0) )[0] embeddings.append(cur_emb) ``` ## Check embedding quality Ideally, embeddings from the same person should look the same. 
``` n_speakers = len(set([x[1].numpy() for x in samples])) print(f'Loaded {n_speakers} different speakers') pairwise_diff = {'same': [], 'different': []} for p in tqdm(range(len(samples))): for q in range(p + 1, len(samples)): id_1 = samples[p][1] id_2 = samples[q][1] dist = np.linalg.norm(embeddings[p] - embeddings[q]) if id_1 == id_2: pairwise_diff['same'].append(dist) else: pairwise_diff['different'].append(dist) plt.figure(figsize=(12, 8)) plt.boxplot([pairwise_diff[x] for x in pairwise_diff]) plt.xticks([k + 1 for k in range(len(pairwise_diff))], [x for x in pairwise_diff]) plt.ylabel('Embedding distance') plt.title('Boxplot of speaker identifiability') # what do we care about? # given that 2 samples are different, we don't want to predict `same` # secondarily, given that 2 samples are the same, we want to predict `same` # threshold - alpha from 0 (median of same) to 1 (median of different) alpha = 0.2 # if using the validation set, we can calibrate t t = np.median(pairwise_diff['same']) + alpha * (np.median(pairwise_diff['different']) - np.median(pairwise_diff['same'])) specificity = np.sum(np.array(pairwise_diff['different']) > t) / len(pairwise_diff['different']) sensitivity = np.sum(np.array(pairwise_diff['same']) < t) / len(pairwise_diff['same']) print('Sensitivity, specificity = ', sensitivity, specificity) same_lbl = [0] * len(pairwise_diff['same']) diff_lbl = [1] * len(pairwise_diff['different']) scores = pairwise_diff['same'] + pairwise_diff['different'] # scale scores to range [0,1] and change threshold accordingly scores = np.array(scores) * 0.5 t = t * 0.5 labels = same_lbl + diff_lbl len(scores), len(labels) fpr, tpr, thresholds = metrics.roc_curve(labels, scores, pos_label=1) plt.figure(figsize=(12, 8)) roc_auc = metrics.roc_auc_score(labels, scores) plt.title(f'ROC curve: AUC = {np.round(roc_auc, 4)} {chkpt}') plt.xlabel('False Positive Rate') plt.ylabel('True Positive Rate') plt.plot(fpr, tpr) plt.plot([0, 1], [0, 1]) plt.figure(figsize=(12, 
8)) plt.title('Point of operation') plt.plot(thresholds, 1 - fpr, label='Specificity') plt.plot(thresholds, tpr, label='Sensitivity') plt.plot([t, t], [0, 1], label='Threshold') plt.xlabel('Threshold level') plt.xlim([0, 1]) plt.legend() ``` ## Select best model on validation Strategy: compute loss but don't sort validation set, so there are multiple voice repeats in a batch. Also makes the evaluation consistent. Batch size should be as big as possible. ``` triplet_loss = tfa.losses.TripletSemiHardLoss() # compute all predictions def mp3_decode_fn(audio_bytes, audio_class): audio_data = tfio.audio.decode_mp3(audio_bytes)[:, 0] return audio_data, audio_class all_preds = [] all_labels = [] for x in tqdm(dev_dataset.take(1300).map(mp3_decode_fn)): s = x[0] all_preds.append(m.predict( tf.expand_dims(x[0], axis=0) )[0]) all_labels.append(x[1].numpy()) len(all_preds) batch_size = 128 n_batches = len(all_preds) // batch_size vec_size = len(all_preds[0]) np_preds = np.reshape(all_preds[0:batch_size * n_batches], (n_batches, batch_size, vec_size)) np_labls = np.reshape(all_labels[0:batch_size * n_batches], (n_batches, batch_size)) total_loss = 0 for lbl, pred in zip(np_labls, np_preds): total_loss += triplet_loss(lbl, pred).numpy() total_loss = total_loss / len(lbl) print(f'Total loss: {total_loss}') all_checkpoints = [x.split('.')[0] + '.ckpt' for x in os.listdir('temp') if 'ckpt.index' in x] all_results = [] for checkpoint in tqdm(all_checkpoints): m.load_weights(os.path.join('temp', checkpoint)) all_preds = [] all_labels = [] n_items = 4600 for x in tqdm(dev_dataset.take(n_items).map(mp3_decode_fn), total=n_items, leave=False): # for x in tqdm(dev_dataset.map(mp3_decode_fn), # leave=False): s = x[0] all_preds.append(m.predict( tf.expand_dims(x[0], axis=0) )[0]) all_labels.append(x[1].numpy()) batch_size = 128 n_batches = len(all_preds) // batch_size vec_size = len(all_preds[0]) np_preds = np.reshape(all_preds[0:batch_size * n_batches], (n_batches, batch_size, vec_size)) 
np_labls = np.reshape(all_labels[0:batch_size * n_batches], (n_batches, batch_size)) total_loss = 0 for lbl, pred in zip(np_labls, np_preds): total_loss += triplet_loss(lbl, pred).numpy() total_loss = total_loss / len(lbl) cur_result = { 'checkpoint': checkpoint, 'val_loss': total_loss } print(cur_result) all_results.append(cur_result) df_val = pd.DataFrame(all_results) df_val['idx'] = df_val.checkpoint.apply(lambda z: int(z.split('.')[0].split('-')[1])) df_val = df_val.set_index('idx') df_val.to_csv('val_triplet_loss.csv') # df_val df_val.plot() ```
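The threshold-calibration logic used earlier (interpolate between the medians of the same-speaker and different-speaker distance distributions, then read off sensitivity and specificity) can be distilled into a small numpy sketch. The toy distance lists are illustrative, not values from the model:

```python
import numpy as np

def calibrate_threshold(same_d, diff_d, alpha=0.2):
    # interpolate between the medians of the two distance distributions:
    # alpha=0 sits at the 'same' median, alpha=1 at the 'different' median
    lo, hi = np.median(same_d), np.median(diff_d)
    return lo + alpha * (hi - lo)

def sensitivity_specificity(same_d, diff_d, t):
    sens = np.mean(np.array(same_d) < t)   # same-speaker pairs below threshold
    spec = np.mean(np.array(diff_d) > t)   # different-speaker pairs above it
    return sens, spec

same_d = [0.2, 0.3, 0.4]   # toy same-speaker distances
diff_d = [0.8, 0.9, 1.0]   # toy different-speaker distances
t = calibrate_threshold(same_d, diff_d, alpha=0.5)  # midway between medians: 0.6
print(sensitivity_specificity(same_d, diff_d, t))   # perfectly separable: (1.0, 1.0)
```

A small `alpha` biases the threshold toward the same-speaker median, which trades sensitivity for the specificity the notebook says it cares about first.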
``` import numpy as np import cs_vqe as c import ast import os from openfermion import qubit_operator_sparse import conversion_scripts as conv_scr import scipy as sp from openfermion.ops import QubitOperator # with open("hamiltonians.txt", 'r') as input_file: # hamiltonians = ast.literal_eval(input_file.read()) working_dir = os.getcwd() data_dir = os.path.join(working_dir, 'data') data_hamiltonians_file = os.path.join(data_dir, 'hamiltonians.txt') with open(data_hamiltonians_file, 'r') as input_file: hamiltonians = ast.literal_eval(input_file.read()) for key in hamiltonians.keys(): print(f"{key: <25} n_qubits: {hamiltonians[key][1]:<5.0f}") mol_key = 'H2_6-31G_singlet' # mol_key ='H2-O1_STO-3G_singlet' # currently index 2 is contextual part # ''''''''''''''''3 is NON contextual part # join together for full Hamiltonian: ham = hamiltonians[mol_key][2] ham.update(hamiltonians[mol_key][3]) # full H ham print(f"n_qubits: {hamiltonians[mol_key][1]}") ``` # Get non-contextual H ``` nonH_guesses = c.greedy_dfs(ham, 10, criterion='weight') nonH = max(nonH_guesses, key=lambda x:len(x)) # largest nonCon part found by dfs alg ``` Split into: $$H = H_{c} + H_{nc}$$ ``` nonCon_H = {} Con_H = {} for P in ham: if P in nonH: nonCon_H[P]=ham[P] else: Con_H[P]=ham[P] ``` ## Testing contextuality ``` print('Is NONcontextual correct:', not c.contextualQ_ham(nonCon_H)) print('Is contextual correct:',c.contextualQ_ham(Con_H)) ``` # Classical part of problem! Take $H_{nc}$ and split into: - $Z$ = operators that completely commute with all operators in $S$ - $T$ = remaining operators in $S$ - where $S = Z \cup T$ and $S$ is the set of Pauli operators in $H_{nc}$ - We then split the set $T$ into cliques $C_{1}, C_{2}, ... , C_{|T|}$ - all ops in a clique commute - ops between cliques anti-commute! 
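A minimal sketch of the commutation test behind this $Z$/$T$ split (an illustrative stand-in, not the `cs_vqe` implementation): two Pauli strings commute iff they anticommute on an even number of qubit positions, i.e. positions where both are non-identity and differ.

```python
def paulis_commute(p, q):
    # two Pauli strings commute iff the number of positions where both
    # are non-identity and different is even
    mismatches = sum(1 for a, b in zip(p, q)
                     if a != 'I' and b != 'I' and a != b)
    return mismatches % 2 == 0

def split_fully_commuting(terms):
    # Z: terms commuting with every term in the set; T: the rest
    Z = [t for t in terms if all(paulis_commute(t, u) for u in terms)]
    T = [t for t in terms if t not in Z]
    return Z, T

terms = ['ZI', 'IZ', 'XI', 'YI']   # toy Pauli set
Z, T = split_fully_commuting(terms)
print(Z, T)  # only 'IZ' commutes with everything; 'ZI', 'XI', 'YI' land in T
```

In the toy set, `XI`, `YI`, `ZI` pairwise anticommute on qubit 0, which is exactly the clique/anti-clique structure the notebook exploits.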
``` bool_flag, Z_list, T_list = c.contextualQ(list(nonCon_H.keys()), verbose=True) Z_list T_list ``` ## Get quasi model First we define - $C_{i1}$ = first Pauli in each $C_{i}$ set - $A_{ij} = C_{ij}C_{1i}$ - $G^{prime} = \{1 P_{i} \;| \; i=1,2,...,|Z| \}$ - aka all the completely commuting terms with coefficients set to +1! - We define G to be an independent set of $G^{prime}$ - where $G \subseteq G^{prime}$ ``` G_list, Ci1_list, all_mappings = c.quasi_model(nonCon_H) print('non-independent Z list:', Z_list) print('G (independent) Z list:', G_list) print('all Ci1 terms:', Ci1_list) ``` $$R = G \cup \{ C_{i1} \;| \; i=1,2,...,N \}$$ ``` # Assemble all the mappings from terms in the Hamiltonian to their products in R: all_mappings ``` Overall $R$ is basically the reduced non-contextual set - everything in the original non-contextual set can be recovered from it by **inference!** # Function form $$R = G \cup \{ C_{i1} \;| \; i=1,2,...,N \}$$ - note q relates to $G$ - note r relates to $C_{i1}$ ``` model = [G_list, Ci1_list, all_mappings] fn_form = c.energy_function_form(nonCon_H, model) # returns [ # dimension of q, # dimension of r, # [coeff, indices of q's, indices of r's, term in Hamiltonian] # ] fn_form Energy_function = c.energy_function(fn_form) import random ### now for the q terms we only have a +1 or -1 assignment! q_variables = [random.choice([1,-1]) for _ in range(fn_form[0])] ### the r variables are anything that makes up a unit vector! r_variables = c.angular(np.arange(0,2*np.pi, fn_form[1])) r_variables Energy_function(*q_variables,*r_variables) ``` The ```find_gs_noncon``` function optimizes the above steps by: 1. brute forcing all choices of ```q_variables``` - ```itertools.product([1,-1],repeat=fn_form[0])``` 2. optimizing over ```r_variables``` (in code ```x```) - using a SciPy optimizer! 
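The two-level search just described can be sketched with a toy energy function: brute-force every $\pm 1$ assignment of the discrete $\vec{q}$ variables, and for each one minimize over the angle parametrizing $\vec{r}$. Here a coarse grid stands in for the SciPy optimizer, and `toy` is a hypothetical energy function, not `c.energy_function` output:

```python
import itertools
import numpy as np

def minimize_noncontextual_energy(energy_fn, n_q, n_angles=360):
    # outer loop: every +/-1 assignment of the q variables (brute force)
    # inner loop: coarse grid over the angle parametrizing the unit vector r
    # (the real code hands this inner minimization to a SciPy optimizer)
    best = (np.inf, None, None)
    for q in itertools.product([1, -1], repeat=n_q):
        for theta in np.linspace(0, 2 * np.pi, n_angles):
            e = energy_fn(q, theta)
            if e < best[0]:
                best = (e, q, theta)
    return best

# toy energy with known minimum -2 at q=(-1,), theta=pi
toy = lambda q, theta: q[0] * 1.0 + np.cos(theta)
e, q, theta = minimize_noncontextual_energy(toy, 1)
print(e)  # close to -2.0
```

The outer brute force scales as $2^{|G|}$, which is why keeping $G$ an independent (hence small) generating set matters.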
``` model = [G_list, Ci1_list, all_mappings] lowest_eigenvalue, ground_state_params, model_copy, fn_form_copy, = c.find_gs_noncon(nonCon_H, method = 'differential_evolution', model=model, fn_form=fn_form) # returns: best + [model, fn_form] print(lowest_eigenvalue) print(ground_state_params) ## check Energy_function(*ground_state_params[0],*ground_state_params[1]) == lowest_eigenvalue ``` # Now need to rotate Hamiltonian! We now have the non-contextual ground state: $(\vec{q}, \vec{r})$ ``` ground_state_params ``` We can use this result - the ground state of $H_{nc}$ - as a classical estimate of the ground state of the full Hamiltonian ($H = H_{c} + H_{nc}$) However we can also obtain a quantum correction using $H_{c}$, by minimizing the energy of the remaining terms in the Hamiltonian over the quantum states that are **consistent with the noncontextual ground state**. To do this we first rotate each $G_{j}$ and $\mathcal{A} = \sum_{i=1}^{N} r_{i}A_{i}$: ``` model = [G_list, Ci1_list, all_mappings] print(G_list) # G_j terms! print(Ci1_list) # mathcal(A) ``` to SINGLE QUBIT Pauli Z operators! - to map the operators in $G$ to single qubit Pauli operators, we use $\frac{\pi}{2}$ rotations! - note $\mathcal{A}$ is an anti-commuting set... therefore we can use $N-1$ rotations as in unitary partitioning's sequence of rotations to do this! - $R^{\dagger}\mathcal{A} R = \text{single Pauli op}$ # Rotate full Hamiltonian to basis with diagonal noncontextual generators! function ```diagonalize_epistemic```: 1. first if else statement: - if cliques present: - first maps A to a single Pauli operator (if cliques present) - then rotates to diagonalize G union with the single Pauli operator of A (hence the GuA name!) - else if NO cliques present: - gets rotations to diagonalize G - these rotations make up the GuA term in code! 2. 
NEXT the code loops over terms in GuA (denoted as g in code) - if g is not a single qubit $Z$: - the code generates rotations to make g diagonal - then constructs a map of g to a single Z (J rotation) - Note R is applied to GuA ######### - Note rotations are given in Appendix A of https://arxiv.org/pdf/2011.10027.pdf - First the code checks if the g op in GuA is diagonal - if so it needs to apply a "K" rotation (involving $Y$ and $I$ operators, see pg 11 top) to make it NOT diagonal - now the operator is no longer diagonal! - next generate the "J" rotation - turns a non-diagonal operator into a single qubit $Z$ operator! ``` # Get sequence of rotations required to diagonalize the generators for the noncontextual ground state! Rotations_list, diagonalized_generators_GuA, eigen_vals_nonC_ground_state_GuA_ops = c.diagonalize_epistemic(model, fn_form, ground_state_params) # rotations to map A to single Pauli operator! Rotations_list # rotations to diagonalize G diagonalized_generators_GuA eigen_vals_nonC_ground_state_GuA_ops ``` # NEW LCU method ``` N_index=0 check_reduction=True N_Qubits= hamiltonians[mol_key][1] R_LCU, Rotations_list, diagonalized_generators_GuA, eigen_vals_nonC_ground_state_GuA_ops= c.diagonalize_epistemic_LCU( model, fn_form, ground_state_params, N_Qubits, N_index, check_reduction=check_reduction) R_LCU order = list(range(hamiltonians[mol_key][1])) N_index=0 check_reduction=True N_Qubits= hamiltonians[mol_key][1] reduced_H = c.get_reduced_hamiltonians_LCU(Con_H, model, fn_form, ground_state_params, order, N_Qubits, N_index, check_reduction=check_reduction) len(reduced_H[-1]) reduced_H[3] H = conv_scr.Get_Openfermion_Hamiltonian(reduced_H[-1]) sparseH = qubit_operator_sparse(H, n_qubits=hamiltonians[mol_key][1]) sp.sparse.linalg.eigsh(sparseH, which='SA', k=1)[0][0] ## old way # Get sequence of rotations required to diagonalize the generators for the noncontextual ground state! 
Rotations_list, diagonalized_generators_GuA, eigen_vals_nonC_ground_state_GuA_ops = c.diagonalize_epistemic(model, fn_form, ground_state_params) len(diagonalized_generators_GuA) Rotations_list ### old way order = list(range(hamiltonians[mol_key][1])) red_H = c.get_reduced_hamiltonians(ham, model, fn_form, ground_state_params, order) len(red_H[0]) len(red_H[0]) red_H[0] H = conv_scr.Get_Openfermion_Hamiltonian(red_H[-1]) sparseH = qubit_operator_sparse(H, n_qubits=hamiltonians[mol_key][1]) sp.sparse.linalg.eigsh(sparseH, which='SA', k=1)[0][0] from scipy.sparse.linalg import expm from Misc_functions import sparse_allclose R_LCU_QubitOp = QubitOperator() for P in R_LCU: R_LCU_QubitOp+=P R_LCU_mat = qubit_operator_sparse(R_LCU_QubitOp, n_qubits=hamiltonians[mol_key][1]) R_SeqRot_QubitOp= conv_scr.convert_op_str(Rotations_list[0][1], 1) R_SeqRot_mat = qubit_operator_sparse(R_SeqRot_QubitOp, n_qubits=hamiltonians[mol_key][1]) theta_sk=Rotations_list[0][0] exp_rot = expm(R_SeqRot_mat * theta_sk)# / 2) sparse_allclose(exp_rot, R_LCU_mat) ``` # Restricting the Hamiltonian to a contextual subspace (Section B of https://arxiv.org/pdf/2011.10027.pdf) In the rotated basis the Hamiltonian is restricted to the subspace stabilized by the noncontextual generators $G_{j}'$ ``` print(diagonalized_generators_GuA) # G_j' terms! ``` The quantum correction is then obtained by minimizing the expectation value of this restricted Hamiltonian! 
(over +1 eigenvectors of the remaining non-contextual generators $\mathcal{A}'$) ``` print(Ci1_list) # mathcal(A) ``` - $\mathcal{H}_{1}$ denotes the Hilbert space of the $n_{1}$ qubits acted on by the single qubit $G_{j}'$ terms - $\mathcal{H}_{2}$ denotes the Hilbert space of the remaining $n_{2}$ qubits Overall the full Hilbert space is: $\mathcal{H}=\mathcal{H}_{1} \otimes \mathcal{H}_{2}$ The **contextual Hamiltonian** in this rotated basis is: $$H_{c}'=\sum_{P \in \mathcal{S_{c}'}} h_{P}P$$ The set of Pauli terms in $H_{c}'$ is $\mathcal{S_{c}'}$, where terms in $\mathcal{S_{c}'}$ act on both the $\mathcal{H}_{1}$ and $\mathcal{H}_{2}$ subspaces in general! We can write the $P$ terms as: $$P=P_{1}^{\mathcal{H}_{1}} \otimes P_{2}^{\mathcal{H}_{2}}$$ $P$ commutes with an element of $G'$ if and only if $P_{1} \otimes \mathcal{I}^{\mathcal{H}_{2}}$ does, as the generators $G'$ act only on $\mathcal{H}_{1}$ If $P$ anticommutes with any element of $G'$ then its expectation value in the noncontextual state is zero Thus any $P$ must commute with all elements of $G'$ and so $P_{1} \otimes \mathcal{I}^{\mathcal{H}_{2}}$ must too As the elements of $G'$ are single-qubit Pauli $Z$ operators acting in $\mathcal{H}_{1}$: ``` print(diagonalized_generators_GuA) # G_j' terms! ``` $P_{1}$ must be a product of such operators! **So the expectation value of $P_{1}$ is some $p_{1}= \pm 1$ DETERMINED BY THE NONCONTEXTUAL GROUND STATE** ``` eigen_vals_nonC_ground_state_GuA_ops ``` Let $|\psi_{(\vec{q}, \vec{r})} \rangle$ be any quantum state consistent with the noncontextual ground state $(\vec{q}, \vec{r})$... 
aka it gives the correct expectation values of: ``` print(diagonalized_generators_GuA) print(eigen_vals_nonC_ground_state_GuA_ops) ``` Then the action of any $P$ which contributes to our contextual correction has the form: $$P |\psi_{(\vec{q}, \vec{r})} \rangle = \big( P_{1}^{\mathcal{H}_{1}} \otimes P_{2}^{\mathcal{H}_{2}} \big) |\psi_{(\vec{q}, \vec{r})} \rangle$$ $$ = p_{1}\big( \mathcal{I}^{\mathcal{H}_{1}} \otimes P_{2}^{\mathcal{H}_{2}} \big) |\psi_{(\vec{q}, \vec{r})} \rangle$$ - repeating the above, $p_{1}$ is the expectation value of $P_{1}$ determined by the noncontextual ground state! Thus we can denote $H_{c}' |_{(\vec{q}, \vec{r})}$ as the restriction of $H_{c}'$ to its action on the noncontextual ground state $(\vec{q}, \vec{r})$: $$H_{c}' |_{(\vec{q}, \vec{r})} =\sum_{\substack{P \in \mathcal{S_{c}'} \\ \text{s.t.} [P, G_{i}']=0 \\ \forall G'_{i} \in G'}} p_{1}h_{P}\big( \mathcal{I}^{\mathcal{H}_{1}} \otimes P_{2}^{\mathcal{H}_{2}} \big) $$ $$=\mathcal{I}_{\mathcal{H}_{1}} \otimes H_{c}'|_{\mathcal{H}_{2}} $$ where we can write: $$H_{c}'|_{\mathcal{H}_{2}} = \sum_{\substack{P \in \mathcal{S_{c}'} \\ \text{s.t.} [P, G_{i}']=0 \\ \forall G'_{i} \in G'}} p_{1}h_{P}P_{2}^{\mathcal{H}_{2}}$$ Clearly this Hamiltonian acts on $n_{2}$ qubits, where: $$n_{2} = n - |G|$$ - $|G|=$ number of noncontextual generators $G_{j}$ ``` from copy import deepcopy import pprint ``` The ```quantum_correction``` function ``` n_q = len(diagonalized_generators_GuA[0]) rotated_H = deepcopy(ham) ##<-- full Hamiltonian # iteratively perform R rotation over all terms in original Hamiltonian for R in Rotations_list: newly_rotated_H={} for P in rotated_H.keys(): lin_comb_Rot_P = c.apply_rotation(R,P) # linear combination of Paulis from R rotation on P for P_rot in lin_comb_Rot_P: if P_rot in newly_rotated_H.keys(): newly_rotated_H[P_rot]+=lin_comb_Rot_P[P_rot]*rotated_H[P] # already in it hence += else: newly_rotated_H[P_rot]=lin_comb_Rot_P[P_rot]*rotated_H[P] rotated_H = deepcopy(newly_rotated_H) ##<-- 
perform next R rotation on this H rotated_H ``` Next, find where the Z indices are in $G'$ ``` z_indices = [] for d in diagonalized_generators_GuA: for i in range(n_q): if d[i] == 'Z': z_indices.append(i) print(diagonalized_generators_GuA) print(z_indices) ``` **The expectation values of the $P_{1}$ terms are $p_{1}= \pm 1$, DETERMINED BY THE NONCONTEXTUAL GROUND STATE** ``` print(diagonalized_generators_GuA) print(eigen_vals_nonC_ground_state_GuA_ops) ``` We need to ENFORCE the diagonal generators' assigned values in the diagonal basis to these expectation values above^^^ ``` ham_red = {} for P in rotated_H.keys(): sgn = 1 for j, z_index in enumerate(z_indices): # enforce diagonal generator's assigned values in diagonal basis if P[z_index] == 'Z': sgn = sgn*eigen_vals_nonC_ground_state_GuA_ops[j] #<- eigenvalue of nonC ground state! elif P[z_index] != 'I': sgn = 0 if sgn != 0: # construct term in reduced Hilbert space P_red = '' for i in range(n_q): if not i in z_indices: P_red = P_red + P[i] if P_red in ham_red.keys(): ham_red[P_red] = ham_red[P_red] + rotated_H[P]*sgn else: ham_red[P_red] = rotated_H[P]*sgn ham_red c.quantum_correction(ham, #<- full Ham model, fn_form, ground_state_params) c.quantum_correction(nonCon_H,model,fn_form,ground_state_params) c.get_reduced_hamiltonians(ham, model, fn_form, ground_state_params, list(range(hamiltonians[mol_key][1])))[-1] == rotated_H ### aka when considering the all-qubit problem it is equal to rotated H! ``` For some reason it seems that when considering the full Hamiltonian there is no reduction in the number of terms! Q. Do you expect any term reduction when doing CS-VQE? 
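The sign-enforcement loop above can be isolated into a small helper. This is a sketch over a hypothetical rotated Hamiltonian (dict of Pauli string to coefficient), not the `cs_vqe` API: a `Z` on a generator qubit contributes its $\pm 1$ eigenvalue as a sign, any other non-identity Pauli there kills the term, and the generator qubits are then dropped from the string.

```python
def restrict_to_subspace(rotated_ham, z_indices, eigenvalues):
    # fix each single-qubit Z generator to its noncontextual eigenvalue
    ham_red = {}
    for pauli, coeff in rotated_ham.items():
        sign = 1
        for z_idx, eig in zip(z_indices, eigenvalues):
            if pauli[z_idx] == 'Z':
                sign *= eig          # pick up the +/-1 eigenvalue
            elif pauli[z_idx] != 'I':
                sign = 0             # anticommutes: expectation is zero
        if sign == 0:
            continue
        # drop the generator qubits from the Pauli string
        reduced = ''.join(p for i, p in enumerate(pauli) if i not in z_indices)
        ham_red[reduced] = ham_red.get(reduced, 0) + coeff * sign
    return ham_red

ham = {'ZII': 0.5, 'ZIX': 0.25, 'XII': 0.3, 'IZZ': 0.1}  # toy rotated H
print(restrict_to_subspace(ham, z_indices=[0], eigenvalues=[-1]))
# {'II': -0.5, 'IX': -0.25, 'ZZ': 0.1}; 'XII' anticommutes and is dropped
```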
``` n2 = hamiltonians[mol_key][1]-len(diagonalized_generators_GuA) n2 ham_red ham==Con_H n_q = len(diagonalized_generators_GuA[0]) rotated_Hcon = deepcopy(Con_H) # iteratively perform R rotation over all terms in original Hamiltonian for R in Rotations_list: newly_rotated_H={} for P in rotated_Hcon.keys(): lin_comb_Rot_P = c.apply_rotation(R,P) # linear combination of Paulis from R rotation on P for P_rot in lin_comb_Rot_P: if P_rot in newly_rotated_H.keys(): newly_rotated_H[P_rot]+=lin_comb_Rot_P[P_rot]*rotated_Hcon[P] # already in it hence += else: newly_rotated_H[P_rot]=lin_comb_Rot_P[P_rot]*rotated_Hcon[P] rotated_Hcon = deepcopy(newly_rotated_H) ##<-- perform next R rotation on this H rotated_Hcon print(diagonalized_generators_GuA) print(eigen_vals_nonC_ground_state_GuA_ops) p1_dict = {Gener.index('Z'): p1 for Gener, p1 in zip(diagonalized_generators_GuA, eigen_vals_nonC_ground_state_GuA_ops)} p1_dict new={} for P1_P2 in rotated_Hcon.keys(): Z_indices = [i for i, sigma in enumerate(P1_P2) if sigma=='Z'] I1_P2=list(deepcopy(P1_P2)) sign=1 for ind in Z_indices: sign*=p1_dict[ind] I1_P2[ind]='I' I1_P2=''.join(I1_P2) new[I1_P2]=rotated_Hcon[P1_P2]*sign new len(rotated_Hcon)-len(diagonalized_generators_GuA) # len(new) H = conv_scr.Get_Openfermion_Hamiltonian(new) sparseH = qubit_operator_sparse(H, n_qubits=hamiltonians[mol_key][1]) sp.sparse.linalg.eigsh(sparseH, which='SA', k=1)[0][0] H_reduced_subspace={} for P in rotated_H.keys(): sign=1 for P_known, eigen_val in zip(diagonalized_generators_GuA, eigen_vals_nonC_ground_state_GuA_ops): Z_index = P_known.index('Z') # Find single qubit Z in generator! if P[Z_index]== 'Z': # compare location in generator to P of rotated H sign*=eigen_val #<- eigenvalue of nonC ground state! elif P[Z_index]!= 'I': sign=0 # MUST anti-commute! 
# build reduced Hilbert Space if sign!=0: P_new = list(deepcopy(P)) P_new[Z_index]='I' P_new= ''.join(P_new) if P_new in H_reduced_subspace.keys(): H_reduced_subspace[P_new] = H_reduced_subspace[P_new] + rotated_H[P]*sign else: H_reduced_subspace[P_new] = rotated_H[P]*sign # else: # H_reduced_subspace[P]=rotated_H[P] print(len(rotated_H)) print(len(H_reduced_subspace)) # H_reduced_subspace lowest_eigenvalue from openfermion import qubit_operator_sparse import conversion_scripts as conv_scr import scipy as sp H = conv_scr.Get_Openfermion_Hamiltonian(H_reduced_subspace) sparseH = qubit_operator_sparse(H, n_qubits=hamiltonians[mol_key][1]) sp.sparse.linalg.eigsh(sparseH, which='SA', k=1)[0][0] c.quantum_correction(ham, #<- full Ham model, fn_form, ground_state_params) lowest_eigenvalue Hfull = conv_scr.Get_Openfermion_Hamiltonian(ham) sparseHfull = qubit_operator_sparse(Hfull, n_qubits=hamiltonians[mol_key][1]) FCI = sp.sparse.linalg.eigsh(sparseHfull, which='SA', k=1)[0][0] print('FCI=', FCI) sp.sparse.linalg.eigsh(sparseH, which='SA', k=1)[0][0] print(diagonalized_generators_GuA) print(eigen_vals_nonC_ground_state_GuA_ops) p1_dict = {Gener.index('Z'): p1 for Gener, p1 in zip(diagonalized_generators_GuA, eigen_vals_nonC_ground_state_GuA_ops)} p1_dict H_reduced_subspace={} for P in rotated_Hcon.keys(): new_sign=1 P_new = list(P) for index, sigma in enumerate(P): if sigma == 'Z': new_sign*=p1_dict[index] P_new[index]='I' P_new = ''.join(P_new) H_reduced_subspace[P_new] = rotated_Hcon[P]*new_sign H_reduced_subspace H_con_subspace = conv_scr.Get_Openfermion_Hamiltonian(H_reduced_subspace) sparseH_con_subspace = qubit_operator_sparse(H_con_subspace, n_qubits=hamiltonians[mol_key][1]) sp.sparse.linalg.eigsh(sparseH_con_subspace, which='SA', k=1)[0][0] # H_reduced_subspace={} # for P in rotated_Hcon.keys(): # p1=1 # for P_known, eigen_val in zip(diagonalized_generators_GuA, eigen_vals_nonC_ground_state_GuA_ops): # Z_index = P_known.index('Z') # Find single qubit Z in 
generator! # if P[Z_index]== 'Z': # compare location in generator to P of rotated H # p1*=eigen_val #<- eigenvalue of nonC ground state! # P1_P2 = list(deepcopy(P)) # P1_P2[Z_index]='I' # I1_P2= ''.join(P1_P2) # if I1_P2 in H_reduced_subspace.keys(): # H_reduced_subspace[I1_P2] += rotated_Hcon[P]*p1 # else: # H_reduced_subspace[I1_P2] = rotated_Hcon[P]*p1 # elif P[Z_index]== 'I': # H_reduced_subspace[P] = rotated_Hcon[P] # elif P[Z_index]!= 'I': # sign=0 # MUST anti-commute! # H_reduced_subspace[P]=0 # # # build reduced Hilbert Space # # if sign!=0: # # if P_new in H_reduced_subspace.keys(): # # H_reduced_subspace[P_new] = H_reduced_subspace[P_new] + rotated_Hcon[P]*sign # # else: # # H_reduced_subspace[P_new] = rotated_Hcon[P_new]*sign # # # else: # # # H_reduced_subspace[P]=rotated_H[P] H_reduced_subspace nonCon_Energy = lowest_eigenvalue H_con_subspace = conv_scr.Get_Openfermion_Hamiltonian(H_reduced_subspace) sparseH_con_subspace = qubit_operator_sparse(H_con_subspace, n_qubits=hamiltonians[mol_key][1]) Con_Energy = sp.sparse.linalg.eigsh(sparseH_con_subspace, which='SA', k=1)[0][0] Con_Energy+nonCon_Energy FCI c.quantum_correction(ham, #<- full Ham model, fn_form, ground_state_params) FCI-lowest_eigenvalue Con_Energy c.commute(P_gen, P) ```
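The $\frac{\pi}{2}$ rotations used throughout (and the `expm` comparison against the LCU operator earlier) rest on a closed form: for any Pauli operator $P$ with $P^{2}=I$, $e^{-i\frac{\theta}{2}P} = \cos\frac{\theta}{2}\,I - i\sin\frac{\theta}{2}\,P$. A single-qubit numpy sketch (not the `cs_vqe` code) verifies this and shows a $\frac{\pi}{2}$ rotation about $X$ conjugating $Z$ into $Y$:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)
Y = np.array([[0, -1j], [1j, 0]])

def pauli_rotation(P, theta):
    # for a Pauli P (P @ P = I), the matrix exponential has the closed form
    # exp(-i theta/2 P) = cos(theta/2) I - i sin(theta/2) P
    n = P.shape[0]
    return np.cos(theta / 2) * np.eye(n) - 1j * np.sin(theta / 2) * P

R = pauli_rotation(X, np.pi / 2)
# conjugation by the pi/2 rotation maps Z to Y: R^dagger Z R = Y
print(np.round(R.conj().T @ Z @ R, 10))
```

This is why a fixed set of $\frac{\pi}{2}$ rotations suffices to map each generator to a single-qubit $Z$: conjugation by such a rotation permutes Pauli operators up to sign.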
# Python for data analysis For those who are new to using Python for scientific work, we first provide a short introduction to Python and the most useful packages for data analysis. ## Python Disclaimer: We can only cover some of the basics here. If you are completely new to Python, we recommend taking an introductory online course, such as the [Definite Guide to Python](https://www.programiz.com/python-programming), or the [Whirlwind Tour of Python](https://github.com/jakevdp/WhirlwindTourOfPython). If you like a step-by-step approach, try the [DataCamp Intro to Python for Data Science](https://www.datacamp.com/courses/intro-to-python-for-data-science). To practice your skills, try the [Hackerrank challenges](https://www.hackerrank.com/domains/python). ### Hello world * Printing is done with the print() function. * Everything after # is considered a comment. * You don't need to end commands with ';'. ``` # This is a comment print("Hello world") print(5 / 8) 5/8 # This only prints in IPython notebooks and shells. ``` _Note: In these notebooks we'll use Python interactively to avoid having to type print() every time._ ### Basic data types Python has all the [basic data types and operations](https://docs.python.org/3/library/stdtypes.htm): int, float, str, bool, None. Variables are __dynamically typed__: you need to give them a value upon creation, and they will have the data type of that value. If you redeclare the same variable, it will have the data type of the new value. You can use type() to get a variable's type. ``` s = 5 type(s) s > 3 # Booleans: True or False s = "The answer is " type(s) ``` Python is also __strongly typed__: it won't implicitly change a data type, but will throw a TypeError instead. You will have to convert data types explicitly, e.g. using str() or int(). Exception: Arithmetic operations will convert to the _most general_ type. 
``` 1.0 + 2 # float + int -> float s + str(42) # string + string # s + 42 # Bad: string + int ``` ### Complex types The main complex data types are lists, tuples, sets, and dictionaries (dicts). ``` l = [1,2,3,4,5,6] # list t = (1,2,3,4,5,6) # tuple: like a list, but immutable s = set((1,2,3,4,5,6)) # set: unordered, you need to use add() to add new elements d = {2: "a", # dict: has key - value pairs 3: "b", "foo": "c", "bar": "d"} l # Note how each of these is printed t s d ``` You can use indices to return a value (except for sets, they are unordered) ``` l l[2] t t[2] d d[2] d["foo"] ``` You can assign new values to elements, except for tuples ``` l l[2] = 7 # Lists are mutable l # t[2] = 7 # Tuples are not -> TypeError ``` Python allows convenient tuple packing / unpacking ``` b = ("Bob", 19, "CS") # tuple packing (name, age, studies) = b # tuple unpacking name age studies ``` ### Strings Strings are [quite powerful](https://www.tutorialspoint.com/python/python_strings.htm). They can be used as lists, e.g. retrieve a character by index. They can be formatted with the format operator (%), e.g. %s for strings, %d for decimal integers, %f for floats. ``` s = "The %s is %d" % ('answer', 42) s s[0] s[4:10] '%.2f' % (3.14159265) # defines number of decimal places in a float ``` They also have a format() function for [more complex formatting](https://pyformat.info/) ``` l = [1,2,3,4,5,6] "{}".format(l) "%s" % l # This is identical "{first} {last}".format(**{'first': 'Hodor', 'last': 'Hodor!'}) ``` ### For loops, If statements For-loops and if-then-else statements are written like this. Indentation defines the scope, not brackets. 
```
l = [1,2,3]
d = {"foo": "c", "bar": "d"}

for i in l:
    print(i)

for k, v in d.items(): # Note how key-value pairs are extracted
    print("%s : %s" % (k,v))

if len(l) > 3:
    print('Long list')
else:
    print('Short list')
```

### Functions

Functions are defined and called like this:

```
def myfunc(a, b):
    return a + b

myfunc(2, 3)
```

Function arguments (parameters) can be:
* variable-length (indicated with \*)
* a dictionary of keyword arguments (indicated with \*\*).
* given a default value, in which case they are not required (but have to come last)

```
def func(*argv, **kwarg):
    print("func argv: %s" % str(argv))
    print("func kwarg: %s" % str(kwarg))

func(2, 3, a=4, b=5)

def func(a=2):
    print(a * a)

func(3)
func()
```

Functions can have any number of outputs.

```
def func(*argv):
    return sum(argv[0:2]), sum(argv[2:4])

sum1, sum2 = func(2, 3, 4, 5)
sum1, sum2

def squares(limit):
    r = 0
    ret = []
    while r < limit:
        ret.append(r**2)
        r += 1
    return ret

for i in squares(4):
    print(i)
```

Functions can be passed as arguments to other functions

```
def greet(name):
    return "Hello " + name

def call_func(func):
    other_name = "John"
    return func(other_name)

call_func(greet)
```

Functions can return other functions

```
def compose_greet_func():
    def get_message():
        return "Hello there!"
    return get_message

greet = compose_greet_func()
greet()
```

### Classes

Classes are defined like this

```
class TestClass(object): # TestClass inherits from object.
    myvar = ""

    def __init__(self, myString): # optional constructor, returns nothing
        self.myvar = myString # 'self' is used to store instance properties

    def say(self, what): # you need to add self as the first argument
        return self.myvar + str(what)

a = TestClass("The new answer is ")
a.myvar # You can retrieve all properties of self
a.say(42)
```

Static functions need the @staticmethod decorator

```
class TestClass(object):
    myvar = ""

    def __init__(self, myString):
        self.myvar = myString

    def say(self, what): # you need to add self as the first argument
        return self.myvar + str(what)

    @staticmethod
    def sayStatic(what): # or declare the function static
        return "The answer is " + str(what)

a = TestClass("The new answer is ")
a.say(42)
a.sayStatic(42)
```

### Functional Python

You can write complex procedures in a few elegant lines of code using [built-in functions](https://docs.python.org/3/library/functions.html) and libraries such as functools, itertools, operator.

```
def square(num):
    return num ** 2

# map(function, iterable) applies a given function to every element of a list
list(map(square, [1,2,3,4]))

# a lambda function is an anonymous function created on the fly
list(map(lambda x: x**2, [1,2,3,4]))

mydata = list(map(lambda x: x if x > 2 else 0, [1,2,3,4]))
mydata

# reduce(function, iterable) applies a function with two arguments cumulatively to every element of a list
from functools import reduce
reduce(lambda x, y: x+y, [1,2,3,4])

# filter(function, iterable) extracts every element for which the function returns true
list(filter(lambda x: x>2, [1,2,3,4]))

# zip([iterable,...]) returns tuples of corresponding elements of multiple lists
list(zip([1,2,3,4],[5,6,7,8,9]))
```

__list comprehensions__ can create lists as follows: [statement for var in iterable if condition]

__generators__ do the same, but are lazy: they don't create the list until it is needed: (statement for var in list if condition)

```
a = [2, 3, 4, 5]
lc = [ x for x in a if x >= 4 ] # List comprehension. Square brackets
lg = ( x for x in a if x >= 4 ) # Generator. Round brackets

a.extend([6,7,8,9])

for i in lc:
    print("%i " % i, end="") # end tells the print function not to end with a newline
print("\n")
for i in lg:
    print("%i " % i, end="")
```

__dict comprehensions__ are possible in Python 3: {key:value for (key,value) in dict.items() if condition}

```
# Quick dictionary creation
numbers = range(10)
{n:n**2 for n in numbers if n%2 == 0}

# Powerful alternative to replace lambda functions
# Convert Fahrenheit to Celsius
fahrenheit = {'t1': -30,'t2': 0,'t3': 32,'t4': 100}
{k:(float(5)/9)*(v-32) for (k,v) in fahrenheit.items()}
```
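Comprehensions also combine naturally with the zip() built-in seen above and with enumerate(), which yields (index, element) pairs. A small sketch (the variable names are just examples):

```python
# Pair names with scores using zip(), then filter inside a dict comprehension
names = ["ann", "bob", "cem"]
scores = [3, 5, 4]
passed = {n: s for n, s in zip(names, scores) if s >= 4}
passed  # {'bob': 5, 'cem': 4}

# enumerate() yields (index, element) pairs, handy inside comprehensions
a_positions = [i for i, ch in enumerate("banana") if ch == "a"]
a_positions  # [1, 3, 5]
```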
```
%matplotlib inline
import numpy as np
import matplotlib.pylab as plt
import scipy
import scipy.stats
import scipy.special
import scipy.optimize
import scipy.integrate
```

# Topic 3: Bayesian Analysis

## Coin-flip game

We want to estimate, from a series of N coin-flip measurements, the probability $p$ of getting heads. To do so, in the Bayesian framework we treat $p$ as a random variable and build its probability density function using Bayes' theorem (assuming we observed n heads among the N measurements):

$$ f_N(p|n) \propto P_N(n|p) \times \pi (p) $$

where $f_N(p|n)$ is the distribution function associated with $p$, $P_N(n|p)$ is the probability of observing $n$ heads among $N$ measurements for a given $p$ (the likelihood function), and $\pi(p)$ is the probability density function of $p$ a priori (what we call the prior).

1. Give the form of $P_N(n|p)$.

The probability of getting $n$ heads given the value of $p$ follows a binomial law:

$P_N(n|p) = {N \choose n} p^{n} (1-p)^{N-n}$

2. Simulate a series of $128$ measurements performed with $p=.8$ and store the results in a vector.

```
# Here we simulate N = 128 flips with known p = 0.8.
r = np.random.random(128) <= 0.8
n = np.sum(r)
print('Number of heads =', n)

print(scipy.special.binom(4, 2))

x = np.linspace(0, 3, 100)
y = x
print(scipy.integrate.simps(y, x))
```

In the following, we take the prior $\pi (p) = 1$ (all values of $p$ are equally likely).

3. Plot $f_{N}(p|n)$ for N = 1, 2, 4, 8, 16, 32, 64, 128. Take care to normalize the probability density function (the `scipy.integrate.simps` method can be used to numerically integrate a function between $a$ and $b$).

```
# Create a vector of 128 flips; heads are all the random numbers below 0.8.
# We vary the number of flips N, and therefore the number of heads (n).
# Then p is treated as a continuous variable (no longer fixed at 0.8),
# and y gives the likelihood of the observed number of heads for each p.
for N in [2**i for i in range(8)]:
    r = np.random.random(128) <= 0.8
    p = np.linspace(0, 1, 1000)
    # Sum the True values over the first N components of r.
    n = np.sum(r[:N])
    # Binomial likelihood of n heads among N flips (including the binomial coefficient).
    y = scipy.special.binom(N, n) * p**n * (1-p)**(N-n)
    # Normalize by integrating the area under the curve.
    norm = scipy.integrate.simps(y, p)
    plt.plot(p, y/norm, label=f'N={N}', lw=3)
plt.legend()

class color:
    PURPLE = '\033[95m'
    BOLD = '\033[1m'
    END = '\033[0m'

print(color.PURPLE + color.BOLD + '\nfN(p|n) is the probability of p, given that we obtained n heads in N flips.' + color.END)
plt.xlabel('p', fontsize=18, color='b')
plt.ylabel('fN(p|n)', fontsize=18, color='b')
plt.title('Probability distribution of p given n and N\n', fontsize=18, color='darkorange')
plt.show()
```

4. For each case, deduce the most probable value of $p$: $p_{MV}$ (the `np.argmax` function can be used) and compare it with the theoretical value. Compute the variance $\sigma^2$ of $p$.
```
def gaussian(x, mu, sig):
    return 1./(np.sqrt(2.*np.pi)*sig)*np.exp(-np.power((x - mu)/sig, 2.)/2)

for N in [2**i for i in range(8)]:
    r = np.random.random(1000) <= 0.8
    p = np.linspace(0, 1, 10000)
    n = np.sum(r[:N])
    y = scipy.special.binom(N, n) * p**n * (1-p)**(N-n)
    norm = scipy.integrate.simps(y, p)
    popt, pcov = scipy.optimize.curve_fit(gaussian, p, y/norm)
    print('\nNumber of flips:', N)
    print('Most probable value of p:', np.round(p[np.argmax(y/norm)], 4))
    print('Variance of the distribution of p:', np.round(popt[1]**2, 4))
    print('Deviation from the theoretical value of p:', np.round(abs(0.8 - p[np.argmax(y/norm)]), 4))

print('\nThe larger N is, the closer p_MV gets to the theoretical value 0.8, and the smaller the variance becomes,'
      ' indicating that the distribution of p is increasingly peaked around this value.')
print('\nfN(p|n) is the probability of p, given that we obtained n heads in N flips.')
```

5. Deduce the confidence level associated with the interval $[ p_{MV}-\sigma, p_{MV}+\sigma ]$.

```
# Ask Mr Derome.
```

6. How do the results change if we take a different prior? Try for example $\pi (p) = 2 p$ or $\pi (p) = 3 p^2$.
```
fig, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(16, 6))

# Prior pi(p) = 1
for N in [2**i for i in range(10)]:
    r = np.random.random(1000) <= 0.8
    p = np.linspace(0, 1, 1000)
    n = np.sum(r[:N])
    y = scipy.special.binom(N, n) * p**n * (1-p)**(N-n)
    norm = scipy.integrate.simps(y, p)
    ax1.plot(p, y/norm, label=f'N={N}', lw=3)
ax1.set_xlabel('p', fontsize=18, color='b')
ax1.set_ylabel('fN(p|n)', fontsize=18, color='b')

# Prior pi(p) = 2p
for N in [2**i for i in range(10)]:
    r = np.random.random(1000) <= 0.8
    p = np.linspace(0, 1, 1000)
    n = np.sum(r[:N])
    y = scipy.special.binom(N, n) * p**n * (1-p)**(N-n) * 2*p
    norm = scipy.integrate.simps(y, p)
    ax2.plot(p, y/norm, label=f'N={N}', lw=3)
ax2.set_xlabel('p', fontsize=18, color='b')
ax2.set_ylabel('fN(p|n)', fontsize=18, color='b')

# Prior pi(p) = 3p^2
for N in [2**i for i in range(10)]:
    r = np.random.random(1000) <= 0.8
    p = np.linspace(0, 1, 1000)
    n = np.sum(r[:N])
    y = scipy.special.binom(N, n) * p**n * (1-p)**(N-n) * 3*p**2
    norm = scipy.integrate.simps(y, p)
    ax3.plot(p, y/norm, label=f'N={N}', lw=3)
ax3.set_xlabel('p', fontsize=18, color='b')
ax3.set_ylabel('fN(p|n)', fontsize=18, color='b')
plt.show()

print('\n For prior = 2p:\n')

def gaussian(x, mu, sig):
    return 1./(np.sqrt(2.*np.pi)*sig)*np.exp(-np.power((x - mu)/sig, 2.)/2)

for N in [2**i for i in range(8)]:
    r = np.random.random(128) <= 0.8
    n = np.sum(r[:N])
    p = np.linspace(0, 1, 1000)
    y = scipy.special.binom(N, n) * p**n * (1-p)**(N-n) * 2*p
    norm = scipy.integrate.simps(y, p)
    popt, pcov = scipy.optimize.curve_fit(gaussian, p, y/norm)
    print('\nNumber of flips:', N)
    print('Most probable value of p:', np.round(p[np.argmax(y/norm)], 4))
    print('Variance of the distribution of p:', np.round(popt[1]**2, 4))
    print('Deviation from the theoretical value of p:', np.round(abs(0.8 - p[np.argmax(y/norm)]), 4))

print('\n For prior = 3p²:\n')

def gaussian(x, mu, sig):
    return 1./(np.sqrt(2.*np.pi)*sig)*np.exp(-np.power((x - mu)/sig, 2.)/2)

for N in [2**i for i in range(8)]:
    r = np.random.random(128) <= 0.8
    n = np.sum(r[:N])
    p = np.linspace(0, 1, 1000)
    y = scipy.special.binom(N, n) * p**n * (1-p)**(N-n) * 3*p**2
    norm = scipy.integrate.simps(y, p)
    popt, pcov = scipy.optimize.curve_fit(gaussian, p, y/norm)
    print('\nNumber of flips:', N)
    print('Most probable value of p:', np.round(p[np.argmax(y/norm)], 4))
    print('Variance of the distribution of p:', np.round(popt[1]**2, 4))
    print('Deviation from the theoretical value of p:', np.round(abs(0.8 - p[np.argmax(y/norm)]), 4))
```

## Exponential distribution

We want to estimate the parameter $\lambda$ of an exponential distribution with probability density function:

$$f(x) = \left\{ \begin{array}{ll} \lambda \exp ( -\lambda x ) & \textrm{for } x\ge 0 \\ 0 & \textrm{otherwise} \end{array} \right.$$

from a series of $N$ measurements $\{x_1, x_2,\ldots,x_N\}$. In the Bayesian framework, we build the probability density function associated with the parameter $\lambda$ as:

$$ f(\lambda | \{x_1, x_2,\ldots,x_N\} ) \propto L(\{x_1, x_2,\ldots,x_N\}|\lambda) \times \pi (\lambda) $$

where $ L(\{x_1, x_2,\ldots,x_N\}|\lambda) $ is the likelihood function.

1. Give the form of $ L(\{x_1, x_2,\ldots,x_N\}|\lambda) $

$ L(\{x_1, x_2,\ldots,x_N\}|\lambda) = \prod_{i = 1}^{N} \lambda \exp ( -\lambda x_i ) = \lambda^{N} \exp \left( -\lambda \sum_{i = 1}^{N} x_i \right)$

Equivalently, in terms of the mean $\tau = 1/\lambda$, the log-likelihood reads:

$$ \log L (\tau) = \sum_i \log f( x_i ; \tau) = \sum_i \log \left(\frac{1}{\tau} \exp\left( -\frac{x_i}{\tau}\right) \right) = - N \log \tau - \frac{1}{\tau} \sum_i x_i $$

2. Simulate a series of $128$ measurements performed with $\lambda=.5$ and store the results in a vector.

```
# np.random.exponential takes the scale parameter 1/lambda, so lambda = 0.5 corresponds to scale = 2
vecteur = np.random.exponential(1/0.5, 128)
```

In the following, we take the prior $\pi (\lambda) = \textrm{cst}$ for $ 0 \le \lambda \le 3 $

3. Plot $ f(\lambda | \{x_1, x_2,\ldots,x_N\}) $ for N = 1, 2, 4, 8, 16, 32, 64, 128.
```
# np.random.exponential takes the scale parameter 1/lambda, so lambda = 0.5 corresponds to scale = 2
vecteur = np.random.exponential(1/0.5, 128)
plt.figure(figsize=[15, 7])

def ln_vraisemblance(Lambda, vecteur):
    # log L = N log(lambda) - lambda * sum(x_i)
    return len(vecteur) * np.log(Lambda) - Lambda * np.sum(vecteur)

def ln_prior(Lambda):
    # flat prior on [0, 3] (constant, up to normalization)
    return 0

def ln_postérior(Lambda, vecteur):
    return ln_vraisemblance(Lambda, vecteur) + ln_prior(Lambda)

Lambda = np.linspace(0.001, 3, 10000)
for n in [2**i for i in range(8)]:
    log_y = ln_postérior(Lambda, vecteur[:n])
    y = np.exp(log_y - np.max(log_y)) # subtract the max of log_y to avoid numerical overflow
    norm = scipy.integrate.simps(y, Lambda)
    plt.plot(Lambda, y/norm, label=f'N = {n}')

plt.xlabel('Lambda', color='orange', fontsize=20)
plt.ylabel('f(Lambda|x1 ...xn)', color='orange', fontsize=20)
plt.title('Probability distribution of Lambda from the measurements\n', color='b', fontsize=20)
plt.legend()
plt.show()
```

4. For each case, deduce the most probable value $\lambda_{MV}$ and its variance $\sigma^2$.

```
vecteur = np.random.exponential(1/0.5, 128) # scale = 1/lambda = 2 for lambda = 0.5
plt.figure(figsize=[15, 7])

def ln_vraisemblance(Lambda, vecteur):
    return len(vecteur) * np.log(Lambda) - Lambda * np.sum(vecteur)

def ln_prior(Lambda):
    return 0

def ln_postérior(Lambda, vecteur):
    return ln_vraisemblance(Lambda, vecteur) + ln_prior(Lambda)

def gaussian(x, mu, sig):
    return 1./(np.sqrt(2.*np.pi)*sig)*np.exp(-np.power((x - mu)/sig, 2.)/2)

Lambda = np.linspace(0.001, 3, 10000)
for n in [2**i for i in range(8)]:
    log_y = ln_postérior(Lambda, vecteur[:n])
    y = np.exp(log_y - np.max(log_y)) # subtract the max of log_y to avoid numerical overflow
    norm = scipy.integrate.simps(y, Lambda)
    plt.plot(Lambda, y/norm, label=f'N = {n}')
    popt, pcov = scipy.optimize.curve_fit(gaussian, Lambda, y/norm)
    print('\nFor N =', n)
    print('Most probable value of Lambda:', np.round(Lambda[np.argmax(y/norm)], 4))
    print('Variance of the distribution of Lambda:', np.round(popt[1]**2, 4))

plt.xlabel('Lambda', color='orange', fontsize=20)
plt.ylabel('f(Lambda|x1 ...xn)', color='orange', fontsize=20)
plt.title('Probability distribution of Lambda from the measurements\n', color='b', fontsize=20)
plt.legend()
```

5. Deduce the confidence level associated with the interval $[\lambda_{MV}-\sigma, \lambda_{MV}+\sigma]$.

## Gaussian distribution

We want to estimate the parameters $\mu$ and $\sigma$ of a Gaussian distribution with probability density function:

$$f(x) = \frac{1}{\sqrt{2 \pi} \sigma} \exp \left( - \frac{(x-\mu)^2}{2\sigma^2}\right)$$

from a series of $N=1000$ measurements $\{x_1, x_2,\ldots,x_N\}$ with $\mu=1$ and $\sigma=2$. Use the `emcee` library (https://emcee.readthedocs.io/) to estimate the distributions associated with the parameters $\mu$ and $\sigma$. Discuss the choice of priors.

```
mesures = np.random.normal(1, 2, 1000)

def ln_vraisemblance(paramètres, mesures):
    # log L = -N log(sigma) - sum((x - mu)^2) / (2 sigma^2)
    return -len(mesures) * np.log(paramètres[1]) - .5 * np.sum(np.square(mesures - paramètres[0]) / paramètres[1]**2)

# prior = 1, hence ln(prior) = 0 (and -inf in the unphysical region sigma <= 0)
def ln_prior(paramètres):
    if paramètres[1] <= 0:
        return -np.inf
    return 0

def ln_posterior(paramètres, mesures):
    lp = ln_prior(paramètres)
    if not np.isfinite(lp):
        return -np.inf
    return lp + ln_vraisemblance(paramètres, mesures)

import emcee

nombre_chaînes = 10
# Two columns are needed: one for mu and one for sigma
p0 = np.random.random(size=(nombre_chaînes, 2)) + 1
sampler = emcee.EnsembleSampler(nombre_chaînes, 2, ln_posterior, args=[mesures,])
sampler.run_mcmc(p0, 10000)
samples = sampler.get_chain()

fig, (ax1, ax2) = plt.subplots(2, figsize=(12, 8))
ax1.plot(samples[:, :, 0], alpha=0.4)
ax1.set_ylim(0, 2)
ax1.set_xlim(0, len(samples))
ax1.set_ylabel('mu')
ax1.yaxis.set_label_coords(-0.08, 0.5)
ax1.set_xlabel("Number of steps")

ax2.plot(samples[:, :, 1], alpha=0.4)
ax2.set_ylim(0, 4)
ax2.set_xlim(0, len(samples))
ax2.set_ylabel('sigma')
ax2.yaxis.set_label_coords(-0.08, 0.5)
ax2.set_xlabel("Number of steps")
plt.show()
```
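To see what such a sampler does under the hood, here is a minimal random-walk Metropolis sketch in plain NumPy for the same Gaussian problem. It is only an illustration: emcee's ensemble sampler uses a different (affine-invariant) move, and the function and variable names below are ours, not emcee's.

```python
import numpy as np

def ln_posterior(theta, data):
    # Gaussian log-posterior with a flat prior (sigma must stay positive)
    mu, sigma = theta
    if sigma <= 0:
        return -np.inf
    return -len(data) * np.log(sigma) - 0.5 * np.sum((data - mu) ** 2) / sigma ** 2

def metropolis(ln_post, start, data, n_steps=20000, step=0.1, seed=0):
    rng = np.random.default_rng(seed)
    chain = np.empty((n_steps, len(start)))
    current = np.asarray(start, dtype=float)
    current_lp = ln_post(current, data)
    for i in range(n_steps):
        proposal = current + step * rng.normal(size=current.shape)
        proposal_lp = ln_post(proposal, data)
        # Accept with probability min(1, exp(lp_proposal - lp_current))
        if np.log(rng.random()) < proposal_lp - current_lp:
            current, current_lp = proposal, proposal_lp
        chain[i] = current
    return chain

data = np.random.default_rng(1).normal(1, 2, 1000)
chain = metropolis(ln_posterior, [0.5, 1.5], data)
burn = chain[5000:]  # discard burn-in
print(burn[:, 0].mean(), burn[:, 1].mean())  # close to mu = 1, sigma = 2
```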
```
import numpy as np
import matplotlib.pyplot as plt
import cv2
import pandas as pd
import matplotlib.image as mpimg
import pylab as pl
%matplotlib inline

image = cv2.imread('train_images/0a4e1a29ffff.png') # 000c1434d8d7 0a4e1a29ffff 7b87b0015282
#imgplot = plt.imshow(image)
imgplot = plt.imshow(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))

img_gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
img_gray_blur = cv2.GaussianBlur(img_gray, (5,5), 0)
canny_edges = cv2.Canny(img_gray_blur, 15, 35) # Vary the parameter for different results
ret, mask = cv2.threshold(canny_edges, 70, 255, cv2.THRESH_BINARY)
imgplot = plt.imshow(mask, cmap='binary', filternorm=1, filterrad=4.0)
#imgplot = plt.imshow(cv2.cvtColor(mask, cv2.COLOR_BGR2RGB))

import numpy as np
import cv2
import matplotlib.pyplot as plt
import sys

def build_filters():
    filters = []
    ksize = 31
    for theta in np.arange(0, np.pi, np.pi / 16):
        kern = cv2.getGaborKernel((ksize, ksize), 4.0, theta, 10.0, 0.5, 0, ktype=cv2.CV_32F)
        kern /= 1.5*kern.sum()
        filters.append(kern)
    return filters

def process(img, filters):
    accum = np.zeros_like(img)
    for kern in filters:
        fimg = cv2.filter2D(img, cv2.CV_8UC3, kern)
        np.maximum(accum, fimg, accum)
    return accum

img_fn = 'train_images/0a4e1a29ffff.png' # 000c1434d8d7 7b87b0015282 0c917c372572
img = cv2.imread(img_fn)
if img is None:
    print('Failed to load image file:', img_fn)
    sys.exit(1)

filters = build_filters()
#img_converted_gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
imga = cv2.resize(img, (960, 960))
cv2.imshow('original', imga)
#plt.imshow(img)
#plt.show()

res1 = process(img, filters)
resa = cv2.resize(res1, (960, 960))
cv2.imshow('result', resa)
#plt.imshow(res1)
#plt.show()

img_gray = cv2.cvtColor(res1, cv2.COLOR_BGR2GRAY)
img_gray_blur = cv2.GaussianBlur(img_gray, (5,5), 0)
canny_edges = cv2.Canny(img_gray_blur, 15, 25) # Change this parameter for different results
ret, mask = cv2.threshold(canny_edges, 70, 255, cv2.THRESH_BINARY_INV)
maska = cv2.resize(mask, (960, 960))
cv2.imshow('Mask', maska)
#plt.imshow(mask, cmap='plasma')
#plt.show()
cv2.waitKey(0)
cv2.destroyAllWindows()
for _ in range(0, 10):
    cv2.waitKey(1)

image.reshape(-1, 1050, 1050)

len(image)

# Intensity thresholding ==> doesn't work properly for now
import cv2
import matplotlib.pyplot as plt
import numpy as np

image = cv2.imread('train_images/0c917c372572.png')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
ret, thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
cv2.namedWindow("image", cv2.WINDOW_AUTOSIZE)
cv2.imshow('image', thresh)
cv2.waitKey(0)
cv2.destroyAllWindows()
for _ in range(0, 10):
    cv2.waitKey(1)

# Thresholding, try 2
import cv2
import matplotlib.pyplot as plt
import numpy as np

max_value = 255
max_type = 4
max_binary_value = 255
trackbar_type = 'Type: \n 0: Binary \n 1: Binary Inverted \n 2: Truncate \n 3: To Zero \n 4: To Zero Inverted'
trackbar_value = 'Value'
window_name = 'Threshold Demo'

src = cv2.imread('train_images/0a38b552372d.png') # 3b73a3a4a734 0c917c372572 0a38b552372d 0a4e1a29ffff
# Convert to gray
src_gray = cv2.cvtColor(src, cv2.COLOR_BGR2GRAY)
# Create a window to display results
cv2.namedWindow(window_name)

def Threshold_Demo(val):
    # 0: Binary
    # 1: Binary Inverted
    # 2: Threshold Truncated
    # 3: Threshold to Zero
    # 4: Threshold to Zero Inverted
    threshold_type = cv2.getTrackbarPos(trackbar_type, window_name)
    threshold_value = cv2.getTrackbarPos(trackbar_value, window_name)
    _, dst = cv2.threshold(src_gray, threshold_value, max_binary_value, threshold_type)
    dst = cv2.resize(dst, (800, 750))
    cv2.imshow(window_name, dst)

# Create Trackbar to choose type of Threshold
cv2.createTrackbar(trackbar_type, window_name, 3, max_type, Threshold_Demo)
# Create Trackbar to choose Threshold value
cv2.createTrackbar(trackbar_value, window_name, 0, max_value, Threshold_Demo)
# Call the function to initialize
Threshold_Demo(0)
# Wait until user finishes program
cv2.waitKey(0)
cv2.destroyAllWindows()
for _ in range(0, 10):
    cv2.waitKey(1)
cv2.imshow("1", mask)
cv2.waitKey(0)
cv2.destroyAllWindows()
#for _ in range(0,10):
#    cv2.waitKey(1)

import matplotlib.image as mpimg
img = mpimg.imread('train_images/000c1434d8d7.png')
imgplot = plt.imshow(img)
plt.show()

train = pd.read_csv('train.csv')
train.head(10)

from keras import backend as K
K.tensorflow_backend._get_available_gpus()
```
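As a side note, the core of `cv2.threshold` with `cv2.THRESH_BINARY` is a one-line array operation, which can be handy when OpenCV windows are unavailable in a notebook. A minimal NumPy sketch (`binary_threshold` is a hypothetical helper name, not part of OpenCV):

```python
import numpy as np

def binary_threshold(img, thresh, maxval=255):
    # Mirrors cv2.threshold(img, thresh, maxval, cv2.THRESH_BINARY):
    # pixels strictly above `thresh` become `maxval`, the rest become 0
    return np.where(img > thresh, maxval, 0).astype(np.uint8)

gray = np.array([[10, 80], [70, 200]], dtype=np.uint8)
mask = binary_threshold(gray, 70)
mask.tolist()  # [[0, 255], [0, 255]]
```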
# Capstone Project: Create a Customer Segmentation Report for Arvato Financial Services

In this project, you will analyze demographics data for customers of a mail-order sales company in Germany, comparing it against demographics information for the general population. You'll use unsupervised learning techniques to perform customer segmentation, identifying the parts of the population that best describe the core customer base of the company. Then, you'll apply what you've learned on a third dataset with demographics information for targets of a marketing campaign for the company, and use a model to predict which individuals are most likely to convert into becoming customers for the company. The data that you will use has been provided by our partners at Bertelsmann Arvato Analytics, and represents a real-life data science task.

If you completed the first term of this program, you will be familiar with the first part of this project, from the unsupervised learning project. The versions of those two datasets used in this project include many more features and have not been pre-cleaned. You are also free to choose whatever approach you'd like to analyze the data rather than follow pre-determined steps. In your work on this project, make sure that you carefully document your steps and decisions, since your main deliverable for this project will be a blog post reporting your findings.
``` # importing libraries import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns import pprint import operator import time import ast from sklearn.preprocessing import Imputer from sklearn.cluster import KMeans from sklearn.preprocessing import StandardScaler from sklearn.decomposition import PCA from sklearn.preprocessing import LabelEncoder from sklearn.model_selection import GridSearchCV from sklearn.linear_model import LogisticRegression from sklearn.ensemble import AdaBoostClassifier from sklearn.ensemble import GradientBoostingClassifier from sklearn.ensemble import BaggingClassifier from sklearn.pipeline import Pipeline # magic word for producing visualizations in notebook %matplotlib inline ``` ## Part 0: Get to Know the Data There are four data files associated with this project: - `Udacity_AZDIAS_052018.csv`: Demographics data for the general population of Germany; 891 211 persons (rows) x 366 features (columns). - `Udacity_CUSTOMERS_052018.csv`: Demographics data for customers of a mail-order company; 191 652 persons (rows) x 369 features (columns). - `Udacity_MAILOUT_052018_TRAIN.csv`: Demographics data for individuals who were targets of a marketing campaign; 42 982 persons (rows) x 367 (columns). - `Udacity_MAILOUT_052018_TEST.csv`: Demographics data for individuals who were targets of a marketing campaign; 42 833 persons (rows) x 366 (columns). Each row of the demographics files represents a single person, but also includes information outside of individuals, including information about their household, building, and neighborhood. Use the information from the first two files to figure out how customers ("CUSTOMERS") are similar to or differ from the general population at large ("AZDIAS"), then use your analysis to make predictions on the other two files ("MAILOUT"), predicting which recipients are most likely to become a customer for the mail-order company. 
The "CUSTOMERS" file contains three extra columns ('CUSTOMER_GROUP', 'ONLINE_PURCHASE', and 'PRODUCT_GROUP'), which provide broad information about the customers depicted in the file. The original "MAILOUT" file included one additional column, "RESPONSE", which indicated whether or not each recipient became a customer of the company. For the "TRAIN" subset, this column has been retained, but in the "TEST" subset it has been removed; it is against that withheld column that your final predictions will be assessed in the Kaggle competition. Otherwise, all of the remaining columns are the same between the three data files. For more information about the columns depicted in the files, you can refer to two Excel spreadsheets provided in the workspace. [One of them](./DIAS Information Levels - Attributes 2017.xlsx) is a top-level list of attributes and descriptions, organized by informational category. [The other](./DIAS Attributes - Values 2017.xlsx) is a detailed mapping of data values for each feature in alphabetical order. In the below cell, we've provided some initial code to load in the first two datasets. Note for all of the `.csv` data files in this project that they're semicolon (`;`) delimited, so an additional argument in the [`read_csv()`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html) call has been included to read in the data properly. Also, considering the size of the datasets, it may take some time for them to load completely. You'll notice when the data is loaded in that a warning message will immediately pop up. Before you really start digging into the modeling and analysis, you're going to need to perform some cleaning. Take some time to browse the structure of the data and look over the informational spreadsheets to understand the data values. Make some decisions on which features to keep, which features to drop, and if any revisions need to be made on data formats. 
It'll be a good idea to create a function with pre-processing steps, since you'll need to clean all of the datasets before you work with them. ``` start = time.time() # load in the data azdias = pd.read_csv('../../data/Term2/capstone/arvato_data/Udacity_AZDIAS_052018.csv', sep=';') #customers = pd.read_csv('../../data/Term2/capstone/arvato_data/Udacity_CUSTOMERS_052018.csv', sep=';') end = time.time() elapsed = end - start print("Total execution time: {:.2f} seconds".format(elapsed)) # Be sure to add in a lot more cells (both markdown and code) to document your # approach and findings! # Check the structure of the data after it's loaded (e.g. print the number of # rows and columns, print the first few rows). print(azdias.shape) # print the first 10 rows of the dataset azdias.head(10) azdias['ALTERSKATEGORIE_GROB'].describe() azdias['ALTERSKATEGORIE_GROB'].median() azdias['ALTERSKATEGORIE_GROB'].describe(percentiles=[0.80]) # replacing values for CAMEO_DEUG_2015 azdias['CAMEO_DEUG_2015'] = azdias['CAMEO_DEUG_2015'].replace('X',-1) azdias['CAMEO_DEUG_2015'].describe() azdias['CAMEO_DEUG_2015'].median() azdias['HH_EINKOMMEN_SCORE'].describe() azdias['HH_EINKOMMEN_SCORE'].median() # read the attributes details of the dataset feat_info = pd.read_excel('DIAS Attributes - Values 2017.xlsx') del feat_info['Unnamed: 0'] feat_info.head(15) # Fill the attribute column where the values are NaNs using ffill feat_info_attribute = feat_info['Attribute'].fillna(method='ffill') feat_info['Attribute'] = feat_info_attribute feat_info.head(10) # Get the encoded values that are actually missing or unknown values # Subset the meaning column to contain only those values using "unknown" or "no " terms feat_info = feat_info[(feat_info['Meaning'].str.contains("unknown") | feat_info['Meaning'].str.contains("no "))] pd.set_option('display.max_rows', 500) feat_info # Convert to a list of strings feat_info.loc[feat_info['Attribute'] == 'AGER_TYP', 
'Value'].astype(str).str.cat(sep=',').split(',') # Because both of the first 2 rows of feat_info belong to the same attribute, combine the values # for each row into a single list of strings unknowns = [] for attribute in feat_info['Attribute'].unique(): _ = feat_info.loc[feat_info['Attribute'] == attribute, 'Value'].astype(str).str.cat(sep=',') _ = _.split(',') unknowns.append(_) unknowns = pd.concat([pd.Series(feat_info['Attribute'].unique()), pd.Series(unknowns)], axis=1) unknowns.columns = ['attribute', 'missing_or_unknown'] feat_info = unknowns feat_info start = time.time() # Converting the missing values to Nans in the dataset missing_values = pd.Series(feat_info['missing_or_unknown'].values, index=feat_info.index).to_dict() azdias[azdias.isin(missing_values)] = np.nan end = time.time() elapsed = end - start print("Total execution time: {:.2f} seconds".format(elapsed)) azdias.shape #azdias.head(10) start = time.time() # Checking how much missing data there is in each column of the dataset. missing_col = azdias.isnull().sum() end = time.time() elapsed = end - start print("Total execution time: {:.2f} seconds".format(elapsed)) # Investigate patterns in the amount of missing data in each column. plt.hist(missing_col, bins=15, facecolor='b', alpha=1) plt.xlabel('Count of missing values in column') plt.ylabel('Number of columns') plt.title('Histogram for the count of missing values in columns') plt.grid(True) plt.show() missing_columns = missing_col[missing_col>0] missing_columns.sort_values(inplace=True) missing_columns.plot.bar(figsize=(20,15), facecolor='b') plt.xlabel('Column name with missing values') plt.ylabel('Number of missing values') plt.grid(True) plt.title('Column Name vs missing values') plt.show() start = time.time() # This operation is to remove the outlier columns from the dataset. 
# Identify the columns having more than 200,000 missing values
missing_col_updated = missing_col[missing_col>200000]
# Dropping those columns from the data set
azdias.drop(missing_col_updated.index, axis=1, inplace=True)
end = time.time()
elapsed = end - start
print("Total execution time: {:.2f} seconds".format(elapsed))

# Listing the dropped columns
print(missing_col_updated)

azdias.shape

start = time.time()
# Separate the data into two subsets based on the number of missing values in each row.
# Keep the rows having less than 20 missing values for the analysis
n_missing = azdias.isnull().transpose().sum()
azdias_missing_low = azdias[n_missing<20] # rows having less than 20 missing values
#azdias_missing_high = azdias[n_missing>=20] # rows having 20 or more missing values
n_missing_low = azdias_missing_low.isnull().transpose().sum()
#n_missing_high = azdias_missing_high.isnull().transpose().sum()
end = time.time()
elapsed = end - start
print("Total execution time: {:.2f} seconds".format(elapsed))

# Check the count for the remaining number of rows
azdias_missing_low.shape

# The only binary categorical variable that does not take integer values is OST_WEST_KZ, which uses either W or O.
# Re-encoding with 1 and 0.
azdias_missing_low['OST_WEST_KZ'].replace(['W', 'O'], [1, 0], inplace=True)
azdias_missing_low['OST_WEST_KZ'].head()

# For columns with more than 10 different values, drop for simplicity.
cat_cols_to_drop = ['CAMEO_DEU_2015', 'GFK_URLAUBERTYP', 'LP_FAMILIE_FEIN', 'LP_LEBENSPHASE_FEIN',
                    'LP_LEBENSPHASE_GROB', 'LP_STATUS_FEIN', 'PRAEGENDE_JUGENDJAHRE', 'EINGEFUEGT_AM']

# For columns with fewer than 10 levels, re-encode using dummy variables.
cat_cols_to_dummy = ['CJT_GESAMTTYP', 'FINANZTYP', 'GEBAEUDETYP', 'GEBAEUDETYP_RASTER', 'HEALTH_TYP',
                     'KBA05_HERSTTEMP', 'KBA05_MAXHERST', 'KBA05_MODTEMP', 'LP_FAMILIE_GROB',
                     'LP_STATUS_GROB', 'NATIONALITAET_KZ', 'SHOPPER_TYP', 'VERS_TYP']

start = time.time()
# Dropping the categorical columns
azdias_missing_low.drop(cat_cols_to_drop, axis=1, inplace=True)
end = time.time()
elapsed = end - start
print("Total execution time: {:.2f} seconds".format(elapsed))

start = time.time()
# Creating dummy variables for columns with fewer than 10 unique values,
# then drop the original columns
for col in cat_cols_to_dummy:
    dummy = pd.get_dummies(azdias_missing_low[col], prefix=col)
    azdias_missing_low = pd.concat([azdias_missing_low, dummy], axis=1)

print("Dropping the dummied columns")
azdias_missing_low.drop(cat_cols_to_dummy, axis=1, inplace=True)
end = time.time()
elapsed = end - start
print("Total execution time: {:.2f} seconds".format(elapsed))

print(azdias_missing_low.shape)

# replacing values for CAMEO_INTL_2015
azdias_missing_low['CAMEO_INTL_2015'] = azdias_missing_low['CAMEO_INTL_2015'].replace('XX', -1)
# replacing values for CAMEO_DEUG_2015
azdias_missing_low['CAMEO_DEUG_2015'] = azdias_missing_low['CAMEO_DEUG_2015'].replace('X', -1)

# Create a cleaning function so the same changes can be done on the customer dataset as on the
# general population dataset.
def clean_data(azdias, feat_info):
    """
    INPUT:
        azdias: population/customer demographics DataFrame
        feat_info: feature info DataFrame

    OUTPUT:
        Trimmed and cleaned demographics DataFrame
    """
    # Convert missing values to NaNs
    print("Convert missing values")
    missing_values = pd.Series(feat_info['missing_or_unknown'].values,
                               index=feat_info.index).to_dict()
    azdias[azdias.isin(missing_values)] = np.nan
    missing_col = azdias.isnull().sum()

    print("dropping missing_col_updated")
    # Remove the outlier columns from the dataset
    missing_col_updated = missing_col[missing_col > 200000]  # columns having more than 200K missing values
    azdias.drop(missing_col_updated.index, axis=1, inplace=True)  # drop those columns from the data set

    n_missing = azdias.isnull().transpose().sum()
    azdias_missing_low = azdias[n_missing < 20]  # rows having fewer than 20 missing values
    n_missing_low = azdias_missing_low.isnull().transpose().sum()

    # The only binary categorical variable that does not take integer values is
    # OST_WEST_KZ, which uses either W or O.
    print("replacing values for OST_WEST_KZ")
    azdias_missing_low['OST_WEST_KZ'].replace(['W', 'O'], [1, 0], inplace=True)
    azdias_missing_low['OST_WEST_KZ'].head()

    # For columns with more than 10 different values, drop for simplicity.
    # For columns with fewer than 10 levels, re-encode using dummy variables.
    cat_cols_to_drop = ['CAMEO_DEU_2015', 'GFK_URLAUBERTYP', 'LP_FAMILIE_FEIN',
                        'LP_LEBENSPHASE_FEIN', 'LP_LEBENSPHASE_GROB', 'LP_STATUS_FEIN',
                        'PRAEGENDE_JUGENDJAHRE', 'EINGEFUEGT_AM']

    cat_cols_to_dummy = ['CJT_GESAMTTYP', 'FINANZTYP', 'GEBAEUDETYP', 'GEBAEUDETYP_RASTER',
                         'HEALTH_TYP', 'KBA05_HERSTTEMP', 'KBA05_MAXHERST', 'KBA05_MODTEMP',
                         'LP_FAMILIE_GROB', 'LP_STATUS_GROB', 'NATIONALITAET_KZ',
                         'SHOPPER_TYP', 'VERS_TYP']

    print("Dropping categorical columns with 10 or more values")
    # Drop categorical columns with 10 or more values
    azdias_missing_low.drop(cat_cols_to_drop, axis=1, inplace=True)

    for col in cat_cols_to_dummy:
        dummy = pd.get_dummies(azdias_missing_low[col], prefix=col)
        azdias_missing_low = pd.concat([azdias_missing_low, dummy], axis=1)

    print("Dropping dummies")
    azdias_missing_low.drop(cat_cols_to_dummy, axis=1, inplace=True)

    # replacing values for CAMEO_INTL_2015
    azdias_missing_low['CAMEO_INTL_2015'] = azdias_missing_low['CAMEO_INTL_2015'].replace('XX', -1)

    # replacing values for CAMEO_DEUG_2015
    azdias_missing_low['CAMEO_DEUG_2015'] = azdias_missing_low['CAMEO_DEUG_2015'].replace('X', -1)

    # Return the cleaned dataframe.
    return azdias_missing_low

azdias_missing_low.head(10)

start = time.time()
azdias_missing_low = azdias_missing_low.fillna(0)
end = time.time()
elapsed = end - start
print("Total execution time: {:.2f} seconds".format(elapsed))

scaler = StandardScaler()

# Apply feature scaling to the population data.
start = time.time()
azdias_missing_low = pd.DataFrame(scaler.fit_transform(azdias_missing_low),
                                  columns=azdias_missing_low.columns)
end = time.time()
print("Total execution time: {:.2f} seconds".format(end - start))
```

## Part 1: Customer Segmentation Report

The main bulk of your analysis will come in this part of the project. Here, you should use unsupervised learning techniques to describe the relationship between the demographics of the company's existing customers and the general population of Germany.
By the end of this part, you should be able to describe parts of the general population that are more likely to be part of the mail-order company's main customer base, and which parts of the general population are less so.

```
customers = pd.read_csv('../../data/Term2/capstone/arvato_data/Udacity_CUSTOMERS_052018.csv', sep=';')

customers.shape
customers.head(10)

start = time.time()
# Run the clean_data function on the customers dataset
customers = clean_data(customers, feat_info)
end = time.time()
elapsed = end - start
print("Total execution time: {:.2f} seconds".format(elapsed))

customers.shape

start = time.time()
customers = customers.fillna(0)
end = time.time()
elapsed = end - start
print("Total execution time: {:.2f} seconds".format(elapsed))

# Drop the extra columns of the customers dataset.
customers.drop(columns=['CUSTOMER_GROUP', 'ONLINE_PURCHASE', 'PRODUCT_GROUP'], inplace=True)

cols_to_drop = ['AGER_TYP', 'ALTER_HH', 'ALTER_KIND1', 'ALTER_KIND2', 'ALTER_KIND3',
                'ALTER_KIND4', 'ALTERSKATEGORIE_FEIN', 'D19_BANKEN_ANZ_12',
                'D19_BANKEN_ANZ_24', 'D19_BANKEN_DATUM', 'D19_BANKEN_OFFLINE_DATUM',
                'D19_BANKEN_ONLINE_DATUM', 'D19_BANKEN_ONLINE_QUOTE_12',
                'D19_GESAMT_ANZ_12', 'D19_GESAMT_ANZ_24', 'D19_GESAMT_DATUM',
                'D19_GESAMT_OFFLINE_DATUM', 'D19_GESAMT_ONLINE_DATUM',
                'D19_GESAMT_ONLINE_QUOTE_12', 'D19_KONSUMTYP',
                'D19_LETZTER_KAUF_BRANCHE', 'D19_LOTTO', 'D19_SOZIALES',
                'D19_TELKO_ANZ_12', 'D19_TELKO_ANZ_24', 'D19_TELKO_DATUM',
                'D19_TELKO_OFFLINE_DATUM', 'D19_TELKO_ONLINE_DATUM',
                'D19_TELKO_ONLINE_QUOTE_12', 'D19_VERSAND_ANZ_12',
                'D19_VERSAND_ANZ_24', 'D19_VERSAND_DATUM',
                'D19_VERSAND_OFFLINE_DATUM', 'D19_VERSAND_ONLINE_DATUM',
                'D19_VERSAND_ONLINE_QUOTE_12', 'D19_VERSI_ANZ_12',
                'D19_VERSI_ANZ_24', 'D19_VERSI_ONLINE_QUOTE_12', 'EXTSEL992',
                'KBA05_ANTG1', 'KBA05_ANTG2', 'KBA05_ANTG3', 'KBA05_ANTG4',
                'KBA05_BAUMAX', 'KBA05_MAXVORB', 'KK_KUNDENTYP', 'TITEL_KZ']
customers.drop(cols_to_drop, axis=1, inplace=True)

customers.shape

# Apply feature scaling to the customers data.
start = time.time()
customers = pd.DataFrame(scaler.fit_transform(customers), columns=customers.columns)
end = time.time()
print("Total execution time: {:.2f} seconds".format(end - start))

customers.head(10)

# Apply PCA to the customers data.
start = time.time()
pca = PCA()
customers = pca.fit_transform(customers)
end = time.time()
print("Total execution time: {:.2f} seconds".format(end - start))

# Investigate the variance accounted for by each principal component.
n_components = min(np.where(np.cumsum(pca.explained_variance_ratio_) > 0.8)[0] + 1)  # 80% of variance selected

fig = plt.figure()
ax = fig.add_axes([0, 0, 1, 1], True)
ax2 = ax.twinx()
ax.plot(pca.explained_variance_ratio_, label='Variance')
ax2.plot(np.cumsum(pca.explained_variance_ratio_), label='Cumulative Variance', color='red');
ax.set_title('n_components needed for >80% explained variance: {}'.format(n_components));
ax.axvline(n_components, linestyle='dashed', color='black')
ax2.axhline(np.cumsum(pca.explained_variance_ratio_)[n_components], linestyle='dashed', color='black')
fig.legend(loc=(0.8, 0.2));

# Re-apply PCA to the data while selecting the number of components to retain.
start = time.time() pca = PCA(n_components=60, random_state=10) azdias_pca = pca.fit_transform(customers) end = time.time() print("Total execution time of this procedure: {:.2f} seconds".format(end-start)) # check the sum of the explained variance pca.explained_variance_ratio_.sum() def plot_pca(data, pca, n_components): ''' Plot the features with the most absolute variance for given pca component ''' compo = pd.DataFrame(np.round(pca.components_, 4), columns = data.keys()).iloc[n_components-1] compo.sort_values(ascending=False, inplace=True) compo = pd.concat([compo.head(5), compo.tail(5)]) compo.plot(kind='bar', title='Component ' + str(n_components)) ax = plt.gca() ax.grid(linewidth='0.5', alpha=0.5) ax.set_axisbelow(True) plt.show() # plot_pca(customers, pca, 2) from sklearn.cluster import KMeans, MiniBatchKMeans start = time.time() kmeans_scores = [] for i in range(2,30,2): #run k-means clustering on the data kmeans = MiniBatchKMeans(i) kmeans.fit(azdias_pca) #compute the average within-cluster distances. #print(i,kmeans.score(azdias_pca)) kmeans_scores.append(-kmeans.score(azdias_pca)) end = time.time() print("Total execution time: {:.2f} seconds".format(end-start)) kmeans_scores # Investigate the change in within-cluster distance across number of clusters. # HINT: Use matplotlib's plot function to visualize this relationship. # Plot elbow plot x = range(2, 30, 2) plt.figure(figsize=(8, 4)) plt.plot(x, kmeans_scores, marker='o') plt.xticks(x) plt.xlabel('K') plt.ylabel('SSE'); # Re-fit the k-means model with the selected number of clusters (20) and obtain # cluster predictions for the general population demographics data. 
start = time.time() kmeans_20 = KMeans(20, random_state=10) clusters_pop = kmeans_20.fit_predict(azdias_pca) end = time.time() print("Total execution time of this procedure: {:.2f} seconds".format(end-start)) #general_prop = [] customers_prop = [] x = [i+1 for i in range(20)] for i in range(20): #general_prop.append((clusters_pop == i).sum()/len(clusters_pop)) customers_prop.append((clusters_pop == i).sum()/len(clusters_pop)) df_general = pd.DataFrame({'cluster' : x, 'Customers pop':customers_prop}) #ax = sns.countplot(x='index', y = df_general['prop_1', 'prop_2'], data=df_general ) df_general.plot(x='cluster', y = ['Customers pop'], kind='bar', figsize=(9,6)) plt.ylabel('proportion of persons in each cluster') plt.show() ``` ## Part 2: Supervised Learning Model Now that you've found which parts of the population are more likely to be customers of the mail-order company, it's time to build a prediction model. Each of the rows in the "MAILOUT" data files represents an individual that was targeted for a mailout campaign. Ideally, we should be able to use the demographic information from each individual to decide whether or not it will be worth it to include that person in the campaign. The "MAILOUT" data has been split into two approximately equal parts, each with almost 43 000 data rows. In this part, you can verify your model with the "TRAIN" partition, which includes a column, "RESPONSE", that states whether or not a person became a customer of the company following the campaign. In the next part, you'll need to create predictions on the "TEST" partition, where the "RESPONSE" column has been withheld. 
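Because the RESPONSE target explored below is heavily imbalanced, the models in this part are scored with ROC AUC rather than accuracy. A minimal sketch (toy data, not the MAILOUT set) of why accuracy is misleading here:

```python
import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score

# Toy imbalanced labels: 99% negative, like the mailout responses.
y_true = np.array([0] * 99 + [1])

# A useless "model" that always predicts the majority class
# still achieves 99% accuracy ...
y_pred = np.zeros(100, dtype=int)
print(accuracy_score(y_true, y_pred))  # 0.99

# ... but its constant scores give an ROC AUC of 0.5 (chance level).
y_scores = np.full(100, 0.1)
print(roc_auc_score(y_true, y_scores))  # 0.5
```

This is why `classify` above passes `scoring='roc_auc'` to `GridSearchCV` instead of relying on the default accuracy score.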
```
mailout_train = pd.read_csv('../../data/Term2/capstone/arvato_data/Udacity_MAILOUT_052018_TRAIN.csv', sep=';')

mailout_train.info()
mailout_train.shape
mailout_train.head()

# Imbalance of the RESPONSE column
vc = mailout_train['RESPONSE'].value_counts()
vc

# positive response rate
vc[1] / (vc[0] + vc[1])

# negative response rate
vc[0] / (vc[0] + vc[1])

mailout_train.head(10)

# find features to drop because of many missing values
missing_per_column = mailout_train.isnull().mean()
plt.hist(missing_per_column, bins=34)

start = time.time()
# clean data; no splitting of rows necessary
mailout_train = clean_data(mailout_train, feat_info)
mailout_train.shape
end = time.time()
elapsed = end - start
print("Total execution time: {:.2f} seconds".format(elapsed))

# this column contains timestamp values
mailout_train.drop(labels=['D19_LETZTER_KAUF_BRANCHE'], axis=1, inplace=True)
mailout_train.shape
mailout_train.head(10)

# extract the RESPONSE column
response = mailout_train['RESPONSE']

# drop the RESPONSE column
mailout_train.drop(labels=['RESPONSE'], axis=1, inplace=True)

# impute the median and scale the training data
imputer = Imputer(strategy='median')
scaler = StandardScaler()
mailout_train_imputed = pd.DataFrame(imputer.fit_transform(mailout_train))
mailout_train_scaled = scaler.fit_transform(mailout_train_imputed)

mailout_train_scaled.shape
response.shape
```

The dataset has been preprocessed for further analysis.

```
def classify(clf, param_grid, X_train=mailout_train_scaled, y_train=response):
    """
    Fits a classifier to its training data and prints its ROC AUC score.
INPUT: - clf (classifier): classifier to fit - param_grid (dict): classifier parameters used with GridSearchCV - X_train (DataFrame): training input - y_train (DataFrame): training output OUTPUT: - classifier: input classifier fitted to the training data """ # cv uses StratifiedKFold grid = GridSearchCV(estimator=clf, param_grid=param_grid, scoring='roc_auc', cv=5) grid.fit(X_train, y_train) print(grid.best_score_) return grid.best_estimator_ start = time.time() # LogisticRegression logreg = LogisticRegression(random_state=12) classify(logreg, {}) end = time.time() elapsed = end - start print("Total execution time: {:.2f} seconds".format(elapsed)) start = time.time() # BaggingClassifier bac = BaggingClassifier(random_state=12) classify(bac, {}) end = time.time() elapsed = end - start print("Total execution time: {:.2f} seconds".format(elapsed)) start = time.time() # AdaBoostClassifier abc = AdaBoostClassifier(random_state=12) abc_best_est = classify(abc, {}) end = time.time() elapsed = end - start print("Total execution time: {:.2f} seconds".format(elapsed)) start = time.time() # GradientBoostingClassifier gbc = GradientBoostingClassifier(random_state=12) classify(gbc, {}) end = time.time() elapsed = end - start print("Total execution time: {:.2f} seconds".format(elapsed)) # Tuning the model which gave the best result # tune with the help of GridSearchCV # the result is our model that will be used with the test set gbc = GradientBoostingClassifier(random_state=12) param_grid = {'loss': ['deviance', 'exponential'], 'max_depth': [2, 3], 'n_estimators':[80], 'random_state': [12] } start = time.time() gbc_tuned = classify(gbc, param_grid) gbc_tuned end = time.time() elapsed = end - start print("Total execution time: {:.2f} seconds".format(elapsed)) # Import StratifiedKFold from sklearn.model_selection import StratifiedKFold # Initialize 5 stratified folds skf = StratifiedKFold(n_splits=5, random_state=12) skf.get_n_splits(mailout_train, response) print(skf) def 
create_pipeline(clf): # Create machine learning pipeline pipeline = Pipeline([ ('imp', imputer), ('scale', scaler), ('clf', clf) ]) return pipeline def cross_validate(clf): pipeline = create_pipeline(clf) scores = [] i = 0 # Perform 5-fold validation for train_index, test_index in skf.split(mailout_train, response): i+=1 print('Fold {}'.format(i)) # Split the data into training and test sets X_train, X_test = mailout_train.iloc[train_index], mailout_train.iloc[test_index] y_train, y_test = response.iloc[train_index], response.iloc[test_index] # Train using the pipeline pipeline.fit(X_train, y_train) #Predict on the test data y_pred = pipeline.predict(X_test) score = roc_auc_score(y_test, y_pred) scores.append(score) print("Score: {}".format(score)) return scores from sklearn.metrics import roc_auc_score start = time.time() tuned_scores = cross_validate(gbc) end = time.time() elapsed = end - start print("Total execution time: {:.2f} seconds".format(elapsed)) ``` ## Part 3: Kaggle Competition Now that you've created a model to predict which individuals are most likely to respond to a mailout campaign, it's time to test that model in competition through Kaggle. If you click on the link [here](http://www.kaggle.com/t/21e6d45d4c574c7fa2d868f0e8c83140), you'll be taken to the competition page where, if you have a Kaggle account, you can enter. If you're one of the top performers, you may have the chance to be contacted by a hiring manager from Arvato or Bertelsmann for an interview! Your entry to the competition should be a CSV file with two columns. The first column should be a copy of "LNR", which acts as an ID number for each individual in the "TEST" partition. The second column, "RESPONSE", should be some measure of how likely each individual became a customer – this might not be a straightforward probability. As you should have found in Part 2, there is a large output class imbalance, where most individuals did not respond to the mailout. 
Thus, predicting individual classes and using accuracy does not seem to be an appropriate performance evaluation method. Instead, the competition will be using AUC to evaluate performance. The exact values of the "RESPONSE" column do not matter as much: only that the higher values try to capture as many of the actual customers as possible, early in the ROC curve sweep.

```
mailout_test = pd.read_csv('../../data/Term2/capstone/arvato_data/Udacity_MAILOUT_052018_TEST.csv', sep=';')

mailout_test.info()
mailout_test.head(10)

# extract LNR for later generation of the competition result file
# lnr = mailout_test.LNR

# clean data
mailout_test = clean_data(mailout_test, feat_info)
mailout_test.shape

# this column contains timestamp values
mailout_test.drop(labels=['D19_LETZTER_KAUF_BRANCHE'], axis=1, inplace=True)
lnr = mailout_test.LNR
mailout_test.shape

# impute the median and scale the test data
mailout_test_imputed = pd.DataFrame(imputer.transform(mailout_test))
mailout_test_scaled = scaler.transform(mailout_test_imputed)

# use the trained model from Part 2 to predict the probabilities of the testing data
response_test = gbc_tuned.predict_proba(mailout_test_scaled)
response_test
response_test.shape

# generate the result file for the competition; column 1 of predict_proba
# holds the probability of a positive response
result = pd.DataFrame({'LNR': lnr, 'RESPONSE': response_test[:, 1]})
result.head(10)
```
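Note that `predict_proba` returns one column per class, ordered by the classifier's `classes_` attribute, so for a 0/1 target the probability of a positive response is column 1, not column 0. A toy sketch of that ordering:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Tiny illustrative dataset: larger x means a positive label.
X = np.array([[0.0], [0.2], [0.8], [1.0]])
y = np.array([0, 0, 1, 1])

clf = LogisticRegression().fit(X, y)

# The columns of predict_proba follow clf.classes_, which is [0, 1] here.
print(clf.classes_)

proba = clf.predict_proba(X)
# proba[:, 1] is P(y == 1) -- the "likely to respond" score for a submission.
print(proba[:, 1])
```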
# Classifying Fashion-MNIST Now it's your turn to build and train a neural network. You'll be using the [Fashion-MNIST dataset](https://github.com/zalandoresearch/fashion-mnist), a drop-in replacement for the MNIST dataset. MNIST is actually quite trivial with neural networks where you can easily achieve better than 97% accuracy. Fashion-MNIST is a set of 28x28 greyscale images of clothes. It's more complex than MNIST, so it's a better representation of the actual performance of your network, and a better representation of datasets you'll use in the real world. <img src='assets/fashion-mnist-sprite.png' width=500px> In this notebook, you'll build your own neural network. For the most part, you could just copy and paste the code from Part 3, but you wouldn't be learning. It's important for you to write the code yourself and get it to work. Feel free to consult the previous notebooks though as you work through this. First off, let's load the dataset through torchvision. ``` import torch from torchvision import datasets, transforms import helper # Define a transform to normalize the data transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.5,), (0.5,))]) # Download and load the training data trainset = datasets.FashionMNIST('~/.pytorch/F_MNIST_data/', download=True, train=True, transform=transform) trainloader = torch.utils.data.DataLoader(trainset, batch_size=64, shuffle=True) # Download and load the test data testset = datasets.FashionMNIST('~/.pytorch/F_MNIST_data/', download=True, train=False, transform=transform) testloader = torch.utils.data.DataLoader(testset, batch_size=64, shuffle=True) ``` Here we can see one of the images. ``` image, label = next(iter(trainloader)) helper.imshow(image[0,:]); ``` ## Building the network Here you should define your network. As with MNIST, each image is 28x28 which is a total of 784 pixels, and there are 10 classes. You should include at least one hidden layer. 
We suggest you use ReLU activations for the layers and to return the logits or log-softmax from the forward pass. It's up to you how many layers you add and the size of those layers. ``` from torch import nn, optim import torch.nn.functional as F # TODO: Define your network architecture here class Classifier(nn.Module): def __init__(self): super().__init__() self.fc1 = nn.Linear(784, 256) self.fc2 = nn.Linear(256, 128) self.fc3 = nn.Linear(128, 64) self.fc4 = nn.Linear(64, 10) def forward(self, x): # make sure input tensor is flattened x = x.view(x.shape[0], -1) x = F.relu(self.fc1(x)) x = F.relu(self.fc2(x)) x = F.relu(self.fc3(x)) x = F.log_softmax(self.fc4(x), dim=1) return x ``` # Train the network Now you should create your network and train it. First you'll want to define [the criterion](http://pytorch.org/docs/master/nn.html#loss-functions) (something like `nn.CrossEntropyLoss` or `nn.NLLLoss`) and [the optimizer](http://pytorch.org/docs/master/optim.html) (typically `optim.SGD` or `optim.Adam`). Then write the training code. Remember the training pass is a fairly straightforward process: * Make a forward pass through the network to get the logits * Use the logits to calculate the loss * Perform a backward pass through the network with `loss.backward()` to calculate the gradients * Take a step with the optimizer to update the weights By adjusting the hyperparameters (hidden units, learning rate, etc), you should be able to get the training loss below 0.4. 
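Since the network below returns a log-softmax, exponentiating its output recovers proper class probabilities (which is what `torch.exp` is used for later in this notebook). A quick NumPy sketch of that relationship; PyTorch's `F.log_softmax` computes the same quantity along a dimension:

```python
import numpy as np

logits = np.array([2.0, 1.0, 0.1])

# log-softmax = logits - log(sum(exp(logits)))
log_ps = logits - np.log(np.exp(logits).sum())

# Exponentiating gives the softmax probabilities, which sum to 1.
ps = np.exp(log_ps)
print(ps.sum())
```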
```
# TODO: Create the network, define the criterion and optimizer
model = Classifier()
criterion = nn.NLLLoss()
optimizer = optim.Adam(model.parameters(), lr=0.003)

# TODO: Train the network here
epochs = 5

for e in range(epochs):
    running_loss = 0
    for images, labels in trainloader:
        log_ps = model(images)
        loss = criterion(log_ps, labels)

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        running_loss += loss.item()
    else:
        print(f"Training loss: {running_loss/len(trainloader)}")

%matplotlib inline
%config InlineBackend.figure_format = 'retina'

import helper

# Test out your network!
dataiter = iter(testloader)
images, labels = next(dataiter)
img = images[1]

# TODO: Calculate the class probabilities (softmax) for img
ps = torch.exp(model(img))

# Plot the image and probabilities
helper.view_classify(img, ps, version='Fashion')

from torch import nn
import torch.nn.functional as F
from torch import optim

# Step 1: Build the model (class)
input_dim = 784
hidden_layers = [256, 128, 64]
output_dim = 10

class Classifier1(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(input_dim, hidden_layers[0])
        self.fc2 = nn.Linear(hidden_layers[0], hidden_layers[1])
        self.fc3 = nn.Linear(hidden_layers[1], hidden_layers[2])
        self.fc4 = nn.Linear(hidden_layers[2], output_dim)

    def forward(self, x):
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = F.relu(self.fc3(x))
        # Apply the output layer before the log-softmax
        x = F.log_softmax(self.fc4(x), dim=1)
        return x

classifier1 = Classifier1()

# Step 2: Set the loss function
criterion = nn.NLLLoss()

# Step 3: Set the optimizer
optimizer = optim.SGD(classifier1.parameters(), lr=0.003)

# Step 4: Training
epochs = 5

for e in range(epochs):
    running_loss = 0
    for images, labels in trainloader:
        # Flatten the images
        images = images.view(images.shape[0], -1)

        # Step 4-1: Zero the accumulated gradients
        optimizer.zero_grad()

        # Step 4-2: Get the prediction
        output = classifier1(images)

        # Step 4-3: Calculate the loss
        loss = criterion(output, labels)

        # Step 4-4: Backpropagate the loss
        loss.backward()

        # Step 4-5: Update the weights
        optimizer.step()

        running_loss += loss.item()
    else:
        print(f"Training loss: {running_loss/len(trainloader)}")
```
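The two criterion options mentioned above (`nn.NLLLoss` on log-softmax output versus `nn.CrossEntropyLoss` on raw logits) compute the same loss; cross-entropy simply fuses the log-softmax step. A NumPy sketch of the equivalence:

```python
import numpy as np

def log_softmax(logits):
    # Same quantity F.log_softmax computes for a single example
    return logits - np.log(np.exp(logits).sum())

logits = np.array([2.0, 0.5, -1.0])
target = 0  # index of the true class

# NLL loss applied to a log-softmax output:
nll_of_log_softmax = -log_softmax(logits)[target]

# Cross-entropy applied directly to the logits (same formula, fused):
cross_entropy = -logits[target] + np.log(np.exp(logits).sum())

print(np.isclose(nll_of_log_softmax, cross_entropy))  # True
```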
``` # import models import matplotlib.pyplot as plt import numpy as np import xlrd from sklearn.preprocessing import MinMaxScaler from sklearn.svm import SVC from sklearn.model_selection import StratifiedKFold from sklearn.model_selection import GridSearchCV from sklearn.metrics import roc_curve, auc import itertools from sklearn.metrics import confusion_matrix # load data ## Pathology for train data = xlrd.open_workbook('Pathology.xlsx').sheets()[0] X_tra = np.zeros((111, 2)) X_tra[:, 0] = data.col_values(3) # X0 = Integrated backscatter #X_tra[:, 1] = data.col_values(4) # X1 = Q factor (HHT) X_tra[:, 1] = data.col_values(6) # X2 = Homogeneity factor ### scaling to 0~1 Scaler = MinMaxScaler(copy=True, feature_range=(0, 1)).fit(X_tra) X_train = Scaler.transform(X_tra) ### label nrows = data.nrows y_train = [] for i in range(nrows): # y = 'T' or 'F' from Pathology data if data.cell(i,1).value < 10: y_train.append(0) elif data.cell(i,1).value >= 10: y_train.append(1) y_train = np.ravel(y_train) # Return a contiguous flattened array class_names = np.ravel(['negative', 'positive']) ## New Pathology for test data = xlrd.open_workbook('NPathology.xlsx').sheets()[0] X_te = np.zeros((74, 2)) X_te[:, 0] = data.col_values(3) # X0 = Integrated backscatter #X_te[:, 1] = data.col_values(4) # X1 = Q factor (HHT) X_te[:, 1] = data.col_values(6) # X2 = Homogeneity factor ### scaling to 0~1 X_test = Scaler.transform(X_te) ### label nrows = data.nrows y_test = [] for i in range(nrows): # y = 'T' or 'F' from Pathology data if data.cell(i,1).value < 10: y_test.append(0) elif data.cell(i,1).value >= 10: y_test.append(1) y_test = np.ravel(y_test) # Return a contiguous flattened array # training & validation ## set CV cv = StratifiedKFold(10) ## set parameters C_l = list(2**(i/100) for i in range(-100, 325, 25)) C_r = list(2**(i/100) for i in range(-100, 325, 25)) C_p = list(2**(i/100) for i in range(-700, -375, 25)) gamma_r = list(2**(i/100) for i in range(-300, 125, 25)) gamma_p = 
list(2**(i/100) for i in range(100, 525, 25)) ## SVC_linear linear_prarmeters = {'kernel': ['linear'], 'C': C_l} clf_linear = GridSearchCV(SVC(), linear_prarmeters, cv=cv, n_jobs=3).fit(X_train, y_train) print('The best parameters of linear are {0}\n with a validation score of {1:.2%}' .format(clf_linear.best_params_, clf_linear.best_score_)) ## SVC_rbf rbf_prarmeters = {'kernel': ['rbf'], 'C': C_r, 'gamma': gamma_r} clf_rbf = GridSearchCV(SVC(), rbf_prarmeters, cv=cv, n_jobs=3).fit(X_train, y_train) print('The best parameters of rbf are {0}\n with a validation score of {1:.2%}' .format(clf_rbf.best_params_, clf_rbf.best_score_)) ## SVC_poly poly_prarmeters = {'kernel': ['poly'], 'C': C_p, 'gamma': gamma_p, 'degree': [1, 2, 3]} clf_poly = GridSearchCV(SVC(), poly_prarmeters, cv=cv, n_jobs=3).fit(X_train, y_train) print('The best parameters of poly are {0}\n with a validation score of {1:.2%}\n' .format(clf_poly.best_params_, clf_poly.best_score_)) # test ## SVC_linear test_predict_linear = clf_linear.predict(X_test) test_score_linear = clf_linear.score(X_test, y_test) print('test accuracy of linear = {0:.2%}'.format(test_score_linear)) dec_linear = clf_linear.decision_function(X_test) ## SVC_rbf test_predict_rbf = clf_rbf.predict(X_test) test_score_rbf = clf_rbf.score(X_test, y_test) print('test accuracy of rbf = {0:.2%}'.format(test_score_rbf)) dec_rbf = clf_rbf.decision_function(X_test) ## SVC_poly test_predict_poly = clf_poly.predict(X_test) test_score_poly = clf_poly.score(X_test, y_test) print('test accuracy of poly = {0:.2%}\n'.format(test_score_poly)) dec_poly = clf_poly.decision_function(X_test) ## Confusion matrix ### define confusion matrix def plot_confusion_matrix(cm, classes, normalize=False, title='Confusion matrix', cmap=plt.cm.Blues): global i print('{0}'.format(kernel[i])) print('Sensitivity = {0:.2%}'.format(cm[1,1]/(cm[1,0]+cm[1,1]))) print('Specificity = {0:.2%}'.format(cm[0,0]/(cm[0,0]+cm[0,1]))) print('LR+ = 
{0:.2f}'.format((cm[1,1]/(cm[1,0]+cm[1,1]))/(cm[0,1]/(cm[0,0]+cm[0,1])))) print('LR- = {0:.2f}'.format((cm[1,0]/(cm[1,0]+cm[1,1]))/(cm[0,0]/(cm[0,0]+cm[0,1])))) print('PPV = {0:.2%}'.format(cm[1,1]/(cm[0,1]+cm[1,1]))) print('NPV = {0:.2%}'.format(cm[0,0]/(cm[0,0]+cm[1,0]))) print('Accuracy = {0:.2%}\n'.format((cm[0,0]+cm[1,1])/(cm[0,0]+cm[0,1]+cm[1,0]+cm[1,1]))) plt.imshow(cm, interpolation='nearest', cmap=cmap) plt.colorbar() tick_marks = np.arange(len(classes)) plt.xticks(tick_marks, classes) plt.yticks(tick_marks, classes) fmt = '.2f' if normalize else 'd' thresh = cm.max() / 2. for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])): plt.text(j, i, format(cm[i, j], fmt), horizontalalignment='center', color='white' if cm[i, j] > thresh else 'black') plt.tight_layout() plt.title(title, size = 20) plt.ylabel('True label', size = 15) plt.xlabel('Predicted label', size = 15) kernel = ['linear', 'rbf', 'poly'] pre_kernel = [test_predict_linear, test_predict_rbf, test_predict_poly] # predict of each kernel for i in range(0,3): ## Compute confusion matrix cnf_matrix = confusion_matrix(y_test, pre_kernel[i]) np.set_printoptions(precision=4) ## Plot non-normalized confusion matrix plt.figure(figsize=(6,6)) plot_confusion_matrix(cnf_matrix, classes=class_names, title='{0} Confusion matrix'.format(kernel[i])) plt.show() ## ROC curve ### compute ROC and AUC fpr = dict() tpr = dict() thresholds = dict() t_score = dict() roc_auc = dict() dec = [dec_linear, dec_rbf, dec_poly] # direction of each kernel for i in range(0,3): fpr[i], tpr[i], thresholds[i] = roc_curve(y_test, dec[i]) roc_auc[i] = auc(fpr[i], tpr[i]) t_score[i] = tpr[i] - fpr[i] for j in range(len(t_score[i])): if t_score[i][j] == max(t_score[i]): s = j break print('{0} Cut-off value = {1:.4f}'.format(kernel[i], thresholds[i][s])) print('{0} AUROC = {1:.4f}'.format(kernel[i], roc_auc[i])) ### plot plt.figure(figsize=(10,10)) lw=2 colors = ['red', 'green', 'blue'] linestyles = ['-', '-.', ':'] for i, 
color, linestyle in zip(range(0,3), colors, linestyles): plt.plot(fpr[i], tpr[i], color=color, linestyle=linestyle, lw=lw, label='ROC curve of {0} (auc = {1:.4f})'.format(kernel[i], roc_auc[i])) plt.plot([0, 1], [0, 1], color='orange', lw=lw, linestyle='--') plt.xlim([-0.05, 1.05]) plt.ylim([-0.05, 1.05]) plt.xlabel('False Positive Rate (1-specificity)', size = 20) plt.ylabel('True Positive Rate (sensitivity)', size = 20) plt.title('Receiver operating characteristic', size = 25) plt.legend(loc='lower right') plt.show() ```
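The hand-written metric formulas in `plot_confusion_matrix` above can be sanity-checked on a small made-up confusion matrix. A sketch using sklearn's row/column convention (rows = true label, columns = predicted label), with invented counts:

```python
import numpy as np

# Made-up 2x2 confusion matrix:
# [[TN, FP],
#  [FN, TP]]
cm = np.array([[40, 10],
               [5, 45]])

tn, fp, fn, tp = cm.ravel()

sensitivity = tp / (tp + fn)     # 45 / 50 = 0.9
specificity = tn / (tn + fp)     # 40 / 50 = 0.8
ppv = tp / (tp + fp)             # 45 / 55 ~ 0.818
npv = tn / (tn + fn)             # 40 / 45 ~ 0.889
accuracy = (tp + tn) / cm.sum()  # 85 / 100 = 0.85

print(sensitivity, specificity, ppv, npv, accuracy)
```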
Lambda School Data Science

*Unit 2, Sprint 2, Module 3*

---

<p style="padding: 10px; border: 2px solid red;">
<b>Before you start:</b> Today is the day you should submit the dataset for your Unit 2 Build Week project. You can review the guidelines and make your submission in the Build Week course for your cohort on Canvas.</p>

```
%%capture
import sys

# If you're on Colab:
if 'google.colab' in sys.modules:
    DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Kaggle-Challenge/main/data/'
    !pip install category_encoders==2.*
    !pip install pandas-profiling==2.*

# If you're working locally:
else:
    DATA_PATH = '../data/'
```

# Module Project: Hyperparameter Tuning

This sprint, the module projects will focus on creating and improving a model for the Tanzania Water Pump dataset. Your goal is to create a model to predict whether a water pump is functional, non-functional, or needs repair.

Dataset source: [DrivenData.org](https://www.drivendata.org/competitions/7/pump-it-up-data-mining-the-water-table/).

## Directions

The tasks for this project are as follows:

- **Task 1:** Use the `wrangle` function to import training and test data.
- **Task 2:** Split training data into feature matrix `X` and target vector `y`.
- **Task 3:** Establish the baseline accuracy score for your dataset.
- **Task 4:** Build `clf_dt`.
- **Task 5:** Build `clf_rf`.
- **Task 6:** Evaluate classifiers using k-fold cross-validation.
- **Task 7:** Tune hyperparameters for the best-performing classifier.
- **Task 8:** Print out best score and params for model.
- **Task 9:** Create `submission.csv` and upload to Kaggle.

You should limit yourself to the following libraries for this project:

- `category_encoders`
- `matplotlib`
- `pandas`
- `pandas-profiling`
- `sklearn`

# I.
Wrangle Data ``` import numpy as np import pandas as pd from sklearn.model_selection import train_test_split # Merge train_features.csv & train_labels.csv train = pd.merge(pd.read_csv(DATA_PATH+'waterpumps/train_features.csv'), pd.read_csv(DATA_PATH+'waterpumps/train_labels.csv')).set_index('id') # Read test_features.csv & sample_submission.csv test = pd.read_csv(DATA_PATH+'waterpumps/test_features.csv').set_index('id') sample_submission = pd.read_csv(DATA_PATH+'waterpumps/sample_submission.csv') # Split train into train & val #train, val = train_test_split(train, train_size=0.80, test_size=0.20, #stratify=train['status_group'], random_state=42) def wrangle(X): """Wrangle train, validate, and test sets in the same way""" # Prevent SettingWithCopyWarning X = X.copy() # About 3% of the time, latitude has small values near zero, # outside Tanzania, so we'll treat these values like zero. X['latitude'] = X['latitude'].replace(-2e-08, 0) # When columns have zeros and shouldn't, they are like null values. # So we will replace the zeros with nulls, and impute missing values later. # Also create a "missing indicator" column, because the fact that # values are missing may be a predictive signal. 
    cols_with_zeros = ['longitude', 'latitude', 'construction_year',
                       'gps_height', 'population']
    for col in cols_with_zeros:
        X[col] = X[col].replace(0, np.nan)
        X[col+'_MISSING'] = X[col].isnull()

    # Drop duplicate columns
    duplicates = ['quantity_group', 'payment_type']
    X = X.drop(columns=duplicates)

    # Drop recorded_by (never varies) and id (always varies, random)
    unusable_variance = ['recorded_by']
    X = X.drop(columns=unusable_variance)

    # Convert date_recorded to datetime
    X['date_recorded'] = pd.to_datetime(X['date_recorded'], infer_datetime_format=True)

    # Extract components from date_recorded, then drop the original column
    X['year_recorded'] = X['date_recorded'].dt.year
    X['month_recorded'] = X['date_recorded'].dt.month
    X['day_recorded'] = X['date_recorded'].dt.day
    X = X.drop(columns='date_recorded')

    # Engineer feature: how many years from construction_year to date_recorded
    X['years'] = X['year_recorded'] - X['construction_year']
    X['years_MISSING'] = X['years'].isnull()

    # return the wrangled dataframe
    return X
```

**Task 1:** Use the `wrangle` function above to read `train_features.csv` and `train_labels.csv` into the DataFrame `df`, and `test_features.csv` into the DataFrame `X_test`.

```
df = wrangle(train)
X_test = wrangle(test)
```

# II. Split Data

**Task 2:** Split your DataFrame `df` into a feature matrix `X` and the target vector `y`. You want to predict `'status_group'`.

**Note:** You won't need to do a train-test split because you'll use cross-validation instead.

```
target = 'status_group'
y = df[target]
X = df.drop(columns=target)
```

# III. Establish Baseline

**Task 3:** Since this is a **classification** problem, you should establish a baseline accuracy score. Figure out what is the majority class in `y_train` and what percentage of your training observations it represents.

```
# The baseline accuracy is the relative frequency of the majority class
baseline_acc = y.value_counts(normalize=True).max()
print('Baseline Accuracy Score:', baseline_acc)
```

# IV. Build Models

**Task 4:** Build a `Pipeline` named `clf_dt`.
Your `Pipeline` should include:

- an `OrdinalEncoder` transformer for categorical features.
- a `SimpleImputer` transformer for missing values.
- a `DecisionTreeClassifier` predictor.

**Note:** Do not train `clf_dt`. You'll do that in a subsequent task.

```
from sklearn.pipeline import make_pipeline
import category_encoders as ce
from sklearn.impute import SimpleImputer
from sklearn.tree import DecisionTreeClassifier

clf_dt = make_pipeline(
    ce.OrdinalEncoder(),
    SimpleImputer(strategy='median'),
    DecisionTreeClassifier(random_state=42, max_depth=None),
)

sorted(clf_dt.get_params().keys())
```

**Task 5:** Build a `Pipeline` named `clf_rf`. Your `Pipeline` should include:

- an `OrdinalEncoder` transformer for categorical features.
- a `SimpleImputer` transformer for missing values.
- a `RandomForestClassifier` predictor.

**Note:** Do not train `clf_rf`. You'll do that in a subsequent task.

```
from sklearn.ensemble import RandomForestClassifier

clf_rf = make_pipeline(
    ce.OrdinalEncoder(),
    SimpleImputer(),
    RandomForestClassifier()
)
```

# V. Check Metrics

**Task 6:** Evaluate the performance of both of your classifiers using k-fold cross-validation.

```
from sklearn.model_selection import cross_val_score

k = 5
cv_scores_dt = cross_val_score(clf_dt, X, y, cv=k)
cv_scores_rf = cross_val_score(clf_rf, X, y, cv=k)

print('CV scores DecisionTreeClassifier')
print(cv_scores_dt)
print('Mean CV accuracy score:', cv_scores_dt.mean())
print('STD CV accuracy score:', cv_scores_dt.std())

print('CV scores RandomForestClassifier')
print(cv_scores_rf)
print('Mean CV accuracy score:', cv_scores_rf.mean())
print('STD CV accuracy score:', cv_scores_rf.std())
```

# VI. Tune Model

**Task 7:** Choose the best performing of your two models and tune its hyperparameters using a `RandomizedSearchCV` named `model`. Make sure that you include cross-validation and that `n_iter` is set to at least `25`.
**Note:** If you're not sure which hyperparameters to tune, check the notes from today's guided project and the `sklearn` documentation.

```
from scipy.stats import randint
from sklearn.pipeline import Pipeline
from category_encoders import OrdinalEncoder
from sklearn.impute import SimpleImputer
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import RandomizedSearchCV

pipeline = Pipeline([
    ('encoder', OrdinalEncoder()),
    ('imputer', SimpleImputer(strategy='median')),
    ('classifier', DecisionTreeClassifier(random_state=42))
])

features = df.columns.drop([target])
X_train = df[features]
y_train = df[target]

# Setup the parameters and distributions to sample from: param_dist
param_dist = {"classifier__max_depth": [3, None],
              "imputer__strategy": ['mean', 'median'],
              "classifier__max_features": randint(1, 9),
              "classifier__min_samples_leaf": randint(1, 9),
              "classifier__criterion": ["gini", "entropy"]}

# Instantiate the RandomizedSearchCV object: tree_cv
# (If you're on Colab, decrease the n_iter & cv parameters to save time.)
tree_cv = RandomizedSearchCV(
    pipeline,
    param_distributions=param_dist,
    n_iter=25,
    cv=3,
)

tree_cv.fit(X_train, y_train)
```

**Task 8:** Print out the best score and best params for `model`.

```
best_score = tree_cv.best_score_
best_params = tree_cv.best_params_

print('Best score for `model`:', best_score)
print('Best params for `model`:', best_params)
```

# Communicate Results

**Task 9:** Create a DataFrame `submission` whose index is the same as `X_test` and that has one column `'status_group'` with your predictions. Next, save this DataFrame as a CSV file and upload your submissions to our competition site.

**Note:** Check the `sample_submission.csv` file on the competition website to make sure your submission follows the same formatting.
```
# Since we have found the best parameters, we plug them in, then fit and predict
best_model = Pipeline([
    ('encoder', OrdinalEncoder()),
    ('imputer', SimpleImputer(strategy='mean')),
    ('classifier', DecisionTreeClassifier(criterion='gini',
                                          max_depth=None,
                                          max_features=8,
                                          min_samples_leaf=7,
                                          random_state=42))
])

best_model.fit(X_train, y_train)

print('X_test', X_test.shape)
print('X_train', X_train.shape)
X_test.head()

# Make predictions first
pred = best_model.predict(X_test)

# Create the submission DataFrame
submission = pd.DataFrame(pred, columns=['status_group'], index=X_test.index)
submission.head()

# Save into a csv file for upload
submission.to_csv('Tanzania_water_pred_submission.csv')
```
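Rather than copying `best_params_` into a fresh pipeline by hand (which is easy to get wrong), the fitted search object already holds a pipeline refit on the full training data. Below is a minimal sketch of that pattern on synthetic data -- the names `pipe`, `search`, `X`, and `y` here are illustrative, not from the assignment:

```python
# Sketch: reuse RandomizedSearchCV's refit best_estimator_ instead of
# manually transcribing best_params_ into a new Pipeline.
import numpy as np
import pandas as pd
from scipy.stats import randint
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import RandomizedSearchCV

# Small synthetic dataset standing in for the real feature matrix
rng = np.random.default_rng(42)
X = pd.DataFrame({'a': rng.normal(size=200), 'b': rng.normal(size=200)})
y = (X['a'] + X['b'] > 0).astype(int)

pipe = Pipeline([
    ('imputer', SimpleImputer(strategy='median')),
    ('classifier', DecisionTreeClassifier(random_state=42)),
])

search = RandomizedSearchCV(
    pipe,
    param_distributions={'classifier__min_samples_leaf': randint(1, 9)},
    n_iter=5, cv=3, random_state=42,
)
search.fit(X, y)

# With the default refit=True, best_estimator_ is already retrained on
# all of (X, y) using best_params_ -- no manual copying needed.
best_model = search.best_estimator_
pred = best_model.predict(X)
```

This avoids the transcription step used in the solution cell above, and guarantees the deployed pipeline matches the winning hyperparameters exactly.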
> **Note:** In most sessions you will be solving exercises posed in a Jupyter notebook that looks like this one. Because you are cloning a Github repository that only we can push to, you should **NEVER EDIT** any of the files you pull from Github. Instead, you should either make a new notebook and write your solutions in there, or **make a copy of this notebook and save it somewhere else** on your computer, not inside the `sds` folder that you cloned, so you can write your answers in there. If you edit the notebook you pulled from Github, those edits (possibly your solutions to the exercises) may be overwritten and lost the next time you pull from Github. This is important, so don't hesitate to ask if it is unclear.

# Exercise Set 16: Exploratory Data Analysis

*Afternoon, August 22, 2018*

In this exercise set we will be practicing our skills within Exploratory Data Analysis. Furthermore we will get a little deeper into the mechanics of the K-means clustering algorithm.

## Exercise Section 16.1: Exploratory data analysis with interactive plots

In the following exercise you will practice interactive plotting with the Plotly library.

**Preparation:** Setting up plotly:

* Install plotly version 2.7.0 using conda: `conda install -c conda-forge plotly==2.7.0`
* Install cufflinks, which binds plotly to the dataframe: `conda install -c conda-forge cufflinks-py`
* Create a user on "https://plot.ly/".
* Login.
* Hover over your profile name, and click on settings.
* Get the API key and copy it.
* Run the following command in the notebook

```python
# First time you run it
import plotly
username = 'username' # your.username
api_key = 'apikey' # find it under settings
plotly.tools.set_credentials_file(username=username, api_key=api_key)
```

* Plotly is a sort of social media for graphs, and automatically saves all your figures.
If you want to run it in offline mode, run the following in the notebook:

```python
import plotly.offline as py # import plotly in offline mode
py.init_notebook_mode(connected=True) # initialize the offline mode, with access to the internet or not.
import plotly.tools as tls
tls.embed('https://plot.ly/~cufflinks/8') # embed cufflinks.
# import cufflinks and make it offline
import cufflinks as cf
cf.go_offline() # initialize cufflinks in offline mode
```

> **Ex. 16.1.1:** Reproduce the plots made in the lectures. This means doing a scatter plot where the colors (hue=) are the ratings, and the text comes when hovering over (text=).

```
#[Answer goes here]
```

## Exercise Section 16.2: Implementing the K-means Clustering algorithm

In the following exercise you will implement your own version of the K-means Clustering Algorithm. This will help you practice the basic matrix operations and syntax in python.

> **Ex. 16.2.0:** First we need to load a dataset to practice on. For this task we will use the famous iris clustering dataset: properties of 3 iris flower species. It is already built into many packages, including the plotting library seaborn, and can be loaded using the following command: ```df = sns.load_dataset('iris')```

Plot the data as a scatter matrix to inspect that it indeed has some rather obvious clusters: search for seaborn and scatter matrix on google and figure out the command. Color the markers (the nodes in the graph) by setting ```hue='species'```

```
# [Answer to Ex. 16.2.0]
```

If we weren't biologists and had not already named the three flower species, we might want to find and define the natural groupings using a clustering method. Now you should implement the K-Means Clustering Algorithm.

> **Ex. 16.2.1:** First define a matrix X, by extracting the four columns ('sepal_length','sepal_width','petal_length','petal_width') from the dataframe using the .values method.

```
# [Answer to Ex.
16.2.1]
```

Now we are ready to implement the algorithm.

> **Ex. 16.2.2:** First we write the initialization, our first *Expectation*. This will initialize our first guess of the cluster centroids. This is done by picking K random points from the data. We do this by sampling a list of K numbers from the index, and then extracting the datapoints at this index, with the same syntax as for a dataframe. ***(hint: use the random.sample function and sample from a range(len(data)))***

Check that this works and wrap it in a function named `initialize_clusters`. The function should take the data and a value of K (number of clusters / initial samples) as input parameters, and return the initial cluster centroids.

```
#[Answer Ex. 16.2.2]
```

Now we will write the *Maximization* step.

> **Ex. 16.2.3:** The maximization step is done by assigning each datapoint to the closest cluster center/centroid. This means:
* we need to calculate the distance from each point to each centroid (at first it is just our randomly initialized points). This can be done using sklearn.metrics.pairwise_distances(), taking the two matrices as input.
* Next run an argmin operation on the matrix to obtain the cluster_assignments, using the ```.argmin()``` method built into the matrix object. The argmin gives you the index of the smallest value, not the smallest value itself. Remember to choose the right axis to apply the argmin operation on - i.e. columns or rows to minimize. You do this by setting the axis= argument: ```.argmin(axis=0)``` applies it on the columns and ```.argmin(axis=1)``` applies it on the rows.

Finally wrap these operations into a function `maximize` that takes the cluster centers and the data as input, and returns the cluster assignments.

```
#[Answer Ex. 16.2.3]
```

> **Ex. 16.2.4:** Now we want to update our Expectation of the cluster centroids. We calculate new cluster centroids by applying the builtin ```.mean``` function on the subset of the data that is assigned to each cluster.
First you define a container for the new centroids, using the function `np.zeros(shape)`. The `shape` parameter should be a tuple with the dimensions of the matrix of cluster centroids, i.e. (k, n_columns).

>For each cluster you *(this can be done using a for loop from 0 to k number of clusters)*:
* filter the data with a boolean vector that is True if the cluster assignment of the datapoint is equal to the cluster. The indexing is done in the same way as you would do with a dataframe.
* calculate the mean on the subset of the data. Make sure you are doing it on the right axis: axis=0 is on the columns, and axis=1 is on the rows.
* store it in a container

Each cluster center should be a vector of 4 values [val,val2,val3,val4] so make sure you take the mean on the right axis: ```.mean(axis=?)```.

Finally wrap these operations into a function `update_expectation` that takes `k`, the data `X`, and the `cluster_assignment` as input, and returns the new cluster centers.

```
#[Answer 16.2.4]
```

Lastly we put it all together in the scikit-learn canonical ".fit()" function. This function will use the other functions. The important new things here are setting the number of maximization steps, and checking if the solution has converged, i.e. it is stable with little to no change. The pipeline is the following:

* First we initialize our cluster centroids using the initialization function.
* Then we run the maximization function until convergence. Convergence is checked by comparing whether old_centroids from the previous step are equal to the new centroids.
* Once convergence is reached we have our final cluster centroids, and the final cluster assignment.

> **Ex. 16.2.5:** You should now implement it by doing the following:
* Define a maximum number of iterations `max_iter` to 15.
* Use the `initialize_clusters` function to define a variable `centroids`.
* make a `for` loop from 0 to max_iter where you:
    * copy the current cluster centroids to a new variable: old_centroids.
This will be used for checking convergence after the maximization step.
    * define the `cluster_assignment` by running the `maximize` function
    * define a new (i.e. overwrite) `centroids` variable by running the `update_expectation` function.
    * finally check if old_centroids is equal to the new centroids, using the np.array_equal() function. If they are: break.

Make sure that it works and wrap it in a function `fit_transform()` that takes the data `X` as input, plus the number of clusters `k` and the maximum number of iterations `max_iter`. It should return the cluster assignments and the cluster centroids.

```
#[Answer exercise 16.2.5]
```

> **Ex. 16.2.6:** Run the algorithm and create a new variable `'cluster'` in your dataframe using the cluster_assignments. Count the overlap between the species and each cluster using the `pd.pivot_table()` method, setting `aggfunc=` to the 'count' method.

extra: To avoid a local minimum (due to unlucky random initialization) you should run the algorithm more than once. Write a function that fits the algorithm N times, and evaluates the best solution by calculating the ratio between the average distance between all points within the same cluster and the average distance to points outside one's cluster.

```
#[Answer to Ex. 16.2.6]
```
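For reference, one possible end-to-end solution to Ex. 16.2.2-16.2.5 might look like the sketch below (assuming numpy and scikit-learn are available; the tiny smoke-test data at the bottom is made up, and details such as seeding are one choice among many):

```python
# A possible reference implementation of the K-means exercises above.
# Function names follow the exercise text.
import random
import numpy as np
from sklearn.metrics import pairwise_distances

def initialize_clusters(data, k):
    # Ex 16.2.2: pick k random data points as the initial centroids
    idx = random.sample(range(len(data)), k)
    return data[idx]

def maximize(centroids, data):
    # Ex 16.2.3: assign each point to its closest centroid
    dist = pairwise_distances(data, centroids)   # shape (n_points, k)
    return dist.argmin(axis=1)

def update_expectation(k, data, cluster_assignment):
    # Ex 16.2.4: new centroid = mean of the points assigned to that cluster
    centroids = np.zeros((k, data.shape[1]))
    for c in range(k):
        centroids[c] = data[cluster_assignment == c].mean(axis=0)
    return centroids

def fit_transform(X, k, max_iter=15):
    # Ex 16.2.5: alternate maximize/update until the centroids stop moving
    centroids = initialize_clusters(X, k)
    for _ in range(max_iter):
        old_centroids = centroids.copy()
        cluster_assignment = maximize(centroids, X)
        centroids = update_expectation(k, X, cluster_assignment)
        if np.array_equal(old_centroids, centroids):
            break
    return cluster_assignment, centroids

# Tiny smoke test on two well-separated, made-up clusters
random.seed(1)
centroids0 = np.array([[0.0, 0.0], [5.0, 5.0]])
X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.0, 5.1]])
print(maximize(centroids0, X).tolist())   # → [0, 0, 1, 1]
assignment, centers = fit_transform(X, k=2)
```

On the iris data you would call `fit_transform(df[cols].values, k=3)` and compare the resulting labels against `species` as in Ex. 16.2.6.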
# MASH analysis pipeline with data-driven prior matrices This notebook is a pipeline written in SoS to run `flashr + mashr` for multivariate analysis described in Urbut et al (2019). This pipeline was last applied to analyze GTEx V8 eQTL data, although it can be used as is to perform similar multivariate analysis for other association studies. *Version: 2021.02.28 by Gao Wang and Yuxin Zou* ``` %revisions -s ``` ## Data overview `fastqtl` summary statistics data were obtained from dbGaP (data on CRI at UChicago Genetic Medicine). It has 49 tissues. [more description to come] ## Preparing MASH input Using an established workflow (which takes 33hrs to run on a cluster system as configured by `midway2.yml`; see inside `fastqtl_to_mash.ipynb` for a note on computing environment), ``` INPUT_DIR=/project/compbio/GTEx_dbGaP/GTEx_Analysis_2017-06-05_v8/eqtl/GTEx_Analysis_v8_eQTL_all_associations JOB_OPT="-c midway2.yml -q midway2" sos run workflows/fastqtl_to_mash.ipynb --data-list $INPUT_DIR/FastQTLSumStats.list --common-suffix ".allpairs.txt" $JOB_OPT ``` As a result of command above I obtained the "mashable" data-set in the same format [as described here](https://stephenslab.github.io/gtexresults/gtexdata.html). ### Some data integrity check 1. Check if I get the same number of groups (genes) at the end of HDF5 data conversion: ``` $ zcat Whole_Blood.allpairs.txt.gz | cut -f1 | sort -u | wc -l 20316 $ h5ls Whole_Blood.allpairs.txt.h5 | wc -l 20315 ``` The results agreed on Whole Blood sample (the original data has a header thus one line more than the H5 version). We should be good (since the pipeline reported success for all other files). ### Data & job summary The command above took 33 hours on UChicago RCC `midway2`. ``` [MW] cat FastQTLSumStats.log 39832 out of 39832 groups merged! ``` So we have a total of 39832 genes (union of 49 tissues). ``` [MW] cat FastQTLSumStats.portable.log 15636 out of 39832 groups extracted! 
```

We have 15636 groups without missing data in any tissue. This will be used to train the MASH model. The "mashable" data file is `FastQTLSumStats.mash.rds`, a 124Mb serialized R file.

## Multivariate adaptive shrinkage (MASH) analysis of eQTL data

Below is a "blackbox" implementation of the `mashr` eQTL workflow -- blackbox in the sense that you can run this pipeline as an executable, without thinking too much about it, if you see your problem fits our GTEx analysis scheme. However when reading it as a notebook it is a good source of information to help you develop your own `mashr` analysis procedures.

Since the submission to biorxiv of Urbut 2017 we have improved the implementation of the MASH algorithm and made a new R package, [`mashr`](https://github.com/stephenslab/mashr). Major improvements compared to Urbut 2019 are:

1. Faster computation of likelihood and posterior quantities via matrix algebra tricks and a C++ implementation.
2. Faster computation of the MASH mixture via convex optimization.
3. Replace `SFA` with `FLASH`, a new sparse factor analysis method to generate prior covariance candidates.
4. Improved estimate of the residual variance $\hat{V}$.

At this point, the input data have already been converted from the original eQTL summary statistics to a format convenient for analysis in MASH, as a result of running the data conversion pipeline in `fastqtl_to_mash.ipynb`. Example command:

```bash
JOB_OPT="-j 8"
#JOB_OPT="-c midway2.yml -q midway2"
sos run workflows/mashr_flashr_workflow.ipynb mash $JOB_OPT # --data ... --cwd ... --vhat ...
```

**FIXME: add comments on submitting jobs to HPC. Here we use the UChicago RCC cluster but other users can similarly configure their computing system to run the pipeline on HPC.**

### Global parameter settings

```
[global]
parameter: cwd = path('./mashr_flashr_workflow_output')
# Input summary statistics data
parameter: data = path("fastqtl_to_mash_output/FastQTLSumStats.mash.rds")
# Prefix of output files.
# If not specified, it will derive it from data.
# If it is specified, for example, `--output-prefix AnalysisResults`
# It will save output files as `{cwd}/AnalysisResults*`.
parameter: output_prefix = ''
# Exchangeable effect (EE) or exchangeable z-scores (EZ)
parameter: effect_model = 'EZ'
# Identifier of $\hat{V}$ estimate file
# Options are "identity", "simple", "mle", "vhat_corshrink_xcondition", "vhat_simple_specific"
parameter: vhat = 'mle'
parameter: mixture_components = ['flash', 'flash_nonneg', 'pca']
data = data.absolute()
cwd = cwd.absolute()
if len(output_prefix) == 0:
    output_prefix = f"{data:bn}"
prior_data = file_target(f"{cwd:a}/{output_prefix}.{effect_model}.prior.rds")
vhat_data = file_target(f"{cwd:a}/{output_prefix}.{effect_model}.V_{vhat}.rds")
mash_model = file_target(f"{cwd:a}/{output_prefix}.{effect_model}.V_{vhat}.mash_model.rds")

def sort_uniq(seq):
    seen = set()
    return [x for x in seq if not (x in seen or seen.add(x))]
```

### Command interface

```
sos run mashr_flashr_workflow.ipynb -h
```

## Factor analyses

```
# Perform FLASH analysis with default factors (time estimate: 20min)
[flash]
input: data
output: f"{cwd}/{output_prefix}.flash.rds"
task: trunk_workers = 1, walltime = '2h', trunk_size = 1, mem = '8G', cores = 2, tags = f'{_output:bn}'
R: expand = "${ }", stderr = f'{_output:n}.stderr', stdout = f'{_output:n}.stdout'
    dat = readRDS(${_input:r})
    dat = mashr::mash_set_data(dat$strong.b, Shat=dat$strong.s, alpha=${1 if effect_model == 'EZ' else 0}, zero_Bhat_Shat_reset = 1E3)
    res = mashr::cov_flash(dat, factors="default", remove_singleton=${"TRUE" if "canonical" in mixture_components else "FALSE"}, output_model="${_output:n}.model.rds")
    saveRDS(res, ${_output:r})

# Perform FLASH analysis with non-negative factor constraint (time estimate: 20min)
[flash_nonneg]
input: data
output: f"{cwd}/{output_prefix}.flash_nonneg.rds"
task: trunk_workers = 1, walltime = '2h', trunk_size = 1, mem = '8G', cores = 2, tags = f'{_output:bn}'
R: expand = "${ }", stderr = f'{_output:n}.stderr', stdout = f'{_output:n}.stdout' dat = readRDS(${_input:r}) dat = mashr::mash_set_data(dat$strong.b, Shat=dat$strong.s, alpha=${1 if effect_model == 'EZ' else 0}, zero_Bhat_Shat_reset = 1E3) res = mashr::cov_flash(dat, factors="nonneg", remove_singleton=${"TRUE" if "canonical" in mixture_components else "FALSE"}, output_model="${_output:n}.model.rds") saveRDS(res, ${_output:r}) [pca] # Number of components in PCA analysis for prior # set to 3 as in mash paper parameter: npc = 3 input: data output: f"{cwd}/{output_prefix}.pca.rds" task: trunk_workers = 1, walltime = '1h', trunk_size = 1, mem = '4G', cores = 2, tags = f'{_output:bn}' R: expand = "${ }", stderr = f'{_output:n}.stderr', stdout = f'{_output:n}.stdout' dat = readRDS(${_input:r}) dat = mashr::mash_set_data(dat$strong.b, Shat=dat$strong.s, alpha=${1 if effect_model == 'EZ' else 0}, zero_Bhat_Shat_reset = 1E3) res = mashr::cov_pca(dat, ${npc}) saveRDS(res, ${_output:r}) ``` ### Estimate residual variance FIXME: add some narratives here explaining what we do in each method. 
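As a stopgap for the missing narrative: conceptually, the "simple" estimator correlates z-scores across conditions using only tests with no strong signal in any condition (the same max-|z| filtering appears in the per-gene `simple_V` code later in this notebook). Below is a rough numpy illustration of the idea with simulated z-scores -- an illustration only, not the `mashr::estimate_null_correlation_simple` implementation the pipeline actually calls:

```python
# Conceptual sketch of the "simple" V estimate: correlate z-scores across
# conditions using only null-ish tests (max |z| below a threshold).
import numpy as np

def simple_null_correlation(z, z_thresh=2.0):
    """z: (n_tests, n_conditions) matrix of z-scores."""
    nullish = np.abs(z).max(axis=1) < z_thresh   # rows with no strong signal
    if nullish.sum() <= 1:
        return np.eye(z.shape[1])                # not enough null data
    return np.corrcoef(z[nullish], rowvar=False)

# Simulated example: 4 conditions, mostly null tests plus a few strong signals
rng = np.random.default_rng(0)
z = rng.normal(size=(5000, 4))
z[:100] += 6.0                                   # 100 strongly non-null tests
V = simple_null_correlation(z)
print(V.shape)                                   # → (4, 4)
```

With independent null z-scores, as here, the estimate is close to the identity matrix; in real data the off-diagonal entries capture correlations among conditions induced by sample overlap.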
``` # V estimate: "identity" method [vhat_identity] input: data output: f'{vhat_data:nn}.V_identity.rds' R: expand = "${ }", workdir = cwd, stderr = f"{_output:n}.stderr", stdout = f"{_output:n}.stdout" dat = readRDS(${_input:r}) saveRDS(diag(ncol(dat$random.b)), ${_output:r}) # V estimate: "simple" method (using null z-scores) [vhat_simple] depends: R_library("mashr") input: data output: f'{vhat_data:nn}.V_simple.rds' R: expand = "${ }", workdir = cwd, stderr = f"{_output:n}.stderr", stdout = f"{_output:n}.stdout" library(mashr) dat = readRDS(${_input:r}) vhat = estimate_null_correlation_simple(mash_set_data(dat$random.b, Shat=dat$random.s, alpha=${1 if effect_model == 'EZ' else 0}, zero_Bhat_Shat_reset = 1E3)) saveRDS(vhat, ${_output:r}) # V estimate: "mle" method [vhat_mle] # number of samples to use parameter: n_subset = 6000 # maximum number of iterations parameter: max_iter = 6 depends: R_library("mashr") input: data, prior_data output: f'{vhat_data:nn}.V_mle.rds' task: trunk_workers = 1, walltime = '36h', trunk_size = 1, mem = '4G', cores = 1, tags = f'{_output:bn}' R: expand = "${ }", workdir = cwd, stderr = f"{_output:n}.stderr", stdout = f"{_output:n}.stdout" library(mashr) dat = readRDS(${_input[0]:r}) # choose random subset set.seed(1) random.subset = sample(1:nrow(dat$random.b), min(${n_subset}, nrow(dat$random.b))) random.subset = mash_set_data(dat$random.b[random.subset,], dat$random.s[random.subset,], alpha=${1 if effect_model == 'EZ' else 0}, zero_Bhat_Shat_reset = 1E3) # estimate V mle vhat = estimate_null_correlation(random.subset, readRDS(${_input[1]:r}), max_iter = ${max_iter}) saveRDS(vhat, ${_output:r}) # Estimate each V separately via corshrink [vhat_corshrink_xcondition_1] # Utility script parameter: util_script = path('/project/mstephens/gtex/scripts/SumstatQuery.R') # List of genes to analyze parameter: gene_list = path() fail_if(not gene_list.is_file(), msg = 'Please specify valid path for --gene-list') fail_if(not util_script.is_file() 
and len(str(util_script)), msg = 'Please specify valid path for --util-script')
genes = sort_uniq([x.strip().strip('"') for x in open(f'{gene_list:a}').readlines() if not x.strip().startswith('#')])

depends: R_library("CorShrink")
input: data, for_each = 'genes'
output: f'{vhat_data:nn}/{vhat_data:bnn}_V_corshrink_{_genes}.rds'
task: trunk_workers = 1, walltime = '3m', trunk_size = 500, mem = '3G', cores = 1, tags = f'{_output:bn}'
R: expand = "${ }", workdir = cwd, stderr = f"{_output:n}.stderr", stdout = f"{_output:n}.stdout"
    source(${util_script:r})
    CorShrink_sum = function(gene, database, z_thresh = 2){
      print(gene)
      dat <- GetSS(gene, database)
      z = dat$"z-score"
      max_absz = apply(abs(z), 1, max)
      nullish = which(max_absz < z_thresh)
      # if (length(nullish) < ncol(z)) {
      #   stop("not enough null data to estimate null correlation")
      # }
      if (length(nullish) <= 1){
        mat = diag(ncol(z))
      } else {
        nullish_z = z[nullish, ]
        mat = as.matrix(CorShrink::CorShrinkData(nullish_z, ash.control = list(mixcompdist = "halfuniform"))$cor)
      }
      return(mat)
    }
    V = CorShrink_sum("${_genes}", ${data:r})
    saveRDS(V, ${_output:r})

# Estimate each V separately via "simple" method
[vhat_simple_specific_1]
# Utility script
parameter: util_script = path('/project/mstephens/gtex/scripts/SumstatQuery.R')
# List of genes to analyze
parameter: gene_list = path()
fail_if(not gene_list.is_file(), msg = 'Please specify valid path for --gene-list')
fail_if(not util_script.is_file() and len(str(util_script)), msg = 'Please specify valid path for --util-script')
genes = sort_uniq([x.strip().strip('"') for x in open(f'{gene_list:a}').readlines() if not x.strip().startswith('#')])

depends: R_library("Matrix")
input: data, for_each = 'genes'
output: f'{vhat_data:nn}/{vhat_data:bnn}_V_simple_{_genes}.rds'
task: trunk_workers = 1, walltime = '1m', trunk_size = 500, mem = '3G', cores = 1, tags = f'{_output:bn}'
R: expand = "${ }", workdir = cwd, stderr = f"{_output:n}.stderr", stdout = f"{_output:n}.stdout"
source(${util_script:r}) simple_V = function(gene, database, z_thresh = 2){ print(gene) dat <- GetSS(gene, database) z = dat$"z-score" max_absz = apply(abs(z), 1, max) nullish = which(max_absz < z_thresh) # if (length(nullish) < ncol(z)) { # stop("not enough null data to estimate null correlation") # } if (length(nullish) <= 1){ mat = diag(ncol(z)) } else { nullish_z = z[nullish, ] mat = as.matrix(Matrix::nearPD(as.matrix(cov(nullish_z)), conv.tol=1e-06, doSym = TRUE, corr=TRUE)$mat) } return(mat) } V = simple_V("${_genes}", ${data:r}) saveRDS(V, ${_output:r}) # Consolidate Vhat into one file [vhat_corshrink_xcondition_2, vhat_simple_specific_2] depends: R_library("parallel") # List of genes to analyze parameter: gene_list = path() fail_if(not gene_list.is_file(), msg = 'Please specify valid path for --gene-list') genes = paths([x.strip().strip('"') for x in open(f'{gene_list:a}').readlines() if not x.strip().startswith('#')]) input: group_by = 'all' output: f"{vhat_data:nn}.V_{step_name.rsplit('_',1)[0]}.rds" task: trunk_workers = 1, walltime = '1h', trunk_size = 1, mem = '4G', cores = 1, tags = f'{_output:bn}' R: expand = "${ }", workdir = cwd, stderr = f"{_output:n}.stderr", stdout = f"{_output:n}.stdout" library(parallel) files = sapply(c(${genes:r,}), function(g) paste0(c(${_input[0]:adr}), '/', g, '.rds'), USE.NAMES=FALSE) V = mclapply(files, function(i){ readRDS(i) }, mc.cores = 1) R = dim(V[[1]])[1] L = length(V) V.array = array(as.numeric(unlist(V)), dim=c(R, R, L)) saveRDS(V.array, ${_output:ar}) ``` ### Compute MASH priors Main reference are our `mashr` vignettes [this for mashr eQTL outline](https://stephenslab.github.io/mashr/articles/eQTL_outline.html) and [this for using FLASH prior](https://github.com/stephenslab/mashr/blob/master/vignettes/flash_mash.Rmd). The outcome of this workflow should be found under `./mashr_flashr_workflow_output` folder (can be configured). File names have pattern `*.mash_model_*.rds`. 
They can be used to compute posteriors for an input list of gene-SNP pairs (see next section).

```
# Compute data-driven / canonical prior matrices (time estimate: 2h ~ 12h for ~30 49 by 49 matrix mixture)
[prior]
depends: R_library("mashr")
# if vhat method is `mle` it should use V_simple to analyze the data to provide a rough estimate, then later be refined via `mle`.
input: [data, vhat_data if vhat != "mle" else f'{vhat_data:nn}.V_simple.rds'] + [f"{cwd}/{output_prefix}.{m}.rds" for m in mixture_components]
output: prior_data
task: trunk_workers = 1, walltime = '36h', trunk_size = 1, mem = '4G', cores = 4, tags = f'{_output:bn}'
R: expand = "${ }", workdir = cwd, stderr = f"{_output:n}.stderr", stdout = f"{_output:n}.stdout"
    library(mashr)
    rds_files = c(${_input:r,})
    dat = readRDS(rds_files[1])
    vhat = readRDS(rds_files[2])
    mash_data = mash_set_data(dat$strong.b, Shat=dat$strong.s, V=vhat, alpha=${1 if effect_model == 'EZ' else 0}, zero_Bhat_Shat_reset = 1E3)
    # setup prior
    U = list(XtX = t(mash_data$Bhat) %*% mash_data$Bhat / nrow(mash_data$Bhat))
    for (f in rds_files[3:length(rds_files)]) U = c(U, readRDS(f))
    U.ed = cov_ed(mash_data, U, logfile=${_output:nr})
    # Canonical matrices
    U.can = cov_canonical(mash_data)
    saveRDS(c(U.ed, U.can), ${_output:r})
```

## `mashr` mixture model fitting

```
# Fit MASH mixture model (time estimate: <15min for 70K by 49 matrix)
[mash_1]
depends: R_library("mashr")
input: data, vhat_data, prior_data
output: mash_model
task: trunk_workers = 1, walltime = '36h', trunk_size = 1, mem = '4G', cores = 1, tags = f'{_output:bn}'
R: expand = "${ }", workdir = cwd, stderr = f"{_output:n}.stderr", stdout = f"{_output:n}.stdout"
    library(mashr)
    dat = readRDS(${_input[0]:r})
    vhat = readRDS(${_input[1]:r})
    U = readRDS(${_input[2]:r})
    mash_data = mash_set_data(dat$random.b, Shat=dat$random.s, alpha=${1 if effect_model == 'EZ' else 0}, V=vhat, zero_Bhat_Shat_reset = 1E3)
    saveRDS(mash(mash_data, Ulist = U, outputlevel = 1), ${_output:r})
```

### Optional
posterior computations Additionally provide posterior for the "strong" set in MASH input data. ``` # Compute posterior for the "strong" set of data as in Urbut et al 2017. # This is optional because most of the time we want to apply the # MASH model learned on much larger data-set. [mash_2] # default to True; use --no-compute-posterior to disable this parameter: compute_posterior = True # input Vhat file for the batch of posterior data skip_if(not compute_posterior) depends: R_library("mashr") input: data, vhat_data, mash_model output: f"{cwd:a}/{output_prefix}.{effect_model}.posterior.rds" task: trunk_workers = 1, walltime = '36h', trunk_size = 1, mem = '4G', cores = 1, tags = f'{_output:bn}' R: expand = "${ }", workdir = cwd, stderr = f"{_output:n}.stderr", stdout = f"{_output:n}.stdout" library(mashr) dat = readRDS(${_input[0]:r}) vhat = readRDS(${_input[1]:r}) mash_data = mash_set_data(dat$strong.b, Shat=dat$strong.s, alpha=${1 if effect_model == 'EZ' else 0}, V=vhat, zero_Bhat_Shat_reset = 1E3) mash_model = readRDS(${_input[2]:ar}) saveRDS(mash_compute_posterior_matrices(mash_model, mash_data), ${_output:r}) ``` ## Compute MASH posteriors In the GTEx V6 paper we assumed one eQTL per gene and applied the model learned above to those SNPs. Under that assumption, the input data for posterior calculation will be the `dat$strong.*` matrices. It is a fairly straightforward procedure as shown in [this vignette](https://stephenslab.github.io/mashr/articles/eQTL_outline.html). But it is often more interesting to apply MASH to given list of eQTLs, eg, from those from fine-mapping results. In GTEx V8 analysis we obtain such gene-SNP pairs from DAP-G fine-mapping analysis. See [this notebook](https://stephenslab.github.io/gtex-eqtls/analysis/Independent_eQTL_Results.html) for how the input data is prepared. The workflow below takes a number of input chunks (each chunk is a list of matrices `dat$Bhat` and `dat$Shat`) and computes posterior for each chunk. 
It is therefore suited for running in parallel posterior computation for all gene-SNP pairs, if input data chunks are provided. ``` JOB_OPT="-c midway2.yml -q midway2" DATA_DIR=/project/compbio/GTEx_eQTL/independent_eQTL sos run workflows/mashr_flashr_workflow.ipynb posterior \ $JOB_OPT \ --posterior-input $DATA_DIR/DAPG_pip_gt_0.01-AllTissues/DAPG_pip_gt_0.01-AllTissues.*.rds \ $DATA_DIR/ConditionalAnalysis_AllTissues/ConditionalAnalysis_AllTissues.*.rds ``` ``` # Apply posterior calculations [posterior] parameter: mash_model = path(f"{cwd:a}/{output_prefix}.{effect_model}.V_{vhat}.mash_model.rds") parameter: posterior_input = paths() parameter: posterior_vhat_files = paths() # eg, if data is saved in R list as data$strong, then # when you specify `--data-table-name strong` it will read the data as # readRDS('{_input:r}')$strong parameter: data_table_name = '' parameter: bhat_table_name = 'Bhat' parameter: shat_table_name = 'Shat' mash_model = f"{mash_model:a}" skip_if(len(posterior_input) == 0, msg = "No posterior input data to compute on. 
Please specify it using --posterior-input.")
fail_if(len(posterior_vhat_files) > 1 and len(posterior_vhat_files) != len(posterior_input), msg = "length of --posterior-input and --posterior-vhat-files do not agree.")
for p in posterior_input:
    fail_if(not p.is_file(), msg = f'Cannot find posterior input file ``{p}``')

depends: R_library("mashr"), mash_model
input: posterior_input, group_by = 1
output: f"{_input:n}.posterior.rds"
task: trunk_workers = 1, walltime = '20h', trunk_size = 1, mem = '20G', cores = 1, tags = f'{_output:bn}'
R: expand = "${ }", workdir = cwd, stderr = f"{_output:n}.stderr", stdout = f"{_output:n}.stdout"
    library(mashr)
    data = readRDS(${_input:r})${('$' + data_table_name) if data_table_name else ''}
    vhat = readRDS("${vhat_data if len(posterior_vhat_files) == 0 else posterior_vhat_files[_index]}")
    mash_data = mash_set_data(data$${bhat_table_name}, Shat=data$${shat_table_name}, alpha=${1 if effect_model == 'EZ' else 0}, V=vhat, zero_Bhat_Shat_reset = 1E3)
    saveRDS(mash_compute_posterior_matrices(readRDS(${mash_model:r}), mash_data), ${_output:r})
```

### Posterior results

1. The `[posterior]` step should produce a number of serialized R objects `*.batch_*.posterior.rds` (which can be loaded into R via `readRDS()`) -- I chopped the data into batches to take advantage of computing on multiple cluster nodes. It should be self-explanatory, but please let me know otherwise.
2. Other posterior related files are:
    1. `*.batch_*.yaml`: gene-SNP pairs of interest, identified elsewhere (eg. fine-mapping analysis).
    2. The corresponding univariate analysis summary statistics for gene-SNPs from `*.batch_*.yaml` are extracted and saved to `*.batch_*.rds`, creating input to the `[posterior]` step.
    3. Note the `*.batch_*.stdout` file documents some SNPs found in fine-mapping results but not found in the original `fastqtl` output.
# Visualisation in Python - Matplotlib

Here is the sales dataset for an online retailer. The data is collected over a period of three years, 2012 to 2015, and contains information on the sales made by the company. The products captured belong to three categories:

- Furniture
- Office Supplies
- Technology

Also, the company caters to five different markets:

- USCA
- LATAM
- ASPAC
- EUR
- AFR

We will be using the 'pyplot' package of the Matplotlib library.

```
# importing numpy and the pyplot package of matplotlib
import numpy as np
import matplotlib.pyplot as plt

# Creating an array with product categories
product_cat = np.array(['Furniture','Technology','Office Supplies'])

# Creating an array with the sales amount
# Furniture: 4110451.90
# Technology: 4744557.50
# Office Supplies: 3787492.52
sales_amt = np.array([4110451.90,4744557.50,3787492.52])
print(sales_amt)
```

## Bar Graph: Plotting sales across each product category

```
# plotting the bar graph with product categories on the x-axis and sales amount on the y-axis
plt.bar(product_cat,sales_amt)

# necessary command to display the created graph
plt.show()
```

### Adding title and labeling axes in the graph

```
# plotting the bar graph with product categories on the x-axis and sales amount on the y-axis
plt.bar(product_cat, sales_amt)

# adding title to the graph
plt.title("Sales Across Product Categories", fontdict={'fontsize': 20, 'fontweight' : 5, 'color' : 'Green'})

# labeling axes
plt.xlabel("Product Category", fontdict={'fontsize': 10, 'fontweight' : 3, 'color' : 'Blue'})
plt.ylabel("Sales", fontdict={'fontsize': 10, 'fontweight' : 3, 'color' : 'Blue'})

# necessary command to display the created graph
plt.show()
```

#### Modifying the bars in the graph

```
# changing color of the bars in the bar graph
# plotting the bar graph with product categories on the x-axis and sales amount on the y-axis
plt.bar(product_cat, sales_amt, color='cyan', edgecolor='orange')

# adding title to the graph
plt.title("Sales Across Product Categories", 
fontdict={'fontsize': 20, 'fontweight' : 5, 'color' : 'Green'})

# labeling axes
plt.xlabel("Product Category", fontdict={'fontsize': 10, 'fontweight' : 3, 'color' : 'Blue'})
plt.ylabel("Sales", fontdict={'fontsize': 10, 'fontweight' : 3, 'color' : 'Blue'})

# necessary command to display the created graph
plt.show()
```

#### Adjusting tick values and the value labels

```
# plotting the bar graph with product categories on the x-axis and sales amount on the y-axis
plt.bar(product_cat, sales_amt, color='cyan', edgecolor='orange')

# adding title to the graph
plt.title("Sales Across Product Categories", fontdict={'fontsize': 20, 'fontweight' : 5, 'color' : 'Green'})

# labeling axes
plt.xlabel("Product Category", fontdict={'fontsize': 10, 'fontweight' : 3, 'color' : 'Blue'})
plt.ylabel("Sales", fontdict={'fontsize': 10, 'fontweight' : 3, 'color' : 'Blue'})

# Modifying the ticks to show information in lakhs
tick_values = np.arange(0, 8000000, 1000000)
tick_labels = ["0L", "10L", "20L", "30L", "40L", "50L", "60L", "70L"]
plt.yticks(tick_values, tick_labels)
plt.show()
```

## Scatter Chart: Plotting Sales vs Profits

Scatter plots are used when you want to show the relationship between two facts or measures. You now have the sales and profit data of different product categories across different countries. Let's try to build scatter plots to visualise the data at hand.
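As an aside, the lakh tick labels typed out by hand in the bar-chart cell above can equally be derived from the tick positions themselves (1 lakh = 100,000); a short sketch:

```python
import numpy as np

# tick positions every 10 lakh, matching the bar chart above
tick_values = np.arange(0, 8000000, 1000000)

# derive the "0L", "10L", ... labels instead of typing them by hand
tick_labels = [f"{v // 100000}L" for v in tick_values]
print(tick_labels)
```

Generated labels stay in sync if you later change the tick spacing, which is harder to guarantee with a hand-written list.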
``` # Sales and Profit data for different product categories across different countries sales = np.array ([1013.14, 8298.48, 875.51, 22320.83, 9251.6, 4516.86, 585.16, 836154.03, 216748.48, 174.2, 27557.79, 563.25, 558.11, 37117.45, 357.36, 2206.96, 709.5, 35064.03, 7230.78, 235.33, 148.32, 3973.27, 11737.8, 7104.63, 83.67, 5569.83, 92.34, 107104.36, 1045.62, 9072.51, 42485.82, 5093.82, 14846.16, 943.92, 684.36, 15012.03, 38196.18, 2448.75, 28881.96, 13912.14, 4507.2, 4931.06, 12805.05, 67912.73, 4492.2, 1740.01, 458.04, 16904.32, 21744.53, 10417.26, 18665.33, 2808.42, 54195.57, 67332.5, 24390.95, 1790.43, 2234.19, 9917.5, 7408.14, 36051.99, 1352.22, 1907.7, 245722.14, 2154.66, 1078.21, 3391.65, 28262.73, 5177.04, 66.51, 2031.34, 1683.72, 1970.01, 6515.82, 1055.31, 1029.48, 5303.4, 1850.96, 1159.41, 39989.13, 1183.87, 96365.09, 8356.68, 7010.24, 23119.23, 46109.28, 146071.84, 242259.03, 9058.95, 1313.67, 31525.06, 2019.94, 703.04, 1868.79, 700.5, 55512.02, 243.5, 2113.18, 11781.81, 262189.49, 3487.29, 513.12, 312050.42, 5000.7, 121.02, 1302.78, 169.92, 124.29, 57366.05, 29445.93, 4614.3, 45009.98, 309.24, 3353.67, 41348.34, 2280.27, 61193.7, 1466.79, 12419.94, 445.12, 25188.65, 263514.92, 12351.23, 1152.3, 26298.81, 9900.78, 5355.57, 2325.66, 6282.81, 127707.92, 1283.1, 3560.15, 3723.84, 13715.01, 4887.9, 3396.89, 33348.42, 625.02, 1665.48, 32486.97, 340212.44, 20516.22, 8651.16, 13590.06, 2440.35, 6462.57, 1770.13, 7527.18, 1433.65, 423.3, 21601.72, 10035.72, 2378.49, 3062.38, 719469.32, 179366.79, 345.17, 30345.78, 300.71, 940.81, 36468.08, 1352.85, 1755.72, 2391.96, 19.98, 19792.8, 15633.88, 7.45, 521.67, 1118.24, 7231.68, 12399.32, 204.36, 23.64, 5916.48, 313.98, 108181.5, 9212.42, 27476.91, 1761.33, 289.5, 780.3, 15098.46, 813.27, 47.55, 8323.23, 22634.64, 1831.02, 28808.1, 10539.78, 588.99, 939.78, 7212.41, 15683.01, 41369.09, 5581.6, 403.36, 375.26, 12276.66, 15393.56, 76.65, 5884.38, 18005.49, 3094.71, 43642.78, 35554.83, 22977.11, 1026.33, 665.28, 9712.49, 
6038.52, 30756.51, 3758.25, 4769.49, 2463.3, 160153.16, 967.11, 2311.74, 1414.83, 12764.91, 4191.24, 110.76, 637.34, 1195.12, 2271.63, 804.12, 196.17, 167.67, 131.77, 2842.05, 9969.12, 1784.35, 3098.49, 25005.54, 1300.1, 118697.39, 7920.54, 6471.78, 31707.57, 37636.47, 118777.77, 131170.76, 3980.88, 3339.39, 26563.9, 4038.73, 124.8, 196.65, 2797.77, 29832.76, 184.84, 79.08, 8047.83, 205313.25, 1726.98, 899.73, 224.06, 304763.54, 6101.31, 729.6, 896.07, 17.82, 26.22, 46429.78, 31167.27, 2455.94, 37714.3, 1506.93, 3812.78, 25223.34, 3795.96, 437.31, 41278.86, 2091.81, 6296.61, 468.82, 23629.64, 160435.53, 9725.46, 1317.03, 1225.26, 30034.08, 7893.45, 2036.07, 215.52, 3912.42, 82783.43, 253.14, 966.96, 3381.26, 164.07, 1984.23, 75.12, 25168.17, 3295.53, 991.12, 10772.1, 44.16, 1311.45, 35352.57, 245783.54, 20.49, 13471.06, 8171.16, 14075.67, 611.82, 3925.56, 981.84, 10209.84, 156.56, 243.06, 21287.52, 7300.51, 434.52, 6065.0, 741577.51, 132461.03, 224.75, 28953.6, 757.98, 528.15, 34922.41, 50.58, 2918.48, 1044.96, 22195.13, 3951.48, 6977.64, 219.12, 5908.38, 10987.46, 4852.26, 445.5, 71860.82, 14840.45, 24712.08, 1329.9, 1180.44, 85.02, 10341.63, 690.48, 1939.53, 20010.51, 914.31, 25223.82, 12804.66, 2124.24, 602.82, 2961.66, 15740.79, 74138.35, 7759.39, 447.0, 2094.84, 22358.95, 21734.53, 4223.73, 17679.53, 1019.85, 51848.72, 69133.3, 30146.9, 705.48, 14508.88, 7489.38, 20269.44, 246.12, 668.13, 768.93, 215677.35, 899.16, 2578.2, 4107.99, 20334.57, 366.84, 3249.27, 98.88, 3497.88, 3853.05, 786.75, 1573.68, 458.36, 1234.77, 1094.22, 2300.61, 970.14, 3068.25, 35792.85, 4277.82, 71080.28, 3016.86, 3157.49, 15888.0, 30000.36, 140037.89, 216056.25, 1214.22, 1493.94, 32036.69, 4979.66, 106.02, 46257.68, 1033.3, 937.32, 3442.62, 160633.45, 213.15, 338.88, 242117.13, 9602.34, 2280.99, 73759.08, 23526.12, 6272.74, 43416.3, 576.78, 1471.61, 20844.9, 3497.7, 56382.38, 902.58, 6235.26, 48.91, 32684.24, 276611.58, 13370.38, 10595.28, 4555.14, 10084.38, 267.72, 1012.95, 4630.5, 
149433.51, 364.32, 349.2, 4647.56, 504.0, 10343.52, 5202.66, 2786.26, 34135.95, 2654.58, 24699.51, 339239.87, 136.26, 23524.51, 8731.68, 8425.86, 835.95, 11285.19]) profit = np.array([-1213.46, 1814.13, -1485.7, -2286.73, -2872.12, 946.8, 198.48, 145454.95, 49476.1, -245.56, 5980.77, -790.47, -895.72, -34572.08, 117.9, 561.96, 152.85, 1426.05, 1873.17, -251.03, 68.22, 635.11, 3722.4, -3168.63, 27.6, 952.11, 7.38, 20931.13, 186.36, -5395.38, 9738.45, 525.27, 3351.99, 120.78, 266.88, 3795.21, 8615.97, 609.54, 7710.57, 2930.43, 1047.96, -2733.32, 2873.73, -5957.89, -909.6, 163.41, -376.02, -6322.68, -10425.86, 2340.36, -28430.53, 756.12, 12633.33, 7382.54, -14327.69, 436.44, 683.85, -694.91, 1960.56, 10925.82, 334.08, 425.49, 53580.2, 1024.56, 110.93, 632.22, 8492.58, 1418.88, 19.26, -2567.57, 346.26, 601.86, 1318.68, 304.05, 428.37, 1416.24, -2878.18, 283.41, 12611.04, 261.95, -648.43, 1112.88, -2640.29, 6154.32, 11558.79, 15291.4, 56092.65, 1515.39, 342.03, -10865.66, -902.8, 351.52, 364.17, 87.72, 11565.66, 75.4, 289.33, 3129.63, 50795.72, 783.72, 215.46, 29196.89, 1147.26, 53.22, 286.56, 73.02, 42.24, 13914.85, 5754.54, 998.04, -1476.04, 86.58, -1636.35, 10511.91, 647.34, 13768.62, 338.67, 3095.67, 173.84, 5632.93, 64845.11, 3297.33, 338.61, 7246.62, 2255.52, 1326.36, 827.64, 1100.58, 9051.36, 412.23, 1063.91, 940.59, 3891.84, 1599.51, 1129.57, 8792.64, 6.24, 592.77, 8792.85, 47727.5, -4597.68, 2242.56, 3546.45, 321.87, 1536.72, -2463.29, 1906.08, -1916.99, 186.24, 3002.05, -3250.98, 554.7, 830.64, 122612.79, 33894.21, -559.03, 7528.05, -477.67, -1660.25, -33550.96, 481.68, 425.08, 450.3, 9.57, -3025.29, 2924.62, -11.84, 87.36, 26.51, 1727.19, -6131.18, 59.16, 3.06, 1693.47, 74.67, 24729.21, -4867.94, 6705.18, 410.79, 70.74, 101.7, 3264.3, 137.01, 6.18, 2100.21, 5295.24, 520.29, 7205.52, 2602.65, 116.67, 224.91, -5153.93, 3882.69, -6535.24, -1254.1, 84.56, -186.38, -3167.2, -7935.59, 37.02, 1908.06, -27087.84, 829.32, 8727.44, 2011.47, -11629.64, 234.96, 53.1, 
1248.14, 1511.07, 7374.24, 1193.28, 1090.23, 553.86, 38483.86, 255.81, 528.54, 326.07, 3924.36, 1018.92, 36.48, 113.24, -1770.05, 527.64, 224.49, 79.53, 64.77, 38.08, 868.08, 2265.06, -2643.62, 833.73, 5100.03, 326.44, 18158.84, 1682.01, -3290.22, 8283.33, 7926.18, 1694.41, 30522.92, 1214.07, 900.6, -6860.8, -865.91, 26.16, 47.22, 863.52, 7061.26, 73.92, 33.12, 1801.23, 38815.44, 431.13, 216.81, 16.5, 53688.2, 1210.32, 236.94, 210.84, 3.18, 2.22, 10265.64, 7212.3, 343.56, 3898.28, 568.11, -1867.85, 5782.38, 697.29, -192.06, 10179.02, 616.32, 1090.47, 165.84, 6138.28, 39723.06, 2085.14, 90.0, 129.93, 7957.53, 2131.86, 562.44, 99.12, 1298.37, 7580.33, 113.73, 139.71, 456.0, 21.24, 292.68, 30.34, 5817.15, 1060.89, 252.9, 3060.61, 6.6, 219.09, 8735.82, 31481.09, 2.85, -3124.72, 2195.94, 3464.7, 141.12, 1125.69, -1752.03, 3281.52, -303.77, 114.18, -2412.63, -5099.61, 146.64, 660.22, 18329.28, 28529.84, -232.27, 7435.41, -1157.94, -746.73, -30324.2, 2.52, 1313.44, 213.72, -5708.95, 930.18, 1663.02, 31.59, 1787.88, -8219.56, 973.92, 4.32, 8729.78, -2529.52, 5361.06, 69.21, 519.3, 13.56, 2236.77, 213.96, 367.98, 5074.2, 206.61, 7620.36, 2093.19, 164.07, 230.01, -815.82, 4226.7, -3635.09, -3344.17, 167.26, 143.79, -8233.57, -4085.21, 919.35, -25232.35, 234.33, 12040.68, 7206.28, -15112.76, 206.04, -2662.49, 2346.81, 4461.36, 93.48, 82.11, 147.87, 10389.53, 395.58, 474.74, 1333.26, 3913.02, 117.36, 858.78, 6.9, -4628.49, 1170.6, 218.55, 539.58, -211.0, 438.87, 317.16, 310.8, -1578.09, 706.56, 6617.4, 803.84, 2475.26, 764.34, -1461.88, 3805.56, 7371.27, -1377.13, 42435.03, 472.47, 315.48, -11755.91, -2418.6, 6.36, 9317.76, 326.88, -287.31, 637.68, 17579.17, 70.83, 47.4, 26143.92, 1548.15, 612.78, 17842.76, 6735.39, 1206.5, -10035.74, 149.4, -777.85, 5566.29, 748.92, 14941.58, 348.93, 1944.06, -5.51, 7026.84, 46114.92, 2361.86, 2613.24, 1277.37, 2587.74, 103.08, 311.43, 1250.58, 13055.21, 18.21, 108.24, 709.44, 115.92, 1863.6, 1873.86, 817.32, 7577.64, 1019.19, 6813.03, 
24698.84, 66.24, -10971.39, 2056.47, 2095.35, 246.33, 2797.89]) ``` ### Plotting a Scatterplot ``` # plotting scatterplot plt.scatter(sales,profit) # necessary command to display graph plt.show() plt.scatter(profit,sales) plt.show() # Sales and Profit data for different product categories across different countries sales = np.array ([1013.14, 8298.48, 875.51, 22320.83, 9251.6, 4516.86, 585.16, 836154.03, 216748.48, 174.2, 27557.79, 563.25, 558.11, 37117.45, 357.36, 2206.96, 709.5, 35064.03, 7230.78, 235.33, 148.32, 3973.27, 11737.8, 7104.63, 83.67, 5569.83, 92.34, 107104.36, 1045.62, 9072.51, 42485.82, 5093.82, 14846.16, 943.92, 684.36, 15012.03, 38196.18, 2448.75, 28881.96, 13912.14, 4507.2, 4931.06, 12805.05, 67912.73, 4492.2, 1740.01, 458.04, 16904.32, 21744.53, 10417.26, 18665.33, 2808.42, 54195.57, 67332.5, 24390.95, 1790.43, 2234.19, 9917.5, 7408.14, 36051.99, 1352.22, 1907.7, 245722.14, 2154.66, 1078.21, 3391.65, 28262.73, 5177.04, 66.51, 2031.34, 1683.72, 1970.01, 6515.82, 1055.31, 1029.48, 5303.4, 1850.96, 1159.41, 39989.13, 1183.87, 96365.09, 8356.68, 7010.24, 23119.23, 46109.28, 146071.84, 242259.03, 9058.95, 1313.67, 31525.06, 2019.94, 703.04, 1868.79, 700.5, 55512.02, 243.5, 2113.18, 11781.81, 262189.49, 3487.29, 513.12, 312050.42, 5000.7, 121.02, 1302.78, 169.92, 124.29, 57366.05, 29445.93, 4614.3, 45009.98, 309.24, 3353.67, 41348.34, 2280.27, 61193.7, 1466.79, 12419.94, 445.12, 25188.65, 263514.92, 12351.23, 1152.3, 26298.81, 9900.78, 5355.57, 2325.66, 6282.81, 127707.92, 1283.1, 3560.15, 3723.84, 13715.01, 4887.9, 3396.89, 33348.42, 625.02, 1665.48, 32486.97, 340212.44, 20516.22, 8651.16, 13590.06, 2440.35, 6462.57, 1770.13, 7527.18, 1433.65, 423.3, 21601.72, 10035.72, 2378.49, 3062.38, 719469.32, 179366.79, 345.17, 30345.78, 300.71, 940.81, 36468.08, 1352.85, 1755.72, 2391.96, 19.98, 19792.8, 15633.88, 7.45, 521.67, 1118.24, 7231.68, 12399.32, 204.36, 23.64, 5916.48, 313.98, 108181.5, 9212.42, 27476.91, 1761.33, 289.5, 780.3, 15098.46, 813.27, 
47.55, 8323.23, 22634.64, 1831.02, 28808.1, 10539.78, 588.99, 939.78, 7212.41, 15683.01, 41369.09, 5581.6, 403.36, 375.26, 12276.66, 15393.56, 76.65, 5884.38, 18005.49, 3094.71, 43642.78, 35554.83, 22977.11, 1026.33, 665.28, 9712.49, 6038.52, 30756.51, 3758.25, 4769.49, 2463.3, 160153.16, 967.11, 2311.74, 1414.83, 12764.91, 4191.24, 110.76, 637.34, 1195.12, 2271.63, 804.12, 196.17, 167.67, 131.77, 2842.05, 9969.12, 1784.35, 3098.49, 25005.54, 1300.1, 118697.39, 7920.54, 6471.78, 31707.57, 37636.47, 118777.77, 131170.76, 3980.88, 3339.39, 26563.9, 4038.73, 124.8, 196.65, 2797.77, 29832.76, 184.84, 79.08, 8047.83, 205313.25, 1726.98, 899.73, 224.06, 304763.54, 6101.31, 729.6, 896.07, 17.82, 26.22, 46429.78, 31167.27, 2455.94, 37714.3, 1506.93, 3812.78, 25223.34, 3795.96, 437.31, 41278.86, 2091.81, 6296.61, 468.82, 23629.64, 160435.53, 9725.46, 1317.03, 1225.26, 30034.08, 7893.45, 2036.07, 215.52, 3912.42, 82783.43, 253.14, 966.96, 3381.26, 164.07, 1984.23, 75.12, 25168.17, 3295.53, 991.12, 10772.1, 44.16, 1311.45, 35352.57, 245783.54, 20.49, 13471.06, 8171.16, 14075.67, 611.82, 3925.56, 981.84, 10209.84, 156.56, 243.06, 21287.52, 7300.51, 434.52, 6065.0, 741577.51, 132461.03, 224.75, 28953.6, 757.98, 528.15, 34922.41, 50.58, 2918.48, 1044.96, 22195.13, 3951.48, 6977.64, 219.12, 5908.38, 10987.46, 4852.26, 445.5, 71860.82, 14840.45, 24712.08, 1329.9, 1180.44, 85.02, 10341.63, 690.48, 1939.53, 20010.51, 914.31, 25223.82, 12804.66, 2124.24, 602.82, 2961.66, 15740.79, 74138.35, 7759.39, 447.0, 2094.84, 22358.95, 21734.53, 4223.73, 17679.53, 1019.85, 51848.72, 69133.3, 30146.9, 705.48, 14508.88, 7489.38, 20269.44, 246.12, 668.13, 768.93, 215677.35, 899.16, 2578.2, 4107.99, 20334.57, 366.84, 3249.27, 98.88, 3497.88, 3853.05, 786.75, 1573.68, 458.36, 1234.77, 1094.22, 2300.61, 970.14, 3068.25, 35792.85, 4277.82, 71080.28, 3016.86, 3157.49, 15888.0, 30000.36, 140037.89, 216056.25, 1214.22, 1493.94, 32036.69, 4979.66, 106.02, 46257.68, 1033.3, 937.32, 3442.62, 160633.45, 
213.15, 338.88, 242117.13, 9602.34, 2280.99, 73759.08, 23526.12, 6272.74, 43416.3, 576.78, 1471.61, 20844.9, 3497.7, 56382.38, 902.58, 6235.26, 48.91, 32684.24, 276611.58, 13370.38, 10595.28, 4555.14, 10084.38, 267.72, 1012.95, 4630.5, 149433.51, 364.32, 349.2, 4647.56, 504.0, 10343.52, 5202.66, 2786.26, 34135.95, 2654.58, 24699.51, 339239.87, 136.26, 23524.51, 8731.68, 8425.86, 835.95, 11285.19]) profit = np.array([-1213.46, 1814.13, -1485.7, -2286.73, -2872.12, 946.8, 198.48, 145454.95, 49476.1, -245.56, 5980.77, -790.47, -895.72, -34572.08, 117.9, 561.96, 152.85, 1426.05, 1873.17, -251.03, 68.22, 635.11, 3722.4, -3168.63, 27.6, 952.11, 7.38, 20931.13, 186.36, -5395.38, 9738.45, 525.27, 3351.99, 120.78, 266.88, 3795.21, 8615.97, 609.54, 7710.57, 2930.43, 1047.96, -2733.32, 2873.73, -5957.89, -909.6, 163.41, -376.02, -6322.68, -10425.86, 2340.36, -28430.53, 756.12, 12633.33, 7382.54, -14327.69, 436.44, 683.85, -694.91, 1960.56, 10925.82, 334.08, 425.49, 53580.2, 1024.56, 110.93, 632.22, 8492.58, 1418.88, 19.26, -2567.57, 346.26, 601.86, 1318.68, 304.05, 428.37, 1416.24, -2878.18, 283.41, 12611.04, 261.95, -648.43, 1112.88, -2640.29, 6154.32, 11558.79, 15291.4, 56092.65, 1515.39, 342.03, -10865.66, -902.8, 351.52, 364.17, 87.72, 11565.66, 75.4, 289.33, 3129.63, 50795.72, 783.72, 215.46, 29196.89, 1147.26, 53.22, 286.56, 73.02, 42.24, 13914.85, 5754.54, 998.04, -1476.04, 86.58, -1636.35, 10511.91, 647.34, 13768.62, 338.67, 3095.67, 173.84, 5632.93, 64845.11, 3297.33, 338.61, 7246.62, 2255.52, 1326.36, 827.64, 1100.58, 9051.36, 412.23, 1063.91, 940.59, 3891.84, 1599.51, 1129.57, 8792.64, 6.24, 592.77, 8792.85, 47727.5, -4597.68, 2242.56, 3546.45, 321.87, 1536.72, -2463.29, 1906.08, -1916.99, 186.24, 3002.05, -3250.98, 554.7, 830.64, 122612.79, 33894.21, -559.03, 7528.05, -477.67, -1660.25, -33550.96, 481.68, 425.08, 450.3, 9.57, -3025.29, 2924.62, -11.84, 87.36, 26.51, 1727.19, -6131.18, 59.16, 3.06, 1693.47, 74.67, 24729.21, -4867.94, 6705.18, 410.79, 70.74, 101.7, 
3264.3, 137.01, 6.18, 2100.21, 5295.24, 520.29, 7205.52, 2602.65, 116.67, 224.91, -5153.93, 3882.69, -6535.24, -1254.1, 84.56, -186.38, -3167.2, -7935.59, 37.02, 1908.06, -27087.84, 829.32, 8727.44, 2011.47, -11629.64, 234.96, 53.1, 1248.14, 1511.07, 7374.24, 1193.28, 1090.23, 553.86, 38483.86, 255.81, 528.54, 326.07, 3924.36, 1018.92, 36.48, 113.24, -1770.05, 527.64, 224.49, 79.53, 64.77, 38.08, 868.08, 2265.06, -2643.62, 833.73, 5100.03, 326.44, 18158.84, 1682.01, -3290.22, 8283.33, 7926.18, 1694.41, 30522.92, 1214.07, 900.6, -6860.8, -865.91, 26.16, 47.22, 863.52, 7061.26, 73.92, 33.12, 1801.23, 38815.44, 431.13, 216.81, 16.5, 53688.2, 1210.32, 236.94, 210.84, 3.18, 2.22, 10265.64, 7212.3, 343.56, 3898.28, 568.11, -1867.85, 5782.38, 697.29, -192.06, 10179.02, 616.32, 1090.47, 165.84, 6138.28, 39723.06, 2085.14, 90.0, 129.93, 7957.53, 2131.86, 562.44, 99.12, 1298.37, 7580.33, 113.73, 139.71, 456.0, 21.24, 292.68, 30.34, 5817.15, 1060.89, 252.9, 3060.61, 6.6, 219.09, 8735.82, 31481.09, 2.85, -3124.72, 2195.94, 3464.7, 141.12, 1125.69, -1752.03, 3281.52, -303.77, 114.18, -2412.63, -5099.61, 146.64, 660.22, 18329.28, 28529.84, -232.27, 7435.41, -1157.94, -746.73, -30324.2, 2.52, 1313.44, 213.72, -5708.95, 930.18, 1663.02, 31.59, 1787.88, -8219.56, 973.92, 4.32, 8729.78, -2529.52, 5361.06, 69.21, 519.3, 13.56, 2236.77, 213.96, 367.98, 5074.2, 206.61, 7620.36, 2093.19, 164.07, 230.01, -815.82, 4226.7, -3635.09, -3344.17, 167.26, 143.79, -8233.57, -4085.21, 919.35, -25232.35, 234.33, 12040.68, 7206.28, -15112.76, 206.04, -2662.49, 2346.81, 4461.36, 93.48, 82.11, 147.87, 10389.53, 395.58, 474.74, 1333.26, 3913.02, 117.36, 858.78, 6.9, -4628.49, 1170.6, 218.55, 539.58, -211.0, 438.87, 317.16, 310.8, -1578.09, 706.56, 6617.4, 803.84, 2475.26, 764.34, -1461.88, 3805.56, 7371.27, -1377.13, 42435.03, 472.47, 315.48, -11755.91, -2418.6, 6.36, 9317.76, 326.88, -287.31, 637.68, 17579.17, 70.83, 47.4, 26143.92, 1548.15, 612.78, 17842.76, 6735.39, 1206.5, -10035.74, 149.4, 
-777.85, 5566.29, 748.92, 14941.58, 348.93, 1944.06, -5.51, 7026.84, 46114.92, 2361.86, 2613.24, 1277.37, 2587.74, 103.08, 311.43, 1250.58, 13055.21, 18.21, 108.24, 709.44, 115.92, 1863.6, 1873.86, 817.32, 7577.64, 1019.19, 6813.03, 24698.84, 66.24, -10971.39, 2056.47, 2095.35, 246.33, 2797.89]) # corresponding category and country value to the above arrays product_category = np.array(['Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 
'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 
'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 
'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture']) country = np.array(['Zimbabwe', 'Zambia', 'Yemen', 'Vietnam', 'Venezuela', 'Uzbekistan', 'Uruguay', 'United States', 'United Kingdom', 'United Arab Emirates', 'Ukraine', 'Uganda', 'Turkmenistan', 'Turkey', 'Tunisia', 'Trinidad and Tobago', 'Togo', 'Thailand', 'Tanzania', 'Tajikistan', 'Taiwan', 'Syria', 'Switzerland', 'Sweden', 'Swaziland', 'Sudan', 'Sri Lanka', 'Spain', 'South Sudan', 'South Korea', 'South Africa', 'Somalia', 'Singapore', 'Sierra Leone', 'Serbia', 'Senegal', 'Saudi Arabia', 'Rwanda', 'Russia', 'Romania', 'Qatar', 'Portugal', 'Poland', 'Philippines', 'Peru', 'Paraguay', 'Papua New Guinea', 'Panama', 'Pakistan', 'Norway', 
'Nigeria', 'Niger', 'Nicaragua', 'New Zealand', 'Netherlands', 'Nepal', 'Namibia', 'Myanmar (Burma)', 'Mozambique', 'Morocco', 'Mongolia', 'Moldova', 'Mexico', 'Mauritania', 'Martinique', 'Mali', 'Malaysia', 'Madagascar', 'Luxembourg', 'Lithuania', 'Libya', 'Liberia', 'Lesotho', 'Lebanon', 'Kyrgyzstan', 'Kenya', 'Kazakhstan', 'Jordan', 'Japan', 'Jamaica', 'Italy', 'Israel', 'Ireland', 'Iraq', 'Iran', 'Indonesia', 'India', 'Hungary', 'Hong Kong', 'Honduras', 'Haiti', 'Guyana', 'Guinea-Bissau', 'Guinea', 'Guatemala', 'Guadeloupe', 'Greece', 'Ghana', 'Germany', 'Georgia', 'Gabon', 'France', 'Finland', 'Ethiopia', 'Estonia', 'Eritrea', 'Equatorial Guinea', 'El Salvador', 'Egypt', 'Ecuador', 'Dominican Republic', 'Djibouti', 'Denmark', 'Democratic Republic of the Congo', 'Czech Republic', 'Cuba', 'Croatia', "Cote d'Ivoire", 'Costa Rica', 'Colombia', 'China', 'Chile', 'Central African Republic', 'Canada', 'Cameroon', 'Cambodia', 'Burkina Faso', 'Bulgaria', 'Brazil', 'Bosnia and Herzegovina', 'Bolivia', 'Benin', 'Belgium', 'Belarus', 'Barbados', 'Bangladesh', 'Bahrain', 'Azerbaijan', 'Austria', 'Australia', 'Argentina', 'Angola', 'Algeria', 'Albania', 'Afghanistan', 'Zimbabwe', 'Zambia', 'Yemen', 'Western Sahara', 'Vietnam', 'Venezuela', 'Uzbekistan', 'Uruguay', 'United States', 'United Kingdom', 'United Arab Emirates', 'Ukraine', 'Uganda', 'Turkmenistan', 'Turkey', 'Tunisia', 'Trinidad and Tobago', 'Togo', 'The Gambia', 'Thailand', 'Tanzania', 'Tajikistan', 'Taiwan', 'Syria', 'Switzerland', 'Sweden', 'Swaziland', 'Suriname', 'Sudan', 'Sri Lanka', 'Spain', 'South Korea', 'South Africa', 'Somalia', 'Slovenia', 'Slovakia', 'Singapore', 'Sierra Leone', 'Serbia', 'Senegal', 'Saudi Arabia', 'Rwanda', 'Russia', 'Romania', 'Republic of the Congo', 'Qatar', 'Portugal', 'Poland', 'Philippines', 'Peru', 'Paraguay', 'Papua New Guinea', 'Panama', 'Pakistan', 'Oman', 'Norway', 'Nigeria', 'Niger', 'Nicaragua', 'New Zealand', 'Netherlands', 'Nepal', 'Namibia', 'Myanmar (Burma)', 
'Mozambique', 'Morocco', 'Montenegro', 'Mongolia', 'Moldova', 'Mexico', 'Mauritania', 'Martinique', 'Mali', 'Malaysia', 'Madagascar', 'Macedonia', 'Luxembourg', 'Lithuania', 'Libya', 'Liberia', 'Lesotho', 'Lebanon', 'Laos', 'Kyrgyzstan', 'Kenya', 'Kazakhstan', 'Jordan', 'Japan', 'Jamaica', 'Italy', 'Israel', 'Ireland', 'Iraq', 'Iran', 'Indonesia', 'India', 'Hungary', 'Hong Kong', 'Honduras', 'Haiti', 'Guyana', 'Guinea-Bissau', 'Guinea', 'Guatemala', 'Guadeloupe', 'Greece', 'Ghana', 'Germany', 'Georgia', 'Gabon', 'French Guiana', 'France', 'Finland', 'Ethiopia', 'Estonia', 'Eritrea', 'Equatorial Guinea', 'El Salvador', 'Egypt', 'Ecuador', 'Dominican Republic', 'Djibouti', 'Denmark', 'Democratic Republic of the Congo', 'Czech Republic', 'Cyprus', 'Cuba', 'Croatia', "Cote d'Ivoire", 'Costa Rica', 'Colombia', 'China', 'Chile', 'Chad', 'Central African Republic', 'Canada', 'Cameroon', 'Cambodia', 'Burkina Faso', 'Bulgaria', 'Brazil', 'Botswana', 'Bosnia and Herzegovina', 'Bolivia', 'Bhutan', 'Benin', 'Belize', 'Belgium', 'Belarus', 'Barbados', 'Bangladesh', 'Bahrain', 'Azerbaijan', 'Austria', 'Australia', 'Armenia', 'Argentina', 'Angola', 'Algeria', 'Albania', 'Afghanistan', 'Zimbabwe', 'Zambia', 'Yemen', 'Western Sahara', 'Vietnam', 'Venezuela', 'Uzbekistan', 'Uruguay', 'United States', 'United Kingdom', 'United Arab Emirates', 'Ukraine', 'Uganda', 'Turkmenistan', 'Turkey', 'Tunisia', 'Trinidad and Tobago', 'Togo', 'Thailand', 'Tanzania', 'Taiwan', 'Syria', 'Switzerland', 'Sweden', 'Sudan', 'Sri Lanka', 'Spain', 'South Korea', 'South Africa', 'Somalia', 'Slovenia', 'Slovakia', 'Singapore', 'Sierra Leone', 'Senegal', 'Saudi Arabia', 'Rwanda', 'Russia', 'Romania', 'Republic of the Congo', 'Qatar', 'Portugal', 'Poland', 'Philippines', 'Peru', 'Paraguay', 'Papua New Guinea', 'Panama', 'Pakistan', 'Norway', 'Nigeria', 'Niger', 'Nicaragua', 'New Zealand', 'Netherlands', 'Nepal', 'Myanmar (Burma)', 'Mozambique', 'Morocco', 'Montenegro', 'Mongolia', 'Moldova', 'Mexico', 
'Mauritania', 'Martinique', 'Mali', 'Malaysia', 'Malawi', 'Madagascar', 'Macedonia', 'Lithuania',
'Libya', 'Liberia', 'Lebanon', 'Laos', 'Kyrgyzstan', 'Kuwait', 'Kenya', 'Kazakhstan', 'Jordan',
'Japan', 'Jamaica', 'Italy', 'Israel', 'Ireland', 'Iraq', 'Iran', 'Indonesia', 'India', 'Hungary',
'Hong Kong', 'Honduras', 'Haiti', 'Guyana', 'Guatemala', 'Guadeloupe', 'Greece', 'Ghana',
'Germany', 'Georgia', 'Gabon', 'France', 'Finland', 'Estonia', 'El Salvador', 'Egypt', 'Ecuador',
'Dominican Republic', 'Djibouti', 'Denmark', 'Democratic Republic of the Congo', 'Czech Republic',
'Cuba', 'Croatia', "Cote d'Ivoire", 'Costa Rica', 'Colombia', 'China', 'Chile', 'Canada',
'Cameroon', 'Cambodia', 'Burundi', 'Burkina Faso', 'Bulgaria', 'Brazil', 'Botswana',
'Bosnia and Herzegovina', 'Bolivia', 'Benin', 'Belgium', 'Belarus', 'Barbados', 'Bangladesh',
'Azerbaijan', 'Austria', 'Australia', 'Armenia', 'Argentina', 'Angola', 'Algeria', 'Albania',
'Afghanistan'])
```

### Adding title and labeling axes

```
# plotting scatter chart
plt.scatter(profit,sales)

# Adding and formatting title
plt.title("Sales Across Profit in various Countries for different Product Categories", fontdict={'fontsize': 20, 'fontweight' : 5, 'color' : 'Green'})

# Labeling Axes
plt.xlabel("Profit", fontdict={'fontsize': 10, 'fontweight' : 3, 'color' : 'Blue'})
plt.ylabel("Sales", fontdict={'fontsize': 10, 'fontweight' : 3, 'color' : 'Blue'})

plt.show()
```

### Representing product categories using different colors

```
product_categories = np.array(["Technology", "Furniture", "Office Supplies"])
colors = np.array(["cyan", "green", "yellow"])

# plotting the scatterplot with color coding the points belonging to different categories
for color,category in zip(colors,product_categories):
    sales_cat = sales[product_category==category]
    profit_cat = profit[product_category==category]
    plt.scatter(profit_cat,sales_cat,c=color,label=category)

# Adding and formatting title
plt.title("Sales Across Profit in various Countries 
for different Product Categories", fontdict={'fontsize': 20, 'fontweight' : 5, 'color' : 'Green'}) # Labeling Axes plt.xlabel("Profit", fontdict={'fontsize': 10, 'fontweight' : 3, 'color' : 'Blue'}) plt.ylabel("Sales", fontdict={'fontsize': 10, 'fontweight' : 3, 'color' : 'Blue'}) # Adding legend for interpretation of points plt.legend() plt.show() ``` ### Adding labels to points belonging to specific country ``` # plotting the scatterplot with color coding the points belonging to different categories for color,category in zip(colors,product_categories): sales_cat = sales[product_category==category] profit_cat = profit[product_category==category] plt.scatter(profit_cat,sales_cat,c=color,label=category) # labeling points that belong to country "India" for xy in zip(profit[country == "India"],sales[country == "India"]): plt.annotate(text="India",xy = xy) # Adding and formatting title plt.title("Sales Across Profit in various Counteries for different Product Categories", fontdict={'fontsize': 20, 'fontweight' : 5, 'color' : 'Green'}) # Labeling Axes plt.xlabel("Profit", fontdict={'fontsize': 10, 'fontweight' : 3, 'color' : 'Blue'}) plt.ylabel("Sales", fontdict={'fontsize': 10, 'fontweight' : 3, 'color' : 'Blue'}) # Adding legend for interpretation of points plt.legend() plt.show() ``` # Line Chart: Trend of sales over the 12 months Can be used to present the trend with time variable on the x-axis In some cases, can be used as an alternative to scatterplot to understand the relationship between 2 variables ``` # Sales data across months months = np.array(['January', 'February', 'March', 'April', 'May', 'June', 'July', 'August', 'September', 'October', 'November', 'December']) sales = np.array([241268.56, 184837.36, 263100.77, 242771.86, 288401.05, 401814.06, 258705.68, 456619.94, 481157.24, 422766.63, 555279.03, 503143.69]) # plotting a line chart plt.plot(months,sales,'bx') # adding title to the chart plt.title("Sales across months") # labeling the axes 
plt.xlabel("Months")
plt.ylabel("Sales")

# rotating the tick values of x-axis
plt.xticks(rotation=90)

# displaying the created plot
plt.show()

# a quick aside: 50 random integers plotted as red dots
y = np.random.randint(1, 100, 50)
plt.plot(y, 'ro')
plt.show()

# plotting a line chart
plt.plot(months, sales, 'b', marker='x')

# adding title to the chart
plt.title("Sales across months")

# labeling the axes
plt.xlabel("Months")
plt.ylabel("Sales")

# rotating the tick values of x-axis
plt.xticks(rotation=90)

# displaying the created plot
plt.show()
```

# Histogram: Distribution of employees across different age groups

- Useful in checking the distribution of a data range
- Builds a bar corresponding to each bin in the data range, showing its frequency

```
# data corresponding to age of the employees in the company
age = np.array([23, 22, 24, 24, 23, 23, 22, 23, 24, 24, 24, 22, 24, 23, 24, 23, 22, 24, 23, 23, 22, 23, 23, 24, 23, 24, 23, 22, 24, 22, 23, 24, 23, 24, 22, 22, 24, 23, 22, 24, 24, 24, 23, 24, 24, 22, 23, 23, 24, 22, 22, 24, 22, 23, 22, 23, 22, 23, 23, 23, 23, 22, 22, 23, 23, 23, 23, 23, 23, 22, 29, 29, 27, 28, 28, 29, 28, 27, 26, 27, 28, 29, 26, 28, 26, 28, 27, 27, 28, 28, 26, 29, 28, 28, 26, 27, 26, 28, 27, 29, 29, 27, 27, 27, 28, 29, 29, 29, 27, 28, 28, 26, 28, 27, 26, 26, 27, 26, 29, 28, 28, 28, 29, 26, 26, 26, 29, 26, 28, 26, 28, 28, 27, 27, 27, 29, 27, 28, 27, 26, 29, 29, 27, 29, 26, 29, 26, 29, 29, 27, 28, 28, 27, 29, 26, 28, 26, 28, 27, 29, 29, 29, 27, 27, 29, 29, 26, 26, 26, 27, 28, 27, 28, 28, 29, 27, 26, 27, 29, 28, 29, 27, 27, 26, 26, 26, 26, 29, 28, 28, 33, 34, 33, 33, 34, 33, 31, 32, 33, 33, 32, 34, 32, 31, 33, 34, 31, 33, 34, 33, 34, 33, 32, 33, 31, 33, 32, 32, 31, 34, 33, 31, 34, 32, 32, 31, 32, 31, 32, 34, 33, 33, 31, 32, 32, 31, 32, 33, 34, 32, 34, 31, 32, 31, 33, 32, 34, 31, 32, 34, 31, 31, 34, 34, 34, 32, 34, 33, 33, 32, 32, 33, 31, 33, 31, 32, 34, 32, 32, 31, 34, 32, 32, 31, 32, 34, 32, 33, 31, 34, 31, 31, 32, 31, 33, 34, 34, 34, 31, 33, 34, 33, 34, 31, 34, 34, 33, 31, 32, 33, 31, 31, 33, 32, 34,
32, 34, 31, 31, 34, 32, 32, 31, 31, 32, 31, 31, 32, 33, 32, 31, 32, 32, 31, 31, 34, 31, 34, 33, 32, 31, 34, 34, 31, 34, 31, 32, 34, 33, 33, 34, 32, 33, 31, 31, 33, 32, 31, 31, 31, 37, 38, 37, 37, 36, 37, 36, 39, 37, 39, 37, 39, 38, 36, 37, 36, 38, 38, 36, 39, 39, 37, 39, 36, 37, 36, 36, 37, 38, 36, 38, 39, 39, 36, 38, 37, 39, 38, 39, 39, 36, 38, 37, 38, 39, 36, 37, 36, 36, 38, 38, 38, 39, 36, 37, 37, 39, 37, 37, 36, 36, 39, 37, 36, 36, 36, 39, 37, 37, 37, 37, 39, 36, 39, 37, 38, 37, 36, 36, 39, 39, 36, 36, 39, 39, 39, 37, 38, 36, 36, 37, 38, 37, 38, 37, 39, 39, 37, 39, 36, 36, 39, 39, 39, 36, 38, 39, 39, 39, 39, 38, 36, 37, 37, 38, 38, 39, 36, 37, 37, 39, 36, 37, 37, 36, 36, 36, 38, 39, 38, 36, 38, 36, 39, 38, 36, 36, 37, 39, 39, 37, 37, 37, 36, 37, 36, 36, 38, 38, 39, 36, 39, 36, 37, 37, 39, 39, 36, 38, 39, 39, 39, 37, 37, 37, 37, 39, 36, 37, 39, 38, 39, 36, 37, 38, 39, 38, 36, 37, 38, 42, 43, 44, 43, 41, 42, 41, 41, 42, 41, 43, 44, 43, 44, 44, 42, 43, 44, 43, 41, 44, 42, 43, 42, 42, 44, 43, 42, 41, 42, 41, 41, 41, 44, 44, 44, 41, 43, 42, 42, 43, 43, 44, 44, 44, 44, 44, 41, 42, 44, 43, 42, 42, 43, 44, 44, 44, 44, 41, 42, 43, 43, 43, 41, 43, 41, 42, 41, 42, 42, 41, 42, 44, 41, 43, 42, 41, 43, 41, 44, 44, 43, 43, 43, 41, 41, 41, 42, 43, 42, 48, 48, 48, 49, 47, 45, 46, 49, 46, 49, 49, 46, 47, 45, 47, 45, 47, 49, 47, 46, 46, 47, 45, 49, 49, 49, 45, 46, 47, 46, 45, 46, 45, 48, 48, 45, 49, 46, 48, 49, 47, 48, 45, 48, 46, 45, 48, 45, 46, 46, 48, 47, 46, 45, 48, 46, 49, 47, 46, 49, 48, 46, 47, 47, 46, 48, 47, 46, 46, 49, 50, 54, 53, 55, 51, 50, 51, 54, 54, 53, 53, 51, 51, 50, 54, 51, 51, 55, 50, 51, 50, 50, 53, 52, 54, 53, 55, 52, 52, 50, 52, 55, 54, 50, 50, 55, 52, 54, 52, 54]) # Checking the number of employees len(age) # plotting a histogram plt.hist(age) plt.show() ``` ### Plotting a histogram with fixed number of bins ``` # plotting a histogram plt.hist(age,bins=5,color='green',edgecolor='black') plt.show() list_1 = [48.49, 67.54, 57.47, 68.17, 51.18, 68.31, 50.33, 
66.7, 45.62, 43.59, 53.64, 70.08, 47.69, 61.27, 44.14, 51.62, 48.72, 65.11] weights = np.array(list_1) plt.hist(weights,bins = 4,range=[40,80],edgecolor='white') plt.show() ``` # Box plot: Understanding the spread of sales across different countries Useful in understanding the spread of the data Divides the data based on the percentile values Helps identify the presence of outliers ``` # Creating arrays with sales in different countries across each category: 'Furniture', 'Technology' and 'Office Supplies' sales_technology = np.array ([1013.14, 8298.48, 875.51, 22320.83, 9251.6, 4516.86, 585.16, 174.2, 27557.79, 563.25, 558.11, 37117.45, 357.36, 2206.96, 709.5, 35064.03, 7230.78, 235.33, 148.32, 3973.27, 11737.8, 7104.63, 83.67, 5569.83, 92.34, 1045.62, 9072.51, 42485.82, 5093.82, 14846.16, 943.92, 684.36, 15012.03, 38196.18, 2448.75, 28881.96, 13912.14, 4507.2, 4931.06, 12805.05, 67912.73, 4492.2, 1740.01, 458.04, 16904.32, 21744.53, 10417.26, 18665.33, 2808.42, 54195.57, 67332.5, 24390.95, 1790.43, 2234.19, 9917.5, 7408.14, 36051.99, 1352.22, 1907.7, 2154.66, 1078.21, 3391.65, 28262.73, 5177.04, 66.51, 2031.34, 1683.72, 1970.01, 6515.82, 1055.31, 1029.48, 5303.4, 1850.96, 1159.41, 39989.13, 1183.87, 96365.09, 8356.68, 7010.24, 23119.23, 46109.28, 9058.95, 1313.67, 31525.06, 2019.94, 703.04, 1868.79, 700.5, 55512.02, 243.5, 2113.18, 11781.81, 3487.29, 513.12, 5000.7, 121.02, 1302.78, 169.92, 124.29, 57366.05, 29445.93, 4614.3, 45009.98, 309.24, 3353.67, 41348.34, 2280.27, 61193.7, 1466.79, 12419.94, 445.12, 25188.65, 12351.23, 1152.3, 26298.81, 9900.78, 5355.57, 2325.66, 6282.81, 1283.1, 3560.15, 3723.84, 13715.01, 4887.9, 3396.89, 33348.42, 625.02, 1665.48, 32486.97, 20516.22, 8651.16, 13590.06, 2440.35, 6462.57]) sales_office_supplies = np.array ([1770.13, 7527.18, 1433.65, 423.3, 21601.72, 10035.72, 2378.49, 3062.38, 345.17, 30345.78, 300.71, 940.81, 36468.08, 1352.85, 1755.72, 2391.96, 19.98, 19792.8, 15633.88, 7.45, 521.67, 1118.24, 7231.68, 12399.32, 204.36, 
23.64, 5916.48, 313.98, 9212.42, 27476.91, 1761.33, 289.5, 780.3, 15098.46, 813.27, 47.55, 8323.23, 22634.64, 1831.02, 28808.1, 10539.78, 588.99, 939.78, 7212.41, 15683.01, 41369.09, 5581.6, 403.36, 375.26, 12276.66, 15393.56, 76.65, 5884.38, 18005.49, 3094.71, 43642.78, 35554.83, 22977.11, 1026.33, 665.28, 9712.49, 6038.52, 30756.51, 3758.25, 4769.49, 2463.3, 967.11, 2311.74, 1414.83, 12764.91, 4191.24, 110.76, 637.34, 1195.12, 2271.63, 804.12, 196.17, 167.67, 131.77, 2842.05, 9969.12, 1784.35, 3098.49, 25005.54, 1300.1, 7920.54, 6471.78, 31707.57, 37636.47, 3980.88, 3339.39, 26563.9, 4038.73, 124.8, 196.65, 2797.77, 29832.76, 184.84, 79.08, 8047.83, 1726.98, 899.73, 224.06, 6101.31, 729.6, 896.07, 17.82, 26.22, 46429.78, 31167.27, 2455.94, 37714.3, 1506.93, 3812.78, 25223.34, 3795.96, 437.31, 41278.86, 2091.81, 6296.61, 468.82, 23629.64, 9725.46, 1317.03, 1225.26, 30034.08, 7893.45, 2036.07, 215.52, 3912.42, 82783.43, 253.14, 966.96, 3381.26, 164.07, 1984.23, 75.12, 25168.17, 3295.53, 991.12, 10772.1, 44.16, 1311.45, 35352.57, 20.49, 13471.06, 8171.16, 14075.67, 611.82, 3925.56]) sales_furniture = np.array ([981.84, 10209.84, 156.56, 243.06, 21287.52, 7300.51, 434.52, 6065.0, 224.75, 28953.6, 757.98, 528.15, 34922.41, 50.58, 2918.48, 1044.96, 22195.13, 3951.48, 6977.64, 219.12, 5908.38, 10987.46, 4852.26, 445.5, 71860.82, 14840.45, 24712.08, 1329.9, 1180.44, 85.02, 10341.63, 690.48, 1939.53, 20010.51, 914.31, 25223.82, 12804.66, 2124.24, 602.82, 2961.66, 15740.79, 74138.35, 7759.39, 447.0, 2094.84, 22358.95, 21734.53, 4223.73, 17679.53, 1019.85, 51848.72, 69133.3, 30146.9, 705.48, 14508.88, 7489.38, 20269.44, 246.12, 668.13, 768.93, 899.16, 2578.2, 4107.99, 20334.57, 366.84, 3249.27, 98.88, 3497.88, 3853.05, 786.75, 1573.68, 458.36, 1234.77, 1094.22, 2300.61, 970.14, 3068.25, 35792.85, 4277.82, 71080.28, 3016.86, 3157.49, 15888.0, 30000.36, 1214.22, 1493.94, 32036.69, 4979.66, 106.02, 46257.68, 1033.3, 937.32, 3442.62, 213.15, 338.88, 9602.34, 2280.99, 73759.08, 
23526.12, 6272.74, 43416.3, 576.78, 1471.61, 20844.9, 3497.7, 56382.38, 902.58, 6235.26, 48.91, 32684.24, 13370.38, 10595.28, 4555.14, 10084.38, 267.72, 1012.95, 4630.5, 364.32, 349.2, 4647.56, 504.0, 10343.52, 5202.66, 2786.26, 34135.95, 2654.58, 24699.51, 136.26, 23524.51, 8731.68, 8425.86, 835.95, 11285.19]) # plotting box plot for each category plt.boxplot([sales_technology,sales_office_supplies,sales_furniture]) # adding title to the graph plt.title("Sales across country and product categories") # labeling the axes plt.xlabel("Product Category") plt.ylabel("Sales") # Replacing the x ticks with respective category plt.xticks((1,2,3),["Technology","Office_Supply","Furniture"]) plt.show() ```
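The percentile splits and outlier markers that `plt.boxplot` draws can be reproduced directly with NumPy, which makes the chart easier to interpret. Below is a minimal sketch using a small hypothetical array (not the sales data above); it applies the same default rule `plt.boxplot` uses, flagging points beyond 1.5 times the interquartile range from the box edges:

```
import numpy as np

# Illustrative values (hypothetical, chosen to contain one obvious outlier)
data = np.array([120.0, 250.0, 310.0, 450.0, 520.0,
                 610.0, 700.0, 880.0, 990.0, 5000.0])

# The box edges are the 25th and 75th percentiles; the line inside is the median
q1, median, q3 = np.percentile(data, [25, 50, 75])

# Points beyond 1.5 * IQR from the box edges are drawn as individual outlier markers
iqr = q3 - q1
lower_fence = q1 - 1.5 * iqr
upper_fence = q3 + 1.5 * iqr
outliers = data[(data < lower_fence) | (data > upper_fence)]

print(q1, median, q3)   # box statistics
print(outliers)         # points a box plot would draw beyond the whiskers
```

With matplotlib's default `whis=1.5`, `plt.boxplot(data)` would show the extreme value as a lone marker above the upper whisker, which is exactly how the large country-level sales figures appear in the plot above.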