Experiment parameters. Total experiment length: in years, supports decimals.
w["total_experiment_length_years"] = widgets.BoundedFloatText( value=7, min=0, max=15, step=0.1, description='Years:', disabled=False ) display(w["total_experiment_length_years"]) w["observing_efficiency"] = widgets.BoundedFloatText( value=0.2, min=0, max=1, step=0.01, ...
Apache-2.0
06_ui.ipynb
CMB-S4/s4_design_sim_tool
Observing efficiency: typically 20%, use decimal notation.
display(w["observing_efficiency"]) w["number_of_splits"] = widgets.BoundedIntText( value=1, min=1, max=7, step=1, description='Splits:', disabled=False )
Number of splits: 1 generates only the full mission; 2-7 generates the full mission map and then the requested number of splits scaled accordingly. E.g. 7 generates the full mission map and 7 equal (yearly) maps.
display(w["number_of_splits"])
Telescope configuration. Currently we constrain the design to a total of 6 SATs and 3 LATs; each SAT has a maximum of 3 tubes, each LAT of 19. The checkbox on the right of each telescope verifies that the number of tubes is correct.
import toml config = toml.load("s4_design.toml") def define_check_sum(telescope_widgets, max_tubes): def check_sum(_): total_tubes = sum([w.value for w in telescope_widgets[1:1+4]]) telescope_widgets[0].value = total_tubes == max_tubes return check_sum telescopes = {"SAT":{}, "LAT":{}} for teles...
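The tube-count validation above can be sketched without ipywidgets. This is a minimal stand-in, assuming each telescope row is a list of `[valid-checkbox, tube-count widgets...]` where every widget exposes a `.value` attribute (the `FakeWidget` class is hypothetical, introduced only for illustration):

```python
# Minimal sketch of the tube-count check, with plain objects standing in
# for ipywidgets (each has a .value attribute like Checkbox/BoundedIntText).

class FakeWidget:
    def __init__(self, value):
        self.value = value

def define_check_sum(telescope_widgets, max_tubes):
    """Return a callback that sets the first widget (the checkbox) to True
    only when the tube counts in the following widgets sum to max_tubes."""
    def check_sum(_change):
        total_tubes = sum(w.value for w in telescope_widgets[1:1 + 4])
        telescope_widgets[0].value = (total_tubes == max_tubes)
    return check_sum

# One SAT row: [valid-checkbox, four tube counts]; assume 3 tubes allowed.
sat_row = [FakeWidget(False), FakeWidget(1), FakeWidget(1), FakeWidget(1), FakeWidget(0)]
check = define_check_sum(sat_row, max_tubes=3)
check(None)
print(sat_row[0].value)  # True: 1 + 1 + 1 + 0 == 3
```

In the notebook the same callback is attached to each tube widget's `observe` handler, so the checkbox updates whenever a count changes.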
Generate a TOML configuration file. Click the button to generate the TOML file and display it.
import os output_location = os.environ.get("S4REFSIMTOOL_OUTPUT_URL", "") button = widgets.Button( description='Generate TOML', disabled=False, button_style='info', # 'success', 'info', 'warning', 'danger' or '' tooltip='Click me', icon='check' ) output_label = widgets.HTML(value="") output = widget...
Run the simulation. Generate the output maps.
#export def create_wget_script(folder, output_location): with open(folder / "download_all.sh", "w") as f: f.write("#!/bin/bash\n") for o in folder.iterdir(): if not str(o).endswith("sh"): f.write(f"wget {output_location}/{o}\n") def run_simulation(toml_filename, md5sum)...
Amazon SageMaker with XGBoost and Hyperparameter Tuning for Direct Marketing predictions _**Supervised Learning with Gradient Boosted Trees: A Binary Prediction Problem With Unbalanced Classes**_ ------ Contents: 1. [Objective](Objective) 1. [Background](Background) 1. [Environment Preparation](Environment-preparation) 1. [D...
import numpy as np # For matrix operations and numerical processing import pandas as pd # For munging tabular data import time import os from util.ml_reporting_tools import generate_classification_report # helper function for classification reports # setting up SageMaker parameters import sagemaker import boto3 sg...
MIT
sagemaker_xgboost_hpo.ipynb
bbonik/sagemaker-xgboost-with-hpo
--- Data downloading and exploration. Let's start by downloading the [direct marketing dataset](https://archive.ics.uci.edu/ml/datasets/bank+marketing) from UCI's ML Repository. We can run shell commands from Jupyter using the following code:
# (Running shell commands from Jupyter) !wget -P data/ -N https://archive.ics.uci.edu/ml/machine-learning-databases/00222/bank-additional.zip !unzip -o data/bank-additional.zip -d data/
Now let's read this into a pandas DataFrame and take a look.
df_data = pd.read_csv("./data/bank-additional/bank-additional-full.csv", sep=";") df_data.head() # show part of the dataframe
_**Specifics on each of the features:**_*Demographics:** `age`: Customer's age (numeric)* `job`: Type of job (categorical: 'admin.', 'services', ...)* `marital`: Marital status (categorical: 'married', 'single', ...)* `education`: Level of education (categorical: 'basic.4y', 'high.school', ...)*Past customer events:** ...
# Indicator variable to capture when pdays takes a value of 999 df_data["no_previous_contact"] = np.where(df_data["pdays"] == 999, 1, 0) # Indicator for individuals not actively employed df_data["not_working"] = np.where(np.in1d(df_data["job"], ["student", "retired", "unemployed"]), 1, 0) # remove unnecessary data df...
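The two indicator features above can be illustrated on a toy record set. This is a pure-Python stand-in for the `np.where` calls (the `records` data is invented for illustration; the column semantics follow the cell above):

```python
# Toy stand-in for the np.where indicator features:
# pdays == 999 means the client was never previously contacted.
records = [
    {"job": "admin.",  "pdays": 999},
    {"job": "student", "pdays": 3},
    {"job": "retired", "pdays": 999},
]

NOT_WORKING = {"student", "retired", "unemployed"}

for r in records:
    r["no_previous_contact"] = 1 if r["pdays"] == 999 else 0
    r["not_working"] = 1 if r["job"] in NOT_WORKING else 0

print([(r["no_previous_contact"], r["not_working"]) for r in records])
# [(1, 0), (0, 1), (1, 1)]
```

With NumPy, the vectorized `np.where(condition, 1, 0)` produces the same 0/1 columns in one call per feature.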
--- Training. Before initializing training, there are some things that need to be done: 1. Shuffle and split the dataset. 2. Convert the dataset to the format the SageMaker algorithm expects (e.g. CSV). 3. Copy the dataset to S3 so SageMaker can access it during training. 4. Create s3_inputs that our training fu...
# shuffle and splitting dataset train_data, validation_data, test_data = np.split( df_model_data.sample(frac=1, random_state=1729), [int(0.7 * len(df_model_data)), int(0.9*len(df_model_data))], ) # create CSV files for Train / Validation / Test # XGBoost expects a CSV file with no headers, with the 1st row b...
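The `np.split` call above carves the shuffled frame at the 70% and 90% marks, yielding a 70/20/10 train/validation/test split. The same idea can be sketched on a plain list (a stand-in for the DataFrame's rows, with a fixed seed mimicking `sample(frac=1, random_state=1729)`):

```python
import random

rows = list(range(100))            # stand-in for df_model_data's rows
random.Random(1729).shuffle(rows)  # shuffle with a fixed seed for reproducibility

n = len(rows)
train      = rows[:int(0.7 * n)]              # first 70%
validation = rows[int(0.7 * n):int(0.9 * n)]  # next 20%
test       = rows[int(0.9 * n):]              # final 10%

print(len(train), len(validation), len(test))  # 70 20 10
```

Shuffling before splitting matters here because the raw file is ordered, and an ordered split would give train and test sets with different class mixes.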
Specify algorithm container image
# specify object of the xgboost container image from sagemaker.amazon.amazon_estimator import get_image_uri xgb_container_image = get_image_uri(sgmk_region, "xgboost", repo_version="latest")
A small competition: try to predict the best values for 4 hyperparameters! SageMaker's XGBoost includes 38 parameters. You can find more information about them [here](https://docs.aws.amazon.com/sagemaker/latest/dg/xgboost_hyperparameters.html). For simplicity, we choose to experiment only with 6 of them. **Please selec...
sess = sagemaker.Session() # initiate a SageMaker session # instantiate an XGBoost estimator object xgb_estimator = sagemaker.estimator.Estimator( image_name=xgb_container_image, # XGBoost algorithm container role=sgmk_role, # IAM role to be used train_instance_type="ml.m4.x...
--- Deploying and evaluating the model. Deployment: now that we've trained the XGBoost algorithm on our data, deploying the model (hosting it behind a real-time endpoint) is just one line of code! *Attention: this may take up to 10 minutes, depending on the AWS instance you select.*
xgb_predictor = xgb_estimator.deploy(initial_instance_count=1, instance_type="ml.m5.large")
Evaluation. First we'll need to determine how we pass data into and receive data from our endpoint. Our data is currently stored as a NumPy array in the memory of our notebook instance. To send it in an HTTP POST request, we will serialize it as a CSV string and then decode the resulting CSV. Note: For inference with CSV f...
# Converting strings for HTTP POST requests on inference from sagemaker.predictor import csv_serializer def predict_prob(predictor, data): # predictor settings predictor.content_type = "text/csv" predictor.serializer = csv_serializer return np.fromstring(predictor.predict(data).decode("utf-8"), sep=","...
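The CSV round-trip the endpoint expects can be sketched without SageMaker: serialize one feature vector as the `text/csv` request body, and parse the comma-separated byte response back into floats. The helper names below (`to_csv_payload`, `parse_csv_response`) are hypothetical, introduced only to make the serialization step concrete:

```python
def to_csv_payload(row):
    """Serialize one feature vector as a 'text/csv' request body."""
    return ",".join(str(v) for v in row)

def parse_csv_response(body: bytes):
    """Decode a comma-separated byte response into floats."""
    return [float(v) for v in body.decode("utf-8").split(",")]

payload = to_csv_payload([35, 1, 0, 999])
print(payload)                            # 35,1,0,999
print(parse_csv_response(b"0.12,0.87"))  # [0.12, 0.87]
```

In the notebook, `csv_serializer` plays the role of `to_csv_payload`, and `np.fromstring(..., sep=",")` plays the role of `parse_csv_response`.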
These numbers are the **predicted probabilities** (in the interval [0,1]) of a potential customer enrolling for a term deposit: 0 means the person WILL NOT enroll; 1 means the person WILL enroll (which makes him or her a good candidate for direct marketing). Now we will generate a **comprehensive model report**, using the following...
generate_classification_report( y_actual=test_data["y_yes"].values, y_predict_proba=predictions, decision_threshold=0.5, class_names_list=["Did not enroll","Enrolled"], model_info="XGBoost SageMaker inbuilt" )
--- Hyperparameter Optimization (HPO). *Note: with the default settings below, the hyperparameter tuning job can take up to 30 minutes to complete.* We will use SageMaker HyperParameter Optimization (HPO) to automate the searching process effectively. Specifically, we **specify a range**, or a list of possible values in th...
# import required HPO objects from sagemaker.tuner import IntegerParameter, CategoricalParameter, ContinuousParameter, HyperparameterTuner # set up hyperparameter ranges ranges = { "num_round": IntegerParameter(1, 300), "max_depth": IntegerParameter(1, 10), "alpha": ContinuousParameter(0, 5), "eta": Co...
Launch HPO. Now we can launch a hyperparameter tuning job by calling the *fit()* function. After the tuning job is created, we can go to the SageMaker console to track its progress until it completes.
# start HPO tuner.fit({"train": s3_input_train, "validation": s3_input_validation}, include_cls_metadata=False)
**Important notice**: HPO jobs are expected to take quite a long time to finish and, as such, **they do not wait by default** (the cell will look 'done' while the job is still running in the cloud). Consequently, all subsequent cells relying on the HPO output cannot run unless the job is finished. In order to check whether ...
# wait, until HPO is finished hpo_state = "InProgress" while hpo_state == "InProgress": hpo_state = sgmk_client.describe_hyper_parameter_tuning_job( HyperParameterTuningJobName=tuner.latest_tuning_job.job_name)["HyperParameterTuningJobStatus"] print("-", end="") time.sleep(60) # poll once ...
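The waiting cell above is a poll-until-done loop. Its shape can be sketched with a stub in place of `sgmk_client.describe_hyper_parameter_tuning_job` (the `wait_for_job` helper and the `statuses` stub are hypothetical, for illustration only; real code would also `time.sleep` between polls):

```python
import itertools

def wait_for_job(describe, states_done=("Completed", "Failed", "Stopped")):
    """Poll describe() until the returned status leaves 'InProgress'."""
    while True:
        state = describe()
        if state in states_done:
            return state
        # in real use: time.sleep(60) here to avoid hammering the API

# Stub standing in for the boto3 describe call: two in-progress polls, then done.
statuses = itertools.chain(["InProgress", "InProgress"], itertools.repeat("Completed"))
final = wait_for_job(lambda: next(statuses))
print(final)  # Completed
```

Checking for terminal states other than "Completed" matters: a failed or stopped job would otherwise leave the loop spinning forever.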
Deploy and test the optimized model. Deploying the best model is simply one line of code:
# deploy the best model from HPO best_model_predictor = tuner.deploy(initial_instance_count=1, instance_type="ml.m5.large")
Once deployed, we can now evaluate the performance of the best model.
# getting the predicted probabilities of the best model predictions = predict_prob(best_model_predictor, test_data.drop(["y_no", "y_yes"], axis=1).values) print(predictions) # generate report for the best model generate_classification_report( y_actual=test_data["y_yes"].values, y_predict_proba=predictions, ...
--- Conclusions. The optimized HPO model achieves approximately AUC = 0.773. Depending on the number of tries, HPO can give a better-performing model compared to simply trying different hyperparameters by trial and error. You can learn more in-depth details about HPO [here](https://docs.aws.amazon.com/sagemaker/latest/d...
# xgb_predictor.delete_endpoint(delete_endpoint_config=True) # best_model_predictor.delete_endpoint(delete_endpoint_config=True)
Naoki Atkins, Project 3
import numpy as np import matplotlib.pyplot as plt from sklearn.model_selection import train_test_split from sklearn.linear_model import LinearRegression from sklearn.metrics import mean_squared_error from sklearn.preprocessing import PolynomialFeatures np.set_printoptions(suppress=True)
MIT
machineLearning/scikitLearnAndPolyRegression.ipynb
naokishami/Classwork
***Question 1***
data = np.load('./boston.npz')
***Question 2***
features = data['features'] target = data['target'] X = features y = target[:,None] X = np.concatenate((np.ones((len(X),1)),X),axis=1) X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=(2021-3-11))
***Question 3***
plt.plot(X_train[:,13], y_train, 'ro')
The relationship seems to follow a negative quadratic curve more than a straight line. ***Question 4***
LSTAT = X_train[:,13][:,None] MEDV = y_train reg = LinearRegression().fit(LSTAT, MEDV) reg.coef_ reg.intercept_
MEDV = 34.991133021969475 + (-0.98093888)(LSTAT) ***Question 5***
abline = np.array([reg.intercept_, reg.coef_], dtype=object) testx = np.linspace(0,40,100)[:,None] testX = np.hstack((np.ones_like(testx),testx)) testt = np.dot(testX,abline) plt.figure() plt.plot(LSTAT,MEDV,'ro') plt.plot(testx,testt,'b')
The model fits decently well along the center of mass of the data. Around the extremes, the line is a little off. ***Question 6***
pred = reg.predict(LSTAT) mean_squared_error(y_train, pred)
Average Loss = 38.47893344802523 ***Question 7***
pred_test = reg.predict(X_test[:,13][:,None]) mean_squared_error(y_test, pred_test)
The test MSE is slightly higher, which suggests a slight overfit. ***Question 8***
LSTAT_sqr = np.hstack((np.ones_like(LSTAT), LSTAT, LSTAT**2)) reg = LinearRegression().fit(LSTAT_sqr, MEDV) pred_train_LSTAT_sqr = reg.predict(LSTAT_sqr) MSE_train_sqr = mean_squared_error(y_train, pred_train_LSTAT_sqr) MSE_train_sqr LSTAT_sqr_test = np.hstack((np.ones_like(X_test[:,13][:,None]), X_test[:,13][:,None], ...
The test set has a lower MSE than the training set, which suggests the model is fitting well. ***Question 9***
reg.coef_ reg.intercept_ squared_line = [reg.intercept_, reg.coef_[0][1], reg.coef_[0][2]] testx = np.linspace(0,40,100)[:,None] testX = np.hstack((np.ones_like(testx),testx, testx**2)) testt = np.dot(testX,squared_line) plt.figure() plt.plot(LSTAT,MEDV,'ro') plt.plot(testx,testt,'b')
The model fits pretty well, better than the straight line. ***Question 10***
reg = LinearRegression().fit(X_train, y_train) reg.coef_ reg.intercept_ pred = reg.predict(X_train) mean_squared_error(y_train, pred)
The above mean squared error is for the training set.
pred_test = reg.predict(X_test) mean_squared_error(y_test, pred_test)
This model with polynomial features fits better than the linear model with a single feature. Making the model more complex allows it to fit the data more flexibly, which drives the MSE lower. ***Question 11***
train_square_matrix = np.hstack((X_train, X_train**2)) model = LinearRegression().fit(train_square_matrix, MEDV) pred_train_sqr = model.predict(train_square_matrix) MSE_train_sqr = mean_squared_error(y_train, pred_train_sqr) MSE_train_sqr test_square_matrix = np.hstack((X_test, X_test**2)) pred = model.predict(test_squ...
The MSEs for the matrix including the squares of all 13 input features are better than for the matrix of the raw features alone. However, the test set shows that the model is overfitting a little. ***Question 12***
poly = PolynomialFeatures(degree = 2) X_train_poly = poly.fit_transform(X_train) X_test_poly = poly.fit_transform(X_test) model = LinearRegression().fit(X_train_poly, y_train) pred = model.predict(X_train_poly) mean_squared_error(y_train, pred) pred = model.predict(X_test_poly) mean_squared_error(y_test, pred)
Project 3: Implement SLAM --- Project Overview. In this project, you'll implement SLAM for a robot that moves and senses in a 2-dimensional grid world! SLAM gives us a way to both localize a robot and build up a map of its environment as the robot moves and senses in real time. This is an active area of research in the fie...
import numpy as np from helpers import make_data # your implementation of slam should work with the following inputs # feel free to change these input values and see how it responds! # world parameters num_landmarks = 5 # number of landmarks N = 20 # time steps world_size = ...
Landmarks: [[21, 13], [16, 21], [70, 38], [38, 75], [87, 50]] Robot: [x=21.10550 y=65.76182]
MIT
3. Landmark Detection and Tracking.ipynb
takam5f2/CVN_SLAM
A note on `make_data`. The function above, `make_data`, takes in so many world and robot motion/sensor parameters because it is responsible for: 1. Instantiating a robot (using the robot class); 2. Creating a grid world with landmarks in it. **This function also prints out the true location of landmarks and the *final* robot...
# print out some stats about the data time_step = 0 print('Example measurements: \n', data[time_step][0]) print('\n') print('Example motion: \n', data[time_step][1])
Example measurements: [[0, -27.336665046464944, -35.10866490452625], [1, -33.31752573050853, -30.61726091048749], [2, 19.910918480109437, -10.91254402509894], [3, -10.810346809363109, 24.042261189064593], [4, 35.51407538801196, -0.736122885336894]] Example motion: [-19.11495180327814, 5.883758795052183]
Try changing the value of `time_step`; you should see that the list of measurements varies based on what the robot sees in the world after it moves. As you know from the first notebook, the robot can only sense so far and with a certain amount of accuracy in the measure of distance between its location and the location...
def initialize_constraints(N, num_landmarks, world_size): ''' This function takes in a number of time steps N, number of landmarks, and a world_size, and returns initialized constraint matrices, omega and xi.''' ## Recommended: Define and store the size (rows/cols) of the constraint matrix in a var...
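A minimal sketch of `initialize_constraints`, assuming the suggested interlaced x,y layout (pose coordinates first, then landmark coordinates) and anchoring only the initial pose at the centre of the world. The exact fill values here are an assumption; only the signature comes from the cell above:

```python
import numpy as np

def initialize_constraints(N, num_landmarks, world_size):
    """Sketch: build omega and xi for interlaced x,y poses followed by
    landmarks. Only the initial pose is constrained (to the world centre)."""
    dim = 2 * (N + num_landmarks)   # one row/col per x and per y coordinate
    omega = np.zeros((dim, dim))
    xi = np.zeros((dim, 1))
    # anchor the first pose: x0 = y0 = world_size / 2
    omega[0, 0] = 1
    omega[1, 1] = 1
    xi[0, 0] = world_size / 2.0
    xi[1, 0] = world_size / 2.0
    return omega, xi

omega, xi = initialize_constraints(N=5, num_landmarks=2, world_size=10)
print(omega.shape, xi.shape)  # (14, 14) (14, 1)
```

Every later motion or measurement update adds strength to the corresponding omega entries and information to xi; the initial-pose constraint is what makes the final system solvable.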
Test as you go. It's good practice to test out your code as you go. Since `slam` relies on creating and updating the constraint matrices `omega` and `xi` to account for robot sensor measurements and motion, let's check that they initialize as expected for any given parameters. Below, you'll find some test code that allows ...
# import data viz resources import matplotlib.pyplot as plt from pandas import DataFrame import seaborn as sns %matplotlib inline # define a small N and world_size (small for ease of visualization) N_test = 5 num_landmarks_test = 2 small_world = 10 # initialize the constraints initial_omega, initial_xi = initialize_co...
--- SLAM inputs. In addition to `data`, your slam function will also take in: * N - the number of time steps that a robot will be moving and sensing; * num_landmarks - the number of landmarks in the world; * world_size - the size (w/h) of your world; * motion_noise - the noise associated with motion; the update confidence fo...
## TODO: Complete the code to implement SLAM ## slam takes in 6 arguments and returns mu, ## mu is the entire path traversed by a robot (all x,y poses) *and* all landmarks locations def slam(data, N, num_landmarks, world_size, motion_noise, measurement_noise): ## TODO: Use your initilization to create constr...
Helper functions. To check that your implementation of SLAM works for various inputs, we have provided two helper functions that will help display the estimated pose and landmark locations that your function has produced. First, given a result `mu` and number of time steps, `N`, we define a function that extracts the po...
# a helper function that creates a list of poses and of landmarks for ease of printing # this only works for the suggested constraint architecture of interlaced x,y poses def get_poses_landmarks(mu, N): # create a list of poses poses = [] for i in range(N): poses.append((mu[2*i].item(), mu[2*i+1].it...
Run SLAM. Once you've completed your implementation of `slam`, see what `mu` it returns for different world sizes and different landmarks! What to expect: the `data` that is generated is random, but you did specify the number `N` of time steps that the robot was expected to move and the `num_landmarks` in the world (whi...
# call your implementation of slam, passing in the necessary parameters mu = slam(data, N, num_landmarks, world_size, motion_noise, measurement_noise) # print out the resulting landmarks and poses if(mu is not None): # get the lists of poses and landmarks # and print them out poses, landmarks = get_poses_l...
Estimated Poses: [50.000, 50.000] [30.932, 55.524] [10.581, 60.210] [1.931, 44.539] [13.970, 30.030] [26.721, 15.510] [38.852, 1.865] [23.640, 16.597] [7.988, 30.490] [16.829, 49.508] [26.632, 68.416] [34.588, 85.257] [15.954, 87.795] [14.410, 68.543] [11.222, 48.268] [7.365, 29.064] [4.241, 8.590] [10.207, 28.434] [...
Visualize the constructed world. Finally, using the `display_world` code from the `helpers.py` file (which was also used in the first notebook), we can actually visualize what you have coded with `slam`: the final position of the robot and the position of landmarks, created from only motion and measurement data! **Note th...
# import the helper function from helpers import display_world # Display the final world! # define figure size plt.rcParams["figure.figsize"] = (20,20) # check if poses has been created if 'poses' in locals(): # print out the last pose print('Last pose: ', poses[-1]) # display the last position of the ro...
Last pose: (20.15873442355911, 66.43079160194176)
Question: How far away is your final pose (as estimated by `slam`) compared to the *true* final pose? Why do you think these poses are different? You can find the true value of the final pose in one of the first cells where `make_data` was called. You may also want to look at the true landmark locations and compare the...
# calculate RMSE import math def getRMSE(ground_truth, estimation): sum_rmse = 0 for i, element_est in enumerate(estimation): diff = ground_truth[i] - element_est diff_square = diff * diff sum_rmse += diff_square rmse = math.sqrt(sum_rmse / len(ground_truth)) return rmse flatten ...
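The `getRMSE` helper above can be sanity-checked on a tiny hand-computed example: errors of 3 and 4 give RMSE = sqrt((9 + 16) / 2) = sqrt(12.5) ≈ 3.536. Reproducing the function here so the check is self-contained:

```python
import math

def getRMSE(ground_truth, estimation):
    """Root-mean-square error between two equal-length sequences."""
    sum_rmse = 0
    for i, element_est in enumerate(estimation):
        diff = ground_truth[i] - element_est
        sum_rmse += diff * diff
    return math.sqrt(sum_rmse / len(ground_truth))

print(round(getRMSE([0, 0], [3, 4]), 3))  # 3.536
```

In the notebook, the flattened `mu` vector and the true poses/landmarks are compared element-wise in exactly this way.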
0.6315729782587813
Testing. To confirm that your slam code works before submitting your project, it is suggested that you run it on some test data and cases. A few such cases have been provided for you in the cells below. When you are ready, uncomment the test cases in the next cells (there are two test cases total); your output should ...
# Here is the data and estimated outputs for test case 1 test_data1 = [[[[1, 19.457599255548065, 23.8387362100849], [2, -13.195807561967236, 11.708840328458608], [3, -30.0954905279171, 15.387879242505843]], [-12.2607279422326, -15.801093326936487]], [[[2, -0.4659930049620491, 28.088559771215664], [4, -17.8663823748909...
Estimated Poses: [50.000, 50.000] [69.181, 45.665] [87.743, 39.703] [76.270, 56.311] [64.317, 72.176] [52.257, 88.154] [44.059, 69.401] [37.002, 49.918] [30.924, 30.955] [23.508, 11.419] [34.180, 27.133] [44.155, 43.846] [54.806, 60.920] [65.698, 78.546] [77.468, 95.626] [96.802, 98.821] [75.957, 99.971] [70.200, 81....
Random Forests. Import Libraries
import pandas as pd from sklearn.ensemble import RandomForestClassifier from sklearn import datasets,metrics from sklearn.model_selection import GridSearchCV from sklearn.model_selection import train_test_split import matplotlib.pyplot as plt import seaborn as sns %matplotlib inline
MIT
ml/Random Forests/RandomForests.ipynb
Siddhant-K-code/AlgoBook
Load the [iris_data](https://archive.ics.uci.edu/ml/datasets/iris)
iris_data = datasets.load_iris() print(iris_data.target_names) print(iris_data.feature_names)
['setosa' 'versicolor' 'virginica'] ['sepal length (cm)', 'sepal width (cm)', 'petal length (cm)', 'petal width (cm)']
Preprocess the data
df = pd.DataFrame( { 'sepal_length':iris_data.data[:,0], 'sepal_width':iris_data.data[:,1], 'petal_length':iris_data.data[:,2], 'petal_width':iris_data.data[:,3], 'species':iris_data.target }) df.head() #Number of instances per class df.groupby('species').size() # species -> target column features =...
Visualization
#pair_plot #To explore the relationship between the features plt.figure() sns.pairplot(df,hue = "species", height=3, markers=["o", "s", "D"]) plt.show()
Fitting the model
X_train, X_test, Y_train, Y_test = train_test_split(features,targets,test_size = 0.3,random_state = 1) model_1 = RandomForestClassifier(n_estimators = 100,random_state = 1) model_1.fit(X_train, Y_train) Y_pred = model_1.predict(X_test) metrics.accuracy_score(Y_test,Y_pred)
Accuracy is around 95.6%. Improving the model: hyperparameter selection.
#using Exhaustive Grid Search n_estimators = [2, 10, 100,500] max_depth = [2, 10, 15,20] min_samples_split = [2, 5, 10] min_samples_leaf = [1, 2, 10,20] hyper_param = dict(n_estimators = n_estimators, max_depth = max_depth, min_samples_split = min_samples_split, min_samples_leaf = min_s...
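Exhaustive grid search simply enumerates the Cartesian product of the candidate lists and keeps the best-scoring combination. The enumeration can be sketched without sklearn, with a toy scoring function standing in for cross-validated accuracy (the grid values below are a simplified example and `toy_score` is invented for illustration; note that sklearn requires integer `min_samples_split` values to be at least 2):

```python
import itertools

grid = {
    "n_estimators": [2, 10, 100, 500],
    "max_depth": [2, 10, 15, 20],
    "min_samples_split": [2, 5, 10],
    "min_samples_leaf": [1, 2, 10, 20],
}

def toy_score(params):
    # Stand-in for cross-validated accuracy; GridSearchCV does the real thing.
    return -abs(params["max_depth"] - 10) - abs(params["n_estimators"] - 100)

keys = list(grid)
candidates = [dict(zip(keys, vals)) for vals in itertools.product(*grid.values())]
best = max(candidates, key=toy_score)

print(len(candidates))  # 4 * 4 * 3 * 4 = 192 combinations
print(best["max_depth"], best["n_estimators"])  # 10 100
```

The combination count grows multiplicatively with each list, which is why exhaustive search becomes expensive quickly; randomized or Bayesian search trades completeness for speed.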
Copyright (c) Microsoft Corporation. All rights reserved. Licensed under the MIT License. ResNet50 Image Classification using ONNX and A...
# Check core SDK version number import azureml.core print("SDK version:", azureml.core.VERSION)
MIT
how-to-use-azureml/deployment/onnx/onnx-modelzoo-aml-deploy-resnet50.ipynb
MustAl-Du/MachineLearningNotebooks
Download the pre-trained ONNX model from the ONNX Model Zoo. Download the [ResNet50v2 model and test data](https://s3.amazonaws.com/onnx-model-zoo/resnet/resnet50v2/resnet50v2.tar.gz) and extract it in the same folder as this tutorial notebook.
import urllib.request onnx_model_url = "https://s3.amazonaws.com/onnx-model-zoo/resnet/resnet50v2/resnet50v2.tar.gz" urllib.request.urlretrieve(onnx_model_url, filename="resnet50v2.tar.gz") !tar xvzf resnet50v2.tar.gz
Deploying as a web service with Azure ML. Load your Azure ML workspace: we begin by instantiating a workspace object from the existing workspace created earlier in the configuration notebook.
from azureml.core import Workspace ws = Workspace.from_config() print(ws.name, ws.location, ws.resource_group, sep = '\n')
Register your model with Azure ML. Now we upload the model and register it in the workspace.
from azureml.core.model import Model model = Model.register(model_path = "resnet50v2/resnet50v2.onnx", model_name = "resnet50v2", tags = {"onnx": "demo"}, description = "ResNet50v2 from ONNX Model Zoo", workspace = ws)
Displaying your registered models. You can optionally list all the models that you have registered in this workspace.
models = ws.models for name, m in models.items(): print("Name:", name,"\tVersion:", m.version, "\tDescription:", m.description, m.tags)
Write the scoring file. We are now going to deploy our ONNX model on Azure ML using the ONNX Runtime. We begin by writing a score.py file that will be invoked by the web service call. The `init()` function is called once when the container is started, so we load the model into a global session object using the ONNX Runtime.
%%writefile score.py import json import time import sys import os import numpy as np # we're going to use numpy to process input and output data import onnxruntime # to inference ONNX models, we use the ONNX Runtime def softmax(x): x = x.reshape(-1) e_x = np.exp(x - np.max(x)) return e_x / e_x.sum(ax...
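The `softmax` helper in score.py turns the network's raw class scores into probabilities. A pure-Python sketch of the same numerically stable formulation (subtracting the max before exponentiating so large logits don't overflow):

```python
import math

def softmax(scores):
    """Numerically stable softmax: shift by the max, exponentiate, normalize."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([1.0, 2.0, 3.0])
print(probs)  # probabilities sum to 1 and preserve the ordering of the scores
```

Without the max-shift, `math.exp` overflows for scores around 710 and above; the shift changes nothing mathematically because the common factor cancels in the normalization.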
Create the inference configuration. First we create a YAML file that specifies which dependencies we would like to see in our container.
from azureml.core.conda_dependencies import CondaDependencies myenv = CondaDependencies.create(pip_packages=["numpy","onnxruntime","azureml-core"]) with open("myenv.yml","w") as f: f.write(myenv.serialize_to_string())
Create the inference configuration object
from azureml.core.model import InferenceConfig inference_config = InferenceConfig(runtime= "python", entry_script="score.py", conda_file="myenv.yml", extra_docker_file_steps = "Dockerfile")
Deploy the model
from azureml.core.webservice import AciWebservice aciconfig = AciWebservice.deploy_configuration(cpu_cores = 1, memory_gb = 1, tags = {'demo': 'onnx'}, description = 'web serv...
The following cell will likely take a few minutes to run as well.
from random import randint aci_service_name = 'onnx-demo-resnet50'+str(randint(0,100)) print("Service", aci_service_name) aci_service = Model.deploy(ws, aci_service_name, [model], inference_config, aciconfig) aci_service.wait_for_deployment(True) print(aci_service.state)
In case the deployment fails, you can check the logs. Make sure to delete your aci_service before trying again.
if aci_service.state != 'Healthy': # run this command for debugging. print(aci_service.get_logs()) aci_service.delete()
Success!If you've made it this far, you've deployed a working web service that does image classification using an ONNX model. You can get the URL for the webservice with the code below.
print(aci_service.scoring_uri)
_____no_output_____
MIT
how-to-use-azureml/deployment/onnx/onnx-modelzoo-aml-deploy-resnet50.ipynb
MustAl-Du/MachineLearningNotebooks
When you are eventually done using the web service, remember to delete it.
#aci_service.delete()
_____no_output_____
MIT
how-to-use-azureml/deployment/onnx/onnx-modelzoo-aml-deploy-resnet50.ipynb
MustAl-Du/MachineLearningNotebooks
Stable Baselines, a Fork of OpenAI Baselines - Training, Saving and LoadingGithub Repo: [https://github.com/hill-a/stable-baselines](https://github.com/hill-a/stable-baselines)Medium article: [https://medium.com/@araffin/stable-baselines-a-fork-of-openai-baselines-df87c4b2fc82](https://medium.com/@araffin/stable-basel...
!apt install swig cmake libopenmpi-dev zlib1g-dev !pip install stable-baselines==2.5.1 box2d box2d-kengz
_____no_output_____
MIT
my_colabs/stbl_team/saving_loading_a2c.ipynb
guyk1971/stable-baselines
Import policy, RL agent, ...
import gym import numpy as np from stable_baselines.common.policies import MlpPolicy from stable_baselines.common.vec_env import DummyVecEnv from stable_baselines import A2C
_____no_output_____
MIT
my_colabs/stbl_team/saving_loading_a2c.ipynb
guyk1971/stable-baselines
Create the Gym env and instantiate the agentFor this example, we will use the Lunar Lander environment."Landing outside landing pad is possible. Fuel is infinite, so an agent can learn to fly and then land on its first attempt. Four discrete actions available: do nothing, fire left orientation engine, fire main engine, fi...
env = gym.make('LunarLander-v2') # vectorized environments allow to easily multiprocess training # we demonstrate its usefulness in the next examples env = DummyVecEnv([lambda: env]) # The algorithms require a vectorized environment to run model = A2C(MlpPolicy, env, ent_coef=0.1, verbose=0)
WARN: gym.spaces.Box autodetected dtype as <class 'numpy.float32'>. Please provide explicit dtype.
MIT
my_colabs/stbl_team/saving_loading_a2c.ipynb
guyk1971/stable-baselines
We create a helper function to evaluate the agent:
def evaluate(model, num_steps=1000): """ Evaluate a RL agent :param model: (BaseRLModel object) the RL Agent :param num_steps: (int) number of timesteps to evaluate it :return: (float) Mean reward for the last 100 episodes """ episode_rewards = [0.0] obs = env.reset() for i in range(num_steps): ...
_____no_output_____
MIT
my_colabs/stbl_team/saving_loading_a2c.ipynb
guyk1971/stable-baselines
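The `evaluate` helper above is truncated; its core aggregation — the mean reward over the last 100 completed episodes — can be sketched on its own (the helper name below is hypothetical, not part of stable-baselines):

```python
import numpy as np

def mean_last_episodes(episode_rewards, n=100):
    """Mean reward over the last n completed episodes, the statistic
    the evaluate() helper above prints (hypothetical standalone helper)."""
    if len(episode_rewards) == 0:
        return 0.0
    return float(np.mean(episode_rewards[-n:]))

print(mean_last_episodes([-210.0, -150.0, -90.0]))  # -150.0
```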
Let's evaluate the untrained agent; it should behave like a random agent.
# Random Agent, before training mean_reward_before_train = evaluate(model, num_steps=10000)
Mean reward: -210.3 Num episodes: 107
MIT
my_colabs/stbl_team/saving_loading_a2c.ipynb
guyk1971/stable-baselines
Train the agent and save itWarning: this may take a while
# Train the agent model.learn(total_timesteps=10000) # Save the agent model.save("a2c_lunar") del model # delete trained model to demonstrate loading
_____no_output_____
MIT
my_colabs/stbl_team/saving_loading_a2c.ipynb
guyk1971/stable-baselines
Load the trained agent
model = A2C.load("a2c_lunar") # Evaluate the trained agent mean_reward = evaluate(model, num_steps=10000)
Mean reward: -310.2 Num episodes: 68
MIT
my_colabs/stbl_team/saving_loading_a2c.ipynb
guyk1971/stable-baselines
A/B test 3 - loved journeys, control vs node2vecThis related links B/C test (ab3) was conducted from 15th to 20th March 2019. The data used in this report cover 15th-19th March 2019 because the test ended on 20th March. The test compared the existing related links (where available) to links generated using the node2vec algorithm. Import
%load_ext autoreload %autoreload 2 import os import pandas as pd import numpy as np import ast import re # z test from statsmodels.stats.proportion import proportions_ztest # bayesian bootstrap and vis import matplotlib.pyplot as plt import seaborn as sns import bayesian_bootstrap.bootstrap as bb from astropy.utils...
_____no_output_____
MIT
notebooks/analyses_reports/2019-03-15_to_03-19_ab3_node2vec_i_loved.ipynb
alphagov/govuk_ab_analysis
File/dir locations Processed journey data
DATA_DIR = os.getenv("DATA_DIR") filename = "full_sample_loved_947858.csv.gz" filepath = os.path.join( DATA_DIR, "sampled_journey", "20190315_20190319", filename) filepath VARIANT_DICT = { 'CONTROL_GROUP':'B', 'INTERVENTION_GROUP':'C' } # read in processed sampled journey with just the cols we need for ...
100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 740885/740885 [00:00<00:00, 766377.92it/s]
MIT
notebooks/analyses_reports/2019-03-15_to_03-19_ab3_node2vec_i_loved.ipynb
alphagov/govuk_ab_analysis
Nav type of page lookup - is it a finding page? If not, it's a thing page.
filename = "document_types.csv.gz" # created a metadata dir in the DATA_DIR to hold this data filepath = os.path.join( DATA_DIR, "metadata", filename) print(filepath) df_finding_thing = pd.read_csv(filepath, sep="\t", compression="gzip") df_finding_thing.head() thing_page_paths = df_finding_thing[ df_fin...
_____no_output_____
MIT
notebooks/analyses_reports/2019-03-15_to_03-19_ab3_node2vec_i_loved.ipynb
alphagov/govuk_ab_analysis
OutliersSome rows should be removed before analysis, for example rows with journey lengths of 500 or very high related link click rates. This filtering might have to happen once features have been created. Derive variables journey_click_rateThere is no difference in the proportion of journeys using at least one relate...
# get the number of related links clicks per Sequence df['Related Links Clicks per seq'] = df['Event_cat_act_agg'].map(analysis.sum_related_click_events) # map across the Sequence variable, which includes pages and Events # we want to pass all the list elements to a function one-by-one and then collect the output. df["...
_____no_output_____
MIT
notebooks/analyses_reports/2019-03-15_to_03-19_ab3_node2vec_i_loved.ipynb
alphagov/govuk_ab_analysis
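`analysis.sum_related_click_events` comes from the project's own module. Assuming `Event_cat_act_agg` is a list of `((category, action), count)` pairs — an assumption; the real structure and category string may differ — a stand-in could look like:

```python
def sum_related_click_events(event_list, category_name='relatedLinkClicked'):
    """Sum the counts of related-link click events in one sequence.
    Hypothetical stand-in for analysis.sum_related_click_events; the
    event-list structure and category string are assumptions."""
    return sum(
        count for (category, _action), count in event_list
        if category == category_name
    )

events = [(('relatedLinkClicked', '1.1 Related content'), 2),
          (('searchResults', 'click'), 5)]
print(sum_related_click_events(events))  # 2
```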
count of clicks on navigation elementsThere is no statistically significant difference in the count of clicks on navigation elements per journey between page variant A and page variant B.\begin{equation*}{\text{total number of navigation element click events from content pages}}\end{equation*} Related link counts
# get the total number of related links clicks for that row (clicks per sequence multiplied by occurrences) df['Related Links Clicks row total'] = df['Related Links Clicks per seq'] * df['Occurrences']
_____no_output_____
MIT
notebooks/analyses_reports/2019-03-15_to_03-19_ab3_node2vec_i_loved.ipynb
alphagov/govuk_ab_analysis
Navigation events
def count_nav_events(page_event_list): """Counts the number of nav events from a content page in a Page Event List.""" content_page_nav_events = 0 for pair in page_event_list: if analysis.is_nav_event(pair[1]): if pair[0] in thing_page_paths: content_page_nav_events += 1 ...
_____no_output_____
MIT
notebooks/analyses_reports/2019-03-15_to_03-19_ab3_node2vec_i_loved.ipynb
alphagov/govuk_ab_analysis
Temporary df file in case of crash Save
df.to_csv(os.path.join( DATA_DIR, "ab3_loved_temp.csv.gz"), sep="\t", compression="gzip", index=False) df = pd.read_csv(os.path.join( DATA_DIR, "ab3_loved_temp.csv.gz"), sep="\t", compression="gzip")
_____no_output_____
MIT
notebooks/analyses_reports/2019-03-15_to_03-19_ab3_node2vec_i_loved.ipynb
alphagov/govuk_ab_analysis
Frequentist statistics Statistical significance
# help(proportions_ztest) has_rel = analysis.z_prop(df, 'Has_Related', VARIANT_DICT) has_rel has_rel['p-value'] < alpha
_____no_output_____
MIT
notebooks/analyses_reports/2019-03-15_to_03-19_ab3_node2vec_i_loved.ipynb
alphagov/govuk_ab_analysis
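`analysis.z_prop` wraps a two-sample proportion z-test. The underlying computation — as statsmodels' `proportions_ztest` performs it with a pooled variance estimate — can be sketched with the standard library:

```python
import math
from statistics import NormalDist

def two_prop_ztest(x_a, n_a, x_b, n_b):
    """Pooled two-sample proportion z-test with a two-sided p-value —
    a sketch of the test analysis.z_prop runs above."""
    p_pool = (x_a + x_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (x_b / n_b - x_a / n_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical counts for illustration, not the experiment's real numbers
z, p = two_prop_ztest(500, 10000, 600, 10000)
print(round(z, 2), p < 0.05)
```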
Practical significance - uplift
# Due to multiple testing we used the Bonferroni correction for alpha ci_low,ci_upp = analysis.zconf_interval_two_samples(has_rel['x_a'], has_rel['n_a'], has_rel['x_b'], has_rel['n_b'], alpha = alpha) print(' difference in proportions = {0:.2f}%'.format(100*(has_rel['p_b']-has...
difference in proportions = 1.53% % relative change in proportions = 44.16% 95% Confidence Interval = ( 1.46% , 1.61% )
MIT
notebooks/analyses_reports/2019-03-15_to_03-19_ab3_node2vec_i_loved.ipynb
alphagov/govuk_ab_analysis
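`analysis.zconf_interval_two_samples` is project code; a minimal normal-approximation version of the two-sample confidence interval it returns might look like this (an unpooled standard error is assumed):

```python
import math
from statistics import NormalDist

def zconf_interval_two_samples(x_a, n_a, x_b, n_b, alpha=0.05):
    """Normal-approximation CI for the difference in proportions p_b - p_a.
    A sketch of what the project helper above likely computes."""
    p_a, p_b = x_a / n_a, x_b / n_b
    se = math.sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    z = NormalDist().inv_cdf(1 - alpha / 2)
    diff = p_b - p_a
    return diff - z * se, diff + z * se

# Hypothetical counts for illustration
ci_low, ci_upp = zconf_interval_two_samples(500, 10000, 600, 10000)
print(f'95% Confidence Interval = ({100*ci_low:.2f}%, {100*ci_upp:.2f}%)')
```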
Bayesian statistics Based on [this](https://medium.com/@thibalbo/coding-bayesian-ab-tests-in-python-e89356b3f4bd) blog post. Still to be developed; a Bayesian approach can provide a simpler interpretation. Bayesian bootstrap
analysis.compare_total_searches(df, VARIANT_DICT) fig, ax = plt.subplots() plot_df_B = df[df.ABVariant == VARIANT_DICT['INTERVENTION_GROUP']].groupby( 'Content_Nav_or_Search_Count').sum().iloc[:, 0] plot_df_A = df[df.ABVariant == VARIANT_DICT['CONTROL_GROUP']].groupby( 'Content_Nav_or_Search_Cou...
_____no_output_____
MIT
notebooks/analyses_reports/2019-03-15_to_03-19_ab3_node2vec_i_loved.ipynb
alphagov/govuk_ab_analysis
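The `bayesian_bootstrap` package (imported as `bb`) drives the next sections. Its central idea — Dirichlet-weighted resampling instead of the classical multinomial bootstrap — is small enough to sketch with numpy alone:

```python
import numpy as np

def bayesian_bootstrap_mean(data, n_replications=2000, seed=0):
    """Posterior samples of the mean via the Bayesian bootstrap:
    draw Dirichlet(1, ..., 1) weights over the observations and take
    weighted means (a sketch of what bb.mean computes)."""
    rng = np.random.default_rng(seed)
    data = np.asarray(data, dtype=float)
    weights = rng.dirichlet(np.ones(len(data)), size=n_replications)
    return weights @ data

posterior = bayesian_bootstrap_mean([1.0, 2.0, 3.0, 4.0])
print(round(float(posterior.mean()), 1))  # centred near the sample mean of 2.5
```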
proportion of journeys with a page sequence including content and related links onlyThere is no statistically significant difference in the proportion of journeys with a page sequence including content and related links only (including loops) between page variant A and page variant B \begin{equation*}\frac{\text{total...
# if (Content_Nav_Search_Event_Sum == 0) that's our success # Has_No_Nav_Or_Search == 1 is a success # the problem is symmetrical so doesn't matter too much sum(df.Has_No_Nav_Or_Search * df.Occurrences) / df.Occurrences.sum() sns.distplot(df.Content_Nav_or_Search_Count.values);
_____no_output_____
MIT
notebooks/analyses_reports/2019-03-15_to_03-19_ab3_node2vec_i_loved.ipynb
alphagov/govuk_ab_analysis
Frequentist statistics Statistical significance
nav = analysis.z_prop(df, 'Has_No_Nav_Or_Search', VARIANT_DICT) nav
_____no_output_____
MIT
notebooks/analyses_reports/2019-03-15_to_03-19_ab3_node2vec_i_loved.ipynb
alphagov/govuk_ab_analysis
Practical significance - uplift
# Due to multiple testing we used the Bonferroni correction for alpha ci_low,ci_upp = analysis.zconf_interval_two_samples(nav['x_a'], nav['n_a'], nav['x_b'], nav['n_b'], alpha = alpha) diff = 100*(nav['x_b']/nav['n_b']-nav['x_a']/nav['n_a']) print(' difference in proportions =...
There was a 0.18% relative change in the proportion of journeys not using search/nav elements
MIT
notebooks/analyses_reports/2019-03-15_to_03-19_ab3_node2vec_i_loved.ipynb
alphagov/govuk_ab_analysis
Average Journey Length (number of page views)There is no statistically significant difference in the average page list length of journeys (including loops) between page variant A and page variant B.
length_B = df[df.ABVariant == VARIANT_DICT['INTERVENTION_GROUP']].groupby( 'Page_List_Length').sum().iloc[:, 0] lengthB_2 = length_B.reindex(np.arange(1, 501, 1), fill_value=0) length_A = df[df.ABVariant == VARIANT_DICT['CONTROL_GROUP']].groupby( 'Page_List_Length').sum().iloc[:, 0] lengthA_2 =...
_____no_output_____
MIT
notebooks/analyses_reports/2019-03-15_to_03-19_ab3_node2vec_i_loved.ipynb
alphagov/govuk_ab_analysis
Bayesian bootstrap for non-parametric hypotheses
# http://savvastjortjoglou.com/nfl-bayesian-bootstrap.html # let's use mean journey length (could probably model parametrically but we use it for demonstration here) # some journeys have length 500 and should probably be removed as they are likely bots or other weirdness #exclude journeys longer than 500 as these c...
_____no_output_____
MIT
notebooks/analyses_reports/2019-03-15_to_03-19_ab3_node2vec_i_loved.ipynb
alphagov/govuk_ab_analysis
We can also measure the uncertainty in the difference between the Page Variants' Journey Lengths by subtracting their posteriors.
# calculate the posterior for the difference between A's and B's YPA ypa_diff = np.array(b_bootstrap) - np.array(a_bootstrap) # get the hdi ypa_diff_ci_low, ypa_diff_ci_hi = bb.highest_density_interval(ypa_diff) # the mean of the posterior ypa_diff.mean() print('low ci:', ypa_diff_ci_low, '\nhigh ci:', ypa_diff_ci_hi) ...
_____no_output_____
MIT
notebooks/analyses_reports/2019-03-15_to_03-19_ab3_node2vec_i_loved.ipynb
alphagov/govuk_ab_analysis
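`bb.highest_density_interval` returns the shortest interval covering a given share of the posterior mass. Assuming it operates on the sorted samples, a standalone version can be sketched as:

```python
import numpy as np

def highest_density_interval(samples, alpha=0.05):
    """Shortest interval containing a (1 - alpha) share of the posterior
    samples — a sketch of bb.highest_density_interval used above."""
    s = np.sort(np.asarray(samples))
    n = len(s)
    n_in = int(np.ceil((1 - alpha) * n))  # samples the interval must cover
    widths = s[n_in - 1:] - s[: n - n_in + 1]
    best = int(np.argmin(widths))
    return float(s[best]), float(s[best + n_in - 1])

lo, hi = highest_density_interval(np.random.default_rng(0).normal(0.0, 1.0, 10000))
print(round(lo, 1), round(hi, 1))  # roughly -2.0 and 2.0 for a standard normal
```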
We can actually calculate the probability that B's mean Journey Length was greater than A's mean Journey Length by measuring the proportion of values greater than 0 in the above distribution.
# We count the number of values greater than 0 and divide by the total number # of observations # which returns us the proportion of values in the distribution that are # greater than 0, which could act a bit like a p-value (ypa_diff > 0).sum() / ypa_diff.shape[0] # We count the number of values greater than 0 and divide...
_____no_output_____
MIT
notebooks/analyses_reports/2019-03-15_to_03-19_ab3_node2vec_i_loved.ipynb
alphagov/govuk_ab_analysis
Design Your Own Neural Net
import numpy as np import matplotlib.pyplot as plt logistic = lambda u: 1/(1+np.exp(-u)) def get_challenge1(): np.random.seed(0) X = np.random.randn(100, 2) d = np.sqrt(np.sum(X**2, axis=1)) y = np.array(d < 1, dtype=float) return X, y def get_challenge2(): X, y = get_challenge1() X = np.c...
_____no_output_____
Apache-2.0
DesignYourNeuralNet.ipynb
Ursinus-CS477-F2021/Week11_Convexity_NNIntro
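As a starting point for designing a net for these challenges, a single-hidden-layer forward pass using the `logistic` activation defined above can be sketched (the weights below are random placeholders, not a trained solution):

```python
import numpy as np

logistic = lambda u: 1 / (1 + np.exp(-u))

def forward(X, W1, b1, W2, b2):
    """Forward pass of a one-hidden-layer network on 2D inputs."""
    h = logistic(X @ W1 + b1)             # hidden activations
    return logistic(h @ W2 + b2).ravel()  # outputs squashed into (0, 1)

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 2))
W1, b1 = rng.standard_normal((2, 8)), np.zeros(8)
W2, b2 = rng.standard_normal((8, 1)), np.zeros(1)
y_hat = forward(X, W1, b1, W2, b2)
print(y_hat.shape)  # (100,)
```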
Gaussian Process Regression Gaussian Process* A random process where any point $\large π‘₯∈\mathbb{R}^𝑑$ is assigned a random variable $\large \mathbb{f}(π‘₯)$* The joint distribution of any finite number of such variables is given by:$$\large 𝑝(\mathbb{f}│𝑋)=𝒩(\mathbb{f}|πœ‡,𝐾)$$ where$$ \mathbb{f} = (\mathbb{f}(π‘₯_1 ), …, \ma...
import inspect import numpy as np import ipywidgets as w import bqplot.pyplot as plt import bqplot as bq # kernels def rbf(x1, x2, sigma=1., l=1.): z = (x1 - x2[:, np.newaxis]) / l return sigma**2 * np.exp(-.5 * z ** 2) def gp_regression(X_train, y_train, X_test, kernel=rbf, ...
_____no_output_____
MIT
ml/visualizations/Gaussian Process Regression.ipynb
kingreatwill/penter
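The `gp_regression` function above is truncated; the standard zero-mean GP conditioning that the markdown's formulas describe can be sketched with the same `rbf` kernel (note the kernel's broadcasting gives `K_s` shape `(n_test, n_train)`; the truncated original may differ in detail):

```python
import numpy as np

def rbf(x1, x2, sigma=1., l=1.):
    z = (x1 - x2[:, np.newaxis]) / l
    return sigma**2 * np.exp(-.5 * z ** 2)

def gp_posterior(X_train, y_train, X_test, sigma_noise=1e-6):
    """Posterior mean and covariance of a zero-mean GP with the RBF
    kernel — a sketch of the conditioning formulas in the markdown."""
    K = rbf(X_train, X_train) + sigma_noise * np.eye(len(X_train))
    K_s = rbf(X_train, X_test)            # cross-covariance, (n_test, n_train)
    K_ss = rbf(X_test, X_test)
    K_inv = np.linalg.inv(K)
    mu = K_s @ K_inv @ y_train
    cov = K_ss - K_s @ K_inv @ K_s.T
    return mu, cov

X = np.array([0.0, 1.0, 2.0])
y = np.array([1.0, 2.0, 0.5])
mu, cov = gp_posterior(X, y, X)
print(np.round(mu, 3))  # interpolates the training targets
```

With near-zero noise the posterior mean passes through the training targets, and the posterior variance collapses at those points.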
Transformation helper functions Getter function for the axes and origin of a sensor's coordinate system`Note: view is a field extracted from the config of sensors.`For example, `view = config['cameras']['front_left']['view']`
def get_axes_of_a_view(view): """ Extract the normalized axes of a sensor in the vehicle coordinate system view: 'view object' is a dictionary of the x-axis, y-axis and origin of a sensor """ x_axis = view['x-axis'] y_axis = view['y-axis'] x_axis_no...
_____no_output_____
MIT
data_processing/lidar_data_processing.ipynb
abhitoronto/KITTI_ROAD_SEGMENTATION
Getter functions for the coordinate transformation matrix: $$\begin{bmatrix} R & T \\ 0 & 1\end{bmatrix}$$
def get_transform_to_global(view): """ Get the Transformation matrix to convert sensor coordinates to global coordinates from the view object of a sensor view: 'view object' is a dictionary of the x-axis, y-axis and origin of a sensor """ # get axes x_...
_____no_output_____
MIT
data_processing/lidar_data_processing.ipynb
abhitoronto/KITTI_ROAD_SEGMENTATION
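The block matrix above assembles a rotation and a translation into one 4x4 homogeneous transform; a minimal numpy sketch (the helper name is hypothetical):

```python
import numpy as np

def make_homogeneous_transform(R, t):
    """Build the 4x4 [[R, T], [0, 1]] matrix from a 3x3 rotation R
    and a translation vector t, as in the markdown above."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Identity rotation plus a translation: the origin maps to t
T = make_homogeneous_transform(np.eye(3), np.array([1.0, 2.0, 3.0]))
print(T @ np.array([0.0, 0.0, 0.0, 1.0]))  # [1. 2. 3. 1.]
```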
Getter Functions for Rotation Matrix $$R_{3x3}$$
def get_rot_from_global(view): """ Get only the Rotation matrix to rotate sensor coordinates to global coordinates from the view object of a sensor view: 'view object' is a dictionary of the x-axis, y-axis and origin of a sensor """ # get transform to...
_____no_output_____
MIT
data_processing/lidar_data_processing.ipynb
abhitoronto/KITTI_ROAD_SEGMENTATION
Helper Functions for (image/Lidar/label) file names
def extract_sensor_file_name(file_name, root_path, sensor_name, ext): file_name_split = file_name.split('/') seq_name = file_name_split[-4] data_viewpoint = file_name_split[-2] file_name_sensor = file_name_split[-1].split('.')[0] file_name_sensor = file_name_sensor.split('_') file_name...
_____no_output_____
MIT
data_processing/lidar_data_processing.ipynb
abhitoronto/KITTI_ROAD_SEGMENTATION
Helper Functions for Images
def get_cv2_image(file_name_image, color_transform): # Create Image object and correct image color image = cv2.imread(file_name_image) image = cv2.cvtColor(image, color_transform) return image def get_undistorted_cv2_image(file_name_image, config, color_transform): # Create Image object an...
_____no_output_____
MIT
data_processing/lidar_data_processing.ipynb
abhitoronto/KITTI_ROAD_SEGMENTATION
LIDAR Helper Function Using LIDAR data- LiDAR data is provided in a camera reference frame.- `np.load(file_name_lidar)` loads the LIDAR points dictionary- LIDAR info - azimuth: - row: y axis image location of the lidar point - lidar_id: id of the LIDAR that the point belongs to - depth: Point Depth - ...
def get_lidar_on_image(file_name_lidar, config, root_path, pixel_size=3, pixel_opacity=1): file_name_image = extract_image_file_name_from_any_file_name(file_name_lidar, root_path) image = get_undistorted_cv2_image(file_name_image, config, cv2.COLOR_BGR2RGB) lidar = np.load(file_name_lidar) # g...
_____no_output_____
MIT
data_processing/lidar_data_processing.ipynb
abhitoronto/KITTI_ROAD_SEGMENTATION
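`get_lidar_on_image` above overlays per-point values on the camera image. Assuming the LIDAR dict supplies per-point row and column pixel indices (the column field name is an assumption; the field list above is truncated), the core splatting step is:

```python
import numpy as np

def splat_lidar_depth(rows, cols, depth, image_shape):
    """Write each point's depth into an image-sized array at its
    (row, col) pixel location — a sketch of the overlay step in
    get_lidar_on_image (field names assumed)."""
    canvas = np.zeros(image_shape, dtype=float)
    r = np.asarray(rows, dtype=int)
    c = np.asarray(cols, dtype=int)
    canvas[r, c] = np.asarray(depth, dtype=float)
    return canvas

canvas = splat_lidar_depth([1, 2], [3, 4], [10.0, 20.0], (5, 5))
print(canvas[1, 3], canvas[2, 4])  # 10.0 20.0
```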
MAIN
# # Pick a random LIDAR file from the custom data set # np.random.seed() # idx = np.random.randint(0, len(custom_lidar_files)-1) # file_name_lidar = custom_lidar_files[idx] # # Visualize LIDAR on image # lidar_on_image, lidar = get_lidar_on_image(file_name_lidar, config, root_path) # pt.fig = pt.figure(figsize=(15, 1...
_____no_output_____
MIT
data_processing/lidar_data_processing.ipynb
abhitoronto/KITTI_ROAD_SEGMENTATION
LIDAR data loading
# Open Config File with open ('cams_lidars.json', 'r') as f: config = json.load(f) # pprint.pprint(config) # Create Root Path root_path = '/hdd/a2d2-data/camera_lidar_semantic/' # Count Number of LIDAR points in each file def get_num_lidar_pts_list(file_names_lidar): num_lidar_points = [] start = time....
_____no_output_____
MIT
data_processing/lidar_data_processing.ipynb
abhitoronto/KITTI_ROAD_SEGMENTATION
LIDAR DATA PROCESSING
def get_image_files(lidar_file, method_type): # Create Lidar_x Lidar_y Lidar_z directory lx_file = extract_sensor_file_name(lidar_file, root_path, f'lidar-x-{method_type}', 'png') ly_file = extract_sensor_file_name(lidar_file, root_path, f'lidar-y-{method_type}', 'png') lz_file = extract_sensor_file_nam...
Creating Image: /hdd/a2d2-data/camera_lidar_semantic/20180807_145028/lidar-x-ip/cam_front_center/20180807145028_lidar-x-ip_frontcenter_000009806.png Creating Image: /hdd/a2d2-data/camera_lidar_semantic/20180807_145028/lidar-x-ip/cam_front_center/20180807145028_lidar-x-ip_frontcenter_000009489.png Creating Image: /hdd...
MIT
data_processing/lidar_data_processing.ipynb
abhitoronto/KITTI_ROAD_SEGMENTATION