1.0 Connect to workspace and datastore
```python
from azureml.core import Workspace

# set up workspace
ws = Workspace.from_config()

# set up datastores
dstore = ws.get_default_datastore()

print('Workspace Name: ' + ws.name,
      'Azure Region: ' + ws.location,
      'Subscription Id: ' + ws.subscription_id,
      'Resource Group: ' + ws.resource_group,
      sep='\n')
```
License: MIT | Path: Custom_Script/02_CustomScript_Training_Pipeline.ipynb | Repo: ben-chin-unify/solution-accelerator-many-models
2.0 Create an experiment
```python
from azureml.core import Experiment

experiment = Experiment(ws, 'oj_training_pipeline')
print('Experiment name: ' + experiment.name)
```
3.0 Get the training Dataset

Next, we get the training Dataset using the [Dataset.get_by_name()](https://docs.microsoft.com/python/api/azureml-core/azureml.core.dataset.datasetget-by-name-workspace--name--version--latest--) method. This is the training dataset we created and registered in the [data preparation notebook](../01_Data_Preparation.ipynb). If you chose to use only a subset of the files, the training dataset name will be `oj_data_small_train`. Otherwise, the name you'll have to use is `oj_data_train`. We recommend starting with the small dataset and making sure everything runs successfully, then scaling up to the full dataset.
```python
from azureml.core.dataset import Dataset

dataset_name = 'oj_data_small_train'
dataset = Dataset.get_by_name(ws, name=dataset_name)
dataset_input = dataset.as_named_input(dataset_name)
```
4.0 Create the training pipeline

Now that the workspace, experiment, and dataset are set up, we can put together a pipeline for training.

4.1 Configure environment for ParallelRunStep

An [environment](https://docs.microsoft.com/en-us/azure/machine-learning/concept-environments) defines a collection of resources that we will need to run our pipelines. Here we configure a reproducible Python environment for our training script, including the [scikit-learn](https://scikit-learn.org/stable/index.html) library.
```python
from azureml.core import Environment
from azureml.core.conda_dependencies import CondaDependencies

train_env = Environment(name="many_models_environment")
train_conda_deps = CondaDependencies.create(
    pip_packages=['scikit-learn', 'pandas', 'joblib', 'azureml-defaults',
                  'azureml-core', 'azureml-dataprep[fuse]'])
train_env.python.conda_dependencies = train_conda_deps
```
4.2 Choose a compute target

Currently, ParallelRunConfig only supports AmlCompute. This is the compute cluster you created in the [setup notebook](../00_Setup_AML_Workspace.ipynb#3.0-Create-compute-cluster).
```python
from azureml.core.compute import AmlCompute

cpu_cluster_name = "cpucluster"
compute = AmlCompute(ws, cpu_cluster_name)
```
4.3 Set up ParallelRunConfig

[ParallelRunConfig](https://docs.microsoft.com/en-us/python/api/azureml-pipeline-steps/azureml.pipeline.steps.parallel_run_config.parallelrunconfig?view=azure-ml-py) provides the configuration for the ParallelRunStep we'll be creating next. Here we specify the environment and compute target we created above, along with the entry script that will be run for each batch. There are a number of important parameters to configure, including:

- **mini_batch_size**: The number of files per batch. If you have 500 files and mini_batch_size is 10, 50 batches would be created containing 10 files each. Batches are split across the various nodes.
- **node_count**: The number of compute nodes to be used for running the user script. For the small sample of OJ datasets, we only need a single node, but you will likely need to increase this number for larger datasets composed of more files. If you increase the node count beyond five here, you may need to increase the max_nodes for the compute cluster as well.
- **process_count_per_node**: The number of processes per node. The compute cluster we are using has 8 cores, so we set this parameter to 8.
- **run_invocation_timeout**: The run() method invocation timeout in seconds. The timeout should be set higher than the maximum training time of one model (in seconds); by default it's 60. Since the batches that take the longest to train need about 120 seconds, we set it to 180 to ensure the method has adequate time to run.

We also add tags to preserve the information about our training cluster's node count, process count per node, and dataset name. You can find the 'Tags' column in Azure Machine Learning Studio.
```python
from azureml.pipeline.steps import ParallelRunConfig

processes_per_node = 8
node_count = 1
timeout = 180

parallel_run_config = ParallelRunConfig(
    source_directory='./scripts',
    entry_script='train.py',
    mini_batch_size="1",
    run_invocation_timeout=timeout,
    error_threshold=-1,
    output_action="append_row",
    environment=train_env,
    process_count_per_node=processes_per_node,
    compute_target=compute,
    node_count=node_count)
```
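As a sanity check on the mini_batch_size arithmetic described above (500 files at 10 files per batch gives 50 batches), the batch count is just a ceiling division:

```python
import math

def num_batches(n_files: int, mini_batch_size: int) -> int:
    # Each mini batch receives up to `mini_batch_size` files, so the number
    # of batches is the ceiling of files / batch size.
    return math.ceil(n_files / mini_batch_size)

print(num_batches(500, 10))  # 50
print(num_batches(501, 10))  # 51 (the last batch is partial)
```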
4.4 Set up ParallelRunStep

This [ParallelRunStep](https://docs.microsoft.com/en-us/python/api/azureml-pipeline-steps/azureml.pipeline.steps.parallel_run_step.parallelrunstep?view=azure-ml-py) is the main step in our training pipeline. First, we set up the output directory and define the pipeline's output name. The datastore that stores the pipeline's output data is the workspace's default datastore.
```python
from azureml.pipeline.core import PipelineData

output_dir = PipelineData(name="training_output", datastore=dstore)
```
We provide our ParallelRunStep with a name, the ParallelRunConfig created above, and several other parameters:

- **inputs**: A list of input datasets. Here we'll use the dataset created in the previous notebook. The number of files in that path determines the number of models that will be trained in the ParallelRunStep.
- **output**: A PipelineData object that corresponds to the output directory. We'll use the output directory we just defined.
- **arguments**: A list of arguments required for the train.py entry script. Here, we provide the schema for the timeseries data - i.e., the names of the target, timestamp, and id columns - as well as columns that should be dropped prior to modeling, a string identifying the model type, and the number of observations we want to leave aside for testing.
```python
from azureml.pipeline.steps import ParallelRunStep

parallel_run_step = ParallelRunStep(
    name="many-models-training",
    parallel_run_config=parallel_run_config,
    inputs=[dataset_input],
    output=output_dir,
    allow_reuse=False,
    arguments=['--target_column', 'Quantity',
               '--timestamp_column', 'WeekStarting',
               '--timeseries_id_columns', 'Store', 'Brand',
               '--drop_columns', 'Revenue', 'Store', 'Brand',
               '--model_type', 'lr',
               '--test_size', 20])
```
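The train.py entry script itself lives in `./scripts` and is not shown in this notebook. Purely as an illustration, a script could consume the arguments above with argparse along these lines (the parser layout here is an assumption, not the accelerator's actual code):

```python
import argparse

# Hypothetical sketch of how an entry script might parse the arguments
# passed by the ParallelRunStep above. Argument names come from the step
# definition; everything else is an assumption for illustration.
parser = argparse.ArgumentParser()
parser.add_argument('--target_column', type=str)
parser.add_argument('--timestamp_column', type=str)
parser.add_argument('--timeseries_id_columns', nargs='+')
parser.add_argument('--drop_columns', nargs='*', default=[])
parser.add_argument('--model_type', type=str, default='lr')
parser.add_argument('--test_size', type=int, default=20)

args = parser.parse_args([
    '--target_column', 'Quantity',
    '--timestamp_column', 'WeekStarting',
    '--timeseries_id_columns', 'Store', 'Brand',
    '--drop_columns', 'Revenue', 'Store', 'Brand',
    '--model_type', 'lr',
    '--test_size', '20',
])
print(args.timeseries_id_columns)  # ['Store', 'Brand']
```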
5.0 Run the pipeline

Next, we submit our pipeline to run. The run will train models for each dataset using a train set, compute accuracy metrics for the fits using a test set, and finally re-train models with all the data available. With 10 files, this should only take a few minutes, but with the full dataset this can take over an hour.
```python
from azureml.pipeline.core import Pipeline

pipeline = Pipeline(workspace=ws, steps=[parallel_run_step])
run = experiment.submit(pipeline)

# Wait for the run to complete
run.wait_for_completion(show_output=False, raise_on_error=True)
```
6.0 View results of training pipeline

The dataframe we return in the run() method of train.py is written to *parallel_run_step.txt*. To see the results of our training pipeline, we'll download that file, read the data into a DataFrame, and then visualize the results, including the in-sample metrics.

The run submitted to the Azure Machine Learning compute cluster may take a while. The output is not generated until the run is complete. You can monitor the status of the run in Azure Machine Learning studio: https://ml.azure.com

6.1 Download parallel_run_step.txt locally
```python
import os

def download_results(run, target_dir=None, step_name='many-models-training',
                     output_name='training_output'):
    stitch_run = run.find_step_run(step_name)[0]
    port_data = stitch_run.get_output_data(output_name)
    port_data.download(target_dir, show_progress=True)
    return os.path.join(target_dir, 'azureml', stitch_run.id, output_name)

file_path = download_results(run, 'output')
file_path
```
6.2 Convert the file to a dataframe
```python
import pandas as pd

df = pd.read_csv(file_path + '/parallel_run_step.txt', sep=" ", header=None)
df.columns = ['Store', 'Brand', 'Model', 'File Name', 'ModelName',
              'StartTime', 'EndTime', 'Duration', 'MSE', 'RMSE', 'MAE',
              'MAPE', 'Index', 'Number of Models', 'Status']

df['StartTime'] = pd.to_datetime(df['StartTime'])
df['EndTime'] = pd.to_datetime(df['EndTime'])
df['Duration'] = df['EndTime'] - df['StartTime']
df.head()
```
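To make the parsing above concrete: with `output_action="append_row"`, each value returned from run() is appended as one space-separated, headerless row of *parallel_run_step.txt*. Here is a single hypothetical row (all values invented) matching the 15 columns named above:

```python
import io
import pandas as pd

# One invented row in the append_row layout: 15 space-separated fields,
# no header. Real rows are produced by the train.py entry script.
sample = ("2 tropicana lr file1.csv model_2_tropicana "
          "2020-01-01T00:00:00 2020-01-01T00:02:00 120 "
          "1.5 1.22 0.9 5.0 0 1 success\n")
sample_df = pd.read_csv(io.StringIO(sample), sep=" ", header=None)
print(sample_df.shape)  # (1, 15)
```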
6.3 Review Results
```python
total = df['EndTime'].max() - df['StartTime'].min()

print('Number of Models: ' + str(len(df)))
print('Total Duration: ' + str(total)[6:])

print('Average MAPE: ' + str(round(df['MAPE'].mean(), 5)))
print('Average MSE: ' + str(round(df['MSE'].mean(), 5)))
print('Average RMSE: ' + str(round(df['RMSE'].mean(), 5)))
print('Average MAE: ' + str(round(df['MAE'].mean(), 5)))

print('Maximum Duration: ' + str(df['Duration'].max())[7:])
print('Minimum Duration: ' + str(df['Duration'].min())[7:])
print('Average Duration: ' + str(df['Duration'].mean())[7:])
```
6.4 Visualize Performance across models

Here, we produce some charts from the error metrics calculated during the run, using the subset put aside for testing.

First, we examine the distribution of mean absolute percentage error (MAPE) over all the models:
```python
import seaborn as sns
import matplotlib.pyplot as plt

fig = sns.boxplot(y='MAPE', data=df)
fig.set_title('MAPE across all models')
```
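As a reminder of the metric being plotted: MAPE is the mean of |actual - predicted| / |actual|, expressed as a percentage. A minimal pure-Python version (with made-up values) is:

```python
def mape(actual, predicted):
    # Mean absolute percentage error, as a percentage. Assumes no actual
    # value is zero (the percentage error is undefined there).
    return 100.0 * sum(abs((a - p) / a)
                       for a, p in zip(actual, predicted)) / len(actual)

# Two 10% errors and one exact prediction average to ~6.67%
print(round(mape([100, 200, 400], [110, 180, 400]), 2))  # 6.67
```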
Next, we can break that down by Brand or Store to see variations in error across our models.
```python
fig = sns.boxplot(x='Brand', y='MAPE', data=df)
fig.set_title('MAPE by Brand')
```
We can also look at how long models for different brands took to train:
```python
brand = df.groupby('Brand')
brand = brand['Duration'].sum()
brand = pd.DataFrame(brand)
brand['time_in_seconds'] = [time.total_seconds() for time in brand['Duration']]

brand.drop(columns=['Duration']).plot(kind='bar')
plt.xlabel('Brand')
plt.ylabel('Seconds')
plt.title('Total Training Time by Brand')
plt.show()
```
7.0 Publish and schedule the pipeline (Optional)

7.1 Publish the pipeline

Once you have a pipeline you're happy with, you can publish it so you can call it programmatically later on. See this [tutorial](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-create-your-first-pipeline#publish-a-pipeline) for additional information on publishing and calling pipelines.
```python
# published_pipeline = pipeline.publish(name='train_many_models',
#                                       description='train many models',
#                                       version='1',
#                                       continue_on_step_failure=False)
```
7.2 Schedule the pipeline

You can also [schedule the pipeline](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-schedule-pipelines) to run on a time-based or change-based schedule. This could be used to automatically retrain models every month or based on another trigger such as data drift.
```python
# from azureml.pipeline.core import Schedule, ScheduleRecurrence
#
# training_pipeline_id = published_pipeline.id
#
# recurrence = ScheduleRecurrence(frequency="Month", interval=1,
#                                 start_time="2020-01-01T09:00:00")
# recurring_schedule = Schedule.create(
#     ws, name="training_pipeline_recurring_schedule",
#     description="Schedule Training Pipeline to run on the first day of every month",
#     pipeline_id=training_pipeline_id,
#     experiment_name=experiment.name,
#     recurrence=recurrence)
```
Let's turn the mapping features into a function
```python
# Imports assumed from earlier cells of this notebook; state_lines is a
# custom cartopy feature also defined earlier (not shown in this excerpt).
import numpy as np
import cartopy.crs as ccrs
import cartopy.feature as cfeature
import matplotlib.ticker as mticker
from cartopy.feature import LAND


def get_ticks(bounds, dirs, otherbounds):
    dirs = dirs.lower()
    l0 = float(bounds[0])
    l1 = float(bounds[1])
    r = np.max([l1 - l0, float(otherbounds[1]) - float(otherbounds[0])])
    if r <= 1.5:
        # <1.5 degrees: 15' major ticks, 5' minor ticks
        minor_int = 1.0 / 12.0
        major_int = 1.0 / 4.0
    elif r <= 3.0:
        # <3 degrees: 30' major ticks, 10' minor ticks
        minor_int = 1.0 / 6.0
        major_int = 0.5
    elif r <= 7.0:
        # <7 degrees: 1d major ticks, 15' minor ticks
        minor_int = 0.25
        major_int = 1.0
    elif r <= 15:
        # <15 degrees: 2d major ticks, 30' minor ticks
        minor_int = 0.5
        major_int = 2.0
    elif r <= 30:
        # <30 degrees: 3d major ticks, 1d minor ticks
        minor_int = 1.0
        major_int = 3.0
    else:
        # >=30 degrees: 5d major ticks, 1d minor ticks
        minor_int = 1.0
        major_int = 5.0

    minor_ticks = np.arange(np.ceil(l0 / minor_int) * minor_int,
                            np.ceil(l1 / minor_int) * minor_int + minor_int,
                            minor_int)
    minor_ticks = minor_ticks[minor_ticks <= l1]
    major_ticks = np.arange(np.ceil(l0 / major_int) * major_int,
                            np.ceil(l1 / major_int) * major_int + major_int,
                            major_int)
    major_ticks = major_ticks[major_ticks <= l1]

    if major_int < 1:
        d, m, s = dd2dms(np.array(major_ticks))
        if dirs in ('we', 'ew', 'lon', 'long', 'longitude'):
            n = 'W' * sum(d < 0)
            p = 'E' * sum(d >= 0)
            dir = n + p
            major_tick_labels = [
                str(np.abs(int(d[i]))) + u"\N{DEGREE SIGN}"
                + str(int(m[i])) + "'" + dir[i]
                for i in range(len(d))]
        elif dirs in ('sn', 'ns', 'lat', 'latitude'):
            n = 'S' * sum(d < 0)
            p = 'N' * sum(d >= 0)
            dir = n + p
            major_tick_labels = [
                str(np.abs(int(d[i]))) + u"\N{DEGREE SIGN}"
                + str(int(m[i])) + "'" + dir[i]
                for i in range(len(d))]
        else:
            major_tick_labels = [
                str(int(d[i])) + u"\N{DEGREE SIGN}" + str(int(m[i])) + "'"
                for i in range(len(d))]
    else:
        d = major_ticks
        if dirs in ('we', 'ew', 'lon', 'long', 'longitude'):
            n = 'W' * sum(d < 0)
            p = 'E' * sum(d >= 0)
            dir = n + p
            major_tick_labels = [
                str(np.abs(int(d[i]))) + u"\N{DEGREE SIGN}" + dir[i]
                for i in range(len(d))]
        elif dirs in ('sn', 'ns', 'lat', 'latitude'):
            n = 'S' * sum(d < 0)
            p = 'N' * sum(d >= 0)
            dir = n + p
            major_tick_labels = [
                str(np.abs(int(d[i]))) + u"\N{DEGREE SIGN}" + dir[i]
                for i in range(len(d))]
        else:
            major_tick_labels = [
                str(int(d[i])) + u"\N{DEGREE SIGN}"
                for i in range(len(d))]

    return minor_ticks, major_ticks, major_tick_labels


def add_map_features(ax, extent):
    # # Gridlines and grid labels
    # gl = ax.gridlines(
    #     draw_labels=True,
    #     linewidth=.5,
    #     color='black',
    #     alpha=0.25,
    #     linestyle='--',
    # )
    # gl.xlabels_top = gl.ylabels_right = False
    # gl.xlabel_style = {'size': 16, 'color': 'black'}
    # gl.ylabel_style = {'size': 16, 'color': 'black'}
    # gl.xformatter = LONGITUDE_FORMATTER
    # gl.yformatter = LATITUDE_FORMATTER

    xl = [extent[0], extent[1]]
    yl = [extent[2], extent[3]]

    # get and add longitude ticks/labels
    tick0x, tick1, ticklab = get_ticks(xl, 'we', yl)
    ax.set_xticks(tick0x, minor=True, crs=ccrs.PlateCarree())
    ax.set_xticks(tick1, crs=ccrs.PlateCarree())
    ax.set_xticklabels(ticklab, fontsize=14)

    # get and add latitude ticks/labels
    tick0y, tick1, ticklab = get_ticks(yl, 'sn', xl)
    ax.set_yticks(tick0y, minor=True, crs=ccrs.PlateCarree())
    ax.set_yticks(tick1, crs=ccrs.PlateCarree())
    ax.set_yticklabels(ticklab, fontsize=14)

    gl = ax.gridlines(draw_labels=False, linewidth=.5, color='gray',
                      alpha=0.75, linestyle='--', crs=ccrs.PlateCarree())
    gl.xlocator = mticker.FixedLocator(tick0x)
    gl.ylocator = mticker.FixedLocator(tick0y)

    ax.tick_params(which='major', direction='out', bottom=True, top=True,
                   labelbottom=True, labeltop=False, left=True, right=True,
                   labelleft=True, labelright=False, length=5, width=2)
    ax.tick_params(which='minor', direction='out', bottom=True, top=True,
                   labelbottom=True, labeltop=False, left=True, right=True,
                   labelleft=True, labelright=False, width=1)

    # Axes properties and features
    ax.set_extent(extent)
    ax.add_feature(LAND, zorder=0, edgecolor='black')
    ax.add_feature(cfeature.LAKES)
    ax.add_feature(cfeature.BORDERS)
    ax.add_feature(state_lines, edgecolor='black')
    return ax
```
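get_ticks calls a helper dd2dms, which is defined in an earlier cell and not shown in this excerpt. A plausible sketch (an assumption, not the notebook's actual code) splits decimal degrees into degree/minute/second arrays, keeping the sign on the degrees component, which is all get_ticks relies on:

```python
import numpy as np

def dd2dms(dd):
    # Hypothetical dd2dms: decimal degrees -> (degrees, minutes, seconds)
    # arrays, with the sign carried on the degrees component.
    dd = np.atleast_1d(np.asarray(dd, dtype=float))
    sign = np.where(dd < 0, -1.0, 1.0)
    a = np.abs(dd)
    d = np.floor(a)
    m = np.floor((a - d) * 60.0)
    s = (a - d - m / 60.0) * 3600.0
    return sign * d, m, s

d, m, s = dd2dms(np.array([-73.5, 0.25]))
# degrees [-73., 0.], minutes [30., 15.], seconds [0., 0.]
```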
License: MIT | Path: examples/notebooks/plot_quiver_curly.ipynb | Repo: teresaupdyke/codar_processing
Let's change the arrows
```python
# velocity_min = np.int32(np.nanmin(speed))  # Get the minimum speed from the data
# velocity_max = np.int32(np.nanmax(speed))  # Get the maximum speed from the data
# velocity_min = 0   # Get the minimum speed from the data
# velocity_max = 40  # Get the maximum speed from the data

# Set up a keyword argument dictionary, kwargs, to pass optional arguments
# to the quiver plot
kwargs = dict(
    transform=ccrs.PlateCarree(),
    scale=65,  # Number of data units per arrow length unit, e.g., m/s per
               # plot width; a smaller scale parameter makes the arrow
               # longer. Default is None.
    headwidth=2.75,      # Head width as multiple of shaft width.
    headlength=2.75,     # Head length as multiple of shaft width.
    headaxislength=2.5,  # Head length at shaft intersection.
    minshaft=1,
    minlength=1
)

# Clip the colors
# color_clipped = np.clip(speed, velocity_min, velocity_max).squeeze(),

# Set the colorbar ticks to correspond to the velocity minimum and maximum
# of the data with a step of 5... Append the max velocity
# ticks = np.append(np.arange(velocity_min, velocity_max, 5), velocity_max)

import matplotlib.pyplot as plt
import numpy as np
from scipy.interpolate import griddata

lon, lat = np.meshgrid(tds.longitude, tds.latitude)
u = tds.u.data
v = tds.v.data

# Resample onto a 50x50 grid
nx, ny = 50, 50

# (N, 2) arrays of input x,y coords and u,v values
pts = np.vstack((lon.ravel(), lat.ravel())).T
vals = np.vstack((u.ravel(), v.ravel())).T

# The new x and y coordinates for the grid, which will correspond to the
# columns and rows of u and v respectively
xi = np.linspace(lon.min(), lon.max(), nx)
yi = np.linspace(lat.min(), lat.max(), ny)

# An (nx * ny, 2) array of x,y coordinates to interpolate at
ipts = np.vstack([a.ravel() for a in np.meshgrid(yi, xi)[::-1]]).T

# An (nx * ny, 2) array of interpolated u, v values
ivals = griddata(pts, vals, ipts, method='linear')  # Only works with nearest

# Reshape interpolated u,v values into (ny, nx) arrays
ui, vi = ivals.T
ui.shape = vi.shape = (ny, nx)

np.nanmax(yi)
```
Initialize blank plot with a mercator projection fig, ax = plt.subplots( figsize=(22, 16), subplot_kw=dict(projection=ccrs.Mercator()) ) norm = np.sqrt(ui**2 + vi**2) norm_flat = norm.flatten() start_points = np.array([xi.flatten(), yi.flatten()]).T scale = .2/np.nanmax(norm) for i in range(start_points.shape[0]): plt.streamplot(xi, yi, ui, vi, color='k', start_points=np.array([start_points[i,:]]), minlength=.95*norm_flat[i]*scale, maxlength=1.0*norm_flat[i]*scale, integration_direction='backward', density=10, arrowsize=0.0, transform=ccrs.PlateCarree() ) # Add map features to the axes add_map_features(ax, extent) # plt.quiver(xi, yi, ui/norm, vi/norm, scale=30, transform=ccrs.PlateCarree()) import matplotlib.pyplot as plt import numpy as np w = 3 Y, X = np.mgrid[-w:w:8j, -w:w:8j] U = -Y V = X norm = np.sqrt(U**2 + V**2) norm_flat = norm.flatten() start_points = np.array([X.flatten(),Y.flatten()]).T plt.clf() scale = .2/np.max(norm) plt.subplot(121) plt.title('scaling only the length') for i in range(start_points.shape[0]): plt.streamplot(X,Y,U,V, color='k', start_points=np.array([start_points[i,:]]),minlength=.95*norm_flat[i]*scale, maxlength=1.0*norm_flat[i]*scale, integration_direction='backward', density=10, arrowsize=0.0) plt.quiver(X,Y,U/norm, V/norm,scale=30) plt.axis('square') plt.subplot(122) plt.title('scaling length, arrowhead and linewidth') for i in range(start_points.shape[0]): plt.streamplot(X,Y,U,V, color='k', start_points=np.array([start_points[i,:]]),minlength=.95*norm_flat[i]*scale, maxlength=1.0*norm_flat[i]*scale, integration_direction='backward', density=10, arrowsize=0.0, linewidth=.5*norm_flat[i]) plt.quiver(X,Y,U/np.max(norm), V/np.max(norm),scale=30) plt.axis('square') """ Streamline plotting for 2D vector fields. 
""" from __future__ import (absolute_import, division, print_function, unicode_literals) import six from six.moves import xrange from scipy.interpolate import interp1d import numpy as np import matplotlib import matplotlib.cm as cm import matplotlib.colors as mcolors import matplotlib.collections as mcollections import matplotlib.lines as mlines import matplotlib.patches as patches def velovect(axes, x, y, u, v, linewidth=None, color=None, cmap=None, norm=None, arrowsize=1, arrowstyle='-|>', transform=None, zorder=None, start_points=None, scale=1.0, grains=15): """Draws streamlines of a vector flow. *x*, *y* : 1d arrays an *evenly spaced* grid. *u*, *v* : 2d arrays x and y-velocities. Number of rows should match length of y, and the number of columns should match x. *density* : float or 2-tuple Controls the closeness of streamlines. When `density = 1`, the domain is divided into a 30x30 grid---*density* linearly scales this grid. Each cell in the grid can have, at most, one traversing streamline. For different densities in each direction, use [density_x, density_y]. *linewidth* : numeric or 2d array vary linewidth when given a 2d array with the same shape as velocities. *color* : matplotlib color code, or 2d array Streamline color. When given an array with the same shape as velocities, *color* values are converted to colors using *cmap*. *cmap* : :class:`~matplotlib.colors.Colormap` Colormap used to plot streamlines and arrows. Only necessary when using an array input for *color*. *norm* : :class:`~matplotlib.colors.Normalize` Normalize object used to scale luminance data to 0, 1. If None, stretch (min, max) to (0, 1). Only necessary when *color* is an array. *arrowsize* : float Factor scale arrow size. *arrowstyle* : str Arrow style specification. See :class:`~matplotlib.patches.FancyArrowPatch`. *minlength* : float Minimum length of streamline in axes coordinates. *start_points*: Nx2 array Coordinates of starting points for the streamlines. 
In data coordinates, the same as the ``x`` and ``y`` arrays. *zorder* : int any number *scale* : float Maximum length of streamline in axes coordinates. Returns: *stream_container* : StreamplotSet Container object with attributes - lines: `matplotlib.collections.LineCollection` of streamlines - arrows: collection of `matplotlib.patches.FancyArrowPatch` objects representing arrows half-way along stream lines. This container will probably change in the future to allow changes to the colormap, alpha, etc. for both lines and arrows, but these changes should be backward compatible. """ grid = Grid(x, y) mask = StreamMask(10) dmap = DomainMap(grid, mask) if zorder is None: zorder = mlines.Line2D.zorder # default to data coordinates if transform is None: transform = axes.transData if color is None: color = axes._get_lines.get_next_color() if linewidth is None: linewidth = matplotlib.rcParams['lines.linewidth'] line_kw = {} arrow_kw = dict(arrowstyle=arrowstyle, mutation_scale=10 * arrowsize) use_multicolor_lines = isinstance(color, np.ndarray) if use_multicolor_lines: if color.shape != grid.shape: raise ValueError( "If 'color' is given, must have the shape of 'Grid(x,y)'") line_colors = [] color = np.ma.masked_invalid(color) else: line_kw['color'] = color arrow_kw['color'] = color if isinstance(linewidth, np.ndarray): if linewidth.shape != grid.shape: raise ValueError( "If 'linewidth' is given, must have the shape of 'Grid(x,y)'") line_kw['linewidth'] = [] else: line_kw['linewidth'] = linewidth arrow_kw['linewidth'] = linewidth line_kw['zorder'] = zorder arrow_kw['zorder'] = zorder ## Sanity checks. 
if u.shape != grid.shape or v.shape != grid.shape: raise ValueError("'u' and 'v' must be of shape 'Grid(x,y)'") u = np.ma.masked_invalid(u) v = np.ma.masked_invalid(v) magnitude = np.sqrt(u**2 + v**2) magnitude/=np.max(magnitude) resolution = scale/grains minlength = .9*resolution integrate = get_integrator(u, v, dmap, minlength, resolution, magnitude) trajectories = [] edges = [] if start_points is None: start_points=_gen_starting_points(x,y,grains) sp2 = np.asanyarray(start_points, dtype=float).copy() # Check if start_points are outside the data boundaries for xs, ys in sp2: if not (grid.x_origin <= xs <= grid.x_origin + grid.width and grid.y_origin <= ys <= grid.y_origin + grid.height): raise ValueError("Starting point ({}, {}) outside of data " "boundaries".format(xs, ys)) # Convert start_points from data to array coords # Shift the seed points from the bottom left of the data so that # data2grid works properly. sp2[:, 0] -= grid.x_origin sp2[:, 1] -= grid.y_origin for xs, ys in sp2: xg, yg = dmap.data2grid(xs, ys) t = integrate(xg, yg) if t is not None: trajectories.append(t[0]) edges.append(t[1]) if use_multicolor_lines: if norm is None: norm = mcolors.Normalize(color.min(), color.max()) if cmap is None: cmap = cm.get_cmap(matplotlib.rcParams['image.cmap']) else: cmap = cm.get_cmap(cmap) streamlines = [] arrows = [] for t, edge in zip(trajectories,edges): tgx = np.array(t[0]) tgy = np.array(t[1]) # Rescale from grid-coordinates to data-coordinates. tx, ty = dmap.grid2data(*np.array(t)) tx += grid.x_origin ty += grid.y_origin points = np.transpose([tx, ty]).reshape(-1, 1, 2) streamlines.extend(np.hstack([points[:-1], points[1:]])) # Add arrows half way along each trajectory. 
s = np.cumsum(np.sqrt(np.diff(tx) ** 2 + np.diff(ty) ** 2)) n = np.searchsorted(s, s[-1]) arrow_tail = (tx[n], ty[n]) arrow_head = (np.mean(tx[n:n + 2]), np.mean(ty[n:n + 2])) if isinstance(linewidth, np.ndarray): line_widths = interpgrid(linewidth, tgx, tgy)[:-1] line_kw['linewidth'].extend(line_widths) arrow_kw['linewidth'] = line_widths[n] if use_multicolor_lines: color_values = interpgrid(color, tgx, tgy)[:-1] line_colors.append(color_values) arrow_kw['color'] = cmap(norm(color_values[n])) if not edge: p = patches.FancyArrowPatch( arrow_tail, arrow_head, transform=transform, **arrow_kw) else: continue ds = np.sqrt((arrow_tail[0]-arrow_head[0])**2+(arrow_tail[1]-arrow_head[1])**2) if ds<1e-15: continue #remove vanishingly short arrows that cause Patch to fail axes.add_patch(p) arrows.append(p) lc = mcollections.LineCollection( streamlines, transform=transform, **line_kw) lc.sticky_edges.x[:] = [grid.x_origin, grid.x_origin + grid.width] lc.sticky_edges.y[:] = [grid.y_origin, grid.y_origin + grid.height] if use_multicolor_lines: lc.set_array(np.ma.hstack(line_colors)) lc.set_cmap(cmap) lc.set_norm(norm) axes.add_collection(lc) axes.autoscale_view() ac = matplotlib.collections.PatchCollection(arrows) stream_container = StreamplotSet(lc, ac) return stream_container class StreamplotSet(object): def __init__(self, lines, arrows, **kwargs): self.lines = lines self.arrows = arrows # Coordinate definitions # ======================== class DomainMap(object): """Map representing different coordinate systems. Coordinate definitions: * axes-coordinates goes from 0 to 1 in the domain. * data-coordinates are specified by the input x-y coordinates. * grid-coordinates goes from 0 to N and 0 to M for an N x M grid, where N and M match the shape of the input data. * mask-coordinates goes from 0 to N and 0 to M for an N x M mask, where N and M are user-specified to control the density of streamlines. This class also has methods for adding trajectories to the StreamMask. 
Before adding a trajectory, run `start_trajectory` to keep track of regions crossed by a given trajectory. Later, if you decide the trajectory is bad (e.g., if the trajectory is very short) just call `undo_trajectory`. """ def __init__(self, grid, mask): self.grid = grid self.mask = mask # Constants for conversion between grid- and mask-coordinates self.x_grid2mask = (mask.nx - 1) / grid.nx self.y_grid2mask = (mask.ny - 1) / grid.ny self.x_mask2grid = 1. / self.x_grid2mask self.y_mask2grid = 1. / self.y_grid2mask self.x_data2grid = 1. / grid.dx self.y_data2grid = 1. / grid.dy def grid2mask(self, xi, yi): """Return nearest space in mask-coords from given grid-coords.""" return (int((xi * self.x_grid2mask) + 0.5), int((yi * self.y_grid2mask) + 0.5)) def mask2grid(self, xm, ym): return xm * self.x_mask2grid, ym * self.y_mask2grid def data2grid(self, xd, yd): return xd * self.x_data2grid, yd * self.y_data2grid def grid2data(self, xg, yg): return xg / self.x_data2grid, yg / self.y_data2grid def start_trajectory(self, xg, yg): xm, ym = self.grid2mask(xg, yg) self.mask._start_trajectory(xm, ym) def reset_start_point(self, xg, yg): xm, ym = self.grid2mask(xg, yg) self.mask._current_xy = (xm, ym) def update_trajectory(self, xg, yg): xm, ym = self.grid2mask(xg, yg) #self.mask._update_trajectory(xm, ym) def undo_trajectory(self): self.mask._undo_trajectory() class Grid(object): """Grid of data.""" def __init__(self, x, y): if x.ndim == 1: pass elif x.ndim == 2: x_row = x[0, :] if not np.allclose(x_row, x): raise ValueError("The rows of 'x' must be equal") x = x_row else: raise ValueError("'x' can have at maximum 2 dimensions") if y.ndim == 1: pass elif y.ndim == 2: y_col = y[:, 0] if not np.allclose(y_col, y.T): raise ValueError("The columns of 'y' must be equal") y = y_col else: raise ValueError("'y' can have at maximum 2 dimensions") self.nx = len(x) self.ny = len(y) self.dx = x[1] - x[0] self.dy = y[1] - y[0] self.x_origin = x[0] self.y_origin = y[0] self.width = x[-1] - 
x[0] self.height = y[-1] - y[0] @property def shape(self): return self.ny, self.nx def within_grid(self, xi, yi): """Return True if point is a valid index of grid.""" # Note that xi/yi can be floats; so, for example, we can't simply check # `xi < self.nx` since `xi` can be `self.nx - 1 < xi < self.nx` return xi >= 0 and xi <= self.nx - 1 and yi >= 0 and yi <= self.ny - 1 class StreamMask(object): """Mask to keep track of discrete regions crossed by streamlines. The resolution of this grid determines the approximate spacing between trajectories. Streamlines are only allowed to pass through zeroed cells: When a streamline enters a cell, that cell is set to 1, and no new streamlines are allowed to enter. """ def __init__(self, density): if np.isscalar(density): if density <= 0: raise ValueError("If a scalar, 'density' must be positive") self.nx = self.ny = int(30 * density) else: if len(density) != 2: raise ValueError("'density' can have at maximum 2 dimensions") self.nx = int(30 * density[0]) self.ny = int(30 * density[1]) self._mask = np.zeros((self.ny, self.nx)) self.shape = self._mask.shape self._current_xy = None def __getitem__(self, *args): return self._mask.__getitem__(*args) def _start_trajectory(self, xm, ym): """Start recording streamline trajectory""" self._traj = [] self._update_trajectory(xm, ym) def _undo_trajectory(self): """Remove current trajectory from mask""" for t in self._traj: self._mask.__setitem__(t, 0) def _update_trajectory(self, xm, ym): """Update current trajectory position in mask. If the new position has already been filled, raise `InvalidIndexError`. """ #if self._current_xy != (xm, ym): # if self[ym, xm] == 0: self._traj.append((ym, xm)) self._mask[ym, xm] = 1 self._current_xy = (xm, ym) # else: # raise InvalidIndexError # Integrator definitions #======================== def get_integrator(u, v, dmap, minlength, resolution, magnitude): # rescale velocity onto grid-coordinates for integrations. 
    u, v = dmap.data2grid(u, v)

    # speed (path length) will be in axes-coordinates
    u_ax = u / dmap.grid.nx
    v_ax = v / dmap.grid.ny
    speed = np.ma.sqrt(u_ax ** 2 + v_ax ** 2)

    def forward_time(xi, yi):
        ds_dt = interpgrid(speed, xi, yi)
        if ds_dt == 0:
            raise TerminateTrajectory()
        dt_ds = 1. / ds_dt
        ui = interpgrid(u, xi, yi)
        vi = interpgrid(v, xi, yi)
        return ui * dt_ds, vi * dt_ds

    def integrate(x0, y0):
        """Return x, y grid-coordinates of trajectory based on starting point.

        Integrate both forward and backward in time from starting point in
        grid coordinates.

        Integration is terminated when a trajectory reaches a domain boundary
        or when it crosses into an already occupied cell in the StreamMask.
        The resulting trajectory is None if it is shorter than `minlength`.
        """
        stotal, x_traj, y_traj = 0., [], []

        dmap.start_trajectory(x0, y0)
        dmap.reset_start_point(x0, y0)
        stotal, x_traj, y_traj, m_total, hit_edge = _integrate_rk12(
            x0, y0, dmap, forward_time, resolution, magnitude)

        if len(x_traj) > 1:
            return (x_traj, y_traj), hit_edge
        else:  # reject short trajectories
            dmap.undo_trajectory()
            return None

    return integrate


def _integrate_rk12(x0, y0, dmap, f, resolution, magnitude):
    """2nd-order Runge-Kutta algorithm with adaptive step size.

    This method is also referred to as the improved Euler's method, or Heun's
    method. This method is favored over higher-order methods because:

    1. To get decent looking trajectories and to sample every mask cell on
       the trajectory we need a small timestep, so a lower order solver
       doesn't hurt us unless the data is *very* high resolution. In fact,
       for cases where the user inputs data smaller or of similar grid size
       to the mask grid, the higher order corrections are negligible because
       of the very fast linear interpolation used in `interpgrid`.

    2. For high resolution input data (i.e. beyond the mask resolution), we
       must reduce the timestep. Therefore, an adaptive timestep is more
       suited to the problem as this would be very hard to judge
       automatically otherwise.

    This integrator is about 1.5 - 2x as fast as both the RK4 and RK45
    solvers in most setups on my machine. I would recommend removing the
    other two to keep things simple.
    """
    # This error is below that needed to match the RK4 integrator. It
    # is set for visual reasons -- too low and corners start
    # appearing ugly and jagged. Can be tuned.
    maxerror = 0.003

    # This limit is important (for all integrators) to avoid the
    # trajectory skipping some mask cells. We could relax this
    # condition if we use the code which is commented out below to
    # increment the location gradually. However, due to the efficient
    # nature of the interpolation, this doesn't boost speed by much
    # for quite a bit of complexity.
    maxds = min(1. / dmap.mask.nx, 1. / dmap.mask.ny, 0.1)

    ds = maxds
    stotal = 0
    xi = x0
    yi = y0
    xf_traj = []
    yf_traj = []
    m_total = []
    hit_edge = False

    while dmap.grid.within_grid(xi, yi):
        xf_traj.append(xi)
        yf_traj.append(yi)
        m_total.append(interpgrid(magnitude, xi, yi))
        try:
            k1x, k1y = f(xi, yi)
            k2x, k2y = f(xi + ds * k1x,
                         yi + ds * k1y)
        except IndexError:
            # Out of the domain on one of the intermediate integration steps.
            # Take an Euler step to the boundary to improve neatness.
            ds, xf_traj, yf_traj = _euler_step(xf_traj, yf_traj, dmap, f)
            stotal += ds
            hit_edge = True
            break
        except TerminateTrajectory:
            break

        dx1 = ds * k1x
        dy1 = ds * k1y
        dx2 = ds * 0.5 * (k1x + k2x)
        dy2 = ds * 0.5 * (k1y + k2y)

        nx, ny = dmap.grid.shape
        # Error is normalized to the axes coordinates
        error = np.sqrt(((dx2 - dx1) / nx) ** 2 + ((dy2 - dy1) / ny) ** 2)

        # Only save step if within error tolerance
        if error < maxerror:
            xi += dx2
            yi += dy2
            dmap.update_trajectory(xi, yi)
            if not dmap.grid.within_grid(xi, yi):
                hit_edge = True
            if (stotal + ds) > resolution * np.mean(m_total):
                break
            stotal += ds

        # recalculate stepsize based on step error
        if error == 0:
            ds = maxds
        else:
            ds = min(maxds, 0.85 * ds * (maxerror / error) ** 0.5)

    return stotal, xf_traj, yf_traj, m_total, hit_edge


def _euler_step(xf_traj, yf_traj, dmap, f):
    """Simple Euler integration step that extends streamline to boundary."""
    ny, nx = dmap.grid.shape
    xi = xf_traj[-1]
    yi = yf_traj[-1]
    cx, cy = f(xi, yi)
    if cx == 0:
        dsx = np.inf
    elif cx < 0:
        dsx = xi / -cx
    else:
        dsx = (nx - 1 - xi) / cx
    if cy == 0:
        dsy = np.inf
    elif cy < 0:
        dsy = yi / -cy
    else:
        dsy = (ny - 1 - yi) / cy
    ds = min(dsx, dsy)
    xf_traj.append(xi + cx * ds)
    yf_traj.append(yi + cy * ds)
    return ds, xf_traj, yf_traj


# Utility functions
# ========================

def interpgrid(a, xi, yi):
    """Fast 2D, linear interpolation on an integer grid"""
    Ny, Nx = np.shape(a)
    if isinstance(xi, np.ndarray):
        x = xi.astype(int)
        y = yi.astype(int)
        # Check that xn, yn don't exceed max index
        xn = np.clip(x + 1, 0, Nx - 1)
        yn = np.clip(y + 1, 0, Ny - 1)
    else:
        x = int(xi)
        y = int(yi)
        # conditional is faster than clipping for integers
        if x == (Nx - 2):
            xn = x
        else:
            xn = x + 1
        if y == (Ny - 2):
            yn = y
        else:
            yn = y + 1

    a00 = a[y, x]
    a01 = a[y, xn]
    a10 = a[yn, x]
    a11 = a[yn, xn]
    xt = xi - x
    yt = yi - y
    a0 = a00 * (1 - xt) + a01 * xt
    a1 = a10 * (1 - xt) + a11 * xt
    ai = a0 * (1 - yt) + a1 * yt

    if not isinstance(xi, np.ndarray):
        if np.ma.is_masked(ai):
            raise TerminateTrajectory

    return ai
def _gen_starting_points(x, y, grains):
    eps = np.finfo(np.float32).eps
    tmp_x = np.linspace(x.min() + eps, x.max() - eps, grains)
    tmp_y = np.linspace(y.min() + eps, y.max() - eps, grains)
    xs = np.tile(tmp_x, grains)
    ys = np.repeat(tmp_y, grains)
    seed_points = np.array([list(xs), list(ys)])
    return seed_points.T


f, ax = plt.subplots(figsize=(15, 4))
grains = 15
tmp = np.linspace(-3, 3, grains)
xs = np.tile(tmp, grains)
ys = np.repeat(tmp, grains)
seed_points = np.array([list(xs), list(ys)])

scale = 2.
velovect(ax, xi, yi, ui, vi, arrowstyle='fancy', scale=1.5, grains=15, color='k')
# cs = ax.contourf(xi, yi, W, cmap=plt.cm.viridis, alpha=0.5, zorder=-1)
# ax1.set_title("Quiver")
# ax2.set_title("Streamplot")
# ax3.set_title("Curved quivers")
# plt.colorbar(cs, ax=[ax1, ax2, ax3])
plt.show()
MIT
examples/notebooks/plot_quiver_curly.ipynb
teresaupdyke/codar_processing
# Amazon SageMaker Object Detection for Bird Species

1. [Introduction](#Introduction)
2. [Setup](#Setup)
3. [Data Preparation](#Data-Preparation)
    1. [Download and unpack the dataset](#Download-and-unpack-the-dataset)
    2. [Understand the dataset](#Understand-the-dataset)
    3. [Generate RecordIO files](#Generate-RecordIO-files)
4. [Train the model](#Train-the-model)
5. [Host the model](#Host-the-model)
6. [Test the model](#Test-the-model)
7. [Clean up](#Clean-up)
8. [Improve the model](#Improve-the-model)
9. [Final cleanup](#Final-cleanup)

## Introduction

Object detection is the process of identifying and localizing objects in an image. A typical object detection solution takes an image as input and provides a bounding box on the image where an object of interest is found. It also identifies what type of object the box encapsulates. To create such a solution, we need to acquire and process a training dataset, then create and set up a training job so the algorithm can learn from the dataset. Finally, we can host the trained model in an endpoint, to which we can supply images.

This notebook is an end-to-end example showing how the Amazon SageMaker Object Detection algorithm can be used with a publicly available dataset of bird images. We demonstrate how to train and host an object detection model based on the [Caltech Birds (CUB 200 2011)](http://www.vision.caltech.edu/visipedia/CUB-200-2011.html) dataset. Amazon SageMaker's object detection algorithm uses the Single Shot MultiBox Detector ([SSD](https://arxiv.org/abs/1512.02325)) algorithm, and this notebook uses a [ResNet](https://arxiv.org/pdf/1603.05027.pdf) base network with that algorithm.

![Sample results detecting a pair of goldfinches on a feeder](./goldfinch_detections.png)

We will also demonstrate how to construct a training dataset using the RecordIO format, as this is the format that the training job consumes.
This notebook is similar to the [Object Detection using the RecordIO format](https://github.com/awslabs/amazon-sagemaker-examples/blob/master/introduction_to_amazon_algorithms/object_detection_pascalvoc_coco/object_detection_recordio_format.ipynb) notebook, with the following key differences:

- We provide an example of how to translate bounding box specifications when providing images to SageMaker's algorithm. You will see code for generating the `train.lst` and `val.lst` files used to create [RecordIO](https://mxnet.incubator.apache.org/architecture/note_data_loading.html) files.
- We demonstrate how to improve an object detection model by adding training images that are flipped horizontally (mirror images).
- We give you a notebook for experimenting with object detection challenges with an order of magnitude more classes (200 bird species, as opposed to the 20 categories used by [Pascal VOC](http://host.robots.ox.ac.uk/pascal/VOC/)).
- We show how to chart the accuracy improvements that occur across the epochs of the training job.

Note that Amazon SageMaker Object Detection also allows training with the image and JSON format, which is illustrated in the [image and JSON notebook](https://github.com/awslabs/amazon-sagemaker-examples/blob/master/introduction_to_amazon_algorithms/object_detection_pascalvoc_coco/object_detection_image_json_format.ipynb).

## Setup

Before preparing the data, there are some initial steps required for setup. This notebook requires two additional Python packages:

* **OpenCV** is required for gathering image sizes and flipping images horizontally.
* The **MXNet** runtime is required for using the `im2rec` tool.
import sys

!{sys.executable} -m pip install opencv-python
!{sys.executable} -m pip install mxnet
Apache-2.0
introduction_to_amazon_algorithms/object_detection_birds/object_detection_birds.ipynb
Amirosimani/amazon-sagemaker-examples
We need to identify the S3 bucket that you want to use for providing training and validation datasets. It will also be used to store the trained model artifacts. In this notebook, we use a custom bucket. You could alternatively use a default bucket for the session. We use an object prefix to help organize the bucket content.
bucket = "<your_s3_bucket_name_here>"  # custom bucket name
prefix = "DEMO-ObjectDetection-birds"
To train the Object Detection algorithm on Amazon SageMaker, we need to set up and authenticate the use of AWS services. To begin with, we need an AWS account role with SageMaker access. Here we will use the execution role the current notebook instance was given when it was created. This role has the necessary permissions, including access to your data in S3.
import sagemaker
from sagemaker import get_execution_role

role = get_execution_role()
print(role)
sess = sagemaker.Session()
## Data Preparation

The [Caltech Birds (CUB 200 2011)](http://www.vision.caltech.edu/visipedia/CUB-200-2011.html) dataset contains 11,788 images across 200 bird species (the original technical report can be found [here](http://www.vision.caltech.edu/visipedia/papers/CUB_200_2011.pdf)). Each species comes with around 60 images, with a typical size of about 350 pixels by 500 pixels. Bounding boxes are provided, as are annotations of bird parts. A recommended train/test split is given, but image size data is not.

![](./cub_200_2011_snapshot.png)

The dataset can be downloaded [here](http://www.vision.caltech.edu/visipedia/CUB-200-2011.html).

### Download and unpack the dataset

Here we download the birds dataset from CalTech.
import os
import urllib.request


def download(url):
    filename = url.split("/")[-1]
    if not os.path.exists(filename):
        urllib.request.urlretrieve(url, filename)


%%time
# download('http://www.vision.caltech.edu/visipedia-data/CUB-200-2011/CUB_200_2011.tgz')
# CalTech's download is (at least temporarily) unavailable since August 2020.
# Can now use one made available by fast.ai .
download("https://s3.amazonaws.com/fast-ai-imageclas/CUB_200_2011.tgz")
Now we unpack the dataset into its own directory structure.
%%time

# Clean up prior version of the downloaded dataset if you are running this again
!rm -rf CUB_200_2011

# Unpack and then remove the downloaded compressed tar file
!gunzip -c ./CUB_200_2011.tgz | tar xopf -
!rm CUB_200_2011.tgz
### Understand the dataset

#### Set some parameters for the rest of the notebook to use

Here we define a few parameters that help drive the rest of the notebook. For example, `SAMPLE_ONLY` defaults to `True`, which forces the notebook to train on only a handful of species. Setting it to `False` makes the notebook work with the entire dataset of 200 bird species. That makes the training a more difficult challenge, and you will need many more epochs to complete it.

The file parameters define names and locations of metadata files for the dataset.
import pandas as pd
import cv2
import boto3
import json

runtime = boto3.client(service_name="runtime.sagemaker")

import matplotlib.pyplot as plt

%matplotlib inline

RANDOM_SPLIT = False
SAMPLE_ONLY = True
FLIP = False

# To speed up training and experimenting, you can use a small handful of species.
# To see the full list of the classes available, look at the content of CLASSES_FILE.
CLASSES = [17, 36, 47, 68, 73]

# Otherwise, you can use the full set of species
if not SAMPLE_ONLY:
    CLASSES = []
    for c in range(200):
        CLASSES += [c + 1]

RESIZE_SIZE = 256

BASE_DIR = "CUB_200_2011/"
IMAGES_DIR = BASE_DIR + "images/"

CLASSES_FILE = BASE_DIR + "classes.txt"
BBOX_FILE = BASE_DIR + "bounding_boxes.txt"
IMAGE_FILE = BASE_DIR + "images.txt"
LABEL_FILE = BASE_DIR + "image_class_labels.txt"
SIZE_FILE = BASE_DIR + "sizes.txt"
SPLIT_FILE = BASE_DIR + "train_test_split.txt"

TRAIN_LST_FILE = "birds_ssd_train.lst"
VAL_LST_FILE = "birds_ssd_val.lst"

if SAMPLE_ONLY:
    TRAIN_LST_FILE = "birds_ssd_sample_train.lst"
    VAL_LST_FILE = "birds_ssd_sample_val.lst"

TRAIN_RATIO = 0.8
CLASS_COLS = ["class_number", "class_id"]
IM2REC_SSD_COLS = [
    "header_cols",
    "label_width",
    "zero_based_id",
    "xmin",
    "ymin",
    "xmax",
    "ymax",
    "image_file_name",
]
#### Explore the dataset images

For each species, there are dozens of images of various shapes and sizes. By dividing the entire dataset into individual named (numbered) folders, the images are in effect labelled for supervised learning using image classification and object detection algorithms.

The following function displays a grid of thumbnail images for all the image files for a given species.
def show_species(species_id):
    _im_list = !ls $IMAGES_DIR/$species_id

    NUM_COLS = 6
    IM_COUNT = len(_im_list)

    print('Species ' + species_id + ' has ' + str(IM_COUNT) + ' images.')

    NUM_ROWS = int(IM_COUNT / NUM_COLS)
    if ((IM_COUNT % NUM_COLS) > 0):
        NUM_ROWS += 1

    fig, axarr = plt.subplots(NUM_ROWS, NUM_COLS)
    fig.set_size_inches(8.0, 16.0, forward=True)

    curr_row = 0
    for curr_img in range(IM_COUNT):
        # fetch the url as a file type object, then read the image
        f = IMAGES_DIR + species_id + '/' + _im_list[curr_img]
        a = plt.imread(f)

        # find the column by taking the current index modulo 3
        col = curr_img % NUM_ROWS
        # plot on relevant subplot
        axarr[col, curr_row].imshow(a)
        if col == (NUM_ROWS - 1):
            # we have finished the current row, so increment row counter
            curr_row += 1

    fig.tight_layout()
    plt.show()

    # Clean up
    plt.clf()
    plt.cla()
    plt.close()
Show the list of bird species or dataset classes.
classes_df = pd.read_csv(CLASSES_FILE, sep=" ", names=CLASS_COLS, header=None)
criteria = classes_df["class_number"].isin(CLASSES)
classes_df = classes_df[criteria]
print(classes_df.to_csv(columns=["class_id"], sep="\t", index=False, header=False))
Now for any given species, display thumbnail images of each of the images provided for training and testing.
show_species("017.Cardinal")
### Generate RecordIO files

#### Step 1. Gather image sizes

For this particular dataset, bounding box annotations are specified in absolute terms. RecordIO format requires them to be defined in terms relative to the image size. The following code visits each image, extracts the height and width, and saves this information into a file for subsequent use. Some other publicly available datasets provide such a file for exactly this purpose.
%%time

SIZE_COLS = ["idx", "width", "height"]


def gen_image_size_file():
    print("Generating a file containing image sizes...")
    images_df = pd.read_csv(
        IMAGE_FILE, sep=" ", names=["image_pretty_name", "image_file_name"], header=None
    )
    rows_list = []
    idx = 0
    for i in images_df["image_file_name"]:
        # TODO: add progress bar
        idx += 1
        img = cv2.imread(IMAGES_DIR + i)
        dimensions = img.shape
        height = img.shape[0]
        width = img.shape[1]
        image_dict = {"idx": idx, "width": width, "height": height}
        rows_list.append(image_dict)

    sizes_df = pd.DataFrame(rows_list)
    print("Image sizes:\n" + str(sizes_df.head()))
    sizes_df[SIZE_COLS].to_csv(SIZE_FILE, sep=" ", index=False, header=None)


gen_image_size_file()
#### Step 2. Generate list files for producing RecordIO files

[RecordIO](https://mxnet.incubator.apache.org/architecture/note_data_loading.html) files can be created using the [im2rec tool](https://mxnet.incubator.apache.org/faq/recordio.html) (images to RecordIO), which takes as input a pair of list files, one for training images and the other for validation images. Each list file has one row for each image. For object detection, each row must contain bounding box data and a class label.

For the CalTech birds dataset, we need to convert absolute bounding box dimensions to relative dimensions based on image size. We also need to adjust class IDs to be zero-based (instead of 1 to 200, they need to be 0 to 199).

This dataset comes with recommended train/test split information (the "is_training_image" flag). This notebook is built flexibly to either leverage this suggestion, or to create a random train/test split with a specific train/test ratio. The `RANDOM_SPLIT` variable defined earlier controls whether or not the split happens randomly.
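Before running the full pandas version in the next cell, the conversion can be sanity-checked on a single annotation. The numbers here are made up for illustration, not taken from the real dataset:

```python
# Hypothetical CUB-style annotation: absolute bbox (x, y, width, height)
# on a 500x350-pixel (width x height) image. All values are illustrative.
img_w, img_h = 500, 350
x_abs, y_abs, bbox_w, bbox_h = 60.0, 27.0, 325.0, 304.0

# RecordIO wants corner coordinates as ratios of the image size
xmin = x_abs / img_w                 # left edge as a fraction of width
ymin = y_abs / img_h                 # top edge as a fraction of height
xmax = (x_abs + bbox_w) / img_w      # right edge
ymax = (y_abs + bbox_h) / img_h      # bottom edge

# CUB class ids are 1-based; SageMaker object detection needs 0-based ids.
# With the 5 sampled classes, the mapping is just their sorted position.
classes = [17, 36, 47, 68, 73]
zero_based = {c: i for i, c in enumerate(sorted(classes))}

print(xmin, ymin, xmax, ymax, zero_based[47])
```

The same arithmetic is applied column-wise by `gen_list_files` below.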
def split_to_train_test(df, label_column, train_frac=0.8):
    train_df, test_df = pd.DataFrame(), pd.DataFrame()
    labels = df[label_column].unique()
    for lbl in labels:
        lbl_df = df[df[label_column] == lbl]
        lbl_train_df = lbl_df.sample(frac=train_frac)
        lbl_test_df = lbl_df.drop(lbl_train_df.index)
        print(
            "\n{}:\n---------\ntotal:{}\ntrain_df:{}\ntest_df:{}".format(
                lbl, len(lbl_df), len(lbl_train_df), len(lbl_test_df)
            )
        )
        train_df = train_df.append(lbl_train_df)
        test_df = test_df.append(lbl_test_df)
    return train_df, test_df


def gen_list_files():
    # use generated sizes file
    sizes_df = pd.read_csv(
        SIZE_FILE, sep=" ", names=["image_pretty_name", "width", "height"], header=None
    )
    bboxes_df = pd.read_csv(
        BBOX_FILE,
        sep=" ",
        names=["image_pretty_name", "x_abs", "y_abs", "bbox_width", "bbox_height"],
        header=None,
    )
    split_df = pd.read_csv(
        SPLIT_FILE, sep=" ", names=["image_pretty_name", "is_training_image"], header=None
    )
    print(IMAGE_FILE)
    images_df = pd.read_csv(
        IMAGE_FILE, sep=" ", names=["image_pretty_name", "image_file_name"], header=None
    )
    print("num images total: " + str(images_df.shape[0]))
    image_class_labels_df = pd.read_csv(
        LABEL_FILE, sep=" ", names=["image_pretty_name", "class_id"], header=None
    )

    # Merge the metadata into a single flat dataframe for easier processing
    full_df = pd.DataFrame(images_df)
    full_df.reset_index(inplace=True)
    full_df = pd.merge(full_df, image_class_labels_df, on="image_pretty_name")
    full_df = pd.merge(full_df, sizes_df, on="image_pretty_name")
    full_df = pd.merge(full_df, bboxes_df, on="image_pretty_name")
    full_df = pd.merge(full_df, split_df, on="image_pretty_name")
    full_df.sort_values(by=["index"], inplace=True)

    # Define the bounding boxes in the format required by SageMaker's built-in
    # Object Detection algorithm. The xmin/ymin/xmax/ymax parameters are
    # specified as ratios to the total image pixel size
    full_df["header_cols"] = 2  # one col for the number of header cols, one for the label width
    full_df["label_width"] = 5  # number of cols for each label: class, xmin, ymin, xmax, ymax
    full_df["xmin"] = full_df["x_abs"] / full_df["width"]
    full_df["xmax"] = (full_df["x_abs"] + full_df["bbox_width"]) / full_df["width"]
    full_df["ymin"] = full_df["y_abs"] / full_df["height"]
    full_df["ymax"] = (full_df["y_abs"] + full_df["bbox_height"]) / full_df["height"]

    # object detection class ids must be zero based. map from
    # class_ids given by CUB to zero-based (1 is 0, and 200 is 199).

    if SAMPLE_ONLY:
        # grab a small subset of species for testing
        criteria = full_df["class_id"].isin(CLASSES)
        full_df = full_df[criteria]

    unique_classes = full_df["class_id"].drop_duplicates()
    sorted_unique_classes = sorted(unique_classes)

    id_to_zero = {}
    i = 0.0
    for c in sorted_unique_classes:
        id_to_zero[c] = i
        i += 1.0

    full_df["zero_based_id"] = full_df["class_id"].map(id_to_zero)

    full_df.reset_index(inplace=True)

    # use 4 decimal places, as it seems to be required by the Object Detection algorithm
    pd.set_option("display.precision", 4)

    train_df = []
    val_df = []

    if RANDOM_SPLIT:
        # split into training and validation sets
        train_df, val_df = split_to_train_test(full_df, "class_id", TRAIN_RATIO)

        train_df[IM2REC_SSD_COLS].to_csv(TRAIN_LST_FILE, sep="\t", float_format="%.4f", header=None)
        val_df[IM2REC_SSD_COLS].to_csv(VAL_LST_FILE, sep="\t", float_format="%.4f", header=None)
    else:
        train_df = full_df[(full_df.is_training_image == 1)]
        train_df[IM2REC_SSD_COLS].to_csv(TRAIN_LST_FILE, sep="\t", float_format="%.4f", header=None)
        val_df = full_df[(full_df.is_training_image == 0)]
        val_df[IM2REC_SSD_COLS].to_csv(VAL_LST_FILE, sep="\t", float_format="%.4f", header=None)

    print("num train: " + str(train_df.shape[0]))
    print("num val: " + str(val_df.shape[0]))
    return train_df, val_df


train_df, val_df = gen_list_files()
Here we take a look at a few records from the training list file to understand better what is being fed to the RecordIO files.

The first column is the image number or index. The second column indicates that the header is made up of 2 columns (column 2 and column 3). The third column specifies the label width of a single object. In our case, the value 5 indicates each image has 5 numbers to describe its label information: the class index, and the 4 bounding box coordinates. If there are multiple objects within one image, all the label information should be listed in one line. Our dataset contains only one bounding box per image.

The fourth column is the class label. This identifies the bird species using a zero-based class id. Columns 5 through 8 represent the bounding box for where the bird is found in this image. The classes should be labeled with successive numbers and start with 0. The bounding box coordinates are ratios of its top-left (xmin, ymin) and bottom-right (xmax, ymax) corner indices to the overall image size. Note that the top-left corner of the entire image is the origin (0, 0). The last column specifies the relative path of the image file within the images directory.
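As a cross-check of the column layout described above, the following sketch assembles one such tab-separated row by hand. The index, coordinates, and file name are invented for illustration, not taken from the generated files:

```python
# Assemble one .lst row: index, header size, label width,
# zero-based class id, 4 bbox corner ratios, relative image path.
# All values below are hypothetical.
row = [
    42,        # image index
    2,         # header_cols: two header fields (this one and label_width)
    5,         # label_width: class id + 4 bounding box coordinates
    2.0,       # zero-based class id
    0.1200, 0.0771, 0.7700, 0.9457,  # xmin, ymin, xmax, ymax as ratios
    "047.American_Goldfinch/American_Goldfinch_0001_32142.jpg",
]

# Floats use 4 decimal places, mirroring float_format="%.4f" above
line = "\t".join(
    "{:.4f}".format(v) if isinstance(v, float) else str(v) for v in row
)
print(line)
```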
!tail -3 $TRAIN_LST_FILE
#### Step 3. Convert data into RecordIO format

Now we create im2rec databases (.rec files) for training and validation based on the list files created earlier.
!python tools/im2rec.py --resize $RESIZE_SIZE --pack-label birds_ssd_sample $BASE_DIR/images/
#### Step 4. Upload RecordIO files to S3

Upload the training and validation data to the S3 bucket. We do this in multiple channels. Channels are simply directories in the bucket that differentiate the types of data provided to the algorithm. For the object detection algorithm, we call these directories `train` and `validation`.
# Upload the RecordIO files to train and validation channels
train_channel = prefix + "/train"
validation_channel = prefix + "/validation"

sess.upload_data(path="birds_ssd_sample_train.rec", bucket=bucket, key_prefix=train_channel)
sess.upload_data(path="birds_ssd_sample_val.rec", bucket=bucket, key_prefix=validation_channel)

s3_train_data = "s3://{}/{}".format(bucket, train_channel)
s3_validation_data = "s3://{}/{}".format(bucket, validation_channel)
## Train the model

Next we define an output location in S3, where the model artifacts will be placed on completion of the training. These artifacts are the output of the algorithm's training job. We also get the URI to the Amazon SageMaker Object Detection docker image. This ensures the estimator uses the correct algorithm from the current region.
from sagemaker.amazon.amazon_estimator import get_image_uri

training_image = get_image_uri(sess.boto_region_name, "object-detection", repo_version="latest")
print(training_image)

s3_output_location = "s3://{}/{}/output".format(bucket, prefix)
od_model = sagemaker.estimator.Estimator(
    training_image,
    role,
    train_instance_count=1,
    train_instance_type="ml.p3.2xlarge",
    train_volume_size=50,
    train_max_run=360000,
    input_mode="File",
    output_path=s3_output_location,
    sagemaker_session=sess,
)
### Define hyperparameters

The object detection algorithm at its core is the [Single Shot MultiBox Detector (SSD)](https://arxiv.org/abs/1512.02325). This algorithm uses a `base_network`, which is typically a [VGG](https://arxiv.org/abs/1409.1556) or a [ResNet](https://arxiv.org/abs/1512.03385). The Amazon SageMaker object detection algorithm supports VGG-16 and ResNet-50. It also has a number of hyperparameters that help configure the training job. The next step in our training is to set up these hyperparameters and data channels for training the model. See the SageMaker Object Detection [documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/object-detection.html) for more details on its specific hyperparameters.

One of the hyperparameters, for example, is `epochs`. This defines how many passes of the dataset we iterate over and drives the training time of the algorithm. Based on our tests, we can achieve 70% accuracy on a sample mix of 5 species with 100 epochs. When using the full 200 species, we can achieve 52% accuracy with 1,200 epochs.

Note that Amazon SageMaker also provides [Automatic Model Tuning](https://docs.aws.amazon.com/sagemaker/latest/dg/automatic-model-tuning.html). Automatic model tuning, also known as hyperparameter tuning, finds the best version of a model by running many training jobs on your dataset using the algorithm and ranges of hyperparameters that you specify. It then chooses the hyperparameter values that result in a model that performs the best, as measured by a metric that you choose. When [tuning an Object Detection](https://docs.aws.amazon.com/sagemaker/latest/dg/object-detection-tuning.html) model, for example, the tuning job could find the best `validation:mAP` score by trying out various values for certain hyperparameters such as `mini_batch_size`, `weight_decay`, and `momentum`.
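As a side note on the learning-rate hyperparameters used in this notebook (`learning_rate=0.001`, `lr_scheduler_step="33,67"`, `lr_scheduler_factor=0.1`): together they mean the rate is cut by 10x around epochs 33 and 67. A rough sketch of that schedule, assuming the factor applies from each step boundary onward (the algorithm's exact boundary handling may differ slightly):

```python
def lr_at_epoch(epoch, base_lr=0.001, steps=(33, 67), factor=0.1):
    """Approximate the step learning-rate schedule: multiply the base
    rate by `factor` once for each step boundary already passed."""
    lr = base_lr
    for s in steps:
        if epoch >= s:
            lr *= factor
    return lr

print(lr_at_epoch(10))   # full base rate in the early epochs
print(lr_at_epoch(40))   # one 10x drop after epoch 33
print(lr_at_epoch(80))   # a second 10x drop after epoch 67
```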
def set_hyperparameters(num_epochs, lr_steps):
    num_classes = classes_df.shape[0]
    num_training_samples = train_df.shape[0]
    print("num classes: {}, num training images: {}".format(num_classes, num_training_samples))

    od_model.set_hyperparameters(
        base_network="resnet-50",
        use_pretrained_model=1,
        num_classes=num_classes,
        mini_batch_size=16,
        epochs=num_epochs,
        learning_rate=0.001,
        lr_scheduler_step=lr_steps,
        lr_scheduler_factor=0.1,
        optimizer="sgd",
        momentum=0.9,
        weight_decay=0.0005,
        overlap_threshold=0.5,
        nms_threshold=0.45,
        image_shape=512,
        label_width=350,
        num_training_samples=num_training_samples,
    )


set_hyperparameters(100, "33,67")
Now that the hyperparameters are set up, we define the data channels to be passed to the algorithm. To do this, we need to create the `sagemaker.session.s3_input` objects from our data channels. These objects are then put in a simple dictionary, which the algorithm consumes. Note that you could add a third channel named `model` to perform incremental training (continue training from where you had left off with a prior model).
train_data = sagemaker.session.s3_input(
    s3_train_data,
    distribution="FullyReplicated",
    content_type="application/x-recordio",
    s3_data_type="S3Prefix",
)
validation_data = sagemaker.session.s3_input(
    s3_validation_data,
    distribution="FullyReplicated",
    content_type="application/x-recordio",
    s3_data_type="S3Prefix",
)
data_channels = {"train": train_data, "validation": validation_data}
### Submit training job

We have our `Estimator` object, we have set the hyperparameters for this object, and we have our data channels linked with the algorithm. The only remaining thing to do is to train the algorithm using the `fit` method. This will take more than 10 minutes in our example.

The training process involves a few steps. First, the instances that we requested while creating the `Estimator` are provisioned and set up with the appropriate libraries. Then, the data from our channels are downloaded into the instance. Once this is done, the actual training begins. The provisioning and data downloading will take time, depending on the size of the data. Therefore it might be a few minutes before our training job logs show up in CloudWatch. The logs will also print out the mean average precision (mAP) on the validation data, among other losses, for every run of the dataset (once per epoch). This metric is a proxy for the accuracy of the model.

Once the job has finished, a `Job complete` message will be printed. The trained model artifacts can be found in the S3 bucket that was set up as `output_path` in the estimator.
%%time

od_model.fit(inputs=data_channels, logs=True)
Now that the training job is complete, you can also see the job listed in the `Training jobs` section of your SageMaker console. Note that the job name is uniquely identified by the name of the algorithm concatenated with the date and time stamp. You can click on the job to see the details, including the hyperparameters, the data channel definitions, and the full path to the resulting model artifacts. You could even clone the job from the console, and tweak some of the parameters to generate a new training job.

Without having to go to the CloudWatch console, you can see how the job progressed in terms of the key object detection algorithm metric, mean average precision (mAP). The function below prepares a simple chart of that metric against the epochs.
import boto3 import numpy as np import matplotlib.pyplot as plt import matplotlib.ticker as ticker %matplotlib inline client = boto3.client("logs") BASE_LOG_NAME = "/aws/sagemaker/TrainingJobs" def plot_object_detection_log(model, title): logs = client.describe_log_streams( logGroupName=BASE_LOG_NAME, logStreamNamePrefix=model._current_job_name ) cw_log = client.get_log_events( logGroupName=BASE_LOG_NAME, logStreamName=logs["logStreams"][0]["logStreamName"] ) mAP_accs = [] for e in cw_log["events"]: msg = e["message"] if "validation mAP <score>=" in msg: num_start = msg.find("(") num_end = msg.find(")") mAP = msg[num_start + 1 : num_end] mAP_accs.append(float(mAP)) print(title) print("Maximum mAP: %f " % max(mAP_accs)) fig, ax = plt.subplots() plt.xlabel("Epochs") plt.ylabel("Mean Avg Precision (mAP)") (val_plot,) = ax.plot(range(len(mAP_accs)), mAP_accs, label="mAP") plt.legend(handles=[val_plot]) ax.yaxis.set_ticks(np.arange(0.0, 1.05, 0.1)) ax.yaxis.set_major_formatter(ticker.FormatStrFormatter("%0.2f")) plt.show() plot_object_detection_log(od_model, "mAP tracking for job: " + od_model._current_job_name)
Host the model Once training is done, we can deploy the trained model as an Amazon SageMaker real-time hosted endpoint. This lets us make predictions (or inferences) from the model. Note that we don't have to host on the same type of instance that we used to train. Training is a prolonged, compute-heavy job with compute and memory requirements that hosting typically does not share. In our case we chose the `ml.p3.2xlarge` instance to train, but we choose to host the model on the less expensive CPU instance, `ml.m4.xlarge`. The endpoint deployment takes several minutes and can be accomplished with a single line of code calling the `deploy` method. Note that some use cases require large sets of inferences on a predefined body of images. In those cases, you do not need to make the inferences in real time; instead, you could use SageMaker's [batch transform jobs](https://docs.aws.amazon.com/sagemaker/latest/dg/how-it-works-batch.html).
%%time object_detector = od_model.deploy(initial_instance_count=1, instance_type="ml.m4.xlarge")
Test the model Now that the trained model is deployed at an endpoint that is up and running, we can use this endpoint for inference. The results of a call to the inference endpoint are in a format that is similar to the .lst format, with the addition of a confidence score for each detected object. The format of the output can be represented as `[class_index, confidence_score, xmin, ymin, xmax, ymax]`. Typically, we don't visualize low-confidence predictions. We have provided a script to easily visualize the detection outputs. You can visualize the high-confidence predictions with bounding boxes by filtering out low-confidence detections using the script below:
def visualize_detection(img_file, dets, classes=[], thresh=0.6): """ visualize detections in one image Parameters: ---------- img : numpy.array image, in bgr format dets : numpy.array ssd detections, numpy.array([[id, score, x1, y1, x2, y2]...]) each row is one object classes : tuple or list of str class names thresh : float score threshold """ import random import matplotlib.pyplot as plt import matplotlib.image as mpimg img = mpimg.imread(img_file) plt.imshow(img) height = img.shape[0] width = img.shape[1] colors = dict() num_detections = 0 for det in dets: (klass, score, x0, y0, x1, y1) = det if score < thresh: continue num_detections += 1 cls_id = int(klass) if cls_id not in colors: colors[cls_id] = (random.random(), random.random(), random.random()) xmin = int(x0 * width) ymin = int(y0 * height) xmax = int(x1 * width) ymax = int(y1 * height) rect = plt.Rectangle( (xmin, ymin), xmax - xmin, ymax - ymin, fill=False, edgecolor=colors[cls_id], linewidth=3.5, ) plt.gca().add_patch(rect) class_name = str(cls_id) if classes and len(classes) > cls_id: class_name = classes[cls_id] print("{},{}".format(class_name, score)) plt.gca().text( xmin, ymin - 2, "{:s} {:.3f}".format(class_name, score), bbox=dict(facecolor=colors[cls_id], alpha=0.5), fontsize=12, color="white", ) print("Number of detections: " + str(num_detections)) plt.show()
Now we use our endpoint to try to detect objects within an image. Since the image is a JPEG, we use the appropriate content_type to run the prediction. The endpoint returns a JSON object that we can simply load and peek into. We have packaged the prediction code into a function to make it easier to test other images. Note that we are defaulting the confidence threshold to 40% in our example, as a couple of the birds in our sample images were not being detected as clearly. Defining an appropriate threshold is entirely dependent on your use case.
OBJECT_CATEGORIES = classes_df["class_id"].values.tolist() def show_bird_prediction(filename, ep, thresh=0.40): b = "" with open(filename, "rb") as image: f = image.read() b = bytearray(f) endpoint_response = runtime.invoke_endpoint(EndpointName=ep, ContentType="image/jpeg", Body=b) results = endpoint_response["Body"].read() detections = json.loads(results) visualize_detection(filename, detections["prediction"], OBJECT_CATEGORIES, thresh)
Here we download images that the algorithm has not yet seen.
!wget -q -O multi-goldfinch-1.jpg https://t3.ftcdn.net/jpg/01/44/64/36/500_F_144643697_GJRUBtGc55KYSMpyg1Kucb9yJzvMQooW.jpg !wget -q -O northern-flicker-1.jpg https://upload.wikimedia.org/wikipedia/commons/5/5c/Northern_Flicker_%28Red-shafted%29.jpg !wget -q -O northern-cardinal-1.jpg https://cdn.pixabay.com/photo/2013/03/19/04/42/bird-94957_960_720.jpg !wget -q -O blue-jay-1.jpg https://cdn12.picryl.com/photo/2016/12/31/blue-jay-bird-feather-animals-b8ee04-1024.jpg !wget -q -O hummingbird-1.jpg http://res.freestockphotos.biz/pictures/17/17875-hummingbird-close-up-pv.jpg def test_model(): show_bird_prediction("hummingbird-1.jpg", object_detector.endpoint) show_bird_prediction("blue-jay-1.jpg", object_detector.endpoint) show_bird_prediction("multi-goldfinch-1.jpg", object_detector.endpoint) show_bird_prediction("northern-flicker-1.jpg", object_detector.endpoint) show_bird_prediction("northern-cardinal-1.jpg", object_detector.endpoint) test_model()
Clean upHere we delete the SageMaker endpoint, as we will no longer be performing any inferences. This is an important step, as your account is billed for the amount of time an endpoint is running, even when it is idle.
sagemaker.Session().delete_endpoint(object_detector.endpoint)
Improve the model Define a function to flip the images horizontally (on the x-axis)
from PIL import Image def flip_images(): print("Flipping images...") SIZE_COLS = ["idx", "width", "height"] IMAGE_COLS = ["image_pretty_name", "image_file_name"] LABEL_COLS = ["image_pretty_name", "class_id"] BBOX_COLS = ["image_pretty_name", "x_abs", "y_abs", "bbox_width", "bbox_height"] SPLIT_COLS = ["image_pretty_name", "is_training_image"] images_df = pd.read_csv(BASE_DIR + "images.txt", sep=" ", names=IMAGE_COLS, header=None) image_class_labels_df = pd.read_csv( BASE_DIR + "image_class_labels.txt", sep=" ", names=LABEL_COLS, header=None ) bboxes_df = pd.read_csv(BASE_DIR + "bounding_boxes.txt", sep=" ", names=BBOX_COLS, header=None) split_df = pd.read_csv( BASE_DIR + "train_test_split.txt", sep=" ", names=SPLIT_COLS, header=None ) NUM_ORIGINAL_IMAGES = images_df.shape[0] rows_list = [] bbox_rows_list = [] size_rows_list = [] label_rows_list = [] split_rows_list = [] idx = 0 full_df = images_df.copy() full_df.reset_index(inplace=True) full_df = pd.merge(full_df, image_class_labels_df, on="image_pretty_name") full_df = pd.merge(full_df, bboxes_df, on="image_pretty_name") full_df = pd.merge(full_df, split_df, on="image_pretty_name") full_df.sort_values(by=["index"], inplace=True) if SAMPLE_ONLY: # grab a small subset of species for testing criteria = full_df["class_id"].isin(CLASSES) full_df = full_df[criteria] for rel_image_fn in full_df["image_file_name"]: idx += 1 full_img_content = full_df[(full_df.image_file_name == rel_image_fn)] class_id = full_img_content.iloc[0].class_id img = Image.open(IMAGES_DIR + rel_image_fn) width, height = img.size new_idx = idx + NUM_ORIGINAL_IMAGES flip_core_file_name = rel_image_fn[:-4] + "_flip.jpg" flip_full_file_name = IMAGES_DIR + flip_core_file_name img_flip = img.transpose(Image.FLIP_LEFT_RIGHT) img_flip.save(flip_full_file_name) # append a new image dict = {"image_pretty_name": new_idx, "image_file_name": flip_core_file_name} rows_list.append(dict) # append a new split, use same flag for flipped image from original image 
is_training_image = full_img_content.iloc[0].is_training_image split_dict = {"image_pretty_name": new_idx, "is_training_image": is_training_image} split_rows_list.append(split_dict) # append a new image class label label_dict = {"image_pretty_name": new_idx, "class_id": class_id} label_rows_list.append(label_dict) # add a size row for the original and the flipped image, same height and width size_dict = {"idx": idx, "width": width, "height": height} size_rows_list.append(size_dict) size_dict = {"idx": new_idx, "width": width, "height": height} size_rows_list.append(size_dict) # append bounding box for flipped image x_abs = full_img_content.iloc[0].x_abs y_abs = full_img_content.iloc[0].y_abs bbox_width = full_img_content.iloc[0].bbox_width bbox_height = full_img_content.iloc[0].bbox_height flipped_x_abs = width - bbox_width - x_abs bbox_dict = { "image_pretty_name": new_idx, "x_abs": flipped_x_abs, "y_abs": y_abs, "bbox_width": bbox_width, "bbox_height": bbox_height, } bbox_rows_list.append(bbox_dict) print("Done looping through original images") images_df = images_df.append(rows_list) images_df[IMAGE_COLS].to_csv(IMAGE_FILE, sep=" ", index=False, header=None) bboxes_df = bboxes_df.append(bbox_rows_list) bboxes_df[BBOX_COLS].to_csv(BBOX_FILE, sep=" ", index=False, header=None) split_df = split_df.append(split_rows_list) split_df[SPLIT_COLS].to_csv(SPLIT_FILE, sep=" ", index=False, header=None) sizes_df = pd.DataFrame(size_rows_list) sizes_df[SIZE_COLS].to_csv(SIZE_FILE, sep=" ", index=False, header=None) image_class_labels_df = image_class_labels_df.append(label_rows_list) image_class_labels_df[LABEL_COLS].to_csv(LABEL_FILE, sep=" ", index=False, header=None) print("Done saving metadata in text files")
Re-train the model with the expanded dataset
%%time BBOX_FILE = BASE_DIR + "bounding_boxes_with_flip.txt" IMAGE_FILE = BASE_DIR + "images_with_flip.txt" LABEL_FILE = BASE_DIR + "image_class_labels_with_flip.txt" SIZE_FILE = BASE_DIR + "sizes_with_flip.txt" SPLIT_FILE = BASE_DIR + "train_test_split_with_flip.txt" # add a set of flipped images flip_images() # show the new full set of images for a species show_species("017.Cardinal") # create new sizes file gen_image_size_file() # re-create and re-deploy the RecordIO files with the updated set of images train_df, val_df = gen_list_files() !python tools/im2rec.py --resize $RESIZE_SIZE --pack-label birds_ssd_sample $BASE_DIR/images/ sess.upload_data(path="birds_ssd_sample_train.rec", bucket=bucket, key_prefix=train_channel) sess.upload_data(path="birds_ssd_sample_val.rec", bucket=bucket, key_prefix=validation_channel) # account for the new number of training images set_hyperparameters(100, "33,67") # re-train od_model.fit(inputs=data_channels, logs=True) # check out the new accuracy plot_object_detection_log(od_model, "mAP tracking for job: " + od_model._current_job_name)
Re-deploy and test
# host the updated model object_detector = od_model.deploy(initial_instance_count=1, instance_type="ml.m4.xlarge") # test the new model test_model()
Final cleanupHere we delete the SageMaker endpoint, as we will no longer be performing any inferences. This is an important step, as your account is billed for the amount of time an endpoint is running, even when it is idle.
# delete the new endpoint sagemaker.Session().delete_endpoint(object_detector.endpoint)
Test
import fastai.train import pandas as pd import torch import torch.nn as nn from captum.attr import LayerIntegratedGradients # --- Model Setup --- # Load a fast.ai `Learner` trained to predict IMDB review category `[negative, positive]` awd = fastai.train.load_learner(".", "imdb_fastai_trained_lm_clf.pth") awd.model[0].bptt = 200 # getting to the actual layer that holds embeddings embedding_layer = awd.model[0]._modules["module"]._modules["encoder_dp"] # working around the model prediction - first output only, apply softmax forward_func = lambda x: torch.softmax(awd.model(x)[0], dim=-1) # make integrated gradients instance lig = LayerIntegratedGradients(forward_func, embedding_layer) # Explainer logic def get_attributions_for_sentence( sentence, awd_model=awd, lig_instance=lig, target=None, lig_n_steps=200, baseline_token="\n \n ", ): awd = awd_model lig = lig_instance vocab = awd.data.x.vocab sentence_tokens = awd.data.one_item(sentence)[0] reversed_tokens = [vocab.itos[w] for w in sentence_tokens[0]] baseline = ( torch.ones_like(sentence_tokens) * vocab.stoi[baseline_token] ) # see "how to choose a good baseline" baseline[0, 0] = vocab.stoi["xxbos"] # beginning of sentence is always #1 y = awd.predict(sentence) if target is None: target = y[1].item() attrs = lig.attribute(sentence_tokens, baseline, target, n_steps=lig_n_steps) a = attrs.sum(-1) a = a / torch.norm(a) return (pd.Series(a.numpy()[0], index=reversed_tokens), y) # https://www.imdb.com/review/rw5384922/?ref_=tt_urv review_1917 = """I sat in a packed yet silent theater this morning and watched, what I believe to be, the next Academy Award winner for the Best Picture.""" """I'm not at all a fan of war movies but I am a fan of great movies... and 1917 is a great movie. I have never been so mesmerized by set design and direction, the mass human emotion of this film is astonishingly captured and embedded magically in the audience. 
It keeps running through my mind...the poetry and beauty intertwined with the raw misery of war. Treat yourself... see this movie! """; import ipyvuetify as v import ipywidgets as w class Chip(v.Chip): positive = "0, 255, 0" negative = "255, 0, 0" def __init__(self, word, attribution): direction = self.positive if attribution >= 0 else self.negative color = f"rgba({direction}, {abs(attribution):.2f})" super().__init__( class_="mx-0 px-1", children=[word], color=color, value=attribution, label=True, small=True, ) def saliency_chips(attributions: pd.Series) -> v.ChipGroup: children = [Chip(w, a) for w, a in attributions.iteritems()] return v.ChipGroup(column=True, children=children) @w.interact_manual( sentence=w.Textarea(review_1917), target=[None, 0, 1], baseline_token=["\n \n", ".", "<BOS>"], ) def display_attributions(sentence="Great film", target=None, baseline_token="\n \n "): attributions, prediction = get_attributions_for_sentence(sentence) return saliency_chips(attributions)
Apache-2.0
Interactive.ipynb
MichaMucha/awdlstm-integrated-gradients
Load the data
import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns import datetime as dt import warnings warnings.filterwarnings('ignore') %matplotlib inline import matplotlib as mat import matplotlib.font_manager as fonm font_list = [font.name for font in fonm.fontManager.ttflist] # for f in font_list: # print(f"{f}.ttf") mat.rcParams['font.family'] = 'Hancom Gothic' def str_col(df): col = [] for i in range(0,len(df.dtypes)): if str(df.dtypes[i]) == 'object': col.append(df.dtypes.index[i]) print(col) return col def int_col(df): col = [] for i in range(0,len(df.dtypes)): if str(df.dtypes[i]) != 'object': col.append(df.dtypes.index[i]) print(col) return col def p_100(a, b): print( round( (a/(a+b))*100,2), "%" ) def extraction_func(df, col_name, num_list): temp = pd.DataFrame() for i in num_list: temp = pd.concat([ temp, df.loc[df[col_name] == i ] ],axis=0) return temp def unique_check(df): for i in range(0,len(df.columns)): if df[df.columns[i]].isnull().sum() > 0: print("Impossible if there are None : ",df.columns[i]) col_1 = [] col_2 = [] for i in range(0,len(df.columns)): if type(df[df.columns[i]][0]) == str: col_1.append(df.columns[i]) if df[df.columns[i]].nunique() > 5: col_2.append(df.columns[i]) print(df.columns[i],"컬럼의 unique 개수는 ",df[df.columns[i]].nunique(),"개") return col_1, col_2 insurance = pd.read_csv('./temp_data/insurance.csv',encoding='utf-8') print(insurance.shape) print(insurance.dtypes) print(insurance.isnull().sum()) insurance.tail(5) insurance = insurance.astype({'RESI_TYPE_CODE': str, 'MINCRDT':str, 'MAXCRDT':str, 'ACCI_DVSN':str, 'DMND_RESN_CODE':str, 'CUST_ROLE':str})
BSD-Source-Code
2. data_check.ipynb
DUYONGBEAK/Insurance-fraud-detection-model
Copy the data
copy_insurance = insurance.copy()
De-identification and removal of columns with many unique values - Columns with many unique values are difficult to encode, so we drop them - When these columns were not actually dropped, one-hot encoding blew the feature dimension up to around 60,000
col_1, col_2 = unique_check(copy_insurance) col_2.remove('RESI_TYPE_CODE') col_2.remove('OCCP_GRP_1') col_2.remove('MINCRDT') col_2.remove('MAXCRDT') col_2.remove('DMND_RESN_CODE') col_2.remove('CUST_ROLE') # index를 CUST_ID로 변경 copy_insurance.set_index('CUST_ID', inplace=True) copy_insurance.drop(col_2, axis=1, inplace=True)
Explore the data Check the correlations between the variables
### 필요한 모듈 불러오기 #%matplotlib inline # 시각화 결과를 Jupyter Notebook에서 바로 보기 # import matplotlib.pyplot as plt # 모듈 불러오기 ### 상관계수 테이블 corr = copy_insurance.corr() # 'df'라는 데이터셋을 'corr'라는 이름의 상관계수 테이블로 저장 ### 상관계수 히트맵 그리기 # 히트맵 사이즈 설정 plt.figure(figsize = (20, 15)) # 히트맵 형태 정의. 여기서는 삼각형 형태(위 쪽 삼각형에 True, 아래 삼각형에 False) mask = np.zeros_like(corr, dtype=np.bool) mask[np.triu_indices_from(mask)] = True # 히트맵 그리기 sns.heatmap(data = corr, # 'corr' = 상관계수 테이블 annot = True, # 히트맵에 값 표시 mask=mask, # 히트맵 형태. 여기서는 위에서 정의한 삼각형 형태 fmt = '.2f', # 값 표시 방식. 소숫점 2번째자리까지 linewidths = 1., # 경계면 실선 구분 여부 cmap = 'RdYlBu_r') # 사용할 색 지정 ('python colormap 검색') plt.title('상관계수 히트맵') plt.show()
Remove highly correlated columns
copy_insurance = copy_insurance[copy_insurance.columns.difference(['LTBN_CHLD_AGE','JPBASE_HSHD_INCM'])]
Check whether the data follows a normal distribution - Min-max normalization: puts all features on the same scale, but does not handle outliers well. (X - MIN) / (MAX - MIN) - Z-score normalization (standardization): handles outliers well, but does not produce data normalized to exactly the same bounded scale. (X - mean) / standard deviation
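The two formulas above can be sketched directly in numpy (illustrative only; the small array below is made up to show how an outlier affects each scheme):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 100.0])  # 100.0 plays the role of an outlier

# Min-max normalization: (X - MIN) / (MAX - MIN) -> everything lands in [0, 1],
# but the outlier squashes the ordinary values toward 0.
minmax = (x - x.min()) / (x.max() - x.min())
print(minmax)  # the first four values are all below ~0.031

# Z-score standardization: (X - mean) / std -> not bounded to a fixed range,
# but the outlier clearly stands out as a large score.
zscore = (x - x.mean()) / x.std()
print(zscore)
```

Note how min-max guarantees a [0, 1] range while z-scores guarantee zero mean and unit standard deviation instead.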
plot_target = int_col(copy_insurance) import scipy.stats as stats for i in plot_target: print(i,"의 가우시안 분포 확인") fig = plt.figure(figsize=(15,3)) ax1 = fig.add_subplot(1,2,1) ax2 = fig.add_subplot(1,2,2) stats.probplot(copy_insurance[i], dist=stats.norm,plot=ax1) mu = copy_insurance[i].mean() variance = copy_insurance[i].var() sigma = variance ** 0.5 x=np.linspace(mu - 3*sigma, mu + 3*sigma, 100) ax2.plot(x, stats.norm.pdf(x,mu,sigma), color="blue",label="theoretical") sns.distplot(ax=ax2, a=copy_insurance[i], bins=100, color="red", label="observed") ax2.legend() plt.show() print()
AGE 의 가우시안 분포 확인
Hypothesis testing with stats.kstest - The null hypothesis is that the data follows a normal distribution.
for i in plot_target: print(i,"귀무가설의 기각 여부 확인") test_state, p_val = stats.kstest(copy_insurance[i],'norm',args=(copy_insurance[i].mean(), copy_insurance[i].var()**0.5) ) print("Test-statistics : {:.5f}, p-value : {:.5f}".format(test_state, p_val)) print()
AGE 귀무가설의 기각 여부 확인 Test-statistics : 0.05453, p-value : 0.00000 CHLD_CNT 귀무가설의 기각 여부 확인 Test-statistics : 0.36416, p-value : 0.00000 CLAIM_NUM 귀무가설의 기각 여부 확인 Test-statistics : 0.25656, p-value : 0.00000 CUST_INCM 귀무가설의 기각 여부 확인 Test-statistics : 0.29274, p-value : 0.00000 CUST_RGST 귀무가설의 기각 여부 확인 Test-statistics : 0.30788, p-value : 0.00000 DISTANCE 귀무가설의 기각 여부 확인 Test-statistics : 0.27584, p-value : 0.00000 HOUSE_HOSP_DIST 귀무가설의 기각 여부 확인 Test-statistics : 0.32806, p-value : 0.00000 PAYM_AMT 귀무가설의 기각 여부 확인 Test-statistics : 0.42303, p-value : 0.00000 RCBASE_HSHD_INCM 귀무가설의 기각 여부 확인 Test-statistics : 0.11014, p-value : 0.00000 RESI_COST 귀무가설의 기각 여부 확인 Test-statistics : 0.13425, p-value : 0.00000 RESN_DATE_NUM 귀무가설의 기각 여부 확인 Test-statistics : 0.25856, p-value : 0.00000 SUM_ORIG_PREM 귀무가설의 기각 여부 확인 Test-statistics : 0.44341, p-value : 0.00000 TOTALPREM 귀무가설의 기각 여부 확인 Test-statistics : 0.27630, p-value : 0.00000
Since every column except AGE fails to follow a normal distribution, we apply normalization with MinMaxScaler
from sklearn.preprocessing import MinMaxScaler int_data = copy_insurance[plot_target] # 인덱스 빼두기 index = int_data.index # MinMaxcaler 객체 생성 scaler = MinMaxScaler() # MinMaxcaler로 데이터 셋 변환 .fit( ) 과 .transform( ) 호출 scaler.fit(int_data) data_scaled = scaler.transform(int_data) # int_data.loc[:,:] = data_scaled # transform( )시 scale 변환된 데이터 셋이 numpy ndarry로 반환되어 이를 DataFrame으로 변환 data_scaled = pd.DataFrame(data=data_scaled, columns=int_data.columns, index=index) print('feature 들의 정규화 최소 값') print(data_scaled.min()) print('\nfeature 들의 정규화 최대 값') print(data_scaled.max())
feature 들의 정규화 최소 값 AGE 0.0 CHLD_CNT 0.0 CLAIM_NUM 0.0 CUST_INCM 0.0 CUST_RGST 0.0 DISTANCE 0.0 HOUSE_HOSP_DIST 0.0 PAYM_AMT 0.0 RCBASE_HSHD_INCM 0.0 RESI_COST 0.0 RESN_DATE_NUM 0.0 SUM_ORIG_PREM 0.0 TOTALPREM 0.0 dtype: float64 feature 들의 정규화 최대 값 AGE 1.0 CHLD_CNT 1.0 CLAIM_NUM 1.0 CUST_INCM 1.0 CUST_RGST 1.0 DISTANCE 1.0 HOUSE_HOSP_DIST 1.0 PAYM_AMT 1.0 RCBASE_HSHD_INCM 1.0 RESI_COST 1.0 RESN_DATE_NUM 1.0 SUM_ORIG_PREM 1.0 TOTALPREM 1.0 dtype: float64
One-hot encode the remaining categorical columns, excluding the label column
onehot_target = str_col(copy_insurance) onehot_target.remove('SIU_CUST_YN') str_data = copy_insurance[onehot_target] onehot_data = pd.get_dummies(str_data)
['ACCI_DVSN', 'CUST_ROLE', 'DMND_RESN_CODE', 'FP_CAREER', 'HEED_HOSP_YN', 'MAXCRDT', 'MINCRDT', 'OCCP_GRP_1', 'RESI_TYPE_CODE', 'SEX', 'SIU_CUST_YN', 'WEDD_YN']
Concatenate the encoded data, the scaled data, and the label, then save the result
concat_data = pd.concat([data_scaled, onehot_data, copy_insurance['SIU_CUST_YN']], axis=1) concat_data.to_csv('./temp_data/save_scaled_insurance.csv',index = True)
Repertoire classification subsamplingWhen training a classifier to assign repertoires to the subject from which they were obtained, we need a set of subsampled sequences. The sequences have been condensed to just the V- and J-gene assignments and the CDR3 length (VJ-CDR3len). Subsample sizes range from 10 to 10,000 sequences per biological replicate.The [`abutils`](https://www.github.com/briney/abutils) Python package is required for this notebook, and can be installed by running `pip install abutils`.*NOTE: this notebook requires the use of the Unix command line tool `shuf`. Thus, it requires a Unix-based operating system to run correctly (MacOS and most flavors of Linux should be fine). Running this notebook on Windows 10 may be possible using the [Windows Subsystem for Linux](https://docs.microsoft.com/en-us/windows/wsl/about) but we have not tested this.*
from __future__ import print_function, division from collections import Counter import os import subprocess as sp import sys import tempfile from abutils.utils.pipeline import list_files, make_dir
MIT
data_processing/05_repertoire-classification-subsampling.ipynb
Linda-Lan/grp_paper
Subjects, subsample sizes, and directoriesThe `input_dir` should contain deduplicated clonotype sequences. The datafiles are too large to be included in the Github repository, but may be downloaded [**here**](http://burtonlab.s3.amazonaws.com/GRP_github_data/techrep-merged_vj-cdr3len_no-header.tar.gz). If downloading the data (which will be downloaded as a compressed archive), decompress the archive in the `data` directory (in the same parent directory as this notebook) and you should be ready to go. If you want to store the downloaded data in some other location, adjust the `input_dir` path below as needed.By default, subsample sizes increase by 10 from 10 to 100, by 100 from 100 to 1,000, and by 1,000 from 1,000 to 10,000.
with open('./data/subjects.txt') as f: subjects = sorted(f.read().split()) subsample_sizes = list(range(10, 100, 10)) + list(range(100, 1000, 100)) + list(range(1000, 11000, 1000)) input_dir = './data/techrep-merged_vj-cdr3len_no-header/' subsample_dir = './data/repertoire_classification/user-created_subsamples_vj-cdr3len' make_dir(subsample_dir)
Subsampling
def subsample(infile, outfile, n_seqs, iterations): with open(outfile, 'w') as f: f.write('') shuf_cmd = 'shuf -n {} {}'.format(n_seqs, infile) p = sp.Popen(shuf_cmd, stdout=sp.PIPE, stderr=sp.PIPE, shell=True) stdout, stderr = p.communicate() with open(outfile, 'a') as f: for iteration in range(iterations): seqs = ['_'.join(s.strip().split()) for s in stdout.strip().split('\n') if s.strip()] counts = Counter(seqs) count_strings = [] for k, v in counts.items(): count_strings.append('{}:{}'.format(k, v)) f.write(','.join(count_strings) + '\n') for subject in subjects: print(subject) files = list_files(os.path.join(input_dir, subject)) for file_ in files: for subsample_size in subsample_sizes: num = os.path.basename(file_).split('_')[0] ofile = os.path.join(subsample_dir, '{}_{}-{}'.format(subject, subsample_size, num)) subsample(file_, ofile, subsample_size, 50)
Strata objects: Legend and Column Strata is stratigraphic data. The main object of the `strata` submodule is `mplStrater.strata.Column`, which represents a single stratigraphic column. This example shows the structure of the class and how to use it. First, import all required packages and load the example dataset.
%load_ext autoreload %autoreload 2 from mplStrater.data import StrataFrame from mplStrater.strata import Column,Legend import pandas as pd import matplotlib.pyplot as plt df=pd.read_csv("../../../data/example.csv") df.head()
MIT
docs/examples/strata.ipynb
giocaizzi/mplStrater
Then, instantiate an `mplStrater.data.StrataFrame`, providing a `pandas.DataFrame` and specifying its `epsg` code.
sf=StrataFrame( df=df, epsg=32633)
Define a `Legend`. This is done by providing dictionaries of value-specification pairs to the `fill_dict` and `hatch_dict` parameters. Each dictionary maps the dataframe's `fill` or `hatch` column values to a *matplotlib encoded color* string or an *encoded hatch* string, respectively. The example uses the following dictionaries.
fill_dict={ 'Terreno conforme': 'lightgreen', 'Riporto conforme': 'darkgreen', 'Riporto non conforme': 'orange', 'Rifiuto': 'red', 'Assenza campione': 'white' } hatch_dict={ 'Non pericoloso': '', 'Pericoloso': 'xxxxxxxxx', '_': '' } l=Legend( fill_dict=fill_dict, hatch_dict=hatch_dict )
Plot stand-alone `Column` objects Imagine we need to inspect a column closely. We might not be able to do that clearly on the map with all the other elements (labels, basemap...). Exporting the map as a high-resolution PDF and opening the local file would take sooo long! Therefore the `Column` object has its own `plot()` method. Let's plot the first three columns of the strataframe.
sf.strataframe[:3]
Plot the first three columns contained in the `StrataFrame`.
#create figure f,axes=plt.subplots(1,4,figsize=(5,3),dpi=200,frameon=False) for ax,i in zip(axes,range(4)): ax.axis('off') #instantiate class c=Column( #figure ax,l, #id sf.strataframe.loc[i,"ID"], #coords (0.9,0.9), #scale sf.strataframe.loc[i,"scale"], 3, #stratigraphic data sf.strataframe.loc[i,"layers"], sf.strataframe.loc[i,"fill_list"], sf.strataframe.loc[i,"hatch_list"], #labels sf.strataframe.loc[i,"lbl1_list"], sf.strataframe.loc[i,"lbl2_list"], sf.strataframe.loc[i,"lbl3_list"]) ax.set_title(c.id) c.fill_column() c.set_inset_params() c.label_column(hardcoding=None)
Sometimes it is useful to take a random choice between two or more options. Numpy has a function for that, called `random.choice`:
import numpy as np
CC-BY-4.0
notebooks/10/random_choice.ipynb
matthew-brett/cfd-uob
Say we want to choose randomly between 0 and 1. We want an equal probability of getting 0 and getting 1. We could do it like this:
np.random.randint(0, 2)
If we do that lots of times, we see that we have a roughly 50% chance of getting 0 (and therefore, a roughly 50% chance of getting 1).
# Make 10000 random numbers that can be 0 or 1, with equal probability. lots_of_0_1 = np.random.randint(0, 2, size=10000) # Count the proportion that are 1. np.count_nonzero(lots_of_0_1) / 10000
Run the cell above a few times to confirm you get numbers very close to 0.5. Another way of doing this is to use `np.random.choice`. As usual, check the arguments that the function expects with `np.random.choice?` in a notebook cell. The first argument is a sequence, like a list, with the options that Numpy should choose from. For example, we can ask Numpy to choose randomly from the list `[0, 1]`:
np.random.choice([0, 1])
A second `size` argument to the function says how many items to choose:
# Ten numbers, where each has a 50% chance of 0 and 50% chance of 1. np.random.choice([0, 1], size=10)
By default, Numpy will choose each item in the sequence with equal probability. In this case, Numpy will choose 0 with 50% probability, and 1 with 50% probability:
# Use choice to make another 10000 random numbers that can be 0 or 1, # with equal probability. more_0_1 = np.random.choice([0, 1], size=10000) # Count the proportion that are 1. np.count_nonzero(more_0_1) / 10000
If you want, you can change these proportions with the `p` argument:
# Use choice to make another 10000 random numbers that can be 0 or 1,
# where 0 has probability 0.25, and 1 has probability 0.75.
weighted_0_1 = np.random.choice([0, 1], size=10000, p=[0.25, 0.75])
# Count the proportion that are 1.
np.count_nonzero(weighted_0_1) / 10000
There can be more than two choices:
# Use choice to make another 10000 random numbers that can be 0 or 10 or 20, or
# 30, where each has probability 0.25.
multi_nos = np.random.choice([0, 10, 20, 30], size=10000)
multi_nos[:10]
np.count_nonzero(multi_nos == 30) / 10000
The choices don't have to be numbers:
np.random.choice(['Heads', 'Tails'], size=10)
You can also do choices *without replacement*, so once you have chosen an element, all subsequent choices cannot choose that element again. For example, this *must* return all the elements from the choices, but in random order:
np.random.choice([0, 10, 20, 30], size=4, replace=False)
Capsule Network

In this notebook I will try to explain and implement a Capsule Network, using MNIST images as input. To implement a Capsule Network, we first need to understand what capsules are and what advantages they have compared to a convolutional neural network. So what are capsules?

* Briefly, capsules are small groups of neurons, where each neuron in a capsule represents various properties of a particular image part.
* Capsules represent relationships between parts of a whole object by using **dynamic routing** to weight the connections between one layer of capsules and the next, creating strong connections between spatially-related object parts (discussed later).
* The output of each capsule is a vector; this vector has a magnitude and an orientation.
    * Magnitude: it indicates whether that particular image part is present or not. We can summarize it as the probability of the part's existence (it has to be between 0 and 1).
    * Orientation: it changes if one of the properties of that particular image part has changed.

Let us have an example to understand this better and make it clear. As shown in the following image, capsules will detect a cat's face. The capsule consists of neurons with properties like position, color, width and so on. We then get an output vector with magnitude 0.9, which means we have 90% confidence that this is a cat's face, and we get an orientation as well.

![cat1.png](attachment:cat1.png)

(image from: https://cezannec.github.io/Capsule_Networks/)

But what if we have changed these properties, for example by flipping the cat's face — what will happen? Will it still detect the cat's face?
Yes, it will still detect the cat's face with 90% confidence (magnitude 0.9), but there will be a change in the orientation (theta) to indicate a change in the properties.

![cat2.png](attachment:cat2.png)

(image from: https://cezannec.github.io/Capsule_Networks/)

What advantages does it have compared to a Convolutional Neural Network (CNN)?

* A CNN looks for key features regardless of their position. As shown in the following image, a CNN will detect the left image as a face, while a capsule network will not, because it checks whether the parts are in the correct position or not.

![face.png](attachment:face.png)

(image from: https://kndrck.co/posts/capsule_networks_explained/)

* A capsule network is more robust to affine transformations in the data. If translation or rotation is applied to the test data, a trained capsule network will perform better and give higher accuracy than a normal CNN.

Model Architecture

The capsule network consists of two main parts:

* A convolutional encoder.
* A fully connected, linear decoder.

![encoder_architecture.png](attachment:encoder_architecture.png)

(image from: [Hinton's paper (capsule networks original paper)](https://arxiv.org/pdf/1710.09829.pdf))

In this explanation and implementation I will follow the architecture from [Hinton's paper (capsule networks original paper)](https://arxiv.org/pdf/1710.09829.pdf).

1) Encoder

The encoder consists of three main layers, as shown in the following image, plus the input layer, which is a 28x28 MNIST image. Please notice the difference between this image and the previous one, where the last layer in the previous image is the decoder.

![encoder_only.png](attachment:encoder_only.png)

A) The convolutional layer

In Hinton's paper they applied a kernel of size 9x9 to the input layer.
This kernel has a depth of 256, stride = 1 and padding = 0. This gives us an output of dimension 20x20.

**Note**: you can calculate the output dimension with this equation: output = [(w-k+2p)/s]+1, where:

- w is the input size
- k is the kernel size
- p is the padding
- s is the stride

So to clarify this more:

- The input's dimension is (28, 28, 1), where 28x28 is the input size and 1 is the number of channels.
- The kernel's dimension is (9, 9, 1, 256), where 9x9 is the kernel size, 1 is the number of channels and 256 is the depth of the kernel.
- The output's dimension is (20, 20, 256), where 20x20 is the output size and 256 is the stack of filtered images.

I think we are ready to start implementing the code now, so let us start by obtaining the MNIST data and creating our DataLoaders for training and testing purposes.
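As a quick sanity check of the output-size formula above, here is a tiny stand-alone helper (a sketch, not part of the notebook's model code). It reproduces both the 20x20 output of this layer and the 6x6 output of the stride-2 primary-capsule convolution that appears later:

```python
def conv_output_size(w, k, p, s):
    """Output size of a square convolution: [(w - k + 2p) / s] + 1."""
    return (w - k + 2 * p) // s + 1

print(conv_output_size(28, 9, 0, 1))  # first conv layer: 20
print(conv_output_size(20, 9, 0, 2))  # primary capsules (stride 2): 6
```

The integer division matches the floor that convolution layers apply when the stride does not divide evenly.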
# import resources
import numpy as np
import torch

# random seed (for reproducibility)
seed = 1
# set random seed for numpy
np.random.seed(seed)
# set random seed for pytorch
torch.manual_seed(seed)

from torchvision import datasets
import torchvision.transforms as transforms

# number of subprocesses to use for data loading
num_workers = 0
# how many samples per batch to load
batch_size = 20

# convert data to Tensors
transform = transforms.ToTensor()

# choose the training and test datasets
train_data = datasets.MNIST(root='data', train=True,
                            download=True, transform=transform)

test_data = datasets.MNIST(root='data', train=False,
                           download=True, transform=transform)

# prepare data loaders
train_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
                                           num_workers=num_workers)
test_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size,
                                          num_workers=num_workers)
_____no_output_____
MIT
Capsule_ network.ipynb
noureldinalaa/Capsule-Networks
The next step is to create the convolutional layer as we explained:
import torch.nn as nn
import torch.nn.functional as F


class ConvLayer(nn.Module):

    def __init__(self, in_channels=1, out_channels=256):
        '''Constructs the ConvLayer with a specified input and output size.
           These sizes have initial values from the paper.
           param in_channels: input depth of an image, default value = 1
           param out_channels: output depth of the convolutional layer, default value = 256
           '''
        super(ConvLayer, self).__init__()

        # defining a convolutional layer of the specified size
        self.conv = nn.Conv2d(in_channels, out_channels,
                              kernel_size=9, stride=1, padding=0)

    def forward(self, x):
        # applying a ReLU activation to the outputs of the conv layer
        output = F.relu(self.conv(x))  # we will have dimensions (batch_size, 20, 20, 256)
        return output
B) Primary capsules

This layer is tricky, but I will try to simplify it as much as I can. We would like to convolve the first layer into a new layer with 8 primary capsules. To do so we follow the steps from Hinton's paper:

- First step: convolve our first convolutional layer, which has a dimension of (20, 20, 256), with a kernel of dimension (9, 9, 256, 256), in which 9 is the kernel size, the first 256 is the number of channels from the first layer and the second 256 is the number of filters (the depth of the kernel). With a stride of 2, we get an output with a dimension of (6, 6, 256).
- Second step: reshape this output to (6, 6, 8, 32), where 8 is the number of capsules and 32 is the depth of each capsule.
- Now the output of each capsule has a dimension of (6, 6, 32), and we reshape it to (32x6x6, 1) = (1152, 1) for each capsule.
- Final step: squash the output so it has a magnitude between 0 and 1, as we discussed earlier, using the following equation:

![squashing.png](attachment:squashing.png)

where $v_j$ is the normalized output vector of capsule j, and $s_j$ is the total input of capsule j (the weighted sum over all the output vectors from the capsules in the layer below).

We will use a ModuleList container to loop over each capsule we have.
class PrimaryCaps(nn.Module):

    def __init__(self, num_capsules=8, in_channels=256, out_channels=32):
        '''Constructs a list of convolutional layers to be used in
           creating capsule output vectors.
           param num_capsules: number of capsules to create
           param in_channels: input depth of features, default value = 256
           param out_channels: output depth of the convolutional layers, default value = 32
           '''
        super(PrimaryCaps, self).__init__()

        # creating a list of convolutional layers for each capsule I want to create
        # all capsules have a conv layer with the same parameters
        self.capsules = nn.ModuleList([
            nn.Conv2d(in_channels=in_channels, out_channels=out_channels,
                      kernel_size=9, stride=2, padding=0)
            for _ in range(num_capsules)])

    def forward(self, x):
        '''Defines the feedforward behavior.
           param x: the input; features from a convolutional layer
           return: a set of normalized, capsule output vectors
           '''
        # get batch size of inputs
        batch_size = x.size(0)
        # reshape convolutional layer outputs to be (batch_size, vector_dim=1152, 1)
        u = [capsule(x).view(batch_size, 32 * 6 * 6, 1) for capsule in self.capsules]
        # stack up output vectors, u, one for each capsule
        u = torch.cat(u, dim=-1)
        # squashing the stack of vectors
        u_squash = self.squash(u)
        return u_squash

    def squash(self, input_tensor):
        '''Squashes an input Tensor so it has a magnitude between 0-1.
           param input_tensor: a stack of capsule inputs, s_j
           return: a stack of normalized, capsule output vectors, v_j
           '''
        squared_norm = (input_tensor ** 2).sum(dim=-1, keepdim=True)
        scale = squared_norm / (1 + squared_norm)  # normalization coeff
        output_tensor = scale * input_tensor / torch.sqrt(squared_norm)
        return output_tensor
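To get a feel for what the squash function does, here is a tiny stand-alone sketch in plain Python (not part of the model). The squashed vector keeps its direction, and its magnitude becomes $\|s\|^2 / (1 + \|s\|^2)$, which is always between 0 and 1 — near 0 for short inputs and near 1 for long ones:

```python
import math

def squash(v):
    # ||v||^2 / (1 + ||v||^2) * v / ||v||, as in the equation above
    sq = sum(x * x for x in v)
    scale = sq / (1 + sq)
    norm = math.sqrt(sq)
    return [scale * x / norm for x in v]

def magnitude(v):
    return math.sqrt(sum(x * x for x in v))

print(magnitude(squash([0.1, 0.0])))    # short input -> magnitude near 0 (~0.0099)
print(magnitude(squash([10.0, 10.0])))  # long input -> magnitude near 1 (~0.995)
```

This mirrors, in scalar form, what the `squash` method above does over the last tensor dimension.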
C) Digit capsules

As we have 10 digit classes from 0 to 9, this layer has 10 capsules, one capsule per digit. Each capsule takes as input a batch of 1152-dimensional vectors, while the output is ten 16-dimensional vectors.

Dynamic Routing

Dynamic routing is used to find the best connections between the child layer and the possible parents. The main component of dynamic routing is capsule routing. To make it easier, we can think of capsule routing as analogous to backpropagation: we use it to obtain the probability that a certain capsule's output should go to a parent capsule in the next layer. As shown in the following figure, the first child capsule is connected to $s_{1}$, the first possible parent capsule, and to $s_{2}$, the second possible parent capsule. In the beginning the coupling coefficients have equal values, and then we apply dynamic routing to adjust them. We may find, for example, that the coupling coefficient connected with $s_{1}$ is 0.9 and the coupling coefficient connected with $s_{2}$ is 0.1; these are the probabilities that the first child capsule's output should go to each parent capsule in the next layer.

![diagram.png](attachment:diagram.png)

**Notes**

- Across all connections between one child capsule and all possible parent capsules, the coupling coefficients should sum to 1. This means that $c_{11}$ + $c_{12}$ = 1.
- As shown in the following figure, $s_{1}$ is the total input of a parent capsule (the weighted sum over all the output vectors from the capsules in the layer below).
- To check the similarity between the total input $s_{1}$ and each vector, we calculate the dot product between them. In this example we find that $s_{1}$ is more similar to $u_{1}$ than to $u_{2}$ or $u_{3}$. This similarity is called (agreement).

![s_1.png](attachment:s_1.png)

Dynamic Routing Algorithm

The following algorithm is from [Hinton's paper (capsule networks original paper)](https://arxiv.org/pdf/1710.09829.pdf)

![Dynamic_routing.png](attachment:Dynamic_routing.png)

We can explain the algorithm as following:

- First we initialize the initial logits $b_{ij}$ of the softmax function with zero.
- Calculate the coupling coefficients using the softmax equation:
$$c_{ij} = \frac{e^{\ b_{ij}}}{\sum_{k}\ {e^{\ b_{ik}}}} $$
- Calculate the total capsule input $s_{j}$.

**Note**
- $ s_j = \sum{c_{ij} \ \hat{u}}$
- $ \hat{u} = Wu $, where W is the weight matrix and u is the input vector

- Squash to get a normalized output vector $v_{j}$.
- The last step is composed of two parts: we calculate the agreement and the new $b_{ij}$. The similarity (agreement) is the one we discussed before, the dot product between the prediction vector $\hat{u}$ and the parent capsule's output vector $v_{j}$. Then we update $b_{ij}$:
$$\hat{u} = W u $$
$$a = \hat{u} \cdot v $$
$$b_{ij} = b_{ij} + a $$
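As a tiny stand-alone check of the coupling-coefficient step (plain Python, not the model's tensor version): with all logits $b_{ij}$ initialized to zero, the softmax yields uniform coupling coefficients that sum to 1, and agreement updates then shift the weight toward the better-matching parent:

```python
import math

def softmax(logits):
    # c_ij = exp(b_ij) / sum_k exp(b_ik)
    exps = [math.exp(b) for b in logits]
    total = sum(exps)
    return [e / total for e in exps]

# two possible parents, logits start at zero -> equal coupling
c = softmax([0.0, 0.0])
print(c)            # [0.5, 0.5]

# after updates that favor the first parent, routing shifts weight to it
c = softmax([0.8, 0.2])
print(c[0] > c[1])  # True
```

The coefficients always sum to 1, matching the note above that $c_{11} + c_{12} = 1$.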
def softmax(input_tensor, dim=1):
    # transposed softmax function (transpose needed for the s_j multiplication)
    # transpose input
    transposed_input = input_tensor.transpose(dim, len(input_tensor.size()) - 1)
    # calculate softmax
    softmaxed_output = F.softmax(transposed_input.contiguous().view(-1, transposed_input.size(-1)), dim=-1)
    # un-transpose result
    return softmaxed_output.view(*transposed_input.size()).transpose(dim, len(input_tensor.size()) - 1)


# dynamic routing
def dynamic_routing(b_ij, u_hat, squash, routing_iterations=3):
    '''Performs dynamic routing between two capsule layers.
       param b_ij: initial log probabilities that capsule i should be coupled to capsule j
       param u_hat: input, weighted capsule vectors, W u
       param squash: given, normalizing squash function
       param routing_iterations: number of times to update coupling coefficients
       return: v_j, output capsule vectors
       '''
    # update b_ij, c_ij for number of routing iterations
    for iteration in range(routing_iterations):
        # softmax calculation of coupling coefficients, c_ij
        c_ij = softmax(b_ij, dim=2)

        # calculating total capsule inputs, s_j = sum(c_ij*u_hat)
        s_j = (c_ij * u_hat).sum(dim=2, keepdim=True)

        # squashing to get a normalized vector output, v_j
        v_j = squash(s_j)

        # if not on the last iteration, calculate agreement and new b_ij
        if iteration < routing_iterations - 1:
            # agreement
            a_ij = (u_hat * v_j).sum(dim=-1, keepdim=True)

            # new b_ij
            b_ij = b_ij + a_ij

    return v_j  # return latest v_j
After implementing the dynamic routing we are ready to implement the DigitCaps class, which consists of:

- This layer is composed of 10 "digit" capsules, one for each of our digit classes 0-9.
- Each capsule takes, as input, a batch of 1152-dimensional vectors produced by our 8 primary capsules, above.
- Each of these 10 capsules is responsible for producing a 16-dimensional output vector.
- We will initialize the weight matrix randomly.
# it will also be relevant, in this model, to see if I can train on gpu
TRAIN_ON_GPU = torch.cuda.is_available()

if(TRAIN_ON_GPU):
    print('Training on GPU!')
else:
    print('Only CPU available')


class DigitCaps(nn.Module):

    def __init__(self, num_capsules=10, previous_layer_nodes=32*6*6,
                 in_channels=8, out_channels=16):
        '''Constructs an initial weight matrix, W, and sets class variables.
           param num_capsules: number of capsules to create
           param previous_layer_nodes: dimension of input capsule vector, default value = 1152
           param in_channels: number of capsules in previous layer, default value = 8
           param out_channels: dimensions of output capsule vector, default value = 16
           '''
        super(DigitCaps, self).__init__()

        # setting class variables
        self.num_capsules = num_capsules
        self.previous_layer_nodes = previous_layer_nodes  # vector input (dim=1152)
        self.in_channels = in_channels  # previous layer's number of capsules

        # starting out with a randomly initialized weight matrix, W
        # these will be the weights connecting the PrimaryCaps and DigitCaps layers
        self.W = nn.Parameter(torch.randn(num_capsules, previous_layer_nodes,
                                          in_channels, out_channels))

    def forward(self, u):
        '''Defines the feedforward behavior.
           param u: the input; vectors from the previous PrimaryCaps layer
           return: a set of normalized, capsule output vectors
           '''
        # adding batch_size dims and stacking all u vectors
        u = u[None, :, :, None, :]
        # 4D weight matrix
        W = self.W[:, None, :, :, :]

        # calculating u_hat = W*u
        u_hat = torch.matmul(u, W)

        # getting the correct size of b_ij
        # setting them all to 0, initially
        b_ij = torch.zeros(*u_hat.size())

        # moving b_ij to GPU, if available
        if TRAIN_ON_GPU:
            b_ij = b_ij.cuda()

        # update coupling coefficients and calculate v_j
        v_j = dynamic_routing(b_ij, u_hat, self.squash, routing_iterations=3)

        return v_j  # return final vector outputs

    def squash(self, input_tensor):
        '''Squashes an input Tensor so it has a magnitude between 0-1.
           param input_tensor: a stack of capsule inputs, s_j
           return: a stack of normalized, capsule output vectors, v_j
           '''
        # same squash function as before
        squared_norm = (input_tensor ** 2).sum(dim=-1, keepdim=True)
        scale = squared_norm / (1 + squared_norm)  # normalization coeff
        output_tensor = scale * input_tensor / torch.sqrt(squared_norm)
        return output_tensor
2) Decoder

As shown in the following figure from [Hinton's paper (capsule networks original paper)](https://arxiv.org/pdf/1710.09829.pdf), the decoder is made of three fully-connected, linear layers. The first layer sees the ten 16-dimensional output vectors from the digit capsule layer and produces hidden_dim = 512 outputs. The next hidden layer has 1024 units, and the third and final linear layer produces an output of 784 values, which is a 28x28 image!

![decoder.png](attachment:decoder.png)
class Decoder(nn.Module):

    def __init__(self, input_vector_length=16, input_capsules=10, hidden_dim=512):
        '''Constructs a series of linear layers + activations.
           param input_vector_length: dimension of input capsule vector, default value = 16
           param input_capsules: number of capsules in previous layer, default value = 10
           param hidden_dim: dimensions of hidden layers, default value = 512
           '''
        super(Decoder, self).__init__()

        # calculate input_dim
        input_dim = input_vector_length * input_capsules

        # define linear layers + activations
        self.linear_layers = nn.Sequential(
            nn.Linear(input_dim, hidden_dim),       # first hidden layer
            nn.ReLU(inplace=True),
            nn.Linear(hidden_dim, hidden_dim*2),    # second, twice as deep
            nn.ReLU(inplace=True),
            nn.Linear(hidden_dim*2, 28*28),         # can be reshaped into 28*28 image
            nn.Sigmoid()                            # sigmoid activation to get output pixel values in a range from 0-1
        )

    def forward(self, x):
        '''Defines the feedforward behavior.
           param x: the input; vectors from the previous DigitCaps layer
           return: two things, reconstructed images and the class scores, y
           '''
        classes = (x ** 2).sum(dim=-1) ** 0.5
        classes = F.softmax(classes, dim=-1)

        # find the capsule with the maximum vector length
        # here, vector length indicates the probability of a class' existence
        _, max_length_indices = classes.max(dim=1)

        # create a sparse class matrix
        sparse_matrix = torch.eye(10)  # 10 is the number of classes
        if TRAIN_ON_GPU:
            sparse_matrix = sparse_matrix.cuda()
        # get the class scores from the "correct" capsule
        y = sparse_matrix.index_select(dim=0, index=max_length_indices.data)

        # create reconstructed pixels
        x = x * y[:, :, None]
        # flatten image into a vector shape (batch_size, vector_dim)
        flattened_x = x.contiguous().view(x.size(0), -1)
        # create reconstructed image vectors
        reconstructions = self.linear_layers(flattened_x)

        # return reconstructions and the class scores, y
        return reconstructions, y
Now let us collect all these layers (the classes we have created, i.e. ConvLayer, PrimaryCaps, DigitCaps, Decoder) in one class called CapsuleNetwork.
class CapsuleNetwork(nn.Module):

    def __init__(self):
        '''Constructs a complete Capsule Network.'''
        super(CapsuleNetwork, self).__init__()
        self.conv_layer = ConvLayer()
        self.primary_capsules = PrimaryCaps()
        self.digit_capsules = DigitCaps()
        self.decoder = Decoder()

    def forward(self, images):
        '''Defines the feedforward behavior.
           param images: the original MNIST image input data
           return: output of DigitCaps layer, reconstructed images, class scores
           '''
        primary_caps_output = self.primary_capsules(self.conv_layer(images))
        caps_output = self.digit_capsules(primary_caps_output).squeeze().transpose(0, 1)
        reconstructions, y = self.decoder(caps_output)
        return caps_output, reconstructions, y
Let us now instantiate the model and print it.
# instantiate and print net
capsule_net = CapsuleNetwork()

print(capsule_net)

# move model to GPU, if available
if TRAIN_ON_GPU:
    capsule_net = capsule_net.cuda()
CapsuleNetwork(
  (conv_layer): ConvLayer(
    (conv): Conv2d(1, 256, kernel_size=(9, 9), stride=(1, 1))
  )
  (primary_capsules): PrimaryCaps(
    (capsules): ModuleList(
      (0): Conv2d(256, 32, kernel_size=(9, 9), stride=(2, 2))
      (1): Conv2d(256, 32, kernel_size=(9, 9), stride=(2, 2))
      (2): Conv2d(256, 32, kernel_size=(9, 9), stride=(2, 2))
      (3): Conv2d(256, 32, kernel_size=(9, 9), stride=(2, 2))
      (4): Conv2d(256, 32, kernel_size=(9, 9), stride=(2, 2))
      (5): Conv2d(256, 32, kernel_size=(9, 9), stride=(2, 2))
      (6): Conv2d(256, 32, kernel_size=(9, 9), stride=(2, 2))
      (7): Conv2d(256, 32, kernel_size=(9, 9), stride=(2, 2))
    )
  )
  (digit_capsules): DigitCaps()
  (decoder): Decoder(
    (linear_layers): Sequential(
      (0): Linear(in_features=160, out_features=512, bias=True)
      (1): ReLU(inplace=True)
      (2): Linear(in_features=512, out_features=1024, bias=True)
      (3): ReLU(inplace=True)
      (4): Linear(in_features=1024, out_features=784, bias=True)
      (5): Sigmoid()
    )
  )
)
Loss

The loss for a capsule network is a weighted combination of two losses:
1. Reconstruction loss
2. Margin loss

Reconstruction Loss

- It checks how different the reconstructed image, which we get from the decoder, is from the original input image.
- It is calculated using the mean squared error, which is nn.MSELoss in PyTorch.
- In [Hinton's paper (capsule networks original paper)](https://arxiv.org/pdf/1710.09829.pdf) they weighted the reconstruction loss with a coefficient of 0.0005, so it wouldn't overpower the margin loss.

Margin Loss
from IPython.display import Image
Image(filename='images/margin_loss.png')
Margin loss is a classification loss (we can think of it as analogous to cross entropy) based on the length of the output vectors coming from the DigitCaps layer. Let us elaborate with our example. Say we have an output vector (x) coming from the DigitCaps layer; this output vector represents a certain digit from 0 to 9, as we are using MNIST. We compute the length of that digit capsule's output vector, $v_k = \sqrt{\sum{x^2}}$. The correct capsule should have an output vector of length greater than or equal to 0.9 ($v_k \geq 0.9$), while all other capsules should output vectors of length smaller than or equal to 0.1 ($v_k \leq 0.1$).

So, if we have an input image of a 0, then the "correct," zero-detecting digit capsule should output a vector of magnitude 0.9 or greater! For all the other digits (1-9, in this example) the corresponding digit capsule output vectors should have a magnitude of 0.1 or less.

The following function is used to calculate the margin loss; it sums both sides around 0.9 and 0.1, where k is the digit capsule, ($T_k = 1$) if a digit of class k is present, and $m^{+}$ = 0.9 and $m^{-}$ = 0.1. The λ down-weighting of the loss for absent digit classes stops the initial learning from shrinking the lengths of the activity vectors of all the digit capsules. In the paper they chose λ = 0.5.

**Note**: The total loss is simply the sum of the losses of all digit capsules.
class CapsuleLoss(nn.Module):

    def __init__(self):
        '''Constructs a CapsuleLoss module.'''
        super(CapsuleLoss, self).__init__()
        self.reconstruction_loss = nn.MSELoss(reduction='sum')  # cumulative loss, equiv to size_average=False

    def forward(self, x, labels, images, reconstructions):
        '''Defines how the loss compares inputs.
           param x: digit capsule outputs
           param labels: one-hot class labels
           param images: the original MNIST image input data
           param reconstructions: reconstructed MNIST image data
           return: weighted margin and reconstruction loss, averaged over a batch
           '''
        batch_size = x.size(0)

        ##  calculate the margin loss  ##

        # get magnitude of digit capsule vectors, v_c
        v_c = torch.sqrt((x**2).sum(dim=2, keepdim=True))

        # calculate "correct" and incorrect loss
        left = F.relu(0.9 - v_c).view(batch_size, -1)
        right = F.relu(v_c - 0.1).view(batch_size, -1)

        # sum the losses, with a lambda = 0.5
        margin_loss = labels * left + 0.5 * (1. - labels) * right
        margin_loss = margin_loss.sum()

        ##  calculate the reconstruction loss  ##
        images = images.view(reconstructions.size()[0], -1)
        reconstruction_loss = self.reconstruction_loss(reconstructions, images)

        # return a weighted, summed loss, averaged over a batch size
        return (margin_loss + 0.0005 * reconstruction_loss) / images.size(0)
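As a quick numeric sketch of the margin loss for a single capsule, here is the squared-hinge form from the paper in plain Python (note: the CapsuleLoss class in this notebook uses the unsquared hinge terms, so the two differ slightly in scale but behave the same qualitatively):

```python
def margin_loss_k(v_k, T_k, m_plus=0.9, m_minus=0.1, lam=0.5):
    # L_k = T_k * max(0, m+ - ||v_k||)^2 + lam * (1 - T_k) * max(0, ||v_k|| - m-)^2
    return (T_k * max(0.0, m_plus - v_k) ** 2
            + lam * (1 - T_k) * max(0.0, v_k - m_minus) ** 2)

print(margin_loss_k(0.95, 1))  # confident, correct capsule -> 0.0
print(margin_loss_k(0.2, 1))   # weak, correct capsule -> (0.9-0.2)^2, approx 0.49
print(margin_loss_k(0.3, 0))   # active, wrong capsule -> 0.5*(0.3-0.1)^2, approx 0.02
```

A confident correct capsule pays no loss, a weak correct capsule pays heavily, and an over-active wrong capsule pays a λ-down-weighted penalty.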
Now we call the custom loss class we have implemented, and we will use the Adam optimizer as in the paper.
import torch.optim as optim

# custom loss
criterion = CapsuleLoss()

# Adam optimizer with default params
optimizer = optim.Adam(capsule_net.parameters())
Train the network

The normal steps to train on a batch of data:
1. Clear the gradients of all optimized variables (zero them out).
2. Forward pass: compute predicted outputs by passing inputs to the model.
3. Calculate the loss.
4. Backward pass: compute the gradient of the loss with respect to the model parameters.
5. Perform a single optimization step (parameter update).
6. Update the average training loss.
def train(capsule_net, criterion, optimizer, n_epochs, print_every=300):
    '''Trains a capsule network and prints out training batch loss statistics.
       param capsule_net: capsule network to train
       param criterion: capsule loss function
       param optimizer: optimizer for updating network weights
       param n_epochs: number of epochs to train for
       param print_every: batches to print and save training loss, default = 300
       return: list of recorded training losses
       '''
    # track training loss over time
    losses = []

    # one epoch = one pass over all training data
    for epoch in range(1, n_epochs + 1):

        # initialize training loss
        train_loss = 0.0

        capsule_net.train()  # set to train mode

        # get batches of training image data and targets
        for batch_i, (images, target) in enumerate(train_loader):

            # reshape and get target class
            target = torch.eye(10).index_select(dim=0, index=target)
            if TRAIN_ON_GPU:
                images, target = images.cuda(), target.cuda()

            # zero out gradients
            optimizer.zero_grad()
            # get model outputs
            caps_output, reconstructions, y = capsule_net(images)
            # calculate loss
            loss = criterion(caps_output, target, images, reconstructions)
            # perform backpropagation and optimization
            loss.backward()
            optimizer.step()

            train_loss += loss.item()  # accumulated training loss

            # print and record training stats
            if batch_i != 0 and batch_i % print_every == 0:
                avg_train_loss = train_loss / print_every
                losses.append(avg_train_loss)
                print('Epoch: {} \tTraining Loss: {:.8f}'.format(epoch, avg_train_loss))
                train_loss = 0  # reset accumulated training loss

    return losses


# training for 5 epochs
n_epochs = 5
losses = train(capsule_net, criterion, optimizer, n_epochs=n_epochs)
Epoch: 1 	Training Loss: 0.25108408
Epoch: 1 	Training Loss: 0.09796484
Epoch: 1 	Training Loss: 0.07615296
Epoch: 1 	Training Loss: 0.06122471
Epoch: 1 	Training Loss: 0.05977095
Epoch: 1 	Training Loss: 0.05478950
Epoch: 1 	Training Loss: 0.05140611
Epoch: 1 	Training Loss: 0.05044698
Epoch: 1 	Training Loss: 0.04870245
Epoch: 2 	Training Loss: 0.04324130
Epoch: 2 	Training Loss: 0.04060882
Epoch: 2 	Training Loss: 0.03622841
Epoch: 2 	Training Loss: 0.03470477
Epoch: 2 	Training Loss: 0.03626744
Epoch: 2 	Training Loss: 0.03480921
Epoch: 2 	Training Loss: 0.03538792
Epoch: 2 	Training Loss: 0.03432405
Epoch: 2 	Training Loss: 0.03438207
Epoch: 3 	Training Loss: 0.03111325
Epoch: 3 	Training Loss: 0.02989269
Epoch: 3 	Training Loss: 0.02743311
Epoch: 3 	Training Loss: 0.02656386
Epoch: 3 	Training Loss: 0.02738586
Epoch: 3 	Training Loss: 0.02737884
Epoch: 3 	Training Loss: 0.02820305
Epoch: 3 	Training Loss: 0.02727670
Epoch: 3 	Training Loss: 0.02587884
Epoch: 4 	Training Loss: 0.02593555
Epoch: 4 	Training Loss: 0.02382935
Epoch: 4 	Training Loss: 0.02312145
Epoch: 4 	Training Loss: 0.02189966
Epoch: 4 	Training Loss: 0.02289272
Epoch: 4 	Training Loss: 0.02197252
Epoch: 4 	Training Loss: 0.02546153
Epoch: 4 	Training Loss: 0.02200746
Epoch: 4 	Training Loss: 0.02378933
Epoch: 5 	Training Loss: 0.02140641
Epoch: 5 	Training Loss: 0.02041025
Epoch: 5 	Training Loss: 0.02020690
Epoch: 5 	Training Loss: 0.01983862
Epoch: 5 	Training Loss: 0.02128812
Epoch: 5 	Training Loss: 0.01994716
Epoch: 5 	Training Loss: 0.02163137
Epoch: 5 	Training Loss: 0.02023643
Epoch: 5 	Training Loss: 0.02078038
Now let us plot the training loss to get a better feeling for what the loss looks like:
import matplotlib.pyplot as plt
%matplotlib inline

plt.plot(losses)
plt.title("Training Loss")
plt.show()
Test the trained network

Test the trained network on unseen data:
def test(capsule_net, test_loader):
    '''Prints out test statistics for a given capsule net.
       param capsule_net: trained capsule network
       param test_loader: test dataloader
       return: returns last batch of test image data and corresponding reconstructions
       '''
    class_correct = list(0. for i in range(10))
    class_total = list(0. for i in range(10))

    test_loss = 0  # loss tracking

    capsule_net.eval()  # eval mode

    for batch_i, (images, target) in enumerate(test_loader):
        target = torch.eye(10).index_select(dim=0, index=target)

        batch_size = images.size(0)

        if TRAIN_ON_GPU:
            images, target = images.cuda(), target.cuda()

        # forward pass: compute predicted outputs by passing inputs to the model
        caps_output, reconstructions, y = capsule_net(images)
        # calculate the loss
        loss = criterion(caps_output, target, images, reconstructions)
        # update average test loss
        test_loss += loss.item()
        # convert output probabilities to predicted class
        _, pred = torch.max(y.data.cpu(), 1)
        _, target_shape = torch.max(target.data.cpu(), 1)

        # compare predictions to true label
        correct = np.squeeze(pred.eq(target_shape.data.view_as(pred)))

        # calculate test accuracy for each object class
        for i in range(batch_size):
            label = target_shape.data[i]
            class_correct[label] += correct[i].item()
            class_total[label] += 1

    # avg test loss
    avg_test_loss = test_loss / len(test_loader)
    print('Test Loss: {:.8f}\n'.format(avg_test_loss))

    for i in range(10):
        if class_total[i] > 0:
            print('Test Accuracy of %5s: %2d%% (%2d/%2d)' % (
                str(i), 100 * class_correct[i] / class_total[i],
                np.sum(class_correct[i]), np.sum(class_total[i])))
        else:
            print('Test Accuracy of %5s: N/A (no test examples)' % (str(i)))

    print('\nTest Accuracy (Overall): %2d%% (%2d/%2d)' % (
        100. * np.sum(class_correct) / np.sum(class_total),
        np.sum(class_correct), np.sum(class_total)))

    # return last batch of capsule vectors, images, reconstructions
    return caps_output, images, reconstructions


# call test function and get reconstructed images
caps_output, images, reconstructions = test(capsule_net, test_loader)
Test Loss: 0.03073818 Test Accuracy of 0: 99% (975/980) Test Accuracy of 1: 99% (1132/1135) Test Accuracy of 2: 99% (1027/1032) Test Accuracy of 3: 99% (1001/1010) Test Accuracy of 4: 98% (971/982) Test Accuracy of 5: 99% (886/892) Test Accuracy of 6: 98% (947/958) Test Accuracy of 7: 99% (1020/1028) Test Accuracy of 8: 99% (967/974) Test Accuracy of 9: 98% (993/1009) Test Accuracy (Overall): 99% (9919/10000)
MIT
Capsule_ network.ipynb
noureldinalaa/Capsule-Networks
Now it is time to display the reconstructions:
def display_images(images, reconstructions):
    '''Plot one row of original MNIST images and another row (below)
       of their reconstructions.'''
    # convert to numpy images
    images = images.data.cpu().numpy()
    reconstructions = reconstructions.view(-1, 1, 28, 28)
    reconstructions = reconstructions.data.cpu().numpy()

    # plot the first ten input images and then reconstructed images
    fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(26, 5))

    # input images on top row, reconstructions on bottom
    for imgs, row in zip([images, reconstructions], axes):
        for img, ax in zip(imgs, row):
            ax.imshow(np.squeeze(img), cmap='gray')
            ax.get_xaxis().set_visible(False)
            ax.get_yaxis().set_visible(False)

# display original and reconstructed images, in rows
display_images(images, reconstructions)
_____no_output_____
MIT
Capsule_ network.ipynb
noureldinalaa/Capsule-Networks
Monte Carlo Control

So far, we assumed that we know the underlying model of the environment and that the agent has access to it. Now, we consider the case in which we do not have access to the full MDP. That is, we now do __model-free control__.

To illustrate this, we implement the blackjack example from RL Lecture 5 by David Silver on Monte Carlo Control [see example](https://youtu.be/0g4j2k_Ggc4?t=2193).

We use Monte-Carlo policy evaluation based on the action-value function $Q=q_\pi$ combined with $\epsilon$-greedy exploration (greedy action selection, with probability $\epsilon$ of choosing a random move).

Remember: $G_t = R_{t+1} + \gamma R_{t+2} + \dots = \sum_{k=0}^{\infty} \gamma^k \cdot R_{t+k+1}$

__Algorithm:__
* Update $Q(s,a)$ incrementally after each episode
* For each state-action pair $(S_t, A_t)$ with return $G_t$ do:
    * $N(S_t,A_t) \gets N(S_t,A_t) + 1$
    * $Q(S_t,A_t) \gets Q(S_t,A_t) + \frac{1}{N(S_t,A_t)} \cdot (G_t - Q(S_t,A_t))$
    * The error term is the _actual return_ ($G_t$) minus the _estimated return_ ($Q(S_t,A_t)$)
    * $\frac{1}{N(S_t,A_t)}$ is a weighting factor that lets us forget old episodes slowly
* Improve the policy based on the new action-value function:
    * $\epsilon \gets \frac{1}{k}$
    * $\pi \gets \epsilon\text{-greedy}(Q)$

MC converges to the solution with minimum mean squared error.
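Before building the full agent below, the update rule and the $\epsilon$-greedy selection can be sketched in a few lines of plain Python. This is a minimal, dictionary-based sketch with made-up states and rewards; the actual implementation in this notebook uses numpy arrays indexed by the blackjack state.

```python
import random

def epsilon_greedy(Q, s, actions, eps):
    # with probability eps explore, otherwise exploit the current Q estimates
    if random.random() < eps:
        return random.choice(actions)
    return max(actions, key=lambda a: Q.get((s, a), 0.0))

def mc_update(Q, N, episode, gamma=1.0):
    """episode: list of (state, action, reward) tuples from one rollout."""
    G = 0.0
    # iterate backwards so G accumulates the discounted return from step t onwards
    for s, a, r in reversed(episode):
        G = r + gamma * G
        N[(s, a)] = N.get((s, a), 0) + 1
        q = Q.get((s, a), 0.0)
        # incremental mean: Q <- Q + (G - Q) / N
        Q[(s, a)] = q + (G - q) / N[(s, a)]
```

Running two episodes that end in the same state-action pair with rewards $+1$ and $-1$ leaves its $Q$ value at the running mean $0$, exactly the incremental-mean behaviour described above.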
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
import plotting
from operator import itemgetter

plotting.set_layout(drawing_size=15)
_____no_output_____
MIT
Lecture 5 Monte-Carlo Control.ipynb
oesst/rl_lecture_examples
The Environment

For this example we use the python package [gym](https://gym.openai.com/docs/), which provides a ready-to-use implementation of a blackjack environment.

The states are stored in this tuple format: (agent's score, dealer's visible score, and whether or not the agent has a usable ace).

Here, we can look at the number of different states:
import gym

env = gym.make('Blackjack-v0')
env.observation_space
_____no_output_____
MIT
Lecture 5 Monte-Carlo Control.ipynb
oesst/rl_lecture_examples
And the number of actions we can take:
env.action_space
_____no_output_____
MIT
Lecture 5 Monte-Carlo Control.ipynb
oesst/rl_lecture_examples
To start a game, call `env.reset()`, which will return the initial observation:
env.reset()
_____no_output_____
MIT
Lecture 5 Monte-Carlo Control.ipynb
oesst/rl_lecture_examples
We can take two different actions: `hit` = 1 or `stay` = 0. The result of this function call shows the _observation_, the reward (winning = +1, losing = -1), and whether the game is over.
env.step(1)
_____no_output_____
MIT
Lecture 5 Monte-Carlo Control.ipynb
oesst/rl_lecture_examples
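The `reset()`/`step()` loop above generalizes to any environment that follows this interface. A minimal sketch with an invented stub environment (`StubEnv` is made up here and is not part of gym) shows the episode structure the agent below will use for blackjack:

```python
class StubEnv:
    """Invented stand-in for gym's interface: episodes last three steps,
    with a reward of +1 on the final step."""
    def reset(self):
        self.t = 0
        return self.t  # initial observation

    def step(self, action):
        self.t += 1
        done = self.t >= 3
        reward = 1 if done else 0
        return self.t, reward, done, {}  # observation, reward, done, info

def play_episode(env, policy):
    # the same loop works for Blackjack-v0: reset, then step until done
    s = env.reset()
    total_reward, done = 0, False
    while not done:
        s, r, done, _ = env.step(policy(s))
        total_reward += r
    return total_reward
```

Any callable mapping an observation to an action can serve as the policy here, which is exactly the role `greedy_move` plays in the agent defined next.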
Define the Agent
class Agent():
    """ This class defines the agent """

    def __init__(self, state_space, action_space):
        """ Initialize the action-value function Q and the visit counter N. """
        # Store the discount factor
        self.gamma = 0.7

        # Store the epsilon parameter
        self.epsilon = 1

        n_player_states = state_space[0].n
        n_dealer_states = state_space[1].n
        n_usable_ace = state_space[2].n

        # two available actions: stay (0) and hit (1)
        self.actions = list(range(action_space.n))

        # Store the action-value function for each state and action
        self.q = np.zeros((n_player_states, n_dealer_states, n_usable_ace, action_space.n))
        # incremental counter for each state-action pair
        self.N = np.zeros((n_player_states, n_dealer_states, n_usable_ace, action_space.n))

    def greedy_move(self, s, k_episode):
        # given a state, return the next move according to the epsilon-greedy algorithm

        # find the optimal action a^*
        v_a = []
        for i_a, a in enumerate(self.actions):
            # get the value for the state-action pair
            s2 = 1 if s[2] else 0
            v = self.q[s[0], s[1], s2, a]
            v_a.append((v, a))

        # get the action with maximal value
        a_max = max(v_a, key=itemgetter(0))[1]

        # with probability 1-eps execute the best action, otherwise choose the other action
        if np.random.rand() < (1 - self.epsilon):
            a = a_max
        else:
            a = int(not a_max)

        # decay epsilon
        self.epsilon = 1 / k_episode

        return a

    def incre_counter(self, state, action):
        # Increments the counter for a given state and action
        # convert the true/false state to 0/1
        s2 = 1 if state[2] else 0
        # increment the counter for that state
        self.N[state[0], state[1], s2, action] += 1

    def get_counter(self, state, action):
        # convert the true/false state to 0/1
        s2 = 1 if state[2] else 0
        # return the counter for that state
        return self.N[state[0], state[1], s2, action]

    def policy_evaluation(self, all_states, all_rewards, all_actions):
        # Update Q(s,a) incrementally
        for i_s, s in enumerate(all_states):
            # get the corresponding action for the given state
            a = all_actions[i_s]
            # convert the true/false state to 0/1
            s2 = 1 if s[2] else 0
            # Get the value function for that state
            Q_s = self.q[s[0], s[1], s2, a]
            # calculate the discounted return G_t from this step onwards
            G = np.sum([self.gamma**k * r for k, r in enumerate(all_rewards[i_s:])])
            # Update the value function
            self.q[s[0], s[1], s2, a] = Q_s + 1 / self.get_counter(s, a) * (G - Q_s)


# how many episodes should be played
n_episodes = 500000

# initialize the agent. let it know the number of states and actions
agent = Agent(env.observation_space, env.action_space)

# Incremental MC updates
# Play one episode, then update Q(s,a)
for i in range(n_episodes):
    all_states = []
    all_rewards = []
    all_actions = []

    # start the game
    s = env.reset()

    # play until the environment tells you that the game is over
    game_ended = False
    while not game_ended:
        # choose a move according to the eps-greedy algorithm
        move = agent.greedy_move(s, i + 1)
        # use the old state for evaluation
        all_states.append(s)
        # increment the counter for the given state and action
        agent.incre_counter(s, move)
        # move
        s, r, game_ended, _ = env.step(move)
        # save rewards and actions
        all_rewards.append(r)
        all_actions.append(move)

    # Evaluate the policy
    agent.policy_evaluation(all_states, all_rewards, all_actions)
    ### END OF EPISODE ###
_____no_output_____
MIT
Lecture 5 Monte-Carlo Control.ipynb
oesst/rl_lecture_examples
Plotting
fig = plt.figure(figsize=(10, 5))
axes = fig.subplots(1, 2, squeeze=False)

ax = axes[0, 0]
c = ax.pcolormesh(agent.q[13:22, 1:, 0, :].max(2), vmin=-1, vmax=1)
ax.set_yticklabels(range(13, 22))
ax.set_xticklabels(range(1, 11, 2))
ax.set_xlabel('Dealer Showing')
ax.set_ylabel('Player Sum')
ax.set_title('No Usable Aces')
# plt.colorbar(c)

ax = axes[0, 1]
c = ax.pcolormesh(agent.q[13:22, 1:, 1, :].max(2), vmin=-1, vmax=1)
ax.set_yticklabels(range(13, 22))
ax.set_xticklabels(range(1, 11, 2))
ax.set_title('Usable Aces')
ax.set_xlabel('Dealer Showing')
plt.colorbar(c)
plt.show()

fig = plt.figure(figsize=(10, 5))
axes = fig.subplots(1, 2, squeeze=False)

ax = axes[0, 0]
c = ax.contour(agent.q[13:22, 1:, 0, :].max(2), levels=1, vmin=-1, vmax=1)
ax.set_yticklabels(range(13, 22))
ax.set_xticklabels(range(1, 11, 2))
ax.set_xlabel('Dealer Showing')
ax.set_ylabel('Player Sum')
ax.set_title('No Usable Aces')
# plt.colorbar(c)

ax = axes[0, 1]
c = ax.contour(agent.q[13:22, 1:, 1, :].max(2), levels=1, vmin=-1, vmax=1)
ax.set_yticklabels(range(13, 22))
ax.set_xticklabels(range(1, 11, 2))
ax.set_title('Usable Aces')
ax.set_xlabel('Dealer Showing')
plt.colorbar(c)
plt.show()
<ipython-input-41-ea3605012f37>:9: UserWarning: FixedFormatter should only be used together with FixedLocator ax.set_yticklabels(range(13,22)) <ipython-input-41-ea3605012f37>:10: UserWarning: FixedFormatter should only be used together with FixedLocator ax.set_xticklabels(range(1,11,2)) <ipython-input-41-ea3605012f37>:18: UserWarning: FixedFormatter should only be used together with FixedLocator ax.set_yticklabels(range(13,22)) <ipython-input-41-ea3605012f37>:19: UserWarning: FixedFormatter should only be used together with FixedLocator ax.set_xticklabels(range(1,11,2))
MIT
Lecture 5 Monte-Carlo Control.ipynb
oesst/rl_lecture_examples
Containerizing the files created by Auto ML for deployment to Azure Functions

Reference: Deploy a machine learning model to Azure Functions (preview)
https://docs.microsoft.com/ja-jp/azure/machine-learning/how-to-deploy-functions
#!pip install azureml-contrib-functions
_____no_output_____
MIT
4.AML-Functions-notebook/AML-AzureFunctionsPackager.ipynb
dahatake/Azure-Machine-Learning-sample
Connect to the Azure Machine Learning workspace
from azureml.core import Workspace, Dataset

subscription_id = '<your azure subscription id>'
resource_group = '<your resource group>'
workspace_name = '<your azure machine learning workspace name>'

ws = Workspace(subscription_id, resource_group, workspace_name)

modelfilespath = 'AutoML1bb3ebb0477'
_____no_output_____
MIT
4.AML-Functions-notebook/AML-AzureFunctionsPackager.ipynb
dahatake/Azure-Machine-Learning-sample