---
title: XEMP qualitative strength
description: Understand how the qualitative strength indicators for XEMP Prediction Explanations are calculated.
---

# XEMP qualitative strength {: #xemp-qualitative-strength }

[XEMP-based Prediction Explanations](xemp-pe#interpret-xemp-prediction-explanations) provide a visual indicator of the qualitative strength of each explanation presented by the insight. In the API, these values are returned from the [`qualitativeStrength` response parameter](dep-predex#qualitativestrength-indicator) of the Prediction Explanation API endpoint. Both the distribution approximation and the preview are computed on the validation data.

## Score translations {: #score-translations }

The boundaries between indicators (for example, `+++`, `++`, and `+`) differ depending on the number of features in a model. The tables below describe, based on feature count, how the calculations translate to the visual representation. Some notes:

* If an explanation's score is trivial and has little or no qualitative effect, the output displays three grayed-out symbols (`+++` or `---`). This indicates, for the represented directionality, that the effect is minor.
* When there are a large number of features, a normalized score greater than 0.2 is represented as `+++`, so it is possible for multiple features to display this symbolic score in a single row.

In the tables, `q` represents the "qualitative" (or "normalized") score.
### Features = 1 {: #features-1 }

The following describes the displayed symbolic score based on the calculated qualitative score for models built with a single feature:

Qualitative Score | Symbolic Score
----------------- | --------------
q <= -0.001 | `---`
-0.001 < q <= 0 | grayed-out `---`
0 < q < 0.001 | grayed-out `+++`
q >= 0.001 | `+++`

### Features = 2 {: #features-2 }

The following describes the displayed symbolic score based on the calculated qualitative score for models built with two features:

Qualitative Score | Symbolic Score
----------------- | --------------
q < -0.75 | `---`
-0.75 <= q < -0.25 | `--`
-0.25 <= q <= -0.001 | `-`
-0.001 < q <= 0 | grayed-out `---`
0 < q < 0.001 | grayed-out `+++`
0.001 <= q <= 0.25 | `+`
0.25 < q <= 0.75 | `++`
q > 0.75 | `+++`

### Features > 2, < 10 {: #features-2-10 }

The following describes the displayed symbolic score based on the calculated qualitative score for models built with more than two but fewer than 10 features:

Qualitative Score | Symbolic Score
----------------- | --------------
q < -2 / num_features | `---`
-2 / num_features <= q < -1 / (2 * num_features) | `--`
-1 / (2 * num_features) <= q <= -0.001 | `-`
-0.001 < q <= 0 | grayed-out `---`
0 < q < 0.001 | grayed-out `+++`
0.001 <= q <= 1 / (2 * num_features) | `+`
1 / (2 * num_features) < q <= 2 / num_features | `++`
q > 2 / num_features | `+++`

### Features >= 10 {: #features-10 }

The following describes the displayed symbolic score based on the calculated qualitative score for models built with 10 or more features:

Qualitative Score | Symbolic Score
----------------- | --------------
q < -0.2 | `---`
-0.2 <= q < -0.05 | `--`
-0.05 <= q <= -0.001 | `-`
-0.001 < q <= 0 | grayed-out `---`
0 < q < 0.001 | grayed-out `+++`
0.001 <= q <= 0.05 | `+`
0.05 < q <= 0.2 | `++`
q > 0.2 | `+++`
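Taken together, the tables reduce to one threshold scheme parameterized by feature count. The following Python sketch is illustrative only (the function name and exact tie-handling are assumptions, not DataRobot source code); it maps a qualitative score to the symbol the tables describe:

```python
def symbolic_score(q, num_features):
    """Map a normalized qualitative score to its displayed symbol.

    Illustrative sketch of the threshold tables above; not DataRobot source.
    """
    # Trivial scores display as grayed-out symbols in the matching direction.
    if -0.001 < q <= 0:
        return "grayed-out ---"
    if 0 < q < 0.001:
        return "grayed-out +++"

    # Single-feature models only ever show full strength.
    if num_features == 1:
        return "+++" if q > 0 else "---"

    # The +/- vs ++/-- vs +++/--- boundaries depend on the feature count.
    if num_features == 2:
        lo, hi = 0.25, 0.75
    elif num_features < 10:
        lo, hi = 1 / (2 * num_features), 2 / num_features
    else:  # 10 or more features
        lo, hi = 0.05, 0.2

    sign = "+" if q > 0 else "-"
    magnitude = abs(q)
    if magnitude <= lo:
        return sign
    if magnitude <= hi:
        return sign * 2
    return sign * 3
```

For example, with 20 features a score of -0.25 exceeds the 0.2 boundary in magnitude and displays as `---`.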
---
title: Export charts and data
description: DataRobot exports charts as PNG images and data as CSV files; use the export function to download this data.
---

# Export charts and data {: #export-charts-and-data }

Many of the DataRobot charts have images and data available for download. (You can also download all charts and data for a particular model using the Leaderboard [**Downloads**](download) tab.) DataRobot exports charts as PNG images and data as CSV files. For charts that provide both, you can export images and data to a single ZIP file.

Note that the PNG image DataRobot exports is generated from the chart as it appears in the application at the time of export. That is, it reflects any interaction you had with the chart. The table in the exported CSV file contains the data that was used to generate the chart you see at time of export.

### Export to a file {: #export-to-a-file }

To export chart data to a file, follow these steps:

1. Find the chart you want to export on the Leaderboard's expanded model view or in **Insights** and click **Export**.
2. In the **Export** dialog box, select the content to export (not all options are available for all screens):
    - Click **.csv** to save the data as a CSV file. Optionally, you can copy [all or part](#copy-csv-data) of the results:

        ![](images/export-csv.png)

    - Click **.png** to save the image(s) as PNG, with an option to include the title:

        ![](images/export-png.png)

    - Click **.zip** to save both images and data in a ZIP archive.
3. In some charts, such as the ROC Curve, you can select a single chart to export. Use the **Download** option to export all charts in the insight.

    ![](images/roc-multiple-charts.png)

4. Enter a file name or accept the default file name and click **Download**.

## Copy to the clipboard {: #copy-to-the-clipboard }

In addition to saving charts as PNG or CSV files, you can copy the chart image or data to the clipboard to paste into a document.
Note that, in the instructions below, the text in the pop-up menus for copying may differ depending on your browser.

### Copy charts {: #copy-charts }

To copy a chart to the clipboard, follow these steps:

1. Click **Export** on the chart.
2. Select **.png** to display the chart in the dialog.
3. Right-click the image and select **Copy Image** from the pop-up menu.
4. Click **Close** to close the dialog, then paste the image to the desired location.

### Copy CSV data {: #copy-csv-data }

To copy the CSV data of a chart to the clipboard, follow these steps:

1. Click **Export** on the chart.
2. Select **.csv** to display the chart data in the dialog.
3. Highlight some or all of the text, right-click it, and select **Copy** from the pop-up menu.
4. Click **Cancel** to close the dialog, then paste the copied contents to the desired location.
---
title: Leaderboard reference
description: A reference for the tags, icons, columns, and other aspects of the DataRobot model Leaderboard.
---

# Leaderboard reference {: #leaderboard-reference }

The Leaderboard provides a wealth of summary information for each model built in a project. When models complete, DataRobot lists them on the Leaderboard with scoring and build information. The text below a model provides a brief description of the model type and version or whether it uses unaltered open-source code. Badges, tags, and columns, described below, provide quick model identifying and scoring information.

![](images/leaderboard-unique-id.png)

## Tags and indicators {: #tags-and-indicators }

The following table describes the tags and indicators:

| Display/name | Description |
|---------------|---------------|
| ![](images/baseline-badge.png){: style="height:20px; width:auto"} <br /> BASELINE | *Applicable to time series projects only.* Indicates a baseline model built using the MASE metric. |
| ![](images/cf-export-badge.png){: style="height:20px; width:auto"} <br /> Beta | Indicates a model from which you can [export the coefficients](coefficients) and transformation parameters necessary to verify steps and make predictions outside of DataRobot. Blueprints that require complex preprocessing will not have the Beta tag because you can't export their preprocessing in a simple form (ridit transform for numerics, for example). Also note that when a blueprint has coefficients but is not marked with the Beta tag, the coefficients are not exact (e.g., they may be rounded). |
| ![](images/bias-mit-badge.png){: style="height:20px; width:auto"} <br /> BIAS MITIGATION | Indicates that the model had [bias mitigation](fairness-metrics#set-mitigation-techniques) techniques applied. The badge is added to the top three Autopilot Leaderboard models for which DataRobot automatically attempted to mitigate bias, as well as to any models to which mitigation techniques were manually applied. |
| ![](images/blueprintid-badge.png){: style="height:20px; width:auto"} <br /> BP*xx* <br /> Blueprint ID&ast; | Displays a blueprint ID that represents an instance of a single model type (including version) and feature list. Models that share these characteristics _within the same project_ have the same blueprint ID regardless of the sample size used to build them. Use the model ID to differentiate models when the blueprint ID is the same. [Blender models](#blender-models) indicate the blueprints used to create them (for example, BP6+17+20). |
| ![](images/fast-accurate.png){: style="height:20px; width:auto"} <br /> FAST & ACCURATE | *Deprecated; applicable to projects created prior to v6.1.* Indicates that this is the most accurate individual model on the Leaderboard that passes a set [prediction speed](model-rec-process) guideline. If no models meet the guideline, the badge is not applied. The badge is available for OTV but not time series projects. |
| ![](images/frozen-badge.png){: style="height:20px; width:auto"} <br /> Frozen Run | Indicates that the model was produced using the [frozen run](frozen-run) feature. The badge also indicates the sample percent of the original model. |
| ![](images/insights-badge.png){: style="height:20px; width:auto"} <br /> Insights | Indicates that the model appears on the [Insights](analyze-insights) page. |
| ![](images/modelid-badge.png){: style="height:20px; width:auto"} <br /> M*xx* <br /> Model ID&ast; | Displays a unique ID for each model on the Leaderboard. The model ID represents a single instance of a model type, feature list, and sample size _within a single project_. Use the model ID to differentiate models when the blueprint ID is the same. |
| ![](images/mono-badge.png){: style="height:20px; width:auto"} <br /> MONO | Indicates that the model either was built with [monotonic constraints](monotonic), or supports them but was not built with them. |
| ![](images/most-accurate.png){: style="height:20px; width:auto"} <br /> MOST ACCURATE | *Deprecated; applicable to projects created prior to v6.1.* Indicates that, based on the validation or cross-validation results, this model is the [most accurate](model-rec-process) model overall on the Leaderboard (in most cases, a [blender](#blender-models)). |
| ![](images/new-series-badge.png){: style="height:20px; width:auto"} <br /> NEW SERIES OPTIMIZED | Indicates a model that supports unseen series modeling ([new series support](humility-settings#select-a-model-replacement)). |
| ![](images/prepare-for-dep.png){: style="height:20px; width:auto"} <br /> PREPARED FOR DEPLOYMENT | Indicates that the model has been through the [Autopilot recommendation stages](model-rec-process) and is ready for deployment. |
| ![](images/rating-badge.png){: style="height:20px; width:auto"} <br /> Rating tables | Indicates that the model has [rating tables](rating-table) available for download. |
| ![](images/recommend.png){: style="height:20px; width:auto"} <br /> RECOMMENDED FOR DEPLOYMENT | Indicates that this is the model DataRobot [recommends for deployment](model-rec-process), based on model accuracy and complexity. |
| ![](images/ref-badge.png){: style="height:20px; width:auto"} <br /> REF | Indicates that the model is a reference model. A reference model uses no special preprocessing; it is a basic model that you can use to measure the performance increase provided by an advanced model. |
| ![](images/scorecode-badge.png){: style="height:20px; width:auto"} <br /> SCORING CODE | Indicates that the model has [Scoring Code](download) available for download. |
| ![](images/segment-champ-badge.png){: style="height:20px; width:auto"} <br /> SEGMENT CHAMPION | Indicates that the model is the chosen segment champion in a [multiseries segmented modeling](ts-segmented) project. |
| ![](images/shap-badge.png){: style="height:20px; width:auto"} <br /> SHAP | Indicates that the model was built with [SHAP-based Prediction Explanations](shap-pe). If there is no badge, the model provides XEMP-based explanations. |
| ![](images/tuned-badge.png){: style="height:20px; width:auto"} <br /> TUNED | Indicates that the model has been [tuned](adv-tuning). |
| ![](images/blacklist-badge.png){: style="height:20px; width:auto"} <br /> Upper Bound Running Time | Indicates that the model exceeded the [Upper Bound Running Time](additional#time-limit-exceptions). |

&ast; You cannot rely on blueprint or model IDs being the same across projects. Model IDs reflect the order in which models were added to the queue when built; because different projects can run different models, or the same models in a different order, these numbers can differ across projects. Blueprint IDs can likewise differ across projects because different blueprints are generated. To check whether blueprints match across projects, compare the blueprint diagrams&mdash;if the diagrams match, the blueprints are the same.

See also information on the [model recommendation](model-rec-process) calculations.
## Model icons {: #model-icons }

In addition to the tags, DataRobot displays a badge (icon) to the left of the model name indicating the type:

* ![](images/icon-dr.png): specially tuned DataRobot implementation of a model
* ![](images/icon-blender.png): [blender model](#blender-models)
* ![](images/icon-eureqa.png): [Eureqa model](eureqa)
* ![](images/icon-keras.png): Keras model
* ![](images/icon-lgbt.png): Light Gradient Boosting Machine model
* ![](images/icon-python.png): Python model
* ![](images/icon-r.png): R model
* ![](images/icon-spark1.png): Spark model
* ![](images/icon-tensor.png): TensorFlow model
* ![](images/icon-xg.png): XGBoost model
* ![](images/icon-custom.png): custom model, built with Jupyter Notebooks (deprecated)

Text below the model provides a brief description of the model type and version, or whether it uses unaltered open-source code.

### Model type and performance {: #model-type-and-performance }

Some models sacrifice prediction speed to improve prediction accuracy. These models are best suited to batch predictions ([one-time](batch-pred) or [recurring](batch-pred-jobs)), where prediction time and reliability aren't critical factors. Some use cases require a model to make low-latency (or real-time) predictions.
For these performance-sensitive use cases, it is best to avoid deploying the following model types, as they prioritize accuracy over prediction speed and prediction memory usage:

* [Keras models](vai-ref#keras-models)
* [Blender models (or ensemble models)](leaderboard-ref#blender-models)
* [Advanced tuned models](adv-tuning)
* [Models generated using Comprehensive Autopilot mode](more-accuracy)
* [TensorFlow models (deprecated)](v8.0.0-aml#tensorflow-blueprints-deprecated-and-soon-to-be-removed)

## Columns and tools {: #columns-and-tools }

Leaderboard columns give you at-a-glance information about a model's "specs":

![](images/leaderboard-columns.png)

The following table describes the Leaderboard columns and tools:

| Column | Description |
|------------|---------------|
| Model Name and Description | Provides the model name (type) as well as [identifiers](#model-icons) and description. |
| Feature List | Lists the name of the [feature list used to create the model](feature-lists). Click the Feature List label to get a count of the number of features in the list. |
| Sample Size | Displays the sample size used to create the model. Click the Sample Size label to see the number of rows the sample size represents, or set the display to only selected sample sizes. By default, DataRobot displays all sample sizes run for a project. When a project includes an [External predictions](external-preds) model, sample size displays N/A. |
| Validation | Displays the [Validation score](data-partitioning) of the model. This is the score derived from the first cross-validation fold. Some scores may be marked with an [asterisk](#asterisked-scores), indicating in-sample predictions. |
| Cross-Validation | Displays the [Cross-Validation score](data-partitioning#k-fold-cross-validation-cv), if run. If the dataset is greater than 50,000 rows, DataRobot does not automatically start a cross-validation run; you can click the **Run** link to run cross-validation manually. Some scores may be marked with an [asterisk](#asterisked-scores), indicating in-sample predictions. If the dataset is larger than 800MB, cross-validation is not allowed. |
| Holdout | Displays a lock icon that indicates whether [holdout is unlocked](unlocking-holdout) for the model. When unlocked, some scores may be marked with an [asterisk](#asterisked-scores), indicating use of in-sample predictions to derive the score. |
| Metric | Sets (and displays the selection of) an accuracy metric for the Leaderboard. Models display in order of their scoring (best to worst) for the metric chosen before the model building process. Click the orange arrow to access a dropdown that allows you to change the [optimization metric](opt-metric). |
| Menu | Provides quick access to [comparing models](model-compare), [adding and deleting](creating-addl-models) models, and [creating blender](creating-addl-models#create-a-blended-model) models. |
| Search | Searches for a model, as described [below](#search-the-leaderboard). |
| Add New Model | [Adds a model](creating-addl-models#add-models-from-the-leaderboard) based on specific criteria that you set from the dialog. |
| Filter | Filters by a variety of selection criteria. Alternatively, click a [Leaderboard tag](#tags-and-indicators) to filter by the selected tag. |
| Export | Allows you to download the Leaderboard's contents as a CSV file, as described [below](#export-the-leaderboard). |

### Tag and filter models {: #tag-and-filter-models }

The Leaderboard offers filtering capabilities to make viewing and focusing on relevant models easier.

* Tag or "star" one or more models on the Leaderboard, making it easier to refer back to them when navigating through the application. To star a model, hold the pointer over it and a star appears, which you can then click to select:

    ![](images/starred-model.png)

    To unstar the model, click the star again.
* Use the [**Filters**](#use-leaderboard-filters) option to display only models meeting the criteria you select.

    ![](images/starred-model-report.png)

* Combine any of the filters with [search filtering](#search-the-leaderboard). First, search for a model type or blueprint number, for example, and then select **Filters** to find only those models of that type meeting the additional criteria.

    ![](images/search-leaderboard.png)

### Use Leaderboard filters {: #use-leaderboard-filters }

Use the **Filters** selection box to limit the Leaderboard display to models matching the selected criteria. Available fields, and the settings for each field, depend on the project and/or model type. For example, non-date/time models offer sample size filtering while time-aware models offer training period:

![](images/leaderboard-filter-1.png)

!!! note

    Filters are inclusive. That is, results show models that match _any_ of the filters, not all filters. Also, options available for selection only include those in which at least one model matching the criteria is on the Leaderboard.

The following table describes all available Leaderboard filters.

Tag | Filters on...
--- | -----------
Model importance | Models that are manually marked with a star on the Leaderboard.
Sample size | Selected sample size or N/A for [External predictions](external-preds) models. Non time-aware only.
[Training period](ts-date-time#change-the-training-period) | Time periods, either duration or start/end date. Time-aware only.
[Feature list](feature-lists) | Any feature list, manually or automatically created, that was used in at least one of the project's models.
Model family | Models grouped by tasks, an extended functionality of the model icon badge.
[Model characteristics](#model-characteristics-options) | Displayed model badges.
![](images/icon-blueprintid.png) <br /> Blueprint ID | All models that have the same ID&mdash;representing an instance of a single model type (including version).
![](images/icon-modelid.png) <br /> Model ID | A single, unique ID for a model on the Leaderboard.
Build method | The method that added models to the Leaderboard. <br /> <ul><li>[Autopilot](model-data#set-the-modeling-mode): Models created using full, Quick, or Comprehensive Autopilot.</li><li>Repository: Models added manually from the [Repository](repository).</li><li>[Composable ML](cml-blueprint-edit): Custom models built using the blueprint editor.</li><li>[Advanced Tuning](adv-tuning): Manually tuned models. ![](images/icon-tuned.png)</li><li>[Eureqa child](eureqa): Manually added to the Leaderboard via Eureqa solutions.</li></ul>

#### Model characteristics options {: #model-characteristics-options }

The following list includes the model characteristics available to search on. See the [table above](#tags-and-indicators) for brief descriptions or the linked pages for complete details.

* [Additional insights](analyze-insights)
* [Augmented](tti-augment/index)
* [Baseline](glossary/index#baseline-model)
* [Bias mitigation](fairness-metrics)
* [Blacklisted](additional#time-limit-exceptions)
* [Deprecated](deprecations-and-migrations/index)
* [Exportable coefficients](coefficients)
* [External predictions](external-preds)
* [Frozen](frozen-run)
* [Monotonic constraints](monotonic)
* [New series optimized](humility-settings#select-a-model-replacement)
* [Rating table](rating-table)
* Reference model
* [Scoring code](scoring-code/index)
* [SHAP](shap-pe)

### Use Repository filters {: #use-repository-filters }

The **Filters** option is also available from the model [**Repository**](repository) page:

![](images/repo-filter.png)

The following table describes all available Repository filters.

Tag | Filters on...
--- | --------------
Blueprint characteristics | Blueprints based on the functionality they support. Options are Reference, Monotonic, Baseline, External Predictions, and SHAP.
Blueprint family | The mathematical technique or algorithm the blueprint uses.
Blueprint type | Blueprint origin, either [DataRobot](repository#datarobot-models), [Eureqa](eureqa), or [Composable ML](cml/index).
Blueprint ID | Models that have the same ID&mdash;representing an instance of a single model type (including version) and feature list.

### Search the Leaderboard {: #search-the-leaderboard }

In addition to the **Filter** method, the Leaderboard provides a search method to further limit the display to only those models matching your search criteria.

![](images/search-leaderboard.png)

### Export the Leaderboard {: #export-the-leaderboard }

The Leaderboard allows you to download its contents as a CSV file. To do so, click the **Export** button on the action bar:

![](images/leaderboard-1.png)

Doing so opens a preview screen:

![](images/leaderboard-2.png)

This screen displays the Leaderboard contents (1), which you can copy, and lets you rename the .csv file (2). Note that:

* .csv is the only available file type for exporting the Leaderboard.
* Holdout scores are only included in the report if holdout has been unlocked.

Click **Download** to export the contents.

## Blender models {: #blender-models }

A blender (or ensemble) model can increase accuracy by combining the predictions of between two and eight models. Use the [**Create blenders from top models**](additional) advanced option to enable DataRobot to add the following blenders to the Leaderboard automatically:

* Average (AVG) Blend
* Generalized Linear Model (GLM) Blend
* Elastic Net (ENET) Blend

!!! note

    Depending on the project's dataset, DataRobot may only run a subset of the blenders listed above.
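As a rough illustration of how the simplest blend types combine member-model predictions (illustrative only; DataRobot's actual blenders also handle weighting, stacking, and link functions), an AVG, MED, MIN, or MAX blend reduces each row's member predictions with the corresponding statistic:

```python
import statistics

# Illustrative sketch of simple blender combination rules; not DataRobot source.
BLEND_FUNCS = {
    "AVG": statistics.fmean,
    "MED": statistics.median,
    "MIN": min,
    "MAX": max,
}

def blend(predictions, method="AVG"):
    """predictions: list of per-model prediction lists, one entry per member model."""
    combine = BLEND_FUNCS[method]
    # Combine the member models' predictions row by row.
    return [combine(row) for row in zip(*predictions)]

# Two member models' predictions for three rows:
member_preds = [[0.2, 0.8, 0.5],
                [0.4, 0.6, 0.5]]
```

GLM and ENET blends are more involved: instead of a fixed statistic, they fit a model over the member predictions to learn the combination weights.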
If you did not select the **Create blenders from top models** option prior to model building, you can manually [create blender models](creating-addl-models#create-a-blended-model) when Autopilot has finished.

To improve response times for blender models, DataRobot stores predictions for all models trained at the highest sample size used by Autopilot (typically 64%) and creates blenders from those results. Storing only the largest sample size (and therefore predictions from the best-performing models) limits the disk space required.

DataRobot has special logic in place for natural language processing (NLP) and image fine-tuner models. For example, fine-tuners do not support [stacked predictions](data-partitioning#what-are-stacked-predictions). As a result, when blending stacked and non-stack-enabled models, the available blender methods are AVG, MED, MIN, or MAX. DataRobot does not support other methods in this case because they may introduce target leakage.

## Asterisked scores {: #asterisked-scores }

!!! info "Availability information"

    Asterisked partitions do not apply to time series or multiseries projects.

Sometimes, the Leaderboard's **Validation**, **Cross-Validation**, or **Holdout** score displays an asterisk. Hover over the score for a tooltip explaining the reason for the asterisk:

![](images/leaderboard-asterisk.png)

!!! note

    The following training set percentage values are examples based on the default [data partitioning](data-partitioning) settings recommended by DataRobot (without downsampling). The default data partitions are [5-fold CV](data-partitioning#k-fold-cross-validation-cv) with 20% holdout or, for larger datasets, [TVH](data-partitioning#training-validation-and-holdout-tvh) with 16% validation and 20% holdout. If you customize the data partitioning settings, the thresholds for training into validation change. For example, if you select 10-fold CV with 20% holdout, your maximum training set sample size will be 72%, not 64%.
By default, DataRobot uses up to 64% of the data for the training set. This is the largest sample size that does not include any data from the validation or holdout sets (16% of the data is reserved for the validation set and 20% for the holdout set). When model building finishes, you can manually train at larger sample sizes (for example, 80% or 100%). If you train above 64% but under 80%, the model trains on data from the validation set. If you train above 80%, the model trains on data from the holdout set.

As a result, if you train above 64%, DataRobot marks the **Validation** score with an asterisk to indicate that some in-sample predictions were used for that score. If you train above 80%, the **Holdout** score is also asterisked to indicate the use of in-sample predictions to derive the score.

## N/A scores {: #na-scores }

Sometimes, the Leaderboard's **Validation**, **Cross-Validation**, or **Holdout** score displays **N/A** instead of a score. "Not available" scores occur if your project trains models into the validation or holdout sets *and* meets any of the following criteria:

* The dataset exceeds 800MB, resulting in a slim-run project containing models that do not have [stacked predictions](data-partitioning#what-are-stacked-predictions).
* The project is [date/time partitioned](otv) (both OTV and time series), and all models do not have stacked predictions.
* The project is [multiclass](multiclass) with greater than ten classes.
* The project uses [Eureqa modeling](eureqa), as Eureqa models do not have stacked predictions.
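The sample-size thresholds in the asterisked-scores discussion above follow directly from the partition sizes. A small sketch (illustrative; assumes default-style partitioning without downsampling) computes the largest in-sample-free training size for a given CV fold count and holdout percentage:

```python
def max_training_pct(cv_folds, holdout_pct):
    """Largest training sample (as a % of the data) that touches neither
    the validation fold nor the holdout set.

    Illustrative of the documented examples; not DataRobot source code.
    """
    available = 100 - holdout_pct      # data outside the holdout set
    validation = available / cv_folds  # one fold is reserved for validation
    return available - validation
```

With the default 5-fold CV and 20% holdout this gives 64%; with 10-fold CV and 20% holdout it gives 72%, matching the note above. Training above the returned value asterisks the Validation score; training above `100 - holdout_pct` asterisks the Holdout score as well.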
---
title: Model recommendation process
description: As a result of the Autopilot modeling process, the most accurate individual, non-blender model is selected and then prepared for deployment.
---

# Model recommendation process {: #model-recommendation-process }

DataRobot provides an option to set the Autopilot modeling process to recommend a model for deployment. If you have enabled the [**Recommend and prepare a model for deployment**](additional) option, one of the models&mdash;the most accurate individual, _non-blender_ model&mdash;is selected and then prepared for deployment. The following tabs describe the process, when this option is enabled, by project type (AutoML or time-aware) and by modeling mode (full Autopilot/Comprehensive or Quick).

=== "AutoML"

    The following describes the model recommendation process for full Autopilot and Comprehensive modes in AutoML projects. Accuracy is based on the up-to-validation sample size (typically 64%). The resulting prepared model is marked with the **Recommended for Deployment** and **Prepared for Deployment** badges. You can also select any model from the Leaderboard and initiate the [deployment preparation](#prepare-a-model-for-deployment) process.

    The following describes the preparation process:

    1. First, DataRobot calculates **Feature Impact** for the selected model and uses it to generate a [reduced feature list](feature-lists#automatically-created-feature-lists).
    2. Next, the app retrains the selected model on the reduced feature list. If the new model performs better than the original model, DataRobot uses the new model for the next stage. Otherwise, the original model is used.
    3. DataRobot then retrains the selected model at an up-to-holdout sample size (typically 80%). As long as the sample is under the frozen threshold (1.5GB), the stage is not frozen.
    4. Finally, DataRobot retrains the selected model as a [frozen run](frozen-run) (hyperparameters are not changed from the up-to-holdout run) using a 100% sample size and selects it as **Recommended for Deployment**.

    Depending on the size of the dataset, the insights for the recommended model are either based on the up-to-holdout model or, if DataRobot can use [out-of-sample predictions](data-partitioning#what-are-stacked-predictions), based on the 100%, recommended model.

=== "AutoML Quick"

    The following describes the model recommendation process for Quick Autopilot mode in AutoML projects. Accuracy is based on the up-to-validation sample size (typically 64%). The resulting prepared model is marked with the **Recommended for Deployment** and **Prepared for Deployment** badges. You can also select any model from the Leaderboard and initiate the [deployment preparation](#prepare-a-model-for-deployment) process.

    The following describes the preparation process:

    1. First, DataRobot calculates **Feature Impact** for the 64% sample size of the recommended model.
    2. Next, DataRobot uses the results of that calculation to create a reduced feature list, if applicable (i.e., if the feature list can be reduced).
    3. Finally, DataRobot retrains the selected model as a [frozen run](frozen-run) using a 100% sample size and selects it as **Recommended for Deployment**. This frozen-run model is trained on the same feature list as the 64% sample size model. To apply the reduced feature list to the recommended model, manually retrain it&mdash;or any Leaderboard model&mdash;using the reduced feature list.

    Depending on the size of the dataset, the insights for the recommended model are either based on the up-to-holdout model or, if DataRobot can use [out-of-sample predictions](data-partitioning#what-are-stacked-predictions), based on the 100%, recommended model.
=== "Time-aware"

    The following describes the model recommendation process for OTV and time series projects in full Autopilot mode. When backtesting is finished, one of the models&mdash;the most accurate individual, non-blender model&mdash;is selected and then prepared for deployment. The resulting prepared model is marked with the **Recommended for Deployment** badge.

    The following describes the preparation process for time-aware projects:

    1. First, DataRobot calculates **Feature Impact** for the selected model and uses it to generate a [reduced feature list](feature-lists#automatically-created-feature-lists).
    2. Next, the app retrains the selected model on the reduced feature list. (If the selected model is a [Start/End Date](otv#setting-the-start-and-end-dates) model, because it is frozen, it is not retrained on the reduced feature list or most recent data.)
    3. If the new model performs better than the original model, DataRobot then retrains the better-scoring model on the most recent data (using the same duration/row count as the original model). If using duration, and the equivalent period does not provide enough rows for training, DataRobot extends it until the minimum is met.

    Note that there are two exceptions for time series models:

    * Feature reduction cannot be run for baseline (naive) or ARIMA models because they only use `date+naive` prediction features (i.e., there is nothing to reduce).
    * Because they don't use weights to train and don't need retraining, baseline (naive) models are not retrained on the most recent data.

=== "Time-aware Quick"

    The following describes the model recommendation process for OTV and time series projects in Quick Autopilot mode. When backtesting is finished, one of the models&mdash;the most accurate individual, non-blender model&mdash;is selected and then prepared for deployment. The resulting prepared model is marked with the **Recommended for Deployment** badge.
    The following describes the preparation process for time-aware projects:

    1. DataRobot calculates **Feature Impact** for the selected model.
    2. Next, DataRobot uses the results of that calculation to create a reduced feature list, if applicable (that is, if the feature list can be reduced).
    3. Finally, DataRobot retrains the best-scoring model on the most recent data (using the same duration/row count as the original model). If using duration, and the equivalent period does not provide enough rows for training, DataRobot extends it until the minimum duration is met. To apply the reduced feature list to the best-scoring model, manually retrain it&mdash;or any Leaderboard model&mdash;using the reduced feature list.

    Note that there are two exceptions for time series models:

    * Feature reduction cannot be run for baseline (naive) or ARIMA models because they only use `date+naive` prediction features (that is, there is nothing to reduce).
    * Because they don't use weights to train and don't need retraining, baseline (naive) models are not retrained on the most recent data.

## Prepare a model for deployment {: #prepare-a-model-for-deployment }

Although Autopilot recommends and prepares a single model for deployment, you can initiate the Autopilot recommendation and deployment preparation stages for any Leaderboard model. To do so, select a model from the Leaderboard and navigate to [**Predict > Deploy**](deploy-model).

![](images/prep-dep-1.png)

Click **Prepare for Deployment**. DataRobot begins running the recommendation stages described above for the selected model (view progress in the right panel). In other words, DataRobot runs **Feature Impact**, retrains the model on a reduced feature list, trains on a higher sample size, and then on the full sample size (for non date/time partitioned projects) or the most recent data (for [time-aware projects](ts-date-time#recommended-time-series-models)).
Once the process completes, DataRobot marks the new, final model built at 100% with the **Prepared for Deployment** badge. (The originally recommended model also retains its badge.) From the **Deploy** tab of the original model, click **Go to model** to see the prepared model on the Leaderboard.

![](images/prep-dep-2.png)

Click the new model's blueprint number to see the new feature list and sample sizes associated with the process:

![](images/prep-dep-3.png)

If you return to the model that you made the original request from (for example, the 64% sample size) and access the **Deploy** tab, you'll see that it is now linked to the prepared model.

## Notes and considerations {: #notes-and-considerations }

* Retraining the final **Recommended for Deployment** model at 100% is always executed as a frozen run. This makes model retraining faster and ensures that the 100% model uses the same settings as the 80% model.
* If the model that is recommended for deployment has been trained into the validation set, DataRobot unlocks and displays the [Holdout score](unlocking-holdout) for this model, but not for the other Leaderboard models. Holdout can be unlocked for the other models from the right panel.
* If the model that is recommended for deployment has been trained into the validation set, or the project was created without a holdout partition, the ability to compute predictions using validation and holdout data is not available.
* The heuristic logic of automatic model recommendation may differ across project types. For example, retraining a model with non-redundant features is implemented in regression and binary classification, while retraining a model at a higher sample size is implemented in regression, binary classification, and multiclass projects.
* If you terminate a model that is being trained on a higher sample size, or training on a higher sample size does not successfully finish, that model will not be a candidate for the **Recommended for Deployment** model.

### Deprecated badges {: #deprecated-badges }

Projects created prior to v6.1 may also have been tagged with the **Most Accurate** and/or **Fast & Accurate** badges. With improvements made to Autopilot automation, these badges are no longer necessary, but they are still visible, if they were assigned, in pre-v6.1 projects. Contact your DataRobot representative for code snippets that can help transition automation built around the deprecated badges.

* The model marked **Most Accurate** is typically, but not always, a blender. As the name suggests, it is the most accurate model on the Leaderboard, determined by a ranking of validation or cross-validation scores.
* The **Fast & Accurate** badge, applicable only to non-blender models, is assigned to the model that is both the most accurate _and_ the fastest to make predictions. To evaluate, DataRobot uses prediction timing from:
    * a project's holdout set.
    * a sample of the training data for a project without holdout.

Not every project has a model tagged as **Fast & Accurate**. This happens when the prediction time does not meet the minimum speed threshold determined by an internal algorithm.
---
title: Optimization metrics
description: Provides a complete reference to the optimization metrics DataRobot employs during the modeling process.
---

# Optimization metrics {: #optimization-metrics }

The following table lists all metrics, with a short description, available from the [**Optimization Metric**](additional#change-the-optimization-metric) dropdown. The sections [below the table](#datarobot-metrics) provide more detailed explanations, leveraging information from across the internet.

!!! tip

    Remember that the metric DataRobot chooses for scoring models is usually the best selection. Changing the metric is advanced functionality and recommended only for those who understand the metrics and the algorithms behind them. For information on how recommendations are made, see [Recommended metrics](opt-metric#recommended-metrics).

For weighted metrics, the weights are the result of [smart downsampling](smart-ds) and/or specifying a value for the [Advanced options weights](additional) parameter. The metric then takes those weights into account.

Metrics used are dependent on project type, either R (regression), C (binary classification), or M (multiclass).

??? info "What are true/false negatives and true/false positives?"

    Consider the following definitions:

    * True means the prediction was correct; false means the prediction was incorrect.
    * Positive means the model predicted positive; negative means it predicted negative.

    Based on those definitions:

    * True positives are observations correctly predicted as positive.
    * True negatives are observations correctly predicted as negative.
    * False positives are observations incorrectly predicted as positive.
    * False negatives are observations incorrectly predicted as negative.
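The definitions above can be sketched directly from vectors of actuals and predicted labels. The data below is made up purely for illustration:

```python
import numpy as np

# Hypothetical actuals and predicted labels (1 = positive, 0 = negative).
actual = np.array([1, 1, 1, 0, 0, 0, 0, 1])
predicted = np.array([1, 0, 1, 0, 1, 0, 0, 1])

tp = np.sum((predicted == 1) & (actual == 1))  # correctly predicted positive
tn = np.sum((predicted == 0) & (actual == 0))  # correctly predicted negative
fp = np.sum((predicted == 1) & (actual == 0))  # incorrectly predicted positive
fn = np.sum((predicted == 0) & (actual == 1))  # incorrectly predicted negative

print(tp, tn, fp, fn)  # 3 3 1 1
```

The four counts always sum to the number of observations, and together they are the raw material for most of the classification metrics described below.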
| Display | Full name | Description | Project type |
|---------|-----------|-------------|--------------|
| [Accuracy](#accuracybalanced-accuracy) | Accuracy | Computes subset accuracy; the set of labels predicted for a sample must exactly match the corresponding set of labels in y\_true. | Binary classification, multiclass |
| [AUC/Weighted AUC](#aucweighted-auc) | Area Under the (ROC) Curve | Measures the ability to distinguish the ones from the zeros; for multiclass, AUC is calculated for each class one-vs-all and then averaged, weighted by the class frequency. | Binary classification, multiclass, multilabel |
| [Area Under PR Curve](#area-under-pr-curve) | Area Under the Precision-Recall Curve | Approximation of the Area under the Precision-Recall Curve; summarizes precision and recall in one score. Well-suited to imbalanced targets. | Binary classification, multilabel |
| [Balanced Accuracy](#accuracybalanced-accuracy) | Balanced Accuracy | Provides the average of the class-by-class one-vs-all accuracy. | Multiclass |
| [FVE Binomial/Weighted FVE Binomial](#fve-deviance-metrics) | Fraction of Variance Explained | Measures deviance based on fitting on a binomial distribution. | Binary classification |
| [FVE Gamma/Weighted FVE Gamma](#fve-deviance-metrics) | Fraction of Variance Explained | Provides FVE for gamma deviance. | Regression |
| [FVE Multinomial/Weighted FVE Multinomial](#fve-deviance-metrics) | Fraction of Variance Explained | Measures deviance based on fitting on a multinomial distribution. | Multiclass |
| [FVE Poisson/Weighted FVE Poisson](#fve-deviance-metrics) | Fraction of Variance Explained | Provides FVE for Poisson deviance. | Regression |
| [FVE Tweedie/Weighted FVE Tweedie](#fve-deviance-metrics) | Fraction of Variance Explained | Provides FVE for Tweedie deviance. | Regression |
| [Gamma Deviance/Weighted Gamma Deviance](#deviance-metrics) | Gamma Deviance | Measures the inaccuracy of predicted mean values when the target is skewed and gamma distributed. | Regression |
| [Gini/Weighted Gini](#gini-coefficient) | Gini Coefficient | Measures the ability to rank. | Regression, binary classification |
| [Gini Norm/Weighted Gini Norm](#gini-coefficient) | Normalized Gini Coefficient | Measures the ability to rank. | Regression, binary classification |
| [KS](#kolmogorovsmirnov-ks) | Kolmogorov-Smirnov | Measures the maximum distance between two non-parametric distributions. Used for ranking a binary classifier, KS evaluates models based on the degree of separation between true positive and false positive distributions. The KS value is displayed in the [ROC Curve](roc-curve#kolmogorov-smirnov-ks-metric). | Binary classification |
| [LogLoss/Weighted LogLoss](#loglossweighted-logloss) | Logarithmic Loss | Measures the inaccuracy of predicted probabilities. | Binary classification, multiclass, multilabel |
| [MAE/Weighted MAE](#maeweighted-mae)\* | Mean Absolute Error | Measures the inaccuracy of predicted median values. | Regression |
| [MAPE/Weighted MAPE](#mapeweighted-mape) | Mean Absolute Percentage Error | Measures the percent inaccuracy of the mean values. | Regression |
| [MASE](#mase) | Mean Absolute Scaled Error | Measures relative performance with respect to a baseline model. | Regression (time series only) |
| [Max MCC/Weighted Max MCC](#max-mccweighted-max-mcc) | Maximum Matthews Correlation Coefficient | Measures the maximum value of the Matthews correlation coefficient between the predicted and actual class labels. | Binary classification |
| [Poisson Deviance/Weighted Poisson Deviance](#deviance-metrics) | Poisson Deviance | Measures the inaccuracy of predicted mean values for count data. | Regression |
| [R Squared/Weighted R Squared](#r-squared-r2weighted-r-squared) | R Squared | Measures the proportion of total variation of outcomes explained by the model. | Regression |
| [Rate@Top5%](#ratetop10-ratetop5-ratetoptenth) | Rate@Top5% | Measures the response rate in the top 5% highest predictions. | Binary classification |
| [Rate@Top10%](#ratetop10-ratetop5-ratetoptenth) | Rate@Top10% | Measures the response rate in the top 10% highest predictions. | Binary classification |
| [Rate@TopTenth%](#ratetop10-ratetop5-ratetoptenth) | Rate@TopTenth% | Measures the response rate in the top tenth of a percent (0.1%) highest predictions. | Binary classification |
| [RMSE/Weighted RMSE](#rmse-weighted-rmse-rmsle-weighted-rmsle) | Root Mean Squared Error | Measures the inaccuracy of predicted mean values when the target is normally distributed. | Regression, binary classification |
| [RMSLE/Weighted RMSLE](#rmse-weighted-rmse-rmsle-weighted-rmsle)\* | Root Mean Squared Log Error | Measures the inaccuracy of predicted mean values when the target is skewed and log-normally distributed. | Regression |
| [Silhouette Score](#silhouette-score) | Silhouette score, also referred to as silhouette coefficient | Compares clustering models. | Clustering |
| [SMAPE/Weighted SMAPE](#smapeweighted-smape) | Symmetric Mean Absolute Percentage Error | Measures the bounded percent inaccuracy of the mean values. | Regression |
| [Synthetic AUC](anomaly-detection#synthetic-auc-metric) | Synthetic Area Under the Curve | Calculates AUC. | Unsupervised |
| [Theil's U](#theils-u) | Henri Theil's U Index of Inequality | Measures relative performance with respect to a baseline model. | Regression (time series only) |
| [Tweedie Deviance/Weighted Tweedie Deviance](#deviance-metrics) | Tweedie Deviance | Measures the inaccuracy of predicted mean values when the target is zero-inflated and skewed. | Regression |

\* Because these metrics don't optimize for the mean, Lift Chart results (which show the mean) are misleading for most models that use them as a metric.

## Recommended metrics {: #recommended-metrics }

DataRobot recommends which optimization metric to use when scoring models; the recommended metric is usually the best option for the given circumstances. Changing the metric is advanced functionality, and only those who understand the other metrics (and the algorithms behind them) should use them for analysis. The table below outlines the general guidelines DataRobot follows when recommending a metric:

Project type | Recommended metric
-------------|-------------------
Binary classification | [**LogLoss**](#loglossweighted-logloss)
Multiclass classification | [**LogLoss**](#loglossweighted-logloss)
Multilabel classification | [**LogLoss**](#loglossweighted-logloss)
Regression | DataRobot chooses between the RMSE, Tweedie Deviance, Poisson Deviance, and Gamma Deviance optimization metrics by applying heuristics informed by the properties of the target distribution\*, including percentiles, mean, variance, skew, and zero counts.

\* Of the [EDA 1](eda-explained#eda1) sample.

## DataRobot metrics {: #datarobot-metrics }

The following sections describe the DataRobot optimization metrics in more detail.

!!! note

    There is some overlap between DataRobot optimization metrics and Eureqa error metrics. You may notice, however, that in some cases the metric formulas are expressed differently. For example, predictions may be expressed as `y^` versus `f(x)`. Both are correct, with the nuance being that `y^` indicates a prediction generally, regardless of how you got there, while `f(x)` indicates a function that may represent an underlying equation.
### Accuracy/Balanced Accuracy {: #accuracybalanced-accuracy }

| Display | Full name | Description | Project type |
|---------|-----------|-------------|--------------|
| Accuracy | Accuracy | Computes subset accuracy; the set of labels predicted for a sample must exactly match the corresponding set of labels in y\_true. | Binary classification, multiclass |
| Balanced Accuracy | Balanced Accuracy | Provides the average of the class-by-class one-vs-all accuracy. | Multiclass |

The Accuracy metric applies to classification problems and captures the ratio of the total count of correct predictions over the total count of all predictions, based on a given threshold. True positives (TP) and true negatives (TN) are correct predictions; false positives (FP) and false negatives (FN) are incorrect predictions. The formula is:

![](images/dnt-opt-accuracy-1.png)

Unlike Accuracy, which looks at the number of true positive and true negative predictions per class, Balanced Accuracy looks at the true positives (TP) and the false negatives (FN) for each class, also known as Recall. It is the sum of the recall values of each class divided by the total number of classes. (This formula matches the TPR formula.)

![](images/recall.png)

![](images/balanced-accuracy.png)

For example, in the 3x3 matrix example below:

![](images/acc-metric-example.png)

Accuracy = `(TP_A + TP_B + TP_C) / Total prediction count` or, from the image above, `(9 + 60 + 30) / 200 = 0.495`

Balanced Accuracy = `(Recall_A + Recall_B + Recall_C) / total number of classes`, where:

* Recall_A = `9 / (9 + 1 + 0) = 0.9`
* Recall_B = `60 / (20 + 60 + 20) = 0.6`
* Recall_C = `30 / (25 + 35 + 30) = 0.333`

Balanced Accuracy = `(0.9 + 0.6 + 0.333) / 3 = 0.611`

Accuracy and Balanced Accuracy apply to both binary and multiclass classification.

_Using weights_: Every cell of the confusion matrix is the sum of the sample weights in that cell.
If no weights are specified, the implied weight is 1, so the sum of the weights is also the count of observations.

Accuracy does not perform well with imbalanced datasets. For example, if you have 95 negative and 5 positive samples, classifying all as negative gives a 0.95 accuracy score. <a target="_blank" href="https://en.wikipedia.org/wiki/Precision_and_recall">Balanced Accuracy (bACC)</a> overcomes this problem by normalizing true positive and true negative predictions by the number of positive and negative samples, respectively, and dividing their sum by two. This is equivalent to the following formula:

![](images/dnt-opt-accuracy-3.png)

### Approximate Median Significance (deprecated) {: #approximate-median-significance-deprecated }

| Display | Full name | Description | Project type |
|---------|-----------|-------------|--------------|
| AMS@15%tsh | Approximate Median Significance | Measures the median of estimated significance with a 15% threshold. | Binary classification |
| AMS@opt\_tsh | Approximate Median Significance | Measures the median of estimated significance with an optimal threshold. | Binary classification |

The Approximate Median Significance (AMS) metric treats one of the two classes in a binary classification problem as the "signal" (true positive) and the other as the "background" (false positive). This metric was largely brought to light by the ATLAS experiment to identify the <a target="_blank" href="https://higgsml.lal.in2p3.fr/files/2014/04/documentation_v1.8.pdf">Higgs boson</a>, and the associated <a target="_blank" href="https://www.kaggle.com/c/final22202#evaluation">Kaggle competition</a>. Since the probability of a signal event is usually several orders of magnitude lower than the probability of a background event, the signal and background samples are usually renormalized to produce a balanced classification problem.
Next, a real-valued discriminant function is trained on this reweighted sample to minimize the weighted classification error. The signal region is then defined by cutting the discriminant value at a certain threshold, which may be optimized on a held-out set to maximize the sensitivity of the statistical test.

Given a classifier, `g`, and _n_ observed events selected by `g` (positives), the (Gaussian) significance of discovery would be, roughly:

![](images/dnt-opt-ams-1.png)

standard deviations, where:

![](images/dnt-opt-ams-2.png)

is the expected value of background observations, and thus:

![](images/dnt-opt-ams-3.png)

is the expected value of signal observations. Or, stated equivalently:

![](images/dnt-opt-ams-4.png)

would suggest an objective function of:

![](images/dnt-opt-ams-5.png)

for training `g`. However, it is only valid when `s << b` and `b >> 1`, which is often not the case in practice. To improve the behavior of the objective function in this range, the AMS objective function is defined by:

![](images/dnt-opt-ams-6.png)

where `s` (signal) and `b` (background) are the unnormalized true positive and false positive rates, respectively, and the regularization term is set as a constant equal to 10.

The classifier is trained on simulated background and signal events. The AMS `s` and `b` are the sums of signal and background weights, respectively, in the selection region, and the objective is a function of the weights of selected events. Simulators produce weights for each event to correct for the mismatch between the natural (prior) probability of the event and the instrumental probability applied by the simulator. After re-normalizing the samples to produce a balanced classification problem, a real-valued discriminant function is trained on this reweighted sample to minimize the weighted classification error.
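The regularized AMS objective above can be written directly as a function of the signal and background weights `s` and `b`. This is a sketch following the HiggsML challenge documentation linked above (not DataRobot's internal implementation), with the regularization term exposed as a `b_reg` parameter defaulting to 10:

```python
import numpy as np

def ams(s, b, b_reg=10.0):
    """Approximate Median Significance for total signal weight s and total
    background weight b in the selection region, with regularization b_reg."""
    return np.sqrt(2 * ((s + b + b_reg) * np.log(1 + s / (b + b_reg)) - s))
```

In the `s << b` regime this reduces to approximately `s / sqrt(b)`, the rough significance formula given earlier; the regularization term keeps the objective well-behaved when `b` is small.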
The signal region is then defined by cutting the discriminant value at a certain threshold, which is optimized on a held-out set to maximize the sensitivity of the statistical test.

### AUC/Weighted AUC {: #aucweighted-auc }

| Display | Full name | Description | Project type |
|---------|-----------|-------------|--------------|
| AUC/Weighted AUC | Area Under the (ROC) Curve | Measures the ability to distinguish the ones from the zeros; for multiclass, AUC is calculated for each class one-vs-all and then averaged, weighted by the class frequency. | Binary classification, multiclass, multilabel |

AUC for the ROC curve is a performance measurement for classification problems. ROC is a probability curve, and AUC represents the degree or measure of separability. The metric ranges from 0 to 1 and indicates how well the model is capable of distinguishing between classes. The higher the AUC, the better the model is at predicting negatives (0s as 0s) and positives (1s as 1s). The ROC curve shows how the true positive rate (sensitivity) on the Y-axis and the false positive rate (1&nbsp;&minus;&nbsp;specificity) on the X-axis vary at each possible threshold.

For a multiclass or multilabel model, you can plot _n_ AUC/ROC curves for _n_ classes using a one-vs-all methodology. For example, if there are three classes named `X`, `Y`, and `Z`, there will be one ROC for `X` classified against `Y` and `Z`, another ROC for `Y` classified against `X` and `Z`, and a third for `Z` classified against `X` and `Y`. To extend the ROC curve and the Area Under the Curve to multiclass or multilabel classification, it is necessary to binarize the output.

For multiclass projects, the AUC score is the averaged AUC score for each single class (macro average), weighted by support (the number of true instances for each class).
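The support-weighted, one-vs-all averaging just described corresponds to scikit-learn's `average="weighted"` option for `roc_auc_score` (a sketch with made-up data, not DataRobot's internal implementation):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical three-class example: true labels and predicted class
# probabilities (each row sums to 1).
y_true = np.array([0, 0, 1, 1, 2, 2, 0, 1])
y_prob = np.array([
    [0.7, 0.2, 0.1],
    [0.5, 0.3, 0.2],
    [0.2, 0.6, 0.2],
    [0.1, 0.7, 0.2],
    [0.1, 0.2, 0.7],
    [0.3, 0.3, 0.4],
    [0.6, 0.3, 0.1],
    [0.2, 0.5, 0.3],
])

# One-vs-all AUC per class, macro-averaged with class support as weights.
auc = roc_auc_score(y_true, y_prob, multi_class="ovr", average="weighted")
```

In this toy example each class is perfectly separated one-vs-all, so every per-class AUC, and therefore the weighted average, is 1.0.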
The Weighted AUC score is the averaged, sample-weighted AUC score for each single class (macro average), weighted according to the sample weights for each class: `sum(sample_weights_for_class) / sum(sample_weights)`.

For multilabel projects, the AUC score is the averaged AUC score for each single class (macro average). The Weighted AUC score is the averaged, sample-weighted AUC score for each single class (macro average).

### Area Under PR Curve {: #area-under-pr-curve }

| Display | Full name | Description | Project type |
|---------|-----------|-------------|--------------|
| Area Under PR Curve | Area Under the Precision-Recall Curve | Approximation of the Area under the Precision-Recall Curve; summarizes precision and recall in one score. Well-suited to imbalanced targets. | Binary classification, multilabel |

The Precision-Recall (PR) curve captures the tradeoff between a model's precision and recall at different probability thresholds. Precision is the proportion of positively labeled cases that are true positives (i.e., `TP / (TP + FP)`), and recall is the proportion of positive cases that are recovered by the model (`TP / (TP + FN)`).

The area under the PR curve cannot always be calculated exactly, so it is approximated by a weighted mean of the precision at each threshold, weighted by the improvement in recall from the previous threshold:

![](images/dnt-opt-auc-pr.png)

Area under the PR curve is very well-suited to problems with imbalanced classes where the minority class is the "positive" class of interest (it is important that it is encoded as such): precision and recall both summarize information about positive-class retrieval, and neither is informed by true negatives.
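The same weighted-mean approximation is what scikit-learn implements as `average_precision_score`, which makes it easy to experiment with (a sketch with made-up scores, not DataRobot's internal implementation):

```python
import numpy as np
from sklearn.metrics import average_precision_score

# Hypothetical imbalanced binary target and predicted probabilities;
# one negative (score 0.85) outranks two of the three positives.
y_true = np.array([0, 0, 0, 0, 0, 1, 1, 0, 1, 0])
y_score = np.array([0.1, 0.2, 0.15, 0.3, 0.25, 0.8, 0.6, 0.55, 0.9, 0.85])

# Sum over thresholds of (recall_n - recall_{n-1}) * precision_n.
ap = average_precision_score(y_true, y_score)
```

Ranking the scores by hand: the positives land at ranks 1, 3, and 4, so the score is `(1/3)(1/1) + (1/3)(2/3) + (1/3)(3/4) ≈ 0.806` — each recall step contributes the precision observed at that threshold.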
For more reading about the relative merits of using the above approach as opposed to an interpolation of the area, see:

* <a target="_blank" href="https://www.biostat.wisc.edu/~page/rocpr.pdf">The Relationship Between Precision-Recall and ROC Curves</a>
* <a target="_blank" href="https://papers.nips.cc/paper/5867-precision-recall-gain-curves-pr-analysis-done-right">Precision-Recall-Gain Curves: PR Analysis Done Right</a>

For multilabel projects, the reported Area Under PR Curve score is the averaged Area Under PR Curve score for each single class (macro average).

### Deviance metrics {: #deviance-metrics }

| Display | Full name | Description | Project type |
|---------|-----------|-------------|--------------|
| Gamma Deviance/Weighted Gamma Deviance | Gamma Deviance | Measures the inaccuracy of predicted mean values when the target is skewed and gamma distributed. | Regression |
| Poisson Deviance/Weighted Poisson Deviance | Poisson Deviance | Measures the inaccuracy of predicted mean values for count data. | Regression |
| Tweedie Deviance/Weighted Tweedie Deviance | Tweedie Deviance | Measures the inaccuracy of predicted mean values when the target is zero-inflated and skewed. | Regression |

<a target="_blank" href="https://en.wikipedia.org/wiki/Deviance_(statistics)">Deviance</a> is a measure of the goodness of model fit&mdash;how well your model fits the data. Technically, it is how well your fitted prediction model compares to a perfect (saturated) model on the observed values. This is usually defined as twice the log-likelihood function, with parameters determined via maximum likelihood estimation. Thus, the deviance is defined as the difference of likelihoods between the fitted model and the saturated model. As a consequence, the deviance is always larger than or equal to zero, where zero applies only if the fit is perfect.

Deviance metrics are based on the principle of generalized linear models.
That is, the deviance is some measure of the error difference between the target value and the predicted value, where the predicted value is run through a link function, denoted with:

![](images/dnt-opt-dev-1a.png)

An example of a link function is the logit function, which is used in logistic regression to transform the prediction from a linear model into a probability between 0 and 1.

In essence, each deviance equation is an error metric intended to work with a type of distribution deemed applicable for the target data. For example, a normal distribution for a target uses the sum of squared errors:

![](images/dnt-opt-dev-1.png)

And the Python implementation: `np.sum((y - pred) ** 2)`

In this case, the deviance metric is just that&mdash;the sum of squared errors.

For a gamma distribution, where data is skewed to one side (say to the right, for something like the distribution of how much customers spend at a store), deviance is:

![](images/dnt-opt-dev-2.png)

Python: `2 * np.mean(-np.log(y / pred) + (y - pred) / pred)`

For a Poisson distribution, used when you are interested in predicting counts or the number of occurrences of something, the function is:

![](images/dnt-opt-dev-3.png)

Python: `2 * np.mean(y * np.log(y / pred) - (y - pred))`

For Tweedie, the function looks a little messier. Tweedie Deviance measures how well the model fits the data, assuming the target has a Tweedie distribution. Tweedie is commonly used in zero-inflated regression problems, where there are a relatively large number of 0s and the rest are continuous values. Smaller deviance values indicate more accurate models. Because Tweedie Deviance is a more complicated metric, it may be easier to explain the model using FVE (Fraction of Variance Explained) Tweedie. This metric is equivalent to R², but for Tweedie distributions instead of normal distributions. A score of 1 is a perfect explanation.
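The one-line implementations above can be collected into small helper functions for experimentation. This is a sketch of the formulas as written; DataRobot's internal implementations may differ, for example in how non-positive values are clipped:

```python
import numpy as np

def gamma_deviance(y, pred):
    """Mean gamma deviance, per the formula above (requires y, pred > 0)."""
    return 2 * np.mean(-np.log(y / pred) + (y - pred) / pred)

def poisson_deviance(y, pred):
    """Mean Poisson deviance, per the formula above (requires y, pred > 0)."""
    return 2 * np.mean(y * np.log(y / pred) - (y - pred))

# Made-up positive targets and predictions.
y = np.array([1.0, 2.0, 4.0])
pred = np.array([1.5, 2.0, 3.0])

# A perfect fit (pred == y) gives zero deviance; any misfit gives a
# strictly positive deviance.
print(gamma_deviance(y, y), poisson_deviance(y, y))  # 0.0 0.0
```

Because both metrics divide or take logs of the prediction, they are only defined for strictly positive values, which is why the note below describes clipping predictions and actuals away from zero.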
Tweedie deviance attempts to differentiate between a variety of distribution families, including normal, Poisson, gamma, and some less familiar distributions. This includes a class of mixed compound Poisson–gamma distributions that have positive mass at zero but are otherwise continuous (e.g., zero-inflated distributions). In this case, <a target="_blank" href="https://stats.stackexchange.com/questions/201339/what-if-explained-deviance-is-greater-than-1-0-or-100">the function is</a>:

![](images/dnt-opt-dev-4.png)

Python: `2 * np.mean((y ** (2-p)) / ((1-p) * (2-p)) - (y * (pred ** (1-p))) / (1-p) + (pred ** (2-p)) / (2-p))`

Here, the parameter `p` is an index value that differentiates between the distribution families. For example, 0 is normal, 1 is Poisson, 1.5 is Tweedie, and 2 is gamma. Interpreting these metric scores is not particularly intuitive. `y` and `pred` values are in the unit of the target (e.g., dollars), but as can be seen above, log functions and scaling complicate interpretation.

You can transform this into a weighted deviance function simply by introducing a weights multiplier, for example, for Poisson:

![](images/dnt-opt-dev-5.png)

!!! note

    Because of log functions and predictions in the denominator in some calculations, this only works for positive responses. That is, predictions are enforced to be strictly positive (`max(pred, 1e-8)`) and actuals are enforced to be either non-negative (`max(y, 0)`) or strictly positive (`max(y, 1e-8)`), depending on the deviance function.

### FVE deviance metrics {: #fve-deviance-metrics }

| Display | Full name | Description | Project type |
|---------|-----------|-------------|--------------|
| FVE Binomial/Weighted FVE Binomial | Fraction of Variance Explained | Measures deviance based on fitting on a binomial distribution. | Binary classification |
| FVE Gamma/Weighted FVE Gamma | Fraction of Variance Explained | Measures FVE for gamma deviance. | Regression |
| FVE Multinomial/Weighted FVE Multinomial | Fraction of Variance Explained | Measures deviance based on fitting on a multinomial distribution. | Multiclass |
| FVE Poisson/Weighted FVE Poisson | Fraction of Variance Explained | Measures FVE for Poisson deviance. | Regression |
| FVE Tweedie/Weighted FVE Tweedie | Fraction of Variance Explained | Measures FVE for Tweedie deviance. | Regression |

FVE is the [fraction of variance explained](https://stats.stackexchange.com/questions/201339/what-if-explained-deviance-is-greater-than-1-0-or-100){: target=_blank } (also sometimes referred to as "fraction of deviance explained")&mdash;that is, what proportion of the total deviance, or error, is captured by the model?

To calculate the fraction of variance explained, three models are fit:

* The "model analyzed," or the model actually constructed within DataRobot.
* A "worst fit" model (a model fitted without any predictors, fitting only an intercept).
* A "perfect fit" model (also called a "fully saturated" model), which exactly predicts every observation.

"Null deviance" is the total deviance calculated between the "worst fit" model and the "perfect fit" model. "Residual deviance" is the total deviance calculated between the "model analyzed" and the "perfect fit" model. (See the [deviance formulas](#deviance-metrics) for more detail.)

You can think of the "fraction of *unexplained* deviance" as the residual deviance (a measure of error between the "perfect fit" model and your model) divided by the null deviance (a measure of error between the "perfect fit" model and the "worst fit" model). The fraction of *explained* deviance is 1 minus the fraction of unexplained deviance.

Gauge the model's performance improvement compared to the "worst fit" model by calculating an R²-style statistic, the Fraction of Variance Explained (FVE):
![](images/null-deviance-1.png)

Illustrated conceptually as:

![](images/null-deviance.png)

&ast; Illustration courtesy of Eduardo García-Portugués, [Notes for Predictive Modeling](https://bookdown.org/egarpor/PM-UC3M/){ target=_blank }.

Therefore, FVE equals traditional R-squared for linear regression models but, unlike traditional R-squared, generalizes to exponential-family regression models. By scaling the difference by the null deviance, the value of FVE _should_ be between 0 and 1, but not always. It can be less than zero in the event the model predicts responses poorly for new observations and/or the cross-validated out-of-sample data is very different.

For multiclass projects, FVE Multinomial computes `loss = logloss(act, pred)` and `loss_avg = logloss(act, act_avg)`, where:

* `act_avg` is the one-hot encoded "actual" data.
* Each class (column) is averaged over the `N` data points.

Basically, `act_avg` is a list containing the percentage of the data that belongs to each class. The FVE is then computed as `1 - loss / loss_avg`.

### Gini coefficient {: #gini-coefficient }

| Display | Full name | Description | Project type |
|---------|-----------|-------------|--------------|
| [Gini/Weighted Gini](#gini) | Gini Coefficient | Measures the ability to rank. | Regression, binary classification |
| [Gini Norm/Weighted Gini Norm](#gini) | Normalized Gini Coefficient | Measures the ability to rank. | Regression, binary classification |

In machine learning, the <a target="_blank" href="https://en.wikipedia.org/wiki/Gini_coefficient">Gini Coefficient or Gini Index</a> measures the ability of a model to accurately rank predictions. Gini is effectively the same as [AUC](#aucweighted-auc), but on a scale of -1 to 1 (where 0 is the score of a random classifier). If the Gini Norm is 1, then the model perfectly ranks the inputs. Gini can be useful when you care more about ranking your predictions than about the predicted values themselves.
Gini is defined as a ratio of two areas, each normalized to the unit square&mdash;the numerator is the area between the Lorenz curve of the distribution and the 45-degree line of perfect equality, discussed below.

![](images/dnt-opt-gini-1.png)

The <a target="_blank" href="https://www.kaggle.com/batzner/gini-coefficient-an-intuitive-explanation">Gini coefficient</a> is thus defined as the blue area divided by the area of the lower triangle:

![](images/opt-gini-2.png)

The Gini coefficient is equal to the area below the line of perfect equality (0.5 by definition) minus the area below the Lorenz curve, divided by the area below the line of perfect equality. In other words, it is double the area between the Lorenz curve and the line of perfect equality. The 45-degree line thus represents perfect equality. The Gini coefficient can then be thought of as the ratio of the area that lies between the line of equality and the Lorenz curve (call that `A`) over the total area under the line of equality (call that `A + B`). So:

`Gini = A / (A + B)`

It is also equal to `2A` and to `1 − 2B` because `A + B = 0.5` (since the axes scale from 0 to 1). Gini is alternatively defined as twice the area between the receiver operating characteristic (ROC) curve and its diagonal; in that case, the relationship to AUC (Area Under the ROC Curve) is `AUC = (G + 1) / 2`, or equivalently `G = 2 * AUC - 1`.

![](images/opt-gini-3.png)

The purpose of this normalization is that a random classifier scores 0 and a perfect classifier scores 1. Formally, the range of possible Gini coefficient scores is [-1, 1], but in practice zero is typically the low end. You can also integrate the area between the perfect 45-degree line and the Lorenz curve to get the same Gini value, but the former approach is arguably easier.
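As an illustrative sketch (not DataRobot's implementation), the Gini coefficient can be computed from actuals sorted by predicted score, following the cumulative-sum (Lorenz curve) approach described above:

```python
def gini(actual, predicted):
    """Unnormalized Gini: sort actuals by predicted score, highest
    first, then compare the cumulative share of actuals against the
    45-degree line of perfect equality."""
    n = len(actual)
    order = sorted(range(n), key=lambda i: predicted[i], reverse=True)
    total = float(sum(actual))
    running, cum_share_sum = 0.0, 0.0
    for i in order:
        running += actual[i]
        cum_share_sum += running / total
    return 2.0 * (cum_share_sum / n - (n + 1) / (2.0 * n))

def gini_normalized(actual, predicted):
    """Scale by the Gini of a perfect ranking so the maximum score is 1."""
    return gini(actual, predicted) / gini(actual, actual)

actual    = [0, 0, 1, 0, 1, 1]
predicted = [0.1, 0.2, 0.9, 0.3, 0.8, 0.7]
print(gini_normalized(actual, predicted))  # perfect ranking -> 1.0
```

Dividing by `gini(actual, actual)` is what the Gini Norm does: it rescales the score by the theoretical maximum so that a perfect ranking scores exactly 1.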
In economics, the <a target="_blank" href="https://en.wikipedia.org/wiki/Gini_coefficient">Gini coefficient</a> is a measure of statistical dispersion intended to represent the income or wealth distribution of a nation's residents, and is the most commonly used measure of inequality. A Gini coefficient of zero expresses perfect equality, where all values are the same (for example, where everyone has the same income). A Gini coefficient of 1 (or 100%) expresses maximal inequality among values (e.g., for a large number of people where only one person has all the income or consumption and all others have none, the Gini coefficient is very nearly one). A value greater than 1 can occur, however, if some persons contribute negatively to the total (for example, having negative income or wealth). In this economics example, the Lorenz curve shows income distribution by plotting the population percentile by income on the horizontal axis and cumulative income on the vertical axis.

The Normalized Gini Coefficient adjusts the score by the theoretical maximum so that the maximum score is 1. Because the score is normalized, Gini coefficient values of like entities can be compared and rank ordered. For example, <a target="_blank" href="https://www.cia.gov/the-world-factbook/field/gini-index-coefficient-distribution-of-family-income/country-comparison">economic inequality by country</a> is commonly assessed with the Gini coefficient and used to rank order countries:

| Rank | Country | Distribution of family income—Gini index | Date of information |
|-----:|---------------------------------|-----------------------------------------:|---------------------|
| 1 | LESOTHO | 63.2 | 1995 |
| 2 | SOUTH AFRICA | 62.5 | 2013 EST. |
| 3 | MICRONESIA, FEDERATED STATES OF | 61.1 | 2013 EST. |
| 4 | HAITI | 60.8 | 2012 |
| 5 | BOTSWANA | 60.5 | 2009 |

One way to use the Gini index in a machine learning context is to compute it from the actual and predicted values, instead of from individual samples. If, using the example above, you generate the Gini index from samples of individual incomes of people in a country, the Lorenz curve is a function of the population percentage by cumulative sum of incomes. In a machine learning context, you could instead pair the actual and predicted values and sort the pairs by the predicted value. The Lorenz curve in that case is a function of the predicted values by the cumulative sum of actuals&mdash;the running total of the class 1 values. Then, calculate the Gini using one of the formulas above. For an example, see the <a target="_blank" href="https://www.kaggle.com/c/porto-seguro-safe-driver-prediction#evaluation">Porto Seguro’s Safe Driver Kaggle competition</a> and the corresponding <a target="_blank" href="https://www.kaggle.com/batzner/gini-coefficient-an-intuitive-explanation">explanation</a>.

### Kolmogorov–Smirnov (KS) {: #kolmogorovsmirnov-ks }

| Display | Full name | Description | Project type |
|----------|-----------|-------------|--------------|
| KS | Kolmogorov-Smirnov | Measures the maximum distance between two non-parametric distributions. Used for ranking a binary classifier, KS evaluates models based on the degree of separation between true positive and false positive distributions. The KS value is displayed in the [ROC Curve](roc-curve#kolmogorov-smirnov-ks-metric). | Binary classification |

The KS or Kolmogorov-Smirnov chart measures the performance of classification models. More precisely, KS is a measure of the degree of separation between the positive and negative distributions.
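A minimal sketch (not DataRobot's implementation) of the KS statistic: build the empirical CDF of the scores for each class and take the maximum vertical distance between them:

```python
def ks_statistic(actual, scores):
    """Maximum vertical distance between the empirical CDFs of the
    scores for the positive and negative classes."""
    pos = sorted(s for a, s in zip(actual, scores) if a == 1)
    neg = sorted(s for a, s in zip(actual, scores) if a == 0)
    best = 0.0
    for t in sorted(set(scores)):
        cdf_pos = sum(s <= t for s in pos) / len(pos)
        cdf_neg = sum(s <= t for s in neg) / len(neg)
        best = max(best, abs(cdf_pos - cdf_neg))
    return best

# Perfectly separated scores give KS = 1; identical score
# distributions for both classes give KS = 0.
print(ks_statistic([0, 0, 1, 1], [0.1, 0.2, 0.8, 0.9]))  # 1.0
print(ks_statistic([0, 1, 0, 1], [0.3, 0.3, 0.7, 0.7]))  # 0.0
```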
The KS is 1 if the scores partition the population into two separate groups, one containing all the positives and the other all the negatives. On the other hand, if the model cannot differentiate between positives and negatives, it is as if the model selects cases randomly from the population, and the KS is 0. In most classification models, the KS falls between 0 and 1; the higher the value, the better the model is at separating positive from negative cases.

As described in <a target="_blank" href="https://arxiv.org/pdf/1606.00496.pdf">this paper</a>, in binary classification problems KS is used as a dissimilarity metric for assessing a classifier’s discriminant power, measuring the distance its scores produce between the cumulative distribution functions (CDFs) of the two data classes (known as KS2 for this two-sample purpose). The usual statistic is the maximum vertical difference (MVD) between the CDFs (the Max_KS), which is invariant to score range and scale, making it suitable for comparing classifiers. The MVD is simply the vertical distance between the two curves at a single point on the X axis; the Max_KS is the single point <a target="_blank" href="https://www.machinelearningplus.com/machine-learning/evaluation-metrics-classification-models-r/attachment/kolmogorov_smirnov_chart-2/">where this distance is greatest</a>.

![](images/dnt-opt-ks-1.png)

### LogLoss/Weighted LogLoss {: #loglossweighted-logloss }

| Display | Full name | Description | Project type |
|----------|-----------|-------------|--------------|
| LogLoss/ Weighted LogLoss | Logarithmic Loss | Measures the inaccuracy of predicted probabilities. | Binary classification, multiclass, multilabel |

Cross-entropy loss, or log loss, measures the performance of a classification model whose output is a probability value between 0 and 1. Log loss increases as the predicted probability diverges from the actual label.
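The binary log loss can be sketched as follows. This is an illustrative example; the `eps` clipping constant is an assumption added to keep the logarithm finite:

```python
import math

def log_loss(actual, predicted, eps=1e-15):
    """Binary cross-entropy: -(y*log(p) + (1-y)*log(1-p)), averaged
    over rows. Probabilities are clipped away from 0 and 1 so the log
    stays finite."""
    total = 0.0
    for y, p in zip(actual, predicted):
        p = min(max(p, eps), 1 - eps)
        total += -(y * math.log(p) + (1 - y) * math.log(1 - p))
    return total / len(actual)

# Confident-and-wrong predictions are penalized far more heavily.
print(round(log_loss([1, 0], [0.9, 0.1]), 4))    # 0.1054
print(round(log_loss([1, 0], [0.12, 0.91]), 4))  # 2.2641
```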
For example, predicting a probability of .12 when the actual observation label is 1, or predicting .91 when the actual observation label is 0, would be “bad” and would result in a higher loss value than predicted probabilities closer to the true label. A perfect model would have a log loss of 0.

![](images/opt-logloss-1.png)

The graph above shows the range of possible log loss values given a true observation (true = 1). As the predicted probability approaches 1, log loss slowly decreases; as the predicted probability decreases, however, log loss increases rapidly. Log loss penalizes both types of errors, but especially predictions that are _confident_ and wrong.

Cross-entropy and log loss are <a target="_blank" href="https://ml-cheatsheet.readthedocs.io/en/latest/loss_functions.html">slightly different depending on context</a>, but in machine learning, when calculating error rates between 0 and 1, they resolve to the same thing. In binary classification, the formula is `-(ylog(p) + (1 - y)log(1 - p))`, or:

![](images/dnt-opt-logloss-2a.png)

where `p` is the predicted probability that `y = 1`. Similarly, for multiclass and multilabel, take the sum of the log loss values for each class prediction in the observation:

![](images/dnt-opt-logloss-2.png)

You can transform this into a weighted loss function by introducing weights for a given class:

![](images/dnt-opt-logloss-3.png)

Note that reported log loss scores for multilabel are scaled by `1/number_of_unique_classes`.

### MAE/Weighted MAE {: #maeweighted-mae }

| Display | Full name | Description | Project type |
|----------|-----------|-------------|--------------|
| MAE/Weighted MAE | Mean Absolute Error | Measures the inaccuracy of predicted median values. | Regression |

DataRobot implements an MAE metric that measures absolute deviations (that is, absolute error); optimizing it drives the model toward predicting the median.
This follows from the fact that when you optimize the loss function for absolute error, the value that minimizes it turns out to be the median of the series. To see why, first assume a series of numbers that you want to summarize with an optimal value, (`x1, x2, …, xn`)&mdash;the predictions. You want the summary to be a single number, `s`. How do you select `s` so that it summarizes the predictions (`x1, x2, …, xn`) effectively? Aggregate the error deviances between each `xi` and `s` into a single summary of the quality of a proposed value of `s`. To perform this aggregation, sum the deviances over each of the `xi` and call the result `E`:

![](images/dnt-opt-mad-3.png)

Solving for the `s` that results in the smallest error, the `E` loss function is minimized at the median, not the mean. (Likewise, the squared error loss function is minimized at the mean&mdash;thus, mean squared error.) So while MAE stands for “mean absolute error,” it optimizes the model to predict the median correctly. This is similar to how RMSE, “root mean squared error,” optimizes for predicting the mean correctly (not the square of the mean).

You may notice some curious discrepancies in DataRobot, worth remembering, when you optimize for MAE. Most insights report the mean. As a result, the Lift Charts can look “off” because the model under- or over-predicts at every point along the distribution: the Lift Chart calculates a mean, whereas MAE optimizes for the median.
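This result can be checked numerically. The sketch below (illustrative only, with a made-up skewed series) scans candidate summary values and shows that the sum of absolute errors is minimized at the median, while the sum of squared errors is minimized at the mean:

```python
from statistics import mean, median

xs = [1, 2, 3, 4, 100]  # a skewed series of "predictions"

def total_abs_error(s, xs):
    return sum(abs(x - s) for x in xs)

def total_sq_error(s, xs):
    return sum((x - s) ** 2 for x in xs)

# Scan candidate summaries s on a 0.1 grid from 0 to 100.
candidates = [x / 10 for x in range(0, 1001)]
best_abs = min(candidates, key=lambda s: total_abs_error(s, xs))
best_sq = min(candidates, key=lambda s: total_sq_error(s, xs))
print(best_abs, median(xs))  # 3.0 3   (absolute error -> median)
print(best_sq, mean(xs))     # 22.0 22 (squared error -> mean)
```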
You can transform this into a weighted loss function by introducing weights for observations:

![](images/dnt-opt-mad-4.png)

Note that the statistical literature has not yet adopted a standard notation: both the mean absolute deviation around the mean (MAD) and the mean absolute error (what DataRobot calls “MAE”) have been denoted by the initials MAD in the literature. This can lead to confusion, since the two quantities can differ considerably.

### MAPE/Weighted MAPE {: #mapeweighted-mape }

| Display | Full name | Description | Project type |
|----------|-----------|-------------|--------------|
| MAPE/Weighted MAPE | Mean Absolute Percentage Error | Measures the percent inaccuracy of the mean values. | Regression |

One problem with MAE is that the relative size of the error is not always obvious&mdash;it can be hard to tell a large error from a small one. To deal with this, express the mean absolute error in percentage terms. <a target="_blank" href="http://canworksmart.com/using-mean-absolute-error-forecast-accuracy/">Mean Absolute Percentage Error (MAPE)</a> allows you to compare forecasts of different series on different scales. For example, you can compare the sales forecast accuracy of one store with that of another, similar store, even though the stores may have different sales volumes.

![](images/dnt-opt-mape-1.png)

### MASE {: #mase }

| Display | Full name | Description | Project type |
|----------|-----------|-------------|--------------|
| MASE | Mean Absolute Scaled Error | Measures relative performance with respect to a baseline model. | Regression (time series only) |

MASE measures the accuracy of forecasts as a comparison of one model to a naïve baseline model&mdash;the simple ratio of the model's MAE over the baseline model's MAE. This has the advantage of being easily interpretable and explainable in terms of relative accuracy gain, and is recommended when comparing models.
In DataRobot time series projects, [the baseline model](ts-feature-lists#mase-and-baseline-models) is a model that uses the most recent value that matches the longest periodicity. That is, while a project could have multiple naïve predictions with different periodicities, DataRobot uses the longest-periodicity naïve predictions to compute the MASE score.

![](images/dnt-opt-mase-1.png)

Or, in more detail:

![](images/dnt-opt-mase-2.png)

where the numerator is the model of interest and the denominator is the naïve baseline model.

### Max MCC/Weighted Max MCC {: #max-mccweighted-max-mcc }

| Display | Full name | Description | Project type |
|----------|-----------|-------------|--------------|
| [Max MCC/Weighted Max MCC](#max-mccweighted-max-mcc) | Maximum Matthews correlation coefficient | Measures the maximum value of the Matthews correlation coefficient between the predicted and actual class labels. | Binary classification |

The <a target="_blank" href="https://en.wikipedia.org/wiki/Matthews_correlation_coefficient">Matthews correlation coefficient</a> is a balanced metric for binary classification that takes into account all four entries of the [confusion matrix](confusion-matrix). It can be calculated as:

![](images/dnt-mcc-1.png)

where:

| Outcome | Description |
|--------------|----------------------|
| True positive (TP) | A positive instance that the model correctly classifies as positive. |
| False positive (FP) | A negative instance that the model incorrectly classifies as positive. |
| True negative (TN) | A negative instance that the model correctly classifies as negative. |
| False negative (FN) | A positive instance that the model incorrectly classifies as negative. |

The range of possible values is [-1, 1], where 1 represents perfect predictions. Because the entries in the confusion matrix depend on the prediction threshold, DataRobot uses the maximum value of MCC over possible prediction thresholds.
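The Max MCC computation can be sketched by evaluating MCC at each candidate threshold and keeping the maximum. This is an illustrative example, not DataRobot's implementation:

```python
import math

def mcc(tp, fp, tn, fn):
    """Matthews correlation coefficient from confusion-matrix counts."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return 0.0 if denom == 0 else (tp * tn - fp * fn) / denom

def max_mcc(actual, scores):
    """Use every distinct score as a threshold; keep the best MCC."""
    best = -1.0
    for t in sorted(set(scores)):
        preds = [1 if s >= t else 0 for s in scores]
        tp = sum(p == 1 and a == 1 for p, a in zip(preds, actual))
        fp = sum(p == 1 and a == 0 for p, a in zip(preds, actual))
        tn = sum(p == 0 and a == 0 for p, a in zip(preds, actual))
        fn = sum(p == 0 and a == 1 for p, a in zip(preds, actual))
        best = max(best, mcc(tp, fp, tn, fn))
    return best

print(max_mcc([0, 0, 1, 1], [0.1, 0.2, 0.8, 0.9]))           # separable -> 1.0
print(round(max_mcc([0, 0, 1, 1], [0.2, 0.6, 0.4, 0.9]), 3)) # imperfect -> 0.577
```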
### R-Squared (R2)/Weighted R-Squared {: #r-squared-r2weighted-r-squared }

| Display | Full name | Description | Project type |
|----------|-----------|-------------|--------------|
| R-Squared/Weighted R-Squared | R-Squared | Measures the proportion of total variation of outcomes explained by the model. | Regression |

<a target="_blank" href="https://blog.minitab.com/blog/adventures-in-statistics-2/regression-analysis-how-do-i-interpret-r-squared-and-assess-the-goodness-of-fit">R-squared is a statistical measure of goodness of fit</a>&mdash;how close the data are to a fitted regression line. It is also known as the coefficient of determination, or the coefficient of multiple determination for multiple regression. As a description of the variance explained, it is the percentage of the response variable variation that is explained by a linear model. Typically, R-squared is between 0% and 100%: 0% indicates that the model explains none of the variability of the response data around its mean, and 100% indicates that the model explains all of it.

Note that some circumstances result in a negative R-squared value, meaning the model predicts worse than the mean. This can happen, for example, due to problematic training data. For time-aware projects, R-squared has a higher chance of being negative because the mean can change over time&mdash;if you train a model on a high-mean period but test on a low-mean period, a large negative R-squared value can result. (When partitioning is done via random sampling, the target means of the train and test sets are roughly the same, so negative R-squared values are less likely.) Generally speaking, it is best to avoid models with a negative R-squared value.
R-squared is defined as:

![](images/dnt-opt-rsq-1.png)

where `SS_res` is the residual sum of squares:

![](images/dnt-opt-rsq-2.png)

`SS_tot` is the total sum of squares (proportional to the variance of the data), and:

![](images/dnt-y-is.png)

is the sample mean of `y`, calculated from the training data:

![](images/dnt-opt-rsq-3.png)

For weighted R-squared, `SS_res` becomes:

![](images/dnt-opt-rsq-4.png)

and `SS_tot` becomes:

![](images/dnt-opt-rsq-5.png)

Some key limitations of R-squared:

* R-squared cannot determine whether the coefficient estimates and predictions are biased, which is why you must assess the residual plots.
* R-squared can be artificially inflated: adding more independent variables never decreases it, even when some of those variables are insignificant and add nothing useful to the model.
* R-squared does not indicate whether a regression model is adequate. You can have a low R-squared value for a good model, or a high R-squared value for a model that does not fit the data.

To that end, interpret R-squared values with caution. Low R-squared values aren’t inherently bad. In some fields, low values are entirely expected&mdash;for example, any field that attempts to predict human behavior, such as psychology, typically has R-squared values lower than 50%. Humans are simply harder to predict than, say, physical processes. At the same time, high R-squared values aren’t inherently good; a high R-squared does not necessarily indicate a good fit. For example, a fitted line plot may suggest a good fit and a high R-squared, but the residual plot may show systematic over- and/or under-prediction, indicative of high bias.
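The formulas above can be sketched as follows. This is illustrative only; for simplicity, the sketch computes the sample mean from the supplied actuals rather than from separate training data:

```python
def r_squared(actual, predicted, weights=None):
    """R^2 = 1 - SS_res / SS_tot; pass weights for the weighted variant."""
    if weights is None:
        weights = [1.0] * len(actual)
    y_bar = sum(w * y for w, y in zip(weights, actual)) / sum(weights)
    ss_res = sum(w * (y - p) ** 2
                 for w, y, p in zip(weights, actual, predicted))
    ss_tot = sum(w * (y - y_bar) ** 2 for w, y in zip(weights, actual))
    return 1.0 - ss_res / ss_tot

y = [3.0, 5.0, 7.0, 9.0]
print(r_squared(y, [2.9, 5.2, 6.8, 9.1]))  # close fit -> near 1
print(r_squared(y, [6.0, 6.0, 6.0, 6.0]))  # the mean model -> 0.0
print(r_squared(y, [9.0, 3.0, 9.0, 3.0]))  # worse than the mean -> -3.0
```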
DataRobot calculates R-squared on out-of-sample data, mitigating traditional critiques such as that adding more features always increases the value, or that R2 is not applicable to non-linear techniques. R2 is essentially treated as a scaled version of RMSE, allowing DataRobot to compare a model to the mean model (R2 = 0) and determine whether it is doing better (R2 > 0) or worse (R2 < 0).

### Rate@Top10%, Rate@Top5%, Rate@TopTenth% {: #ratetop10-ratetop5-ratetoptenth }

| Display | Full name | Description | Project type |
|----------|-----------|-------------|--------------|
| Rate@Top5% | Rate@Top5% | Measures the response rate in the top 5% highest predictions. | Binary classification |
| Rate@Top10% | Rate@Top10% | Measures the response rate in the top 10% highest predictions. | Binary classification |
| Rate@TopTenth% | Rate@TopTenth% | Measures the response rate in the top 0.1% highest predictions. | Binary classification |

Rate@Top5%, Rate@Top10%, and Rate@TopTenth% measure accuracy for a classification model as the response rate among the top 5%, top 10%, and top 0.1% of highest predictions, respectively. For example, take a set of 100 predictions ordered from lowest to highest, something like: [.05, .08, .11, .12, .14 … .87, .89, .91, .93, .94]. The top 5% of predictions are the five highest, .87 through .94. If the actual values for those five rows are [1, 1, 0, 1, 1], then the Rate@Top5% measure of accuracy is 80%.

### RMSE, Weighted RMSE & RMSLE, Weighted RMSLE {: #rmse-weighted-rmse-rmsle-weighted-rmsle }

| Display | Full name | Description | Project type |
|----------|-----------|-------------|--------------|
| RMSE/ Weighted RMSE | Root Mean Squared Error | Measures the inaccuracy of predicted mean values when the target is normally distributed. | Regression, binary classification |
| RMSLE/ Weighted RMSLE* | Root Mean Squared Log Error | Measures the inaccuracy of predicted mean values when the target is skewed and log-normal distributed. | Regression |

The root mean squared error (RMSE) is another measure of accuracy, similar to MAE (sometimes denoted MAD) in that both take the difference between the actual and the predicted (or forecast) values. However, RMSE squares the difference rather than taking the absolute value, and then takes the square root of the mean:

![](images/dnt-opt-rmse-1.png)

Thus, <a target="_blank" href="https://en.wikipedia.org/wiki/Root-mean-square_deviation">RMSE is always non-negative</a>, and a value of 0 indicates a perfect fit to the data. In general, a lower RMSE is better than a higher one. However, comparisons across different types of data are invalid because the measure depends on the scale of the numbers used.

RMSE is the square root of the average of squared errors. The effect of each error on RMSE is proportional to the size of the squared error; thus, larger errors have a disproportionately large effect, making RMSE sensitive to outliers.

The root mean squared log error (RMSLE), to avoid taking the natural log of zero, adds 1 to both actual and predicted values before taking the natural logarithm. As a result, the function can be used when actuals or predictions have zero-valued elements. Note that only the percent difference between the actual and the prediction matters: for example, P = 1000 and A = 500 gives roughly the same error as P = 100000 and A = 50000.

![](images/dnt-opt-rmse-2.png)

You can transform this into a weighted function simply by introducing a weights multiplier:

![](images/dnt-opt-rmse-3-weight.png)

!!! note

    For RMSLE, many model blueprints log transform the target and optimize for RMSE. This is equivalent to optimizing for RMSLE. If this occurs, the model's build information lists "log transformed response".
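Both metrics can be sketched in a few lines (illustrative only). The example shows that RMSLE depends on the percent difference between actual and predicted rather than on the absolute scale:

```python
import math

def rmse(actual, predicted):
    n = len(actual)
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / n)

def rmsle(actual, predicted):
    """RMSE on log(1 + x): adding 1 avoids log(0), so zero-valued
    actuals or predictions are allowed."""
    return rmse([math.log1p(a) for a in actual],
                [math.log1p(p) for p in predicted])

# A 2x over-prediction gives roughly the same RMSLE at very
# different scales.
print(round(rmsle([500], [1000]), 4))      # 0.6921
print(round(rmsle([50000], [100000]), 4))  # 0.6931
```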
### Silhouette Score {: #silhouette-score }

| Display | Full name | Description | Project type |
|----------|-----------|-------------|--------------|
| Silhouette Score | Silhouette score, also referred to as silhouette coefficient | Compares clustering models. | Clustering |

The silhouette score, also called the silhouette coefficient, is a metric used to compare clustering models. It is calculated using the mean *intra*-cluster distance (the average distance between points within a cluster) and the mean *nearest*-cluster distance (the average distance between clusters). That is, it accounts for the distances between clusters as well as the distribution within each cluster: if a cluster is condensed, its instances (points) have a high degree of similarity. The silhouette score ranges from -1 to +1; the closer to +1, the more separated the clusters are.

Computing the silhouette score for large datasets is very time-intensive&mdash;training a clustering model takes minutes, but the metric computation can take hours. To address this, DataRobot performs stratified sampling to limit the dataset to 50,000 rows, so that models are trained and evaluated in a reasonable timeframe while still providing a good estimate of the actual silhouette score.

In time series, the silhouette score measures the silhouette coefficient between different series, calculated by comparing the similarity of the data points across the series. As in non-time series use cases, the distance is calculated between the series; however, there is an important distinction: the silhouette coefficient calculations do not account for location in time when considering similarity. While the silhouette score is generally useful, consider it with caution for time series.
The silhouette score can identify series that have a high degree of similarity in the points they contain, but it does not account for periodicity and trends, or for similarities across time. To understand the impact, examine the following two scenarios:

#### Silhouette time series scenario 1 {: #silhouette-time-series-scenario-1 }

Consider these two series:

* The first series has a large spike in the first 10 points, followed by 90 small to near-zero values.
* The second series has 70 small to near-zero values, followed by a moderate spike and several more near-zero values.

In this scenario, the silhouette coefficient between these two series will likely be large. Because time isn't taken into account, the values show a high degree of mathematical similarity.

#### Silhouette time series scenario 2 {: #silhouette-time-series-scenario-2 }

Consider these three series:

* The first series is a sine wave of magnitude 1.
* The second series is a cosine wave of magnitude 1.
* The third series is a cosine wave of magnitude 0.5.

Potential clustering methods:

* The first method puts the sine and cosine waves of magnitude 1 into one cluster and the smaller cosine wave into a second cluster.
* The second method puts the two cosine waves into a single cluster and the sine wave into a separate cluster.

The first method will likely have a higher silhouette score than the second, because the silhouette score does not consider the periodicity of the data or the fact that the peaks of the two cosine waves likely have more meaning to each other.

If the goal is to perform segmented modeling, take the silhouette score into consideration, but be aware of the following:

* A higher silhouette score may not indicate better segmented modeling performance.
* Series grouped together based on periodicity, volatility, or other time-dependent features will likely return lower silhouette scores than series that have higher similarity when considering only the magnitudes of values, independent of time.

### SMAPE/Weighted SMAPE {: #smapeweighted-smape }

| Display | Full name | Description | Project type |
|----------|-----------|-------------|--------------|
| SMAPE/Weighted SMAPE | Symmetric Mean Absolute Percentage Error | Measures the bounded percent inaccuracy of the mean values. | Regression |

The Mean Absolute Percentage Error [(MAPE)](#mapeweighted-mape) allows you to compare forecasts of different series on different scales. However, MAPE cannot be used if there are zero values, and it does not have an upper limit on the percentage error. In these cases, the <a target="_blank" href="https://en.wikipedia.org/wiki/Symmetric_mean_absolute_percentage_error">Symmetric Mean Absolute Percentage Error (SMAPE)</a> can be a good alternative. SMAPE has lower and upper boundaries, always resulting in a value between 0% and 200%, which makes statistical comparisons between values easier. It is also suitable for data containing zero values: for rows in which `Actual = Forecast = 0`, DataRobot replaces the resulting `0/0 = NaN` with zero before summing over all rows.

![](images/dnt-opt-smape.png)

### Theil's U {: #theils-u }

| Display | Full name | Description | Project type |
|----------|-----------|-------------|--------------|
| Theil's U | Henri Theil's U Index of Inequality | Measures relative performance with respect to a baseline model. | Regression (time series only) |

[Theil’s U](https://en.wikipedia.org/wiki/Uncertainty_coefficient){ target=_blank }, similar to [MASE](#mase), is a metric that evaluates the accuracy of a forecast relative to the forecast of a naïve model (a model that uses, for predictions, the most recent value that matches the longest periodicity).
This has the advantage of being easily interpretable and explainable in terms of relative accuracy gain, and is recommended when comparing models. In DataRobot time series projects, [the baseline model](ts-feature-lists#mase-and-baseline-models) is a model that uses the most recent value that matches the longest periodicity. That is, while a project could have multiple naïve predictions with different periodicities, DataRobot uses the longest-periodicity naïve predictions to compute the Theil's U score.

The comparison of the forecast model to the naïve model is a function of the ratio of the two: a value greater than 1 indicates the model is worse than the naïve model, and a value less than 1 indicates it is better.

![](images/dnt-opt-theil-1.png)

Or, in more detail:

![](images/dnt-opt-theil-2.png)

where the numerator is the model of interest and the denominator is the naïve baseline model.
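The MASE-style ratio that both of these baseline-relative metrics build on can be sketched as follows. This is illustrative only; the naïve forecasts below are hypothetical stand-ins for the baseline model's predictions:

```python
def mase(actual, predicted, naive):
    """MASE = MAE of the model / MAE of the naive baseline, computed
    over the same holdout rows."""
    mae_model = sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)
    mae_naive = sum(abs(a - n) for a, n in zip(actual, naive)) / len(actual)
    return mae_model / mae_naive

actual    = [10.0, 12.0, 11.0, 13.0]
predicted = [10.5, 11.5, 11.5, 12.5]
naive     = [9.0, 11.0, 12.0, 14.0]   # hypothetical naive-baseline forecasts
print(mase(actual, predicted, naive))  # 0.5 -> half the baseline's error
```

Values below 1 mean the model beats the naïve baseline; values above 1 mean the baseline is more accurate.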
---
title: Modeling details
description: This section introduces the model building process; data partitioning and validation; features for working with a project after model building completes, including Generate AI report, Export charts and data; and links to details on its component tasks.
---

# Modeling details {: #modeling-details }

This section provides details on components of the functionality that make up the model building process.

Topic | Describes...
----- | ------
**Data** | :~~:
[Exploratory Data Analysis](eda-explained) | Details of Exploratory Data Analysis (EDA), phases 1 and 2.
[Data partitioning and validation](data-partitioning) | Describes validation types and data partitioning methods.
**Modeling** | :~~:
[Modeling process details](model-ref) | Bits and pieces of the initial model building process.
[Leaderboard reference](leaderboard-ref) | Components of the Leaderboard, blender models, and asterisked scores.
[Model recommendation process](model-rec-process) | Steps involved in DataRobot's selection of a recommended model.
[Sliced insights](sliced-insights) | View and compare insights based on segments of a project’s data.
[SHAP reference](shap) | Details of SHapley Additive exPlanations, the coalitional game theory framework.
[XEMP calculations](xemp-calc) | Describes the calculations used to determine XEMP qualitative strength.
**Miscellaneous** | :~~:
[Optimization metrics](opt-metric) | Short descriptions of all metrics available for model building.
[Generate AI Report](generate-ai-report) | Create a report of modeling results and insights.
[Export charts and data](export-results) | Download created insights.
[Worker Queue](worker-queue) | Manage models and projects and export data.
---
title: Modeling process
description: Provides details of DataRobot's modeling process, from selecting modeling modes, interpreting data summary information, and working with missing values.
---

# Modeling process {: #modeling-process }

This section provides more detail to help you understand DataRobot's initial model building process:

* More on modeling modes, such as [small datasets](#small-datasets) and [Quick Autopilot](#quick-autopilot)
* [Two-stage models](#two-stage-models) (Frequency and Severity models)
* [Data summary information](#data-summary-information)
* Handling project [build failure](model-data#build-failure)
* Working with [missing values](#missing-values)

DataRobot also runs a complete [data quality assessment](data-quality) that automatically detects, and in some cases addresses, data quality issues. See also the [basic modeling process](model-data) section for a workflow overview.

## Modeling modes {: #modeling-modes }

The exact action and options for a modeling mode depend on your data. In addition to the [standard description of mode behavior](model-data#modeling-modes-explained), the following sections describe circumstantial modeling behavior.

DataRobot supports tree-based models, deep learning models, Support Vector Machines (SVM), generalized linear models, anomaly detection models, text mining models, and many more. Specifically:

* TensorFlow
* XGBoost
* LightGBM
* Elastic Net, Ridge regressor, or Lasso regressor (text-capable)
* Nystroem Kernel SVM
* Random Forest
* GA2M
* Single-column text models (for word clouds)
* Rulefit

### Small datasets {: #small-datasets }

Autopilot for AutoML changes the sample percentages run depending on the number of rows in the dataset.
The following table describes the criteria:

| Number of rows | Percentages run |
|-----------------------|---------------------------------------------|
| Less than 2000 | Final Autopilot stage only (64%) |
| Between 2001 and 3999 | Final two Autopilot stages (32% and 64%) |
| 4000 and larger | All stages of Autopilot (16%, 32%, and 64%) |

### Quick Autopilot {: #quick-autopilot }

Quick Autopilot, the default modeling mode, is optimized to typically deliver more accurate models without sacrificing variety in the tested options. As a result, reference models are not run. DataRobot runs [supported models](#modeling-modes) on a sample of data, depending on project type:

Project type | Sample size
------------ | -----------
AutoML | Typically 64% of data or 500MB, whichever is smaller.
OTV | 100% of each backtest.
Time series | Maximum training size for each backtest defined in the date/time partitioning.

With this shortened version of the full Autopilot, DataRobot selects models to run based on a variety of criteria, including target and performance metric, but as its name suggests, chooses only models with relatively short training runtimes to support quicker experimentation. The specific number of Quick models run varies by project and target type (e.g., some blueprints are only available for a specific target/target distribution). The [Average blender](additional), when enabled, is created from the top two models.

To maximize runtime efficiency in Quick mode, DataRobot automatically creates the [DR Reduced Features](feature-lists#automatically-created-feature-lists) list but does not automatically fit the recommended (or any) model to it (fitting the reduced list requires retraining models).

The steps involved in Quick mode depend on whether the [**Recommend and prepare a model for deployment**](additional) option is checked.
Option state | Action
------------ | ------
Checked | <ul><li>Run Quick mode at 64%.</li><li>Create a reduced feature list (if the feature list can be reduced).</li><li>Automatically retrain the [recommended model](model-rec-process) at 100% (using the feature list of the 64% model).</li></ul>
Unchecked | Run Quick mode at 64%.

For single column text datasets, DataRobot runs the following models:

* Elastic Net (text-capable)
* Single-column text models (for word clouds)
* SVM on the document-term matrix

For projects with [**Offset** or **Exposure**](additional#set-exposure) set, DataRobot runs the following:

* XGBoost
* Elastic Net (text-capable)
* LightGBM
* ASVM
* Scikit learn GBM
* GA2M + rating table
* Eureqa GAM
* Single-column text models (for word clouds)

## Two-stage models {: #two-stage-models }

Some datasets result in a two-stage modeling process; these projects create additional models not otherwise available&mdash;Frequency and Severity models. Creation of this two-stage process, and the resulting additional model types, occurs in regression projects when the target is zero-inflated (that is, greater than 50% of rows in the dataset have a value of 0 for the target feature). These methods are most frequently applicable in insurance and operational risk and loss modeling&mdash;for example, insurance claims, foreclosure frequency with loss severity, and frequent flyer points redemption activity.

For qualifying models (see below), you can view stage-related information in the following tabs:

* In the [**Coefficients**](coefficients) tab, DataRobot graphs parameters corresponding to the selected stage for linear models. Additionally, if you export the coefficients, two additional columns&mdash;`Frequency_Coefficient` and `Severity_Coefficient`&mdash;provide the coefficients at each stage.
* In the [**Advanced Tuning**](adv-tuning) tab, DataRobot graphs the parameters corresponding to the selected stage.
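The zero-inflation trigger described above reduces to a simple check. The following sketch is illustrative only (not DataRobot's implementation; the function name is hypothetical) and shows the greater-than-50%-zeros condition that switches a regression project into the two-stage Frequency/Severity process:

```python
def is_zero_inflated(target_values, threshold=0.5):
    """Return True if the share of zero-valued target rows exceeds the threshold.

    Illustrative sketch of the zero-inflation condition, not DataRobot source.
    """
    zero_share = sum(1 for v in target_values if v == 0) / len(target_values)
    return zero_share > threshold

# 8 of 10 policyholders filed no claim, so the target is zero-inflated.
claims = [0, 0, 0, 0, 0, 0, 1200.0, 0, 350.5, 0]
print(is_zero_inflated(claims))  # True
```

With a target like `claims` above, more than half the rows are 0, so a project using it would qualify for the two-stage treatment.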
DataRobot automatically runs some models built to support the frequency/severity methods as part of Autopilot; additional models are available in the [Repository](repository). The models in which the staging is available can be identified by the preface "Frequency-Cost" or "Frequency-Severity" and include the following:

* XGBoost&#42;
* LightGBM&#42;
* Generalized Additive Models
* Elastic Net

&#42; Coefficients are not available for these models

**Example use case: insurance**

Zach is building an insurance claim model based on frequency (the number of times that a policyholder made a claim) and severity (cost of the claim). Zach wants to predict the payout amount of claims for a potential policyholder in the coming year.

Generally, most policyholders don't have accidents and so don't file claims. Therefore, in a dataset where each row represents one policyholder&mdash;and the target is claim payouts&mdash;the target column for most rows will be &#36;0. In Zach's dataset he has a zero-inflated target: most policyholders represented in the training data have $0 as their target value. In this project, DataRobot will build several Frequency-Cost and Frequency-Severity models.

## Data summary information {: #data-summary-information }

The following information assumes that you have selected a target feature and started the modeling process.

After you select a target variable and begin modeling, DataRobot analyzes the data and presents this information in the **Project Data** tab of the **Data** page. Data features are listed in order of importance in predicting the target variable.
DataRobot also detects the data (variable) type of each feature; supported data types are: * numeric * categorical * date * percentage * currency * length * text * [summarized categorical](histogram#summarized-categorical-feature-details) * multicategorical * [multilabel](multilabel) * [date duration](otv) (OTV projects) * [location](location-ai/index) (Location AI projects) Additional information on the **Data** page includes: * Unique and [missing values](#missing-values) * Mean, median, standard deviation, and minimum and maximum values * [Informational tags](histogram#data-page-informational-tags) * [Feature importance](#importance-score) * Access to tabs that allow you to work with [feature lists](feature-lists) and investigate [feature associations](feature-assoc) ### Importance score {: #importance-score } ![](images/data-summary-importance.png) The **Importance** bars show the degree to which a feature is correlated with the target. These bars are based on "Alternating Conditional Expectations" (ACE) scores. ACE scores are capable of detecting non-linear relationships with the target, but as they are univariate, they are unable to detect interaction effects between features. **Importance** is calculated using an algorithm that measures the information content of the variable; this calculation is done independently for each feature in the dataset. The importance score has two components&mdash;`Value` and `Normalized Value`: * `Value`: This shows the metric score you should expect (more or less) if you build a model using only that variable. For Multiclass, `Value` is calculated as the weighted average from the binary univariate models for each class. For binary classification and regression, `Value` is calculated from a univariate model evaluated on the validation set using the selected project metric. * `Normalized Value`: `Value` normalized; scores up to 1 (higher scores are better). 0 means accuracy is the same as predicting the training target mean. 
Scores of less than 0 mean the ACE model prediction is worse than the target mean model (overfitting). These scores represent a measure of predictive power of a simple model using only that variable to predict the target. (The score is adjusted by _exposure_ if you set the [**Exposure**](additional#set-exposure) parameter.) Scores are measured using the project's accuracy metric.

Features are ranked from most important to least important. The total length of the bar next to each feature shows the maximum potential feature importance; the green portion, which is proportional to the `Normalized Value`, indicates the feature's relative importance. The more green in the bar, the more important the feature.

Hovering on the green bar shows both scores. These numbers represent the score in relation to the project metric for a model that uses only that feature (the metric selected when the project was run). Changing the metric on the Leaderboard has no effect on the tooltip scores.

![](images/eda-tooltip.png)

Click a feature name to [view details](histogram) of the data values. While the values change between EDA1 and EDA2 (e.g., rows are removed because they are part of holdout or they are missing values), the meaning of the charts and the options are the same.

### Automated feature transformations {: #automated-feature-transformations }

Feature engineering is a key part of the modeling process. After you press **Start**, DataRobot performs automated feature engineering on the dataset, creating derived variables to enhance model accuracy.
See the table below for a list of feature engineering tasks DataRobot may perform during modeling for each feature type:

Feature type | Automated transformations
---------- | -----------
Numeric and categorical| <ul><li>Missing Imputation (Median, Arbitrary)</li><li>Standardization</li><li>Search for ratios</li><li>Search for differences</li><li>Ridit Transform</li><li>DataRobot Smart Binning using a second model</li><li>Principal Components Analysis</li><li>K-Means Clustering</li><li>One hot encoding</li><li>Ordinal encoding</li><li>Credibility intervals</li><li>Category counts</li><li>Variational Autoencoder</li><li>Custom Feature Engineering for Numerics</li></ul>
Date | <ul><li>Month-of-year</li><li>Day of week</li><li>Day of year</li><li>Day of month</li><li>Hour of day</li><li>Year</li><li>Month</li><li>Week</li></ul>
Text | <ul><li>Character / word ngrams</li><li>Pretrained TinyBERT featurizer</li><li>Stopword removal</li><li>Part of Speech Tagging / Removal</li><li>TF-IDF scaling (optional sublinear scaling and binormal separation scaling)</li><li>Hashing vectorizers for big data</li><li>SVD preprocessing</li><li>Cosine similarity between pairs of text columns (on datasets with 2+ text columns)</li><li>Support for multiple languages, including English, Japanese, French, Korean, Spanish, Chinese, and Portuguese</li></ul>
Images | DataRobot uses featurizers to turn images into numbers: <br><ul><li>Resnet50</li><li>Xception</li><li>Squeezenet</li><li>Efficientnet</li><li>PreResnet</li><li>Darknet</li><li>MobileNet</li></ul><br>DataRobot also allows you to fine-tune these featurizers.
Geospatial | DataRobot uses several techniques to automatically derive spatially-lagged features from the input dataset:<br><ul><li>_Spatial Lag:_ A k-nearest neighbor approach that calculates mean neighborhood values of numeric features at varying spatial lags and neighborhood sizes.</li><li>_Spatial Kernel:_ Characterizes spatial dependence structure using a spatial kernel neighborhood technique. This technique characterizes spatial dependence structure for all numeric variables using varying kernel sizes, weighted by distance.</li></ul>DataRobot also derives local autocorrelation features using local indicators of spatial association to capture hot and cold spots of spatial similarity within the context of the entire input dataset.<br>DataRobot derives features for the following geometric properties:<br><ul><li>_MultiPoints:_ Centroid</li><li>_Lines/MultiLines:_ Centroid, Length, Minimum bounding rectangle area</li><li>_Polygons/MultiPolygons:_ Centroid, Perimeter, Area, Minimum bounding rectangle area</li></ul>

#### Text vs. categorical features {: #text-vs-categorical-features }

DataRobot runs heuristics to differentiate text from categorical features, including the following:

1. If the number of unique rows is less than 5% of the column size, or if there are fewer than 60 unique rows, the column is classified as **categorical**.
2. Using the Python language identifier <a target="_blank" href="https://pypi.org/project/langid/"><code>langid</code></a>, DataRobot attempts to detect a language. If no language is detected, the column is classified as **categorical**.
3. Languages are categorized as either Japanese/Chinese/Korean or English and all other languages ("English+"). If at least three of the following checks pass, the feature is classified as text:

**English+**

* `(Number of unique lines / total number of lines > 0.3)` or `number of unique lines > 1000`.
* The mean number of spaces per line is at least 1.5.
* 10% or more lines have at least 4 words.
* The longest line has at least 6 words. **Japanese/Chinese/Korean** * `(Number of unique lines / total number of lines > 0.3)` or `number of unique lines > 1000`. * The mean line length is at least 4 characters. * 10% or more lines have at least 7 characters. * The longest line has at least 12 characters. [Manual feature transformations](feature-transforms) allow you to override the automated assignment, but because this can cause errors, DataRobot provides a warning during the transformation process. #### Missing values {: #missing-values } DataRobot handles missing values differently, depending on the model and/or value type. The following are the codes DataRobot recognizes and treats as missing values: **Special NaN Values for all feature types** - `null, NULL` - `na, NA, n/a, #N/A, N/A, #NA, #N/A N/A` - `1.#IND, -1.#IND` - `NaN, nan, -NaN, -nan` - `1.#QNAN, -1.#QNAN` - `?` - `.` - `Inf, INF, inf, -Inf, -INF, -inf` - `None` - One or more whitespace characters and empty cells are also treated as missing values. The following notes describe some specifics of DataRobot's value handling. !!! note The missing value imputation method is fixed during training time. Either the median or arbitrary value set during training will be provided at prediction time. * Some models natively handle missing values so that no special preprocessing is needed. * For linear models (such as linear regression or an SVM), DataRobot's handling depends on the case: * _median imputation_: DataRobot imputes missing values, using the median of the non-missing training data. This effectively handles data that are missing-at-random. * _missing value flag_: DataRobot adds a binary "missing value flag" for each variable with any missing values, allowing the model to recognize the pattern in structurally missing values and learn from it. This effectively handles data that are missing-not-at-random. 
* For tree-based models, DataRobot imputes with an arbitrary value (e.g., -9999) rather than the median. This method is faster and gives just as accurate a result.
* For categorical variables in all models, DataRobot treats missing values as another level in the categories.

#### Numeric columns {: #numeric-columns }

DataRobot assigns a var type to a value during EDA. For numeric columns, there are three types of values:

1. Numeric values: these can be integers or floating point numbers.
2. Special NaN values (listed in the table above): these are not numeric, but are recognized as representative of NaN.
3. All other values: for example, string or text data.

Following are the rules DataRobot uses when determining if a particular column is treated as numeric, and how it handles the column at prediction time:

* _Strict Numeric_: If a column has only numeric and special NaN values, DataRobot treats the column as numeric. At prediction time, DataRobot accepts any of the same special NaN values as missing and makes predictions. If any other value is present, DataRobot returns an error.
* _Permissive Numeric_: If a column has numeric values, special NaN values, and one (and only one) other value, DataRobot treats that other value as missing and treats the column as numeric. At prediction time, all other values are treated as missing (regardless of whether they differ from the first one).
* _Categorical_: If DataRobot finds two or more other (non-numeric and non-NaN) values in a column during EDA, it treats the feature as categorical instead of numeric.
* If DataRobot does not process any other value during EDA sampling and categorizes the feature as numeric, before training (but after EDA) it "looks" at the full dataset for that column. If any other values are seen in the full dataset, the column is treated as permissive numeric. If not, it is strict numeric.
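The strict/permissive/categorical rules above can be sketched as a small classifier. This is an illustrative approximation, not DataRobot source code; the token set is abbreviated from the special NaN values listed earlier, and the function names are hypothetical:

```python
# Abbreviated subset of the special NaN tokens from the table above (illustrative).
NAN_TOKENS = {"null", "NULL", "na", "NA", "n/a", "N/A", "#N/A", "#NA",
              "NaN", "nan", "-NaN", "-nan", "None", "?", ".",
              "Inf", "INF", "inf", "-Inf", "-INF", "-inf", ""}

def is_number(value):
    """True if the string parses as an integer or float."""
    try:
        float(value)
        return True
    except ValueError:
        return False

def classify_column(values):
    """Sketch of the numeric-column rules: count distinct 'other' values,
    i.e., values that are neither numeric nor special NaN tokens."""
    other = {v for v in values
             if v.strip() not in NAN_TOKENS and not is_number(v)}
    if len(other) == 0:
        return "strict numeric"
    if len(other) == 1:
        return "permissive numeric"  # the lone other value is treated as missing
    return "categorical"

print(classify_column(["1", "2.5", "NA", "3"]))       # strict numeric
print(classify_column(["1", "2.5", "unknown", "3"]))  # permissive numeric
print(classify_column(["1", "low", "high", "3"]))     # categorical
```

The sketch mirrors the rules one-for-one: zero "other" values is strict numeric, exactly one is permissive numeric (treated as missing), and two or more force the column to categorical.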
model-ref
--- title: Worker Queue description: The Worker Queue is where you monitor build progress and manage the resources your projects use for training models and building insights. --- # Worker Queue {: #worker-queue } The <em>Worker Queue</em>, displayed in the right-side panel of the application, is a place to monitor the steps of [EDA1 and EDA2](eda-explained) and set the [number of workers](#adjust-number-of-workers) used for model building: ![](images/queue-1.png) After modeling is complete, you can [rerun Autopilot](#restart-a-model-build) at the next level ("Get More Accuracy"), [configure modeling settings](model-data#configure-modeling-settings) and rerun the project, and [unlock project holdout](unlocking-holdout). ![](images/queue-2.png) ??? tip "Minimize and expand the panel" Move your mouse over the edge between the Worker Queue and the Leaderboard to expose an arrow which you can click to expand or minimize the queue: ![](images/queue-3.png) See the section on [troubleshooting workers](#troubleshoot-workers) for help when you cannot add workers. ## Understand workers {: #understand-workers } DataRobot uses different types of workers for different types of jobs, such as: * Uploading data * Computing EDA * Training a model * Creating insights The workers responsible for managing data and computing EDA are shared across an organization. They are a pool of resources to ensure that all shared services are highly available. Modeling workers, on the other hand, are allocated to each user by their administrator. The following sections describe the Worker Queue for modeling and prediction workers. ## Monitor build progress {: #monitor-build-progress } After you start building, all running and pending models appear in the Worker Queue. The queue prioritizes tasks based on the mechanism of the task’s submission. It starts processing “more important” jobs first, assigning all tasks submitted by Autopilot to a lower (background) priority. 
All other tasks (those not started by Autopilot) are assigned a default priority. For example, user-submitted tasks such as computing Feature Impact are processed by the next available worker without having to wait for Autopilot to finish building models. You can do the following in the Worker Queue: Display | Description --------|------------- [View progress](#view-progress) | See summary and detailed information about each model as it builds. [Adjust workers](#adjust-number-of-workers) | Adjust the number of simultaneous workers used for the current build. [Pause processing](#pause-the-worker-queue) | Pause the queue while allowing processing models to complete. [Stop processing](#stop-the-worker-queue) | Stop the queue, removing all processing and queued models from the project. [Reorder builds](#reorder-workers) | Reorder queued models. [Cancel builds](#cancel-workers) | Remove scheduled builds from the queue. [Restart model runs](#restart-a-model-build) | Select options to rerun models with different build criteria. If a worker or build fails for any reason, DataRobot lists and records the event under **Error** at the bottom of the queue. Additionally, the event is reported in a dialog and [recorded and available](model-data#build-failure) from the Manage Projects inventory. ### View progress {: #view-progress } The Worker Queue is broken into two parts&mdash;models in progress and models in queue. For each in-progress model, DataRobot displays a live report of CPU and RAM use. ![](images/worker-queue-progress.png) You can expand and collapse the list of queued models by clicking the arrow next to the queue name: ![](images/queue-collapse.png) ### Adjust number of workers {: #adjust-number-of-workers } You can adjust the number of workers that DataRobot uses for model building, up to the maximum number allowed for you (set by your administrator). 
Note that if a project was shared with you and the owner has a higher allotment, the allowed worker total displays the owner's allotment, which is not necessarily the number you are allowed.

To increase or decrease the number of workers, click the orange arrows:

![](images/adjust-queue-workers.png)

### Pause the Worker Queue {: #pause-the-worker-queue }

To pause the Worker Queue, click the pause symbol (double vertical bars) at the top of the queue. After pausing, the symbol changes to a play symbol (arrow).

![](images/worker-queue-pause.png)

When you pause the queue, processing continues on in-progress models. As those models complete, workers become available for the next queued model. The position is not filled until you un-pause the queue. Click the play arrow to resume building models.

### Stop the Worker Queue {: #stop-the-worker-queue }

To stop the Worker Queue, click the "X" symbol at the top of the queue.

![](images/stop-auto-1.png)

When you stop the queue, all in-progress or queued models are immediately removed, and the modeling process ends.

### Reorder workers {: #reorder-workers }

To prioritize training specific models, you can drag-and-drop queued models to a new position.

![](images/worker-reorder-1.png)

If a job triggers dependencies, you cannot reorder the queue so that a model's dependencies are trained after the initial model. Attempting to do so returns the model to its original position in the queue. For example, launching job `A` creates two dependencies, jobs `B` and `C`. Because jobs `B` and `C` are required to build job `A`, you cannot move job `A` ahead of them in the Worker Queue.

Note that you cannot reorder in-process models.

### Cancel workers {: #cancel-workers }

You can remove an in-process or queued model by clicking the X next to the model name. An in-process model is immediately cancelled; a queued model is removed from the queue.
![](images/queue-kill.png) ### Restart a model build {: #restart-a-model-build } When your build completes, or if it is paused, you can use the same data to start a new build. Restart your build from either: * The Worker Queue, using the [**Configure modeling settings**](model-data#configure-modeling-settings) link. * The **Data** page's [**Feature Lists**](feature-lists#rerun-autopilot) tab. ![](images/queue-restart.png) Clicking the link opens a dialog box where you can select which feature list to use. Note [these considerations](feature-lists#rerun-autopilot) when selecting a list and retraining. Then, click **Restart Autopilot** to begin the new build. ![](images/queue-select.png) ## Troubleshoot workers {: #troubleshoot-workers } {% include 'includes/worker-queue-tbsht-include.md' %} !!! note When you delete a project using the [**Actions** menu](manage-projects#project-actions-menu) in the inventory, all its worker jobs are automatically terminated.
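As a rough mental model of the scheduling behavior described in this section (a simplified sketch, not DataRobot internals), a two-tier priority queue serves user-submitted jobs at default priority before Autopilot's background jobs, while preserving submission order within each tier:

```python
import heapq
import itertools

DEFAULT, BACKGROUND = 0, 1  # lower number = served first

class ToyWorkerQueue:
    """Illustrative two-tier queue: default-priority jobs run before
    background (Autopilot) jobs; submission order is kept within a tier."""

    def __init__(self):
        self._heap = []
        self._counter = itertools.count()  # tie-breaker for FIFO within a tier

    def submit(self, job, priority):
        heapq.heappush(self._heap, (priority, next(self._counter), job))

    def drain(self):
        order = []
        while self._heap:
            _, _, job = heapq.heappop(self._heap)
            order.append(job)
        return order

q = ToyWorkerQueue()
q.submit("Autopilot: train model A", BACKGROUND)
q.submit("Autopilot: train model B", BACKGROUND)
q.submit("User: compute Feature Impact", DEFAULT)
print(q.drain())
```

Even though the user job was submitted last, it drains first, which matches the behavior above: user-submitted tasks are processed by the next available worker without waiting for Autopilot to finish.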
worker-queue
---
title: SHAP reference
description: Provides reference content for understanding Shapley Values, the coalitional game theory framework by Lloyd Shapley, as used in DataRobot's SHAP Prediction Explanations.
---

# SHAP reference {: #shap-reference }

SHAP is an open-source algorithm used to address the accuracy vs. explainability dilemma. SHAP (SHapley Additive exPlanations) is based on Shapley Values, the coalitional game theory framework by Lloyd Shapley, Nobel Prize-winning economist.

Shapley asked:

> How should we divide a payout among a cooperating team whose members made different contributions?

Shapley values answer:

* The Shapley value for member _X_ is the amount of credit they get.
* For every subteam, how much marginal value does member _X_ add when they join the subteam? The Shapley value is the weighted mean of this marginal value.
* The total payout is the sum of Shapley values over members.

Scott Lundberg is the primary author of the SHAP python package, providing a programmatic way to explain predictions:

> We can divide credit for model predictions among features! By assuming that each value of a feature is a “player” in a game, the prediction is the payout. SHAP explains how to fairly distribute the “payout” among features.

SHAP has become increasingly popular due to the SHAP <a target="_blank" href="https://github.com/slundberg/shap">open source package</a>, which developed:

* A high-speed exact algorithm for tree ensemble methods (called "TreeExplainer").
* A high-speed approximation algorithm for deep learning models (called "DeepExplainer").
* A model-agnostic algorithm to estimate Shapley values for any model (called "KernelExplainer").

The following key properties of SHAP make it particularly suitable for DataRobot machine learning:

* **Local accuracy**: The sum of the feature attributions is equal to the output of the model DataRobot is "explaining."
* **Missingness**: Features that are already missing have no impact.
* **Consistency**: Changing a model to make a feature more important to the model will never decrease the SHAP attribution assigned to that feature. (For example, model A uses feature X. You then make a new model, B, that uses feature X more heavily (perhaps by doubling the coefficient for that feature and keeping everything else the same). Because of the consistency property of SHAP, the SHAP importance for feature X in model B is at least as high as it was for feature X in model A.)

Additional [readings](#additional-reading) are listed below.

SHAP contributes to model explainability in the following areas:

* Feature Impact: SHAP shows, at a high level, which features are driving model decisions. Without SHAP, results are sensitive to sample size and can change when re-computed unless the sample is quite large. See the [deep dive](#feature-impact).
* Prediction Explanations: There are certain types of data that don't lend themselves to producing results for all columns. This is especially problematic in regulated industries like banking and insurance. SHAP explanations reveal how much each feature is responsible for a given prediction being different from the average. For example, when a real estate record is predicted to sell for $X, SHAP Prediction Explanations illustrate how much each feature contributes to that price. See the [deep dive](#prediction-explanations).

!!! note
    To retrieve the SHAP-based **Feature Impact** or **Prediction Explanations** visualizations, you must enable the [Include only models with SHAP value support](additional) advanced option prior to model building.

* Feature Effects: SHAP does not change the Feature Effects results. The Predicted, Actual, and Partial dependence plots do not use SHAP in any way. However, the bar chart on the left is ordered by SHAP Feature Impact instead of the usual Permutation Feature Impact.

## Feature Impact {: #feature-impact }

Feature Impact assigns importance to each feature (`j`) used by a model.
### With SHAP {: #with-shap }

Given a model and some observations (up to 5000 rows in the training data), Feature Impact for each feature `j` is computed as:

&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;sample average of `abs(shap_values for feature j)`

Values are then normalized such that the top feature has an impact of 100%.

### With permutation {: #with-permutation }

Given a model and some observations (2500 by default and up to 100,000), calculate the metric for the model based on the actual data. For each column `j`:

* Permute the values of column `j`.
* Calculate metrics on the permuted data.
* Importance = `metric_actual - metric_perm`

Optionally, normalize by the largest resulting value.

## Prediction Explanations {: #prediction-explanations }

SHAP Prediction Explanations are [additive](#additivity-in-prediction-explanations). The sum of SHAP values is exactly equal to:

&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;`prediction - average(prediction)`

When selecting between XEMP and SHAP, consider your need for accuracy versus interpretability and performance. With XEMP, because all blueprints are included in Autopilot, the results may produce slightly higher accuracy. This is only true in some cases, however, since SHAP supports all _key_ blueprints, meaning that accuracy is often the same. SHAP does provide higher interpretability and performance:

* results are intuitive
* it’s computed for all features
* results often return 5-20 times faster
* it's [additive](#additivity-in-prediction-explanations)
* its open source nature provides transparency

### Additivity in Prediction Explanations {: #additivity-in-prediction-explanations }

In certain cases, you may notice that SHAP values do not add up to the prediction. This is because SHAP values are additive in the units of the direct model output, which can be different from the units of prediction for several reasons.

* For most binary classification problems, the SHAP values correspond to a scale that is different from the probability space [0,1].
This is due to the way that these algorithms map their direct outputs `y` to something always between 0 and 1, most commonly using a nonlinear function like the logistic function `prob = logistic(y)`. (In technical terms, the model's "link function" is `logit(p)`, which is the inverse of `logistic(y)`.) In this common situation, the SHAP values are additive in the pre-link "margin space", not in the final probability space. This means `sum(shap_values) = logit(prob) - logit(prob_0)`, where `prob_0` is the training average of the model's predictions.

* Regression problems with a skewed target may use the natural logarithm `log()` as a link function in a similar way.
* The model may have specified an [offset](additional#set-offset) (applied before the link) and/or an [exposure](additional#set-exposure) (applied after the link).
* The model may "cap" or "censor" its predictions (for example, enforcing them to be non-negative).

The following pseudocode can be used for verifying additivity in these cases.

```python
# shap_values = output from SHAP prediction explanations
# If you obtained the base_value from the UI prediction distribution chart, first transform it by the link.
base_value = api_shap_base_value or link_function(ui_shap_base_value)
pred = base_value + sum(shap_values)
if offset is not None:
    pred += offset
if link_function == 'log':
    pred = exp(pred)
elif link_function == 'logit':
    pred = exp(pred) / (1 + exp(pred))
if exposure is not None:
    pred *= exposure
pred = predictions_capping(pred)
# at this point, pred matches the prediction output from the model
```

### Open-source additivity warning {: #open-source-additivity-warning }

There is a known (though rare) issue in the interaction of the SHAP and XGBoost libraries that can cause SHAP values to add up to a slightly incorrect value. Most XGBoost models produce SHAP values that obey additivity, verified by an automatic check.
See examples reported on the SHAP GitHub page: * <a target="_blank" href="https://github.com/slundberg/shap/issues/950">Additivity check failed in TreeExplainer!</a> * <a target="_blank" href="https://github.com/slundberg/shap/issues/1151">Shap values do not sum to model output</a> * <a target="_blank" href="https://github.com/slundberg/shap/issues/887">Tree explainer error in latest version</a> !!! note {% include 'includes/github-sign-in-plural.md' %} In DataRobot, if additivity is violated by less than 1% (normalized to model predictions), the application provides a warning and provides the SHAP values. If failure is larger than 1%, an error is returned, and the SHAP values, which are potentially wrong, are not provided. There is a known issue in the interaction of SHAP and Keras models with certain activation functions, including SELU and Swish, which can cause SHAP values to fail additivity significantly. If this failure occurs in Keras models, the SHAP values are provided with a warning. See examples reported on the SHAP GitHub page: <a target="_blank" href="https://github.com/slundberg/shap/issues/712">"Shap values don't match real predictions - DeepExplainer."</a> ## SHAP compatibility matrix {: #shap-compatibility-matrix } See the following for blueprint support with SHAP: | Blueprint | Regression | Binary classification | OTV | Prediction servers| |-------------------------------|------------|-----------------------|-----|--------------------| | Linear models | ✔ | ✔ | ✔ | ✔ | | XGBoost | ✔ | ✔ | ✔ | ✔ | | LightGBM | ✔ | ✔ | ✔ | ✔ | | Keras | ✔ | ✔ | ✔ | ✔ | | Random Forest | x | x | x | x | | Shallow Random Forest | ✔ | ✔ | ✔ | ✔ | | Frequency Cost / Severity | ✔ | N/A | N/A | ✔ | | Stacked / boosted blueprints | ✔ | ✔ | ✔ | ✔ | | Blueprints with calibration | ✔ | ✔ | ✔ | ✔ | | Blenders | x | x | x | x | | sklearn GBM | x | ✔ | ✔ | x | | DataRobot Scoring Code models | x | x | x | x | ## Which explainer is used for which model? 
{: #which-explainer-is-used-for-which-model }

Within a blueprint that supports SHAP, each modeling vertex uses the SHAP explainer that is most appropriate to the model type:

* Tree-based models (XGBoost, LightGBM, Random Forest, Decision Tree): TreeExplainer
* Keras deep learning models: DeepExplainer
* Linear models: LinearExplainer

If a blueprint contains more than one modeling task, the SHAP values are combined additively to yield the SHAP values for the overall blueprint.

## Additional reading {: #additional-reading }

The following public resources provide additional information on open-source SHAP:

* <a target="_blank" href="https://meichenlu.com/2018-11-10-SHAP-explainable-machine-learning/">SHAP for explainable machine learning</a>
* <a target="_blank" href="https://christophm.github.io/interpretable-ml-book/shap.html">SHAP (SHapley Additive exPlanations)</a>
* <a target="_blank" href="https://towardsdatascience.com/explain-your-model-with-the-shap-values-bc36aac4de3d">Explain Your Model with the SHAP Values</a>
* <a target="_blank" href="https://papers.nips.cc/paper/7062-a-unified-approach-to-interpreting-model-predictions.pdf">A Unified Approach to Interpreting Model Predictions</a>
* <a target="_blank" href="https://github.com/slundberg/shap">SHAP GitHub package</a>
--- title: Generate AI Report description: Generate an AI Report, a high-level overview of your modeling results and insights, to communicate the most important findings of your modeling project to stakeholders. --- # Generate AI Report {: #generate-ai-report } Once you complete an Autopilot run, you can generate an AI Report, which communicates the most important findings of your modeling project to stakeholders. The AI Report provides a high-level overview of your modeling results and insights, with particular focus on Trusted AI insights that fall under the dimensions of quality, accuracy, and interpretability. ![](images/ai-report.png) The report provides accuracy insights for the top performing model, including its speed and cross-validation scores. It also captures interpretability insights in the model's **Feature Impact** histogram, which helps to show which features are driving model decisions. The AI Report provides these summary details: * The target and the number of models trained. * The type of problem, for example, binary or regression. * The modeling mode, for example, Quick Autopilot. It also provides details about your data and its quality, including: * The number of rows of data. * The number and type of features (for example, categorical, numeric, text, etc.). * The number of features determined to be informative&mdash;DataRobot removes the features that are not informative. Availability is as follows: * Reports can be created for projects built with all modeling modes except manual mode. * The option is unavailable (the functionality is disabled) for time-aware, unsupervised, and multiclass/multilabel projects. ## Generating an AI Report {: #generating-an-ai-report } To generate an AI Report for your Autopilot run, on the Leaderboard, click **Menu** and select **AI Report > Generate new report**. ![](images/gen-ai-report.png) DataRobot automatically downloads the AI Report as a Word document after it is generated. 
## Viewing an AI Report {: #viewing-an-ai-report } To view a previously generated AI Report, on the Leaderboard, click **Menu** and select **AI Report > View report**. ![](images/view-ai-report.png) ## Downloading an AI Report {: #downloading-an-ai-report } To download a previously generated AI Report, on the Leaderboard, click **Menu** and select **AI Report > Download report**. ![](images/download-ai-report.png) DataRobot downloads the AI Report as a Word document.
--- title: Sliced insights description: Using a filtered subset of the full data, DataRobot segments insights by subpopulation to provide better segment-based accuracy information. --- # Sliced insights {: #sliced-insights } Sliced insights provide the option to view a subpopulation of a model's data based on feature values&mdash;either raw or derived. Slices are, in effect, a filter for categorical, numeric, or both types of features. Slices are applied to the Training, Validation, Cross-validation, or Holdout (if unlocked) partitions, depending on the insight. Viewing and comparing insights based on segments of a project’s data helps to understand how models perform on different subpopulations. Use the segment-based accuracy information gleaned from sliced insights, or compare the segments to the "global" slice (all data), to improve training data, create individual models per segment, or augment predictions post-deployment. Some common uses of sliced insights: > A bank is building a model to predict loan default risk and wants to understand if there are segments of their data&mdash; demographic information, location, etc.&mdash;that their model performs either more or less accurately on. If they find that "slicing" the data shows some segments perform to their expectations, they may choose to create individual projects per segment. > An advertising company wants to predict whether someone will click an ad. Their data contains multiple websites and they want to understand if the drivers are different between websites in their portfolio. They are interested in creating comparison groups, with each group consisting of some number of different values, to ultimately impact user behaviors in different ways for each site. To view insights for a segment of your data once models are trained, choose the preconfigured slice from the **Slice** dropdown. If the slice has been calculated for the chosen insight, DataRobot will load the insight. 
Otherwise, a button is available to start further calculations.

### Supported insights {: #supported-insights }

Sliced insights are available for the following insights (where applicable) for non- and time-aware binary classification and regression projects:

* [**Lift Chart**](lift-chart)
* [**ROC Curve**](roc-curve-tab/index)
* [**Residuals**](residuals) (not available for time-aware)
* [**Feature Effects**](feature-effects) (not available for time-aware)
* [**Feature Impact**](#feature-impact-slices)

See also the [sliced insight considerations](#feature-considerations).

## Create a slice {: #create-a-slice }

You can create a slice to apply to insights from a supported [Leaderboard](#leaderboard-slices) insight. Each slice is made up of up to three [_filters_](#filter-configuration-reference) (connected, as needed, by the `AND` operator).

!!! note

    Features that can be used to create filters are drawn from all features, regardless of what is currently displayed on the **Data** tab or whether you built the model using a feature list that doesn't include that feature. This is because while feature lists are based on columns, slices are based on rows&mdash;a slice includes or excludes a row based on that row's value for the selected feature.

To create a slice from the Leaderboard:

1. Select a model and open a [supported insight](#supported-insights). The insight loads using all data for the selected partition.

2. From the **Slice** dropdown, which defaults to `All Data`, configure a new slice by selecting **Manage slices**.

    ![](images/sliced-insights-1.png)

3. Click **Add slice** to open the filter configuration window. Fields are described in the [filter reference](#filter-configuration-reference).

    ![](images/sliced-insights-4.png)

4. Click **Add** to finish and view the **Slices** window. The **Slices** window lists all configured slices as well as summary text of the filters that define the slice.
![](images/sliced-insights-6.png)

From here you can add a new slice or delete one or more configured slices. Click **Done** to close the configuration window.

### Filter configuration reference {: #filter-configuration-reference }

The following table describes the fields in the filter configuration:

Filter field | Description
------------ | -----------
Slice name | Enter a name for the slice. This is the name that will appear in the **Slice** dropdown of supported insights.
Filter type | Select the categorical or numeric feature to base the filter on. Features are grouped in the dropdown by variable type. You cannot set the target as the filter type.
Operator | Set the filter operator to define what comprises the subpopulation. <br /> `in`: Those rows in which the feature value falls within the range of the defined **Value** (categorical and boolean). <br /> `=`&ast;: Those rows in which the feature value is equivalent to the defined **Value** (see below). <br /> `>`: Those rows in which the feature value is greater than the defined **Value** (numeric only).<br /> `<`: Those rows in which the feature value is less than the defined **Value** (numeric only).
Values | Set the matching criteria for **Filter type**. For categorical features, all available values are listed in the dropdown. For numeric features, enter a value manually.

&ast; If you select `=` as the operator, the **Value** must match exactly and you can choose only one value. If you set `in`, you can select multiple values.

![](images/sliced-insights-5.png)

### Adding multiple conditions {: #adding-multiple-conditions }

Use **Add filter** to build a slice with multiple conditions. Note that:

* You can mix categorical and numeric features in a single slice.
* All conditions are processed with the `AND` operator.
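Conceptually, a slice is a row-level filter whose conditions are joined with `AND`. The sketch below is illustrative only (it is not DataRobot's implementation) and uses hypothetical feature names and rows modeled on the diagnoses example later in this page:

```python
# Illustrative only: a slice is an AND of up to three row-level filter conditions.
rows = [
    {"gender": "Female", "age_group": "[70, 80)", "number_diagnoses": 7},
    {"gender": "Female", "age_group": "[60, 70)", "number_diagnoses": 9},
    {"gender": "Male",   "age_group": "[70, 80)", "number_diagnoses": 3},
    {"gender": "Female", "age_group": "[70, 80)", "number_diagnoses": 6},
]

# Each filter is (feature, operator, value); operators mirror the reference table.
filters = [
    ("gender", "in", {"Female"}),     # categorical: `in` allows multiple values
    ("age_group", "=", "[70, 80)"),   # categorical: `=` requires one exact value
    ("number_diagnoses", ">", 5),     # numeric: `>` / `<` comparisons
]

def row_matches(row, feature, op, value):
    """Evaluate one filter condition against one row."""
    if op == "in":
        return row[feature] in value
    if op == "=":
        return row[feature] == value
    if op == ">":
        return row[feature] > value
    if op == "<":
        return row[feature] < value
    raise ValueError(f"unsupported operator: {op}")

# All conditions are combined with AND.
sliced = [r for r in rows if all(row_matches(r, f, op, v) for f, op, v in filters)]
print(len(sliced))  # 2 rows survive the slice
```

Because slices act on rows, a row is included or excluded based on its feature values even when the slicing feature is absent from the modeling feature list.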
![](images/sliced-insights-7.png) ## Generate sliced insights {: #generate-sliced-insights } When you first load an insight, DataRobot displays results for all data in the appropriate partition (unless further calculations are required). This is the equivalent of the global slice, or as referenced in the dropdown, `All Data`. For the Lift Chart, ROC Curve, and Residuals, once prediction calculations are run for the first slice, DataRobot stores them for re-use (assuming the same data partition). Feature Impact and Feature Effects, because they use a special [calculation process](#deep-dive-slice-calculations), do not benefit from caching and so recompute predictions for each slice. When viewing sliced data for a given model, you only have to generate predictions once for a selected partition&mdash;Validation, Cross-validation, or if calculated, Holdout. Note that this calculation is in addition to the original calculation DataRobot ran when fitting models for the project. !!! note **Feature Impact** provides a [quick-compute option](feature-impact#quick-compute) to control the sample size used in the calculation. ## Recompute sliced insights {: #recompute-sliced-insights } When using slices with either **Feature Effects** or **Feature Impact**, you must manually launch the calculation to compute a sliced version of the insight. The reason for this is to save compute resources&mdash;it allows you to determine whether a sliced insight has been created without automatically launching the associated jobs. The order of operations is: * When requested, DataRobot calculates **Feature Impact** on all data. Then, you can initiate the calculation for any configured slice. * If you request **Feature Effects** before **Feature Impact**, DataRobot calculates **Feature Impact** on all data and then returns **Feature Effects** results. You are not provided with the option to select a slice until after the initial **Feature Impact** job is complete. 
For sliced **Feature Impact**, the quick-compute setting determines the sample size:

* If on, DataRobot uses 2500 rows or the number of rows available after a slice is applied, whichever is smaller.
* If off, the row count uses 100,000 rows or the number of rows available after a slice is applied, whichever is smaller.

For unsliced **Feature Impact**, the quick-compute toggle replaces the [**Adjust sample size**](feature-impact#change-sample-size) option. In that case, possible outcomes are:

* If on, DataRobot uses 2500 rows or the number of rows in the model training sample size, whichever is smaller.
* If off, the row count uses 100,000 rows or the number of rows in the model training sample size, whichever is smaller.

You may want to use this option, for example, to train **Feature Impact** at a sample size higher than the default 2500 rows (or less, if downsampled) in order to get more accurate and stable results.

### View a sliced insight {: #view-a-sliced-insight }

To view a sliced insight, choose the appropriate slice from the **Slice** dropdown. If you see a slice but are unsure of the filter conditions, click **Manage Slices** to view summary text of the filters that define the slice.

![](images/sliced-insights-6.png)

The following example shows the **ROC Curve** tab without slices applied:

![](images/sliced-insights-9.png)

Consider the same model with a slice applied that segments the data for females aged 70-80 who have had more than five diagnoses:

![](images/sliced-insights-10.png)

!!! note

    If the slice results in predictions that are either all positive or all negative, the ROC curve will be a straight line. The Confusion Matrix reports the same results in table form.
The images below show **Feature Impact** with first the global slice: ![](images/sliced-insights-11.png) And then a configured slice: ![](images/sliced-insights-11a.png) Hover on a feature to compare the calculated impact between sliced views: ![](images/sliced-insights-12.png) ## Deep dive: Slice calculations {: #deep-dive-slice-calculations } For the Lift Chart, ROC Curve, and Residuals, once prediction calculations are run for the first slice, DataRobot stores them so that they can be re-used, assuming the same data partition. Specifically: * When you select a new *slice* for the first time, within the same insight, DataRobot will generate the insight but will not need to rerun predictions (because predictions for the partition have already been computed). * When you change to another supported *insight* (other than **Feature Impact**), the predictions are available and only the insight itself must be generated (because the partition's predictions were already computed by another supported insight). For **Feature Impact** and **Feature Effects**, DataRobot first runs predictions on the training sample chosen to fit the model. Then, DataRobot creates sliced-based synthetic prediction datasets and generates predictions for use in the respective insights. Each insight generates its own unique synthetic datasets. ## Feature considerations {: #feature-considerations } * Sliced insights are only available for binary classification and regression projects. * Sliced insights for OTV and time series projects are available as Public Preview (feature flag: Sliced Insights for Time Aware Projects) for Lift Chart, ROC Curve, and Feature Impact. * Slices are not available for projects that use [Feature Discovery](feature-discovery/index). * Slices are not available in projects created with SHAP Feature Importance and SHAP-based Prediction Explanations. * You cannot edit slices. Instead, delete (if desired) the existing slice and create a new slice with the desired criteria. 
* You can add a maximum of three filter conditions to a single slice.
* If you create an invalid slice, the slice is created, but when you apply it to a supported insight, it errors. This can happen, for example, if there are not enough rows in the sliced data to compute the insight or if the filter is invalid. For example, if you set `num_procedures > 10` and the maximum value in any row is `6`, DataRobot creates the slice but errors during the insight calculation when the slice is selected.
* Row requirements:
    * Feature Impact with **Quick-compute**: 2500 rows or the number of rows available after a slice is applied, whichever is smaller. Minimum 10 rows, maximum 100,000 rows.
    * Other insights: Minimum 1 row (must fall within the slice), maximum set only by [file size limits](file-types).
* For **Feature Impact**:
    * Slices are calculated on all rows of the training data, which is then sliced to the requested number of rows. (Previously, **Feature Impact** was calculated on the exact number of rows requested in the row count.)
    * DataRobot recommends calculating **Feature Impact** for all data before calculating an individual slice.
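The **Feature Impact** row-count rules above can be summarized in a small sketch (an illustration of the documented behavior; the function name and structure are hypothetical, not DataRobot code):

```python
def feature_impact_sample_size(rows_after_slice, quick_compute=True):
    """Rows used for sliced Feature Impact, per the documented row requirements."""
    if quick_compute:
        # Quick-compute: 2500 rows or the rows remaining after the slice,
        # whichever is smaller.
        n = min(2500, rows_after_slice)
    else:
        # Quick-compute off: capped at 100,000 rows.
        n = min(100_000, rows_after_slice)
    if n < 10:
        # Documented minimum of 10 rows for Feature Impact on a slice.
        raise ValueError("Feature Impact requires at least 10 rows in the slice")
    return n

print(feature_impact_sample_size(80_000))                       # 2500
print(feature_impact_sample_size(80_000, quick_compute=False))  # 80000
```

For example, a slice that leaves 500 rows uses all 500 rows in either mode, while a slice that leaves 5 rows fails the minimum-row check.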
--- title: Data partitioning and validation description: To maximize accuracy, DataRobot separates data into training, validation/cross-validation, and holdout data. --- # Data partitioning and validation {: #data-partitioning-and-validation } You should evaluate and select models using only the Validation and Cross-Validation scores. Use the **Holdout** score for a final estimate of model performance only after you have selected your best model. To make sure that the **Holdout** score does not inadvertently affect your model selection, DataRobot “hides” the score behind the padlock icon. After you have selected your optimal model, you can score it using the holdout data. The following section describes the dataset segments. ## Validation types {: #validation-types } To maximize accuracy, DataRobot separates data into training, validation, and holdout data. The segments (splits) of the dataset are defined as follows: | Split | Description | |------------|--------------| | Training | The training set is data used to build the models. Things such as linear model coefficients and the splits of a decision tree are derived from information in the training set.| | Validation | The validation (or testing) set is data that is not part of the training set; it is used to evaluate a model’s performance using data it has not seen before. Since this data was not used to build the model, it can provide an unbiased estimate of a model’s accuracy. You often compare the results of validation when selecting a model. | | Holdout | Because the process of training a series of models and then selecting the “best” based on the validation score can yield an overly optimistic estimate of a model’s performance, DataRobot uses the holdout set as an extra check against this selection bias. The holdout data is unavailable to models during the training and validation process. After selecting a model, you can score your model using this data as another check. 
|

When creating splits, DataRobot uses five folds by default. With each round of training, DataRobot uses increasing amounts of data, split across those folds. For full Autopilot, the first round uses a random sample comprising 16% of the training data in the first four partitions. The next round uses 32% of the training data, and the final round 64%. The Validation and Holdout partitions never change. (Other [modeling modes](model-data#set-the-modeling-mode) may use different sample sizes.)

You can visualize data partitioning like this:

![](images/data-parts-1.png)

And cross-validation like this:

![](images/data-parts.png)

DataRobot uses "stacked predictions" for the validation partition when creating "out-of-sample" predictions on training data.

### What are stacked predictions {: #what-are-stacked-predictions }

Without some kind of manipulation, predictions from training data would appear to have misleadingly high accuracy. To address this, DataRobot uses a technique called stacked predictions for the training dataset. With stacked predictions, DataRobot builds multiple models on different subsets of the data. The prediction for any row is made using a model that excluded that data from training. In this way, each prediction is effectively an "out-of-sample" prediction.

Consider a sample of downloaded predictions:

![](images/dnt-downloaded-preds.png)

DataRobot makes it obvious which is the holdout partition&mdash;the validation partition is labeled as `0`.

### Validation scores {: #validation-scores }

The Leaderboard lists all models that DataRobot created (automatically or manually) and each model's scores. Scores are displayed in all or some of the following columns:

* Validation
* Cross-Validation (CV)
* Holdout

The presence or absence of a particular column depends on the type of validation partition that you chose at the start of the project. By default, DataRobot creates a 20% holdout and five-fold cross-validation.
If you use these defaults, DataRobot displays values in all three columns. Scores in the _Validation_ column are calculated using a model's trained predictions against the first validation partition. That is, it uses a "single-fold" of data. The [_Cross-Validation_](#k-fold-cross-validation-cv) partition is a mean of the (by default) five scores calculated on five different training/validation partitions. ## Understand validation types {: #understand-validation-types } Model validation has two important purposes. First, you use validation to pick the best model from all the models built for a given dataset. Then, once picked, validation helps you to decide whether the model is accurate enough to suit your needs. The following sections describe methods for using your data to validate models. ### _K_-fold cross-validation (CV) {: #k-fold-cross-validation-cv } The performance of a predictive model usually increases as the size of the training set increases. Also, model performance estimates are more consistent if the validation set is large. Therefore, it is best to use as much data as possible for both training and validation. Use the cross-validation method to maximize the data available for each of these sets. This process involves: 1. Separating the data into two or more sections, called “folds” 2. Creating one model per fold, with the data assigned to that fold used for validation and the rest of the data used for training. The benefit to this approach is that all of the data is used for scoring and if enough folds are used, most of the data is used for training. * **Pros**: This method provides a better estimate of model performance. * **Cons**: Because of its multiple passes, CV is computationally intensive and takes longer to run. To compensate for the overhead when working with large datasets, DataRobot first trains models on a smaller part of the data and uses only one cross-validation fold to evaluate model performance. 
Then, for the highest performing models, DataRobot increases the subset sizes. In the end, only the best models are trained on the total cross-validation partition. For those models, DataRobot completes _k_-fold cross-validation training and scoring. As a result, the mean score of complete cross-validation for a model is displayed in the Cross-Validation column. Those models that did not perform well will not have a cross-validation score. Instead, because they only had a "one-fold" validation, their score is reported in the Validation column. You can initiate complete CV model evaluation manually for those models by clicking **Run** in the model's Cross-Validation column. If the dataset is greater than or equal to 50k rows, DataRobot does not run cross-validation automatically. To initiate, click **Run** in the model's Cross-Validation column. If the dataset is larger than 800MB, cross-validation is not allowed. Instead, DataRobot uses TVH (described below). CV is generally useful for smaller datasets where you would not otherwise have enough useful data using TVH. ### Training, validation, and holdout (TVH) {: #training-validation-and-holdout-tvh } With the TVH method, the default validation method for datasets larger than 800MB, DataRobot builds and evaluates predictive models by partitioning datasets into three distinct sections: training, validation, and holdout. Predictions are based on a single pass over the data. * **Pros**: This method is faster than cross-validation because it only makes one pass on each dataset to score the data. * **Cons**: For the same reason that it is faster, it is also moderately less accurate. For projects larger than 800MB (non-time-aware only), the training partition percentage is not scaled down. The validation and holdout partitions are set to default sizes of 80MB and 100MB respectively and do not change unless you manually do so (both have a maximum size of 400 MB). 
The validation and holdout percentages, therefore, scale down with a larger training partition. The percentage of the training partition is comprised of the remaining percentage after accounting for the validation and holdout percentages. For example, say you have a 900MB project. If the validation and holdout partitions are at the default sizes of 80MB and 100MB respectively, then the validation percentage will be 9% and the holdout percentage will be 11.1%. The training partition will comprise the remaining 720MB as a percentage: 80%. Note that for time-aware projects, the TVH method is not applicable. They instead use [date/time partitioning](ts-date-time). ## Examples: partitioning methods {: #examples-partitioning-methods } The examples below provide an illustration of how different partitioning methods work in DataRobot non-time-aware projects. All examples describe a binary classification problem: predicting loan defaults. ### Random partitioning {: #random-partitioning } Rows for each partition are selected at [random](partitioning#random-partitioning-random), without taking target values into account. 
| State | Loan_purpose | Is_bad_loan (target) | Possible outcome (TVH) | Possible outcome (5-fold CV) | |-------|------------|-----------------------:|-------------|--------| | AR | debt consolidation | 0 | Training | Fold 1 | | AZ | debt consolidation | 0 | Training | Fold 5 | | AZ | home improvement | 1 | Validation | Fold 4 | | AZ | credit card | 1 | Training | Fold 4 | | CO | credit card | 0 | Training | Fold 3 | | CO | home improvement | 0 | Training | Fold 2 | | CO | home improvement | 0 | Validation | Fold 1| | CT | small business | 1 | Training | Holdout | | GA | credit card | 0 | Training | Fold 3 | | ID | small business | 0 | Training | Fold 2 | | IL | small business | 0 | Training | Holdout | | IN | home improvement | 1 | Holdout | Fold 5 | | IN | debt consolidation | 1 | Holdout | Fold 3 | | KY | credit card | 0 | Training | Holdout | ### Stratified partitioning {: #stratified-partitioning } For [stratified partitioning](partitioning#ratio-preserved-partitioning-stratified), each partition (T, V, H, or each CV fold) has a similar proportion of positive and negative target examples, unlike the previous example with random partitioning. 
| State | Loan_purpose | Is_bad_loan (target) | Possible outcome (TVH) | Possible outcome (5-fold CV) | |-------|------------------|------------:|--------|-------------| | AR | debt consolidation | 1 | Training | Fold 1 | | AZ | debt consolidation | 0 | Training | Fold 5 | | AZ | home improvement | 1 | Validation | Fold 4 | | AZ | credit card | 0 | Training | Fold 4 | | CO | credit card | 1 | Training | Fold 3 | | CO | home improvement | 0 | Training | Fold 2 | | CO | home improvement | 0 | Validation | Fold 1 | | CT | small business | 1 | Training | Holdout | | GA | credit card | 0 | Training | Fold 3 | | ID | small business | 1 | Training | Fold 2 | | IL | small business | 0 | Training | Holdout | | IN | home improvement | 1 | Holdout | Fold 5 | | IN | debt consolidation | 1 | Training | Fold 3 | | KY | credit card | 0 | Holdout | Holdout | ### Group partitioning {: #group-partitioning } In this example of [group partitioning](partitioning#group-partitioning-group), **State** is used as the group column. Note how the rows for the same state always end up in the same partition. 
| State | Loan_purpose | Is_bad_loan (target) | Possible outcome (TVH) | Possible outcome (5-fold CV) | |-------|---------|------:|--------|-------------| | AR | debt consolidation | 1 | Training | Fold 1 | | AZ | debt consolidation | 0 | Training | Fold 5 | | AZ | home improvement | 1 | Training | Fold 5 | | AZ | credit card | 0 | Training | Fold 5 | | CO | credit card | 1 | Validation | Fold 3 | | CO | home improvement | 0 | Validation | Fold 3 | | CO | home improvement | 0 | Validation | Fold 3 | | CT | small business | 1 | Training | Fold 1 | | GA | credit card | 0 | Training | Fold 2 | | ID | small business | 1 | Training | Fold 2 | | IL | small business | 0 | Holdout| Holdout | | IN | home improvement | 1 | Holdout | Fold 4 | | IN | debt consolidation | 1 | Training | Fold 4 | | KY | credit card | 0 | Training | Holdout | ### Partition feature partitioning {: #partition-feature-partitioning } The [partition feature](partitioning#column-based-partitioning-partition-feature) method uses either TVH or CV. **TVH**: The three unique values of “My_partition_id” directly correspond to assigned partitions. 
| State | Loan_purpose | Is_bad_loan (target) | My_partition_id (partition feature) | Outcome | |-------|--------|---------------:|-------|-----------| | AR | debt consolidation | 1 | my_train | Training | | AZ | debt consolidation | 0 | my_train | Training | | AZ | home improvement | 1 | my_train | Training | | AZ | credit card | 0 | my_val | Validation | | CO | credit card | 1 | my_val | Validation | | CO | home improvement | 0 | my_val | Validation | | CO | home improvement | 0 | my_train | Training | | CT | small business | 1 | my_train | Training | | GA | credit card | 0 | my_train | Training | | ID | small business | 1 | my_train | Training | | IL | small business | 0 | my_holdout | Holdout | | IN | home improvement | 1 | my_holdout | Holdout | | IN | debt consolidation | 1 | HO | Holdout | | KY | credit card | 0 | HO | Holdout | **CV**: The seven unique values of **My_partition_id** directly correspond to seven created partitions. | State | Loan\_purpose | Is\_bad\_loan (target) | My\_partition\_id (partition feature) | Outcome | |-------|--------------------|-----------------------:|---------------------------------------|---------| | AR | debt consolidation | 1 | P1 | Fold 1 | | AZ | debt consolidation | 0 | P1 | Fold 1 | | AZ | home improvement | 1 | P2 | Fold 2 | | AZ | credit card | 0 | P2 | Fold 2 | | CO | credit card | 1 | P3 | Fold 3 | | CO | home improvement | 0 | P3 | Fold 3 | | CO | home improvement | 0 | P4 | Fold 4 | | CT | small business | 1 | P4 | Fold 4 | | GA | credit card | 0 | P5 | Fold 5 | | ID | small business | 1 | P5 | Fold 5 | | IL | small business | 0 | P6 | Fold 6 | | IN | home improvement | 1 | P6 | Fold 6 |
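The partition feature assignments shown in these tables amount to a direct value-to-partition lookup. A minimal sketch, using the hypothetical column values from the TVH example above (not DataRobot code):

```python
# Illustrative only: with the partition feature method, each unique value of the
# partition column maps directly to a partition.
partition_map = {
    "my_train": "Training",
    "my_val": "Validation",
    "my_holdout": "Holdout",
    "HO": "Holdout",  # as in the TVH table, more than one value may map to Holdout
}

my_partition_id = ["my_train", "my_val", "HO", "my_train", "my_holdout"]
outcome = [partition_map[v] for v in my_partition_id]
print(outcome)  # ['Training', 'Validation', 'Holdout', 'Training', 'Holdout']
```

In the CV variant, the same lookup maps each unique value (`P1`, `P2`, ...) to its own fold instead of to Training/Validation/Holdout.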
---
title: Datasets
description: The Datasets tab lists each dataset that has been added to your Use Case and provides the ability to interact with those datasets, by exploring features, wrangling data, or creating an experiment.
---

# Datasets {: #datasets }

The **Datasets** tab lists all datasets currently linked to the selected Use Case by you and other team members. To access this tab, open a Use Case and click **Datasets**. From this tab, you can:

![](images/wb-data-tab-5.png)

| | Element | Description |
|----------|-----------|-------------|
| <div class="table-label">1</div> | Add new | Add a dataset, experiment, or notebook to your Use Case, or create a new Use Case. |
| <div class="table-label">2</div> | Search | Search for a specific dataset. |
| <div class="table-label">3</div> | Sort | Sort the dataset columns. |
| <div class="table-label">4</div> | More options | Click **More options** to interact with a dataset:<br><ul><li>**Explore**: View [Exploratory Data Insights](#view-exploratory-data-insights).</li><li>**Wrangle/Continue Wrangling**: [Perform data wrangling](wb-wrangle-data/index) on datasets retrieved from a data connection.</li><li>**Start modeling**: Set up an experiment using the dataset.</li><li>**Remove from Use Case**: Removes the dataset from the Use Case, also removing access for any team members. The dataset is still available via the [Data Registry](wb-data-registry).</li></ul> |

## View Exploratory Data Insights {: #view-exploratory-data-insights }

While a dataset is being registered in Workbench, DataRobot also performs [EDA1](eda-explained#eda1){ target=_blank }&mdash;analyzing and profiling every feature to detect feature types, automatically transform date-type features, and assess feature quality. Once registration is complete, you can explore the information uncovered while computing EDA1.

To view Exploratory Data Insights:

1. In a Use Case, navigate to the **Datasets** tab.

2.
Click the **More options** icon next to the dataset you want to view and select **Explore**. Alternatively, click the dataset name to view its insights. ![](images/wb-data-tab-1.png) 3. For each feature in the dataset, DataRobot displays various [feature details](histogram){ target=_blank }, including a [histogram](histogram#histogram-chart){ target=_blank } and summary statistics. ![](images/wb-data-tab-2.png) 4. To drill down into a specific feature, click its histogram chart along the top. ![](images/wb-data-tab-3.png) ## Next steps {: #next-steps } From here, you can: - [Add more data.](wb-add-data/index) - [Perform data wrangling for datasets added via a data connection.](wb-wrangle-data/wb-add-operation) - [Use the dataset to set up an experiment and start modeling.](wb-experiment/index)
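For intuition, the per-feature profiling behind these insights (summary statistics plus a histogram) can be sketched in a few lines of stdlib Python. This is a toy illustration only; `profile_numeric` is a hypothetical helper, and DataRobot's actual EDA1 additionally detects feature types, transforms date features, and assesses feature quality.

```python
# Toy EDA-style profile of one numeric feature; illustrative only.
from collections import Counter
from statistics import mean, median, stdev

def profile_numeric(values, bins=10):
    """Return summary statistics plus equal-width histogram counts."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / bins or 1  # guard against a constant feature
    hist = Counter(min(int((v - lo) / width), bins - 1) for v in values)
    return {
        "mean": mean(values),
        "median": median(values),
        "std": stdev(values) if len(values) > 1 else 0.0,
        "min": lo,
        "max": hi,
        "histogram": [hist.get(b, 0) for b in range(bins)],
    }

stats = profile_numeric([1, 2, 2, 3, 10], bins=5)
```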
wb-data-tab
--- title: Data preparation description: How to add, profile, and wrangle data in Workbench. --- # Data preparation {: #data-preparation } {% include 'includes/data-description.md' %} DataRobot’s wrangling capabilities give you the ability to prepare data and engineer features with a no-code interface to see transformations in real time&mdash;reducing the time from data to model. This section covers the following topics: Topic | Describes... ---------|----------------- [Add data](wb-add-data/index) | Add datasets to your Use Case from a local file, data connection, or the Data Registry. [Wrangle data](wb-wrangle-data/index) | Interactively prepare data for modeling without moving it from your data source to generate a new output dataset. [Datasets tab](wb-data-tab) | Manage the datasets linked to your Use Case and access Exploratory Data Insights. [Reference](wb-data-ref) | View feature considerations and other reference material for Data Preparation.
index
--- title: Notebook reference description: Answers questions and provides tips for working with DataRobot Notebooks in DataRobot's Workbench. section_name: Notebooks --- {% include 'includes/notebooks/notebook-ref.md' %}
wb-notebook-ref
--- title: DataRobot Notebooks description: Read preliminary documentation for DataRobot's notebook features currently in the DataRobot public preview pipeline. section_name: Notebooks --- # DataRobot Notebooks {: #datarobot-notebooks } {% include 'includes/notebooks/nb-index-main.md' %} ## Browser compatibility {: #browser-compatibility } {% include 'includes/browser-compatibility.md' %}
index
--- title: Workbench overview description: Understand the components of the DataRobot Workbench interface, including the architecture, some sample workflows, and directory landing page. --- # Workbench overview {: #workbench-overview } {% include 'includes/wb-overview.md' %}
wb-overview
--- title: Capability matrix description: An evolving comparison of capabilities available in DataRobot Classic and Workbench. --- # Capability matrix {: #capability-matrix } {% include 'includes/wb-capability-matrix.md' %}
wb-capability-matrix
---
title: Get started
description: Understand the components of the new DataRobot user interface.
---

# Get started in Workbench {: #get-started-in-workbench }

Workbench is an intuitive, guided machine learning workflow that provides a way for users to experiment and iterate. Move from raw data to prepared, partitioned data that’s ready for modeling, build many models at once, and generate value quickly through key insights and predictions.

![](images/uxr-2.png)

The following sections help you understand DataRobot's Workbench interface:

Topic | Describes...
--------|-----------------
[Workbench overview](wb-overview) | The value and benefits of Workbench, as a generational leap from classic DataRobot, as well as its components and architecture.
[Workbench glossary](wb-glossary) | Terms specific to the Workbench user experience for handy, quick reference.
[Capability matrix](wb-capability-matrix) | An evolving comparison of capabilities available in DataRobot Classic and Workbench.
index
---
title: Glossary
description: Familiarize yourself with the terms specific to Workbench, DataRobot's collaborative, intuitive interface.
---

# Glossary {: #glossary }

The Workbench glossary provides brief definitions of terms relevant to DataRobot's collaborative, intuitive interface. See the [full glossary](glossary/index){ target=_blank } for terms that span all phases of machine learning, from data to deployment.

#### Apps {: #apps }

See [No-Code AI Apps](glossary/index#no-code-ai-apps){ target=_blank } in the DataRobot Classic glossary.

#### Asset {: #asset }

One of the components of a Use Case that can be added, managed, and shared within Workbench. Components include data, experiments, No-Code AI Apps, and DataRobot Notebooks.

#### Connection instance {: #connection-instance }

A connection that is configured with metadata about how to connect to a source system (e.g., instance of a Snowflake connection).

#### Data Preparation {: #data-preparation }

The process of transforming raw data to the point where it can be run through machine learning algorithms to uncover insights or make predictions. Also called “data preprocessing,” this term covers a broad range of activities like normalizing data, standardizing data, statistically or mathematically transforming data, processing and/or preprocessing data, and feature engineering.

#### Data Registry {: #data-registry }

A central catalog for your datasets in Workbench that allows you to link datasets to specific Use Cases.

#### DataRobot Classic {: #datarobot-classic }

The original DataRobot value-driven AI product. It provides a complete AI lifecycle platform leveraging machine learning that has broad interoperability and end-to-end capabilities for ML experimentation and production. It can be deployed on-premises (Self-Managed AI Platform) or in any cloud infrastructure. DataRobot Classic is being migrated to a new user interface, known as [Workbench](#workbench).
#### Dataset {: #dataset }

See [Dataset](glossary/index#dataset){ target=_blank } in the DataRobot Classic glossary.

#### Data wrangling {: #data-wrangling }

Data preparation operations of a scope tied to creating a dataset at an appropriate unit of analysis for a given machine learning use case.

#### Experiment {: #experiment }

An asset of a Use Case that is the result of having run the DataRobot modeling process. A Use Case can have zero or more experiments.

#### Exploratory Data Insights {: #exploratory-data-insights }

Insights generated by DataRobot running EDA1 on a dataset. See also [Exploratory Data Analysis](glossary/index#eda-exploratory-data-analysis){ target=_blank } in the DataRobot Classic glossary.

#### Materialization {: #materialization }

Creation of a physical dataset either in a data source, in the form of a table, or in DataRobot storage, in the form of a DataRobot dataset.

#### Model overview {: #model-overview }

A page within an experiment that displays the model Leaderboard, and once a model is selected, displays visualizations for that model.

#### No-Code AI Apps {: #no-code-ai-apps }

See [No-Code AI Apps](glossary/index#no-code-ai-apps){ target=_blank } in the DataRobot Classic glossary.

#### Notebook {: #notebook }

See [Notebook](glossary/index#notebook){ target=_blank } in the DataRobot Classic glossary.

#### Operation {: #operation }

A single data manipulation instruction that specifies to either transform, filter, or pivot one or more records into zero or more records (e.g., find and replace or compute new feature).

#### Prepared dataset {: #prepared-dataset }

A dataset that has been materialized in its source after publishing a recipe.

#### Publish {: #publish }

Execution of the sequence of operations specified in a recipe resulting in the materialization of a dataset in a data source.

#### Recipe {: #recipe }

A user-defined sequence of transformation operations that are applied to the data.
A recipe is uniquely identified and versioned by the system. It includes metadata identifying the input data’s source and schema, the output data’s schema, the Use Case Container ID, and user ID. #### Use Case {: #use-case } A container that groups objects that are part of the experimentation flow. #### Wrangle {: #wrangle } A capability that enables you to import, explore, and transform data in an easy-to-use GUI environment. #### Workbench {: #workbench } An experiment-based product optimized to support iterative workflows by enabling users to group and share everything they need to solve a specific problem from a single location. Workbench is organized by Use Case, and each Use Case contains zero or more datasets, models, notebooks, and apps. Workbench is based on [DataRobot Classic](#datarobot-classic).
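The recipe, operation, and publish terms above fit together as an ordered pipeline. The sketch below models that relationship in plain Python; the names and structure are hypothetical illustrations, not Workbench's actual recipe schema.

```python
# Hypothetical sketch of the recipe -> publish -> materialized dataset flow.

def find_and_replace(column, old, new):
    """Operation: transform one value into another in a given column."""
    def op(record):
        if record.get(column) == old:
            record = {**record, column: new}
        return record
    return op

def keep_if(predicate):
    """Operation: filter a record out (return None) unless the predicate holds."""
    def op(record):
        return record if predicate(record) else None
    return op

def publish(recipe, records):
    """Apply each operation in order; surviving records form the output dataset."""
    out = []
    for record in records:
        for op in recipe:
            record = op(record)
            if record is None:
                break  # record was filtered out; skip remaining operations
        if record is not None:
            out.append(record)
    return out

recipe = [
    find_and_replace("state", "Calif.", "CA"),
    keep_if(lambda r: r["amount"] > 0),
]
dataset = publish(recipe, [
    {"state": "Calif.", "amount": 100},
    {"state": "NY", "amount": -5},
])
```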
wb-glossary
--- title: Create experiments description: Describes how to create and manage experiments in the DataRobot Workbench interface. --- # Create experiments {: #create-experiments } Experiments are the individual "projects" within a [Use Case](wb-build-usecase). They allow you to vary data, targets, and modeling settings to find the optimal models to solve your business problem. Within each experiment, you have access to its Leaderboard and [model insights](wb-experiment-evaluate#insights), as well as [experiment summary information](wb-experiment-evaluate#view-experiment-info). After selecting a model, you can, from within the experiment: * [Make predictions](wb-predict). * [Create a No-Code AI App](wb-apps/index). * [Generate a compliance report](wb-experiment-evaluate#compliance-documentation). See the associated [FAQ](wb-experiment-ref) for important additional information. ## Create basic {: #create-basic } Follow the steps below to create a new experiment from within a Use Case. !!! note You can also start modeling directly from a dataset by clicking the **Start modeling** button. The **Set up new experiment** page opens. From there, the instructions follow the flow described below. ### Add experiment {: #add-experiment } From within a [Use Case](wb-build-usecase), click **Add new** and select **Add experiment**. The **Set up new experiment** page opens, which lists all data previously loaded to the Use Case. ![](images/wb-exp-1.png) ### Add data {: #add-data } Add data to the experiment, either by [adding new data](wb-add-data/index) (1) or selecting a dataset that has already been loaded to the Use Case (2). ![](images/wb-exp-2.png) Once the data is loaded to the Use Case (as in option 2 above), click to select the dataset you want to use in the experiment. 
Workbench opens a preview of the data: ![](images/wb-exp-6.png) From here you can: | | Option | Description | |---|---|---| | <div class="table-label">1</div> | ![](images/wb-exp-3.png) | Click to return to the data listing and choose a different dataset. | <div class="table-label">2</div> | ![](images/wb-exp-4.png) | Click the icon to proceed and set the target. | <div class="table-label">3</div> | ![](images/wb-exp-5.png) | Click **Next** to proceed and set the target. ### Select target {: #select-target} Once you have proceeded to target selection, Workbench prepares the dataset for modeling ([EDA 1](eda-explained#eda1){ target=_blank }). When the process finishes, set the target either by: === "Hover on feature name" Scroll through the list of features to find your target. If it is not showing, expand the list from the bottom of the display: ![](images/wb-exp-7.png) Once located, click the entry in the table to use the feature as the target. ![](images/wb-exp-8.png) === "Enter target name" Type the name of the target feature you would like to predict in the entry box. DataRobot lists matching features as you type: ![](images/wb-exp-9.png) Once a target is entered, Workbench displays a histogram providing information about the target feature's distribution and, in the right pane, a summary of the experiment settings. ![](images/wb-exp-10.png) From here, you are ready to build models with the default settings. Or, you can [modify the default settings](#customize-settings) and then begin. If using the default settings, click **Start modeling** to begin the [Quick mode](model-data#modeling-modes-explained){ target=_blank } Autopilot modeling process. ## Customize settings {: #customize-settings } Changing experiment parameters is a good way to iterate on a Use Case. Before starting to model, you can: * [Modify partitioning settings](#modify-partitioning). * [Change configuration settings](#change-the-configuration). 
Once you have reset any or all of the above, click **Start modeling** to begin the [Quick mode](model-data#modeling-modes-explained){ target=_blank } modeling process. ### Modify partitioning {: #modify-partitioning } Partitioning describes the method DataRobot uses to “clump” observations (or rows) together for evaluation and model building. Workbench defaults to [five-fold](data-partitioning){ target=_blank }, [stratified sampling](partitioning#ratio-preserved-partitioning-stratified){ target=_blank } with a 20% holdout fold. !!! info "Availability information" Date/time partitioning for building time-aware projects is off by default. Contact your DataRobot representative or administrator for information on enabling the feature. <b>Feature flag:</b> Enable Date/Time Partitioning (OTV) in Workbench To change the partitioning method or validation type: 1. Click the icon for **Additional settings**, **Next**, or the **Partitioning** field in the summary: ![](images/wb-exp-12.png) 2. If there is a date feature available, your experiment is eligible for [out-of-time validation](otv){ target=_blank } partitioning, which allows DataRobot to build time-aware models. In that case, additional information becomes available in the summary. ![](images/wb-exp-16.png) 3. Set the fields that you want to change. The fields available depend on the selected partitioning method. ![](images/wb-exp-13a.png) * [Random](partitioning#random-partitioning-random){ target=_blank } assigns observations (rows) randomly to the training, validation, and holdout sets. * [Stratified](partitioning#ratio-preserved-partitioning-stratified){ target=_blank } randomly assigns rows to training, validation, and holdout sets, preserving (as closely as possible) the same ratio of values for the prediction target as in the original data. * [Date/time](otv#advanced-options) assigns rows to backtests chronologically instead of, for example, randomly. 
This is the only valid partitioning method for time-aware projects. === "Random or Stratified" ![](images/wb-exp-13.png) | &nbsp; | Field | Description | |---------|--------|--------------| | <div class="table-label">1</div> | Validation type | Sets the method used on data to validate models.<br /> <ul><li> [Cross-validation](data-partitioning#k-fold-cross-validation-cv){ target=_blank }. Separates the data into two or more “folds” and creates one model per fold, with the data assigned to that fold used for validation and the rest of the data used for training. </li><li> [Training-validation-holdout](data-partitioning#training-validation-and-holdout-tvh){ target=_blank }. For larger datasets, partitions data into three distinct sections&mdash;training, validation, and holdout&mdash;with predictions based on a single pass over the data.</li></ul> | | <div class="table-label">2</div> | [Cross-validation folds](data-partitioning#k-fold-cross-validation-cv){ target=_blank } | Sets the number of folds used with the cross-validation method. A higher number increases the size of training data available in each fold, consequently increasing the total training time. | <div class="table-label">3</div> | [Holdout percentage](adv-opt/partitioning#configure-model-validation){ target=_blank } | Sets the percentage of data that Workbench “hides” when training. The Leaderboard shows a Holdout value, which is calculated using the trained model's predictions on the holdout partition. === "Date/time" When you select **Date/time**, you are prompted to enter an _ordering feature_&mdash;the feature used to order rows in the dataset. Click in the box to view the date/time features that DataRobot detected during EDA1. They are also listed below the box, allowing you to select the ordering feature there. If a feature is not listed, it was not detected as type `date` and cannot be used. 
Once an ordering feature is selected, DataRobot detects and reports the date and/or time format ([standard GLIBC strings](https://docs.python.org/2/library/datetime.html#strftime-and-strptime-behavior){ target=_blank }) for the selected feature: ![](images/wb-exp-13d.png) Backtest configuration becomes available. DataRobot sets defaults based on the characteristics of the dataset and can generally be left as-is&mdash;they will result in robust models. ![](images/wb-exp-13b.png) | &nbsp; | Field | Description | |---------|--------|--------------| | <div class="table-label">1</div> | Backtests | Sets the [backtesting partitions](otv#understanding-backtests){ target=_blank }. Any changes to these values are represented in the graphics below the entry boxes.<br /> <ul><li> [Number of backtests](otv#set-the-number-of-backtests){ target=_blank }. Sets the number of backtests for the project, which is the time-aware equivalent of cross-validation (but based on time periods or durations instead of random rows). </li><li> [Validation length](otv#set-the-validation-length){ target=_blank }. Sets the size of the partition used for testing&mdash;data that is not part of the training set that is used to evaluate a model’s performance.</li><li> [Gap length](otv#set-the-gap-length). Sets spaces in time, representing gaps between model training and model deployment. | <div class="table-label">2</div> | [Use equal rows per backtest](otv#set-rows-or-duration){ target=_blank } | Sets whether each backtest uses the same number of rows (enabled) or the same duration (disabled). | <div class="table-label">3</div> | Partition sampling method | Sets how to assign rows from the dataset, which is useful if a dataset is not distributed equally over time. | <div class="table-label">4</div> | Partitioning log | Provides a downloadable log that reports on partition creation. 
You can also use the graphics below the entry boxes to [edit individual backtests](otv#edit-individual-backtests){ target=_blank }. ![](images/wb-exp-13c.png) ### Change the configuration {: #change-the-configuration } You can make changes to the project's target or feature list before you begin modeling by returning to the **Target** page. To return, click the target icon, the **Back** button, or the **Target** field in the summary: ![](images/wb-exp-14.png) #### Change feature list {: #change-feature-list } Feature lists control the subset of features that DataRobot uses to build models. Workbench defaults to the [Informative Features](feature-lists#automatically-created-feature-lists){ target=_blank } list, but you can modify that prior to model building. To change, click on the **Feature list** dropdown and select a different list: ![](images/wb-exp-11.png) You can also [change the selected list](wb-experiment-add#train-on-new-settings) on a per-model basis once the experiment finishes building.
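The backtest settings described above (number of backtests, validation length, gap length) can be pictured with a small chronological sketch. This illustrates only the windowing idea; `backtest_windows` is a hypothetical helper, not DataRobot's partitioning engine.

```python
# Hypothetical sketch of chronological backtest windows; illustrative only.
from datetime import date, timedelta

def backtest_windows(end, n_backtests, validation_days, gap_days):
    """Walk backward from the latest date, carving out one
    (train_end, gap, validation) window per backtest."""
    windows = []
    cursor = end
    for _ in range(n_backtests):
        val_start = cursor - timedelta(days=validation_days)
        train_end = val_start - timedelta(days=gap_days)  # gap sits between
        windows.append({"train_end": train_end,
                        "validation_start": val_start,
                        "validation_end": cursor})
        cursor = val_start  # next (older) backtest ends where this one starts
    return windows

w = backtest_windows(date(2023, 6, 30), n_backtests=2,
                     validation_days=30, gap_days=7)
```

Each additional backtest trades some training data for another out-of-time evaluation window, which is the same trade-off the Backtests field controls.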
wb-experiment-create
--- title: Experiment reference description: Answers questions and provides tips for working with Use Cases in DataRobot's Workbench. --- # Experiment reference {: #experiment-reference } ??? faq "What types of experiments are supported in Workbench?" Currently, Workbench supports binary classification and regression projects. Because development is ongoing, see the release notes for a full list of supported capabilities. ??? faq "Are experiments I create in Workbench available in DataRobot Classic?" When you create experiments in Workbench, you can access them in DataRobot Classic via the [project management center](manage-projects#manage-projects-control-center){ target=_blank }. When you select an experiment from Classic that was created in Workbench, you will see the Use Case name above the dataset name, indicating it is available in Workbench. ![](images/wb-uc-faq-3.png) ??? faq "Can an experiment be linked to more than one Use Case?" An experiment can only be a part of a single Use Case. The only asset that can be in multiple Use Cases is a dataset. This is because a Use Case is intended to represent a specific business problem and experiments within the Use Case are typically directed at solving that problem. If an experiment is relevant for more than one Use Case, consider consolidating the two Use Cases. ??? faq "What modeling modes are available?" Workbench always runs [Quick](model-ref#quick-autopilot){ target=_blank } Autopilot, whether on the initial run or when [rerunning](wb-experiment-evaluate#controls). ??? faq "Can I create a feature list for modeling in Workbench?" When first [creating an experiment](wb-experiment-create#change-feature-list), the only feature lists available are Raw Features and Informative Features. After running models, additional automatically created feature lists may appear (for example, Reduced Features and Univariate Selections). 
If you want to model on a feature list you create, you cannot currently create the list in Workbench. You can, however, open the experiment from the [project management center](manage-projects#manage-projects-control-center){ target=_blank } in DataRobot Classic and create a feature list there. When you re-open the experiment in Workbench, the feature list will be available. ??? faq "Why isn't the prepared for deployment model at the top of the Leaderboard?" When Workbench prepares a model for deployment, it trains the model on 100% of the data. While the most accurate model was selected to be prepared, that selection was based on a 64% sample size. As a part of preparing the most accurate model for deployment, Workbench unlocks Holdout, resulting in the prepared model being trained on different data from the original. If you do not change the Leaderboard to sort by Holdout, the validation score in the left bar can make it appear as if the prepared model is not the most accurate. ??? faq "Does Workbench create blender models?" You cannot currently create a blender model in Workbench; however, there is a workaround. To add a blender to your Use Case: 1. Create an experiment and build models. 2. Open the project in DataRobot Classic and [blend models](creating-addl-models#blended-models){ target=_blank }. 3. Reopen the experiment in Workbench. The blender will be present in the Leaderboard.
wb-experiment-ref
---
title: Experiments
description: Create experiments and iterate quickly to evaluate and select the best predictive models.
---

# Experiments {: #experiments }

The following sections help you understand Workbench experiments:

Topic | Describes...
---------|-----------------
[Create experiments](wb-experiment-create) | Create experiments in Workbench.
[Evaluate models](wb-experiment-evaluate) | Filter the Leaderboard, use visualization tools to evaluate models, and manage experiments.
[Add/retrain models](wb-experiment-add) | Retrain existing models and add models from the blueprint repository.
[Reference and FAQ](wb-experiment-ref) | Questions and tips for working with experiments.
index
--- title: Evaluate experiments description: Describes how to filter the Leaderboard and use visualization tools to evaluate models in the DataRobot Workbench interface. --- # Evaluate experiments {: #evaluate-experiments } Once you start modeling, Workbench begins to construct your model Leaderboard, a list of models ranked by performance, to help with quick model evaluation. The Leaderboard provides a summary of information, including scoring information, for each model built in an experiment. From the Leaderboard, you can click a model to access visualizations for further exploration. Using these tools can help you assess what to do in your next experiment. After Workbench completes the [Quick mode](model-data#modeling-modes-explained){ target=_blank } run at the 64% sample size, the most accurate model is selected and trained on 100% of the data. That model is marked with the [**Prepared for Deployment**](model-rec-process#prepare-a-model-for-deployment){ target=_blank } badge. ![](images/wb-exp-eval-1.png) ## Manage the Leaderboard {: #manage-the-leaderboard } There are several controls available, described in the next sections, for navigating the Leaderboard. * [View experiment info](#view-experiment-info) * [Filter models](#filter-models) * [Sort models by](#sort-models-by) * [Controls](#controls) ### View experiment info {: #view-experiment-info } Click **View experiment info** to: * View a summary of information about the experiment's [setup](#setup-tab). * Access the [blueprint repository](wb-experiment-add#blueprint-repository). ![](images/wb-exp-eval-2.png) #### Setup tab {: #setup-tab } The **Setup** tab reports the parameters used to build the models on this Leaderboard. Field | Reports... ----- | ----------- Created | A time stamp indicating the creation of the experiment's Leaderboard as well as the user who initiated the model run. Dataset | The name, number of features, and number of rows in the modeling dataset. 
This is the same information available from the [data preview](wb-data-tab) page. Target | The feature selected as the basis for predictions, the resulting project type, and the [optimization metric](opt-metric){ target=_blank } used to define how to score the experiment's models. You can [change the metric](#sort-models-by) the Leaderboard is sorted by, but the metric displayed in the summary is the one used for the build. Partitioning | Details of the partitioning done for the experiment, either the default or [modified](wb-experiment-create#modify-partitioning){ target=_blank }. ### Filter models {: #filter-models } Filtering makes viewing and focusing on relevant models easier. Click **Filter models** to set the criteria for the models that Workbench displays on the Leaderboard. The choices available for each filter are dependent on the experiment and/or model type&mdash;they were used in at least one Leaderboard model&mdash;and will potentially change as models are added to the experiment. For example: === "Random or Stratified partitioning" ![](images/wb-exp-eval-3.png) === "Date/time partitioning" ![](images/wb-exp-eval-3a.png) Filter | Displays models that... ----- | ------------- Labeled models | Have been assigned the listed tag, either [starred models](leaderboard-ref#tag-and-filter-models){ target=_blank } or models [recommended for deployment](model-rec-process#prepare-a-model-for-deployment){ target=_blank }. Feature list | Were built with the selected feature list. Sample size | Were trained on the selected sample size. 
Model family | Are part of the selected model family: <ul><li>GBM (Gradient Boosting Machine), such as Light Gradient Boosting on ElasticNet Predictions, eXtreme Gradient Boosted Trees Classifier </li><li>GLMNET (Lasso and ElasticNet regularized generalized linear models), such as Elastic-Net Classifier, Generalized Additive2</li><li>RI (Rule induction), such as RuleFit Classifier</li><li>RF (Random Forest), such as RandomForest Classifier or Regressor</li><li>NN (Neural Network), such as Keras</li></ul> ### Sort models by {: #sort-models-by } By default, the Leaderboard sorts models based on the score of the validation partition, using the selected [optimization metric](opt-metric){ target=_blank }. You can, however, use the **Sort models by** control to change the basis of the display parameter when evaluating models. Note that although Workbench built the project using the most appropriate metric for your data, it computes many applicable metrics on each of the models. After the build completes, you can redisplay the Leaderboard listing based on a different metric. It will not change any values within the models; it simply reorders the model listing based on their performance on this alternate metric. ![](images/wb-exp-eval-4.png) See the page on [optimization metrics](opt-metric){ target=_blank } for detailed information on each. ### Controls {: #controls } Workbench provides simple, quick shorthand controls: Icon | Action ---- | ------ ![](images/icon-wb-rerun.png) | Reruns Quick mode with a different feature list. If you select a feature list that has already been run, Workbench will replace any deleted models or make no changes. ![](images/icon-wb-duplicate.png) | [Duplicates the experiment](manage-projects#duplicate-a-project), with an option to reuse just the dataset, or the dataset and settings. ![](images/icon-wb-delete.png) | Deletes the experiment and its models. If the experiment is being used by an application, you cannot delete it. 
![](images/icon-wb-close.png) | Slides the Leaderboard panel closed to make additional room (for example, for viewing insights). ## Insights {: #insights } Model insights help to interpret, explain, and validate what drives a model’s predictions. Available insights are dependent on experiment type, but may include the insights listed in the table below. Availability of [sliced insights](sliced-insights) is also model-dependent. !!! info "Availability information" Sliced insights in Workbench are off by default. Contact your DataRobot representative or administrator for information on enabling the feature. <b>Feature flag:</b> Slices in Workbench Insight | Description | Problem type | Sliced insights? ------- | ----------- | ------------ | ---------------- [Feature Impact](#feature-impact) | Shows which features are driving model decisions. | All | ✔ [Feature Effects](#feature-effects) | Conveys how changes to the value of each feature change model predictions. | All | ✔ [Blueprint](#blueprint) | Provides a graphical representation of data preprocessing and parameter settings. | All | [ROC Curve](#roc-curve) | Provides tools for exploring classification, performance, and statistics related to a model. | Classification | ✔ [Lift Chart](#lift-chart) | Depicts how well a model segments the target population and how capable it is of predicting the target. | All | ✔ [Residuals](#residuals) | Provides scatter plots and a histogram for understanding model predictive performance and validity. | Regression | ✔ [Accuracy Over Time](#accuracy-over-time) | Visualizes how predictions change over time. | Time-aware | [Stability](#stability) | Provides a summary of how well a model performs on different backtests. | Time-aware | To see a model's insights, click on the model in the left-pane Leaderboard. 
### Feature Impact {: #feature-impact } [Feature Impact](feature-impact){ target=_blank } provides a high-level visualization that identifies which features are most strongly driving model decisions. It is available for all model types and is an on-demand feature, meaning that for all but models prepared for deployment, you must initiate a calculation to see the results. ![](images/wb-exp-eval-5.png) ### Feature Effects {: #feature-effects } The [Feature Effects](feature-effects){ target=_blank } insight shows the effect of changes in the value of each feature on model predictions&mdash;how does a model "understand" the relationship between each feature and the target? It is an on-demand feature, dependent on the [Feature Impact](feature-impact){ target=_blank } calculation, which is prompted for when first opening the visualization. The insight is communicated in terms of [partial dependence](feature-effects#partial-dependence-logic){ target=_blank }, an illustration of how changing a feature's value, while keeping all other features as they were, impacts a model's predictions. ![](images/wb-exp-eval-29.png) ### Blueprint {: #blueprint } Blueprints are ML pipelines containing preprocessing steps (tasks), modeling algorithms, and post-processing steps that go into building a model. The [Blueprint](blueprints){ target=_blank } tab provides a graphical representation of the blueprint, showing each step. Click on any task in the blueprint to see more detail, including more complete model documentation (by clicking **DataRobot Model Docs** from inside the blueprint’s task). 
![](images/wb-exp-eval-21.png) Additionally, you can access the [blueprint repository](wb-experiment-add#blueprint-repository) from the **Blueprint** tab: ![](images/wb-exp-eval-22.png) ### ROC Curve {: #roc-curve } For classification experiments, the ROC Curve tab provides the following tools for exploring classification, performance, and statistics related to a selected model at any point on the probability scale: * An [ROC Curve](roc-curve){ target=_blank } * [Cumulative charts](cumulative-charts){ target=_blank } * A [confusion matrix](confusion-matrix){ target=_blank } * A [payoff matrix/profit curve](profit-curve){ target=_blank } * [Metrics](metrics){ target=_blank } ![](images/wb-exp-eval-6.png) ### Lift Chart {: #lift-chart } To help visualize model effectiveness, the [Lift Chart](lift-chart){ target=_blank } depicts how well a model segments the target population and how well the model performs for different ranges of values of the target variable. ![](images/wb-exp-eval-17.png) ### Residuals {: #residuals } For regression experiments, the [Residuals](residuals){ target=_blank } tab helps to clearly understand a model's predictive performance and validity. It allows you to gauge how linearly your models scale relative to the actual values of the dataset used. It provides multiple scatter plots and a histogram to assist your residual analysis: * Predicted vs. Actual * Residual vs. Actual * Residual vs. Predicted * Residuals histogram ![](images/wb-exp-eval-7.png) ### Accuracy Over Time {: #accuracy-over-time } For time-aware projects, [Accuracy Over Time](aot){ target=_blank } helps to visualize how predictions change over time. By default, the view shows predicted and actual vs. time values for the training and validation data of the most recent (first) backtest. This is the backtest model DataRobot uses to deploy and make predictions. (In other words, the model used to generate the error metric for the validation set.) 
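The quantities behind the Residuals scatter plots reduce to simple arithmetic. The generic sketch below is not DataRobot-specific; it takes a residual as actual minus predicted and builds the point sets for the three plots listed above.

```python
# Generic illustration of the residual quantities the insight plots.
# Residual convention assumed here: actual minus predicted.

def residuals(actual, predicted):
    """Residual for each row: observed value minus model prediction."""
    return [a - p for a, p in zip(actual, predicted)]

def residual_pairs(actual, predicted):
    """Build the point sets behind the three scatter plots."""
    res = residuals(actual, predicted)
    return {
        "predicted_vs_actual": list(zip(predicted, actual)),
        "residual_vs_actual": list(zip(res, actual)),
        "residual_vs_predicted": list(zip(res, predicted)),
    }
```

A well-behaved model shows residuals scattered symmetrically around zero with no visible trend against either the actual or the predicted values.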
![](images/wb-exp-eval-18.png) The visualization also has a time-aware [Residuals](aot#interpret-the-residuals-chart){ target=_blank } tab that plots the difference between actual and predicted values. It helps to visualize whether there is an unexplained trend in your data that the model did not account for and how the model errors change over time. ![](images/wb-exp-eval-19.png) ### Stability {: #stability } The [Stability](stability) tab provides an at-a-glance summary of how well a model performs on different backtests. It helps to measure performance and gives an indication of how long a model can be in production (how long it is "stable") before needing retraining. The values in the chart represent the validation scores for each backtest and the holdout. ![](images/wb-exp-eval-20.png) ## Compliance documentation {: #compliance-documentation } DataRobot automates many critical compliance tasks associated with developing a model and, by doing so, decreases time-to-deployment in highly regulated industries. You can generate, for each model, [individualized documentation](compliance){ target=_blank } to provide comprehensive guidance on what constitutes effective model risk management. Then, you can download the report as an editable Microsoft Word document (`.docx`). The generated report includes the appropriate level of information and transparency necessitated by regulatory compliance demands. The model compliance report is not prescriptive in format and content, but rather serves as a guide in creating sufficiently rigorous model development, implementation, and use documentation. The documentation provides evidence to show that the components of the model work as intended, the model is appropriate for its intended business purpose, and it is conceptually sound. 
As such, the report can help with completing the Federal Reserve System's [SR 11-7: Guidance on Model Risk Management](https://www.federalreserve.gov/supervisionreg/srletters/sr1107.htm){ target=_blank }. To generate a compliance report: 1. Select a model from the Leaderboard. 2. From the **Model actions** dropdown, select **Generate compliance report**. ![](images/wb-exp-eval-9.png) 3. Workbench prompts for a download location and, once selected, generates the report in the background as you continue experimenting. ## Manage experiments {: #manage-experiments } At any point after models have been built, you can manage an individual experiment from within its Use Case. Click on the three dots to the right of the experiment name to delete it. To share the experiment, use the Use Case [**Manage members**](wb-build-usecase#share) tool to share the experiment and other associated assets. ![](images/wb-exp-15.png)
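The compliance-report steps above can also be scripted. The sketch below follows the `ComplianceDocumentation` workflow exposed by the DataRobot Python client; the class and method names are the client's documented ones, but verify them for your client version (newer clients may expose this through an automated-documentation interface instead). The helper takes the documentation object as a parameter so the exact client surface remains an assumption.

```python
# Hedged sketch: generate and download a model compliance report.
# `doc` is assumed to behave like datarobot.ComplianceDocumentation:
# generate() starts a server-side job and download() writes a .docx file.

def generate_compliance_report(doc, path="compliance_report.docx"):
    job = doc.generate()        # report builds in the background on the server
    job.wait_for_completion()   # block until the report is ready
    doc.download(path)          # editable Microsoft Word document
    return path
```

In the real client, `doc` would be constructed from a project and model ID before calling the helper.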
wb-experiment-evaluate
--- title: Add models to experiments description: Describes how to retrain Leaderboard models or add new models from the blueprint repository. --- # Add models to experiments {: #add-models-to-experiments } There are two methods for adding new models to your experiment: - [Retrain](#train-on-new-settings) existing Leaderboard models using new settings. - Add new models from the blueprint [repository](#blueprint-repository). ## Train on new settings {: #train-on-new-settings } Once the Leaderboard is populated, you can retrain an existing model to create a new Leaderboard model. To retrain a model: 1. Select a model from the **Leaderboard** by clicking on it. 2. Change a model characteristic by clicking the pencil icon (![](images/icon-pencil.png)). From here, you can: - Select a new feature list. You cannot change the feature list for the model prepared for deployment because it is a ["frozen" run](frozen-run){ target=_blank }. (See the [FAQ](wb-experiment-ref) for a workaround to add feature lists.) - Change the sample size (non-time aware) or training period (if date/time partitioning was used). The resulting window depends on the [partitioning method](wb-experiment-create#modify-partitioning). !!! info "Availability information" Date/time partitioning for building time-aware projects is off by default. Contact your DataRobot representative or administrator for information on enabling the feature. <b>Feature flag:</b> Enable Date/Time Partitioning (OTV) in Workbench === "Random or Stratified" Click the pencil icon to change the sample size and optionally [enforce a frozen run](frozen-run#start-a-frozen-run){ target=_blank }. ![](images/wb-exp-eval-8.png) === "Date/time" Click the pencil icon to change the training period size and optionally [enforce a frozen run](frozen-run#start-a-frozen-run){ target=_blank }. While you can change the training range and sampling rate, you cannot change the duration of the validation partition once models have been built. !!!
note Consider [retraining your model on the most recent data](otv#retrain-before-deployment){ target=_blank } before final deployment. The **New Training Period** box has multiple selectors, described below: ![](images/wb-exp-eval-16.png) | | Selection | Description | |---|---|---| | <div class="table-label">1</div> | Frozen run toggle | [Freeze the run](frozen-run) ("freeze" parameter settings from a model’s early, smaller-sized run). | | <div class="table-label">2</div> | Training mode | Rerun the model using a different training period. Before setting this value, see [the details](ts-customization#duration-and-row-count) of row count vs. duration and how they apply to different folds. | | <div class="table-label">3</div> | Snap to | "Snap to" predefined points to facilitate entering values and avoid manual scrolling or calculation. | | <div class="table-label">4</div> | [Enable time window sampling](ts-leaderboard#time-window-sampling) | Train on a subset of data within a time window for a duration or [start/end](ts-leaderboard#setting-the-start-and-end-dates) training mode. Check to enable and specify a percentage. | | <div class="table-label">5</div> | [Sampling method](ts-leaderboard#set-rows-or-duration) | Select the sampling method used to assign rows from the dataset. | | <div class="table-label">6</div> | Summary graphic | View a summary of the observations and testing partitions used to build the model. | | <div class="table-label">7</div> | Final Model | View an image that changes as you adjust the dates, reflecting the data to be used in the model you will make predictions with (see the [note](ts-leaderboard#about-final-models) about final models). | Once you have set a new value, click **Train new models**. DataRobot builds the new model and displays it on the Leaderboard. ## Blueprint repository {: #blueprint-repository } !!! info "Availability information" Access to the blueprint repository is off by default.
Contact your DataRobot representative or administrator for information on enabling access. <b>Feature flag:</b> Enable the Repository tab in Workbench The blueprint [repository](repository){ target=_blank } is a library of modeling blueprints available for a selected experiment. Blueprints illustrate the tasks used to build a model, not the model itself. Model blueprints listed in the repository have not necessarily been built yet, but can be, because they are of a type compatible with the experiment's data and settings. There are two ways to access the blueprint repository: - From a Leaderboard model's [**Blueprint**](wb-experiment-evaluate#blueprint) tab. ![](images/wb-exp-eval-22.png) - Click the [**View experiment info**](wb-experiment-evaluate#view-experiment-info) link and select the **Blueprint repository** tab. ![](images/wb-exp-eval-23.png) ### Add models {: #add-models } Once in the repository, you can add one or more blueprints to your experiment by selecting the checkbox to the left of the blueprint name. ![](images/wb-exp-eval-24.png) Click on the blueprint name to see the graphical representation of tasks that comprise that blueprint. Choose settings for the new model build; settings differ slightly depending on the partitioning method applied. === "Random or Stratified" Set the [feature list](wb-experiment-create#change-feature-list) and [sample size](#train-on-new-settings) to apply to all selected blueprints. === "Date/time" Set the feature list and [training period](#train-on-new-settings) to apply to all selected blueprints. For date/time models, DataRobot recommends a feature list. Use the recommended list or select a new list from the dropdown: ![](images/wb-exp-eval-25.png) Once configuration is set, click **Train models** to start building.
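The repository workflow above can also be scripted. The sketch below uses `Project.get_blueprints()` and `Project.train()` as documented in the DataRobot Python client, but treat the exact signatures (and the `model_type` attribute) as assumptions to check against the client docs for your version; the project is passed in as a parameter so the sketch stays testable.

```python
# Hedged sketch: queue training for repository blueprints whose model
# type mentions a keyword. `project` is assumed to behave like a
# datarobot.Project: get_blueprints() lists compatible blueprints and
# train() queues a model-building job, returning its id.

def train_matching_blueprints(project, keyword, sample_pct=64):
    """Queue every blueprint whose model_type contains keyword (case-insensitive)."""
    jobs = []
    for bp in project.get_blueprints():
        if keyword.lower() in bp.model_type.lower():
            jobs.append(project.train(bp, sample_pct=sample_pct))
    return jobs
```

As in the UI, the queued models appear on the Leaderboard once training finishes.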
### Search models {: #search-models } There are three ways to find specific blueprints in the repository by filtering the repository display to show only those blueprints matching the selected criteria. - Use the search bar to return all blueprints with matching strings in the name or description: ![](images/wb-exp-eval-26.png) - Click a [badge](leaderboard-ref#tags-and-indicators){ target=_blank } to return all blueprints with that badge: ![](images/wb-exp-eval-27.png) Click again on the badge to remove it as a filter. - Use **Edit filters** to choose blueprints by model family and/or property. Available fields, and the settings for each field, are dependent on the project and/or model type. ![](images/wb-exp-eval-28.png) ## Manage experiments {: #manage-experiments } At any point after models have been built, you can manage an individual experiment from within its Use Case. Click on the three dots to the right of the experiment name to delete it. To share the experiment, use the Use Case [**Manage members**](wb-build-usecase#share) tool to share the experiment and other associated assets. ![](images/wb-exp-15.png)
wb-experiment-add
--- title: No-Code AI App reference description: Reference material for No-Code AI Apps in Workbench, including considerations and frequently asked questions (FAQs). --- # No-Code AI App reference {: #no-code-ai-app-reference } ## FAQ {: #faq } ??? faq "How do I see which model an app was built from?" In your Use Case, go to the [**Applications**](wb-apps/index#applications-tab) tab. The **Source** column lists the model each app was created from. ??? faq "Is there a way to see how many predictions were made using an app?" You can view every prediction made in an app in the **All Rows** widget or by downloading them as a CSV. ??? faq "Can I re-run an optimization in the Optimizer app after changing the widget's settings?" At this time, you cannot re-run an optimization unless you recreate the prediction row.
wb-app-ref
--- title: No-Code AI Apps description: Create and configure AI-powered applications in Workbench using a no-code interface to enable core DataRobot services. --- # No-Code AI Apps {: #no-code-ai-apps } {% include 'includes/no-code-app-intro.md' %} See the associated [considerations](app-builder/index#considerations){ target=_blank } for important additional information, and view the latest [public preview features](wb-app-pp/index) for DataRobot apps. ## Applications tab {: #applications-tab } The **Applications** tab lists any apps that you or a team member has created in a Use Case. From this tab, you can: ![](images/wb-app-6.png) | | Element | Description | |----|----------|-------------| | <div class="table-label">1</div> | Application name | Click to launch an application. | | <div class="table-label">2</div> | Source | View the model used to create the application. | | <div class="table-label">3</div> | Settings | Control the columns displayed in this tab. | | <div class="table-label">4</div> | More options | Click to remove an application from the Use Case. | ## Create an app {: #create-an-app } In Workbench, you can create applications directly from a model in your experiment, so that you and other team members can quickly begin making predictions and generating data visualizations. To create an app: 1. Follow the instructions to [set up and run an experiment](wb-experiment/index). Then, select the model you want to use to power the application. 2. Click **Model actions > Create app**. ![](images/wb-app-1.png) 3. Select an [application type](create-app#template-options){ target=_blank }: Predictor, What-if, or Optimizer. Your selection determines the initial configuration of the app. ![](images/wb-app-2.png) ??? tip "Select a new application type" To change application type once you've made a selection, click **Select Application type** at the top of the page. ![](images/wb-app-5.png) 4.
Name the app and choose a [sharing option](app-settings#permissions)&mdash;_Anyone With the Sharing Link_ automatically generates a link that can be shared with non-DataRobot users, while _Invited Users Only_ limits sharing to other DataRobot users, groups, and organizations. ![](images/wb-app-3.png) 5. When you're done, click **Build Application**. In a new tab, you'll be prompted to sign in with DataRobot and authorize read/write access. ![](images/wb-app-4.png) ## Next steps {: #next-steps } From here, you can: - [Configure](edit-apps/index){ target=_blank } and [use](use-apps/index){ target=_blank } the application. - [Share your application](current-app#share-applications){ target=_blank }.
index
--- title: Make predictions description: After you create an experiment and train models, you can upload scoring data, make predictions, and download the results. --- # Make predictions {: #make-predictions } After you create an [experiment](wb-experiment/index) and train models, you can make predictions to validate those models. To make predictions with a model in a Workbench experiment: 1. Select the model from the **Models** list and then click **Model actions > Make predictions**. ![](images/wb-model-action-pred.png) 2. On the **Make Predictions** page, upload a **Prediction source**: * Drag and drop a file into the **Prediction source** group box. * Click **Choose file** to upload a **Local file** or a dataset stored in the **AI Catalog**. ![](images/wb-pred-source.png) !!! note When you upload a prediction dataset, it is automatically stored in the **AI Catalog** once the upload is complete. Be sure not to navigate away from the page during the upload, or the dataset will not be stored in the catalog. If the dataset is still processing after the upload, that means DataRobot is [running EDA](eda-explained){ target=_blank } on the dataset before it becomes available for use. 3. In the **Prediction options** section, to write input features (columns) to the prediction results file alongside the predictions, enable **Include input features**: * To filter for and include specific features from the dataset, select **Specific features** and enter one or more feature names to add them. * To include every feature from the dataset, select **All features**. ![](images/wb-pred-options.png) !!! note You can only append a feature (column) present in the original dataset, although the feature does not need to have been part of the feature list used to build the model. Derived features are not included. 4. After you configure the **Prediction options**, click **Compute and download predictions** to start scoring the data. 5.
When scoring completes, under **Your recent predictions**: * If scoring is successful, click the download icon (![](images/icon-download-pred.png)) to download a predictions file. ![](images/wb-pred-download.png) * If the prediction job fails, click **View logs** to view and optionally copy the run details. !!! note Predictions are available for download for the next 48 hours.
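The same upload, score, and retrieve flow is available from the DataRobot Python client. The sketch below uses `upload_dataset`, `request_predictions`, and `get_result_when_complete` as documented in the client; verify the signatures for your client version. The project and model objects are parameters so the client surface stays an assumption.

```python
# Hedged sketch: score a file against a Leaderboard model.
# `project` and `model` are assumed to behave like datarobot.Project
# and datarobot.Model respectively.

def score_file(project, model, path):
    """Upload a scoring file, request predictions, and wait for results."""
    dataset = project.upload_dataset(path)               # registered like the UI upload
    predict_job = model.request_predictions(dataset.id)  # queues the prediction job
    return predict_job.get_result_when_complete()        # blocks until scoring finishes
```

The returned results can then be saved locally, mirroring the **Compute and download predictions** flow in the UI.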
wb-predict
--- title: Predictions reference description: Answers questions and provides tips for working with Predictions in DataRobot's Workbench. --- # Predictions reference {: #predictions-reference } ??? faq "What types of experiments are supported for Workbench predictions?" Currently, Workbench supports predictions for binary classification and regression projects; it doesn't support time series or Feature Discovery projects. Because development is ongoing, see the [capability matrix](wb-capability-matrix) for a full list of supported capabilities. ??? faq "What prediction options are supported?" Currently, Workbench only supports the **Include input features** option. For additional prediction options, you can build models in Workbench, switch to DataRobot Classic, [deploy a model](deploy-model){ target=_blank }, and then [make batch predictions](batch-pred#make-predictions-with-a-deployment){ target=_blank }. ??? faq "How can I make predictions in DataRobot Classic?" You can make predictions on Workbench models in DataRobot Classic, but only [from the Leaderboard](predict){ target=_blank } (via UI or API) or after [deploying the model](deploy-model){ target=_blank }. If you deploy the model, you can [make batch predictions](batch-pred#make-predictions-with-a-deployment){ target=_blank }. You can easily switch back and forth between Workbench and DataRobot Classic. Access Workbench from the top navigation bar in the application. ![](images/wb-uc-faq-1.png) To return to DataRobot Classic to deploy your model and make predictions, in the top navigation bar, click **DataRobot Classic**, and then, in the dropdown list, click **Leave Workbench**. ![](images/wb-uc-faq-2.png) From DataRobot Classic, you can [deploy models](deploy-model){ target=_blank } and [make predictions](predictions/index){ target=_blank }.
You can see the Use Case name in the [Projects dropdown](manage-projects#projects-dropdown){ target=_blank } to identify your Workbench Use Case in the DataRobot Classic UI. ![](images/wb-pred-faq.png) ??? faq "Can I make predictions on training data?" You shouldn't make predictions on training data. It is possible to upload the training dataset and make predictions, but the results won’t always match those from the Leaderboard in DataRobot Classic. If you want to make predictions on training data, you can [make Leaderboard predictions](predict){ target=_blank } back in DataRobot Classic.
wb-predict-ref
--- title: Predictions description: How to make predictions for a Workbench model. --- # Predictions {: #predictions } The following sections help to understand Workbench predictions: Topic | Describes... ---------|----------------- [Make predictions](wb-predict) | After you create an experiment and train models, you can upload scoring data, make predictions, and download the results. [Reference and FAQ](wb-predict-ref) | Questions and tips for working with predictions in Workbench.
index
--- title: Use Cases description: Describes creating a Use Case in DataRobot's Workbench and navigating the directory and assets. --- # Use Cases {: #use-cases } This section covers the following topics: Topic | Describes... ---------|----------------- [Use Cases](wb-build-usecase) | Create, share, and manage Use Cases. [Reference and FAQ](wb-usecase-ref) | Questions and tips for working with Use Cases.
index
--- title: Use Case reference description: Answers questions and provides tips for working with Use Cases in DataRobot's Workbench. --- # Use Case reference {: #use-case-reference } ??? faq "How do you move between Workbench and DataRobot Classic?" You can easily switch back and forth between Workbench and DataRobot Classic. Access Workbench from the top navigation bar in the application. ![](images/wb-uc-faq-1.png) To return to DataRobot Classic, choose a specific location from the dropdown in the top navigation bar. ![](images/wb-uc-faq-2.png) ??? faq "Are Use Cases available in DataRobot Classic?" When you create experiments in Workbench, you can access them in DataRobot Classic via the [project management center](manage-projects#manage-projects-control-center){ target=_blank }. When you select an experiment from Classic that was created in Workbench, you will see the Use Case name above the dataset name, indicating it is available in Workbench. ![](images/wb-uc-faq-3.png) ??? faq "Is a Use Case just a folder?" No, it's more. Use Cases organize your work, provide a permission control mechanism, and serve as a collaborative space where teams can comment on and review each other’s work. Use Cases help ensure that your ML projects deliver actual business value. ??? faq "Are DataRobot Classic projects available in Workbench?" Projects originally created in Classic are not currently available in Workbench. Once all the advanced features of DataRobot Classic are available, these projects will be eligible to migrate into Workbench. ??? faq "What types of experiments are supported in Workbench?" Currently, Workbench supports binary classification and regression projects. Because development is ongoing, see the release notes for a full list of supported capabilities. ??? faq "What does Workbench offer that is not available in DataRobot Classic?" Workbench offers a streamlined workflow that includes native data prep, and Use Cases to organize, compare, and share projects.
Coming soon is the ability to compare models from different experiments (but within the same Use Case) from a single page. ??? faq "Is Workbench available for code-first users?" Just like other DataRobot offerings, the Workbench pipeline is fully supported with the Python API. It can be authored with any Python IDE or notebook, but provides an even more seamless experience with DataRobot’s own notebook solution. Notebook integration provides ways to prepare data and build models with a code-first approach, and these capabilities will provide the same functionality as is available in the GUI.
wb-usecase-ref
--- title: Use Cases description: Learn how to build a Use Case in DataRobot's Workbench and investigate its assets. --- # Use Cases {: #use-cases } Use Cases are folder-like containers inside of DataRobot Workbench that allow you to group everything related to solving a specific business problem&mdash;datasets, models, experiments, No-Code AI Apps, and notebooks&mdash;inside of a single, manageable entity. You can share whole Use Cases as well as the individual assets they contain. The overarching benefit of a Use Case is that it enables experiment-based, iterative workflows. By housing all key insights in a single location, data scientists have improved navigation and a cleaner interface for experiment creation, and model training, review, and evaluation. Specifically, Use Cases allow you to: * *Organize your work*&mdash;group related datasets, experiments, notebooks, etc. by the problem they solve. * *Find everything easily*&mdash;removing the need to search through hundreds of unrelated projects or scrape emails for hyperlinks. * *Share in collections*&mdash;you can share the full Use Cases, containing all the assets your team needs to participate. * *Manage access*&mdash;add or remove Use Case members to control their access. * *Monitor changes*&mdash;receive notifications when a team member adds, removes, or modifies any asset in your Use Case. See the associated [FAQ](wb-usecase-ref) for important additional information. ## Overview {: #overview } When you launch Workbench, you are brought to the Use Case directory. If it is your first visit, the page will be empty. After you start your first Use Case, the directory lists all Use Cases either owned by or shared with you. Use Case contents are provided as tiles and in a table: ![](images/wb-uc-2.png) The _tiles_ display the six most recently modified Use Cases. Each tile provides an at-a-glance count of the Use Case's assets. ![](images/wb-uc-3.png) The _table_ displays all Use Cases in your directory.
Initial pagination defaults to five Use Cases, but you can change the display from the dropdown on the right: ![](images/wb-uc-4.png) For each Use Case, the table displays: * Assets: The number of associated datasets, experiments, apps, and notebooks. * Metadata: The creator, last modification, and membership. Click the arrows to the right of a table column to sort the table by those entries. Click the three dots to the right to delete the Use Case. ![](images/wb-uc-5.png) ## Create {: #create } To create a new Use Case: 1. From the Workbench directory, click **Create Use Case** in the upper right. ![](images/wb-uc-0.png) 2. Provide a name for the Use Case and click the check mark to accept. You can change this name at any time by opening the Use Case and clicking on the existing name. ![](images/wb-uc-6.png) 3. Click **Add new**: ![](images/wb-uc-7.png) From here you can begin adding assets or create a new Use Case. * [Add data](wb-dataprep/index) * [Add an experiment](wb-experiment/index) * [Add or upload a notebook](wb-notebook/index) ## Modify {: #modify } To work with an existing Use Case: 1. From the Workbench directory, click on any tile or table entry. Both methods resolve to the same location&mdash;inside the selected Use Case. 2. Review the assets associated with the Use Case: ![](images/wb-uc-8.png) | &nbsp; | Element | Description | |---|---|---| | <div class="table-label">1</div> | Asset summary | Provides a total count for each asset type associated with the Use Case. | <div class="table-label">2</div> | Display controls | Sets the "last modified" criteria for the list-format asset display. | <div class="table-label">3</div> | Asset metadata | Reports the asset type, last modification date, and Use Case member who made the modification. | <div class="table-label">4</div> | Asset control | Provides options for working with the asset. 
Options are dependent on the asset type: <ul><li>Experiment: Delete from Use Case.</li><li>Dataset: Explore (preview data), wrangle (if applicable), start modeling (create a new experiment), remove from Use Case (remains in Data Registry).</li></ul> 2. Click **Add new** to begin adding assets or create a new Use Case. * [Add data](wb-dataprep/index) * [Add an experiment](wb-experiment/index) * [Add or upload a notebook](wb-notebook/index) ## Manage {: #manage } Managing a Use Case includes: ![](images/wb-uc-9.png) | &nbsp; | Element | Description | ------- | ---- | ------------ | <div class="table-label">1</div> | Rename the Use Case | Click on the existing title and enter the new name. The change is immediately reflected on the page and in the Use Case directory. It also changes for all members. | <div class="table-label">2</div> | List only specific assets | Click on the asset tab to filter the contents of the table below. Table content is dependent on asset type. | <div class="table-label">3</div> | Manage team members | View the teammates with access, and their roles. Click [**Manage members**](#share) to share a Use Case with other team members. | <div class="table-label">4</div> | Manage assets | Provides options for working with the asset. Options are dependent on the asset type: <ul><li>Experiment: Delete from Use Case.</li><li>Dataset: Explore (preview data), wrangle (if applicable), start modeling (create a new experiment), remove from Use Case (remains in Data Registry).</li></ul> ## Share {: #share } With Workbench, when you share a Use Case, the recipient gets access to all the associated assets. To share a Use Case: 1. From the **Use Case info** pane on the right, click **Manage members**. ![](images/wb-uc-10.png) 2. A sharing modal opens. Enter one or more team member email address(es), click the name in the associated dropdown, and set the desired [permissions level (role)](roles-permissions#project-roles){ target=_blank }. ![](images/wb-uc-11.png) 3. Click **Share**.
## Manage members {: #manage-members } As a Use Case **Owner**, you can edit a team member's [role (permissions level)](roles-permissions#project-roles){ target=_blank } or remove them from the Use Case: 1. From the **Use Case info** pane on the right, click **Manage members**. ![](images/wb-uc-10.png) 2. In the **Share** dialog box, in the **Role** column, you can: * Update a user's permissions level: ![](images/wb-uc-12.png) * Revoke a user's permissions entirely by removing them from the Use Case: ![](images/wb-uc-13.png) 3. Click **Close** to return to the Use Case. ## Next steps {: #next-steps } From here, you can: * [Add data](wb-dataprep/index) * [Add an experiment](wb-experiment/index) * [Add or upload a notebook](wb-notebook/index) * [Create another Use Case](#create).
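For code-first workflows, recent versions of the DataRobot Python client also expose Use Cases. The helper below is a sketch only: `create()` and `add(entity=...)` mirror the client's `UseCase` surface, but both the names and the signatures should be treated as assumptions and confirmed against the client documentation for your version. The class is passed in as a parameter so nothing about the real client is hard-coded.

```python
# Hedged sketch: create a Use Case and attach existing assets to it.
# `use_case_cls` is assumed to behave like datarobot.UseCase:
# create() returns the new Use Case and add() links an asset to it.

def create_use_case_with_assets(use_case_cls, name, assets):
    """Create a Use Case and add each asset (e.g., a dataset or project)."""
    use_case = use_case_cls.create(name=name)
    for asset in assets:
        use_case.add(entity=asset)  # entity parameter name is an assumption
    return use_case
```

Sharing and member management remain UI operations as described above.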
wb-build-usecase
--- title: Data registry description: The Data Registry lists all static and snapshot datasets you currently have access to in the AI Catalog, including those uploaded from local files and data connections in Workbench. --- # Data registry {: #data-registry } When you open the **Add data** modal, by default, DataRobot displays the **Data Registry**, a central catalog for your datasets that lists all static and snapshot datasets you currently have access to in the AI Catalog, including those uploaded from [local files](wb-local-file) and [data connections](wb-connect) in Workbench. This method of adding data is a good approach if your dataset is already prepared for modeling. When you add a dataset from the registry, you're creating a link from the Use Case to the source of that dataset; datasets can therefore have a one-to-many relationship with Use Cases. When a dataset is removed, you're only removing the link, so any experiments created from the dataset will not be affected. ![](images/wb-datareg-1.png) See the associated [considerations](wb-data-ref/index#add-data) for important additional information. ## Add a dataset {: #add-a-dataset } You can add any static or snapshot datasets that have been previously registered in DataRobot. To add a dataset: 1. In the **Data Registry**, select the box to the left of the dataset you want to view. ![](images/wb-datareg-6.png) 2. (Optional) [Preview the dataset](#preview-a-dataset) to determine if the dataset is appropriate for the objective of your Use Case by clicking **Preview**. 3. Click **Add to Use Case** in the upper-right corner. ![](images/wb-datareg-7.png) Workbench opens to the **Datasets** tab of your Use Case. ## Preview a dataset {: #preview-a-dataset } Viewing a snapshot preview allows you to confirm that a dataset is appropriate for your Use Case before adding it. To preview a dataset: 1.
In the **Data Registry**, select the box to the left of the dataset you want to view and click **Preview** in the upper-right corner. ![](images/wb-datareg-2.png) 2. Analyze the dataset using the **Features** and **Data preview** buttons: === "Features" Lists the feature name, type, number of unique values, and number of missing values for each feature in the dataset. ![](images/wb-datareg-4.png) === "Data preview" Displays a random sample, up to 1MB, of the raw data table. ![](images/wb-datareg-5.png) 3. Determine if the dataset suits your Use Case, and then either: - Add the dataset to your Use Case by clicking **Add to Use Case**. - Go back to the Data Registry by clicking **Data Registry** in the breadcrumbs below the dataset name. ## Next steps {: #next-steps } From here, you can: - [Add more data.](wb-add-data/index) - [View Exploratory Data Insights for the dataset.](wb-data-tab) - [Use the dataset to set up an experiment and start modeling.](wb-experiment/index) ## Read more {: #read-more} To learn more about the topics discussed on this page, see: - [Asset states in DataRobot.](catalog-asset#asset-states){ target=_blank }
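Registry datasets can also be located from code. The sketch below assumes the DataRobot Python client's `Dataset.list()` surface, where each entry exposes `name` and `id` attributes; the class is passed as a parameter so the exact client API remains an assumption to verify for your version.

```python
# Hedged sketch: find Data Registry datasets by name substring.
# `dataset_cls` is assumed to behave like datarobot.Dataset, whose
# list() classmethod returns the registered datasets you can access.

def find_registry_datasets(dataset_cls, substring):
    """Return (name, id) pairs for datasets whose name contains substring."""
    return [
        (d.name, d.id)
        for d in dataset_cls.list()
        if substring.lower() in d.name.lower()
    ]
```

A matched dataset's `id` can then be used wherever the client expects a dataset identifier.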
wb-data-registry
---
title: Data connections
description: Connect to an external data source to seamlessly browse, preview, and profile data, as well as initiate scalable data preparation for machine learning with push-down.
---

# Data connections {: #data-connections }

In Workbench, you can easily configure and reuse secure connections to predefined data sources. Not only does this allow you to interactively browse, preview, and profile your data, it also gives you access to DataRobot's integrated [data preparation capabilities](wb-wrangle-data/index).

See the associated [considerations](wb-data-ref/index#add-data) for important additional information.

??? note "Deleting data connections"

    You cannot delete data connections from within Workbench; to remove existing data connections, go to [**User Settings > Data Connections**](data-conn#delete-a-connection){ target=_blank } in DataRobot Classic.

    ![](images/jdbc-dataconn.png)

## Supported databases {: #supported-databases }

Workbench currently supports the following databases:

Database | Notes
---------- | ------------------
Snowflake | See the [documentation](dc-snowflake){ target=_blank } for required parameters and additional information.
BigQuery<br>(public preview) | See the [documentation](dc-bigquery){ target=_blank } for required parameters and additional information.

!!! info "Public preview"

    Support for BigQuery in Workbench is off by default. Contact your DataRobot representative or administrator for information on enabling the feature.

    <b>Feature flag:</b> Enable Native BigQuery Driver

## Connect to a data source {: #connect-to-a-data-source }

Creating a data connection lets you explore external source data and then add it to your Use Case. To create a data connection:

1. In the **Add data** modal, click **Connect**.

    ![](images/wb-connect-1.png)

2. Select the data source (Snowflake in this example).

    ![](images/wb-connect-2.png)

Now, you can [configure the data connection](#configure-the-connection).
## Configure the connection {: #configure-the-connection } When configuring your data connection, the required parameters are based on the selected data source as well as the authentication method. This example shows how to configure Snowflake with OAuth using new credentials. To configure the data connection: 1. On the **Configuration** page, select a configuration method&mdash;either **Parameters** or **JDBC URL**. 2. Enter the required parameters for the selected configuration method. === "Parameters" ![](images/wb-connect-7.png) === "JDBC URL" ![](images/wb-connect-6.png) 3. Click **New Credentials** and select an authentication method&mdash;either **Basic** or **OAuth**. === "Basic" ![](images/wb-connect-8.png) === "OAuth" ![](images/wb-connect-9.png) ??? note "Saved credentials" If you previously [saved credentials for the selected data source](stored-creds#credentials-management){ target=_blank }, click **Saved credentials** and select the appropriate credentials from the dropdown. ![](images/wb-connect-10.png) 4. Click **Save** in the upper right corner. ![](images/wb-connect-3.png) If you selected OAuth as your authentication method, you will be prompted to sign in before you can [select a dataset](#select-a-dataset). See the [DataRobot Classic documentation](dc-snowflake){ target=_blank } for more information about supported authentication methods and required parameters. ## Select a dataset {: #select-a-dataset } Once you've set up a data connection, you can add datasets by browsing the [database schemas](https://www.ibm.com/topics/database-schema){ target=_blank } and tables you have access to. To select a dataset: 1. Select the schema associated with the table you want to add. ![](images/wb-connect-4.png) 2. Select the box to the left of the appropriate table. 
![](images/wb-connect-5.png) With a dataset selected, you can: | | Description | |----------|-------------| | <div class="table-label">1</div> | Click **Wrangle** to prepare the dataset before adding it to your Use Case. | | <div class="table-label">2</div> | Click **Preview** to open a snapshot preview to help determine if the dataset is relevant to your Use Case and/or if it needs to be wrangled. | | <div class="table-label">3</div> | Click **Add to Use Case** to add it to your Use Case, making it available to you and other team members on the Datasets tab. | ??? tip "Large datasets" If you want to decrease the size of the dataset before adding it to your Use Case, click Wrangle. When you publish a recipe, you can [configure automatic downsampling](wb-pub-recipe#configure-downsampling) to control the number of rows when Snowflake materializes the output dataset. ## Next steps {: #next-steps } From here, you can: - [Perform data wrangling before adding the dataset to your Use Case.](wb-wrangle-data/wb-add-operation) - [Add more data.](wb-add-data/index) - [View Exploratory Data Insights for the dataset.](wb-data-tab) - [Use the dataset to set up an experiment and start modeling.](wb-experiment/index) ## Read more {: #read-more } To learn more about the topics discussed on this page, see: - [Snowflake configuration details and required parameters.](dc-snowflake){ target=_blank } - [BigQuery configuration details and required parameters.](dc-bigquery){ target=_blank } - [DataRobot's dataset requirements.](file-types){ target=_blank } - [Saved data connection credentials.](stored-creds){ target=_blank } - [Delete data connections.](data-conn#delete-a-connection){ target=_blank }
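As a point of reference when using the **JDBC URL** configuration method, a Snowflake JDBC URL generally takes the form below. Every bracketed value is a placeholder for your own account details; see the Snowflake documentation for the full parameter list:

```
jdbc:snowflake://<account_identifier>.snowflakecomputing.com/?warehouse=<warehouse>&db=<database>&schema=<schema>&role=<role>
```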
wb-connect
--- title: Add data description: In Workbench, you can add datasets from a local file, data connection, or the Data Registry. --- # Add data {: #add-data } From anywhere in a Use Case, you can add data by clicking **Add new > Add datasets**, opening the **Add data** modal. By adding data before setting up an experiment, you have the chance to [explore the dataset](wb-data-tab#view-exploratory-data-insights) and decide if it's ready for modeling. ![](images/wb-add-data-1.png) This section covers the following topics: Topic | Describes... ---------- | ----------- [Local file](wb-local-file) | Upload datasets stored locally on your computer. [Data connection](wb-connect) | Connect to and add data from an external data source. [Data Registry](wb-data-registry) | Add any static or snapshot datasets you currently have access to in the AI Catalog.
index
---
title: Local files
description: Add locally-stored datasets to your Use Case in Workbench.
---

# Upload local files {: #upload-local-files }

By uploading a local file via the **Add data** modal, you are both adding the dataset to your Use Case and [registering it in the Data Registry](wb-data-registry). This method of adding data is a good approach if your dataset is ready for modeling.

Before uploading a file, review DataRobot's [dataset requirements](file-types){ target=_blank } for accepted file formats and size guidelines.

See the associated [considerations](wb-data-ref/index#add-data) for important additional information.

To upload a local file:

1. In the **Add data** modal, click **Upload**.

    ![](images/wb-local-1.png)

2. Locate and select your dataset in the file explorer. Then, click **Open**.

    !!! note "Supported file types"

        Workbench supports the following file types for upload: .csv, .tsv, .dsv, .xls, .xlsx, .sas7bdat, .geojson, .gz, .bz2, .tar, .tgz, .zip.

3. DataRobot opens the **Datasets** tab, where you can monitor the progress of the dataset as it's registering in the Data Registry.

    === "Data registering"

        ![](images/wb-local-2.png)

    === "Data registered"

        ![](images/wb-local-3.png)

    When registration is complete, DataRobot displays the source, row count, feature count, and size.

## Next steps {: #next-steps }

From here, you can:

- [Add more data.](wb-add-data/index)
- [View Exploratory Data Insights for the dataset.](wb-data-tab)
- [Use the dataset to set up an experiment and start modeling.](wb-experiment/index)

## Read more {: #read-more }

To learn more about the topics discussed on this page, see:

- [DataRobot's dataset requirements.](file-types){ target=_blank }
- [Upload local files in DataRobot Classic.](import-to-dr#drag-and-drop){ target=_blank }
wb-local-file
--- title: Smart downsampling description: Use smart downsampling to reduce the size of your output dataset when publishing a wrangling recipe. section_name: Workbench maturity: public-preview --- # Publish recipes with smart downsampling {: #publish-recipes-with-smart-downsampling } !!! info "Availability information" Smart downsampling in Workbench is off by default. Contact your DataRobot representative or administrator for information on enabling the feature. <b>Feature flag:</b> Enables Smart Downsampling in Wrangle Publishing Settings You can use smart downsampling to reduce the size of your output dataset when publishing a wrangling recipe. Smart downsampling is a data science technique to reduce the time it takes to fit a model without sacrificing accuracy; it is particularly useful for imbalanced data. This downsampling technique accounts for class imbalance by stratifying the sample by class. In most cases, the entire minority class is preserved, and sampling only applies to the majority class. Because accuracy is typically more important on the minority class, this technique greatly reduces the size of the training dataset (reducing modeling time and cost), while preserving model accuracy. To apply smart downsampling to a wrangling recipe: 1. [Wrangle a dataset](wb-add-operation), and when you've finished adding operations, click **Publish**. ![](images/wb-publish-5.png) 2. In the **Publish Settings** window, enable the **Automatic downsampling** toggle and click **Smart**. ![](images/wb-smart-down-1.png) 3. Select a **Target** feature&mdash;a binary classification or zero-inflated feature. If the dataset does not contain either feature type, the option to apply smart downsampling is unavailable. ![](images/wb-smart-down-3.png) 4. (Optional) Enter a name for the **Weights feature**. This column, which contains downsampling weights, is computed and added to your output dataset as a result of smart downsampling. 5. 
Enter the desired **Maximum number of rows** _or_ **Estimated Size (MB)**. These values are linked, so if you change the value in one field, the other field updates automatically. See [DataRobot's dataset requirements](file-types) to ensure the output dataset is below the file size limit. ![](images/wb-smart-down-2.png) 6. When you're done [configuring publishing settings](wb-pub-recipe), click **Publish** in the upper-right corner. !!! note Any rows with `null` as a value in the target column will be filtered out after smart downsampling.
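Conceptually, smart downsampling keeps the entire minority class and randomly samples the majority class, then records a weight so that models can compensate for the changed class proportions. The idea can be sketched in Snowflake-style SQL roughly as follows; this is illustrative only (the `source_table` and `target` names and the 10% rate are assumptions, not the query DataRobot generates):

```sql
-- Keep every minority-class row with weight 1.
SELECT *, 1.0 AS sample_weight
FROM source_table
WHERE target = 1
UNION ALL
-- Randomly keep ~10% of the majority class; a weight of 10 lets
-- modeling restore the original class proportions.
SELECT *, 10.0 AS sample_weight
FROM source_table
WHERE target = 0
  AND UNIFORM(0::float, 1::float, RANDOM()) < 0.1;
```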
wb-downsample
--- title: Public preview features description: Read preliminary documentation for data-related features currently in the DataRobot public preview pipeline. section_name: Workbench maturity: public-preview --- # Data preparation public preview features {: #data-preparation-public-preview-features } {% include 'includes/pub-preview-notice-include.md' %} ## Available public preview documentation {: #available-public-preview-documentation } Public preview for... | Describes... ----- | ------ [Publish recipes with smart downsampling](wb-downsample) | Use smart downsampling to reduce the size of your output dataset when publishing a wrangling recipe.
index
---
title: Reference
description: Reference material for data workflows in Workbench, including considerations and frequently asked questions (FAQs).
---

# Data preparation reference {: #data-preparation-reference }

## FAQ {: #faq }

### Add data {: #add-data }

??? faq "What is the Data Registry and why does it show my AI Catalog datasets?"

    The Data Registry is a catalog for your assets in Workbench that displays all static and snapshot datasets you have access to in the AI Catalog. Adding data via the Data Registry creates a _link_ from the source of the data (the AI Catalog) to your Use Case.

??? faq "Can one dataset be added to multiple Use Cases?"

    Yes. When you add a dataset to a Use Case via the Data Registry, you are establishing a link between the dataset and the Use Case.

??? faq "How do I delete a dataset?"

    To remove a dataset from a Use Case, click [**More options > Remove from Use Case**](wb-data-tab). Note that this only removes the _link_ from the data source to the Use Case, meaning team members in that specific Use Case will no longer see the dataset. However, if they have access to the same dataset in a different Use Case, they can still access it. You can control access to the source data from the [AI Catalog](catalog-asset){ target=_blank } in DataRobot Classic.

??? faq "How can I browse and manage non-Snowflake data connections?"

    You must use DataRobot Classic to manage non-Snowflake data connections. Additional connections will be added to Workbench in future releases.

??? faq "How can I delete a data connection in Workbench?"

    You cannot delete data connections from within Workbench; to remove existing data connections, go to [**User Settings > Data Connections**](data-conn#delete-a-connection){ target=_blank } in DataRobot Classic.

??? faq "How can I manage saved credentials?"

    You can manage [saved credentials](stored-creds#credentials-management){ target=_blank } for your data connections in DataRobot Classic.
### Wrangle data {: #wrangle-data }

??? faq "What permissions are required to be able to push down operations to Snowflake?"

    Your account in Snowflake must have `read` access to the selected database.

??? faq "Are there situations where data is moved from source?"

    Yes, data is moved from the source:

    - **During an interactive wrangling session:** 10,000 randomly sampled rows from the original table or view in Snowflake are brought into DataRobot for preview and profiling purposes.
    - **After publishing a wrangling recipe:** When you publish a recipe, the transformations are pushed down to the source and applied to the entire input table or view in Snowflake. The resulting output is materialized in DataRobot as a snapshot dataset.

??? faq "How do the wrangling insights differ from the Exploratory Data Insights generated when registering a dataset in DataRobot?"

    The [insights generated during data wrangling](wb-add-operation#analyze-the-live-sample) are based on the live random sample of the raw dataset retrieved from your data source during an interactive wrangling session. Whenever you adjust the row count or add operations, DataRobot updates the sample and performs exploratory data analysis again.

??? faq "Why do I need to downsample my data?"

    If the size of the raw data in Snowflake does not meet [DataRobot's file size requirements](file-types){ target=_blank }, you can [configure automatic downsampling](wb-pub-recipe#configure-downsampling) to reduce the size of the output dataset.

## Considerations {: #considerations }

### Add data {: #add-data }

Consider the following when adding data:

- Data connections must be removed in DataRobot Classic.
- URL import is not supported.
- There is currently no image support in previews.

### Wrangle data {: #wrangle-data }

Consider the following when wrangling data:

- Profile cannot be customized and is limited to sample-based profiles.
- Unstructured data types are not supported.
- The live sample only supports random sampling.
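For illustration, the 10,000-row interactive sample described in the FAQ above corresponds to a random-sampling query along these lines in Snowflake; the table name is a placeholder, and the actual query DataRobot issues may differ:

```sql
-- Pull a random sample of 10,000 rows for interactive wrangling.
SELECT *
FROM source_table
SAMPLE (10000 ROWS);
```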
index
## Sample paths {: #sample-paths } See the following paths for examples of adding and working with data in Workbench: === "via a data connection" ``` mermaid flowchart TB A(Add new > Add datasets) --> B(Connect to Snowflake); B --> C(Preview a dataset); C --> E(Does it need to be prepared?); E --> | No | F(Add to Use Case); E --> | Yes| G(Wrangle); G --> H(Build and publish a recipe); H --> I(View Exploratory Data Insights); F --> I; I --> J(Is it ready for modeling?); J --> | No | G; J --> | Yes | K(Set up an experiment); ``` === "via a local file" ``` mermaid flowchart TB A(Add new > Add datasets) --> B(Upload); B --> C(Select locally-stored file); C -- Registers in the Data Registry + Adds to Use Case --> E(View Exploratory Data Insights); E --> G(Set up an experiment); ``` === "via the Data Registry" ``` mermaid flowchart TB A(Add new > Add datasets) --> B(Browse the Data Registry); B --> C(Preview a dataset); C --> D(Is it appropriate for the Use Case?); D --> | No | B; D --> | Yes | E(Add to Use Case); E --> F(View Exploratory Data Insights); F --> G(Set up an experiment); ```
wb-data-paths
---
title: Build a recipe
description: DataRobot leverages the compute environment and distributed architecture of your data source to quickly perform exploratory data analysis and apply transformations as you build your recipe.
---

# Build a recipe {: #build-a-recipe }

Building a recipe is the first step in preparing your data. When you start a wrangling session, DataRobot connects to your data source, pulls a live random sample, and performs exploratory data analysis on that sample. When you add operations to your recipe, the transformations are applied to the sample and the exploratory data insights are recalculated, allowing you to quickly iterate on and profile your data before publishing.

See the associated [considerations](wb-data-ref/index#wrangle-data) for important additional information.

!!! warning "Wrangling requirement"

    To wrangle data, you must [add a dataset using a configured data connection](wb-connect).

??? note "Operation behavior"

    When a wrangling recipe is pushed down to the connected cloud data platform, the operations are executed in that environment. To understand how operations behave, refer to the documentation for your data platform:

    - [Snowflake documentation](https://docs.snowflake.com/en/sql-reference-functions){ target=_blank }
    - [BigQuery documentation](https://cloud.google.com/bigquery/docs/reference/standard-sql/functions-and-operators){ target=_blank }

    To view which queries were executed by the cloud data platform during pushdown, open the **AI Catalog** and select the new output dataset. The queries are listed on the **Info** tab.

    ![](images/wb-view-query.png)

## Configure the live sample {: #configure-the-live-sample }

By default, DataRobot retrieves 10,000 rows for the live sample; however, you can modify this number in the wrangling settings. Note that the more rows you retrieve, the longer it takes to render the live sample.

To configure the live sample:

1.
Click **Settings** in the right panel and open **Interactive sample**.

    ![](images/wb-operation-1.png)

2. Enter the number of rows (under 10,000) you want to include in the live sample and click **Resample**. The live sample updates to display the specified number of rows.

    ![](images/wb-operation-2.png)

## Analyze the live sample {: #analyze-the-live-sample }

During data wrangling, DataRobot performs exploratory data analysis on the live sample, generating table- and column-level [summary statistics](histogram){ target=_blank } and [visualizations](histogram#histogram-chart){ target=_blank } that help you profile the dataset and recognize data quality issues as you apply operations. For more information on interacting with the live sample, see the section on [Exploratory Data Insights](wb-data-tab#view-exploratory-data-insights).

![](images/wb-operation-13.png)

??? tip "Speed up the live sample"

    To speed up the time it takes to retrieve and render the live sample, use the toggle next to **Show Insights** to hide the feature distribution charts.

??? faq "Live sample vs. Exploratory Data Insights on the Datasets tab"

    Although both pages provide similar insights, you can specify the number of rows displayed in the live sample, and it updates each time a transformation is added to your recipe.

## Add operations {: #add-operations }

A recipe is composed of operations&mdash;transformations that will be applied to the source data to prepare it for modeling. Note that operations are applied sequentially, so you may need to [reorder the operations](#reorder-operations) in your recipe to achieve the desired result.

The table below describes the wrangling operations currently available in Workbench:

Operation | Description
--------- | -----------
[Join](#join) (public preview) | Join datasets that are accessible via the same connection instance.
[Aggregate](#aggregate) (public preview) | Apply mathematical aggregations to features in your dataset.
[Compute new feature](#compute-a-new-feature) | Create a new feature using Snowflake scalar subqueries, scalar functions, or window functions.
[Filter row](#filter-row) | Filter the rows in your dataset according to specified value(s) and conditions.
[De-duplicate rows](#de-duplicate-row) | Automatically remove all duplicate rows from your dataset.
[Find and replace](#find-and-replace) | Replace specific feature values in a dataset.
[Rename features](#rename-features) | Change the name of one or more features in your dataset.
[Remove features](#remove-features) | Remove one or more features from your dataset.

To add an operation to your recipe:

1. With **Recipe** selected, click **Add Operation** in the right panel.

    ![](images/wb-operation-12.png)

2. Select and configure an operation. Then, click **Add to recipe**. The live sample updates after DataRobot retrieves a new sample from the data source and applies the operation, allowing you to review the transformation in real time.

3. Continue adding operations while analyzing their effect on the live sample; when you're done, the [recipe is ready to be published](wb-pub-recipe).

    ![](images/wb-operation-11.png)

### Join {: #join }

!!! info "Public preview"

    The Join operation is off by default. Contact your DataRobot representative or administrator for information on enabling the feature.

    <b>Feature flag:</b> Enables Additional Wrangler Operations

Use the **Join** operation to combine datasets that are accessible via the same connection instance. To join a table or dataset:

1. Click **Join** in the right panel.

    ![](images/wb-join-1.png)

2. Click **+ Select dataset** to browse and select a dataset from your connection instance.

    ![](images/wb-join-2.png)

3. Once you've opened and profiled the dataset you want to add, click **Select**.

    ![](images/wb-join-3.png)

4. Select the appropriate **Join type** from the dropdown.
- **Inner** only returns rows that have matching values in both datasets, for example, any rows with matching values in the `order_id` column. - **Left** returns all rows from the left dataset (the original), and only the rows with matching values in the right dataset (joined). ![](images/wb-join-5.png) 5. Select the **Join condition**, which defines how the two datasets are related. In this example, both the datasets are related by `order_id`. ![](images/wb-join-6.png) 6. Click **Add to recipe**. ### Aggregate {: #aggregate } !!! info "Public preview" The Aggregate operation is off by default. Contact your DataRobot representative or administrator for information on enabling the feature. <b>Feature flag:</b> Enables Additional Wrangler Operations Use the **Aggregate** operation to apply the following mathematical aggregations to the dataset (available aggregations vary by feature type): - Sum - Min - Max - Avg - Standard deviation - Count - Count distinct - Most frequent (Snowflake only) To add an aggregation: 1. Click **Aggregate** in the right panel. ![](images/wb-aggregate-1.png) 2. Under **Group by key**, select the feature(s) you want to group your aggregation(s) by. ![](images/wb-aggregate-2.png) 3. Click the field below **Feature to aggregate** and select a feature from the dropdown. Then, click the field below **Aggregate function** and choose one or more aggregations to apply to the feature. ![](images/wb-aggregate-3.png) 4. (Optional) Click **+ Add feature** to apply aggregations to additional features in this grouping. 5. Click **Add to recipe**. After adding the operation to the recipe, DataRobot renames aggregated features using the original name with the `_AggregationFunction` suffix attached. In this example, the new columns are `age_max` and `age_most_frequent`. 
![](images/wb-aggregate-4.png)

### Compute a new feature {: #compute-a-new-feature }

Use the **Compute new feature** operation to create a new output feature from existing features in your dataset. By applying domain knowledge, you can create features that do a better job of representing your business problem to the model than those in the original dataset.

To compute a new feature:

1. Click **Compute new feature** in the right panel.

    ![](images/wb-operation-10.png)

2. Enter a name for the new feature, and under **Expression**, define the feature using scalar subqueries, scalar functions, or window functions for your chosen cloud data platform:

    === "Snowflake"

        - [Scalar subqueries](https://docs.snowflake.com/en/user-guide/querying-subqueries#scalar-subqueries){ target=_blank }
        - [Scalar functions](https://docs.snowflake.com/en/sql-reference/functions){ target=_blank }
        - [Window functions](https://docs.snowflake.com/en/sql-reference/functions-analytic){ target=_blank }

    === "BigQuery"

        - [Scalar subqueries](https://cloud.google.com/bigquery/docs/reference/standard-sql/subqueries#scalar_subquery_concepts){ target=_blank }
        - [Scalar functions](https://cloud.google.com/bigquery/docs/reference/standard-sql/functions-and-operators){ target=_blank }
        - [Window functions](https://cloud.google.com/bigquery/docs/reference/standard-sql/window-function-calls){ target=_blank }

    ![](images/wb-operation-14.png)

    This example uses `REGEXP_SUBSTR` to extract the first number of the `[<age_range_start> - <age_range_end>)` range in the `age` column, and `TO_NUMBER` to convert the output from a string to a number.

3. Click **Add to recipe**.

### Filter row {: #filter-row }

Use the **Filter row** operation to filter the rows in your dataset according to specified value(s) and conditions. To filter rows:

1. Click **Filter row** in the right panel.

    ![](images/wb-operation-8.png)

2. Decide if you want to keep the rows that match the defined conditions or exclude them.

3.
Define the filter conditions by choosing the feature you want to filter, the condition type, and the value you want to filter by. DataRobot highlights the selected column.

    ![](images/wb-operation-7.png)

4. (Optional) Click **Add condition** to define additional filtering criteria.

5. Click **Add to recipe**.

### De-duplicate rows {: #de-duplicate-row }

Use the **De-duplicate rows** operation to automatically remove all rows with duplicate information from the dataset. To de-duplicate rows, click **De-duplicate rows** in the right panel. This operation is immediately added to your recipe and applied to the live sample.

![](images/wb-operation-15.png)

### Find and replace {: #find-and-replace }

Use the **Find and replace** operation to quickly replace specific feature values in a dataset. This is helpful, for example, for fixing typos in a dataset. To find and replace a feature value:

1. Click **Find and replace** in the right panel.

    ![](images/wb-operation-9.png)

2. Under **Select feature**, click the dropdown and choose the feature that contains the value you want to replace. DataRobot highlights the selected column.

    ![](images/wb-operation-3.png)

3. Under **Find**, choose the match criteria&mdash;**Exact**, **Partial**, or **Regular Expression**&mdash;and enter the feature value you want to replace. Then, under **Replace**, enter the new value.

    ![](images/wb-operation-4.png)

4. Click **Add to recipe**.

### Rename features {: #rename-features }

Use the **Rename features** operation to rename one or more features in the dataset. To rename features:

1. Click **Rename features** in the right panel.

    ![](images/wb-operation-16.png)

    ??? tip "Rename specific features from the live sample"

        Alternatively, you can click the **More options** icon next to the feature you want to rename. This opens the operation parameters in the right panel with the feature field already filled in.

        ![](images/wb-operation-21.png)

2.
Under **Feature name**, click the dropdown and choose the feature you want to rename. Then, enter the new feature name in the second field.

    ![](images/wb-operation-18.png)

3. (Optional) Click **Add feature** to rename additional features.

4. Click **Add to recipe**.

### Remove features {: #remove-features }

Use the **Remove features** operation to remove features from the dataset. To remove features:

1. Click **Remove features** in the right panel.

    ![](images/wb-operation-19.png)

    ??? tip "Remove specific features from the live sample"

        Alternatively, you can click the **More options** icon next to the feature you want to remove. This opens the operation parameters in the right panel with the feature field already filled in.

        ![](images/wb-operation-21.png)

2. Under **Feature name**, click the dropdown and either start typing the feature name or scroll through the list to select the feature(s) you want to remove. Click outside of the dropdown when you're done selecting features.

    ![](images/wb-operation-20.png)

3. Click **Add to recipe**.

## Reorder operations {: #reorder-operations }

All operations in a wrangling recipe are applied sequentially; therefore, the order in which they appear affects the results of the output dataset. To move an operation to a new location, click and hold the operation you want to move, and then drag it to a new position.

![](images/wb-op-reorder.png)

The live sample updates to reflect the new order.

## Quit wrangling {: #quit-wrangling }

At any point, you can click **Quit Wrangling** to end your wrangling session; however, any operations applied to the dataset will be removed.

![](images/wb-operation-quit.png)

## Next steps {: #next-steps }

From here, you can:

- [Publish the recipe to the data source, generating a new output dataset.](wb-pub-recipe)

## Read more {: #read-more }

To learn more about the topics discussed on this page, see:

- [Description of summary statistics and histograms in DataRobot Classic.](histogram){ target=_blank }
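As a concrete illustration of the **Compute new feature** example described above, an expression that extracts the starting number from an `age` range string such as `[30 - 40)` and casts it to a number could look like the following in Snowflake SQL (the column name and regex pattern are illustrative):

```sql
-- Extract the first run of digits from the age range string,
-- then cast the result from a string to a number.
TO_NUMBER(REGEXP_SUBSTR(age, '[0-9]+'))
```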
wb-add-operation
---
title: Wrangle data
description: Apply transformations to external source data, a Snowflake dataset for example, creating a recipe that can then be published to generate a new output dataset.
---

# Wrangle data {: #wrangle-data }

DataRobot's wrangling capabilities provide a seamless, scalable, and secure way to access and transform data for modeling. In Workbench, wrangling is a visual interface for executing data cleaning at the source, leveraging the compute environment and distributed architecture of your data source.

When you click **Wrangle**, DataRobot pulls a uniform random sample of 10,000 rows and calculates Exploratory Data Insights on that sample, all while connected to your data source. Then, you build a recipe of operations you want to apply to the entire dataset&mdash;the transformations are first applied to the live sample so you can verify they produce the expected results. When the recipe is ready to be published, it's pushed down to the data source, where it's executed to materialize an output dataset.

Why wrangle data in DataRobot?

- It's fully integrated in Workbench&mdash;find the right datasets, apply transformations, and, in real time, see the effects of those transformations on your dataset in one place.
- It's pushed down&mdash;leverage the scale of your cloud data warehouse or lake.
- It's secure&mdash;limiting data movement means faster results, better performance, and enhanced security.

This section covers the following topics:

Topic | Describes...
---------- | -----------
[Build a recipe](wb-add-operation) | Build a recipe to interactively prepare data for modeling without moving it from your data source.
[Publish a recipe](wb-pub-recipe) | Publish a recipe to push down transformations to your data source and generate an output dataset.
index
---
title: Publish a recipe
description: Publish a recipe to push down transformations to your data source and generate an output dataset.
---

# Publish a recipe {: #publish-a-recipe }

Once the recipe is built and the live sample looks ready for modeling, you can publish the recipe, pushing it down as a query to the data source. There, the query is executed by applying the recipe to the entire dataset and materializing a new output dataset. The output is sent back to DataRobot and added to the Use Case.

See the associated [considerations](wb-data-ref/index#wrangle-data) for important additional information.

To publish a recipe:

1. After you're done wrangling a dataset, click **Publish recipe**.

    ![](images/wb-publish-5.png)

2. Enter a name for the output dataset. DataRobot uses this name to register the dataset in the AI Catalog and Data Registry.

    ![](images/wb-publish-3.png)

3. (Optional) Configure [Automatic downsampling](#configure-downsampling).

    ![](images/wb-publish-4.png)

4. Click **Publish**. DataRobot sends the published recipe to Snowflake, where it is applied to the source data to create a new output dataset. In DataRobot, the output dataset is registered in the Data Registry and added to your Use Case.

## Publish to Snowflake {: #publish-to-snowflake }

!!! info "Public preview"

    In-source materialization for Snowflake is off by default. Contact your DataRobot representative or administrator for information on enabling the feature.

    <b>Feature flag:</b> Enable Snowflake In-Source Materialization in Workbench

When you publish a wrangling recipe, those operations and settings are pushed down into the Snowflake virtual warehouse, allowing you to leverage the security, compliance, and financial controls specified within the Snowflake environment. By default, the output dataset is materialized in DataRobot's Data Registry; however, you can choose to also materialize the wrangled dataset in Snowflake.

To enable Snowflake in-source materialization:

1.
In the **Publishing Settings** modal, click **Publish to Snowflake**. ![](images/wb-snow-mat-1.png) 2. Select the appropriate Snowflake **Database** and **Schema** using the dropdowns. ![](images/wb-snow-mat-2.png) 3. From here, you can: - Publish your recipe. - [Configure downsampling](#configure-downsampling). ## Configure downsampling {: #configure-downsampling } Automatic downsampling is a technique used to reduce the size of a dataset by reducing the size of the majority class using random sampling. Consider enabling automatic downsampling if the size of your source data exceeds that of [DataRobot's file size requirements](file-types){ target=_blank }. To configure downsampling: 1. Enable the **Automatic downsampling** toggle in the **Publishing Settings** modal. ![](images/wb-publish-1.png) 2. Specify the **Maximum number of rows** and **Estimated size** in megabytes. ![](images/wb-publish-2.png) ## Next steps {: #next-steps } From here, you can: - [Add more data.](wb-add-data/index) - [View Exploratory Data Insights to determine if you want to continue data wrangling.](wb-data-tab) - [Use the dataset to set up an experiment and start modeling.](wb-experiment/index) ## Read more {: #read-more} To learn more about the topics discussed on this page, see: - [Snowflake documentation on pushdown](https://docs.snowflake.com/en/developer-guide/pushdown-optimization){ target=_blank } - [DataRobot file size requirements](file-types){ target=_blank }
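Conceptually, majority-class downsampling keeps every minority-class row and randomly samples the majority class until the row budget is met. The sketch below illustrates that idea in pandas; `downsample_majority` is a hypothetical helper for illustration only, not DataRobot's implementation:

```python
import pandas as pd

def downsample_majority(df, target_col, max_rows, random_state=0):
    """Illustrative majority-class downsampling (not DataRobot's implementation).

    Keeps all minority-class rows and randomly samples the majority class
    so that the result contains at most max_rows rows.
    """
    counts = df[target_col].value_counts()
    majority = counts.idxmax()
    minority_df = df[df[target_col] != majority]
    majority_df = df[df[target_col] == majority]
    # Spend whatever row budget remains after keeping the minority class.
    budget = max(max_rows - len(minority_df), 0)
    sampled = majority_df.sample(n=min(budget, len(majority_df)),
                                 random_state=random_state)
    return pd.concat([minority_df, sampled]).reset_index(drop=True)
```

For example, with 90 majority rows, 10 minority rows, and a 50-row budget, all 10 minority rows are kept and 40 majority rows are sampled at random.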
wb-pub-recipe
--- title: Add notebooks description: Learn how to create new DataRobot notebooks, import existing notebooks, and export notebooks as .ipynb files. section_name: Notebooks maturity: public-preview --- # Add notebooks {: #add-notebooks } {% include 'includes/notebooks/create-nb.md' %}
wb-create-nb
--- title: Notebook versioning description: Learn how to maintain versions of DataRobot Notebooks. section_name: Notebooks maturity: public-preview --- # Notebook versioning {: #notebook-versioning } {% include 'includes/notebooks/revise-nb.md' %}
wb-revise-nb
--- title: Notebook settings description: Learn about the options available for DataRobot Notebook settings. section_name: Notebooks maturity: public-preview --- # Notebook settings {: #notebook-settings } {% include 'includes/notebooks/settings-nb.md' %}
wb-settings-nb
--- title: Manage notebooks description: Learn how to create, configure, and manage DataRobot Notebooks. section_name: Notebooks maturity: public-preview --- # Manage notebooks {: #manage-notebooks } {% include 'includes/notebooks/manage-index.md' %}
index
--- title: Code intelligence description: Describes the code intelligence capabilities available for code cells in DataRobot Notebooks. section_name: Notebooks --- # Code intelligence {: #code-intelligence} {% include 'includes/notebooks/code-int.md' %}
wb-code-int
--- title: Cell actions description: Describes the various actions available to control notebook cells. section_name: Notebooks --- # Cell actions {: #cell-actions } {% include 'includes/notebooks/action-nb.md' %}
wb-action-nb
--- title: Notebook terminals description: Describes the terminal integration available for DataRobot Notebooks. section_name: Notebooks --- # Notebook terminals {: #notebook-terminals } {% include 'includes/notebooks/terminal-nb.md' %}
wb-terminal-nb
--- title: Notebook coding experience description: Learn about the coding experience in DataRobot Notebooks. section_name: Notebooks --- # Notebook coding experience {: #notebook-coding-features } {% include 'includes/notebooks/code-index.md' %}
index
--- title: Create and execute cells description: Describes how to create and execute cells in DataRobot Notebooks. section_name: Notebooks --- # Create and execute cells {: #create-and-execute-cells} {% include 'includes/notebooks/cell-nb.md' %}
wb-cell-nb
--- title: Azure OpenAI Service integration description: Use Azure's OpenAI assistant to generate code in DataRobot Notebooks. section_name: Notebooks maturity: public-preview --- # Azure OpenAI Service integration {: #azure-openai-service-integration } {% include 'includes/notebooks/openai-nb.md' %}
wb-openai-nb
--- title: Environment management description: Describes the environment management capabilities of the DataRobot Notebook platform. section_name: Notebooks --- # Environment management {: #environment-management } {% include 'includes/notebooks/env-nb.md' %}
wb-env-nb
---
title: New app experience
description: DataRobot introduces a new, streamlined application experience in Workbench.
section_name: Workbench
maturity: public-preview
---

# New app experience {: #new-app-experience }

!!! info "Availability information"

    The new application interface with model insights is off by default. Contact your DataRobot representative or administrator for information on enabling the feature.

    <b>Required feature flag:</b> Enable New No-Code AI Apps Edit Mode

    <b>Recommended feature flag:</b> [Enable Prefill NCA Templates with Training Data](app-prefill)

Now available for public preview, DataRobot introduces a new, streamlined application experience in Workbench. With this release, the following improvements have been added:

- Applications created from an experiment in Workbench no longer open outside of Workbench in the application builder.
- Applications have a new, simplified interface to make the experience more intuitive.
- You can access model insights, including Feature Impact and Feature Effects, from all new Workbench apps.

## Create applications {: #create-applications }

With this release, the application creation workflow has also been simplified. You still [create an application as you normally would](wb-apps/index#create-an-app) from a Workbench model; however, you are no longer prompted to choose a template, enter a name, and select sharing permissions. Instead, DataRobot immediately begins building the application. Once the build completes, you authorize read/write access and sign in, and the application opens in [**Edit** mode](#edit-mode).

## Present mode {: #present-mode }

Present mode displays the end-user version of the application. Anonymous users (accessing the app via the shareable link) and those with `Consumer` access only have access to this mode.

![](images/wb-app-present.png)

## Edit mode {: #edit-mode }

In **Edit** mode, you can configure, customize, and even use your application.
See the image and table below for a brief description of the new interface.

![](images/wb-app-edit-1.png)

&nbsp; | Description
----------- | ---------------------
![](images/icon-wb-1.png) | Organizes the application using [folders and sections](#application-folders).
![](images/icon-wb-2.png) | Displays the selected folder and its sections (indicated by the open folder on the left).
![](images/icon-wb-3.png) | **Share** provides a shareable link that grants recipients access to the application. <br><br>**Present** opens the end-user version of the application. If you are already presenting, DataRobot displays an **Edit** button.

### Change themes {: #change-themes }

In the **Themes** tab, you can choose a light or dark theme for your application.

![](images/wb-app-theme.png)

### Upload a custom logo {: #upload-a-custom-logo }

To add a custom logo to your application, click **Upload Logo** and select the image you'd like to use.

![](images/wb-app-logo.png)

### Add sections {: #add-sections }

Although you can't remove default sections from your application, you can add a new section to any of the folders. These custom sections include an editable text field and a heading that, when edited, also updates the section name in the left panel.

![](images/wb-app-section.png)

To remove a custom section, click the trash icon.

![](images/wb-app-section-1.png)

### Reorder sections {: #reorder-sections }

You can change the order in which sections appear within a folder by dragging them to a new position in the left panel.

![](images/wb-app-reorder.gif)

### Customize visualizations {: #customize-visualizations }

If a section contains a visualization, you can customize its appearance and behavior. To customize a visualization, hover over the chart and click the pencil icon.

![](images/wb-app-viz-2.png)

In the example below, you can change the chart colors, number of bins, sorting order, and partition source. However, the available customization options vary by visualization.
![](images/wb-app-viz-1.png)

## Application folders {: #application-folders }

By default, DataRobot organizes your application by grouping its content in folders, and within each folder, information, insights, and actions are divided into sections. Default folders and sections cannot be removed; however, you can [add new sections](#add-sections) to a folder. Each application contains the following folders:

Folder | Description
----------------- | ---------------------
[Overview](#overview) | Displays general summary information.
[Insights](#insights) | Displays various model insights.
[Predictions](#predictions) | Allows you to make and view predictions.
[Prediction Details](#prediction-details) | Displays individual prediction results.

### Overview {: #overview }

The **Overview** folder contains general summary information for your application.

![](images/wb-app-edit-2.png)

Section | Type | Description
----------------- | -------- | ---------------------
Use Case summary | Text | Displays summary information for the application's Use Case.
Problem statement/Why it's valuable | Text | Allows you to enter a problem statement and a description of the application's value.
Experiment summary | Text | Displays summary information for the application's experiment.
Blueprint Chart | Visualization | Displays the blueprint of the model used to create the application. <br><br>See the [full documentation](blueprints){ target=_blank }.
Lift chart | Visualization | Depicts how well a model segments the target population and how capable it is of predicting the target. <br><br>See the [full documentation](lift-chart){ target=_blank }.

### Insights {: #insights }

The **Insights** folder displays several insights (if available) for the original model that was used to create the application.
![](images/wb-app-edit-3.png)

Section | Type | Description
-------- | ------- | -------------
Feature Impact | Visualization | Provides a high-level visualization that identifies which features are most strongly driving model decisions. <br><br>See the [full documentation](feature-impact){ target=_blank }.
Feature Effects | Visualization | Visualizes the effect of changes in the value of each feature on the model’s predictions. <br><br>See the [full documentation](feature-effects){ target=_blank }.
Word Cloud | Visualization | Displays the most relevant words and short phrases in word cloud format. <br><br>See the [full documentation](analyze-insights#word-clouds){ target=_blank }.
Prediction Explanations | Visualization | Illustrates what drives predictions on a row-by-row basis, providing a quantitative indicator of the effect variables have on the predictions and answering why a given model made a certain prediction. <br><br>See the [full documentation](predex-overview){ target=_blank }.

### Predictions {: #predictions }

The **Predictions** folder allows you to make single-record and batch predictions, as well as view a history of predictions made in the application.

![](images/wb-app-edit-4.png)

Section | Type | Description
----------------- | -------- | ---------------------
All Rows | Visualization | Displays each prediction row generated by the application.
Single prediction | Action | Allows you to submit single-record predictions.
Batch prediction | Action | Allows you to upload batch prediction files.

For more information, see the documentation on [app predictions](app-make-pred).

### Prediction Details {: #prediction-details }

The **Prediction Details** folder displays prediction results for individual predictions. Note that this folder is disabled until the app is used to make a prediction.
![](images/wb-app-edit-5.png)

Section | Type | Description
----------------- | -------- | ---------------------
General Information | Text | Displays prediction information for the selected row.
Prediction Explanations | Visualization | Displays Prediction Explanations for the selected row.
What-if and Optimizer | Visualization / Action | Allows you to interact with prediction results using scenario comparison and optimizer tools.

For more information, see the documentation on [app prediction results](app-analyze-result) and the [What-if and Optimizer widget](whatif-opt).

## Share {: #share }

There are two ways to share applications:

1. [Send a shareable link](#generate-a-shareable-link) to DataRobot or non-DataRobot users.
2. [Share the Use Case](wb-build-usecase#share) with other DataRobot users, which provides them with access to all assets contained within it.

### Generate a shareable link {: #generate-a-shareable-link }

When using the application&mdash;in either **Edit** or **Present** mode&mdash;you can generate a shareable link. This link provides access to your application and can be shared with non-DataRobot users.

To generate a shareable link:

1. Click **Share** in the upper-right corner.

    ![](images/wb-app-share.png)

2. Select the box next to **Grant access to anyone with this link**. If this box does not have a check mark, users cannot access the application via the link.

    ![](images/wb-app-share-1.png)

3. (Optional) Toggle **Prevent users with link access from making predictions** to control whether recipients can make predictions in the app.

!!! warning "Generate a new link"

    If you click **Generate a new link**, all users with the older link will no longer have access to the app&mdash;you must send a new link to grant access.
## Feature considerations {: #feature-considerations } Consider the following when using the new application interface in Workbench: - Supports the following experiments: - Binary classification - Regression - Does not support the following experiments: - Time series - Geospatial - Images
wb-app-edit
---
title: Public preview features
description: Read preliminary documentation for application-related features currently in the DataRobot public preview pipeline.
section_name: Workbench
maturity: public-preview
---

# Application public preview features {: #application-public-preview-features }

{% include 'includes/pub-preview-notice-include.md' %}

## Available public preview documentation {: #available-public-preview-documentation }

Public preview for... | Describes...
----- | ------
[New app experience](wb-app-edit) | Enable the new application experience with model insights.
index
--- title: Notebook versioning description: Learn how to maintain versions of DataRobot Notebooks. section_name: Notebooks --- # Notebook versioning {: #notebook-versioning } {% include 'includes/notebooks/revise-nb.md' %}
dr-revise-nb
--- title: Add notebooks description: Learn how to create new DataRobot notebooks, import existing notebooks, and export notebooks as .ipynb files. section_name: Notebooks --- # Add notebooks {: #add-notebooks } {% include 'includes/notebooks/create-nb.md' %}
dr-create-nb
--- title: Notebook settings description: Learn about the options available for DataRobot Notebook settings. section_name: Notebooks --- # Notebook settings {: #notebook-settings } {% include 'includes/notebooks/settings-nb.md' %}
dr-settings-nb
--- title: Manage notebooks description: Learn how to create, configure, and manage DataRobot Notebooks. section_name: Notebooks --- # Manage notebooks {: #manage-notebooks } {% include 'includes/notebooks/manage-index.md' %}
index
--- title: Environment management description: Describes the environment management capabilities of the DataRobot Notebook platform. section_name: Notebooks --- # Environment management {: #environment-management } {% include 'includes/notebooks/env-nb.md' %}
dr-env-nb
--- title: Create and execute cells description: Describes how to create and execute cells in DataRobot Notebooks. section_name: Notebooks --- # Create and execute cells {: #create-and-execute-cells} {% include 'includes/notebooks/cell-nb.md' %}
dr-cell-nb
--- title: Code intelligence description: Describes the code intelligence capabilities available for code cells in DataRobot Notebooks. section_name: Notebooks --- # Code intelligence {: #code-intelligence} {% include 'includes/notebooks/code-int.md' %}
dr-code-int
--- title: Cell actions description: Describes the various actions available to control notebook cells. section_name: Notebooks --- # Cell actions {: #cell-actions } {% include 'includes/notebooks/action-nb.md' %}
dr-action-nb
--- title: Notebook terminals description: Describes the terminal integration available for DataRobot Notebooks. section_name: Notebooks --- # Notebook terminals {: #notebook-terminals } {% include 'includes/notebooks/terminal-nb.md' %}
dr-terminal-nb
--- title: Azure OpenAI Service integration description: Use Azure's OpenAI assistant to generate code in DataRobot Notebooks. section_name: Notebooks maturity: public-preview --- # Azure OpenAI Service integration {: #openai-assistant } {% include 'includes/notebooks/openai-nb.md' %}
dr-openai-nb
--- title: Notebook coding experience description: Learn about the coding experience in DataRobot Notebooks. section_name: Notebooks --- # Notebook coding experience {: #notebook-coding-features } {% include 'includes/notebooks/code-index.md' %}
index
---
title: Python client v3.0
description: Learn about the new features, enhancements, and changes in version 3.0 of DataRobot's Python client.
---

# Python client v3.0

Now generally available, version 3.0 of DataRobot's [Python client](https://pypi.org/project/datarobot/){ target=_blank } has been released. This version introduces significant changes to common methods and usage of the client. Many prominent changes are listed below, but **view the [changelog](https://datarobot-public-api-client.readthedocs-hosted.com/page/CHANGES.html){ target=_blank } for a complete list of changes introduced in version 3.0**.

### New features

Some of the new features in version 3.0 are summarized below:

* Version 3.0 of the Python client drops support for Python 3.6 and earlier; it supports Python 3.7+.
* The default Autopilot mode for the `project.start_autopilot` method has changed to `AUTOPILOT_MODE.QUICK`.
* Pass a file, file path, or DataFrame to a deployment to easily make batch predictions and return the results as a DataFrame using the new method `Deployment.predict_batch`.
* You can use a new method to retrieve the canonical URI for a project, model, deployment, or dataset:
    * `Project.get_uri`
    * `Model.get_uri`
    * `Deployment.get_uri`
    * `Dataset.get_uri`

### New methods for DataRobot projects

Review the new methods available for `datarobot.models.Project`:

* `Project.get_options` allows you to retrieve saved modeling options.
* `Project.set_options` saves `AdvancedOptions` values for use in modeling.
* `Project.analyze_and_model` initiates Autopilot or data analysis using data that has been uploaded to DataRobot.
* `Project.get_dataset` retrieves the dataset used to create the project.
* `Project.set_partitioning_method` creates the correct Partition class for a regular project, based on input arguments.
* `Project.set_datetime_partitioning` creates the correct Partition class for a time series project.
* `Project.get_top_model` returns the highest scoring model for a metric of your choice.

### Deprecations

Review the deprecations introduced in version 3.0:

* `Project.set_target` has been deprecated. Use `Project.analyze_and_model` instead.
* `PredictJob.create` has been removed. Use `Model.request_predictions` instead.
* `Model.get_leaderboard_ui_permalink` has been removed. Use `Model.get_uri` instead.
* `Project.open_leaderboard_browser` has been removed. Use `Project.open_in_browser` instead.
* `ComplianceDocumentation` has been removed. Use `AutomatedDocument` instead.

### Notebooks

The table below outlines the notebooks available that use version 3.0 of DataRobot's Python client.

Topic | Describes... |
----- | ------ |
[Insurance claim triage](insurance/index) | Evaluate the severity of an insurance claim in order to triage it effectively. |
[Large scale demand forecasting](demand-v3.ipynb) | Learn about an end-to-end demand forecasting use case that uses DataRobot's Python package. |
[Predict fraudulent medical claims](pred-fraud-v3.ipynb) | How to identify fraudulent medical claims using the DataRobot Python package. |
[Predict customer churn](customer-churn-v3.ipynb) | How to predict which customers are at risk of churning and when to intervene to prevent it. |
[Generate SHAP-based Prediction Explanations](shap-nb.ipynb) | How to use DataRobot's SHAP Prediction Explanations to determine what qualities of a home drive sale value. |
[Configure datetime partitioning](datetime-v3.ipynb) | How to use [datetime partitioning](datetime_partitioning) to guard a project against time-based target leakage. |
[Generate advanced model insights](adv-insights.ipynb) | How to generate the model insights available for DataRobot's Python client. |
[Migrate models](migrate-nb.ipynb) | How to transfer models from one DataRobot cluster to another as an .mlpkg file. |
[Create and schedule JDBC prediction jobs](jdbc-nb.ipynb) | How to use DataRobot's Python client to schedule prediction jobs and write them to a JDBC database. |
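As a sketch of the v3.0 batch prediction workflow, the hypothetical helper below wraps `Deployment.predict_batch`, which accepts a file, file path, or DataFrame and returns predictions as a DataFrame. The helper name and the `prediction_` column prefix are illustrative assumptions, not part of the client:

```python
import pandas as pd

def score_batch(deployment, df):
    """Score a DataFrame against a deployment and attach the predictions.

    `deployment` is expected to expose the v3.0 `predict_batch` method,
    e.g. `deployment = datarobot.Deployment.get(deployment_id=...)`.
    """
    # predict_batch accepts a file, file path, or DataFrame and
    # returns the prediction results as a DataFrame.
    preds = deployment.predict_batch(df)
    # Prefix prediction columns so they don't collide with feature names.
    return df.join(preds.add_prefix("prediction_"))
```

With a real deployment, `score_batch` returns the scoring rows with prediction columns appended, and `deployment.get_uri()` gives the canonical link to inspect the deployment in the UI.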
pythonv3
--- title: Modeling workflow overview description: Learn how to use DataRobot's clients, both Python and R, to train and experiment with models. --- # Modeling workflow overview {: #modeling-workflow-overview } This code example outlines how to use DataRobot's clients, both Python and R, to train and experiment with models. It also offers ideas for integrating DataRobot with other products via the API. Specifically, you will: - Create a project and run Autopilot. - Experiment with feature lists, modeling algorithms, and hyperparameters. - Choose the best model. - Perform an in-depth evaluation of the selected model. - Deploy a model into production in a few lines of code. In addition to this walkthrough, you can download a Jupyter notebook for each language: * ![](images/icon-download.png) [Python notebook](guide/python-modeling.ipynb) * ![](images/icon-download.png) [R notebook](guide/r-modeling.ipynb) ## Data used for this example This walkthrough uses a synthetic dataset that illustrates a credit card company’s anti-money laundering (AML) compliance program, with the intent of detecting the following money-laundering scenarios: * A customer spends on the card, but overpays their credit card bill and seeks a cash refund for the difference. * A customer receives credits from a merchant without offsetting transactions, and either spends the money or requests a cash refund from the bank. A rule-based engine is in place to produce an alert when it detects potentially suspicious activity consistent with the scenarios above. The engine triggers an alert whenever a customer requests a refund of any amount. Small refund requests are included because they could be a money launderer’s way of testing the refund mechanism or trying to establish refund requests as a normal pattern for their account. The target feature is `SAR`, suspicious activity reports. 
It indicates whether or not the alert resulted in an SAR after manual review by investigators, which means that this project is a binary classification problem. The unit of analysis is an individual alert, so the model will be built on the alert level. Each alert will get a score ranging from 0 to 1, indicating the probability of being an alert leading to an SAR. The data consists of a mixture of numeric, categorical, and text data.

## Setup

### Import libraries

This walkthrough uses the `DR_Demo_AML_Alert.csv` dataset, which you can download [here](https://github.com/datarobot-community/quickstart-guide/tree/master/data){ target=_blank }.

=== "Python"

    ``` python
    import datarobot as dr
    from datarobot_bp_workshop import Workshop, Visualize
    import pandas as pd
    import matplotlib.pyplot as plt
    %matplotlib inline
    import seaborn as sns
    import time
    import warnings
    import graphviz
    import plotly.express as px
    warnings.filterwarnings('ignore')

    w = Workshop()

    # Wider .head()s
    pd.options.display.width = 0
    pd.options.display.max_columns = 200
    pd.options.display.max_rows = 2000
    sns.set_theme(style="darkgrid")
    ```

=== "R"

    ``` R
    library(dplyr)
    library(ggplot2)
    library(datarobot)
    ```

### Connect to DataRobot

If the config file is in the default location described in the API Quickstart guide, then you do not need to do anything else. Read more about different options for [connecting to DataRobot from the client](https://docs.datarobot.com/en/docs/api/api-quickstart/api-qs.html).
 
=== "Python"

    ``` python
    # If the config file is not in the default location described in the
    # API Quickstart guide, then you will need to call:
    # dr.Client(config_path='<file-path-to-drconfig.yaml>')
    ```

=== "R"

    ``` R
    # If the config file is not in the default location described in the
    # API Quickstart guide, then you will need to call:
    # datarobot::ConnectToDataRobot(configPath = '<file-path-to-drconfig.yaml>')
    ```

### Upload a dataset

=== "Python"

    ```python
    # To read from a local file, uncomment and use:
    # df = pd.read_csv('./data/DR_Demo_AML_Alert.csv')

    # To read from an s3 bucket:
    df = pd.read_csv('https://s3.amazonaws.com/datarobot_public_datasets/DR_Demo_AML_Alert.csv')
    df.head()

    # To view target distribution:
    df_target_summary = pd.DataFrame(df['SAR'].value_counts()).reset_index().rename(columns={'index':'SAR','SAR':'Count'})
    ax = sns.barplot(x='SAR', y='Count', data=df_target_summary)
    for index, row in df_target_summary.iterrows():
        ax.text(row.SAR, row.Count, round(row.Count, 2), color='black', ha="center")
    plt.show()
    ```

=== "R"

    ``` R
    # Set to the location of the training data via a local file or URL
    # Sample file location: '/Users/myuser/Downloads/DR_Demo_AML_Alert.csv'
    dataset_file_path <- "https://s3.amazonaws.com/datarobot_public_datasets/DR_Demo_AML_Alert.csv"
    training_data <- utils::read.csv(dataset_file_path)
    test_data <- training_data[ -c(2) ]
    head(training_data)
    ```

## Create a project and train models with Autopilot

Once a model from this project is deployed (covered at the end of this workflow), you can use the DataRobot Prediction API to make predictions, which also gives you access to advanced [model management](../../../mlops/index) features such as data drift, accuracy, and service health statistics.

=== "Python"

    You can also reference a Python prediction snippet from the UI. Navigate to the **Deployments** page, select your deployment, and go to **Predictions > Prediction API** to reference the snippet for making predictions.

    ``` python
    # Create a project by uploading data. This will take a few minutes.
    project = dr.Project.create(sourcedata=df,
                                project_name='DR_Demo_API_alert_AML_{}'.format(pd.Timestamp.now().strftime('%Y-%m-%d %H:%M')))

    # Set the project's target and initiate Autopilot in Quick mode.
    # Wait for Autopilot to finish. You can set verbosity to 0 if you do not wish to see progress updates.
    project.analyze_and_model(target='SAR', worker_count=-1)

    # Open the project's Leaderboard to monitor the progress in UI.
    project.open_in_browser()
    ```

=== "R"

    ``` R
    # Create a project by uploading data. This will take a few minutes.
    project <- SetupProject(dataSource = training_data,
                            projectName = "SAR Detection",
                            maxWait = 60 * 60)

    # Set the project target and initiate Autopilot
    SetTarget(project, target = "SAR")

    # Block execution until Autopilot is complete.
    # The `WaitForAutopilot()` function forces the R kernel to wait until
    # DataRobot has finished modeling before executing the next series of commands.
    WaitForAutopilot(project)

    # Open the project's Leaderboard to monitor the progress in UI.
ViewWebProject("620423876638a2187c5aa876") # Provide the project ID ``` ### Retrieve and review results from the Leaderboard === "Python" ``` python def get_top_of_leaderboard(project, verbose = True): # A helper method to assemble a dataframe with Leaderboard results and print a summary: leaderboard = [] for m in project.get_models(): leaderboard.append([m.blueprint_id, m.featurelist.id, m.id, m.model_type, m.sample_pct, m.metrics['AUC']['validation'], m.metrics['AUC']['crossValidation']]) leaderboard_df = pd.DataFrame(columns = ['bp_id', 'featurelist', 'model_id', 'model', 'pct', 'validation', 'cross_validation'], data = leaderboard) if verbose == True: # Print a Leaderboard summary: print("Unique blueprints tested: " + str(len(leaderboard_df['bp_id'].unique()))) print("Feature lists tested: " + str(len(leaderboard_df['featurelist'].unique()))) print("Models trained: " + str(len(leaderboard_df))) print("Blueprints in the project repository: " + str(len(project.get_blueprints()))) # Print the essential information for top models, sorted by accuracy from validation data: print("\n\nTop models on the Leaderboard:") leaderboard_top = leaderboard_df[leaderboard_df['pct'] == 64].sort_values(by = 'cross_validation', ascending = False).head().reset_index(drop = True) display(leaderboard_top.drop(columns = ['bp_id', 'featurelist'], inplace = False)) # Show blueprints of top models: for index, m in leaderboard_top.iterrows(): Visualize.show_dr_blueprint(dr.Blueprint.get(project.id, m['bp_id'])) return leaderboard_top leaderboard_top = get_top_of_leaderboard(project) ``` === "R" ``` # Use the `ListModels()` function to retrieve a list of all the trained DataRobot models for a specified project. 
    ListModels(project)

    # Retrieve the model DataRobot recommends for deployment
    model <- GetRecommendedModel(project, type = RecommendedModelType$RecommendedForDeployment)

    # Get a model's blueprint
    GetModelBlueprintChart(project, "<model-id>") # Provide the model ID
    ```

## Experiment to get better results

When you run a project using Autopilot, DataRobot first creates blueprints based on the characteristics of your data and puts them in the Repository. Then, it chooses a subset of these to train; when training completes, these are the blueprints you'll find on the Leaderboard. After the Leaderboard is populated, it can be useful to train some of the blueprints that DataRobot skipped. For example, you can try a more complex Keras blueprint like Keras Residual AutoInt Classifier using Training Schedule (3 Attention Layers with 2 Heads, 2 Layers: 100, 100 Units). In some cases, you may want to directly access a trained model through the Python or R client and retrain it with a different feature list or tune its hyperparameters.

### Find blueprints not yet trained for the project from the Repository

=== "Python"

    ``` python
    blueprints = project.get_blueprints()

    # After retrieving the blueprints, you can search for a specific blueprint.
    # In the example below, search for all models that have "Gradient" in their name.
    models_to_run = []
    for blueprint in blueprints:
        if 'Gradient' in blueprint.model_type:
            models_to_run.append(blueprint)
    ```

    ``` python
    models_to_run
    ```

=== "R"

    ``` R
    modelsInLeaderboard <- ListModels(project)
    modelsInLeaderboard_df <- as.data.frame(modelsInLeaderboard)
    ```

### Python: define and train a custom blueprint

??? note "Python"

    This section, exclusive to the Python client, describes how to use various DataRobot features to improve the results returned from models. Use the snippet below to define and train a custom blueprint.
You can read more about composing custom blueprints via code by visiting DataRobot's [blueprint workshop](https://blueprint-workshop.datarobot.com/){ target=_blank }.

``` python
pdm3 = w.Tasks.PDM3(w.TaskInputs.CAT)
pdm3.set_task_parameters(cm=50000, sc=10)

ndc = w.Tasks.NDC(w.TaskInputs.NUM)
rdt5 = w.Tasks.RDT5(ndc)

ptm3 = w.Tasks.PTM3(w.TaskInputs.TXT)
ptm3.set_task_parameters(d2=0.2, mxf=20000, d1=5, n='l2', id=True)

kerasc = w.Tasks.KERASC(rdt5, pdm3, ptm3)
kerasc.set_task_parameters(always_use_test_set=1, epochs=4, hidden_batch_norm=1, hidden_units='list(64)', hidden_use_bias=0, learning_rate=0.03, use_training_schedule=1)

# Check task documentation:
# kerasc.documentation()

kerasc_blueprint = w.BlueprintGraph(kerasc, name='A Custom Keras BP (1 Layer: 64 Units)').save()
kerasc_blueprint.show()
kerasc_blueprint.train(project_id = project.id, sample_pct = 64)
```

After creating a custom blueprint, use the code outlined below to train models with the custom blueprint.

### Train a model using a different feature list

=== "Python"

    ``` python
    # Select a model from the Leaderboard:
    model = dr.Model.get(project = project.id, model_id = leaderboard_top.iloc[0]['model_id'])

    # Retrieve Feature Impact:
    feature_impact = model.get_or_request_feature_impact()

    # Create a feature list using the top 25 features based on feature impact:
    feature_list = [f["featureName"] for f in feature_impact[:25]]
    new_list = project.create_featurelist('new_feat_list', feature_list)

    # Retrain models using the new feature list:
    model.retrain(featurelist_id = new_list.id)
    ```

=== "R"

    ```
    for (i in 1:length(models_to_run)){
        job <- RequestNewModel(project, models_to_run[[i]])
        WaitForJobToComplete(project, job, maxWait=600)
    }
    ```

### Tune hyperparameters for a model

=== "Python"

    ``` python
    tune = model.start_advanced_tuning_session()

    # Get available task names,
    # and available parameter names for a task name that exists on this model
    tasks = tune.get_task_names()
    tune.get_parameter_names(tasks[1])

    # Adjust this section as required, as it may differ depending on
    # task/parameter names as well as acceptable values
    tune.set_parameter(
        task_name=tasks[1],
        parameter_name='n_estimators',
        value=200)

    job = tune.run()
    ```

=== "R"

    ```
    StartTuningSession(model)
    ```

### Select the top-performing model

=== "Python"

    ``` python
    # View the top models on the Leaderboard
    leaderboard_top = get_top_of_leaderboard(project)
    ```

    ``` python
    # Select the model based on accuracy (AUC)
    top_model = dr.Model.get(project = project.id, model_id = leaderboard_top.iloc[0]['model_id'])
    ```

=== "R"

    ```
    # Use the `ListModels()` function to retrieve a list of all the trained DataRobot models for a specified project
    ListModels(project)

    # Retrieve the model DataRobot recommends for deployment
    model <- GetRecommendedModel(project, type = RecommendedModelType$RecommendedForDeployment)
    ```

## Model evaluation

### Retrieve and plot Feature Impact

=== "Python"

    ``` python
    max_num_features = 15

    # Retrieve Feature Impact
    feature_impacts = top_model.get_or_request_feature_impact()

    # Plot permutation-based Feature Impact
    feature_impacts.sort(key=lambda x: x['impactNormalized'], reverse=True)
    FeatureImpactDF = pd.DataFrame([{'Impact Normalized': f["impactNormalized"], 'Feature Name': f["featureName"]} for f in feature_impacts[:max_num_features]])
    FeatureImpactDF["X axis"] = FeatureImpactDF.index
    g = sns.lmplot(x="Impact Normalized", y="X axis", data=FeatureImpactDF, fit_reg=False)
    sns.barplot(y=FeatureImpactDF["Feature Name"], x=FeatureImpactDF["Impact Normalized"])
    ```

=== "R"

    ```
    # Retrieve the top 10 most impactful features:
    feature_impact <- GetFeatureImpact(model)
    feature_impact <- feature_impact[order(-feature_impact$impactNormalized), ] %>% slice(1:10)

    # Create plot of top 10 features based on Feature Impact
    ggplot(data = feature_impact, mapping = aes(x = featureName, y = impactNormalized)) +
        geom_col() +
        labs(x = "Feature")
    ```

### Retrieve and plot the ROC curve

=== "Python"

    ``` python
    roc = top_model.get_roc_curve('validation')
    df_roc = pd.DataFrame(roc.roc_points)

    dr_dark_blue = '#08233F'
    dr_roc_green = '#03c75f'
    white = '#ffffff'

    fig = plt.figure(figsize=(8, 8))
    axes = fig.add_subplot(1, 1, 1, facecolor=dr_dark_blue)
    plt.scatter(df_roc.false_positive_rate, df_roc.true_positive_rate, color=dr_roc_green)
    plt.plot(df_roc.false_positive_rate, df_roc.true_positive_rate, color=dr_roc_green)
    plt.plot([0, 1], [0, 1], color=white, alpha=0.25)
    plt.title('ROC curve')
    plt.xlabel('False Positive Rate (Fallout)')
    plt.xlim([0, 1])
    plt.ylabel('True Positive Rate (Sensitivity)')
    plt.ylim([0, 1])
    plt.show()
    ```

=== "R"

    ```
    roc <- GetRocCurve(model, source = 'validation')
    roc_df <- roc$rocPoints
    head(roc_df)
    ```

### Retrieve and plot Feature Effects

=== "Python"

    ``` python
    feature_effects = top_model.get_or_request_feature_effect(source='validation')
    max_features = 5
    for f in feature_effects.feature_effects[:max_features]:
        plt.figure(figsize = (9,6))
        d = pd.DataFrame(f['partial_dependence']['data'])
        if f['feature_type'] == 'numeric':
            d = d[d['label'] != 'nan']
            d['label'] = pd.to_numeric(d['label'])
            sns.lineplot(x="label", y="dependence", data = d).set_title(f['feature_name'] + ": importance=" + str(round(f['feature_impact_score'], 2)))
        else:
            sns.scatterplot(x="label", y="dependence", data = d).set_title(f['feature_name'] + ": importance=" + str(round(f['feature_impact_score'], 2)))
    ```

### Score data before deployment

=== "Python"

    ``` python
    # Use training data to test how the model makes predictions
    test_data = df.head(50)
    dataset_from_file = project.upload_dataset(test_data)
    predict_job_1 = top_model.request_predictions(dataset_from_file.id)
    predictions = predict_job_1.get_result_when_complete()
    display(predictions.head())
    ```

=== "R"

    ```
    test_data <- training_data[ -c(2) ]
    head(test_data)
    ```

    ```
    # Upload the testing dataset
    scoring <- UploadPredictionDataset(project, dataSource = test_data)

    # Request predictions
    predict_job_id <- RequestPredictions(project, modelId = model$modelId, datasetId = scoring$id)

    # Retrieve predictions
    predictions_prob <- GetPredictions(project, predictId = predict_job_id, type = "probability")
    head(predictions_prob)
    ```

### Compute Prediction Explanations

=== "Python"

    ``` python
    # Prepare prediction explanations
    pe_job = dr.PredictionExplanationsInitialization.create(project.id, top_model.id)
    pe_job.wait_for_completion()
    ```

    ``` python
    # Compute prediction explanations with default parameters
    pe_job2 = dr.PredictionExplanations.create(project.id, top_model.id, dataset_from_file.id, max_explanations=3, threshold_low = 0.1, threshold_high = 0.5)
    pe = pe_job2.get_result_when_complete()
    display(pe.get_all_as_dataframe().head())
    ```

=== "R"

    ```
    GetPredictionExplanations(model, test_data)
    ```

## Deploy a model

After identifying the best-performing models, you can deploy them and use DataRobot's REST API to make HTTP requests and return predictions. You can also configure batch jobs to write back into your environment of choice. Once deployed, access monitoring capabilities such as:

- [Service health](https://docs.datarobot.com/en/docs/mlops/monitor/service-health.html){ target=_blank }
- [Prediction accuracy](https://docs.datarobot.com/en/docs/mlops/monitor/deploy-accuracy.html){ target=_blank }
- [Model retraining](https://docs.datarobot.com/en/docs/mlops/manage-mlops/set-up-auto-retraining.html){ target=_blank }

=== "Python"

    ``` python
    # Copy and paste the model ID from previous steps or from the UI:
    model_id = top_model.id
    prediction_server_id = dr.PredictionServer.list()[0].id

    deployment = dr.Deployment.create_from_learning_model(
        model_id, label = 'New Deployment', description = 'A new deployment',
        default_prediction_server_id = prediction_server_id)
    deployment
    ```

=== "R"

    ```
    prediction_server <- ListPredictionServers()[[1]]

    deployment <- CreateDeployment(model,
                                   label = 'New Deployment',
                                   description = 'A new deployment',
                                   defaultPredictionServerId = prediction_server$id)
    deployment
    ```
modeling-workflow
---
title: API user guide
description: Review comprehensive workflows, notebooks, and tutorials that help you find complete examples of common data science and machine learning workflows.
---

# API user guide {: #api-user-guide }

The API user guide includes overviews, Jupyter notebooks, and task-based tutorials that help you find complete examples of common data science and machine learning workflows. Be sure to review the [API quickstart guide](api-quickstart/index) before using the notebooks below.

Topic | Describes... |
----- | ------ |
[Modeling workflow overview](modeling-workflow) | How to use DataRobot's clients, both Python and R, to train and experiment with models. |
[Python client v3.0](pythonv3) | Review the changes introduced to DataRobot's Python client with version 3.0. |
[Common use cases](common-case/index) | Review Jupyter notebooks that outline common use cases and machine learning workflows using DataRobot's Python client. |
[Public preview: R client v2.29](r-pub-prev/index) | Review the changes introduced to DataRobot's R client with version 2.29. |
[Python code examples](python/index) | Python code examples for common data science workflows. |
[R code examples](r-nb/index) | R code examples that outline common data science workflows. |
[REST API code examples](restapi/index) | REST API code examples that outline common data science workflows. |

In addition to the examples listed above, DataRobot hosts community-driven notebooks accessible from the following locations:

| Resource | Description |
| --------------------------| ----------------------------- |
[Examples for data scientists (GitHub repository)](https://github.com/datarobot-community/examples-for-data-scientists/) | Referential Jupyter notebooks that outline common DataRobot functions.
[Tutorials for data scientists (GitHub repository)](https://github.com/datarobot-community/tutorials-for-data-scientists/) | Jupyter notebooks that detail applicable use cases for DataRobot.
[R vignettes included in the R client](https://cran.r-project.org/web/packages/datarobot/index.html) | Long-form tutorials outlining functions in the DataRobot R package.
[Jupyter notebooks included in the Python client](https://datarobot-public-api-client.readthedocs-hosted.com/) | Example Jupyter notebooks demonstrating sample use cases and DataRobot functions using the DataRobot Python package.
index
---
title: Build a recommendation engine
description: Explore how to use historical user purchase data to create a recommendation model that predicts which products, out of a basket of items, the customer will be likely to purchase at a given point in time.
---

# Build a recommendation engine {: #build-a-recommendation-engine}

[Access this AI accelerator on GitHub <span style="vertical-align: sub">:material-arrow-right-circle:{.lg }</span>](https://github.com/datarobot-community/ai-accelerators/blob/main/game-changer/Recommendation%20Engine/Recommendation%20Engine.ipynb){ .md-button }

The accelerator provided in this notebook trains a model on historical customer purchases in order to make recommendations for future visits. The DataRobot features used in this notebook are multilabel modeling and feature discovery. Together, these produce a model that can provide rank-ordered suggestions of content, products, or services that a specific customer might like.

In the notebook, you will:

* Analyze the datasets required
* Create a multilabel dataset for training
* Connect to DataRobot
* Configure a feature discovery project
* Generate features and models
* Generate recommendations for new visits
rec-engine
---
title: Create custom blueprints with composable ML
description: Customize models on the Leaderboard using the Blueprint Workshop.
---

# Create custom blueprints with composable ML {: #create-custom-blueprints-with-composable-ml}

[Access this AI accelerator on GitHub <span style="vertical-align: sub">:material-arrow-right-circle:{.lg }</span>](https://github.com/datarobot-community/ai-accelerators/blob/main/advanced-experimentation/custom_blueprints/create_custom_blueprint.ipynb){ .md-button }

DataRobot's [Composable ML](cml/index) allows you to add pre-defined tasks to a blueprint or to insert your own custom code, so you're free to add your data science and subject matter expertise to the models you build. This accelerator shows how to customize the models on the Leaderboard via Composable ML's API, the Blueprint Workshop.

It covers the following activities:

* Access the Blueprint Workshop
* Define and train a custom blueprint using the tasks provided by DataRobot
* Insert custom code, in the form of a CatBoost classifier, into the blueprint
custom-bp-nb
---
title: Fine-tune models with Eureqa
description: Apply symbolic regression to your dataset in the form of the Eureqa algorithm.
---

# Fine-tune models with Eureqa {: #fine-tune-models-with-eureqa}

[Access this AI accelerator on GitHub <span style="vertical-align: sub">:material-arrow-right-circle:{.lg }</span>](https://github.com/datarobot-community/ai-accelerators/blob/main/advanced-experimentation/fine_tuning_with_eureqa/fine_tuning_with_eureqa.ipynb){ .md-button }

DataRobot offers the ability to apply symbolic regression to your dataset in the form of the Eureqa algorithm. Eureqa returns human-readable, interpretable analytic expressions and allows you to incorporate your own domain expertise about the problem.

This accelerator shows how the Eureqa algorithm can "discover" the gravitational constant by finding the correct relationship between the variables from a double-pendulum experiment. It covers the following activities:

* Apply the Eureqa algorithm to your dataset
* Tune the model's mathematical building blocks to incorporate your domain expertise about the problem
* Access the resulting closed-form expression
tune-eureqa
---
title: Predict factory order quantities for new products
description: Build a model to improve decisions about initial order quantities using future product details and product sketches.
---

# Predict factory order quantities for new products {: #predict-factory-order-quantities-for-new-products }

[Access this AI accelerator on GitHub <span style="vertical-align: sub">:material-arrow-right-circle:{.lg }</span>](https://github.com/datarobot-community/ai-accelerators/blob/main/advanced-experimentation/Retail_Industry_Predicting_Factory_Orders_New_Products/Retail%20Industry%20-%20Predicting%20Factory%20Order%20Quantities%20for%20New%20Products.ipynb){ .md-button }

Retailers face many decisions when launching new products. One key decision is the amount of product to order from the manufacturer. Ordering too much wastes working capital and can lead to products being heavily discounted. Ordering too little squanders an opportunity for revenue and may cause customers to purchase other brands.

Getting initial order quantities right is particularly difficult for luxury products, where first-year demand for a new purse, belt, or shoe can vary by several orders of magnitude based on factors unrelated to the product specifications. This notebook illustrates how to build a model to improve decisions about initial order quantities using future product details and product sketches.
pred-products
---
title: No-show appointment forecasting
description: How to build a model that identifies patients most likely to miss appointments, with correlating reasons.
---

# No-show appointment forecasting {: #no-show-appointment-forecasting }

[Access this AI accelerator on GitHub <span style="vertical-align: sub">:material-arrow-right-circle:{.lg }</span>](https://github.com/datarobot-community/ai-accelerators/blob/main/advanced-experimentation/appointment_forecasting/no_show.ipynb){ .md-button }

Many people are guilty of having canceled a doctor’s appointment. However, although canceling an appointment does not seem too disastrous from the patient’s point of view, no-shows cost outpatient health centers a staggering 14% of anticipated daily revenue (JAOA). Missed appointments trickle into lower utilization rates not only for doctors and nurses, but also for the overhead costs required to run outpatient centers. In addition, patients missing their appointments risk facing poorer health outcomes, as they are unable to access timely care.

While outpatient centers employ solutions such as calling patients ahead of time, these high-touch resource investments are often not prioritized for patients with the highest risk of no-shows. Low-touch solutions such as automated texts are effective tools for mass reminders, but do not offer the personalization necessary for patients at the highest risk of no-shows. This accelerator shows how to identify clients who are likely to miss appointments ("no-shows") and take action to prevent that from happening.
no-show
---
title: Demand forecasting with the what-if app
description: Adjust known-in-advance variables, such as promotions or pricing, to see how changes in those factors might affect forecasted demand.
---

# Demand forecasting with the what-if app {: #demand-forecasting-with-the-what-if-app }

[Access this AI accelerator on GitHub <span style="vertical-align: sub">:material-arrow-right-circle:{.lg }</span>](https://github.com/datarobot-community/ai-accelerators/tree/main/end-to-end/Demand_forecasting_what_if_app){ .md-button }

This demand forecasting what-if app allows you to adjust certain known-in-advance variable values to see how changes in those factors might affect the forecasted demand. Some examples of factors that might be adjusted include marketing promotions, pricing, seasonality, or competitor activity. By using the app to explore different scenarios and adjust key inputs, you can make more accurate predictions about future demand and plan accordingly.

This app is the third installment of a three-part series on demand forecasting. The [first accelerator](https://github.com/datarobot-community/ai-accelerators/tree/main/end-to-end/End_to_end_demand_forecasting){ target=_blank } focuses on handling common data and modeling challenges, identifies common pitfalls in real-life time series data, and provides helper functions to scale experimentation. The [second accelerator](https://github.com/datarobot-community/ai-accelerators/tree/main/end-to-end/Demand_forecasting_cold_start){ target=_blank } provides the building blocks for a cold start modeling workflow on series with limited or no history. They can be used as a starting point to create a model deployment for the app.
ml-what-if
---
title: End-to-end ML workflow with Databricks
description: Build models in DataRobot with data acquired and prepared in a Spark-backed notebook environment provided by Databricks.
---

# End-to-end ML workflow with Databricks {: #end-to-end-ml-workflow-with-databricks }

[Access this AI accelerator on GitHub <span style="vertical-align: sub">:material-arrow-right-circle:{.lg }</span>](https://github.com/datarobot-community/ai-accelerators/blob/main/end-to-end/Databricks_End_To_End.ipynb){ .md-button }

DataRobot features an in-depth API that allows data scientists to produce fully automated workflows in their coding environment of choice. This accelerator shows how to pair the power of DataRobot with the Spark-backed notebook environment provided by Databricks. In this notebook you'll see how data acquired and prepared in a Databricks notebook can be used to train a collection of models on DataRobot. You'll then deploy a recommended model and use DataRobot's exportable Scoring Code to generate predictions on the Databricks Spark cluster.

This accelerator notebook covers the following activities:

* Acquiring a training dataset.
* Building a new DataRobot project.
* Deploying a recommended model.
* Scoring via Spark using DataRobot's exportable Java Scoring Code.
* Scoring via DataRobot's Prediction API.
* Reporting monitoring data to DataRobot's MLOps agent framework.
* Writing results back to a new table.
ml-databricks
---
title: Perform multi-model analysis
description: Use Python functions to aggregate DataRobot model insights into visualizations.
---

# Perform multi-model analysis {: #perform-multi-model-analysis }

[Access this AI accelerator on GitHub <span style="vertical-align: sub">:material-arrow-right-circle:{.lg }</span>](https://github.com/datarobot-community/ai-accelerators/blob/main/advanced-experimentation/multi_model_analysis/Multi-Model%20Analysis.ipynb){ .md-button }

DataRobot is designed to help you experiment with different modeling approaches, data preparation techniques, and problem framings. You can iterate fast with a tight feedback loop to quickly arrive at the best approach.

Sometimes you may wish to break your use case into multiple models, likely across multiple DataRobot projects. Maybe you want to build a separate model for each country or one for different periods of the year. In this case, it helps to bring all of your model performances and insights into one chart.

This accelerator shares several Python functions that take DataRobot insights (specifically model error, Feature Effects partial dependence, and SHAP or permutation-based feature importance) and bring them together into one chart, allowing you to understand all of your models in one place and more easily share your findings with stakeholders.
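The aggregation idea can be sketched in a few lines of pandas: collect per-model feature importance into one tidy frame so a single grouped chart can compare models side by side. The model names and scores below are hypothetical placeholders for DataRobot Feature Impact output, not values from the accelerator.

```python
import pandas as pd

# Hypothetical normalized Feature Impact scores from two projects/models.
impacts = {
    "model_US": {"price": 1.0, "season": 0.6, "promo": 0.3},
    "model_EU": {"price": 0.8, "season": 0.9, "promo": 0.2},
}

# One tidy row per (model, feature): ready for a grouped bar chart
# or a seaborn catplot comparing all models in a single figure.
tidy = (
    pd.DataFrame(impacts)
    .rename_axis("feature")
    .reset_index()
    .melt(id_vars="feature", var_name="model", value_name="impact")
)
```

The tidy (long) layout is the key design choice: most plotting libraries can group and color by the `model` column directly, with no per-model plotting loop.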
ml-analysis
---
title: Integrate DataRobot and Snowpark by maximizing the data cloud
description: Pair DataRobot with Snowflake and Snowpark to level up your end-to-end ML lifecycle on the data cloud.
---

# Integrate DataRobot and Snowpark by maximizing the data cloud {: #integrate-datarobot-and-snowpark-by-maximizing-the-data-cloud }

[Access this AI accelerator on GitHub <span style="vertical-align: sub">:material-arrow-right-circle:{.lg }</span>](https://github.com/datarobot-community/ai-accelerators/blob/main/end-to-end/snowflake_snowpark/Native%20integration%20DataRobot%20and%20Snowflake%20Snowpark-Maximizing%20the%20Data%20Cloud.ipynb){ .md-button }

If you or your team have tried to develop and productionalize machine learning models with Snowflake using Python and Snowpark, but are looking to level up your end-to-end ML lifecycle on the data cloud, then this AI Accelerator is for you. Depending on your role within the organization, this accelerator can address a number of use cases:

* Providing technical personnel with a hosted notebook.
* Creating an improved developer experience.
* Improving monitoring capabilities for models within Snowflake.
* Providing guidance and insights for business personnel who want action items: next steps for customers, sales, marketing, and more.

This notebook shows how DataRobot addresses these exact needs. In addition, it is compatible with the Snowflake data science stack and DataRobot 9.0, giving you advantages in terms of speed, accuracy, security, and cost-effectiveness.
snowpark-data
---
title: Use self-joins with panel data to improve model accuracy
description: Explore how to implement self-joins in panel data analysis.
---

# Use self-joins with panel data to improve model accuracy {: #use-self-joins-with-panel-data-to-improve-model-accuracy }

[Access this AI accelerator on GitHub <span style="vertical-align: sub">:material-arrow-right-circle:{.lg }</span>](https://github.com/datarobot-community/ai-accelerators/blob/main/end-to-end/Demand_forecasting_retraining/End_to_end_demand_forecasting_retraining.ipynb){ .md-button }

In this accelerator, explore how to implement self-joins in panel data analysis. Regardless of your industry, if you work with panel data, this guide is tailored to help you accelerate feature engineering and extract valuable insights.

Panel data, with multiple observations for consistent subjects over time, is ubiquitous in various domains. While panel data is often spread across multiple tables, it can also exist in a single dataset with multiple features suitable as panel dimensions. The self-join technique enables automated, time-aware feature engineering with just one dataset, generating hundreds of candidate features of lagged aggregations and statistics. Combining these features within panel dimensions can substantially improve predictive model performance.

The accelerator focuses on predicting airline take-off delays of 30 minutes or more to illustrate the self-join technique. However, this framework applies broadly across verticals and can easily be adapted to your use case. Using a single dataset, you join it to itself four times across different features and engineer time-based features from each join, using the AI Catalog for data management. The accelerator covers data preparation with multiple joins and time horizons, and shows how to mitigate target leakage with multiple feature lists as well as with time gaps in time-aware joins. Panel data analysis unlocks valuable insights into subjects evolving over time, an opportunity that is often overlooked when working with a single dataset.
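The core move can be sketched with a plain pandas self-join: shift the time key within each panel dimension, then merge the dataset back onto itself so every row carries lagged values from its own history. The toy airport/date data below is hypothetical, standing in for the accelerator's airline delay dataset.

```python
import pandas as pd

# Toy panel dataset: one row per (airport, date) with a daily delay rate.
df = pd.DataFrame({
    "airport": ["JFK", "JFK", "JFK", "LGA", "LGA", "LGA"],
    "date": pd.to_datetime(["2024-01-01", "2024-01-02", "2024-01-03"] * 2),
    "delay_rate": [0.10, 0.30, 0.20, 0.05, 0.15, 0.25],
})

# Self-join: shift the join key forward one day, so each row matches
# the previous day's observation for the same airport.
lagged = df.assign(date=df["date"] + pd.Timedelta(days=1))
lagged = lagged.rename(columns={"delay_rate": "delay_rate_prev_day"})
joined = df.merge(lagged, on=["airport", "date"], how="left")
```

Repeating the merge with different key shifts and aggregation windows (prior week, prior month, and so on) is what generates the large pool of candidate lag features; the time gap in the shifted key is also what keeps the join free of target leakage.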
self-joins
---
title: Create a trading volume profile curve with a time series model factory
description: Use a framework to build models that will allow you to predict how much of the next day's trading volume will happen at each time interval.
---

# Create a trading volume profile curve with a time series model factory {: #create-a-trading-volume-profile-curve-with-a-time-series-model-factory }

[Access this AI accelerator on GitHub <span style="vertical-align: sub">:material-arrow-right-circle:{.lg }</span>](https://github.com/datarobot-community/ai-accelerators/tree/main/advanced-experimentation/trading_volume_profile_curve){ .md-button }

In securities trading, it’s often useful to have an idea of how trading volume for a particular instrument will be distributed over the market session. This is done by building a volume curve — essentially, a prediction of how much of the volume will fall within the different time intervals (“time slices”) in a trading day. Volume curves allow traders to better anticipate how to time and pace their orders and are used as inputs into algorithmic execution strategies such as VWAP (volume weighted average price) and IS (implementation shortfall).

Historically, volume curves have been built by taking the average share of volume for a particular time slice over the last N trading days (for instance, the share of the daily volume in AAPL that traded between 10:35 and 10:40am on each of the last 20 trading days, on average), with manual adjustments to take account of scheduled events and anticipated differences. Machine learning allows you to do this in a structured, systematic way.

The goal of this AI accelerator is to provide a framework to build models that will allow you to predict how much of the next day's trading volume will happen at each time interval. The granularity can vary from minute-by-minute (or even lower) to hourly or daily.

If you are working with high granularity, such as minute-by-minute intervals, having a single time series model predict the next 1440 minutes (or 480, based on how long the market is open) becomes problematic. Instead, consider a time series model per interval (minute, half hour, hour, etc.) so that each model is only forecasting one step ahead. You can then bring together the predictions of all the models to create the full curve for the next day. Furthermore, while a model is built to predict each time interval, the model isn't restricted to data for that interval, but can leverage a wider window.

While the motivation for this repository is a financial markets use case, it should be useful in other scenarios where predictions are required at a high resolution, such as predictive maintenance.

### Challenges

* The number of models or deployments can explode, and you need to keep track of all of them.
* Each model needs slightly different data.
* Even if you are creating a model per minute, you want to use data from earlier and later on in the day.
* You want to see a unified result (a single curve for the whole trading day).

### Approach

* Train a model per interval, but leverage data outside of the interval by "widening" the time window on which it is trained.
* Use a data frame to track all the projects, models, and deployments corresponding to each interval. This will make it easy to stitch all the predictions together to build the next day(s) curve.
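The registry-and-stitch pattern can be sketched in a few lines of pandas: one row per time slice tracking that slice's one-step-ahead forecast, then a normalization pass to turn the forecasts into a volume-share curve for the next day. The 30-minute grid and the random stub predictions below are hypothetical stand-ins for the per-interval model outputs, not values from the accelerator.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# One time slice per half hour of a 09:30-16:00 trading session.
intervals = pd.date_range("09:30", "16:00", freq="30min").strftime("%H:%M")

# Registry: one row per interval; in the real workflow this would also
# hold the project/model/deployment IDs for that interval's model.
registry = pd.DataFrame({
    "interval": intervals,
    "predicted_volume": rng.uniform(1e5, 5e5, size=len(intervals)),
})

# Stitch the one-step-ahead forecasts into a single next-day curve,
# normalized so the interval shares sum to 1.
registry["volume_share"] = registry["predicted_volume"] / registry["predicted_volume"].sum()
curve = registry.set_index("interval")["volume_share"]
```

Keeping the per-interval artifacts in one data frame is the design choice that makes the factory manageable: scoring the next day is a single pass over the registry followed by this normalization step.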
ts-factory
---
title: Predict lumber prices with Ready Signal and time series forecasts
description: Use Ready Signal to add external control data, such as census and weather data, to improve time series predictions.
---

# Predict lumber prices with Ready Signal and time series forecasts {: #predict-lumber-prices-with-ready-signal-and-time-series-forecasts }

[Access this AI accelerator on GitHub <span style="vertical-align: sub">:material-arrow-right-circle:{.lg }</span>](https://github.com/datarobot-community/ai-accelerators/blob/main/advanced-experimentation/Ready_Signal_TS/DataRobot_RXA.ipynb){ .md-button }

In this accelerator, you will explore how to bring in external data from Ready Signal to help augment your time series forecasting accuracy. Ready Signal is an AI-powered data platform that provides access to over 500 normalized, aggregated, and automatically updated data sources for predictive modeling, experimentation, business intelligence, and other data enrichment needs. The data catalog includes micro- and macro-economic indicators, labor statistics, demographics, weather, and more. Its AI recommendation engine and auto feature engineering capabilities make it easy to integrate with existing data pipelines and analytics tooling, accelerating and enhancing how relevant third-party data is leveraged.

Here, DataRobot provides an example of predicting lumber prices combined with the most relevant external data, automatically identified by Ready Signal based on correlation with the target variable. The workflow can be applied to any time series forecasting project. If you are interested in learning more about how Ready Signal and DataRobot together can help your time series project, please reach out to Matt Schaefer (matt.schaefer@readysignal.com) or anyone else in the author list.
ready-signal
---
title: Use Gramian angular fields to improve datasets
description: Generate advanced features used for high frequency data use cases.
---

# Use Gramian angular fields to improve datasets {: #use-gramian-angular-fields-to-improve-datasets }

[Access this AI accelerator on GitHub <span style="vertical-align: sub">:material-arrow-right-circle:{.lg }</span>](https://github.com/datarobot-community/ai-accelerators/blob/main/game-changer/high_freq_data_to_images/high_frequency_data_classification_using_gramian_angular_fields.ipynb){ .md-button }

Prerequisites: [PYTS library](https://pyts.readthedocs.io/){ target=_blank }

Traditional feature engineering methods like time-aware aggregation and spectrograms can have limitations. Spectrograms cannot capture the correlations between each segment of the signal and the other segments, and if you try to capture them with tabular aggregates, it becomes a high-dimensionality problem.

Gramian Angular Field images of signal data solve this problem: they encode the signal as a matrix that can easily be used with computer vision models, without the limitations of dimensionality.
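The transformation itself is compact. Below is a minimal sketch of a summation-type Gramian Angular Field in plain NumPy, following the standard formulation (rescale the signal to [-1, 1], take the arccos, then the cosine of pairwise angle sums); the PYTS prerequisite provides the same transform as `pyts.image.GramianAngularField`. The 64-sample sine wave is a made-up test signal.

```python
import numpy as np

def gramian_angular_field(x):
    """Summation-type Gramian Angular Field of a 1-D signal."""
    # Rescale the signal to [-1, 1] so arccos is defined everywhere.
    x = np.asarray(x, dtype=float)
    x = 2 * (x - x.min()) / (x.max() - x.min()) - 1
    phi = np.arccos(x)  # polar-coordinate angle per time step
    # GASF[i, j] = cos(phi_i + phi_j): every pair of time steps is
    # compared, capturing the cross-segment correlations in one image.
    return np.cos(phi[:, None] + phi[None, :])

signal = np.sin(np.linspace(0, 4 * np.pi, 64))
image = gramian_angular_field(signal)  # a 64x64 "image" ready for a CNN
```

Because every entry compares a pair of time steps, the resulting image encodes all pairwise temporal correlations at once, which is exactly what a single spectrogram segment cannot do.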
gramian
---
title: End-to-end ML workflow with Google Cloud Platform and BigQuery
description: Use Google Colaboratory to source data from BigQuery, build and evaluate a model using DataRobot, and deploy predictions from that model back into BigQuery and GCP.
---

# End-to-end ML workflow with Google Cloud Platform and BigQuery {: #end-to-end-ml-workflow-with-google-cloud-platform-and-bigquery }

[Access this AI accelerator on GitHub <span style="vertical-align: sub">:material-arrow-right-circle:{.lg }</span>](https://github.com/datarobot-community/ai-accelerators/blob/main/end-to-end/GCP%20DataRobot%20End%20To%20End.ipynb){ .md-button }

DataRobot can integrate directly into your GCP environment, helping to accelerate your use of machine learning across all of the GCP services. In this notebook accelerator, you can use Google Colaboratory or another notebook environment to source data from BigQuery, build and evaluate an ML model using DataRobot, and deploy predictions from that model back into BigQuery and GCP.

This accelerator covers the following:

1. **Prepare data and ensure connectivity:** In the first section of the notebook, you will load a sample dataset to be used for modeling into BigQuery. Once complete, you will connect your BigQuery data with DataRobot.
2. **Build and evaluate a model:** Using the DataRobot Python API, you will have DataRobot build close to 50 different machine learning models while also evaluating how those models perform on this dataset.
3. **Scoring and hosting:** In the final section, the entire dataset will be scored on the new model with prediction data written back to BigQuery for use in your GCP applications.
ml-gcp
---
title: End-to-end modeling workflow with Azure
description: Use data stored in Azure to train a collection of models on DataRobot.
---

# End-to-end modeling workflow with Azure {: #end-to-end-modeling-workflow-with-azure}

[Access this AI accelerator on GitHub <span style="vertical-align: sub">:material-arrow-right-circle:{.lg }</span>](https://github.com/datarobot-community/ai-accelerators/blob/main/end-to-end/Azure_End_to_End.ipynb){ .md-button }

DataRobot offers an in-depth API that allows you to produce fully automated workflows in your coding environment of choice. This accelerator shows how to enable end-to-end processing of data stored natively in Azure. In this notebook you'll see how data stored in Azure can be used to train a collection of models on DataRobot. You'll then deploy a recommended model and use DataRobot's batch prediction API to produce predictions and write them back to the source Azure container.

This accelerator notebook covers the following activities:

* Acquire a training dataset from an Azure storage container
* Build a new DataRobot project
* Deploy a recommended model
* Score via DataRobot's batch prediction API
* Write results back to the source Azure container
ml-azure
--- title: Use feature engineering and Visual AI with acoustic data description: Generate image features in addition to aggregate numeric features for high frequency data sources. --- # Use feature engineering and Visual AI with acoustic data {: #use-feature-engineering-and-visual-ai-with-acoustic-data } [Access this AI accelerator on GitHub <span style="vertical-align: sub">:material-arrow-right-circle:{.lg }</span>](https://github.com/datarobot-community/ai-accelerators/blob/main/game-changer/high_freq_data_to_images/high_frequency_data_classification_using_spectrograms_n_numerics/high_frequency_classification_spectrograms_n_numerics.ipynb){ .md-button } The density of high frequency data presents a challenge for standard machine learning workflows, which lack the specialized feature engineering techniques needed to condense the signal and extract what makes it unique. DataRobot's multimodal input capability supports using numeric and image features simultaneously. For this use case, that is particularly beneficial: descriptive spectrograms let you apply well-established computer vision techniques to complex signal data. This example notebook shows how to generate image features and aggregate numeric features for high frequency data sources. This approach converts audio wav files from the time domain into the frequency domain to create several types of spectrograms. Statistical numeric features computed from the converted signal add additional descriptors to aid classification of the audio source.
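The time-to-frequency conversion at the heart of this approach can be sketched with NumPy alone. This is a minimal illustration using a synthetic tone, not the notebook's actual implementation, which works on real wav files:

```python
import numpy as np

def spectrogram(signal, frame_size=256, hop=128):
    """Magnitude spectrogram: FFT of overlapping windowed frames,
    turning a 1-D time-domain signal into a 2-D image-like array."""
    window = np.hanning(frame_size)
    frames = [
        signal[start:start + frame_size] * window
        for start in range(0, len(signal) - frame_size + 1, hop)
    ]
    # rfft keeps only the non-negative frequency bins of each frame.
    return np.abs(np.fft.rfft(np.array(frames), axis=1)).T

# A synthetic 440 Hz tone sampled at 8 kHz stands in for a wav file.
sr = 8000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440.0 * t)
spec = spectrogram(tone)  # shape: (frequency_bins, time_frames)
```

Rendering `spec` (typically on a log scale) as an image produces the kind of spectrogram that Visual AI can consume alongside the aggregate numeric features.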
ml-viz
--- title: Gather churn prediction insights with the Streamlit app description: Use the Streamlit churn predictor app to present the drivers and predictions of your DataRobot model. --- # Gather churn prediction insights with the Streamlit app {: #gather-churn-prediction-insights-with-the-streamlit-app } [Access this AI accelerator on GitHub <span style="vertical-align: sub">:material-arrow-right-circle:{.lg }</span>](https://github.com/datarobot-community/ai-accelerators/tree/main/advanced-experimentation/Churn_app_Streamlit){ .md-button } This app serves as an example of how to present the drivers and predictions of your DataRobot model using a churn prediction use case. Building a churn predictor app using Streamlit and DataRobot is a great way to leverage the power of machine learning to improve customer retention. The first step in building a churn prediction model is to collect and prepare your data. This typically involves gathering data on your customers' behavior, demographics, and usage patterns. Once you have your data, you can upload it to DataRobot and let the platform do the rest. After training, DataRobot provides detailed insights into the model's performance, including feature importance, model validation, and accuracy metrics. Once you have a model that you're satisfied with, you can generate predictions on new data using DataRobot's prediction API. This workflow assumes that you have already generated these predictions and saved them as a CSV file. To create a Streamlit app for churn prediction, you will need to import the necessary libraries, including Pandas, NumPy, Streamlit, Plotly, and PIL. You can then read in your prediction data and set up your Streamlit app's page configuration. The app itself should allow users to specify criteria for viewing churn scores and top churn reasons. You can accomplish this using sliders and other interactive elements.
A full workflow for building a Streamlit app using DataRobot predictions can be found in the churn Streamlit app GitHub repository. This workflow can be adapted to present insights from other classification or regression models built in DataRobot.
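As a rough sketch, the slider-driven filtering the app performs amounts to subsetting the prediction CSV with pandas. The column names below are illustrative assumptions, not the repository's actual schema:

```python
import pandas as pd

# Assumed shape of the exported DataRobot predictions CSV.
preds = pd.DataFrame(
    {
        "customer_id": [101, 102, 103, 104],
        "churn_probability": [0.91, 0.15, 0.67, 0.42],
        "top_churn_reason": ["tenure", "price", "support", "usage"],
    }
)

# In the app, a Streamlit slider supplies this threshold at runtime.
threshold = 0.5
at_risk = (
    preds[preds["churn_probability"] >= threshold]
    .sort_values("churn_probability", ascending=False)
)
```

The app would then render `at_risk` as a table or Plotly chart, with the top churn reasons explaining each prediction.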
streamlit-app
--- title: Prepare and leverage image data with Databricks description: Import image files using Spark and prepare them into a data frame suitable for ingest into DataRobot. --- # Prepare and leverage image data with Databricks {: #prepare-and-leverage-image-data-with-databricks } [Access this AI accelerator on GitHub <span style="vertical-align: sub">:material-arrow-right-circle:{.lg }</span>](https://github.com/datarobot-community/ai-accelerators/blob/main/advanced-experimentation/image_data_databricks/Image%20Data%20Preparation.ipynb){ .md-button } DataRobot's Visual AI allows you to leverage images in your models just like any other type of data. In this accelerator, you will import image files using Spark and prepare them into a data frame suitable for ingest into DataRobot. Then you will leverage DataRobot through code to rapidly train and deploy a powerful multiclass image classifier. While there are other methods of ingesting image data into DataRobot, in this notebook you will encode the image data directly into the data frame using base64 encoding. This methodology allows you to keep all of the relevant data in a single data frame, and works well in a Databricks environment. The technique also extends to a wide variety of multimodal datasets. Dive in to go from Databricks image data to a deployed classifier.
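The encoding step can be sketched as follows. The file names and labels here are illustrative, and in Databricks the raw bytes would typically come from Spark's binary file data source rather than local reads:

```python
import base64

import pandas as pd

def encode_image(image_bytes: bytes) -> str:
    """Base64-encode raw image bytes so they fit in a string column."""
    return base64.b64encode(image_bytes).decode("utf-8")

# Stand-ins for bytes read from image files, e.g., open(path, "rb").read().
images = {
    "cat_001.jpg": b"\xff\xd8\xff\xe0cat-bytes",
    "dog_042.jpg": b"\xff\xd8\xff\xe0dog-bytes",
}

# One row per image: the encoded image plus its class label.
df = pd.DataFrame(
    {
        "image": [encode_image(data) for data in images.values()],
        "label": ["cat", "dog"],
    }
)
```

The resulting data frame keeps images and labels together in a single table, which is what makes the base64 approach described above convenient for ingest.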
image-databricks
--- title: End-to-end workflow with SAP HANA description: Learn how to programmatically build a model with DataRobot using SAP HANA as the data source. --- # End-to-end workflow with SAP HANA {: #end-to-end-workflow-with-sap-hana } [Access this AI accelerator on GitHub <span style="vertical-align: sub">:material-arrow-right-circle:{.lg }</span>](https://github.com/datarobot-community/ai-accelerators/blob/main/end-to-end/SAP_End_to_End/SAP_End_to_End.ipynb){ .md-button } This accelerator provides instructions on how to use DataRobot's Python client to build a workflow that will use an existing SAP HANA JDBC driver to: * Create credentials * Create the training data source * Create the predictions data source * Create a dataset used to train the models * Create a dataset used to make predictions * Create a project * Create a deployment * Make batch and real-time predictions * Show the total predictions made so far There is also a playbook at the end of this notebook that describes how to create the back-end SAP HANA database that will provide the data required.
ml-sap
--- title: Zero-shot text classification for error analysis description: Use zero-shot text classification with large language models (LLMs), focusing on its application in error analysis of supervised text classification models. --- # Zero-shot text classification for error analysis {: #zero-shot-text-classification-for-error-analysis } [Access this AI accelerator on GitHub <span style="vertical-align: sub">:material-arrow-right-circle:{.lg }</span>](https://github.com/datarobot-community/ai-accelerators/blob/main/advanced-experimentation/zero_shot_LMM/Zero%20Shot%20Text%20Classification%20for%20Error%20Analysis.ipynb){ .md-button } This AI Accelerator offers a deep dive into the use of zero-shot text classification for error analysis in machine learning models. This educational resource is an invaluable asset for those interested in enhancing their understanding and proficiency in the field of machine learning. Building on your existing knowledge and experience with the DataRobot automated machine learning platform, this notebook demonstrates the development of a text classification model. From there, turn your focus towards a crucial yet sometimes challenging aspect of machine learning: error analysis. Understanding why a supervised machine learning model incorrectly classifies certain examples can be a challenging task. The notebook introduces a novel methodology for identifying and understanding these errors using zero-shot text classification. In this accelerator, make use of three different zero-shot classification methods: Natural Language Inference (NLI), Embedding, and Conversational AI. The distinct capabilities of each method contribute to a comprehensive and enlightening error analysis process. Detailed within the notebook is a thorough explanation of the error analysis procedure. Regardless of your proficiency level in machine learning, the content is structured to cater to a wide range of readers.
The application of zero-shot text classification to error analysis could be a significant enhancement to your machine learning practice, particularly with DataRobot.
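Of the three methods, the embedding approach is the simplest to sketch: embed the document and each candidate label, then assign the label with the highest cosine similarity. The toy vectors below stand in for the output of a real sentence-embedding model:

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def zero_shot_classify(doc_vec, label_vecs):
    """Assign the label whose embedding is closest to the document's."""
    scores = {label: cosine(doc_vec, vec) for label, vec in label_vecs.items()}
    return max(scores, key=scores.get), scores

# Toy vectors stand in for the output of a real sentence-embedding model.
label_vecs = {
    "billing": np.array([1.0, 0.1, 0.0]),
    "technical": np.array([0.0, 1.0, 0.2]),
}
doc_vec = np.array([0.9, 0.2, 0.1])  # embedding of a misclassified example
label, scores = zero_shot_classify(doc_vec, label_vecs)
```

Comparing these zero-shot labels against the supervised model's errors is what surfaces systematic misclassification patterns.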
zero-shot
--- title: Tackle churn before modeling description: Discover the problem framing and data management steps required to successfully model for churn, using a B2C retail example and a B2B example based on DataRobot’s churn model. --- # Tackle churn before modeling {: #tackle-churn-before-modeling } [Access this AI accelerator on GitHub <span style="vertical-align: sub">:material-arrow-right-circle:{.lg }</span>](https://github.com/datarobot-community/ai-accelerators/tree/main/game-changer/churn_blog){ target=_blank } Customer retention is central to any successful business, and machine learning is frequently proposed as a way of addressing churn. It is tempting to dive right into a churn dataset, but improving outcomes requires correctly framing the problem. Doing so at the start will determine whether the business can take action based on the trained model and whether your hard work is valuable or not. This accelerator blog teaches the problem framing and data management steps required before modeling begins. It uses two examples to illustrate concepts: a B2C retail example, and a B2B example based on DataRobot’s internal churn model.
ml-churn
--- title: Tune blueprints for preprocessing and model hyperparameters description: Learn how to access, understand, and tune blueprints for both preprocessing and model hyperparameters. --- # Tune blueprints for preprocessing and model hyperparameters {: #tune-blueprints-for-preprocessing-and-model-hyperparameters} [Access this AI accelerator on GitHub <span style="vertical-align: sub">:material-arrow-right-circle:{.lg }</span>](https://github.com/datarobot-community/ai-accelerators/tree/main/advanced-experimentation/Hyperparameter_Optimization){ .md-button } In machine learning, hyperparameter tuning is the act of adjusting the "settings" (referred to as hyperparameters) in a machine learning algorithm, whether that's the learning rate for an XGBoost model or the activation function in a neural network. Many methods for doing this exist, with the simplest being a brute-force search over every feasible combination. While this requires little effort, it's extremely time-consuming as each combination requires fitting the machine learning algorithm. To this end, practitioners strive to find more efficient ways to search for the best combination of hyperparameters to use in a given prediction problem. DataRobot employs a proprietary version of pattern search to optimize not only the machine learning algorithm's hyperparameters, but also the data preprocessing needed to fit the algorithm, with the goal of quickly producing high-performance models tailored to your dataset. While the approach used at DataRobot is sufficient in most cases, you may want to build upon DataRobot's Autopilot modeling process with custom tuning methods. In this AI Accelerator, you will familiarize yourself with DataRobot's fine-tuning API calls to control DataRobot's pattern search approach as well as implement a modified brute-force grid-search for the text and categorical data pipeline and hyperparameters of an XGBoost model.
This accelerator serves as an introductory learning example that other approaches can be built from. Bayesian Optimization, for example, leverages a probabilistic model to judiciously sift through the hyperparameter space to converge on an optimal solution, and will be presented next in this accelerator bundle. As a best practice, wait until the model is in a near-finished state before searching for the best hyperparameters. Specifically, the following should already be finalized: - Training data (e.g., data sources) - Model validation method (e.g., group cross-validation, random cross-validation, or backtesting. How the problem is framed influences all subsequent steps, as it changes error minimization.) - Feature engineering (particularly, calculations driven by subject matter expertise) - Preprocessing and data transformations (e.g., word or character tokenizers, PCA, embeddings, normalization, etc.) - Algorithm type (e.g., GLM, tree-based, neural net) These decisions typically have a larger impact on model performance compared to adjusting a machine learning algorithm's hyperparameters (especially when using DataRobot, as the hyperparameters chosen automatically are quite competitive). This AI Accelerator teaches you how to access, understand, and tune blueprints for both preprocessing and model hyperparameters. You'll programmatically work with DataRobot advanced tuning, which you can then adapt to your other projects. You'll learn how to: * Prepare for tuning a model via the DataRobot API * Load a project and model for tuning * Set the validation type for minimizing errors * Extract model metadata * Get model performance * Review hyperparameters * Run a single advanced tuning session * Implement your own custom gridsearch for single and multiple models to evaluate
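The brute-force end of the spectrum is easy to sketch in plain Python. Here, `score_model` is a hypothetical stand-in for fitting a model and returning its validation error, which is the expensive step that DataRobot's advanced tuning performs for you:

```python
from itertools import product

# Hypothetical grid for an XGBoost-style model; every combination is one fit.
grid = {
    "learning_rate": [0.01, 0.05, 0.1],
    "max_depth": [3, 5, 7],
}

def score_model(params):
    """Stand-in for fitting a model and returning its validation error."""
    return abs(params["learning_rate"] - 0.05) + 0.01 * abs(params["max_depth"] - 5)

# Enumerate every combination in the grid and keep the lowest-error one.
names = list(grid)
candidates = [dict(zip(names, values)) for values in product(*grid.values())]
best = min(candidates, key=score_model)
```

With 3 values for each of 2 hyperparameters, this is 9 model fits; the cost grows multiplicatively with each added hyperparameter, which is why pattern search and Bayesian methods exist.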
opt-grid
--- title: Demand forecasting and retraining workflow description: Implement retraining policies with DataRobot MLOps demand forecast deployments. --- # Demand forecasting and retraining workflow {: #demand-forecasting-and-retraining-workflow } [Access this AI accelerator on GitHub <span style="vertical-align: sub">:material-arrow-right-circle:{.lg }</span>](https://github.com/datarobot-community/ai-accelerators/blob/main/end-to-end/Demand_forecasting_retraining/End_to_end_demand_forecasting_retraining.ipynb){ .md-button } This accelerator demonstrates retraining policies with DataRobot MLOps demand forecast deployments. This accelerator is another installment in a series on demand forecasting. The [first accelerator](demand-flow) focuses on handling common data and modeling challenges, identifies common pitfalls in real-life time series data, and provides helper functions to scale experimentation. The [second accelerator](cold-start) provides the building blocks for a cold start modeling workflow on series with limited or no history. They can be used as a starting point to create a model deployment for the app. The [third accelerator](ml-what-if) is a what-if app that allows you to adjust the values of certain known-in-advance variables to see how changes in those factors might affect the forecasted demand.
df-retrain
--- title: Customize lift charts description: Leverage popular Python packages with DataRobot's Python client to recreate and augment DataRobot's lift chart visualization. --- # Customize lift charts {: #customize-lift-charts } [Access this AI accelerator on GitHub <span style="vertical-align: sub">:material-arrow-right-circle:{.lg }</span>](https://github.com/datarobot-community/ai-accelerators/tree/main/advanced-experimentation/customizing_lift_charts){ .md-button } Ever wanted to plot more than 60 bins in DataRobot's lift chart? Ever needed to present this graphic with a specific color palette? Ever been required to display more information in the chart for regulatory reasons? In this AI Accelerator, leverage popular Python packages with DataRobot's Python client to recreate and augment DataRobot's lift chart visualization. These customizations allow you to: * Plot more than 60 bins in DataRobot's lift chart. * Present lift chart visualizations with a specific color palette. * Display more information in the chart. The steps demonstrated in the accompanying notebook are: 1. Connect to DataRobot 2. Create a DataRobot project 3. Run a single blueprint from the repository 4. Obtain predictions and actuals 5. Recreate DataRobot’s lift chart 6. Add customization to the lift chart
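The core of steps 4 and 5 is sorting rows by predicted value, splitting them into equal-sized bins, and averaging predicted and actual values per bin. A minimal sketch, with synthetic data standing in for DataRobot predictions:

```python
import numpy as np
import pandas as pd

def lift_chart_data(predicted, actual, bins=60):
    """Bin rows by predicted value; return mean predicted vs. actual per bin."""
    df = pd.DataFrame({"predicted": predicted, "actual": actual})
    df = df.sort_values("predicted").reset_index(drop=True)
    # Assign each sorted row to one of `bins` equal-sized bins.
    df["bin"] = np.floor(np.arange(len(df)) * bins / len(df)).astype(int)
    return df.groupby("bin")[["predicted", "actual"]].mean()

# Synthetic predictions and noisy actuals stand in for real model output.
rng = np.random.default_rng(0)
predicted = rng.random(1000)
actual = predicted + rng.normal(0, 0.1, 1000)
chart = lift_chart_data(predicted, actual, bins=100)  # more than 60 bins
```

Plotting the two columns of `chart` with matplotlib or Plotly reproduces the familiar predicted-vs-actual lift chart, now with as many bins and whatever styling you need.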
custom-lift-chart
--- title: Cold start demand forecasting workflow description: This accelerator provides a framework to compare several approaches for cold start modeling on series with limited or no history. --- # Cold start demand forecasting workflow {: #cold-start-demand-forecasting-workflow} [Access this AI accelerator on GitHub <span style="vertical-align: sub">:material-arrow-right-circle:{.lg }</span>](https://github.com/datarobot-community/ai-accelerators/blob/main/end-to-end/Demand_forecasting_cold_start/End_to_end_demand_forecasting_cold_start.ipynb){ .md-button } The cold start demand forecasting problem refers to the challenge of predicting future demand for a new product or service with little or no historical sales data available. This situation typically arises when a company introduces a new product or service to the market, or launches a product in a store where it has not been sold before (even if it is already sold in other stores), leaving no past data available for training a machine learning model to predict future demand. In traditional demand forecasting, historical sales data is used to train a machine learning model that can predict future demand. However, in the case of a new product, there is no historical data available. This presents a significant challenge because accurate demand forecasting is critical for making informed decisions about inventory, pricing, and marketing strategies. This second accelerator of a three-part series on demand forecasting provides the building blocks for a cold start modeling workflow on series with limited or no history, and a framework to compare several approaches for cold start modeling. The previous notebook aims to inspect and handle common data and modeling challenges, identifies common pitfalls in real-life time series data, and provides helper functions to scale experimentation.
The dataset consists of 50 series (46 SKUs across 22 stores) over a 2 year period with varying series history, typical of a business releasing and removing products over time. The test dataset contains 20 additional series with little or no history which were not present in the training dataset.
cold-start
--- title: Netlift modeling workflow description: Leverage machine learning to find patterns around the types of people for whom marketing campaigns are most effective. --- # Netlift modeling workflow {: #netlift-modeling-workflow} [Access this AI accelerator on GitHub <span style="vertical-align: sub">:material-arrow-right-circle:{.lg }</span>](https://github.com/datarobot-community/ai-accelerators/blob/main/game-changer/uplift_modeling/uplift_modeling.ipynb){ .md-button } Uplift modeling, also referred to as "netlift" modeling, is an approach used often in marketing to isolate the impact of a marketing campaign on specific prospective customers’ propensity to purchase something. The underlying example in this DataRobot AI Accelerator is exactly that, but more generally this approach could be used to isolate the impact of any “intervention” on the propensity of any positive response. The key challenge in uplift modeling is to isolate the effect of the campaign, because no individual person can be observed both receiving the campaign and not receiving the campaign. The accelerator addresses this key challenge, as well as other tips and tricks for uplift modeling. In many cases, the historical strategy for determining who received a campaign targeted those already likely to purchase the product (or generally, produce a favorable response). That approach would suggest a simple trend that receiving the campaign increases the likelihood to purchase, but many other features about the customers may be confounding the isolated impact of the campaign. In fact, it's possible that a campaign that targeted already high-probability buyers actually reduced their probability of purchase. These are the so-called "sleeping dogs" in marketing lingo.
From an ROI standpoint, increasing the probability to purchase on one group of prospects from 25% to 50% is just as valuable as increasing that probability on another group from 50% to 75% (assuming the groups are roughly the same size, with the same expected revenue values). So what you're really trying to ask from machine learning models is this: on which prospective customers will the campaign increase the probability of purchase by the greatest amount? This accelerator uses a generic dataset where the favorable outcome is binary: whether or not a product was purchased. The "treatment", or campaign, is simple: a single campaign type sent randomly to some prospective buyers. The accelerator also discusses how these methods can be extrapolated to the common case where there was selection bias in the campaign. Leverage machine learning to find patterns around the types of people for whom the campaign is most effective, controlling for their baseline likelihood to purchase in the case that they don't see a campaign. Uplift use cases require some additional post-processing to extract and evaluate the "uplift score", and thus this use case is an ideal candidate for leveraging the DataRobot programmatic API to seamlessly integrate powerful machine learning with one's typical coding pipeline. The provided Jupyter notebook reinforces the following concepts and strategies: 1. Data formatting tricks to extract the most from your uplift models. 2. How to leverage DataRobot's API to integrate powerful machine learning into your code-first pipelines. 3. How to extract uplift scores from a single, binary classification model. 4. How to evaluate and understand those uplift scores, and their implied ROI. 5. Considerations for cases where your historical training data exhibits selection bias (that is, the campaign was not randomly sent).
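Extracting an uplift score from a single binary classifier amounts to scoring every prospect twice, once with the treatment flag on and once with it off, and taking the difference. The sketch below uses a hypothetical `predict_purchase` function (with an assumed response shape) in place of a trained model:

```python
import numpy as np

def predict_purchase(x, treated):
    """Hypothetical stand-in for a trained binary classifier that includes
    the treatment flag as a feature; returns probability of purchase."""
    base = 1 / (1 + np.exp(-x))  # baseline propensity to purchase
    lift = 0.15 * (1 - base)     # assumed: campaign helps low-propensity buyers most
    return np.clip(base + treated * lift, 0.0, 1.0)

x = np.array([-2.0, 0.0, 2.0])  # feature scores for three prospects

# Score each prospect under both counterfactuals.
p_treated = predict_purchase(x, treated=1)
p_control = predict_purchase(x, treated=0)
uplift = p_treated - p_control  # the uplift score per prospect

# Target prospects in order of decreasing uplift, not decreasing propensity.
targeting_order = np.argsort(-uplift)
```

Note how the ranking differs from targeting by raw purchase probability: the high-propensity prospect would buy anyway, so the campaign budget is better spent where `uplift` is largest.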
ml-uplift
--- title: Monitor AWS SageMaker models with MLOps description: Train and host a SageMaker model that can be monitored in the DataRobot platform. --- # Monitor AWS SageMaker models with MLOps {: #monitor-aws-sagemaker-models-with-mlops } [Access this AI accelerator on GitHub <span style="vertical-align: sub">:material-arrow-right-circle:{.lg }</span>](https://github.com/datarobot-community/ai-accelerators/tree/main/end-to-end/monitor_sagemaker_model_in_DataRobot){ .md-button } DataRobot MLOps provides a central hub to deploy, monitor, manage, and govern all your models in production. You can deploy models to the production environment of your choice and continuously monitor the health and accuracy of your models, among other metrics. AWS SageMaker is a fully managed service that allows data scientists and developers to build, train, and deploy machine learning models. DataRobot MLOps, with its AWS SageMaker integration, provides an end-to-end solution for managing machine learning models at scale. You can easily monitor the performance of your machine learning models in real time, and quickly identify and resolve any issues that arise.
aws-mlops
--- title: Deploy a model in AWS SageMaker description: Learn how to programmatically build a model with DataRobot and export and host the model in AWS SageMaker --- # Deploy a model in AWS SageMaker {: #deploy-a-model-in-aws-sagemaker } [Access this AI accelerator on GitHub <span style="vertical-align: sub">:material-arrow-right-circle:{.lg }</span>](https://github.com/datarobot-community/ai-accelerators/tree/main/end-to-end/sagemaker_deployment){ .md-button } In this accelerator, you deploy a model that has been built in DataRobot to AWS SageMaker. If you already use SageMaker for hosting models, you can still make use of the powerful features of DataRobot, including AutoML and time series modeling. You can integrate DataRobot into your existing deployment processes. Likewise, you can use this workflow to deploy a DataRobot-built model into another type of environment. In this accelerator you will follow the manual steps that are [outlined in DataRobot's documentation](sc-sagemaker#use-scoring-code-with-aws-sagemaker), programmatically build a model with DataRobot, and export and host the model in AWS SageMaker. To assist with the setup of AWS services to run the model, this code provisions any extra items that you may not have set up yet. Review the lists below of what is created in this AI accelerator. ### AWS * ECR Repository * S3 Bucket * IAM Role for SageMaker * SageMaker inference model * SageMaker endpoint configuration * SageMaker endpoint (for real time predictions) * SageMaker batch transform job (for batch predictions) ### DataRobot * DataRobot AutoML Project * DataRobot AutoML Models * Scoring Code JAR file of AutoML Model Once you have run through the code, you will see how you can leverage the power of DataRobot's automated machine learning capabilities to train a model and then make use of the power of AWS to deploy and host that model in SageMaker.
deploy-sagemaker
--- title: Track ML experiments with MLFlow description: Automate machine learning experimentation using DataRobot, MLFlow, and Papermill. --- # Track ML experiments with MLFlow {: #track-ml-experiments-with-mlflow } [Access this AI accelerator on GitHub <span style="vertical-align: sub">:material-arrow-right-circle:{.lg }</span>](https://github.com/datarobot-community/ai-accelerators/tree/main/advanced-experimentation/MLFLOW){ .md-button } Experimentation is a mandatory activity in any machine learning developer’s day-to-day work. For time series projects, the number of parameters and settings to tune for achieving the best model is in itself a vast search space. Many of the experiments in time series use cases are common and repeatable, and tracking these experiments and logging results is a task that needs streamlining. Manual errors and time limitations may lead to selection of suboptimal models, leaving better models lost in global minima. The integration of the DataRobot API, Papermill, and MLFlow automates machine learning experimentation so that it becomes easier, more robust, and easy to share. As illustrated below, you will use the [orchestration notebook](https://github.com/datarobot-community/ai-accelerators/blob/main/advanced-experimentation/MLFLOW/orchestration_notebook.ipynb) to design and run the [experiment notebook](https://github.com/datarobot-community/ai-accelerators/blob/main/advanced-experimentation/MLFLOW/experiment_notebook.ipynb), with the permutations of parameters handled automatically by DataRobot. At the end of the experiments, copies of the experiment notebook will be available, with the outputs for each permutation for collaboration and reference. ![](images/mlflow.png) You can review [the dependencies](https://github.com/datarobot-community/ai-accelerators/blob/main/advanced-experimentation/MLFLOW/requirements.txt) for the accelerator. This accelerator covers the following activities: * Acquiring a training dataset.
* Building a new DataRobot project. * Deploying a recommended model. * Scoring via Spark using DataRobot's exportable Java Scoring Code. * Scoring via DataRobot's Prediction API. * Reporting monitoring data to DataRobot's MLOps agent framework. * Writing results back to a new table.
mlflow