# Version 7.2.1 {: #version-721 }
_September 19, 2021_
The DataRobot v7.2.1 release includes some fixed issues in the DataRobot Self-Managed AI Platform. See the v7.2.0 release notes for:
* [Features introduced in v7.2.0](v7.2.0-aml)
## Issues fixed in v7.2.1 {: #issues-fixed-in-v721 }
The following issues have been fixed since Enterprise release v7.2.0:
### Enterprise {: #enterprise}
* EP-1724: Fixes a mongo-watch issue where the Replica Set (RS) would be kicked out of MongoDB during the initial RS setup.
* EP-1307: Comments out `PYTHON3_SERVICES` in example configs for cluster installs so that it can be appropriately enabled for new installs where supported.
### Platform {: #platform}
* PLT-4386: Adds the `./sbin/datarobot-manage-queue` script, which allows recovering stuck jobs after a database backup and restore in enterprise environments.
### MLOps {: #mlops}
* RAPTOR-6334: Fixes an issue so that execution environment version responses in the API include non-null links to the environment version's artifacts only if they are downloadable.
* RAPTOR-6267: Fixes an issue when applying a database migration on upgrade from 7.1 to 7.2 when storage abstraction is not available.
_All product and company names are trademarks™ or registered® trademarks of their respective holders. Use of them does not imply any affiliation with or endorsement by them_.
|
v7.2.1-aml
|
# Version 7.2.8 {: #version-728 }
_January 26, 2022_
The DataRobot v7.2.8 release includes some fixed issues in the DataRobot Self-Managed AI Platform. See the v7.2.0 release notes for:
* [Features introduced in v7.2.0](v7.2.0-aml)
## Issues fixed in v7.2.8 {: #issues-fixed-in-v728 }
The following issues have been fixed since Enterprise release v7.2.7:
### Enterprise {: #enterprise }
* EP-1366: `pngexport` no longer waits for storage to update.
### Platform {: #platform }
* PLT-5365: Fixes an issue where EMC ECS S3-compatible storage integration resulted in "SSL validation failed."
### Feature Discovery {: #feature-discovery }
* SAFER-4504: Fixes an issue where relationships could not be saved for some JDBC datasets in Feature Discovery projects.
_All product and company names are trademarks™ or registered® trademarks of their respective holders. Use of them does not imply any affiliation with or endorsement by them_.
|
v7.2.8-aml
|
---
title: Version 7.2.0 maintenance releases
description: Maintenance releases for the DataRobot Release 7.2 major release.
---
# Version 7.2.x maintenance releases {: #version-72x-maintenance-releases }
The following maintenance release notes include some fixed issues in the DataRobot Self-Managed AI Platform. See also the [features introduced in v7.2.0](v7.2/index), which was released _September 13, 2021_.
Version | Release date
------- | -----------
[v7.2.8](v7.2.8-aml) | _January 26, 2022_
[v7.2.7](v7.2.7-aml) | _December 21, 2021_
[v7.2.6](v7.2.6-aml) | _November 23, 2021_
[v7.2.5](v7.2.5-aml) | _November 9, 2021_
[v7.2.3](v7.2.3-aml) | _October 13, 2021_
[v7.2.2](v7.2.2-aml) | _September 28, 2021_
[v7.2.1](v7.2.1-aml) | _September 19, 2021_
[v7.2.0](v7.2.0-aml) | _September 13, 2021_
|
index
|
# Version 7.2.6 {: #version-726 }
_November 23, 2021_
The DataRobot v7.2.6 release does not include fixes for customer-impacting issues. See the v7.2.0 release notes for:
* [Features introduced in v7.2.0](v7.2.0-aml)
## Updated language localization {: #updated-language-localization }
Localization of the documentation has been updated to include Japanese content for all new 7.2 features.
To change languages, open your [user settings](getting-help) and select a language for your session. The selection remains until you reset it.

_All product and company names are trademarks™ or registered® trademarks of their respective holders. Use of them does not imply any affiliation with or endorsement by them_.
|
v7.2.6-aml
|
# Version 7.2.2 {: #version-722 }
_September 28, 2021_
The DataRobot v7.2.2 release includes some fixed issues in the DataRobot Self-Managed AI Platform. See the v7.2.0 release notes for:
* [Features introduced in v7.2.0](v7.2.0-aml)
## Issues fixed in v7.2.2 {: #issues-fixed-in-v722 }
The following issues have been fixed since Enterprise release v7.2.1:
### Feature Discovery {: #feature-discovery}
* SAFER-4224: Fixes an issue causing EDA to become deadlocked when using Python 3 and the fork start method. This fix bumps the version of the joblib external dependency to pick up a fix for the semaphore tracker and changes Python 3 to use the forkserver process start method.
### Predictions {: #predictions}
* CODEGEN-1097: Fixes an issue where old, unused JAR files were installed; they have been removed from the installation.
_All product and company names are trademarks™ or registered® trademarks of their respective holders. Use of them does not imply any affiliation with or endorsement by them_.
|
v7.2.2-aml
|
# Version 7.3.5 {: #version-735 }
_February 18, 2022_
The DataRobot v7.3.5 release includes some fixed issues in the DataRobot Self-Managed AI Platform. See the v7.3.0 release notes for:
* [Features introduced in v7.3.0](v7.3.0-aml)
## Updated language localization {: #updated-language-localization }
Localization of the documentation has been updated to include Japanese content for all new 7.3 features.
To change languages, open your [user settings](getting-help) and select a language for your session. The selection remains until you reset it.

## Issues fixed in v7.3.5 {: #issues-fixed-in-v735 }
The following issues have been fixed since Enterprise release v7.3.4:
### Feature Discovery {: #feature-discovery }
* SAFER-4717: Fixes an issue causing an OOM error for batch prediction jobs in Feature Discovery projects with Snowflake secondary datasets configured.
### Predictions {: #predictions }
* PRED-7092: Fixes an issue with duplicated headers in output prediction files from AI Catalog intake.
* PRED-7028: The BigQuery output adapter in batch predictions now uses the existing table schema when inserting rows.
### Time series {: #time-series }
* TIME-10099: Fixes an issue with weights and early stopping in multiclass projects. Only LightGBM- and GBM-based models were affected.
_All product and company names are trademarks™ or registered® trademarks of their respective holders. Use of them does not imply any affiliation with or endorsement by them_.
|
v7.3.5-aml
|
# Version 7.3.3 {: #version-733 }
_January 17, 2022_
The DataRobot v7.3.3 release includes some fixed issues in the DataRobot Self-Managed AI Platform. See the v7.3.0 release notes for:
* [Features introduced in v7.3.0](v7.3.0-aml)
## Issues fixed in v7.3.3 {: #issues-fixed-in-v733 }
The following issues have been fixed since Enterprise release v7.3.2:
### Platform {: #platform }
* PLT-5635: Fixes an issue affecting LDAP group mapping when an update was made to the system-level group's settings or permissions after creation.
* PLT-5365: Fixes an issue where EMC ECS S3-compatible storage integration resulted in an "SSL validation failed" error.
### Feature Discovery {: #feature-discovery }
* SAFER-4504: Fixes an issue where relationships could not be saved for some JDBC datasets in Feature Discovery projects.
### Predictions {: #predictions }
* CODEGEN-1253: Replaces log4j in Scoring Code JARs with slf4j-simple.
_All product and company names are trademarks™ or registered® trademarks of their respective holders. Use of them does not imply any affiliation with or endorsement by them_.
|
v7.3.3-aml
|
# Version 7.3.1 {: #version-731 }
_December 16, 2021_
The DataRobot v7.3.1 release includes some fixed issues in the DataRobot Self-Managed AI Platform. See the v7.3.0 release notes for:
* [Features introduced in v7.3.0](v7.3.0-aml)
## Issues fixed in v7.3.1 {: #issues-fixed-in-v731 }
The following issues have been fixed since Enterprise release v7.3.0:
### Platform {: #platform }
* ARCH-3374: MongoDB migrations now support an analyze function to estimate collections impacted for each migration, with results shown on the command line to the cluster admin prior to executing the migration. A stats command was implemented to gather the full database statistics for collection sizes.
* MODEL-7371: Fixes a bug in grid search that required square brackets for search.
## DataRobot Response to Log4j Vulnerability & Recommended Next Steps {: #datarobot-response-to-log4j-vulnerability-recommended-next-steps }
On December 10, 2021, DataRobot became aware of a vulnerability in the widely used logging library Log4j (CVE-2021-44228) for Java-based applications, which is impacting enterprise applications and cloud services around the world. In response to this vulnerability, DataRobot immediately assembled a cross-functional team to assess the scope of the vulnerability and begin implementing steps for remediation.
Security is a foundational element of an Enterprise AI Platform. We have completed a thorough investigation and currently have no indication of any successful exploitation of this vulnerability in our Managed Cloud environment. We identified remediations and deployed fixes for all of our customers on DataRobot Managed Cloud as of December 12, 2021 and DataRobot DataPrep as of December 13, 2021.
If you currently do not use DataRobot Scoring Code, MLOps Monitoring Agents, or Portable Prediction Servers (PPS) with MLOps Monitoring enabled, no further action is required by you.
For customers using any of the above features, the Log4j vulnerability may continue to exist in any previously generated artifact. As general guidance, please follow the Apache Security Advisory for Log4j for mitigation. If you need further details, please review the appendix for specific mitigation steps on DataRobot artifacts to help address these risks.
Please do not hesitate to reach out to your account team or email support@datarobot.com if we can assist you in any way.
As always, thank you for including DataRobot as a cornerstone in your AI transformation. We will provide updates on Log4j if we have new information relevant to you. For now, we wish you the happiest of holiday seasons.
_All product and company names are trademarks™ or registered® trademarks of their respective holders. Use of them does not imply any affiliation with or endorsement by them_.
|
v7.3.1-aml
|
# Version 7.3.4 {: #version-734 }
_February 7, 2022_
The DataRobot v7.3.4 release includes some fixed issues in the DataRobot Self-Managed AI Platform. See the v7.3.0 release notes for:
* [Features introduced in v7.3.0](v7.3.0-aml)
## Issues fixed in v7.3.4 {: #issues-fixed-in-v734 }
The following issues have been fixed since Enterprise release v7.3.3:
### Enterprise {: #enterprise }
* EP-1965: Updates Patroni to use md5 password encryption for backward compatibility.
### Platform {: #platform }
* PLT-5899: Fixes an LDAP group mapping issue that prevented users from signing in when `USER_AUTH_LDAP_SEARCH_BASE_DN` was configured.
### Predictions {: #predictions }
* PRED-6605: Fixes an issue where some error messages would not display when using batch predictions and GCP.
_All product and company names are trademarks™ or registered® trademarks of their respective holders. Use of them does not imply any affiliation with or endorsement by them_.
|
v7.3.4-aml
|
---
title: Version 7.3.x maintenance releases
description: Maintenance releases for the DataRobot Release 7.3 major release.
---
# Version 7.3.x maintenance releases {: #version-73x-maintenance-releases }
The following maintenance release notes include some fixed issues in the DataRobot Self-Managed AI Platform. See also the [features introduced in v7.3.0](v7.3/index), which was released _December 13, 2021_.
Version | Release date
------- | -----------
[v7.3.6](v7.3.6-aml) | _May 4, 2022_
[v7.3.5](v7.3.5-aml) | _February 18, 2022_
[v7.3.4](v7.3.4-aml) | _February 7, 2022_
[v7.3.3](v7.3.3-aml) | _January 17, 2022_
[v7.3.2](v7.3.2-aml) | _December 21, 2021_
[v7.3.1](v7.3.1-aml) | _December 16, 2021_
[v7.3.0](v7.3.0-aml) | _December 13, 2021_
|
index
|
# Version 7.3.6 {: #version-736 }
_May 4, 2022_
The DataRobot v7.3.6 release includes some fixed issues in the DataRobot Self-Managed AI Platform. See the v7.3.0 release notes for:
* [Features introduced in v7.3.0](v7.3.0-aml)
## Issues fixed in v7.3.6 {: #issues-fixed-in-v736 }
The following issues have been fixed since Enterprise release v7.3.5:
### Platform {: #platform }
* PLT-6264: Fixes error messaging when permanently deleting a user and their assets.
* PLT-6265: Fixes an issue causing permanently deleted projects to appear in search.
### Visual AI {: #visual-ai }
* VIZAI-3312: Fixes an issue affecting CSV and PNG export for confusion charts on external datasets.
### MLOps {: #mlops }
* RAPTOR-7478: Fixes date column handling for Compliance Docs for Custom Inference Models. Date columns are now excluded from Feature Fit and Feature Effects.
### Predictions {: #predictions }
* CODEGEN-1333: Removes the unused junit directory from the Scoring Code JAR files.
### Time series {: #time-series }
* TIME-9875: Resolves an issue causing the feature over time calculation to fail when the feature is categorical and has a unique value.
_All product and company names are trademarks™ or registered® trademarks of their respective holders. Use of them does not imply any affiliation with or endorsement by them_.
|
v7.3.6-aml
|
# Version 7.3.2 {: #version-732 }
_December 21, 2021_
The DataRobot v7.3.2 release includes some fixed issues in the DataRobot Self-Managed AI Platform. See the v7.3.0 release notes for:
* [Features introduced in v7.3.0](v7.3.0-aml)
## Issues fixed in v7.3.2 {: #issues-fixed-in-v732 }
The following issues have been fixed since Enterprise release v7.3.1:
### Business Operations {: #business-operations }
* BOPS-2533: Fixes an issue affecting the distribution chart for regression models on the prediction result page in the AI App Builder.
* BOPS-2576: Fixes an issue causing created applications to not display on the **Applications** page.
### MLOps {: #mlops }
* MMM-8649: Fixes an issue with deployment reports failing for clusters running with `PYTHON3_SERVICES` enabled.
_All product and company names are trademarks™ or registered® trademarks of their respective holders. Use of them does not imply any affiliation with or endorsement by them_.
|
v7.3.2-aml
|
# Version 7.1.4 {: #version-714 }
_December 21, 2021_
The DataRobot v7.1.4 release includes some fixed issues in the DataRobot Self-Managed AI Platform. See the v7.1.0 release notes for:
* [Features introduced in v7.1.0](v7.1.0-aml)
## Issues fixed in v7.1.4 {: #issues-fixed-in-v714 }
The following issues have been fixed since Enterprise release v7.1.3:
### Enterprise {: #enterprise }
* EP-2133: Updates the Scoring Code build plugins to avoid the inclusion of vulnerable log4j versions.
### Platform {: #platform }
* PLT-5079: Fixes an issue in the Azure storage backend affecting the speed of prediction data processing for external deployments.
### Custom Models {: #custom-models }
* RAPTOR-6334: Execution environment version responses in the API now include non-null links to the environment version's artifacts only if they are downloadable.
* RAPTOR-7296: Fixes an issue that caused transformed data to be sent to custom models during prediction explanation initialization.
### Predictions {: #predictions }
* CODEGEN-1097: Fixes an issue where old, unused JAR files were installed; they have been removed from the installation.
### Feature Discovery {: #feature-discovery }
* SAFER-4224: Fixes an issue with EDA getting deadlocked when using Python 3 and the fork start method.
### Time Series {: #time-series }
* TIME-9425: Fixes an issue that caused a blank page when accessing smart-sampled OTV projects with custom backtest settings.
### Trust & Explainability {: #trust-explainability }
* TREX-282: Fixes an issue with a database migration that could cause some models to lose their Lift Chart data during the Self-Managed AI Platform upgrade process.
* TREX-296: Fixes an issue with a database migration that could cause some models to lose their Word Cloud data during the Self-Managed AI Platform upgrade process.
_All product and company names are trademarks™ or registered® trademarks of their respective holders. Use of them does not imply any affiliation with or endorsement by them_.
|
v7.1.4-aml
|
# Version 7.1.2 {: #version-712 }
_August 2, 2021_
The DataRobot v7.1.2 release includes some fixed issues in the DataRobot Self-Managed AI Platform. See the v7.1.0 release notes for:
* [Features introduced in v7.1.0](v7.1.0-aml)
## Updated language localization {: #updated-language-localization }
Localization of the documentation has been updated to include Japanese content for all new 7.1 features.
To change languages, open your [user settings](getting-help) and select a language for your session. The selection remains until you reset it.

## Issues fixed in v7.1.2 {: #issues-fixed-in-v712 }
The following issues have been fixed since Enterprise release v7.1.1:
### Enterprise {: #enterprise }
* EP-1226: Fixes an issue where `patroni` installs failed because nodes were not syncing.
* EP-1345: Fixes an issue that caused a `KeyError` during DB migration.
### Platform {: #platform }
* PRODSEC-234: Prevents containers from acquiring new privileges.
* XAI-4238: Fixes an issue when loading certain models with SHAP explainers that were built in DataRobot version 5.2.2, which failed to load after upgrading to version 6.3.
### Feature Discovery {: #feature-discovery }
* SAFER-3964: Fixes an issue where Feature Discovery projects created before DataRobot version 7.1 resulted in an error when running predictions.
### Time Series {: #time-series }
* TIME-8745: Allows `COMPUTE_TIMESERIES_AOT_THRESHOLD` to be configured from the environment.
_All product and company names are trademarks™ or registered® trademarks of their respective holders. Use of them does not imply any affiliation with or endorsement by them_.
|
v7.1.2-aml
|
---
title: Version 7.1.x maintenance releases
description: Maintenance releases for the DataRobot Release 7.1 major release.
---
# Version 7.1.x maintenance releases {: #version-71x-maintenance-releases }
The following maintenance release notes include some fixed issues in the DataRobot Self-Managed AI Platform. See also the [features introduced in v7.1.0](v7.1/index), which was released _June 14, 2021_.
Version | Release date
------- | -----------
[v7.1.4](v7.1.4-aml) | _December 21, 2021_
[v7.1.3](v7.1.3-aml) | _August 27, 2021_
[v7.1.2](v7.1.2-aml) | _August 2, 2021_
[v7.1.1](v7.1.1-aml) | _June 19, 2021_
|
index
|
# Version 7.1.1 {: #version-711 }
_June 19, 2021_
The DataRobot v7.1.1 release includes some fixed issues in the DataRobot Self-Managed AI Platform. See the v7.1.0 release notes for:
* [Features introduced in v7.1.0](v7.1.0-aml)
## Issues fixed in v7.1.1 {: #issues-fixed-in-v711 }
The following issues have been fixed since Enterprise release v7.1.0:
### Enterprise {: #enterprise }
* EP-1315: Adds a missing `eks-mlops.yaml` example config in `installer/example-configs/docker`.
* EP-1303: Fixes an issue where the DataRobot install on Azure instances failed the MongoDB version check.
* EP-1293: Updates the Platform Guide Installer Toolkit documentation for the 7.1 Install Guide.
### Platform {: #platform }
* SAFER-3796: Fixes an issue where the SQL Server Data Connection in the AI Catalog required authentication after saving credentials.
### Custom Models {: #custom-models }
* RAPTOR-5575: Adds _Julia_ as a programming language type when creating custom execution environments.
* RAPTOR-5306: Fixes an issue where task names were not displayed in blueprint diagrams.
### Predictions {: #predictions }
* PRED-6005: Fixes an issue when updating a Batch Prediction Job Definition.
_All product and company names are trademarks™ or registered® trademarks of their respective holders. Use of them does not imply any affiliation with or endorsement by them_.
|
v7.1.1-aml
|
# Version 7.1.3 {: #version-713 }
_August 27, 2021_
The DataRobot v7.1.3 release includes some fixed issues in the DataRobot Self-Managed AI Platform. See the v7.1.0 release notes for:
* [Features introduced in v7.1.0](v7.1.0-aml)
## Issues fixed in v7.1.3 {: #issues-fixed-in-v713 }
The following issues have been fixed since Enterprise release v7.1.2:
### Enterprise {: #enterprise }
* EP-1535: Fixes an issue with the map tile management workflow when Minio is used as storage on Docker-based installations.
* EP-1495: Sets `PYSPARK_PYTHON` and `PYSPARK_DRIVER_PYTHON` in datarobot-scoring to point to DataRobot's Python.
### Platform {: #platform }
* PLT-3318: Fixes an issue where a user with read-only access to deployments through RBAC was not able to view deployments shared with them as a _User_.
### MLOps {: #mlops }
* RAPTOR-5955: Fixes an issue where models did not retain the content of a previous version when changing the base environment in the Custom Model Workshop.
* RAPTOR-6069: Fixes an issue causing the custom models GitHub integration to fail with an SSL handshake error on verification of self-signed certificates. To resolve the SSL handshake error, configure DataRobot to allow self-signed SSL certificates: set `ALLOW_SELF_SIGNED_CERTS: True` in `config.yaml` under `app_configuration.drenv_override` and re-run `bin/datarobot install`.
* PRED-6240: Fixes an issue where drift tracking was not correctly set from the job-definitions UI (was previously `FALSE` by default).
### Business Operations {: #business-operations }
* BOPS-1289: Enables support for `ALLOW_SELF_SIGNED_CERTS` in the AI App Builder.
_All product and company names are trademarks™ or registered® trademarks of their respective holders. Use of them does not imply any affiliation with or endorsement by them_.
|
v7.1.3-aml
|
---
title: V9.0.3
description: DataRobot Release 9.0.3 release notes.
---
# V9.0.3 {: #v903 }
_June 30, 2023_
The DataRobot v9.0.3 release includes some improvements and fixed issues in the DataRobot Self-Managed AI Platform. See the v9.0.0 release notes for:
* [Features introduced in v9.0.0](v9.0/index)
Version 9.0.3 now supports Azure AKS v1.25 in addition to OpenShift 4.10 and AWS EKS with K8s v1.23.
## Issues fixed in v9.0.3 {: #issues-fixed-in-v903 }
The following issues have been fixed since release v9.0.2:
### MLOps {: #mlops }
- RAPTOR-9471: Fixes an issue making it impossible for custom models with training data to be shared with any other role except `consumer`.
### Predictions {: #predictions }
- PRED-8839: Fixes an issue causing Portable Prediction Servers to crash when requesting Prediction Explanations for models trained during Autopilot when cross-validation has finished.
_All product and company names are trademarks™ or registered® trademarks of their respective holders. Use of them does not imply any affiliation with or endorsement by them_.
|
v9.0.3-aml
|
---
title: V9.0.1
description: DataRobot Release 9.0.1 release notes.
---
# V9.0.1 {: #v901 }
_June 6, 2023_
The DataRobot v9.0.1 release includes some fixed issues in the DataRobot Self-Managed AI Platform. See the v9.0.0 release notes for:
* [Features introduced in v9.0.0](v9.0/index)
## Issues fixed in v9.0.1 {: #issues-fixed-in-v901 }
The following issues have been fixed since release v9.0.0:
### Data management {: #data-management }
- DM-5189: Supports listing tables for Athena.
- DM-9894: Removes the **Method not supported** error when using the Treasure Data JDBC driver.
### Predictions {: #predictions }
- CODEGEN-1670: Fixes an issue when detecting CSV line separators if the separator is `\r\n`.
- PRED-8594: Fixes an issue when setting the `columnNamesRemapping` property for a batch prediction job definition through a `PATCH` request.
### MLOps {: #mlops }
- AGENT-4155: Fixes an issue with force deletion of deployments in a Snowflake prediction environment.
- AGENT-4288: Updates the code snippet shown in external model deployments to not import a deprecated MLOps class (`OutputType`).
- AGENT-4253: Fixes an issue that prevented deployments created in the "default prediction environment" for enterprise installs from being listed in the prediction environment view.
- MMM-12255: Adds project and model information to User Activity Monitor entries for **Deployment Added** events.
- MMM-12467: Fixes the histogram order for targets in external binary classification deployments created with holdout predictions.
- MMM-12742: Marks manually triggered retraining policy runs as failed when a credential error is encountered.
- MMM-12898: Fixes the target histogram for external deployments built inside DataRobot.
- MMM-13255: Fixes the calculation of "Accuracy" and "Balanced accuracy" metrics on the **Model Management Accuracy** tab for models with thresholds other than 0.5.
- MMM-12449: Fixes an issue when enabling feature drift for unsupervised model deployments.
### Time series {: #time-series }
- TIME-12787: Fixes partitioning setup for the Segmentation to Clustering flow.
- TIME-12821: Removes misleading validation that restricted the insight computation to a maximum of 1,000 forecast distances in absolute numbers.
- TIME-13090: Fixes the backtesting section of the **Advanced options** for OTV projects that have datasets larger than the time series limit.
- TIME-13098: Fixes an issue affecting editing of holdout durations.
### Trust and Explainability {: #trust-and-explainability }
- TREX-3144: Fixes an issue when uploading a pairwise interactions CSV file in **Advanced options > Feature Constraints**.
_All product and company names are trademarks™ or registered® trademarks of their respective holders. Use of them does not imply any affiliation with or endorsement by them_.
|
v9.0.1-aml
|
---
title: V9.0 maintenance releases
description: Maintenance releases for the DataRobot Release 9.0 major release.
---
# V9.0 maintenance releases {: #v90-maintenance-releases }
The following maintenance release notes include some fixed issues in the DataRobot Self-Managed AI Platform. See also the [features introduced in v9.0.0](v9.0/index), which was released _March 29, 2023_.
Version | Release date
------- | ------
[v9.0.4](v9.0.4-aml) | *July 24, 2023*
[v9.0.3](v9.0.3-aml) | *June 30, 2023*
[v9.0.2](v9.0.2-aml) | *June 9, 2023*
[v9.0.1](v9.0.1-aml) | *June 6, 2023*
|
index
|
---
title: V9.0.2
description: DataRobot Release 9.0.2 release notes.
---
# V9.0.2 {: #v902 }
_June 9, 2023_
The DataRobot v9.0.2 release includes some fixed issues in the DataRobot Self-Managed AI Platform. See the v9.0.0 release notes for:
* [Features introduced in v9.0.0](v9.0/index)
## Issues fixed in v9.0.2 {: #issues-fixed-in-v902 }
The DataRobot v9.0.2 release addresses issues found in the version numbering of the v9.0.1 release. For a complete list of notes since the v9.0.0 release, see the [v9.0.1 release notes](v9.0.1-aml).
_All product and company names are trademarks™ or registered® trademarks of their respective holders. Use of them does not imply any affiliation with or endorsement by them_.
|
v9.0.2-aml
|
---
title: V9.0.4
description: DataRobot Release 9.0.4 release notes.
---
# V9.0.4 {: #v904 }
_July 24, 2023_
The DataRobot v9.0.4 release includes some fixed issues in the DataRobot Self-Managed AI Platform and issues found in the installation of the v9.0.3 release. For a complete list of notes since the v9.0.3 release, see the [v9.0.3 release notes](v9.0.3-aml). See the v9.0.0 release notes for:
* [Features introduced in v9.0.0](v9.0/index)
## Issues fixed in v9.0.4 {: #issues-fixed-in-v904 }
The following issues have been fixed since release v9.0.3:
### Data management {: #data-management }
- DM-10790: Fixes an issue when starting Celery as a non-root user with groups that do not exist.
_All product and company names are trademarks™ or registered® trademarks of their respective holders. Use of them does not imply any affiliation with or endorsement by them_.
|
v9.0.4-aml
|
# Version 7.0.2 {: #version-702 }
_April 26, 2021_
The DataRobot v7.0.2 release includes some fixed issues in the DataRobot Self-Managed AI Platform. See the v7.0.0 release notes for:
* [Features introduced in v7.0.0](v7.0.0-aml)
## Updated language localization {: #updated-language-localization }
Localization of the documentation has been updated to include Japanese content for all new 7.0 features.
To change languages, open your [user settings](getting-help) and select a language for your session. The selection remains until you reset it.

## Issues fixed in v7.0.2 {: #issues-fixed-in-v702 }
The following issues have been fixed since Enterprise release v7.0.1:
### Platform {: #platform }
* PLT-3268: Fixes an issue where some permissions were not applied in the UI when switching a user’s role.
* TED-2595: Fixes an issue causing the **Feature Details** chart to not render for **Cross-Class Data Disparity** for protected feature class labels containing underscores (`_`).
### Feature Discovery {: #feature-discovery }
* SAFER-3632: Fixes an issue with Feature Discovery projects by allowing the Spark configuration to be overridden from an environment variable to customize the application.
### Time series {: #time-series }
* TIME-7800: Fixes broken model exports for Eureqa GAM models.
_All product and company names are trademarks™ or registered® trademarks of their respective holders. Use of them does not imply any affiliation with or endorsement by them_.
|
v7.0.2-aml
|
---
title: Version 7.0.x maintenance releases
description: Maintenance releases for the DataRobot Release 7.0 major release.
---
# Version 7.0.x maintenance releases {: #version-70x-maintenance-releases }
The following maintenance release notes include some fixed issues in the DataRobot Self-Managed AI Platform. See also the [features introduced in v7.0.0](v7.0/index), which was released _March 15, 2021_.
Version | Release date
------- | -----------
[v7.0.3](v7.0.3-aml) | _December 21, 2021_
[v7.0.2](v7.0.2-aml) | _April 26, 2021_
[v7.0.1](v7.0.1-aml) | _March 29, 2021_
|
index
|
# Version 7.0.1 {: #version-701 }
_March 29, 2021_
The DataRobot v7.0.1 release includes some fixed issues in the DataRobot Self-Managed AI Platform. See the v7.0.0 release notes for:
* [Features introduced in v7.0.0](v7.0.0-aml)
## Issues fixed in v7.0.1 {: #issues-fixed-in-v701 }
The following issues have been fixed since Enterprise release v7.0.0:
### Enterprise {: #enterprise }
* EP-951: Fixes an issue when upgrading to the latest version of DataRobot with the TimescaleDB service used in 6.2 and older. Now, upgrades run as expected.
* EP-872: Allows ingestion of Parquet files via "Enable conversion of binary files inside worker process" when `read_only_containers` is `True`.
_All product and company names are trademarks™ or registered® trademarks of their respective holders. Use of them does not imply any affiliation with or endorsement by them_.
|
v7.0.1-aml
|
# Version 7.0.3 {: #version-703 }
_December 21, 2021_
The DataRobot v7.0.3 release includes some fixed issues in the DataRobot Self-Managed AI Platform. See the v7.0.0 release notes for:
* [Features introduced in v7.0.0](v7.0.0-aml)
## Issues fixed in v7.0.3 {: #issues-fixed-in-v703 }
The following issues have been fixed since Enterprise release v7.0.2:
### Enterprise {: #enterprise }
* EP-1062: Fixes a Python 3 incompatibility issue in the Boto library when connecting to AWS CloudWatch behind an HTTPS proxy.
* EP-1345: Fixes an issue that caused a `KeyError` during DB migration.
* EP-2133: Updates Scoring Code build plugins to avoid the inclusion of vulnerable log4j versions.
### Platform {: #platform }
* PLT-3709: Prevents the `ENABLE_DATA_ENGINE_ON_K8S` feature flag from appearing under Developer UI feature flags for on-prem.
* PLT-3812: Fixes an issue where delegation would not turn off.
### Custom Models {: #custom-models}
* RAPTOR-5240: Fixes an issue where custom model dependency management failed to parse Python package names containing a period (`.`).
* RAPTOR-5955: Fixes an issue in the Custom Model Workshop that created a new minor version with empty model content when changing the base environment. Now it properly retains the content of a previous version.
* RAPTOR-6069: Fixes an issue causing the custom models GitHub integration to fail with an SSL handshake error on verification of self-signed certificates.
### Trust & Explainability {: #trust-explainability }
* TED-2688: Fixes an issue where the **Bias vs Accuracy** page would not load when using a fairness metric that is independent of the model's prediction threshold.
* XAI-4106: Fixes the format of the `shapBaseValue` field in the v2 Predictions Public API response to be a scalar float value, which corresponds with other APIs and documentation.
* XAI-4238: Fixes an issue when loading certain models with SHAP explainers that were built in DataRobot version 5.2.2, which failed to load after upgrading to version 6.3.
### Time Series {: #time-series }
* TIME-8745: `COMPUTE_TIMESERIES_AOT_THRESHOLD` can now be configured from the environment.
_All product and company names are trademarks™ or registered® trademarks of their respective holders. Use of them does not imply any affiliation with or endorsement by them_.
|
v7.0.3-aml
|
---
title: Build models
description: Introduces the phases of building models, including Advanced options, building models, and managing projects.
---
# Build models {: #build-models }
These sections describe aspects of preparing to build, building, and managing models and projects:
Topic | Describes...
----- | ------
[Build models](build-basic/index) | Understand elements of the basic modeling workflow.
[Advanced options](adv-opt/index) | Set advanced modeling parameters prior to building.
**Reference** | :~~:
[Manage projects](manage-projects) | Use the project control center to manage models and projects, as well as export data.
[Generate AI Report](generate-ai-report) | Create a report of modeling results and insights.
[Export charts and data](export-results) | Download created insights.
[Exploratory Data Analysis](eda-explained) | Details of Exploratory Data Analysis (EDA), phases 1 and 2.
[Data partitioning and validation](data-partitioning) | Understand validation types and data partitioning methods.
[Modeling process details](model-ref) | Bits and pieces of the initial model building process.
[Optimization metrics](opt-metric) | Short descriptions of metrics available for model building.
[Worker Queue](worker-queue) | Manage queued and processing models.
|
index
|
---
title: Out-of-time validation modeling
description: Out-of-time validation (OTV) is a method of modeling time-relevant data using date/time partitioning.
---
# Out-of-time validation (OTV) {: #out-of-time-validation-otv }
Out-of-time validation (OTV) is a method for modeling time-relevant data. With OTV you are not forecasting, as with time series. Instead, you are predicting the target value on each individual row.
As with [time series](time/index) modeling, the underlying structure of OTV modeling is date/time partitioning. In fact, OTV _is_ date/time partitioning, with additional components such as sophisticated preprocessing and insights from the [Accuracy over Time](aot) graph.
To activate time-aware modeling, your dataset must contain a column with a [variable type “Date”](file-types#special-column-detection) for partitioning. If it does, the date/time partitioning feature becomes available through the **Set up time-aware modeling** link on the **Start** screen. After selecting a time feature, you can then use the [**Advanced options**](ts-adv-opt) link to further configure your model build.
The following sections describe the date/time partitioning workflow.
See these additional date/time partitioning considerations for [OTV](#feature-considerations) and [time series](ts-consider) modeling.
## Basic workflow {: #basic-workflow }
To build time-aware models:
1. Load your dataset (see the [file size requirements](file-types#time-series-file-sizes)) and select your target feature. If your dataset contains a date feature, the **Set up time-aware modeling** link activates. Click the link to get started.

2. From the dropdown, select the primary date/time feature. The dropdown lists all date/time features that DataRobot detected during EDA1.

3. After selecting a feature, DataRobot computes and then loads a histogram of the time feature plotted against the target feature (feature-over-time). Note that if your dataset qualifies for [multiseries modeling](multiseries), this histogram represents the average of the time feature values across all series plotted against the target feature.

4. Explore what other features look like over time to view trends and determine whether there are gaps in your data (which is a data flaw you need to know about). To access these histograms, expand a numeric feature, click the **Over Time** tab, and click **Compute Feature Over Time**:

You can interact with the **Over Time** chart in several ways, described [below](#understand-a-features-over-time-chart).
Finally, set the type of time-aware modeling to **Automated machine learning** and consider whether to change the default settings in [advanced options](#advanced-options). If you have time series modeling enabled, and want to use a method other than OTV, see the [time series workflow](ts-date-time).

## Advanced options {: #advanced-options }
Expand the **Show Advanced options** link to set details of the partitioning method. When you enable time-aware modeling, **Advanced options** opens to the date/time partitioning method by default. The **Backtesting** section of date/time partitioning provides tools for configuring backtests for your time-aware projects.

{% include 'includes/date-time-include-1.md' %}
{% include 'includes/date-time-include-2.md' %}
{% include 'includes/date-time-include-3.md' %}
### Understand a feature's Over Time chart {: #understand-a-features-over-time-chart }
{% include 'includes/date-time-include-4.md' %}
{% include 'includes/date-time-include-5.md' %}
{% include 'includes/date-time-include-6.md' %}
## Feature considerations {: #feature-considerations }
Consider the following when working with OTV. Additionally, see the documented [file requirements](file-types) for information on file size considerations.
!!! note
Considerations are listed newest first for easier identification.
{% include 'includes/dt-consider.md' %}
|
otv
|
---
title: Text AI resources
description: Provides links to Text AI resources available in DataRobot.
---
# Text AI resources {: #text-ai-resources }
Text AI in DataRobot allows you to seamlessly incorporate text data into your model without being a Natural Language Processing (NLP) expert and without injecting extra steps in the model building process. With models and preprocessing steps designed specifically for NLP, DataRobot supports _all languages_ from [ISO 639](https://en.wikipedia.org/wiki/List_of_ISO_639-2_codes){ target=_blank }, the set of standards for representing names for languages and language groups.
The tools available for working with text are described in the following sections.
Topic | Describes...
---|---
**Working with text** | :~~:
[Automated transformations](model-ref#automated-feature-transformations) | Learn about DataRobot's automated feature engineering for text, built to enhance model accuracy.
[Clustering based on text collections](clustering) | Use clustering for detecting topics, types, taxonomies, and languages in a text collection.
[Aggregation and imputation in time series projects](ts-data-prep#set-manual-options) | Set handling for text features in time series projects.
[Composable ML transformers](cml-blueprint-edit) | Edit model blueprints, including pre-trained transformers, to best represent text features.
**Model insights** | :~~:
[Coefficients](coefficients) | See how text-preprocessing transforms text found in a dataset into a form that can be used by a DataRobot model.
[Text Mining](analyze-insights#text-mining) | Display the most relevant words and short phrases in any variables detected as text.
[Word cloud](analyze-insights#word-clouds) | Display the most relevant words and short phrases found in your dataset in word cloud-format.
[Text Explanations](predex-text) | Visualize not only the text feature that is impactful, but also which specific words within a feature are impactful.
[Multilabel modeling for text categorization](multilabel) | Use multilabel classification for text categorization.
[Example: Capturing sentiment in text (link not live yet)](cml-ref/index) | See an example of uplifting a model by capturing sentiment in the text.
**Text-related feature announcements** | :~~:
[NLP Fine-Tuner blueprints](v8.0.0-aml#nlp-fine-tuner-blueprints-for-multi-modal-datasets-in-any-langauge) | Read about NLP Fine-Tuner blueprints.
[FastText for language detection](july2022-announce#nlp-autopilot-with-better-language-support-now-ga) | Read about FastText for language detection at data ingest.
[TinyBERT featurizer](v7.1.0-aml#tiny-bert-pretrained-featurizer-implementation-extends-nlp) | Read about using Google's Bidirectional Encoder Representations from Transformers (distilled version).
|
textai-resources
|
---
title: Specialized workflows
description: Leverage the support for alternative workflows for specialized data types such as anomaly detection, multilabel modeling, Visual and Location AI, and date/time partitioning.
---
# Specialized workflows {: #specialized-workflows }
The following sections describe alternative workflows for a variety of specialized data types:
Topic | Describes...
----- | ------
[Bias and Fairness](b-and-f/index) | Access an index page for quick links to all Bias and Fairness content.
[Composable ML](cml/index) | Build custom blueprints using built-in tasks and custom Python/R code.
[Location AI](location-ai/index) | Use geospatial analysis on spatial data.
[Unsupervised learning](unsupervised/index) | Work with unlabeled or partially labeled data to detect patterns, such as anomalies and clusters.
[Visual AI](visual-ai/index) | Apply visual learning to image data.
[Multilabel modeling](multilabel) | Perform modeling in which each row in a dataset is associated with one, several, or zero labels.
[OTV](otv) | Date/time partitioning for non-time series modeling.
[Text AI resources](textai-resources) | Access links to the DataRobot Text AI functionality for working with text and viewing insights.
|
index
|
---
title: Multilabel modeling
description: In DataRobot's multilabel modeling, each row in a dataset is associated with one, several, or zero labels.
---
# Multilabel modeling {: #multilabel-modeling }
!!! info "Availability information"
Availability of multilabel modeling is dependent on your DataRobot package. If it is not enabled for your organization, contact your DataRobot representative for more information.
Multilabel modeling is a kind of classification task that, while similar to [multiclass modeling](multiclass#work-with-multiclass-models), provides more flexibility. In multilabel modeling, [each row in a dataset](#create-the-dataset) is associated with one, several, or zero labels. One common multilabel classification problem is text categorization (e.g., a movie description can include both "Crime" and "Drama"):

Another common multilabel classification problem is image categorization, where the image can fit into one, multiple, or none of the categories (cat, dog, bear).

See the [considerations](#feature-considerations) for working with multilabel modeling.
??? tip "Deep dive: Supported data types for multivariate modeling"
DataRobot supports the following strategies for multivariate modeling:
* _[Multiclass](multiclass)_: An extension to binary classification, it allows multiple classes for a feature, but only one can be applied at a time ("Am I looking at a cat? A dog?"). Predictions report probability for each class individually ("90% probability it's a dog but it could be a small bear"). Predictions for a row sum to 1.
* _Multilabel_: A generalization of multiclass that provides greater flexibility. Each observation can be associated with 0, 1, or several labels ("Am I looking at a cat? A dog? A cat _and_ a dog? At neither a cat _nor_ a dog?"). Predictions report probability for each label in an observation and don't necessarily sum to 1.
* _[Summarized categorical](histogram#summarized-categorical-features)_: A variable type used for features that host a collection of categories (for example, the count of a product by category or department). It aggregates categorical data and, while typically used for [Feature Discovery](feature-discovery/index), allows you to create the type in your dataset and use the unique visualizations.
The following table summarizes the feature types that support multivariate modeling:
| Data type | Description | Allowed as target? | Project type |
|-----------------|------------------|--------------|---------------|
| Categorical | Single category per row, mutually exclusive | Yes | Multiclass |
| Multicategorical | Multiple categories per row, non-exclusive | Yes | Multilabel |
| Summarized Categorical | Multiple categories per row, multiple instances of each category allowed | No | Multiregression* |
\* Not currently supported
## Create the dataset {: #create-the-dataset }
To create a training dataset that can be used for multilabel modeling, include one multicategorical column. Note the following:
* Multicategorical features are only supported when selected as the target. All other multicategorical features are ignored.
* DataRobot supports creation of projects with any number of unique labels, using up to 1,000 labels in each multicategorical feature. There is no need to remove extraneous labels from the dataset as DataRobot will ignore them. Use the [**Feature Constraints**](feature-con#trim-target-labels) advanced option to configure how labels are trimmed for modeling.
* Label names must be strings of up to 60 ASCII characters; Unicode characters of up to 60 bytes are supported.
* Multiple occurrences of the same label are allowed, but the repeated label value is treated as a single occurrence (for example, `crime, drama, drama` is treated as `crime, drama`).
* When working with images and Visual AI, follow the guidelines for creating an [image dataset](vai-model#prepare-the-dataset) and adding a categorical column for the multilabel feature.
### Multicategorical row format {: #multicategorical-row-format }
The format of a multicategorical row is a list of label names. The following table provides examples of valid and invalid multicategorical values:
| Example | Reason |
|-----------|------------|
| _Valid multicategorical values_ | :~~: |
| `["label_1", "label_2"]` | String format, with 2 relevant labels |
| `["label_1"]` | String format, with 1 relevant label |
| `[]` | Label set for one row with no relevant labels |
| _Invalid multicategorical values_ | :~~: |
| `['label_1', 'label_2']` | Not a valid JSON list |
| `[1, 2]` | Label names are not strings |
When creating a CSV file with multicategorical features, be sure to properly escape special characters. Note that the comma (`,`) is the default delimiter; the double quote symbol (`"`) is the default escape character. Additionally:
- Multicategorical values must be enclosed by double quotes in CSV files.
- Double quotes enclosing label names must be escaped by double quotes.
A valid representation of a multicategorical feature in a CSV file looks as follows:
`"[""label_1"", ""label_2""]"`
The double quotes outside the list brackets escape the actual value, so the comma within the list is not interpreted as a delimiter. Additionally, the double quotes around `label_1` and `label_2` must themselves be escaped by doubling them.
The recommended way to generate CSVs with multicategorical features using Python is to create a pandas DataFrame, in which multicategorical feature values are represented by lists of strings (i.e., one multicategorical row is a list of label names represented by strings). Then, JSON-encode the multicategorical column and use pandas `DataFrame.to_csv` to generate the CSV file. Pandas will take care of proper escaping when generating the CSV.
??? tip "Code snippet: create a four-row dataset"
The following code snippet shows how to create a four-row dataset with numeric and multicategorical features:
```python
import json
import pandas as pd
multicategorical_values = [["A", "B"], ["A"], ["A", "C"], ["B", "C"]]
df = pd.DataFrame(
{
"numeric_feature": [1, 2, 3, 4],
"multicategorical_feature": multicategorical_values,
}
)
df["multicategorical_feature"] = df["multicategorical_feature"].apply(json.dumps)
df.to_csv("dataset.csv", index=False)
```
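To confirm that the escaping round-trips correctly, you can read the file back and decode the label sets. This is a minimal sketch that assumes the `dataset.csv` produced above; it is not part of the DataRobot workflow itself:
```python
import json

import pandas as pd

# Read the CSV written above; pandas removes the doubled escape quotes automatically.
df = pd.read_csv("dataset.csv")

# Decode each JSON-encoded label set back into a Python list of strings.
df["multicategorical_feature"] = df["multicategorical_feature"].apply(json.loads)

print(df["multicategorical_feature"].tolist())
# [['A', 'B'], ['A'], ['A', 'C'], ['B', 'C']]
```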
### Multicategorical feature validation {: #multicategorical-feature-validation }
DataRobot runs feature validation at multiple stages to ensure correct row format:
* *EDA1*: If a feature is detected as potentially multicategorical (meaning at least one row has the right multicategorical format), DataRobot runs multicategorical format validation on a sample of rows. Any invalid multicategorical rows are reported as [multicategorical format errors](data-quality#multicategorical-format-errors) in the Data Quality Assessment tool.
* *EDA2*: If the feature passes EDA1 without multicategorical format errors and is selected as the target, DataRobot runs target validation on all rows. If any format errors are detected, a project creation error modal appears and the project is cancelled. Expand the **Details** link in the modal to see the format issues and required corrections. Once you fix the errors, re-upload the data and try again.
??? tip "How could it pass EDA1 and fail EDA2?"
If the dataset is over 500MB, DataRobot runs EDA1 on a sample (this is not specific to multilabel). From the EDA sample, DataRobot then randomly samples 100 rows and checks for anything meeting the multicategorical format. If there is at least one valid multicategorical feature, DataRobot checks the entire EDA sample. During target validation, DataRobot checks the entire dataset. As a result, it is possible that an invalid feature can pass EDA1 and then error when the entire dataset is evaluated.
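As a rough illustration of the format rule described above (a sketch only, not DataRobot's validation code), a value passes the multicategorical format check only if it parses as a JSON list whose elements are all strings:
```python
import json


def is_valid_multicategorical(value: str) -> bool:
    """Return True if `value` parses as a JSON list whose elements are all strings."""
    try:
        parsed = json.loads(value)
    except (TypeError, ValueError):
        return False
    return isinstance(parsed, list) and all(isinstance(label, str) for label in parsed)


print(is_valid_multicategorical('["label_1", "label_2"]'))  # True
print(is_valid_multicategorical('[]'))                       # True (empty label set)
print(is_valid_multicategorical("['label_1', 'label_2']"))   # False (not valid JSON)
print(is_valid_multicategorical('[1, 2]'))                   # False (labels are not strings)
```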
## How DataRobot detects multilabel {: #how-datarobot-detects-multilabel }
All labels in a row comprise a "label set" for that row. The objective of multilabel classification is to accurately predict label sets, given new observations. When, during EDA1, DataRobot detects a data column consisting of label sets in its rows, it assigns that feature the variable type `multicategorical`. When you use a multicategorical feature as the target, DataRobot performs multilabel classification.

Labels are not mutually exclusive; each row can have many labels, and many rows can have the same labels. From the **Data** page, view the top 30 unique label sets in a multicategorical feature:

To see the label sets in the context of the dataset, use the **View Raw Data** button.
Once you have uploaded the dataset and EDA1 has finished, scroll to the feature list and expand a feature showing the variable type `Multicategorical` to see details. The associated tabs, which provide insights about label distribution and interactions, are described below.
* [Feature Statistics](#feature-statistics-tab)
* [Histogram](#histogram-tab)
* [Table](#table-tab)
## Feature Statistics tab {: #feature-statistics-tab }
The **Feature Statistics** tab, available for multicategorical-type features, consists of several parts, described in the table below.

| | Element | Description |
|---|---|---|
|  | [Feature properties](#feature-properties) | Provides overall multilabel dataset characteristics. |
|  | [Pairwise matrix](#pairwise-matrix) | Shows pairwise statistics for pairs of labels. |
|  | [Matrix management](#matrix-management) | Provides filters for controlling the matrix display. |
Note that the statistics in the **Feature Statistics** tab are not exact—they only reflect the dataset properties of the sample used for EDA.
### Feature properties {: #feature-properties }
The **Feature Properties** statistics report provides overall multilabel dataset characteristics.

| Field | Description | From the example |
|---------------|----------------|----------------|
| Labels number | Number of unique labels in the target. | 100 unique labels |
| Cardinality | Average number of labels in each row. | On average, each row has 3 labels |
| Density | Percentage of all unique labels present, on average, in each row. | Roughly 3% of the total labels are present, on average, in each row |
| P_min | Fraction of rows with only 1 label. | 21% of rows have only 1 label |
| Diversity | Fraction of unique label sets with respect to the max possible. | Only roughly 35% of all possible label sets are present in the data |
| MeanIR (Mean Imbalance Ratio)\* | Average label imbalance compared to the most frequent label. The higher the value, the more imbalanced are the labels, on average, compared to the most frequent label. | On average, labels are highly imbalanced |
| MaxIR (Max Imbalance Ratio)\* | Highest label imbalance across all labels. | Some extremely imbalanced labels present |
| CVIR (Coefficient of Variation for Average Imbalance Ratio)\* | Label imbalance variability. Indicates whether a label imbalance is concentrated around its mean or has significant variability. | Imbalance varies significantly across labels |
| SCUMBLE\** | Measure of concurrence between frequent and rare labels. A high SCUMBLE value means the dataset is harder to learn. | Concurrence is high |
\* The imbalance measures follow [Charte, F., Rivera, A.J., del Jesus, M.J., Herrera, F.: Addressing imbalance in multilabel classification: *Measures and random resampling algorithms. Neurocomputing 163, 3–16 (2015)*](https://www.sciencedirect.com/science/article/abs/pii/S0925231215004269){ target=_blank }.
\** SCUMBLE follows the definition in [Francisco Charte, Antonio J. Rivera, Maria J. del Jesus, Francisco Herrera: *Dealing with Difficult Minority Labels in Imbalanced Multilabel Data Sets*](https://www.sciencedirect.com/science/article/abs/pii/S0925231217315321){ target=_blank }.
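To make a few of these statistics concrete, the sketch below computes cardinality, density, P_min, and MeanIR for a toy sample of label sets. This is an illustrative approximation, not DataRobot's implementation; the MeanIR formula follows the Charte et al. definition cited above.
```python
from collections import Counter

# Toy sample: one label set per row.
label_sets = [["A", "B"], ["A"], ["A", "C"], ["B"], ["A", "B", "C"]]

n_rows = len(label_sets)
label_counts = Counter(label for row in label_sets for label in row)
n_labels = len(label_counts)

cardinality = sum(len(row) for row in label_sets) / n_rows      # average labels per row
density = cardinality / n_labels                                # share of unique labels per row
p_min = sum(1 for row in label_sets if len(row) == 1) / n_rows  # fraction of single-label rows

# Imbalance ratio per label: count of the most frequent label divided by this label's count.
max_count = max(label_counts.values())
mean_ir = sum(max_count / count for count in label_counts.values()) / n_labels

print(f"cardinality={cardinality:.2f}, density={density:.2f}, "
      f"p_min={p_min:.2f}, mean_ir={mean_ir:.2f}")
```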
### Pairwise matrix {: #pairwise-matrix }
The pairwise matrix shows pairwise statistics for pairs of labels and the occurrence percentage of each label in the dataset. From here you can:
* Check individual label frequencies.
* Visualize pairwise correlation.
* Visualize pairwise joint probability.
* Visualize pairwise conditional probability.
The larger matrix provides an overview of every label pair found for the selected target; the mini-matrix to the right shows additional detail for the selected label pair. The matrix is a table, showing the relationships between labels. The variables in the mini-matrix are two labels—one label whose state (present, absent) varies along the X-axis and the other whose state varies along the Y-axis. For the full matrix, the state does not vary (always present); only the labels vary.

### Matrix management {: #matrix-management }
In datasets with more than 20 labels, an additional matrix map displays to the left of the main matrix. Click any point in the map to refocus the main matrix to that area (where the labels you want to investigate converge). The mini-matrix changes to provide more detailed information about the pair. Or, use the dropdowns, described below, to control the matrix display.

Color indicates the value of the property selected in the **Property for matrix fields** dropdown. For example, if you select "correlation", the color of a matrix cell represents the correlation between the label pair for the selected cell—red represents negative values, green represents positive values. Of the three properties that can be selected (correlation, joint probability, and conditional probability), only correlation can have negative values (red circles can never occur for joint or conditional probability). The blue bars that border the right side of the matrix represent numeric frequency of the label in the corresponding row.
You can change the order of labels in the matrix using one of the sort tools on the left:

Sort option | Description
----------- | -----------
Property for matrix fields | Sets the property to be displayed in the matrix: correlation, joint probability, or conditional probability. (See descriptions [below](#matrix-display-selectors).)
Sort labels | Changes the label ordering to be based alphabetically, by frequency, or by imbalance.
Label selection | Select label names either by [map](#select-labels-by-map) or [manually](#select-labels-manually).
In the mini-matrix on the right, set the **Property** dropdown to view measures of joint probability or conditional probability:

See the [**Confusion Matrix**](multiclass#multiclass-confusion-matrix) documentation for a general description of working with this type of matrix.
Additionally, you can select a label name, either using the [map](#select-labels-by-map) or [manually](#select-labels-manually), to highlight the label in the main matrix.
#### Select labels by map {: #select-labels-by-map }
You can modify the labels displayed in the pairwise matrix based on the matrix map. Simply click at any point in the map and the main and mini-matrix update to reflect your selection. The square marked in the map shows what the larger matrix represents:

#### Select labels manually {: #select-labels-manually }
You can manually set the labels that display in the pairwise matrix to match whatever combinations are of interest. You can also save the combinations as a named list to apply and compare after experimentation.

To select labels:
1. Under **Select labels**, choose **Manually**.
2. Check or uncheck the box to set rows and columns separately.
3. Each row or column input field defaults to the top 10 labels, determined by label frequency. Add or remove labels as desired, making sure to have between 1 and 10 labels for each option.
* To add, begin typing characters and any matching labels not already present are available for selection.
* To remove, click the **x** next to the label name.
4. Once you have created the matrix as desired, click **Save labels** to save the label selection for reuse.
5. If any label lists have been saved, an additional dropdown becomes available allowing you to select a list:

6. To manage saved lists, select **Manage manual selections** from the dropdown. From there you can edit the list name or remove the list.
#### Matrix display selectors {: #matrix-display-selectors }
The following tabs describe the joint probability of two labels, the conditional probability of two labels, and correlation.
=== "Joint probability"
This selection answers the question "How frequent are the different configurations of the co-occurrence of the labels?"
For example, given two labels `A` and `B`, there are four different configurations of their co-occurrence in the data rows:
* `A` is present, `B` is present
* `A` is present, `B` is absent
* `A` is absent, `B` is present
* `A` is absent, `B` is absent.
The joint probability is the probability of each of those events. For example, if the probability that `A` is present and `B` is absent is reported at 0.25, it means that in 25% of _all_ rows in the dataset, `A` is present and `B` is absent.
The pairwise statistics insight in the main matrix shows only the joint probability of both selected labels being present. In the mini-matrix, the cells show the joint probability of each co-occurrence configuration. For example, in the following:

The probability of `interest_medium` being present and `price_low` being absent is 13.8%, which means that in 13.8% of all rows, `interest_medium` is present and `price_low` is absent simultaneously.
=== "Conditional probability"
A dataset has labels `A` and `B`. Consider all rows in the dataset in which `B` is present. In some of them, `A` is present, in others, `A` is absent. For example, the dataset may have rows:
```
[A, B]
[B]
[A]
[A]
```
There are two rows containing `B`. In one of the rows, `A` is also present. This defines the conditional probability of `A` given that ("on the condition that") `B` is present:
`P(A present | B present)`
In the case above, the probability is 0.5: Out of two rows with `B`, `A` is in one row. `B` being present is the base condition, `A` being present is the event whose conditional probability, given the condition, you are interested in.
In this example, there can be four different configurations of (event, condition):
```
P(A present | B present)
P(A present | B absent)
P(A absent | B present)
P(A absent | B absent)
```
The main matrix shows only `P(A present | B present)`; the mini-matrix shows all configurations in the correspondent cells.
=== "Correlation"
Correlation, in general, is a measure of linear dependence between two random variables. In this case, the variables are the labels—`A` and `B`. Think of each label as a binary variable, where "0 = label is absent" (a "low" value) and "1 = label is present" (a "high" value). Correlation between `A` and `B` then shows the relation between the respective high and low values of `A` and high and low values of `B`.
What is the trend in simultaneous appearances of 1s in `A` and 1s in `B` (or 0s in `A` and 0s in `B`)? If most rows have A=1 and B=1, or A=0 and B=0, then `A` and `B` have a positive correlation.
Examples:
* If label `A` is 1 (present) in all rows where `B` is 1 (present), and 0 (absent) in all rows where `B` is 0 (absent), then the correlation between them is 1 (the highest possible value).
* If `A` is 0 in the rows where `B` is 1, and `A` is 1 in the rows where `B` is 0, then the correlation is -1 (the lowest possible value). That is, when the trend is the opposite of a positive correlation (high values of `A` correspond to low values of `B`), the correlation is negative.
* If there is no trend, then correlation is 0.
Between these extremes, correlation shows how the high ("1") and low ("0") values of `A` come together with high/low values of `B`.
In the case of binary variables, correlation is similar to joint probability but is more easily interpretable. (It can be calculated from the joint probability of both labels being present and the individual probability of each label, but it is not the same quantity.) Note that there is no 2x2 matrix for correlation. This is because the correlation of two variables is a single number that summarizes information from all four configurations (low-low, low-high, high-low, high-high). The 2x2 matrix, however, shows properties that require four numbers to fully describe them.
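To make the three properties concrete, the following minimal sketch (not DataRobot code) computes the joint probability, the conditional probability, and the correlation (phi coefficient) for two hypothetical binary label columns, where 1 means the label is present:
```python
import numpy as np

# Hypothetical presence/absence columns for two labels, A and B.
A = np.array([1, 0, 1, 1, 0, 1])
B = np.array([1, 1, 0, 1, 0, 0])

# Joint probability of both labels being present (the value shown in the main matrix).
joint_present = np.mean((A == 1) & (B == 1))

# Conditional probability P(A present | B present).
cond = np.mean(A[B == 1] == 1)

# Correlation between the two binary labels (phi coefficient).
p_a, p_b = A.mean(), B.mean()
phi = (joint_present - p_a * p_b) / np.sqrt(p_a * (1 - p_a) * p_b * (1 - p_b))

print(joint_present, cond, phi)
```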
### Histogram tab {: #histogram-tab }
The **Histogram** provides a bar plot that indicates, for the selected label, the frequency (by number of rows) with which the label is present or absent in the data. Use the histogram to detect imbalanced labels.

Select a label from the list to display its histogram. You can sort labels by name, frequency, or imbalance. Use the imbalance option, for example, to find the most imbalanced label in your dataset.
See documentation for the [**Histogram**](histogram) for a general description of working with histograms.
### Table tab {: #table-tab }
The **Table** tab lists up to the 30 most frequent label sets:

## Building and investigating models {: #building-and-investigating-models }
Building multilabel models uses the standard DataRobot build process:
1. Upload a properly prepared dataset (or open from the **AI Catalog**).
2. From the **Data** page, find a multicategorical feature (search if necessary) and select it as the target.
3. Open the **Advanced options > Additional** tab and choose a metric, either LogLoss (default), AUC, AUPRC, or their weighted versions. Set any other selections.
4. Select a mode—Autopilot, Quick, or Manual—and begin modeling.
## Leaderboard tabs {: #leaderboard-tabs }
Multilabel-specific modeling insights are available from the following Leaderboard tabs:
* Evaluate:
* [**Per-Label Metrics**](#per-label-metrics)
* [**ROC Curve**](#roc-curve)
* [**Lift Chart**](#lift-chart)
* Understand:
* [**Feature Effects**](#feature-effects)
Additionally, you can use [**Feature Impact**](feature-impact) to understand which features drive model decisions.
### Per-Label Metrics {: #per-label-metrics }
The **Evaluate > Per-Label Metrics** tab is a visualization designed specifically for multilabel models. It helps to evaluate a model in that it summarizes performance across the labels for different values of the prediction threshold (which can be set from the page). The chart depicts binary performance metrics, treating each label as a binary feature. Specifically, it:
- Displays average and per-label model performance, based on the prediction threshold, for a selectable metric.
- Helps to assess the number of well-performing labels versus poorly performing labels.
You can see a detailed description of the metrics depicted here under [**ROC Curve Metrics**](roc-curve-tab/metrics).

| | Component | Description |
|---| ----------|-----------------|
|  | [Threshold selector](#threshold-selector) | Sets prediction and display thresholds settings. |
|  | [Metric value chart](#metric-value-chart) and metric selector | Displays graphed results based on the set display threshold; provides a dropdown to select the binary performance metric.
 | [Average performance report](#data-selector) | The macro-averaged model performance over all labels.
 | Label selector | Sets the display to all or [pinned labels](#set-label-display).
 | [Data selector](#data-selector) | Chooses the data partition to report per-label values for.
 | [Metric value table](#metric-value-table) | Displays model performance for each target label.
#### Metric value table {: #metric-value-table }
The metric value table reports a model's performance for each target label (considered as a binary feature). You can work with the table as follows:
* The metrics in the table correspond to the [**Display threshold**](#threshold-selector); change the threshold value to view label metrics at different threshold values.

* Click on a column header to change the sort order of labels in the table.

* Click the eye icon () in the SHOW column to include (and remove) a label from the metric value chart.
* Use the search field to search for particular labels in the table.
* The ID column (#) is static and allows you to assess, together with sorting, the labels for which the metric of interest is above or below a given value.
For example, consider a project with 100 labels. If measuring for accuracy above 0.7, sort by accuracy and look at the row index with the last accuracy value above 0.7. You can determine the percentage of labels with that accuracy or above from the row index in relation to the total number of rows.
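As a rough illustration of the table's mechanics, the following sketch (not DataRobot code) computes per-label accuracy at a display threshold and the share of labels at or above a target value, using randomly generated actuals and scores:
```python
import numpy as np

rng = np.random.default_rng(0)
n_rows, n_labels = 500, 100
actuals = rng.integers(0, 2, size=(n_rows, n_labels))   # hypothetical true 0/1 labels
noise = rng.random(size=(n_rows, n_labels))
probs = 0.3 * actuals + 0.7 * noise                      # scores loosely tied to actuals

threshold = 0.5                                          # the "display threshold"
preds = (probs >= threshold).astype(int)
per_label_accuracy = (preds == actuals).mean(axis=0)     # one value per label

print(per_label_accuracy.mean())                         # macro-averaged performance
print((per_label_accuracy >= 0.7).mean())                # share of labels at or above 0.7
```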
#### Metric value chart {: #metric-value-chart }
The chart consists of graphed results and a metric selector:

The X-axis in the diagram represents different values of the prediction threshold. The Y-axis plots values for the selected metric. Overall, the diagram illustrates the average model performance curve, based on the selected metric, as a bold green curve. The threshold value set in the **Display threshold** is highlighted as a vertical orange line.
#### Set label display {: #set-label-display }
You can change the display to reflect labels of particular importance ("pinned" labels) by clicking the checkbox to the left of the label name:

The **Pinned labels** tab shows all labels you have selected to be of particular importance. If no labels have been pinned, you are prompted to return to **All labels** where you can click to pin labels.
To pin a label, select the pin icon () in the PIN column. Each pinned label is added to the metric value chart. Note the following:
1. The color of the label name changes to match its line entry in the chart.
2. You can remove a label from the chart by clicking the eye icon () in the SHOW column.
As labels are added, they become available under the **Pinned labels** tab:

#### Threshold selector {: #threshold-selector }
The threshold section provides fields for entering both a **Display threshold** and a **Prediction threshold**.

- Use **Display threshold** to set the threshold level used by the table to display label metrics and average model performance.
- Use **Prediction threshold** to set the model's prediction threshold, which is applied when making predictions.
- Use the arrows to swap values for the current display and prediction thresholds.
#### Data selector {: #data-selector }
Select the dataset partition—validation, cross validation, or holdout (if unlocked)—that the metrics and curves in the chart and table are based on.

### ROC Curve {: #roc-curve }
Functions of the [**ROC Curve**](roc-curve-tab/index) are the same for binary classification and multilabel projects. For binary projects, the tab provides insight for the binary target. With multilabel projects, you can use the **Label** dropdown to view insights for each target label separately.

Changing the label updates the page elements, including the graphs, the summary statistics, and confusion matrix.
Prediction thresholds can be set manually or can be set to maximize F1 or MCC. The selected threshold is applied to all labels; there is no individual per-label application.
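For reference, the following is a minimal sketch (not DataRobot code) of what maximizing F1 over candidate thresholds looks like for a single label, using hypothetical actuals and scores:
```python
import numpy as np

actuals = np.array([1, 0, 1, 1, 0, 0, 1, 0])                    # hypothetical true labels
scores = np.array([0.9, 0.4, 0.65, 0.7, 0.2, 0.55, 0.8, 0.3])   # hypothetical predicted scores

def f1_at(threshold):
    # Binarize at the candidate threshold and compute F1 from the confusion counts.
    preds = (scores >= threshold).astype(int)
    tp = np.sum((preds == 1) & (actuals == 1))
    fp = np.sum((preds == 1) & (actuals == 0))
    fn = np.sum((preds == 0) & (actuals == 1))
    return 0.0 if tp == 0 else 2 * tp / (2 * tp + fp + fn)

candidates = np.linspace(0.05, 0.95, 19)
best = max(candidates, key=f1_at)   # threshold with the highest F1
print(best, f1_at(best))
```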
### Lift Chart {: #lift-chart }
Use the [Lift Chart](lift-chart) to compare predicted versus actual values of the multilabel target. It functions and provides the same selectors as the binary Lift Chart, with the addition of the ability to select the desired label:

### Feature Effects {: #feature-effects }
[Feature Effects](feature-effects) ranks dataset features based on their feature impact score. With multilabel modeling, all standard Feature Effects options are available as well as some additional functionality. Clicking **Compute Feature Effects** causes DataRobot to first compute Feature Impact (if not already computed for the project) and then run the Feature Effects calculations for the model:
After computation completes, select a label to view partial dependence as well as predicted and actual values. These views are available for all calculated numeric and categorical features.

For labels that were not computed as part of the initial calculations, use **Select label** to individually compute them.

## Making predictions {: #making-predictions }
Deploy multilabel classifiers with one click, as usual, and integrate predictions into your workflow via the real-time deployment API. Additionally, you can download the output to see the results for each label in the dataset. That is, for each row, the output shows both whether each label is predicted to be relevant in that row and each label's score for that row.
## Feature considerations {: #feature-considerations }
Consider the following when working with multilabel models:
* Time-aware (time series and OTV) modeling is not supported.
* DataRobot supports the creation of projects with any number of unique labels, using 2-1,000 labels in each multicategorical feature. Multilabel insights reflect only the 100 (after trimming settings are applied) most frequent labels.
* Multicategorical features are only supported as the target feature. DataRobot disregards descriptive multicategorical features.
* Because the size of predictions is proportional to the number of labels, the number of rows that can be used for real-time predictions decreases with the number of labels.
* Target drift and accuracy tracking are not supported for multicategorical targets.
* The following model types are available:
* Decision Tree Classifier
* Ridge Classifier
* Random Forest Classifier
* Extra Trees Classifier
* Multilabel kNN
* One-vs-all LGBM
* Selected Keras models
* Majority Class Classifier
* The following are not supported:
* Scoring Code
* Challenger models
* Image augmentation
* Agents
* Prediction Explanations
* Stratified partitioning
* Monotonic constraints
* Offsets
* Currency data types
* Export of ROC charts
* External holdout
* Compliance documentation generation
|
multilabel
|
---
title: Quantile regression analysis
description: For some projects, predicting the tendency (average or median, for example) of the target variable is not the prime concern; some are more interested in predicting a conditional value (a quantile).
section_name: AutoML
maturity: public-preview
---
# Quantile regression analysis {: #quantile-regression-analysis }
!!! info "Availability information"
Quantile regression analysis is off by default. Contact your DataRobot representative or administrator for information on enabling the feature.
<b>Feature flag:</b> Enable Quantile Metric
For some projects, predicting the tendency (average or median, for example) of the target variable is not the prime concern. Some projects are more interested in predicting a conditional value (a quantile), such as an insurer that wants to be 95% confident that the loss will not exceed a specific amount.
To set the metric and quantile level:
1. Start a regression project. When [EDA1](eda-explained) completes, click **Show Advanced options** and select **Additional**.
2. From the Optimization Metric dropdown select the **Quantile Loss** (or **Weighted Quantile Loss**) metric.

3. Set the value for the quantile level, in the range of 0.01 to 0.99 (acceptable values must be to the tenths or hundredths place only).
4. Select a modeling mode and click **Start**. Quantile-specific models available to Autopilot or from the Repository include:
* Quantile Regression
* Statsmodel Quantile Regression
* Keras
* Gradient Boosted Trees
DataRobot returns a message if it determines there is not enough data to provide a meaningful value. If this happens, consider adding more data or lowering the quantile level. If the available data is limited but DataRobot can continue training, you'll see a **Quantile Target Sparsity** report in the [data quality assessment](data-quality). Too little data can result in unreliable results.
5. When building completes, you can see the quantile parameter (`quantile`) value that was used to build the model in [**Advanced Tuning**](adv-tuning). To experiment with different values, set the `quantile` parameter and press **Begin Tuning**. Note that when you tune the quantile this way, it applies only to this model and does not impact the optimization level set for the entire project.

!!! note
When using quantile loss, some insights may look unusual or need to be interpreted differently. For example, Lift Chart and Residuals should not be interpreted in the same way as they would be in a standard regression project.
## Quantile regression metric {: #quantile-regression-metric }
The following describes the Quantile Loss metric.
| Display | Full name | Description | Project type |
|---------------|---------------|------------|-------------|
| Quantile Loss | Quantile Loss | The quantile loss, sometimes called “pinball loss”, asymmetrically penalizes over- and under-estimates depending on the quantile level selected. | Regression (non-time series) |
The Quantile Loss, sometimes called "pinball loss," is a metric that can be used to compare the performance of quantile-optimized regression models. With `y` as the true outcome and `ŷ` as the prediction, the quantile loss function for a single observation is defined as follows:

Where `q` is a user-provided value between 0.01 and 0.99, indicating the quantile level at which the loss function is optimized. When the Quantile Loss metric is selected, a slider becomes available that allows you to select the quantile level (`q`) at which you would like to evaluate loss for the project.
This means that:
* when `q=0.5`, the quantile loss is identical to Mean Absolute Error, which optimizes to the median.
* when `q > 0.5`, the algorithm is effectively preferring an overestimate to an underestimate: the loss will be steeper for predictions that undershoot.
* when `q < 0.5`, the reverse is true—the algorithm overpenalizes estimates that miss high relative to those that miss low.
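For illustration, the following is a minimal sketch (not DataRobot code) of one common formulation of the pinball loss; the exact scaling DataRobot applies may differ:
```python
import numpy as np

def quantile_loss(y_true, y_pred, q):
    """Average pinball loss at quantile level q (between 0.01 and 0.99)."""
    diff = np.asarray(y_true, dtype=float) - np.asarray(y_pred, dtype=float)
    # Under-predictions (diff >= 0) are weighted by q; over-predictions by (1 - q).
    return float(np.mean(np.where(diff >= 0, q * diff, (q - 1) * diff)))

y_true = [10.0, 12.0, 8.0, 15.0]   # hypothetical actuals
y_pred = [11.0, 10.0, 8.0, 18.0]   # hypothetical predictions

print(quantile_loss(y_true, y_pred, q=0.50))  # optimizes toward the median
print(quantile_loss(y_true, y_pred, q=0.95))  # undershooting is penalized more steeply
```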
|
quantile-reg
|
---
title: Blueprint editor detailed view
description: Toggle between a summary view and a detailed view when working with the Composable ML blueprint editor.
section_name: AutoML
maturity: public-preview
---
# Blueprint editor detailed view {: #blueprint-editor-detailed-view }
!!! info "Availability information"
Blueprint view toggle is off by default. Contact your DataRobot representative or administrator for information on enabling the feature.
<b>Feature flag:</b> Enable Blueprint Detailed View Toggle
When DataRobot builds models, it automatically selects the best predictive models for the specified target feature. These models derive from blueprints (modeling algorithms), which are composed of tasks. Initially, the blueprint includes all the tasks that are part of the algorithm, but in the process of creating the final blueprint, DataRobot "prunes" unnecessary branches.
## From the Leaderboard {: #from-the-leaderboard }
The blueprint shown from the Leaderboard's [**Blueprint**](blueprints) tab is a view-only representation of the model. It reflects the tasks used to train the model—a _summary view_.

You can, however, show the full blueprint. To enable a _detailed view_ that displays all the branches of the original algorithm, click the **Show full blueprint** toggle:

If a model uses all of the feature types contained in the project data (numeric, categorical, date, text, etc.), the full blueprint toggle is disabled. This is because the summary and detailed blueprints will be the same (all tasks were used).
## From the blueprint editor {: #from-the-blueprint-editor }
When you click **Copy and Edit**, DataRobot opens the [blueprint editor](cml-blueprint-edit#access-the-blueprint-editor). The editor opens in the _detailed view_ of the blueprint. In this case, the toggle is disabled because the version used for modeling is not relevant to editing the complete blueprint.

|
blueprint-toggle
|
---
title: AutoML public preview features
description: Read preliminary documentation for AutoML features currently in the DataRobot public preview pipeline.
section_name: AutoML
maturity: public-preview
---
# AutoML public preview features {: #automl-public-preview-features }
{% include 'includes/pub-preview-notice-include.md' %}
## Available AutoML public preview documentation {: #available-automl-public-preview-documentation }
Public preview for... | Describes...
----- | ------
[Document AI](doc-ai/index) | Learn how to use raw documents as an input to modeling.
[GPU support](gpus) | Enable GPU support to improve runtime for deep learning models.
[Prediction Explanations for clusters](cluster-pe) | Generate and interpret Prediction Explanations for clustering projects.
[Detailed view for blueprints](blueprint-toggle) | Toggle between a summary view and a detailed view when working with the Composable ML blueprint editor.
[Quantile regression metric](quantile-reg) | Predict a conditional value (a quantile) for a project.
[Configure hyperparameters for custom tasks](cml-hyperparam) | Define the hyperparameters for a custom task.
|
index
|
---
title: GPUs for deep learning
description: Use GPU workers for faster training.
section_name: AutoML
maturity: public-preview
---
# GPUs for deep learning {: #gpus-for-deep-learning }
!!! info "Availability information"
GPU workers are disabled by default. Contact your DataRobot representative or administrator for information on enabling the feature.
<b>Feature flag:</b> Enable GPU Workers
Support for deep learning models, such as Large Language Models, is increasingly important in an expanding number of business use cases. While some of these models can be run on CPUs, others require [GPUs](glossary/index#gpu-graphics-processing-unit) to achieve reasonable training time. To efficiently train, host, and predict using these "heavier" deep learning models, DataRobot leverages Nvidia GPUs within the application. Training on GPUs can be enabled per project and, when enabled, GPU workers are used as part of the Autopilot process.
## GPU task support {: #gpu-task-support }
Because accounts have a limited number of GPU workers available, GPU support is targeted at specific tasks. When GPU support is enabled, DataRobot detects blueprints that contain the following tasks and potentially uses GPU workers to train them. If the sample size minimum is not met, however, the blueprint is routed to the CPU queue. Supported tasks are:
Type | Minimum sample size
---- | -------------------
Fine-Tuned Image, Multi-Image Classifier/Regressor, or Fine-Tuned Image/Multi-Image Featurizer | Greater than 350 rows
Keras Text Convolutional Neural Network Classifier | Greater than 3500 rows
Consider the following when working with Keras models:
* Blueprints using these tasks are only present when image or text features are available in the training data.
* These models are only automatically trained as part of Comprehensive mode; they are also available for GPU support when trained manually.
Not every modeling job with a GPU-supported task is trained using GPU workers. A heuristic determines which blueprints will train with low runtime on CPU workers. Additionally, enabling GPU workers may increase the number of models available to Autopilot.
## Enable GPU workers {: #enable-gpu-workers }
To enable GPU workers, open advanced options and select the [Additional](additional) tab.

Check the **Allow use of GPU workers** box, which controls whether GPU workers will be used for the project. Use depends on whether the appropriate blueprints are available and whether GPU workers are available.
## GPU modeling process {: #gpu-modeling-process }
If GPUs are enabled:
1. When a training job gets scheduled, DataRobot uses heuristics to determine whether the job should be executed on a GPU worker or a CPU worker.
2. Models flagged for each processor type are sent into queues for the appropriate processor type.
3. The Worker Queue shows the number of CPUs and GPUs being used for training, as well as the total number available. You can add GPUs, if available, to increase queue processing.

4. Once a model's training is complete, it appears on the Leaderboard. A badge indicates whether the model was trained on GPUs.

5. If you retrain a model, DataRobot applies the same logic as for the original training job. If the heuristics indicate training on a GPU, the **Allow use of GPU workers** option is enabled, and GPU workers are available, DataRobot trains on GPUs. Otherwise, DataRobot trains on CPUs. If, however, you retrain on a sample size [below the limit](#gpu-task-support), DataRobot automatically trains using CPUs.
## Feature considerations {: #feature-considerations }
* Due to the inherent differences in implementation of floating point arithmetic on CPUs and GPUs, using a GPU trained model in environments without a GPU may lead to inconsistencies. Inconsistencies will vary depending on model and dataset, but will likely be insignificant.
* Training on GPUs can be non-deterministic. It is possible that training the same model on the same partition results in a slightly different model, scoring differently on the test set.
* GPUs are only used for training; they are not used for prediction or insights computation.
* There is no GPU support for custom tasks or custom models.
* GPUs are not available in DataRobot Notebooks.
|
gpus
|
---
title: Prediction Explanations for clusters
description: Understand the reasons behind a clustering model’s outcomes using Prediction Explanations to uncover the factors that most contribute to those outcomes.
section_name: AutoML
maturity: public-preview
---
# Prediction Explanations for clusters {: #prediction-explanations-for-clusters }
!!! info "Availability information"
Prediction Explanations for clustering models is off by default. Contact your DataRobot representative or administrator for information on enabling the feature.
<b>Feature flag:</b> Enable Clustering Prediction Explanations
[Clustering](clustering) lets you explore your data by grouping it and identifying natural segments, capturing latent behavior that's not explicitly captured by a column in the dataset. Using Prediction Explanations with clustering, you can uncover the factors that most contribute to model outcomes. For example, targeted marketing strategies can be built using clustering models to assign logical clusters to data samples. With that insight, you can develop models that comply with regulations, easily explain the clustering model outcomes to stakeholders, and identify high-impact factors to help focus your business strategies.

!!! note
Clustering Prediction Explanations are only available when using the [XEMP-based methodology](xemp-pe). Additionally, they are not available for time series clustering models.
## Interpret Prediction Explanations {: #interpret-prediction-explanations}
Prediction Explanations for clustering models work very much like they do with [multiclass projects](xemp-pe#multiclass-prediction-explanations), including support for text and image explanations. Follow those instructions to generate explanations. The following describes working with the results that are unique to clustering.

### Select a cluster {: #select-a-cluster }
Use the **Cluster Label** dropdown to choose which cluster to display. These labels map to the labels shown in the **Cluster Insights** tab. That is, if you [change a cluster name](cluster-insights#name-clusters) there, the change is reflected in the **Cluster Label** selector dropdown.

DataRobot calculates the _prediction distribution_ for up to 20 clusters—those that contain the most data. The actual explanations are available for all clusters via [download](xemp-pe#download-explanations).
!!! note
If you select a cluster and see a message indicating that the preview data is missing, this indicates that the model was built before the feature was enabled. Because DataRobot computes prediction distribution during training, you must recompute explanations for any model built prior to enabling the flag.

### Calculate explanations {: #calculate-explanations }
You can calculate explanations either for the full training data set or for new data. [The process](xemp-pe#compute-and-download-predictions) is generally the same as for classification and regression projects, with a few clustering-specific differences (because DataRobot calculates explanations separately for each class). Clicking the calculator opens a modal that controls which clusters explanations are generated for:

Set the number of explanations and the thresholds. Use the **Clusters** setting to control the method—either **Predicted** or **List of clusters**, described below—for selecting which clusters are used in explanation computation. By default (if a method is not set), Prediction Explanations explain the top predicted cluster for a row.
#### Predicted {: #predicted }
Choose **Predicted** to view explanations for a specified number of cluster(s). When selected, you are prompted to enter the number of clusters to compute predictions for, between 1 and the number of existing clusters (maximum 10):

The clusters returned are those ranked with the highest probabilities for a given row. In other words, if you request five predicted clusters, DataRobot returns, for each row and ranked by probability, each predicted cluster assignment with accompanying reasons.
#### List of clusters {: #list-of-clusters }
Choose **List of clusters** to view explanations for only specific clusters. Click on **List of clusters** to activate a cluster-selection dialog.

## Download explanations {: #download-explanations }
Once computed, click the download icon () to export all of a dataset's predictions and corresponding explanations in CSV format. The output can be interpreted in the same way as the [multiclass export](xemp-pe#download-explanations), with clusters instead of classes.
## Explanations from a deployment {: #explanations-from-a-deployment }
When you [calculate predictions from a deployment](batch-pred#set-prediction-options) (**Deployments > Predictions > Make Predictions**), DataRobot adds the **Predicted** and **List of clusters** fields to the options modal. These work in the same way as described [above](#calculate-explanations).

|
cluster-pe
|
---
title: Configure hyperparameters for custom tasks
description: Define the hyperparameters for a custom task.
section_name: AutoML
maturity: public-preview
---
# Configure hyperparameters for custom tasks {: #configure-hyperparameters-for-custom-tasks }
!!! info "Availability information"
Hyperparameters for custom tasks is off by default. Contact your DataRobot representative or administrator for information on enabling the feature.
<b>Feature flag:</b> Enable Custom Task Hyperparameters
Now available for public preview, you can define hyperparameters for a custom task. You must specify two values for each hyperparameter: the `name` and `type`. The type can be one of `int`, `float`, `string`, `select`, or `multi`. All types support a `default` value. Integer and float values can have a `min` and `max` value specified. The `select` and `multi` types define their accepted values with a `values` field. String type hyperparameters can accept any arbitrary string. Multi types have their values specified as a combination of the aforementioned types, for example, `float` and `select`.
View an example set of hyperparameters below.
```yaml
hyperparameters:
# int: Integer value, must provide a min and max. Default is optional. Uses the min value if not provided.
- name: seed
type: int
min: 0
max: 10000
default: 64
# int: Integer value, must provide a min and max. Default is optional. Uses the min value if not provided.
- name: kbins_n_bins
type: int
min: 2
max: 1000
default: 10
# select: A discrete set of unique values, similar to an enum. Default is optional. Will use the first value if
# not provided.
- name: kbins_strategy
type: select
values:
- uniform
- quantile
- kmeans
default: quantile
# multi: A parameter that can be of multiple types (int/float/select). Default is optional. Will use the first parameter
# type's default value. This example uses select, the first entry, or for int/float, the min value.
- name: missing_values_strategy
type: multi
values:
float:
min: -1000000.0
max: 1000000.0
select:
values:
- median
- mean
- most_frequent
default: median
# string: Unicode string. Default is optional. Is an empty string if not provided.
- name: print_message
type: string
default: "hello world 🚀"
```
Access a custom task's hyperparameters via the `fit` method by adding a `parameters` argument to the `fit` function signature, as shown below.
```python
from typing import List, Optional

import numpy as np
import pandas as pd


def fit(
    X: pd.DataFrame,
    y: pd.Series,
    output_dir: str,
    class_order: Optional[List[str]] = None,
    row_weights: Optional[np.ndarray] = None,
    parameters: Optional[dict] = None,
    **kwargs,
):
    ...
```
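The following is a hypothetical fragment showing how the example hyperparameter values defined in the YAML above might be read from the `parameters` dictionary inside `fit`, falling back to their declared defaults:
```python
def fit(X, y, output_dir, class_order=None, row_weights=None, parameters=None, **kwargs):
    # Hypothetical: read the example hyperparameters declared in the YAML above.
    params = parameters or {}
    seed = params.get("seed", 64)
    n_bins = params.get("kbins_n_bins", 10)
    strategy = params.get("kbins_strategy", "quantile")
    print(seed, n_bins, strategy)
```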
|
cml-hyperparam
|
---
title: Model insights
description: Introduces the many insights the DataRobot Leaderboard provides when you select a model, with links to details.
---
# Model insights {: #model-insights }
When you select a model, DataRobot makes available a large selection of insights, grouped by purpose, appropriate for that model.
## Model Leaderboard {: #model-leaderboard }
The model Leaderboard is a list of models ranked by the chosen performance metric, with the best models at the top of the list. It provides a [variety of insight tabs](#leaderboard-tabs), available based on user permissions and applicability. Hover over an inactive division to view a dropdown of member tabs.
!!! note
Tabs are visible only if they are applicable to the <i>project type</i>. For example, time series-related tabs (e.g., <b>Accuracy Over Time</b>) only display for time series projects. Tabs that are applicable to a project but not a particular <i>model type</i> display as grayed out (for example, [blender](leaderboard-ref#blender-models) models, due to the nature of their construction, have fewer tab functions available).

The pages within this section provide information on using and interpreting the insights available from the Leaderboard (**Models** tab). See the [Leaderboard reference](leaderboard-ref) for information on the badges and components of the Leaderboard as well as functions such as tagging, searching, and exporting data.
## Leaderboard tabs {: #leaderboard-tabs }
| Tab name | Description |
|---------------|--------------|
|**_[Evaluate](evaluate/index)_:** *Key plots and statistics for judging model effectiveness* | :~~:|
| [Accuracy Over Space](lai-insights) | Provides a spatial residual mapping within an individual model. |
| [Accuracy over Time](aot) | Visualizes how predictions change over time. |
| [Advanced Tuning](adv-tuning) | Allows you to manually set model parameters, overriding the DataRobot selections. |
| [Anomaly Assessment](anom-viz) | Plots data for the selected backtest and provides SHAP explanations for up to 500 anomalous points. |
| [Anomaly over Time](anom-viz) | Plots how anomalies occur across the timeline of your data. |
| [Confusion Matrix](multiclass) | Compares actual data values with predicted data values in multiclass projects. For binary classification projects, use the [confusion matrix](confusion-matrix) on the [ROC Curve](roc-curve-tab/index) tab.|
| Feature Fit | Removed. See [**Feature Effects**](feature-effects). |
| [Forecasting Accuracy](forecast-acc) | Provides a visual indicator of how well a model predicts at each forecast distance in the project’s forecast window. |
| [Forecast vs Actual](fore-act) | Compares how different predictions behave at different forecast points to different times in the future. |
| [Lift Chart](lift-chart) | Depicts how well a model segments the target population and how capable it is of predicting the target. |
| [Residuals](residuals) | Clearly visualizes the predictive performance and validity of a regression model. |
| [ROC Curve](roc-curve-tab/index) | Explores classification, performance, and statistics related to a selected model at any point on the probability scale. |
| [Series Insights](series-insights-multi) | Provides series-specific information. |
| [Stability](stability) | Provides an at-a-glance summary of how well a model performs on different backtests. |
| [Training Dashboard](training-dash) | Provides an understanding about training activity, per iteration, for Keras-based models. |
| **_[Understand](understand/index):_** *Explains what drives a model’s predictions* | :~~:|
| [Feature Effects](feature-effects) | Visualizes the effect of changes in the value of each feature on the model’s predictions. |
| [Feature Impact](feature-impact) | Provides a high-level visualization that identifies which features are most strongly driving model decisions. |
| [Cluster Insights](cluster-insights) | Captures latent features in your data, surfacing and communicating actionable insights and identifying segments in your data for further modeling. |
| [Prediction Explanations](pred-explain/index) | Illustrates what drives predictions on a row-by-row basis using XEMP or SHAP methodology. |
| [Word Cloud](analyze-insights#word-cloud-insights) | Displays the most relevant words and short phrases in word cloud format. |
| **_[Describe](describe/index):_** *Model building information and feature details* | :~~:|
| [Blueprint](blueprints) | Provides a graphical representation of the data preprocessing and parameter settings via blueprint. |
| [Coefficients](coefficients) | Provides, for select models, a visual representation of the most important variables and a coefficient export capability.|
| [Constraints](monotonic) | Forces certain XGBoost models to learn only monotonic (always increasing or always decreasing) relationships between specific features and the target. |
| [Data Quality Handling Report](dq-report) | Provides transformation and imputation information for blueprints. |
| [Eureqa Models](eureqa) | Provides access to model blueprints for Eureqa generalized additive models (GAM), regression models, and classification models. |
| [Log](log) | Lists operation status results. |
| [Model Info](model-info) | Displays model information. |
| [Rating Table](rating-table) | Provides access to an export of the model’s complete, validated parameters. |
| **_[Predict](predictions/index.md)_:** *Access to prediction options* | :~~:|
| [Deploy](deploy-model) | Creates a deployment and makes predictions or generates a model package. |
| [Downloads](download) | Provides export of a model binary file, validated Java Scoring Code for a model, or charts. |
| [Make Predictions](predict) | Makes in-app predictions. |
| **_[Compliance](compliance/index)_:** *Compiles model documentation for regulatory validation* | :~~:|
| [Compliance Documentation](compliance) | Generates individualized model documentation. |
| [Template Builder](template-builder) | Allows you to create, edit, and share custom documentation templates. |
| **_[Comments](catalog-asset#add-comments)_:** *Adds comments to a modeling project* | :~~:|
| [Comments](catalog-asset#add-comments)| Adds comments to items in the **AI Catalog**. |
| **_[Bias and Fairness](bias/index)_:** *Tests models for bias* | :~~:|
| [Per-Class Bias](per-class) | Identifies if a model is biased, and if so, how much and who it's biased towards or against. |
| [Cross-Class Data Disparity](cross-data) | Depicts why a model is biased, and where in the training data it learned that bias from. |
| [Cross-Class Accuracy](cross-acc) | Measures the model's accuracy for each class segment of the protected feature. |
| **_[Insights and more](other/index)_:** *Graphical representations of model details* | :~~:|
| [Activation Maps](analyze-insights#activation-maps) | Visualizes areas of images that a model is using when making predictions. |
| [Anomaly Detection](analyze-insights#anomaly-detection) | Lists the most anomalous rows (those with the highest scores) from the Training data. |
| [Category Cloud](analyze-insights#category-clouds) | Visualizes relevancy of a collection of categories from summarized categorical features. |
| [Hotspots](analyze-insights#hotspots) | Indicates predictive performance. |
| [Image Embeddings](analyze-insights#image-embeddings) | Displays a projection of images onto a two-dimensional space defined by similarity. |
| [Text Mining](analyze-insights#text-mining) | Visualizes relevancy of words and short phrases. |
| [Tree-based Variable Importance](analyze-insights#tree-based-variable-importance) | Ranks the most important variables in a model. |
| [Variable Effects](analyze-insights#variable-effects)| Illustrates the magnitude and direction of a feature's effect on a model's predictions. |
| [Word Cloud](analyze-insights#word-clouds) | Visualizes variable keyword relevancy. |
| [Learning Curves](learn-curve) | Helps to determine whether it is worthwhile to increase dataset size. |
| [Speed vs Accuracy](speed) | Illustrates the tradeoff between runtime and predictive accuracy. |
| [Model Comparison](model-compare) | Compares selected models by varying criteria. |
| [Bias vs Accuracy](bias-tab) | Illustrates the tradeoff between predictive accuracy and fairness. |
|
index
|
---
title: External prediction comparison
description: Compare model predictions created outside of DataRobot with DataRobot-driven predictions to drive the best business decisions.
---
# External prediction comparison {: #external-prediction-comparison }
For organizations that have existing supervised [time series](time/index) models outside of the DataRobot application, the ability to compare those model predictions with DataRobot-driven predictions helps to drive the best business decisions. With this feature, you can use the output of your forecasts (your predictions) as a baseline to compare against DataRobot predictions. You can not only compare existing predictions, but can also compare with pre-existing models from a project with different settings. DataRobot applies specific, scaled [metrics](#metrics-used-for-comparison) for result comparison.
## Enabling external prediction comparison {: #enabling-external-prediction-comparison }
To compare predictions:
1. Create a time-aware project, select a date feature, and optionally (for time series), a series ID.
2. Create an [external baseline file](#create-an-external-baseline-file).
3. [Upload the file](#upload-the-baseline-file) from **Advanced options**.
4. Start the project, and when building completes, [select a metric](#metrics-used-for-comparison).
### Create an external baseline file {: #create-an-external-baseline-file }
The baseline file you use to compare predictions is created from the predictions of your alternate model.
!!! note
The prediction file used for comparison must not have more than 20% missing values in any backtest.
It must contain the same target, date, series ID (if applicable), and forecast distance as the DataRobot models you are building. When uploading, you can view the file requirements from within the setting.

Column names must match exactly, and the date and series ID columns must match the original data. In this multiseries example, stored in the AI Catalog, each row represents the prediction (the value in the `sales` column) of a specific series ID (the value in the `store` column) at a specific date (the value in the `date` column) with a specific forecast distance (the value in the `Forecast Distance` column).

Baseline prediction file requirements:
Column | Description
------ | -----------
Date/time values | Column name must match the primary date/time feature. This timestamp refers to the forecast timestamp, not the forecast point. For example, for a row with a timestamp of `Aug 1` and `FD = 1`, the baseline value should be compared to the `Aug 1` actuals (and would have been generated by the baseline model as of July 31).
Predicted values | Column name must match the project's target feature. This is the forecast of the given timestamp at the given forecast distance. For classification projects, this indicates the predicted probabilities of the positive class.
Series ID (multiseries only) | Column name must match the project's series ID.
Forecast distances | Column name must be equal to "Forecast Distance". This refers to the number of forecast distances away from the forecast point.
!!! note
When the target is a transformed target (for example, if you applied the `Log(target)` transformation operation within DataRobot to transform the target column), the prediction column name must use the transformed name (`Log(target)`) rather than original target name (`target`).
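For illustration only, the following sketch shows how a baseline file with the required columns might be assembled with pandas, using the hypothetical `date`/`store`/`sales` columns from the example above:
```python
import pandas as pd

# Hypothetical external predictions: one row per series, timestamp, and forecast distance.
baseline = pd.DataFrame(
    {
        "date": ["2016-08-01", "2016-08-01", "2016-08-02"],  # forecast timestamp
        "store": ["store_1", "store_2", "store_1"],          # series ID
        "Forecast Distance": [1, 1, 2],                      # distance from the forecast point
        "sales": [1250.0, 980.0, 1310.0],                    # your model's predictions (target column name)
    }
)
baseline.to_csv("external_baseline.csv", index=False)
```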
### Upload the baseline file {: #upload-the-baseline-file }
Once the baseline file is created and meets the file criteria, upload the file into DataRobot. Open **Advanced options > Time Series** and scroll to find **Compare models to baseline data?**. Select a method for uploading the baseline, either a local file or a file from the **AI Catalog**. Note that if you use the local file option, the file will also be added to **AI Catalog**.

Once the file is successfully uploaded, return to the top of the page to start model building.
### Metrics used for comparison {: #metrics-used-for-comparison }
When building completes, expand the model metrics dropdown and select a metric to use for comparison.

| Metric | Project type |
|------------------------|-------------------|
| MAE scaled (to external baseline) | Regression |
| RMSE scaled (to external baseline) | Regression, binary classification |
| LogLoss scaled (to external baseline) | Binary classification |
Look at the value in the **Backtest 1** column. A value less than `1` indicates higher accuracy (lower error) for the DataRobot model. A value greater than `1` indicates the external model had higher accuracy.
Using standard RMSE:

Using scaled RMSE:

Calculations to scale the metrics to the external baseline are as follows:
`<metric> of the DataRobot model / <metric> of the external model`
All values scale to the external baseline (the uploaded predictions) and work with weighted projects. Because the values are scaled (calculated by dividing the two errors), these special metrics are not straight derivatives of their originals.
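As an illustration of the scaling, the following sketch (not DataRobot code) computes RMSE scaled to an external baseline for hypothetical values:
```python
import numpy as np

def rmse(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true, dtype=float), np.asarray(y_pred, dtype=float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

actuals = [100.0, 120.0, 90.0, 110.0]          # hypothetical actual values
datarobot_preds = [98.0, 118.0, 93.0, 108.0]   # hypothetical DataRobot predictions
baseline_preds = [90.0, 130.0, 80.0, 120.0]    # hypothetical external baseline predictions

rmse_scaled = rmse(actuals, datarobot_preds) / rmse(actuals, baseline_preds)
print(rmse_scaled)  # a value below 1 indicates lower error than the external baseline
```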
|
cyob
|
---
title: Segmented modeling
description: Describes segmented modeling for multiseries projects in DataRobot.
---
# Segmented modeling for multiseries {: #segmented-modeling-for-multiseries }
Complex and accurate demand forecasting typically requires deep statistical know-how and lengthy development projects around big data architectures. DataRobot's multiseries segmented modeling automates this requirement by creating multiple projects "under the hood." Once the segments are identified and built, they are merged into a single object, the Combined Model. This leads to improved model performance and decreased time to deployment.
When using segmented modeling, DataRobot creates a full project for each segment—running Autopilot (full or quick) then selecting (and preparing) a recommended model for deployment. (See the note on using segmented modeling in [Manual](#manual-mode-in-segmented-modeling) mode.) DataRobot also marks the recommended model as "segment champion," although you can [reassign the champion](#reassign-the-champion-model) at any time.
!!! note
Although DataRobot creates a project for each segment, these projects are _not_ available from the [**Project Control Center**](manage-projects). Instead, they are investigated and managed from within the Combined Model, which _is_ available in the **Project Control Center**.
DataRobot incorporates each segment champion to create the _Combined Model_—a main umbrella project that acts as a collection point for all the segments. The result is a “one-model" deployment—the segmented project—while each segment has its own model running in the deployment behind the scenes.

It is important to remember that while segmented modeling solves some problems (model factories, multiple deployments), it cannot know which segments you care most about or which have the highest ROI. To be successful, you must correctly define the use case, set up the dataset, and define the segments.
See the [segmented modeling FAQ](segmented-faq) for more detailed information. See the [visual overview](segmented-qs) for a quick representation of why to use segmented modeling.
Modeling with segmentation is available for multiseries regression projects.
## Segmented modeling workflow {: #segmented-modeling-workflow }
Time series segmented modeling requires, first, defining segments that divide the dataset. To define segments, you can allow DataRobot to:
* Discover [clusters](ts-clustering#use-cluster-models-from-the-model-registry) in your data and then use those clusters as segments.
* Assign segments for you based on the configured segment ID.
To build a segmented modeling project:
1. Follow the standard [time series workflow](ts-flow-overview)—set the target and turn on time-aware modeling. Choose **Automated time series forecasting** as the modeling method.
2. Enable multiseries modeling by [setting the series identifier](multiseries#set-the-series-id).
3. Set the segmentation method by clicking the pencil:

4. Set whether to enable segmented modeling:

* Select _Yes, build models per segment_ to enable segmented modeling. When selected, you must also set how segments are defined.
* Select _No, build models without segmenting_ to return to the previous **Time Series Forecasting** window. If you choose not to do segmented modeling, DataRobot builds one model for all detected series (regular [multiseries](multiseries)).
5. Set how segments are defined. The table below describes each option:

Option | Description
------ | -----------
ID column | Select a column from the training dataset that DataRobot will use as the segment ID. Start to type a column name and see matching auto-complete selections or select from the identifiers that DataRobot identified. Segment ID must be different than series ID (see note below).
Existing clustering model | Use a [clustering model previously saved](ts-clustering#use-cluster-models-from-the-model-registry) to the Model Registry.
New clustering model | Start a new clustering project, with results later applied via the **Existing clustering model** option, by clicking the **time series clustering** link in the help text.
??? faq "What if I want to have one series per segment?"
The columns specified for segment ID and series ID cannot be the same; however, you can duplicate the series ID column and give it a new name. Then, set the segment ID to the new column name (using the **How are the segments defined** section). DataRobot will generate the segments using the series ID.
6. Once the method is selected—either the ID is set or an existing clustering model is selected—click **Set segmentation method**. The **Time Series Forecasting** window returns, where you can then [continue the configuration](ts-flow-overview)—training windows, duration, [KA](ts-adv-opt#set-known-in-advance-ka), and calendar selection—including changing the selected series and segment.
??? faq "How are the training periods determined if clustering was used?"
When building a segmented model using [found clusters](ts-clustering) to split the dataset into the child projects (segments), DataRobot applies the training window settings from the clustering project to the segmented modeling project. This protects the holdout in segmented modeling and prevents data leakage from the clustering model when splitting the segmented dataset into child projects. Using the start and end dates of each series, the general scenarios that affect the methodology:
* If the series data contains the time window needed (as defined in the clustering project), DataRobot simply passes the series data along.
* _Series data before the clustering training end_: If there is a series that is shorter than the full training window and extends past holdout, DataRobot only uses data points before the clustering end that is the size of the training duration (only the portions that exist within the training boundary).
* _Series data has only data older in time than the clustering training end_: If there is a legacy series whose data does not fall into the training window, DataRobot "slides back" and gathers data for the _duration_ of the training window so that the series can still be used in the segmented project and is not lost.
* _Series data only exists "newer" in time than the clustering training end_: If a series only exists in holdout, DataRobot slides the window forward but does not select any data that was used in training. In this way, the data is not dropped, but it is only used for examining the holdout of a child project.

7. When the configuration is ready, select Quick or full Autopilot, or [Manual](#manual-mode-in-segmented-modeling) mode, and click **Start**. DataRobot prompts to remind you that because it builds a complete project for each segment, the time required to finish modeling could be quite long. Confirm you want to proceed by clicking **Start modeling**. (You can set DataRobot to proceed without approval for future segmented projects.)
8. After [EDA2](eda-explained#eda2) completes, DataRobot immediately creates the Combined Model. Because the "child" models (the independent segment models) are still building, the Combined Model is not complete. However, you can [control building and worker assignment](#worker-control) from the Combined Model.

9. When modeling completes, use the Combined Model to explore segments.
## Explore results {: #explore-results }
Once modeling has finished, the **Model** tab indicates that one model has built. (See the [note](#manual-mode-in-segmented-modeling) regarding outcome when using Manual mode.) This is the completed Combined Model.

The charts and graphs available for segmented modeling are dependent on the model type:
* For the Combined Model, you can access the [**Segmentation**](#segmentation-tab), a model blueprint, the modeling log, **Make Predictions**, and **Comments**.
* For the models available in the individual segments, the visualizations and modeling tabs (Repository, Compare Models, etc.) appropriate to a multiseries regression project are available.
### Segmentation tab {: #segmentation-tab }
Click to expand the Combined Model and expose the **Segmentation** tab.

The following table describes components of the **Segmentation** tab:
Component | Description
--------- | -----------
Search | Use search to change the display so that it only includes segments that match the entered string.
Download CSV | Download a spreadsheet containing the metadata associated with the Combined Model project, including metric scores, champion history, IDs, and project history.
Segment | Lists the segment values, found in the training data by DataRobot, in the specified segment ID.
Rows | Displays segment statistics from the training data—the raw number of rows and the percentage of the dataset that those rows represent.
Total models | Indicates the number of models DataRobot built for that segment during the build process.
Champion last updated | Indicates the time and the responsible party for the last segment champion assignment. The entry also provides an [icon indicating the champion model type](leaderboard-ref#model-icons). Initially, all rows will list **by DataRobot**. Segments are listed by the "All backtests" scores; click the column header to re-sort.
Backtest 1 | Indicates the champion model's Backtest 1 score for the selected metric.
All backtests | Indicates the average score for all backtests run for the champion model.
Holdout | Provides an icon that indicates whether Holdout has been [unlocked](#unlock-holdout).
## Explore segments {: #explore-segments }
The Combined Model comprises one model per segment—the segment champion. Each individual segment, on the other hand, is a complete project. You can investigate the project from the segment's Leaderboard and even deploy a segment model, independent of the Combined Model.
### Access a segment's Leaderboard {: #access-a-segments-leaderboard }
There are multiple ways to access a segment's Leaderboard.
#### From the Combined Model {: #from-the-combined-model }
Expand the Combined Model and click the segment name in the **Segmentation** tab list.

Once clicked, the segment's Leaderboard opens. Notice that:
Indicator | Description
--------- | -----------
 | A full set of models has been built.
 | DataRobot has [recommended a model for deployment](model-rec-process) and marked a model as champion.
 | Regular Worker Queue controls are available.

#### From the Segment dropdown {: #from-the-segment-dropdown }
Use the **Segment** dropdown to change your view.

* From a segment:
* Select an alternate segment. The segment's Leaderboard displays.
* Select **View all segments** to return to the Combined Model.
* From the Combined Model, select a segment to open the segment's Leaderboard.
### Reassign the champion model {: #reassign-the-champion-model }
While DataRobot initially assigns a segment champion, you may want to change the designation. This could be the case, for example, if it were important to you that all segments provide the same model type to the Combined Model. Identify the segment champion from a segment's Leaderboard, where it is marked with the champion badge:

To reassign the champion, from the segment Leaderboard, select the model you want as champion. Then, from the menu select **Leaderboard options > Mark model as champion**.

The badge moves to the new model:

And the Combined Model's **Segmentation** tab shows when the champion was last updated and who assigned the new champion.
## Control across projects {: #control-across-projects }
Because DataRobot treats each segment as an individual project, completing the Combined Model can take significantly longer than a regular multiseries project. The exact time depends on the number of segments and the size of your dataset. You can use the controls described below to set workers (1) and to stop and start modeling (2). All actions are performed from the Worker Queue of the Combined Model and apply to all segment projects. You can also use it to [unlock Holdout](#unlock-holdout) (3).

### Worker control {: #worker-control }
From the Combined Model, you can control the number of modeling workers across all segment projects. DataRobot automatically re-balances workers between segments, distributing available workers between running segments as each segment completes modeling. When changing the worker count, DataRobot ignores any projects not in the modeling stage.
### Pause/Start/Stop child modeling {: #pause-start-stop-child-modeling }
From the Worker Queue of the parent segmented project, you can control modeling actions of the child projects. Use the pause/start/cancel buttons in the sidebar; the selected action is applied to all child projects simultaneously. Specifically:
- At the start of a segmented project, no queue actions are available.
- When all segments have reached the [EDA2](eda-explained#eda2) stage, the Pause and Start buttons become available.
- The Cancel button becomes available when child projects are in the modeling stage and have at least one job running.
### Unlock Holdout {: #unlock-holdout }
You can [unlock Holdout](unlocking-holdout) for an entire project or for each segment.
* To unlock the entire project—all models in all segments—choose **Unlock Holdout** from the Combined Model's Worker Queue.
* To unlock Holdout for all models in a segment, open the segment's Leaderboard and choose **Unlock project Holdout for all models** in the Worker Queue.
### Leaderboard model scores {: #leaderboard-model-scores }
Scores for the Combined Model are updated when the model completes building (champions are assigned in all segments). Scores are recalculated any time a segment's champion is replaced. For efficiency, the score is calculated by aggregating the individual champion scores. Metrics that support this method of calculation are: MAD, MAE, MAPE, MASE, RMSE, RMSLE, SMAPE, and Theil’s U.
When one or more champions are prepared for deployment, the scores shown reflect the parent scores. (The parent of the champion/recommended model is the model the champion is trained _from_.)
??? tip "How are metric scores weighted based on number of series?"
The evaluation metric for a combined model is an average based on the number of rows in a particular partition (training, validation, holdout). Because each segment can have a different number of series to predict, DataRobot weights the value to account for the model count for each series. In the case of MAE/MAD/MAPE/SMAPE, it is calculated as:
`MAE_X = MAE_1 * w_1 + MAE_2 * w_2 + ...`
For RMSE/RMSLE, as:
`RMSE_X = sqrt(RMSE_1**2 * w_1 + RMSE_2**2 * w_2 + ...)`
MASE/Theil’s U are calculated using champion metrics and champion base metrics in two steps—calculate naive model scores in all segments, then calculate final combined model score.
`naive_X = base_X / score_X`
`SCORE = (base_1 + base_2 + ... ) / (naive_1 + naive_2 + ...)`
The score weight for a segment is essentially the number of rows in a particular partition of the segment in relation to the total number of holdout rows in all segments. There are explicit tests for calculating score consistency. The score is calculated for the full dataset and then split into segments—scores are calculated individually for each segment and then combined and compared with the full dataset score.
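As an illustrative sketch of the weighting described above, the MAE-style and RMSE-style aggregations could be computed as follows. The per-segment scores and row counts are made up, and this is not DataRobot's internal implementation:

```python
import math

# Hypothetical segment champions: partition row counts and metric scores.
segments = [
    {"rows": 700, "mae": 12.4, "rmse": 18.1},
    {"rows": 200, "mae": 15.0, "rmse": 22.7},
    {"rows": 100, "mae": 9.8,  "rmse": 14.3},
]

total_rows = sum(s["rows"] for s in segments)
weights = [s["rows"] / total_rows for s in segments]  # w_i, based on rows per segment

# MAE/MAD/MAPE/SMAPE: weighted average of the segment scores.
combined_mae = sum(w * s["mae"] for w, s in zip(weights, segments))

# RMSE/RMSLE: weighted average of the squared segment scores, then square root.
combined_rmse = math.sqrt(sum(w * s["rmse"] ** 2 for w, s in zip(weights, segments)))

print(f"Combined MAE:  {combined_mae:.3f}")
print(f"Combined RMSE: {combined_rmse:.3f}")
```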
When looking at the Combined Model on the Leaderboard, you may notice that the model has no score (only N/A), which indicates:
* The model is not yet complete.
* All backtests have not yet completed for one or more segment champions (the All Backtests score is N/A).
* The selected metric does not support score aggregation (for example, FVE Poisson or Gamma Deviance):

Change the metric to a supported metric, for example MAE, and an aggregated score from the Combined Model becomes available.
To see the individual champion scores, expand the Combined Model to display the **Segmentation** tab.

Individual champion scores are reported there. Note that:
* Both the Backtest 1 and All Backtests scores are N/A if the champion is not assigned.
* An asterisk indicates that the champion model has been [prepared for deployment](ts-date-time#recommended-time-series-models) (retrained as a start/end model into the most recent data) and thus the scores of its parent model are used.
If you change the champion, DataRobot passes the scores from the new champion (or its parent) to the Combined Model.
## Manual mode in segmented modeling {: #manual-mode-in-segmented-modeling }
When using Manual mode with segmented modeling, DataRobot creates individual projects per segment and completes preparation _as far as the modeling stage_. However, DataRobot does not create per-project models. It does create the Combined Model (as a placeholder), but does not select a champion. Using Manual mode is a technique you can use to have full manual control over which models are trained in each segment and selected as champions, without taking the time to build the models.
## Deploy a Combined Model {: #deploy-a-combined-model }
To fully leverage the value of segmented modeling, you can deploy Combined Models like any other time series model. After selecting the champion model for each included project, you can deploy the Combined Model to create a "one-model" deployment for multiple segments; however, the individual segments in the deployed Combined Model still have their own segment champion models running in the deployment behind the scenes. Creating a deployment allows you to use [DataRobot MLOps](mlops/index) for accuracy monitoring, prediction intervals, challenger models, and retraining.
When segmented modeling completes, you can deploy the resulting Combined Model:
1. Once Autopilot has finished, the **Model** tab contains one model. This model is the completed Combined Model.
2. Click the **Combined Model**, and then click **Predict** > **Deploy**.
3. On the **Deploy** tab, click **Deploy model**.

!!! note
You can also click **Add to Model Registry** and then deploy the Combined Model from there.
4. Add [deployment information and create the deployment](add-deploy-info).
5. Monitor, manage, and govern the deployed model in [DataRobot MLOps](mlops/index). Set up [retraining policies](set-up-auto-retraining) to maintain model performance post-deployment.
### Combined Model deployment considerations {: #combined-model-deployment-considerations }
{% include 'includes/deploy-combined-model-include.md' %}
## Modify and clone a deployed Combined Model {: #modify-and-clone-a-deployed-model-combined-model }
After deploying a Combined Model, you can change the segment champion for a segment by cloning the deployed Combined Model and modifying the cloned model. This process is automatic and occurs when you attempt to change a segment's champion within a deployed Combined Model. The cloned model you can modify becomes the **Active Combined Model**. This process ensures stability in the deployed model while allowing you to test changes within the same segmented project.
!!! note
Only one Combined Model on a project's Leaderboard can be the **Active Combined Model** (marked with a badge).
To modify and clone a deployed Combined Model, take the following steps:
1. Once a Combined Model is deployed, it is labeled **Prediction API Enabled**.
2. Click the active and deployed **Combined Model**, and then in the **Segments** tab, click the segment you want to modify.

3. [Reassign the segment champion](ts-segmented#reassign-the-champion-model).
4. In the dialog box that appears, click **Yes, create new combined model**.

5. On the project's **Leaderboard**, you can access and modify the **Active Combined Model**.

!!! tip
While the **Combined Model updated** notification is visible, you can click **Go to Combined Model** to return to the segment's Combined Model in the **Leaderboard**.

|
ts-segmented
|
---
title: Clustering
description: Available for time series projects, clustering groups similar series across a multiseries dataset for insights or to prepare for segmented modeling.
---
# Clustering {: #clustering }
Time series clustering is an out-of-the-box solution unique to DataRobot that enables you to easily identify and group similar series across a multiseries dataset. Instead of manually running a time series clustering technique outside the platform and then using the cluster assignments as a segmenting feature, this process is entirely contained within the time series workflow. You do not need to be familiar with advanced concepts like Dynamic Time Warping (DTW) or be code-savvy to use the clustering capability, as DataRobot builds both DTW and Velocity clustering models (see the detailed descriptions [here](clustering-algos){ target=_blank }).
!!! note
[Non-time-aware projects clustering](clustering) is also available, although segmented modeling is not.
**Example:** You are predicting shoe sales across your North American stores. With clustering, DataRobot can automatically group all stores in San Francisco and Cleveland into one cluster because the sales profiles for these locations are the same.
Simply put, clustering is a mechanism for grouping the series together. Found clusters can then be used as input to time series [segmented modeling](ts-segmented). (Additionally, clustering can be used to simply get a better understanding of data.) Without clustering, _you_ define how to group the series together based on a configured segment ID. Clustering, on the other hand, automatically groups series together by looking at the data and determining which series look most similar. Once clusters are established, you can:
* Create a clustering model to [use immediately](#use-cluster-models-now) as part of a segmented modeling workflow.
* Create a clustering model and save it to the [Model Registry to use later](#use-cluster-models-from-the-model-registry) for segmented modeling.
When you cluster, there is no target ("output") variable. DataRobot groups series together based on their similarity. However, you must think about the target variable you will use in segmented modeling. DataRobot recommends using the variable you plan to select as the target in your segmented modeling project as the output variable for clustering.
See also the [time series clustering considerations](ts-consider#clustering-considerations).
## Cluster discovery {: #cluster-discovery }
To allow DataRobot to discover clusters:
1. Upload data, click **No target?**, and select **Clusters**.

**Modeling Mode** defaults to Comprehensive and **Optimization Metric** defaults to [Silhouette Score](opt-metric#silouette-score).
2. Click **Set up time-aware modeling** and select the primary date/time feature. (Modeling mode switches from Comprehensive to Autopilot.)
3. Set the Series ID. DataRobot launches the time-aware clustering workflow—an unsupervised project with the **Clusters** option enabled.
4. Set the feature(s) you want to cluster on. Note that only the selected features will be available for modeling. DataRobot automatically adds the date/time feature and series ID.

* To use clusters in segmented modeling, add only the intended output variable ("target"). DataRobot recommends using the variable you plan to select as the target in your segmented modeling project as the output variable for clustering.
* To cluster without segmentation, add any features.
Click **Set Cluster features**.
!!! info
DataRobot does not use features created during the [feature derivation process](feature-eng) when clustering.
5. Optionally, change the number of clusters that DataRobot discovers. Click **Clustering** in the help text to open the advanced options [**Clustering**](ts-cluster-adv-opt) tab. If using Manual mode, you will have an option to set the number from the Repository.
??? tip "Deep dive: Clustering buffer"
A clustering model has a start and end timestamp. The difference between start and end is the clustering training duration. Any time after the end is considered the holdout buffer.
If there is enough data available, DataRobot creates a clustering buffer that can be seen in the **Partitioning** section of advanced options. The clustering buffer is a section of data that DataRobot calculates to represent what the holdout would be in a subsequent segmentation project. It then shifts the training data dates back to account for the holdout period, to prevent data leakage and to ensure that you are not training a clustering model into what will be the holdout partition in segmentation.

To remove the buffer, toggle **Include training buffer** to off.
6. Click **Start** to begin Autopilot.
You can use the discovered clusters to explore—clusters can capture latent behavior that are not explicitly captured by a column in the dataset. Or, [continue the workflow](#use-cluster-models-now) to use the clusters in a segmented modeling project or save the model to the Model Registry for [later use](#use-cluster-models-from-the-model-registry).
## Use cluster models now {: #use-cluster-models-now }
Once Autopilot completes, you can view the [**Series Insights**](series-insights) tab for cluster and series distribution information. To create a segmented modeling project that uses the newly found clusters to define the segments:
1. Select a model from the Leaderboard and click **Predict**; the tab opens to **Use for Segmentation**.
2. Enter the target feature for the segmented modeling project in the **What would you like the new project to predict?** field:

3. Click **Create project and save to Model Registry**.
??? tip "To save the clustering model and create the project later"
Instead of creating a segmentation project now, you can save the clustering model as a model package by selecting **Save to Model Registry**.

Later you can [build a segmented modeling project using the clustering model](#use-cluster-models-from-the-model-registry).
4. Click **Go to project**.

Your segmentation method is configured with the clustering model.

5. Click **Start** to build your segmented model. At the prompt, confirm that you want to run a segmentation project.
After modeling is complete, a Combined Model displays on the Leaderboard where you can [explore the results](ts-segmented#explore-results) and the [model segments](ts-segmented#explore-segments).
!!! tip
This procedure saves the time series clustering model as a model package. You can later [create new segmented modeling projects](#use-cluster-models-from-the-model-registry) using the saved clustering model package.
## Use cluster models from the Model Registry {: #use-cluster-models-from-the-model-registry }
After you save a time series clustering model as a model package, you can use it in a new segmented modeling project.
!!! note
When building a segmented modeling project from a clustering project, you must use the same dataset that was used to generate the clusters.
1. Use the standard workflow to set up a [time series project](ts-flow-overview):
* Enter the target that you specified for **What would you like the new project to predict?** when you created clusters in the steps above.
* Set **Automated time series forecasting** as the modeling method.
* Set the series ID.
2. Modify window settings as needed and click the pencil next to **Segmentation method**.

3. Confirm building models per segment. Then, choose to use an **Existing clustering model** and click **+ Browse model registry** in the definitions section.

4. In the resulting popup window, select a time series clustering model package and click **Select model package**.

5. The package is now listed as part of the segmentation definition screen. DataRobot applies the clustering project's training window length to the segmentation project, ensuring that the clusters used for segmentation were evaluated in the clustering project. Click **Set segmentation method**.

6. Click **Start** to build your segmented model. At the prompt, confirm that you want to run a segmentation project.
After modeling is complete, a Combined Model displays on the Leaderboard. You can [explore the results](ts-segmented#explore-results) and the [segment models](ts-segmented#explore-segments).
|
ts-clustering
|
---
title: Multiseries modeling
description: Multiseries modeling allows you to model datasets that contain multiple time series based on a common set of input features.
---
# Multiseries modeling {: #multiseries-modeling }
!!! note
See these additional [date/time partitioning considerations](ts-consider#datetime-partitioning-considerations).
Multiseries modeling allows you to model datasets that contain multiple time series based on a common set of input features. In other words, a dataset that could be thought of as consisting of multiple individual time-series datasets with one column of labels indicating which series each row belongs to. This column is known as the series ID column.
!!! tip
If DataRobot detects multiple series, consider whether you want multiseries or multiseries with [segmented](ts-segmented) modeling. If you select segmented modeling, DataRobot creates individual sets of models for each segment (and then automatically combines the best model per segment to create a single deployment). If you don't select segmented modeling, DataRobot creates, from the dataset, a single model representing all series.
DataRobot automatically suggests using multiseries modeling when the chosen primary date feature is not eligible for single-series modeling. This can happen, for example, because timestamps are not unique or are irregularly spaced. By grouping the rows based on the series ID feature, DataRobot knows to treat each group as a separate time series.
The following sample, perhaps sales from multiple stores in a chain, uses the column `store_id` as a common identifier for multiseries modeling:
```
store_id, timestamp, target, input1, …
1, 2017-01-01, 1.23, AC,
1, 2017-01-02, 1.21, AB,
1, 2017-01-03, 1.21, BC,
1, 2017-01-04, 1.23, B,
...
2, 2017-01-03, 1.22, CBC,
2, 2017-01-04, 1.23, AAB,
2, 2017-01-05, 1.22, CA,
2, 2017-01-06, 1.23, BAC,
...
```
Some features of DataRobot multiseries modeling:
* DataRobot automatically detects when multiseries is required and provides a multiseries modeling workflow, described below. For cases where the data contains multiple series but DataRobot does not detect them, you can also manually assign a series ID.
* With regression projects, you can aggregate the target value across all series in the multiseries project, letting DataRobot automatically generate lags and statistics for the aggregated column. Enable this functionality in [**Advanced options > Time Series**](ts-adv-opt).
* The [**Feature Over Time**](ts-date-time#date-range) and [**Accuracy Over Time**](aot) visualizations provide insights based on an individual series in the data set or multiple series in one view.
!!! note "Feature derivation with multiseries"
When DataRobot runs the [feature derivation process](feature-eng) on a multiseries dataset, it determines the minimum and maximum dates to apply globally during derivation by selecting the longest 10 series from the dataset and using the minimum and maximum dates of these series. Any data to be transformed that falls outside these dates is not used in the modeling process. This is true even if the applied dates were previously selected as part of partitioning. As a result, it effectively appears as if the data was truncated.
To ensure that the entire global history is used for feature transformations and modeling, be certain to have at least one series that contains dates across the full date range of the training dataset.
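The sketch below approximates which dates would bound derivation by taking the min/max dates of the ten longest series. The DataFrame and its column names (`series_id`, `timestamp`) are assumptions for illustration, "longest" is interpreted here as the series with the most rows, and this is not DataRobot's internal logic:

```python
import pandas as pd

def approximate_derivation_window(df, series_col="series_id", date_col="timestamp", n_longest=10):
    """Return the min/max dates spanned by the n longest series (by row count)."""
    df = df.copy()
    df[date_col] = pd.to_datetime(df[date_col])
    longest = df[series_col].value_counts().head(n_longest).index
    window = df.loc[df[series_col].isin(longest), date_col]
    return window.min(), window.max()

# Example: rows outside this window would effectively be truncated during derivation.
# start, end = approximate_derivation_window(sales_df, "store_id", "timestamp")
```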
See [below](#multiseries-use-case) for a sample use case using multiseries modeling. Also see the multiseries-specific [sampling](#sampling-in-multiseries-projects) explanation.
## Set the series ID {: #set-the-series-id }
Once you have selected to use time series modeling, DataRobot runs heuristics to detect whether the data has multiple rows with the same timestamp. If it detects multiple series, the multiseries workflow initiates:
1. Select a series identifier, either by clicking on one that DataRobot identified (1) or manually entering [a valid](#validation-criteria-for-series-id) column name of a known series (2).

2. Once selected, verify the number of unique instances and click **Set series ID**:

Or, select **Go back** to return to time-aware modeling type selection.
3. If you want to modify the series ID, click the series ID pencil icon to return to the series ID selection screen:

Or, change the identifier from the **Time Series** tab of [**Advanced options**](ts-adv-opt):

4. When the series ID is correct:
* [return to the time series configuration](ts-customization#forecast-settings) steps to complete project setup.
* Optionally, [set up segmented modeling](ts-segmented).
When model building is complete, evaluate your models with [time series-specific visualizations](ts-leaderboard) available from the **Leaderboard**. For multiseries modeling, these provide insights based on an individual series in the dataset or multiple series in a single view.
## Set the series ID through advanced options {: #set-the-series-id-through-advanced-options }
If DataRobot does not detect multiple series in your data, you can manually set a series ID and, if it is [valid](#validation-criteria-for-series-id), use multiseries modeling. To manually set a series ID:
1. After selecting time series modeling, expand the **Show Advanced options** link and select the [**Time Series**](ts-adv-opt) tab.
2. In the segment prompting to **Use multiple time series**, click **Set a Series ID**:

3. [Manually enter](#multi-set-id) a valid series identifier.
4. When validated, [return to the time series configuration](ts-customization#forecast-settings) steps to complete project setup.
### Validation criteria for series ID {: #validation-criteria-for-series-id }
For a feature to qualify as a series ID, it must meet the following criteria:
* It cannot be the target or primary date/time feature.
* The variable type must be numeric, categorical, or text.
* Complex float values with decimals are not allowed.
* Timestamps within each series group should be unique.
* Timestamps within each group should have [regular time steps](ts-flow-overview#time-steps).
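A rough pre-flight check of two of these criteria (unique timestamps and a consistent time step within each series group) might look like the following pandas sketch. The column names are placeholders, the strict single-step check is more conservative than DataRobot's semi-regular detection, and this is not a DataRobot API:

```python
import pandas as pd

def check_candidate_series_id(df, series_col, date_col):
    """Report duplicate timestamps and time-step regularity per series group."""
    df = df.copy()
    df[date_col] = pd.to_datetime(df[date_col])
    report = {}
    for series, grp in df.groupby(series_col):
        ts = grp[date_col].sort_values()
        steps = ts.diff().dropna()
        report[series] = {
            "duplicate_timestamps": bool(ts.duplicated().any()),
            "single_regular_step": steps.nunique() <= 1,
        }
    return pd.DataFrame.from_dict(report, orient="index")

# Example: check_candidate_series_id(sales_df, "store_id", "timestamp")
```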
## Multiseries calendars {: #multiseries-calendars }
[Calendar files](ts-adv-opt#calendar-files) contain a list of events relevant to your dataset that DataRobot then uses to derive time series features. The [**Accuracy Over Time**](aot#identify-calendar-events) chart provides a visualization of calendar events along the timeline, including hover help identifying series-specific events.
For multiseries projects, a third column in the calendar file identifies the series to which the event applies. If left blank, it applies to all series in the dataset. For example, the calendar file below lists US holidays, some of which apply to individual states and some to all states:
```
date,holiday,state
2019-01-01,New Year's Day
2019-01-18,Lee-Jackson Day,Virginia
2019-01-21,Martin Luther King, Jr. Day
2019-02-18,Washington's Birthday
2019-03-17,Evacuation Day,Massachusetts
2019-03-18,Evacuation Day (Observed),Massachusetts
2019-05-27,Memorial Day
2019-07-04,Independence Day
2019-09-02,Labor Day
...
```

Note that the entry in the ID (third) column must match the dataset's series identifier column. If you change the series ID for the dataset, you must re-upload the calendar file. See the full list of calendar criteria [here](ts-adv-opt#calendar-files).
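A minimal sketch of writing a multiseries calendar file like the one above; the events are hypothetical, and the `csv` module automatically quotes event names that contain commas:

```python
import csv

# (date, event name, series ID) -- leave the third value empty to apply the event to all series.
events = [
    ("2019-01-01", "New Year's Day", ""),
    ("2019-03-17", "Evacuation Day", "Massachusetts"),
    ("2019-07-04", "Independence Day", ""),
]

with open("calendar.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["date", "holiday", "state"])  # third column must match the series ID values
    writer.writerows(events)
```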
## Sampling in multiseries projects {: #sampling-in-multiseries-projects }
Time series uses [sampling](ts-create-data#downsampling-in-time-series-projects) to ensure a manageable, optimized modeling dataset. Multiseries projects, however, require a somewhat different approach to ensure that there is enough data for series evaluation. As a result, insights (**Series Insights**, **Accuracy Over Time**, and **Forecast vs Actuals**) in multiseries projects are not sampled, although sample data is used for modeling and model evaluation.
Consider the following example, where:
* the base dataset is ~4.2M rows
* there are ~160 different series
* the series covers very long date ranges
When run as an OTV project, 100% of rows are used in the modeling process. When run as a time series project, the dataset grows by roughly 62x to 260M rows. This is because each series is treated separately and all forecast distances (within the forecast window) must be included for the specified training window. That results in the following numbers (based on the forecast distance):
| Forecast distance | Derived rows | Rows used | % of total |
|---------------------|----------------|-------------|---------------|
| OTV (FD is N/A) | 4,184,841 | 4,184,841 | 100% |
| 1-5 | 13,030,580 | 3,498,680 | 26.85% |
| 1-10 | 26,061,160 | 3,493,940 | 13.41% |
| 1-100 | 260,611,600 | 3,010,900 | 1.16% |
In the end, the amount of data used for the OTV and multiseries projects was similar (once sampling was applied). That is, multiseries started with about 70-80% as many total rows as OTV but the derivation process added many new columns, triggering size limits.
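As a rough illustration of the growth in the table above, the derived row count scales with the number of forecast distances. The per-distance base of roughly 2.6M rows used below is implied by the table (each derived count divided by its forecast-distance count), not stated explicitly:

```python
base_rows = 4_184_841      # original rows (the OTV case)
per_fd_rows = 2_606_116    # rows derived per forecast distance, implied by the table

for n_fd in (5, 10, 100):
    derived = per_fd_rows * n_fd
    print(f"FD 1-{n_fd}: {derived:,} derived rows (~{derived / base_rows:.0f}x the original)")
```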
With multiseries, the effect is that in the end there can be very few samples of data from each series. That percentage of data is picked randomly from each series, giving the blueprint only a “glimpse” of each series to build models with. In contrast, OTV doesn't distinguish between series--the series ID column is just another feature for the model to learn from. OTV models, as a result, are able to learn from all of the data from each series.
If you find that your dataset and project settings lead to excessive sampling levels, try reconfiguring the project or modeling approach. This can be accomplished, for example, by splitting very long forecast windows into smaller segments and creating a DataRobot project for each segment. For datasets with many series, you can also segment the data in multiseries projects into clusters of similar series.
Alternatively, you can reduce the number of columns used or columns that are unlikely to be useful as lagged features by [excluding them from derivation](ts-feature-lists#excluding-features-from-feature-lists). Finally, consider the length of your training set. Reduction in the duration of the training data to exclude the oldest data can both increase modeling accuracy and reduce sampling on the dataset.
## Multiseries use case {: #multiseries-use-case }
Predicting sales and comparing stores:
> A large chain store wants to create a forecast to correctly order inventory and staff stores with the needed number of people for the predicted store volume. An analyst managing the stores uses DataRobot to build time series models that predict daily sales. First, she looks at the distribution of sales across time to get a sense of the trend. Because there is a lot of data, to review and verify that the data is correct before modeling she uses the date range slider to zoom in only on the data from the past few weeks.
> After setting the target and configuring the time series options, she clicks Start to generate time series features and run Autopilot. After running Autopilot with a forecast window of 1 to 7 days in the future, she looks at the Accuracy Over Time chart of the top-performing model on the Leaderboard to see how the model performs on the main validation set (Backtest 1). She looks at the overall view and then switches the series identifier to view each store. She then uploads the most recent history to make predictions that forecast sales for each series over the next week. Then, she downloads the forecast and uses it to order the correct inventory amounts for next week.
|
multiseries
|
---
title: Time series modeling
description: Follow the steps used to create time series models.
---
# Time series modeling {: #time-series-modeling }
!!! info "Availability information"
Contact your DataRobot representative for information on enabling automated time series (AutoTS) modeling.
Time series modeling forecasts multiple future values of the target. With [out-of-time validation (OTV)](otv), by contrast, you are not forecasting but instead modeling time-relevant data and predicting the target value on each individual row. Time series forecast modeling is based on the following framework; see [the reference section](ts-framework) for a description of the framework elements. See the section on [nowcasting](nowcasting) to better understand that framework.

## Requirements and availability {: #requirements-and-availability }
Be sure to review the time step, data requirements, interval units, and acceptable project types for time series modeling, which are described in detail below.
* [Time steps](#time-steps)
* [Data requirements](#data-requirements)
* [Interval units](#interval-units)
* [Project types](#project-types)
See these additional considerations for [OTV](otv#feature-considerations) and [time series](ts-consider) modeling.
## Basic workflow {: #basic-workflow }
The following describes the steps to build time series models. Each step links to detailed explanations and descriptions of the options, where applicable. See the time series [overview and description](whatis-time) for detailed descriptions of how DataRobot implements time series modeling.
1. Load your dataset and select the target feature. If the dataset contains a date feature, the **Set up time-aware modeling** link activates. Click the link to get started.

2. From the dropdown, select the primary date/time feature. The dropdown lists all date/time features that DataRobot detected during [EDA1](eda-explained).

3. After selecting a feature, DataRobot computes and then loads a histogram of the time feature plotted against the target feature (feature-over-time). Note that if your dataset qualifies for [multiseries modeling](multiseries), this histogram represents the average of the time feature values across all series plotted against the target feature.

4. Select the time series approach you would like to apply:

* Use [**Automated time series forecasting**](ts-flow-overview) when you want to forecast multiple future values of the target (for example, predicting sales for each day next week). Use this to extrapolate future values in a continuous sequence.
* Use [**Automated time series nowcasting**](nowcasting) when you want to use modeling to determine current values.
Or, use [**Automated machine learning**](otv) (OTV) when your data is time-relevant but you are not forecasting (instead, you are predicting the target value on each individual row). Use this if you have single event data, such as patient intake or loan defaults.
5. If you selected time series and DataRobot detects series data, set the series ID for [multiseries](multiseries) modeling.

* If DataRobot does not detect a series but your dataset qualifies, set the series identifier using [**Advanced options**](multiseries#set-the-series-id-through-advanced-options).
* To enable [segmented modeling](ts-segmented), after selecting the series identifier, click to change the value of **Segmentation method** from *None* to your segment ID.

6. If you were prompted that your time step was irregular, consider employing the [data prep tool](ts-data-prep).

7. Customize the window settings (Feature Derivation Window (FDW) and Forecast Window (FW)) to configure how DataRobot derives features for the modeling dataset. Before modifying these values, see the [detailed guidance](ts-customization#set-window-values) for the meaning and implication of each window.
!!! note
If using [nowcasting](nowcasting), these window settings differ.

8. Set the training window format, either **Duration** or **Row Count**, to specify how Autopilot chooses training periods when building models. Before setting this value, see [the details](ts-customization#duration-and-row-count) of row count vs. duration and how they apply to different folds. Note that, for irregular datasets, the setting defaults to **Row Count**. Use the [data prep tool](ts-data-prep) before changing this setting.

9. Consider whether to set ["known in advance" (KA)](ts-adv-opt#set-known-in-advance-ka) features or to upload an [event calendar](ts-adv-opt#calendar-files) (both set in the advanced options).
* Features treated as KA variables are used unlagged when making predictions.
* Calendars list events for DataRobot to use when automatically deriving time series features (setting features as unlagged when making predictions).

10. Explore what a feature looks like over time to view its [trends](glossary/index#trend) and determine whether there are gaps in your data (which is a data flaw you need to know about). To access these histograms, expand a numeric feature, click the [**Over Time**](ts-leaderboard#understand-a-features-over-time-chart) tab, and click **Compute Feature Over Time**:

In this example, you can see a strong weekly pattern as well as a seasonal pattern. You can also change the resolution to see how the data aggregates at different intervals. Click **Show time bins** to see the number of rows per bin (blue bars at the bottom of the plot). Visualization of data density can provide information about potential [missing values](ts-create-data#handle-missing-values).
Read further options for [interacting with the **Over Time**](ts-leaderboard#understand-a-features-over-time-chart) chart.
11. To modify additional settings used for modeling (date/time format, training window, validation length, etc.), scroll down and expand **Show advanced options**. See the [full documentation](ts-adv-opt) for more information.

12. Once all configuration is set, choose [a modeling mode](multistep-ta) and press **Start**.
13. When the modeling process begins, DataRobot analyzes the target and creates time-based features to use for modeling. Display the **Data** page to watch the new features as they are created. By default DataRobot displays the **Derived Modeling Data** panel; to see your original data, click **Original Time Series Data**.
* Click [**View more info**](ts-create-data#review-data-and-new-features) for more specific feature generation details, including access to the derivation log.

* View the [Feature Lineage](ts-leaderboard#feature-lineage-tab) chart to understand the process that created any feature:

14. After reviewing the dataset, consider whether you want to [restore any features](restore-features) that were pruned by the feature reduction process.
15. Finally, if desired work with the [time series feature lists](ts-feature-lists) used for modeling.
## Next steps {: #next-steps }
The following sections describe how to continue with time series modeling:
Section | Describes...
------- | -----------
[Time series Leaderboard models](ts-leaderboard#investigate-models) | Working with Leaderboard models, including changing training and sampling criteria.
[Making predictions](ts-predictions) | Making predictions and preparing for deployment.
[Customize project settings](ts-customization) | Modifying default partitioning and window settings for use-case specific implementations.
And further reading:
Section | Describes...
------- | -----------
[Framework](ts-framework) | The framework DataRobot uses to build time series models, including common [patterns](ts-framework#common-patterns-of-time-series-data) in time series data.
[Derived modeling dataset](ts-create-data#review-data-and-new-features) | DataRobot's feature derivation process, which creates a new modeling dataset for time series projects.
[Feature lists](ts-feature-lists) | Specialized for time series modeling.
[Automated Feature Engineering for Time Series Data](https://www.kdnuggets.com/2017/11/automated-feature-engineering-time-series-data.html){ target=_blank } | A more technical discussion of the general framework for developing time series models, including generating features and preprocessing the data as well as automating the process to apply advanced machine learning algorithms to almost any time series problem.
## Deep dive: Requirements {: #deep-dive-requirements }
The following sections provide details about models and project requirements, including:
* DataRobot builds both the standard algorithms and special time series blueprints to run specific models for time series. As always, you can run any time series models that DataRobot did not run from the [**Repository**](repository).
* DataRobot generates both traditional time series models (e.g., the ARIMA family) and advanced time series models (e.g., XGBoost).
* For models with the suffix "with Forecast Distance Modeling," DataRobot builds a different model for each distance in the future, each having a unique blueprint to make that prediction.
* The "Baseline prediction using most recent value" model (also known as "naive predictions") uses the most recent value or seasonal differences as the prediction; this model can be used as a baseline for judging performance.
??? info "For a time series project with multiple FDs, what do the displayed Leaderboard evaluation metrics correspond to?"
When a project has multiple FDs, the Leaderboard metric is a calculated summary across all FDs, across all dates, and across all series. That is, for each FD DataRobot generates predictions for each day in validation and for each series. Then, using actuals for each of those predictions, DataRobot calculates the loss metric such that the Leaderboard shows the loss metric across all those predictions. If there are too many FDs, sampling is used.
For example, for a project with 30 series, 30 days of validation, and 30 FDs, DataRobot generates `30*30*30` predictions and then applies the loss function.
### Time steps {: #time-steps }
The first step in time series modeling is to be certain that your data is the correct type to employ forecasting or nowcasting. DataRobot categorizes data based on the [_time step_](glossary/index#time-step)—the typical time difference between rows—as one of three types:
| Time step | Description | Example |
|--------------|------------------------------|--------------------------------------|
| Regular | Regularly spaced events | Monday through Sunday |
| Semi-regular | Data that is mostly regularly spaced | Every business day but not weekends. |
| Irregular | No consistent time step | Random birthdays |
Assuming a regular or semi-regular time step, DataRobot's time series functionality works by encoding time-sensitive components as features, transforming your original input dataset into a [modeling dataset](ts-create-data) that can use conventional machine learning techniques. (Note that a time step is different than a [time interval](#interval-units), which is described below.) For each original row of your data, the modeling dataset includes both:
* New rows representing examples of predicting different distances into the future.
* For each input feature, new columns of lagged features and rolling statistics for predicting that new distance.
!!! note
When a time step is irregular, you can use [row-based partitioning](ts-customization#duration-and-row-count) or the [data prep tool](ts-data-prep) (to avoid the inaccurate rolling statistics these gaps can cause).
### Data requirements {: #data-requirements }
To activate time-series modeling:
* The time series dataset must meet the [file size and row requirements](file-types#time-series-file-import-sizes).
* Even if your data contains time features, time series forecasting mode may be disabled if the data contains irregular time units or non-unique time stamps. If this happens, see the [time series data prep tool](ts-data-prep) for potential solutions.
* The dataset must contain a column with a [variable type “Date”](file-types#special-column-detection) for partitioning.
!!! note
There are times that you may want to [partition without holdout](ts-leaderboard#partition-without-holdout), which changes the minimum ingest rows and also the output of various visualizations.
If the requirements above are met, the date/time partitioning feature becomes available through the **Set up time-aware modeling** link on the **Start** screen.
### Interval units {: #interval-units }
Although many of the examples in this documentation show a time unit of "days," DataRobot supports several intervals for time series and multiseries modeling. Currently, DataRobot supports time steps that are integer multiples of the following units:
* row
* millisecond
* second
* minute
* hour
* day
* week
* month
* quarter
* year
For example, the time step between rows can be every 15 minutes (a multiple of minutes) but cannot be a fraction such as 13.23 minutes. DataRobot automatically detects the time unit and time step and, if it cannot, rejects the dataset as irregular. Datasets using milliseconds as a time unit must specify training and partitioning boundaries at the second level, and must span multiple seconds, for partitioning to operate correctly. Additionally, to use a fractional-second forecast point, you must use the default forecast point.
### Project types {: #project-types }
DataRobot’s time series modeling supports both regression and binary classification projects. Each type has a full selection of models available from Autopilot or the Repository, specific to the project type. Both types have generally the same workflow and options, with the following differences found in binary classification projects:
* In the advanced option settings, the following are disabled:
* [**Treat as exponential trend?**](ts-adv-opt#treat-as-exponential-trend)
* [**Apply differencing?**](ts-adv-opt#apply-differencing)
* [**Exposure**](additional#set-exposure)
* Simple and seasonal differencing are not applied.
* Only classification metrics are supported.
* No differencing is performed, so feature lists using a differenced target are not created. By default, Autopilot runs on `Baseline only (average baseline)` and `Time Series Informative Features`. Note that "average baseline" refers to the average of the target in the [feature derivation window](ts-customization#set-window-values).
* Classification blueprints do not use naive predictions as offset in modeling.
|
ts-flow-overview
|
---
title: Time series predictions
description: Understand the prediction methods used with DataRobot's time series modeling.
---
# Time series predictions {: #time-series-predictions }
!!! info "Availability information"
Contact your DataRobot representative for information on enabling automated time series (AutoTS) modeling.
See these additional considerations for working with [time series](ts-consider) modeling.
Before making predictions, prepare your model, as described in the following sections.
{% include 'includes/date-time-include-5.md' %}
## Make Predictions tab {: #make-predictions-tab }
There are two methods for making predictions with time series models:
1. For prediction datasets that are less than 1GB, use the **Make Predictions** tab from the Leaderboard. This is the method described below.
2. For prediction datasets between 1GB and 5GB, consider deploying the model and using the [batch predictions capabilities](batch-pred#predictions-for-time-series-deployments) available from [**Deployments** > **Predictions**](batch-pred).
!!! note
Be aware that using a forecasting range with time series predictions can result in a significant increase over the original dataset size. Use the batch predictions capabilities to avoid out-of-memory errors.
The Leaderboard **Make Predictions** tab works slightly differently than with traditional modeling. The following describes, briefly, using **Make Predictions** with time series; see the full [**Make Predictions** tab details](predict) for more information.
!!! note
ARIMA model blueprints must be provided with full history when making batch predictions.
The **Make Predictions** tab provides summaries to help determine how much recent data—either time unit or rows, depending on how you configured your [feature derivation and forecast point](#set-window-values) windows—is required in the prediction dataset and to review the forecast rows and [KA](#known-in-advance) settings. Note that the list of features displayed as KA only includes those KA features that are part of the feature list used to build the current model. The [**Forecast Settings**](#forecast-settings) tab provides an overview of the prediction dataset for help in changing settings as well as access to the auto-generated [prediction file template](#prediction-file-template).
In this example, the prediction dataset needs at least 42 days of historical data and can predict (return) up to 7 rows. That is because although the model was configured for 35 days before the forecast point, seven days are added to the required history because the model uses seven-day differencing. Generally, `Historical rows = FDW size + seasonality`, where seasonality is the longest periodicity detected. Note that rows needed for _training_ are calculated as `Historical rows = FDW size + seasonality + FW size`.

The following provides an overview to making predictions with time series modeling:
1. Once you have selected a model to use for predictions, if you haven't already done so you are prompted to unlock holdout and [retrain](ts-date-time#retrain) the model. It's a good idea to complete this step so that the model uses the most recent data, but it is not required.</br>

2. Prepare and upload your prediction dataset. Either upload a [prediction-ready dataset](#create-a-prediction-ready-dataset) with the required forecast rows for predictions or let DataRobot build you a [prediction file template](#prediction-file-template).
3. Optionally, change the [forecast point](#forecast-point-and-settings)—the date to begin making predictions from—from the DataRobot default.
4. [Compute predictions](#compute-and-access-predictions).
### Create a prediction-ready dataset {: #create-a-prediction-ready-dataset }
If you choose to manually create a prediction dataset, use the provided summary to determine the number of historical rows needed. Optionally, open [Forecast Settings](#forecast-settings) to change the forecast point, making sure that the historical row requirements from your new forecast point are met in the prediction dataset. If needed, click **See an example dataset** for a visual representation of the format required for the CSV file.
The following example shows that you would leave the target and non-KA values in rows 7 through 9 (the "Forecast rows") blank; DataRobot fills in those rows with the prediction values when you compute predictions.

When your prediction dataset is in the appropriate format, click **Import data from** to select and upload it into DataRobot. Then, [compute predictions](#compute-and-access-predictions).
!!! note
While KA features can have missing values in the prediction data inside of the forecast window, that configuration may affect prediction accuracy. DataRobot surfaces a warning and also an information message beneath the affected dataset. Also, if you have missing history when picking a forecast point that is later than the default, DataRobot will still allow you to compute predictions.
### Prediction file template {: #prediction-file-template }
If your [forecast point](#forecast-point-and-settings) setting requires additional forecast rows be added to the original prediction dataset, DataRobot automatically generates a template file that appends those needed rows. Use the auto-generated prediction template as-is or [download and make modifications](#modify-the-template). To create the template, click **Import data from** to select and upload the intended dataset. DataRobot generates the template if, after the default forecast point, it does not find at least one row without a target value (an empty forecast row) that can serve as a forecast row.
For example, let's say your forecast window is `+5 ... +6` and the default forecast point is `t0`. Points `t5` and `t6` are missing, but points `t1` and `t` are present. In this case, DataRobot generates the extended file because it found no forecast rows that satisfy `t5` or `t6` after the default forecast point.
For DataRobot to generate a template, the following conditions must be met:
* There are no supported forecast rows (empty target rows that fall within the forecast window).
* The generated template file size is less than the [upload file limit](file-types#time-series-file-sizes).
#### Use the template as-is {: #use-the-template-as-is }
Use the template as-is if you do not need to modify the forecast rows or add any [KA](#known-in-advance) features. DataRobot will set the forecast point and add the full number of rows required to satisfy the project's forecast window configuration.
Use the default auto-expansion if you are using the most recent data as your forecast point, have no gaps, and want the full number of rows. In this case, you can upload the dataset and [compute predictions](#compute-and-access-predictions).
#### Modify the template {: #modify-the-template }
DataRobot generates the prediction file template as soon as you upload a prediction dataset. However, there are cases where you may want to modify that template before computing predictions:
* You have identified a column as a [KA](#known-in-advance) feature and need to enter relevant information in the forecast rows.
* You have multiple series and want to predict on fewer than every series in the dataset. (DataRobot adds the necessary number of rows for each series in the dataset.)
* Based on your settings DataRobot would have generated several additional rows but you want to predict on fewer.
To modify a template:
1. Click [**Forecast Settings**](#forecast-settings) (Forecast Point Predictions tab), expand the **Advanced options** link, and download the auto-generated prediction file template:

2. Open the template and add any required information to the new forecast rows or remove rows you don't need as they will only slow predictions.
3. Save the modified template and upload it back into DataRobot using **Import data from**.
4. Optionally, set the forecast point to something other than the default.
5. [Compute predictions](#compute-and-access-predictions).
## Forecast settings {: #forecast-settings }
DataRobot chooses a default forecast point (1) to base predictions on. The default date is a forecast point that is the most recent valid timestamp that maximizes the usage of time history within the feature derivation window. However, you can change the default to:
* A customized [forecast point](#forecast-point-predictions) that sets a specific date (forecast point) from which you want to begin making predictions.
* A [forecast range](#forecast-range-predictions) that sets a range of forecast distances within a selected date range.
Use the **Forecast Settings** modal (2) to configure a date setting other than the default setting.

!!! note
The default forecast point is either the most recent row in the dataset that contains a valid target value or, if you configured [gaps](ts-customization#understanding-gaps) during project setup, it is the row in the dataset that satisfies the feature derivation window’s history requirements. Note also that you must use the default forecast point for [fractional-second forecasts](#time-series-interval-units).
### Forecast Point Predictions {: #forecast-point-predictions }
Use _Forecast Point Predictions_ to select the specific date (forecast point) from which you want to begin making predictions. You can select any date shown since DataRobot trains models using all potential forecast points. Be sure, if you select a different forecast point, that your dataset has enough history. See the [table](#forecast-settings-definitions) for descriptions of each field.

### Forecast Range Predictions {: #forecast-range-predictions }
Use _Forecast Range Predictions_ for making predictions on all forecast distances within the selected date range. This option provides bulk predictions on an external dataset, including all forecast distance predictions for all rows in the dataset. Use the results for validating the model, not for making future predictions.
!!! note
When using range predictions, DataRobot _includes_ the prediction start date and _excludes_ the prediction end date. In other words, the last date in the range is not a forecast point in the prediction output.
Forecast Range Predictions are helpful for validating model accuracy. DataRobot extracts the actual values for all points in time from the dataset. Set the prediction start and end dates to define the historical range of time for which you want bulk predictions. Because this model evaluation process uses actual values, DataRobot only generates predictions for timestamps that can support predictions for every forecast distance. See the [table](#forecast-settings-definitions) for descriptions of each field.
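To illustrate the inclusive-start, exclusive-end convention noted above, the following generic pandas snippet (not a DataRobot API call) generates the forecast points for a one-week range:

```python
import pandas as pd

# The prediction start date is included; the prediction end date is excluded.
forecast_points = pd.date_range("2024-06-01", "2024-06-08", freq="D", inclusive="left")
print(list(forecast_points.strftime("%Y-%m-%d")))
# ['2024-06-01', '2024-06-02', ..., '2024-06-07']  (2024-06-08 is not a forecast point)
```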

### Forecast settings definitions {: #forecast-settings-definitions }
The following table describes the forecast point and forecast range configuration fields:
| | Element | Description |
| --- | ------------- | ---------------- |
| | Prediction type selector | Selects either forecast point (specific start date) or forecast range ([bulk predictions](#forecast-range-predictions)).|
| | Advanced options | Expands to download the [prediction file template](#prediction-file-template) (if created).|
| | Row summary <br /> (forecast point)| Provides the same summary information as that on the **Make Predictions** tab. Colors correspond to the visualization above (6), showing the historical and forecast rows set during original project creation. |
| | Row summary <br /> (forecast range) | A legend indicating the meaning of the line (5) above. |
| | Valid forecast options | Indicates, in the context of the date span for the entire dataset (5), the range of dates that are valid forecast settings (dates that will produce valid predictions). While the dotted colored bar above the full range indicates possible valid options, dates within the yellow range are those that extend beyond DataRobot's suggested settings because they have missing history or KA features. Also, if there are gaps inside this range, the predictions may still fail (due to insufficient time history or no forecast row).|
| | Dataset start and end | Within the context of the full range of dates (historical rows) found in the dataset, indicates the range of points you are choosing to forecast. In cases where DataRobot created a [prediction file template](#prediction-file-template), the dataset end date and template file end date are both represented. If the dataset end and max forecast distance are the same, the display does not show the dataset end. <br /> For forecast point settings, the historical and forecast rows summarized above (3) are also overlaid on the span. The overlay moves as the forecast point setting changes.|
| | Historical and forecast zoom | A zoomed view of the relevant historical rows and forecast rows, intended to simplify selecting a forecast point. As you move the sliders or set a calendar date, the date line above (5), reflects the change. |
| | Date selector | A calendar picker for setting the forecast point or forecast range (start and end dates). Invalid dates—those not indicated in the valid forecast range (4)—are disabled in the calendar. |
| | Compute Predictions | Initiates prediction computation (same as **Compute Predictions** on the **Make Predictions** page). Or, save the settings and close the modal without computing predictions. New settings are reflected on the **Make Predictions** page, and clicking **Compute Predictions** from there at any future time will use these settings. Alternatively, click the **X** to close without saving changes.|
### Understand dates in forecast settings {: #understand-dates-in-forecast-settings }
When you upload a prediction dataset, DataRobot detects the range of dates (the <em>valid forecast range</em>) available for use as the forecast point. It also determines a default forecast point, which is the latest timestamp available for making predictions with full history.
The following timestamps are marked in the visualization:
* <em>Data start</em> is the timestamp of the first row detected in the dataset.
* <em>Data end</em> is the timestamp of the last row detected in the dataset, whether it is the original or the auto-generated template.
* <em>Max forecast distance</em> is the timestamp of the last possible forecast distance in the dataset.
Before modifying the forecast point, review the basic [time series modeling framework](ts-framework).

Some things to consider:
- What is the most recent valid forecast point? The most recent valid forecast point is the maximum forecast point that can be used to run predictions without error. It may differ from the default forecast point because the default forecast point takes the time history usage into consideration.
- Based on the forecast window, what is the timestamp of the last prediction that was output? The forecast window is defined relative to the forecast point; the last prediction timestamp is a function of both the forecast window and the timestamp inside the prediction dataset.
For example, consider a forecast window from 1 to 7 days. The forecast point is 2001-01-01, but the max date in the dataset is 2001-01-05. In this case, the max forecast timestamp is 2001-01-05 as there are no rows from 2001-01-06 to 2001-01-08 (see the sketch following this list).
- Consider the length of your forecast window. That is, after the final row with actual values, do you have at least one forecast row (within the boundaries of the forecast window)? If you do, DataRobot will not generate a template; if you do not, DataRobot will generate forecast rows based on the project configuration.
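As a rough illustration of the example above, the sketch below (plain Python with the hypothetical dates above, not a DataRobot API call) computes the last prediction timestamp as the earlier of the forecast window end and the dataset end:

```python
from datetime import date, timedelta

# Hypothetical values from the example above.
forecast_point = date(2001, 1, 1)
data_end = date(2001, 1, 5)     # last timestamp present in the prediction dataset
fw_start, fw_end = 1, 7         # forecast window of 1 to 7 days

# The window alone would allow predictions through forecast_point + 7 days...
window_end = forecast_point + timedelta(days=fw_end)
# ...but predictions can only be produced for rows that exist in the dataset.
max_forecast_timestamp = min(window_end, data_end)

print(window_end)               # 2001-01-08
print(max_forecast_timestamp)   # 2001-01-05
```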
Use the **Forecast settings** modal to get an overview of the prediction dataset, which aids in choosing settings like the forecast point and prediction start and end dates. In addition, DataRobot generates forecast rows after the final row with actual values (if there are no forecast rows based on the default forecast point), simplifying the prediction workflow. The actual values are the data taken from the last row of each and every series ID and duplicated to the forecast rows.
??? tip "Time series prediction dataset validation"
DataRobot validates a time series prediction dataset once it is uploaded, checking whether there are sufficient historical rows to produce the engineered features required by the project.
If seasonality is detected in the project, additional historical rows—longer than the feature derivation window (FDW)—are required. For example, a project with an FDW of [-14, 0] and 7-day seasonality will require 21 historical days in the prediction dataset to accommodate target differenced features (such as `target (7 day diff) (mean)`) and differencing features (such as `target (14 day max) (diff 7 day mean)`). If multiple seasonalities are detected, the longest seasonality is used to perform the validation check.
DataRobot does not require the presence of all historical rows when computing window statistics features (for example, `target (7 day mean)` or `feature (14 day max)`). Depending on the FDW settings, DataRobot predetermines the minimum required historical rows for predictions. If there are too many missing historical rows in the prediction dataset, predictions will error.
If a multiplicative trend is detected, DataRobot requires all historical target values in the prediction dataset to be strictly positive (> 0). Zero or negative target values violate the model's multiplicative assumption and cause the prediction to error. To correct this, check whether the training dataset is representative of the use case at prediction time, or disable the advanced option [**Treat as exponential trend**](ts-adv-opt#treat-as-exponential-trend) and recreate the project.
### Compute and access predictions {: #compute-and-access-predictions }
When the forecast point is set and the dataset is in the correct format and successfully uploaded, it's time to compute predictions.
1. There are two methods for computing predictions. Click either:
* the **Compute Predictions** button on the **Forecast Settings** modal.
* the **Compute Predictions** link (next to the **Forecast Settings** link) on the **Make Predictions** page.
2. When processing completes, [preview](#prediction-preview) the historical data and predictions from the dataset or download a CSV of your predictions. Click **Download** to access predictions:

!!! note
Notes on prediction output:
<br>• Depending on your permissions, you may see the column, "Original Format Timestamp". This provides the same values provided by the "Timestamp" column but uses the timestamp format from the original prediction dataset. Your administrator can enable this permission for you.
<br> • When working with downloaded predictions, be aware that in time series projects, <code>row_id</code> does not represent the row position from the original project data (for training predictions) or uploaded prediction data for a given timestamp and/or <code>series_id</code>. Instead it is a derived value specific to the project.
With some spreadsheet software you could go on to graph your prediction output. For example, the sample data shows predicted sales for the next day through the next 7 days, which can then be acted on for inventory and staffing decisions.

### Prediction preview {: #prediction-preview }
After you have computed predictions, click the **Preview** link to display a plot of the predictions over time, in the context of the historical data. This plot shows the prediction for each [forecast distance](#detailed-workflow) at once, relative to a single forecast point.

By default, the prediction interval (shaded in blue) represents the area in which 80% of predictions fall. The intervals estimate the range of values DataRobot expects actual values of the target to fall within. They are similar to a prediction's confidence interval, but are instead based on the residual errors measured during the model's backtesting.
The chart displays an estimated prediction interval when the following criteria are met:
* All backtests must be trained. In this way, DataRobot can use all available validation rows and prevent different interval values based on the available information.
* There must be at least 10 data points per forecast distance value.
If the above criteria are not met, DataRobot displays only the prediction values (orange points).
You can set a prediction interval size, which specifies the desired probability of actual values falling within the interval range. Larger values are less precise, but more conservative. For example, the default value of 80% results in a lower bound of 10% and an upper bound of 90%. To change the prediction interval, click the **Options** link and DataRobot recalculates the display:

!!! note
You can also set the prediction interval when [making predictions](deploy-model).
Prediction intervals are estimated from the quantiles of the out-of-sample residuals and, as a result, may not be symmetrical. DataRobot calculates intervals independently per series (if applicable) and per forecast distance, so intervals may widen with distance and/or have a range specific to each series. If you predict on a new series, or a series with no overlap with validation, DataRobot uses the average across all series.
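Conceptually, interval bounds like these come from quantiles of out-of-sample residuals. The sketch below is a generic illustration of that idea with made-up residuals and NumPy; it is not DataRobot's internal implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical out-of-sample residuals (actual - predicted) collected during
# backtesting, grouped by forecast distance (illustrative values only).
residuals_by_distance = {1: rng.normal(0, 1.0, 200),
                         2: rng.normal(0, 1.5, 200),
                         3: rng.normal(0, 2.0, 200)}

interval_size = 0.80                      # 80% interval -> 10th and 90th percentiles
lo_q, hi_q = (1 - interval_size) / 2, 1 - (1 - interval_size) / 2

point_forecast = {1: 50.0, 2: 51.5, 3: 53.0}   # predictions per forecast distance

for distance, resid in residuals_by_distance.items():
    lower = point_forecast[distance] + np.quantile(resid, lo_q)
    upper = point_forecast[distance] + np.quantile(resid, hi_q)
    print(f"distance {distance}: [{lower:.2f}, {upper:.2f}]")
```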
Hover over a point in the preview graph, left of the forecast point, to display the value from the historical data:

Or to the right of the forecast point to view the forecast (prediction):

When used with multiseries modeling, you have an option to select which series to preview. This overview indicates how the target, feature, or accuracy changes over time for an individual series and provides a forecast for that series. From the dropdown, select a series. Or, page through the series options using the left and right arrows. By comparing the prediction intervals for each series, you can better identify the series that provide the most accurate predictions.

Note that you can also download predictions from within the preview plot.
|
ts-predictions
|
---
title: Nowcasting
description: Describes making predictions for the present and very near future (very short-range forecasting).
---
# Nowcasting {: #nowcasting }
Nowcasting is a method of time series modeling that predicts the current value of a target based on past and present data. Technically, it is a forecast window in which the start and end times are 0 (now). Nowcasting builds an explanatory model that can describe latent present conditions and factors that contribute to a particular on-going behavior. In other words, based on the current input values and recent history, what is the target *right now*? For example, in an anomaly detection project you may want to answer the question, "is the observation I see right now an anomaly?"
*Forecasting*, by contrast, is the practice of predicting future values based on past and present data. In other words, by using information at a given time, you can predict values in a future row. Target values from later rows are aligned with feature values from past rows.
Some sample uses for nowcasting:
* Manufacturing and financial markets perform “fair value” modeling. For example, the Federal Reserve builds and publishes nowcasts that drive short-to-medium term policies and market changes.
* Explain natural gas prices under various conditions.
* Estimate an economic indicator before it is reported.
* Understand the drivers of something when time dynamics matter.
* Forecast a fixed point from multiple past points (for example, daily EOM-only forecast).
* Generate more timely current condition estimates when data readings or traditional computations are delayed (useful in weather modeling).
Any time series forecasting models that follow the [forecast distance framework](#nowcasting-framework) can be used for nowcasting. The standard time series insights also apply to nowcasting. Feature Impact is especially useful for nowcasting as it can provide good insight into the important features that may explain observed current values. See the [sections below](#more-info) for more information.
## Use nowcasting {: #use-nowcasting }
The nowcasting workflow follows the same steps as the forecasting method documented in the [time series workflow overview](ts-flow-overview) (set target, date/time feature, and, as applicable, series ID). The following describes the process once you have selected nowcasting.

### FDW settings {: #fdw-settings }
Nowcasting applies forecast window (FW) settings of [0, 0] for the forecast start and end times. Additionally, the Feature Derivation Window (FDW) end is set at a single time step prior to the current time step, allowing DataRobot to derive additional features for the target, such as rolling statistics (lags), without risking target leakage.

Notice that in contrast to the forecasting image representation that illustrates the past and future date range selected, nowcasting shows only the past rolling window.
### Features known in advance {: #features-known-in-advance }
By default, DataRobot marks all covariate (non-target) features as [known in advance](ts-adv-opt#set-known-in-advance-ka) (KA). This helps build better models because it provides guardrails for accuracy. Most commonly, all values in a nowcasting project are known in advance (because you are predicting "now", not "tomorrow"). It would be an exception to the rule to have data that you don't know now.
DataRobot reports the number of features that are KA:

Click the pencil icon () to open the **Time Series > Add features as known in advance** advanced option and adjust the list of features.

### Derive features from target {: #derive-features-from-target }
With nowcasting, DataRobot derives features from the target by default, enabling automatic time-based feature engineering. While excluding features from derivation is a technique used with covariates to prevent the risk of target leakage, disabling target-derived features in nowcasting potentially limits the performance and selection of available blueprints. Additionally, for multiseries projects, target derivation supports [calculating features from other series](ts-adv-opt#enable-cross-series-feature-generation).

When checked, features are derived from the target. This setting is linked to the **Time Series > Exclude features from derivation** advanced option setting. If you deselect the box, the target feature name is added to the feature list in that field:

## More info... {: #more-info }
The following sections provide background information to help understand the application of nowcasting.
### Addition of target-derived features {: #addition-of-target-derived-features }
The nowcasting capability, in comparison to simply setting the FW start and end times to [0, 0], provides a variety of benefits:
* More feature lists created.
* More blueprints available for selection.
* Large increase in derived features.
* Target-derived features are available.

Additionally, with [cross series enabled](ts-adv-opt#enable-cross-series-feature-generation), the set of derived features is richer still, and cross-series blueprints become available.
### How nowcasting works {: #how-nowcasting-works }
When nowcasting is selected as the time-aware modeling method and EDA2 begins, DataRobot automatically marks non-target features (covariates) as [known in advance](ts-adv-opt#set-known-in-advance-ka) (KA). This allows real-time features (for example, `latest transaction volume of stock`) to predict the target (`latest known price index`).
You can choose your desired FDW setting and mark covariates as known in advance.
DataRobot provides guardrails to prevent the automatically derived features from resulting in target leakage. Specifically, target-derived feature lags are inferred from the FDW end, which is prior to the present point in time. This ensures that the most recent derived rolling statistics do not result in target leakage (otherwise, trained models would suggest unrealistic performance).
### Nowcasting framework {: #nowcasting-framework }
The standard time series framework (forecasting) is described in the [time series section](ts-framework) and, once gaps inherent to time series problems are added, can be illustrated like this:

With nowcasting, that illustration changes a bit:

Given a time series dataset:
Time | Inputs | Target
---- | ------ | ------
2009 | 1.23 | 9.9
2010 | 1.41 | 10.0
2011 | 2.09 | 9.82
2012 | 1.31 | 7.99
2013 | 0.31 | 8.54
2014 | 3.09 | 7.42
2015 | 4.12 | 4.01
2016 | 5.91 | 6.73
For *forecasting*, DataRobot creates derived time series features and a forecast target as follows:
Time | Forecast point | Distance | Target
---- | ---------------| -------- | ------
2010 | 2009 | +1 year | 10.0
2011 | 2009 | +2 year | 9.82
-- | | -- | --
2011 | 2010 | +1 year | 9.82
2012 | 2010 | +2 year | 7.99
-- | | -- | --
2012 | 2011 | +1 year | 7.99
2013 | 2011 | +2 year | 8.54
For *nowcasting*, DataRobot creates derived time series features and a forecast target as follows:
Time | Forecast point | Distance | Target
---- | ---------------| -------- | ------
2009 | 2009 | +0 year | 9.9
2010 | 2010 | +0 year | 10.0
2011 | 2011 | +0 year | 9.82
2012 | 2012 | +0 year | 7.99
2013 | 2013 | +0 year | 8.54
2014 | 2014 | +0 year | 7.42
2015 | 2015 | +0 year | 4.01
2016 | 2016 | +0 year | 6.73
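To make the contrast concrete, the following sketch (plain pandas on the hypothetical table above, not DataRobot's feature derivation code) builds the forecast point, distance, and target rows for both framings:

```python
import pandas as pd

df = pd.DataFrame({"time":   [2009, 2010, 2011, 2012, 2013, 2014, 2015, 2016],
                   "target": [9.9, 10.0, 9.82, 7.99, 8.54, 7.42, 4.01, 6.73]})

# Forecasting: each forecast point predicts targets 1 and 2 years ahead.
forecasting = [{"time": fp + d, "forecast_point": fp, "distance": f"+{d} year",
                "target": df.loc[df["time"] == fp + d, "target"].iloc[0]}
               for fp in df["time"] for d in (1, 2) if fp + d <= df["time"].max()]

# Nowcasting: forecast window of [0, 0], so each row predicts its own timestamp.
nowcasting = [{"time": t, "forecast_point": t, "distance": "+0 year", "target": y}
              for t, y in zip(df["time"], df["target"])]

print(pd.DataFrame(forecasting).head())
print(pd.DataFrame(nowcasting).head())
```

In the nowcasting case every row is its own forecast point, which is why covariates can typically be treated as known in advance.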
|
nowcasting
|
---
title: Time-series modeling
description: This topic introduces components of time-aware modeling, a recommended practice for data science problems where conditions may change over time.
---
# Time-series modeling {: #time-series-modeling }
!!! info "Availability information"
Time series modeling is not currently available for DataRobot Self Service users.
Time-series modeling is a recommended practice for data science problems where conditions may change over time. With this method, the validation set is made up of observations from a time window outside of (and more recent than) the time window used for model training. Time-aware modeling can make predictions on a single row, or, with its core time series functionality, can extract patterns from recent history and forecast multiple events into the future.
Topic | Describes...
----- | ------
[What is time-based modeling?](whatis-time) | Learn about the basic modeling process and a recommended reading path.
[Time series workflow overview](ts-flow-overview) | View the workflow for creating a time series project.
[Time series insights](ts-leaderboard) | Explore visualizations available to help interpret your data and models.
[Time series predictions](ts-predictions) | Make predictions with time series models.
[Multiseries modeling](multiseries) | Model with datasets that contain multiple time series.
[Creating clusters](ts-clustering) | Allow DataRobot to identify natural segments (similar series) for further exploring your data.
[Segmented modeling](ts-segmented) | Group series into user-defined segments, creating multiple projects for each segment, and producing a single Combined Model for the data.
[Nowcasting](nowcasting) | Make predictions for the present and very near future (very short-range forecasting).
[Enable external prediction comparison](cyob) | Compare model predictions built outside of DataRobot against DataRobot predictions.
[Advanced time series modeling](ts-adv-modeling/index) | Modify partitions, set advanced options, and understand window settings.
[Time series modeling data](ts-modeling-data/index) | Work with the time series modeling dataset:<br /><ul><li>Creating the modeling dataset</li><li>Using the data prep tool</li><li>Restoring pruned features</li></ul>
[Time series reference](ts-reference/index) | Learn to customize time series projects and view a variety of deep-dive reference material for DataRobot time series modeling, as well as time series considerations.
|
index
|
---
title: Time series insights
description: Describes the visualizations available to help interpret your data and models.
---
# Time series insights {: #time-series-insights}
This section describes the visualizations available to help interpret your data and models, both [prior to modeling](#prior-to-modeling) and [once models are built](#investigate-models).
## Prior to modeling {: #prior-to-modeling }
The following insights, with availability dependent on [modeling stage](eda-explained), are available to help understand your data.
* EDA1: [**Over Time**](#understand-a-features-over-time-chart) chart
* EDA2: [Feature Lineage](#feature-lineage-tab) graph
### Understand a feature's Over Time chart {: #understand-a-features-over-time-chart }
{% include 'includes/date-time-include-4.md' %}
## Feature Lineage tab {: #feature-lineage-tab }
To enhance understanding of the results displayed in the log, use the **Feature Lineage** tab for a visual "description" that illustrates each action taken (the lineage) to generate a derived feature. It can be difficult to understand how a feature that was not present in the original, uploaded dataset was created. **Feature Lineage** makes it easy to identify not only which features were derived but the steps that went into the end result.
From the **Data** page, click **Feature Lineage** to see each action taken to generate the derived feature, represented as a connectivity graph showing the relationship between variables (directed acyclic graph).

For more complex derivations, for example those with differencing, the graph illustrates how the difference was calculated:

Elements of the visualization represent the lineage. Click a cell in the graph to see the previous cells that are related to the selected cell's generation—parent actions are to the left of the element you click. Click once on a feature to show its parent feature, click again to return to the full display.
The graph uses the following elements:
Element | Description
--------|---------------------
ORIGINAL | Feature from the original dataset.
TIME SERIES | Actions (preprocessing steps) in the feature derivation process. Each action is represented in the final feature name.
RESULT | Final generated feature.
Info () | Dynamically-generated information about the element (on hover).
Clock () | Indicator that the feature is time-aware (i.e., derived using a time index such as `min value over 6 to 0 months` or `2nd lag`).
{% include 'includes/date-time-include-2.md' %}
{% include 'includes/date-time-include-3.md' %}
## Investigate models {: #investigate-models }
The following tabs are available from the **Leaderboard** to help with model evaluation:
Tab | Availability
----|-------------
[Accuracy Over Time](aot) | OTV; additional options for time series and multiseries |
[Forecast vs Actual](fore-act) | Time series, multiseries
[Series Insights](series-insights-multi) | Multiseries
[Stability](stability) | OTV, time series, multiseries
[Forecasting Accuracy](forecast-acc) | Time series, multiseries
[Anomaly Over Time](anom-viz) | [Anomaly detection](anomaly-detection): OTV, time series, multiseries|
[Anomaly Assessment](anom-viz#anomaly-assessment-chart) | Anomaly detection: time series, multiseries
[Segmentation](ts-segmented#segmentation-tab) | Time series/segmented modeling
|
ts-leaderboard
|
---
title: What is time-aware modeling?
description: Use OTV when your data is time-relevant but you are not forecasting; use time series when you want to forecast multiple future values; use nowcasting to determine an unknown current value of a time series.
---
# What is time-aware modeling? {: #what-is-time-aware-modeling }
!!! note "Availability information"
Contact your DataRobot representative for information on enabling time series modeling.
DataRobot offers two mechanisms for time-aware modeling—time series and OTV—both of which are implemented using [date/time partitioning](ts-date-time):
* Use the following types of _time series_ modeling when you want to:
* [Time series](ts-flow-overview): Forecast multiple future values of the target—"What will sales be like next week, Monday through Friday?"
* [Multiseries](multiseries): Model datasets that contain multiple time series based on a common set of input features.
* [Segmented](ts-segmented): Group your series into segments to improve demand forecasting—"What will sales of avocados look like in the northeast in January?"
* ["Nowcast"](nowcasting): Have an unknown current value of a time series—"What is this month's inflation rate based on recent history?"

* Use _out-of-time validation (OTV)_ when your data is time-relevant but you are not [forecasting](glossary/index) (instead, you are predicting the target value on each individual row). "How do I interpret this housing data?" This type of time-aware modeling is described in the [OTV specialized workflow](otv) section.
!!! note
See below for more specific information on [reasons to use time-aware modeling](#why-use-it) and how to put it in context with [supervised learning](#supervised-learning-models). Follow the [suggested reading path](time/index) to help locate the documentation appropriate to your understanding and requirements.
## Why use it? {: #why-use-it }
People frequently use time-aware models to predict future events while training those models on past data. A major difference between time-aware and conventional modeling is in how validation data—used to judge performance—is selected. For conventional modeling it is common practice to select rows from the dataset for [validation](data-partitioning), without regard to their time period. This practice is modified for time-aware modeling, to prevent validation scores that are overly optimistic and misleading (and potentially lead to damaging conclusions and actions). Time-aware modeling does not assume that the relationship between predictors and the target is constant over time.
A simple example: Let’s say you want to forecast housing prices. You have a variety of data about each house in your dataset and plan to use that data to predict the sales price. You will build a model using some of the data and make predictions using other parts of the data. The problem is, randomly selecting sale prices from your dataset suggests you are randomly selecting across time as well. In other words, the resulting model doesn't predict the future from the past. Using time-aware modeling, you can train and test models using time-based folds, which assures that your models are always validated on future house price data (the purpose of your forecast). It isn’t necessary to use the most recent data to make predictions—only to use data that is more recent than the data used for model training—to ensure that model predictions about the future hold up.
With time-aware modeling, you think of data in terms of time. When determining how much data you need to build an accurate model, the answer, for example, is in days or months or most recent x number of rows. “How long of a data history will I need and how much will my model improve with more time?” DataRobot partitions the data so that it can evaluate models with an awareness of the data’s time component, providing:
* Improved performance through better model selection
* More accurate validation scores
* Improved support for date variables as predictors
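As a minimal sketch of the time-based split described above (plain pandas on a hypothetical sales dataset, independent of DataRobot), every validation row is kept strictly later than the data used for training:

```python
import pandas as pd

# Hypothetical sales dataset with a date column.
df = pd.DataFrame({"sale_date": pd.date_range("2015-01-01", periods=365, freq="D"),
                   "price": range(365)})

df = df.sort_values("sale_date")
cutoff = pd.Timestamp("2015-10-01")

train = df[df["sale_date"] < cutoff]          # model is trained only on the past
validation = df[df["sale_date"] >= cutoff]    # scored only on strictly later data

print(len(train), len(validation))
```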
## Time series overview {: #time-series-overview }
When working with _time series_ data, ask yourself: How long do I want to look into the past and how far into the future do I want to predict? Once you determine those answers, you can configure DataRobot so that your time-sensitive data uses advanced DataRobot modeling techniques to create forecasts from your data. (See also the section on [why to use time series modeling](whatis-time#why-use-it).)
DataRobot automatically creates and selects time series features in the modeling data. You can constrain the features (for example, minimum and maximum lags, etc.) by configuring the time series framework on the **Start** screen. Based on your settings and the analysis of the raw dataset, DataRobot derives new features and creates a modeling dataset. Because time shifts, lags, and features have already been applied, DataRobot can use general machine learning algorithms to build models with the new modeling dataset.
## Supervised learning models {: #supervised-learning-models }
In conventional supervised learning, you work with raw training data—with <em>labels</em> or <em>features</em>. DataRobot trains models to predict a specified target based on those features. DataRobot creates a model, tunes it, and then tests it on unseen (out-of-sample) data. That test results in a validation score which can be considered a measure of confidence in how ready the model is for deployment. Once deployed, you can score new data with the model. Feed the new data into DataRobot, where the application extracts features from the data and feeds them into the model. The model then makes predictions on those features to provide information about the target.

When DataRobot trains a model, it makes some decisions based on the training data. By making assumptions about the function or the data, for example, DataRobot can estimate parameter values based on those assumptions. Different modeling approaches make different assumptions. DataRobot's large repository of available models exercises many different functions (aspects), allowing you to pick the model type that best suits the data.
### Supervised learning in time-aware mode {: #supervised-learning-in-time-aware-mode }
Supervised learning assumes that training examples are independent and identically distributed (IID). That kind of modeling makes predictions based on each row of the dataset, without taking the neighboring rows into account. The assumption is that training samples are independent of each other. Another problematic assumption of supervised learning is that the data you train on and future data will have the same distribution.
With time-dependent data, the traditional machine-learning assumptions don't work. Consider Google search trends for the term "DataRobot" in the period of July through November, 2017. The search interest is fairly uniform:

If you check the same search trend across the life of DataRobot, you can see that the time series behaves very differently toward the more recent dates. If you trained a model on the earlier data, say 2013-2016, the model would be ineffective because the more recent data does not follow the same distribution.

|
whatis-time
|
---
title: Modeling reference
description: Introduces sections that provide a deep dive into aspects of DataRobot functionality, including Data and sharing, Modeling details, Eureqa advanced tuning.
---
# Modeling reference {: #modeling-reference }
The topics in this section provide a deep dive into aspects of DataRobot functionality.
Topic | Describes...
----- | ------
[Data and sharing](data-sharing/index) | Dataset requirements, sharing assets, and permissions.
[Modeling details](model-detail/index) | Understand the stages of the model building process as well as SHAP details.
[Eureqa advanced tuning](eureqa-ref/index) | Modify building blocks, customize the target expression, and modify other model parameters for Eureqa models.
|
index
|
---
title: Image Augmentation
description: Describes the settings available from the Image Augmentation advanced option tab, where you can create new training images by randomly transforming existing images, thereby increasing the size of the training data.
---
# Image Augmentation {: #image-augmentation }
Train-time image augmentation creates new training images by randomly transforming existing images, thereby increasing the size of (i.e., “augmenting”) the training data. This allows projects to be built with datasets that might otherwise be too small. In addition, all image projects that use augmentation have the potential for smaller overall loss by improving the generalization of models on unseen data.
{% include 'includes/image-augmentation-include.md' %}
## Set transformations prior to modeling {: #set-transformations-prior-to-modeling }
After selecting your target, toggle on the **Image Augmentation** tab in **Advanced options**.

From there, begin selecting transformation settings, described [here](ttia-lists#augmentation-lists). These settings will be applied to all models when running Autopilot or using the Repository.

You can continue to modify settings, clicking **Preview augmentation** to view a sample of results:

The settings you choose are automatically saved as a list named **Initial Augmentation List**. If you do not set transformations through **Advanced options**, you can later create augmentation lists using the [**Advanced Tuning**](adv-tuning) tab.
|
ttia
|
---
title: Clustering advanced options
description: Allows you to set the number of clusters that DataRobot automatically discovers during time series clustering.
---
# Clustering advanced options {: #clustering-advanced-options }
{% include 'includes/ts-cluster-adv-opt-include.md' %}
|
time-series-cluster-adv-opt
|
---
title: Feature Constraints
description: Describes the settings available from the Feature Constraints advanced option tab, where you can control the influence, both up and down, between variables and the target.
---
# Feature Constraints {: #feature-constraints }
The **Feature Constraints** tab provides the tools described in the table below for applying constraints. Once you have completed the desired fields and built models, use the [**Describe > Constraints**](monotonic) Leaderboard tab to evaluate results of models trained with constraints.
Constraint | Description
-------------|----------------
[Monotonicity](#monotonicity) | Controls the influence, both up and down, between variables and the target.
[Aggregate target class](#aggregate-target-classes) | Configures how classes are aggregated for multiclass modeling.
[Trim target labels](#trim-target-labels) | Configures how labels are trimmed for multilabel modeling.
[Pairwise Interactions](#pairwise-interactions) | Controls which pairwise interactions are included in Generalized Additive Model (GA2M) output. (_Not available for unsupervised or time series projects_.)
## Monotonicity {: #monotonicity }
[Monotonicity](monotonic) forces a directional relationship between features and the target. Use this tab to apply monotonic [feature lists](monotonic#create-feature-lists), choose models, and make a positive class assignment.

### Monotonic Increasing / Decreasing {: #monotonic-increasing-decreasing }
**Monotonic Increasing** and **Monotonic Decreasing** set the feature lists you want DataRobot to apply to ensure monotonic modeling. From the dropdowns, select a list to use to enforce monotonically increasing and/or decreasing constraints. Remember that these apply on top of the selected model building feature list.
Note that this option is only available if the project has a numeric-only feature list defined. You can [create an appropriate feature list](feature-lists#create-feature-lists-from-the-data-page) prior to modeling. You can also create feature lists and retrain models, from the Leaderboard menu, after the initial model run.
The following options are available:
Feature list | Description
------------------|----------------
No Constraints | When this option is selected, DataRobot applies no monotonic constraints during training.
[Raw Features](feature-lists#automatically-created-feature-lists) | This option is only available when all features in the Raw Features list are of the type numeric, percentage, length, and/or currency.
[Informative Features](feature-lists#automatically-created-feature-lists) | This option is only available when all features in the Informative Features list are of the type numeric, percentage, length, and/or currency.
[User-defined feature list(s)](monotonic#create-feature-lists) | All lists that you defined for the project that meet the variable type requirement.
### Include only monotonic models {: #include-only-monotonic-models }
When selected, DataRobot will only build (via Autopilot) or make available via the Repository those models that support monotonic constraints. Additionally, Autopilot only creates the [AVG Blender](leaderboard-ref#blender-models).
### Positive Class Assignment {: #positive-class-assignment }
The **Positive Class Assignment** option is available for binary classification projects only. It sets the class to use when a prediction scores higher than the classification threshold. When applying monotonic constraints, DataRobot applies the constraint between the value of the predictor and the probability of positive class.
By default, DataRobot sorts alphanumerically and then assigns the second value as the positive class. For example, if you load a dataset with a target of {`1`,`0`} or {`Yes`, `No`}, or {`True`, `False`}, the positive class in each case, after alphanumeric sorting, is `1`, `Yes`, and `True`, respectively.
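As a simple illustration of that default (not DataRobot's actual code), sorting the labels as strings and taking the second value reproduces the behavior:

```python
# Illustration of the default positive-class choice described above:
# sort the class labels alphanumerically, then take the second value.
for labels in (("1", "0"), ("Yes", "No"), ("True", "False")):
    positive_class = sorted(labels)[1]
    print(labels, "->", positive_class)
# ('1', '0') -> 1
# ('Yes', 'No') -> Yes
# ('True', 'False') -> True
```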
## Aggregate / trim {: #aggregate-trim }
You can configure how DataRobot manages support for projects with many classes or labels. Use the appropriate toggle, which will appear depending on your project type:
* **Aggregate target classes** manages classes for [multiclass](multiclass) projects.
* **Trim target labels** manages labels for [multilabel](multilabel) projects.
### Aggregate target classes {: #aggregate-target-classes }
To configure aggregation settings, scroll to **Aggregate target classes**.

The following table describes each field:
| | Element | Description |
|---|---|---|
|  | Aggregate target classes | Enables the aggregation functionality. When more than 1,000 classes are detected, the selection is on and cannot be changed. If fewer than 1,000 classes, the selection is off by default but can be enabled.|
|  | Name of aggregation class | Sets the name of the "other" bin—the bin containing all classes that do not fall within the configuration set for this aggregation plan. It represents all the rows for the excluded values in the dataset. The provided name must differ from all existing target values in the column.|
|  | Min frequency for non-aggregate classes | Sets the minimum occurrence of rows belonging to a class that is required to avoid being bucketed in the "other" bin. That is, classes with fewer instances will be collapsed into a class. |
|  | Max number of non-aggregate classes | Sets the final number of classes after aggregation. The last class being the "other" bin. (For example, if you enter 900, there will be 899 class bins from your data and 1 "other" bin of aggregated classes.) Enter a value between 3-1,000.
|  | Classes to be excluded from aggregation | Identifies a comma-separated list of classes that will be preserved from aggregation, ensuring the ability to predict on less frequent classes that are of interest.|
#### Aggregation example {: #aggregation-example }
A dataset has the following parameters of the target column, with 8 unique values (classes).
Class | Row count
----- | ---------
A | 1024
B | 512
C | 256
D | 128
E | 64
F | 32
G | 16
H | 8
The parameters are set as follows:
Parameter | Value
--------- | -----
Name of aggregation class | DR_RARE_TARGET_VALUES
Min frequency for non-aggregate classes | 50
Max number of non-aggregate classes | 5
Classes to be excluded from aggregation | [E, H]
Class mapping initially applies the minimum frequency requirement, as follows:
Class | Row count | Impact
----- | --------- | --------
A | 1024 | None, above minimum frequency
B | 512 | None, above minimum frequency
C | 256 | None, above minimum frequency
D | 128 | None, above minimum frequency
E | 64 | None, above minimum frequency
Other bin | 48 | Combined rows of F and G above; did not meet minimum frequency
H | 8 | Excluded from aggregation
So far, the class mapping has resulted in 7 unique values (F and G dropped and replaced with an aggregated class). The "Max number of classes" parameter sets the maximum to five, requiring two more "drops." DataRobot will next drop the least frequent that are not excluded from aggregation (E and H are excluded) and so drops C and D. As a result, the final target class values distribution will be:
* Classes A and B are most frequent.
* Classes E and H are excluded from aggregation.
* Classes C, D, F, and G are aggregated into a single class, DR_RARE_TARGET_VALUES.
The final five classes for modeling are:
Class | Row count
----- | ---------
A | 1024
B | 512
Aggregated classes (C, D, Other) | 256 + 128 + 48
E | 64
H | 8
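The sketch below (plain Python over the hypothetical counts above, not DataRobot's implementation) walks through the same logic: apply the minimum frequency, then collapse the least frequent non-excluded classes until the maximum class count is satisfied:

```python
counts = {"A": 1024, "B": 512, "C": 256, "D": 128,
          "E": 64, "F": 32, "G": 16, "H": 8}
other_name = "DR_RARE_TARGET_VALUES"
min_freq, max_classes, excluded = 50, 5, {"E", "H"}

# Step 1: classes below the minimum frequency (and not excluded) go to the other bin.
aggregated = {c for c, n in counts.items() if n < min_freq and c not in excluded}

def remaining():
    return [c for c in counts if c not in aggregated]

# Step 2: while too many classes remain, move the least frequent non-excluded class.
while len(remaining()) + 1 > max_classes:   # +1 accounts for the other bin itself
    candidates = [c for c in remaining() if c not in excluded]
    aggregated.add(min(candidates, key=counts.get))

final = {c: counts[c] for c in remaining()}
final[other_name] = sum(counts[c] for c in aggregated)
print(final)
# {'A': 1024, 'B': 512, 'E': 64, 'H': 8, 'DR_RARE_TARGET_VALUES': 432}
```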
??? tip "Response time when making predictions"
When using unlimited multiclass, it is best to use a smaller "chunk" size when making predictions because response time grows linearly with the number of classes and number of rows in a prediction dataset.
Each class prediction can generate up to 10 digits to the right of the decimal point (`0.xxxxxxxxxx`). This can result, for each row, in 13 bytes per class. So, for example, a single dataset prediction for a 1,000-class multiclass for 10,000 rows can yield `13B * 1,000 classes * 10,000 rows` or roughly a 130MB response.
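Using the 13 bytes-per-class-per-row assumption above, a quick back-of-the-envelope estimate (generic Python, not an API limit) can help you size prediction chunks:

```python
BYTES_PER_CLASS_PER_ROW = 13          # assumption from the note above

def approx_response_mb(n_classes: int, n_rows: int) -> float:
    return n_classes * n_rows * BYTES_PER_CLASS_PER_ROW / 1e6

print(approx_response_mb(1_000, 10_000))   # ~130.0 MB, matching the example above

# Pick a row chunk that keeps each response under a target size (e.g., ~20 MB).
target_mb = 20
chunk_rows = int(target_mb * 1e6 / (1_000 * BYTES_PER_CLASS_PER_ROW))
print(chunk_rows)                          # ~1538 rows per request
```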
### Trim target labels {: #trim-target-labels }
To configure the labels used for multilabel modeling, scroll to **Trim target labels**.

The following table describes each field:
| | Element | Description |
|---|---|---|
|  | Trim target labels | Enables the trimming functionality. When more than 1,000 labels are detected, the selection is on and cannot be changed. If fewer than 1,000 labels, the selection is off by default but can be enabled.|
|  | Frequency minimum | Sets the minimum occurrence of rows containing this label that is required to avoid being removed. That is, labels with fewer instances will be trimmed (unless specified as a protected label). |
|  | Maximum labels | Sets the final number of labels after trimming. Enter a value between 2-1,000.
|  | Protected labels | Identifies a comma-separated list of labels that will be preserved from trimming, regardless of frequency. This ensures the ability to predict on less frequent labels that are of interest.|
## Pairwise Interactions {: #pairwise-interactions }
Use this setting to control which feature pairs are permitted to interact during the training of a [GA2M](ga2m) model. The setting is often applied in cases where certain features are not permitted to interact due to regulatory constraints.
{% include 'includes/pairwise-warning.md' %}

You must provide a CSV file that specifies the pairwise interactions you want to include. Click the **File Requirements** link for more information about the limitations and format needed for the CSV. The pop-up modal includes an example showing how to structure your CSV in a case that specifies two allowed pairwise interaction groups.

Apply the required formatting and limitations to the CSV, and then click **Browse** to upload it (or drag and drop). DataRobot begins validating the CSV to ensure it matches the file requirements, and indicates any formatting errors with a message:

After successful CSV upload, you can begin training GA2M models. When the models are built, examine their output in the [**Rating Tables**](rating-table) tab. The output indicates the pairwise interactions that you specified.
|
feature-con
|
---
title: Advanced options
description: Set advanced parameters before building models to set non-default characteristics of the model build.
---
# Advanced options {: #advanced-options }
After importing data and selecting a target variable, the **Data** page appears. From this page you can click the **Show advanced options** link to access advanced modeling parameters.
These parameters are summarized in the following sections:
Option | Description
---------|-----------------
[Additional](additional) | Set additional parameters and modify values that can affect model builds.
[Bias and Fairness](fairness-metrics) | Set conditions that help calculate fairness, as well as identify and attempt to mitigate bias in a model's predictive behavior.
[Clustering](time-series-cluster-adv-opt) | Set the number of clusters that DataRobot discovers in a time series clustering project.
[External model prediction insights](external-preds) | Bring external model(s) into the DataRobot AutoML environment, view them on the Leaderboard, and run a subset of DataRobot's evaluative insights for comparison against DataRobot models.
[Feature Constraints](feature-con) | Set monotonic constraints to control the influence between variables and target.
[Partitioning](partitioning) | Set how data is partitioned for training/validation/holdout and the validation type.
Partitioning: Date/time | Set how data is partitioned for [OTV](otv#advanced-options) or [time series](ts-date-time) projects.
[Smart Downsampling](smart-ds) | Downsample the majority class for faster model build time.
[Time Series](ts-adv-opt) | Set a variety of time series-specific advanced options.
[Train-time image augmentation](ttia) | Create new training images to increase the amount of training data.

|
index
|
---
title: External Predictions
description: Through the **External Predictions** advanced option tab, you can bring external model(s) into the DataRobot AutoML environment, view them on the Leaderboard, and run a subset of DataRobot's evaluative insights for comparison against DataRobot models.
---
# External Predictions {: #external-predictions }
Through the **External Predictions** advanced option tab, you can bring external model(s) into the DataRobot AutoML environment, view them on the Leaderboard, and run a subset of DataRobot's evaluative insights for comparison against DataRobot models. This feature:
* Helps to understand how a model trained outside of DataRobot compares in terms of accuracy with DataRobot-trained models from the prediction values.
* Provides DataRobot’s trust and explainability visualizations for externally trained model(s) to provide better model understanding, compliance, and fairness results.
## Workflow overview {: #workflow-overview }
To bring external models into DataRobot, follow this workflow:
1. [Prepare the dataset](#prepare-the-dataset).
2. [Set advanced options](#set-advanced-options).
3. [Add an external model](#add-an-external-model).
4. [Evaluate the external model](#evaluate-the-external-model).
5. [Enable bias testing](#bias-and-fairness-testing) (binary classification only).
## Prepare the dataset {: #prepare-the-dataset }
To set up the project, ensure the uploaded dataset has the following two columns:
* A column containing the values that [identify the partition column](partitioning#configure-model-validation), either cross validation or train/validation/holdout (TVH). If cross-validation is used, the values represent the folds, for example (5 CV fold example): `1`, `2`, `3`, `4`, and `5`. For TVH, the values are typically `T`, `V`, and `H`.
This column will later be referenced in the advanced option **Partition Feature** strategy. In the following example, the column is named `partition_column`.
* A column of external model prediction values ("external predictions column"). The descriptions below use the name `Model1_Output` as an example of the prediction values.

!!! note
External model prediction values must be numeric. For binary classification projects, the prediction values must be between `[0.0, 1.0]`. For regression projects, the prediction values must be between `(-inf, inf)`.
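As a minimal sketch of the required layout (hypothetical column names matching the example above; in practice the prediction values come from your external model rather than random numbers), the dataset might be assembled like this:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
n = 1_000

df = pd.DataFrame({
    "feature_1": rng.normal(size=n),                      # your original features
    "target": rng.integers(0, 2, size=n),                 # binary target
    # Partition column referenced later by the Partition Feature setting.
    "partition_column": rng.choice(["T", "V", "H"], size=n, p=[0.6, 0.2, 0.2]),
    # External model predictions; for binary classification keep them in [0.0, 1.0].
    "Model1_Output": rng.uniform(0.0, 1.0, size=n),
})

df.to_csv("external_predictions_dataset.csv", index=False)
```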
## Set advanced options {: #set-advanced-options }
To prepare for modeling:
1. Open the **External Predictions** tab in advanced options. Enter the external predictions column name(s) from your dataset (up to 100 columns). You are prompted to ensure that **Partitioning** is set.

2. Click **Set Partition Feature** to open the appropriate tab. From the **Partitioning** tab:
* Set **Select partitioning method** to Partition Feature.
* Set **Partition Feature** to the column name `partition_column`.
* Set **Run models using** to either cross validation or TVH.
* If using TVH, set the values found in the `partition_column` that represent the partitions.
## Add an external model {: #add-an-external-model }
You can add an external model on the Leaderboard either:
* As an individual model using Manual mode.
* As one of many models using full Autopilot, Quick, or Comprehensive mode. In this case, the external model is added at the end of the model recommendation process.
For example, to add a single external model:
1. From the **Start** page, change the modeling mode to **Manual**. (This allows you to select your external model from the Repository.) Click **Start** to begin EDA2.
2. Once EDA2 finishes, open the **Data** page. In the Importance column, the external model prediction values column, `Model1_Output`, is labeled **External** and the partition feature, `partition_column`, is labeled **Partition**.

3. Open the model Repository, search for `Model1_Output`, and select it. Notice in the resulting task setup fields, the feature list and sample size are not available for modification. This is because DataRobot cannot know which features from the training data, or what sample size, were used to train the external model.

4. Click **Run Task**.
## Evaluate the external model {: #evaluate-the-external-model }
When model building finishes, the model becomes available on the Leaderboard for comparison and further investigation. It is marked with the EXTERNAL PREDICTIONS label:

!!! note
The Leaderboard metric score (such as LogLoss) will be consistent with the equivalent validation, cross validation, and holdout metric scores calculated by scikit-learn.
The following insights are supported:
Insight | Project type
------- | ------------
[Lift Chart](lift-chart) | All
[Residuals](residuals) | Regression
[ROC Curve](roc-curve-tab/index) | Classification
[Profit Curve](profit-curve) | Classification
[Model comparison](model-compare) | All
[Model compliance documentation](compliance) | All; note only a subset of sections are generated due to the limited knowledge DataRobot has of the external model.
[Bias and Fairness](#bias-and-fairness-testing) | Classification; see below.

## Bias and fairness testing {: #bias-and-fairness-testing }
Additionally, if the dataset creates a binary classification project, you can set up Bias and Fairness options for bias testing of the external model.
1. Complete the fields on the [**Bias and Fairness > Settings**](fairness-metrics#configure-metrics-and-mitigation-post-autopilot) page. Click **Save** and DataRobot retrieves the necessary data.

2. Open the [**Per-Class Bias**](per-class) tab to help identify if a model is biased, and if so, how much and who it's biased towards or against.
3. Open the [**Cross-Class Accuracy**](cross-acc) tab to view calculated evaluation metrics and ROC curve-related scores, segmented by class, for each protected feature.
|
external-preds
|
---
title: Bias and Fairness
description: Describes the Bias and Fairness advanced option tab, where you can set protected features, choose a fairness metric, and configure bias mitigation techniques.
---
# Bias and Fairness {: #bias-and-fairness }
Bias and Fairness testing provides methods to calculate fairness for a binary classification model and attempt to identify any biases in the model's predictive behavior. In DataRobot, bias represents the difference between a model's predictions for different populations (or groups) while fairness is the measure of the model's bias.
Select protected features in the dataset and choose fairness metrics and mitigation techniques either [before model building](#configure-metrics-and-mitigation-pre-autopilot) or [from the Leaderboard](#configure-metrics-and-mitigation-post-autopilot) once models are built. [Bias and Fairness insights](bias/index) help identify bias in a model and visualize the root-cause analysis, explaining why the model is learning bias from the training data and from where.
DataRobot's bias mitigation is a technique for reducing (“mitigating”) model bias for an identified [protected feature](glossary/index#protected-feature)—by producing predictions with higher scores on a selected fairness metric for one or more groups (classes) in a protected feature. It is available for binary classification projects, and typically results in a small reduction in accuracy in exchange for greater fairness.
See the [Bias and Fairness](b-and-f/index) resource page for more complete information on the generally available bias and fairness testing and mitigation capabilities.
## Configure metrics and mitigation pre-Autopilot {: #configure-metrics-and-mitigation-pre-autopilot }
Once you select a target, click **Show advanced options** and select the **Bias and Fairness** tab. From the tab you can set [fairness metrics](#set-fairness-metrics) and [mitigation techniques](#set-mitigation-techniques).

### Set fairness metrics {: #set-fairness-metrics }
To configure **Bias and Fairness**, set the values that define your use case. For additional detail, refer to the [bias and fairness reference](bias-ref) for common terms and metric definitions.

1. Identify up to 10 **Protected Features** in the dataset. Protected features must be categorical. The model's fairness is calculated against the protected features selected from the dataset.

2. Define the **Favorable Target Outcome**, i.e., the outcome perceived as favorable for the protected class relative to the target. In the below example, the target is "salary" so annual salaries are listed under Favorable Target Outcome, and a favorable outcome is earning greater than 50K.

3. Choose the **Primary Fairness Metric** most appropriate for your use case from the five options below.
??? tip "Help me choose"
<a name="select-a-metric"></a>
If you are unsure of the best metric for your model, click **Help me choose**.

DataRobot presents a questionnaire where each question is determined by your answer to the previous one. Once completed, DataRobot recommends a metric based on your answers.

Because bias and fairness are ethically complex, DataRobot's questions cannot capture every detail of each use case. Use the recommended metric as a guidepost; it is not necessarily the correct (or only) metric appropriate for your use case. Select different metrics to observe how answering the questions differently would affect the recommendation.
Click **Select** to add the highlighted option to the **Primary Fairness Metric** field.
| Metric | Description |
|---------|---------------|
| [Proportional Parity](bias-ref#proportional-parity) | For each protected class, what is the probability of receiving favorable predictions from the model? This metric (also known as "Statistical Parity" or "Demographic Parity") is based on equal representation of the model's target across protected classes. |
| [Equal Parity](bias-ref#equal-parity) | For each protected class, what is the total number of records with favorable predictions from the model? This metric is based on equal representation of the model's target across protected classes. |
| Prediction Balance ([Favorable Class Balance](bias-ref#favorable-class-balance) and [Unfavorable Class Balance](bias-ref#unfavorable-class-balance)) | For all actuals that were favorable/unfavorable outcomes, what is the average predicted probability for each protected class? This metric is based on equal representation of the model's average raw scores across each protected class and is part of the set of Prediction Balance fairness metrics. |
| [True Favorable Rate Parity](bias-ref#true-favorable-rate-parity) and [True Unfavorable Rate Parity](bias-ref#true-unfavorable-rate-parity) | For each protected class, what is the probability of the model predicting the favorable/unfavorable outcome for all actuals of the favorable/unfavorable outcome? This metric is based on equal error. |
| [Favorable Predictive Value Parity](bias-ref#favorable-predictive-value-parity) and [Unfavorable Predictive Value Parity](bias-ref#unfavorable-predictive-value-parity) | What is the probability of the model being correct (i.e., the actual results being favorable/unfavorable)? This metric (also known as "Positive Predictive Value Parity") is based on equal error. |

The fairness metric serves as the foundation for the calculated fairness score; a numerical computation of the model's fairness against the protected class.
4. Set a **Fairness Threshold** for the project. The threshold serves as a benchmark for the model's fairness score. That is, it measures if a model performs within appropriate fairness bounds for each protected class. It does not affect the fairness score or performance of any protected class. (See the [reference section](bias-ref) for more information.)

### Set mitigation techniques {: #set-mitigation-techniques }

Select a bias mitigation technique for DataRobot to apply automatically. DataRobot uses the selected technique to automatically attempt bias mitigation for the top three full or Comprehensive Autopilot Leaderboard models (based on accuracy). You can also initiate bias mitigation [manually](#retrain-with-mitigation) after Autopilot completes. (If you used [Quick Autopilot mode](model-ref#quick-autopilot), for example, manual mode allows you to apply mitigation to selected models). With either method, once applied, you can compare [mitigated versus unmitigated models](#compare-models).
??? tip "How does mitigation work?"
Specifically, mitigation copies an affected blueprint and then adds either a pre- or post-processing task, depending on the **Mitigation technique** selected.

The table below summarizes the fields:
Field | Description
----- | -----------
Bias mitigation feature | Lists the protected feature(s); select the one for which you want to reduce the model's bias.
Include as a predictor variable | Sets whether to include the mitigation feature as an input to model training.
Bias mitigation technique | Sets the mitigation technique and the point in model processing when mitigation is applied.
The steps below provide greater detail for each field:
1. Select a feature from the **Bias mitigation feature** dropdown, which lists the feature(s) that you set as protected in the **Protected features** field for general Bias and Fairness settings. This is the feature for which you want to reduce the model's bias.

2. Once the mitigation feature is set, DataRobot computes data quality for the feature. When the check is successful, the option to include the protected feature as a predictor variable becomes available. Check the box to use the feature to attempt mitigation and to include it as an input to model training. Leave it unchecked to use the feature for mitigation only, not as a training input. This can be useful when you are legally prohibited from including sensitive data as a model input (or simply prefer not to) but still want to attempt mitigation based on it.
??? tip "What does the data quality check identify?"
During the data quality check, there are three basic questions answered for the chosen mitigation feature and the chosen target:
1. Does the mitigation feature have too many rows where the value is completely missing?
2. Are there any values of the mitigation feature that are too rare to allow drawing firm conclusions? For example, consider a dataset with 10,000 rows where the mitigated feature is `race`. One of the values, `Inuit`, occurs only seven times, making the sample too small to be representative.
3. Are there any combinations of class plus target that are rare or absent? For example, consider a mitigation feature of `gender`. The categories `Male` and `Female` are both numerous, but the positive target label never occurs in `Female` rows.
If the quality check does not pass, a warning appears. Address the issues in the dataset, then re-upload and try again.
3. Set the **Mitigation technique**, either:
* _Preprocessing Reweighing:_ Assigns row-level weights and uses those as a special model input during training to attempt to make the predictions more fair.
* _Postprocessing with Rejection Option-based Classification (ROBC):_ Changes the predicted label for rows that are close to the prediction threshold (model predictions with the highest uncertainty). Read more about ROBC [here](https://towardsdatascience.com/reducing-ai-bias-with-rejection-option-based-classification-54fefdb53c2e){ target=_blank }. For more details on the applied algorithms, open the model documentation by clicking on the mitigation method task in the model blueprint.
??? tip "Which fairness metrics does each mitigation techniques use?"
The mitigation technique names, "pre" and "post," refer to the point in the workflow (as illustrated in the blueprint) where the technique is applied. For example, reweighing is called "preprocessing" because it happens before the model is trained. Rejection Option-based Classification is called post-processing because it happens after the model has been trained. The techniques use the following metrics.
Technique | Metric
--------- | ------
Preprocessing Reweighing | Primarily [Proportional Parity](bias-ref#proportional-parity) (but may, tangentially, improve other fairness metrics).
Postprocessing with Rejection Option-based Classification | Proportional Parity and [True Favorable](bias-ref#true-favorable-rate-parity) and [True Unfavorable](bias-ref#true-unfavorable-rate-parity) Rate Parity
4. Start the model building process. DataRobot automatically attempts mitigation on the top three _eligible_ models produced by Autopilot against the **Bias mitigation feature**. Mitigated models can be identified by the BIAS MITIGATION badge on the Leaderboard. See the explanation of what makes a model [eligible for mitigation](#mitigation-eligibility), as well as a table listing ineligible models.

5. [Compare](#compare-models) bias and accuracy of mitigated vs. unmitigated models.
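To make the **Preprocessing Reweighing** technique from step 3 more concrete, the sketch below computes row-level weights using a common reweighing scheme: each row is weighted by the ratio of the expected joint probability of its protected class and target label (assuming independence) to the observed joint probability. This is an illustration with hypothetical column names, not DataRobot's exact preprocessing task.

```python
import pandas as pd

df = pd.DataFrame({
    "gender": ["F", "F", "F", "M", "M", "M", "M", "M"],
    "target": [0,   0,   1,   1,   1,   0,   1,   0],
})

p_class = df["gender"].value_counts(normalize=True)          # P(protected class)
p_label = df["target"].value_counts(normalize=True)          # P(target label)
p_joint = df.groupby(["gender", "target"]).size() / len(df)  # observed P(class, label)

def reweigh(row):
    # Expected joint probability under independence, divided by the observed one:
    # over-represented (class, label) pairs get weights below 1, under-represented
    # pairs get weights above 1.
    expected = p_class[row["gender"]] * p_label[row["target"]]
    return expected / p_joint[(row["gender"], row["target"])]

df["weight"] = df.apply(reweigh, axis=1)
print(df)
```

After weighting, the protected class and the target look approximately independent, which is what nudges the downstream model toward metrics such as Proportional Parity.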
## Configure metrics and mitigation post-Autopilot {: #configure-metrics-and-mitigation-post-autopilot }
If you did not configure **Bias and Fairness** prior to model building, you can configure [fairness tests](#retrain-with-fairness-tests) and [mitigation techniques](#retrain-with-mitigation) from the Leaderboard.
### Retrain with fairness tests {: #retrain-with-fairness-tests }
The following describes applying fairness metrics to models after Autopilot completes.
1. Select a model and click **Bias and Fairness > Settings**.

2. Follow the advanced options instructions on [configuring bias and fairness](#set-fairness-metrics).
3. Click **Save**. DataRobot then configures fairness testing for all models in your project based on these settings.
### Retrain with mitigation {: #retrain-with-mitigation }
After Autopilot has finished, you can apply mitigation to any models that have not already been mitigated. To do so, select [one](#single-model-retraining) or [multiple](#multiple-model-retraining) model(s) from the Leaderboard and retrain them with bias mitigation settings applied.
!!! note
While you cannot retrain an already mitigated model, even on a different protected feature, you can return to the parent and select a different feature or technique for mitigation.
From the [parent model](#identify-mitigated-models), you can view the **Models with Mitigation Applied** table. This table lists relationships between the parent model and any child models with mitigation applied. Note the parent model does _not_ have mitigation applied (1). All child mitigated models are listed by model ID (2), including their mitigation settings.

#### Single-model retraining {: #single-model-retraining }
!!! note
If you haven't previously completed the Bias and Fairness configuration in advanced options prior to model building, you must first set those fields via the [**Bias and Fairness > Settings**](#set-fairness-metrics) tab.
To apply mitigation to a single Leaderboard model after Autopilot completes:
1. Expand any [eligible](#mitigation-eligibility) Leaderboard model and open **Bias and Fairness > Bias Mitigation**.

2. [Configure the fields](#set-mitigation-techniques) for bias mitigation.
3. Click **Apply** to start building a new, mitigated version of the model. When training is complete, the model can be [identified](#identify-mitigated-models) on the Leaderboard by the BIAS MITIGATION badge.
4. [Compare](#compare-models) bias and accuracy of mitigated vs. unmitigated models.
#### Multiple-model retraining {: #multiple-model-retraining }
To apply mitigation to multiple Leaderboard models after Autopilot completes:
1. Use the checkboxes to the left of any [eligible](#mitigation-eligibility) models that have not already been mitigated.
2. From the menu, select **Model processing > Apply bias mitigation for selected models**.

3. In the resulting window, [configure the fields](#set-mitigation-techniques) for bias mitigation.

4. Click **Apply** to start building new, mitigated versions of the models. When training is complete, the models can be [identified](#identify-mitigated-models) on the Leaderboard by the BIAS MITIGATION badge.
5. [Compare](#compare-models) bias and accuracy of mitigated vs. unmitigated models.
### Identify mitigated models {: #identify-mitigated-models }
The Leaderboard provides several indicators for mitigated models and their parent (unmitigated) versions:
* A BIAS MITIGATION badge. Use the Leaderboard search to easily identify all mitigated models.

* Model naming reflects [mitigation settings](#set-mitigation-techniques) (technique, protected feature, and predictor variable status).

* The **Bias Mitigation** tab includes a link to the original, unmitigated parent model.

### Compare models {: #compare-models }
Use the [**Bias vs Accuracy**](bias-tab) tab to compare the bias and accuracy of mitigated vs. unmitigated models. The chart will likely show that mitigated models have higher fairness scores (less bias) than their unmitigated version, but with lower accuracy.

Before a model (mitigated or unmitigated) becomes available on the chart, you must first calculate its fairness scores. To compare mitigated and unmitigated models:
1. Open a model displaying the BIAS MITIGATION badge and navigate to [**Bias and Fairness > Per-Class Bias**](per-class). The fairness score is calculated automatically once you open the tab.
2. Navigate to the **Bias and Fairness > Bias Mitigation** tab to retrieve a link to the parent model. Click the link to open the parent.
3. From the parent model, visit the **Bias and Fairness > Per-Class Bias** tab to automatically calculate the fairness score.
4. Open the [**Bias vs Accuracy**](bias-tab) tab and compare the results. In this example, you can see that the mitigated model (shown in green) has higher accuracy (Y-axis) and fairness (X-axis) scores than the parent (shown in magenta).

## Mitigation eligibility {: #mitigation-eligibility }
DataRobot selects the top three _eligible_ models for mitigation, and as a result, those labeled with the BIAS MITIGATION badge may not be the top three models on the Leaderboard after Autopilot runs. Other models may be in a higher position on the Leaderboard but will not have mitigation applied because they were ineligible.
If you select **Preprocessing Reweighing** as the mitigation technique, the following models are ineligible for reweighing because the models don’t use weights:
* Nystroem Kernel SVM Classifier
* Gaussian Process Classifier
* K-nearest Neighbors Classifier
* Naive Bayes Classifier
* Partial Least Squares Classifier
* Legacy Neural Net models: "vanilla" Neural Net Classifier, Dropout Input Neural Net Classifier, "vanilla" Two Layer Neural Net Classifier, and Two Hidden Layer Dropout Rectified Linear Neural Net Classifier (note that contemporary Keras models can be mitigated)
* Certain basic linear models: Logistic Regression, Regularized Logistic Regression (but note that ElasticNet models can be mitigated)
* Eureqa and Eureqa GAM Classifiers
* Two-stage Logistic Regression
* SVM Classifier, with any kernel
If you select either mitigation technique, the following models and/or projects are ineligible for mitigation:
* Models that have already had bias mitigation applied.
* Majority Class Classifier (predicts a constant value).
* [External Predictions](external-preds) models (these use a special column uploaded with the training data and cannot make new predictions).
* Blender models.
* Projects using [Smart Downsampling](smart-ds).
* Projects using custom weights.
* Projects where the **Mitigation Feature** is missing over 50% of its data.
* Time series or OTV projects (i.e., any project with time-based partitioning).
* Projects run with [SHAP](shap-pe) value support.
* Single-column, standalone text converter models: Auto-Tuned Word N-Gram Text Modeler, Auto-Tuned Char N-Gram Modeler, and Auto-Tuned Summarized Categorical Modeler.
## Bias mitigation considerations {: #bias-mitigation-considerations }
Consider the following when working with bias mitigation:
* Mitigation applies to a single, categorical protected feature.
* For the **ROBC** mitigation technique, the mitigation feature must have at least two classes that each have at least 100 rows in the training data. For the **Preprocessing Reweighing** technique, there is no explicit minimum row count, but mitigation effectiveness may be unpredictable with very small row counts.
|
fairness-metrics
|
---
title: Partitioning
description: Describes the settings available from the Partitioning advanced option tab, where you can set the method DataRobot uses to group observations (or rows) together for evaluation and model building.
---
# Partitioning {: #partitioning }
DataRobot provides a mechanism to select the partitioning method and parameters used for model validation. By default, DataRobot selects the “optimal” modeling method based on the size of your data. Generally, it is best to leave the default selection, but you can modify the method through the **Advanced options** link.
Partitioning describes the method DataRobot uses to “clump” observations (or rows) together for evaluation and model building. DataRobot supports the following partitioning methods, described below:
- [Random](#random-partitioning-random)
- [Partition Feature](#column-based-partitioning-partition-feature)
- [Group](#group-partitioning-group)
- Date/Time for [OTV](otv#advanced-options) or for [time-series](ts-date-time)
- [Stratified](#ratio-preserved-partitioning-stratified)
View the [reference documentation](data-partitioning) for examples of each partitioning method.
!!! Note
If you selected to set up time-aware modeling on the **Start** screen, all partitioning methods _except_ [Date/Time](ts-date-time) are disabled. Additionally, not all partition types support smart downsampling.
There are two validation type selections available to you—<em>k</em>-fold cross-validation and training/validation/holdout. See the [data partitioning explanation](data-partitioning#validation-types) for a detailed description of validation type selections.
## Configure model validation {: #configure-model-validation }
To use partitioning and model validation, follow these general steps:
1. Select a target variable (what you want to predict).
2. Once you enter the variable, the **Advanced options** link becomes available. Click the link to display the selections.
3. In the **Partitioning Options** section, choose and configure, if required, the partitioning method. (The methods are described [below](#partitioning-methods).) For example:

4. Select a modeling option (**Run models using**). As applicable, type in the box or use the sliders to change the number of cross-validation folds, the validation percentage, and the holdout percentage. For example:

See [below](#partitioning-methods) for a description of available partitioning methods.
## Partitioning details {: #partitioning-details }
The following sections provide background detail on [partitioning methods](#partitioning-methods). See above for instructions on how to [configure partitioning](#configure-model-validation). See also the information on [stacked predictions](data-partitioning#what-are-stacked-predictions) and how DataRobot selects the validation partition.
### Partitioning methods {: #partitioning-methods }
The section on [validation types](data-partitioning#validation-types) describes methods for using your data to validate models; the sections below describe options for partitioning your data. Note that the choice of partitioning method and validation type is dependent on the target feature and/or partition column. In other words, not all selections will always display.
For all partitioning methods except **Partition Feature** and [date/time partitioning](ts-date-time), the following table describes the meaning of the model validation types. Type in the box or use the sliders to change the number of cross-validation folds, the validation percentage, and the holdout percentage.
| Modeling Options | Description |
|--------------------|---------------|
| Cross-Validation | Specifies the number of folds and the holdout percentage. The cross-validation score is the average of the scores for the individual partitions. |
| Training-Validation-Holdout | Specifies percentages for the training, validation, and holdout splits. |
#### Random partitioning (Random) {: #random-partitioning-random }
With **Random** partitioning, DataRobot randomly assigns observations (rows) to the training, validation, and holdout sets.

#### Column-based partitioning (Partition Feature) {: #column-based-partitioning-partition-feature }
The **Partition Feature** option creates a 1:1 mapping between values of this feature and validation partitions. Each unique value receives its own partition, and all rows with that value are placed in that partition. Although `date` cannot be selected as the target for a project, it can be selected as a partition feature. The column or feature you select must have at least two and no more than 100 unique values; features with only one unique value cannot be used.
DataRobot recommends the use of the Partition Feature option for features that have no more than 25 unique values. For features with more than 25 unique values, use [Group Partitioning](#group-partitioning-group).
You can, however, manually re-group large sets of unique values into a new feature in order to use that data with the Partition Feature option. For example, if you have a feature with 20,000 unique user IDs, you can group those IDs into 25 regions. As a new feature, those regions are your 25 unique values. You can then use the Partition Feature option with your new feature (which associates those regions with your 20,000 user IDs).
Additionally, the recommended modeling validation type depends on how many unique values your feature has. If your partition feature contains 2-3 unique values, use the training/validation/holdout split. If your partition feature contains closer to 10-25 unique values, DataRobot recommends using cross-validation instead.

The modeling validation types for **Partition Feature** have a slightly different meaning:
| Modeling Options | Description |
|--------------------|---------------|
| Cross-Validation | Select a value from the selected partition column that will specify the holdout set. DataRobot uses the split with the largest number of samples for the validation partition (the computed **Validation** score on the Leaderboard). The *Cross-Validation* score—evaluated on all partitions that are not a part of the holdout—is the average of those individual partition scores. If the partition column has fewer than three values, the holdout set is disabled. |
| Training-Validation-Holdout | For training, validation, and holdout, set the value from the selected partition column that specifies that set. |
#### Group partitioning (Group) {: #group-partitioning-group }
With the **Group** partitioning method, all rows with the same single value for the selected feature are guaranteed to be in the same training or test set. Each partition can contain more than one value for the feature, but each individual value will be automatically grouped together by DataRobot. The application returns an error message if your **Group ID feature** will not provide an informative result. The error occurs when the feature chosen for group partitioning has a cardinality of less than 3 times the number of cross-validation folds selected.
DataRobot recommends the use of the **Group** partitioning option for features with more than 25 unique values. For features with fewer than 25 unique values, use [Partition Feature](#column-based-partitioning-partition-feature). Additionally, DataRobot recommends a very evenly distributed set of unique values for group partitioning.

#### Date/time partitioning {: #datetime-partitioning }
Date/time partitioning allows you to order partitions based on time and is part of DataRobot's time-aware modeling capabilities. See a more complete description of date/time partitioning in the Out-of-Time Validation ([OTV](otv#advanced-options)) or [time series](time/index) sections.

#### Ratio-preserved partitioning (Stratified) {: #ratio-preserved-partitioning-stratified }
Observations (rows) are randomly assigned to training, validation, and holdout sets, preserving (as closely as possible) the same ratio of values for the prediction target as in the original data. If **Run models using** is set to **Train-Validate-Holdout**, each partition is assigned the same ratio. If set to **Cross-Validation**, the ratio is preserved both 1) across each CV fold and 2) relative to the training partition. This selection is available for zero-boosted regression problems and binary classification problems.

|
partitioning
|
---
title: Time series
description: Describes the settings available from the Time Series advanced option tab, where you can set features known in advance, exponential trends, and differencing for time series projects.
---
# Time Series {: #time-series }
{% include 'includes/ts-adv-opt-include.md' %}
|
time-series-adv-opt
|
---
title: Smart Downsampling
description: Smart Downsampling is a technique to reduce total dataset size by reducing the size of the majority class, enabling you to build models faster without sacrificing accuracy.
---
# Smart Downsampling {: #smart-downsampling }
Smart downsampling is a technique to reduce total dataset size by reducing the size of the majority class, enabling you to build models faster without sacrificing accuracy. When enabled, all analysis and model building is based on the new dataset size after smart downsampling.

When setting the downsampling percentage rate, you are specifying the size of the majority class after Smart Downsampling. For example, a 70% Smart Downsampling rate would downsample a majority class of 100 rows to 70 rows.
## When to downsample {: #when-to-downsample }
There are two types of problems that benefit from Smart Downsampling:
*Imbalanced classification*: This is a problem in which one of the two target classes occurs far more frequently than the other in the dataset. For example, a direct mail response dataset might consist of negative responses on 99.5% of the records and positive responses on only 0.5%.
??? note "Is imbalanced data ok?"
There is a myth that you must balance data for binary classification problems, leading many data scientists to mistakenly resample their data. While upsampling is the worst mistake you can make here, downsampling can cause problems too. Remember, while _humans_ have trouble understanding imbalanced data, computers do not. For example, it is not intuitive that a model that predicts a constant value can be 99% accurate, but if the data is 99% a single value due to imbalance, this happens.
Most classification models optimize for [LogLoss](opt-metric#loglossweighted-logloss), which naturally handles class balance issues. If downsampling is applied, it affects _only_ the majority class. Once downsampled, DataRobot weights the data to correct for the sampling and ensure predicted probabilities are correct.
If there is no need to downsample (or upsample for that matter), why does DataRobot do it? Downsampling can result in much faster modeling, with very similar accuracy
*Zero-inflated regression*: This is a problem in which the value zero appears in more than 50% of the dataset. A common example of this is within insurance claim data where, for example, 90% of policies may generate zero loss while the other 10% generate claims of various amounts.
In both cases, DataRobot first downsamples the majority class to make the classes balanced, then adds a weight so that the effect of the resulting dataset mimics the original balance of the classes. The applicable optimization metric indicates that the classes are weighted.
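A minimal sketch of the downsample-then-reweight idea follows (hypothetical column names; not DataRobot's internal code): the majority class is sampled down, and the surviving majority rows receive a weight that restores the original class balance when metrics and probabilities are computed.

```python
import pandas as pd

df = pd.DataFrame({"target": [0] * 990 + [1] * 10})   # heavily imbalanced binary target

majority = df[df["target"] == 0]
minority = df[df["target"] == 1]

keep_rate = 0.10                                       # keep 10% of the majority class
majority_down = majority.sample(frac=keep_rate, random_state=1234)

# Weight the kept majority rows so their total contribution matches the original
# majority class size; minority rows keep a weight of 1.
majority_down = majority_down.assign(weight=1.0 / keep_rate)
minority = minority.assign(weight=1.0)

downsampled = pd.concat([majority_down, minority])
print(len(downsampled), downsampled["weight"].sum())   # 109 rows, total weight ≈ 1000
```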
## Conditions for Smart Downsampling {: #conditions-for-smart-downsampling }
Consider the following when using Smart Downsampling:
- The dataset must be larger than 500MB.
- The target variable must take only two values (binary classification) or it must be numeric with more than 50% of values being exactly zero (zero-boosted regression). With time series projects, modeling with many zeros uses a [different calculation](ts-feature-lists#zero-inflated-models).
- You cannot select [Random Partitioning](partitioning#random-partitioning-random) (it is automatically disabled when you enable Smart Downsampling).
- DataRobot will not create [anomaly models](anomaly-detection) when Smart Downsampling is enabled.
- Once enabled, the selected downsampling percentage rate cannot result in the majority class becoming smaller than the minority class.
If the conditions are not met, you cannot enable the feature. The Smart Downsampling option displays a message indicating that the current target is not a binary classification or zero-boosted regression problem.
When you use simple (binary) classification, DataRobot downsamples the majority class. When you use regression, DataRobot downsamples the zero values. Smart Downsampling is selected by default when both of the following conditions are met:
- The majority class is 2x or greater than the minority class.
- The dataset is larger than 500 MB.
## Enable Smart Downsampling {: #enable-smart-downsampling }
Enable Smart Downsampling and specify a sampling percentage from the **Advanced options** link on the **Data** page:
1. Import a dataset or open a project for which models have not yet been built and enter a target variable that results in a binary classification or zero-boosted regression problem.
2. Click the **Show advanced options** link and select the **Smart Downsampling** option.
3. Toggle **Downsample Data** to ON:

4. By typing in the box or using the slider, enter the majority class downsampling percentage rate. Note the following:
- The minimum percentage is the smallest percentage allowed. Any rate below the indicated minimum will result in a majority class that is smaller than the minority class.
- As you change the percentage rate, the majority rows listed under “Results of downsampling…” updates to indicate the new size of the majority class.
5. Scroll to the top of the page, choose a [modeling mode](model-data#set-the-modeling-mode), and click **Start** to begin modeling.
6. When model building is complete, select **Models** from the toolbar. The Leaderboard displays an icon indicating that model results are based on downsampling:

7. Click the icon for a report of the downsampling results:

From the report, you can see that readmitted=true, the minority class, was not modified by downsampling. The majority class, readmitted=false, was reduced by 25%. In other words, the percentage of the majority class that was maintained was 75%.
|
smart-ds
|
---
title: Additional
description: Describes the settings available from the Additional advanced option tab, where you can fine-tune a variety of aspects of model building.
---
# Additional {: #additional }
From the **Additional** tab you can fine-tune a variety of aspects of model building, with the options dependent on the project type. Options that are not applicable to the project are grayed out or do not display (depending on the reason that the option is unavailable).

The following table describes each of the additional parameter settings available in **Advanced options**.
| Parameter | Description |
|-------------|---------------|
| **Optimization metric** | :~~:|
| [Optimization metric](#change-the-optimization-metric) | Provides access to the complete list of available optimization metrics. Once you specify a target, DataRobot chooses from a comprehensive set of metrics and recommends one suited for the given data and target. The chosen metric is displayed below the **Start** button on the **Data** page. Use this dropdown to change the metric before beginning model building. |
| **Automation settings** | :~~:|
| [Search for interactions](feature-disc#search-for-interactions) | Automatically uncovers new features when it finds interactions between features from your primary dataset. Run as part of EDA2, enabling this results in not only new features but also new default and custom feature lists, identified by a plus sign (+).<br><br>This is useful for finding additional insights in existing data. |
| Include only blueprints with [Scoring Code](../../../predictions/port-pred/scoring-code/index) support | Toggle on to only train models that support [Scoring Code](../../../predictions/port-pred/scoring-code/index) export.<br><br>This is useful when scoring data outside of DataRobot or at a very low latency.|
| Create blenders from top models | Toggle whether DataRobot computes [blenders](leaderboard-ref#blender-models) from the best-performing models at the end of Autopilot.<br><br>Note that enabling this feature may increase modeling and scoring time. |
| Include only models with SHAP value support | Includes *only* [SHAP-based blueprints](shap) (often necessary for regulatory compliance). You must check this prior to project start to have access to SHAP-based insights (also true for API and Python client access). If enabled, in addition to the selected Autopilot mode only running SHAP blueprints, the Repository will also only have SHAP-supported blueprints available. When enabled, [**Feature Impact**](feature-impact#shap-based-feature-impact) and [**Prediction Explanations**](shap-pe) produce only SHAP-based insights. This option is only available if "create blenders from top models" is not selected. |
| Recommend and prepare a model for deployment | Toggle on to activate the blueprint recommendation flow (feature list reduction and retraining at higher sample size), which indicates whether DataRobot trains a model, labeled as ["recommended for deployment"](model-rec-process) and "prepared for deployment" at the end of Autopilot. |
| Include blenders when recommending a model | Toggle on to allow blender models to be considered as part of the [model recommendation process](model-rec-process). |
| Use accuracy-optimized metablueprint | Runs XGBoost models with a lower learning rate and more trees, as well as an XGBoost forest blueprint. In certain cases, these models can slightly increase accuracy, but they can take 10x to 100x longer to run. If set, you should increase the **Upper-bound running time** setting to approximately 30 hours (default three hours) so that DataRobot can promote the longer running models to the next stage of Autopilot. Note that because a better model is not guaranteed, this option is only intended for users who are already hand-tuning their XGBoost models and are aware of the runtime requirements of large XGBoost models. There is no guarantee that this setting will work for datasets greater than 1.5GB. If you get out of memory errors, try running the models without accuracy-optimized set, or with a smaller sample size. |
| Run Autopilot on feature list with target leakage removed | Automatically creates a feature list (Informative Features - Leakage Removed) that removes the high-risk problematic columns that may lead to [target leakage](data-quality#target-leakage). |
| Number of models to run cross-validation/backtesting on | Enter the number of models for which DataRobot will compute [cross-validation](data-partitioning) during Autopilot. This parameter also applies to backtesting for Time Series and OTV projects. The setting is activated if the number is greater than the Autopilot default. |
| [Upper-bound running time](#time-limit-exceptions) | Sets an execution limit time, in hours. If a model takes longer to run than this limit, Autopilot excludes the model from larger training sample runs. Models that exceed this time limit are identified on the Leaderboard; you can still run them at higher sample sizes manually, if needed. |
| Response cap (regression projects only) | Limits the maximum value of the response (target) to a percentile of the original values. For example, if you enter 0.9, any values above the 90th percentile are replaced with the value of the 90th percentile of the non-zero values during training. This capping is used only for training, not predicting or scoring. Enter a value between 0.5 and 1.0 (50-100%). |
| Random seed | Sets the starting value used to initiate random number generation. This fixes the value used by DataRobot so that subsequent runs of the same project will have the same results. If not set, DataRobot uses a default seed (typically `1234`). To ensure exact reproducibility, however, it is best to set your own seed value.|
| Positive class assignment (binary classification only) | Sets the class to use when a prediction scores higher than the classification threshold of .5. |
| Weighting Settings (see additional details [below](#additional-weighting-details))| :~~:|
| [Weight](#additional-weighting-details) | Sets a single feature to use as a differential weight, indicating the relative importance of each row. |
| [Exposure](#set-exposure) | Sets a feature to be treated with strict proportionality in target predictions, adding a measure of exposure when modeling insurance rates. *Regression problems only*. |
| [Count of Events](#set-count-of-events) | Used by Frequency-Severity models, sets a feature for which DataRobot collects (and treats as a special column) information on the frequency of non-zero events. *Zero-inflated Regression problems only*. |
| [Offset](#set-offset) | Sets feature(s) that should be treated as a fixed component for modeling (coefficient of 1 in generalized linear models or gradient boosting machine models). *Regression and binary classification problems only*. |
## Change the optimization metric {: #change-the-optimization-metric }
The optimization metric defines how DataRobot scores your models. After you choose a target feature, DataRobot selects an optimization metric based on the modeling task. This metric is reported under the **Start** button on the project start page. To build models using a different metric, overriding the recommended metric, use the **Optimization Metric** dropdown:

The metric DataRobot chooses for scoring models is usually the best selection. Changing the metric is an advanced functionality and recommended only for those who understand the metrics and the algorithms behind them.
Some notes:
* If the selected target has only two unique values, DataRobot assumes that it is a classification task and recommends a classification metric. Examples of recommended classification metrics include LogLoss (if it is necessary to calculate a probability for each class), and Gini and AUC when it is necessary to sort records in order of ranking.
* Otherwise, DataRobot assumes that the selected target represents a regression task. The most popular metrics for regression are RMSE (Root Mean Square Error) and MAE (Mean Absolute Error).
* If you are using [smart downsampling](smart-ds) to downsize your dataset or you selected a weight column for the project, only weighted metrics are available. Conversely, you cannot choose a weighted metric if neither of those scenarios is true. A weighted metric takes the skew in the data into account.
Note that although you choose and build a project optimized for a specific metric, DataRobot computes many applicable metrics on each of the models. After the build completes, you can redisplay the Leaderboard listing based on a different metric. Doing so does not change any values within the models; it simply reorders the model listing based on performance against the alternate metric:

See the reference material for a complete [list of available metrics](opt-metric).
### Time limit exceptions {: #time-limit-exceptions }
When you set the **Upper Bound Running Time**, DataRobot uses that time limit to control how long a single model can run; the limit is three hours by default. Any model that exceeds the limit continues to run until it completes, but DataRobot does not build the model with the subsequent sample size. For example, perhaps you run Full Autopilot and a model at the 16% sample size exceeds the limit. The model continues to run until it completes, but DataRobot does not begin the 32% sample size build for any model until all the 16% models are complete. By setting aside very long-running models, Autopilot can complete; you can then manually run any excluded models at higher sample sizes if needed.
These excepted models are maintained in a model <em>blacklist</em>, indicated with an  icon on the Leaderboard:

You can view the contents of the list (that is, the models excluded from subsequent sample size builds) by expanding the **Time Blacklisted Models** link in the [Worker Queue](worker-queue).

## Additional weighting details {: #additional-weighting-details }
The information below describes valid feature values and project types for the weighting options. With the **Weight**, **Exposure** and **Offset** parameters, you can add constraints to your modeling process. You do this by selecting which features (variables) should be treated differently; when set, the **Data** page indicates that the parameter is applied to a feature.

The following describes the weighting options. See below for more detail and usage criteria.
* **Weight**: Sets a single feature to use as a differential weight, indicating the relative importance of each row. It is used when building or scoring a model—for computing metrics on the Leaderboard—but not for making predictions on new data. All values for the selected feature must be greater than 0. DataRobot runs validation and ensures the selected feature contains only supported values.
* [**Exposure**](#set-exposure): In regression problems, sets a feature to be treated with strict proportionality in target predictions, adding a measure of exposure when modeling insurance rates. DataRobot handles a feature selected for **Exposure** as a special column, adding it to raw predictions when building or scoring a model; the selected column(s) must be present in any dataset later uploaded for predictions.
* [**Count of Events**](#set-count-of-events): Improves modeling of a zero-inflated target by adding information on the frequency of non-zero events. To use **Count of Events**, select the feature (variable) to treat as the source for the count.
* [**Offset**](#set-offset): In regression and binary classification problems, sets feature(s) that should be treated as a fixed component for modeling (coefficient of 1 in generalized linear models or gradient boosting machine models). Offsets are often used to incorporate pricing constraints or to boost existing models. DataRobot handles a feature selected for **Offset** as a special column, adding it to raw predictions when building or scoring a model; the selected column(s) must be present in any dataset later uploaded for predictions.
To use the weighting parameters, enter feature name(s) from the uploaded dataset. DataRobot uses the features as the offset and/or exposure in modeling and only builds those models that support the parameters—ElasticNet, GBM, LightGBM, GAM, ASVM, XGBoost, and Frequency x Severity models (Count of Events is only used in Frequency-Severity models). You can blend resulting models using the Average, GLM, and ENET [blenders](creating-addl-models#create-a-blended-model).
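To illustrate how a **Weight** column enters metric computation, the sketch below evaluates a weighted LogLoss, where each row's loss contribution is scaled by its weight. This mirrors the general idea rather than DataRobot's exact metric code.

```python
import numpy as np

def weighted_logloss(y_true, y_pred, weights):
    """LogLoss in which each row's contribution is scaled by its weight."""
    eps = 1e-15
    y_pred = np.clip(y_pred, eps, 1 - eps)
    losses = -(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))
    return np.average(losses, weights=weights)

y_true  = np.array([1, 0, 1, 0])
y_pred  = np.array([0.9, 0.2, 0.6, 0.4])
weights = np.array([2.0, 1.0, 1.0, 0.5])   # hypothetical Weight column values (all > 0)
print(weighted_logloss(y_true, y_pred, weights))
```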
### Weighting parameter requirements {: #weighting-parameter-requirements }
For **Weight**, **Exposure**, and **Count of Events**, DataRobot filters out all features that do not match the criteria (listed in the tooltip) so that you cannot select them. For **Offset**, you can select multiple features. The following table describes the dataset requirements for each parameter:
Criteria | Weight | Exposure | Count of Events | Offset
----------|---------|------------|------------------|--------
Project type | all | regression | regression | regression and binary classification
Target value | any | positive | zero-inflated | any
Missing values, all zeros, or no values (empty) allowed? | no | no | no | no |
Positive values required? | yes | yes | yes | no
Zero values allowed? | no | no | yes | yes |
Columns allowed | single | single | single | multiple
Duplicate columns allowed? | no | no | no | no
Data type\* | numeric | numeric | numeric | numeric
Transformed features allowed? | yes | no | no | no
Cardinality > 2 allowed? | yes | no | yes | yes
Multiclass allowed? | yes | no | no | no
Time series projects? | yes | no | no | no
Unsupervised projects? | no | no | yes | no
ETL downsampled? | yes | no | no | no
Target zero-inflated? | no | yes | yes | yes
\* Columns selected must be a "pure" numeric (not date, time, etc.). Note also that columns selected for Offset, Exposure, or Count of Events cannot be specified as other special columns (user partition, weights, etc.).
## Weighting effect on insights {: #weighting-effect-on-insights }
Projects built using the Offset, Exposure, and/or Count of Events parameters produce the same DataRobot insights as projects that do not. However, DataRobot excludes Offset, Exposure, and Count of Events columns from the predictive set. That is, the selected columns are not part of the [Coefficients](coefficients), [Prediction Explanations](pred-explain/index), or [Feature Impact](feature-impact) visualizations; they are treated as special columns throughout the project. While the Exposure, Offset, and Count of Events columns do not appear in these displays as features, their values have been used in training.
??? tip "What is a link function?"
A link function—used by Exposure and Offset—maps a non-linear relationship to a linear one, allowing you to use a linear model (linear regression) for data that would otherwise not support that model type. Specifically, it transforms the probabilities of each categorical response variable to a continuous, unbounded scale that is unbounded.
### Set Exposure {: #set-exposure }
The **Exposure** parameter accepts only a single feature. Entering a second feature name will overwrite your previous selection. To set Exposure, start typing a name in the entry box. DataRobot string matches your entry and when completed, validates the entry:

Only [optimization metrics](model-data#optimization-metric) with the log link function (Poisson, Gamma, or Tweedie deviance) can make use of the exposure values in modeling. For these optimization metrics, DataRobot log transforms the value of the field you specify as an exposure (you do not need to do it yourself). If you select a different metric, DataRobot returns an informative message. See [below](#offset-and-exposure-in-modeling) for more training and prediction application details.
??? tip "Exposure explained"
You cannot compare "present day value" when each row of your data has a different history length. Use the Exposure parameter when calculating risk by comparing observations that are not of equal duration. Exposure is a special weighting used to balance risk over time. For example, let's say you have determined that a policy files, on average, two claims per year. When comparing two policies—a half-year policy and a full-year policy—statistically speaking, the half-year policy will file one claim while the full-year policy will file two claims (on average). Exposure allows DataRobot to adjust predictions for the time difference.
### Set Count of Events {: #set-count-of-events }
**Count of Events** improves modeling of a zero-inflated target by adding information on the frequency of non-zero events. Frequency x Severity models handle it as a special column. The frequency stage uses the column to model the frequency of non-zero events. The severity stage normalizes the severity of non-zero events in the column and uses that value as the weight. This is especially important for improving the interpretability of the frequency and severity coefficients. The column is not used for making predictions on new data.
The parameter accepts only a single feature. Entering a second feature name will overwrite your previous selection. DataRobot string matches your entry and, when completed, validates the entry.
### Set Offset {: #set-offset }
The **Offset** parameter adjusts the model intercept (linear model) or margin (tree-based model) for each sample; it accepts multiple features. DataRobot displays a message below the selection reporting which link function it will use.
* For *regression* problems, if the [optimization metric](model-data#optimization-metric) is Poisson, Gamma, or Tweedie deviance, DataRobot uses the log link function, in which case offsets should be log transformed in advance. Otherwise, DataRobot uses the identity link function and no transformation is needed for offsets.
* For *binary classification* problems, DataRobot uses the logit link function, in which case offsets should be logit transformed in advance.

See [below](#offset-and-exposure-in-modeling) for more training and prediction application details.
??? tip "Offset explained"
Applying the **Offset** parameter is helpful when working with projects that rely on data that has a fixed component and a variable component. Offsets lets you limit a model to predicting on only the variable component. This is especially important when the fixed component varies. When you set the Offset parameter, DataRobot marks the feature as such and makes predictions without considering the fixed value.
Two examples:
1. Residual modeling is a commonly used method when important risk factors (for example, underwriting cycle, year, age, loss maturity, etc.) contribute strongly to the outcome, and mask all other effects, potentially leading to a highly biased result. Setting Offsets deals with the data bias issue. Using a feature set as an offset is the equivalent of running the model against the residuals of the selected feature set. By modeling on residuals, you can tell the model to focus on telling you new information, rather than what you already know. With offsets, DataRobot focuses on the "other" factors when model building, while still incorporating the main risk factors in the final predictions.
2. The constraint issue in insurance can arise due to market competition or regulation. Some examples are: discounts on multicar or home-auto package policies being limited to a 20% maximum, suppressing rates for youthful drivers, or suppressing rates for certain disadvantaged territories. In these types of cases, some of the variables can be set to a specific value and added to the model predictions as offsets.
## Offset and Exposure in modeling {: #offset-and-exposure-in-modeling }
During *training*, Offset and Exposure are incorporated into modeling using the following logic:
Project metric | Modeling logic
-------------- | -----
RMSE | `Y-offset ~ X`
Poisson/Tweedie/Gamma/RMSLE | `ln(Y/Exposure) - offset ~ X`
Bernoulli | `logit(Y) - offset ~ X`
When making *predictions*, the following logic is applied:
Project metric | Prediction calculation logic
-------------- | -----
RMSE | `model(X) + offset`
Poisson/Tweedie/Gamma/RMSLE | `exp(model(X) + offset) * exposure`
Bernoulli | `logistic(model(X) + offset)`
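The prediction-time logic in the table above can be sketched directly; in the snippet below, `model` stands in for any fitted model that returns scores on the link scale, and the offset is assumed to already be log- or logit-transformed as described earlier. This is an illustration, not DataRobot's scoring code.

```python
import numpy as np

def predict_poisson_family(model, X, offset, exposure):
    """Poisson/Tweedie/Gamma/RMSLE-style prediction: exp(model(X) + offset) * exposure."""
    return np.exp(model(X) + offset) * exposure

def predict_bernoulli(model, X, offset):
    """Bernoulli-style prediction: logistic(model(X) + offset)."""
    z = model(X) + offset
    return 1.0 / (1.0 + np.exp(-z))
```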
|
additional
|
---
title: Frozen runs
description: Describes “frozen runs,” a setting that freezes parameter settings from a model’s early, small sample size-based run to improve build times for ever increasing sample sizes.
---
# Frozen runs {: #frozen-runs }
To tune model performance on a sample, DataRobot systematically applies many parameter combinations to progressively narrow the search for the optimum model. Trying many parameter combinations is costly, however. As the sample size increases, the time taken for grid search (finding the best parameter settings for a model) increases exponentially.
DataRobot’s "frozen run" feature addresses this by "freezing" parameter settings from a model’s early, small sample size-based run. Because parameter settings based on smaller samples tend to also perform well on larger samples of the same data, DataRobot can piggyback on its early experimentation. Using parameter settings from the earlier pass and injecting them into the new model as it is training saves time, RAM, and CPU resources without much cost in model accuracy or performance. These savings are particularly important when working with big datasets or on resource-constrained systems.
To avoid costly runs on large (over 1.5GB) datasets, sample percentage changes to a Leaderboard model can only be launched as a frozen run. If you are using [Smart Downsampling](smart-ds), the threshold applies to the size of the dataset after downsampling.
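The idea can be illustrated outside DataRobot with any grid-searchable estimator: tune hyperparameters on a small sample, then reuse ("freeze") the winning settings when refitting on more data instead of searching again. This is only an analogy, not DataRobot's implementation.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=20000, random_state=1234)

# 1. Grid search for good hyperparameters on a small, cheap sample.
grid = GridSearchCV(
    GradientBoostingClassifier(random_state=1234),
    param_grid={"learning_rate": [0.01, 0.1], "max_depth": [2, 3, 5]},
    cv=3,
)
grid.fit(X[:2000], y[:2000])

# 2. "Freeze" the winning parameters and refit on the larger sample without
#    repeating the search.
frozen = GradientBoostingClassifier(random_state=1234, **grid.best_params_)
frozen.fit(X, y)
```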
## Start a frozen run {: #start-a-frozen-run }
A frozen run, because it relies on previously determined parameters, can only be launched from an existing model on the Leaderboard. To use the frozen run feature:
1. Run Autopilot or build a model using Manual mode. Use a sample size that can complete in a reasonable time (determined by your system resources and dataset), but is not so small that the parameter optimization search cannot identify good parameter values.
2. When the model build(s) are complete, open the model Leaderboard.
3. For each model that you want to re-run with more data, click the plus sign next to the reported sample size:

4. Set a new sample size and click the snowflake icon:

Use one of the following methods to update the sample size:
- Enter a percentage value (**Percent** field) or number of rows (**Row Count** field).
- Use the slider to set values based on a visual indicator.
- Click in the **Snap To** box for quick access to the default percentages that DataRobot uses for training and other significant values.
!!! note
When configuring sample sizes to retrain models in projects with large row counts, DataRobot recommends requesting sample sizes using integer row counts or the **Snap To Validation** option instead of percentages. This is because percentages map to many actual possible row counts and only one of which is the actual sample size for “up to validation.” For example, if a project has 199,408 rows and you request a 64% sample size, any number of rows between 126,625 rows and 128,618 rows maps to "64%" of the data. Using integer row counts or the “Snap-to” options avoids ambiguity around how many rows of data you want the model to use.
Note that all values adjust as you update the sample size.
5. Click **Run with new sample size** to start a model build using the parameter settings from the selected model on the sample size you just set.
Frozen run Leaderboard items are indicated with a [snowflake icon](leaderboard-ref#model-icons); the sample percentage used to obtain the parameters is displayed alongside that icon.
## Compare frozen run models {: #compare-frozen-run-models }
Once the model build completes, you should determine whether the speed and resource improvements are worth any potential cost in accuracy. The new model appears on the Leaderboard:

* Snowflake (1): The icon and percentage indicate that the model was based on “frozen” parameter settings from the 64% sample size version of the model.
* Sample Size (2): The sample size, as always, indicates the percentage of the training dataset that was used to build the model. This example shows a 64% model that was retrained to 50%.
Compare the following:
To compare accuracy between models, set the metric to a value you want to measure and check the **Validation** scores:

Click on the newly created model and then click the [**Model Info**](model-info) tab. The resulting page displays a resource usage summary detailing core use, RAM, build time and other statistics, as well as sample and model file size details.
Compare the information from these screens against your needs for speed and accuracy.
Note that under the [**Model Info**](model-info) tab for [Smart Downsampled](smart-ds) projects, the row count ("rows") in the **SAMPLE SIZE** tile represents the number of rows after downsampling. However, the Training and Test data sizes list the number of rows <em>before</em> downsampling occurs.

|
frozen-run
|
---
title: Feature lists
description: Describes how to work with feature lists, which control the subset of features that DataRobot uses to build models.
---
# Feature lists {: #feature-lists }
<em>Feature lists</em> control the subset of features that DataRobot uses to build models. You can use one of the [automatically created lists](#automatically-created-feature-lists) or manually add features from the [**Data**](#create-feature-lists-from-the-data-page) page or the [menu](#create-feature-lists-from-the-menu). You can also [review, rename, and delete](#feature-lists-tab) (some) feature lists. The list used for modeling is called the <em>default modeling feature list</em>. That is, it is the feature list selected when you clicked the Start button.
If you don't override the selection, DataRobot uses one of the following lists to build models:
* All features that provide information potentially valuable for modeling (the <em>Informative Features</em> list).
* All features that provide information potentially valuable for modeling with any feature(s) at risk of causing target leakage removed (the <em>Informative Features - Leakage Removed</em> list).
You can select features to create a new feature list, before or after [EDA2](eda-explained). The target feature is automatically added to every feature list. Once created, the new list becomes available in the **Feature List** dropdown. DataRobot highlights the active list, which controls the display of features on the page, in blue.

Note that the **Project Data** tab defaults to showing **All Features**, which is not actually a feature list but instead a way to view every feature in the dataset.
## Select a feature list {: #select-a-feature-list }
To use a feature list other than the list assigned by DataRobot, select the list to use as the default modeling list from the **Feature List** dropdown. The new setting is reflected under the Start button:

## Create feature lists {: #create-feature-lists }
If you do not want to use one of the [automatically created](#automatically-created-feature-lists) feature lists, you can create customized feature lists and train your models on them to see if they yield a better model. You can create these lists from the **Data** page or the [menu](#create-feature-lists-from-the-menu). Additionally, you can create lists based on feature impact from the [**Feature Impact**](feature-impact#create-a-new-feature-list) tab, including lists with redundant features removed. You can later [manage these lists](#manage-feature-lists) from the **Feature Lists** tab.
### Create feature lists from the Data page {: #create-feature-lists-from-the-data-page }
To create feature lists from the **Data** page:
1. Select the **Project Data** tab.
2. Optionally, from the **Feature List** dropdown select **All Features** to display all columns (features) in your dataset.
3. Use the checkboxes to the left of a feature name to select a set of features. When you select the first feature, the **Create Feature List** link becomes active.
4. Select each feature you want added to your new list and click **Create Feature List**.

Enter a name in the resulting dialog box and click **Create feature list**. The page display updates to show only those features that are part of the new list (highlighted in blue in the **Feature List** dropdown).

!!! tip
Click in the box to select all, or deselect any, selected features.

### Create feature lists from the menu {: #create-feature-lists-from-the-menu }
You can use the **Menu** options to quickly select features for a new feature list. Click the **Menu** to expand:

Clicking a feature list name causes DataRobot to select all features on the <em>displayed page</em> that are members of the chosen feature list (set by the **Feature List** dropdown). For example, set the **Feature List** to <em>Informative Features</em> and then, from the menu dropdown, select the example created above (<em>Top5</em>). DataRobot automatically selects (checks the left-hand boxes) of the five features in the <em>Top5</em> list. You can now use that as a base and add or drop features to create a new list:

Add the new features, name your list, and click **Create**. The new list is available for selection across the project (from the **Feature List** dropdown).
## Feature Lists tab {: #feature-lists-tab }
The **Feature Lists** tab of the **Data** page provides a mechanism for managing feature lists. It provides a summary (name, number of features, number of models, created date, and description) of DataRobot-created and custom feature lists and allows you to [delete](#delete-feature-lists) or [rename](#edit-feature-list-names-and-descriptions) (some) lists to help avoid clutter and confusion. A lock () next to the name indicates the list cannot be [deleted](#delete-feature-lists).

After building models, the list includes additional automatically created lists (1) as well as any custom lists (2):

## Manage feature lists {: #manage-feature-lists }
DataRobot provides several tools for working with feature lists. Depending on how the list was created (automatically by DataRobot or manually by a user), or whether it has been used to create models on your Leaderboard, the actions may behave differently:

The following table describes the actions:
| Icon | Description |
|-------|---------------|
|  | Exports features that are part of the selected list as a CSV file. |
|  | Opens the selected feature list on the **Project Data** tab. |
|  | Provides a dialog to let you [edit](#feature-lists-tab) the list name and/or description. ([Automatically created feature lists](#automatically-created-feature-lists) cannot be renamed although the description can be changed.)* |
|  | [Restarts Autopilot](#rerun-autopilot) using the selected feature list.* |
|  or  | [Deletes](#delete-feature-lists) the selected list (or indicates it cannot be deleted). [Automatically created feature lists](#automatically-created-feature-lists) cannot be deleted.* |
\* You must have [User-level](roles-permissions#project-roles) or above project access to delete or rename feature lists, as well as to restart Autopilot.
!!! tip
You cannot add or remove features from a feature list. Instead, create a new feature list with all desired features.
## Edit feature list names and descriptions {: #edit-feature-list-names-and-descriptions }
When creating a [custom feature list](#create-feature-lists-from-the-data-page), you simply name the list in the initial dialog. From the **Feature Lists** tab, you can append a description to the list. To add a description, or to edit an existing one, highlight the list and click the pencil icon.

You can change a description, but not a name, for a DataRobot-created list.
## Rerun Autopilot {: #rerun-autopilot }
You can launch a re-run of Autopilot from the **Feature Lists** tab by clicking the retrain icon. Clicking the icon launches a dialog; select **Restart Autopilot** to rebuild the project with the new list.
* If you restart while models are building for the project, DataRobot stops building new models with the currently running feature list and restarts Autopilot, from the beginning, using the selected list.
Note that this is the same action as rerunning Autopilot from the [**Configure modeling settings**](worker-queue#restart-a-model-build) link available in the right-panel Worker Queue.
## Delete feature lists {: #delete-feature-lists }
Deleting a feature list also deletes any models in the project that were built with that list. Only custom feature lists can be deleted (no  next to the name). If you click to delete a custom feature list that has been used for modeling, DataRobot warns with the number of models impacted:

You cannot use the delete function if the feature list is:
* An automatically created list.
* The default modeling list for the project.
* Configured as a [monotonic constraint](monotonic) feature list for the project.
* Used as the input feature list to create the [modeling dataset](ts-create-data#create-the-modeling-dataset) for a time series project.
* Used in a [model deployment](deployment/index) (the model and its feature lists cannot be deleted until after the deployments are deleted).
## Automatically created feature lists {: #automatically-created-feature-lists }
DataRobot automatically creates several feature lists for each project. Note that:
* [Time series feature lists](ts-feature-lists) differ from AutoML feature lists.
* Features created from a search for interactions result in [different lists](feature-disc#feature-lists-and-created-features) (appended with a plus (+) sign).
* A project's target feature is automatically added to every feature list.
The following describes the automatically created feature lists, although not all lists apply to a project.
* **DR Reduced Features**: A subset of features, selected based on the Feature Impact calculation of the best non-blender model on the Leaderboard. DataRobot then automatically retrains the best non-blender model with this <em>DR Reduced Features</em> list, creating a new model. DataRobot compares the original and new models, selects the better one, and retrains this model at a higher sample size for model recommendation purposes. <em>DR Reduced Features</em>, in most cases, consists of the features that provide 95% of the accumulated impact for the model (this selection rule is sketched at the end of this section). If more than 100 features are needed to reach that threshold, only the top 100 features are included. If [redundant feature](feature-impact#remove-redundant-features) identification is supported in the project, redundant features are excluded from DR Reduced Features. Note that this list is not created in [Quick](model-ref#quick-autopilot) mode.
* **Informative Features - Leakage Removed**: The default feature list if DataRobot detects [target leakage](data-quality#target-leakage). This list excludes feature(s) that are at risk of causing target leakage and any features providing little or no information useful for modeling. To determine what was removed, you can see these features labeled in the **Data** table with **All Features** selected.
* **Informative Features**: The default feature list if DataRobot does not detect [target leakage](data-quality#target-leakage). This list includes features that pass a "reasonableness" check that determines whether they contain information useful for building a generalizable model. For example, DataRobot excludes features it determines are low information or redundant, such as duplicate columns, a column containing all ones or reference IDs, a feature with too few values, [and others](histogram#data-page-informational-tags). Informative features are sorted to the top of the Features list.
* **Raw Features**: All features in the dataset, excluding user-derived features and including those excluded from the Informative Features list (e.g., duplicates, high missing values).
* **Univariate Selections**: Features that meet a certain threshold (an [ACE](model-ref#data-summary-information) score above 0.005) for non-linear correlation with the selected target. DataRobot calculates, for each entry in the Informative Features list, the feature’s individual relationship against the target. This list is not available until EDA2 completes.
While not a feature list (not available for use to build models), the **All Features** selection sets the **Project Data** display to list all columns in the dataset as well as any additional transformed features.
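The <em>DR Reduced Features</em> selection rule described above (features covering 95% of accumulated impact, capped at 100 features) can be illustrated with a short, self-contained sketch. The impact values below are made up for illustration; this is not DataRobot's internal code.

```python
# Conceptual sketch: keep the highest-impact features until 95% of total
# impact is covered, never keeping more than 100 features.
impact = {
    "income": 0.42, "age": 0.25, "tenure": 0.18,
    "region": 0.10, "last_purchase": 0.05,
}  # illustrative, pre-computed feature impact values

total = sum(impact.values())
reduced, running = [], 0.0
for name, value in sorted(impact.items(), key=lambda kv: kv[1], reverse=True):
    if running >= 0.95 * total or len(reduced) >= 100:
        break
    reduced.append(name)
    running += value

print(reduced)  # ['income', 'age', 'tenure', 'region'] covers 95% of the impact
```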
|
feature-lists
|
---
title: Basic model workflow
description: Describes the basic workflow of the DataRobot model building process, with links to complete documentation for each step.
---
# Basic model workflow {: #basic-model-workflow }
Once the import has finished, DataRobot displays the **Data** page. From here you can set a target and change your project settings, then build your models. DataRobot initiates [EDA2](eda-explained) when you start the modeling process.
Generally speaking, once you select a target and click **Start**, DataRobot searches through millions of possible combinations of algorithms, preprocessing steps, features, transformations, and tuning parameters. It then uses supervised learning algorithms to analyze the data and identify (apparent) predictive relationships, which are used to estimate the value of the target in unseen data based on the values of the other dataset variables.
## Model building workflow {: #model-building-workflow }
DataRobot supports both [supervised](glossary/index#supervised-learning) and [unsupervised](glossary/index#unsupervised-learning) learning. The following outlines the steps for building models after [EDA1](eda-explained#eda1) completes, with links to more detailed discussions of each step:
1. (<em>Optional</em>) [Explore](#explore-your-data) your data.
2. (<em>Optional</em>) Investigate the [Data Quality Assessment](data-quality).
3. [Set the target](#set-the-target-feature) feature or set up an [unsupervised learning](unsupervised/index) run by clicking **No target** and selecting [Anomalies](anomaly-detection) or [Clusters](clustering).
4. Add secondary datasets for [Feature Discovery](fd-overview).
5. (<em>Optional</em>) Customize your model build, including:
* Creating [multiclass models](multiclass).
* Changing the [optimization metric](additional#change-the-optimization-metric).
* Setting [advanced model building](adv-opt/index) options.
* Creating [feature lists](feature-lists) to define feature subsets.
* Creating [new (transformed) features](#create-new-features).
6. Set the [modeling mode](#set-the-modeling-mode).
7. (<em>Optional</em>) Set up [time-aware modeling](#set-up-time-aware-modeling), if applicable.
8. Start the model build process. (DataRobot provides [special handling](#build-failure) when the project fails after the build process starts.)
9. (<em>Optional</em>) Investigate results of [automated target leakage](data-quality#target-leakage) detection.
10. (<em>Optional</em>) [Rerun](#rerun-autopilot) modeling with newly configured settings.

!!! note
DataRobot provides [special handling](fast-eda) of larger datasets to make viewing and model building work more efficiently. Specifically, [early target selection](fast-eda#fast-eda-and-early-target-selection) allows you to set build parameters and set the project to start automatically when ingestion completes. For more information, see the sections on [viewing the project summary](manage-projects#project-summaries) and [interpreting summary information](model-ref#data-summary-information).
See the [deep dive](model-ref) for more details on the model building process.
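The core of this workflow (create a project, set the target, run Autopilot, review the Leaderboard) can also be scripted with the DataRobot Python client. This is a hedged sketch; the file name, target column, and credentials are placeholders, and defaults may differ by release.

```python
import datarobot as dr

dr.Client(endpoint="https://app.datarobot.com/api/v2", token="YOUR_API_TOKEN")

# Upload data and create the project (EDA1 runs as part of project creation).
project = dr.Project.create("customers.csv", project_name="Churn example")

# Set the target and start modeling; QUICK mirrors the default Quick Autopilot mode.
project.set_target(target="churn", mode=dr.AUTOPILOT_MODE.QUICK)

# Wait for Autopilot to finish, then look at the top of the Leaderboard.
project.wait_for_autopilot()
for model in project.get_models()[:5]:
    print(model.model_type, model.metrics[project.metric]["validation"])
```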
## Explore your data {: #explore-your-data }
Even before you begin the model building process, DataRobot can provide information about your data. After EDA1 completes, you can scroll down or click the **Explore** link to view DataRobot's first analysis of the data. EDA1 provides the following resources for exploring the data:
1. A [Data Quality Assessment](data-quality).

2. For each feature, DataRobot detects the data (variable) type of each feature; supported data types are listed [here](model-ref#data-summary-information). Additional information on the data page includes unique and missing values, mean, median, standard deviation, and minimum and maximum values.

3. A histogram or table of Frequent Values for a selected feature as well as a dialog to modify the variable type (described in more detail [here](histogram)).

## Set the target feature {: #set-the-target-feature }
The model building phase of the project starts with selecting a target feature. The <em>target feature</em> is the name of the column in the dataset that you would like to predict. Until you select a target, the other **Start** screen configuration options aren't available.
Enter the name of the target feature you would like to predict. DataRobot lists matching features as you type:

Alternatively, while exploring your data, notice that when you hover over a feature name a **Use as Target** link appears. Click the link to select that feature as the target.

When you enter a target, DataRobot displays a histogram providing information about the target feature's distribution.

## Customize the model build {: #customize-the-model-build }
If you want to customize the build prior to building, you can modify a variety of advanced parameters (the optimization metric and many others), create [feature lists](feature-lists), and transform features. These options are described below.
### Optimization metric {: #optimization-metric }
The <em>optimization metric</em> defines how to score your models. Once you enter a target, DataRobot selects a default metric based on your data. The metric choice, which becomes visible after you select a target variable, is listed under the **Start** button. You can change the optimization metric through the [**Advanced options**](additional#change-the-optimization-metric) link.
Note that although you choose and build a project optimized for a specific metric, DataRobot computes many applicable metrics on each of the models. After the build completes, you can redisplay the Leaderboard listing based on a different metric. Doing so does not change any values within the models; it simply reorders the model listing based on performance against the alternate metric.
### Improve accuracy {: #improve-accuracy }
If accuracy is a prime concern, consider selecting the "accuracy-optimized metablueprint" checkbox in [**Advanced options**](additional#time-limit-exceptions) prior to model building. Using this feature causes model building to run much more slowly, but potentially produces more accurate blueprints. (For example, with this option you may get XGBoost models with many more trees but a lower learning rate or with a deeper grid search.)
### Other advanced options {: #other-advanced-options }
The [**Show advanced options**](adv-opt/index) link allows you to set far more than the optimization metric. From there you can:
* Set [partitioning options](partitioning)
* Enable [Smart Downsampling](smart-ds)
* Set a variety of [additional parameters](additional), including weights, offset/exposure, running time limits, and more
### Create new features {: #create-new-features }
DataRobot supports two different types of transformations: automatic and manual. The software automatically creates derived features from any column that it identifies as var type `Date`. DataRobot also supports user-created transformations, which you can then include in your feature lists. See the more detailed description of [transformations](feature-transforms) for more information.
## Set up time-aware modeling {: #set-up-time-aware-modeling }
For projects where time is an important dimension, DataRobot provides an option to create [time-aware models](time/index): models that use time either for validation (OTV) or for forecasting (time series). Use out-of-time validation (OTV) to validate model performance against future data, and Automated Time Series modeling to predict individual events over time. Options for time-aware modeling become available after you select a target feature and <em>if</em> DataRobot detects a date/time feature in your dataset. If there are no time features, the option is grayed out and you can continue the modeling workflow.
## Set the modeling mode {: #set-the-modeling-mode }
!!! note
See the [multistage Autopilot](multistep-ta) description for time-aware modeling.
By default, DataRobot runs Quick (Autopilot)—a shortened and optimized version of the full Autopilot mode. In Autopilot, DataRobot selects a predefined set of models to run based on the specified target feature and then trains the models on the training data set. Sample percentage sizes are based on the selected mode (see the table below) and time-aware setting.
For example, in full Autopilot, DataRobot first builds models using 16% of the total data on the selected models. When the models are scored, DataRobot selects the top 16 models and reruns them on 32% of the data. Taking the top 8 models from that run, DataRobot runs on 64% of the data (or 500MB of data, whichever is smaller). Results of all model runs, at all sample sizes, are displayed on the Leaderboard. This method supports running more models in the early stages and advancing only the top models to the next stage, allowing for greater model diversity and faster Autopilot runtimes. See the notes on [calculating Autopilot stages](repository#notes-on-sample-size) for more detail.
When running Autopilot, DataRobot initially caps the sample size at 500MB. Once it selects a model for deployment, that model is rerun at 80% (exceeding the previous 500MB cap). Note that you can train any model to any sample size (exceeding 500MB) from the **Repository** or [retrain models](creating-addl-models) to any size from the Leaderboard.
For more control over which models are run, use the additional options beneath the **Start** button. For large datasets, see the section on [early target selection](fast-eda#fast-eda-and-early-target-selection).

!!! note
See the table of [differences applied ](model-ref#small-datasets) when working with smaller datasets.
### Modeling modes explained {: #modeling-modes-explained }
The following table describes each of the modeling modes:
| Modeling mode | Description |
|-------------|-------------|
| Quick (default) | Using a sample size of 64%, Quick Autopilot runs a [subset](model-ref#specifics-of-quick-autopilot) of models, based on the specified target feature and performance metric, to provide a base set of models that build and provide insights quickly.|
| Autopilot | In full automatic Autopilot mode, DataRobot selects the best predictive models for the specified feature. By default, Autopilot runs on the [Informative Features](feature-lists#automatically-created-feature-lists) feature list. |
| Manual | Manual mode gives you full control over which blueprints to execute. When you select Manual mode, DataRobot provides a message and link to the Repository after [EDA2](eda-explained) completes. |
| Comprehensive | [Comprehensive Autopilot mode](more-accuracy) runs all Repository blueprints on the maximum Autopilot sample size to ensure more accuracy for models. This mode results in extended build times. Note that you cannot use Comprehensive Autopilot mode for time series or [anomaly detection](anomaly-detection) projects. |
## Start the build {: #start-the-build }
To start the build, select a [feature list](feature-lists):

Then, select a [modeling mode](#set-the-modeling-mode) and click **Start** to initiate [EDA2](eda-explained). When the modeling process begins, DataRobot indicates the activity with a spinning icon by the **Models** tab. As models complete, a badge count also appears:

The modeling process finds the best predictive models for the target feature. You can manage the build using the DataRobot [Worker Queue](worker-queue). If projects [fail to build](#build-failure), DataRobot provides information, including a traceback that can be sent to Support.
As models build, you can [explore the EDA2 data](model-ref#data-summary-information) DataRobot is using from the **Project Data** tab. Once complete, you can also [work with feature lists](feature-lists) or visualize [associations](feature-assoc) within your data from the **Data** page.
!!! note
If you close your browser or log out, DataRobot continues building models in any projects that have started the model building phase.
## Build failure {: #build-failure }
After you load data, set a target, and select options, it is possible that your project fails to build (due to data format errors, for example). When this happens, DataRobot provides the information necessary to help troubleshoot the problem, whether on your own or with the help of Support. Errored projects, while not built, are saved to the [Manage Projects](manage-projects) inventory, with their traceback information. This helps you debug or repair issues without losing any feature engineering or other preprocessing and customization you may have performed.
When a build first fails, DataRobot presents a dialog with:
* A brief error message
* The option to view traceback details by expanding the **Details** link
* The ability to dismiss the dialog

Once dismissed, DataRobot provides a preliminary summary of project data with a message indicating that project creation failed. Click the **CONTACT SUPPORT** link to see the information available, then click **Submit** to send the information to the Support team. (For organizations that are not configured for direct contact to Support through the application, clicking the link opens your mail client.)

At this point, you can continue working on other projects while Support investigates your issue. To revisit the failed project, open [**Manage Projects**](manage-projects). The failed project is marked with an icon indicating an issue:

Select the project to return to the preliminary project data summary page. From here you can open the Support contact link or view your traceback.
## Configure modeling settings {: #configure-modeling-settings }
When modeling completes, you can rerun the process in Autopilot, Quick, or Comprehensive mode with new settings. Select **Configure modeling settings** in the right-side panel.

* Select the [modeling mode](#set-the-modeling-mode): Autopilot, Quick, Manual, or Comprehensive.
* Choose the [feature list](feature-lists) used for modeling.
* Determine the [automation settings](additional): choose to only include blueprints with Scoring Code support, create blenders from top models, and recommend models for deployment.

Once configured, click **Rerun** to restart the modeling process.
|
model-data
|
---
title: Model Repository
description: DataRobot's library of algorithms used to build models. They may be run as part of Autopilot and also are available for manual selection.
---
# Model Repository {: #model-repository }
The Repository is a library of modeling blueprints available for a selected project. These blueprints illustrate the algorithms (the preprocessing steps, selected estimators, and in some models, postprocessing as well) used to build a model, not the model itself. Model blueprints listed in the Repository have not necessarily been built yet, but could be built in any of the [modeling modes](model-data#set-the-modeling-mode). When you create a project in Manual mode and want to select a specific blueprint to run, you access it from the Repository.

When you choose any version of Autopilot as the modeling mode, DataRobot runs a sample of blueprints that will provide a good balance of accuracy and runtime. Blueprints that offer the possibility of improvement while also potentially increasing runtime (many deep learning models, for example) are available from the Repository but not run as part of Autopilot.
It is a good practice to run Autopilot, identify the blueprint (algorithm) that performed best on the data, and then run all variants of that algorithm in the Repository. [Comprehensive mode](more-accuracy) runs all models from the Repository at the maximum sample size, which can be quite time consuming.
From the Repository you can:
* [Search](#search-the-repository) to limit the list of models displayed by type.
* Use [Preview](#menu-actions) to display a model's [blueprint](blueprints) or code.
* Start a model [run](#create-a-new-model) using new parameters.
* Start a [batch run](#launch-a-batch-run) using new parameters applied to all selected blueprints.
## Search the Repository {: #search-the-repository }
To more easily find one of the model types described below, or to sort by model type, use the **Search** function:

Click in the search box and begin typing a model/blueprint family, blueprint name, or badge name. As you type, the list automatically narrows to those blueprints meeting your search criteria. To return to the complete model listing, remove all characters from the search box.
### DataRobot models {: #datarobot-models }
DataRobot models are built using massively parallel processing to train and evaluate thousands of choices, mostly built on open source algorithms (because open source has some of the best algorithms available). DataRobot searches through millions of possible combinations of algorithms, preprocessing steps, features, transformations, and tuning parameters to deliver the best models for your dataset and prediction target. It is this preprocessing and tuning that produces the best models possible. DataRobot models are marked with the DataRobot icon .
??? tip "List DataRobot models"
You can view a list of DataRobot models from the Repository (or the Leaderboard) by using the search term `datarobot model`:

## Create a new model {: #create-a-new-model }
To create a new model from a blueprint in the Repository:
1. Select the blueprint to run by marking the check box next to the name:
2. Once selected, modify one or more of the fields in the now-enabled dialog box:

| | Element | Description |
|---|---|---|
|  | **Feature list** | From the dropdown select a new feature list. The options include the default lists and any [lists you created](feature-lists). |
|  | **Sample Size** | [Modify the sample size](#notes-on-sample-size), making it either larger or smaller than the sample size used by Autopilot. Remember that when increasing the sample size, you must set values that leave data available for validation.|
|  | **CV Runs** | Set the number of [folds](data-partitioning#k-fold-cross-validation-cv) used in cross-validation. |
3. After verifying the parameter settings, click **Run Task(s)** to launch the new model run.

### Launch a batch run {: #launch-a-batch-run }
The Repository batch run capability allows you to set parameters and apply them to a group of selected models. To launch a batch run, select the blueprints to run in batch by either clicking in the box next to the model name or selecting all by clicking next to **Blueprint Name & Description**:

To deselect all selected blueprints, click the minus sign (-) next to **Blueprint Name & Description**.
If you have already built any of the models in the batch using the same sample size and feature list, you must make a change to at least one of the parameters (described in the [run](#create-a-new-model) option). This is not required for batches containing all new models. Click **Run Task(s)** to start the build.

### Notes on sample size {: #notes-on-sample-size }
The sample size available when adding models from the Repository differs depending on the size of the dataset. By default, the sample size reflects the size used in the last Autopilot stage (but this value can be changed to any valid size). DataRobot caps that amount of data at either 64% or 500MB, whichever is smaller.
When calculating size, DataRobot _first_ calculates what will ultimately be the final stage (64% or 500MB, whichever is smaller) and then models according to the [mode selected](model-data#set-the-modeling-mode). For full Autopilot, that would be 1/4, 1/2, and all of that data. In datasets smaller than 500MB, full Autopilot stages are 64%/4, 64%/2, and finally, 64%. (See the calculations below for alternate partitioning results.)
If the dataset is larger than 500MB, it is first reduced to the 500MB threshold using random sampling. Then, DataRobot calculates the percentage that corresponds to 500MB and creates stages that are 1/4, 1/2, and all of the calculated percentage. For example, if 500MB is 40% of the data, DataRobot runs 10%, 20%, and 40% stages.
In addition to dataset size, the range available for selection also depends on the partitioning parameters. For example, if you have 20% holdout with five-fold cross-validation (CV), the calculation is as follows:
1. Take 100% of the data minus 20% holdout.
2. From the 80% remaining, use 1/5 of the data. By default, a single validation fold is calculated as: `100% - 20% - 80%/5 = 64%`.
Note that if you configured custom training/validation/holdout ([TVH](data-partitioning#training-validation-and-holdout-tvh)) partitions, it is calculated as:
`100% - custom % for holdout - custom % for validation`
Or, if you declined to include holdout (0%) with five-fold CV, the result is:
`100 - 1/5 of full data for validation = 80%`
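The staging arithmetic above can be condensed into a few lines. This is a simplified sketch of the calculations described in this section; it ignores row-count rounding and the time-aware variations DataRobot applies in practice.

```python
def autopilot_stages(holdout_pct=20, cv_folds=5, pct_at_500mb=None):
    """Simplified sketch of the staged Autopilot sample sizes described above."""
    # Final stage: data remaining after holdout, minus one validation fold...
    final = (100 - holdout_pct) - (100 - holdout_pct) / cv_folds
    # ...capped at whatever percentage corresponds to 500MB for larger datasets.
    if pct_at_500mb is not None:
        final = min(final, pct_at_500mb)
    # Full Autopilot runs 1/4, 1/2, and all of the final-stage percentage.
    return [final / 4, final / 2, final]

print(autopilot_stages())                 # [16.0, 32.0, 64.0]
print(autopilot_stages(pct_at_500mb=40))  # [10.0, 20.0, 40.0]
```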
### Menu actions {: #menu-actions }
Use the menu to preview a model—either the [blueprint](blueprints) or, for open-source models, the code.

Use the **Add** function to select the model and add it to the [task list](#create-a-new-model) to run when **Run Task** is clicked.
|
repository
|
---
title: Add/delete models
description: Describes how to work with models after your initial model build, including creating blenders.
---
# Add/delete models {: #adddelete-models }
This section describes how to work with models after your initial model build.
## Add models from the Leaderboard {: #add-models-from-the-leaderboard }
Once the Leaderboard is populated, you can create additional models by [creating a new model](#use-add-new-model) or [retraining](#retrain-a-model) an existing model. In both cases, once you submit your changes you can see the request's progress in the Worker Queue.
### Use Add New Model {: #use-add-new-model }
To create a new model from the Leaderboard:
1. Click the **Add New Model** link at the top of the Leaderboard.

2. Select a model type and, if a model of that type already exists, be sure the new model changes at least one of the following: [feature list](feature-lists), sample size, or number of [cross-validation runs](data-partitioning#k-fold-cross-validation-cv). Click **Add Model**.

Note that this method functions the same as the **Run Task** button in the [**Repository**](repository#create-a-new-model).
### Retrain a model {: #retrain-a-model }
You can retrain an existing Leaderboard model by changing the sample size or feature list.
#### Change sample size {: #change-sample-size }
To use a different number of rows or percentage of data, click the plus sign next to the reported sample size:

Set a new value and click **Run with new sample size**. Note that when setting a new sample size, above a certain point (determined by the size of the dataset) DataRobot forces a [frozen run](frozen-run#start-a-frozen-run). To increase the sample size in larger datasets without a frozen run, create the new model from the [**Repository**](repository#create-a-new-model). Be aware that this method may use more RAM and have implications for system performance.
!!! note
When configuring sample sizes to retrain models in projects with large row counts, DataRobot recommends requesting sample sizes using integer row counts or the **Snap To Validation** option instead of percentages. This is because a percentage maps to many possible row counts, only one of which is the actual sample size for “up to validation.” For example, if a project has 199,408 rows and you request a 64% sample size, any number of rows between 126,625 and 128,618 maps to 64% of the data. Using integer row counts or the “Snap-to” options avoids ambiguity around how many rows of data you want the model to use.
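A quick way to sanity-check the example in the note is to enumerate which row counts round to 64% of 199,408 rows. This assumes simple rounding to the nearest whole percent, which reproduces the documented range.

```python
# Every row count whose rounded percentage equals 64% "means" the same 64% request.
rows_total = 199_408
matching = [n for n in range(1, rows_total + 1) if round(100 * n / rows_total) == 64]
print(min(matching), max(matching), len(matching))
# -> 126625 128618 1994: roughly 2,000 different row counts map to "64%"
```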
#### Change feature list {: #change-feature-list }
To change the feature list for a specific model, click the feature list icon and select a new list. You can also [rerun Autopilot on a new feature list](feature-lists#rerun-autopilot) to rebuild all models.

## Blended models {: #blended-models }
Blending lets you combine the predictions of multiple models, which may lead to better results than running models individually. DataRobot can [automatically create blender models](leaderboard-ref#blender-models) as part of Autopilot if the [**Create blenders from top models**](additional) advanced option is enabled. This option is off by default.
Why create blended models?
* To create more accurate models.
* To use multiple blueprints.
* To leverage the [wisdom of crowds principle](https://en.wikipedia.org/wiki/Wisdom_of_the_crowd){ target=_blank }.
Consider the following before creating a blended model:
* Blend between two and eight models, based on different algorithms and with high accuracy.
* Although blenders often increase accuracy, they also require more time to create and score.
* Because the final model is more complex, blended models can be more difficult to interpret and communicate. Use the [Understand tab insights](understand/index) to aid in interpretation.
DataRobot supports the following blending methods for non-time aware projects:
|Blender | Project type | Notes |
|--------|--------------|-------|
| Average Blend (AVG)| Regression, Binary Classification, Multiclass | N/A|
| Median Blend (MED)| Regression, Binary Classification, Multiclass | N/A|
| Partial Least Squares Blend (PLS)| Regression, Binary Classification| Not available on large datasets (slim run)|
| Generalized Linear Model Blend (GLM)| Regression, Binary Classification| Not available on large datasets (slim run)|
| Elastic Net Blend (ENET)| Regression, Binary Classification, Multiclass | Not available on large datasets (slim run)|
| Mean Absolute Error-Minimizing Weighted Average Blend (MAE)| Regression| Only available for projects using MAE as the project metric; not available on large datasets (slim run) |
| Mean Absolute Error-Minimizing Weighted Average Blend with L1 Penalty (MAEL1) | Regression| Only available for projects using MAE as the project metric; not available on large datasets (slim run) |
| Random Forest Blend (RF)| Multiclass| _Deprecated_|
| Light Gradient Boosting Machine Blend (LGBM)| Regression, Binary Classification, Multiclass | _Deprecated_ |
??? tip "Single-model blenders"
A single model blender is like a "calibration" step. Calibration is DataRobot's attempt to improve the model such that the distribution and behavior of predicted probability values are close to the distribution and behavior of probability values observed in the training data.
GLM, ENET, and PLS blenders learn an **intercept** and a **coefficient**. That is, they "add a number to every prediction" and "multiply every prediction by a number." Sometimes, a simple addition or multiplication can yield a small improvement in a model's results. Blenders that require training (all except AVG and MED) use [stacking](data-partitioning#what-are-stacked-predictions) to ensure an out-of-sample prediction (and avoid misleadingly high accuracy). An entire LGBM or TF model is fit with a single prediction input and can therefore learn a complex non-linear transform of that single prediction. For AVG and MED blenders, creating a single model blender is not useful as it results in an exact duplicate of the parent model.
See [below](#blenders-for-time-aware-projects) for time-aware blender information.
For each target point, the Average and Median blenders calculate the average or median values of the predictions of the selected individual models. GLM, Elastic Net, and PLS blenders are essentially a second layer of models on top of the existing models. They use the predictions of the selected models as predictors, while keeping the same target as the individual models.
### Create a blended model {: #create-a-blended-model }
Follow these steps to create a blended model.
1. Using the checkboxes on the left side of the model Leaderboard, select two or more models. (See the note on single-model blenders, above, to use blending as an additional calibration method.)

2. Click the model menu icon at the top left of the Leaderboard, then select one of the blending options listed under **Blending**. (Hovering over a menu item displays a description of the blending option.)

3. A new job appears in the Worker Queue while the blended model is processed. The name indicates the blender type and the models selected to create the blender.

When processing is complete, the new blended model displays in the list on the Leaderboard.
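Blenders can also be requested programmatically. The sketch below uses the DataRobot Python client to average the top three Leaderboard models; the model selection is illustrative and method names reflect the public client at time of writing.

```python
import datarobot as dr

dr.Client(endpoint="https://app.datarobot.com/api/v2", token="YOUR_API_TOKEN")
project = dr.Project.get("YOUR_PROJECT_ID")

# Blend the predictions of the top three Leaderboard models with an average blender.
models = project.get_models()[:3]
blend_job = project.blend([m.id for m in models], dr.enums.BLENDER_METHOD.AVERAGE)

blender = blend_job.get_result_when_complete()
print(blender.model_type)
```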
### Blenders for time-aware projects {: #blenders-for-time-aware-projects }
Time-aware models, because they do not use [stacked predictions](data-partitioning#what-are-stacked-predictions), have different blenders available:

| Blender (Code) | Project type | Description |
|---------------------------|------------------|-------------------|
| Average Blend (AVG) | OTV, time series | Average of prediction between different models |
| Median Blend (MED) | OTV, time series | Median of prediction between different models |
| Average Blend by Forecast Distance (FD\_AVG) | Time series | From the selected models, provides per forecast distance averages for the three top models. Only available for projects with two or more forecast distances; to blend by forecast distance, at least four models need to be selected |
| ENET Blend by Forecast Distance (FD\_ENET) | Time series | Elastic Net model per forecast distance to combine predictions; only available for projects with two or more forecast distances |
While some models do better on short-term predictions (the next few steps into the future) and others do better at long-term predictions (further into the future), time series projects add forecast distance blending options. With forecast distance blenders, DataRobot blends models differently for each forecast distance in order to use the best blueprints for each.
Note that:
* The forecast distance blenders are disabled when the forecast distance is equal to 1.
* When using Average Blender by Forecast Distance, you must select four or more models. If fewer than four are selected, the blender averages model predictions instead of forecast-distance based predictions.
## Add models from selected {: #add-models-from-selected}
After DataRobot creates models and populates the Leaderboard, you can retrain selected models using different settings. For example, you can run using a different feature list or sample size, or select either single-fold or up to five-fold cross-validation.
Use the following steps to re-run one or more selected models.
!!! note
You cannot use **Add models from selected** to change settings on blended models.
1. Use the checkboxes on the left side of the model names on the Leaderboard to select one or more models.
2. From the menu, select **Model processing > Add models from selected**.

3. Use the resulting box at the top of the Leaderboard to specify a feature list, sample size, and/or number of cross-validation (CV) runs.

4. Click **Run Models** to retrain the selected models with the specified parameters. New jobs appear in the Worker Queue while DataRobot processes the models.
## Delete models {: #delete-models }
You can delete models listed on the Leaderboard using the instructions below. Note that when you delete a model in this way, it is deleted from the Leaderboard but not from the underlying project database. Because of this, the model remains available to any parent project components (for example, [blender](#create-a-blended-model) models or Word Cloud).
To delete models:
1. Using the checkboxes on the left side of the model Leaderboard, select one or more models.
2. From the menu, select **Model processing > Delete selected model**.

3. Click **Delete** to confirm model deletion.
|
creating-addl-models
|
---
title: Build models
description: This topic introduces elements of the basic modeling workflow as well as methods for building additional models in a project.
---
# Build models {: #build-models }
These sections describe elements of the basic modeling workflow as well as methods for building additional models in a project:
Topic | Describes...
----- | ------
[Build overview](model-data) | Set advanced modeling parameters prior to building.
[Feature lists](feature-lists) | Work with the set of features used to build models.
[Unlock Holdout](unlocking-holdout) | Release data saved for model evaluation.
[Comprehensive Autopilot](more-accuracy) | Prioritize model accuracy with more models at maximum sample size.
[Add/delete models](creating-addl-models) | Work with models after the initial model build.
[Frozen runs](frozen-run) | Freeze parameter settings to optimize resources.
[Model repository](repository) | Build from the library of modeling blueprints.
|
index
|
---
title: Unlock Holdout
description: Holdout, the portion of your data DataRobot reserves when building models, provides an evaluation metric that measures a model's accuracy against the unseen data to validate model quality.
---
# Unlock Holdout {: #unlock-holdout }
The *Holdout* column displays an evaluation metric that measures a model's accuracy against unseen ("new") data. Holdout is calculated using the trained model's predictions on the [holdout partition](data-partitioning). DataRobot reserves a portion of your data to use as holdout (20% by default); it does not train models using this data but instead validates the quality of your models once they have been trained.
!!! tip
You should only unlock your holdout data after having made all your model-related decisions. **Once your project's holdout has been unlocked, it cannot be re-locked.**
!!! note
If you run full or Quick Autopilot and DataRobot returns a model [recommended and prepared for deployment](model-rec-process), the specifics described below work slightly differently.
To display a specific model's Holdout score:
1. Click **Unlock project Holdout for all models** on the rightmost panel.

2. Confirm your decision by clicking **Unlock holdout**.
When you unlock holdout, the label on the project menu changes to **Holdout is unlocked** and a value displays in the Holdout column.

Once you have unlocked the holdout data, view the Leaderboard scores on the test data. Then, look at the [Lift Chart](lift-chart). Alternate the **Data Source** dropdown between Validation and Holdout to determine the accuracy of the model's predictions.
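For scripted workflows, the same (irreversible) action is available in the DataRobot Python client. A minimal sketch, assuming an existing project ID:

```python
import datarobot as dr

dr.Client(endpoint="https://app.datarobot.com/api/v2", token="YOUR_API_TOKEN")
project = dr.Project.get("YOUR_PROJECT_ID")

# Irreversible: once unlocked, holdout cannot be re-locked.
project.unlock_holdout()

# Holdout scores now appear alongside validation scores in each model's metrics.
best = project.get_models()[0]
print(best.metrics[project.metric].get("holdout"))
```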
|
unlocking-holdout
|
---
title: Comprehensive Autopilot
description: Describes the model building mode that returns the most accurate models by running all repository modeling blueprints on the maximum Autopilot sample size.
---
# Comprehensive Autopilot {: #comprehensive-autopilot }
It is a common decision when building models to prioritize speed over accuracy. If you want to invest more time to find the most accurate model for your use case, DataRobot offers the Comprehensive Autopilot modeling mode. This mode runs all Repository modeling blueprints on the maximum Autopilot sample size to maximize model accuracy, although it does result in extended build times.
!!! note
If your dataset has text, Comprehensive mode runs TinyBERT models, which can be 10x to 100x slower than other models with the same data.
## Comprehensive Autopilot {: #comprehensive-autopilot_1 }
If you have not begun the modeling process and want to prioritize the highest possible model accuracy, select Comprehensive mode from the **Start** screen.

!!! note
Note that you cannot run Comprehensive Autopilot for [time series](time/index) or [anomaly detection](anomaly-detection) projects.
## Get more accuracy {: #get-more-accuracy }
If you have completed a run of the modeling process but want to find a more accurate model, you can rerun Autopilot, building additional models while prioritizing accuracy.
After initial modeling is complete, navigate to the Leaderboard. In the Worker Queue, select the **Get More Accuracy** option to re-run the modeling process with new settings. To configure those settings before re-running Autopilot, select **Configure modeling settings**.

Below the **Get More Accuracy** button, DataRobot suggests a modeling mode and feature list for the new modeling run. In the example above, the initial run was done in Manual mode, so the **Get More Accuracy** option suggests a new modeling run in Quick mode. The table below summarizes the modeling mode suggested based on the mode used in your initial modeling process.
| Initial Modeling Mode | Suggested Modeling Mode |
|-----------------------|------------------------|
| Manual | Quick |
| Quick | Autopilot |
| Autopilot | Comprehensive |
| Comprehensive | After comprehensive Autopilot, the **Get More Accuracy** option directs you to configure modeling settings, as there is no higher level of Autopilot available. |
If you want to use the suggested modeling mode and feature list, click **Get More Accuracy**. If you wish to change these settings, or any other modeling settings, before starting the new modeling run, select **Configure Modeling Settings**.
The modal prompts you to configure modeling settings:

* Select the [modeling mode](model-data#set-the-modeling-mode): Autopilot, Quick, or Comprehensive.
* Choose the [feature list](feature-lists) used for modeling.
* If you already ran Autopilot with the selected feature list, DataRobot prompts you to confirm that the previously created models can be deleted before rerunning Autopilot.
* Determine the [automation settings](additional): choose to only include blueprints with Scoring Code support, create blenders from top models, and recommend models for deployment.
Once configured, click **Run** to restart the modeling process.
After the modeling run completes, you may want to run an additional, more extensive modeling mode than the one you previously selected. The modeling mode for the **Get More Accuracy** option updates to suggest a more detailed mode.
|
more-accuracy
|
---
title: Clustering
description: Learn how to use clustering, a form of unsupervised learning, to separate your samples into clusters that help you to better understand your data or to use as segments for time series modeling.
---
# Clustering {: #clustering }
Clustering, an application of [unsupervised learning](glossary/index#unsupervised-learning), lets you explore your data by grouping and identifying natural segments. Use clustering to explore clusters generated from many types of data—numeric, categorical, text, image, and geospatial data—independently or combined. In clustering mode, DataRobot captures a latent behavior that's not explicitly captured by a column in the dataset.
You can also use clustering to generate the segments for a time series segmented modeling project. See [Clustering for segmented modeling](ts-clustering) for details.
See the associated [considerations](#feature-considerations) for additional information.
## How to use clustering models {: #how-to-use-clustering-models }
Clustering is useful when data doesn't come with explicit labels and you have to determine what they should be. You can upload any dataset to get an understanding of your data because no target is needed. Examples of clustering include:
* Detecting topics, types, taxonomies, and languages in a text collection. You can apply clustering to datasets containing a mix of text features and other feature types or a single text feature for topic modeling.
* Determining appropriate segments to be used for [time series segmented modeling](ts-clustering).
* Segmenting your customer base before running a predictive marketing campaign. Identify key groups of customers and send different messages to each group.
* Capturing latent categories in an [image collection](visual-ai/index).
* Deploying a clustering model using [MLOps](mlops/index) to serve cluster assignment requests at scale, as a step in a more extensive pipeline.
## Build a clustering model {: #build-a-clustering-model }
The clustering workflow is similar to the [anomaly detection](anomaly-detection) workflow, also an unsupervised learning application.
To build a clustering model:
1. [Upload data](import-data/index), click **No target?** and select **Clusters**.

**Modeling Mode** defaults to Comprehensive and **Optimization Metric** defaults to [Silhouette Score](opt-metric#silouette-score).
2. Click **Start**.
DataRobot generates clustering models based on default cluster counts for your dataset size. You can also [configure the number of clusters](#configure-the-number-of-clusters). For clustering, DataRobot divides the original dataset into training and validation partitions with no holdout partition.
When modeling is complete, the Leaderboard displays the generated clustering models ranked by silhouette score:

The **Clusters** column indicates the number of clusters used by the clustering algorithm.
3. Select a model to investigate.
By default, the [**Describe > Blueprint**](#sample-clustering-blueprint) tab displays.
4. Analyze [visualizations](#visualizations-for-exploring-clusters) to select a clustering model.
5. After evaluating and selecting a clustering model, [deploy the model](deployment/index) and [make predictions](predictions/index) on existing or new data as you would any other model. You can make predictions from the Leaderboard or the deployment.
## Sample clustering blueprint {: #sample-clustering-blueprint }
Following is an example of a clustering blueprint.

Click a blueprint node to access documentation on the algorithm or transform. This example shows details on the K-Means Clustering node.
This dataset contains categorical, geospatial location, numeric, image, and text variables. The clustering algorithm is applied after preprocessing and dimensionality reduction of the variable types to improve processing speed.
## Visualizations for exploring clusters {: #visualizations-for-exploring-clusters }
The following visualization tools are useful for clustering projects:
### Cluster Insights {: #cluster-insights }
The [Cluster Insights](cluster-insights) visualization (**Understand > Cluster Insights**) helps you investigate clusters generated during modeling.

Compare the [feature values of each cluster](cluster-insights#investigate-cluster-features) to gain an understanding of the groupings.
### Image Embeddings {: #image-embeddings }
If your dataset contains images, use the [Image Embeddings](vai-insights#image-embeddings) visualization (**Understand > Image Embeddings**) to see how the images from each cluster are sorted.

For clustering models, the frame of each image displays in a color that represents the cluster containing the image. Hover over an image to view the probability of the image belonging to each cluster.

### Activation Maps {: #activation-maps }
With [Activation Maps](vai-insights#activation-maps), you can see which image areas the model is using when making prediction decisions, in this case, how best to cluster the data. Hover over an image to see which cluster the image was assigned to.

!!! note
For unsupervised projects, the default image preprocessing uses low-level featurization while supervised projects use multi-level featurization. See [Granularity](vai-tuning#granularity) for details. See also the [Visual AI reference](vai-ref).
### Feature Impact {: #feature-impact }
Use the [Feature Impact](feature-impact) tool (**Understand > Feature Impact**) to see which features had the most influence on the clustering outcomes:

!!! tip "How is Feature Impact calculated for clustering projects?"
As with supervised projects, DataRobot permutes each feature and looks at how much the prediction changes based on the [RMSE metric](opt-metric#rmse-weighted-rmse-rmsle-weighted-rmsle). The larger the change, the higher the impact of the feature.
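The general permutation idea described in the tip can be sketched in a few lines. This mirrors the technique, not DataRobot's exact implementation; `model` is assumed to be any object with a numeric `predict` method and `X` a NumPy feature matrix.

```python
import numpy as np

def permutation_impact(model, X, seed=0):
    """Rank features by how much shuffling each one changes the model's output (RMSE)."""
    rng = np.random.default_rng(seed)
    baseline = np.asarray(model.predict(X), dtype=float)
    impact = {}
    for j in range(X.shape[1]):
        X_perm = X.copy()
        rng.shuffle(X_perm[:, j])  # break the link between feature j and the output
        shifted = np.asarray(model.predict(X_perm), dtype=float)
        impact[j] = float(np.sqrt(np.mean((shifted - baseline) ** 2)))
    return impact  # larger RMSE change means higher impact
```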
### Feature Associations {: #feature-associations }
Because clustering can be computationally expensive, you might want to use the [Feature Associations](feature-assoc) tool (**Data > Feature Associations**) to determine if there are redundant features that you can possibly remove.

In this example, the features derived from `year_built` and `sold_date` are highly correlated and thus might not both be useful to the clustering algorithms. If so, you can remove the redundant features and rerun clustering.
!!! note
To generate feature associations for a clustering project (or any unsupervised learning project), DataRobot uses the first 50 features alphabetically. Unlike supervised learning where the [ACE score](glossary/index#ace-scores) is used to select features, unsupervised projects don't use targets and therefore cannot compute the ACE score.
## Configure the number of clusters {: #configure-the-number-of-clusters }
Some clustering algorithms (i.e., K-Means) require a cluster count prior to modeling. Others (i.e., HDBSCAN—Hierarchical Density-Based Spatial Clustering of Applications with Noise) discover an effective number of clusters dynamically. You can learn more about these clustering algorithms in their [blueprints](#sample-clustering-blueprint).
??? tip "How do you decide the number of clusters?"
To detect the number of clusters, test out models that use different cluster counts and take a look at the distributions of the clusters. In some cases, you might want a balanced distribution. In other cases, you might want smaller, more fine-grained clusters. For example, for customer segmenting, a small cluster might be more actionable because you can target a smaller group of customers efficiently.
The following sections discuss how to set the cluster count:
* [Prior to modeling](#set-the-number-of-clusters-in-advanced-options)
* [When rerunning a single model](#update-the-number-of-clusters-and-rerun-a-model)
* [When rerunning all clustering models](#update-the-number-of-clusters-and-rerun-all-models)
### Set the number of clusters in Advanced Options {: #set-the-number-of-clusters-in-advanced-options }
Prior to starting a clustering run, you can customize the number of clusters you want DataRobot to use:
1. After you upload your data and set up clustering mode, click **Advanced settings**. In the **Advanced Options** section that displays, click **Clustering** on the left.

2. Enter one or more numbers in the **Number of clusters** field. You can enter up to 10 numbers. For each number you enter, DataRobot trains multiple models, one for each algorithm that supports setting a fixed number of clusters (such as K-Means or Gaussian Mixture Model).
### Update the number of clusters and rerun a model {: #update-the-number-of-clusters-and-rerun-a-model }
To rerun a model on a different number of clusters:
1. Click the **+** icon in the **Clusters** column of the model.

2. Enter the number of clusters to use for the run.

### Update the number of clusters and rerun all models {: #update-the-number-of-clusters-and-rerun-all-models }
To update the number of clusters and rerun all models:
1. Click **Rerun modeling** on the Workers pane on the right.
2. Update the numbers of clusters you want the clustering algorithms to use and click **Rerun**.

For this example, DataRobot runs clustering algorithms using 7, 10, 12, and 15 clusters.
## Feature considerations {: #feature-considerations }
When using clustering, consider the following:
* Datasets for clustering projects must be less than 5GB, as with other AutoML projects.
* Word Clouds are not supported.
* The maximum number of clusters is 100.
* Prediction Explanations are not supported.
* Relational data (summarized categorical features, for example) is not supported.
* Clustering models can be deployed to dedicated prediction servers, but Portable Prediction Servers (PPS) and monitoring agents are not supported.
See also the [time series-specific clustering considerations](ts-consider#clustering-considerations).
|
clustering
|
---
title: Unsupervised learning
description: Work with unlabeled data to build models in unsupervised mode (anomaly detection and clustering).
---
# Unsupervised learning {: #unsupervised-learning }
Typically DataRobot works with labeled data, using supervised learning methods for model building. With supervised learning, you specify a target (what you want to predict) and DataRobot builds models using the other features of your dataset to make that prediction.
DataRobot also supports *unsupervised learning* where no target is specified and the data is unlabeled. Instead of generating predictions as in supervised learning, unsupervised learning surfaces insights about patterns in your data, answering questions like "Are there anomalies in my data?" and "Are there natural clusters?"
These unsupervised learning strategies are described in the following sections:
| Topic | Describes... |
|---|---|
| [Anomaly detection](anomaly-detection) | Use unsupervised learning to detect abnormalities in your dataset. |
| [Clustering](clustering) | Use unsupervised learning to group similar data and identify segments. |
|
index
|
---
title: Anomaly detection
description: Work with unlabeled data to build models in unsupervised mode (anomaly detection).
---
# Anomaly detection {: #anomaly-detection }
DataRobot works with unlabeled data (or [partially labeled](#partially-labeled-data) data) to build anomaly detection models. Anomaly detection, also referred to as outlier and novelty detection, is an application of unsupervised learning. Where supervised learning models use target features and make predictions based on the learning data, unsupervised learning models have no targets and detect patterns in the learning data.
Anomaly detection can be used in cases where there are thousands of normal transactions with a low percentage of abnormalities, such as network and cyber security, insurance fraud, or credit card fraud. Although supervised methods are very successful at predicting these abnormal, minority cases, it can be expensive and very time-consuming to label the relevant data.
See the associated [considerations](#feature-considerations) for important additional information.
## Anomaly detection workflow {: #anomaly-detection-workflow }
The following provides an overview of the anomaly detection workflow, which works for both AutoML and [time series](#time-series-anomaly-detection) projects.
1. Upload data, click **No target?** and select **Anomalies**.

2. If using time-aware modeling:
* Click **Set up time-aware modeling**.
* Select the primary date/time feature.
* Select to set up Time Series Modeling.
* Set the rolling window (FDW) for anomaly detection.
3. Set the modeling mode and click **Start**. If you choose Manual mode, navigate to the **Repository** and run an [anomaly detection blueprint](#anomaly-detection-blueprints).
4. From the Leaderboard, [consider the scores](#interpret-anomaly-scores) and select a model.
5. For time series projects, expand a model and choose [**Anomaly Over Time** or **Anomaly Assessment**](anom-viz). This visualization helps to understand anomalies over time and functions similarly to the non-anomaly [**Accuracy Over Time**](aot).

6. Compute [Feature Impact](feature-impact).
!!! note
Regardless of project settings, Feature Impact for anomaly detection models trained from DataRobot blueprints is always computed using SHAP. For anomaly detection models from user blueprints, Feature Impact is computed using the permutation-based approach.
7. Compute [Feature Effect](feature-effects).
8. Compute [Prediction Explanations](pred-explain/index) to understand which features contribute to outlier identification.
9. Consider changing the [outlier threshold](#outlier-thresholds).
10. [Make predictions](predict#make-predictions-on-an-external-dataset) (or use [partially labeled](#partially-labeled-data) data).
### Synthetic AUC metric {: #synthetic-auc-metric }
Anomaly detection is performed in unsupervised mode, which finds outliers in the data without requiring a target. Without a target, however, traditional data science metrics cannot be calculated to estimate model performance. To address this, DataRobot uses the Synthetic AUC metric to compare models and sort the Leaderboard.
Once unsupervised mode is enabled, Synthetic AUC appears as the default metric. The metric works by generating two synthetic datasets out of the validation sample—one made more normal, one made more anomalous. Both samples are labeled accordingly, and then the model calculates anomaly score predictions for both samples. The usual ROC AUC value is estimated for each synthetic dataset, using the artificial labels as the ground truth. A Synthetic AUC of 0.9 does not mean that the model is correct 90% of the time; it simply means that a model with a Synthetic AUC of 0.9 is likely to outperform a model with a Synthetic AUC of 0.6.

### Outlier thresholds {: #outlier-thresholds }
After you have run anomaly models, for some blueprints you can set the `expected_outlier_fraction` parameter in the [**Advanced Tuning**](adv-tuning) tab.

This parameter sets the percent of the data that you want considered as outliers—the "contamination factor" you expect to see. In AutoML, it is used to define the content of the [**Insights**](#anomaly-score-insights) table display. In special cases such as the SVM model, this value sets the `nu` parameter, which affects the decision function threshold. By default, the `expected_outlier_fraction` is 0.1 (10%).
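As a rough illustration of how the fraction maps to a score threshold and to the **Insights** display size (a sketch only; `model` and `X_train` are hypothetical placeholders):

```python
import numpy as np

expected_outlier_fraction = 0.1            # the default (10%)
scores = model.anomaly_score(X_train)      # hypothetical scoring call returning one score per row

# Rows whose score falls in the top `expected_outlier_fraction` are treated as outliers.
threshold = np.quantile(scores, 1 - expected_outlier_fraction)
outlier_rows = np.flatnonzero(scores >= threshold)

# The Insights table shows the smaller of (fraction * number of rows) and 100 rows.
rows_displayed = min(int(expected_outlier_fraction * len(scores)), 100)
```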
### Interpret anomaly scores {: #interpret-anomaly-scores }
As with non-anomaly models, DataRobot reports a model score on the Leaderboard. The meaning of the score differs, however. A "good" score indicates that the abnormal rows in the dataset are related somehow to the class. A "poor" score indicates that you do have anomalies but they are not related to the class. In other words, the score does not indicate how well the model performs. Because the models are unsupervised, scores could be influenced by something like noisy data—what you may think is an anomaly may not be.
Anomaly scores range between 0 and 1, with larger scores indicating rows that are more likely to be anomalous. Scores are calibrated so they can be interpreted as the probability that a given row is an outlier when compared to the other rows in the training set. However, since there is no target in unsupervised mode, the calibration is not perfect; the calibrated scores should be considered an estimated probability rather than quantitatively exact.
## Anomaly score insights {: #anomaly-score-insights }
!!! note
This insight is not available for time series projects.
DataRobot anomaly detection models automatically provide an anomaly score for all rows, helping you to identify unusual patterns that do not conform to expected behavior. A display available from the **Insights** tab lists up to the top 100 rows with the highest anomaly scores, with a maximum of 1000 columns and 200 characters per column. There is an **Export** button on the table display that allows you to download a CSV of the complete listing of anomaly scores. Alternatively, you can [compute predictions](predict) from the **Make Predictions** tab and download the results. The anomaly score is shown in the `Prediction` column of your results.
For a summary of anomaly results, click **Anomaly Detection** on the **Insights** tab:

DataRobot displays a table sorted on the anomaly scores (the score from making a prediction with the model). Each row of the table represents a row in the original dataset. From this table, you can identify rows in your original data by searching or you can download the model's predictions (which will have the row ID appended).
The number of rows presented is dependent on the [`expected_outlier_fraction`](#outlier-thresholds) parameter, with a maximum display of 100 rows (1000 columns and 200 characters per column). That is, the display includes the smaller of `(expected_outlier_fraction * number of rows)` and 100. You can download the entire anomaly table for all rows used to train the model by clicking the **Export** button.
To view insights for another anomaly model, click the pulldown in the model name bar and select a new model.
## Time series anomaly detection {: #time-series-anomaly-detection }
DataRobot’s time series anomaly detection allows you to detect anomalies in your data. To enable the capability, do not specify a target variable at project start; instead, click to enable unsupervised mode, and DataRobot runs the time series project in unsupervised mode.
After enabling unsupervised mode and selecting a primary date/time feature, you can adjust the [feature derivation window](glossary/index#feature-derivation-window-fdw-time-aware) (FDW) as you normally would in time series modeling. Notice, however, that there is no need to specify a forecast window. This is because DataRobot detects anomalies in real time, as the data becomes anomalous.

For example, imagine using DataRobot's anomaly detection for predictive maintenance. If you had a pump with sensors reporting different components’ pressure readings, your DataRobot time series model can alert you when one of those components has a pressure reading that is abnormally high. Then, you can investigate that component and fix anything that may be broken before an ultimate pump failure.
DataRobot offers a selection of [anomaly detection blueprints](#anomaly-detection-blueprints) and also creates, and allows you to create, blended blueprints. For example, you may want to create a max blender model that produces a higher false positive rate, making it extra sensitive to anomalies.
For time series anomaly detection, DataRobot ranks Leaderboard models using a novel error metric method, [Synthetic AUC](#synthetic-auc-metric). This error metric can help determine which blueprint may be best suited for your use case. If you want to verify AUC scores, you can upload [partially labeled data](#partially-labeled-data) and create a column to specify known anomalies. DataRobot can then use that partially labeled dataset to rank the Leaderboard by AUC score. Partially labeled data is data in which you’ve taken a sample of rows in the training dataset and flagged known real-life anomalies as “1” and non-anomalies as “0”.
Anomaly scores can be calibrated to be interpreted as probabilities. This happens in-blueprint using outlier detection on the raw anomaly scores as a proxy for an anomaly label. Raw scores that are outliers amongst the scores from the training set are assumed to be anomalies for purposes of calibration. This synthetic target is used to do <a target="_blank" href="https://en.wikipedia.org/wiki/Platt_scaling">Platt scaling</a> on the raw anomaly scores. The calibrated score is interpreted as the probability that the raw score is an outlier, given the distribution of scores seen in the training set.
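A minimal sketch of that calibration idea, using scikit-learn (not DataRobot's internal code; the simple top-10% outlier rule and the `model.anomaly_score` call are placeholder assumptions):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

raw_scores = model.anomaly_score(X_train)            # hypothetical scoring call
raw_scores = np.asarray(raw_scores).reshape(-1, 1)

# Synthetic target: raw scores that are outliers among the training scores
# (here, simply the top 10%) are treated as anomalies for calibration purposes.
synthetic_label = (raw_scores.ravel() >= np.quantile(raw_scores, 0.9)).astype(int)

# Platt scaling: a logistic model fit on the raw score.
platt = LogisticRegression().fit(raw_scores, synthetic_label)

# The calibrated score estimates the probability that a raw score is an outlier.
calibrated_scores = platt.predict_proba(raw_scores)[:, 1]
```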
Deployments with time series anomaly detection work in the same way as all other time series blueprint deployments.
### Anomaly detection feature lists for time series {: #anomaly-detection-feature-lists-for-time-series }
DataRobot generates different [time series feature lists](ts-feature-lists) that are useful for point anomalies and anomaly windows detection. To provide the best performance, typically DataRobot selects the "SHAP-based Reduced Features" or "Robust z-score Only" feature list when running Autopilot.
Both "SHAP-based Reduced Features" or "Robust z-score Only" feature lists consider a selective set of features from all available derived features. Additional feature lists are available via the menu:
* "Actual Values and Rolling Statistics"
* "Actual Values Only"
* "Rolling Statistics Only"
* "Time Series Informative Features"
* "Time Series Extracted Features"
Note that if "Actual Values and Rolling Statistics" is a duplicate of "Time Series Informative Features", DataRobot only displays "Time Series Informative Features" in the menu. "Time Series Informative Features" does not include duplicate features, while "Time Series Extracted Features" contains all time series derived features.
#### Seasonality detection for feature lists {: #seasonality-detection-for-feature-lists }
There are cases where some features are periodic and/or have trend, but there are no anomalies present. Anomaly detection algorithms applied to the raw features do not take the periodicity or trend into account. They may identify false positives where the features have large amplitudes or may identify false negatives where there is an anomalous value that is small in comparison to the overall amplitude of the normal signal.
Because anomalies are inherently irregular, DataRobot prevents periodic features from being part of most default feature lists used for automated modeling in anomaly detection projects. That is, after applying seasonality detection logic to a project's numeric features, DataRobot removes those features before creating the default lists. This logic is not applied to (features are not deleted from) the Time Series Extracted Features and Time Series Informative Features lists. Specifically:
* If the feature is seasonal, the logic assumes that the actual values and rolling z-scores are also seasonal and therefore drops them.
* If the rolling window is shorter than the period for that feature, the rolling stats are assumed to be seasonal and the features are dropped.
These features are still available in the project and can be used for modeling by adding them to a user-created feature list.
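DataRobot's exact periodicity test is not documented here, but the following sketch illustrates one simple way to flag a feature as seasonal using autocorrelation at candidate lags (all periods and thresholds are illustrative assumptions):

```python
import pandas as pd

def looks_seasonal(series: pd.Series, candidate_periods=(7, 12, 24, 52), threshold=0.6):
    """Rough check: strong autocorrelation at a candidate period suggests a seasonal feature."""
    series = series.dropna()
    for period in candidate_periods:
        if len(series) > 3 * period and abs(series.autocorr(lag=period)) >= threshold:
            return True
    return False

# Hypothetical usage: exclude seasonal numeric features when building a reduced list.
# numeric_df = df.select_dtypes("number")
# reduced_list = [col for col in numeric_df.columns if not looks_seasonal(numeric_df[col])]
```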
### Partially labeled data {: #partially-labeled-data }
The following provides a quick overview of using partially labeled data. This capability is currently only available for time series projects:
1. Upload data, enable unsupervised mode, and run the unlabeled data through DataRobot’s unsupervised learning models.
2. Select a best fit model by considering [Synthetic AUC](#synthetic-auc-metric) model rankings.
3. Compare the real-life anomalies with the non-anomalies flagged by the model.
4. Take a copy of the original dataset (or any labeled piece of data) and create an "actual value" column where you label rows as 0 or 1 (true anomaly as “1”, no anomaly as “0”) based on the known real-life anomalies; see the example after these steps. This column must have a unique name (that is, it cannot already be used as a column name in the dataset).
5. Upload corrected, partially labeled, data to validate AUC and use the [Lift](lift-chart) and [ROC Curve](roc-curve) charts built for this data to evaluate results and select a model.
6. Deploy the model into production.
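For step 4, a minimal pandas sketch (file, column, and row names are placeholders):

```python
import pandas as pd

df = pd.read_csv("training_data.csv")             # a copy of the original (or any labeled slice)
known_anomaly_rows = [17, 248, 1031]              # row indices confirmed as real-life anomalies

# The label column must use a name that is not already present in the dataset.
df["known_anomaly"] = 0
df.loc[known_anomaly_rows, "known_anomaly"] = 1   # 1 = true anomaly, 0 = no anomaly

df.to_csv("partially_labeled_data.csv", index=False)
```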
## Anomaly detection blueprints {: #anomaly-detection-blueprints }
DataRobot Autopilot composes blueprints based on the unique characteristics of the data, including the dataset size. Some blueprints, because they tend to consume too many modeling resources or build slowly, are not automatically added to the Repository. If there are specific anomaly detection blueprints you would like to run but do not see listed in the Repository, try composing them in the [blueprint editor](cml-blueprint-edit). If running the resulting blueprint still consumes too many resources, DataRobot generates model errors or out-of-memory errors and displays them in the [model logs](log).
The anomaly detection algorithms that DataRobot implements are:
| Model | Description |
|-------|-------------|
| Isolation Forest | "Isolates" observations by randomly selecting a feature and randomly selecting a split value between the max and min values of the selected feature. Random partitioning produces shorter tree paths for anomalies. Good for high-dimensional data. |
| One Class SVM | Captures the shape of the dataset and is usually used for Novelty Detection. Good for high-dimensional data. |
| Local Outlier Factor (LOF) | Based on k-Nearest Neighbor, measures the local deviation of density for a given row with respect to its neighbors. Considered "local" in that the anomaly score depends on the object's isolation with respect to its surrounding neighborhood. |
| Double Median Absolute Deviation (MAD) | Uses two median values—one from the left tail (median of all points less than or equal to the median of all the data) and one from the right tail (median of all points greater than or equal to the median of all the data). It then checks if either tail median is greater than the threshold. Not practical for boolean or near-constant data; good for symmetric and asymmetric distributions. |
| Anomaly detection with Supervised Learning (XGB) | Uses the average score of the base models and labels a percentage as Anomaly and the rest as Normal. The percentage labeled as Anomaly is defined by the calibration\_outlier\_fraction parameter. Base models are Isolation Forest and Double MAD, resulting in a faster and less memory-intense experience. If the dataset contains text, there will be 2 XGBoost models in the Repository. One of the models uses singular-value decomposition from the text; the other model uses the most frequent words from the text. |
| Mahalanobis Distance | The Mahalanobis distance is a measure of the distance between a point, P, and a distribution, D. It is a multi-dimensional representation of the idea of measuring how many standard deviations away the point is from the mean of the distribution. This model requires more than one column of data. |
| *Time series*: Bollinger Band | A feature value that deviates significantly with respect to its most recent value can be an indication of anomalous behavior. Bollinger Band uses <a target="_blank" href="https://www.itl.nist.gov/div898/handbook/eda/section3/eda35h.htm">robust z-score</a> (also known as modified z-score) values as the basis for anomaly detection. A robust z-score is evaluated against the median of the samples and indicates how far a value is from the sample median (a standard z-score is similar, but is relative to the sample mean instead). Bollinger Band assigns higher anomaly scores whenever the robust z-scores exceed the specified threshold, using the median of the partition training data as the reference for computing the robust z-score. |
| *Time series*: Bollinger Band (rolling) | In contrast to Bollinger Band described above, Bollinger Band (rolling) refers to the median value of feature derivation window samples only, instead of the whole partition training data. Bollinger Band (rolling) requires use of the “Robust z-score Only” feature list for modeling, which has all the robust z-score values derived in a rolling manner. |
The blueprint(s) DataRobot selects during Autopilot depend on the size of the dataset. For example, Isolation Forest is typically selected, but for very large datasets, Autopilot builds Double MAD models. Regardless of which model DataRobot builds, all anomaly blueprints are available to run from the **Repository**.
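The Bollinger Band blueprints above are built on robust (modified) z-scores. The following sketch shows that statistic as defined in the NIST reference linked above; it is an illustration, not DataRobot's implementation:

```python
import numpy as np

def robust_z_scores(values, threshold=3.5):
    """Modified z-score: deviation from the median, scaled by the median absolute deviation (MAD)."""
    values = np.asarray(values, dtype=float)
    median = np.median(values)
    mad = np.median(np.abs(values - median))
    z = 0.6745 * (values - median) / mad      # 0.6745 scales MAD to be comparable to a standard deviation
    return z, np.abs(z) > threshold           # scores plus a simple anomaly flag

scores, flags = robust_z_scores([10, 11, 10, 12, 11, 58, 10])   # the 58 is flagged as anomalous
```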
## Sample use cases {: #sample-use-cases }
Following are some sample use cases for anomaly detection.
When the data is labeled:
> Kerry has millions of rows of credit card transactions but only a small percentage has been labeled as fraud or not-fraud. Of those that are labeled, the labels are noisy and are known to contain false positives and false negatives. She would like to assess the relationship between “anomaly” and “fraud” and then fine-tune anomaly detection models so that she can trust the predictions on the large amounts of unlabeled data. Because her company has limited resources for investigating claims, successful anomaly detection will allow them to prioritize the cases they think are most likely fraudulent.
or
> Kim works for a network security company which has huge amounts of data, much of which has been labeled. The problem is that when a malicious behavior is recognized and acted on (system entry blocked, for example), hackers change the behavior and create new forms of network intrusion. This makes it difficult to keep supervised models up-to-date so that they recognize the behavior change.
> Kim uses anomaly detection models to predict whether new data is novel—that is, different from both “normal” access and previously known “intrusion” access. Because much less data is needed to recognize a change, anomaly detection models do not have to be retrained as frequently as supervised models. Kim will use the existing labeled data to fine-tune existing anomaly detection models.
When the data is not labeled:
> Laura works for a manufacturing company that keeps machine-based data on machine status at specific points in time. With anomaly detection they hope to identify anomalous time points in their machine logs, thereby identifying necessary maintenance that could prevent a machine breakdown.
## Feature considerations {: #feature-considerations }
Consider the following when working with anomaly detection projects:
* In the case of numeric missing values, DataRobot supplies the imputed median (which, by definition, is non-anomalous).
* The higher the number of features in a dataset, the longer it takes DataRobot to detect anomalies and the more difficult it is to interpret results. If you have more than 1000 features, be aware that the anomaly score becomes difficult to interpret, making it potentially difficult to identify the root cause of anomalies.
* Because anomaly scores are normalized, DataRobot labels some rows as anomalies even if they’re not too far away from normal. For training data, the most anomalous row will have a score of 1. For some models, test data and external data can have anomaly score predictions that are greater than 1 if the row is more anomalous than other rows in the training data.
* Synthetic AUC is an approximation based on creating synthetic anomalies and inliers from the training data.
* Synthetic AUC scores are not available for blenders that contain image features.
* Feature Impact is limited to 1,000 features and is not available for blenders.
* Feature Impact for anomaly detection models trained from DataRobot blueprints is always computed using SHAP. For anomaly detection models from user blueprints, Feature Impact is computed using the permutation-based approach.
* Because time series anomaly detection is not yet optimized for pure text data anomalies, data must contain some numerical or categorical columns.
* The following methods are implemented and tunable:
| Method | Details |
|--------|-------------------|
| Isolation Forest | <ul><li>Up to 2 million rows</li><li>Dataset < 500 MB</li><li>Number of numerical + categorical + text columns > 2 </li><li>Up to 26 text columns</li></ul>|
| Double Median Absolute Deviation (MAD) | <ul><li>Any number of rows</li><li>Datasets of all sizes </li><li>Up to 26 text columns</li></ul> |
| One Class Support Vector Machine (SVM) | <ul><li>Up to 10,000 rows</li><li>Dataset < 500 MB </li><li>Number of numerical + categorical + text columns < 500</ul> |
| Local Outlier Factor (LOF) | <ul><li> Up to 500,001 rows</li><li>Dataset < 500 MB </li><li>Up to 26 text columns</li></ul>|
| Mahalanobis Distance | <ul><li>Any number of rows</li><li>Datasets of all sizes </li><li>Up to 26 text columns</li><li>At least one numerical or categorical column</ul>|
* Anomaly detection projects do not support weights or offsets, including [smart downsampling](smart-ds).
* Anomaly detection does not consider geospatial data (that is, models will build but those data types will not be present in blueprints).
Additionally, for time series projects:
* Millisecond data is the lower limit of data granularity.
* Datasets must be less than 1GB.
* Some blueprints don’t run on purely categorical data.
* Some blueprints are tied to feature lists and expect certain features (e.g., Bollinger Band rolling must be run on a feature list with robust z-score features only).
* For time series projects with periodicity:
* Because applying periodicity affects feature reduction/processing priorities, if there are too many features then seasonal features are also not included in Time Series Extracted and Time Series Informative Features lists.
Additionally, the [time series considerations](ts-consider) apply.
|
anomaly-detection
|
---
title: Composable ML overview
description: Composable ML provides a full-flexibility approach to model building, allowing you to build blueprints that best suit your needs using built-in tasks and custom Python/R code.
---
# Composable ML overview {: #composable-ml-overview }
Composable ML provides a full-flexibility approach to model building, allowing you to direct your data science and subject matter expertise to the models you build. With Composable ML, you build blueprints that best suit your needs using built-in tasks and custom Python/R code. Then, use your custom blueprint together with other DataRobot capabilities (MLOps, for example) to boost productivity.
## Get started with Composable ML {: #get-started-with-composable-ml }
The following resources are available to provide more information.
#### Read and view {: #read-and-view }
* [Demo video](https://www.datarobot.com/platform/composable-ml/){ target=_blank }
* [AutoML and Composable ML blog](https://www.datarobot.com/blog/building-ai-with-automl-and-composable-ml/){ target=_blank }
#### Quickstart {: #quickstart }
* [Quickstart](cml-quickstart) walks you through testing and learning Composable ML.
#### Code examples {: #code-examples }
* [Task templates](https://github.com/datarobot/datarobot-user-models/tree/master/task_templates){ target=_blank }
* [Drop-in environments](https://github.com/datarobot/datarobot-user-models/tree/master/public_dropin_environments){ target=_blank }
* Compose blueprints programmatically in the Blueprint Workshop:
* [Homepage](https://blueprint-workshop.datarobot.com/index.html#){ target=_blank }
* [Walkthrough](https://blueprint-workshop.datarobot.com/examples/walkthrough/Walkthrough.html){ target=_blank }
## How it works {: #how-it-works }
To compose a [blueprint](glossary/index#blueprint)—an ML pipeline that includes both preprocessing and modeling tasks—you use some or all of these four key components:

* *Task*: An ML method, for example, XGBoost or one-hot encoding, that is used to define a blueprint. There are hundreds of built-in tasks available and you can also define your own using Python or R. There are two types of tasks—**Estimator** and **Transform**—which are described in detail in the [blueprint editor](cml-blueprint-edit#how-tasks-work) documentation.
* *Environment*: A Docker container used to run a custom task.
* *Model*: A trained ML pipeline capable of scoring new data.
* *DataRobot user model (DRUM)*: A command line tool that helps to assemble, test, and run custom tasks. If you are using custom tasks, it is recommended that you install DRUM on your machine as a [Python package](cml-drum) so that you can quickly test tasks locally before uploading them into DataRobot.
## Why use Composable ML? {: #why-use-composable-ml }
Some of the key benefits of bringing training code into DataRobot include:
**Flexibility**: Use any method or algorithm for modeling and preprocessing.
* Use Python and/or R to define modeling logic.
* Stitch Python and R tasks together in a single blueprint—DataRobot will handle the data conversion.
* Install any dependency and, if required, bring your own Docker container.
**Productivity**: Instant integration with built-in components helps to streamline your end-to-end flow. Once a blueprint is trained on DataRobot's infrastructure, you get instant access to the model Leaderboard, MLOps, compliance documentation, model insights, Feature Discovery, and more.
**Collaboration**: With blueprint and task re-use, organizations can experience true modeling collaboration:
* Experts can build custom tasks and blueprints; users across the organization can easily re-use those creations in a few clicks, without needing to read the code.
* Citizen data scientists can share models with data science experts, who can then further experiment and enhance them.
## Use cases {: #use-cases }
Some things to try:
* Experiment with preprocessing and estimators to incorporate business and data science knowledge.
* Remove certain preprocessing steps to comply with regulatory/compliance requirements.
* Train and deploy models using domain-specific data: IP, chemical formulas, etc.
* Create a library of state-of-the-art algorithms for specific use cases to easily leverage it across the organization (data scientists build custom ML algorithms and share them with business analysts, who can then use them without coding).
* Compare your existing ML models to DataRobot's AutoML to find a better model or perhaps learn ways to improve your own model.
|
cml-overview
|
---
title: Custom environments
description: Describes how to build a custom environment when a custom task requires something not contained in one of DataRobot's built-in environments.
---
# Custom environments {: #custom-environments }
Once uploaded into DataRobot, [custom tasks](cml-custom-tasks) run inside of environments—Docker containers running in Kubernetes. In other words, DataRobot copies the uploaded files defining the custom task into the container image.
In most cases, adding a custom environment is not required because there are a variety of built-in environments available in DataRobot. Python and/or R packages can be easily added to these environments by uploading a `requirements.txt` file with the task’s code. A custom environment is only required when a custom task:
* Requires additional Linux packages.
* Requires a different operating system.
* Uses a language other than Python, R, or Java.
This document describes how to build a custom environment for these cases.
## Prerequisites {: #prerequisites }
To test a custom environment locally, install both Docker Desktop and the DataRobot user model (DRUM) CLI tool on your machine, as described in the [DRUM installation documentation](cml-drum).
## Create the environment {: #create-the-environment }
Once DRUM is installed, begin your environment creation by copying one of the examples from [GitHub](https://github.com/datarobot/datarobot-user-models/tree/master/public_dropin_environments){ target=_blank }. {% include 'includes/github-sign-in.md' %} Make sure:
1. The environment code stays in a single folder.
2. You remove the `env_info.json` file.
### Add Linux packages {: #add-linux-packages }
To add Linux packages to an environment, add code at the beginning of the `dockerfile`, immediately after the `FROM datarobot…` line.
Use `dockerfile` syntax for an Ubuntu base. For example, the following tells DataRobot which base image to use and then installs the packages `foo`, `boo`, and `moo` inside the Docker image:
```
FROM datarobot/python3-dropin-env-base
RUN apt-get update --fix-missing && apt-get install -y foo boo moo
```
### Add Python/R packages {: #add-python-r-packages }
In some cases, you might want to include Python/R packages in the environment. To do so, note the following:
* List packages to install in `requirements.txt`. For R packages, do not include versions in the list.
* Do not mix Python and R packages in the same `requirements.txt` file. Instead, create multiple files and adjust `dockerfile` so DataRobot can find and use them.
See an explanation and examples of `requirements.txt` files in the [custom tasks](cml-custom-tasks) documentation.
## Test the environment locally {: #test-the-environment-locally }
The following example illustrates how to quickly test your environment using Docker tools and DRUM.
1. To test a custom task together with a custom environment, navigate to the local folder where the task content is stored.
2. Run the following, replacing placeholder names in `< >` brackets with actual names:
```
drum fit --code-dir <path_to_task_content> --docker <path_to_a_folder_with_environment_code> --input <path_to_test_data.csv> --target-type <target_type> --target <target_column_name> --verbose
```
## Use the new environment {: #use-the-new-environment}
To use the tested environment, upload it to DataRobot and apply it to a task. You can also share and download the environment you created.
### Upload to DataRobot {: #upload-to-datarobot }
Upload a custom environment into DataRobot from **Model Registry > Custom Model Workshop > Environments > Add new environment**. When you upload an environment, it is only available to you unless you [share](#share-and-download) it with other individuals.

To make changes to an existing environment, create a new [version](custom-environments#add-an-environment-version).
### Add to task {: #add-to-task }
To use the new environment with a custom task:
1. Navigate to **Model Registry > Custom Model Workshop > Tasks** and click to select an existing task.
2. Select the new environment from the **Base Environment** dropdown.

## Work with environments {: #work-with-environments }
You can view a variety of information related to each custom and built-in environment, as well as download the content. With _custom_ environments you can also share and delete the content.
!!! warning
If you delete an environment, you are removing the environment for everyone that it may have been shared with.
### View environment information {: #view-environment-information }
There is a variety of information available for each custom and built-in environment. To view it:
1. Navigate to **Model Registry > Custom Model Workshop > Environments**. The resulting list shows all environments available to your account, with summary information.
2. For more information on an individual environment, click to select:

The **Versions** tab lists a variety of version-specific information and provides a link for downloading that version's environment context file.
3. Click **Current Deployments** to see a list of all deployments in which the current environment has been used.
4. Click **Environment Info** to view information about the general environment, not including version information.
### Share and download {: #share-and-download }
You can share custom environments with anyone in your organization from the menu options on the right. These options are not available for built-in environments because all organization members already have access to them and they should not be removed.
!!! note
An environment is not available in the model registry to other users unless it was explicitly shared. That does not, however, limit users' ability to use blueprints that include tasks that use that environment. See the description of [_implicit sharing_](cml-custom-tasks#implicit-sharing) for more information.
From **Model Registry > Custom Model Workshop > Environments**, use the menu to [share and/or delete](custom-model-actions) any custom environment that you have appropriate permissions for. (Note that the link points to custom model actions, but the options are the same for custom tasks and environments.)

|
cml-custom-env
|
---
title: Composable ML
description: Documentation for Composable Machine Learning (ML), including a Quickstart, overview, and instructions for editing or creating blueprints, tasks, and environments.
---
# Composable ML {: #composable-ml }
Documentation for Composable Machine Learning (ML) includes a Quickstart, overview, and instructions for editing or creating blueprints, tasks, and environments.
Topic | Describes...
----- | ------------
[Overview](cml-overview) | How Composable ML works and provides sample use cases.
[Quickstart](cml-quickstart) | A simple example to try out Composable ML.
[Blueprint modification](cml-blueprint-edit) | DataRobot blueprints and describes how to modify them.
[Custom tasks](cml-custom-tasks) | Creating custom tasks.
[Custom environments ](cml-custom-env) | Creating custom environments.
[DRUM CLI tool](cml-drum) | How to install and use the DataRobot User Models (DRUM) CLI tool.
[Composable ML reference](cml-ref/index) | Blueprints in the AI Catalog and metadata for custom models and tasks.
[Sentiment analysis example](cml-sentiment-example) | A tip for capturing text sentiment using Composable ML.
|
index
|
---
title: Modify a blueprint
description: Describes how a blueprint works and the basics of using the blueprint editor.
---
# Modify a blueprint {: #modify-a-blueprint }
This section describes the blueprint editor. A blueprint represents the high-level end-to-end procedure for fitting the model, including any preprocessing steps, modeling, and post-processing steps. The description of the [**Describe > Blueprints**](blueprints) tab provides a detailed explanation of blueprint elements.

When you create your own blueprints, DataRobot [validates modifications](#blueprint-validation) to ensure that changes are intentional, not to enforce requirements. As such, blueprints with validation warnings are saved and can be trained, despite the warnings. While this flexibility avoids constraining you unnecessarily, be aware that a blueprint with warnings is not likely to successfully build a model.
## How a blueprint works {: #how-a-blueprint-works }
Before working with the editor, make sure you understand the kind of data processing a blueprint can handle, the components for building a pipeline, and how tasks within a pipeline work.
### Blueprint data processing abilities {: #blueprint-data-processing-abilities }
A blueprint is designed to implement training pipelines, including modeling, calibration, and model-specific preprocessing steps. Other types of data preparation are best addressed using other tools. When deciding where to implement data processing steps, consider that the following aspects apply to all blueprints:
* Input data is limited to a single post-[EDA2](eda-explained#eda2) dataset. No joins can be defined inside a blueprint. All joins should be accomplished prior to EDA2 (using, for example, [Spark SQL](spark), [Feature Discovery](fd-overview), code, or [Data Prep](companion-tools/index)).
* Output data is limited to predictions for the project’s target, as well as information about those predictions ([Prediction Explanations](pred-explain/index)).
* Post-processing that produces output in a different format should be defined outside of the blueprint.
* No filtering or aggregation is allowed inside a blueprint but can be accomplished with [Spark SQL](spark), [Feature Discovery](fd-overview), code, or [Data Prep](companion-tools/index).
* When scoring new data, a single prediction can only depend on a single row of input data.
* The number of input and output rows must match.
### Blueprint task types {: #blueprint-task-types }
DataRobot supports two types of tasks—estimator and transform.
* _Estimator tasks_ predict new value(s) (`y`) by using the input data (`x`). The final task in any blueprint must be an estimator. During scoring, the estimator's output must always align with the target format. For example, for multiclass blueprints, an estimator must return a probability for each class for each row.
Examples of estimator tasks are `LogisticRegression`, `LightGBM regressor`, and `Calibrate`.
* _Transform tasks_ transform the input data (`x`) in some way. The output is always a dataframe but, unlike an estimator's output, it can contain any number of columns and any data types.
Examples of transforms are `One-hot encoding`, `Matrix n-gram`, and more.
Both estimator and transform tasks have a `fit()` method that is used for training and learning data characteristics. For example, a binning task requires `fit()` to define the bins based on training data, and then applies those bins to all future incoming data. While both task types use the `fit()` method, estimators use a `score()` hook while transform tasks use a `transform()` hook. See the [descriptions of these hooks](cml-custom-tasks#define-task-code) when creating custom tasks for more information.
Transform and estimator tasks can each be used as intermediate steps inside a blueprint. For example, `Auto-Tuned N-Gram` is an estimator, providing the next task with predictions as input.
### How data passes through a blueprint {: #how-data-passes-through-a-blueprint }
Data is passed through a blueprint sequentially, task by task, left to right. When data is passed to a transform, DataRobot:
1. Fits it on the received data.
2. Uses the trained transform to transform the same data.
3. Passes the result to the next task.
Once passed to an estimator, DataRobot:
1. Fits it on the received data.
2. Uses the trained estimator to predict on the same data.
3. Passes the predictions to the next task. To reduce overfitting, DataRobot passes [stacked predictions](data-partitioning#what-are-stacked-predictions) when the estimator is not the final step in a blueprint.
When the trained blueprint is used to make predictions, data is passed through the same set of steps (with the difference that the `fit()` method is skipped).
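As a simplified illustration of this flow (a scikit-learn sketch only; `X_train`, `y_train`, and `X_new` are placeholders, and DataRobot's stacked predictions and other internals are omitted):

```python
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import OneHotEncoder

# Transform task: fit on the received data, then transform that same data, then pass it on.
encoder = OneHotEncoder(handle_unknown="ignore")
X_encoded = encoder.fit_transform(X_train)

# Estimator task: fit on the received data, then predict on that same data.
# (If this estimator were not the final step, DataRobot would pass stacked predictions instead.)
estimator = LogisticRegression()
estimator.fit(X_encoded, y_train)
train_preds = estimator.predict_proba(X_encoded)

# At prediction time, the same chain runs with fit() skipped:
new_preds = estimator.predict_proba(encoder.transform(X_new))
```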
## Access the blueprint editor {: #access-the-blueprint-editor }
You can access the blueprint editor from the Leaderboard, the Repository, and the AI Catalog.
From the Leaderboard, select a model to use as the basis for further exploration and click to expand (which opens the **Describe > Blueprint** tab). From the Repository, select and expand a model from the library of modeling blueprints available for a selected project. From the AI Catalog, select the **Blueprints** tab to list an inventory of user blueprints.
Whichever method you use, once the blueprint diagram is open, choose **Copy and Edit** to open the blueprint editor, which makes a copy of the blueprint.

When you then make modifications, they are made to a copy and the original is left intact (either on the Leaderboard or the Repository, depending on where you opened it from). Click and drag to move the blueprint around on the canvas.
??? note "Why is the editable blueprint different from the original?"
When a blueprint is generated, it can contain branches for data types that are not present in the current project. Unused branches are pruned (ignored) in that case. These branches _are_ included in the copied blueprint, as they were part of the original (before the pruning) and they may be needed for future projects. For that reason, they are available and visible inside the blueprint editor.
When you have finished editing the blueprint:
* Click **+Add to AI Catalog** if you want to save it to the [AI Catalog](cml-catalog) for further editing, use in other projects, and sharing.
* Click **Train** to run the blueprint and add the resulting model to the Leaderboard.
## Use the blueprint editor {: #use-the-blueprint-editor }
A blueprint is composed of nodes and connectors:
* A [*node*](#work-with-nodes) is the pipeline step—it takes in data, performs an operation, and outputs the data in its new form. *Tasks* are the elements that complete those actions. From the editor, you can add, remove, and modify tasks and task hyperparameters.
* A [*connector*](#work-with-connectors) is a representation of the flow of data. From the editor, you can add or remove task connectors.
### Work with nodes {: #work-with-nodes }

The following table describes actions you can take on a node.
|Action | Description | How to |
|------ | ----------- | ------ |
| Modify a node | Change characteristics of the task contained in the node. | Hover over a node and click the associated pencil () icon. [Edit the task](#modify-a-node) or parameters as needed. |
| Add a node | Add a node to the blueprint. | Hover over the node that will serve as the new node's input and click the plus sign (). This creates a new branch with an empty node. Use the accompanying **Select a task** window to [configure the task](#modify-a-node). |
| Connect nodes | Connect tasks to direct the data flow. | Hover over the starting point node, drag the diagonal arrow () icon to the end point node, and click. |
| Remove a node | Remove a node and its associated task from the blueprint, as well as downstream nodes. | Hover over a node and click the associated trash can () icon. If you remove a node, its entire branch is removed (all downstream nodes). Click the undo () icon on the top left to restore the removed nodes. |
!!! note
If an action isn't applicable to a node, the icon for the action is not available.
Click the following buttons to undo and redo edits:
| | Action | Description |
|---|---|---|
|  | Undo | Cancels the most recent action and resets the blueprint graph to its prior state. |
|  | Redo | Restores the most recent action that was undone using the **Undo** action. |
### Work with connectors {: #work-with-connectors }

The following table describes actions you can take on a connector.
| Action | Description | How to |
|------|----------- |------|
| Add a node | Add a node to the blueprint. | Hover over the connector and click the plus sign () to create an empty node. Use the accompanying **Select a task** window to [configure the task](#modify-a-node). |
| Remove a connector | Disconnect two nodes. | Hover over a connector and click the resulting trash can () icon. If the icon does not appear, the connector cannot be deleted because its removal will make the blueprint invalid. |
!!! note
If an action isn't applicable to a connector, the icon for the action is not available.
### Modify a node {: #modify-a-node }
Use these steps to change an existing node or to add hyperparameters to a node newly added to the blueprint.
1. On the node to be changed, hover to display the task requirements and the available actions.

2. Click the pencil icon to open the task window. DataRobot presents a list of all parameters that define the task.

The following table describes the actions available from the task window:
| |Element | Click to...|
|---|----|---|
|| Open documentation link | Open the model documentation to read a description of the task and its parameters.|
|| [Task selector](#use-the-task-selector) | Choose an alternative task. Click through the options to select or search for a specific task. To learn about a task, use the **Open documentation** link. |
|| Recommended values | Reset all parameter values to the safe defaults recommended by DataRobot.|
|| Value entry | [Change a parameter](adv-tuning#set-a-parameter) value. When you select a parameter, a dropdown displays acceptable values. Click outside the box to set the new value. |
#### Use the task selector {: #use-the-task-selector }
Click the task name to expand the task finder. Either enter text into the search field or expand the task types to see options listed. If you previously created [custom tasks](cml-custom-tasks), they also are available in the list. You can also [create a task from this modal](#launch-custom-task-creation-workflow) before proceeding.

When you click to select a new task, the blueprint editor loads that task's parameters for editing (if desired). When you are finished, click **Update**. DataRobot replaces the task in the blueprint.

!!! note "Tasks that generate word clouds"
If you create a blueprint that generates a word cloud in both prediction and non-prediction tasks, DataRobot uses the non-prediction vertex to generate the cloud, since that type is always populated with text inputs. For example, in the blueprint below, both the Auto-Tuned Text Model and the Elastic-Net Regressor include a word cloud. In this case, DataRobot defaults to the Auto-Tuned Text Model's non-prediction word cloud.

#### Launch custom task creation workflow {: #launch-custom-task-creation-workflow }
You can access the custom task creation workflow by clicking the **add a custom task** link at the top of the task selector modal. The **Add Custom Task** modal opens in a new browser tab, initiating the [task creation workflow](cml-quickstart#create-a-custom-task).

Once the environment is set and the code is uploaded, close the tab. From the **Select a task** modal, click **Refresh** to make the new task available. You can find it either by expanding the **Custom** dropdown or searching:

### Add a data type {: #add-a-data-type }
You can change input data types available in a blueprint. Click on the **Data** node—the editor highlights the node and the included data types. Click the pencil icon to select or remove data types:

### Pass selected columns into a task {: #pass-selected-columns-into-a-task }
To pass a single column or a group of columns into a task, use the _Multiple Column Selector_ task. This task selects specific features in a dataset so that downstream transformations are applied only to a subset of columns. To use this task, add it directly after a data type (for example, directly after “Categorical Variables”), then use the task’s parameters to specify which features should or should not be passed to the next task.

To configure the task, use the **column_names** parameter to specify columns that should or should not be passed to the next task. Use the **method** parameter to specify whether those columns should be included or excluded from the input into the next task. Note that if you need to pass all columns of a certain type to a task, you don't need the _Multiple Column Selector_ (MCPICK); just connect the task to the data type node.
Click **Add** to see the new task referencing the chosen column(s).
Note that referencing specific columns in a blueprint requires that those columns be present to train the blueprint. DataRobot provides a warning reminder when editing or training a blueprint that the named columns may not be present in the current project.
## Blueprint validation {: #blueprint-validation }
DataRobot validates each node based on the incoming and outgoing edges, checking to ensure that data type, sparse vs. dense data, and shape (number of columns) requirements are met. If you have made changes that cause validation warnings, those affected nodes are displayed in yellow on the blueprint:

Hover on the node to see specifics:

In addition to checking a task's input and output, DataRobot validates that a blueprint doesn't form [cycles](https://en.wikipedia.org/wiki/Directed_acyclic_graph){ target=_blank }. If a cycle is introduced, DataRobot provides a warning, indicating which nodes are causing the issue.

## Train new models {: #train-new-models }
After changes have been made and saved for a blueprint, the option to [train a model](repository#create-a-new-model) using that blueprint becomes available. Click **Train** to open the window and then select a [feature list](feature-lists), sample size, and the number of [folds](data-partitioning#k-fold-cross-validation-cv) used in cross validation. Then, click **Train model**.
The model becomes available to the project on the model Leaderboard. If errors were encountered during model building, DataRobot provides several indicators.

You can view the errored node from the **Describe > Blueprint** tab. Click on the problematic task to see the error message or validation warning.

If a custom task fails, you can find the full error traceback in the [**Describe > Log**](log) tab.
!!! note
You can train a model even if the blueprint has warnings. To do so, click **Train with warnings**.
## More info... {: #more-info }
The following sections provide details to help ensure successful blueprint creation.
### Boosting {: #boosting }
Boosting is a technique that can improve accuracy by training a model using predictions of another model. It uses multiple estimators together, which in turn either use data in multiple forms or help calibrate predictions.

A boosting pipeline has two key components:
* _Booster task_: A node that boosts the predictions (_Text fit on Residuals (L2/Binomial Deviance)_ in the example above). The list of built-in booster tasks available can be found in the task selector under **Models > Regression > Boosted Regression** (or similarly under the other **Models** sub-categories). For example:

* _Boosting input_: A node that supplies the predictions to boost (_eXtreme Gradient Boosted Trees Classifier with Early stopping_ in the example) and other tasks that pass extra variables to the booster (_Matrix of word-grams occurrences_).
It must meet the following criteria:
* There must be only one task that provides predictions to boost.
* There must be at least one task providing extra explanatory variables to the booster, other than the predictions (_Matrix of word-grams occurrences_ in the example).
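The following scikit-learn sketch illustrates the residual-boosting idea (placeholder data names such as `X_numeric_train` and `text_train`; it is not a reproduction of the DataRobot blueprint shown above):

```python
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import Ridge

# Boosting input: one model whose predictions will be boosted...
base = GradientBoostingRegressor().fit(X_numeric_train, y_train)
base_preds = base.predict(X_numeric_train)

# ...plus a task providing extra explanatory variables (here, word-gram counts from a text column).
vectorizer = CountVectorizer(ngram_range=(1, 2))
text_features = vectorizer.fit_transform(text_train)

# Booster task: fit a second model on the residuals, using the extra variables.
residuals = y_train - base_preds
booster = Ridge().fit(text_features, residuals)

# The final prediction is the base prediction plus the boosted correction.
final_preds = base_preds + booster.predict(text_features)
```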
|
cml-blueprint-edit
|
---
title: DRUM CLI tool
description: DataRobot Model Runner (DRUM) is a tool that allows you to work with Python, R, and Java custom models and to quickly test custom tasks.
---
{% include 'includes/drum-tool.md' %}
{% include 'includes/drum-for-ubuntu.md' %}
{% include 'includes/drum-for-mac.md' %}
### Use DRUM on Mac {: #use-drum-on-mac }
To test a task locally, run the `drum fit` command. For example, in a binary classification project:
1. Ensure that the `conda` environment **DR-custom-tasks** is activated.
2. Run the `drum fit` command (replacing placeholder folder names in `< >` brackets with actual folder names):
```
drum fit --code-dir <folder_with_task_content> --input <test_data.csv> --target-type binary --target <target_column_name> --docker <folder_with_dockerfile> --verbose
```
For example:
```
drum fit --code-dir datarobot-user-models/custom_tasks/examples/python3_sklearn_binary --input datarobot-user-models/tests/testdata/iris_binary_training.csv --target-type binary --target Species --docker datarobot-user-models/public_dropin_environments/python3_sklearn/ --verbose
```
!!! tip
To learn more, you can view available parameters by typing `drum fit --help` on the command line.
{% include 'includes/drum-for-windows.md' %}
### Use DRUM on Windows {: #use-drum-on-windows }
1. From the command line, open an Ubuntu terminal.
2. Use the following commands to activate the environment:
```
cd $HOME
source DR-custom-tasks-pyenv/bin/activate
```
3. Run the `drum fit` command in an Ubuntu terminal window (replacing placeholder folder names in `< >` brackets with actual folder names):
```
drum fit --code-dir <folder_with_task_content> --input <test_data.csv> --target-type binary --target <target_column_name> --docker <folder_with_dockerfile> --verbose
```
For example:
```
drum fit --code-dir datarobot-user-models/custom_tasks/examples/python3_sklearn_binary --input datarobot-user-models/tests/testdata/iris_binary_training.csv --target-type binary --target Species --docker datarobot-user-models/public_dropin_environments/python3_sklearn/ --verbose
```
|
cml-drum
|
---
title: Create custom tasks
description: Describes how to create and apply custom tasks and work with the resulting custom blueprints.
---
# Create custom tasks {: #create-custom-tasks }
!!! info "Availability information"
Custom tasks (also referred to as custom code) are not supported for Self-Managed AI Platform installations. [Custom blueprints](cml-blueprint-edit) are available for both Self-Managed AI Platform and managed AI Platform environments.
While DataRobot provides hundreds of built-in tasks, there are situations where you need preprocessing or modeling methods that are not currently supported out-of-the-box. To fill this gap, you can bring a custom task that implements a missing method, plug that task into a blueprint inside DataRobot, and then train, evaluate, and deploy that blueprint in the same way as you would for any DataRobot-generated blueprint. (You can review how the process works [here](cml-overview#how-it-works).)
The following sections describe creating and applying custom tasks and working with the resulting custom blueprints.
## Understand custom tasks {: #understand-custom-tasks }
The following helps to understand, generally, what a task is and how to use it. It then provides an overview of task content.
### Components of a custom task {: #components-of-a-custom-task }
To bring and use a task, you need to define two components—the task’s content and a container environment where the task’s content will run:
* The task content (described on this page) is code written in Python or R. To be correctly parsed by DataRobot, the code must follow certain criteria. Optionally, you can add files that will be uploaded and used together with the task’s code (for example, you might want to add a separate file with a dictionary if your custom task contains text preprocessing).
* The [container environment](cml-custom-env) is defined using a Docker file, and additional files, that will allow DataRobot to build an image where the task will run. There are a variety of built-in environments; users only need to build their own environment when they need to install Linux packages.

At a high level, the steps to define a custom task include:
1. Define and test task content locally (i.e., on your computer).
2. Optionally, create a container environment where the task will run.
3. Upload the task content and environment (if applicable) into DataRobot.
### Task types {: #task-types }
When creating a task, you must choose the one most appropriate for your project. DataRobot leverages two types of tasks—estimators and transforms—similar to sklearn. See the blueprint modification page to learn [how these tasks work](cml-blueprint-edit#blueprint-task-types).
### Use a custom task {: #use-a-custom-task }
Once a task is uploaded, you can:
* [Create and train a blueprint](cml-quickstart#apply-new-task-and-train) that contains that custom task. The blueprint will then appear on the project’s Leaderboard and can be used just like any other blueprint—in just a few clicks, you can compare it with other models, access model-agnostic insights, and deploy, monitor, and govern the resulting model.
* [Share a task](custom-model-actions) explicitly within your organization in the same way you share an environment. This can be particularly useful when you want to re-use the task in a future project. Additionally, because recipients don’t need to read and understand the task's code in order to use it, it can be applied by less technical colleagues. Custom tasks are also [implicitly](#implicit-task-sharing) shared when a project or blueprint is shared.
### Understand task content {: #understand-task-content }
To define a custom task, create a local folder containing the files listed in the table below (detailed descriptions follow the table).
!!! tip
You can find examples of these files in the [DataRobot task template repository](https://github.com/datarobot/datarobot-user-models/tree/master/task_templates){ target=_blank } on GitHub.

File | Description | Required
---- | ----------- | --------
`custom.py` or `custom.R` | The task code that DataRobot will run in training and predictions. | Yes
`model-metadata.yaml` | A file describing the task's metadata, including input/output data requirements. | Required for custom transform tasks that output non-numeric data. If not provided, a default schema is used.
`requirements.txt` | A list of Python or R packages to add to the base environment. | No
Additional files | Other files used by the task (for example, a file that defines helper functions used inside `custom.py`). | No
#### `custom.py/custom.R` {: #custompy-customr }
The `custom.py`/`custom.R` file defines a custom task. It must contain the methods (functions) that enable DataRobot to correctly run the code and integrate it with other capabilities.
#### `model-metadata.yaml` {: #model-metadatayaml }
For a custom task, you can supply a schema that can then be used to validate the task when building and training a blueprint. A schema lets you specify whether a custom task supports or outputs:
* Certain data types
* Missing values
* Sparse data
* A certain number of columns
#### `requirements.txt` {: #requirementstxt }
Use the `requirements.txt` file to pre-install Python or R packages that the custom task is using but are not a part of the base environment.
=== "Python example"
For Python, provide a list of packages with their versions (1 package per row). For example:
``` python
numpy>=1.16.0, <1.19.0
pandas==1.1.0
scikit-learn==0.23.1
lightgbm==3.0.0
gensim==3.8.3
sagemaker-scikit-learn-extension==1.1.0
```
=== "R example"
For R, provide a list of packages without versions (1 package per row). For example:
```
dplyr
stats
```
## Define task code {: #define-task-code }
To define a custom task using DataRobot’s framework, your code must meet certain criteria:
* It must have a `custom.py` or `custom.R` file.
* The `custom.py`/`custom.R` file must have methods, such as `fit()`, `score()`, or `transform()`, that define how a task is trained and how it scores new data. These are provided as interface classes or hooks. DataRobot automatically calls each one and passes the parameters based on the project and blueprint configuration. However, you have full flexibility to define the logic that runs inside each method.
View [an example on GitHub](https://github.com/datarobot/datarobot-user-models/blob/master/task_templates/1_transforms/1_python_missing_values/custom.py){ target=_blank } of a task implementing missing values imputation using a median.
!!! note
{% include 'includes/github-sign-in-plural.md' %}
The following table lists the available methods. Note that most tasks only require the `fit()` method. Classification tasks (binary or multiclass) must have `predict_proba()`, regression tasks require `predict()`, and transforms must have `transform()`. Other functions can be omitted.
Method | Purpose
------ | -------
`init()` | Load R libraries and files (R only, can be omitted for Python).
`fit()`| Train an estimator/transform task and store it in an artifact file.
`load_model()` | Load the trained estimator/transform from the artifact file.
`predict` or `predict_proba` (For hook, use `score()`) | Define the logic used by a custom estimator to generate predictions.
`transform()` | Define the logic used by a custom transform to generate transformed data.
The schema below illustrates how methods work together in a custom task. In some cases, some methods can be omitted, although `fit()` is always required during training.

The following sections describe each function, with examples.
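To show how the methods fit together, below is a minimal sketch (not a definitive implementation) of a complete `custom.py` for a median-imputation transform. It assumes pandas input, as described in the parameter tables that follow, and saves the trained object as a `.pkl` artifact so that DataRobot loads it automatically without a custom `load_model()`:

``` python
import pickle
from pathlib import Path

import pandas as pd


def fit(X: pd.DataFrame, y: pd.Series, output_dir: str, **kwargs) -> None:
    # The "trained object" is simply the per-column medians of the training data.
    medians = X.median(numeric_only=True)
    # Store it in a natively supported format so DataRobot picks it up automatically.
    with open(Path(output_dir) / "artifact.pkl", "wb") as fp:
        pickle.dump(medians, fp)


def transform(X: pd.DataFrame, transformer) -> pd.DataFrame:
    # `transformer` is the object loaded from the artifact (here, the medians Series).
    return X.fillna(transformer)
```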
### `init()` {: #init }
The `init` method allows the task to load libraries and additional files for use in other methods. It is required when using R but can typically be skipped with Python.
#### `init()` example {: #init-model }
The following provides a brief code snippet using `init()`; see a more complete example [here](https://github.com/datarobot/datarobot-user-models/blob/master/task_templates/2_estimators/5_r_binary_classification/custom.R){ target=_blank }.
=== "R example"
```
init <- function(code_dir) {
library(tidyverse)
library(caret)
library(recipes)
library(gbm)
source(file.path(code_dir, 'create_pipeline.R'))
}
```
#### `init()` input {: #init-input }
Input parameter | Description
--------------- | -----------
`code_dir` | A link to the folder where the code is stored.
#### `init()` output {: #init-output }
The `init()` method does not return anything.
### `fit()` {: #fit }
`fit()` must be implemented for any custom task.
#### `fit()` examples {: #fit-examples }
The following provides a brief code snippet using `fit()`; see a more complete example [here](https://github.com/datarobot/datarobot-user-models/blob/master/task_templates/2_estimators/4_python_binary_classification/custom.py){ target=_blank }.
=== "Python example"
The following is a Python example of `fit()` implementing Logistic Regression:
``` python
def fit(X, y, output_dir, class_order, row_weights):
estimator = LogisticRegression()
estimator.fit(X, y)
output_dir_path = Path(output_dir)
if output_dir_path.exists() and output_dir_path.is_dir():
with open("{}/artifact.pkl".format(output_dir), "wb") as fp:
pickle.dump(estimator, fp)
```
=== "R example"
The following is an example of R creating a regression model:
```
fit <- function(X, y, output_dir, class_order=NULL, row_weights=NULL){
model <- create_pipeline(X, y, 'regression')
model_path <- file.path(output_dir, 'artifact.rds')
saveRDS(model, file = model_path)
}
```
#### How `fit()` works {: #how-fit-works }
DataRobot runs `fit()` when a custom estimator/transform is being trained. It creates an artifact file (e.g., a `.pkl` file) where the trained object, such as a trained sklearn model, is stored. The trained object is loaded from the artifact and then passed as a parameter to `score()` and `transform()` when scoring data.
#### How to use `fit()` {: #how-to-use-fit}
To use `fit()`, train or compute a trained object and save it to an artifact file (e.g., `.pkl`) inside the `fit()` function. The trained object must contain the information or logic used to score new data. Some examples of trained objects:
* A fitted [sklearn estimator](https://scikit-learn.org/stable/tutorial/machine_learning_map/index.html){ target=_blank }.
* The medians of the training data, for missing value imputation. When scoring new data, they are used to replace missing values.
DataRobot automatically uses training/validation/holdout partitions based on project settings.
#### `fit()` input parameters {: #fit-input-parameters }
The `fit()` task takes the following parameters:
Input parameters | Description
--------------- | -----------
`X` | A pandas DataFrame (Python) or R data.frame (R) containing data the task receives during training.
`y` | A pandas Series (Python) or R vector/factor (R) containing the project's target data.
`output_dir` | A path to the output folder. The artifact containing the trained object must be saved to this folder. You can also save other files there; once the blueprint is trained, all files added to that folder during `fit()` are downloadable via the UI using [Artifact Download](#download-training-artifacts).
`class_order` | _Only passed for a binary classification estimator_. A list containing the names of classes. The first entry is the class that is considered negative inside DataRobot's project; the second class is the class that is considered positive.
`row_weights` | _Only passed in estimator tasks_. A list of weights passed when the project uses weights or smart downsampling.
`**kwargs` | Not currently used but maintained for future compatibility.
#### `fit()` output {: #fit-output }
Notes on `fit()` output:
* `fit()` does not return anything, but it creates an artifact containing the trained object.
* When no trained object is required (for example, a transform task implementing a log transformation), create an “artificial” artifact by storing a placeholder value, such as a number or a string, in an artifact file (see the sketch after this list). Otherwise (if `fit()` doesn't output an artifact), you must implement `load_model()`, which makes the task more complex.
* The artifact must be saved into the `output_dir` folder.
* The artifact can use any format.
* Some formats are natively supported. When `output_dir` contains exactly one artifact file in a natively supported format, DataRobot automatically picks that artifact when scoring/transforming data. This way, you do not need to write a custom `load_model` method.
* Natively supported formats include:
* Python: `.pkl`, `.pth`, `.h5`, `.joblib`
* Java: `.mojo`
* R: `.rds`
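For example, a stateless log transform has nothing to learn, but `fit()` still needs to write an artifact to `output_dir`. A minimal sketch, assuming numeric pandas input (the placeholder value is arbitrary):

``` python
import pickle
from pathlib import Path

import numpy as np
import pandas as pd


def fit(X: pd.DataFrame, y, output_dir: str, **kwargs) -> None:
    # Nothing to train: store a placeholder so that DataRobot still finds an artifact.
    with open(Path(output_dir) / "artifact.pkl", "wb") as fp:
        pickle.dump("no-op", fp)


def transform(X: pd.DataFrame, transformer) -> pd.DataFrame:
    # The loaded placeholder is ignored; the transformation itself is stateless.
    return np.log1p(X)
```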
### `load_model()` {: #load-model }
The `load_model()` method loads one or more trained objects from the artifact(s). It is only required when a trained object is stored in an artifact that uses an unsupported format or when multiple artifacts are used. `load_model()` is not required when there is a single artifact in one of the supported formats:
* Python: `.pkl`, `.pth`, `.h5`, `.joblib`
* Java: `.mojo`
* R: `.rds`
#### `load_model()` example {: #load-model-example }
The following provides a brief code snippet using `load_model()`; see a more complete example [here](https://github.com/datarobot/datarobot-user-models/blob/master/task_templates/3_pipelines/14_python3_keras_joblib/custom.py){ target=_blank }.
=== "Python example"
In the following example, replace `deserialize_artifact` with an actual function you use to parse the artifact:
``` python
def load_model(code_dir: str):
return deserialize_artifact(code_dir)
```
=== "R example"
```
load_model <- function(code_dir) {
return(deserialize_artifact(code_dir))
}
```
#### `load_model()` input {: #load-model-input }
Input parameter | Description
--------------- | -----------
`code_dir` | A link to the folder where the artifact is stored.
#### `load_model()` output {: #load-model-output }
The `load_model()` method returns a trained object (of any type).
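For instance, if `fit()` saved two separate pickle files, a `load_model()` sketch might look like the following (the artifact file names are hypothetical, not part of the DataRobot API):

``` python
import pickle
from pathlib import Path


def load_model(code_dir: str):
    # Whatever is returned here is passed to score()/transform() as the trained object,
    # so the two artifacts are returned together as a tuple.
    with open(Path(code_dir) / "imputer.pkl", "rb") as f:
        imputer = pickle.load(f)
    with open(Path(code_dir) / "estimator.pkl", "rb") as f:
        estimator = pickle.load(f)
    return imputer, estimator
```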
### `predict()` {: #predict}
The `predict()` method defines how DataRobot uses the trained object from `fit()` to score new data. DataRobot runs this method when the task is used for scoring inside a blueprint. This method is only usable for regression and anomaly tasks. Note that for R, instead use the `score()` hook outlined in the examples below.
#### `predict()` examples {: #predict-examples }
The following provides a brief code snippet using `predict()`; see a more complete example [here](https://github.com/datarobot/datarobot-user-models/blob/master/task_templates/2_estimators/1_python_regression/custom.py#L45){ target=_blank }.
=== "Python examples"
Python example for a regression or anomaly estimator:
``` python
def predict(self, data: pd.DataFrame, **kwargs):
return pd.DataFrame(data=self.estimator.predict(data), columns=["Predictions"])
```
=== "R examples"
R example for a regression or anomaly estimator:
```
score <- function(data, model, ...) {
      return(data.frame(Predictions = predict(model, newdata=data, type = "response")))
}
```
R example for a binary estimator:
```
score <- function(data, model, ...) {
scores <- predict(model, data, type = "response")
scores_df <- data.frame('c1' = scores, 'c2' = 1- scores)
names(scores_df) <- c("class1", "class2")
return(scores_df)
}
```
#### `predict()` input {: #predict-input }
Input parameter | Description
--------------- | -----------
`data` | A pandas DataFrame (Python) or R data.frame (R) containing the data the custom task will score.
`**kwargs` | Not currently used but maintained for future compatibility. (For R, use `score(data, model, …)`)
#### `predict()` output {: #predict-output }
Notes on `predict()` output:
* Returns a pandas DataFrame (or R data.frame/[tibble](https://tibble.tidyverse.org/){ target=_blank }).
* For regression or anomaly detection projects, the output must contain a single numeric column named **Predictions**.
### `predict_proba()` {: #predict-proba }
The `predict_proba()` method defines how DataRobot uses the trained object from `fit()` to score new data. This method is only usable for binary and multiclass tasks. DataRobot runs this method when the task is used for scoring inside a blueprint. Note that for R, you instead use the `score()` hook used in the examples below.
#### `predict_proba()` examples {: #predict-proba-examples }
The following provides a brief code snippet using `predict_proba()`; see a more complete example [here](https://github.com/datarobot/datarobot-user-models/blob/master/task_templates/2_estimators/4_python_binary_classification/custom.py#L40){ target=_blank }.
=== "Python examples"
Python example for a binary or multiclass estimator:
``` python
def predict_proba(self, data: pd.DataFrame, **kwargs) -> pd.DataFrame:
return pd.DataFrame(
data=self.estimator.predict_proba(data), columns=self.estimator.classes_
)
```
=== "R examples"
R example for a regression or anomaly estimator:
```
score <- function(data, model, ...) {
      return(data.frame(Predictions = predict(model, newdata=data, type = "response")))
}
```
R example for a binary estimator:
```
score <- function(data, model, ...) {
scores <- predict(model, data, type = "response")
scores_df <- data.frame('c1' = scores, 'c2' = 1- scores)
names(scores_df) <- c("class1", "class2")
return(scores_df)
}
```
#### `predict_proba()` input {: #predict-proba-input }
Input parameter | Description
--------------- | -----------
`data` | A pandas DataFrame (Python) or R data.frame (R) containing the data the custom task will score.
`**kwargs` | Not currently used but maintained for future compatibility. (For R, use `score(data, model, …)`)
#### `predict_proba()` output {: #predict-proba-output }
Notes on `predict_proba()` output:
* Returns a pandas DataFrame (or R data.frame/tibble).
* For binary or multiclass projects, output must have one column per class, with class names used as column names. Each cell must contain the probability of the respective class, and each row must sum up to 1.0.
### `transform()` {: #transform }
The `transform()` method defines the output of a custom transform and returns transformed data. Do not use this method for estimator tasks.
#### `transform()` example {: #transform-example }
The following provides a brief code snippet using `transform()`; see a more complete example [here](https://github.com/datarobot/datarobot-user-models/blob/master/task_templates/1_transforms/1_python_missing_values/custom.py){ target=_blank }.
=== "Python example"
A Python example that creates a transform and outputs to a dataframe:
``` python
def transform(X: pd.DataFrame, transformer) -> pd.DataFrame:
return transformer.transform(X)
```
=== "R example"
```
transform <- function(X, transformer, ...){
X_median <- transformer
for (i in 1:ncol(X)) {
X[is.na(X[,i]), i] <- X_median[i]
}
X
}
```
#### `transform()` input {: #transform-input }
Input parameter | Description
--------------- | -----------
`X` | A pandas DataFrame (Python) or R data.frame (R) containing data the custom task should transform.
`transformer` | A trained object loaded from the artifact (typically, a trained transformer).
`**kwargs` | Not currently used but maintained for future compatibility.
#### `transform()` output {: #transform-output }
The `transform()` method returns a pandas DataFrame or R data.frame with transformed data.
## Define task metadata {: #define-task-metadata }
To define metadata, create a `model-metadata.yaml` file and put it in the top level of the task/model directory. The file specifies additional information about a custom task and is described in detail [here](cml-validation).
## Define the task environment {: #define-the-task-environment }
There are multiple options for defining the environment where a custom task runs. You can:
* Choose from a variety of built-in environments.
* If a built-in environment is missing Python or R packages, add the missing packages by specifying them in the task's [`requirements.txt`](#requirementstxt) file. If provided, `requirements.txt` must be uploaded together with `custom.py` or `custom.R` in the task content. If the task content contains subfolders, `requirements.txt` must be placed in the top-level folder.
* You can [build your own environment](cml-custom-env) if you need to install Linux packages.
## Test the task locally {: #test-the-task-locally }
While it is not a requirement that you test the task locally before uploading it to DataRobot, it is strongly recommended. Validating functionality in advance can save significant time and debugging later.
A custom task must meet the following basic requirements to be successful:
* The task is compatible with DataRobot requirements and can be used to build a blueprint.
* The task works as intended (for example, a transform produces the output you need).
Use `drum fit` in the command line to quickly run and test your task. It will automatically validate that the task meets DataRobot requirements. To test that the task works as intended, combine `drum fit` with other popular debugging methods, such as printing output to a terminal or file.

### Prerequisites {: #prerequisites }
To test your task:
* Put the task's content into a single folder.
* [Install DRUM](cml-drum). Ensure that the Python environment where DRUM is installed is activated. Preferably, also install [Docker Desktop](https://www.docker.com/products/docker-desktop){ target=_blank }.
* Create a CSV file with test data you can use when testing a task.
* Because you will use the command line to run tests, open a terminal window.
### Test compatibility with DataRobot {: #test-compatibility-with-datarobot }
The following provides an example of using `drum fit` to test whether a task is compatible with DataRobot blueprints. To learn more about using `drum fit`, type `drum fit --help` in the command line.
For a custom task (estimator or transform), use the following basic command in your terminal. Replace placeholder names in `< >` brackets with actual paths and names. Note that the following options are available for `TARGET_TYPE`:
* For estimators: `binary`, `multiclass`, `regression`, `anomaly`
* For transforms: `transform`
```
drum fit --code-dir <folder_with_task_content> --input <test_data.csv> --target-type <TARGET_TYPE> --target <target_column_name> --docker <folder_with_dockerfile> --verbose
```
Note that the `target` parameter should be omitted when it is not used during training (for example, in the case of anomaly detection estimators or some transform tasks). In that case, a command could look like this:
```
drum fit --code-dir <folder_with_task_content> --input <test_data.csv> --target-type anomaly --docker <folder_with_dockerfile> --verbose
```
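For example, a hypothetical median-imputation transform stored in a local folder could be tested like this (the folder name, test file, and target column are placeholders; `--docker` is omitted here so the task runs in the local Python environment where DRUM is installed):
```
drum fit --code-dir ./median_imputer --input ./test_data.csv --target-type transform --target readmitted --verbose
```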
### Test task logic {: #test-task-logic }
To confirm a task works as intended, combine `drum fit` with other debugging methods, such as adding "print" statements into the task's code:
* Add `print(msg)` to one of the methods; when you run the task using `drum fit`, the message is printed in the terminal.
* Write intermediate or final results to a local file for later inspection, which can help confirm that a custom task works as expected (see the sketch below).

## Upload the task {: #upload-the-task}
Once a task's content is defined, upload it into DataRobot to use it to build and train a blueprint. Uploading a custom task into DataRobot involves three steps:
1. Create a new task in the [Model Registry](cml-quickstart#create-a-custom-task).
2. Select a container environment where the task will run.
3. [Upload the task content](cml-quickstart#upload-task-content).
Once uploaded, the custom task appears in the list of tasks available to the blueprint editor.

### Updating code {: #updating-code }
You can always upload updated code. To avoid conflicts, DataRobot creates a new version each time code is uploaded. When creating a blueprint, you can select the specific task version to use in your blueprint.

## Compose and train a blueprint {: #compose-and-train-a-blueprint }
Once a custom task is created, there are two options for composing a blueprint that uses the task:
* Compose a single-task blueprint, using only the task (estimator only) that you created.
* Create a multitask blueprint using the [blueprint editor](cml-blueprint-edit).
### Single-task blueprint {: #single-task-blueprint }
If your custom estimator task contains all the necessary training code, you can build and train a single-task blueprint. To do so, navigate to **Model Registry > Custom Model Workshop > Tasks**. Select the task and click **Train new model**:

When complete, a blueprint containing the selected task appears in the project's Leaderboard.
### Multitask blueprint {: #multitask-blueprint }
To compose a blueprint containing more than one task, use the blueprint editor. Below is a summary of the steps; see [the documentation](cml-blueprint-edit) for complete details.
1. From the project Leaderboard, **Repository**, or **Blueprints** tab in the AI Catalog, select a blueprint to use as a template for your new blueprint.
2. Navigate to the **Blueprint** view and start editing the selected blueprint.
3. Select an existing task or add a new one, then select a custom task from the dropdown of built-in and custom tasks.
4. Save and then train the new blueprint by clicking **Train**. A model containing the selected task appears in the project's Leaderboard.
## Get insights {: #get-insights }
You can use DataRobot insights to help evaluate the models that result from your custom blueprints.
### Built-in insights {: #built-in-insights }
Once a blueprint is trained, it appears in the project Leaderboard where you can easily compare accuracy with other models. [Metrics and model-agnostic insights](cml-quickstart#evaluate-and-deploy) are available just as for DataRobot models.
### Custom insights {: #custom-insights }
<!--public start-->
You can generate custom insights by [creating artifacts](#download-training-artifacts) during training. Additionally, you can generate insights using the Predictions API, just as for any other Leaderboard model.
<!--public end-->
<!--private start-->
You can generate custom insights by [creating artifacts](#download-training-artifacts) during training. Additionally, you can generate insights using the [Predictions API](https://app.datarobot.com/apidocs/entities/predictions.html#create-predictions){ target=_blank }, just as for any other Leaderboard model.
<!--private end-->
!!! tip
Custom insights are additional views that help to understand how a model works. They may come in the form of a visualization or a CSV file. For example, if you wanted to leverage [LIME's model-agnostic insights](https://cran.r-project.org/web/packages/lime/index.html){ target=_blank }, you could import that package, run it on the trained model in the `custom.py` or other helper files, and then write out the resulting model insights.
## Deploy {: #deploy }
Once a model containing a custom task is trained, it can be [deployed, monitored, and governed](mlops/index) just like any other DataRobot model.

## Download training artifacts {: #download-training-artifacts }
When training a blueprint with a custom task, DataRobot creates an artifact available for download. Any file that is put into `output_dir` inside `fit()` of a custom task becomes a part of the artifact. You can use the artifact to:
* Generate custom insights during training. For this, generate file(s) (such as image or text files) as a part of the `fit()` function. Write them to `output_dir`.
* Download a trained model (for example, as a `.pkl` file) that you can then load locally to generate additional insights or to deploy outside of DataRobot.
To download an artifact for a model, navigate to **Predict > Downloads > Artifact Download**.

You can also [download the code of any environment](cml-custom-env#share-and-download) you have access to. To download, click on an environment, select the version, and click **Download**.
## Implicit sharing {: #implicit-sharing }
A task or environment is not available in the model registry to other users unless it was explicitly shared. That does not, however, limit users' ability to use blueprints that include that task. This is known as _implicit sharing_.
For example, consider a project shared by User A and User B. If User A creates a new task, and then creates a blueprint using that task, User B can still interact with that blueprint (clone, modify, rerun, etc.) regardless of whether they have _Read_ access to any custom task within that blueprint. Because every task is associated with an environment, implicit sharing applies to environments as well. User A can also [explicitly share](custom-model-actions) just the task or environment, as needed.
Implicit sharing is a unique permission model that grants _Execute_ access to everyone in the custom task author’s organization. When a user has access to a blueprint (but not necessarily explicit access to a custom task in that blueprint), _Execute_ access allows:
* Interacting with the resulting model. For example, retraining, running Feature Impact and Feature Effects, deploying, and making batch predictions.
* Cloning and editing a blueprint from the shared project, and then saving the blueprint as their own.
* Viewing and downloading Leaderboard logs.
Capabilities that _Execute_ access does not allow include:
* Downloading the custom task artifact.
* Viewing, modifying, or deleting the custom task from the model registry.
* Using the task in another blueprint. (Instead you would clone the blueprint containing the task and edit the blueprint and/or task.)
|
cml-custom-tasks
|
---
title: Composable ML Quickstart
description: The Quickstart provides an example that walks you through testing and learning Composable ML so that you can then apply it against your own use case.
---
# Composable ML Quickstart {: #composable-ml-quickstart }
Composable ML gives you full flexibility to build a custom ML algorithm and then use it together with other built-in capabilities (like the model Leaderboard or MLOps) to streamline your end-to-end flow and improve productivity. This Quickstart provides an example that walks you through testing and learning Composable ML so that you can then apply it against your own use case.
In the following sections you will build a blueprint with a custom algorithm. Specifically you will:
1. Create a project and open the blueprint editor.
2. Replace the missing values imputation task with a built-in alternative.
3. Create a custom missing values imputation task.
4. Add the custom task into a blueprint and train.
5. Evaluate the results and deploy.
## Create a project and open the blueprint editor {: #create-a-project-and-open-the-blueprint-editor }
This example replaces a built-in Missing Values Imputed task with a custom imputation task. To begin, create a project and open the blueprint editor.
1. Start a project with the [10K Lending Club Loans](https://s3.amazonaws.com/datarobot_public_datasets/10K_Lending_Club_Loans.csv){ target=_blank } dataset. You can download it and import as a local file or provide the URL. Use the following parameters to configure a project:
* Target: **is_bad**
* Autopilot mode: full or Quick
2. When available, expand a model on the Leaderboard to open the **Describe > Blueprint** tab. Click **Copy and edit** to open the [blueprint editor](cml-blueprint-edit), where you can add, replace, or remove tasks from the blueprint (as well as save the blueprint to the AI Catalog).

## Replace a task {: #replace-a-task }
Replace the Missing Values Imputed task within the blueprint with an alternate and save the new blueprint to the AI Catalog.
1. Select **Missing Values Imputed** and then click the pencil icon to edit the task.

2. In the resulting dialog, [select an alternate](cml-blueprint-edit#use-the-task-selector) missing values imputation task.
* Click on the task name.
* Expand **Preprocessing > Numeric Preprocessing**.
* Select **Missing Values Imputed [PNI2]**.

Once selected, click **Update** to modify the task.
3. Click **Add to AI Catalog** to save it to the [AI Catalog](cml-catalog) for further editing, use in other projects, and sharing.
4. Evaluate potential issues by hovering on any highlighted task. When you have confirmed that all tasks are okay, [train the model](cml-blueprint-edit#train-new-models).

Once trained, the new model appears in the project’s Leaderboard where you can, for example, compare accuracy to other blueprints, explore insights, or deploy the model.
## Create a custom task {: #create-a-custom-task }
To use an algorithm that is not available among the built-in tasks, you can define a custom task using code. Once created, you then upload that code as a task and use it to define one or multiple blueprints. This part of the quickstart uses one of the [task templates](https://github.com/datarobot/datarobot-user-models/tree/master/task_templates) provided by DataRobot; however, you can also [create your own](cml-custom-tasks) custom task.
When you have finished writing task code (locally), making the custom task available within the DataRobot platform involves three steps:
1. Add a new custom task in the Model Registry.
2. Select the environment where the task runs.
3. Upload the task code into DataRobot.
### Add a new task {: #add-a-new-task}
To create a new task, navigate to **Model Registry > Custom Model Workshop > Tasks** and select **+ Add new task**.

1. Provide a name for the task, MVI in this example.
2. Select the [task type](cml-blueprint-edit#blueprint-task-types), either **Estimator** or **Transform**. This example creates a transform (since missing values imputation is a transformation).
??? note "If you create a custom estimator"
When creating an estimator task, select the target (project) type where the task will be used. As an estimator, it can only then be used with the identified project type.

4. Click **Add Custom Task**.
### Select the environment {: #select-the-environment}
Once the task type is created, select the container environment where the task will run. This example uses one of the environments provided by DataRobot but you can also create your own [custom environment](custom-environments#create-a-custom-environment).
Under **Transform Environment**, click **Base Environment** and select **[DataRobot] Python 3 Scikit-Learn Drop-In**.

### Upload task content {: #upload-task-content}
Once you select an environment, the option to load task content (code) becomes available. You can import it directly from your local machine or, as in this example, upload it from GitHub.
1. Click **Select remote repository**.
2. If needed:
* Register a new GitHub repository by clicking **Add new** and selecting **GitHub**.

* [Authorize the GitHub app](custom-model-repos#github-repository) and supply `https://github.com/datarobot/datarobot-user-models` as the URL.
3. In the resulting window, select the remote repository you just created and click **Select content**.
5. When prompted for the content to be pulled from the GitHub repository:
* In the **GitHub reference** field, begin typing to select **master**.
* In the **Specify path to the artifacts** field enter `task_templates/1_transforms/1_python_missing_values`.
6. Click **Pull** to create the task.
Once DataRobot processes the GitHub content, the new task version and the option to apply the task become available. The task version is also saved to **Model Registry > Custom Model Workshop > Tasks**.

## Apply new task and train {: #apply-new-task-and-train }
To apply a new task:
1. Return to the Leaderboard and select any model. Click **Copy and Edit** to modify the blueprint.

2. Select the Missing Values Imputed or Numeric Data Cleansing task and click the pencil icon to [modify it](cml-blueprint-edit#modify-a-node).
3. In the task window, click on the task name to replace it. Under **Custom**, select the task you created. Click **Update**.

4. Click **Train** (upper right) to open a window where you set training characteristics and create a new model from the Leaderboard.
!!! tip
Consider relabeling your customized blueprint so that it is easy to locate on the Leaderboard.
## Evaluate and deploy {: #evaluate-and-deploy }
Once trained, the model appears in the project Leaderboard where you can compare its accuracy with other custom and DataRobot models. The icon changes to indicate a user model. At this point, the model is treated like any other model: it provides metrics and model-agnostic insights, and it can be deployed and managed through MLOps.
Compare metrics:

View [**Feature Impact**](feature-impact):

!!! note
For models trained from customized blueprints, **Feature Impact** is always computed using the [permutation-based approach](feature-impact#shared-permutation-based-feature-impact), regardless of the project settings.
Use [**Model Comparison**](model-compare):

[Deploy the best model](deploy-model) in a few clicks.
For more information on using Composable ML, see the other available [learning resources](cml-overview#get-started-with-composable-ml).
|
cml-quickstart
|
---
title: Model insights
description: Visual AI provides several tools to help visually assess, understand, and evaluate model performance.
---
# Visual AI model insights {: #visual-ai-model-insights }
Visual AI provides several tools to help visually assess, understand, and evaluate model performance:
* [Image embeddings](#image-embeddings) allow you to view projections of images in two dimensions to see visual similarity between a subset of images and help identify outliers.
* [Activation maps](#activation-maps) highlight regions of an image according to its importance to a model's prediction.
* [Image Prediction Explanations](xemp-pe) illustrate what drives predictions, providing a quantitative indicator of the effect variables have on the predictions.
* The [Neural Network Visualizer](#neural-network-visualizer) provides a visual breakdown of each layer in the model's neural network.
**Image Embeddings** and **Activation Maps** are also available from the [**Insights**](analyze-insights) tab, allowing you to more easily compare models, for example, if you have applied [tuning](vai-tuning).
Additionally, the standard DataRobot insights ([Confusion Matrix](multiclass) (for multiclass classification), [Feature Impact](feature-impact), and [Lift Chart](lift-chart), for example) are all available.
## Image Embeddings {: #image-embeddings }
Select **Understand > Image Embeddings** to view up to 100 images from the validation set projected onto a two-dimensional plane (using a technique that preserves similarity among images). This visualization answers the questions: What does the featurizer consider to be similar? Does this match human intuition? Is the featurizer missing something obvious?
!!! tip
See the [reference material](vai-ref#image-embeddings) for more information.
In addition to presenting the actual values for an image, DataRobot calculates the predicted values and allows you to:
* Filter by these values.
* Modify the prediction threshold and filter.
The border color on an image indicates its prediction probability. All images with a probability higher than the prediction threshold have colored borders. Images with a predicted probability below the threshold don't have any border and can also be filtered out to disappear from the canvas entirely. In clustering projects, the colored border visually indicates members of a cluster.
Filters allow you to narrow the display based on predicted and actual class values. Use filters to limit the display by specific classes, actual values, predicted values, and values that fall within a prediction threshold. **Image Embeddings** filter options differ depending on the project type. The options are illustrated below with a project type code and described in the following table:

Element | Description | Project type
--------|-------------|-------------
Filter by actual (dropdown) | Displays images whose actual values belong to the selected class. All classes display by default.| BC, MC, ML
Filter by actual (slider) | Displays images whose actual values fall within a custom range. | R
Filter by predicted (dropdown) | Displays images whose predicted values belong to the selected class. Modifying the prediction threshold (not applicable to multiclass) changes the output. | BC, MC, ML, AD, C
Filter by predicted (slider) | Displays images whose predicted value falls within the selected range. | R
Prediction threshold | Helps visualize how predictions would change if you adjust the probability threshold. As the threshold moves, the predicted outcome changes and the canvas (border colors) updates. In other words, changing the threshold may change the predicted label for an image. For anomaly detection projects, use the threshold to see what becomes an anomaly as the threshold changes. | BC, ML, AD
Select image column | If the dataset has multiple image columns, displays embeddings only for those images matching the column. | ML
### Working with the canvas {: #working-with-the-canvas }
The Image Embeddings canvas displays projections of images in two dimensions to help you visualize similarities between groups of images and to identify outliers. You can use controls to get a more granular view of the images. The following controls are available:
* Use zoom controls to get access to all images:

Enlarge, reduce, or reset the space between images on the canvas so that you can more easily see details between the images or get access to an image otherwise hidden behind another image. This action can also be achieved with your mouse (CMD + scroll for Macs and SHIFT + scroll for Windows).
* To move areas of the display into focus, click and drag.
* Hover on an image to see the actual and predicted class information. Use these tooltips to compare images to see whether DataRobot is grouping images as you would expect:

* Click an image to see prediction probabilities for that image. The output is dependent on the project type. For example, compare a binary classification to a multilabel project:

The predicted values displayed in the preview are updated with any changes you make to the prediction threshold.
## Activation Maps {: #activation-maps }
With **Activation Maps**, you can see which image areas the model is using when making predictions—which parts of the images are driving the algorithm prediction decision.
An activation map can indicate whether your model is looking at the foreground or background of an image, or whether it is focusing on the right areas. For example, is the model looking only at “healthy” areas of a plant when disease is present and, because it does not use the whole leaf, classifying it as "no disease"? Is there a problem with [overfitting](glossary/index#overfitting) or [target leakage](glossary/index#target-leakage)? These maps help to determine whether the model would be more effective if it were [tuned](vai-tuning).
To use the maps, select **Understand > Activation Maps** for a model. DataRobot previews up to 100 sample images from the project's validation set.

{% include 'includes/activation-map-include.md' %}
## Neural Network Visualizer {: #neural-network-visualizer }
The **Describe > Neural Network Visualizer** tab illustrates the order of, and connections between, layers in a model's neural network. It helps you confirm that the network layers are connected in the expected order by describing the connections and the inputs and outputs for each layer in the network.

With the visualizer, you can explore the structure by:
* Clicking and dragging left and right to see all layers.
* Clicking to expand or collapse a grouped layer, displaying/hiding all layers in the group.

* Clicking **Display all layers** to load the blueprint with all layers expanded.
* For blueprints that contain multiple neural networks, a **Select graph** dropdown becomes available, allowing you to display the associated visualization for that neural network.

|
vai-insights
|
---
title: Visual AI predictions
description: There are a variety of methods for making predictions from DataRobot's image models; use Base64 encoding and sample scripts to simplify.
---
# Visual AI predictions {: #visual-ai-predictions }
There are various methods for making predictions from image models:
Method | Description | See...
------ | ----------- | -----
**UI predictions** | Use the same dataset format as the one used to create the project (upload a ZIP archive with one or more images). | [Make Predictions tab](predict)
**Model Deployment: API** <br> (real-time, small datasets)| Use the base64 format (described below). | <ul><li>[Deploy tab (UI)](deploy-model)</li><li>[Prediction API](dr-predapi)</li></ul>
**Model Deployment: Batch** <br> (large datasets)| For the API Client and HTTP Interface options, use the same dataset format as the original dataset used to create the project (i.e., upload a ZIP archive with one or more images). For the CLI Interface option use the base64 format (described below). | [Batch prediction scripts](cli-scripts)
**Portable Prediction Server** | Use the base64 format (described below). | [Portable Prediction Server](portable-pps)
## Base64 encoding format {: #base64-encoding-format }
If your training dataset consists of a ZIP archive with one or more image files, the prediction dataset needs to be converted to a different format so that it is fully contained in a single CSV file.
## Sample scripts {: #sample-scripts }
See the links below for help with visual data conversion:
- Tutorial: [Getting Predictions for Visual AI Projects via API Calls](vai-pred)
- DataRobot Python package: <a target="_blank" href="https://datarobot-public-api-client.readthedocs-hosted.com/page/reference/modeling/spec/binary_data.html#preparing-data-for-predictions">Preparing data for predictions using the DataRobot library</a>
- Script: <a target="_blank" href="https://github.com/datarobot-community/visual-ai-data-prep">Comprehensive data prep script</a>
!!! note
{% include 'includes/github-sign-in-plural.md' %}
The following shows sample usage:
`python visualai_data_prep.py pred_dataset.zip pred_dataset.csv image`
Where:
* `visualai_data_prep.py` is the comprehensive Data Prep script used for making conversions to base64 format.
* `pred_dataset.zip` is the input dataset (ZIP of images).
* `pred_dataset.csv` is the output, which can be used via prediction API.
* `image` is an image column name.
## Deep dive {: #deep-dive }
To convert a set of image files into a single CSV file, each image must be converted to <a target="_blank" href="https://en.wikipedia.org/wiki/Base64">base64 text</a>. This format allows DataRobot to embed images as a regular text column in the CSV. Encoding binary image data into base64 is a simple operation, present in all programming languages.
Here is an example in Python:
```python
import base64
import pandas as pd
from io import BytesIO
from PIL import Image
def image_to_base64(image: Image) -> str:
img_bytes = BytesIO()
image.save(img_bytes, 'jpeg', quality=90)
image_base64 = base64.b64encode(img_bytes.getvalue()).decode('utf-8')
return image_base64
# let's build a CSV with a single row that contains an image
# the same general approach works if you have multiple image rows or columns
image = Image.open('cat.jpg')
image_base64 = image_to_base64(image)
df = pd.DataFrame({'animal_image': [image_base64]})
df.to_csv('prediction_dataset.csv', index=False)
print(df)
```
!!! note
Encode a binary image file (not decoded pixel contents) to base64. This example uses `PIL.Image` to open the file, but you can base64-encode an image file directly.
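For example, to encode the file contents directly, a minimal sketch (the file name is a placeholder):

```python
import base64

with open('cat.jpg', 'rb') as f:
    image_base64 = base64.b64encode(f.read()).decode('utf-8')
```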
|
vai-predictions
|
---
title: Visual AI overview
description: Working with image features in DataRobot follows the same workflow as that of non-image projects, with DataRobot automating the preparation, selection, and training of a wide variety of deep learning models.
---
# Visual AI overview {: #visual-ai-overview }
Working with image features in DataRobot follows the same workflow as that of non-image binary and multiclass classification (what kind of plant?) and regression (best listing price?) projects. Behind the scenes, DataRobot automates the preparation, selection, and training of a wide variety of deep learning models, recommending the model that is most accurate or the fastest for deployment. Visual AI allows you to combine supported image types, either alone with a single class label or in combination with all other supported feature types in a single dataset. Once you have uploaded images, you can preview them within DataRobot.

The following sample use cases illustrate the importance of visual learning:
* _Manufacturing_: Automate the quality control process by enabling models to identify defects
* _Healthcare_: Automated disease detection and diagnosis
* _Energy_: Analyze images from drones to make energy assets safer or more efficient
* _Public safety_: Detect intruders from security cameras
* _Insurance_: Risk analysis and claims assessment
Because processing images is an intensive and data-rich process, Visual AI requires using deep learning models for decision making. These models use advanced math and millions of parameters and, without automation, require users to have expertise in neural networks only to generate "black box models," which businesses are hesitant to deploy.
## How Visual AI works {: #how-visual-ai-works }
DataRobot makes image processing possible by turning images into numbers, a process known as “featurizing." As numbers, they can be passed to subsequent modeling tasks (algorithms) so that they can be combined with other feature types (numeric, categorical, text, etc.). Visual AI uses pretrained models to turn images into numeric vectors and feed those vectors to the final modeler (e.g., XGBoost, Elastic Net, etc.) with all other features. This technology can make changes to, and extract features from, the model levels, expanding the output of the pretrained models and featurizers. By fine-tuning the model parameters, you can control the feature creation process to meet your requirements. See the [deep dive](vai-ref) for more detail.
## Workflow overview {: #workflow-overview }
The sections that follow describe the Visual AI workflow:
1. Create an [image-processing ready dataset](vai-model#prepare-the-dataset).
2. Create projects from the [**AI Catalog**](vai-model#create-projects-from-the-ai-catalog) or via local file upload.
3. [Preview images](vai-model#review-data-before-building) for potential data quality issues.
4. [Build models](model-data) using the standard DataRobot workflow.
5. Review the data [after building](vai-model#review-data-after-building).
6. [Evaluate models](vai-insights) on the Leaderboard using:
- The blueprint.
- Activation maps, which highlight the parts of the image the model used to make predictions.
- Image embeddings to see projections of images, allowing you to visually check what DataRobot considers similar and to help identify outliers.
- The Neural Network Visualizer to view each level and layer in the model's neural network.
- Standard DataRobot insights (Confusion Matrix, Feature Impact, and Lift Chart, for example).
7. [Fine-tune model parameters](vai-tuning) for higher or lower granularity or to use a different featurizer.
8. Select a model to use for [making predictions](vai-model#predictions) via **Make Predictions**, the DataRobot API, or batch predictions.
|
vai-overview
|
---
title: Tune models
description: Use the information gained from Visual AI insights to calibrate tuning, apply different augmentation strategies, change the neural net, and more.
---
# Tune models {: #tune-models }
Using the information gained from Visual AI [insights](vai-insights), you may decide to calibrate tuning to:
* apply a different [image augmentation list](ttia-lists).
* change the [neural network architecture](vai-ref).
* ensure the model is applying the right [level of granularity](#granularity) to the dataset.
These settings, and more, are accessed from the [**Advanced Tuning**](adv-tuning) tab. This page describes the settings on that page that are specific to Visual AI projects.

For a simplified tuning walkthrough, see the [tuning guide](vai-tuning-guide).
## Augmentation lists {: #augmentation-lists }
Initially, DataRobot applies the settings from [**Advanced options**](ttia) to build models and saves those settings in a list (similar to feature list) named **Initial Augmentation List**. After Autopilot completes, you can continue exploring different transformation strategies for a model by creating different settings and saving them as custom [augmentation lists](tti-augment/ttia-lists).

Select any saved augmentation list and begin tuning a model with it to compare the scores of models trained with a variety of different augmentation strategies.

## Granularity {: #granularity }
Set the preprocessing parameters to capture the correct level of granularity, adjusting whether the model pays more attention to fine details or to high-level shapes and textures. For example, high-level features are not suited to focusing on points, edges, and corners, and low-level features are not suited to detecting common objects. The settings enable or disable the levels of pattern complexity the model reacts to when processing the input image.
The higher the level, the more complex the type of pattern it can detect—simple shapes versus complex shapes. Think of low-level features such as fish scales, which are a simple part of a complex image of a fish.

Levels are cumulative—results are drawn for each selected level. So, combinations of low-level features form medium-level features, combinations of medium-level features form high-level features. Highest, High, and Medium levels are enabled (`True`) by default. The tradeoff is that the more detail you include (the more features you add to the model), the more you increase training time and the risk of overfitting.
When modifying levels, at least one level must be set to **True**. Note that the type of feature extraction for a given layer can vary depending on the network architecture chosen.
The following table briefly describes each parameter; open the documentation for more complete descriptions.

| Parameter | Description |
|--------------|--------------|
| [featurizer\_pool](#featurizer-pool-options) | The method used to aggregate across all filters in the final step of image featurization; different methods summarize in different ways, producing slightly different types of features. |
| network | The pretrained [model architecture.](vai-ref#pretrained-network-architectures) |
| use\_highest\_level\_features | Features extracted from the final layer of the network. Good for cases when the target contains “ordinary” images (e.g., animals or vehicles). |
| use\_high\_level\_features | Features extracted from one of the final convolutional layers. This layer provides high-level features, typically a combination of patterns from low- and medium- level features that form complex objects. |
| use\_medium\_level\_features | Features extracted from one of the intermediate convolution layers. Medium-level features are typically a combination of patterns from different levels of low-level features. If you think your model is underfitting and you want to include more features from the middle of the network, set this to **True**. |
| use\_low\_level\_features | Features extracted from one of the initial convolution layers. This layer provides low-level visual features and is good for problems that vary greatly from the “ordinary” (single object) image datasets.|
### Featurizer pool options {: #featurizer-pool-options }
The following briefly describes the `featurizer_pool` options. While the default `featurizer_pool` operation is `avg`, for improved accuracy test the `gem` pooling option. `gem` is the same as `avg` but with additional regularization applied (clipping extreme values).
* `avg` = GlobalAveragePooling2D
* `gem` = GeneralizedMeanPooling
* `max` = GlobalMaxPooling2D
* `gem-max` = applied concatenation operation on `gem` and `max`
* `avg-max` = applied concatenation operation on `avg` and `max`
## Image fine-tuning {: #image-fine-tuning }
The following sections describe setting the number of layers and the learning rates used by image fine-tuners.
### Trainable scope for fine-tuners {: #trainable-scope-for-fine-tuners }
Trainable scope specifies the number of layers to configure as trainable, starting from the classification/regression network layer and continuing toward the input convolutional network layer. The count of layers starts from the final layer because the lower-level information that starts from the input convolution layer, pretrained on a larger and more representative image dataset, is more generalizable across different problem types.
Determine this value based on how unique the data, or the problem the dataset represents, is. The more unique it is, the better suited the dataset is to allowing more layers to be trainable, as that improves the resulting metric score of the final trained CNN fine-tuner model. By default, all layers are trainable (as testing showed the greatest improvements for most datasets). A basic guideline while tuning this parameter is to slowly decrease the number of trainable layers until improvement stops.
DataRobot also provides a smart adaptive method called "chain-thaw" that attempts to gradually unfreeze (set a layer to be trainable) the network. It iteratively trains each layer independently until convergence, but it is a very exhaustive and time-consuming method. The following figure shows an example of layers-trained-by-iteration from the [**Training Dashboard**](training-dash) interface.

### Discriminative learning rates {: #discriminative-learning-rates }
Enabling discriminative learning rates uses the same principle as setting trainable scope: starting from the final few layers allows the model to modify more of the higher-level features and less of the lower-level features (which already generalize well across different datasets). This parameter can be paired with trainable scope to achieve a truly fine-grained learning process. The following figure shows an example learning-rate-by-layer graph from the [**Training Dashboard**](training-dash), which represents each trainable layer.

## Tune hyperparameters {: #tune-hyperparameters }
During Autopilot, Visual AI automatically adjusts the Neural Network Architecture and pooling type. As a result, Visual AI offers state-of-the-art accuracy and, at the same time, is computationally efficient. After model building completes, you can tune Visual AI hyperparameters to potentially improve accuracy further.
The following hyperparameter tuning section is divided into two segments—Deep Learning Tuning and Modeler Tuning. Each can be applied independently or together, with experimentation leading to the best results. Before beginning manual tuning, you must first run Autopilot (full or Quick).
### Deep learning tuning {: #deep-learning-tuning }
To tune models for deep learning, select the best Leaderboard blueprint and choose [Advanced Tuning](adv-tuning). The following options provide some tuning alternatives to consider:
_EfficientNetV2-S-Pruned_ is the <a target="_blank" href="https://arxiv.org/abs/2104.00298">latest incarnation</a> of Computer Vision research, developed by the Google AI team, which is delivering vision results much more efficiently than other vision transformers. The model is pre-trained with ImageNet21k (consisting of 14,197,122 images, each tagged in a single-label fashion by one of 21,841 possible classes). This larger ImageNet tuning scenario is beneficial if your image categories fall outside of the traditional ImageNet1K dataset labels.
_EfficientNetB4-Pruned_ is older than EfficientNetV2-S-Pruned (above); it is part of the first generation of EfficientNets. It is pre-trained on the smaller ImageNet1K dataset, and the B4 type is considered a large variant of the EfficientNet family of models. DataRobot strongly advises trying this network.
_ResNet50-Pruned_ was the standard in the Computer Vision field for efficient Deep Learning architecture. It is pre-trained on ImageNet1K without using any advanced image augmentation techniques (for example, AutoAugment, MixUp, CutMix, etc). This training regime and architecture, solely based on skip-connections, is an excellent next-step neural network architecture in every tuning setting.
### Modeler tuning {: #modeler-tuning }
To tune models for deep learning, select the best Leaderboard blueprint (the type is specified in the suggestions) and choose [Advanced Tuning](adv-tuning). Consider the following options:
For a _Keras blueprint_, increase the number of epochs (by 2 or 3 for example). DataRobot Keras blueprint modelers are optimized for different image scenarios, but sometimes, depending on the image dataset, training more epochs can boost performance.
For a _Keras blueprint_, change its activation function to <a target="_blank" href="https://en.wikipedia.org/wiki/Swish_function">Swish</a>. For some image datasets, Swish is better than <a target="_blank" href="https://www.kaggle.com/dansbecker/rectified-linear-units-relu-in-deep-learning">Rectified linear Unit (ReLU)</a>.
For a _Best Gradient Boosting Trees (XGBoost/LightGBM) blueprint_, decrease/increase the learning rate.
Select the best blueprint of any type and change its scaling method. Some examples to try:
* `None -> Standardization`
* `None -> Robust Standardization`
* `Standardization -> Robust Standardization`
For a _Linear ElasticNet (binary target)_, double the max iteration (for example, change `max_iter = 100`, the default, to `max_iter = 200`).
For _Regularized Logistic Regression(L2)_, increase the regularization by a factor of 2. For example, for C (the inverse of regularization strength), if `C = 4`, change it to `C = 2`.
For the best _SGD(multiclass)_, increase the `n_iter` by a factor of 2. For example, if `n_iter` is 100, change it to 200.
|
vai-tuning
|
---
title: Visual AI
description: This topic introduces the workflow and reference materials for including images as part of your DataRobot project.
---
# Visual AI {: #visual-ai }
These sections describe the workflow and reference materials for including images as part of your DataRobot project.
Topic | Describes...
----- | ------
[Workflow overview](vai-overview) | View a simplified workflow overview to aid understanding.
[Build models](vai-model) | Prepare the dataset, build models, and make predictions.
[Train-time image augmentation](tti-augment/index) | Transform images to create new images to enlarge the dataset.
[Model insights](vai-insights) | Visually assess, understand, and evaluate model performance.
[Tune models](vai-tuning) | Apply advanced tuning to calibrate models.
[Predictions](vai-predictions) | Explore options for making image predictions.
[Visual AI reference](vai-reference/index) | Learn about [technological components](vai-ref) of Visual AI and see a [tuning walkthrough](vai-tuning-guide).
See considerations for working with [Visual AI](vai-model#feature-considerations).
|
index
|
---
title: Build Visual AI models
description: Building Visual AI models, as with any DataRobot project, starts with preparing and uploading data.
---
# Build Visual AI models {: #build-visual-ai-models }
As with any DataRobot project, building Visual AI models involves preparing and uploading data:
1. [Preparing the dataset](#prepare-the-dataset), with or without additional feature types.
2. Creating projects from the [**AI Catalog**](#create-projects-from-the-ai-catalog) or via local file upload.
3. Reviewing the data [before building](#review-data-before-building).
Once you have [built models](model-data) as you would with any DataRobot project, you can:
4. Review the data [after building](#review-data-after-building).
5. [Evaluate](vai-insights) and [fine-tune](vai-tuning) models.
6. [Make predictions](#predictions).
!!! note
Train-time image augmentation is a processing step that randomly transforms existing images, augmenting the training data. You can configure augmentation both [before and after](tti-augment/ttia-introduction) model building.
See additional considerations for working with [Visual AI](#feature-considerations).
## Prepare the dataset {: #prepare-the-dataset }
When creating projects with Visual AI, you can provide data to DataRobot in a ZIP archive. There are two mechanisms for identifying image locations within the archive:
1. Using a CSV file that contains [paths to images](#paths-for-image-uploads) (works for all project types).
2. Using one folder for [each image class](#folder-based-image-datasets) and file-system folder names as image labels (works for a single-image feature classification dataset).
!!! note
Additionally, you can encode image data and provide the encoded strings as a column in the CSV dataset. Use base64 format to encode images before registering the data in DataRobot. (Any other encoding format or encoding error will result in model errors.) See [this tutorial](vai-pred) for access to a script for converting images and for information on how to make predictions on Visual AI projects with API calls.
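For reference, the base64 encoding itself requires only the Python standard library. The sketch below is a minimal illustration; the file names, column names, and use of pandas are hypothetical, and the conversion script linked in the tutorial above remains the supported path.
```python
import base64
from pathlib import Path

import pandas as pd  # assumption: pandas is available for writing the CSV


def encode_image(path):
    """Return the base64-encoded contents of an image file as a UTF-8 string."""
    return base64.b64encode(Path(path).read_bytes()).decode("utf-8")


# Hypothetical layout: a target column plus one base64-encoded image column.
rows = [
    {"class": "hot_dog", "image": encode_image("images/hot_dog_001.jpg")},
    {"class": "not_hot_dog", "image": encode_image("images/salad_001.jpg")},
]
pd.DataFrame(rows).to_csv("encoded_dataset.csv", index=False)
```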
Before beginning, verify that images meet the [size and format](#dataset-guidelines) guidelines. Once created, you can [share and preview](#create-projects-from-the-ai-catalog) the dataset in the **AI Catalog**.
### Dataset guidelines {: #dataset-guidelines }
The following table describes image requirements:
Support | Type
------- | -----
File types | .jpeg*, .jpg*, .png, .bmp, .ppm, .gif, .mpo, and .tiff/.tif
Bit support | 8-bit, 16-bit**
Pixel size | <ul><li>Images up to 2160x2160 pixels are accepted and are downsized to 224x224 pixels.</li><li>Images smaller than 224x224 are upsampled using <a target="_blank" href="https://pillow.readthedocs.io/en/stable/handbook/concepts.html#PIL.Image.LANCZOS">Lanczos resampling</a>.</li></ul>
Additionally:
* Visual AI class limit is the same as non-Visual AI ([1000 classes](multiclass#unlimited-multiclass)).
* Image subfolders must not be zipped (that is, no nested archives in the dataset's main ZIP archive).
* Any image paths referenced in the CSV must be included in the uploaded archive—they cannot be a remote URL.
* File and folder names cannot contain whitespaces.
* Use `/` (not `\`) for file paths.
??? note "* JPEG and lossy compression"
JPEG (or `.jpg`) image format is, by definition, a LOSSY format. The [JPEG standard](https://en.wikipedia.org/wiki/JPEG#Required_precision){ target=_blank } does not guarantee to produce bit-for-bit identical output images; it requires only that the error produced by the decoder/encoder is lower than the error specified by the standard. As a result, the same image can be decoded with slight differences, even when the same library version is used. If keeping prediction results consistent is required, use the data preparation script that is described [here](vai-predictions#sample-scripts) to convert images to base64-encoded strings and then upload them.
??? note "** How are 16-bit images handled"
DataRobot supports 16-bit images by converting the image internally to three 8-bit images (3x8-bit). Because TIFF images are processed by taking the first image, the resulting 16-bit image is essentially a greyscale image, which DataRobot then rescales. For more detail, see the [Pillow Image Module documentation](https://pillow.readthedocs.io/en/stable/reference/Image.html?highlight=convert#PIL.Image.Image.convert){ target=_blank }.
### Paths for image uploads {: #paths-for-image-uploads }
Use a CSV for any type of project (regression or classification), both for a straight class-and-image dataset and when you want to add features to your dataset. With this method, you provide images in the same directory as the CSV in one of the following ways:
* Create a single folder with all images.
* Separate images into folders.
* Include the images, outside of folders, alongside the CSV.

To set up the CSV file:
1. Create a CSV in the same directory as the images with, at a minimum, the following columns:
- Target column.
- Relative path to each image.

2. Include any additional features.
If you have multiple images for a row, you can create an individual column in the dataset for each. If your images are categorized (for example, the front, back, left, and right of a healthy tomato plant), best practice suggests creating one column for each category (one column for front images, one for back images, one for left images, and one for right). If there is not an image in each row of an added column, DataRobot treats the empty cell as a missing value.
Create a ZIP archive of the directory and drag-and-drop it into DataRobot to start a project or add it to the [**AI Catalog**](#create-projects-from-the-ai-catalog).
??? tip "Quick CSV example"
Let’s say you have data about 600 articles of clothing. For each article, you know the brand, size, and category. Additionally, you have a text description and two pictures for each (one front image and one back image).
1. Create a `ClothesDataset` folder.
2. Add all the images to `ClothesDataset`. You can put images in a single subfolder or put them into two subfolders (front images and back images).
3. Create a CSV file containing four columns: Brand, Size, Category, and Description.
4. Add two columns to the CSV file for images: Front and Back. The Front column will contain the relative path to the Front image; the Back column will contain the relative path to the Back image.
5. Create a ZIP file from the `ClothesDataset` folder.
6. Upload your ZIP file into DataRobot.
DataRobot automatically identifies and creates a six-column dataset: four columns for item Brand, Size, Category, and Description and two columns for images (Front and Back). Now you can build a model to predict the category from the item's brand, size, and description, along with the front and back pictures of the related item.
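A rough sketch of the same setup in Python follows (column names, paths, and use of pandas are hypothetical; the important part is that the image paths in the CSV are relative to the archive):
```python
import shutil

import pandas as pd  # assumption: pandas is available

# One row per article of clothing; Front/Back hold paths relative to ClothesDataset.
df = pd.DataFrame([
    {
        "Brand": "Acme", "Size": "M", "Category": "Shirt",
        "Description": "Blue cotton shirt",
        "Front": "front/shirt_001.jpg", "Back": "back/shirt_001.jpg",
    },
    # ...append the remaining rows...
])
df.to_csv("ClothesDataset/clothes.csv", index=False)

# Zip the folder contents (CSV plus image subfolders) for upload or AI Catalog registration.
shutil.make_archive("ClothesDataset", "zip", root_dir="ClothesDataset")
```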
### Folder-based image datasets {: #folder-based-image-datasets }
When adding only images, prepare your data by creating a folder for each class and putting images into the corresponding folders. For example, the classic "is it a hot dog?" classification would look like this, with a folder containing images of hot dogs and a folder of images that are not hot dogs:

Once image collection is complete, ZIP the folders into a single archive and upload the archive directly into DataRobot as a local upload or add it to the [**AI Catalog**](#create-projects-from-the-ai-catalog).
??? tip "Quick folder example"
Let's say you have 300 images: 100 images of oranges, 100 images of apples, and 100 images of grapefruit.
1. Create three folders: Orange, Apple, and Grapefruit.
2. Drop your images into the correct folders depending on the type of fruit. (Do not zip the subfolders.)
3. Create a ZIP file in the parent directory of the three folders. The ZIP file will contain the three folders and the images inside.
4. Drag and drop, or upload, your ZIP file into DataRobot.
DataRobot will automatically identify and create a three-column dataset: one for the label (Apple, Orange, Grapefruit), another for the image, and a third for the image path.
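For the folder-based layout, zipping the parent directory is a one-liner; this sketch assumes a hypothetical `Fruit` directory containing the `Apple`, `Orange`, and `Grapefruit` class folders:
```python
import shutil

# Archives the class folders (and the images inside them) without nesting any ZIPs.
shutil.make_archive("fruit_dataset", "zip", root_dir="Fruit")
```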
## Create projects from the AI Catalog {: #create-projects-from-the-ai-catalog }
It is common to access and share image archives from the [**AI Catalog**](catalog), where all tabs and catalog functionality are the same for image and non-image projects. The **AI Catalog** helps to get a sense of image features and check whether everything appears as expected before you begin model building.
To add an archive to the catalog:
1. Use the **Local File** option to upload the archive. When the dataset has finished registering, a banner indicates that publishing is complete.
2. Select the **Profile** tab to see a sample for each image class.

3. Click on a sample image to display unique and missing value statistics for the image class.

4. Click the **Preview Images** link to display 30 randomly selected images from the dataset.
5. Click **Create project** to kick off [EDA1](eda-explained) (for [materialized](glossary/index#materialized) datasets).
Next, review your data before building models.
## Review data before building {: #review-data-before-building }
After EDA1 completes, whether initiated from the **AI Catalog** or drag-and-drop, DataRobot runs data quality checks, identifies column types, and provides a preview of images for sampling. Confirm on the **Data** page that DataRobot processed dataset features as `class` and `image`:

After previewing images and data quality, as described below, you can build models using the regular workflow, identifying `class` as the target.
### Data quality checks {: #data-quality-checks }
Visual AI uses the Data Quality Assessment tool, with specific checks in place for images. After EDA1 completes, access the results from the **Data** page:

If images are missing, a dedicated section reports the percent missing as well as provides access to a log that provides more detail. "Missing" images include those with bad or unresolved paths (file names that don't exist in the archive) or an empty cell in the column expecting an image path. Click **Preview log** to open a modal showing per-image detail.
### Data page checks {: #data-page-checks }
From the **Data** page do the following to ensure that image files are in order:
1. Confirm that DataRobot has identified images as **Var Type** `image`.
2. Expand the `image` row in the data table to open the image preview, a random sample of 30 images from the dataset (the full dataset will be used for training). The preview confirms that the images were processed by DataRobot and also allows you to confirm that it is the image set you intended to use.

3. Click **View Raw Data** to open a modal displaying up to a 1MB random sample of the raw data DataRobot will be using to build models, both images and corresponding class.
## Review data after building {: #review-data-after-building }
After you have built a project using the [standard workflow](model-data), DataRobot provides additional information from the **Data** page.
Expand the `image` feature and click **Image Preview**. This visualization initially displays one sample for each class in your dataset. Click a class to display more samples for that class:

Click the **Duplicates** link to view whether DataRobot detected any duplicate images in your dataset. Duplicates are reported for:
* the same filename in more than one row of the dataset
* two images with different names but, as determined by DataRobot, exactly the same content
## Predictions {: #predictions }
Use the same prediction tools with Visual AI as with any other DataRobot project. That is, select a model and make predictions using either [Make Predictions](predict) or [Deploy](deploy-model). The requirements for the prediction dataset are the same as those for the modeling set.
Refer to the section on [image predictions](vai-predictions) for more details.
## Feature considerations {: #feature-considerations }
* For Prediction Explanations, there is a limit of 10,000 images per prediction dataset. Because DataRobot does not run EDA on prediction datasets, it estimates the number of images as `number of rows` x `number of image columns`. As a result, missing values will count toward the image limit.
* [Image Explanations](xemp-pe#prediction-explanations-for-visual-ai), or Prediction Explanations for images, are not available from a deployment (for example, Batch predictions or the Predictions API).
* There is no drift tracking for image features.
* Although Scoring Code export is not supported, you can use Portable Prediction Servers.
* Object detection is not available.
* Visual AI does not support time series; however, time-aware [OTV](ts-date-time) projects are supported.
|
vai-model
|
---
title: Exploratory Spatial Data Analysis (ESDA)
description: Location AI provides a variety of tools for conducting ESDA within the DataRobot AutoML environment.
---
# Exploratory Spatial Data Analysis (ESDA) {: #exploratory-spatial-data-analysis-esda }
DataRobot Location AI provides a variety of tools for conducting ESDA within the DataRobot AutoML environment, including geometry map visualizations, categorical/numeric thematic maps, and smart aggregation of large geospatial datasets. Location AI’s modern web mapping tools allow you to interactively visualize, explore, and aggregate target, numeric, and categorical features on a map.
## Location visualization {: #location-visualization }
Within the **Data** tab, you can visualize and explore the spatial distribution of observations by expanding location features from the list and selecting the **Geospatial Map** link. Clicking **Compute feature over map** creates a chart showing the distribution on a map.
By default, Location AI displays a **Unique map** visualization depicting individual rows in the dataset as unique geometries. You can:
* Pan the map by holding a left-click (or equivalent touch gesture) and dragging.
* Zoom in by double-clicking (or equivalent touch gesture).
* Use the zoom controls in the top-right corner of the map panel to zoom in and out.

Within the **Unique map** view, rows from the input dataset that are co-located in space are aggregated; the map legend in the top-left corner of the map panel displays a color gradient that represents counts of co-located points at a given location. Hovering over a geometry produces a pop-up displaying the count of co-located points and the coordinates of the location at that geometry. The opacity of the data can be controlled in **Visualization Settings**.
When the number or complexity of input geometries meets a certain threshold, Location AI automatically aggregates geometries into a **Kernel density map** to enhance the visualization experience and interpretability.
## Feature Over Space {: #feature-over-space }
In addition to visualizing the spatial distribution of the input geometries, Location AI also displays distributions of numeric and categorical variables on the **Geospatial Map**. Within the **Data** tab, navigate to any numeric or categorical feature, select **Geospatial Map**, and click **Calculate Feature Over Map** to create the visualization.

By default, the **Feature Over Space** visualization displays a thematic map of unique locations with feature values depicted as colors. For geometries that are co-located spatially, the average value for the co-located locations is displayed. For numeric variables, you can change the metric used for the display by selecting “min”, “max”, or “avg” from the **Aggregation** dropdown menu at the bottom-left of the map panel. For categorical variables, the mode of the co-located categories is displayed. When the number of unique geometries grows large, DataRobot automatically aggregates individual geometries to enhance the visualization.
## Kernel density map {: #kernel-density-map }
A **Kernel density map** collects multiple observations within each given kernel and displays aggregated statistics with a color gradient. For location features, the count, min, max, and average can be selected from the **Aggregation** dropdown. For numeric features, the min, max, or average is available. For categorical features, the mode is displayed. Several visualization customizations are available in **Visualization Settings**.

## Hexagon map {: #hexagon-map }
In addition to viewing kernel density and unique maps of features, you can also view hexagon map visualizations. Select **Hexagon map** from the **Visualization** dropdown at the bottom-left of the map panel. Once selected, the map visualization displays hexagon-shaped cells. For location features, the count, min, max, and average can be selected from the Aggregation dropdown. For numeric features, the min, max, or average is available. For categorical features, the mode is displayed. Use the **Visualization settings** in the bottom-right of the map panel to adjust the settings.

## Heat map {: #heat-map }
You can also view heat map visualizations for geometry and numeric features. Heat map visualization is not available for categorical features. Select **Heat map** from the **Visualization** dropdown at the bottom-left of the map panel. Use the **Visualization settings** in the bottom-right of the map panel to adjust the settings.

|
lai-esda
|
---
title: Modeling
description: When a spatial structure is present in the input dataset, Location AI’s modeling enhancements expand traditional automated feature engineering and improve model options.
---
# Modeling {: #modeling }
When a spatial structure is present in the input dataset, Location AI’s modeling enhancements expand traditional automated feature engineering and improve model options. Location AI accomplishes this using several targeted techniques including:
* Automated feature engineering of location variable geometric properties.
* Creation of derived features from spatially lagged variables.
* Feature derivation characterizing spatial hotspots, coldspots, and transitions.
## Automated location feature engineering {: #automated-location-feature-engineering }
Location AI’s ability to ingest, autorecognize, and transform geospatial data unlocks powerful capabilities for DataRobot model blueprints. For example, geometric properties associated with row-level geometries can be powerful predictors in machine learning models. Location AI unlocks this potential in geospatial data by automatically deriving features from the properties of the input geometries. DataRobot derives features for the following geometric properties:
* MultiPoints
    * Centroid
* Lines/MultiLines
    * Centroid
    * Length
    * Minimum bounding rectangle area
* Polygons/MultiPolygons
    * Centroid
    * Perimeter
    * Area
    * Minimum bounding rectangle area
As with DataRobot’s [automated derivation](feature-transforms#variable-type-transformations) of date features, automatically derived geometry features are displayed within the **Data** tab as a child feature of the parent "Location" type feature.

## Derived spatial lag features {: #derived-spatial-lag-features }
Spatially lagged features are derived to gain insight into the spatial structure of the data (i.e., spatial autocorrelation) to help inform DataRobot models of spatial dependence patterns. Access the Location AI _Spatial Neighborhood Featurizer_ by searching the Leaderboard for models that include a spatial featurizer. Expand the model and view the blueprint to access the individual tasks.
!!! note
In this public preview version, the spatial featurizer is limited in the number of rows and numeric features it can process.
Location AI implements several techniques for automatically deriving spatially lagged features from the input dataset, including:
* _Spatial Lag_: A k-nearest neighbor approach to calculate mean neighborhood values of numeric features at varying spatial lags and neighborhood sizes.
* _Spatial Kernel_: Characterizes spatial dependence structure using a spatial kernel neighborhood technique. This technique characterizes spatial dependence structure for all numeric variables using varying kernel sizes, weighting by distance.
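To give a feel for what a spatially lagged feature is, the sketch below computes a simple k-nearest-neighbor mean of a numeric feature using scikit-learn. It is a conceptual illustration only, not DataRobot's Spatial Neighborhood Featurizer, which derives multiple lags, neighborhood sizes, and distance-weighted kernels automatically.
```python
import numpy as np
from sklearn.neighbors import NearestNeighbors  # assumption: scikit-learn is available


def spatial_lag(coords, values, k=5):
    """Mean of each point's k nearest neighbors' values (excluding the point itself)."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(coords)
    _, idx = nn.kneighbors(coords)           # column 0 is the point itself
    return values[idx[:, 1:]].mean(axis=1)   # average over the k neighbors


coords = np.random.rand(100, 2)   # (n, 2) x/y or lon/lat coordinates
values = np.random.rand(100)      # a numeric feature to lag
lagged = spatial_lag(coords, values, k=5)
```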
## Derived local autocorrelation features {: #derived-local-autocorrelation-features }
In addition to capturing spatial dependence structure in neighborhood features, Location AI uses local indicators of spatial association to capture hot and cold spots of spatial similarity within the context of the entire input dataset. The Spatial Neighborhood Featurizer calculates neighborhood indicators of association for all non-target numeric variables. The derived features characterize the relative magnitude of local spatial dependence in the input dataset. Features derived in this manner can help present particularly impactful local spatial dependence structures to DataRobot models, improving model accuracy where hot spots and cold spots or abrupt transitions in feature values are present.
|
lai-model
|
---
title: Accuracy Over Space
description: Location AI insights help to discover spatial patterns in prediction errors and visualize prediction errors across data partitions on a map visualization.
---
# Accuracy Over Space {: #accuracy-over-space }
To assess model fidelity in a spatial setting, Location AI adds powerful model evaluation tools to DataRobot. Location AI insights help to discover spatial patterns in prediction errors and visualize prediction errors across data partitions on a map visualization. Location AI facilitates these insights through the **Evaluate > Accuracy Over Space** tab for an individual model.
The **Accuracy Over Space** tab provides a spatial residual mapping within an individual model. It provides similar visualizations to Location AI [ESDA](lai-esda), but allows you to explore prediction error metrics across all data partitions.

By default, **Accuracy Over Space** displays residual (prediction error) values on a unique map based on the validation partition. Co-located points are displayed using the average value of all points at that location. The visualization settings for the map can be adjusted in the same manner as Location AI ESDA visualizations. Additional settings for the tool include:
* Data Selection: Sets which data partition to visualize, either validation, cross-validation, or holdout.
* Metric Type: Sets the value to report at each location, either Residual, Actual, or Predicted.
* Aggregation: Sets the arithmetic to use for co-located locations, either Avg, Min, Max, Count, or Value.
    * Count: Reports the count of each geometry in the dataset. It reports the sum of counts for a hexagon or grid map of a geometry feature.
    * Value: When all geometries have a count of 1, displays the feature value (no aggregation).

**Accuracy Over Space** also supports different map visualizations: [Kernel density map](lai-esda#kernel-density-map), [Hexagon map](lai-esda#hexagon-map), and [Heatmap](lai-esda#heat-map).
|
lai-insights
|
---
title: Location AI
description: DataRobot Location AI adds tools and support for geospatial analysis across the entire AutoML workflow.
---
# Location AI {: #location-ai }
DataRobot Location AI adds support for geospatial analysis across the entire AutoML workflow. These tools and techniques help users improve their modeling workflows by:
* Natively ingesting common geospatial formats
* Automatically recognizing geospatial coordinates in non-spatial formats
* Allowing Exploratory Spatial Data Analysis (ESDA)
* Enhancing model blueprints with spatially-explicit modeling tasks
* Visualizing geospatial data using interactive maps in pre- and post-modeling
* Gaining insights into geospatial patterns in your models
DataRobot’s Location AI enhances the standard AutoML workflow to capture a broad range of geospatial problems.
See the associated [considerations](#feature-considerations) for important additional information.
These sections describe:
Topic | Describes...
----- | ------
[Data ingest](lai-ingest) | Work with sources of geospatial data.
[ESDA](lai-esda) | Conduct Exploratory Spatial Data Analysis (ESDA) within the DataRobot environment.
[Modeling](lai-model) | Expand traditional automated feature engineering and improve model options.
[Accuracy Over Space](lai-insights) | Assess model fidelity through visualizations.

## Feature considerations {: #feature-considerations }
Consider the following [location](#location-features), [visualization](#visualizations), and [modeling](#modeling) points when working with Location AI.
### Location features {: #location-features }
Many features of Location AI operate with a primary location feature. You can have multiple Location features in a dataset; the primary location feature is the one that is used as the basis for most visualization and modeling.
* The primary location feature is automatically set to the first Location feature in a dataset. You can change this selection by using the dropdown in the Geospatial Modeling section of the EDA page. Once Autopilot is started, the primary location feature cannot be changed.
* Location features are automatically created when a project is created, as follows:
* The geometry described in a native geospatial data source (e.g., a Shapefile or a PostGIS database) will be recognized as a Location feature.
* Columns in tables that contain Well-Known Text or Well-Known Binary (Hex) are recognized as Location.
* If there are two features in the dataset that have names containing “latitude” and “longitude” (English only, case insensitive) and [valid Location data](lai-ingest#native-geospatial-data), these will be automatically transformed into a Location feature. Only the first such pair in the dataset will be transformed.
* All of the rows in a Location column must be of the same shape type, e.g., Point, Line, or Polygon.
* You can manually create a Location feature from a pair of features with latitude and longitude data.
* These features must contain [valid Location data](lai-ingest#native-geospatial-data).
* You must add this newly created feature to the feature list for the new feature to be used in modeling. For Autopilot, this is usually Informative Features.
### Visualizations {: #visualizations }
Location AI provides several intuitive tools for Exploratory Spatial Data Analysis (ESDA) and to explore model performance insights.
* The **Unique Map** visualization, which displays every Location on a map, is typically available. When datasets are sufficiently large, the data is automatically aggregated to improve performance:
    * Datasets with more than 20,000 unique geometries (points, lines, or polygons).
    * Datasets with polygons or lines totaling more than 50,000 vertices.
* **Accuracy Over Space** is available for Regression projects only. It is not available when using Over Time Validation (OTV).
### Modeling {: #modeling }
When using Location AI, modeling blueprints are enhanced to take advantage of the important information often provided by location.
* Location AI can be used for Exploratory Spatial Data Analysis (ESDA) in multiclass projects, but Location AI models will only be available for regression, binary classification, anomaly detection, and clustering projects.
* Time series is not supported.
* The Spatial Neighborhood Featurizer will not run if there are more than 100,000 rows or more than 500 numeric columns in a dataset.
* In cases of point Location features created by transforming a latitude and longitude feature, you may wish to exclude the original latitude and longitude feature from the feature list during modeling, as these carry the same information as the new Location feature.
* Some modeling blueprints do not support Location AI, such as Gaussian Process Regressors and Eureqa models. Some blueprints may run in Autopilot or be available in the Repository that will not use the location information.
* Scoring Code export is not supported.
|
index
|
---
title: Data ingest
description: DataRobot Location AI enables tapping into existing geospatial data sources through a variety of pathways.
---
# Data ingest {: #data-ingest }
DataRobot Location AI enables tapping into existing geospatial data sources through a variety of pathways, including:
* Native geospatial files
* Spatially-enabled database table
* Auto-recognized spatial coordinates
* User transformations to location variable type
Connecting directly to geospatial data saves the time and resources required for exporting from native geospatial data formats in a Geographic Information System (GIS) or a data preparation tool. DataRobot Location AI’s ability to automatically recognize geospatial data in non-native formats also allows non-traditional Geospatial Analysts to work explicitly with spatial data.
## Native geospatial data {: #native-geospatial-data }
DataRobot Location AI supports ingest of these native geospatial data formats:
* ESRI Shapefiles
* GeoJSON
* ESRI File Geodatabase
* Well Known Text (embedded in table column)
* PostGIS Databases
Native geospatial file formats are uploaded to DataRobot in [the same way](import-to-dr) as non-geospatial formats—such as drag-and-drop, URL upload, and using the **AI Catalog**.
### ESRI Shapefiles {: #esri-shapefiles }
<a target="_blank" href="https://en.wikipedia.org/wiki/Shapefile">ESRI Shapefiles</a> are a common native geospatial format, created in the late-1990s and still in wide use today. Shapefiles are a multifile format that require, at a minimum, the `.shp`, `.shx`, and `.dbf` extensions for completion. Because of the multifile nature of the format, DataRobot Location AI accepts ZIP archived files that include these extensions and the additional `.prj` extension describing the <a target="_blank" href="https://en.wikipedia.org/wiki/Spatial_reference_system">Coordinate Reference System (CRS)</a> for the data.
### GeoJSON {: #geojson }
<a target="_blank" href="https://en.wikipedia.org/wiki/GeoJSON">GeoJSON</a> is a more recent geospatial file format, often used in web mapping applications, and was submitted as a specification by the Internet Engineering Task Force (IETF). Unlike ESRI Shapefiles, GeoJSON is a single file format that describes the <a target="_blank" href="https://en.wikipedia.org/wiki/Spatial_reference_system">Coordinate Reference System (CRS)</a> within the file itself.
### ESRI File Geodatabase {: #esri-file-geodatabase }
<a target="_blank" href="https://www.loc.gov/preservation/digital/formats/fdd/fdd000294.shtml">ESRI File Geodatabase</a> is a proprietary format that approximates a database through a nested folder structure. Location AI can read a File Geodatabase directory (with extension `.gdb`) in a ZIP archive with extension `.gdb.zip`. Location AI reads the first layer in a Geodatabase file.
### Well Known Text {: #well-known-text }
<a target="_blank" href="https://en.wikipedia.org/wiki/Well-known_text_representation_of_geometry">Well Known Text (WKT)</a> is a markup language described in the Open Geospatial Consortium’s (OGC) <a target="_blank" href="https://www.ogc.org/standards/sfa">Simple Feature Access specification</a>. WKT is a versatile representation of vector geospatial geometries and can be utilized in any of DataRobot AutoML’s existing file types as a feature describing the geometry associated with a row. See the “WKT” column in the figure below.

### PostGIS Databases {: #postgis-databases }
Configuring PostGIS ingest follows the same workflow as non-geospatial databases.
## Auto-recognition of location data {: #auto-recognition-of-location-data }
In addition to native geospatial data ingest, DataRobot Location AI can automatically detect location data within non-geospatial formats. Location AI automatically recognizes location variables when the column names contain **latitude** and **longitude** and the values are in one of these formats:
* Decimal degrees
* Degrees minutes seconds
* -46° 37′ 59.988″ and -23° 33′
* 46.63333W and 23.55S
* 46\*37′59.98"W and 23\*33′S
* W 46D 37m 59.988s and S 23D 33m
DataRobot marks geometry features created as the result of auto-recognized spatial coordinates with an icon in the **Data** page.

## User transformation to location data {: #user-transformation-to-location-data }
When spatial coordinates embedded in non-geospatial file formats are not recognized, you can still use DataRobot [variable type transform](feature-transforms#variable-type-transformations) functionality to create a location feature. To transform data into a location feature:

1. Navigate to one of the parent coordinate features and expand the feature listing; select **Var Type Transform** from the feature menu.
2. In the Numeric/Categorical Transformation dialog, select **Location** from the **Transform Numeric/Categorical to** dropdown.
3. Two additional dropdown menus appear—**Latitude** and **Longitude**. Select from the existing feature set to specify the parent coordinates.
4. Click **Create feature**.
The new feature appears after its parent feature as a new row in the **Data** table, noted with an icon indicating it is user-created.

## Location variable type {: #location-variable-type }
In addition to the traditional variable types of numeric, categorical, and date, Location AI adds a location variable type to provide explicit treatment of spatial data in DataRobot models.

The location variable type supports the 2D geometric primitives specified in the OGC Simple Feature Access specification, as well as some multipart geometries. These include:
* Point/MultiPoint
* LineString/MultiLineString
* Polygon/MultiPolygon
Location variables improve DataRobot’s ability to handle location data throughout the AutoML workflow, including model blueprints, feature importance calculations, and visualizations.
|
lai-ingest
|
---
title: Bias and Fairness reference
description: The Bias and Fairness feature calculates fairness for a machine learning model and identifies any biases from the model's predictive behavior.
---
# Bias and Fairness reference {: #bias-and-fairness-reference }
This section defines terminology that is commonly used across the feature.
### Protected Feature {: #protected-feature }
The dataset column to measure fairness of model predictions against.
That is, a model's fairness is calculated against the **protected features** from the dataset.
Also known as "protected attribute."
Examples: ``age_group``, ``gender``, ``race``, ``religion``
Only **_categorical features_** can be marked as protected features. Each categorical value of the protected feature is referred to as a **protected class** or class of the feature.
### Protected Class {: #protected-class }
One categorical value of the protected feature.
Examples: ``male`` can be a protected class (or simply a class) of the feature ``gender``;
``asian`` can be a protected class of the feature ``race``.
### Favorable Outcome {: #favorable-outcome }
A value of the target that is treated as the favorable outcome for the model.
Predictions from a binary classification model can be categorized as being a **favorable outcome** (i.e., good/preferable) or an unfavorable outcome (i.e., bad/undesirable) for the protected class.
Example: To check gender discrimination for loan approvals, the target ``is_bad`` indicates whether the loan will default or not.
In this case, the favorable outcome for the prediction is ``No`` (meaning the loan "is good") and therefore the value of ``No`` is the favorable (i.e., good) outcome for the loan applicant.
Favorable target outcome is not always the same as the assigned positive class. For example, a common lending use case involves predicting whether or not an applicant will default on their loan. The positive class could be 1 (or "will default"), whereas the favorable target outcome would be 0 (or "will not default"). The favorable target outcome refers to the outcome that the protected individual would prefer to receive.
### Fairness Score {: #fairness-score }
A numerical computation of model fairness against the protected class, based on the underlying [fairness metric](#fairness-metrics).
**Note**: A model's fairness scores cannot be compared if the model uses different fairness metrics or if the fairness scores were calculated on different prediction data.
### Fairness Threshold {: #fairness-threshold }
The fairness threshold helps measure if a model performs within appropriate fairness bounds for each protected class and does not affect the fairness score or performance of any protected class. If not specified, the threshold defaults to 0.8.
### Fairness Value {: #fairness-value }
Fairness scores normalized against the most favorable protected class (i.e., the class with the highest fairness score).
The fairness value will always be in the range ``[0.0, 1.0]``, where ``1.0`` is assigned to the most favorable protected class.
To ensure trust in the calculated fairness value for a given class of the protected feature, the tools determine if there was enough data in the sample to calculate fairness value with a high level of confidence (see **_Z score_**).
### Z Score {: #z-score }
A metric measuring whether a given class of the protected feature is "statistically significant" across the population.
Example: To measure fairness against ``gender`` in a dataset with _10,000_ rows identifying ``male`` and only _100_ rows identifying ``female``, the feature labels the ``female`` class as having insufficient data.
In this case, use a different sample of the dataset to ensure that the fairness measures calculated on it can be trusted.
## Fairness Metrics {: #fairness-metrics }
Fairness metrics are statistical measures of parity constraints used to assess fairness.
Each fairness metric result is calculated in two steps:
1. Calculating [fairness scores](#fairness-score) for each protected class of the model's protected feature.
2. Normalizing fairness scores against the highest fairness score for the protected feature.
Metrics that measure **Fairness by Error** evaluate whether the model's *error rate* is equivalent across each protected class. These metrics are best suited when you don't have control over the outcome or wish to conform to the ground truth, and simply want a model to be equally *right* between each protected group.
Metrics that measure **Fairness by Representation** evaluate whether the model's *predictions* are equivalent across each protected class. These metrics are best suited when you have control over the target outcome or are willing to depart from ground truth in order for a model's predictions to exhibit more equal *representation* between protected groups, regardless of the target distribution in the training data.
To help understand the ideal context/use case for applying a given fairness metric, this section covers hypothetical examples for each fairness metric. The examples are based on an **HR hiring** use case, where the fairness metrics evaluate a model that predicts the target ``Hired`` (``Yes`` or ``No``).
**Disclaimer**: The hypothetical examples do not reflect the views of DataRobot; they are meant solely for illustrative purposes.
#### Notation {: #notation }
- ``d``: decision of the model (i.e., ``Yes`` or ``No``)
- ``PF``: protected feature
- ``s``: predicted probability scores
- ``Y``: target variable
There are _eight_ individual fairness metrics in total. Certain metrics are best used when paired with a related fairness metric. When applicable, these are noted in the descriptions below.
### Proportional Parity {: #proportional-parity }
For each protected class, what is the probability of receiving favorable predictions from the model?
This metric is based on equal representation of the model's target across protected classes.
Also known as "statistical parity," "demographic parity," and "acceptance rate," it is used to score fairness for binary classification models. A common usage for Proportional Parity is the <a target="_blank" href="https://www.eeoc.gov/laws/guidance/questions-and-answers-clarify-and-provide-common-interpretation-uniform-guidelines">"four-fifths"</a> (i.e. 4/5ths) rule in the Uniform Guidelines on Employee Selection Procedures in context of HR hiring.
Required data:
- Protected feature (i.e., ``gender`` with values ``male`` or ``female``)
- Target with predicted decisions (i.e., ``Hired`` with values ``Yes`` or ``No``)
Formula:

Example:
A company has a pool of ``100`` applicants, where:
- ``70`` applicants are male
- ``30`` applicants are female
- ``60`` males are predicted to be hired and ``5`` females are predicted to be hired
Calculate the probability of being hired ("hire rate for each protected class").
The following are the manually calculated fairness scores:
```python
male hiring rate = (number of males hired) / (number of males) = 60/70 = 0.857 = 85.7%
female hiring rate = (number of females hired) / (number of females) = 5/30 = 0.167 = 16.7%
```
Calculate the disparate impact (the fairness value) for females as follows:
```python
disparate impact = (female hiring rate) / (male hiring rate) = 0.167/0.857 = 0.195 = 19.5%
```
Compare this relative fairness score (``19.5%``) against a fairness threshold of ``0.8`` (i.e., 4/5 = 0.8 = 80%).
The result (``19.5% < 80%``) indicates that the model does **not** satisfy the four-fifths rule and is therefore unfairly treating the females in hiring.
Example use case:
According to the 4/5ths Rule in the US for regulating Human Resources, if you're selecting candidates for a job, the selection rate must be equal between protected classes (i.e. proportional parity) within a threshold of 0.8. If you interview 80% of men and only 40% of women, that violates this law. To rectify this bias, you would need to interview at least 64% of women (the 80% selection rate for men * 0.8 fairness threshold).
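The same arithmetic generalizes to any protected feature. The sketch below (using pandas; column names are hypothetical) computes the per-class favorable-prediction rate and normalizes it against the best-treated class, mirroring the manual calculation above:
```python
import pandas as pd  # assumption: pandas is available


def proportional_parity(df, protected, prediction, favorable="Yes"):
    """Favorable-prediction rate per protected class, normalized by the highest rate."""
    rates = (df[prediction] == favorable).groupby(df[protected]).mean()
    return rates / rates.max()   # fairness values in [0.0, 1.0]


data = pd.DataFrame({
    "gender": ["male"] * 70 + ["female"] * 30,
    "Hired":  ["Yes"] * 60 + ["No"] * 10 + ["Yes"] * 5 + ["No"] * 25,
})
print(proportional_parity(data, "gender", "Hired"))
# female is roughly 0.195 and male is 1.0, so female falls below a 0.8 threshold
```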
### Equal Parity {: #equal-parity }
For each protected class, what is the total number of records with favorable predictions from the model?
This metric is based on equal representation of the model's target across protected classes.
It is used for scoring fairness for binary classification models.
Required data:
- Protected feature (i.e., ``gender`` with values ``male`` or ``female``)
- Target with predicted decisions (i.e., ``Hired`` with values ``Yes`` or ``No``)
Formula:

Example:
Using the previous example, the fairness scores for male and female predicted hirings are:
```python
males hired = 60
females hired = 5
```
Example use case:
In Europe, some countries require equal numbers of men and women on corporate boards.
### Prediction Balance {: #prediction-balance }
The set of Prediction Balance fairness metrics include favorable and unfavorable class balance, described below.
#### Favorable Class Balance {: #favorable-class-balance }
For all actuals that were favorable outcomes, what is the average predicted probability for each protected class?
This metric is based on equal representation of the model's average raw scores across each protected class and is part of the set of Prediction Balance fairness metrics.
A common usage for Favorable Class Balance is ranking hiring candidates by the model's raw scores to select higher-scoring candidates.
Required data:
- Protected feature (i.e., ``gender`` with values ``male`` or ``female``)
- Target with predicted probability scores (i.e., ``Hired`` with values in the range ``[0.0, 1.0]``)
- Target with actual outcomes (i.e., ``Hired_actual`` with values ``Yes`` or ``No``)
Formula:

Example:
A company has a pool of ``100`` applicants, where:
- ``70`` applicants are male
- ``30`` applicants are female
- ``50`` males were actually hired and ``20`` females were actually hired
- Range of predicted probability scores for males: ``[0.7, 0.9]``
- Range of predicted probability scores for females: ``[0.2, 0.4]``
Calculate the average for each protected class, based on a model's predicted probability scores, as follows:
```python
hired males average score = sum(hired male predicted probability scores) / 50 = 0.838
hired females average score = sum(hired female predicted probability scores) / 20 = 0.35
```
Example use case:
In a hiring context, you can rank candidates by probability of passing hiring manager review and use the model's raw scores to filter out lower-scoring candidates, even if the model predicts that they should be hired.
#### Unfavorable Class Balance {: #unfavorable-class-balance }
For all actuals that were unfavorable outcomes, what is the average predicted probability for each protected class?
This metric is based on equal representation of the model's average raw scores across each protected class and is part of the set of Prediction Balance fairness metrics.
A common usage for Unfavorable Class Balance is ranking hiring candidates by the model's raw scores to filter out lower-scoring candidates.
Required data:
- Protected feature (i.e., ``gender`` with values ``male`` or ``female``)
- Target with predicted probability scores (i.e., ``Hired`` with values in the range ``[0.0, 1.0]``)
- Target with actual outcomes (i.e., ``Hired_actual`` with values ``Yes`` or ``No``)
Formula:

Example:
A company has a pool of ``100`` applicants, where:
- ``70`` applicants are male
- ``30`` applicants are female
- ``20`` males were actually not hired and ``10`` females were actually not hired
- Range of predicted probability scores for males: ``[0.7, 0.9]``
- Range of predicted probability scores for females: ``[0.2, 0.4]``
Calculate the average for each protected class, based on a model's predicted probability scores, as follows:
```python
non-hired males average score = sum(non-hired male predicted probability scores) / 20 = 0.70
non-hired females average score = sum(non-hired female predicted probability scores) / 10 = 0.20
```
### True Favorable Rate Parity {: #true-favorable-rate-parity }
For each protected class, what is the probability of the model predicting the favorable outcome for all actuals of the favorable outcome?
This metric (also known as "True Positive Rate Parity") is based on equal error and is part of the set of True Favorable Rate & True Unfavorable Rate Parity fairness metrics.
Required data:
- Protected feature (i.e., ``gender`` with values ``male`` or ``female``)
- Target with predicted decisions (i.e., ``Hired`` with values ``Yes`` or ``No``)
- Target with actual outcomes (i.e., ``Hired_actual`` with values ``Yes`` or ``No``)
Formula:

Example:
A company has a pool of ``100`` applicants, where:
- ``70`` applicants are male
- ``30`` applicants are female
- ``50`` males were correctly predicted to be hired
- ``10`` males were incorrectly predicted to be _not_ hired
- ``8`` females were correctly predicted to be hired
- ``12`` females were incorrectly predicted to be _not_ hired
Calculate the True Favorable Rate for each protected class as follows:
```python
male favorable rate = TP / (TP + FN) = 50 / (50 + 10) = 0.8333
female favorable rate = TP / (TP + FN) = 8 / (8 + 12) = 0.4
```
Example use case:
In healthcare, a model can be used to predict which medication a patient should receive. You would not want to give any protected class the wrong medication just to ensure that everyone gets the same medication; instead, you want your model to give each protected class the right medication and to avoid making significantly more errors on any group.
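A quick way to reproduce this calculation for any protected feature is to restrict the data to the favorable actuals and take the per-class rate of favorable predictions. A pandas sketch (column names are hypothetical) follows:
```python
import pandas as pd  # assumption: pandas is available


def true_favorable_rate(df, protected, actual, predicted, favorable="Yes"):
    """TP / (TP + FN) per protected class: among actual favorable outcomes,
    the share the model also predicted as favorable."""
    favorable_actuals = df[df[actual] == favorable]
    hits = favorable_actuals[predicted] == favorable
    return hits.groupby(favorable_actuals[protected]).mean()

# With the counts above: male = 50 / (50 + 10) = 0.833, female = 8 / (8 + 12) = 0.4
```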
### True Unfavorable Rate Parity {: #true-unfavorable-rate-parity }
For each protected class, what is the probability of the model predicting the unfavorable outcome for all actuals of the unfavorable outcome?
This metric (also known as "True Negative Rate Parity") is based on equal error and is part of the set of True Favorable Rate & True Unfavorable Rate Parity fairness metrics.
Required data:
- Protected feature (i.e., ``gender`` with values ``male`` or ``female``)
- Target with predicted decisions (i.e., ``Hired`` with values ``Yes`` or ``No``)
- Target with actual outcomes (i.e., ``Hired_actual`` with values ``Yes`` or ``No``)
Formula:

Example:
A company has a pool of ``100`` applicants, where:
- ``70`` applicants are male
- ``30`` applicants are female
- ``5`` males were correctly predicted to be _not_ hired
- ``5`` males were incorrectly predicted to be hired
- ``8`` females were correctly predicted to be _not_ hired
- ``2`` females were incorrectly predicted to be hired
Calculate the True Unfavorable Rate for each protected class as follows:
```python
male unfavorable rate = TN / (TN + FP) = 5 / (5 + 5) = 0.5
female unfavorable rate = TN / (TN + FP) = 8 / (8 + 2) = 0.8
```
### Favorable Predictive Value Parity {: #favorable-predictive-value-parity }
For each protected class, when the model predicts the favorable outcome, what is the probability that it is correct (i.e., that the actual outcome is favorable)?
This metric (also known as "Positive Predictive Value Parity") is based on equal error and is part of the set of Favorable Predictive & Unfavorable Predictive Value Parity fairness metrics.
Required data:
- Protected feature (i.e., ``gender`` with values ``male`` or ``female``)
- Target with predicted decisions (i.e., ``Hired`` with values ``Yes`` or ``No``)
- Target with actual outcomes (i.e., ``Hired_actual`` with values ``Yes`` or ``No``)
Formula:

Example:
A company has a pool of ``100`` applicants, where:
- ``70`` applicants are male
- ``30`` applicants are female
- ``50`` males were correctly predicted to be hired
- ``5`` males were incorrectly predicted to be hired
- ``8`` females were correctly predicted to be hired
- ``2`` females were incorrectly predicted to be hired
Calculate the Favorable Predictive Value Parity for each protected class as follows:
```python
male favorable predictive value = TP / (TP + FP) = 50 / (50 + 5) = 0.9091
female favorable predictive value = TP / (TP + FP) = 8 / (8 + 2) = 0.8
```
Example use case:
Insurance companies consider it ethical to charge men more than women, as men are considered significantly more reckless drivers than women. In this case, you would want the model to charge men the correct amount relative to their actual risk, even if the amount is different for women.
### Unfavorable Predictive Value Parity {: #unfavorable-predictive-value-parity }
For each protected class, when the model predicts the unfavorable outcome, what is the probability that it is correct (i.e., that the actual outcome is unfavorable)?
This metric (also known as "Negative Predictive Value Parity") is based on equal error and is part of the set of Favorable Predictive & Unfavorable Predictive Value Parity fairness metrics.
Required data:
- Protected feature (i.e., ``gender`` with values ``male`` or ``female``)
- Target with predicted decisions (i.e., ``Hired`` with values ``Yes`` or ``No``)
- Target with actual outcomes (i.e., ``Hired_actual`` with values ``Yes`` or ``No``)
Formula:

Example:
A company has a pool of ``100`` applicants, where:
- ``70`` applicants are male
- ``30`` applicants are female
- ``5`` males were correctly predicted to be _not_ hired
- ``10`` males were incorrectly predicted to be _not_ hired
- ``8`` females were correctly predicted to be _not_ hired
- ``12`` females were incorrectly predicted to be _not_ hired
Calculate the Unfavorable Predictive Value Parity for each protected class as follows:
```python
male unfavorable predictive value = TN / (TN + FN) = 5 / (5 + 10) = 0.333
female unfavorable predictive value = TN / (TN + FN) = 8 / (8 + 12) = 0.4
```
|
bias-ref
|
---
title: Bias and Fairness overview
description: Provides a high-level overview of bias in machine learning and DataRobot's detection and prevention tools.
---
# Bias and Fairness overview {: #bias-and-fairness-overview }
In DataRobot, bias represents the difference between a model's predictions for different populations (or groups), while fairness is the measure of that bias. More specifically, DataRobot provides methods to calculate fairness for a binary classification model and to identify any biases in the model's predictive behavior.
Fairness metrics in modeling describe the ways in which a model can perform differently for distinct groups within data. Those groups, when they designate groups of people, might be identified by protected or sensitive characteristics, such as race, gender, age, and veteran status.
The largest source of bias in an AI system is the data it was trained on. That data might have historical patterns of bias encoded in its outcomes. Bias might also be a product not of the historical process itself but of data collection or sampling methods misrepresenting the ground truth.
See the [index](b-and-f/index) for links to the settings and tools available on DataRobot to enable bias mitigation.
## Additional resources {: #additional-resources }
* Blog post: [_Bias and Fairness as Dimensions of Trusted AI_](https://www.datarobot.com/trusted-ai-101/ethics/bias-and-fairness/){ target=_blank } resource page for more detail.
* Part one of a three-part series in partnership with Amazon Web Services: [How to Build & Govern Trusted AI Systems](https://www.datarobot.com/blog/how-to-build-govern-trusted-ai-systems-people/){ target=_blank }
* Podcast: [Moving Past Artificial Intelligence To Augmented Intelligence](https://www.moreintelligent.ai/podcasts/moving-past-artificial-intelligence-to-augmented-intelligence/){ target=_blank }
|
bias-overview
|
---
title: Bias and Fairness resources
description: Provides links to bias and fairness resources used in DataRobot.
---
# Bias and Fairness resources {: #bias-and-fairness-resources }
The tools of the Bias and Fairness feature test your models for bias. This allows you to identify bias before (or after) models are deployed and to take action before the model's decisions cause negative outcomes for your organization. See a more complete overview [here](bias-overview).
The workflow for implementing bias and fairness is:
* Select one or more protected features and pick a fairness metric.
* Use insights to determine if models are biased with respect to the protected features.
* Monitor production models for bias.
The tools available for each step of working with Bias and Fairness are described in the following sections. Fairness metrics and terminology are described in the [Bias and Fairness reference](bias-ref).
Topic | Describes...
---|---
**Settings** | :~~:
[Advanced options: fairness metrics](fairness-metrics#set-fairness-metrics) | Set fairness metrics prior to model building (or [from the Leaderboard](fairness-metrics#configure-metrics-and-mitigation-post-autopilot) post-modeling).
[Advanced options: mitigation](fairness-metrics#set-mitigation-techniques) | Set mitigation techniques prior to model building (or [from the Leaderboard](fairness-metrics#configure-metrics-and-mitigation-post-autopilot) post-modeling).
**Model insights** | :~~:
[Per-Class Bias](per-class) | Identify if a model is biased, and if so, how much and who it's biased towards or against.
[Cross-Class Data Disparity](cross-data) | Depict why a model is biased, and where in the training data it learned that bias from.
[Cross-Class Accuracy](cross-acc) | Measure the model's accuracy for each class segment of the protected feature.
[Bias vs Accuracy](bias-tab) | View the tradeoff between predictive accuracy and fairness.
**Deployments** | :~~:
[Fairness monitoring](fairness-settings) | Configure tests that allow models to recognize, in real-time, when protected features in the dataset fail to meet predefined fairness conditions.
[Per-Class Bias](mlops-fairness#view-per-class-bias) | Uses the fairness threshold and score of each class to determine if certain classes are experiencing bias in the model's predictive behavior.
[Fairness over time](mlops-fairness#view-fairness-over-time) | View how the distribution of a protected feature's fairness scores have changed over time.
**Reference** | :~~:
[Bias and Fairness overview](bias-overview) | View a brief overview and definition of bias and fairness, with links to further reading.
[Bias and Fairness reference](bias-ref) | Understand methods used to calculate fairness and to identify biases in the model's predictive behavior.
|
index
|
# Composable ML reference {: #composable-ml-reference }
Topic | Describes...
----- | ------------
[Blueprints in the AI Catalog](cml-catalog) | How to save, edit, share, and re-use blueprints from the AI Catalog.
[Validation schema](cml-validation) | How to define the expected input and output data requirements for a given custom task.
[Sentiment analysis example](cml-sentiment-example) | A tip for capturing Text sentiments using Composable ML.
[Composable ML considerations](cml-consider) | Considerations to be aware of when working with Composable ML.
## Learn more {: #learn-more }
The following resources are available to provide more information.
### Read and view {: #read-and-view }
* [Demo video](https://www.datarobot.com/platform/composable-ml/){ target=_blank }
* [AutoML and Composable ML blog](https://www.datarobot.com/blog/building-ai-with-automl-and-composable-ml/){ target=_blank }
### Try it {: #try-it }
* The Blueprint Workshop provides tools for building and editing blueprints using a programmatic interface. Visit the:
* [Homepage](https://blueprint-workshop.datarobot.com/index.html#){ target=_blank } for easy access to setup, tutorials, examples, and an API reference.
* [Walkthrough](https://blueprint-workshop.datarobot.com/examples/walkthrough/Walkthrough.html){ target=_blank } of the most commonly used functionality in the Blueprint Workshop.
* [Task templates](https://github.com/datarobot/datarobot-user-models/tree/master/task_templates){ target=_blank }
* [Drop-in environments](https://github.com/datarobot/datarobot-user-models/tree/master/public_dropin_environments){ target=_blank }
|
index
|
---
title: Sentiment analysis example
description: Apply DataRobot's Composable ML to capture sentiment from text.
---
# Sentiment analysis example {: #sentiment-analysis-example }
The dataset in this example includes reviews or tweets. The goal is to get an uplift for the model by capturing the sentiment in the text. To do this, simply [modify the blueprint](cml-blueprint-edit#access-the-blueprint-editor) (i.e., click **Copy and Edit** to start). The following is a simple, text-only blueprint, but the model could include other features as well.

Hover over either the **Matrix of word-grams counts** or **Elastic-Net Classifier** nodes to see:
* The type of input required for that task.
* The type of output returned.
For example, the **Matrix of word-grams counts** task requires the input to be of type Text and it returns a data frame with all numeric features:

To capture sentiments in a text feature for this example, hover over the **Text variables** node and click the [task selector](cml-blueprint-edit#use-the-task-selector) plus sign (). In the **Select a task** dialog box, expand **Preprocessing > Text Preprocessing** to see the options for text manipulations. (Some of these options are also available via [**Advanced Tuning**](adv-tuning), but others can only be accessed here.) Select **TextBlob Sentiment Featurizer** to add it.

The blueprint now shows a new node, outlined in red. When you hover over the node, you can see that it requires text:

Note that the node's output is a data frame with numerical features (`Data Type: Numeric`). Because the **TextBlob Sentiment Featurizer** is a preprocessing module, you must [connect it to the model task](cml-blueprint-edit#work-with-nodes). (Hover over the TextBlob node, drag the diagonal arrow () icon to the **Elastic-Net Classifier** node, and click.)

The new blueprint is [ready to be trained](cml-blueprint-edit#train-new-models). (Before training, you can change the feature list or the training sample size.)
Here is the model on the Leaderboard, shown as one of the top four models.

|
cml-sentiment-example
|
---
title: Blueprints in the AI Catalog
description: How to save, edit, share, and re-use blueprints from the AI Catalog.
---
# Blueprints in the AI Catalog {: #blueprints-in-the-ai-catalog }
When Composable ML is enabled, you can save blueprints to the AI Catalog. From the catalog, a blueprint can be edited, used to train models in compatible projects, or shared.
The blueprints available in the catalog are those that:
- Were shared with you.
- You explicitly saved from the Leaderboard or Repository, via the **Blueprint** tab by clicking **Add to AI Catalog**.

- You saved via the Blueprint Workshop or API.
## Access catalog blueprints {: #access-catalog-blueprints }
If you chose to save a blueprint from the Repository or Leaderboard, DataRobot presents a modal that allows you to rename it. When saving is complete, you can open the catalog to work with the blueprint or begin/continue editing from where you are.

Once in the catalog, click **Blueprints** to display a list of all saved user blueprints.

Click to select a blueprint. Once expanded, select one of the following tabs:
Tab | Description
--- | -----------
Info | Display creation and modification information for the blueprint. Additionally, you can add or edit the blueprint name, description, or tags.
Blueprint | Load the blueprint in a state where it is ready for additional editing. You can also train a model using the blueprint.
Comments | Add comments to a blueprint. Any text will be visible to users who are granted access to the blueprint.
As with other catalog assets, you can share your blueprints. Click [**Share**](catalog-asset#share-assets) in the top right corner, and then choose who to share it with and assign permissions.
## Link project blueprints {: #link-project-blueprints }
Some blueprints are meant to be used only with a specific project. For example, a blueprint preprocessing step might select features specific to the project. With DataRobot's automated project linking, feature selection is unavailable if the user blueprint is not linked to a project. This prevents a task from attempting to select a feature (column) that is not included in the project.
Consider the following example:
In a RandomForest Regressor blueprint, you want to select a specific column and reference a specific feature or features:

Click **Add** to add the new step. You can then train the blueprint or add it to the **AI Catalog**. If you then open the blueprint in the catalog, because the blueprint references features in a specific dataset, DataRobot has automatically linked it to the project.

If you then modify the linked project:

DataRobot provides a warning that the required columns do not exist in the dataset:

To use the blueprint, you must first edit the columns in the column selector step. If you do not want the blueprint linked to a project, use **Unlink project** in the project selection dropdown.

If a project is not linked automatically (because there are no project dependencies), you can manually link it through the project dropdown:

Finally, if you copy a user blueprint that is linked to a project, the link is also copied. Note that you can train a blueprint linked to a project ("project A") inside of another project ("project B"). DataRobot provides a warning if the project A blueprint refers to features that do not exist in project B.
!!! note
The linked project name, and the project selection dropdown, are also available from the catalog **Blueprint > Info** tab.
## Train blueprints in bulk {: #train-blueprints-in-bulk }
In the **AI Catalog**, you can apply bulk actions to user blueprints that [share the same target type](#select-target-type). If selected blueprints don't have at least one common target type, DataRobot [prevents bulk training](#target-type-validation).
To train in bulk, select **AI Catalog > Blueprints** to list blueprints and select those you want to apply to a project. Once the blueprints are selected, the ability to train the blueprints becomes enabled.
!!! note
When multiple target types are listed, the blueprint supports each listed target type.

Click **Train blueprints** and the training modal for multiple blueprints opens. If any of the selected blueprints fail validation, the **Train blueprints** button is disabled and a [message](#identify-errored-blueprints) identifies the errored blueprints. Blueprints that include warnings do not block training.

Use the current project or change projects from the dropdown. Projects are filtered and available for selection based on commonality of target type with the selected blueprint (multiclass in this example):

Complete the fields in the modal and click to train blueprints. DataRobot provides a notification that blueprints have started training and will be available on the Leaderboard.
### Select target type {: #select-target-type }
To make it easier to select appropriate blueprints, the **Blueprints** tab in the catalog provides a dropdown selector to filter blueprints by target type:

Choose one or more types and the display changes to list only those blueprints matching at least one of the selected types. For example, if you select binary and a blueprint has a target type of binary and multiclass, it will be included in the list.
### Target type validation {: #target-type-validation }
Because blueprints can only be applied to projects with a matching target type, DataRobot runs validation on selected blueprints and only allows training if there is a compatible target type. For example, if you select a *blueprint* with a binary target type and also choose a *project* with a binary target type, then you are able to proceed with training:

When it is not valid, training is disabled:

### Identify errored blueprints {: #identify-errored-blueprints }
The **Train multiple blueprints** modal displays a color-coded message that indicates status for the group of blueprints in the training request and the number of affected blueprints.
* If all blueprints are valid, the message is green.
* Yellow indicates that at least one blueprint contains a warning, but none are errored.
* Red indicates that at least one blueprint contains an error (and as such, training is disabled).
Click to expand the message and display the target type and a status indicator for each blueprint.

Hover over the icons to display a tooltip containing information on addressing the error or warning. Or, deselect an errored blueprint to make the remaining blueprints available for training. Click the **Deselect errors** link to deselect all errored blueprints in the group.

When errored blueprints are removed from the group, the message turns yellow or green, and the **Train blueprints** button is enabled.

## Delete blueprints in bulk {: #delete-blueprints-in-bulk }
To delete multiple blueprints with a single action:
1. Use the checkboxes to select the blueprints to delete.
2. Click **Delete** () to open the confirmation modal, which tells you the number and type of blueprints that will be removed.

Deleting a blueprint removes it from the AI Catalog, but note that:
* Removing the blueprint from the catalog does not affect models that were created from that blueprint.
* You can restore a deleted blueprint to the catalog from any model that uses it.
|
cml-catalog
|
---
title: Model metadata and validation schema
description: How to use the model-metadata.yaml file to specify additional information about a custom task or a custom inference model.
---
# Model metadata and validation schema {: #model-metadata-and-validation-schema }
The `model-metadata.yaml` file is used to specify additional information about a custom task or a custom inference model, such as:
* Supported input/output data types that validate, when composing a blueprint, whether a task's input/output requirements match the neighboring tasks.
* The environment ID/model ID of a task or model when running `drum push`.
To define metadata, create a `model-metadata.yaml` file and put it in the top level of the task/model directory. In most cases it can be skipped, but it is required for custom transform tasks that output non-numeric data.
You can see a full [example](https://github.com/datarobot/datarobot-user-models/blob/master/task_templates/1_transforms/1_python_missing_values/custom.py){ target=_blank }. Note that `model-metadata.yaml` is located in the same folder as `custom.py`.
The sections below show how to define metadata for custom models and tasks.
## General metadata parameters {: #general-metadata-parameters }
The following table describes options that are available to tasks and/or inference models. The parameters are required when using `drum push` to supply information about the model/task/version to create. Some of the parameters are also required outside of `drum push`, for compatibility reasons. A minimal example follows the table.
!!! note
The `modelID` parameter adds a new version to a pre-existing custom model or task with the specified ID. Because of this, all options that configure a new base-level custom model or task are ignored when passed alongside this parameter. However, at this time these parameters still must be included.
Option | When required | Task or inference model | Description
------ | ---------- | ----------------------- | -----------
`name` | Always | Both | A string, preferably unique for easy searching, that `drum push` uses as the custom model title.
`type` | Always | Both | A string, either `training` (for custom tasks) or `inference` (for custom inference models).
`environmentID` | Always | Both | A hash of the execution environment to use while running your custom model or task. You can find a list of available execution environments in **Model Registry > Custom Model Workshop > Environments**. Expand the environment and click on the **Environment Info** tab to view and copy the file ID. Required for `drum push` only.
`targetType` | Always | Both | A string indicating the type of target. Must be one of: <br />• `binary` <br /> • `regression` <br /> • `anomaly` <br /> • `unstructured` (inference models only) <br /> • `multiclass` <br /> • `transform` (transform tasks only)
`modelID` | Optional | Both | After creating a model or task, it is best practice to use versioning to add code while iterating. To create a new version instead of a new model or task, use this field to link the custom model/task you created. The ID (hash) is available from the UI, via the URL of the custom model or task. Used with `drum push` only.
`description` | Optional | Both | A searchable field. If `modelID` is set, use the UI to change a model/task description. Used with `drum push` only.
`majorVersion` | Optional | Both | Specifies whether the model version you are creating should be a major (`True`, the default) or minor (`False`) version update. For example, if the previous model version is 2.3, a major version update would create version 3.0; a minor version update would create version 2.4. Used for `drum push` only.
`targetName` | Always | Model |A string indicating the column in your data that the model is predicting.
`positiveClassLabel` / `negativeClassLabel` | For binary classification models | Model | When your model predicts probability, the `positiveClassLabel` dictates what class the prediction corresponds to.
`predictionThreshold` | Optional (binary classification models only). | Model | The cutoff point between 0 and 1 that dictates which label will be chosen as the predicted label.
`trainOnProject` | Optional | Task | A hash with the ID of the project (PID) to train the model or version on. When using `drum push` to test and upload a custom estimator task, you have an option to train a single-task blueprint immediately after the estimator is successfully uploaded into DataRobot. The `trainOnProject` option specifies the project on which to train that blueprint.
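As an illustration only (all values below are placeholders), a minimal `model-metadata.yaml` for a custom binary classification inference model might look like the following sketch; copy your actual environment ID from **Model Registry > Custom Model Workshop > Environments**:
```
name: example-binary-inference-model        # placeholder name
type: inference
targetType: binary
environmentID: 5e8c889607389fe0f466c72d     # placeholder hash; required for drum push only
targetName: readmitted                      # placeholder target column
positiveClassLabel: "True"
negativeClassLabel: "False"
predictionThreshold: 0.5
```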
## Validation schema and fields {: #validation-schema-and-fields }
The schema validation system, which is defined under the `typeSchema` field in `model-metadata.yaml`, is used to define the expected input and output data requirements for a given custom task. By including the optional `input_requirements` and `output_requirements` fields, you can specify exactly the kind of data a custom task expects or outputs. DataRobot displays the specified conditions in the blueprint editor to indicate whether the neighboring tasks match. It also uses them during blueprint training to validate whether the task's data format matches the conditions. Supported conditions include:
* data type
* data sparsity
* number of columns
* support of missing values
!!! note
Be aware that `output_requirements` are only supported for custom transform tasks and must be omitted for estimators.
The sections below describe allowed conditions and values. Unless noted otherwise, a single entry is all that is required for input and/or output requirements.
### `data_types` {: #data-types }
The `data_types` field specifies the data types that are expected, or those that are specifically disallowed. A single data type or a list is allowed for `input_requirements`; only a single data type is allowed for `output_requirements` (a short sketch follows the condition list below).
Allowed values are NUM, TXT, IMG, DATE, CAT, DATE_DURATION, COUNT_DICT, and GEO.
The conditions used for `data_types` are:
* EQUALS: All of the listed data types are required in the dataframe. Missing or unexpected types raise an error.
* IN: All of the listed data types are supported, but not all are required to be present.
* NOT_EQUALS: The data type for the input dataframe may not be this value.
* NOT_IN: None of the listed data types are supported by the task.
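For example, a sketch of an `input_requirements` entry for a task that cannot handle image or location data might exclude those types with NOT_IN:
```
input_requirements:
  - field: data_types
    condition: NOT_IN
    value:
      - IMG
      - GEO
```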
### `sparse` {: #sparse }
The `sparse` field defines whether the task supports sparse data as an input or if the task can create output data that is in a sparse format.
* A condition of EQUALS must always be included in sparsity specifications.
For input, the following values apply:
* FORBIDDEN: The task cannot handle a sparse matrix format, and will fail if one is provided.
* SUPPORTED: The model must support both a dense dataframe and a sparse dataframe in CSR format. Either could be passed in from preceding tasks.
* REQUIRED: This task only supports a sparse matrix as input and cannot use a dense matrix. DRUM will load the matrix into a sparse dataframe.
For task output, the following values apply:
* NEVER: The task can never output a sparse dataframe.
* DYNAMIC: The task can output either a dense or sparse matrix.
* ALWAYS: The task will always output a sparse matrix.
* IDENTITY: The task can output either a sparse or dense matrix, and the sparsity will match the input matrix.
### `number_of_columns` {: #number-of-columns }
The `number_of_columns` field specifies whether a specific minimum or maximum number of columns is required. The value should be a non-negative integer.
For time-consuming tasks, specifying a maximum number of columns can help keep performance reasonable. The `number_of_columns` field allows multiple entries to create ranges of allowed values (a range sketch appears at the end of this section). Some conditions only allow a single entry (see the [example](#typeSchema-example)).
The conditions used for `number_of_columns` in a dataframe are:
* EQUALS: The number of columns must exactly match the value. No additional conditions allowed.
* IN: Multiple possible acceptable values are possible. The values are provided as a list in the value field. No additional conditions allowed.
* NOT_EQUALS: The number of columns must not be the specified value.
* GREATER_THAN: The number of columns must be greater than the value provided.
* LESS_THAN: The number of columns must be less than the value provided.
* NOT_GREATER_THAN: The number of columns must be less than or equal to the value provided.
* NOT_LESS_THAN: The number of columns must be greater than or equal to the value provided.
The value must be a non-negative integer.
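For example, the following sketch combines two entries to accept at least 2 and fewer than 100 columns:
```
input_requirements:
  - field: number_of_columns
    condition: NOT_LESS_THAN
    value: 2
  - field: number_of_columns
    condition: LESS_THAN
    value: 100
```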
### `contains_missing` {: #contains-missing }
The `contains_missing` field specifies whether a task can accept missing data or whether a task can output missing values.
* A condition of EQUALS must always be used.
For input, the following values apply to the input dataframe:
* FORBIDDEN: The task cannot accept missing values/NA.
* SUPPORTED: The task is capable of dealing with missing values.
For task output, the following values apply:
* NEVER: The task can never output missing values.
* DYNAMIC: The task can output missing values.
### Default schema {: #default-schema }
When a schema isn't supplied for a task, DataRobot uses the default schema, which allows [sparse data](#sparse) and [missing values](#contains-missing) in the input. By default:
```
name: default-transform-model-metadata
type: training
targetType: transform
typeSchema:
input_requirements:
- field: data_types
condition: IN
value:
- NUM
- CAT
- TXT
- DATE
- DATE_DURATION
- field: sparse
condition: EQUALS
value: SUPPORTED
- field: contains_missing
condition: EQUALS
value: SUPPORTED
output_requirements:
- field: data_types
condition: EQUALS
value: NUM
- field: sparse
condition: EQUALS
value: DYNAMIC
- field: contains_missing
condition: EQUALS
value: DYNAMIC
```
The default output data type is NUM. If any of these values are not appropriate for the task, a schema must be supplied in `model-metadata.yaml` (which is required for custom transform tasks that output non-numeric data).
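For instance, a custom transform task that accepts categorical or text input and outputs categorical (non-numeric) data might declare a schema like the following sketch (the task name is a placeholder; only fields documented above are used):
```
name: example-categorical-transform
type: training
targetType: transform
typeSchema:
  input_requirements:
    - field: data_types
      condition: IN
      value:
        - CAT
        - TXT
    - field: sparse
      condition: EQUALS
      value: FORBIDDEN
    - field: contains_missing
      condition: EQUALS
      value: SUPPORTED
  output_requirements:
    - field: data_types
      condition: EQUALS
      value: CAT
    - field: sparse
      condition: EQUALS
      value: NEVER
    - field: contains_missing
      condition: EQUALS
      value: NEVER
```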
### Running checks locally {: #running-checks-locally }
When running `drum fit` or `drum push`, the full set of validation is run automatically. Verification first checks that the supplied `typeSchema` items meet the required format. Any format issues must be addressed before the task can be trained locally or on DataRobot. After format validation, the input dataset used for fit is compared against the supplied `input_requirements` specifications. Following task training, the output of the task is compared to the `output_requirements` and an error is reported if a mismatch is present.
#### Ignore validation {: #ignore-validation}
During task development, it might be useful to disable validation. To ignore schema validation errors, pass the `--disable-strict-validation` flag to `drum fit` or `drum push`.
|
cml-validation
|
---
title: Composable ML considerations
description: Platform support and considerations for working with DataRobot's Composable ML.
---
# Composable ML considerations {: #composable-ml-considerations }
Consider the following when working with Composable ML.
### Environment support {: #environment-support }
Composable ML is supported on DataRobot’s managed AI Platform (US, EU).
#### DRUM supported OSs {: #drum-supported-oss }
* DRUM works on Mac OS, Linux, and Windows 10. Testing is on Linux only and therefore compatibility issues may occur when running on other platforms.
#### Custom task languages {: #custom-task-languages}
* Python and R are supported.
* SAS is not supported.
#### Task limits {: #task-limits }
Component | Size
-------- | ---
RAM | 60GB training / 4GB scoring
CPU Cores | 4
GPU | not supported
Storage | 350GB
Artifact (Serialized trained model/transformer size) | 10GB
Timeout | 72 hrs (for fit)
Max custom tasks per blueprint | 3
### Modeling support {: #modeling-support }
The following describes modeling-specific capabilities.
#### Modeling specifics {: #modeling-specifics }
The following are supported:
* AutoML, including OTV (not time series) and Feature Discovery.
* Estimators, both built-in and custom, are available for binary classification, regression, multiclass, and anomaly detection.
* Preprocessing, both built-in and custom.
#### API {: #api }
* Python SDK for blueprint generation.
* Python client for custom task generation.
#### Modeling Options {: #modeling-options }
The following describes support for advanced modeling options with custom blueprints:
* Metrics and loss function, where “loss function” is used to train a model and “metric” is used to evaluate models and for accuracy monitoring. Typically, the same measure is used for both. Support includes:
* Built-in metrics.
* Custom loss functions (as a part of your custom estimator)
* Custom metrics are not yet supported.
* Exposure, Weight, Counts of Events, Offset
* For custom estimators, weight is supported (others are not).
* For built-in estimators, all options are supported.
!!! note
DataRobot does not indicate whether a task takes into account any of these options. As a result, using Composable ML on a project that uses those options is not recommended. (For example, if you train a custom blueprint on a project that uses exposure, there is no guarantee whether the model will use `y` or `y/exposure` as the target.) On the other hand, when using blueprints generated by Autopilot, these options are taken into account correctly, in line with the project settings.
* Monotonic constraints are not supported.
* Hyperparameter tuning is supported for built-in tasks, but is not yet supported for custom tasks (although it can be embedded inside a task).
* Blenders are supported, with the exception of custom estimators. You can, however, _manually_ create a blueprint that uses multiple estimators.
#### Insights and compliance documentation {: #insights-and-compliance-documentation }
* Model-_agnostic_ insights are supported:
* All model evaluation insights are supported (including Evaluate tab, Model Comparison, Learning Curve, Speed vs Accuracy, etc.).
* Permutation-based Feature Impact, Feature Effects, and XEMP Prediction Explanations are supported (including for Anomaly Detection models).
* Model-_specific_ insights offer limited support:
* SHAP-based Feature Impact, Prediction Explanations, Hotspots, and Variable Effects are not supported.
* Coefficients and Tree-based variable importance are supported for built-in estimators. Word Cloud is supported for built-in estimators when the parameter `WordCloud` is set to `True`.
* Rating tables are not supported when there are custom tasks.
* Compliance documentation is supported.
### Deployment options {: #deployment-options }
* All MLOps model monitoring and management features are supported.
* Deployment of a custom blueprint inside the DataRobot platform is supported.
* Deployment outside of the DataRobot platform (using Scoring Code or Portable Prediction Server) is only supported for blueprints without custom tasks.
|
cml-consider
|
---
title: Use case examples
description: Sample use cases for how and when to use train-time image augmentation in image datasets.
---
# Use case examples {: #use-case-examples }
Below are some example use cases to help illustrate how you might leverage domain knowledge of your dataset to craft a beneficial augmentation strategy. You can try the suggestions and then modify the settings using the [**Advanced Tuning**](adv-tuning) tab. For each, the first screenshot explores the images by expanding the image feature in the **Data** tab. The second shows previews from the **Advanced options > Image Augmentation** tab.
## Identifying types of plankton {: #identifying-types-of-plankton }
This dataset contains tens of thousands of images of microscopic life and aquatic debris, taken with the ISIIS underwater imaging system.

To classify them into 24 classes:
* Because of the way that floating plankton and debris move through water, they can be in any orientation, irrespective of gravity. This example supports enabling **Horizontal** and **Vertical Flip** and setting **Rotation** to a high maximum value.
* Because of the way the images were cropped when the dataset was prepared and labelled, most images are centered with a similar margin. For this reason, you would not enable **Shift** or **Scale**.
* The images have a variety of blurriness. Enable a slight **Blur** to match.
* There are not many instances of shapes that occlude the plankton intended to be identified. In addition, since the images are very low resolution, there is probably a low chance of overfitting to specific small patterns or pixels. For these two reasons, do not enable **Cutout**.

## Classifying groceries {: #classifying-groceries }
This dataset contains a few thousand images—taken with a hand-held camera—of fruits, vegetables, and dairy products found in a grocery store.

Configuration suggestions to classify them into 83 classes:
* Although the fruits and vegetables can be in any orientation in the bins, photos are always taken with the ground at the bottom of the photo (right-side-up), so it is best not to enable **Vertical Flip**.
* While **Horizontal Flip** might be reasonable for fruits and vegetables, what about the dairy cartons? Does the model need to recognize specific text or a logo on the carton that would be harder to recognize if it were flipped? Use **Horizontal Flip** for the benefits it might provide to most other classes, but also experiment and compare with a model without **Horizontal Flip** (via **Advanced Tuning**).
* Most photos are taken from approximately an arm's length away, so there is probably no need to enable **Scale**.
* Notice that the photos come from a wide variety of angles and are not always centered. To address this, apply **Rotation** and **Shift**.
* The photo resolution seems consistent and the very small details might be necessary to distinguish among varieties of the same fruit. For that reason, don't enable **Blur**.
* In addition, because there isn't obvious occlusion of the grocery items, first try without **Cutout**. Consider also trying with **Cutout** using **Advanced Tuning**.

## Finding powerlines {: #finding-powerlines }
This dataset contains a few thousand aerial images of the countryside. The example helps identify which images contain powerlines.

Consider:
* Since the photos are taken from above and could capture the ground at many angles depending on how the airplane is flying, enable **Horizontal Flip**, **Vertical Flip**, and a large maximum **Rotation**.
* Because the photos are taken from a variety of altitudes, enable **Scale**.
* There is no centering or consistent margin in the photos, so enable **Shift**.
* Enable **Blur** since the photos have a variety of blurriness/resolution levels.
* Birds, trees, or discolorations in the ground can decrease the contrast between the powerlines and the ground, which might make it hard for the model to detect the powerlines. Enable **Cutout** to simulate more instances where part of the powerline might be difficult to detect, in the hopes that the model will more robustly detect any part of the powerline.

|
ttia-examples
|
---
title: Transformations and lists
description: To simplify comparing multiple augmentation strategies across many models, DataRobot provides the capability to create augmentation lists.
---
# Transformations and lists {: #transformations-and-lists }
To simplify comparing multiple augmentation strategies across many models, DataRobot provides the capability to create [augmentation lists](#augmentation-lists). This section describes those lists and the settings and [transformations](#transformations) that comprise them.
## Augmentation lists {: #augmentation-lists }
Augmentation lists store all the parameter settings for a given augmentation strategy. They function similarly to feature lists, with DataRobot providing the ability to create, rename, and delete lists.
DataRobot automatically creates an initial augmentation list if you set the transformation parameters from the **Advanced options** link [prior to modeling](ttia). Alternatively, you can add lists [after modeling](ttia-introduction). In either case, you can view lists once modeling completes.
To see your saved augmentation list(s), open a model on the Leaderboard and navigate to **Evaluate > Advanced Tuning**:

From here you can [create](#create-a-new-list) a new list or [manage](#manage-existing-lists) existing lists. Click **Show preview** to replace the graph image with a preview of the image transformations or **Hide preview** to return to the **Advanced Tuning** graph.
### Create a new list {: #create-a-new-list }
To create a new list, click **Create new list**. A list of transformation parameters (each described [below](#augmentation-list-components)) and a preview appears. You can either begin to set parameters from the default settings or select an existing list from the dropdown as a starting point. Note that if you manually enter a value instead of using the slider, you must click outside of the box before the change registers and can be saved. The preview displays the original image and a sample of transformed images.

Set the transformation parameters (scroll to access all options), preview augmentation if desired, and click **Save as new list**. To discard changes, click the arrow to return to **Advanced Tuning**.

### Manage existing lists {: #manage-existing-lists }
You can rename and delete augmentation lists by selecting **Manage lists** from the list dropdown:

Use the edit () or delete () icons to rename or remove lists. Note the following:
1. You cannot delete any list that has been used for modeling, including the "Initial List" (created from the [advanced options](ttia) settings).
2. You cannot rename the "Initial List."
## Augmentation list components {: #augmentation-list-components }
The following describes each component of an augmentation list. After setting values, you can use **Preview augmentation** to fine-tune values. The preview does not display all dataset images with all possible transformations. Instead, it contains rows that consist of an original image from the dataset with examples of transformations as they would appear in the data used for training.
### New images per original {: #new-images-per-original }

The **New images per original** specifies how many versions of the original image DataRobot will create. Basically, it sets how much larger your dataset will be after augmentation. For example, if your original dataset has 1000 rows, a "new images" value of 3 will result in 4000 rows for your model to train on (1000 original rows and 3000 new rows with transformed images).
The maximum allowed value for **New images per original** is dynamic. That is, DataRobot determines a value—based on the number of original rows—that it can safely use to build models without exceeding memory limits. Put simply, for a project (regardless of current feature list), the maximum is equal to `300,000 / (number_of_rows * feature_columns)` or 1, whichever is greater.
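To make the formula concrete: a dataset with 1,000 rows and 5 feature columns would allow up to 300,000 / (1,000 × 5) = 60 new images per original, while a dataset with 100,000 rows and 10 feature columns would fall below 1 (300,000 / 1,000,000 = 0.3), so the maximum defaults to 1.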
For fine-tuned models, the **New images per original** parameter has no effect. Instead, control the size of training data using the `epoch` and `earlystop_patience` parameters in the [**Advanced Tuning**](adv-tuning) tab.
### Choose transformations probability {: #choose-transformations-probability }
For each new image that is created, each transformation enabled in your augmentation list will have a probability of being applied equal to this parameter. So, if you enable **Rotate** and **Shift** and set the individual transformation probability to 0.8, ~80% of your new images will have at least **Rotate** applied and ~80% will have at least **Shift** applied. Because the probability for each transformation is independent, each new image could have neither, one, or both transformations, and your new images would be distributed as follows:
| | No Shift | Shift |
|--- | ---------- | ----- |
| No Rotate| 4% | 16% |
| Rotate| 16% | 64% |
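These percentages follow from the independence of the two transformations: the chance that neither is applied is 0.2 × 0.2 = 4%, the chance of exactly one (**Rotate** only or **Shift** only) is 0.8 × 0.2 = 16% each, and the chance of both is 0.8 × 0.8 = 64%.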
### Transformations {: #transformations }
The following sections explain the transformation options available for images.

The best way to familiarize yourself with the available transformations is to explore them in DataRobot and see the resulting transformed images using **Preview Augmentation**. However, the following descriptions are provided for more context and implementation details.
There are two main purposes that a transformation can serve:
1. To create a new image that looks like it could have reasonably been in the original data. Since applying transformations is typically less expensive than collecting and labelling more data, this is a great way to increase your training set size with images that are almost as authentic as originals.
2. To intentionally remove some information from the image, guiding the model to focus on different aspects of the image and thereby learning a more robust representation of it. This is described with examples under the sections for **Blur** and **Cutout**.
#### Shift {: #shift }
The **Shift** transformation is useful when the object(s) to detect are not centered. Once selected, you also set a **Maximum Proportion**:

The **Maximum Proportion** parameter sets the maximum amount the image will be shifted up, down, left, or right. A value of 0.5 means that the image could be shifted up to half the width of the image left or right, or half the height of the image up or down. The actual amount shifted for each image is random, and **Shift** is only applied to each image with probability equal to the [individual transformation probability](#choose-transformations-probability). The image will be padded with <a target="_blank" href="https://pytorch.org/docs/stable/generated/torch.nn.ReflectionPad2d.html">reflection padding</a>. This transformation typically serves the first purpose mentioned above, simulating a photographer who had stepped to the side, or raised or lowered the camera.
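For example, with a **Maximum Proportion** of 0.25 on a 400×300 pixel image, a new image could be shifted by up to 100 pixels left or right (0.25 × 400) or up to 75 pixels up or down (0.25 × 300), with the exact offset chosen at random for each augmented image.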
#### Scale {: #scale }
**Scale** is likely to be helpful when:
* The object(s) to detect are not a consistent distance from the camera.
* The object(s) to detect vary in size.
Once selected, set a **Maximum Proportion** parameter to set the maximum amount the image will be scaled in or out. The actual amount scaled for each image is random—**Scale** is only applied to each image with probability equal to the [individual transformation probability](#choose-transformations-probability). If scaled out, the image will be padded with <a target="_blank" href="https://pytorch.org/docs/stable/generated/torch.nn.ReflectionPad2d.html">reflection padding</a>. This transformation typically serves the first purpose mentioned above, simulating whether the photographer had taken a step forward or backward.
#### Rotate {: #rotate }
**Rotate** is likely to be helpful when:
* The object(s) to detect can be in a variety of orientations.
* The object(s) to detect have some radial symmetry.

If set, use the **Maximum Degrees** parameter to set the maximum degree to which the image will be rotated clockwise or counterclockwise. The actual amount rotated for each image is random, and **Rotate** will only be applied to each image with probability equal to the [individual transformation probability](#choose-transformations-probability). **Rotate** best simulates if the object captured had turned or if the photographer had tilted the camera.
#### Blur {: #blur }
**Blur** and the accompanying **Maximum Filter Size** are helpful when:
* The images have a variety of blurriness.
* The model must learn to recognize large-scale features in order to make accurate predictions.
The **Maximum Filter Size** parameter sets the maximum size of the gaussian filter passed over the image to smooth it. For example, a filter size of 3 means that the value of each pixel in the new image will be an aggregate of the 3x3 square surrounding the original pixel. Higher filter size leads to a more blurry image. The actual filter size for each image is random, and **Blur** will only be applied to each image with probability equal to the [individual transformation probability](#choose-transformations-probability).
This transformation can serve both purposes described above. With regard to the first purpose, if the images have a variety of blurriness, adding **Blur** can simulate new images with varying levels of focus. With the second purpose, by adding **Blur** you guide the model to focus on larger-scale shapes or colors in the image rather than specific small groups of pixels. For example, if you are worried that the model is learning to identify cats only by a single patch of fur rather than also considering the whole shape, then adding **Blur** can help the model to focus on both small-scale and large-scale features. But if you're training a model to recognize tiny manufacturing defects, it's possible that applying **Blur** might only remove valuable information that would be useful for training.
#### Cutout {: #cutout }
**Cutout** is likely to be helpful when:
* The object(s) to detect are frequently partially occluded by other objects.
* The model should learn to make predictions based off multiple features in the image.
Once selected, there are a number of additional parameters you can set:
* The **Number of Holes** sets the number of black rectangles that will be pasted over the image randomly.
* The **Maximum Height in Pixels** and **Maximum Width in Pixels** indicate the maximum height and width of each rectangle, though the value for each rectangle will be random.

**Cutout** is only applied to each image with probability equal to the [individual transformation probability](#choose-transformations-probability).
This transformation can serve both purposes described above. For the first, if the object(s) to detect are frequently partially occluded by other objects, adding **Cutout** can simulate new images with objects that continue to be partially obscured in new ways. Regarding the second purpose, adding **Cutout** guides the model to not always look at the same part of an object to make a prediction.
For example, imagine training a model to distinguish among various car types. The model might learn that the shape of the hood is enough to reach 80% accuracy, and so the signal from the hood might outweigh any other information in training. By applying **Cutout**, the model won't always be able to see the hood, and will be forced to learn to make a prediction using other parts of the car.
This could lead to a more accurate model overall, because it has now learned how to use various features in the image to make a prediction.
#### Horizontal Flip {: #horizontal-flip }
The following are scenarios in which the **Horizontal Flip** transformation is likely to be helpful:
* The object you're trying to detect has symmetry about a vertical line.
* The camera was pointed parallel to the ground.
* The object you're trying to detect could have come from either the left or the right.
This transformation has no parameters—new images will be flipped with probability of 50% (ignoring the value of the [individual transformation probability](#choose-transformations-probability)). It typically serves the purpose mentioned above, simulating if the object was coming from the left instead of the right or vice-versa.
#### Vertical Flip {: #vertical-flip }
The following are scenarios in which the **Vertical Flip** transformation is likely to be helpful:
* The object(s) to detect have symmetry about a horizontal line.
* The camera was pointed perpendicular to the ground—for example, down at the ground, table, or conveyor belt, or up at the sky.
* The images are of microscopic objects that are hardly affected by gravity.
This transformation has no parameters—new images will be flipped with probability of 50% (ignoring the value of the [individual transformation probability](#choose-transformations-probability)). It typically serves the purpose of simulating if the object was flipped vertically or if the overhead image was captured from the opposite orientation.
|
ttia-lists
|
---
title: Train-time image augmentation
description: Train-time image augmentation is a processing step in the DataRobot blueprint that creates new images for training by randomly transforming existing images.
---
# Train-time image augmentation {: #train-time-image-augmentation }
**Train-time image augmentation** is a processing step in the DataRobot blueprint that creates new images for training by randomly transforming existing images, thereby increasing the size of (i.e., "augmenting") the training data. The following sections describe the components of augmentation, the process by which each image is transformed.
Topic | Describes...
----- | ------
[About augmented models](ttia-introduction) | Read an overview of augmentation.
[Augmentation lists](ttia-lists) | Store all the parameter settings for a given augmentation strategy.
[Use case examples](ttia-examples) | See examples of leveraging domain knowledge to craft a beneficial augmentation strategy.
|
index
|
---
title: About augmented models
description: An overview of augmented modeling and how it supports the potential for smaller overall loss by improving the generalization of models on unseen data.
---
# About augmented models {: #about-augmented-models }
By creating new images for training by randomly transforming existing images, you can build insightful projects with datasets that might otherwise be too small. In addition, all image projects that use augmentation have the potential for smaller overall loss by improving the generalization of models on unseen data. That is:
* _Augmentation_ is the action taken on the image dataset.
* _Transformations_ are the actions applied to an image.
After the process of augmentation, each image is transformed.
For a general explanation of image augmentation, see the description in <a target="_blank" href="https://albumentations.ai/docs/introduction/image_augmentation/">_albumentations_</a> documentation—this is the open-source library that helps power DataRobot's implementation of the augmentation feature.
This page provides a general overview of how to configure augmentation. The parameters used to configure augmentation are detailed in this page about [augmentation lists and transformation parameters](ttia-lists).
## Image augmentation {: #image-augmentation }
There are two places where you can configure the **Train-Time Image Augmentation** step:
* [Before model building](ttia), in **Advanced options**.
* From the Leaderboard, [after model building](ttia-lists).
{% include 'includes/image-augmentation-include.md' %}
### Performance {: #performance }
A key advantage of train-time image augmentation is that because it is only applied during training, the prediction times for a model are relatively unchanged by whether it was trained with augmentation. This allows you to deploy models with better loss at no cost to your prediction times.
Some performance notes:
* Benchmarking has shown that in a project where dataset rows are doubled with image augmentation, building in Autopilot will take about 50% longer.
* When image augmentation improves the LogLoss of a model, it improves it on average by approximately 10%, with a very large variance model-to-model and dataset-to-dataset.
### Data Drift {: #data-drift }
While models trained with image augmentation are often more robust to data drift than models trained without, transformations applied in image augmentation should not be used to anticipate future data drift. For example, if you are training a model to detect species of freshwater fish, and you anticipate that you'll apply your model in the future to a different region with larger fish, the best approach would be to collect data from that different region and incorporate it into your dataset. If you were to just apply the **Scale** transformation to your current dataset in an attempt to simulate larger fish not seen in your dataset, you would be creating images with larger fish in training, but when DataRobot scored your model against the validation or holdout, model performance would suffer because there were no larger fish in those partitions. This makes it difficult to correctly evaluate your model with augmentation against other models on the Leaderboard— _your current training dataset is not representative of your future data_.
### External resources {: #external-resources }
There are many research papers available that explain and provide evidence of the benefits of image augmentation for machine learning models—improved performance and outcomes as well as making them more robust. Below are a sample of external resources:
* Chen, T., Kornblith, S., Norouzi, M., & Hinton, G. (2020, November).
<a target="_blank" href="https://arxiv.org/pdf/2002.05709.pdf">A Simple Framework for Contrastive Learning of Visual Representations.</a>
* Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2012).
<a target="_blank" href="https://proceedings.neurips.cc/paper/2012/file/c399862d3b9d6b76c8436e924a68c45b-Paper.pdf">ImageNet Classification with Deep Convolutional Neural Networks.</a>
* Wang, J., & Perez, L. (2017).
<a target="_blank" href="http://cs231n.stanford.edu/reports/2017/pdfs/300.pdf">The Effectiveness of Data Augmentation in Image Classification using Deep Learning.</a>
* Zoph, B., Cubuk, E. D., Ghiasi, G., Lin, T. Y., Shlens, J., & Le, Q. V. (2020, August).
<a target="_blank" href="https://arxiv.org/pdf/1906.11172v1.pdf">Learning Data Augmentation Strategies for Object Detection.</a>
|
ttia-introduction
|