# Google BigQuery {: #google-bigquery }
## Supported authentication {: #supported-authentication }
- OAuth
- Service account ([public preview](bigq-service-acct))
## Prerequisites {: #prerequisites }
The following is required before connecting to Google BigQuery in DataRobot:
- A Google account [authenticated with OAuth](https://cloud.google.com/bigquery/docs/authorization){ target=_blank }
- A Google BigQuery project
## Set up a connection in DataRobot {: #set-up-a-connection-in-datarobot }
When connecting with OAuth parameters, you must create a new data connection.
To set up a data connection using OAuth:
1. Follow the instructions for [creating a data connection](data-conn#create-a-new-connection)—making sure the minimum [required parameters](#required-parameters) are filled in—and [testing the connection](data-conn#test-the-connection).
2. After clicking **Test Connection**, a window appears. Click **Sign in using Google**.

3. Select the account you want to use.

4. To provide consent to the database client, click **Allow**.
If the connection is successful, the following message appears in DataRobot:

## Required parameters {: #required-parameters }
The table below lists the minimum required fields to establish a connection with Google BigQuery:
Required field | Description | Documentation
--------------- | ---------- | -----------
`ProjectId` | A globally unique identifier for your project. | [Google Cloud documentation](https://cloud.google.com/resource-manager/docs/creating-managing-projects){ target=_blank }
Learn about additional [configuration options for Google BigQuery](https://cloud.google.com/bigquery/docs/reference/odbc-jdbc-drivers#current_jdbc_driver){ target=_blank } in the installation guide under "Connector Configuration Options".
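To illustrate how the `ProjectId` parameter fits into a JDBC connection string, the sketch below assembles a Simba-style BigQuery JDBC URL. The host, port, and property names follow the driver documentation linked above; the project identifier is a hypothetical value, not one from this guide:

```python
def bigquery_jdbc_url(project_id: str, oauth_type: int = 1) -> str:
    """Build a Simba-style BigQuery JDBC connection URL.

    OAuthType=1 corresponds to user-account (OAuth) sign-in.
    """
    base = "jdbc:bigquery://https://www.googleapis.com/bigquery/v2:443"
    return f"{base};ProjectId={project_id};OAuthType={oauth_type};"

# Hypothetical project identifier, for illustration only.
url = bigquery_jdbc_url("my-sample-project")
```

The exact property list depends on your driver version, so treat this as a sketch rather than a template.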
## Caveats {: #caveats }
- You cannot use multiple Google accounts to authenticate Google BigQuery in DataRobot. Once a Google user is authenticated via OAuth, that Google account is used for all the BigQuery data connections for that DataRobot user.
- If your Google account has a large number of projects, it may take a long time to list schemas, even if the project is filtered with the `ProjectId` parameter.
{% include 'includes/data-conn-trouble.md' %}
Issue authenticating Google BigQuery<br><br>Need to reset the Google user assigned to authentication | Locate and remove the `bigquery-oauth` credential. | <ol><li>In DataRobot, navigate to the **Credentials Management** tab.</li><li>Select the `bigquery-oauth` credential and click **Delete**.</li><li>Select **User Settings > Data Connections**.</li><li>Reauthenticate your Google BigQuery data connection.</li></ol>
Issue authenticating Google BigQuery | Remove authentication consent in Google Cloud console. | <ol><li>Navigate to your [Google Account permissions](https://myaccount.google.com/u/1/permissions){ target=_blank }.</li><li>Select **DataRobot** under third-party applications.</li><li>Click **Remove Access**.</li></ol>
<!--- dc-bigquery --->
# Amazon Athena {: #amazon-athena }
## Supported authentication {: #supported-authentication }
- AWS Credential
## Prerequisites {: #prerequisites }
The following is required before connecting to Amazon Athena in DataRobot:
- [AWS account](https://docs.aws.amazon.com/athena/latest/ug/setting-up.html){ target=_blank }
- Username = AWS access key
- Password = AWS secret access key
- Athena managed policies attached to the AWS account
## Required parameters {: #required-parameters }
The table below lists the minimum required fields to establish a connection with Amazon Athena:
Required field | Description | Documentation
--------------- | ---------- | -----------
`address` | The service endpoint used to connect to AWS.<br><br>**Example:**<br> `athena.us-east-1.amazonaws.com:443` | [AWS documentation](https://docs.aws.amazon.com/general/latest/gr/athena.html){ target=_blank }
`AwsRegion` | The AWS Region, a separate geographic area that AWS uses to house its infrastructure.<br><br>**Example:**<br> `us-east-1` | [AWS documentation](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html){ target=_blank }
`S3OutputLocation` | Specifies the path to your data in Amazon S3. | [AWS documentation](https://docs.aws.amazon.com/athena/latest/ug/tables-location-format.html){ target=_blank }
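As a rough sketch of how the three required fields combine into an Athena JDBC connection string, the example below builds a URL in the style described by the AWS driver documentation; the bucket path is a hypothetical value:

```python
def athena_jdbc_url(address: str, region: str, s3_output: str) -> str:
    """Assemble an Athena JDBC URL from the three required fields."""
    return f"jdbc:awsathena://{address};AwsRegion={region};S3OutputLocation={s3_output}"

url = athena_jdbc_url(
    "athena.us-east-1.amazonaws.com:443",  # service endpoint
    "us-east-1",                           # AWS Region
    "s3://my-bucket/athena-results/",      # hypothetical results path
)
```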
## Caveats {: #caveats }
- Due to a limitation with the JDBC driver itself, the **Existing Table** tab is not available for Athena data connections. You must select **SQL Query** and enter a SQL query to retrieve data from Athena.
{% include 'includes/data-conn-trouble.md' %}
<!--- dc-athena --->
# MySQL {: #mysql }
## Supported authentication {: #supported-authentication }
- Username/password
## Prerequisites {: #prerequisites }
The following is required before connecting to MySQL in DataRobot:
- MySQL account
## Required parameters {: #required-parameters }
The table below lists the minimum required fields to establish a connection with MySQL:
Required field | Description | Documentation
--------------- | ---------- | -----------
`address` | The service endpoint used to connect to MySQL.<br><br>**Example:**<br> `jdbc-cert-mysql.cqyt4ezythbf4.us-east-1.rds.amazonaws.com:3309` | [MySQL documentation](https://dev.mysql.com/doc/connector-j/8.0/en/connector-j-reference-jdbc-url-format.html){ target=_blank }
For more information, see the [MySQL connector documentation](https://dev.mysql.com/doc/connector-j/8.0/en/connector-j-reference.html){ target=_blank }.
{% include 'includes/data-conn-trouble.md' %}
<!--- dc-mysql --->
# Exasol {: #exasol }
## Supported authentication {: #supported-authentication }
- Username/password
## Prerequisites {: #prerequisites }
The following is required before connecting to Exasol in DataRobot:
- Exasol account
## Required parameters {: #required-parameters }
The table below lists the minimum required fields to establish a connection with Exasol:
Required field | Description | Documentation
--------------- | ---------- | -----------
`address` | The connection URL that supplies connection information for Exasol.<br><br>**Example:**<br> `jdbc:exa:testdb.exasol.com:5599` | [Exasol documentation](https://docs.exasol.com/db/latest/connect_exasol/drivers/jdbc.htm){ target=_blank }
`port` | Specific port of network services with which Exasol databases may communicate.<br><br>Auto-populated by DataRobot. | [Exasol documentation](https://docs.exasol.com/db/latest/administration/on-premise/manage_network/system_network_settings.htm){ target=_blank }
Learn about additional [configuration options for Exasol](https://docs.exasol.com/db/latest/connect_exasol/drivers/jdbc.htm){ target=_blank }. See also an end-to-end overview of [connecting to Exasol with DataRobot](https://community.exasol.com/t5/tech-blog/how-to-automate-your-machine-learning-with-datarobot-and-exasol/ba-p/2333){ target=_blank } on the Exasol Community.
{% include 'includes/data-conn-trouble.md' %}
<!--- dc-exasol --->
# Oracle {: #oracle }
There are two data connection types for Oracle Database: Service Name and SID. Use the appropriate parameters for the connection path you use to connect to Oracle Database.
## Supported authentication {: #supported-authentication }
- Username/password for both Service Name and SID
## Prerequisites {: #prerequisites }
The following is required before connecting to Oracle in DataRobot:
- Oracle Database account
## Required parameters {: #required-parameters }
### Service Name {: #service-name}
The table below lists the minimum required fields to establish a connection with Oracle (Service Name):
Required field | Description | Documentation
--------------- | ---------- | -----------
`address` | The connection URL that supplies connection information for Oracle. | [Oracle documentation](https://docs.oracle.com/cd/E11882_01/appdev.112/e12137/getconn.htm#TDPJD129){ target=_blank }
`serviceName` | The service name used to connect to an Oracle instance. | [Oracle documentation](https://docs.oracle.com/cd/B19306_01/server.102/b14237/initparams188.htm#REFRN10194){ target=_blank }
### SID {: #sid}
The table below lists the minimum required fields to establish a connection with Oracle (SID):
Required field | Description | Documentation
--------------- | ---------- | -----------
`address` | The connection URL that supplies connection information for Oracle. | [Oracle documentation](https://docs.oracle.com/cd/E11882_01/appdev.112/e12137/getconn.htm#TDPJD129){ target=_blank }
`SID` | A unique identifier for your database. | [Oracle documentation](https://docs.oracle.com/cd/E11882_01/appdev.112/e12137/getconn.htm#TDPJD129){ target=_blank }
`port` | The port used to connect to the database. | [Oracle documentation](https://docs.oracle.com/cd/E11882_01/appdev.112/e12137/getconn.htm#TDPJD129){ target=_blank }
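The Service Name and SID paths use different thin-driver URL shapes. The sketch below contrasts the two forms as described in Oracle's JDBC documentation; the host and identifiers are hypothetical examples:

```python
def oracle_url_service_name(host: str, port: int, service_name: str) -> str:
    # Thin-driver syntax for a Service Name connection: note the "//" and "/".
    return f"jdbc:oracle:thin:@//{host}:{port}/{service_name}"

def oracle_url_sid(host: str, port: int, sid: str) -> str:
    # Thin-driver syntax for a SID connection: colon-separated.
    return f"jdbc:oracle:thin:@{host}:{port}:{sid}"

# Hypothetical host and identifiers, for illustration only.
svc_url = oracle_url_service_name("db.example.com", 1521, "ORCLPDB1")
sid_url = oracle_url_sid("db.example.com", 1521, "XE")
```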
{% include 'includes/data-conn-trouble.md' %}
<!--- dc-oracle --->
# kdb+ {: #kdb }
## Supported authentication {: #supported-authentication }
- Username/password
## Prerequisites {: #prerequisites }
The following is required before connecting to kdb+ in DataRobot:
- kdb+ account
## Required parameters {: #required-parameters }
The table below lists the minimum required fields to establish a connection with kdb+:
Required field | Description | Documentation
--------------- | ---------- | -----------
`address` | The connection URL used to connect to kdb+.<br><br>**Example:**<br> `jdbc-cert-kdb+.cqz9ezythbf4.us-east-1.rds.amazonaws.com:3306` | [kx documentation](https://code.kx.com/q/interfaces/jdbc-client-for-kdb/){ target=_blank }
{% include 'includes/data-conn-trouble.md' %}
<!--- dc-kdb --->
# Supported databases {: #supported-databases }
<!--- When bumping versions, also update `datarobot_docs/en/api/reference/batch-prediction-api/index.md` --->
DataRobot has tested support for the following databases with JDBC 4.1.
| Database | Version | Driver Jar |
|-----|-----|-----|
| [Amazon Redshift](dc-redshift) | 2.1.0.14 | <a target="_blank" href="https://docs.aws.amazon.com/redshift/latest/mgmt/configure-jdbc-connection.html">Amazon Redshift: Configuring a JDBC driver connection</a>|
| [AWS Athena](dc-athena) | 2.0.35 | <a target="_blank" href="https://docs.aws.amazon.com/athena/latest/ug/connect-with-jdbc.html">Using Athena with the JDBC Driver</a> |
| [Azure SQL](dc-azure) | 9.4.0 | <a target="_blank" href="https://docs.microsoft.com/en-us/sql/connect/jdbc/release-notes-for-the-jdbc-driver?view=sql-server-ver15#previous-releases">JDBC Driver for Azure SQL</a>|
| [Azure Synapse](dc-azure) | 9.4.0 | <a target="_blank" href="https://docs.microsoft.com/en-us/sql/connect/jdbc/release-notes-for-the-jdbc-driver?view=sql-server-ver15#previous-releases">JDBC Driver for Synapse SQL</a>|
| Databricks | 2.6.21 | <a target="_blank" href="https://docs.databricks.com/integrations/bi/jdbc-odbc-bi.html#jdbc-driver">Databricks JDBC Driver </a> |
| [Exasol](dc-exasol) | 7.1.2 | <a target="_blank" href="https://docs.exasol.com/db/latest/connect_exasol/drivers/jdbc.htm">Exasol JDBC Driver </a> |
| [Google BigQuery](dc-bigquery) | spark-1.2.23.1027 | <a target="_blank" href="https://cloud.google.com/bigquery/docs/reference/odbc-jdbc-drivers">ODBC and JDBC drivers for BigQuery</a>|
| InterSystems | 3.0.0 | <a target="_blank" href="https://gettingstarted.intersystems.com/development-setup/jdbc-connections/">InterSystems JDBC Connections</a>|
| [kdb+](dc-kdb) * | 2019.11.11 | <a target="_blank" href="https://github.com/KxSystems/kdb/blob/master/c/jdbc.jar">Kx Systems jdbc.jar</a> ({% include 'includes/github-sign-in.md' %}) |
| [Microsoft SQL Server 6.4](dc-ms-sql-srvr) | 12.2.0 | <a target="_blank" href="https://docs.microsoft.com/en-us/sql/connect/jdbc/download-microsoft-jdbc-driver-for-sql-server?view=sql-server-ver15">Microsoft JDBC Driver for SQL Server</a> |
| [MySQL](dc-mysql) | 8.0.32 | <a target="_blank" href="https://www.mysql.com/products/connector/">MySQL Connectors</a>|
| [Oracle 6](dc-oracle) | oracle-xe_11.2.0-1.0 | <a target="_blank" href="https://repo1.maven.org/maven2/com/oracle/database/jdbc/">Central Repository Oracle JDBC</a> |
| [Oracle 8](dc-oracle) | 12.2.0.1 | <a target="_blank" href="https://repo1.maven.org/maven2/com/oracle/database/jdbc/">Central Repository Oracle JDBC</a> |
| [PostgreSQL](dc-postgresql) | 42.5.1 JDBC 4.2 | <a target="_blank" href="https://jdbc.postgresql.org/download/">PostgreSQL JDBC Driver</a> |
| [Presto](dc-presto) | 0.216 | <a target="_blank" href="https://prestodb.io/download.html">Presto downloads</a>|
| [SAP HANA](dc-sap-hana) | 2.15.10 | <a target="_blank" href="https://developers.sap.com/tutorials/hana-clients-jdbc.html">Connect Using the SAP HANA JDBC Driver</a>|
| [Snowflake](dc-snowflake) | 3.13.29 | <a target="_blank" href="https://docs.snowflake.com/en/user-guide/jdbc-download.html">Snowflake JDBC driver repository</a>|
| [TD-Hive](dc-treasure)| 0.5.10| <a target="_blank" href="https://docs.treasuredata.com/display/public/PD/JDBC+Driver+for+Hive+Query+Engine">JDBC Driver for Hive</a> |
| [Treasure Data](dc-treasure) | Select the query engine you contracted for. | <a target="_blank" href="https://www.treasuredata.com/data-integrations/">Treasure Data Integrations</a> |
\* Only supported with JDBC, not with the kdb+ native query language `q`.
Self-Managed AI Platform users: See [Manage JDBC drivers](manage-drivers) for steps to upload JDBC drivers.
## Deprecated databases {: #deprecated-databases }
Support is deprecated for these drivers:
| Database | Version |
|----|----|
| Apache Hive for JDBC | All |
| Apache Hive | All |
| Elasticsearch | All |
| Microsoft SQL Server | 6.0 |
!!! note
    Older driver versions may still exist, but DataRobot recommends that you use the latest supported versions of the drivers.
<!--- index --->
# SAP HANA {: #sap-hana }
## Supported authentication {: #supported-authentication }
- Username/password
## Prerequisites {: #prerequisites }
The following is required before connecting to SAP HANA in DataRobot:
- SAP HANA account
## Required parameters {: #required-parameters }
The table below lists the minimum required fields to establish a connection with SAP HANA:
Required field | Description | Documentation
--------------- | ---------- | -----------
`address` | The server's address used to connect to SAP HANA. | [SAP documentation](https://help.sap.com/docs/SAP_HANA_PLATFORM/0eec0d68141541d1b07893a39944924e/b250e7fef8614ea0a0973d58eb73bda8.html?version=2.0.03)
For more information, see the [SAP HANA connector documentation](https://help.sap.com/docs/SAP_HANA_CLIENT/f1b440ded6144a54ada97ff95dac7adf/109397c2206a4ab2a5386d494f4cf75e.html).
{% include 'includes/data-conn-trouble.md' %}
<!--- dc-sap-hana --->
# Amazon S3 {: #amazon-s3 }
## Supported authentication {: #supported-authentication }
- AWS Credential
## Prerequisites {: #prerequisites }
The following is required before connecting to Amazon S3 in DataRobot:
- Amazon S3 account
## Required parameters {: #required-parameters }
The table below lists the minimum required fields to establish a connection with Amazon S3:
Required field | Description | Documentation
--------------- | ---------- | -----------
`bucketName` | A container that stores your data in Amazon S3. | [AWS documentation](https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingBucket.html){ target=_blank }
`bucketRegion` | The AWS region of your bucket. | [AWS documentation](https://docs.aws.amazon.com/general/latest/gr/s3.html){ target=_blank }
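Because `bucketName` must follow Amazon's bucket-naming rules, it can be useful to sanity-check the value before configuring the connection. The sketch below is a rough check against the general rules (3 to 63 characters; lowercase letters, digits, dots, and hyphens; starting and ending with a letter or digit). See the AWS documentation linked above for the authoritative rules:

```python
import re

def looks_like_valid_bucket_name(name: str) -> bool:
    """Rough check against the general S3 bucket-naming rules.

    This is a simplification: the full rules (e.g., no IP-address-style
    names) are in the AWS documentation.
    """
    return re.fullmatch(r"[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]", name) is not None
```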
## Caveats {: #caveats }
- This connector does not use refresh tokens (although you can specify one as part of your AWS credential). Only a static access key, which is not refreshed periodically, works with the connector.
<!--- dc-s3 --->
# Presto {: #presto }
## Supported authentication {: #supported-authentication }
- Username/password
## Prerequisites {: #prerequisites }
The following is required before connecting to Presto in DataRobot:
- Presto account
## Required parameters {: #required-parameters }
The table below lists the minimum required fields to establish a connection with Presto:
Required field | Description | Documentation
--------------- | ---------- | -----------
`address` | The connection URL that supplies connection information for Presto. | [Presto documentation](https://prestodb.io/docs/current/installation/jdbc.html#connecting){ target=_blank }
Learn about additional [configuration options for Presto](https://prestodb.io/docs/current/installation/jdbc.html#connecting){ target=_blank }.
{% include 'includes/data-conn-trouble.md' %}
<!--- dc-presto --->
# Amazon Redshift {: #amazon-redshift }
## Supported authentication {: #supported-authentication }
- Username/password
## Prerequisites {: #prerequisites }
The following is required before connecting to Redshift in DataRobot:
- Amazon Redshift account
## Required parameters {: #required-parameters }
The table below lists the minimum required fields to establish a connection with Redshift:
Required field | Description | Documentation
--------------- | ---------- | -----------
`address` | The connection URL that supplies connection information for Redshift.<br><br>**Example:** `redshift.cjzu88438s1ja.us-east-1.redshift.amazonaws.com:7632` | [Redshift documentation](https://docs.aws.amazon.com/redshift/latest/mgmt/jdbc20-build-connection-url.html){ target=_blank }
`database` | A unique identifier for your database. | [Redshift documentation](https://docs.aws.amazon.com/redshift/latest/dg/r_CURRENT_DATABASE.html){ target=_blank }
Learn about additional [configuration options for Redshift](https://docs.aws.amazon.com/redshift/latest/mgmt/jdbc20-configuration-options.html){ target=_blank }.
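As a sketch of how the `address` and `database` fields combine into a Redshift JDBC connection string, the example below follows the URL shape in the Redshift documentation linked above; the cluster endpoint and database name are hypothetical:

```python
def redshift_jdbc_url(address: str, database: str) -> str:
    """Combine the address and database fields into a Redshift JDBC URL."""
    return f"jdbc:redshift://{address}/{database}"

url = redshift_jdbc_url(
    "examplecluster.abc123xyz789.us-east-1.redshift.amazonaws.com:5439",
    "dev",  # hypothetical database name
)
```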
{% include 'includes/data-conn-trouble.md' %}
<!--- dc-redshift --->
---
title: Predictions
description: You can make predictions with models using engineered features the same way as with any other DataRobot model, using the Make Predictions or the Deploy tab.
---
# Predictions {: #predictions }
You can make predictions with models using engineered features in the same way as you do with any other DataRobot model. To make predictions, use either:
* [Make Predictions](#use-make-predictions) to generate predictions, in-app, on a scoring dataset.
* [Deploy](#use-deploy-for-batch-predictions) to create a deployment capable of batch predictions and prediction integrations.
When using Feature Discovery with either method, the dataset configuration options available at prediction time are the same. The following sections describe that configuration, followed by descriptions of each tab option.
## Select a secondary dataset configuration {: #select-a-secondary-dataset-configuration }
When applying a dataset configuration to prediction data, Feature Discovery allows you to use:
* The [default](#use-the-default-configuration) configuration.
* An alternative, [existing](#use-an-alternate-configuration) configuration.
* A [newly created](#create-a-new-configuration) configuration.
If the feature list used by a model doesn’t rely on any of the secondary datasets supplied—because no features were derived or a custom feature list excluded the Feature Discovery features, for example—DataRobot supplies an informational message.
### Use the default configuration {: #use-the-default-configuration }
By default, DataRobot makes predictions using the secondary dataset configuration defined by the relationships used when building the project. You can view this configuration by clicking the **Preview** link.
From **Make Predictions**:

From **Deploy**:

The default configuration cannot be modified or deleted from the predictions tabs.
### Use an alternate configuration {: #use-an-alternate-configuration }
To select an alternative to the default configuration—or to create a new configuration—click **Change**. When you create new secondary dataset configurations, they become available to all models in the project *that use the same feature list*. Note that you must make any changes to the secondary dataset configuration before uploading your prediction dataset.
#### Apply an existing configuration {: #apply-an-existing-configuration }
To apply a different, existing configuration, click **Change** to open the **Secondary Datasets Configuration** modal:

Expand the menu and select one of the following options:
* To preview but not select the configuration, click on a configuration name and select **Preview** from the menu. The [Secondary Datasets Configurations](#secondary-datasets-configurations-modal) modal opens.
* To select a configuration other than the currently selected item, click **Select** from the menu. The item highlights and the [Secondary Datasets Configurations](#secondary-datasets-configurations-modal) modal opens.
* Click **Delete** to remove a configuration. You cannot delete the default configuration.
#### Create a new configuration {: #create-a-new-configuration }
To create a new configuration, click **Change** and then **create new** in the resulting **Secondary Datasets Configurations** modal:

The [Secondary Datasets Configurations](#secondary-datasets-configurations-modal) modal expands to include configuration fields.
After changing or creating the configuration, you will see the entry listed in the secondary dataset configuration preview and also as the selected configuration on the **Make Predictions** or **Deploy** page. Click to select which configuration to use when making predictions with this model and click **Apply**.

### Secondary Datasets Configurations modal {: #secondary-datasets-configurations-modal }
Access the modal by clicking **Preview**, **Change > Preview**, or **Change > create new**.

Complete the fields as follows:
| | Field | Description |
|-----|-----------|--------------|
|  |Configuration name | The name of the configuration, "New Configuration" by default. Click the pencil icon to change. |
|  |Configuration datasets | The dataset(s) that make up the relationship for the new or existing secondary configuration. |
|  |Dataset details | Basic dataset information, similar to, but less detailed than, the information available from the **AI Catalog**. Click a configuration dataset to display its details. |
|  |Snapshot policy | The snapshot policy to apply to the dataset. By default, DataRobot applies the [snapshot policy](#snapshot-reference) you defined in the relationship graphs to your secondary datasets. |
|  |Replace | A tool to choose which dataset(s) to use as the basis of the relationship to the primary dataset. Clicking **Replace** provides a list of available files in the **AI Catalog**. Click a dataset to view dataset details; click **Use dataset in configuration** to add the dataset and return to the previous screen. |
### Snapshot reference {: #snapshot-reference }
DataRobot applies the snapshot policy for a secondary dataset as follows:
* *Dynamic*: DataRobot pulls data from the associated data connection at the time you upload a new primary dataset.
* *Latest snapshot*: DataRobot uses the latest snapshot available when you upload a new primary dataset.
* *Specific snapshot*: DataRobot uses a specific snapshot, even if a more recent snapshot exists.
Note that:
1. Changes apply to primary datasets that are uploaded after the changes are saved; you must review the secondary datasets before uploading a primary dataset.
2. Changes are only applicable while you are on the **Make Predictions** or **Deploy** page. Leaving or refreshing the page causes the default snapshot policy to apply to a new primary dataset.
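The three policies above amount to a simple dispatch, sketched here with a toy in-memory snapshot store. The structure and names are illustrative, not DataRobot internals:

```python
from datetime import date
from typing import Optional

# Toy snapshot store: (snapshot_date, payload) pairs for one secondary dataset.
snapshots = [
    (date(2023, 1, 1), "january-snapshot"),
    (date(2023, 6, 1), "june-snapshot"),
]

def resolve_snapshot(policy: str, pinned: Optional[date] = None) -> str:
    """Illustrative dispatch over the three snapshot policies."""
    if policy == "dynamic":
        # Pull fresh data from the data connection; no snapshot involved.
        return "live-query"
    if policy == "latest":
        # Use whichever snapshot is newest at prediction time.
        return max(snapshots)[1]
    if policy == "specific":
        # Use the pinned snapshot even if newer ones exist.
        return dict(snapshots)[pinned]
    raise ValueError(f"unknown policy: {policy}")
```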
### Features required in relationships {: #features-required-in-relationships }
After the Feature Discovery and reduction workflow, DataRobot may prune some engineered features that were not used for modeling. This pruning may mean that some raw features are never used, and as such, are not necessary to include in the secondary datasets used for predictions. In other words, if only a subset of raw features are used in the final models, your secondary datasets do not need to include them. This allows you to collect and upload a subset of datasets and/or features, which can result in faster prediction and deployment times.
When creating secondary dataset configurations, the modal initially displays the datasets used when building the model. Below the dataset entry, DataRobot reports the raw features used to engineer new features (the required raw features). Any replacement datasets must include those features:

If you replace a dataset with an alternative that does not include the required features, DataRobot returns a validation error:

## Use Make Predictions {: #use-make-predictions }
If you accessed the secondary dataset configurations from the [**Make Predictions**](predict) tab, complete the remaining fields as you would with any other DataRobot model. When you upload [test data](predict) to run against the model, DataRobot validates that it includes the [required features](#features-required-in-relationships) and returns a validation error if it does not. If validation passes, the uploaded dataset is added to the list of available datasets and uses the new configuration.
Remember that you must finalize the secondary dataset configuration before uploading a dataset:

Once you upload a new scoring dataset, DataRobot extracts relevant data from secondary datasets that were used in the associated project relationships. Note that if the secondary datasets or selected snapshot policy are dynamic, DataRobot prompts for authentication. The **Import data from...** button is locked until credentials are provided.
## Use Deploy for batch predictions {: #use-deploy-for-batch-predictions }
You can make batch predictions using the [**Deploy**](deploy-model) tab. Once deployed, the model is added to the Deployments page and [model deployment and management](mlops/index) functionality is available.
To make batch predictions:
1. From the chosen model, select **Predict** > **Deploy**. The [**Deploy**](deploy-model) tab opens for configuration. If the model is not the one chosen and prepared for deployment by DataRobot, consider using the **Prepare for deployment** option.
2. Click **Deploy model** to launch the **Deployments** page for further configuration of the new deployment:

3. In the [**Secondary datasets configuration**](fd-overview#define-relationships) section, select a configuration. Reference the details on working with [these configurations](#select-a-secondary-dataset-configuration).
4. If the secondary dataset configuration was created with a dynamic snapshot policy, authenticate to proceed. When authentication succeeds, or if it is not required, click the **Create deployment** button (upper right corner). If the button is not activated, make sure you have correctly configured the [association ID](accuracy-settings#association-id); if the association ID is enabled, you must supply the name of the feature containing the IDs.
5. When deployment completes, DataRobot opens the **Overview** page for your deployment on the Deployments page. Select **Predictions > Prediction API** tab, select **Batch** and **API Client** to access the snippet required for making batch predictions:

6. Click the **Copy script to clipboard** link.
Follow the sample and make the necessary changes when you want to integrate the model, via API, into your production application.
7. You can view the secondary dataset configuration used in the deployment from the **Deployments > Settings** tab.

Click preview to open a modal displaying the configuration.
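The script copied from the Prediction API tab ultimately submits a batch prediction job specification. As a rough, non-authoritative sketch of what such a specification contains (the field names, intake/output types, and deployment ID below are illustrative assumptions; rely on the snippet copied from the UI for the actual contract):

```python
def build_batch_prediction_job(deployment_id: str, intake_url: str, output_url: str) -> dict:
    """Assemble an illustrative batch prediction job specification."""
    return {
        "deploymentId": deployment_id,
        "intakeSettings": {"type": "s3", "url": intake_url},
        "outputSettings": {"type": "s3", "url": output_url},
    }

job = build_batch_prediction_job(
    "abc123",                          # hypothetical deployment ID
    "s3://my-bucket/scoring.csv",      # hypothetical input location
    "s3://my-bucket/predictions.csv",  # hypothetical output location
)
```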
### Batch prediction considerations {: #batch-prediction-considerations }
* Only DataRobot models are supported; external and custom models are not.
* The governance workflow and model package export are not supported for Feature Discovery models.
* You cannot replace a Feature Discovery model with a non-Feature Discovery model or vice versa.
* You cannot change the configuration once a deployment is created. To use a different configuration, you must create a new deployment.
* When a Feature Discovery model is replaced with another Feature Discovery model, the configuration used by the new model becomes the default configuration.
* Feature Discovery predictions are slower than predictions from other DataRobot models because feature engineering is also applied at prediction time.
<!--- fd-predict --->
---
title: Feature Discovery projects
description: How to create a project from multiple datasets. You define the relationships. Feature Discovery aggregates the secondary datasets to enrich the primary dataset.
---
# Feature Discovery projects {: #feature-discovery-projects }
Feature Discovery is based on relationships—between datasets and the features within those datasets. DataRobot provides an intuitive relationship editor that allows you to build and visualize these relationships. The end product is a multitude of additional features that result from these linkages. These derived features can then train more accurate models and generate better predictions. DataRobot’s Feature Discovery engine analyzes the graphs and the included datasets to determine a feature engineering “recipe,” and from that recipe generates secondary features for training and predictions.
!!! note
    See the Feature Discovery [file requirements](file-types#feature-discovery-file-import-sizes) for information on dataset sizes.
Review the next section to [get started with Feature Discovery](#get-started-with-feature-discovery). Or, skip to the step-by-step instructions that describe:
1. [Add datasets to a project](#add-datasets).
2. [Create relationships](#define-relationships).
3. [Set join conditions](#set-join-conditions).
4. [Assess the quality of relationship configurations](#relationship-quality-assessment).
5. [Configure Feature Discovery settings](#feature-discovery-settings).
6. [Start the project](#start-the-project).
You can also take a deeper dive into [time-aware feature engineering](fd-time), [derived features](fd-gen), and [making predictions](fd-predict) on models that have derived features.
## Get started with Feature Discovery {: #get-started-with-feature-discovery }
In most cases, all you need to start a Feature Discovery project is a simple *primary* dataset that includes:
* The target (column that you want to predict).
* An identifier (for example, *customer_id* or *transaction_id*) to link the dataset to additional related datasets. This *key* serves as the basis of dataset joins.
* An optional time index—a date feature in the primary dataset—to support [time-aware Feature Discovery](fd-time#prediction-point-and-time-indexes). This date feature is used as the prediction point for generating new features.
Each record of the primary dataset represents the desired unit of analysis. From this primary dataset, DataRobot guides you through creating relationships to additional datasets, called *secondary datasets*.

Secondary datasets have features that can potentially enrich the primary dataset. While primary and secondary datasets may have a one-to-one relationship when they are added, it is not required. In most cases, DataRobot aggregates and then summarizes features in the secondary datasets and, from there, enriches the primary dataset.
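Conceptually, this aggregate-then-enrich step works like the following stdlib-only sketch, which rolls up a secondary transactions table by its join key and attaches the summary to each primary row. The column names are invented for illustration:

```python
from collections import defaultdict

# Primary dataset: one row per unit of analysis.
primary = [{"customer_id": 1, "target": 0}, {"customer_id": 2, "target": 1}]

# Secondary dataset: many rows per customer.
transactions = [
    {"customer_id": 1, "amount": 10.0},
    {"customer_id": 1, "amount": 30.0},
    {"customer_id": 2, "amount": 5.0},
]

# Aggregate the secondary dataset by the join key...
totals = defaultdict(float)
for row in transactions:
    totals[row["customer_id"]] += row["amount"]

# ...then enrich each primary row with the derived feature.
enriched = [
    {**row, "transactions_sum_amount": totals.get(row["customer_id"], 0.0)}
    for row in primary
]
```

DataRobot's actual derivation recipe is far richer (multiple windows, statistics, and time-awareness), but the join-key aggregation shown here is the core idea.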
### Sample use case {: #sample-use-case }
The following sections use an example to illustrate how DataRobot automatically discovers new features from multiple datasets to predict whether a loan will default. In the primary dataset, **CreditRisk - Loan Applications**, the *is-bad* column is the project target. The relation between the datasets is the *CustID* column.

Two additional relational datasets, **CreditRisk - Credit Inquiries** and **CreditRisk - Tradeline Accounts**, are the secondary datasets used for Feature Discovery.

Once model building begins, DataRobot runs through EDA2, adding newly created features to the [**Data** page](fd-gen). The **Data** page provides a variety of information about all the resulting project data, both new and old.
## Add datasets {: #add-datasets }
From the **AI Catalog**, select the primary dataset and click **Create project**. Then, enter the target feature.
!!! note
This procedure shows how to load datasets using the **AI Catalog**, so to begin, make sure all the assets are in the catalog. Alternatively, you can use the drag-and-drop method to upload datasets. If you do so, all datasets that you upload are automatically registered to the **AI Catalog**.
_A valid Feature Discovery project requires at least one secondary dataset_—the following tabs describe how to load additional datasets into the project from both the **Start** page and the **relationship editor**:
=== "From the Start page"
1. On the **Start** page, click **Add datasets** to add one or more _additional_ datasets to the project.

2. On the **Specify prediction point** page of the **relationship editor**, optionally **Select a date feature to use as a prediction point**. This date/time feature from the primary dataset serves as a reference date for feature derivation windows.

!!! note
The step to specify a prediction point does not display if you have already specified a prediction point for the project.
For an in-app explanation of prediction points, expand **Show Example**.
3. Click **Set up as prediction point** for a time-aware Feature Discovery project or **Continue without prediction point** for a non time-aware project.
!!! note
Although you can select the same date feature used for the out-of-time validation (OTV) partition as the prediction point, clicking **Continue without prediction point** automatically uses the OTV partition feature when generating new features.
If you [add or edit the prediction point](#primary-datasets), DataRobot accounts for that change when generating new features.
4. In the **Add datasets** page of the **relationship editor**, select a data import method under **Add Data From**.

This example shows how to add a dataset from the **AI Catalog**.
5. From the **AI Catalog**, select the datasets you want to include by clicking **Select**. Use the search functionality to easily locate datasets for selection. When finished, click **Add**.

6. Click **Continue** to finalize your selection. The secondary datasets you select on this page are immediately added to the configuration, so even if you reload the page without clicking **Continue**, the data is not lost.

The **Define Relationships** page displays the datasets.

Best practice suggests continuing within this editor to define relationships. You can, however, click **Continue to project** to return to the **Start** screen.

The datasets display and you can see the number of relationships that have been defined.
At any time, you can click **Define relationships** to return to the **Define Relationships** page.
=== "From the relationship editor"
If your project needs more than one secondary dataset, you can add the others after saving the first. From the **Define Relationships** page:
1. Click **Add datasets** and select a data import method.

This example shows how to add a dataset from the **AI Catalog**.
2. From the **AI Catalog**, select the datasets you want to include by clicking **Select**. Use the search functionality to easily locate datasets for selection. When finished, click **Add**.

The **Define Relationships** page displays the datasets.

Each dataset displayed on the canvas has a menu with shortcuts to dataset-related tasks. See details of working with [primary datasets](#primary-datasets) and [secondary datasets](#secondary-datasets).
After adding secondary datasets to your project, [define the relationships](#define-relationships) between the datasets.
### Snowflake integration {: #snowflake-integration }
An integration between DataRobot and Snowflake allows joint users to both execute data science projects in DataRobot and perform computations in Snowflake as a way to optimize workload performance. **Feature Discovery** training and prediction workflows push down relational inner joins, projection, and filter operations to the Snowflake platform (via SQL). By natively conducting joins in the Snowflake database, data is filtered into smaller datasets for transfer across the network before loading into DataRobot. The smaller datasets reduce project runtimes.
To enable integration with Snowflake, the following requirements must be met:
* A Snowflake [data connection](data-conn#dataconn-add) is set up.
* All secondary datasets are stored in Snowflake.
* All Snowflake sources are stored in the same warehouse.
* All datasets are configured as [dynamic datasets](catalog-asset#asset-states) in the AI Catalog.
* You have write permissions to one of the schemas in use or one `PUBLIC` schema of the database in use.
If the above requirements are met, DataRobot automatically establishes the integration and displays the Snowflake icon and **Snowflake mode enabled**, in blue, at the top of the **Define Relationships** page.

### View dataset details {: #view-dataset-details }
You can access dataset details directly from the relationship editor using one of the following methods:
=== "Brief description"
On the dataset tile, hover over the line beneath the dataset name to display metadata for the dataset.

=== "Detailed description"
Click the menu icon on the top right of the dataset tile and select **Details** to open the [**Info**](catalog-asset#work-with-metadata) page in the **AI Catalog**. From here you can access the profile, feature lists, relationships, version history, and comments associated with the dataset.

You can also delete the dataset from this menu.

## Define relationships {: #define-relationships }
Once all datasets are loaded, the next step is to define relationships on the **Define Relationships** page. The primary dataset is on the canvas while any secondary sets are listed in the left pane. After establishing a relationship between two datasets, you can define the relationship by setting [join conditions](#set-join-conditions) and [feature derivation windows (FDW)](#set-feature-derivation-windows) for time-aware feature engineering.
To define relationships:
1. Click a secondary dataset to highlight it; notice the addition of a plus sign on the primary set.

2. Click the plus sign. DataRobot adds the selected secondary dataset to the canvas and opens the configuration editor.

The following table describes the elements of the **Create new relationship** page:
| | Element | Description |
|---|-----------|--------------|
|  | Secondary dataset for join | Sets the secondary dataset used in the join. Change via the dropdown to any added dataset. Changes are reflected in the canvas below. |
|  | Primary dataset for join | Sets the primary dataset used in the join. |
|  | Suggested join condition | Sets the join condition (feature) for the corresponding dataset (listed above the condition). DataRobot suggests up to five conditions, each of which is editable. Use the dropdown to select a new feature; use the trash icon () to delete the join. |
|  | Add join condition | Provides a manual join configuration option. |
|  | Save or Save and configure time-aware | Saves the relationship configuration. **Save** is the option if there is no date feature or you did not set a prediction point. If you did set a [prediction point](#load-datasets) from the primary dataset, the **Save and configure time-aware** button displays. |
|  | Canvas display controls | Zooms in or out, or resets the default display size. |
|  | Dataset menu options | Provides access to a variety of actions that can be enacted on a [primary](#primary-dataset-menu-actions) or [secondary](#secondary-dataset-menu-actions) dataset. |
|  | Join edit launch | Opens the relationship editor, allowing you to define or modify the relationship between the datasets joined by the line you clicked. |
|  | Primary icon | Indicates, with a bullseye icon, that this is the primary dataset. |
|  | Tour launch | Opens a short tour that provides an overview of configuring Feature Discovery. |
|  | Continue to project | Returns to the **Start** screen where you can revise your time-aware settings, set advanced options, set a modeling mode, and start the modeling process.|
### Set join conditions {: #set-join-conditions }
If tables in your datasets are well-formed, DataRobot automatically detects compatible features and creates up to five "suggested" joins. You can modify the suggested join using the dropdowns associated with each join key.

You can also manually create join keys by clicking **Add join condition**. In the resulting dialog, select a join feature from each dataset from the feature dropdown.
??? note "Join feature type compatibility and restrictions"
See the table below for compatible join types when creating or modifying joins in the relationship editor:
Feature type | Compatible join types
---------- | -----------
Numeric | Numeric, Categorical
Categorical | Categorical, Numeric, Text
Text | Text, Categorical
Date | Date
The following feature types cannot be used as join keys:
* Summarized Categorical
* Length
* Currency
* Percentage
* Audio
* Image
* Document

Once you've added all of your secondary datasets and selected your relationship configuration settings, click **Save and configure time-aware** or **Save** for a non time-aware project.
* If the project is not time-aware, the **Start** page displays.
* If the project is time-aware, the **Time-aware feature engineering** page displays where you can [configure FDWs](#set-feature-derivation-windows).
### Set feature derivation windows {: #set-feature-derivation-windows}
After adding secondary datasets to a time-aware project, you can define the FDWs—a rolling window of past values used to generate features before the prediction point. The FDW constrains the time history—in the example below, no further back than 30 days, no more recent than 2 days.
1. Click **Select time feature** to choose a time index feature for the secondary dataset.

2. Configure the FDWs. You can configure up to three FDWs for each dataset, but each window must be unique. To add a FDW, click **Add window**.

Once set, the FDW is reflected in the dataset's tile on the canvas:

These time-aware settings ensure that the generated features are based only on records that occur before the prediction point. For more details, see [Time-aware feature engineering](fd-time).
## Work with datasets {: #work-with-datasets }
Once a dataset is added to the canvas, you can modify and refine its configuration. Primary datasets appear on the canvas by default, but all secondary datasets must be added.
### Primary datasets {: #primary-datasets }
!!! note
Be sure to save your configuration before using the menu options. Unsaved changes are lost when you leave a page.
Working from the canvas, you can select the menu option on the dataset tile. The primary dataset allows you to add a relationship or edit the prediction point:

| Option | Description |
|-----------|----------------|
| Add relation | Choose **Add relation** when you don't have any previous relationships configured to open the **Create new relationship** page. This is the equivalent of selecting the dataset from the list on the left and clicking the plus sign on the primary's canvas tile. Once the page opens, select a secondary dataset from the dropdown and it is added to the canvas. |
| Edit prediction point | Select **Edit prediction point** to choose a different date feature to use as your prediction point. |
### Secondary datasets {: #secondary-datasets }
When a secondary dataset has been selected and moved to the canvas, a menu option becomes available on its tile. The table below describes the options available from the menu:

| Option | Description |
|-----------|----------------|
| Add relation | Opens the relationship editor and allows you to select a dataset (from any available in the left pane) to join with. |
| Edit alias | Allows you to set an alias for the dataset. The string displays on the canvas as the secondary dataset name. The alias does not change the display in the left-pane dataset list or the relationship editor pages. |
| [Configure dataset](#configure-secondary-datasets) | Opens the dataset configuration editor, where you can set dataset details. |
| Configure time-awareness | Opens the time-aware feature engineering configuration dialog, where you can select a time index for the secondary dataset or confirm that the correct date/time feature is selected. |
| Details | Click to open the [**Info**](catalog-asset#work-with-metadata) window for the dataset in the **AI Catalog**. |
| Delete | Deletes the dataset, and all its relationships, from the current relationship configuration. The dataset is still available to the configuration and listed in the left panel. |
#### Configure secondary datasets {: #configure-secondary-datasets }
Selecting **Configure dataset** from a secondary dataset menu opens the **Dataset Editor**.

From here you can:
* Change the dataset alias. If not manually set, DataRobot auto-generates an alias based on the file name. Click in the box to modify the alias; the alias for the primary dataset cannot be modified.
* Choose a snapshot policy, either Latest, Fixed, or Dynamic, to use for this project. By default, the selected snapshot policy will apply at [prediction time](fd-predict).
* Choose a feature list to apply against the corresponding dataset. Use this option to limit the size of the table by selecting relevant features. You can create new feature lists from the [**AI Catalog**](catalog-asset#work-with-feature-lists).
## Relationship Quality Assessment {: #relationship-quality-assessment }
After configuring at least one secondary dataset, you can test the quality of those relationship configurations to identify and resolve potential problems early in the creation process. The Relationship Quality Assessment tool verifies join keys, dataset selection, and time-aware settings before EDA2 begins.
Click the **Review configuration** button to trigger the Relationship Quality Assessment.

A progress indicator (loading spinner) displays on each dataset and on the **Review Configuration** button, which is disabled, to indicate that an assessment is currently running.

Once the assessment is complete, DataRobot marks all tested datasets. Those with identified issues display a yellow warning icon and those with no identified issues display a green tick.
??? tip "Deep dive: Relationship assessments"
Depending on the project type, DataRobot assesses the relationship's enrichment rate, window settings, and most recent data—each of which is described in the table below:
| Category | Description | Solution | Project type |
| ---------- | ----------- | -------- | --------- |
| Enrichment rate | Quickly determines, as a percentage, how many rows in the secondary dataset map to rows in the primary table. | Review the dataset and relationship. | All |
| Window settings | Determines how many rows in the secondary dataset map to the primary dataset within the specified FDWs. | Expand the window settings to find more rows. | Time-aware |
| Most recent data | Compares the minimum and maximum time index of the secondary and primary datasets to determine if the secondary dataset is outdated. | Review the selected feature list and snapshot policy. | Time-aware |
Assessments are always updated for JDBC sources with dynamic snapshot policy.
DataRobot calculates enrichment rate using the following formula:
(`rows_of_primary_that_can_be_mapped_to_secondary` / `total_rows_of_primary`) x `100`
Select the warning icon to view a summary of the issues with suggested potential fixes. A summary of the issues identified during the assessment is displayed at the top of the window.

To open the detailed report, click the orange arrow on the right. DataRobot breaks down the assessment by category, providing additional information to diagnose the issue. If a secondary dataset has multiple FDWs, a detailed report is created for each one.

To resolve warnings, click the orange link displayed below each warning— Review dataset, Review relationship, or Review window settings—and a pane appears at the top of the relationship editor allowing you to modify relationship configurations.

After EDA2 completes and model building begins, you can view the most recent Relationship Quality Assessment in the **Data > Feature Discovery** tab.
## Feature Discovery settings {: #feature-discovery-settings}
The Feature Discovery process uses a variety of heuristics to determine the list of [features to derive](fd-gen) in a DataRobot project. In **Feature Discovery Settings**, you can control which transformations DataRobot will try when deriving new features ([feature engineering controls](#feature-engineering-controls)), as well as set DataRobot to automatically remove redundant features and those with low impact ([feature reduction](#feature-reduction)).
To access **Feature Discovery Settings**, click the settings gear on the **Define Relationships** page.

### Feature engineering controls {: #feature-engineering-controls }
You can influence how DataRobot conducts feature engineering by setting feature engineering controls. You might want to do this to:
* Use your domain knowledge to guide the feature engineering process and improve the quality of the derived features.
* Speed up feature engineering.
* Improve accuracy by deriving more features, for example, using [categorical statistics](fd-gen#categorical-statistics), skewness, and kurtosis.
* Exclude specific transforms that might be too complex to explain to business stakeholders. You can exclude these features post-modeling but that adds to the complexity of the modeling process.
Set the feature engineering options in the relationship editor prior to [EDA2](eda-explained#eda2).
2. In **Feature Discovery Settings**, click the **Feature Engineering** tab. Consider which feature engineering transformations make the most sense for your project and select the ones you want DataRobot to try when deriving new features.

You can hover over a transformation to view a tool tip that describes it.

??? note "Latest vs. Latest within window"
Transformation | Description | Default
---------- | ----------- |
Latest | Generates new features by exploring all historical data up until the end point of any defined FDWs. Note that this method ignores all FDW start points. | Disabled
Latest within window | Generates new features within the defined FDW. For time-aware feature engineering, only the data within the FDW is required when making predictions. | Enabled
3. Click **Save changes**.
### Feature reduction {: #feature-reduction }
During Feature Discovery, DataRobot generates new features then removes the features that have low impact or are redundant. This is called *feature reduction*. You can instead include all features when building models by disabling feature reduction using the following method:
In the relationship configuration (the **Define Relationships** page), click the settings () gear. Select the **Feature Reduction** tab and toggle off **Use supervised feature reduction**:

## Start the project {: #start-the-project }
1. Once you are happy with the definition of the relationship(s), click **Continue to project** to return to the **Start** screen.

The **Secondary Datasets** section provides visual queues that provide details about the secondary datasets.
| | Visual queue | Indicates
|---|---|---|
|  | Datasets with blue text | The dataset is in use and part of the project. |
|  | Datasets with white text | The dataset is loaded but not part of the relationship definition. |
|  | Linked datasets | The number of datasets linked with this dataset. |
|  | Number of datasets and relationships | The number of secondary datasets and how many have relationships defined. |
2. Click **Start**.
DataRobot conducts feature engineering as part of EDA2 and begins generating model blueprints.
### Share assets {: #share-assets }
As with any DataRobot project, you can share Feature Discovery projects (depending on your permissions). The assignable [roles](roles-permissions) provide different levels of permission for the recipient. Unique to Feature Discovery projects, however, is the ability to share engineering graphs and datasets as well.
To share a project, click the share icon (). For the recipient to interact with the project, they must have access to the additional assets. By default, assets are not shared. Check to enable sharing relationships and datasets, or DataRobot provides a warning:

Note that in addition to the assigned role, the listing of project users also indicates whether project assets have been shared.

|
fd-overview
|
---
title: Feature Discovery
description: With DataRobot, you can automatically discover and generate new features from multiple datasets, without consolidating manually.
---
# Feature Discovery {: #feature-discovery }
To deploy AI across the enterprise, you must be able to access relevant features to make the best use of predictive models. Often, the starting point of your data does not contain the right set of features. Feature Discovery discovers and generates new features from multiple datasets so that you no longer need to perform manual feature engineering to consolidate multiple datasets into one.
See the associated [considerations](#feature-considerations) for important additional information.
Select topics from the following table to learn about the feature engineering workflow:
| Topic | Describes... |
|---------|-----------------|
| [Feature Discovery projects](fd-overview) | Create and configure projects with secondary datasets, including a simple use-case-based workflow overview. |
| [Time-aware feature engineering](fd-time) | Configure time-aware feature engineering. |
| [Derived features](fd-gen) | Introduction to the list of aggregations and the feature reduction process. |
| [Predictions](fd-predict) | Score data with models created using secondary datasets. |
## Feature considerations {: #feature-considerations }
When using Feature Discovery, consider the following:
* JDBC drivers must be compatible with Java 1.8 and later.
* For secondary datasets, only uploaded files and JDBC sources registered in the **AI Catalog** are supported.
* The following features are not supported in Feature Discovery projects:
* Scoring Code
* Time series
* Challenger models
* V1.0 prediction API
* Portable prediction server (PPS)
* Automated Retraining
* Sliced insights
* Maximum supported values:
* Datasets per project = 30
* The combined size of a project's primary and secondary datasets cannot exceed 100GB. Individual dataset size limits are based on **AI Catalog** limits.
* If the primary dataset is larger than 40 MB, [CV partitioning](data-partitioning) is disabled by default.
* Column names in Feature Discovery datasets cannot contain the following:
* A trailing or leading single quote (e.g., `feature1'` or `'feature1`)
* A trailing or leading space (e.g., `feature1<space>` or `<space>feature1`)
* When there is an error during project start, you cannot return to defining relationships. You must restart the configuration.
* There can be issues with the colors used in the visualization of linkages in the Feature Engineering relationship editor.
* You must whitelist the following IP addresses to connect to the DataRobot JDBC connector:
{% include 'includes/whitelist-ip.md' %}
### Batch prediction considerations {: #batch-prediction-considerations }
* Only DataRobot models are supported; no external or custom model support.
* Model package export is not supported for Feature Discovery models.
* You cannot replace a Feature Discovery model with a non-Feature Discovery model or vice versa.
* When a Feature Discovery model is replaced with another Feature Discovery model, the configuration used by the new model becomes the default configuration.
* Feature discovery predictions will be slower than other DataRobot models because feature engineering is applied.
* When Feature Discovery generates features using secondary datasets, the hash values of all the feature values (`ROW_HASH`) are used to break any ties (when applicable). The value of hash changes when applied to different datasets, so if you make predictions with another secondary configuration, you may receive different predictions.
### Feature Discovery compatibility {: #feature-discovery-compatibility }
The following table indicates which features are supported for Feature Discovery and describes any limitations.
| Feature | Supported? | Limitations |
| ---- | ---- | ----|
| Monotonicity | Yes | Limited to features from the primary dataset used to start the project. **Note**: Users can start the project without specifying constraints. They can then manually constrain models from the Leaderboard and the Repository on eligible blueprints using discovered/generated features. |
| Pairwise interaction in GA2M models | Yes | Limited to features from the primary dataset used to start the project. |
| Positive class assignment | Yes | |
| Smart downsampling | Yes | |
| Supervised feature reduction| Yes | Only applies if secondary datasets are provided. |
| Search for interactions | Yes | Automatically enabled. Cannot be disabled if secondary datasets are provided. |
| Only blueprints with Scoring Code support | No | |
| Create blenders from top models | Yes | |
| Include only SHAP-supported blueprints | Yes | |
| Recommend and prepare a model for deployment | Yes | |
| Challenger models in MLOps | No | |
| Include blenders when recommending a model | Yes | |
| Use accuracy-optimized metablueprint | Yes | These models are extremely slow. |
| Upperbound running time | Yes | |
| Weight | Yes | Weight feature must be in the primary dataset used to start the project. |
| Offset | Yes | Offset feature must be in the primary dataset used to start the project. |
| Exposure | Yes | Exposure feature must be in the primary dataset used to start the project. |
| Random seed | Yes | |
| Count of events | Yes |Count of events feature must be in the primary dataset used to start the project. |
|
index
|
---
title: Derived features
description: Complete details on new features DataRobot derives during Feature Discovery, and how to work with these features on the Data page after EDA2 completes.
---
# Derived features {: #derived-features }
The Feature Discovery process uses a variety of heuristics to determine the list of features to derive in a DataRobot project. The results depend on a number of factors such as detected feature types, characteristics of the features, relationships between datasets, data size constraints, and more.
See also [Feature engineering controls](fd-overview#feature-engineering-controls) and [Feature reduction](fd-overview#feature-reduction) sections.
## Analysis of derived features {: #analysis-of-derived-features }
After [EDA2](eda-explained#eda2) completes, the [**Data**](model-ref#data-summary-information) page lists newly discovered and derived features with their corresponding importance scores on the **Project Data** tab.

All derived features are now listed. The name is comprised of the dataset alias and type of transformation. (See the [aggregation reference](#feature-aggregations) for more detail.) If the display is concatenated, you can hover on a feature to see the complete name:

Some tabs available on the **Data** page function the same as projects that don't use Feature Discovery:
* [**Transformations**](feature-disc#explore-new-features)
* [**Feature Lists**](feature-lists#create-feature-lists-from-the-data-page)
* [**Feature Associations**](feature-assoc)
DataRobot provides additional tabs and tools available on the **Data** tab that help you analyze Feature Discovery projects:
* [**Feature Lineage**](#feature-lineage-in-the-project-data-tab) on the **Project Data** tab shows how your engineered features were derived.
* The [**Feature Discovery**](#feature-discovery-tab) tab provides a feature derivation log and a summary of dataset relationships.
### Feature Lineage {: #feature-lineage }
The **Feature Lineage** tab is available when you access a feature on the **Project Data** tab. The **Project Data** tab provides a list of all available project features—original, user- or auto-transformed, and derived by the Feature Discovery process. Click to expand a feature and explore its characteristics. For each feature, depending on type, there are [a variety of sub-tabs](histogram) available, one of which is the **Feature Lineage** tab.
The **Feature Lineage** tab provides a visual description of how the feature was derived and the datasets that were involved in the feature derivation process. It visualizes the steps followed to generate the features (on the left) from the original dataset (on the right). Each element represents an action or a JOIN.
Click a feature to expand it and then click the **Feature Lineage** tab. For example:

You can work with the results as follows:
* Under **Original**, DataRobot displays the primary and secondary datasets. Click the name of the secondary dataset to see its **Info** page in the [**AI Catalog**](catalog).
* Hover on any info (`i`) icon to see details of the element.
* Click on elements of the visualization to understand the lineage. Parent actions are to the left of the element you click. Click once on a feature to show its parent feature, click again to return to the full display.

Clicking the yellow CustomerID, by contrast, illustrates the JOIN and resulting derived feature.

* The white triangle indicates that the next action (e.g., max, count, etc.) will be performed on this feature.

* Elements marked with the clock icon () are time-aware (i.e., derived using time index).
### Feature Discovery tab {: #feature-discovery-tab }
The **Feature Discovery** tab on the **Data** page provides [dataset relationship details](#dataset-relationship-details), a [feature derivation summary](#feature-derivation-summary), and a [feature derivation log](#feature-derivation-log).
#### Dataset relationship details {: #dataset-relationship-details }
The **Feature Discovery** tab provides a visualization of the dataset relationships. The tab shows the number of secondary datasets, explored features, and derived features that resulted from Feature Discovery.

Click **Details** in the menu on the dataset's tile for more information about the dataset.
#### Feature derivation summary {: #feature-derivation-summary }
Before generating features for the full primary dataset, DataRobot evaluates a sample of the dataset to identify and discard:
* Low impact features
* Redundant features

Click **Show more** in the **Feature Discovery** tab to display the feature engineering controls used to explore the features.

In the example above, 200 features were evaluated (explored) and 132 were discarded in the feature reduction process, resulting in 68 derived features on the full dataset. DataRobot automatically adds those 68 derived features to the [Informative Features](feature-lists#automatically-created-feature-lists) feature list.
Click the **Download dataset** option in the menu on the right to download the dataset generated by the Feature Discovery process—that is, the multiple new features derived from the secondary datasets.

The downloaded CSV contains the original dataset and the Feature Discovery-derived features; it excludes discarded features and those that resulted from the [Search for interaction](feature-disc#search-for-interactions) option.
#### Feature derivation log {: #feature-derivation-log }
Click the **Feature Derivation log** option in the menu on the right for details of the feature generation and reduction process.

The feature derivation log indicates:
* Relationships between tables
* Number of features processed in each secondary dataset
* Removed features and reasons for removal

Depending on the number of features in your dataset, the log may not display all activity and instead serves as a preview. Click **Download** to access the complete log contents.
### Feature aggregations {: #feature-aggregations }
When DataRobot creates new features as part of the feature derivation process, the feature name provides an indication of the action taken on the feature, as described and then illustrated below:
* _Primary table_: Features from the primary table keep their original feature names; the primary table name is not included. This also applies to date features that are used as the prediction point.
* _Secondary table(s)_: The table name is appended to the primary table feature name, with the secondary feature name indicated in brackets `[ ]`. The applied feature engineering is appended in parentheses `( )`.
* _Transformations_: Automatic or user-created transformed features are prefaced with an info icon.
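One plausible reading of the secondary-table convention, sketched as a small hypothetical formatter (illustrative only, not DataRobot's actual code):

```python
def derived_feature_name(secondary_table: str, feature: str, aggregation: str) -> str:
    """Sketch of the naming convention: secondary table name, then the
    secondary feature in brackets, then the aggregation in parentheses."""
    return f"{secondary_table}[{feature}] ({aggregation})"

# For example, the 30-day sum of transaction amounts per customer:
print(derived_feature_name("transactions", "amount", "30 days sum"))
# transactions[amount] (30 days sum)
```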

The following tables list aggregations that apply based on the detected feature type. These use a sample customer/sales dataset to provide examples.
!!! note
You can enable and disable transformations for specific feature types during Feature Discovery. See [Feature engineering controls](#feature-engineering-controls) for details.
#### General feature types {: #general-feature-types }
| Aggregation | Example |
|---------------|-------------|
| Record count | Number of transactions for each customer |
| Min count per intermediate entity | Minimum number of items per order across orders of each customer |
| Max count per intermediate entity | Maximum number of items per order across orders of each customer |
| Average count per intermediate entity | Average number of items per order across orders of each customer |
| Latest | Most recent product bought by each customer |
#### Numeric feature types {: #numeric-feature-types }
| Aggregation | Example |
|---------------|-------------|
| Min | Minimum transaction amount, per customer |
| Max | Maximum transaction amount, per customer |
| Sum | Total amount from all transactions, per customer |
| Average | Average number of items, per order, among customer orders |
| Median | Median number of items, per order, among customer orders |
| Missing count | Number of transactions, per customer, that have a missing amount |
| Standard deviation (_measures the variation of a set of values_) | Std of item prices among orders, per customer |
| Skewness (_measure of the asymmetry of the frequency-distribution curve_) | Asymmetry of the distribution of item prices among customer orders relative to the mean |
| Kurtosis (_measures the heaviness of a distribution's tails relative to a normal distribution_) | "Tailedness" of the distribution of item prices among customer orders |
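As a rough illustration (not DataRobot's implementation), the numeric aggregations above could be computed per customer with pandas on a toy transactions table:

```python
import pandas as pd

# Toy secondary table: one row per transaction, keyed by customer.
transactions = pd.DataFrame({
    "customer_id": ["a", "a", "a", "b", "b"],
    "amount": [10.0, 20.0, 60.0, 5.0, None],
})

# Per-customer numeric aggregations analogous to the table above.
per_customer = transactions.groupby("customer_id")["amount"].agg(
    min="min",
    max="max",
    sum="sum",
    mean="mean",
    median="median",
    missing_count=lambda s: int(s.isna().sum()),
    std="std",
    skewness="skew",
    kurtosis=lambda s: s.kurt(),
)
print(per_customer)
```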
#### Categorical feature types {: #categorical-feature-types }
| Aggregation | Example |
|---------------|-------------|
| Most frequent | Most frequent merchant type in transactions, per customer |
| Entropy | Entropy of merchant types in transactions, per customer |
| Summarized counts | Count of transactions per merchant type for each customer |
| Unique count | Number of unique merchant types for each customer |
| Missing count | Number of transactions, per customer, with missing merchant type |
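The categorical aggregations can be sketched the same way; the `entropy` helper and the toy data are illustrative assumptions, not DataRobot code:

```python
import math
import pandas as pd

transactions = pd.DataFrame({
    "customer_id": ["a", "a", "a", "a", "b", "b"],
    "merchant_type": ["grocery", "grocery", "fuel", "travel", "fuel", "fuel"],
})

def entropy(s: pd.Series) -> float:
    """Shannon entropy (natural log) of the category distribution."""
    p = s.value_counts(normalize=True)
    return float(-(p * p.map(math.log)).sum())

per_customer = transactions.groupby("customer_id")["merchant_type"].agg(
    most_frequent=lambda s: s.mode().iloc[0],
    entropy=entropy,
    unique_count="nunique",
)
print(per_customer)
```

A customer whose transactions all share one merchant type has entropy 0; more varied behavior yields higher entropy.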
#### Date feature types {: #date-feature-types }
| Aggregation | Example |
|---------------|-------------|
| Interval from previous | Time since the last transaction by the same customer, per transaction |
| Time since last | Time since the cutoff date of the last transaction of the customer |
| Duration from creation date | Age of customer at profile creation date |
| Entropy of date difference | Entropy of binned difference with cutoff date |
| Pairwise date difference | Pairwise date difference within a secondary dataset (maximum of 10 different date columns) |
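A minimal pandas sketch of the first two date aggregations, using toy data (illustrative only, not DataRobot's implementation):

```python
import pandas as pd

transactions = pd.DataFrame({
    "customer_id": ["a", "a", "a", "b"],
    "date": pd.to_datetime(["2020-01-01", "2020-01-05", "2020-01-20", "2020-01-10"]),
}).sort_values(["customer_id", "date"])

# Interval from previous: gap since the same customer's previous transaction.
transactions["interval_from_previous"] = transactions.groupby("customer_id")["date"].diff()

# Time since last: gap between the cutoff date and each customer's latest transaction.
cutoff = pd.Timestamp("2020-02-01")
time_since_last = cutoff - transactions.groupby("customer_id")["date"].max()
print(time_since_last)
```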
#### Text feature types {: #text-feature-types }
| Aggregation | Example |
|---------------|-------------|
| Word/character count | Length of remarks |
| Summarized token counts | Counts of each word/character in the product descriptions of all transactions |
#### Categorical Statistics {: #categorical-statistics }
Numeric features can be aggregated with common statistics like sum, min, max, count, and average, but sometimes it is more useful to compute these statistics separately for each value of a categorical column.
In the following business use case, the average spending by product type is more useful than the overall average spending. *Spending* and *Product_Type* are features in a secondary dataset. The values of the *Spending* numeric feature correspond to the categories of the *Product_Type* categorical feature:

If Categorical Statistics aggregation is enabled for Feature Discovery, DataRobot explores numeric statistics for each category of the *Product_Type* feature, for example:
* *Spending(30 days min)*
* *Spending(30 days min by Product_Type = A)*
* *Spending(30 days min by Product_Type = B)*
* *Spending(30 days min by Product_Type = C)*
* ...

Categorical Statistics aggregation is turned off by default. See [Feature engineering controls](#feature-engineering-controls) to learn how to enable it.
!!! note
Feature Discovery only explores Categorical Statistics for categorical columns that have at most 50 unique values.
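A minimal pandas sketch of this idea, using the toy *Spending*/*Product_Type* example (the data and derived column names are illustrative; this is not DataRobot's implementation):

```python
import pandas as pd

transactions = pd.DataFrame({
    "customer_id": ["a", "a", "a", "b", "b"],
    "Product_Type": ["A", "B", "A", "A", "C"],
    "Spending": [10.0, 30.0, 50.0, 20.0, 5.0],
})

# Minimum spending per customer, broken out by product type: one derived
# column per category, analogous to Spending(min by Product_Type = ...).
by_category = transactions.pivot_table(
    index="customer_id", columns="Product_Type", values="Spending", aggfunc="min")
by_category.columns = [
    f"Spending(min by Product_Type = {c})" for c in by_category.columns]
print(by_category)
```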
|
fd-gen
|
---
title: Time-aware feature engineering
description: How to configure time-aware feature engineering using only information available before the prediction point.
---
# Time-aware feature engineering {: #time-aware-feature-engineering }
Time-based feature engineering in Feature Discovery projects relies on a date feature in the primary table. This date prevents any feature derivation beyond the prediction point. In time-aware projects, the partition date column is used as the prediction point by default.
!!! tip "How time series features are derived"
For information about how DataRobot derives time series features, see [time series feature derivation](feature-eng). See also specific details about [derived date features](fd-gen#date-feature-types).
In non-time-aware projects, where there is no partition date feature, you can set a prediction point to enable time-aware feature engineering.
## Prediction point and time indexes {: #prediction-point-and-time-indexes }
In most cases, the primary dataset includes a prediction point feature indicating when the prediction would have been needed. For example, in a loan request the prediction point feature might be the "loan request date," because each time a customer requests a loan the model must generate a prediction to decide whether to approve or decline.
In some cases, the primary dataset is built using one or multiple extracts done at some regular point in the past. For instance, to predict on the first of the month, you would want a monthly prediction point when building the training dataset (e.g., 2019-10-01, 2019-11-01, etc.). In this example, the prediction point feature might be “extract_date.”
In both cases, you want to avoid using information from secondary datasets that was not available before the prediction point (for example, transactions that happened after the loan request). To avoid this "time travel paradox," DataRobot integrates time-aware feature engineering capabilities and allows you to configure a [feature derivation window (FDW)](ts-customization#set-window-values), which defines a rolling window of past values that models use to generate features before the prediction point. With Feature Discovery, setting FDWs from the [relationship editor](#configure-time-aware-feature-engineering) can be understood as:

Using the loan application example, the loan request date would be the prediction point. If you only have a date (e.g., 02-14-20) and not a timestamp, you don't know whether an event happened before or after the time of the specific loan request (in terms of the actual hour/minute/etc.). To be conservative, DataRobot excludes everything on that exact date so that the model doesn't inadvertently include data from after the prediction time. Using time-aware settings, you can set a rolling window to ensure that the most relevant data is included.
### Configure time-aware feature engineering {: #configure-time-aware-feature-engineering }
After a join is saved, if the added dataset has a date feature and if you set a prediction point, the **Save and configure time-aware** option becomes available. Click to open the **Time-aware feature engineering** editor.

Set the date/time feature of the secondary dataset to ensure that it only uses records happening before the prediction point to generate features. Once set, the FDW settings can be modified.
Set the boundaries of the FDW to determine how much historical data to use. By default, DataRobot sets the window to 30 to 0 days (e.g., transactions that happened in the 30 days before the "loan request date"). You can change a boundary by entering a new value and setting the time unit. Keep in mind that using a larger FDW will slow down the Feature Discovery process.

In addition to the window you specify, DataRobot automatically calculates additional, smaller FDWs for the project. For example, if you set the FDW parameters to "30 to 0 Days," DataRobot selects additional candidate durations (perhaps 1 to 0 weeks, 1 to 0 days, and 6 to 0 hours) and derives features from those windows. The new candidate window sizes are based on an internal algorithm that:
* Chooses additional windows between 50% and 0.5% of the original FDW size.
* Ensures the additional windows do not use a time unit with a smaller granularity than what is relevant for the primary date/time feature format.
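The two selection rules above can be approximated in a short sketch; the candidate unit list and filtering logic are assumptions for illustration, since the actual internal algorithm is not documented:

```python
from datetime import timedelta

# Candidate units, coarsest to finest, that a window could snap to
# (this unit list is an assumption made for the sketch).
UNITS = [timedelta(weeks=1), timedelta(days=1), timedelta(hours=1), timedelta(minutes=1)]

def candidate_windows(fdw: timedelta, min_unit: timedelta) -> list:
    """Keep units between 0.5% and 50% of the original FDW size, and no
    finer than the granularity of the primary date/time feature."""
    lo, hi = 0.005 * fdw, 0.5 * fdw
    return [u for u in UNITS if lo <= u <= hi and u >= min_unit]

# A 30-day FDW on a daily-granularity date feature keeps weekly and daily
# candidates; hourly and minute windows fall below the 0.5% floor or the
# granularity limit and are dropped.
print(candidate_windows(timedelta(days=30), timedelta(days=1)))
```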
If the time index doesn't reflect the time when data is accessible, you can change the FDW <em>end</em> boundary to reflect the delay. For example, perhaps a secondary dataset is provided by an external data provider that gives you access with a two-day delay. You can specify a gap of two days (before the prediction point).

The FDW is reflected in the dataset tile:

## Prediction point rounding {: #prediction-point-rounding }
If a prediction point has many distinct values, the Feature Discovery process may be slow. To speed up processing, DataRobot, by default, rounds down the prediction point to the nearest minute. For example, if a loan has a prediction point ("loan_request_date") of 2020-01-15 08:13:53, DataRobot will round that value down to 2020-01-15 08:13, dropping the `53` seconds.
While rounding makes the Feature Discovery process faster, it comes at the cost of potentially losing fresh secondary dataset records: in this example, records that occurred between 2020-01-15 08:13:00 and 2020-01-15 08:13:53.
If your project is sensitive to that level of record loss, you can change the default rounding from nearest minute to a more suitable selection:

## Determine the final cutoff {: #determine-the-final-cutoff }
Once Feature Discovery applies prediction point rounding and the FDW end boundary, DataRobot derives the final "cutoff" used for time-aware engineering. The cutoff is the point beyond which DataRobot will not go when generating features. In other words, the FDW (the rolling window of past values) spans from the furthest time back to the nearest time, both modified by the rounding selection.
For example, this setting:

Can be understood conceptually as:

|
fd-time
|
---
title: Leverage AI accelerators
description: Understand how AI accelerators work and how you can leverage them to get value from code-first machine learning workflows.
---
# Leverage AI accelerators {: #leverage-ai-accelerators }
After reviewing [how to get started with DataRobot as a code-first user](gs-code) and determining the platform in which you want to work, you can browse AI accelerators to learn about workflows you may want to explore via code.
## AI Accelerator overview {: #ai-accelerator-overview }
If you do not wish to begin coding from scratch, or want to further understand how to leverage DataRobot's capabilities from a code-centric perspective, browse DataRobot's many [AI accelerators](https://github.com/datarobot-community/ai-accelerators){ target=_blank } that outline common use cases and machine learning workflows using version 3.x of DataRobot's Python client.
If a use case fits your needs, you can download it from DataRobot's AI accelerator repo and upload it to DataRobot Notebooks to test it, copy code to use as a template, and more to leverage your code-first experience.
The table below provides some sample accelerators for you to browse.
Topic | Describes... |
----- | ------ |
[Automated Feature Discovery with Multiple Tables](https://github.com/datarobot-community/ai-accelerators/tree/main/advanced-experimentation/AFD){ target=_blank } | Use a repeatable end-to-end AFD workflow in Snowflake from data import to making batch predictions.
[Azure storage end-to-end workflow with DataRobot](https://github.com/datarobot-community/ai-accelerators/blob/main/end-to-end/Azure_End_to_End.ipynb){ target=_blank } | This workflow ingests a dataset hosted in an Azure blob container, trains a series of models using DataRobot's AutoML capabilities, deploys a recommended model, and sets up a batch prediction job that writes predictions back to the original container.
[Creating Custom Blueprints with Composable ML](https://github.com/datarobot-community/ai-accelerators/tree/main/advanced-experimentation/custom_blueprints){ target=_blank } | Learn how to create custom blueprints.
[Customize Lift Charts](https://github.com/datarobot-community/ai-accelerators/tree/main/advanced-experimentation/customizing_lift_charts){ target=_blank } | Learn how to create a custom lift chart.
[Deploy a DataRobot model into AWS SageMaker](https://github.com/datarobot-community/ai-accelerators/tree/main/end-to-end/sagemaker_deployment){ target=_blank } | Build a model with DataRobot which will then be deployed and hosted within AWS SageMaker.
[End-to-end demand forecasting and retraining workflow](https://github.com/datarobot-community/ai-accelerators/blob/main/end-to-end/Demand_forecasting_retraining/End_to_end_demand_forecasting_retraining.ipynb){ target=_blank } | Set up retraining for time series and execute it when a model degrades. |
[End-to-end Time Series Demand Forecasting Workflow](https://github.com/datarobot-community/ai-accelerators/tree/main/end-to-end/End_to_end_demand_forecasting){ target=_blank } | Learn how to create a time series project with the API. |
[Feature Reduction with FIRE](https://github.com/datarobot-community/ai-accelerators/tree/main/advanced-experimentation/feature_reduction_with_fire){ target=_blank } | Learn how to use FIRE for feature selection. |
To browse more AI accelerators, visit the [GitHub repo](https://github.com/datarobot-community/ai-accelerators){ target=_blank }.
|
gs-ai
|
---
title: Code-first experience
description: Get started with DataRobot's code-first experience. Build and execute notebooks and leverage AI accelerators.
---
# Code-first experience {: #code-first-experience }
Get started with DataRobot's code-first experience. Build and execute notebooks and leverage AI accelerators.
Topic | Describes...
---------------|---------------
[Get started with code](gs-code) | Understand the ways in which you can get started coding with DataRobot, including authentication steps and notebook creation.
[Get started with AI accelerators](gs-ai) | Learn how you can leverage AI accelerators to quickly engage in code-first machine learning workflows.
|
index
|
---
title: Work with notebooks
description: Provides an overview of how to engage with DataRobot's code-centric platform.
---
# Work with notebooks {: #work-with-notebooks }
Follow five simple steps to get started with DataRobot's code-first experience. This page outlines how to get value out of DataRobot Notebooks as a means of engaging with code-centric data science.
## 1: Learn how to work with DataRobot APIs {: #learn-how-to-work-with-datarobot-apis }
DataRobot's [API quickstart guide](api-quickstart/index) provides the fundamental requirements for you to work with the API. Review its topics to understand any considerations required to engage in a code-first workflow with DataRobot, such as [API prerequisites](api-quickstart/index#prerequisites), [creating an API Key](api-quickstart/index#create-a-datarobot-api-key), and [Authenticating with DataRobot](api-quickstart/index#configure-api-authentication).
## 2: Review the DataRobot Notebook workflow {: #review-the-datarobot-notebook-workflow }
Use the flowchart below to understand the common workflows for working with DataRobot Notebooks.
``` mermaid
graph TB
A[Create a DataRobot notebook]
A --> |New notebook|C[Add a new notebook]
A --> |Existing notebook|D[Upload an .ipynb notebook];
C --> E{Configure the environment}
D --> E
E --> F[Start the notebook session]
F --> G[Edit the notebook]
G --> |Writing guidelines?|H[Create and edit Markdown cells]
G --> |Coding?|I[Reference code snippets and create code cells]
H --> J[Run the notebook]
I --> J
J --> K[Create a revision history]
```
## 3: Create a DataRobot Notebook {: #create-a-datarobot-notebook }
To add notebooks to DataRobot, navigate to the **Notebooks** page. This brings you to the notebook dashboard, which hosts all currently available notebooks. Simply select **Add new > Add notebook** to begin working in a DataRobot notebook.


## 4: Review and import an AI accelerator {: #review-and-import-an-ai-accelerator }
If you do not wish to begin coding from scratch, or want to further understand how to leverage DataRobot's capabilities from a code-centric perspective, browse DataRobot's many [AI accelerators](https://github.com/datarobot-community/ai-accelerators){ target=_blank } that outline common use cases and machine learning workflows using version 3.x of DataRobot's Python client.
If a use case fits your needs, you can download it from DataRobot's AI accelerator repo and upload it to DataRobot Notebooks to test it, copy code to use as a template, and more to leverage your code-first experience.
Read more about the [AI accelerators available for use](gs-ai).
## 5: Reference sample code snippets in DataRobot notebooks {: #reference-sample-code-snippets-in-datarobot-notebooks }
As you develop your notebook in DataRobot, you may be trying to find ways to execute specific DataRobot functions. DataRobot provides a set of pre-defined code snippets, inserted as cells in a notebook, for commonly used methods in the DataRobot API as well as other data science tasks. These include connecting to external data sources, deploying a model, creating a model factory, and more. Access code snippets by selecting the code icon in the sidebar.

|
gs-code
|
---
title: Models in production
description: Get started with DataRobot MLOps by deploying a DataRobot model to DataRobot infrastructure.
---
# Models in production {: #models-in-production }
DataRobot MLOps provides a central hub to [deploy](deployment/index), [monitor](monitor/index), [manage](manage-mlops/index), and [govern](governance/index) all your models in production, regardless of how they were created or when and where they were deployed. MLOps helps improve and maintain the quality of your models using health monitoring that accommodates changing conditions via continuous, automated model competitions ([challenger models](challengers)). It also ensures that all centralized production machine learning processes work under a robust governance framework across your organization, leveraging and sharing the burden of production model management.
With MLOps, you can deploy any model to your production environment of choice. By instrumenting the [MLOps agent](deployment/mlops-agent/index), you can monitor any existing production model already deployed for live updates on behavior and performance from a single, centralized machine learning operations system. MLOps makes it easy to deploy models written in any open-source language or library and expose a production-quality REST API to support real-time or batch predictions. MLOps also offers built-in [write-back integrations](batch-pred-jobs) to systems such as Snowflake and Tableau.
MLOps provides constant monitoring and production diagnostics to improve the performance of your existing models. Automated best practices enable you to track [service health](service-health), [accuracy](deploy-accuracy), and [data drift](data-drift) to explain why your model is degrading. You can build your own challenger models or use DataRobot's Automated Machine Learning to build them for you and test them against your current champion model. This process of continuous learning and evaluation enables you to avoid surprise changes in model performance.
The tools and capabilities of every deployment are determined by the data available to it: [training data](glossary/index#training-data), [prediction data](glossary/index#prediction-data), and outcome data (also referred to as [actuals](glossary/index#actuals)).
## Example deployment workflow {: #example-deployment-workflow }
The primary deployment workflow is deploying a DataRobot model to a DataRobot prediction environment. You can complete the standard DataRobot deployment process in the five steps listed below:
1. [Register a model](#register-a-model)
2. [Deploy a model](#deploy-a-model)
3. [Configure a deployment](#configure-a-deployment)
4. [Set up retraining](#set-up-retraining-and-replacement)
5. [Monitor model performance](#monitor-model-performance)
For alternate workflow examples, see the [MLOps deployment workflows](deploy-workflows/index) documentation.
## 1: Register a model {: #register-a-model }
DataRobot AutoML automatically generates models and displays them on the Leaderboard. The [model recommended for deployment](model-rec-process) appears at the top of the page. You can register this (or any other) model from the Leaderboard. Once the model is registered, you can create a deployment to start making and monitoring predictions.

[Register a model <span style="vertical-align: sub">:material-arrow-right-circle:{.lg }</span>](dr-model-reg){ .md-button }
## 2: Deploy a model {: #deploy-a-model }
After you've added a model to the Model Registry, you can deploy it at any time to start making and monitoring predictions.

[Deploy a model <span style="vertical-align: sub">:material-arrow-right-circle:{.lg }</span>](deploy-model#deploy-from-the-model-registry){ .md-button }
## 3: Configure a deployment {: #configure-a-deployment }
The deployment information page outlines the capabilities of your current deployment based on the data provided, for example, training data, prediction data, or actuals. It populates fields for you to provide details about the training data, inference data, model, and your outcome data.

[Configure a deployment <span style="vertical-align: sub">:material-arrow-right-circle:{.lg }</span>](add-deploy-info){ .md-button }
## 4: Set up model retraining and replacement {: #set-up-retraining-and-replacement }
To maintain model performance after deployment without extensive manual work, DataRobot provides an automatic retraining capability for deployments. Upon providing a retraining dataset registered in the AI Catalog, you can define up to five retraining policies on each deployment, each consisting of a trigger, a modeling strategy, modeling settings, and a replacement action. When triggered, retraining will produce a new model based on these settings and notify you to consider promoting it. If necessary, you can [manually replace a deployed model](deploy-replace).

[Set up retraining and replacement <span style="vertical-align: sub">:material-arrow-right-circle:{.lg }</span>](set-up-auto-retraining){ .md-button }
## 5: Monitor model performance {: #monitor-model-performance }
To trust a model for mission-critical operations, users must have confidence in all aspects of model deployment. Model monitoring is the close tracking of the performance of ML models in production used to identify potential issues before they impact the business. Monitoring ranges from whether the service is reliably providing predictions in a timely manner and without errors to ensuring the predictions themselves are reliable. DataRobot automatically monitors model deployments and offers a central hub for detecting errors and model accuracy decay as soon as possible. For each deployment, DataRobot provides a status banner—model-specific information is also available on the Deployments inventory page.

[Monitor model performance <span style="vertical-align: sub">:material-arrow-right-circle:{.lg }</span>](monitor/index){ .md-button }
|
gs-mlops
|
---
title: Work with data (Classic)
description: An overview of the tools DataRobot Classic provides for importing, preparing, and managing data for machine learning.
---
# Work with data (Classic) {: #work-with-data-classic }
DataRobot knows that high-quality data is integral to the ML workflow—from importing and cleaning data to transforming and engineering features, from scoring with prediction datasets to deploying on a prediction server—data is critical. DataRobot provides tools to help you seamlessly and securely interact with your data.
**Import data from various sources, including from external data sources** to minimize data movement and control data governance across your cloud data warehouses and lakes.

**Explore patterns and insights** in your data; automate the discovery, testing, and creation of hundreds of valuable new features.

## 1: Import data {: #import-data }
Import data into the DataRobot platform from the [AI Catalog](catalog), directly from a [connected data source](data-conn), or as a [local file](import-to-dr).

??? tip "Learn more"
To learn more about the topics discussed in this section, see:
- [File size requirements](file-types)
- [Import data documentation](import-data/index)
## 2: Explore data {: #explore-data }
After importing your data, DataRobot performs [exploratory data analysis](eda-explained), a process that analyzes the datasets, summarizes their main characteristics, and [automatically creates feature transformations](auto-transform)—the results of which are displayed on the [Data page](histogram) of your project.
Once EDA1 completes, you can use the [Data Quality Assessment](data-quality) to find and address quality issues surfaced in your dataset.

??? tip "Learn more"
To learn more about the topics discussed in this section, see:
- [EDA Explained](eda-explained)
- [View the results of EDA1](histogram)
- [Data Quality Assessment](data-quality)
## 3: Prepare data {: #prepare-data }
Now that you've explored your dataset and identified areas for improvement, you can:
Perform [manual feature transformations](feature-transforms).

[Prepare your data using Spark SQL](spark).

[Add secondary datasets and then define those relationships to the primary in Feature Discovery projects](fd-overview).

??? tip "Learn more"
To learn more about the topics discussed in this section, see:
- [Manual feature transformations](feature-transforms)
- [Prepare your data with Spark SQL](spark)
- [Configure a Feature Discovery project](fd-overview)
## Next steps {: #next-steps }
Now that your data is where it needs to be, you're ready to start [modeling](gs-model).
|
gs-data
|
---
title: DataRobot Classic
description: Get started with DataRobot's value-driven AI. Analyze data, create and deploy models, and leverage code-first accelerators and notebooks.
---
# DataRobot Classic {: #datarobot-classic }
Get started with DataRobot's classic experience. Analyze data, create and deploy models, and leverage code-first accelerators and notebooks.
Topic | Describes...
---------------|---------------
[Fundamentals of DataRobot Classic](gs-dr-fundamentals) | Understand the types of ML modeling projects you can create in DataRobot. Learn the general process of modeling, analyzing, and selecting models for deployment.
[Get started with data](gs-data) | Use DataRobot's integrated capabilities to import, explore, and prepare your data for modeling.
[Get started with modeling](gs-model) | Data in, models out—click to model and immediately start investigating model insights.
[Get started with MLOps](gs-mlops) | Deploy, monitor, and manage models in production.
|
index
|
---
title: Fundamentals of DataRobot Classic
description: Learn about modeling methods supported in DataRobot Classic, as well as the modeling lifecycle.
---
# Fundamentals of DataRobot Classic {: #fundamentals-of-datarobot-classic }
DataRobot uses automated machine learning (AutoML) to build models that solve real-world problems across domains and industries. DataRobot takes the data you provide, generates multiple machine learning (ML) models, and recommends the best model to put into use. You don't need to be a data scientist to build ML models using DataRobot, but an understanding of the basics will help you build better models. Your domain knowledge and DataRobot's AI expertise will lead to successful models that solve problems with speed and accuracy.
DataRobot supports many different approaches to ML modeling—supervised learning, unsupervised learning, time series modeling, segmented modeling, multimodal modeling, and more. This section describes these approaches and also provides tips for analyzing and selecting the best models for deployment. You can begin this process by logging in to DataRobot Classic.
??? note "Login methods"
If your organization is using an external account management system for single sign-on:
<ul><li>If using LDAP, note that your username is not necessarily your registered email address. Contact your DataRobot administrator to obtain your username, if necessary.</li>
<li>If using a SAML-based system, on the login page, ignore the entry box for credentials. Instead, click <b>Single Sign-On</b> and enter credentials on the resulting page.</li></ul>
## Modeling methods {: #modeling-methods }
ML modeling is the process of developing algorithms that learn by example from historical data. These algorithms predict outcomes and uncover patterns not easily discerned.
### Supervised and unsupervised learning {: #supervised-and-unsupervised-learning }
The most basic form of machine learning is *supervised learning*.

With supervised learning, you provide "labeled" data. A label in a dataset provides information to help the algorithm learn from the data. The label—also called the *target*—is what you're trying to predict.
* In a *regression* project, the target is a numeric value. A regression model estimates a continuous dependent variable given a list of input variables (also referred to as *features* or *columns*). Examples of regression problems include financial forecasting, time series forecasting, maintenance scheduling, and weather analysis.
* In a *classification* project, the target is a category. A classification model groups observations into categories by identifying shared characteristics of certain classes. It compares those characteristics to the data you're classifying and estimates how likely it is that the observation belongs to a particular class. Classification projects can be *binary* (two classes) or *multiclass* (three or more classes). For classification, DataRobot also supports [multilabel modeling](multilabel) where the target feature has a variable number of classes or *labels*; each row of the dataset is associated with one, several, or zero labels.
Another form of machine learning is *unsupervised learning*.

With unsupervised learning, the dataset is unlabeled and the algorithm must infer patterns in the data.
* In an [anomaly detection](anomaly-detection) project, the algorithm detects unusual data points in your dataset. Use cases include detection of fraudulent transactions, faults in hardware, and human error during data entry.
* In a [clustering](clustering) project, the algorithm splits the dataset into groups according to similarity. Clustering is useful for gaining intuition about your data. The clusters can also help label your data so that you can then use a supervised learning method on the dataset.
### Time-aware modeling {: #time-aware-modeling }
Time data is a crucial component in solving prediction and forecasting problems. DataRobot provides several methods and tools for time-aware modeling.

* With [time series modeling](time/index), you can generate a forecast—a series of predictions for a period of time in the future. You train time series models on past data to predict future events. Predict a range of values in the future or use [nowcasting](nowcasting) to make a prediction at the current point in time. Use cases for time series modeling include predicting pricing and demand in domains such as finance, healthcare, and retail—basically, any domain where problems have a time component.
* You can use time series modeling for a dataset containing a single series, but you can also build a model for a dataset that contains multiple series. For this type of [multiseries](multiseries) project, one feature serves as the *series identifier*. An example is a "store location" identifier that essentially divides the dataset into multiple series, one for each location. So you might have four store locations (Paris, Milan, Dubai, and Tokyo) and therefore four series for modeling.
* With a multiseries project, you can choose to generate a model for each series using [segmented modeling](ts-segmented). In this case, DataRobot creates a deployment using the best model for each segment.
* Sometimes, the dataset for the problem you're solving contains date and time information, but instead of generating a forecast as you do with time series modeling, you predict a target value on each individual row. This approach is called [out-of-time validation (OTV)](otv).
* Along with supervised learning models, you can also develop [time series anomaly detection models](anomaly-detection#time-series-anomaly-detection).
See [What is time-aware modeling](whatis-time) for an in-depth discussion of these strategies.
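The backtesting idea behind time-aware validation can be sketched as repeatedly holding out the most recent slice of a time-ordered dataset (the fold sizes and boundaries here are illustrative, not DataRobot's exact partitioning scheme):

```python
def rolling_backtests(n_rows, n_backtests, validation_size):
    """Split a time-ordered row index into train/validation folds, with the
    most recent rows validated first and only earlier rows used for training."""
    folds, end = [], n_rows
    for _ in range(n_backtests):
        train = list(range(0, end - validation_size))
        validation = list(range(end - validation_size, end))
        folds.append((train, validation))
        end -= validation_size
    return folds
```

Because each fold trains only on rows that precede its validation rows, the evaluation never lets the model "see the future."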
### Specialized modeling workflows {: #specialized-modeling-workflows }
DataRobot provides specialized workflows to help you address a wide range of problems.

* [Visual AI](vai-overview) allows you to include images as features in your datasets. Use the image data alongside other data types to improve outcomes for various types of modeling projects—regression, classification, anomaly detection, clustering, and more.
* With [Composable ML](cml-overview), you can build and edit your own ML [blueprints](blueprints), incorporating DataRobot preprocessing and modeling algorithms, as well as your own models.
* For text features in your data, use Text AI insights like [Word Clouds](word-cloud) and [Text Mining](analyze-insights#text-mining) to understand the impact of the text features.
* [Location AI](location-ai/index) supports geospatial analysis of modeling data. Use geospatial features to gain insights and visualize data using interactive maps before and after modeling.
Together, these modeling strategies support successful automated modeling across a wide range of project types.
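For a sense of what Text AI insights are built on: the keyword relevancy a Word Cloud visualizes starts from something as simple as counting term occurrences across a text feature (a gross simplification of DataRobot's actual text preprocessing):

```python
import re
from collections import Counter

def word_frequencies(texts):
    """Count word occurrences across a collection of text values: the raw
    signal behind keyword-relevancy views such as a word cloud."""
    tokens = []
    for text in texts:
        tokens.extend(re.findall(r"[a-z']+", text.lower()))
    return Counter(tokens)
```

Real text insights go further, relating terms to the target rather than just counting them, but frequency is the starting point.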
## ML modeling workflow {: #ml-modeling-workflow }
This section walks you through the steps for implementing a DataRobot modeling project.
1. To begin the modeling process, [import your data](import-data/index).

2. DataRobot conducts the first stage of [exploratory data analysis](eda-explained) [(EDA1)](eda-explained#eda1), where it analyzes data features.

3. Next, you select your [target](model-data#set-the-target-feature) and a [modeling mode](model-data#set-the-modeling-mode), then [start modeling](model-data#start-the-build).

DataRobot generates [feature lists](feature-lists) from which to build models. By default, it uses the feature list with the most [informative features](feature-disc#feature-lists-and-created-features). Alternatively, you can select different generated feature lists or customize your own.
4. DataRobot performs [EDA2](eda-explained#eda2) and further evaluates the data, determining which features correlate to the target ([feature importance](model-ref#importance-score)) and which features are informative, among other information.

The application performs [feature engineering](transform-data/index)—transforming, generating, and reducing the feature set depending on the project type and selected settings.
5. DataRobot selects [blueprints](blueprints) based on the project type and builds candidate models.
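The feature-list idea from steps 3 and 4 amounts to selecting the top-ranked features by an importance score. A toy stand-in for DataRobot's generated feature lists:

```python
def top_feature_list(importances, top_k):
    """Build a feature list from the highest-scoring features.
    `importances` maps feature name -> importance score (illustrative)."""
    ranked = sorted(importances.items(), key=lambda kv: kv[1], reverse=True)
    return [name for name, _ in ranked[:top_k]]
```

Training on a reduced, informative feature list often speeds up modeling without sacrificing accuracy.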

## Analyze and select a model {: #analyze-and-select-a-model }
DataRobot automatically generates models and displays them on the Leaderboard. The [recommended model](model-rec-process) displays at the top with a **Recommended for Deployment** indicator, but you can select any of the models to deploy.

To analyze and select a model:
1. Compare models by selecting an [optimization metric](model-data#optimization-metric) from the **Metric** dropdown; [RMSE](opt-metric#rmse-weighted-rmse-rmsle-weighted-rmsle) (root mean squared error) is the metric displayed in this example.
2. Analyze the model using the visualization tools that are best suited for the type of model you are building.

See the [list of project types and associated visualizations](#which-visualizations-should-i-use) below.
3. Experiment with modeling settings to potentially improve the accuracy of your model. You can try rerunning Autopilot using a different feature list or use a different modeling mode like [Comprehensive Autopilot](more-accuracy).
4. After analyzing your models, select the best for [deployment](deploy-model).
!!! tip
It's recommended that you [test predictions](predict) before deploying. If you aren't satisfied with the results, you can revisit the modeling process and further experiment with feature lists and optimization settings. You might also find that gathering more informative data features can improve outcomes.
5. As part of the deployment process, you [upload predictions](batch-pred). You can also [set up a recurring batch prediction job](batch-pred-jobs).
6. DataRobot [monitors](monitor/index) your deployment. Use the application's visualizations to track [data (feature) drift](data-drift), [accuracy](deploy-accuracy), [bias](mlops-fairness), and [service health](service-health). You can set up notifications so that you are regularly informed of the model's status.

!!! tip
Consider enabling [automatic retraining](set-up-auto-retraining) to automate an end-to-end workflow. With automatic retraining, DataRobot regularly tests [challenger models](challengers#enable-challenger-models) against the current best model (the *champion model*) and replaces the champion if a challenger outperforms it.
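The optimization metric used for model comparison in step 1 is a simple formula. For example, RMSE can be computed as:

```python
import math

def rmse(actuals, predictions):
    """Root mean squared error: the square root of the average squared
    difference between actual and predicted values."""
    squared_errors = [(a - p) ** 2 for a, p in zip(actuals, predictions)]
    return math.sqrt(sum(squared_errors) / len(squared_errors))
```

Because errors are squared before averaging, RMSE penalizes a few large misses more heavily than many small ones, which is why it is a common default for regression projects.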
## Which visualizations should I use? {: #which-visualizations-should-i-use }
DataRobot provides many visualizations for analyzing models. Not all visualization tools are applicable to all modeling projects—the visualizations you can access depend on your project type. The following table lists project types and examples of visualizations that are suited to their analysis:
| Project type | Analysis tools |
|----|----|
| All models | <ul><li>[Feature Impact](feature-impact): Provides a high-level visualization that identifies which features are most strongly driving model decisions (**Understand > Feature Impact**).</li><li>[Feature Effects](feature-effects): Visualizes the effect of changes in the value of each feature on the model’s predictions (**Understand > Feature Effects**).</li><li>[Prediction Explanations](pred-explain/index): Illustrates what drives predictions on a row-by-row basis, answering why a given model made a certain prediction (**Understand > Prediction Explanations**).</li></ul> |
| Regression | <ul><li>[Lift Chart](lift-chart): Shows how well a model segments the target population and how capable it is of predicting the target (**Evaluate > Lift Chart**). </li><li>[Residuals plot](residuals): Depicts the predictive performance and validity of a regression model by showing how linearly your models scale relative to the actual values of the dataset used (**Evaluate > Residuals**).</li></ul> |
| Classification | <ul><li>[ROC Curve](roc-curve): Explores classification, performance, and statistics related to a selected model at any point on the probability scale (**Evaluate > ROC Curve**). </li><li> [Confusion Matrix (binary projects)](confusion-matrix): Compares actual data values with predicted data values in binary projects (**Evaluate > ROC Curve**).</li><li>[Confusion Matrix (multiclass projects)](multiclass): Compares actual data values with predicted data values in multiclass projects (**Evaluate > Confusion Matrix**). </li> </ul> |
| Time-aware modeling (time series and out-of-time validation) | <ul><li>[Accuracy Over Time](aot): Visualizes how predictions change over time (**Evaluate > Accuracy Over Time**).</li><li>[Forecast vs Actual](fore-act): Compares how different predictions behave at different forecast points to different times in the future (**Evaluate > Forecast vs Actual**).</li><li>[Forecasting Accuracy](forecast-acc): Provides a visual indicator of how well a model predicts at each forecast distance in the project’s forecast window (**Evaluate > Forecasting Accuracy**).</li><li>[Stability](stability): Provides an at-a-glance summary of how well a model performs on different backtests (**Evaluate > Stability**).</li><li>[**Over Time** chart](ts-leaderboard#understand-a-features-over-time-chart): Identifies trends and potential gaps in your data by visualizing how features change over the primary date/time feature (**Data > Over Time**). </li></ul> |
| Multiseries | [Series Insights](series-insights-multi): Provides a histogram and table for series-specific information (**Evaluate > Series Insights**). |
| Segmented modeling | [Segmentation tab](ts-segmented#leaderboard-model-scores): Displays data about each segment of a Combined Model (**Describe > Segmentation**). |
| Multilabel modeling | [Feature Statistics](multilabel#feature-statistics-tab): Helps evaluate a dataset with multilabel characteristics, providing a pairwise matrix so that you can visualize correlations, joint probability, and conditional probability of feature pairs (**Data > Feature Statistics**). |
| Visual AI | <ul><li>[Image Embeddings](vai-insights#image-embeddings): Displays a projection of images onto a two-dimensional space defined by similarity (**Understand > Image Embeddings**). </li><li>[Activation Maps](vai-insights#activation-maps): Visualizes areas of images that a model is using when making predictions (**Insights > Activation Maps**).</li></ul> |
| Text AI | <ul><li>[Word Cloud](word-cloud): Visualizes variable keyword relevancy (**Understand > Word Cloud**).</li><li>[Text Mining](analyze-insights#text-mining): Visualizes relevancy of words and short phrases (**Insights > Text Mining**).</li></ul>|
| Geospatial AI | <ul><li>[Geospatial Map](lai-esda): Provides exploratory spatial data analysis (ESDA) by visualizing the spatial distribution of observations (**Data > Geospatial Map**). </li><li>[Accuracy Over Space](lai-insights): Provides a spatial residual mapping within an individual model (**Evaluate > Accuracy Over Space**).</li></ul> |
| Clustering | <ul><li>[Cluster Insights](clustering#cluster-insights): Captures latent features in your data, surfacing and communicating actionable insights and identifying segments for further modeling (**Understand > Cluster Insights**). </li><li>[Image Embeddings](clustering#image-embeddings): Displays a projection of images onto a two-dimensional space defined by similarity (**Understand > Image Embeddings**). </li><li>[Activation Maps](clustering#activation-maps): Visualizes areas of images that a model is using when making predictions (**Understand > Activation Maps**). </li></ul>|
| Anomaly detection | <ul><li>[Anomaly Over Time](anom-viz#anomaly-over-time): Plots how anomalies occur across the timeline of your data (**Evaluate > Anomaly Over Time**). </li><li>[Anomaly Assessment](anom-viz#anomaly-assessment): Plots data for the selected backtest and provides SHAP explanations for up to 500 anomalous points (**Evaluate > Anomaly Assessment**).</li></ul> |
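Several of the classification visualizations above derive from the same binary confusion matrix counts. These are the standard definitions (not DataRobot-specific code):

```python
def confusion_stats(tp, fp, fn, tn):
    """Statistics derived from a binary confusion matrix, where tp/fp/fn/tn
    are the true/false positive/negative counts."""
    return {
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
        "true_positive_rate": tp / (tp + fn),    # sensitivity / recall
        "false_positive_rate": fp / (fp + tn),
    }
```

Sweeping the classification threshold and plotting true positive rate against false positive rate at each point is what produces the ROC curve.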
|
gs-dr-fundamentals
|
---
title: Start modeling
description: Provides a quick overview of modeling and deploying models with DataRobot.
---
# Start modeling {: #start-modeling }
To build models in DataRobot, you first create a project by importing a dataset, selecting a target feature, and clicking **Start** to begin the modeling process. A DataRobot project contains all of the models built with the imported dataset. The following steps provide a quick overview of how to begin modeling data with DataRobot. Links within the steps point to the full documentation if you need assistance.
## 1: Create a new DataRobot project {: #create-a-new-datarobot-project }
Import a dataset using any one of the methods on the new project page to [create a new DataRobot project](import-to-dr):

See [the file type reference](file-types) for information about file size limitations.
## 2: Configure modeling settings {: #configure-modeling-settings }
To begin modeling, type the name of the target and configure the optional settings described below:

| | Element | Description |
|----|----|----|
|  | What would you like to predict?| Type the name of the target feature (the column in the dataset you would like to predict) or click **Use as target** next to the name in the feature list below. |
|  | No target? | Click to build an [unsupervised](unsupervised/index) model. |
|  | Secondary datasets | Optionally, add a secondary dataset by clicking **+ Add datasets**. DataRobot performs [Feature Discovery](feature-discovery/index) and creates relationships to the datasets. |
|  | Feature list | Displays the [feature list](feature-lists) to be used for training models. |
|  | Optimization Metric| Optionally, select an [optimization metric](opt-metric) to score models. DataRobot automatically selects a metric based on the target feature you select and the type of modeling project (e.g., regression, classification, multiclass, or unsupervised). |
|  | Show advanced options | Specify modeling options such as partitioning, bias and fairness, and optimization metric (click **Additional**). |
|  | Time-Aware Modeling | Build [time-aware models](whatis-time) based on time features. |
Scroll down to see the list of available features. Optionally, select a **Feature List** to be used for model training. Click **View info** in the Data Quality Assessment area on the right to investigate the quality of features.

## 3: Start modeling {: #start-modeling }
After specifying the target feature, select a [Modeling Mode](model-data#modeling-modes-explained) to instruct DataRobot to build more or fewer models, then click **Start** to begin modeling:

!!! tip
For large datasets, see the section on [early target selection](fast-eda#fast-eda-and-early-target-selection).
Or, you can set a variety of advanced options to fine-tune your project's model-building process:

DataRobot prepares the project ([EDA2](eda-explained)) and starts running models. A progress indicator for running models is displayed in the Worker Queue on the right of the screen. Depending on the size of the dataset, it may take several minutes to complete the modeling process. The results of the modeling process are displayed in the model Leaderboard, with the best-performing models (based on the chosen optimization metric) at the top of the list.
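The Leaderboard ordering described above amounts to sorting models by the chosen optimization metric. A toy sketch (for error metrics like RMSE, lower is better):

```python
def rank_leaderboard(models, metric, lower_is_better=True):
    """Order models best-first by a validation score. `models` is a list of
    dicts holding a model name and metric scores (illustrative structure)."""
    return sorted(models, key=lambda m: m[metric], reverse=not lower_is_better)
```

For a metric like AUC, where higher is better, you would pass `lower_is_better=False`.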

## 4: Review model details {: #review-model-details }
On the Leaderboard, click a model to display the model blueprint and [access the many tabs](analyze-models/index) available for investigating model information and insights.

## 5: Test predictions before deployment {: #test-predictions-before-deployment }
You can test and generate predictions from any model manually, without deploying to production, via [**Predict > Make Predictions**](predict). Provide a dataset by dragging and dropping a file onto the screen or by using a method from the dropdown. Once the data upload completes, click **Compute Predictions** to generate predictions for the new dataset and then, when complete, click **Download** to view the results in a CSV file.

## Next steps {: #next-steps }
From here, you can:
* [Deploy your model into production](gs-mlops).
* [Create DataRobot Notebooks](gs-code).
* [Leverage DataRobot's AI Accelerators](gs-ai).
|
gs-model
|
---
title: Workbench capabilities
description: An evolving comparison of capabilities available in DataRobot Classic and Workbench.
---
# Workbench capabilities {: #workbench-capabilities }
{% include 'includes/wb-capability-matrix.md' %}
|
gs-wb-capabilities
|
---
title: Workbench experimentation
description: Get started with DataRobot's value-driven AI. Analyze data, create models, and leverage code-first accelerators and notebooks.
---
# Workbench experimentation {: #workbench-experimentation }
Get started with DataRobot's Workbench experience. Analyze data, create models, and leverage code-first accelerators and notebooks.
Topic | Describes...
---------------|---------------
[Workbench capabilities](gs-wb-capabilities) | Compare the evolving capabilities of Workbench with the DataRobot Classic experience.
[Fundamentals of Workbench](gs-wb-fundamentals) | Understand the components of the DataRobot Workbench interface, including the architecture, some sample workflows, and directory landing page.
[Work with data](gs-wb-data) | Use DataRobot's integrated capabilities to import, explore, and prepare your data for modeling.
[Build experiments](gs-wb-experiments) | Data in, models out—click to model and immediately start investigating model insights.
|
index
|
---
title: Fundamentals of Workbench
description: Understand the components of the DataRobot Workbench interface, including the architecture, some sample workflows, and directory landing page.
---
# Fundamentals of Workbench {: #fundamentals-of-workbench }
{% include 'includes/wb-overview.md' %}
|
gs-wb-fundamentals
|
---
title: Work with data (Workbench)
description: An overview of the tools DataRobot provides in Workbench for importing, preparing, and managing data for machine learning.
---
# Work with data (Workbench) {: #work-with-data-workbench }
DataRobot knows that high-quality data is integral to the ML workflow—from importing and cleaning data to transforming and engineering features, from scoring with prediction datasets to deploying on a prediction server—data is critical.
Whether you're using Workbench or DataRobot Classic, DataRobot provides tools to help you seamlessly and securely interact with your data.
**Import data from various sources, including from external data sources** to minimize data movement and control data governance across your cloud data warehouses and lakes.

**Explore patterns and insights** in your data; automate the discovery, testing, and creation of hundreds of valuable new features.

**Clean and prepare your data for modeling** using DataRobot's Exploratory Data Analysis in conjunction with ML data preparation.

In just three steps, your data can be ready for modeling.
## 1: Import data {: #import-data }
Add data to your Use Case via [local file](wb-local-file), the [Data Registry](wb-data-registry), or a [Snowflake data connection](wb-connect).

Not only do data connections minimize data movement, they also allow you to interactively browse, preview, profile, and prepare your data using DataRobot's integrated data preparation capabilities.
??? tip "Learn more"
To learn more about the topics discussed in this section, see:
- [File size requirements](file-types)
- [Add data documentation](wb-add-data/index)
## 2: Explore data {: #explore-data }
While a dataset is being registered in Workbench, DataRobot also performs EDA1—analyzing and profiling every feature to detect feature types, automatically transform date-type features, and assess feature quality. Once registration is complete, you can [explore the information](wb-data-tab#view-exploratory-data-insights) uncovered while computing EDA1.

??? tip "Learn more"
To learn more about the topics discussed in this section, see:
- [Exploratory Data Insights in Workbench](wb-data-tab#view-exploratory-data-insights)
- [EDA Explained](eda-explained)
- [View the results of EDA1](histogram)
## 3: Prepare data {: #prepare-data }
If you've added data from Snowflake, you can use DataRobot's wrangling capabilities, which provide a seamless, scalable, and secure way to access and transform data for modeling. In Workbench, "wrangle" is a visual interface for cleaning data at the source, leveraging the compute environment and distributed architecture of your data source.

When you've finished wrangling your dataset, you can "push down" your transformations to Snowflake, generating a new output dataset.
??? tip "Learn more"
To learn more about the topics discussed in this section, see:
- [Build a recipe](wb-add-operation)
- [Publish a recipe](wb-pub-recipe)
## Next steps {: #next-steps }
Now that your data is where it needs to be, you're ready to start [modeling](gs-wb-experiments).
|
gs-wb-data
|
---
title: Build experiments
description: Build models in minutes, gain insights, compare results, then move your models into production.
---
# Build experiments {: #build-experiments }
DataRobot takes the data you provide, generates multiple machine learning models, and recommends the best model to put into production. With your domain knowledge and DataRobot's programmatic AI expertise, you will have successful models that solve real-world problems—in minutes!
## 1: Create a use case {: #create-a-use-case }
From the Workbench directory, click **Create Use Case** in the upper right:

Provide a name for the use case and click the check mark to accept. You can change this name at any time by opening the use case and clicking on the existing name:

## 2: Create an experiment and add data {: #create-an-experiment-and-add-data }
After you create a use case, create an experiment to start building models. Each Workbench experiment is a set of parameters (data, targets, and modeling settings) that you can compare to find the optimal models to solve your business problem:

Add data to the new experiment, either by [adding new data](gs-wb-data) (1) or selecting a dataset that has already been loaded to the Use Case (2).

## 3: Set the target and start modeling {: #set-the-target-and-start-modeling }
Once you have proceeded to target selection, Workbench prepares the dataset for modeling ([EDA 1](eda-explained#eda1){ target=_blank }). When the process finishes, set the target either by:
=== "Hover on feature name"
Scroll through the list of features to find your target. If it is not showing, expand the list from the bottom of the display:

Once located, click the entry in the table to use the feature as the target.

=== "Enter target name"
Type the name of the target feature you would like to predict in the entry box. DataRobot lists matching features as you type:

Once a target is entered, Workbench displays a histogram providing information about the target feature's distribution and, in the right pane, a summary of the experiment settings.

From here, you are ready to build models with the default settings. Or, you can [modify the default settings](wb-experiment-create#customize-settings) and then begin. If using the default settings, click **Start modeling** to begin the [Quick mode](model-data#modeling-modes-explained){ target=_blank } Autopilot modeling process.
## 4: Evaluate models {: #evaluate-models }
Once you start modeling, Workbench begins to construct your model Leaderboard, a list of models ranked by performance to help with quick model evaluation. The Leaderboard provides a summary of model information, including scoring information, for each model built in an experiment. From the Leaderboard, you can click a model to access visualizations for further exploration. Using these tools can help to assess what to do in your next experiment.

After Workbench completes [Quick mode](model-data#modeling-modes-explained){ target=_blank } on the 64% sample size phase, the most accurate model is selected and trained on 100% of the data. That model is marked with the [**Prepared for Deployment**](model-rec-process#prepare-a-model-for-deployment){ target=_blank } badge.
For all Leaderboard models, you can view model insights to help interpret, explain, and validate what drives a model’s predictions. Available insights are dependent on experiment type, but may include:
* [Feature Impact](wb-experiment-evaluate#feature-impact)
* [Feature Effects](wb-experiment-evaluate#feature-effects)
* [Blueprint](wb-experiment-evaluate#blueprint)
* [ROC Curve](wb-experiment-evaluate#roc-curve)
* [Lift Chart](wb-experiment-evaluate#lift-chart)
* [Residuals](wb-experiment-evaluate#residuals)
* [Accuracy Over Time](wb-experiment-evaluate#accuracy-over-time)
* [Stability](wb-experiment-evaluate#stability)
From the Leaderboard, you can also [generate compliance documentation](wb-experiment-evaluate#compliance-documentation) and [train a model on new settings](wb-experiment-add#train-on-new-settings).
## 5: Make predictions {: #make-predictions }
After you create an experiment and train models, you can make predictions to validate those models. Select the model from the **Models** list and then click **Model actions > Make predictions**.

On the **Make Predictions** page, upload a **Prediction source**:

After you upload a prediction source, you can [configure the prediction options and make predictions](wb-predict).
## Next steps {: #next-steps }
From here, you can:
* [Create DataRobot Notebooks](gs-code).
* [Leverage DataRobot's AI Accelerators](gs-ai).
|
gs-wb-experiments
|
---
title: Get help
description: This help section provides basic account access troubleshooting and quick, task-based instructions for success in modeling.
---
# Get help {: #get-help }
## Troubleshooting {: #troubleshooting }
This section provides information on troubleshooting DataRobot authentication and access:
Section | Describes how to...
--------|--------------------
[Trial FAQ](trial-faq) | Questions and answers about the DataRobot Self-Service SaaS trial.
[Signing in](signin-help) | Things to try if you are having issues signing in.
[Two-factor authentication](2fa) | Common issues when working with 2FA.
[Check platform status](status-help) | View and subscribe to platform status announcements.
[Number of workers](workers-help) | Learn about increasing the worker count.
[Troubleshooting the Python client](py-help) | Review cases that can cause issues with using the Python client and known fixes.
## Tutorials {: #tutorials }
DataRobot offers a variety of tutorials to assist you in using different aspects of the application, outlined below:
Section | Describes how to...
--------|--------------------
[Prepare learning data](prep-learning-data/index) | Assemble, curate, and prepare data for use with DataRobot.
[Create AI models](creating-ai-models/index) | Execute common tasks related to modeling and DataRobot projects.
[Explore AI insights](explore-ai-insights/index) | Analyze model-specific insights generated by DataRobot.
|
index
|
---
title: DataRobot in 5
description: A short overview of the steps involved in building and deploying models in DataRobot.
---
# DataRobot in 5 {: #datarobot-in-5 }
Building and deploying models in DataRobot—regardless of the data handling, modeling options, prediction methods, and deployment actions—comes down to the same five steps:
1. Work with data.
2. Build experiments.
3. Investigate models.
4. Make predictions.
5. Deploy models.
Review the steps below, which link out to more complete descriptions, to get a quick understanding of how to be successful in DataRobot.
## 1: Work with data {: #1-work-with-data }
[Working with data](gs-wb-data) involves importing (or connecting), exploring, and preparing your data. Three steps and your data is ready for modeling.

## 2: Build experiments {: #2-build-experiments }
To [build experiments](gs-wb-experiments), create a use case, add data, and start modeling.

## 3: Evaluate models {: #3-evaluate-models }
DataRobot's Leaderboard shows you all the models built for your experiment. Click on any model to [access visualizations](gs-wb-experiments#evaluate-models) for evaluation and to inform further experimentation.

## 4: Make predictions {: #4-make-predictions }
Once you have selected a model, you can make predictions with it to assess model performance before deploying.

## 5: Deploy models {: #5-deploy-models }
At this time, model deployment options are available in the DataRobot Classic interface. If you are in Workbench, transfer to DataRobot Classic and find your experiment in the [project management center](manage-projects#manage-projects-control-center){ target=_blank }. From the Leaderboard, select the model to deploy and choose [**Predict > Deploy**](deploy-model#deploy-from-the-leaderboard){ target=_blank }.

## Next steps {: #next-steps }
From here, you can:
* [Create DataRobot Notebooks](gs-code).
* [Leverage DataRobot's AI Accelerators](gs-ai).
|
index
|
---
title: DataRobot status
description: Status page announcements provide information on service outages, scheduled maintenance, and historical uptime.
---
# Check platform status {: #check-platform-status }
DataRobot performs service maintenance regularly. Although most maintenance will occur unnoticed, some may cause a temporary impact. Status page announcements provide information on service outages, scheduled maintenance, and historical uptime. You can view and subscribe to notifications from the [DataRobot status page](https://status.datarobot.com/){ target=_blank }.

|
status-help
|
---
title: Need help signing in?
description: This article addresses common questions related to signing up or signing in to the DataRobot AI Platform or the DataRobot Community.
---
# Need help signing in?
This article addresses common questions related to signing up or signing in to the DataRobot AI Platform or the DataRobot Community.
## Are you signing in to the correct AI Platform?
Make sure that you are logging in to the appropriate region of the DataRobot AI Platform. You must log in to the application based on the region selected when registering your account—either [app.datarobot.com](https://app.datarobot.com){ target=_blank } or [app.eu.datarobot.com](https://app.eu.datarobot.com){ target=_blank }.
## Do you have the right password?
If you are not sure if you are entering the right password, try a password reset. Note:
* Make sure you have the right URL for your region before attempting a reset.
* If you are using enforced SSO, the SSO admin will have to do the reset.
* If you did not complete account setup, you will not get a password reset email.
## Still having trouble?
If these steps haven't worked and you're still having issues, try the following:
* Make sure you are using the latest version of the Chrome browser, which DataRobot recommends for the best user experience.
* Clear your browser cache and cookies and try accessing the domain again.
* Try signing in with the browser in incognito mode.
* Contact your administrator.
* If you are in an office or public environment, there may be a firewall blocking website access. Try using a different network to access the site.
## What do my login credentials give me access to?
When you sign up for the DataRobot AI Platform, you can also use those credentials in [DataRobot Community](https://community.datarobot.com){ target=_blank } to post and get involved.
To access [DataRobot University](https://learn.datarobot.com){ target=_blank }, visit the site and create a new set of login credentials.
## How can I add two-factor authentication?
Once you are logged in, you can set up two-factor authentication (2FA):
=== "DataRobot Account Portal (Trial)"
For a trial of the DataRobot AI Platform, you can enable 2FA from [id.datarobot.com/security](https://id.datarobot.com/security){ target=_blank }. For more information about setting up 2FA, see the full [documentation](2fa#set-up-2fa) on the topic. For help, see [2FA troubleshooting](2fa-help).

=== "DataRobot Profile Settings"
For DataRobot's enterprise offerings (managed AI Platform or Self-Managed AI Platform), you can set up two-factor authentication (2FA) from your profile settings. Click your profile avatar (or the default avatar ) in the upper-right corner of DataRobot, click **Profile**, and then click **Security**. For more information about setting up 2FA, see the full [documentation](2fa) on the topic. For help, see [2FA troubleshooting](2fa-help).

## Get more assistance
If the suggestions above did not answer your question(s), contact your administrator or reach out to support@datarobot.com.
|
signin-help
|
---
title: Troubleshooting the Worker Queue
description: If you expect to be able to increase your worker count but cannot, check the reasons described here.
---
# Troubleshooting the Worker Queue {: #troubleshooting-the-worker-queue }
{% include 'includes/worker-queue-tbsht-include.md' %}
|
workers-help
|
---
title: Troubleshooting 2FA
description: Help with two-factor authentication (2FA), an opt-in feature that provides additional security for DataRobot users.
---
# Troubleshooting 2FA {: #troubleshooting-2fa }
Two-factor authentication (2FA) is an opt-in feature that provides additional security for DataRobot users. See the full documentation for instructions on [setting up 2FA](2fa); see the tips below for troubleshooting assistance.
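As background for the troubleshooting tips below, authenticator apps generate codes with the time-based one-time password (TOTP) algorithm: each QR code encodes a distinct shared secret, so codes produced from a stale or duplicate entry will not match. A minimal stdlib sketch of the standard algorithm (the common 30-second step and 6-digit length are assumed here; DataRobot's exact parameters are not documented on this page):

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, unix_time: int, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the current time step of the shared secret."""
    counter = struct.pack(">Q", unix_time // step)
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Because the code is a pure function of the secret and the clock, keeping exactly one DataRobot entry in your authenticator app guarantees the code you read was derived from the QR code DataRobot issued.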
## I am receiving a message that my code is invalid {: #i-am-receiving-a-message-that-my-code-is-invalid }
* Make sure that you have only one instance of DataRobot authentication in your authenticator app. Each time you scan the QR code, the authenticator app creates a new account based on that code. The code you enter must be associated with the QR code displayed, and with multiple entries, it can be unclear which code to enter.
**Solution**: Rename or delete any DataRobot accounts listed in your authentication app.
* To do this with Google, for example, click the pencil icon and select all accounts registered to DataRobot. Select **DELETE** and when prompted, select **REMOVE ACCOUNTS**. (To reinstate the account, you can toggle "Enable two-factor authentication" in **Settings** and recapture the QR code).
## So many codes! {: #so-many-codes }
* Some authentication systems (Google, for example) add new accounts to the bottom of the list.
**Solution**: When prompted for a code, enter the last DataRobot entry.
## I lost my codes {: #i-lost-my-codes }
**Solution**: If you lose access to your phone and recovery codes, contact your administrator or DataRobot Support.
## I no longer want to use 2FA {: #i-no-longer-want-to-use-2fa }
**Solution**: Toggle the feature off on the [**Settings**](user-settings) page.

Enter a 6-digit authentication code or a saved recovery code and click **Disable**. The feature is removed from your account, but you can re-enable it at any time.
## I forgot my password (but I have my code) {: #i-forgot-my-password-but-i-have-my-code }
**Solution**: From the login page, click **Don't Remember?** and then on the next screen, click **Reset Password**:

When prompted, enter your authentication app code or, if you don't have your mobile device, click **Switch to recovery code** and enter one of your saved codes.
DataRobot will send a link to reset your password.
|
2fa-help
|
---
title: Troubleshooting
description: View common issues and troubleshooting tips for a smooth DataRobot experience.
---
# Troubleshooting {: #troubleshooting }
This section provides information on troubleshooting DataRobot authentication and access:
Topic | Describes...
----- | ------------
[Trial FAQ](trial-faq) | Questions and answers about the DataRobot Self-Service SaaS trial.
[Signing in](signin-help) | Things to try if you are having issues signing in.
[Two-factor authentication](2fa) | Common issues when working with 2FA.
[Check platform status](status-help) | View and subscribe to platform status announcements.
[Number of workers](workers-help) | Learn about increasing the worker count.
[Troubleshooting the Python client](py-help) | Review cases that can cause issues with using the Python client and known fixes.
|
index
|
---
title: Troubleshooting the Python client
description: Review cases that can cause issues with using the Python client and known fixes.
---
# Troubleshooting the Python client {: #troubleshooting-the-python-client }
This page outlines cases that can cause issues with using the Python client and provides known fixes.
### InsecurePlatformWarning {: #insecureplatformwarning }
Python versions earlier than 2.7.9 might report an [InsecurePlatformWarning](https://urllib3.readthedocs.org/en/latest/security.html#insecureplatformwarning) in your output. To prevent this warning without updating your Python version, you should install the [pyOpenSSL](https://urllib3.readthedocs.org/en/latest/security.html#pyopenssl) package:
`pip install pyopenssl ndg-httpsclient pyasn1`
### AttributeError: 'EntryPoint' object has no attribute 'resolve' {: #attributeerror-entrypoint-object-has-no-attribute-resolve }
Some earlier versions of [setuptools](https://setuptools.pypa.io/en/latest/){ target=_blank } cause an error when importing DataRobot.
```
>>> import datarobot as dr
...
File "/home/clark/.local/lib/python2.7/site-packages/trafaret/__init__.py", line 1550, in load_contrib
trafaret_class = entrypoint.resolve()
AttributeError: 'EntryPoint' object has no attribute 'resolve'
```
The recommended fix is upgrading setuptools to the latest version.
`pip install --upgrade setuptools`
If you are unable to upgrade, pin [trafaret](https://pypi.python.org/pypi/trafaret/){ target=_blank } to version <=7.4 to correct this issue.
### Connection errors {: #connection-errors }
The Python client's `configuration.rst` describes how to configure the DataRobot client with the `max_retries` parameter to fine-tune behaviors like the number of attempts to retry failed connections.
### ConnectTimeout {: #connecttimeout }
If you have a slow connection to your DataRobot installation, you may see a traceback like:
```python
ConnectTimeout: HTTPSConnectionPool(host='my-datarobot.com', port=443): Max
retries exceeded with url: /api/v2/projects/
(Caused by ConnectTimeoutError(<requests.packages.urllib3.connection.VerifiedHTTPSConnection object at 0x7f130fc76150>,
'Connection to my-datarobot.com timed out. (connect timeout=6.05)'))
```
You can configure a longer connect timeout (increasing the wait time on each request attempting to connect to the DataRobot server before aborting) using a `connect_timeout` value in either a configuration file or when calling `datarobot.Client`.
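Both settings can live in the client configuration file; a sketch of such a file (the path and key names follow the Python client's configuration docs, so verify them against your installed client version):

```yaml
# ~/.config/datarobot/drconfig.yaml (default path used by the Python client;
# key names assumed from the client configuration docs)
endpoint: https://app.datarobot.com/api/v2
token: YOUR_API_TOKEN
connect_timeout: 30   # seconds to wait for each connection attempt
max_retries: 3        # retry attempts for failed connections
```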
### project.open_leaderboard_browser {: #project-open-leaderboard-browser }
Calling ``project.open_leaderboard_browser`` may be blocked if you run it with a text-mode browser or
on a server that doesn't have the ability to open a browser.
|
py-help
|
---
title: Trial FAQ
description: Questions and answers about DataRobot's self-service trial experience.
---
# Trial FAQ {: #trial-faq }
??? faq "What is self-service SaaS?"
DataRobot's _self-service_ SaaS includes the same capabilities and features that are available in the managed AI Platform enterprise software. It provides your organization with the opportunity to explore enhanced capabilities and fully test functionality in AI/ML projects. Through _self-service_ SaaS, users can sign up, manage their organization and users, and run workloads—all without needing to speak to a DataRobot representative.
??? faq "What is DataRobot’s self-service SaaS trial?"
DataRobot’s Self-Service SaaS trial, or “Trial,” is a one-time, 30-day free trial period for organizations to fully explore the platform’s features without any financial commitment. The 30-day trial is based on 30 calendar days, which begins the day you register.
??? faq "Who can access DataRobot’s Self-Service SaaS Trial?"
Trial is available to anyone who would like to try DataRobot. Sign up with an email and password, or with a social login such as Google or GitHub.
??? faq "Can Trial users collaborate with teammates?"
Yes. You can invite colleagues to join your trial account. Once the colleagues have accepted the invite, they become part of your DataRobot _organization_. You can then share Use Cases, datasets, and other assets through all of the traditional sharing methods or by adding the colleagues to a Use Case. Additionally, you can create [No-Code AI Apps](wb-apps/index){ target=_blank } and share the app with users both within and outside of DataRobot through an app-specific sharing link.
??? faq "Can Trial users share with teammates outside their organization?"
Trial users can only share assets within their DataRobot organization, not across organizations. If a user joins under a different organization, even if they are teammates from the same company, they cannot share datasets or assets in a Use Case, although they can still share No-Code AI Apps.
??? faq "How do trial users get product support?"
    Trial users do not have access to DataRobot Customer Support teams or the Customer Support ticketing system. For product support, users can post questions in the [DataRobot Community](https://community.datarobot.com){ target=_blank }.
??? faq "Does trial offer full DataRobot functionality?"
Yes, you will have access to the full power of the platform with both Workbench and DataRobot Classic interfaces. Each account is assigned a specific number of modeling workers, and those workers are shared by all users in the account. Trial users are provided access to batch predictions only.
??? faq "What if I want to continue after the 30 days have passed?"
After the 30-day trial has ended, you can reach out to Sales to upgrade your account.
|
trial-faq
|
---
title: Tutorials
description: Tutorials provide quick, task-based instructions for success in modeling.
---
# Tutorials {: #tutorials }
DataRobot offers a variety of tutorials to assist you in using different aspects of the application, outlined below:
Topic | Describes how to...
----- | ------
[Prepare learning data](prep-learning-data/index) | Assemble, curate, and prepare data for use with DataRobot
[Create AI models](creating-ai-models/index) | Execute common tasks related to modeling and DataRobot projects
[Explore AI insights](explore-ai-insights/index) | Analyze model-specific insights generated by DataRobot
|
index
|
---
dataset_name: 1k_diabetes-train.csv
expiration_date: 10-10-2024
owner: misha.yakubovskiy@datarobot.com
domain: core-modeling
title: Select a target
description: This tutorial provides instructions to select a prediction target for your project.
url: https://docs.datarobot.com/en/tutorials/creating-ai-models/tut-target.html
---
# Select a target {: #select-a-target }
In this tutorial, you'll learn how to select the target feature in a project.
The model building phase of the project begins with selecting a target feature, in other words, the column in your dataset that captures what you would like to predict. Once you select a target, additional configuration options become available.
## Takeaways {: #takeaways }
This tutorial explains:
- How to select a target feature
- Available actions after target selection
## Select a target feature {: #select-a-target-feature }
You can begin building a project after importing a training dataset to DataRobot, which takes you to the **Data** tab.
1. Consider your use case, and begin typing the name of the target feature you would like to predict below **What would you like to predict**.

2. Select the target from the dropdown.

??? tip
If you're unsure of the target feature name, scroll down to view a list of every feature in the training dataset.
Once selected, DataRobot automatically analyzes the training dataset, determines the project type (classification if the target is categorical, regression if it is numeric), and displays the distribution of the target feature.
The other **Start** screen configuration options also become available, including [modeling modes](model-ref#modeling-modes) and [advanced modeling options](adv-opt/index).
## Learn more {: #learn-more }
**Documentation:**
- [Configure advanced modeling options](adv-opt/index)
- [Select a modeling mode](model-ref#modeling-modes)
- [Set target feature in basic modeling workflow](model-data#set-the-target-feature)
- [Explore your data prior to model building](analyze-data/index)
|
tut-target
|
---
title: Set the modeling mode
dataset_name: 1k_diabetes-train.csv
description: This tutorial provides instructions to select a modeling mode for your project.
domain: core-modeling
expiration_date: 10-10-2024
owner: izzy@datarobot.com
url: https://docs.datarobot.com/en/tutorials/creating-ai-models/tut-model-mode.html
---
# Set the modeling mode {: #set-the-modeling-mode }
In this tutorial, you'll learn how to choose the appropriate modeling mode from the four options offered by DataRobot: Autopilot, Quick (the default), Manual, and Comprehensive. A project's modeling mode controls the [sample size](glossary/index#sample-size) percentage of the training set that DataRobot uses to build models.
## Takeaways {: #takeaways }
This tutorial explains:
* How to select a modeling mode
* Differences between modeling modes
* When to use each modeling mode
## Select the modeling mode {: #select-the-modeling-mode }
After selecting a target feature for your project, the ability to choose a modeling mode becomes available. To set the project's modeling mode, click the **Modeling Mode** dropdown.

Use the section below to determine which modeling mode is the best fit for your use case.
### Autopilot {: #autopilot }
In full Autopilot, DataRobot starts by building models using 16% of the total data. The 16 models with the highest accuracy score move on to the next round of modeling where they are rerun on 32% of the data. In round 2, the top 8 most accurate models move on to another round of modeling where they are rerun on 64% of the data (or 500MB of data, whichever is smaller).
By default, Autopilot runs on the [Informative Features feature list](feature-lists#automatically-created-feature-lists).
!!! tip "When is Autopilot useful?"
When you want to generate a more diverse group of models for your specific use case (results in slower runtimes).
### Quick {: #quick }
Quick (Autopilot)—the default modeling mode in DataRobot—is a shortened and optimized version of full Autopilot mode, running on 64% of the data. It builds a smaller group of models, whose selection is based on the target feature and performance metric. The intent is to quickly generate a set of models.
!!! tip "When is Quick mode useful?"
When you want to generate the best models for your specific use case using a balance of speed and accuracy.
### Manual {: #manual }
Manual mode allows you to select and run individual models from the model Repository.
!!! tip "When is Manual mode useful?"
When you already know which kind of modeling approach you want to use.
### Comprehensive {: #comprehensive }
Comprehensive mode allows you to run all blueprints in the Model Repository, using the maximum Autopilot sample size to ensure more accurate models.
!!! tip "When is Comprehensive mode useful?"
When you want to find the most accurate model for your use case, regardless of time. Note that this mode can result in significantly longer build times.
## Learn more {: #learn-more }
**Documentation:**
* [Descriptions of modeling modes](model-data#set-the-modeling-mode)
* [Circumstantial modeling mode behavior](model-ref#modeling-modes)
* [Automatically created feature lists](feature-lists#automatically-created-feature-lists)
|
tut-model-mode
|
---
title: Create AI models
description: The tutorials in this section provide quick, task-based instructions for achieving common tasks related to modeling.
---
# Create AI models {: #create-ai-models }
The content in this section provides quick FAQ answers as well as task-based tutorials for achieving common tasks related to modeling.
Topic | Describes how to...
----- | ------
[Select a target](tut-target) | Select a target feature for your project.
[Set the modeling mode](tut-model-mode) | Choose the appropriate modeling mode for your project.
|
index
|
---
title: Analyze feature associations
dataset_name: N/A
description: How to use a Feature Association matrix to visualize relationships among your features.
domain: platform
expiration_date: 10-10-2024
owner: izzy@datarobot.com
url: docs.datarobot.com/docs/tutorials/prep-learning-data/analyze-feature-associations.html
---
# Analyze feature associations {: #analyze-feature-associations }
In this tutorial, you'll learn how to use a [Feature Association](feature-assoc) matrix to visualize relationships among your numeric and categorical features. You can quickly see the top ten associations and the clusters that are present in your data.
??? tip "How are feature associations calculated?"
Feature associations are calculated using Mutual Information, by default, but you can switch to Cramer's V. Learn [more about these metrics](feature-assoc#more-about-metrics) in the [Feature Association documentation](feature-assoc).
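To make the default metric concrete, here is a rough stdlib sketch of mutual information between two categorical columns (DataRobot's implementation may bin, normalize, or sample differently):

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Mutual information (in nats) between two categorical features.

    Higher values mean knowing one feature tells you more about the other;
    0 means the features are independent.
    """
    n = len(xs)
    px, py = Counter(xs), Counter(ys)
    pxy = Counter(zip(xs, ys))
    mi = 0.0
    for (x, y), c in pxy.items():
        p_joint = c / n
        # p(x,y) * log( p(x,y) / (p(x) * p(y)) ), with counts rearranged
        mi += p_joint * math.log(c * n / (px[x] * py[y]))
    return mi
```

For example, two perfectly aligned columns give `log(2) ≈ 0.69` for two classes, while independent columns give 0.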
## Takeaways {: #takeaways }
This tutorial shows how to:
* View the Feature Associations matrix.
* Investigate feature relationships including pairs and clusters of features.
## View the Feature Associations tab {: #view-the-feature-associations-tab }
The **Feature Associations** tab is available after your features are analyzed in EDA2.
The sample dataset featured in this tutorial contains patient data.

The goal is to predict the likelihood of patient readmission to the hospital. The target feature is `readmitted`.
1. On the **Begin a project** page, upload your data, then specify a target and click **Start**.
DataRobot performs EDA2 prior to generating model blueprints.
2. Once DataRobot finishes feature analysis, click **Feature Associations** on the **Data** tab.

The Feature Associations matrix displays.

The features are listed on the x and y axes of the matrix. The association between two features (a *feature pair*) is represented by a colored dot.
3. Investigate clusters of features.

*Feature clusters* are groups of features that are associated to some degree. The dots in a cluster display in the same general color with association strength represented by the depth of the color—dark (opaque) to light (more transparent). White dots indicate features that are not in a cluster.
Notice the green, red, and blue clusters identified in the chart. The red cluster contains the `change`, `diabetesMed`, `insulin`, and `metformin` features. It makes sense that these features are in a cluster because they all relate to diabetes medication: insulin and metformin are diabetes medications, and the `change` feature indicates that the patient's medication was changed.
4. To zoom in, drag the cursor to outline a section of the matrix.

To view the whole matrix again, click **Reset zoom** below the display.
5. Explore the features by sorting them using the **Sort By** dropdown menu.

By default, the list is sorted by **Feature Cluster**. You can also sort by name and [Importance](model-ref#importance-score).
6. Use the **Feature List** dropdown menu to view the feature associations based on a different feature list.

## Explore pairs of features {: #explore-pairs-of-features }
1. Select a dot in the matrix to view details about the feature pair.

The **Associations** tab on the right shows the cluster that contains the feature pair, as well as the value for the selected metric (Mutual Information, in this case). The tab also provides details about the individual features.
2. Click **View Feature Association Pairs** at the bottom of the **Associations** tab.

The window displays a visualization of the association between the two features.

In this case, both features are categorical, so a contingency table shows the frequency distribution of the feature values. For other feature types, different plots display.
3. Select other pairs of features from the **Feature 1** and **Feature 2** dropdown menus.
For pairs of numeric features, DataRobot generates scatter plots.

If a pair includes a numeric feature and a categorical feature, DataRobot generates a box and whisker plot.

In this example, the feature pair of `admission_type_id` (a categorical feature) and `time_in_hospital` (a numeric feature) generates a box and whisker plot. The plot shows the upper and lower quartiles for the data. The endpoints represent the upper and lower extremes.
## Learn more {: #learn-more }
**Documentation:**
* [Feature Association tab](feature-assoc)
* [Importing data into DataRobot](import-data/index)
* [EDA2](eda-explained#eda2)
|
analyze-feature-associations
|
---
title: Assess data quality during EDA
dataset_name: N/A
description: How DataRobot performs Exploratory Data Analysis (EDA) and how to assess the quality of your data at each stage of EDA.
domain: platform
expiration_date: 10-10-2024
owner: izzy@datarobot.com
url: docs.datarobot.com/docs/tutorials/prep-learning-data/assess-data-quality-eda.html
---
# Assess data quality during EDA {: #assess-data-quality-during-eda }
In this tutorial, you'll learn how DataRobot performs Exploratory Data Analysis (EDA) and how to assess the quality of your data at each stage of EDA—*EDA1* and *EDA2*.
Preparing your data is an iterative process. Even if you clean and prep your training data prior to uploading it to DataRobot, you can still improve its quality by assessing features during EDA.
## Takeaways {: #takeaways }
This tutorial explains:
* Exploratory Data Analysis, including EDA1 and EDA2
* How to add your data to DataRobot
* How to use the Data Quality Assessment tool
* How to evaluate feature importance
## Stages of EDA {: #stages-of-eda }
During EDA, DataRobot performs Data Quality Assessment. The assessment provides information about data quality issues that are relevant to the stage of model building you are performing. Click one of the following tabs to learn about the two EDA stages.
=== "EDA1"
<br>
EDA1 (data ingest) occurs after you upload your data. EDA1 assesses the **All Features** list and detects issues like:<ul><li>[Outliers](data-quality#outliers)</li><li>[Inliers](data-quality#inliers)</li><li>[Excess zeros](data-quality#excess-zeros)</li><li>[Disguised missing values](data-quality#disguised-missing-values)</li><li>[Inconsistent gaps in time series projects](data-quality#irregular-time-steps)</li></ul>
=== "EDA2"
<br>
Once you click **Start** on the **Data** page, DataRobot performs another round of EDA. During this stage, DataRobot detects [target leakage](data-quality#target-leakage) and non-linear correlations between the features and the target, which helps you analyze [feature importance](#investigate-feature-importance). EDA2 reports on the selected feature list. If a feature list is not selected, EDA2 reports on the default **All Features** list.
## Load and view your dataset {: #load-and-view-your-dataset }
As soon as you load your dataset, DataRobot performs EDA1. In this phase, DataRobot generates summary statistics based on a sample of your data.
1. Import your dataset.

To do so, drag a local file to the **Begin a project** page, browse for a **Local file**, or import from an external data source or URL.
DataRobot uploads the dataset, creates a new project, and performs an initial EDA. View the progress in the Worker Queue on the right.

??? tip
To learn how DataRobot handles larger datasets, see [Fast EDA](fast-eda#fast-eda-application).
2. Once you import your data, click **Explore the data** or scroll down to see the features in your dataset.
DataRobot displays the features and provides summary information and statistics.

| | Label | Description |
|---|---|---|
|  | **Var Type** | The data type DataRobot identifies for the feature during EDA, for example, Numeric, Categorical, Boolean, Image, Text, and special feature types like Date. |
|  | **Unique** | The number of unique values for the feature. |
|  | **Missing** | The number of missing values for the feature.|
|  | **Mean, Std Dev, Median, Min, Max** | DataRobot calculates these statistics for numerical features. |
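The numeric summaries in the table can be reproduced with the stdlib `statistics` module; a sketch (DataRobot's exact choice of, say, sample versus population standard deviation is an assumption here):

```python
import statistics

def numeric_summary(values):
    """Summary statistics for a numeric feature: mean, std dev, median, min, max."""
    return {
        "mean": statistics.fmean(values),
        "std_dev": statistics.stdev(values),  # sample standard deviation assumed
        "median": statistics.median(values),
        "min": min(values),
        "max": max(values),
    }
```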
The sample dataset featured in this tutorial contains patient data.

The goal is to predict the likelihood of patient readmission to the hospital. The target feature is `readmitted`.
## Assess data quality after EDA1 {: #assess-data-quality-after-eda1 }
EDA1 helps you catch data issues before you start modeling.
1. Above your feature list and to the right, click **View info**.
The Data Quality Assessment dropdown menu displays.

??? tip
The Data Quality Assessment provides the following issue status flags:<ul><li>Warning: Attention or action required.</li><li>Informational: No action required.</li><li>No issue.</li></ul>
2. Optionally, click **Filter affected features by type of issue detected** and select particular issues to search for.

3. Scroll down to locate the features with issues.
If a feature has an issue, the issue flag displays in the **Data Quality** column. Hover over the flag to view the type of issue.

4. Click a feature that displays an issue flag, then use tools such as the Histogram, Frequent Values, and Feature Associations to explore further.
See [Learn more](#learn-more) for tutorials that show how to use these tools.
## Assess data quality after EDA2 {: #assess-data-quality-after-eda2 }
EDA2 kicks off after you set your target and start the modeling process.
1. Under **What would you like to predict**, enter your target feature.
??? tip
You can keep the mode set to the default, **Quick** autopilot, or you can select a different [modeling mode](model-data#set-the-modeling-mode). You can also customize your [modeling settings](model-data#customize-the-model-build).
2. Click **Start**.
DataRobot performs a number of processing steps. Monitor the steps in the Worker Queue.

As soon as DataRobot finishes analyzing features, you can take a look at feature importance. DataRobot continues with blueprint generation.
## Investigate feature importance {: #investigate-feature-importance }
The importance bars show the degree to which a feature is correlated with the target. Importance is calculated using an algorithm that measures the information content of the variable. This calculation is done independently for each feature in the dataset.
Investigate feature importance to determine which features are most useful for building accurate models and which features you can remove from your training data.
1. In the **Data** tab, scroll down to the feature list.
2. Take a look at the **Importance** column.
The green bars indicate how closely a feature is related to the target.

You might want to remove features that are unrelated to the target.
## Learn more {: #learn-more }
**Related tutorials**
* [Analyze features using histograms](analyze-features-using-histograms)
* [Analyze frequent values](analyze-frequent-values)
* [Analyze feature associations](analyze-feature-associations)
* [Work with feature lists](work-with-feature-lists)
**Documentation:**
* [Data Quality Assessment](data-quality)
* [Data upload overview](import-data/index)
|
assess-data-quality-eda
|
---
title: Analyze features using histograms
dataset_name: N/A
description: How to analyze numeric features using histograms, which let you analyze the distribution of values and view outlier values.
domain: platform
expiration_date: 10-10-2024
owner: izzy@datarobot.com
url: docs.datarobot.com/docs/tutorials/prep-learning-data/analyze-features-using-histograms.html
---
# Analyze features using histograms {: #analyze-features-using-histograms }
DataRobot generates a histogram for each numeric feature so that you can analyze the distribution of the feature's values and view outlier values. In this tutorial, you'll learn how to analyze numeric features using histograms.
## Takeaways {: #takeaways }
This tutorial explains how to use a histogram to:
* View the distribution of values for a feature
* Investigate outliers
## Visualize feature distribution {: #visualize-feature-distribution }
For numeric features, use the histogram to view a rough distribution of values.
1. [Import your dataset](import-data/index).

The sample dataset featured in this tutorial contains patient data.

The goal is to predict the likelihood of patient readmission to the hospital. The target feature is `readmitted`.
??? tip
See the [Assess data quality during EDA](assess-data-quality-eda) tutorial to learn how to use the Data Quality Assessment tool.
2. When the import completes, navigate to the **Project Data** list and select a feature.
For numeric features, a histogram displays equal-sized ranges called *bins*. The height of each bar represents the number of rows with values in that range.

3. Hover over a bin to view the range of the bin and the number of rows that fall within the range.

The `time_in_hospital` feature is the number of days spent in the hospital. The histogram indicates that a visit of one to three days is most common.
4. Click the **Showing** dropdown menu on the bottom left to change the number of bins.

With the additional bins, you can now see that a visit of *two to three* days is most common.
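Conceptually, the bins are equal-width ranges over the feature's span, and each bar counts the rows that fall in its range; a minimal sketch of the binning (DataRobot's exact bin edges may differ):

```python
def histogram(values, bins=10):
    """Split a numeric feature into equal-width bins and count rows per bin."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / bins or 1.0  # guard against a constant feature
    counts = [0] * bins
    for v in values:
        idx = min(int((v - lo) / width), bins - 1)  # clamp the max into the last bin
        counts[idx] += 1
    return [(lo + i * width, lo + (i + 1) * width, counts[i]) for i in range(bins)]
```

Raising `bins` narrows each range, which is why more bins can reveal that a tighter span (two to three days, above) is actually the most common.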
## Visualize outliers {: #visualize-outliers }
Use the histogram to investigate a feature that has outlier values.
1. Select a feature that has outliers if one exists in your feature list.
??? tip
Use the [Data Quality Assessment](data-quality) tool to locate features with outliers. If a feature has outliers, a warning icon displays in the **Data Quality** column. The warning tip indicates the type of issue.
2. In the histogram that displays, toggle **Show outliers** on.

The red dots at the top of the histogram are the outlier values. The gold box plot shows the middle quartiles for the data to help you determine whether the distribution is skewed.
3. Hover over a red dot to view the value of the outlier.

In this example, the outlier shown for the `num_medications` feature is 74.1—far from the median of 14.
## View average target values {: #view-average-target-values }
After you kick off [EDA2](eda-explained#eda2), you can also view the average target values for features.
In the histogram, notice the orange circles that overlay the histogram.

The circles indicate the average target value for a bin. In this example, hospital visits of 8 days result in the highest average target value—for 8-day visits, 46.12% of rows have `readmitted` = 1.
## Learn more {: #learn-more }
**Related tutorials**
* [Assess data quality during EDA](assess-data-quality-eda)
**Documentation**
* [Histogram chart](histogram#histogram-chart)
* [EDA](eda-explained)
* [Data Quality Assessment](data-quality)
|
analyze-features-using-histograms
|
---
title: Import data to DataRobot
dataset_name: N/A
description: How to import data to DataRobot by uploading a local file, specifying a URL, or connecting to a data source.
domain: platform
expiration_date: 10-10-2024
owner: izzy@datarobot.com
url: docs.datarobot.com/docs/more-info/tutorials/prep-learning-data/import-data-tutorial.html
---
# Import data to DataRobot {: #import-data-to-datarobot }
The DataRobot platform provides many methods of ingesting data for machine learning—uploading local files, entering a URL, and connecting to external databases, among others.
This tutorial focuses on importing data using the DataRobot interface. You can also [import to the AI Catalog](catalog) or import using the API (see the [API Quickstart](api-quickstart/index) to get started with the DataRobot API).
## Takeaways {: #takeaways }
This tutorial:
* Provides guidelines for importing to DataRobot.
* Walks through the steps of importing to DataRobot directly.
## Guidelines for imports {: #guidelines-for-imports }
Review the following data guidelines for AutoML, Time Series, and Visual AI projects prior to importing.
### For AutoML projects {: #for-automl-projects }
* The data must be in a flat-file, tabular format.
* You must have a column that includes the target you are trying to predict.
### For Time Series projects {: #for-time-series-projects }
* The data must be in a flat-file, tabular format.
* You must include a [date/time feature](file-types#date-and-time-formats) for each row.
* When using time series modeling, DataRobot detects the time step—the delta between rows, measured as a number and a time-delta unit, for example (15, “minutes”). Your dataset must have a row for each time-delta unit. For example, if you are predicting seven days into the future (time step equals 7, days), your dataset must have a row for each day in the entire date range; similarly, if you are forecasting seven years out, your data must have one row for each year in the entire date range.
* You must have a column that includes the target that you are trying to predict.
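Before uploading a time series dataset, you can sanity-check that a row exists for every time-delta unit. This is a pure-Python sketch with a hypothetical daily series (DataRobot performs its own time-step detection; this is only an optional pre-upload check):

```python
from datetime import date, timedelta

# Toy daily series with a missing day (2024-01-03); illustrative only.
dates = [date(2024, 1, 1), date(2024, 1, 2), date(2024, 1, 4)]

step = timedelta(days=1)  # assumed time step: (1, "days")
missing = []
d = dates[0]
while d <= dates[-1]:
    if d not in dates:
        missing.append(d)  # a gap that would need a row before upload
    d += step
print(missing)  # [datetime.date(2024, 1, 3)]
```

Any dates reported in `missing` indicate gaps to fill before the dataset satisfies the one-row-per-time-step requirement.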
### For Visual AI projects {: #for-visual-ai-projects }
* [Set up folders that contain images](vai-model#prepare-the-dataset) for each class and name each folder for its class. Create a ZIP archive of that folder of folders and upload it to DataRobot.
* You can also add tabular data if you include the links to the images within the top-level folder.
## Import to DataRobot {: #import-to-datarobot }
To import to DataRobot, sign in and navigate to the **Begin project** page by clicking the DataRobot logo on the top left. There are other methods of accessing this page depending on your account type.

The following table describes the methods you can use to import to DataRobot:
| | Import method | Description |
|---|---|---|
|  | Drag and drop | Drag and drop a file from your computer onto the **Begin a project** page. |
|  | Import from | Choose an option:<ul><li>**Data Source**: Connect to a [configured data source](data-conn).</li><li>**URL**: Use a URL.</li><li>**Local file**: Upload a local file from your machine.</li></ul>
|  | Browse | [Browse the AI Catalog](catalog). You can import, store, blend, and share your data through the AI catalog. |
|  | File types | View the accepted formats for imports. See [Dataset requirements](file-types) for more details. |
### Upload a local file {: #upload-a-local-file }
Click **Local file** and browse for a file or drag a file directly onto the **Begin a project** page.

DataRobot uploads the data and creates a project.
### Import from a URL {: #import-from-a-url }
Use a URL to import your data. The URL can be local, HTTP, HTTPS, Google Cloud Storage, Azure Blob Storage, or S3 (an S3 URL must use HTTP).
1. Click **URL**.
2. Enter the URL to your data and click **Create New Project**.

DataRobot imports the data and creates a project.
!!! note
The ability to import from Google Cloud, Azure Blob Storage, or S3 using a URL needs to be configured for your organization's installation. Contact your system administrator for information about configured import methods.
### Import from a data source {: #import-from-a-data-source }
Before importing from a data source, [configure a JDBC connection](data-conn) to the external database.
1. Click **Data Source**.
2. Search and select a data source.

You can also choose to [add a new data connection](data-conn).
3. Choose an account.

4. Select the data you want to connect to.

5. Click to create a project.

DataRobot connects to the data and creates a project.
## What's next? {: #whats-next }
After you import your data, DataRobot creates a project and performs [Exploratory Data Analysis](assess-data-quality-eda).
## Learn more {: #learn-more }
**Documentation:**
* [Dataset requirements](file-types)
* [Import to DataRobot directly](import-to-dr)
* [Import and create projects in the AI Catalog](catalog)
* [Connect to data sources](data-conn)
|
import-data-dr-tutorial
|
---
title: Manage data with the AI Catalog
dataset_name: N/A
description: How to import data to the AI Catalog and how to use the catalog to prepare, blend, and create a project from your data.
domain: platform
expiration_date: 10-10-2024
owner: izzy@datarobot.com
url: docs.datarobot.com/docs/tutorials/prep-learning-data/ai-catalog-tutorial.html
---
# Manage data with the AI Catalog {: #manage-data-with-ai-catalog }
DataRobot’s AI Catalog provides three key functions:
* **Ingest**: Data is imported into DataRobot and sanitized for use throughout the platform.
* **Storage**: Reusable data assets are stored, accessed, and shared.
* **Data Preparation**: Clean, blend, transform, and enrich your data to maximize the effectiveness of your application.
You can access the AI Catalog from anywhere within DataRobot by clicking the **AI Catalog** tab at the top of the browser.
## Takeaways {: #takeaways }
This tutorial shows you how to:
* Add data to the AI Catalog.
* View information about a dataset.
* Blend a dataset with another dataset using Spark SQL.
* Create a project.
## Add data {: #add-data }
To add data to the AI Catalog:
1. Click **AI Catalog** at the top of the DataRobot window.
2. Click **Add to catalog** and select an import method.

The following table describes the methods:
| Method | Description |
|---|---|
| New Data Connection | [Configure a JDBC connection](data-conn) to import from an external database or data lake. |
| Existing Data Connection | [Select a configured data source](import-to-dr#import-from-a-data-source) to import data. Select the account and the data you want to add. |
| Local File | Browse to [upload a local dataset](import-to-dr#import-local-files) or [drag and drop a dataset](import-to-dr#drag-and-drop). |
| URL | [Import by specifying a URL](import-to-dr#import-a-dataset-from-a-url).|
| Spark SQL | Use [Spark SQL queries to select and prepare the data](catalog#use-a-sql-query) you want to store. |
DataRobot registers the data after performing an initial exploratory data analysis ([EDA1](eda-explained#eda1)). Once registered, you can do the following:
* [View information](#view-information-about-a-dataset) about a dataset, including its history.
* [Blend the dataset](#blend-a-dataset-using-spark-sql) with another dataset.
* [Create an AutoML project](#create-a-project).
## View information about a dataset {: #view-information-about-a-dataset }
Click a dataset in the catalog to view information about it.

| | Element | Description |
|---|---|---|
|  | Asset tabs | Select a tab to work with the asset (dataset): <ul><li>**Info**: View and edit basic information about the dataset. Update the name and description, and add tags to use for searches. </li><li>**Profile**: Preview dataset column names and row data. </li><li>**Feature Lists**: Create new feature lists and transformations from the dataset. </li><li>**Relationships**: View relationships configured during [Feature Discovery](feature-discovery/index).</li><li>**Version History**: List and view status for all versions of the dataset. Select a version to create a project or download.</li><li>**Comments**: Add a comment to a dataset. Tag users in your comment and DataRobot sends them an email notification. </li></ul> |
|  | Dataset Info | Update the name and description, and add tags to use for searches. The number of rows and features display on the right, along with other details.
|  | State badges | Displayed badges indicate the [state of the asset](catalog-asset#asset-states)—whether it's in the process of being registered, whether it's static or dynamic, generated from a Spark SQL query, or snapshotted.
|  | Create project | [Create a machine learning project](#create-a-project) from the dataset.
|  | Share | [Share assets](sharing) with other users, groups, and organizations.
|  | actions menu | Download, delete, or create a snapshot of the dataset.
|  | Renew Snapshot | Add a [scheduled snapshot](snapshot). |
## Blend a dataset using Spark SQL {: #blend-a-dataset-using-spark-sql }
You can blend two or more datasets and use Spark SQL to select and transform features.
1. In the catalog, click **Add to catalog** and select **Spark SQL**.

2. Click **Add data**.

3. Select the tables you want to blend and click **Add selected data**.

4. For each dataset, click the actions menu and click **Select Features**.

5. Choose the features and click **Add selected features to SQL**. You can click the right arrows to add features one at a time.

6. Once you have added features from the datasets, add SQL commands to the editing window to generate a query (click **Spark Docs** on the upper right for Spark SQL documentation). Try out the query by clicking **Run**.

7. Click **Save** when you have the results you want. DataRobot registers the new dataset.

## Create a project {: #create-a-project }
Click a registered dataset in the catalog and click **Create project**. DataRobot uploads the data, conducts [exploratory data analysis](assess-data-quality-eda), and creates the machine learning project. You can then start [building models](model-data).
## Learn more {: #learn-more }
**Documentation:**
* [Dataset requirements](file-types)
* [Import using the AI Catalog](catalog)
* [Connect to data sources](data-conn)
* [Work with catalog assets](catalog-asset)
|
ai-catalog-tutorial
|
---
title: Work with feature lists
dataset_name: N/A
description: How to use automatically generated feature lists and build your own as training data for machine learning.
domain: platform
expiration_date: 10-10-2024
owner: izzy@datarobot.com
url: docs.datarobot.com/docs/tutorials/prep-learning-data/work-with-feature-lists.html
---
# Work with feature lists {: #work-with-feature-lists }
DataRobot builds models using *feature lists*—subsets of features from your dataset. DataRobot automatically generates several feature lists. You can also create custom feature lists, using domain knowledge to select the features that will be most useful in building accurate models.
Following are examples of feature lists DataRobot generates.
| Feature list | Description |
|----|----|
| **All Features** (default) | Includes all dataset features; performs no feature engineering. |
| **Informative Features** | Includes features that are potentially valuable for modeling. Features unlikely to be useful—for example, reference IDs, features that contain empty values, and features derived from the target—are removed. DataRobot also creates features, such as date-type features (for example, day of the week and day of the month).|
| **Raw Features** | Includes all features present in your dataset when uploaded. |
| **Univariate Selections** | Includes features that meet a certain threshold for non-linear correlation with the target. This list is available after the target is set. |
| **DR Reduced Features** | Includes the features DataRobot determines to be most important based on Feature Impact scores from a particular model. This list is generated after the models are built. |
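DataRobot's actual Univariate Selections list uses its own non-linear association measure, but the underlying idea—keep features whose relationship with the target clears a threshold—can be sketched with a simple Pearson-correlation filter. Everything here (feature values, the 0.5 cutoff) is made up for illustration:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

target = [0, 0, 1, 1]  # e.g., readmitted
features = {
    "num_medications": [5, 7, 20, 22],  # tracks the target closely
    "patient_id": [101, 57, 88, 12],    # an uninformative reference ID
}

# Keep features whose absolute correlation with the target clears a threshold.
selected = [name for name, vals in features.items()
            if abs(pearson(vals, target)) >= 0.5]
print(selected)  # ['num_medications']
```

The reference-ID column is dropped because its correlation with the target is weak, which is the same intuition behind DataRobot removing non-informative features.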
## Takeaways {: #takeaways }
This tutorial shows how to:
* View feature lists
* Create new feature lists
* Compare models built with different feature lists
* Run Autopilot on a specific feature list
## View feature lists {: #view-feature-lists }
View the features in the lists that DataRobot generates automatically.
1. [Import your dataset](import-data/index) and [select a target](model-data#set-the-target-feature).
The sample dataset featured in this tutorial contains patient data.

The goal is to predict the likelihood of patient readmission to the hospital. The target feature is `readmitted`.
2. Scroll down to the **Project Data** tab.
By default, the All Features list displays.

This list identifies features DataRobot determines to be non-informative.

In this example, some features have too few values and some are duplicates.
3. Click the **Feature List** dropdown menu and select the Informative Features list.

The Informative Features list displays, and the non-informative features are removed.

## Create a feature list {: #create-a-feature-list }
Select features to build your own custom feature lists.
1. In the **Project Data** tab, select features using the check boxes to the left of the feature names.
2. Click **+ Create feature list** and enter the new feature list name to save your custom feature list.

## Create a feature list from an existing list {: #create-a-feature-list-from-an-existing-list }
Use the menu to select an existing feature list, then add or remove features to create a new feature list.
1. Click **Menu** on the top left of the **Project Data** tab and click **Select features by feature list**.

2. Add or remove features using the check boxes to the left of the feature names.
3. Click **+ Create feature list** and enter the new feature list name to save your custom feature list.
## Filter and select by var type {: #filter-and-select-by-var-type }
Filter and select features by variable data type.
1. Click **Menu** on the top left of the **Project Data** tab and click **Select features by var type**.

2. Add or remove features using the check boxes to the left of the feature names.
3. Click **+ Create feature list** and enter the new feature list name to save your custom feature list.
## Build and compare models {: #build-and-compare-models }
Compare models built with different feature lists by comparing [optimization metrics](opt-metric). Choose metrics based on the project type, for example, regression, binary classification, or multiclass.
1. After loading your data and setting your target, click the **Feature List** dropdown menu and select a feature list.
2. Click **Start** to begin modeling.
3. When modeling is complete, click the **Models** tab at the top.

The Leaderboard lists the generated models and indicates the feature list that was used to generate each.
4. In the **Metric** dropdown menu, select a metric to use to compare the models.

This example uses LogLoss as the metric. For LogLoss, the lower the value, the more accurate the model.
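To see why lower LogLoss is better, you can compute it by hand; it heavily penalizes confident wrong predictions. A minimal sketch with made-up predictions:

```python
import math

def log_loss(y_true, y_prob, eps=1e-15):
    """Mean binary log loss; lower values indicate a better model."""
    total = 0.0
    for y, p in zip(y_true, y_prob):
        p = min(max(p, eps), 1 - eps)  # clip to avoid log(0)
        total += -(y * math.log(p) + (1 - y) * math.log(1 - p))
    return total / len(y_true)

y_true = [1, 0, 1, 0]
good = log_loss(y_true, [0.9, 0.1, 0.8, 0.2])  # confident and correct
bad = log_loss(y_true, [0.6, 0.4, 0.5, 0.5])   # hedging near 0.5
print(good < bad)  # True: the better-calibrated model scores lower
```

When comparing Leaderboard models on LogLoss, the same principle applies: the model whose predicted probabilities are both correct and confident scores lowest.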
## Rerun Autopilot on a feature list {: #rerun-autopilot-on-a-feature-list }
After you build your models, you might decide to customize a feature list and generate more models.
1. [Create a custom feature list](#create-a-feature-list).
2. On the **Data** tab, click the **Feature Lists** tab.

3. Click the menu to the right of the feature list you want to use to build new models and select **Rerun Autopilot**.

4. In the **Rerun Modeling** window, select the **Modeling mode** and click **Rerun**.

??? tip
You can also view, edit, export, and delete feature lists using the menu on the right of each feature list.
## Learn more {: #learn-more }
**Documentation:**
* [Explore feature list details.](feature-lists)
* [Assess feature impact—how important a feature is in the context of a particular model.](feature-impact)
|
work-with-feature-lists
|
---
title: Enrich data using Feature Discovery
dataset_name: N/A
description: How Feature Discovery helps you combine datasets of different granularities and perform automated feature engineering.
domain: platform
expiration_date: 10-10-2024
owner: izzy@datarobot.com
url: docs.datarobot.com/docs/tutorials/prep-learning-data/enrich-data-using-feature-discovery.html
---
# Enrich data using Feature Discovery {: #enrich-data-using-feature-discovery }
In this tutorial, you'll learn how Feature Discovery helps you combine datasets of different granularities and perform automated feature engineering.
More often than not, features are split across multiple data assets. Bringing these data assets together can take a lot of work—joining them and then running machine learning models on top. It's even more difficult when the datasets are of different granularities. In this case, you have to aggregate to join the data successfully.
Feature Discovery solves this problem by automating the procedure of joining and aggregating your datasets. After defining how the datasets need to be joined, you leave feature generation and modeling to DataRobot.
This tutorial uses data taken from Instacart, an online aggregator for grocery shopping. The business problem is to predict whether a customer is likely to purchase a banana.
## Takeaways {: #takeaways }
This tutorial shows how to:
* Add datasets to a project
* Define relationships
* Set join conditions
* Configure time-aware settings
* Review features that are generated during Feature Discovery
* Score models built using Feature Discovery
## Load the datasets to AI Catalog {: #load-the-datasets-to-ai-catalog }
The tutorial uses these datasets:
| Table | Description |
|----|----|
| Users | Information on users and whether or not they bought bananas on particular order dates. |
| Orders | Historical orders made by a user. A User record is joined with multiple Order records. |
| Transactions | Specific products bought by the user in an order. An Order record is joined with multiple Transaction records. |
Each of these tables has a different unit of analysis, which defines the *who* or *what* you're predicting, as well as the level of granularity of the prediction. This tutorial shows how to join the tables together so that you have a suitable unit of analysis that produces good results.
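The aggregate-then-join step that Feature Discovery automates can be illustrated in plain Python with tiny made-up tables (not the actual Instacart data):

```python
from collections import defaultdict

# One row per user (the unit of analysis for the prediction).
users = [{"user_id": 1, "bought_banana": 1},
         {"user_id": 2, "bought_banana": 0}]

# One row per order: a finer granularity than users.
orders = [
    {"order_id": 10, "user_id": 1, "total": 30.0},
    {"order_id": 11, "user_id": 1, "total": 12.0},
    {"order_id": 12, "user_id": 2, "total": 8.0},
]

# Aggregate orders up to the user level before joining, so the result
# stays one row per user.
agg = defaultdict(lambda: {"order_count": 0, "total_spent": 0.0})
for o in orders:
    agg[o["user_id"]]["order_count"] += 1
    agg[o["user_id"]]["total_spent"] += o["total"]

enriched = [{**u, **agg[u["user_id"]]} for u in users]
print(enriched[0])  # user 1 has 2 orders totalling 42.0
```

Feature Discovery performs many such aggregations automatically (counts, sums, recency features, and more) once you define the join keys and time-aware settings.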
Start by loading the primary dataset—the dataset containing the target feature you want to predict.
1. Go to the **AI Catalog** and for each dataset you want to upload, click **Add to catalog**.

You can add the data in various ways, for example, by connecting to a data source or uploading a local file.
2. Once all of your datasets are uploaded, select the dataset you want to be your primary dataset and click **Create project** in the upper right.

## Add secondary datasets {: #add-secondary-datasets }
Once you upload your datasets to the AI Catalog, you can add the secondary datasets to the primary dataset in the project you created.
1. In the project you created, specify your target, then under **Secondary Datasets**, click **Add datasets**.

2. On the **Specify prediction point** page of the **Relationship editor**, select the feature that indexes your primary dataset by time under **Select date feature to use as prediction point**. Then click **Set up as prediction point**.

In this dataset, the date feature is `time`.
3. In the **Add datasets** page of the **Relationship editor**, select **AI Catalog**.

4. In the **Add datasets** window, click **Select** next to each dataset you want to add, then click **Add**.

5. Click **Continue** to finalize your selection.

## Define relationships {: #define-relationships }
Next, create relationships between your datasets by specifying the conditions for joining the datasets, for example, the columns on which they are joined. You can also configure time-aware settings if needed for your data.
1. On the **Define Relationships** page, click a secondary dataset to highlight it, then click the plus sign that appears at the bottom of the primary dataset tile.

2. Set join conditions—in this case, specify the columns for joining. DataRobot recommends the `user_id` column for the join. Click **Save and configure time-aware**.

??? tip
Instead of a single column, you can add a list of features for more complex joining operations. Click **+ join condition** and select features to build complex relationships.
3. Select the time feature from the secondary dataset and the feature derivation window, and click **Save**.

See [Time series modeling](time/index) for details on setting time-aware options.
4. Repeat these steps to add any other secondary datasets.
In this example, the three datasets are joined with these relationships:

## Build your models {: #build-your-models }
Now that the secondary datasets are in place and DataRobot knows how to join them, you can go back to the project and begin modeling.
1. Click **Continue to project** in the top right.

Back on the main **Data** page, you can see under **Secondary Datasets** that two relationships have been defined for the *Orders* secondary dataset and one relationship has been defined for the *Transactions* secondary dataset.

2. Click **Start** to begin modeling.
DataRobot loads the secondary datasets and discovers features:

In the next section, you'll learn how to analyze them.
## Review derived features {: #review-derived-features }
DataRobot automatically generates hundreds of features and removes features that might be redundant or have a low impact on model accuracy.
!!! note
To prevent DataRobot from removing less informative features, turn off supervised feature reduction on the [**Feature Reduction** tab](fd-gen#disable-feature-reduction) of the **Feature Discovery Settings** page.
You can begin reviewing the derived features once EDA2 completes.
1. On the **Data** tab, click a derived feature and view the **Histogram** tab.

Derived feature names include the dataset alias and the type of transformation. In this example, the transformation is the unique count of orders by the day of the month.
2. Click the **Feature Lineage** tab to see how this feature was created.

3. To download the new dataset with the derived features, scroll to the top of the **Data** page, click the **Feature Discovery** tab, click the menu icon on the right, and select **Download dataset**.

4. To understand the process DataRobot used to derive and prune the features, click the menu icon on the right and click **Feature Derivation log**.

The **Feature Derivation Log** shows information about the features processed, generated, and removed, along with the reasons why features were removed. You can optionally save the log by clicking **Download**:

## Score models built with Feature Discovery {: #score-models-built-with-feature-discovery }
When scoring models built with Feature Discovery, you need to ensure the secondary datasets are up-to-date and that feature derivation will complete without problems.
To make predictions on models built with Feature Discovery:
1. In the **Models** page, click the **Leaderboard** tab and click the model you selected for deployment.
2. Click **Predict**, then under **Prediction Datasets**, click **Import data from** and import the scoring dataset.

The dataset must have the same schema as the dataset used to create the project. The target column is optional and you don't need to upload secondary datasets at this point.
3. After the dataset is uploaded, click **Compute Predictions**.

4. To change the default configuration for the secondary datasets, under **Secondary datasets configuration**, click **Change**.

Updating the secondary dataset configuration is necessary if the scoring data has a different time period and is not joinable with the secondary datasets used in the training phase.
5. To add a new configuration, click **create new**.

6. To replace a secondary dataset, in the **Secondary Datasets Configuration** window, locate the secondary dataset and click **Replace**.

!!! note
If you need to replace a secondary dataset, do so before uploading your scoring dataset to DataRobot. If not, DataRobot will use the default settings to compute the joins and perform feature derivation.
## Learn more {: #learn-more }
See the following documentation topics for detailed information on Feature Discovery:
* [Creating a project for Feature Discovery](fd-overview)
* [Time-aware feature engineering](fd-time)
* [Derived features](fd-gen)
* [Predictions](fd-predict)
|
enrich-data-using-feature-discovery
|
---
title: Prepare learning data
description: The tutorials in this section provide quick, task-based instructions that will help you with common data preparation tasks.
---
# Prepare learning data {: #prepare-learning-data }
The content in this section provides quick FAQ answers as well as task-based tutorials for achieving common data preparation and management tasks.
Topic | Describes how to...
----- | ------
[Import data to DataRobot](import-data-dr-tutorial) | Learn guidelines for importing data, and walk through import methods using the DataRobot interface.
[Manage data with the AI Catalog](ai-catalog-tutorial) | Learn how to add data to the AI Catalog, including using Spark SQL to blend data from multiple data sources.
[Assess data quality during EDA](assess-data-quality-eda) | Assess the quality of your data during each phase of Exploratory Data Analysis (EDA). |
[Work with feature lists](work-with-feature-lists) | View the feature lists DataRobot builds and create your own custom feature lists. |
[Analyze features using histograms](analyze-features-using-histograms) | Analyze the distribution of a feature's values and investigate outliers. |
[Analyze frequent values](analyze-frequent-values) | Look into the values that appear most and least frequently for a feature. |
[Analyze feature associations](analyze-feature-associations) | Use a Feature Association matrix to visualize feature relationships and clusters. |
[Enrich data using Feature Discovery](enrich-data-using-feature-discovery) | Join datasets to allow DataRobot to discover and engineer new features.
|
index
|
---
title: Analyze frequent values
dataset_name: N/A
description: How to use the Frequent Values chart, a histogram that shows the number of rows containing each value of a feature.
domain: platform
expiration_date: 10-10-2024
owner: izzy@datarobot.com
url: docs.datarobot.com/docs/tutorials/prep-learning-data/analyze-frequent-values.html
---
# Analyze frequent values {: #analyze-frequent-values }
In this tutorial, you'll learn how to use the Frequent Values chart, a histogram that shows the number of rows containing each value of a feature. You can also see the percentage of rows where the target is a particular value.
## Takeaways {: #takeaways }
This tutorial shows how to:
* Access and analyze the Frequent Values chart
* View average target values
## Analyze frequent values {: #analyze-frequent-values_1 }
Use the Frequent Values chart to compare the values of a feature.
1. [Import your dataset](import-data/index).

The sample dataset featured in this tutorial contains patient data.

The goal is to predict the likelihood of patient readmission to the hospital. The target feature is `readmitted`.
??? tip
See the [Assess data quality during EDA](assess-data-quality-eda) tutorial to learn how to use the Data Quality Assessment tool.
2. In the **Project Data** list, click a feature.
For some features like categorical and boolean features, the **Frequent Values** tab is the default. For numeric features, the **Frequent Values** tab is to the right of the **Histogram** tab.
The Frequent Values chart displays each value that appears in the dataset for the feature and the number of rows with that value:

For the `admission_type_id` feature, the most common values are *Emergency* and *Urgent*.
## View average target values {: #view-average-target-values }
After you kick off [EDA2](eda-explained#eda2), you can also view the average target values for features.
1. Under **What would you like to predict**, enter your target feature.
2. Click **Start**.
As soon as DataRobot finishes analyzing features, you can view the average target values in the Frequent Values chart.
3. In the **Project Data** list, select the feature you are analyzing.

Notice the orange circles that overlay the histogram. The circles indicate the average target value for a bin.
## Learn more {: #learn-more }
**Related tutorials**
* [Assess data quality during EDA](assess-data-quality-eda)
**Documentation:**
* [Frequent Values chart](histogram#frequent-values-chart)
|
analyze-frequent-values
|
---
dataset_name: 10k_diabetes.xlsx
expiration_date: 10-10-2024
owner: izzy@datarobot.com
domain: trust-explainable-ai
title: Understand the Word Cloud
description: This tutorial provides instructions to access and understand the Word Cloud insight.
url: https://docs.datarobot.com/en/tutorials/explore-ai-insights/tut-wordcloud.html
---
# Understand the Word Cloud {: #understand-the-word-cloud }
In this tutorial, you'll learn how to draw insights from text features in models using the Word Cloud.
To access the Word Cloud, select a model on the Leaderboard and click **Understand** > **Word Cloud**.
## Takeaways {: #takeaways }
This tutorial explains:
- How to access and interpret the Word Cloud
- How to export the Word Cloud as raw values
## Access the Word Cloud {: #access-the-word-cloud }
When a training dataset contains one or more text features, DataRobot specially trains models to generate text-based insights, including the Word Cloud. If a dataset has multiple text features, a Word Cloud is created for each one.
1. Select a model that supports Word Clouds on the Leaderboard, for example the **Auto-Tuned Word N-Gram Text Modeler**. (See the [**Word Cloud**](word-cloud) description for other model types.)

??? tip "Search tip for finding models"
To narrow down Leaderboard results, enter "insights" into the search bar at the top to quickly find models that produce a visualization on the [**Insights**](analyze-insights) page.
2. Click **Understand** > **Word Cloud**.

## Interpret the Word Cloud {: #interpret-the-word-cloud }
After selecting **Word Cloud**, the window displays a visualization of the model's top 200 text features chosen based on their relationship to the target feature.
1. Mouse over a word. The active word displays in the upper-left corner.

??? tip "Stop words"
To prevent common stop words (the, for, was, etc.) from appearing in the Word Cloud, select the box next to **Filter stop words**.
2. Look at the size of the word. Size represents the frequency of the word in the dataset—larger words appear more frequently than smaller words.

3. Look at the color of the word. Color represents how closely related the word is to the target feature—red indicates a positive effect on the target feature and blue indicates a negative effect on the target feature.

4. Look at the [coefficient](coefficients#coefficientpreprocessing-information-with-text-variables) value.
## Export the Word Cloud {: #export-the-word-cloud }
You can export Word Cloud insights as raw values in a CSV file. To export, click the **Export** button and then **Download** in the resulting dialog.

When the download is complete, open the CSV file.

Fields of the CSV are described below:
Column | Description
------ | -----------
`name` | The word found in the column (in `var_name`).
`var_name` | Feature name (name of the column).
`resp` | Normalized coefficient from the linear model.
`freq` | Normalized word occurrences.
`abs_freq` | Total word occurrences (count).
`stop_word` | Whether stop words are filtered.
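As an example of working with the export, here is a hedged sketch that parses a hypothetical two-row CSV with these columns and ranks non-stop-words by coefficient strength (the rows and values are invented for illustration):

```python
import csv
import io

# A made-up export matching the columns described above.
raw = """name,var_name,resp,freq,abs_freq,stop_word
glucose,diag_text,0.82,0.40,120,False
the,diag_text,0.01,1.00,300,True
"""

rows = list(csv.DictReader(io.StringIO(raw)))

# Keep non-stop-words and rank by absolute coefficient (target association).
words = sorted(
    (r for r in rows if r["stop_word"] == "False"),
    key=lambda r: abs(float(r["resp"])),
    reverse=True,
)
print([r["name"] for r in words])  # ['glucose']
```

Sorting on `resp` surfaces the words most strongly associated with the target, matching what color conveys in the Word Cloud itself.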
## Learn more {: #learn-more }
??? tip "How does DataRobot handle text features?"
If a dataset contains one or more text features, DataRobot uses natural language processing (NLP) tools, such as Auto-Tuned Word N-Gram Text Modelers, to specially tune models and generate NLP visualization techniques, including frequency value tables and word clouds.
During model building, DataRobot incorporates a matrix of word-grams in blueprints. The matrix is produced using common techniques, TF-IDF values, and a combination of multiple text columns.
For large datasets, DataRobot uses the Auto-Tuned Word N-Gram Text Modelers tool, which looks at one text column at a time. This approach uses a single N-Gram model for each text feature in the input dataset, and then uses the predictions from these models as inputs for other models.
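TF-IDF weighting itself is straightforward to sketch. This toy example (made-up documents, unigrams only, no relation to DataRobot's internal implementation) shows how terms that appear in fewer documents receive larger weights:

```python
import math
from collections import Counter

docs = ["high glucose reading", "normal glucose", "normal reading today"]
tokenized = [d.split() for d in docs]

# Document frequency: in how many documents each term appears.
df = Counter(t for doc in tokenized for t in set(doc))
n = len(docs)

def tfidf(doc):
    """Term frequency times inverse document frequency for one document."""
    tf = Counter(doc)
    return {t: (tf[t] / len(doc)) * math.log(n / df[t]) for t in tf}

vec = tfidf(tokenized[0])
# "high" appears in only one document, so it gets the largest weight here.
print(max(vec, key=vec.get))  # high
```

Blueprints build a matrix of such weights over word-grams, which downstream models then consume as numeric features.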
**Documentation:**
- [Additional details on Word Cloud insights](analyze-insights#word-cloud-insights)
- [Other text-based insights in DataRobot](analyze-insights#text-based-insights)
|
tut-wordcloud
|
---
title: Interpret the Leaderboard
dataset_name: 1k_diabetes-train.csv
expiration_date: 10-10-2024
owner: izzy@datarobot.com
domain: core-modeling
description: This tutorial provides an overview of how to read the Leaderboard tab and available actions.
url: https://docs.datarobot.com/en/tutorials/explore-ai-insights/tut-read-leaderboard.html
---
# Interpret the Leaderboard {: #interpret-the-leaderboard }
The Leaderboard provides useful summary information for each model built in a project and ranks them based on the chosen optimization metric—meaning the best performing models are at the top. From the Leaderboard, you can also [access a variety of insight tabs](analyze-models/index#model-leaderboard) to further fine-tune and evaluate your models.
To access the Leaderboard, click **Models** > **Leaderboard** at the top of the page.
## Takeaways {: #takeaways }
This tutorial explains:
* How to read the following model summary information on the Leadboard:
* Performance metrics
* Model badges, icons, and indicators
* Recommended models
* Feature list and sample size
* How to filter the Leaderboard
## Compare performance metrics {: #compare-performance-metrics }
Models are ranked by the optimization metric chosen prior to model building, which is displayed at the top. The Leaderboard displays the model's Validation, Cross Validation, and Holdout (if unlocked) scores.

To change the optimization metric used for ranking, click the **Metric** dropdown and select a new metric.

## Understand model icons {: #understand-model-icons }
The Leaderboard provides a wealth of information for each model built in a project using various badges, tags, and indicators. The [badge to the left of the model name](leaderboard-ref#model-icons) indicates the type, and the text below describes model type and version, or whether it uses unaltered open source code.

The [tags and indicators along the bottom](leaderboard-ref#tags-and-indicators) provide quick model identifying and scoring information.

### Recommended model {: #recommended-model }
The model at the top of the list features a **Recommended for deployment** tag, meaning DataRobot [recommends and prepares the model for deployment](model-rec-process) based on its accuracy and complexity.

Before selecting the recommended model, [compare it against other models on the leaderboard](model-compare) using the model comparison tools to make sure it's the best model for your use case.
### Starred models {: #starred-models }
Starring models on the Leaderboard allows you to quickly find certain models later on.
1. Hover over the model you would like to star.

2. Click the **star** icon. When the star is filled in, that means you've successfully marked this model as a favorite.

## Read feature list and sample size {: #read-feature-list-and-sample-size }
The **Feature List & Sample Size** column displays the feature list and sample size used to build the model and allows you to retrain the model using different parameters.

## Filter the Leaderboard {: #filter-the-leaderboard }
If you're looking for a specific model or model criteria, you can filter the Leaderboard to narrow down the results.
### By optimiziation metric {: #by-optimiziation-metric }
To sort by Validation, Cross Validation, or Holdout scores, click the column header. When the header is blue, the Leaderboard lists models from most accurate to least accurate for the selected partition.
### By starred models {: #by-starred-models }
To view all starred models, click **Filter Models** and select **Starred Models**. The Leaderboard updates to only display [models that have been starred](#starred-models).

### By sample size and feature list {: #by-sample-size-and-feature-list }
You can also filter Leaderboard models by the feature list and sample size used to build them.
1. Click the **Feature List & Sample Size** column header.
2. Select the feature list and sample size parameters of the models you would like the Leaderboard to display.

## Learn more {: #learn-more }
**Documentation:**
* [Leaderboard reference page](leaderboard-ref)
* [Model recommendation process](model-rec-process)
* [Optimization metric details](opt-metric)
* [Create a feature list from the **Data** page](feature-lists#create-feature-lists-from-the-data-page)
* [Compare Leaderboard models](model-compare)
* [Model insight tabs](analyze-models/index#model-leaderboard)
|
tut-read-leaderboard
|
---
dataset_name: predictive_maintenance_train.csv
expiration_date: 10-10-2024
owner: tony.martin@datarobot.com
domain: time-series
title: Use anomaly detection with time series
description: This tutorial describes working with anomaly detection models in DataRobot.
url: https://docs.datarobot.com/en/tutorials/explore-ai-insights/tut-ts-anomaly-detection.html
---
# Use anomaly detection with time series {: #use-anomaly-detection-with-time-series }
Anomaly detection is a method for detecting abnormalities in data, often used in cases where there are thousands of normal transactions and only a low percentage of outliers (for example, network analysis or cybersecurity). In this tutorial, you'll learn how to interpret models that were trained to use anomaly detection (an application of unsupervised learning) to detect [time series anomalies](anomaly-detection#time-series-anomaly-detection). With unsupervised learning you do not specify a target. Instead, DataRobot applies anomaly detection, also referred to as outlier and novelty detection, to detect abnormalities in your dataset.
## Takeaways {: #takeaways }
This tutorial explains:
- How to select the best anomaly detection model.
- How to interpret the selected model.
- Ways to deploy the anomaly detection models.
## Create an anomaly detection model {: #create-an-anomaly-detection-model }
Using the [anomaly detection workflow](anomaly-detection#anomaly-detection-workflow), select **No target** and then **Anomalies** to build an anomaly detection model.

Configure the [project settings](ts-flow-overview)—feature derivation windows, backtests, calendars, and any other customizations. Set a modeling mode and click **Start**.
## Select the best anomaly detection model {: #select-the-best-anomaly-detection-model }
Once Autopilot completes, examine the Leaderboard. Without a target, traditional data science metrics cannot be calculated to estimate model performance so DataRobot instead uses the [Synthetic AUC metric](anomaly-detection#synthetic-auc-metric).
### Upload an external test dataset {: #upload-an-external-test-dataset }
Synthetic AUC is a good basis for model selection if you don't have an [external test dataset](predict#make-predictions-on-an-external-dataset) available. If you *do* have an external dataset available it is better to use that. This is because the anomalies that Synthetic AUC finds may be different than the actual anomalies in your dataset.
To use an external dataset, select a model on the Leaderboard and go to [**Predict > Make Predictions**](predict).
1. [Upload](predict#make-predictions-on-an-external-dataset) an external dataset.
2. Once uploaded, click **Forecast settings**:

And then **Forecast Range Predictions**. From there, enter the name of the "known anomalies column" (to generate scores) and click **Compute predictions**.

4. Once scores are computed, return to the Leaderboard and use the menu to change the display so it shows the external test column.

5. In the **External test** column, click **Run** to compute scores for the other blueprints. The Leaderboard reorders results to show values sorted by the actual (not synthetic) AUC.

Once the external tests are scored, click on any model to explore the visualizations.
## Explore visualizations {: #explore-visualizations }
While there are many visualizations to investigate, for anomaly-specific models some of the most important to consider are:
* [Anomaly Over Time](#anomaly-over-time-tab)
* [Anomaly Assessment](#anomaly-assessment-tab)
The following tabs are always useful for understanding your data and are described in detail in the full documentation:
* [ROC Curve tools](roc-curve-tab/index) help you understand how well the prediction distribution captures the model separation.
* [Feature Impact](feature-impact) displays the relative impact of each feature—both original and derived—on the model.
* [Feature Effects](feature-effects) show how changes to the value of each feature change model predictions in relation to the anomaly score.
* [Prediction Explanations](pred-explain/index) help you understand why a model assigned a value to a specific observation.
### Anomaly Over Time tab {: #anomaly-over-time-tab }
The [**Evaluate > Anomaly Over Time**](anom-viz#anomaly-over-time) chart helps you understand when anomalies occur across the timeline of your data. You can change the backtest being displayed to evaluate anomaly scores across specific validation periods. You can also use the chart from the [Model Comparison](model-compare) tab, which is a good method for identifying two complementary models to blend, increasing the likelihood of capturing more potential issues.

### Anomaly Assessment tab {: #anomaly-assessment-tab }
The [Anomaly Assessment](anom-viz#anomaly-assessment) chart plots data for the selected backtest and provides [SHAP](glossary/index#shap-shapley-values) explanations for up to 500 anomalous points. It helps to identify which features are contributing to the anomaly score (via the SHAP values) and is useful for explaining high scores.

## Make predictions {: #make-predictions }
There are three mechanisms for making predictions with the selected anomaly detection model.
1. [Make Predictions tab](#make-predictions-tab)
2. [Deploy tab](#deploy-tab)
3. [Portable Prediction Server](#portable-prediction-server-pps)
### Make Predictions tab {: #make-predictions-tab }
The **Make Predictions** tab is typically used for:
* Testing from a simple Leaderboard interface.
* Small (less than 1 B) prediction datasets.
* Ad-hoc projects that don’t require frequent predictions.

[Make Predictions with time series projects](ts-predictions#make-predictions-tab) works slightly differently than for non time-series projects. For time series projects, **Make Predictions** requires specific criteria for the prediction dataset and applies forecast settings.
### Deploy tab {: #deploy-tab }
Alternatively, you can create a deployment—a REST endpoint that manages prediction requests via the API. This method connects the model to a dedicated prediction server and creates a dedicated deployment object. Use the [**Deploy**](deploy-model) tab to create a deployment with the model:
1. Click **Prepare for deployment** if the model has not already been prepared.
2. [Add deployment information](deploy-model#add-deployment-information) and click **Create deployment**.

3. View your deployment in the deployment [inventory](deploy-inventory).
### Portable Prediction Server (PPS) {: #portable-prediction-server-pps }
You can deploy a model via Docker with DataRobot's [Portable Prediction Server (PPS)](portable-pps). The PPS is a DataRobot execution environment for DataRobot model packages (.mlpkg files) distributed as a self-contained Docker image. Using this method moves the model closer to production data and allows you to integrate into already existing pipelines and applications.
## Learn more {: #learn-more }
- DataRobot University's [Time Series Anomaly Detection Lab](https://university.datarobot.com/anomaly-detection-lab){ target=_blank } (requires a DataRobot University subscription) to build and evaluate a time-aware unsupervised ML model to detect anomalies in a predictive maintenance dataset.
- Towards Data Science, [Anomaly Detection for Dummies](https://towardsdatascience.com/anomaly-detection-for-dummies-15f148e559c1){ target=_blank }, for an introduction to anomaly detection.
**Documentation:**
- [Time-based modeling](time/index), both out-of-time validation (OTV) and time series
- [Time series model recommendation process](ts-date-time#recommended-time-series-models)
- [**Make Predictions**](predict), full documentation (including AutoML)
- [Prediction options](predictions/index)
|
tut-ts-anomaly-detection
|
---
title: Explore AI insights
description: The tutorials and FAQ in this section provide quick, task-based instructions for achieving common tasks related to modeling.
---
# Explore AI insights {: #explore-ai-insights }
The tutorials in this section provide quick, task-based instructions for achieving common tasks related to modeling.
Topic | Describes how to...
----- | ------
[Interpret the Leaderboard](tut-read-leaderboard) | Interpret Leaderboard model summary information.
[Understand the Word Cloud ](tut-wordcloud) | Understand text-based insights using the Word Cloud.
[Use anomaly detection with time series](tut-ts-anomaly-detection) | Build and interpret anomaly detection models in time series projects.
|
index
|
---
title: Portable prediction methods
description: Learn about DataRobot's available methods for portable predictions.
---
# Portable prediction methods {: #batch-scoring-methods }
{% include 'includes/port-pred-options.md' %}
|
index
|
---
title: Qlik predictions
description: Submit Qlik data for scoring via the prediction API and a sample code snippet.
---
# Qlik predictions {: #qlik-predictions }
To integrate with Qlik, DataRobot provides a code snippet containing the commands and identifiers necessary to submit Qlik data for scoring using the [Prediction API](dr-predapi).
From a deployment's **Predictions** > **Integrations** tab, click the **Qlik** tile.

To use the **Qlik Integrations Code Snippet**, follow the sample and make the necessary changes to integrate the model, via the API, into your production application. Click the **Prediction Explanations** checkbox to include prediction explanations (1) alongside the prediction results.

Copy the sample code (2) and modify it as necessary. Once customized, your code snippet is ready for use with the Prediction API.
|
integration-code-snippets
|
---
title: Prediction API snippets
description: How to adapt downloadable DataRobot Python code to submit a CSV or JSON file for scoring and integrate it into a production application via the Prediction API.
---
# Prediction API snippets {: #prediction-api-snippets }
DataRobot provides sample Python code containing the commands and identifiers required to submit a CSV or JSON file for scoring. You can use this code with the [DataRobot Prediction API](dr-predapi).
You can also read below for more information on:
* [Disabling data drift tracking for individual prediction requests](deploy-inventory#self-managed-ai-platform-deployments-with-monitoring-disabled)
* [Using the monitoring snippet with deployments](#monitoring-snippet)
## Prediction API scripting code {: #prediction-api-scripting-code }
To use the Prediction API Scripting Code, open the deployment you want to make predictions through and click **Predictions** > **Prediction API**. On the Prediction API Scripting Code page, you can choose from several scripts for **Batch** and **Real-time** predictions. Follow the sample provided and make the necessary changes when you want to integrate the model, via API, into your production application.
To find and access the script required for your use case, configure the following settings:
### Batch prediction snippet settings

| | Content | Description |
|-|---------|-------------|
|  | Prediction type | Determines the prediction method used. Select **Batch**. |
|  | Interface | Determines the interface type of the batch prediction script you generate. Select one of the following interfaces:<ul><li>**CLI**: A standalone batch prediction script using the DataRobot API Client. Before using the CLI script, if you haven't already downloaded `predict.py`, click **download CLI tools**.</li><li>**API Client**: An example batch prediction script using the DataRobot's Python package.</li><li>**HTTP**: An example batch prediction script using the raw Python-based HTTP requests. |
|  | Platform<br>(*only* for CLI) | When you select the **CLI** interface option, the platform setting determines the OS on which you intend to run the generated CLI prediction script. Select one of the following platform types:<ul><li>**Mac/Linux**</li><li>**Windows**</li> |
|  | Copy script to clipboard | Copies the entire code snippet to your clipboard. |
|  | Code overview screen| Displays the example code you can download and run on your local machine. You should edit this code snippet to fit your needs. |
### Real-time prediction snippet settings

| | Content | Description |
|-|---------|-------------|
|  | Prediction type | Determines the prediction method used. Select **Real time**. |
|  | Language | Determines the language of the real-time prediction script generated. Select a format:<ul><li>**Python**: An example real-time prediction script using the DataRobot's Python package.</li><li>**cURL**: A script using cURL, a command-line tool for transferring data using various network protocols, available by default in most Linux distributions and macOS.</li> |
|  | Copy script to clipboard | Copies the entire code snippet to your clipboard. |
|  | Code overview screen | Displays the example code you can download and run on your local machine. You should edit this code snippet to fit your needs. |
To deploy the code, copy the sample and either:
* Review [deployment options](index)
* Integrate for use with a [dedicated prediction server](dr-predapi)
### Disable data drift {: #disable-data-drift }
You can disable data drift tracking for individual prediction requests by applying a unique header to the request. This may be useful, for example, in the case where you are using synthetic data that does not have real-world consequences.
Insert the header, `X-DataRobot-Skip-Drift-Tracking=1`, into the request snippet. For example:
```
headers['X-DataRobot-Skip-Drift-Tracking'] = '1'
requests.post(url, auth=(USERNAME, API_KEY), data=data, headers=headers)
```
After you apply this header, drift tracking is not calculated for the request. However, service stats are still provided (data errors, system errors, execution time, and more).
### Monitoring snippet {: #monitoring-snippet }
When you create an external model deployment, you are notified that the deployment requires the use of monitoring snippets to report deployment statistics with the [monitoring agent](../../mlops/deployment/mlops-agent/index).

You can follow the link at the bottom of the page or navigate to **Predictions** > **Monitoring** for your deployment to view the snippet:

The monitoring snippet is designed to configure your MLOps library to send a model's statistics to DataRobot MLOps and represent those statistics in the deployment. Use this functionality to report back Scoring Code metrics to your deployment.
To instrument your Scoring Code with a deployment, select the Java language and copy the snippet to your clipboard when you are ready to use it. For further instructions, reference the Quick Start guide available in the monitoring agent internal documentation.

If you have not yet configured the monitoring agent to monitor your deployment, a download of the MLOps agent tarball is available from a link in the **Monitoring** tab. Additional documentation for setting up the monitoring agent is included in the tarball.

|
code-py
|
---
title: Real-time scoring methods
description: Learn about DataRobot's available methods for making real-time predictions.
---
# Real-time scoring methods {: #real-time-scoring-methods }
Make real-time predictions by sending an HTTP request for a model via a synchronous call. After DataRobot receives the request, it immediately returns a response containing the prediction results.
The simplest method for making real-time predictions is to [deploy a model from the Leaderboard](add-deploy-info) and make prediction requests with the [Prediction API](dr-predapi).
After deploying a model, you can also navigate to a deployment's [**Prediction API**](code-py) tab to access and configure scripting code that allows you to make simple requests to score data. The deployment also hosts [integration snippets](integration-code-snippets).
|
index
|
---
title: Batch prediction methods
description: Learn about DataRobot's available methods for scoring large files efficiently.
---
# Batch prediction methods {: #batch-prediction-methods }
DataRobot offers a variety of methods to efficiently score large files via batch predictions:
Method | Description
------ | ------------
[Batch Prediction UI](batch-dep/index) | Configure Batch Prediction jobs directly from the DataRobot interface included in deployments via MLOps.
[Batch Prediction API](../../api/reference/batch-prediction-api/index) | Use the API to create Batch Prediction jobs that score to and from local files, S3 buckets, the **AI Catalog**, and databases.
[Batch prediction scripts](cli-scripts) | Use command-line tools that wrap the Batch Prediction API, available for Windows, macOS, and Linux.
In addition, you can monitor and manage predictions using monitoring jobs and batch prediction jobs:
Method | Description
------ | ------------
[Monitoring Jobs UI](api-monitoring-jobs) | Use the job definition UI to create monitoring jobs, allowing DataRobot to monitor deployments running and storing feature data and predictions outside of DataRobot.
[Monitoring Jobs API](ui-monitoring-jobs) | Use the Batch Monitoring API to create monitoring job definitions, allowing DataRobot to monitor deployments running and storing feature data and predictions outside of DataRobot.
[Batch Jobs tab ](batch-jobs) | Use the Batch Jobs tab to view and manage monitoring and prediction jobs.
|
index
|
---
title: Manage batch jobs
description: View and manage running or complete jobs.
---
# Manage batch jobs {: #manage-batch-jobs }
To access batch jobs, navigate to **Deployments > Batch Jobs**. You can view and manage all running or complete jobs. Any prediction or monitoring jobs created for deployments appear on this page. In addition, you can [filter jobs](#filter-prediction-jobs) by status, type, start and end time, deployment, job definition ID, job ID, and prediction environment.

### View batch jobs {: #view-batch-jobs }
The following table describes the information displayed in the **Batch Jobs** list.
| Category | Description |
|----------|-------------|
| Job definition | The job definition used to create the job. |
| Job source | Specifies the action that initiated the job—Make Predictions, Scheduled Run, Manual Run, Integration, Ad hoc API, Insights, Portable, and Challengers. |
| Batch Job type | Specifies the job type—batch prediction job or monitoring job. |
| Added to queue | Time at which the job was initialized. |
| Total run time | Time it took to run the job.
| Created by | User who triggered the job. |
| Status | [State](batch-prediction-api/index#job-states) of the job. |
| Source | [Intake adapter](intake-options) for this prediction job. |
| Destination | [Output adapter](output-options) for the prediction job. |
### Manage batch jobs {: #manage-batch-jobs }
To manage a job, select from the action menu on the right:

| Element | Definition | When to use |
|---------|------------|-------------|
| View logs | Displays the log in progress and lets you copy the log to your clipboard. | Jobs that do not use streaming intake |
| Run again | Restarts the run. | Jobs that have finished running |
| Go to deployment | Opens the **Overview** tab for the deployment. | Any job—completed successfully, aborted, or in progress |
| Edit job definition | Opens the **Edit Prediction Job Definition** tab. Update and save the job definition. | Any job |
| Create job definition | Creates a new job definition populated with the settings from the existing prediction job. The new job definition displays, and you can edit and save it. (Alternatively, you can select the **Clone definition** command for a job on the **Job Definitions** tab.) | Any job—except Challenger jobs |
### Filter batch jobs {: #filter-batch-jobs }
To filter the batch jobs:
1. Click **Filters** on the **Batch Jobs** tab:

2. Set filters and click **Apply filters**. Click **Clear filters** to reset the fields.

| | Element | Description |
|---|---|---|
|  | Status | Select job status types to filter by: **Queued**, **Running**, **Succeeded**, **Aborted**, and **Failed**.|
|  | Batch job type | Select job type to filter by:<ul><li>**Batch prediction job**</li><li>**Monitoring job**</li></ul> |
|  | Job source | Select job source type to filter by:<ul><li> Batch prediction jobs (**Schedule Run**, **Manual Run**)</li><li>Integration jobs (**Integration**)</li><li>[Batch predictions generated from the UI](batch-pred) (**Make Predictions**)</li><li>[Batch Prediction API](batch-prediction-api/index) jobs (**Ad hoc API**)</li><li>Insight jobs (**Insights**)</li><li>[Portable Prediction Server](portable-pps) jobs (**Portable**)</li><li>[Challenger](challengers) jobs (**Challenger**)</li></ul>|
|  | Added to queue | Filter by a time range: **Before** or **After** a date you select. |
|  | Deployment | Select a deployment to filter by. Start typing and select a deployment from the dropdown list. |
|  | Job Definition ID | Filter by the jobs generated from a specific job definition. Start typing and select a job definition ID from the dropdown list. |
|  | Prediction Job ID | Enter a specific prediction job ID. |
|  | Prediction Environment | Select from your configured [prediction environments](pred-env). |
|
batch-jobs
|
---
title: Batch prediction scripts
description: Use the Prediction API with these scripts to score large files efficiently.
---
# Batch prediction scripts {: #batch-prediction-scripts }
The Batch prediction scripts are command-line tools for Windows, macOS, and Linux.
They wrap the [Batch Prediction API](batch-prediction-api/index).
To access the scripts, you need a trained model and an active deployment. Then, navigate to the [Predictions API tab](code-py):
* Navigate to the **Deployments** tab and click to select a deployment.
* Select the **Predictions** > [**Prediction API** tab](code-py)
* Select `Batch` as `Predictions Type`
To understand more about how to interact with the code samples, see the [Prediction API Scripting Code](code-py).
|
cli-scripts
|
---
title: JAR structure
description: Review the structure of the downloadable Scoring Code JAR package.
---
# JAR structure {: #jar-structure }
Once you have downloaded the Scoring Code JAR package to your machine, you'll see that it has a well-organized structure:

## Root directory {: #root-directory }
The root directory contains a set of `.so` and `.jnilib` files. These contain compiled Java Native Interface code for LAPACK and BLAS libraries. When a JAR is launched, it first attempts to locate these libraries in the OS. If located, model scoring is greatly speeded up. If the libraries are not located, Scoring Code falls back to a slower Java implementation.

### com.github.fommil package {: #comgithubfommil-package }
The `com.github.fommil` package contains the Java-side of LAPACK and BLAS native interfaces.

### drmodel_ID package {: #drmodel_id-package }
The `drmodel_ID` package contains a set of binary files with parameters for individual nodes of a DataRobot model (blueprint). While these parameters are not human-readable, you can still get their values by debugging `readParameters(DRDataInputStream dis)` methods inside of classes that implement nodes of the model. These classes are located inside of the `om.datarobot.prediction.dr<model_ID>` package.

### com.datarobot.prediction package {: #comdatarobotprediction-package }
The `com.datarobot.prediction` package contains commonly used Java interfaces inside of a Scoring Code JAR. To maintain backward compatibility, it contains both current and deprecated versions of the interfaces. The deprecated interfaces are Predictor, MulticlassPredictor, and Row.

### com.datarobot.prediction.dr<model_ID> package {: #comdatarobotpredictiondr-package }
The`com.datarobot.prediction.dr<model_ID>` package contains the classes that implement the model (blueprint) as well as some utility code.

To understand the model, start with the `BP.java` class. This class manages data flow through the model. The raw data comes into the `DP.java` class where feature conversion and transformation operations take place. Then, the preprocessed data goes into each one of `V<number>` classes where actual steps of model execution take place. All of these classes use three main utility classes:
* `BaseDataStructure` defines a unified container for data.
* `DRDataInputStream` reads binary parameters from the package `dr<model_ID>`.
* `BaseVertex` contains actual implementations of machine learning algorithms and utility functions.
* `DRModel` defines the low-level implementation of a model API. The classes `RegressionPredictorImpl` and `ClassificationPredictorImpl` are top-level APIs built on top of `DRModel`. It is highly recommended that you use these classes instead of using `DRModel` directly. More information about these interfaces can be found in the javadoc (linked from the **Downloads** tab) and in the section [Backward-compatible Java API](java-back-compat).
### com.datarobot.prediction.drmatrix package {: #comdatarobotpredictiondrmatrix-package }
The `com.datarobot.prediction.drmatrix` package contains implementations of common matrix operations on dense and sparse matrices.

### com.datarobot.prediction.engine and com.datarobot.prediction.io packages {: #comdatarobotpredictionengine-and-comdatarobotpredictionio-packages }
The `com.datarobot.prediction.engine` and `com.datarobot.prediction.io` packages contain high-performance scoring logic that enables each Scoring Code JAR to be used as a command line [scoring tool](scoring-cli) for CSV files.

## Differences between source and binary JARs {: #differences-between-source-and-binary-jars }
The following table describes the differences between the source and binary download options.
|Files| Binary `.jar` | Source `.jar` |
|------------------|-------------|-------------|
| Native `.so` and `jnilib` files for BLAS and LAPAC libraries | Yes | No|
| `com.github.fommil` for BLAS and LAPAC libraries | Yes | No |
| `dr<model_ID>` (binary parameters for nodes of the model) | Yes | Yes |
| `com.datarobot.prediction` | Yes | No |
| `com.datarobot.prediction.drmodel_ID` | Yes | Yes |
| `com.datarobot.prediction.drmatrix` | Yes | No |
| `com.datarobot.prediction.engine` | Yes | No |
| `com.datarobot.prediction.io` | Yes | No |
DataRobot provides “source” .jar files for downloading to simplify the process of model inspection. By using the “source” download option, you get only the code that directly implements the model. It is the same code as the “binary” .jar, but stripped of all of the dependencies.
|
jar-package
|
---
title: Scoring Code for time series projects
description: How to use the Scoring Code feature for qualifying time series models, allowing you to use DataRobot-generated models outside of the DataRobot platform.
---
# Scoring Code for time series projects {: #scoring-code-for-time-series-projects }
[Scoring Code](scoring-code/index) is a portable, low-latency method of utilizing DataRobot models outside of the DataRobot application. You can export time series models in a Java-based Scoring Code package from:
* The [Leaderboard](sc-download-leaderboard): (**Leaderboard > Predict > Portable Predictions**)
* A [deployment](sc-download-deployment): (**Deployments > Predictions > Portable Predictions**)
{% include 'includes/scoring-code-consider-ts.md' %}
## Time series parameters for CLI scoring {: #time-series-parameters-for-cli-scoring }
DataRobot supports using [scoring at the command line](scoring-cli). The following table describes the time series parameters:
| Field | Required? | Default| Description |
|---------|-----------|--------------|--------------|
| `--forecast_point=<value>` | No | None | Formatted date from which to forecast. |
| `--date_format=<value>` | No | None | Date format to use for output.
| `--predictions_start_date=<value>` | No | None | Timestamp that indicates when to start calculating predictions. |
| `--predictions_end_date=<value>` | No | None | Timestamp that indicates when to stop calculating predictions.
| `--with_intervals` | No | None | Turns on prediction interval calculations. |
| `--interval_length=<value>` | No | None |Interval length as `int` value from 1 to 99. |
| `--time_series_batch_processing` | No | Disabled | Enables performance-optimized batch processing for time-series models. |
## Scoring Code for segmented modeling projects {: #scoring-code-for-segmented-modeling-projects }
With [segmented modeling](ts-segmented), you can build individual models for segments of a multiseries project. DataRobot then merges these models into a Combined Model.
!!! note
Scoring Code support is available for segments defined by an ID column in the dataset, not segments discovered by a clustering model.
### Verify that segment models have Scoring Code {: #verify-that-segment-models-have-scoring-code }
If the champion model for a segment does not have Scoring Code, select a model that does have Scoring Code:
1. Navigate to the Combined Model on the Leaderboard.

2. From the **Segment** dropdown menu, select a segment. Locate the champion for the segment (designated by the SEGMENT CHAMPION [indicator](leaderboard-ref#tags-and-indicators)).

3. If the segment champion does not have a SCORING CODE indicator, select a new model that meets your modeling requirements and has the SCORING CODE indicator. Then select **Leaderboard options > Mark Model as Champion** from the **Menu** at the top.

The segment now has a segment champion with Scoring Code:

4. Repeat the process for each segment of the Combined Model to ensure that all of the segment champions have Scoring Code.
### Download Scoring Code for a Combined Model {: #download-scoring-code-for-a-combined-model }
To download the Scoring Code JAR for a Combined Model:
* From the leaderboard: [Download the Scoring Code](sc-download-leaderboard) from the Combined Model.
* From a deployment: [Deploy your Combined Model](deploy-model), ensure that [each segment has Scoring Code](#verify-that-segment-models-have-scoring-code), and [download the Scoring Code](sc-download-deployment) from the Combined Model deployment.
## Prediction intervals in Scoring Code {: #prediction-intervals-in-scoring-code }
You can include prediction intervals in the downloaded Scoring Code JAR for a time series model. Supported interval lengths range from 1 to 99.
### Download Scoring Code with prediction intervals {: #download-scoring-code-with-prediction-intervals }
To download the Scoring Code JAR with prediction intervals enabled:
* From the leaderboard: [Download the Scoring Code](sc-download-leaderboard) with **Include Prediction Intervals** enabled.
* From a deployment: [Deploy your model](deploy-model) and [download the Scoring Code](sc-download-deployment) with **Include Prediction Intervals** enabled.
### CLI example using prediction intervals {: #cli-example-using-prediction-intervals }
The following is a CLI example for scoring models using prediction intervals:
``` bash
java -jar model.jar csv \
--input=syph.csv \
--output=output.csv \
--with_intervals \
--interval_length=87
```
|
sc-time-series
|
---
title: Scoring Code JAR integrations
description: How to import DataRobot Scoring Code JARs into external platforms.
---
# Scoring Code JAR integrations {: #scoring-code-jar-integrations }
!!! info "Availability information"
Contact your DataRobot representative for information on enabling the Scoring Code feature.
Although DataRobot provides its own scalable prediction servers, fully integrated with the rest of the platform, there are several reasons you may decide to deploy Scoring Code on another platform:
* Company policy or governance decision.
* Custom functionality on top of the DataRobot model.
* Low-latency scoring without the API call overhead.
* The ability to integrate models into systems that cannot communicate with the DataRobot API.
To use the Scoring Code, download the JAR from the [Leaderboard](sc-download-leaderboard) or from a [deployment](sc-download-deployment) and import it to the platform of your choice, as described in the following topics.
Topic | Describes...
------|-------------
[Use Scoring Code with Amazon SageMaker](sc-sagemaker) | Importing DataRobot Scoring Code models to SageMaker.
[Use Scoring Code with AWS Lambda](sc-lambda) | Making predictions using Scoring Code deployed on AWS Lambda.
[Use Scoring Code with Azure ML](sc-azureml) | Importing DataRobot Scoring Code models to Azure ML.
[Android Scoring Code integration](android) | Using DataRobot Scoring Code on Android.
[Apache Spark API for Scoring Code](sc-apache-spark) | Using the Spark API to integrate DataRobot Scoring Code JARs into Spark clusters.
[Generate Snowflake UDF Scoring Code](snowflake-sc) | Using the DataRobot Scoring Code JAR as a user-defined function (UDF) on Snowflake.
|
sc-jar-integrations
|
---
title: Backward-compatible Java API
description: Review the process of using scoring code with models created on different versions of DataRobot.
---
# Backward-compatible Java API {: #backward-compatible-java-api }
This section describes the process of using scoring code with models created on different versions of DataRobot. See also the [example](quickstart-api#java-api-example) for models generated with the same version.
A Java application can have multiple DataRobot models loaded into the same JVM runtime. As long as all the models are generated by the same version of DataRobot, it is safe to use the model API embedded into those JAR files (`com.datarobot.prediction` package).
If a JVM process is hosting models generated by different versions of DataRobot, there is no guarantee that the correct version of the model API will be loaded from one of the JAR files.
An attempt to load a model can generate an exception such as:
```
Exception in thread "main" java.lang.IllegalArgumentException:
Cannot find a predictor with the 5d2db3e5bad451002ac53318 ID.
```
To use models generated by different versions of DataRobot, use the Compatible Model API described below.
1. Add <a target="_blank" href="https://mvnrepository.com/artifact/com.datarobot/datarobot-prediction">datarobot-prediction</a> and <a target="_blank" href="https://mvnrepository.com/artifact/com.datarobot/datarobot-transform">datarobot-transform</a> Maven references to your project.
2. Change the namespace for all classes from `com.datarobot.prediction` to `com.datarobot.prediction.compatible`.
The Compatible Model API always supports the newest API and is backward-compatible with all versions of DataRobot.
The following is an example of the code using the Compatible Model API:
``` java
import com.datarobot.prediction.compatible.IClassificationPredictor;
import com.datarobot.prediction.compatible.IRegressionPredictor;
import com.datarobot.prediction.compatible.Predictors;

import java.util.HashMap;
import java.util.Map;

public class Main {
    public static void main(String[] args) {
        // data is passed as a Java map
        Map<String, Object> row = new HashMap<>();
        row.put("a", 1);
        row.put("b", "some string feature");
        row.put("c", 999);
        // below is an example of prediction of a single variable (regression)
        // the model ID is the name of the .jar file
        String regression_modelId = "5d2db3e5bad451002ac53318";
        // get a regression predictor object for the given model
        IRegressionPredictor regression_predictor =
            Predictors.getPredictor(regression_modelId);
        double scored_value = regression_predictor.score(row);
        System.out.println("The predicted variable: " + scored_value);
        // below is an example of prediction of class probabilities (classification)
        // the model ID is the name of the .jar file
        String classification_modelId = "5d36ee03962d7429f0a6be72";
        // get a classification predictor object for the given model
        IClassificationPredictor predictor =
            Predictors.getPredictor(classification_modelId);
        Map<String, Double> class_probabilities = predictor.score(row);
        for (String class_label : class_probabilities.keySet()) {
            System.out.println(String.format("The probability of the row belonging to class %s is %f",
                class_label, class_probabilities.get(class_label)));
        }
    }
}
```
|
java-back-compat
|
---
title: Download Scoring Code from a deployment
description: Download a Scoring Code JAR file directly from a DataRobot deployment.
---
# Download Scoring Code from a deployment {: #download-scoring-code-from-a-deployment }
!!! info "Availability information"
The behavior of deployments from which you download Scoring Code depends on the [MLOps configuration](pricing) for your organization.
You can download [Scoring Code](sc-overview) for models as pre-compiled JAR files (with all dependencies included) to be used outside of the DataRobot platform. This topic describes how to download Scoring Code from a deployment. Alternatively, you can download it from the [Leaderboard](sc-download-leaderboard).
## Deployment download {: #deployment-download }
For Scoring Code-enabled models deployed to an [external prediction server](pred-env#add-an-external-prediction-environment), you can download Scoring Code from a deployment's [Actions menu](actions-menu) in the **Deployments** inventory or from a deployment's **Predictions > Portable Predictions** tab. For Scoring Code-enabled models deployed to a DataRobot prediction environment, you can only download Scoring Code from the **Deployments** inventory.
1. Navigate to the **Deployments** inventory, and then take either of the following steps:
* Open a deployment, then navigate to the **Predictions > Portable Predictions** tab and click **Scoring Code**.

!!! note
The **Portable Predictions** tab is only available for models deployed to an external environment. If you deployed your model to a DataRobot environment, use the **Deployments** inventory Scoring Code download method.
* Open the deployment's **Actions** menu and click **Get Scoring Code**.

The **Download Scoring Code** dialog opens.
2. Complete the fields described below in the **Portable Predictions** tab (or the **Download Scoring Code** dialog).

| | Element | Description |
|-|---------|-------------|
|  | Scoring Code | Provides a Java package containing your DataRobot model. Under **Portable Prediction Method**, select **Scoring Code**. You can alternatively select **[Portable Prediction Server](portable-pps)** to set up a REST API-based prediction server. |
|  | Coding language | Select the location from which you want to call the Scoring Code: [Python API](https://pypi.org/project/datarobot-predict/){ target=_blank }, [Java API](quickstart-api#java-api-example), or the [command line interface (CLI)](scoring-cli). Selecting a location updates the example snippet displayed below to the corresponding language. |
|  | Include Monitoring Agent | Downloads the [MLOps Agent](mlops-agent/index) with your Scoring Code. |
|  | Include Prediction Explanations | Includes code to calculate [Prediction Explanations](pred-explain/index) with your Scoring Code. This allows you to get Prediction Explanations from your Scoring Code by adding the command line option: `--with-explanations`. See [Scoring at the command line](scoring-cli) for more information. |
|  | Include Prediction Intervals (for time series) | Includes code to calculate [Prediction Intervals](ts-predictions#prediction-preview) with your Scoring Code. This allows you to get Prediction Intervals (from 1 to 99) from your Scoring Code by adding the command line option: `--interval_length=<integer value from 1 to 99>`. See [Scoring at the command line](scoring-cli) for more information.|
|  | Prepare and download / Prepare and download as source code | <ul><li>**Prepare and download**: Downloads the Scoring Code as a Java package. The package contains compiled Java executables, which include all dependencies and can be used to make predictions.</li><li>**Prepare and download as source code**: Downloads Java source code files. These are a non-obfuscated version of the model; they cannot be used to score the model since they are not compiled and dependency packages are not included. Use the source files to explore the model’s decision-making process. This option is only available if you don't have the monitoring agent and prediction explanations enabled.</li></ul> |
|  | Example | Provides a code example that calls the Scoring Code using the selected coding language. |
|  | Copy to clipboard | Copies the Scoring Code example to your clipboard so that you can paste it in your IDE or on the command line. |
!!! tip
Access the [DataRobot Prediction Library](https://pypi.org/project/datarobot-predict/){ target=_blank } to make predictions using various prediction methods supported by DataRobot via a Python API. The library provides a common interface for making predictions, making it easy to swap out any underlying implementation. Note that the library requires a Scoring Code JAR file.
3. Once the settings are configured, click **Prepare and download** to download a Java package or **Prepare and download as source code** to download source code files.
!!! warning
For users not on the 5.0 [pricing plan](pricing) who choose to download Scoring Code, the deployment becomes permanent and cannot be deleted. A warning message prompts you to accept this condition. Use the toggle to indicate your understanding, then click **Prepare and download** to download a Java package or **Prepare and download as source code** to download source code files.
4. When the Scoring Code download completes, use the snippet provided on the tab to call the Scoring Code.
For implementation examples, reference the MLOps agent tarball documentation, which you can download from the [**Developer Tools**](api-key-mgmt#mlops-agent-tarball) page. You can also use the [monitoring snippet](code-py#monitoring-snippet) to integrate with the MLOps Agent.
|
sc-download-deployment
|
---
title: Download Scoring Code from the Leaderboard
description: Download a Scoring Code JAR file directly from the Leaderboard.
---
# Download Scoring Code from the Leaderboard {: #download-scoring-code-from-the-leaderboard }
You can download [Scoring Code](sc-overview) for models as pre-compiled JAR files (with all dependencies included) to be used outside of the DataRobot platform. This topic describes how to download Scoring Code from the Leaderboard. Alternatively, you can download it from a [deployment](sc-download-deployment).
## Leaderboard download {: #leaderboard-download }
!!! info "Availability information"
The ability to download Scoring Code for a model from the Leaderboard depends on the [MLOps configuration](pricing) for your organization.
If you have built a model with AutoML and want to download Scoring Code, you can download directly from the Leaderboard:
1. Navigate to the model on the Leaderboard, select the **Predict > Portable Predictions** tab, and select **Scoring Code**. Complete the fields described below.

| | Element | Description |
|-|---------|-------------|
|  | Scoring Code | Provides a Java package containing your DataRobot model. Under **Portable Prediction Method**, select **Scoring Code**. You can alternatively select **[Portable Prediction Server](portable-pps)** to set up a REST API-based prediction server. |
|  | Coding language | Select the location from which you want to call the Scoring Code: [Python API](https://pypi.org/project/datarobot-predict/){ target=_blank }, [Java API](quickstart-api#java-api-example), or the [command line interface (CLI)](scoring-cli). Selecting a location updates the example snippet displayed below to the corresponding language. |
|  | Include Prediction Explanations | Includes code to calculate [Prediction Explanations](pred-explain/index) with your Scoring Code. This allows you to get Prediction Explanations from your Scoring Code by adding the command line option: `--with-explanations`. See [Scoring at the command line](scoring-cli) for more information.|
|  | Include Prediction Intervals (for time series) | Includes code to calculate [Prediction Intervals](ts-predictions#prediction-preview) with your Scoring Code. This allows you to get Prediction Intervals (from 1 to 99) from your Scoring Code by adding the command line option: `--interval_length=<integer value from 1 to 99>`. See [Scoring at the command line](scoring-cli) for more information.|
|  | Prepare and download / Prepare and download as source code | <ul><li>**Prepare and download**: Downloads the Scoring Code as a Java package. The package contains compiled Java executables, which include all dependencies and can be used to make predictions.</li><li>**Prepare and download as source code**: Downloads Java source code files. These are a non-obfuscated version of the model; they cannot be used to score the model since they are not compiled and dependency packages are not included. Use the source files to explore the model’s decision-making process. This option is only available if you don't have the monitoring agent and prediction explanations enabled.</li></ul> |
|  | Example | Provides a code example that calls the Scoring Code using the selected coding language. |
|  | Copy to clipboard | Copies the Scoring Code example to your clipboard so that you can paste it in your IDE or on the command line. |
!!! tip
Access the [DataRobot Prediction Library](https://pypi.org/project/datarobot-predict/){ target=_blank } to make predictions using Scoring Code and other prediction methods supported by DataRobot via a Python API. The library provides a common interface for making predictions, making it easy to swap out any underlying implementation.
2. Once the settings are configured, click **Prepare and download** to download a Java package or **Prepare and download as source code** to download source code files. The download appears in the downloads bar when complete.
3. Use the snippet provided on the tab to call the Scoring Code.
|
sc-download-leaderboard
|
---
title: Scoring Code usage examples
description: Learn how to use DataRobot's Scoring Code feature.
---
# Scoring Code usage examples {: #scoring-code-usage-examples }
!!! info "Availability information"
Contact your DataRobot representative for information on enabling the Scoring Code feature.
Models displaying the SCORING CODE [indicator](leaderboard-ref#tags-and-indicators) on the Leaderboard support Scoring Code downloads. You can download Scoring Code JARs from the [Leaderboard](sc-download-leaderboard) or from a [deployment](sc-download-deployment).

!!! note
The model JAR files require Java 8 or later.
See below for examples of:
* Using the binary Scoring Code JAR to score a CSV file on the [command line](#command-line-interface-example).
* Using the downloaded JAR in a [Java project](#java-api-example).
For more information, see the Scoring Code [considerations](scoring-code/index#feature-considerations).
## Command line interface example {: #command-line-interface-example }
The following example uses the binary scoring code JAR to score a CSV file. See [Scoring with the embedded CLI](scoring-cli#scoring-with-the-embedded-cli) for complete syntax.
``` bash
java -Dlog4j2.formatMsgNoLookups=true -jar 5cd071deef881f011a334c2f.jar csv --input=Iris.csv --output=Iris_out.csv
```
Returns:
```
head Iris_out.csv
Iris-setosa,Iris-virginica,Iris-versicolor
0.9996371740832738,1.8977798830979584E-4,1.7304792841625776E-4
0.9996352462865297,1.9170611877686303E-4,1.730475946939417E-4
0.9996373523223016,1.8970270284380858E-4,1.729449748545291E-4
```
See also descriptions of [command line parameters](scoring-cli#command-line-parameters) and increasing [Java heap memory](scoring-cli#increase-java-heap-memory).
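For large scoring jobs, the standard JVM memory flags apply to the Scoring Code JAR as to any other Java program. A minimal sketch, assuming the JAR and file names from the example above are placeholders for your own:

``` bash
# Allocate 4 GB of heap to the scoring process via the standard JVM -Xmx flag.
java -Xmx4g -Dlog4j2.formatMsgNoLookups=true -jar 5cd071deef881f011a334c2f.jar csv \
    --input=large_dataset.csv \
    --output=large_dataset_out.csv
```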
## Java API example {: #java-api-example }
To be used with the Java API, add the downloaded JAR file to the classpath of the Java project. This API has different output formats for regression and classification projects. Below is an example of both:
``` java
import com.datarobot.prediction.IClassificationPredictor;
import com.datarobot.prediction.IRegressionPredictor;
import com.datarobot.prediction.Predictors;
import java.util.HashMap;
import java.util.Map;
public class Main {
public static void main(String[] args) {
Map<String, Object> row = new HashMap<>();
row.put("a", 1);
row.put("b", "some string feature");
row.put("c", 999);
// below is an example of prediction of a single variable (regression)
// get a regression predictor by model id
IRegressionPredictor regressionPredictor = Predictors.getPredictor("5d2db3e5bad451002ac53318");
double scored_value = regressionPredictor.score(row);
System.out.println("The predicted variable: " + scored_value);
// below is an example of prediction of class probabilities (classification)
// get a classification predictor by model id
IClassificationPredictor predictor = Predictors.getPredictor("5d36ee03962d7429f0a6be72");
Map<String, Double> classProbabilities = predictor.score(row);
for (String class_label : classProbabilities.keySet()) {
System.out.printf("The probability of the row belonging to class %s is %f%n",
class_label, classProbabilities.get(class_label));
}
}
}
```
See also a [backward-compatibility](java-back-compat) example for use when models are generated by different versions of DataRobot.
### Java Prediction Explanation examples {: #java-prediction-explanation-examples }
When you download a Scoring Code JAR from the [Leaderboard](sc-download-leaderboard) or from a [deployment](sc-download-deployment) with **Include Prediction Explanations** enabled, you can calculate [Prediction Explanations](pred-explain/index) in your Java code.
!!! note
For availability information, see the [Prediction Explanations for Scoring Code considerations](scoring-code/index#prediction-explanations-support).
The following examples calculate Prediction Explanations with the _default_ parameters:
=== "Regression"
``` java
IRegressionPredictor predictor = Predictors.getPredictor();
Score<Double> score = predictor.scoreWithExplanations(featureValues);
List<Explanation> explanations = score.getPredictionExplanation();
```
=== "Binary Classification"
``` java
IClassificationPredictor predictor = Predictors.getPredictor();
Score<Map<String, Double>> score = predictor.scoreWithExplanations(featureValues);
List<Explanation> explanations = score.getPredictionExplanation();
```
The following examples calculate Prediction Explanations with _custom_ parameters:
=== "Regression"
``` java
IRegressionPredictor predictor = Predictors.getPredictor();
ExplanationParams parameters = predictor.getDefaultPredictionExplanationParams();
parameters = parameters
.withMaxCodes(10)
.withThresholdHigh(0.8)
.withThresholdLow(0.3);
Score<Double> score = predictor.scoreWithExplanations(featureValues, parameters);
List<Explanation> explanations = score.getPredictionExplanation();
```
=== "Binary Classification"
``` java
IClassificationPredictor predictor = Predictors.getPredictor();
ExplanationParams defaultParameters = predictor.getDefaultPredictionExplanationParams();
defaultParameters = defaultParameters
.withMaxCodes(10)
.withThresholdHigh(0.8)
.withThresholdLow(0.3);
Score<Map<String, Double>> score = predictor.scoreWithExplanations(featureValues, defaultParameters);
List<Explanation> explanations = score.getPredictionExplanation();
```
|
quickstart-api
|
---
title: Download Scoring Code from the Leaderboard (Legacy)
description: Download a Scoring Code JAR file directly from the Leaderboard as a legacy user.
---
# Download Scoring Code for legacy users {: #download-scoring-code-for-legacy-users }
Models displaying the SCORING CODE [indicator](leaderboard-ref#tags-and-indicators) on the Leaderboard are available for Scoring Code download.
!!! info "Availability information"
The ability to download Scoring Code for a model from the Leaderboard depends on the [MLOps configuration](pricing) for your organization. Legacy users will see the option described below. MLOps users can download Scoring Code [from the Leaderboard](sc-download-leaderboard) and directly [from a deployment](sc-download-deployment).
Navigate to the **Predict > Downloads** tab, where you can select a download option and access a link to the up-to-date Java API documentation.

There are two download options for Scoring Code:
| Selection | Description |
|-------------|-------------|
| Binary | These are compiled Java executables, which include all dependencies and can be used to make predictions.|
| Source (Java source code files) | These are a non-obfuscated version of the model; they cannot be used to score the model since they are not compiled and dependency packages are not included. Use the source files to explore the model’s decision-making process. |
Additional information about the Java API can be found in the [DataRobot javadocs](https://javadoc.io/doc/com.datarobot/datarobot-prediction/2.0.11){ target=_blank }.
|
sc-download-legacy
|
---
title: Scoring Code
description: How to export Scoring Code so that you can use DataRobot-generated models outside of the DataRobot platform.
---
# Scoring Code {: #scoring-code }
!!! info "Availability information"
Contact your DataRobot representative for information on enabling the Scoring Code feature.
The **Scoring Code** feature exports Scoring Code for qualifying Leaderboard models, allowing you to use DataRobot-generated models outside the platform.
For more information, see the Scoring Code [considerations](#feature-considerations).
The following sections describe how to work with Scoring Code:
| Topic | Describes |
|-------|-----------|
| [Scoring Code overview](sc-overview) | Scoring Code, how you download it, and how to score with it. |
| [Download Scoring Code from the Leaderboard](sc-download-leaderboard) | Downloading and configuring Scoring Code from the Leaderboard. |
| [Download Scoring Code from a deployment](sc-download-deployment) | Downloading and configuring Scoring Code from a deployment. |
| [Download time series Scoring Code](sc-time-series) | Downloading and configuring Scoring Code for a time series project. |
| [Scoring at the command line](scoring-cli) | Syntax for scoring with embedded CLI. |
| [Scoring Code usage examples](quickstart-api) | Examples showing how to use the Scoring Code JAR to score from the CLI and in a Java project. |
| [JAR structure](jar-package) | The contents of the Scoring Code JAR package. |
| [Generate Java models in an existing project](build-verify) | Retraining models that were created before the Scoring Code feature was enabled. |
| [Backward-compatible Java API](java-back-compat) | Using Scoring Code with models created on different versions of DataRobot. |
| [Scoring Code JAR integrations](sc-jar-integrations) | Deploying DataRobot Scoring Code on an external platform. |
{% include 'includes/scoring-code-consider.md' %}
|
index
|
---
title: Generate Java models in an existing project
description: Retrain legacy models for which you want to download Scoring Code.
---
# Generate Java models in an existing project {: #generate-java-models-in-an-existing-project }
If you have projects that were created before the Scoring Code feature was enabled for your organization, you must retrain the models for which you want to download code. You do not need to recreate the entire project.
To retrain a model:
1. Click the checkbox at the left of the model to select it. Note the blueprint number (BPxx) as you will need this information later.
2. From the dropdown menu, select **Delete**.

3. Open the [**Repository**](repository) and search for a model with same blueprint number. Check the box to the left of the model to select it.
4. Set the [values](repository#create-a-new-model) and click the **Run task(s)** button.
5. When the model has finished training, return to the Leaderboard and enter the blueprint number in the search field.
6. Expand the model version with the SCORING CODE  tag, navigate to **Predict > Downloads**, select a download option, and click **Download** to access the Scoring Code.
!!! note
A retrained model may have slightly different predictions than the original model due to the nature of the parameter initialization process used by machine learning algorithms.
|
build-verify
|
---
title: Scoring Code overview
description: How to use the Scoring Code feature for qualifying Leaderboard models, allowing you to use DataRobot-generated models outside of the DataRobot platform.
---
# Scoring Code overview {: #scoring-code-overview }
!!! info "Availability information"
Contact your DataRobot representative for information on enabling the Scoring Code feature.
Scoring Code allows you to export DataRobot-generated models as JAR files that you can use outside of the platform. DataRobot automatically runs code generation for qualifying models and indicates code availability with a SCORING CODE [indicator](leaderboard-ref#tags-and-indicators) on the Leaderboard.
You can export a model's Scoring Code from the [Leaderboard](sc-download-leaderboard) or [the model's deployment](sc-download-deployment). The download includes a pre-compiled JAR file (with all dependencies included), as well as the source code JAR file. Once exported, you can view the model's source code to help understand each step DataRobot takes in producing your predictions.
Scoring Code JARs contain Java Scoring Code for a predictive model. The prediction calculation logic is identical to the DataRobot API—the code generation mechanism tests each model for accuracy as part of the generation process. The generated code is easily deployable in any environment and is not dependent on the DataRobot application.
??? tip "How does DataRobot determine which models will have Scoring Code?"
When the Scoring Code feature is enabled, DataRobot generates a Java alternative for each blueprint preprocessing step and compares its results on the validation set with the original results. If the difference between results is greater than 0.00001, DataRobot does not provide the option to download the Scoring Code. In this way, DataRobot ensures that the Scoring Code JAR model always produces the same predictions as the original model. If verification fails, check the [**Log**](log) tab for error details.
## Why use Scoring Code? {: #why-use-scoring-code }
* **Flexibility**: Can be used anywhere that Java code can be executed.
* **Speed**: Provides low-latency scoring without the API call overhead. Java code is typically faster than scoring through the Python API.
* **Integrations**: Lets you integrate models into systems that can’t necessarily communicate with the DataRobot API. The Scoring Code can be used either as a primary means of scoring for fully offline systems or as a backend for systems that are using the DataRobot API.
* **Precision**: Provides a complete match of predictions generated by DataRobot and the JAR model.
* **Hardware**: Allows you to use additional hardware to score large amounts of data.
See the following sections for more details:
* Downloading Scoring Code from the [Leaderboard](sc-download-leaderboard) or [a deployment](sc-download-deployment)
* [Scoring at the command line](scoring-cli)
* [JAR file structure](jar-package)
* [Scoring Code JAR integrations](sc-jar-integrations)
!!! note
The model JAR files require Java 8 or later.
{% include 'includes/scoring-code-consider.md' %}
|
sc-overview
|
---
title: Scoring at the command line
description: The following sections provide syntax for scoring at the command line.
keywords: Python, Java, source code, codegen, binary, source, scoring, transparent model, code validation, jar
---
# Scoring at the command line {: #scoring-at-the-command-line }
The following sections provide syntax for scoring at the command line.
## Command line parameters {: #command-line-parameters }
| Field | Required? | Default | Description |
|---------|-----------|---------|-------------|
| `--help` | No | Disabled | Prints all of the available options as well as some model metadata.|
| `--input=<value>` | Yes | None | Defines the source of the input data. Valid values are: <ul><li> `--input=-` to read the input from standard input. </li><li> `--input=/path/to/input/csv/input.csv` to read the data from a file. </li></ul> |
| `--output=<value>` | Yes | None | Sets the way to output results. Valid values are: <ul><li> `--output=-` to write the results to standard output. </li><li> `--output=/path/to/output/csv/output.csv` to save results to a file. The output file always contains the same number of rows as the original file, and they are always in the same order. Note that for files smaller than 1GB, you can specify the output file to be the same as the input file, causing it to replace the input with the scored file. </li></ul> |
| `--encoding=<value>` | No | Default system encoding | Sets the charset encoding used to read file content. Use one of the canonical names for <a target="_blank" href="https://docs.oracle.com/javase/8/docs/technotes/guides/intl/encoding.doc.html"><code>java.io API</code> and <code>java.lang API</code></a>. If the option is not set, the tool will be able to detect UTF8 and UTF16 BOM. |
| `--delimiter=<value>` | No | `,` (comma) | Specifies the delimiter symbol used in CSV files to split values between columns. **Note:** Use the option `--delimiter=";"` to set the semicolon `;` as the delimiter (`;` is a reserved symbol in bash/shell).|
| `--passthrough_columns` | No | None | Sets the input columns to include in the results file. For example, if the flag contains a set of columns (e.g., `column1,column2`), the output will contain the predictive column(s) and only `column1` and `column2`. To include all original columns, use `All`. The resulting file contains the columns in the same order, using the same format and the delimiter specified by the `delimiter` parameter. If this parameter is not specified, the command only returns the prediction column(s).|
| `--chunk_size=<value>` | No | min(1MB, {file_size}/{cores_number}) | "Slices" the initial dataset into chunks to score in a sequence as separate asynchronous tasks. In most cases, the default value will produce the best performance. Bigger chunks can be used to score very fast models and smaller chunks can be used to score very slow models.|
| `--workers_number=<value>` | No | Number of logical cores | Specifies the number of workers that can process chunks of work concurrently. By default, the value will match the number of logical cores and will produce the best performance.|
| `--log_level=<value>` | No | INFO | Sets the level of information to be output to the console. Available options are INFO, DEBUG, and TRACE.|
| `--pred_name=<value>` | No | DR_Score | For regression projects, this field sets the name of the prediction column in the output file. In classification projects, the prediction labels are the same as the class labels.|
| `--buffer_size=<value>` | No | 1000 | Controls the size of the asynchronous task queue. Set it to a smaller value if you are experiencing `OutOfMemoryException` errors while using this tool. This is an advanced parameter.|
| `--config=<value>` | No | .jar file directory | Sets the location for the `batch.properties` file, which writes all config parameters to a single file. If you place it in the same directory as the .jar, you do not need to set this parameter. If you want to place `batch.properties` into another directory, you need to set the value of the parameter to be the path to the target directory.|
| `--with_explanations` | No | Disabled | Turns on prediction explanation computations. |
| `--max_codes=<value>` | No | 3 | Sets the maximum number of explanations to compute. |
| `--threshold_low=<value>` | No | Null | Sets the low threshold for prediction rows to be included in the explanations. |
| `--threshold_high=<value>` | No | Null | Sets the high threshold for prediction rows to be included in the explanations. |
| `--enable_mlops` | No | Enabled | Initializes an MLOps instance for tracking scores. |
| `--dr_token=<value>` | Required if `--enable_mlops` is set. | None | Specifies the authorization token for monitoring agent requests. |
| `--disable_agent` | No | Enabled | When `--enable_mlops` is enabled, sets whether to allow offline tracking. |
| **Time series parameters** | :~~: | :~~: | :~~: |
| `--forecast_point=<value>` | No | None | Formatted date from which to forecast. |
| `--date_format=<value>` | No | None | Date format to use for output. |
| `--predictions_start_date=<value>` | No | None | Timestamp that indicates when to start calculating predictions. |
| `--predictions_end_date=<value>` | No | None | Timestamp that indicates when to stop calculating predictions. |
| `--with_intervals` | No | None | Turns on prediction interval calculations. |
| `--interval_length=<value>` | No | None |Interval length as `int` value from 1 to 99. |
| `--time_series_batch_processing` | No | Disabled | Enables performance-optimized batch processing for time-series models. |
!!! note
For more information, see [Scoring Code usage examples](quickstart-api).
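For reference, a hypothetical `batch.properties` file for the `--config` option described above might look like the following. The key names here are an assumption (they are written to mirror the command-line flags); verify the exact schema against the `--help` output before relying on it.

```
# Hypothetical batch.properties -- key names assumed to mirror the CLI flags.
input=/data/input.csv
output=/data/output.csv
delimiter=,
passthrough_columns=column1,column2
workers_number=4
log_level=INFO
```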
## Increase Java heap memory {: #increase-java-heap-memory }
Depending on the model's binary size, you may have to increase the Java virtual machine (JVM) heap memory size. When scoring your model, if you receive an `OutOfMemoryError: Java heap space` error message, increase your Java heap size by calling `java -Xmx1024m` and adjusting the number as necessary to allocate sufficient memory for the process.
To guarantee scoring result consistency and a non-zero exit code in case of error, run the application with the `-XX:+ExitOnOutOfMemoryError` flag.
The following example increases heap memory to 2GB:
``` sh
java -XX:+ExitOnOutOfMemoryError -Xmx2g -Dlog4j2.formatMsgNoLookups=true -jar 5cd071deef881f011a334c2f.jar csv --input=Iris.csv --output=Iris_out.csv
```
|
scoring-cli
|
---
title: Custom model Portable Prediction Server
description: How to download, build, and run the custom model Portable Prediction Server (PPS) to deploy a custom model to an external prediction environment.
---
# Custom model Portable Prediction Server {: #custom-model-portable-prediction-server }
The custom model Portable Prediction Server (PPS) is a solution for deploying a custom model to an external prediction environment. It can be built and run disconnected from main installation environments. The PPS is available as a downloadable bundle containing a deployed custom model, a custom environment, and the monitoring agent. Once started, the custom model PPS installation serves predictions via the DataRobot REST API.
## Download and configure the custom model PPS bundle {: #download-and-configure-the-custom-model-pps-bundle }
The custom model PPS bundle is provided for any custom model tagged as having an [external prediction environment](pred-env) in the deployment inventory.
!!! note
Before proceeding, note that DataRobot supports Linux-based prediction environments for PPS. It is possible to use other Unix-based prediction environments, but only Linux-based systems are validated and officially supported.
Select the custom model you wish to use, navigate to the **Predictions > Portable Predictions** tab of the deployment, and select **Download portable prediction package**.

Alternatively, instead of downloading the contents in one bundle, you can download the custom model, custom environment, or the monitoring agent as individual components.

After downloading the .zip file, extract it locally with an unzip command:
`unzip <cm_pps_installer_*>.zip`
Next, access the installation script (unzipped from the bundle) to build the custom model PPS with monitoring agent support. To do so, run the command displayed in step 2:

For more build options, such as the ability to skip the monitoring agent Docker image install, run:
`bash ./cm_pps_installer.sh --help`
If the build passes without errors, it adds two new Docker images to the local Docker registry:
* `cm_pps_XYZ` is the image assembling the custom model and custom environment.
* `datarobot/mlops-tracking-agent` is the monitoring agent Docker image, used to report prediction statistics back to DataRobot.
## Make predictions with PPS {: #make-predictions-with-pps }
DataRobot provides two example <a target="_blank" href="https://docs.docker.com/compose/install/">Docker Compose configurations</a> in the bundle to get you started with the custom model PPS:
* `docker-compose-fs.yml`: uses a file system-based spooler between the model container and the monitoring agent container. Recommended for a single model.
* `docker-compose-rabbit.yml`: uses a RabbitMQ-based spooler between the model container and the monitoring agent container. Use this configuration to run several models with a single monitoring agent instance.
!!! note
To utilize the provided Docker Compose files, be sure you have added the [`datarobot-mlops` package](https://pypi.org/project/datarobot-mlops){ target=_blank } (with additional dependencies as needed) to your model's `requirements.txt` file.
After selecting the configuration to use, edit the Docker Compose file to include the deployment ID and your [API key](api-key-mgmt) in the corresponding fields.
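As an illustration only, such an edit might resemble the following excerpt. The actual service and variable names are defined by the bundled Compose files; the names and values below are placeholders, not the real schema.

```yaml
# Hypothetical excerpt of docker-compose-fs.yml -- the bundled file defines
# the real service and variable names; the values below are placeholders.
services:
  model:
    environment:
      - DEPLOYMENT_ID=<your-deployment-id>
  agent:
    environment:
      - MLOPS_API_TOKEN=<your-api-key>
```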
Once configured, start the prediction server:
* For single models using the file system-based spooler, run:
`docker-compose -f docker-compose-fs.yml up`
* For multiple models with a single monitoring agent instance, use the RabbitMQ-based spooler:
`docker-compose -f docker-compose-rabbit.yml up`
When the PPS is running, the Docker image exposes three HTTP endpoints:
* `POST /predictions` scores a given dataset.
* `GET /info` returns information about the loaded model.
* `GET /ping` ensures the tech stack is running.
!!! note
Prediction routes only support comma-delimited (CSV) scoring datasets. The maximum payload size is 50 MB.
The following demonstrates a sample prediction request and JSON response:
``` sh
curl -X POST http://localhost:6788/predictions/ \
-H "Content-Type: text/csv" \
--data-binary @path/to/scoring.csv
```
``` json
{
    "data": [{
        "prediction": 23.03329917456927,
        "predictionValues": [{
            "label": "MEDV",
            "value": 23.03329917456927
        }],
        "rowId": 0
    },
    {
        "prediction": 33.01475956455371,
        "predictionValues": [{
            "label": "MEDV",
            "value": 33.01475956455371
        }],
        "rowId": 1
    }]
}
```
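A client can consume this JSON with any standard library; the following Python sketch extracts row IDs and predictions from the response shape shown above.

```python
import json

# Response body in the shape returned by the /predictions endpoint above.
response_body = """
{"data": [
  {"prediction": 23.03329917456927,
   "predictionValues": [{"label": "MEDV", "value": 23.03329917456927}],
   "rowId": 0},
  {"prediction": 33.01475956455371,
   "predictionValues": [{"label": "MEDV", "value": 33.01475956455371}],
   "rowId": 1}
]}
"""

payload = json.loads(response_body)
# Collect (rowId, prediction) pairs in row order.
predictions = [(row["rowId"], row["prediction"]) for row in payload["data"]]
print(predictions)
```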
### MLOps environment variables {: #mlops-environment-variables }
The following table lists the MLOps service environment variables supported for all custom models using PPS. You may want to adjust these settings based on the run environment used.
| Variable | Description | Default |
|------------------------------|--------------|--------------------|
| `MLOPS_SERVICE_URL` | The address of the running DataRobot application. | Autogenerated value |
| `MLOPS_API_TOKEN` | Your DataRobot API key. | Undefined; must be provided. |
| `MLOPS_SPOOLER_TYPE` | The type of spooler used by the custom model and monitoring agent. | Autogenerated value |
| `MLOPS_FILESYSTEM_DIRECTORY` | The filesystem spooler configuration for the monitoring agent. | Autogenerated value |
| `MLOPS_RABBITMQ_QUEUE_URL` | The RabbitMQ spooler configuration for the monitoring agent. | Autogenerated value |
| `MLOPS_RABBITMQ_QUEUE_NAME` | The RabbitMQ spooler configuration for the monitoring agent. | Autogenerated value |
| `START_DELAY` | Triggers a delay before starting the monitoring agent. | Autogenerated value |
### DRUM-based environment variables {: #drum-based-environment-variables }
The following table lists the environment variables supported for [DRUM-based](custom-local-test) custom environments:
| Variable | Description | Default |
|----------|---------------|------------------|
| `ADDRESS` | The prediction server's starting address. | `0.0.0.0:6788` |
| `MODEL_ID` | The ID of the deployed model (required for monitoring). | Autogenerated value |
| `DEPLOYMENT_ID` | The deployment ID.| Undefined; must be provided. |
| `MONITOR` | A flag that enables MLOps monitoring. | True. Provide an empty value or remove this variable to disable monitoring.|
| `MONITOR_SETTINGS` | Settings for the monitoring agent spooler. | Autogenerated value |
### RabbitMQ service environment variables {: #rabbitmq-service-environment-variables }
| Variable | Description | Default |
|-------------|-------------------|-----------|
| `RABBITMQ_DEFAULT_USER` | The default RabbitMQ user. | Autogenerated value |
| `RABBITMQ_DEFAULT_PASS` | The default RabbitMQ password. | Autogenerated value |
|
custom-pps
|
---
title: Portable Prediction Server running modes
description: Learn how to configure the Portable Prediction Server for single-model or multi-model running mode.
---
# Portable Prediction Server running modes {: #portable-prediction-server-running-modes }
There are two model modes supported by the server: single-model (SM) and multi-model (MM). Use SM mode when only a single model package has been mounted into the Docker container inside the `/opt/ml/model` directory. Use MM mode in all other cases. While the two modes are compatible predictions-wise, SM mode provides a simplified HTTP API that does not require identifying a model package on disk, and it preloads the model into memory on start.
The Docker container filesystem should match one of the following layouts.
For SM mode:
```
/opt/ml/model/
└── model_5fae9a023ba73530157ebdae.mlpkg
```
For MM mode:
```
/opt/ml/model/
├── fraud
| └── model_5fae9a023ba73530157ebdae.mlpkg
└── revenue
├── config.yml
└── revenue-estimate.mlpkg
```
### HTTP API (single-model) {: #http-api-single-model }
When running in single-model mode, the Docker image exposes three HTTP endpoints:
* `POST /predictions` scores a given dataset.
* `GET /info` returns information about the loaded model.
* `GET /ping` ensures the tech stack is up and running.
!!! note
Prediction routes only support comma-delimited CSV and JSON records scoring datasets. The maximum payload size is 50 MB.
``` sh
curl -X POST http://<ip>:8080/predictions \
-H "Content-Type: text/csv" \
--data-binary @path/to/scoring.csv
{
"data": [
{
"predictionValues": [
{"value": 0.250833758, "label": "yes"},
{"value": 0.749166242, "label": "no"}
],
"predictionThreshold": 0.5,
"prediction": 0.0,
"rowId": 0
}
]
}
```
If CSV is the preferred output, request it using the `Accept: text/csv` HTTP header.
``` sh
curl -X POST http://<ip>:8080/predictions \
-H "Accept: text/csv" \
-H "Content-Type: text/csv" \
--data-binary @path/to/scoring.csv
<target>_yes_PREDICTION,<target>_no_PREDICTION,<target>_PREDICTION,THRESHOLD,POSITIVE_CLASS
0.250833758,0.749166242,0,0.5,yes
```
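The CSV body shown above can be parsed with standard tooling; this sketch assumes a target feature literally named `target` in place of the `<target>` placeholder.

```python
import csv
import io

# CSV body as returned with "Accept: text/csv"; "target" stands in for the
# actual target feature name shown as <target> above.
body = (
    "target_yes_PREDICTION,target_no_PREDICTION,target_PREDICTION,THRESHOLD,POSITIVE_CLASS\n"
    "0.250833758,0.749166242,0,0.5,yes\n"
)

rows = list(csv.DictReader(io.StringIO(body)))
print(rows[0]["target_yes_PREDICTION"], rows[0]["POSITIVE_CLASS"])
```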
### HTTP API (multi-model) {: #http-api-multi-model }
In multi-model mode, the Docker image exposes the following endpoints:
* `POST /deployments/:id/predictions` scores a given dataset.
* `GET /deployments/:id/info` returns information about the loaded model.
* `POST /deployments/:id` uploads a model package to the container.
* `DELETE /deployments/:id` deletes a model package from the container.
* `GET /deployments` returns a list of model packages that are in the container.
* `GET /ping` ensures the tech stack is up and running.
The `:id` included in the `/deployments` routes above refers to the unique identifier for model packages on the disk. The ID is the directory name containing the model package. Therefore, if you have the following `/opt/ml/model` layout:
```
/opt/ml/model/
├── fraud
| └── model_5fae9a023ba73530157ebdae.mlpkg
└── revenue
├── config.yml
└── revenue-estimate.mlpkg
```
You may use `fraud` and `revenue` instead of `:id` in the `/deployments` set of routes.
!!! note
Prediction routes only support comma-delimited CSV and JSON records scoring datasets. The maximum payload size is 50 MB.
``` sh
curl -X POST http://<ip>:8080/deployments/revenue/predictions \
-H "Content-Type: text/csv" \
--data-binary @path/to/scoring.csv
{
"data": [
{
"predictionValues": [
{"value": 0.250833758, "label": "yes"},
{"value": 0.749166242, "label": "no"}
],
"predictionThreshold": 0.5,
"prediction": 0.0,
"rowId": 0
}
]
}
```
## Monitoring {: #monitoring }
!!! note
Before proceeding, be sure to configure monitoring for the PPS container. See the [Environment Variables](#environment-variables) and [Examples](#examples) sections for details. To use the [monitoring agent](mlops-agent/index), you need to configure the [agent spoolers](spooler) as well.
You can monitor prediction statistics such as [data drift](data-drift) and [accuracy](accuracy-settings) by [creating an external deployment](deploy-external-model) in DataRobot's deployment inventory.
To connect your model package to a particular deployment, provide the ID of the deployment that you want to host your prediction statistics.
If you're in Single Model (SM) mode, the deployment ID has to be provided via the `MLOPS_DEPLOYMENT_ID` environment variable. In Multi Model (MM) mode, a special `config.yml` should be prepared and dropped alongside the model package with the desired `deployment_id` value:
```yaml
deployment_id: 5fc92906ad764dde6c3264fa
```
If you want to track accuracy, [configure it](accuracy-settings) for the deployment, and then provide extra settings for the running model:
For SM mode, set the following environment variables:
* `MLOPS_ASSOCIATION_ID_COLUMN=transaction_country` (required)
* `MLOPS_ASSOCIATION_ID_ALLOW_MISSING_VALUES=false` (optional, default=`false`)
For MM mode, set the following properties in `config.yml`:
```yaml
association_id_settings:
column_name: transaction_country
allow_missing_values: false
```
## HTTPS support {: #https-support }
!!! info "Availability information"
If you are running PPS images that were downloaded previously, these parameters will not be available until the PPS image is manually updated:
* Managed AI Platform (SaaS): starting Aug 2021
* Self-Managed AI Platform: starting v7.2
By default, PPS serves predictions over an *insecure* listener on port `8080` (clear text HTTP over TCP).
You can also serve predictions over a *secure* listener port `8443` (HTTP over TLS/SSL, or simply HTTPS). When the secure listener is enabled, the insecure listener becomes unavailable.
!!! note
You cannot configure PPS to be available on both ports simultaneously; it is either HTTP on `8080` or HTTPS on `8443`.
The configuration is accomplished using the environment variables described below:
* `PREDICTION_API_TLS_ENABLED`: The master flag that enables HTTPS listener on port `8443` and disables HTTP listener on port `8080`.
* **Default**: false (HTTPS disabled)
* **Valid values** (case-insensitive):
| Parameter value | Interpretation |
|-----------------|-------------|
| true, yes, y, 1 | true |
| false, no, n, 0 | false |
!!! note
The flag value must be interpreted as `true` to enable TLS. All other `PREDICTION_API_TLS_*` environment variables (if passed) are ignored if this setting is not enabled.
* `PREDICTION_API_TLS_CERTIFICATE`: PEM-formatted content of the TLS/SSL certificate.
* **Required**: Yes if `PREDICTION_API_TLS_ENABLED` is `true`, otherwise no.
* **See also**: [NGINX SSL certificate documentation](http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_certificate){ target=_blank }
* `PREDICTION_API_TLS_CERTIFICATE_KEY`: PEM-formatted content of the *secret* private key for the TLS/SSL certificate.
* **Required**: Yes if `PREDICTION_API_TLS_ENABLED` is `true`, otherwise no.
* **See also**: [NGINX SSL certificate key documentation](http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_certificate_key){ target=_blank }
* `PREDICTION_API_TLS_CERTIFICATE_KEY_PASSWORD`: Passphrase for the *secret* certificate key passed in `PREDICTION_API_TLS_CERTIFICATE_KEY`.
* **Required**: Yes, only if a certificate key was created with a passphrase.
* `PREDICTION_API_TLS_PROTOCOLS`: Encryption protocol implementation(s) to use.
* **Default**: `TLSv1.2 TLSv1.3`
* **Valid values**: `SSLv2`|`SSLv3`|`TLSv1`|`TLSv1.1`|`TLSv1.2`|`TLSv1.3`, or any space-separated combination of these values.
!!! warning
As of August 2021, all implementations except `TLSv1.2` and `TLSv1.3` are considered deprecated and/or insecure. DataRobot highly recommends using only these implementations. New installations may consider using `TLSv1.3` exclusively as it is the most recent and secure TLS version.
* `PREDICTION_API_TLS_CIPHERS`: List of cipher suites to use.
* **Default**: [Mandatory TLSv1.3 ciphers](https://datatracker.ietf.org/doc/html/rfc8446#section-9.1){ target=_blank } and [recommended TLSv1.2 ciphers](https://datatracker.ietf.org/doc/html/rfc7525#section-4.2){ target=_blank }
* **Required**: No.
* **Valid values**: See [OpenSSL syntax](https://www.openssl.org/docs/man1.1.1/man1/ciphers.html){ target=_blank } for cipher suites.
!!! warning
TLS support is an advanced feature. The cipher suites list has been carefully selected to follow the latest recommendations and current best practices. DataRobot does not recommend overriding it.
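The case-insensitive flag interpretation described for `PREDICTION_API_TLS_ENABLED` can be sketched as follows. This is an illustration of the documented value table, not the server's actual implementation; treating unrecognized values as `false` is an assumption.

```python
TRUTHY = {"true", "yes", "y", "1"}
FALSY = {"false", "no", "n", "0"}

def tls_enabled(value=None):
    """Interpret a PREDICTION_API_TLS_ENABLED value per the table above."""
    if value is None:
        return False  # default: HTTPS disabled
    v = value.strip().lower()  # matching is case-insensitive
    if v in TRUTHY:
        return True
    if v in FALSY:
        return False
    return False  # assumption: unrecognized values leave TLS disabled

print(tls_enabled("YES"), tls_enabled("0"), tls_enabled(None))
```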
## Environment variables {: #environment-variables }
| Variable | Description | Default |
|----------|------------------|---|
| `PREDICTION_API_WORKERS` | Sets the number of workers to spin up. This option controls the number of HTTP requests the Prediction API can process simultaneously. Typically, set this to the number of CPU cores available for the container. | `1` |
| `PREDICTION_API_MODEL_REPOSITORY_PATH` | Sets the path to the directory where DataRobot should look for model packages. If the `PREDICTION_API_MODEL_REPOSITORY_PATH` points to a directory containing a single model package in its root, the single-model running mode is assumed by PPS. Multi-model mode is assumed otherwise. | `/opt/ml/model/` |
| `PREDICTION_API_PRELOAD_MODELS_ENABLED` | Requires every worker to proactively preload all mounted models on start. This should help to eliminate the problem of cache misses for the first requests after the server starts and the cache is still "cold." See also `PREDICTION_API_SCORING_MODEL_CACHE_MAXSIZE` to completely eliminate the cache misses. | <ul><li>`false` for multi-model mode</li><li>`true` for single-model mode</li></ul> |
| `PREDICTION_API_SCORING_MODEL_CACHE_MAXSIZE` | The maximum number of scoring models to keep in each worker's RAM cache to avoid loading them on demand for each request. In practice, the default setting is low. If the server running PPS has enough RAM, you should set this to a value greater than the total number of premounted models to fully leverage caching and avoid cache misses. Note that each worker's cache is independent, so each model will be copied to each worker's cache. Also consider enabling `PREDICTION_API_PRELOAD_MODELS_ENABLED` for multi-model mode to avoid cache misses. | `4` |
| `PREDICTION_API_DEPLOYED_MODEL_RESOLVER_CACHE_TTL_SEC` | By default, the PPS periodically attempts to read deployment information from an `mlpkg` in case the package was re-uploaded via HTTP. If you do not plan to update the `mlpkg` after the PPS starts, consider setting this to `0` to disable deployment info cache invalidation. This helps reduce latency for some requests. | `60` |
| `PREDICTION_API_MONITORING_ENABLED` | Sets whether DataRobot offloads data monitoring. If true, the Prediction API will offload monitoring data to the [monitoring agent](mlops-agent/index). | `false` |
| `PREDICTION_API_MONITORING_SETTINGS`| Controls how to offload monitoring data from the Prediction API to the [monitoring agent](mlops-agent/index). Specify a list of [spooler configuration settings](spooler) in key=value pairs separated by semicolons. <br><br>Example for a filesystem spooler:<br>`PREDICTION_API_MONITORING_SETTINGS="spooler_type=filesystem;directory=/tmp;max_files=50;file_max_size=102400000"`<br><br>Example for an SQS spooler:<br> `PREDICTION_API_MONITORING_SETTINGS="spooler_type=sqs;sqs_queue_url=<SQS_URL>"`<br><br>In single-model mode, the `MLOPS_DEPLOYMENT_ID` and `MLOPS_MODEL_ID` variables are required; they are not required for multi-model mode. | `None` |
| `MONITORING_AGENT` | Sets whether the monitoring agent runs alongside the Prediction API. To use the [monitoring agent](mlops-agent/index), you need to configure the [agent spoolers](spooler). | `false`|
| `MONITORING_AGENT_DATAROBOT_APP_URL` | Sets the URI to the DataRobot installation (e.g., https://app.datarobot.com). | `None` |
| `MONITORING_AGENT_DATAROBOT_APP_TOKEN` | Sets a user token to be used with the DataRobot API. | `None` |
| `PREDICTION_API_TLS_ENABLED` | Sets the TLS listener master flag. Must be activated for the TLS listener to work.| `false` |
| `PREDICTION_API_TLS_CERTIFICATE` | Adds inline content of the certificate, in PEM format. | `None` |
| `PREDICTION_API_TLS_CERTIFICATE_KEY` | Adds inline content of the certificate key, in PEM format. | `None` |
| `PREDICTION_API_TLS_CERTIFICATE_KEY_PASSWORD` | Adds plaintext passphrase for the certificate key file. | `None` |
| `PREDICTION_API_TLS_PROTOCOLS` | Overrides the TLS/SSL protocols. | `TLSv1.2 TLSv1.3` |
| `PREDICTION_API_TLS_CIPHERS` | Overrides default cipher suites. | Mandatory TLSv1.3, recommended TLSv1.2 |
| `PREDICTION_API_RPC_DUAL_COMPUTE_ENABLED` <br> _(Self-Managed 8.x installations)_ | For self-managed 8.x installations, this setting requires that the PPS run Python 2 *and* Python 3 interpreters. Then, the PPS automatically determines the version requirement based on which Python version the model was trained on. When this setting is enabled, `PYTHON3_SERVICES` is redundant and ignored. Note that this requires additional RAM to run both versions of the interpreter. | `false` |
| `PYTHON3_SERVICES` | Only enable this setting when the `PREDICTION_API_RPC_DUAL_COMPUTE_ENABLED` setting is disabled *and* each model was trained on Python 3. You can save approximately 400MB of RAM by excluding the Python 2 interpreter service from the container. | `None` |
!!! important "Python support for self-managed installations"
For Self-Managed installations before 9.0, the PPS _does not_ support Python 3 models by default; therefore, setting `PYTHON3_SERVICES` to `true` is required to use Python 3 models in those installations.
If you are running an 8.x version of DataRobot, you can enable "dual-compute mode" (`PREDICTION_API_RPC_DUAL_COMPUTE_ENABLED='true'`) to support both Python 2 and Python 3 models; however, this configuration requires an extra 400MB of RAM. If you want to reduce the RAM footprint (and *all* models are either Python 2 or Python 3), avoid enabling "dual-compute mode." If all models are trained on Python 3, enable Python 3 services (`PYTHON3_SERVICES='true'`). If all models are trained on Python 2, there is no need to configure an additional environment variable, as the default interpreter is still Python 2.
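The semicolon-separated key=value format used by `PREDICTION_API_MONITORING_SETTINGS` in the table above can be assembled or inspected with a few lines of code; this is an illustrative sketch, not the Prediction API's own parser.

```python
def parse_monitoring_settings(raw):
    """Split a 'key=value;key=value' settings string into a dict."""
    pairs = (item.split("=", 1) for item in raw.split(";") if item)
    return {key.strip(): value.strip() for key, value in pairs}

settings = parse_monitoring_settings(
    "spooler_type=filesystem;directory=/tmp;max_files=50;file_max_size=102400000"
)
print(settings["spooler_type"], settings["max_files"])
```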
## Request parameters {: #request-parameters }
### Headers {: #headers }
The PPS does not support authorization; therefore, `Datarobot-key` and `Authorization` are not needed.
| Key | Type | Description | Example(s) |
|------|----------------|--------------|------------|
| `Content-Type` | string | Required. Defines the request format. | <ul><li> text/plain; charset=UTF-8 </li><li> text/csv </li><li> application/json </li><li> multipart/form-data (for files with data, i.e., .csv, .txt files) </li></ul> |
| `Content-Encoding` | string | Optional. Currently supports only `gzip`-encoding with the default data extension. | `gzip` |
| `Accept` | string | Optional. Controls the shape of the response schema. Currently JSON (default) and CSV are supported. See examples. | <ul><li>`application/json` (default)</li><li>`text/csv` (for CSV output)</li></ul> |
### Query arguments {: #query-arguments }
The `predictions` routes (`POST /predictions` in single-model mode and `POST /deployments/:id/predictions` in multi-model mode) have the same query arguments and HTTP headers as their standard route counterparts, with a few exceptions. As with the regular Dedicated Prediction API, the exact list of supported arguments depends on the deployed model. Below is the list of general query arguments supported by every deployment.
| Key | Type | Description | Example(s) |
|------|----------------|--------------|------------|
| `passthroughColumns` | list of strings | Optional. Controls which columns from a scoring dataset to expose (or to copy over) in a prediction response. <br><br> The request may contain zero, one, or more columns. (There’s no limit on how many column names you can pass.) Column names must be passed as UTF-8 bytes and must be percent-encoded (see the [HTTP standard](https://tools.ietf.org/html/rfc2616){ target=_blank } for this requirement). Make sure to use the exact name of a column as a value. | `/v1.0/deployments/<deploymentId>/predictions?passthroughColumns=colA&passthroughColumns=colB` |
| `passthroughColumnsSet` | string| Optional. Controls which columns from a scoring dataset to expose (or to copy over) in a prediction response. The only possible option is `all` and, if passed, all columns from a scoring dataset are exposed. | `/v1.0/deployments/deploymentId/predictions?passthroughColumnsSet=all` |
| `decimalsNumber` | integer | Optional. Configures the precision of floats in prediction results. Sets the number of digits after the decimal point. <br><br> If there are no digits after the decimal point, rather than adding zeros, the float precision will be less than `decimalsNumber`. | `?decimalsNumber=15` |
Note the following:
* You can't pass the `passthroughColumns` and `passthroughColumnsSet` parameters in the same request.
* While there is no limit on the number of column names you can pass with the `passthroughColumns` query parameter, there is a limit on the size of the [HTTP request line](https://www.w3.org/Protocols/rfc2616/rfc2616-sec5.html#sec5.1){ target=_blank } (currently 8192 bytes).
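For example, percent-encoding the `passthroughColumns` values can be done with the standard library; the column names below are hypothetical.

```python
from urllib.parse import quote

# Hypothetical column names; the UTF-8 percent-encoding requirement applies.
columns = ["colA", "montant total"]
query = "&".join("passthroughColumns=" + quote(col) for col in columns)
print(query)
```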
### Prediction Explanation parameters {: #prediction-explanation-parameters }
You can parametrize the [Prediction Explanations](dr-predapi#making-prediction-explanations) prediction request with the following query parameters:
!!! note
To trigger prediction explanations, send `maxExplanations=N`, where N is greater than `0`.
| Key | Type | Description | Example(s) |
|------|----------------|--------------|------------|
| `maxExplanations` | int OR string | Optional. Limits the number of explanations returned by the server. Previously called `maxCodes` (deprecated). For SHAP explanations, the special constant `all` is also accepted. | <ul><li>`?maxExplanations=5`</li><li>`?maxExplanations=all`</li></ul> |
| `thresholdLow` | float | Optional. Prediction Explanation low threshold. Predictions must be below this value (or above the thresholdHigh value) for Prediction Explanations to compute. | `?thresholdLow=0.678` |
| `thresholdHigh` | float | Optional. Prediction Explanation high threshold. Predictions must be above this value (or below the thresholdLow value) for Prediction Explanations to compute. | `?thresholdHigh=0.345` |
| `excludeAdjustedPredictions` | bool | Optional. Includes or excludes exposure-adjusted predictions in prediction responses if exposure was used during model building. The default value is `true` (exclude exposure-adjusted predictions). | `?excludeAdjustedPredictions=true` |
| `explanationNumTopClasses` | int | Optional. Multiclass models only; <br><br> Number of top predicted classes for each row that will be explained. Only for multiclass explanations. Defaults to 1. Mutually exclusive with `explanationClassNames`. | `?explanationNumTopClasses=5` |
| `explanationClassNames` | list of string types | Optional. Multiclass models only. A list of class names that will be explained for each row. Only for multiclass explanations. Class names must be passed as UTF-8 bytes and must be percent-encoded (see the [HTTP standard](https://tools.ietf.org/html/rfc2616){ target=_blank } for this requirement). This parameter is mutually exclusive with `explanationNumTopClasses`. By default, `explanationNumTopClasses=1` is assumed. | `?explanationClassNames=classA&explanationClassNames=classB` |
### Time series parameters {: #time-series-parameters }
You can parametrize the time series prediction request using the following query parameters:
| Key | Type | Description | Example(s) |
|------|----------------|--------------|------------|
| `forecastPoint` | ISO-8601 string | An [ISO 8601](https://www.iso.org/iso-8601-date-and-time-format.html){ target=_blank } formatted DateTime string, without timezone, representing the [forecast point](glossary/index#forecast-point). This parameter cannot be used if `predictionsStartDate` and `predictionsEndDate` are passed. | `?forecastPoint=2013-12-20T01:30:00` |
| `relaxKnownInAdvanceFeaturesCheck` | bool | `true` or `false`. When `true`, missing values for known-in-advance features are allowed in the forecast window at prediction time. The default value is `false`. Note that the absence of known-in-advance values can negatively impact prediction quality. | `?relaxKnownInAdvanceFeaturesCheck=true` |
| `predictionsStartDate` | ISO-8601 string | The time in the dataset when bulk predictions begin generating. This parameter must be defined together with `predictionsEndDate`. The `forecastPoint` parameter cannot be used if `predictionsStartDate` and `predictionsEndDate` are passed. | `?predictionsStartDate=2013-12-20T01:30:00Z&predictionsEndDate=2013-12-20T01:40:00Z` |
| `predictionsEndDate` | ISO-8601 string | The time in the dataset when bulk predictions stop generating. This parameter must be defined together with `predictionsStartDate`. The `forecastPoint` parameter cannot be used if `predictionsStartDate` and `predictionsEndDate` are passed. | See above. |
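To illustrate how these parameters combine, the following sketch assembles a bulk-prediction query string (the host and endpoint path are illustrative):

```shell
# Compose a bulk time series prediction URL from a start/end date range.
BASE="http://localhost:8080/predictions"
START="2013-12-20T01:30:00Z"
END="2013-12-20T01:40:00Z"
URL="${BASE}?predictionsStartDate=${START}&predictionsEndDate=${END}&relaxKnownInAdvanceFeaturesCheck=true"
echo "$URL"
```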
## External configuration {: #external-configuration }
You can also set the configuration options listed in the table above through a config file that the Docker image reads from `/opt/ml/config`. The file must contain `<key>=<value>` pairs, where each key name matches the corresponding environment variable.
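For example, the following sketch writes such a config file using two of the environment variables shown in the examples below and notes how it could be mounted (the option values and file path are illustrative):

```shell
# Write a <key>=<value> config file; each key mirrors an environment variable.
cat > /tmp/pps.config <<'EOF'
PREDICTION_API_WORKERS=2
PREDICTION_API_PRELOAD_MODELS_ENABLED=true
EOF

# Mount the file at /opt/ml/config when starting the container, for example:
#   docker run -v /tmp/pps.config:/opt/ml/config ... datarobot/datarobot-portable-prediction-api:<version>
```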
## Examples {: #examples }
1. Run with two workers:
``` sh
docker run \
-v /path/to/mlpkgdir:/opt/ml/model \
-e PREDICTION_API_WORKERS=2 \
-e PREDICTION_API_SCORING_MODEL_CACHE_MAXSIZE=32 \
-e PREDICTION_API_PRELOAD_MODELS_ENABLED='true' \
-e PREDICTION_API_DEPLOYED_MODEL_RESOLVER_CACHE_TTL_SEC=0 \
datarobot/datarobot-portable-prediction-api:<version>
```
2. Run with external monitoring configured:
``` sh
docker run \
-v /path/to/mlpkgdir:/opt/ml/model \
-e PREDICTION_API_MONITORING_ENABLED='true' \
-e PREDICTION_API_MONITORING_SETTINGS='<settings>' \
datarobot/datarobot-portable-prediction-api:<version>
```
3. Run with internal monitoring configured:
``` sh
docker run \
-v /path/to/mlpkgdir:/opt/ml/model \
-e PREDICTION_API_MONITORING_ENABLED='true' \
-e PREDICTION_API_MONITORING_SETTINGS='<settings>' \
-e MONITORING_AGENT='true' \
-e MONITORING_AGENT_DATAROBOT_APP_URL='https://app.datarobot.com/' \
-e MONITORING_AGENT_DATAROBOT_APP_TOKEN='<token>' \
datarobot/datarobot-portable-prediction-api:<version>
```
4. Run with HTTPS support using default protocols and ciphers:
``` sh
docker run \
-v /path/to/mlpkgdir:/opt/ml/model \
-p 8443:8443 \
-e PREDICTION_API_TLS_ENABLED='true' \
-e PREDICTION_API_TLS_CERTIFICATE="$(cat /path/to/cert.pem)" \
-e PREDICTION_API_TLS_CERTIFICATE_KEY="$(cat /path/to/key.pem)" \
datarobot/datarobot-portable-prediction-api:<version>
```
5. Run with Python3 interpreter only to minimize RAM footprint:
``` sh
docker run \
-v /path/to/my_python3_model.mlpkg:/opt/ml/model \
-e PREDICTION_API_RPC_DUAL_COMPUTE_ENABLED='false' \
-e PYTHON3_SERVICES='true' \
datarobot/datarobot-portable-prediction-api:<version>
```
6. Run with Python2 interpreter only to minimize RAM footprint:
``` sh
docker run \
-v /path/to/my_python2_model.mlpkg:/opt/ml/model \
-e PREDICTION_API_RPC_DUAL_COMPUTE_ENABLED='false' \
datarobot/datarobot-portable-prediction-api:<version>
```
|
pps-run-modes
|
---
title: Portable Prediction Server
description: Learn how to configure and execute DataRobot's Portable Prediction Server.
---
# Portable Prediction Server {: #portable-prediction-server }
The Portable Prediction Server (PPS) is a remote DataRobot execution environment for DataRobot model packages (`MLPKG` files) distributed as a self-contained Docker image. It can host one or more production models. The models are accessible through DataRobot's Prediction API for predictions and Prediction Explanations.
!!! info "Availability information"
The Portable Prediction Server is a feature exclusive to DataRobot MLOps. Contact your DataRobot representative for information on enabling it.
| Topic | Describes |
|-------|-----------|
| [Portable Prediction Server](portable-pps) | Downloading and configuring the Portable Prediction Server. |
| [Portable Prediction Server running modes](pps-run-modes) | Configuring the Portable Prediction Server for single-model or multi-model running mode. |
| [Portable batch predictions](portable-batch-predictions) | Scoring datasets in batches on a remote environment with the Portable Prediction Server. |
| [Custom model Portable Prediction Server](custom-pps) | Downloading and configuring the custom model Portable Prediction Server. |
!!! important
DataRobot strongly recommends using an Intel CPU to run the Portable Prediction Server. Using non-Intel CPUs can result in prediction inconsistencies, especially in deep learning models like those built with TensorFlow or Keras.
|
index
|
---
title: Portable Prediction Server
description: How to use the Portable Prediction Server (PPS), which executes a DataRobot model package distributed as a self-contained Docker image.
---
# Portable Prediction Server {: #portable-prediction-server }
The Portable Prediction Server (PPS) is a DataRobot execution environment for DataRobot model packages (`.mlpkg` files) distributed as a self-contained Docker image. After you configure the Portable Prediction Server, you can begin running [single-model or multi-model portable real-time predictions](pps-run-modes) and [portable batch prediction](portable-batch-predictions) jobs.
!!! important
DataRobot strongly recommends using an Intel CPU to run the Portable Prediction Server. Using non-Intel CPUs can result in prediction inconsistencies, especially in deep learning models like those built with TensorFlow or Keras.
The general configuration steps are:
* Download the model package.
* Download the PPS Docker image.
* Load the PPS image to Docker.
* Copy the Docker snippet DataRobot provides to run the Portable Prediction Server in your Docker container.
!!! important
If you want to configure the Portable Prediction Server for a model through a deployment, you must first add an [external prediction environment](pred-env#add-an-external-prediction-environment) and deploy that model to an external environment.
## Download the model package {: #download-the-model-package }
You can download a PPS model package for a deployed DataRobot model running on an [external prediction environment](pred-env#add-an-external-prediction-environment). In addition, with the correct MLOps permissions, you can download a model package from the Leaderboard. You can then run prediction jobs with a portable prediction server outside of DataRobot.
=== "Deployment download (with monitoring)"
When you download a model package from a deployment, the Portable Prediction Server will [monitor](pps-run-modes#monitoring) your model for performance and track prediction statistics; however, you must ensure that your deployment supports model package downloads. The deployment must have a _DataRobot_ build environment and an _external_ prediction environment, which you can verify using the [**Governance Lens**](gov-lens) in the deployment inventory:

??? tip "What if a deployment doesn't have an external prediction environment?"
If the deployed model you want to run in the Portable Prediction Server isn't associated with an external prediction environment, you can do either of the following:
* Create a new deployment with an external prediction environment.
* If you have the correct permissions, download the model package from the Leaderboard.
If you access a deployment that doesn't support model package download, you can quickly navigate to the Leaderboard from the deployment:
1. Click the **Model** name (on the **Overview** tab) to open the model package in the Model Registry.
2. In the **Model Registry**, click the **Model Name** (on the **Package Info** tab) to open the model on the Leaderboard.
3. On the **Leaderboard**, download the **Portable Prediction Server** model package from the **Predict** > **Portable Predictions** tab.
When you download the model package from the Leaderboard, the Portable Prediction Server won't monitor your model for performance or track prediction statistics.
On the **Deployments** tab (the *deployment inventory*), open a deployment with both a DataRobot build environment and an *external* prediction environment, and then navigate to the **Predictions > Portable Predictions** tab:

| | Element | Description |
|---|---|---|
|  | Portable Prediction Server | Helps you configure a REST API-based prediction server as a Docker image. |
|  | Portable Prediction Server Usage | Links to the **Developer Tools** tab where you [obtain the Portable Prediction Server Docker image](#obtain-the-pps-docker-image). |
|  | Download model package (.mlpkg) | Downloads the model package for your deployed model. Alternatively, you can download the model package from the Leaderboard. |
|  | Docker snippet | After you download your model package, use the Docker snippet to launch the Portable Prediction Server for the model with monitoring enabled. You will need to specify your [API key](api-key-mgmt#access-api-key-management), local filenames, paths, and [monitoring](#monitoring) before launching. |
|  | Copy to clipboard | Copies the Docker snippet to your clipboard so that you can paste it on the command line. |
In the **Predictions > Portable Predictions** tab, click **Download model package**. The download appears in the downloads bar when complete.

After downloading the model package, click **Copy to clipboard** and save the code snippet for later. You need this code to launch the Portable Prediction Server for the downloaded model package.

=== "Leaderboard download"
!!! info "Availability information"
The ability to download a model package from the Leaderboard depends on the [MLOps configuration](pricing) for your organization.
If you have built a model with AutoML and want to download its model package for use with the Portable Prediction Server, navigate to the model on the Leaderboard and select the **Predict > Portable Predictions** tab.

!!! note
When downloaded from the Leaderboard, the Portable Prediction Server won't [monitor](pps-run-modes#monitoring) your model for performance or track prediction statistics.
Click **Download .mlpkg**. After downloading the model package, click **Copy to clipboard** and save the code snippet for later. You need this code to launch the Portable Prediction Server for the downloaded model package.
## Configure the Portable Prediction Server {: #configure-the-portable-prediction-server }
To deploy the model package you downloaded to the Portable Prediction Server, you must first download the PPS Docker image and then load that image to Docker.
### Obtain the PPS Docker image {: #obtain-the-pps-docker-image }
Navigate to the **Developer Tools** tab to download the [Portable Prediction Server Docker image](api-key-mgmt#portable-prediction-server-docker-image). Depending on your DataRobot environment and version, options for accessing the latest image may differ, as described in the table below.
| Deployment type | Software version | Access method |
|------------------|--------------------|-----------------|
| Self-Managed AI Platform | v6.3 or older | Contact your DataRobot representative. The image will be provided upon request. |
| Self-Managed AI Platform | v7.0 or later | [Download](api-key-mgmt#portable-prediction-server-docker-image) the image from **Developer Tools**; install as described [below](#load-the-image-to-docker). If the image is not available, contact your DataRobot representative. |
| Managed AI Platform | Jan 2021 and later | [Download](api-key-mgmt#portable-prediction-server-docker-image) the image from **Developer Tools**; install as described [below](#load-the-image-to-docker).|
### Load the image to Docker {: #load-the-image-to-docker }
!!! warning
DataRobot is working to reduce image size; however, the compressed Docker image can exceed 6GB (Docker-loaded image layers can exceed 14GB). Consider these sizes when downloading and importing PPS images.
Before proceeding, make sure you have downloaded the image from [Developer Tools](api-key-mgmt#portable-prediction-server-docker-image). It is a `gzip`'ed tar archive that can be loaded by Docker.
Once the download completes and you have verified the file checksum, use [`docker load`](https://docs.docker.com/engine/reference/commandline/load/){ target=_blank} to load the image. You do not have to uncompress the downloaded file because Docker natively supports loading images from `gzip`'ed tar archives.
=== "Load image to Docker"
Copy the command below, replace `<version>`, and run the command to load the PPS image to Docker:
``` sh
docker load < datarobot-portable-prediction-api-<version>.tar.gz
```
!!! note
If the PPS file isn't located in the current directory, you need to provide a local, absolute filepath to the tar file (for example, `/path/to/datarobot-portable-prediction-api-<version>.tar.gz`).
=== "Example: Load image to Docker"
After running the `docker load` command for your PPS file, you should see output similar to the following:
``` sh
docker load < datarobot-portable-prediction-api-9.0.0-r4582.tar.gz
33204bfe17ee: Loading layer [==================================================>] 214.1MB/214.1MB
62c077c42637: Loading layer [==================================================>] 3.584kB/3.584kB
54475c7b6aee: Loading layer [==================================================>] 30.21kB/30.21kB
0f91625c248c: Loading layer [==================================================>] 3.072kB/3.072kB
21c5127d921b: Loading layer [==================================================>] 27.05MB/27.05MB
91feb2d07e73: Loading layer [==================================================>] 421.4kB/421.4kB
12ca493d22d9: Loading layer [==================================================>] 41.61MB/41.61MB
ffb6e915efe7: Loading layer [==================================================>] 26.55MB/26.55MB
83e2c4ee6761: Loading layer [==================================================>] 5.632kB/5.632kB
109bf21d51e0: Loading layer [==================================================>] 3.093MB/3.093MB
d5ebeca35cd2: Loading layer [==================================================>] 646.6MB/646.6MB
f72ea73370ce: Loading layer [==================================================>] 1.108GB/1.108GB
4ecb5fe1d7c7: Loading layer [==================================================>] 1.844GB/1.844GB
d5d87d53ea21: Loading layer [==================================================>] 71.79MB/71.79MB
34e5df35e3cf: Loading layer [==================================================>] 187.3MB/187.3MB
38ccf3dd09eb: Loading layer [==================================================>] 995.5MB/995.5MB
fc5583d56a81: Loading layer [==================================================>] 3.584kB/3.584kB
c51face886fc: Loading layer [==================================================>] 402MB/402MB
c6017c1b6604: Loading layer [==================================================>] 1.465GB/1.465GB
7a879d3cd431: Loading layer [==================================================>] 166.6MB/166.6MB
8c2f17f7a166: Loading layer [==================================================>] 188.7MB/188.7MB
059189864c15: Loading layer [==================================================>] 115.9MB/115.9MB
991f5ac99c29: Loading layer [==================================================>] 3.072kB/3.072kB
f6bbaa29a1c6: Loading layer [==================================================>] 2.56kB/2.56kB
4a0a241b3aab: Loading layer [==================================================>] 415.7kB/415.7kB
3d509cf1aa18: Loading layer [==================================================>] 5.632kB/5.632kB
a611f162b44f: Loading layer [==================================================>] 1.701MB/1.701MB
0135aa7d76a0: Loading layer [==================================================>] 6.766MB/6.766MB
fe5890c6ddfc: Loading layer [==================================================>] 4.096kB/4.096kB
d2f4df5f0344: Loading layer [==================================================>] 5.875GB/5.875GB
1a1a6aa8556e: Loading layer [==================================================>] 10.24kB/10.24kB
77fcb6e243d1: Loading layer [==================================================>] 12.97MB/12.97MB
7749d3ff03bb: Loading layer [==================================================>] 4.096kB/4.096kB
29de05e7fdb3: Loading layer [==================================================>] 3.072kB/3.072kB
2579aba98176: Loading layer [==================================================>] 4.698MB/4.698MB
5f3d150f5680: Loading layer [==================================================>] 4.699MB/4.699MB
1f63989f2175: Loading layer [==================================================>] 3.798GB/3.798GB
3e722f5814f1: Loading layer [==================================================>] 182.3kB/182.3kB
b248981a0c7e: Loading layer [==================================================>] 3.072kB/3.072kB
b104fa769b35: Loading layer [==================================================>] 4.096kB/4.096kB
Loaded image: datarobot/datarobot-portable-prediction-api:9.0.0-r4582
```
Once the `docker load` command completes successfully with the `Loaded image` message, you should verify that the image is loaded with the [`docker images`](https://docs.docker.com/engine/reference/commandline/images/){ target=_blank} command:
=== "View loaded images"
Copy the command below and run it to view a list of the images in Docker:
``` sh
docker images
```
=== "Example: View loaded images"
In this example, you can see the `datarobot/datarobot-portable-prediction-api` image loaded in the previous step:
``` sh
docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
datarobot/datarobot-portable-prediction-api 9.0.0-r4582 df38ea008767 29 hours ago 17GB
```
!!! tip
Optionally, to save disk space, you can delete the compressed image archive `datarobot-portable-prediction-api-<version>.tar.gz` after your Docker image loads successfully.
## Launch the PPS with the code snippet {: #launch-the-pps-with-the-code-snippet }
After you've downloaded the model package and configured the Docker PPS image, you can use the associated [`docker run`](https://docs.docker.com/engine/reference/commandline/run/){ target=_blank} code snippet to launch the Portable Prediction Server with the downloaded model package.
=== "Deployment code snippet (with monitoring)"
In the example code snippet below from a deployed model, you should configure the following highlighted options:
``` sh linenums="1" hl_lines="3 4 9 10"
docker run \
-p 8080:8080 \
-v <local path to model package>/:/opt/ml/model/ \
-e PREDICTION_API_MODEL_REPOSITORY_PATH="/opt/ml/model/<model package file name>" \
-e PREDICTION_API_MONITORING_ENABLED="True" \
-e MLOPS_DEPLOYMENT_ID="6387928ebc3a099085be32b7" \
-e MONITORING_AGENT="True" \
-e MONITORING_AGENT_DATAROBOT_APP_URL="https://app.datarobot.com" \
-e MONITORING_AGENT_DATAROBOT_APP_TOKEN="<your api token>" \
datarobot-portable-prediction-api
```
* `-v <local path to model package>/:/opt/ml/model/ \`: Provide the local, absolute file path to the location of the model package you downloaded. The `-v` (or `--volume`) option bind mounts a volume, adding the contents of your local model package directory (at `<local path to model package>`) to your Docker container's `/opt/ml/model` volume.
* `-e PREDICTION_API_MODEL_REPOSITORY_PATH="/opt/ml/model/<model package file name>" \`: Provide the file name of the model package mounted to the `/opt/ml/model/` volume. This sets the `PREDICTION_API_MODEL_REPOSITORY_PATH` environment variable, indicating where the PPS can find the model package.
* `-e MONITORING_AGENT_DATAROBOT_APP_TOKEN="<your api token>" \`: Provide your API token from the DataRobot Developer Tools for monitoring purposes. This sets the `MONITORING_AGENT_DATAROBOT_APP_TOKEN` environment variable, where the PPS can find your API key.
* `datarobot-portable-prediction-api`: Replace this line with the image name and version of the PPS image you're using. For example, `datarobot/datarobot-portable-prediction-api:<version>`.
=== "Leaderboard code snippet"
In the example code snippet below for a Leaderboard model, you should configure the following highlighted options:
``` linenums="1" hl_lines="3 4 5"
docker run \
-p 8080:8080 \
-v <local path to model package>/:/opt/ml/model/ \
-e PREDICTION_API_MODEL_REPOSITORY_PATH="/opt/ml/model/<model package file name>" \
datarobot-portable-prediction-api
```
* `-v <local path to model package>/:/opt/ml/model/ \`: Provide the local, absolute file path to the directory containing the model package you downloaded. The `-v` (or `--volume`) option bind mounts a volume, adding the contents of your local model package directory (at `<local path to model package>`) to your Docker container's `/opt/ml/model` volume.
* `-e PREDICTION_API_MODEL_REPOSITORY_PATH="/opt/ml/model/<model package file name>" \`: Provide the file name of the model package mounted to the `/opt/ml/model/` volume. This sets the `PREDICTION_API_MODEL_REPOSITORY_PATH` environment variable, indicating where the PPS can find the model package.
* `datarobot-portable-prediction-api`: Replace this line with the image name and version of the PPS image you're using. For example, `datarobot/datarobot-portable-prediction-api:<version>`.
??? tip "Use docker tag to name and tag an image"
Alternatively, you can keep `datarobot-portable-prediction-api` in the last line if you use [`docker tag`](https://docs.docker.com/engine/reference/commandline/tag/){ target=_blank} to tag the new image as `latest` and rename it to `datarobot-portable-prediction-api`.
In this example, Docker renames the image and replaces the `9.0.0-r4582` tag with the `latest` tag:
``` sh
docker tag datarobot/datarobot-portable-prediction-api:9.0.0-r4582 datarobot-portable-prediction-api:latest
```
To verify the new tag and name, you can use the `docker images` command again:
``` sh
docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
datarobot/datarobot-portable-prediction-api 9.0.0-r4582 df38ea008767 29 hours ago 17GB
datarobot-portable-prediction-api latest df38ea008767 29 hours ago 17GB
```
After completing the setup, you can use the Docker snippet to [run single-model or multi-model portable real-time predictions](pps-run-modes) or [run portable batch predictions](portable-batch-predictions#run-portable-batch-predictions). See also [additional examples](portable-batch-predictions#more-examples) of prediction jobs that use the PPS. The PPS can run disconnected from the main DataRobot installation environment. Once started, the image serves its HTTP API on port `8080`.
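As a quick smoke test once the container is up, you can POST a CSV to the served API. This is a sketch: the dataset columns are illustrative, the port matches the `-p 8080:8080` mapping above, and the single-model endpoint path is assumed to be `/predictions`:

```shell
# Write a tiny illustrative scoring dataset.
cat > /tmp/scoring.csv <<'EOF'
feature_a,feature_b
1,2
3,4
EOF

# POST it to the PPS; prints a note instead of failing if no PPS is listening yet.
curl --silent --max-time 5 -X POST http://localhost:8080/predictions \
  -H "Content-Type: text/csv" \
  --data-binary @/tmp/scoring.csv \
  || echo "PPS not reachable on localhost:8080"
```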
??? important "Run the PPS for FIPS-enabled model packages"
If you configure your DataRobot cluster with `ENABLE_FIPS_140_2_MODE: true` (in the `config.yaml` file at the cluster level), that cluster builds MLPKG files that require you to launch the PPS with `ENABLE_FIPS_140_2_MODE: true`. For this reason, you can't host FIPS-enabled models and standard models in the same PPS instance.
To run the PPS with support for FIPS-enabled models, you can include the following argument in the [`docker run`](https://docs.docker.com/engine/reference/commandline/run/){ target=_blank} command:
``` sh
-e ENABLE_FIPS_140_2_MODE="true"
```
The full command for PPS container startup would look like the following example:
``` sh
docker run \
-td \
-p 8080:8080 \
-e PYTHON3_SERVICES="true" \
-e ENABLE_FIPS_140_2_MODE="true" \
-v <local path to model package>/:/opt/ml/model \
--name portable_predictions_server \
--rm datarobot/datarobot-portable-prediction-api:<version>
```
|
portable-pps
|
---
title: Portable batch predictions
description: How to use the portable batch predictions (PBP) with PPS and score data in a batch in an isolated environment.
---
# Portable batch predictions {: #portable-batch-predictions }
Portable batch predictions (PBP) let you score large amounts of data in disconnected environments.
Before you can use portable batch predictions, you need to configure the [Portable Prediction Server](portable-pps) (PPS), a DataRobot execution environment for DataRobot model packages (`.mlpkg` files) distributed as a self-contained Docker image. Portable batch predictions use the same Docker image as the PPS but run it in a different mode.
!!! info "Availability information"
The Portable Prediction Server is a feature exclusive to DataRobot MLOps. Contact your DataRobot representative for information on enabling it.
## Scoring methods {: #scoring-methods }
Portable batch predictions can use the following adapters to score datasets:
* `Filesystem`
* `JDBC`
* `AWS S3`
* `Azure Blob`
* `GCS`
* `Snowflake`
* `Synapse`
To run portable batch predictions, you need the following artifacts:
=== "SaaS"
* [Portable prediction server Docker image](portable-pps#obtain-the-pps-docker-image)
* [A defined batch prediction job](#job-definitions)
* [An ENV config file with credentials](#credentials-environment-variables) (optional)
=== "Self-Managed"
* [A Portable Prediction Server Docker image](portable-pps#obtain-the-pps-docker-image)
* [A defined batch prediction job](#job-definitions)
* [An ENV config file with credentials](#credentials-environment-variables) (optional)
* [A JDBC driver](manage-drivers) (optional)
After you prepare these artifacts, you can [run portable batch predictions](#run-portable-batch-predictions). See also [additional examples](#more-examples) of running portable batch predictions.
## Job definitions {: #job-definitions }
You can define jobs using a `JSON` config file in which you describe `prediction_endpoint`, `intake_settings`,
`output_settings`, `timeseries_settings` (optional) for time series scoring, and `jdbc_settings` (optional) for JDBC scoring.
??? note "Self-Managed AI Platform only: Prediction endpoint SSL configuration"
If you need to disable SSL verification for the `prediction_endpoint`, you can set `ALLOW_SELF_SIGNED_CERTS` to `True`. This configuration disables SSL certificate verification for requests made by the application to the web server. This is useful if you have SSL encryption enabled on your cluster and are using certificates that are not signed by a globally trusted Certificate Authority (self-signed).
The `prediction_endpoint` describes how to access the PPS and is constructed as `<schema>://<hostname>:<port>`, where you define the following attributes:
Attribute | Description
----------|------------
`schema` | `http` *or* `https`
`hostname` | The hostname of the instance where your PPS is running
`port` | The port of the prediction API running inside the PPS
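For a PPS running locally over plain HTTP on the default port, assembling these three parts yields:

```shell
# Assemble the prediction_endpoint from its three parts.
schema="http"
hostname="127.0.0.1"
port="8080"
prediction_endpoint="${schema}://${hostname}:${port}"
echo "$prediction_endpoint"   # → http://127.0.0.1:8080
```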
The `jdbc_settings` object has the following attributes:
Attribute | Description
----------|------------
`url` | The URL to connect via the JDBC interface
`class_name` | The class name used as an entry point for JDBC communication
`driver_path` | The path to the JDBC driver on your filesystem (available inside the PBP container)
`template_name` | The name of the template to use for write-back. To obtain the names of the supported templates, contact your DataRobot representative.
All other parameters are the same as for regular batch predictions.
The following JDBC example, defined in a `job_definition_jdbc.json` file, scores to and from Snowflake using a single-model mode PPS running locally:
```json
{
"prediction_endpoint": "http://127.0.0.1:8080",
"intake_settings": {
"type": "jdbc",
"table": "SCORING_DATA",
"schema": "PUBLIC"
},
"output_settings": {
"type": "jdbc",
"table": "SCORED_DATA",
"statement_type": "create_table",
"schema": "PUBLIC"
},
"passthrough_columns_set": "all",
"include_probabilities": true,
"jdbc_settings": {
"url": "jdbc:snowflake://my_account.snowflakecomputing.com/?warehouse=WH&db=DB&schema=PUBLIC",
"class_name": "net.snowflake.client.jdbc.SnowflakeDriver",
"driver_path": "/tmp/portable_batch_predictions/jdbc/snowflake-jdbc-3.12.0.jar",
"template_name": "Snowflake"
}
}
```
## Credentials environment variables {: #credentials-environment-variables }
If you are using JDBC or private containers in cloud storage, you can specify the required credentials as environment variables. The following table shows which variable names are used:
| Name | Type | Description |
| :------------- | :------------- | :------------- |
| `AWS_ACCESS_KEY_ID` | string | AWS Access key ID |
| `AWS_SECRET_ACCESS_KEY` | string | AWS Secret access key |
| `AWS_SESSION_TOKEN` | string | AWS token |
| `GOOGLE_STORAGE_KEYFILE_PATH` | string | Path to GCP credentials file |
| `AZURE_CONNECTION_STRING` | string | Azure connection string |
| `JDBC_USERNAME` | string | Username for JDBC |
| `JDBC_PASSWORD` | string | Password for JDBC |
| `SNOWFLAKE_USERNAME` | string | Username for Snowflake |
| `SNOWFLAKE_PASSWORD` | string | Password for Snowflake |
| `SYNAPSE_USERNAME` | string | Username for Azure Synapse |
| `SYNAPSE_PASSWORD` | string | Password for Azure Synapse |
Here's an example of a `credentials.env` file used for JDBC scoring (Docker's `--env-file` option expects plain `<key>=<value>` lines, without an `export` prefix):
``` shell
JDBC_USERNAME=TEST_USER
JDBC_PASSWORD=SECRET
```
## Run portable batch predictions {: #run-portable-batch-predictions }
Portable batch predictions run inside a Docker container. You need to mount the job definition file into the container, along with any datasets you score from the host filesystem (referencing their in-container paths in the job definition). Using the JDBC job definition and credentials from the previous examples, the following shows a complete example of starting a portable batch prediction job that scores to and from Snowflake.
``` shell
docker run --rm \
-v /host/filesystem/path/job_definition_jdbc.json:/docker/container/filesystem/path/job_definition_jdbc.json \
--network host \
--env-file /host/filesystem/path/credentials.env \
datarobot/datarobot-portable-prediction-api:<version> batch /docker/container/filesystem/path/job_definition_jdbc.json
```
Here is another example showing a complete end-to-end flow: it starts the PPS, runs the batch job, and writes job status back to the DataRobot platform so you can monitor progress.
``` shell
#!/bin/bash
# This snippet starts both the PPS service and PBP job using the same PPS docker image
# available from Developer Tools.
#################
# Configuration #
#################
# Specify path to directory with mlpkg(s) which you can download from deployment
MLPKG_DIR='/host/filesystem/path/mlpkgs'
# Specify job definition path
JOB_DEFINITION_PATH='/host/filesystem/path/job_definition.json'
# Specify path to file with credentials if needed (for cloud storage adapters or JDBC)
CREDENTIALS_PATH='/host/filesystem/path/credentials.env'
# For DataRobot integration, specify API host and Token
API_HOST='https://app.datarobot.com'
API_TOKEN='XXXXXXXX'
# Run PPS service in the background
PPS_CONTAINER_ID=$(docker run --rm -d -p 127.0.0.1:8080:8080 -v $MLPKG_DIR:/opt/ml/model datarobot/datarobot-portable-prediction-api:<version>)
# Wait for the PPS service to start up
sleep 15
# Run PPS in batch mode to start PBP job
docker run --rm -v $JOB_DEFINITION_PATH:/tmp/job_definition.json \
--network host \
--env-file $CREDENTIALS_PATH \
datarobot/datarobot-portable-prediction-api:<version> batch /tmp/job_definition.json \
--api_host $API_HOST --api_token $API_TOKEN
# Stop PPS service
docker stop $PPS_CONTAINER_ID
```
## More examples {: #more-examples }
In all of the following examples, assume that PPS is running locally on port `8080`, and the filesystem structure has the following format:
```
/host/filesystem/path/portable_batch_predictions/
├── job_definition.json
├── credentials.env
├── datasets
| └── intake_dataset.csv
├── output
└── jdbc
└── snowflake-jdbc-3.12.0.jar
```
### Filesystem scoring with single-model mode PPS {: #filesystem-scoring-with-single-model-mode-pps }
`job_definition.json` file:
``` json
{
"prediction_endpoint": "http://127.0.0.1:8080",
"intake_settings": {
"type": "filesystem",
"path": "/tmp/portable_batch_predictions/datasets/intake_dataset.csv"
},
"output_settings": {
"type": "filesystem",
"path": "/tmp/portable_batch_predictions/output/results.csv"
}
}
```
``` shell
#!/bin/bash
docker run --rm \
--network host \
-v /host/filesystem/path/portable_batch_predictions:/tmp/portable_batch_predictions \
datarobot/datarobot-portable-prediction-api:<version> batch \
/tmp/portable_batch_predictions/job_definition.json
```
### Filesystem scoring with multi-model mode PPS {: #filesystem-scoring-with-multi-model-mode-pps }
`job_definition.json` file:
```json
{
"prediction_endpoint": "http://127.0.0.1:8080",
"deployment_id": "lending_club",
"intake_settings": {
"type": "filesystem",
"path": "/tmp/portable_batch_predictions/datasets/intake_dataset.csv"
},
"output_settings": {
"type": "filesystem",
"path": "/tmp/portable_batch_predictions/output/results.csv"
}
}
```
``` shell
#!/bin/bash
docker run --rm \
--network host \
-v /host/filesystem/path/portable_batch_predictions:/tmp/portable_batch_predictions \
datarobot/datarobot-portable-prediction-api:<version> batch \
/tmp/portable_batch_predictions/job_definition.json
```
### Filesystem scoring with multi-model mode PPS and integration with DR job status tracking {: #filesystem-scoring-with-multi-model-mode-pps-and-integration-with-dr-job-status-tracking }
`job_definition.json` file:
``` json
{
"prediction_endpoint": "http://127.0.0.1:8080",
"deployment_id": "lending_club",
"intake_settings": {
"type": "filesystem",
"path": "/tmp/portable_batch_predictions/datasets/intake_dataset.csv"
},
"output_settings": {
"type": "filesystem",
"path": "/tmp/portable_batch_predictions/output/results.csv"
}
}
```
For the PPS MLPKG, in `config.yaml`, specify the deployment ID of the deployment for which you are running the portable batch prediction job.
``` shell
#!/bin/bash
docker run --rm \
--network host \
-v /host/filesystem/path/portable_batch_predictions:/tmp/portable_batch_predictions \
datarobot/datarobot-portable-prediction-api:<version> batch \
/tmp/portable_batch_predictions/job_definition.json \
--api_host https://app.datarobot.com --api_token XXXXXXXXXXXXXXXXXXX
```
### JDBC scoring with single-model mode PPS {: #jdbc-scoring-with-single-model-mode-pps }
`job_definition.json` file:
``` json
{
"prediction_endpoint": "http://127.0.0.1:8080",
"deployment_id": "lending_club",
"intake_settings": {
"type": "jdbc",
"table": "INTAKE_TABLE"
},
"output_settings": {
"type": "jdbc",
"table": "OUTPUT_TABLE",
"statement_type": "create_table"
},
"passthrough_columns_set": "all",
"include_probabilities": true,
"jdbc_settings": {
"url": "jdbc:snowflake://your_account.snowflakecomputing.com/?warehouse=SOME_WH&db=MY_DB&schema=MY_SCHEMA",
"class_name": "net.snowflake.client.jdbc.SnowflakeDriver",
"driver_path": "/tmp/portable_batch_predictions/jdbc/snowflake-jdbc-3.12.0.jar",
"template_name": "Snowflake"
}
}
```
`credentials.env` file:
```
JDBC_USERNAME=TEST
JDBC_PASSWORD=SECRET
```
``` shell
#!/bin/bash
docker run --rm \
--network host \
-v /host/filesystem/path/portable_batch_predictions:/tmp/portable_batch_predictions \
--env-file /host/filesystem/path/credentials.env \
datarobot/datarobot-portable-prediction-api:<version> batch \
/tmp/portable_batch_predictions/job_definition.json
```
### S3 scoring with single-model mode PPS {: #s3-scoring-with-single-model-mode-pps }
`job_definition.json` file:
``` json
{
"prediction_endpoint": "http://127.0.0.1:8080",
"intake_settings": {
"type": "s3",
"url": "s3://intake/dataset.csv",
"format": "csv"
},
"output_settings": {
"type": "s3",
"url": "s3://output/result.csv",
"format": "csv"
}
}
```
`credentials.env` file:
```
AWS_ACCESS_KEY_ID=XXXXXXXXXXXX
AWS_SECRET_ACCESS_KEY=XXXXXXXXXXX
```
``` shell
#!/bin/bash
docker run --rm \
--network host \
-v /host/filesystem/path/portable_batch_predictions:/tmp/portable_batch_predictions \
--env-file /path/to/credentials.env \
datarobot/datarobot-portable-prediction-api:<version> batch \
/tmp/portable_batch_predictions/job_definition.json
```
### Snowflake scoring with multi-model mode PPS {: #snowflake-scoring-with-multi-model-mode-pps }
`job_definition.json` file:
``` json
{
"prediction_endpoint": "http://127.0.0.1:8080",
"deployment_id": "lending_club",
"intake_settings": {
"type": "snowflake",
"table": "INTAKE_TABLE",
"schema": "MY_SCHEMA",
"external_stage": "MY_S3_STAGE_IN_SNOWFLAKE"
},
"output_settings": {
"type": "snowflake",
"table": "OUTPUT_TABLE",
"schema": "MY_SCHEMA",
"external_stage": "MY_S3_STAGE_IN_SNOWFLAKE",
"statement_type": "insert"
},
"passthrough_columns_set": "all",
"include_probabilities": true,
"jdbc_settings": {
"url": "jdbc:snowflake://your_account.snowflakecomputing.com/?warehouse=SOME_WH&db=MY_DB&schema=MY_SCHEMA",
"class_name": "net.snowflake.client.jdbc.SnowflakeDriver",
"driver_path": "/tmp/portable_batch_predictions/jdbc/snowflake-jdbc-3.12.0.jar",
"template_name": "Snowflake"
}
}
```
`credentials.env` file:
```
# Snowflake creds for JDBC connectivity
SNOWFLAKE_USERNAME=TEST
SNOWFLAKE_PASSWORD=SECRET
# AWS creds needed to access external stage
AWS_ACCESS_KEY_ID=XXXXXXXXXXXX
AWS_SECRET_ACCESS_KEY=XXXXXXXXXXX
```
``` shell
#!/bin/bash
docker run --rm \
--network host \
-v /host/filesystem/path/portable_batch_predictions:/tmp/portable_batch_predictions \
--env-file /host/filesystem/path/credentials.env \
datarobot/datarobot-portable-prediction-api:<version> batch \
/tmp/portable_batch_predictions/job_definition.json
```
### Time series scoring over Azure Blob with multi-model mode PPS {: #ts-azure-scoring-with-multi-model-mode-pps }
`job_definition.json` file:
``` json
{
"prediction_endpoint": "http://127.0.0.1:8080",
"deployment_id": "euro_date_ts_mlpkg",
"intake_settings": {
"type": "azure",
"url": "https://batchpredictionsdev.blob.core.windows.net/datasets/euro_date.csv",
"format": "csv"
},
"output_settings": {
"type": "azure",
"url": "https://batchpredictionsdev.blob.core.windows.net/results/output_ts.csv",
"format": "csv"
},
"timeseries_settings":{
"type": "forecast",
"forecast_point": "2007-11-14",
"relax_known_in_advance_features_check": true
}
}
```
`credentials.env` file:
```
# Azure Blob connection string
AZURE_CONNECTION_STRING='DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=XXX;EndpointSuffix=core.windows.net'
```
``` shell
#!/bin/bash
docker run --rm \
--network host \
-v /host/filesystem/path/portable_batch_predictions:/tmp/portable_batch_predictions \
--env-file /host/filesystem/path/credentials.env \
datarobot/datarobot-portable-prediction-api:<version> batch \
/tmp/portable_batch_predictions/job_definition.json
```
|
portable-batch-predictions
|
---
title: DataRobot Prime
description: Learn how DataRobot Prime optimizes models for use outside the DataRobot application. You can build a DataRobot Prime model for most models on the Leaderboard.
---
# DataRobot Prime {: #datarobot-prime }
!!! info "Availability information"
The ability to create new DataRobot Prime models has been removed from the application. This does not affect existing Prime models or deployments. To export Python code in the future, use the Python code export function in any RuleFit model.
DataRobot Prime builds models for use outside of the DataRobot application, which can provide [multiple benefits](#reasons-to-use-datarobot-prime). Once created, you can export these models as a Python module or a Java class, and [run the exported script](rulefit-examples).
Using a technique known as "knowledge distillation," a form of regularization, DataRobot trains a smaller (“student”) model using the original Leaderboard (“teacher”) model’s predictions as the target. Once the rule-based Prime model is on the Leaderboard, you can compare its validation score against the teacher and other models in the project.
??? tip "Deep dive: Knowledge distillation"
Training a model based on the outputs of another model is called knowledge distillation. In this case the initial model is the "teacher" model and the DataRobot Prime model is the "student." DataRobot Prime creates a parametric model (a model with a finite number of parameters) that performs comparably to a selected model on the Leaderboard. All metrics are calculated for the model's ability to predict the target rather than the ability to predict the teacher model's output.
Knowledge distillation is an effective regularization technique used to better predict the "truth." The meaning of truth may be different in the training data versus what comes after the model is deployed. In training data, truth is the target column. At prediction time, it is what the value of the target column will eventually be (even though it’s not known yet).
Here's a simple example:
You want to predict the likelihood that a flipped coin will come up heads. You have predictive features like temperature, wind speed, humidity, time of day, day of the week (one-hot-encoded), and many more.
The standard approach would be to predict on the raw data (0s for tails, 1s for heads). Even with data from thousands of coin flips, a model would overfit. For example, if every time the temperature was 67.013 degrees and wind speed was 10mph, the coin came up heads, the model would conclude this will universally remain true in the future.
Now, as a human you know that temperature and wind speed don’t affect a coin flip—the best prediction is “50% likelihood of heads.” So you set the target to reflect that value and train a model on data where everything is set at 50%. Your resulting (student) model always predicts 50% because that is the only target it has seen. This would yield a great model—better than the one that used the raw data.
This example is a contrived scenario that proposes that you know the accompanying features don’t actually help. Its point is to describe knowledge distillation as an approach that moves slightly towards the “human putting in 50%,” but in a way that’s appropriate for more realistic scenarios.
Raw data involves _some_ random chance. A teacher model typically does not make predictions of 0% or 100% probability. Instead, it uncovers some of the underlying structure in _how_ to make good predictions. (The exact amount of structure depends on the model type.) If, for instance, you used a tree-based model in the coin-flipping example, it would group “similar” coin flips together so that the predicted outcomes don't result in 0s and 1s—it would group flips together based on features. Then its prediction is the average outcome in each group.
For example, if you had 15 flips meeting some grouping criteria like:
* Between 1PM and 2PM
* Wind speed under 10mph
* Temperature between 65 and 70 degrees
Of those 15 flips, 12 (80%) were heads and 3 (20%) were tails. Given that, the model would predict 80% likelihood of heads for anything in the future meeting those criteria.
The take-away here is that the small (naive) model can benefit from the teacher model's structure, making it overfit less.
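The leaf-averaging behavior described above can be sketched in a few lines of Python. In this toy example (the bucket name and flip counts are invented for illustration), observations are grouped by feature bucket and the prediction for each bucket is the group's average outcome, as a tree leaf would compute it:

``` python
from collections import defaultdict

def leaf_predictions(rows):
    """Group (bucket, outcome) pairs and return the mean outcome per bucket."""
    groups = defaultdict(list)
    for bucket, outcome in rows:
        groups[bucket].append(outcome)
    return {bucket: sum(v) / len(v) for bucket, v in groups.items()}

# 15 flips in the "1-2PM / wind < 10mph / 65-70 degrees" bucket: 12 heads, 3 tails.
flips = [("afternoon_calm_mild", 1)] * 12 + [("afternoon_calm_mild", 0)] * 3
print(leaf_predictions(flips))  # {'afternoon_calm_mild': 0.8}
```

A student model trained on these 0.8-style soft targets inherits the teacher's grouping structure instead of memorizing raw 0/1 outcomes.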
You can build a DataRobot Prime model for most models on the Leaderboard. There are, however, some situations in which this type of model cannot be built. See the associated [considerations](#feature-considerations) for additional information.
## Creating a DataRobot Prime model {: #creating-a-datarobot-prime-model }
DataRobot Prime makes predictions using the number of features it has determined to be the optimal balance against the project's original [metric](additional#change-the-optimization-metric). To create a DataRobot Prime model:
1. [Process your dataset](model-data#set-the-modeling-mode) using any of the modeling modes.
2. Expand the model you want to apply DataRobot Prime to; click the **DataRobot Prime** tab.
3. On the resulting screen, click **RUN DATAROBOT PRIME**. You will see the modeling job added to the [Worker Queue](worker-queue) and receive a success message:

When the job completes, the new DataRobot Prime model is available on the Leaderboard. The description below the model name contains the name and model number of the parent model, as well as the number of rules used in the downloadable code.

4. Expand the new DataRobot Prime model and click the **DataRobot Prime** tab to view a graph (explained [here](#exploring-the-datarobot-prime-model)) of 10 rule count options plotted against the resulting metric score for each:

### Changing the rule count {: #changing-the-rule-count }
Initially, DataRobot builds a model based on the best rule count choice. There are [reasons](#why-to-change-the-rule-count) why you may want to change the rule count, however. To use a different rule count:
1. Determine, from the graph, the number of rules in your chosen selection.
2. Select the new rule count by clicking the associated radio button.
3. Confirm the new model request by clicking **CONTINUE**. When you click, DataRobot generates a new DataRobot Prime model, with the new rule count, and adds the entry to the Leaderboard.
## Exporting your DataRobot Prime model {: #exporting-your-datarobot-prime-model }
Once you are satisfied with the performance of your DataRobot Prime model, you can generate and download production code to make predictions.
### Downloading production code {: #downloading-production-code }
To download production code:
1. Using the **Select Language** dropdown in the bottom left corner, choose either Python or Java.
!!! tip
When using the generated source code in Python, you must [specify the encoding](rulefit-examples) if you are using a character set other than UTF-8.
2. Click **Generate and Download Code**. If this is the first time you are generating code for the model, DataRobot launches a <em>Prime Validation</em> job to test and verify the integrity of the source code it is generating. You can monitor the job progress in the Worker Queue:

3. When testing completes, DataRobot displays a message indicating whether validation passed or [failed](#if-validation-fails) and provides a button to download the code:

To download DataRobot Prime model code for production use, click **DOWNLOAD GENERATED CODE** and browse to a save location. You can now use the code outside of DataRobot to make predictions.
### Using debugging information {: #using-debugging-information }
When creating code, DataRobot tries to predict each row and, if an exception or error occurs, records the error in the code output (`stderr`). Search for these messages to verify the integrity of your production code data or if you encounter problems when trying to run the production code.
For example, where "healthy" production code returns this:
``` python
def predict_dataframe(ds):
    return ds.apply(predict, axis=1)
```
Erroring code returns something similar to this:
``` python
def predict_dataframe(ds):
    try:
        return ds.apply(predict, axis=1)
    except TypeError as e:
        sys.stderr.write('Error processing column: ' + unicode(e) + '\n')
        os._exit(1)
```
## Using a DataRobot Prime model {: #using-a-datarobot-prime-model }
Once you have [exported your DataRobot Prime model](#exporting-your-datarobot-prime-model) in a selected language, you can use it for prediction. See the [Prime examples section](rulefit-examples) for more information.
## More info... {: #more-info }
This section provides additional details on DataRobot Prime models as well as tips in the event [validation fails](#if-validation-fails).
### Reasons to use DataRobot Prime {: #reasons-to-use-datarobot-prime }
DataRobot Prime supports the model transparency goals of DataRobot by providing:
* Generated model and scoring code.
* A coefficients model to verify data integrity.
* Multiple language support.
* DataRobot integration into systems that can’t necessarily communicate with the DataRobot environment (for example, for privacy reasons).
* Proof of performance as evidenced by the Prime model also placing on the Leaderboard.
* Low-latency scoring without API call overhead. For example, if you run a real-time, low-latency scoring platform built on GLMs, custom code, or rule-based systems in a fast language like C++ or Java, DataRobot Prime code export lets you score directly on that platform without the overhead of an API call.
### Exploring the DataRobot Prime model {: #exploring-the-datarobot-prime-model }
To view the graph of rule count options plotted against the resulting metric score for each option, expand the DataRobot Prime model on the Leaderboard and click the **DataRobot Prime** tab:

The following table describes the elements of the **DataRobot Prime** tab page for existing Prime models:
| Element | Description |
|------------------|--------------------|
| Complexity vs. `<metric>` (1) | Displays the metric used in the original project build. |
| Rule count options (2) | Lists the 10 rule count options, and their associated metric values, available for the model. Click a radio button to begin building a new model with a different rule count. |
| Language selection (3) | Provides a mechanism for choosing the language for your downloadable code.|
| Code generation link (4) | Begins the code generation (and, ultimately, download) process for [exporting your DataRobot Prime model](#exporting-your-datarobot-prime-model). |
### Why to change the rule count {: #why-to-change-the-rule-count }
Initially, DataRobot builds a model based on the best rule count choice. You may learn from the graph, however, that there is a better rule count choice and so you can change the rule count to simplify your model. For example, a particular rule count may have fewer rules than the best selection, while only suffering a small score penalty.
When you [change the rule count](#changing-the-rule-count), DataRobot builds a new DataRobot Prime model and adds it to the Leaderboard. Any previous DataRobot Prime models built from the blueprint remain available. Note that you must generate and download code for each model individually.
### Supported transformations {: #supported-transformations }
There may be cases where you have applied a var type transformation to a feature and then created a feature list using the transformed feature. You can create a DataRobot Prime model using a <em>var type transformation</em> (a change from the type DataRobot detected and assigned to a type of your own choosing). If you execute the generated code on a dataset that does not contain the transformed feature, the DataRobot Prime model returns the same results as DataRobot's internal predictions. Because transformations allow you to define a "NaN" value, DataRobot replaces invalid values in the generated code with the value you defined.
DataRobot Prime does not support user-defined, log, square, or power transformations. Specifically, you can use the following var type transformations:
| Original | Transformed |
|-------------|-------------|
| Date | Categorical |
| Date | Numeric |
| Numeric | Categorical |
| Categorical | Numeric |
| Categorical | Text |
| Text | Categorical |
| Text | Numeric |
### If validation fails {: #if-validation-fails }
Although rare, it is possible that DataRobot returns an error message when it runs validation in response to a request to generate code. There are two reasons for an error; DataRobot reports the error type in the message it returns. Note that even with an error message, you can still download code. It is best to [email DataRobot Customer Support](getting-help#access-resources) describing the issue for further assistance. Reasons for failure include:
* Predictions from the generated code were not close enough to the predictions from the DataRobot Prime model. In this case, generated code can still be run.
* Generated code could not run due to issues such as problematic data or an out-of-memory error. In this case, the generated code probably will not run: if the issue is a data problem, the code most likely will not run anywhere; if it is a memory error, the code may still run on a local machine with more memory than the workers that tried to validate it.
You can re-run the validation if you feel circumstances may return a different result. Also, review the [DataRobot Prime considerations](#feature-considerations). To re-run a validation job:
1. Delete the DataRobot Prime model.
2. Run the model again (either by rerunning the original model or generating a new model from the [**DataRobot Prime**](#exploring-the-datarobot-prime-model) tab graph).
3. Click **Generate and Download Code** to run the validation job again.
If validation still fails, click the link in the modal where the failure is indicated. DataRobot opens your email client and populates a message with the DataRobot Customer Support recipient, a subject line, and message content to help Support assist you in debugging the issue. You can add any additional information, if you choose.
## Feature considerations {: #feature-considerations }
The following considerations apply to DataRobot Prime:
* DataRobot Prime models cannot be built when the model:
    * Has Image, Location, Date, or Summarized Categorical features, or features derived from them, in the feature list, or when the feature list contains a single text column.
    * Is part of a multiclass project.
* Date/time partitioning is not available for DataRobot Prime.
* DataRobot Prime models are not displayed on the [Learning Curve](learn-curve), but do display on [Speed vs Accuracy](speed).
* DataRobot Prime models must be run on the same feature list, and at the same sample size, as the original model.
* You cannot manually launch cross-validation from a DataRobot Prime model.
* When using DataRobot Prime, you must run the model with enough data left to include a validation set. In other words, you cannot build or retrain a DataRobot Prime model on 100% of the data. Instead, you can set the holdout set to 0% and make the validation set smaller. Be aware, however, that your model results will not be properly compared if the validation set is too small; generally, the validation set should be at least 10%.
* DataRobot Prime does not employ the same level of ts-date-time format checking as the other prediction mechanisms. As a result, ts-date-time formatting inconsistencies between training data and prediction data may lead to incorrect predictions (the date value will be imputed as NaN rather than explicitly erroring, as would happen with other DataRobot prediction mechanisms). To ensure that this does not cause a problem, verify the formats are the same before running predictions.
* DataRobot Prime is disabled when **Exposure** and/or **Offset** parameters are set.
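Because Prime imputes unparseable dates as NaN instead of raising an error, a quick pre-check on the prediction data can catch format mismatches before scoring. A minimal sketch (the `%Y-%m-%d` format string and the sample values are assumptions for illustration):

``` python
from datetime import datetime

def rows_with_bad_dates(values, fmt="%Y-%m-%d"):
    """Return indices of date strings that do not match the expected format."""
    bad = []
    for i, value in enumerate(values):
        try:
            datetime.strptime(value, fmt)
        except ValueError:
            bad.append(i)
    return bad

print(rows_with_bad_dates(["2007-11-14", "11/14/2007", "2007-11-15"]))  # [1]
```

Run a check like this against both the training extract and the prediction data to confirm the formats agree before producing predictions.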
|
index
|
---
title: RuleFit export examples
description: Learn how to generate source code for a model as a Python module or Java class, and use DataRobot Prime with Python or Java.
---
# RuleFit export examples {: #ruleFit-export-examples }
You can generate source code for the model as a [Python module](#using-rulefit-with-python) or [Java class](#using-rulefit-with-java).
## Using RuleFit with Python {: #using-rulefit-with-python }
Using RuleFit with Python requires:
* Python (Recommended: 3.7)
* Numpy (Recommended: 1.16)
* Pandas < 1.0 (Recommended: 0.23)
To make predictions with a DataRobot Prime model, run the exported Python script file using the following command:
``` shell
python <prediction_file> --encoding=<encoding> <data_file> <output_file>
```
Where:
* `<prediction_file>` specifies the downloaded Python code version of the RuleFit model.
* `<encoding>` (optional) specifies the encoding of the dataset you are going to make predictions with. RuleFit defaults to UTF-8 if not otherwise specified. See the "Codecs" column of the <a target="_blank" href="https://docs.python.org/3/library/codecs#standard-encodings">Python-supported standards chart</a> for possible alternative entries.
* `<data_file>` specifies a .csv file (your dataset); columns must correspond to the feature set used to generate the model.
* `<output_file>` specifies the filename where DataRobot writes the results.
### Python Example {: #python-example }
In the following example, `rulefit.py` is a Python script containing a RuleFit model trained on the following dataset:
```
race,gender,age,readmitted
Caucasian,Female,[50-60),0
Caucasian,Male,[50-60),0
Caucasian,Female,[80-90),1
```
The following command produces predictions for the data in `data.csv` and outputs the results to `results.csv`.
``` shell
python rulefit.py data.csv results.csv
```
The file `data.csv` is a .csv file that looks like this:
```
race,gender,age
Hispanic,Male,[40-50)
Caucasian,Male,[80-90)
AfricanAmerican,Male,[60-70)
```
The results in `results.csv` look like this:
```
Index,Prediction
0,0.438665626555
1,0.611403738867
2,0.269324648106
```
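Because the results file is plain CSV with `Index` and `Prediction` columns, it can be consumed with the standard library alone. A small sketch using the sample output above:

``` python
import csv
import io

# Sample contents of results.csv, taken from the example above.
sample = "Index,Prediction\n0,0.438665626555\n1,0.611403738867\n2,0.269324648106\n"

def load_predictions(fileobj):
    """Read a RuleFit results file into a list of (index, prediction) pairs."""
    reader = csv.DictReader(fileobj)
    return [(int(row["Index"]), float(row["Prediction"])) for row in reader]

pairs = load_predictions(io.StringIO(sample))
print(pairs[0])  # (0, 0.438665626555)
```

In practice you would pass an open file handle for `results.csv` instead of the in-memory sample.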
## Using RuleFit with Java {: #using-rulefit-with-java }
To run DataRobot Prime with Java:
* You must use the <a target="_blank" href="https://www.oracle.com/technetwork/java/javase/downloads/index.html">JDK</a> for Java version 1.7.x or later.
* Do not rename any of the classes in the file.
* You must include the <a target="_blank" href="https://commons.apache.org/proper/commons-csv/">Apache commons CSV library</a> version 1.1 or later to be able to run the code.
* You must rename the exported code Java file to `Prediction.java`.
Compile the Java file using the following command:
``` shell
javac -cp ./:./commons-csv-1.1.jar Prediction.java -d ./ -encoding 'UTF-8'
```
Execute the compiled Java class using the following command:
``` shell
java -cp ./:./commons-csv-1.1.jar Prediction <data_file> <output_file>
```
Where:
* `<data_file>` specifies a .csv file (your dataset); columns must correspond to the feature set used to generate the RuleFit model.
* `<output_file>` specifies the filename where DataRobot writes the results.
### Java Example {: #java-example }
The following example generates predictions for `data.csv` and writes them to `results.csv`:
``` shell
javac -cp ./:./commons-csv-1.1.jar Prediction.java -d ./ -encoding 'UTF-8'
java -cp ./:./commons-csv-1.1.jar Prediction data.csv results.csv
```
See the [Python example](#python-example) for details on the format of input / output data.
|
rulefit-examples
|
---
title: Make a one-time batch prediction
description: Make a batch prediction for a deployed model with a dataset of any size. Learn about additional prediction options for time series deployments.
---
# Make a one-time batch prediction {: #make-a-one-time-batch-prediction }
Use the **Deployments > Make Predictions** tab to efficiently score datasets with a deployed model by making batch predictions.
!!! note
To make predictions with a model before deployment, select the model from the **Leaderboard** and navigate to [**Predict > Make Predictions**](predict).
Batch predictions are a method of making predictions with large datasets, in which you pass input data and get predictions for each row. DataRobot writes these predictions to output files. You can also:
* Schedule [Batch Prediction Jobs](batch-pred-jobs) by specifying the prediction data source and destination and determining when DataRobot runs the predictions.
* Make predictions with the [Batch Prediction API](batch-prediction-api/index).
## Select a prediction source {: #select-a-prediction-source }
To make batch predictions with a deployed model, navigate to the deployment's **Predictions > Make Predictions** tab and upload a prediction source:
* Click and drag a file into the **Prediction source** group box.
* Click **Choose file** to upload a **Local file** or a dataset stored in the **AI Catalog**.

!!! note
When you upload a prediction dataset, it is automatically stored in the **AI Catalog** once the upload is complete. Be sure not to navigate away from the page during the upload, or the dataset will not be stored in the catalog. If the dataset is still processing after the upload, DataRobot is [running EDA](eda-explained) on the dataset before it becomes available for use.
## Make predictions with a deployment {: #make-predictions-with-a-deployment }
This section explains how to use the **Make Predictions** tab to make batch predictions for standard deployments and time series deployments.

| | Field name | Description |
|-|------------|---------------|
|  | Prediction source | [Select a prediction source](#select-a-prediction-source) by uploading a local file or importing a dataset from the AI Catalog. |
|  | Time series options | Specify and configure a [time series prediction method](#set-time-series-options). |
|  | Prediction options | [Configure the prediction options](#set-prediction-options). |
|  | Compute and download predictions | [Score the data and download the predictions](#compute-and-download-predictions). |
|  | Your recent predictions | View your recent batch predictions and download the results. These predictions are available for download for 48 hours. |
## Set time series options {: #set-time-series-options }
{% include 'includes/batch-pred-ts-scoring-data-requirements.md' %}
{% include 'includes/batch-pred-ts-options-include.md' %}
## Set prediction options {: #set-prediction-options}
Once the file is uploaded, configure the **Prediction options**. Optionally, you can click **Show advanced options** to configure additional options.
{% include 'includes/prediction-options-include.md' %}
## Compute and download predictions {: #compute-and-download-predictions }
Once configured, click **Compute and download predictions** to start scoring the data.

When scoring completes, click **Download Predictions** to download a predictions file.
If the prediction job fails, click **View logs** to view and optionally copy the run details.
Predictions are available for download on the **Predictions > Make Predictions** page for the next 48 hours. You can also view and download predictions and logs on the [**Deployments > Prediction Jobs** tab](batch-pred-jobs#manage-prediction-jobs).
!!! tip "Cancel a batch prediction job"
Click the orange **X** while the job is running to cancel it. Once canceled, you can click the arrow to view the logs for the job.

|
batch-pred
|
---
title: Manage prediction job definitions
description:
---
# Manage prediction job definitions
To view and manage prediction job definitions, select a deployment on the **Deployments** tab and navigate to the **Job Definitions > Prediction Jobs** tab.

Click the action menu for a job definition and select one of the actions described below:
| Element | Description |
|---|---|
| View job history | Displays the **Deployments > Batch Jobs** tab listing all prediction jobs generated from the job definition. |
| Run now | Runs the job definition immediately. Go to the **Deployments > Batch Jobs** tab to view progress. |
| Edit definition| Displays the job definition so that you can update and save it. |
| Disable definition| Suspends a job definition. Any scheduled batch runs from the job definition are suspended. From the action menu of a job definition, click **Disable definition**. After you select **Disable definition**, the menu item becomes **Enable definition**. Click **Enable definition** to re-enable batch runs from this job description. |
| Clone definition | Creates a new job definition populated with the values from an existing job definition. From the action menu of the existing job definition, click **Clone definition**, update the fields as needed, and click **Save prediction job definition**. Note that the **Jobs schedule** settings are turned off by default.|
| Delete definition | Deletes the job definition. Click **Delete definition**, and in the confirmation window, click **Delete definition** again. All scheduled jobs are canceled. |
|
manage-pred-job-def
|
---
title: Batch prediction UI
description: Use a deployment's batch prediction interface to score large files efficiently.
---
# Batch prediction UI {: #batch-scoring-methods }
To make batch predictions from the UI, you must first deploy a model. After deploying, navigate to the [**Make Predictions** tab](batch-pred) and use the interface to either make a [one-time batch prediction](batch-pred) or [configure batch prediction jobs](batch-pred-jobs).
You can also view [example configurations using a Snowflake database](pred-job-examples-snowflake).
|
index
|
---
title: Schedule recurring batch prediction jobs
description: How to configure, execute, and schedule batch prediction jobs for deployed models.
---
# Schedule recurring batch prediction jobs {: #schedule-recurring-batch-prediction-jobs }
You might want to make a [one-time batch prediction](batch-pred), but you might also want to schedule regular batch prediction jobs. This section shows how to create and schedule batch prediction jobs.
Be sure to review the [deployment and prediction considerations](deployment/index#feature-considerations) before proceeding.
## Create a prediction job definition {: #create-a-prediction-job-definition }
Job definitions are flexible templates for creating batch prediction jobs. You can store definitions inside DataRobot and run new jobs with a single click, API call, or automatically via a schedule. Scheduled jobs do not require you to provide connection, authentication, and prediction options for each request.
To create a job definition for a deployment, navigate to the **Job Definitions** tab. The following table describes the information and actions available on the **New Prediction Job Definition** tab.

| | Field name | Description |
|---|---------------------|--------------|
|  | Prediction job definition name | Enter the name of the prediction job that you are creating for the deployment. |
|  | Prediction source | Set the [source type](#set-up-prediction-sources) and [define the connection](data-conn) for the data to be scored. |
|  | Prediction options | [Configure the prediction options](#set-prediction-options). |
|  | Time series options | Specify and configure a [time series prediction method](#set-time-series-options). |
|  | Prediction destination | Indicate the output destination for predictions. Set the [destination type](#set-up-prediction-destinations) and [define the connection](data-conn). |
|  | Jobs schedule | Toggle whether to run the job immediately and whether to [schedule the job](#schedule-prediction-jobs).|
|  | Save prediction job definition | Click this button to save the job definition. The button changes to **Save and run prediction job definition** if the **Run this job immediately** toggle is turned on. Note that this button is disabled if there are validation errors. |
Once fully configured, click **Save prediction job definition** (or **Save and run prediction job definition** if **Run this job immediately** is enabled).
!!! note
Completing the **New Prediction Job Definition** tab configures the details required by the Batch Prediction API. Reference the [Batch Prediction API](batch-prediction-api/index) documentation for more information.
## Set up prediction sources {: #set-up-prediction-sources }
Select a prediction source (also called an [intake adapter](intake-options)):

To set a prediction source, complete the appropriate authentication workflow for the [source type](#source-connection-types).
For AI Catalog sources, the job definition displays the modification date, the user that set the source, and a [badge](catalog-asset#asset-states) that represents the state of the asset (in this case, STATIC).
After you set your prediction source, DataRobot validates that the data is applicable to the deployed model:

!!! note
DataRobot validates that a data source is compatible with the deployed model when possible, but not in all cases. DataRobot validates for AI Catalog, most JDBC connections, Snowflake, and Synapse.
### Source connection types {: #source-connection-types }
Select a connection type below to view field descriptions.
!!! note
When browsing for connections, invalid adapters are not shown.
**Database connections**
* [JDBC](intake-options#jdbc-scoring)
**Cloud Storage Connections**
* [Azure](intake-options#azure-blob-storage-scoring)
* [Google Cloud Storage](intake-options#google-cloud-storage-scoring) (GCP)
* [S3](intake-options#s3-scoring)
**Data Warehouse Connections**
* [BigQuery](intake-options#bigquery-scoring)
* [Snowflake](intake-options#snowflake-scoring)
* [Synapse](intake-options#synapse-scoring)
**Other**
* [AI Catalog](intake-options#ai-catalog-dataset-scoring)
For information about supported data sources, see [Data sources supported for batch predictions](batch-prediction-api/index#data-sources-supported-for-batch-predictions).
## Set prediction options {: #set-prediction-options }
Specify what information to include in the prediction results:
{% include 'includes/prediction-options-include.md' %}
## Set time series options {: #set-time-series-options }
{% include 'includes/batch-pred-ts-scoring-data-requirements.md' %}
{% include 'includes/batch-pred-jobs-ts-options-include.md' %}
## Set up prediction destinations {: #set-up-prediction-destinations }
Select a prediction destination (also called an [output adapter](output-options)):

Complete the appropriate authentication workflow for the [destination type](#destination-connection-types).
### Destination connection types {: #destination-connection-types }
Select a connection type below to view field descriptions.
!!! note
When browsing for connections, invalid adapters are not shown.
**Database connections**
* [JDBC](output-options#jdbc-write)
**Cloud Storage Connections**
* [Azure](output-options#azure-blob-storage-write)
* [Google Cloud Storage](output-options#google-cloud-storage-write) (GCP)
* [S3](output-options#s3-write)
**Data Warehouse Connections**
* [BigQuery](output-options#bigquery-write)
* [Snowflake](output-options#snowflake-write)
* [Synapse](output-options#synapse-write)
**Other**
* [Tableau](output-options#tableau-write)
## Schedule prediction jobs {: #schedule-prediction-jobs }
You can configure prediction jobs to run automatically on a schedule. When outlining a job definition, toggle on the jobs schedule, then specify the frequency (daily, hourly, monthly, etc.) and time of day to define the schedule on which the job runs.

For further granularity, select **Use advanced scheduler**. You can specify the exact time for the prediction job to run down to the minute.

After setting all applicable options, click **Save prediction job definition**.
|
batch-pred-jobs
|
---
title: Snowflake prediction job examples
description: Configure prediction jobs with Snowflake connections.
---
# Snowflake prediction job examples {: #snowflake-prediction-job-examples }
There are two ways to set up a batch prediction job definition for Snowflake:
* Using a [JDBC connector with Snowflake](#jdbc-with-snowflake) as an external data source.
* Using the [Snowflake adapter](#snowflake-with-an-external-stage) with an [external stage](glossary/index#external-stage).
??? tip "Which connection method should I use for Snowflake?"
Using JDBC to transfer data can be costly in terms of IOPS (input/output operations per second) and expense for data warehouses. The Snowflake adapter reduces the load on database engines during prediction scoring by using cloud storage and bulk insert to create a hybrid JDBC-cloud storage solution.
## JDBC with Snowflake {: #jdbc-with-snowflake }
To complete these examples, follow the steps in [Create a prediction job definition](batch-pred-jobs#create-a-prediction-job-definition), using the following procedures to configure JDBC with Snowflake as your prediction source and destination.
### Configure JDBC with Snowflake as source {: #configure-jdbc-with-snowflake-as-source }
!!! tip
See [Prediction intake options](intake-options#jdbc-scoring) for field descriptions.
1. For **Prediction source**, select **JDBC** as the **Source type** and click **+ Select connection**.

2. Select a previously added JDBC Snowflake [connection](data-conn).

3. Select your Snowflake account.

4. Select your Snowflake schema.

5. Select the table you want scored and click **Save connection**.

6. Continue setting up the rest of the job definition. [Schedule](batch-pred-jobs#schedule-prediction-jobs) and save the definition. You can also run it immediately for testing. [Manage your jobs](batch-pred-jobs#manage-prediction-jobs) on the **Prediction Jobs** tab.
### Configure JDBC with Snowflake as destination {: #configure-jdbc-with-snowflake-as-destination }
!!! tip
See [Prediction output options](output-options#jdbc-write) for field descriptions.
1. For **Prediction destination**, select **JDBC** as the **Destination type** and click **+ Select connection**.

2. Select a previously added JDBC Snowflake [connection](data-conn).

3. Select your Snowflake account.

4. Select the schema you want to write the predictions to.

5. Select a table or create a new table. If you create a new table, DataRobot creates the table with the proper features, and assigns the correct data type to each feature.

6. Enter the table name and click **Save connection**.

7. Select the **Write strategy**. In this case, **Insert** is selected because the table is new.

8. Continue setting up the rest of the job definition. [Schedule](batch-pred-jobs#schedule-prediction-jobs) and save the definition. You can also run it immediately for testing. [Manage your jobs](batch-pred-jobs#manage-prediction-jobs) on the **Prediction Jobs** tab.
## Snowflake with an external stage {: #snowflake-with-an-external-stage }
Before using the Snowflake adapter for job definitions, you need to:
* Set up the Snowflake [connection](data-conn#create-a-new-connection).
* Create an external stage for Snowflake, a cloud storage location used for loading and unloading data. You can create an [Amazon S3 stage](https://docs.snowflake.com/en/user-guide/data-load-s3-create-stage.html){ target=_blank } or a [Microsoft Azure stage](https://docs.snowflake.com/en/user-guide/data-load-azure-create-stage.html){ target=_blank }. You will need your account and authentication keys.
To complete these examples, follow the steps in [Create a prediction job definition](batch-pred-jobs#create-a-prediction-job-definition), using the following procedures to configure Snowflake as your prediction source and destination.
### Configure Snowflake with an external stage as source {: #configure-snowflake-with-an-external-stage-as-source }
!!! tip
See [Prediction intake options](intake-options#snowflake-scoring) for field descriptions.
1. For **Prediction source**, select **Snowflake** as the **Source type** and click **+ Select connection**.

2. Select a previously added Snowflake [connection](data-conn).

3. Select your Snowflake account.

4. Select your Snowflake schema.

5. Select the table you want scored and click **Save connection**.

6. Toggle on **Use external stage** and select your **Cloud storage type** (Azure or S3).

7. Enter the **External stage** you created for your Snowflake account. Enable **This external stage requires credentials** and click **+ Add credentials**.

8. Select your credentials.

The completed **Prediction source** section looks like the following:

9. Continue setting up the rest of the job definition. [Schedule](batch-pred-jobs#schedule-prediction-jobs) and save the definition. You can also run it immediately for testing. [Manage your jobs](batch-pred-jobs#manage-prediction-jobs) on the **Prediction Jobs** tab.
### Configure Snowflake with an external stage as destination {: #configure-snowflake-with-an-external-stage-as-destination }
!!! tip
See [Prediction output options](output-options#snowflake-write) for field descriptions.
1. For **Prediction destination**, select **Snowflake** as the **Destination type** and click **+ Select connection**.

2. Select a previously added Snowflake [connection](data-conn).

3. Select your Snowflake account.

4. Select your Snowflake schema.

5. Select a table or create a new table. If you create a new table, DataRobot creates the table with the proper features, and assigns the correct data type to each feature.

6. Enter the table name and click **Save connection**.

7. Toggle on **Use external stage** and select your **Cloud storage type** (Azure or S3).

8. Enter the **External stage** you created for your Snowflake account. Enable **This external stage requires credentials** and click **+ Add credentials**.

9. Select your credentials.

The completed **Prediction destination** section looks like the following:

10. Continue setting up the rest of the job definition. [Schedule](batch-pred-jobs#schedule-prediction-jobs) and save the definition. You can also run it immediately for testing. [Manage your jobs](batch-pred-jobs#manage-prediction-jobs) on the **Prediction Jobs** tab.
|
pred-job-examples-snowflake
|
---
title: Prediction monitoring jobs
description: To integrate more closely with external data sources, monitoring job definitions allow DataRobot to monitor deployments running and storing feature data and predictions outside of DataRobot.
---
# Prediction monitoring jobs
To integrate more closely with external data sources, monitoring job definitions allow DataRobot to monitor deployments that run and store feature data and predictions outside of DataRobot. For example, you can create a monitoring job to connect to Snowflake, fetch raw data from the relevant Snowflake tables, and send the data to DataRobot for monitoring purposes.
Method | Description
-------|-------------
[Create monitoring jobs](ui-monitoring-jobs) | Use the job definition UI to create monitoring jobs, allowing DataRobot to monitor deployments running and storing feature data and predictions outside of DataRobot.
[Monitoring jobs API](api-monitoring-jobs) | Use the Batch Monitoring API to create monitoring job definitions, allowing DataRobot to monitor deployments running and storing feature data and predictions outside of DataRobot.
[Manage monitoring job definitions](manage-monitoring-job-def) | Use a deployment's Monitoring Jobs tab to manage the monitoring job definitions you create.
[View and manage batch jobs](batch-jobs) | Use the Batch Jobs tab to view and manage monitoring and prediction jobs.
|
index
|
---
title: Manage monitoring job definitions
description: Manage monitoring job definitions
---
# Manage monitoring job definitions
To view and manage monitoring job definitions, select a deployment on the **Deployments** tab and navigate to the **Job Definitions > Monitoring Jobs** tab.

Click the action menu for a job definition and select one of the actions described below:
| Element | Description |
|---|---|
| View job history | Displays the **Deployments > Batch Jobs** tab listing all monitoring jobs generated from the job definition. |
| Run now | Runs the job definition immediately. Go to the **Deployments > Batch Jobs** tab to view progress. |
| Edit definition| Displays the job definition so that you can update and save it. |
| Disable definition | Suspends a job definition. Any scheduled batch runs from the job definition are suspended. From the action menu of a job definition, click **Disable definition**. After you select **Disable definition**, the menu item becomes **Enable definition**. Click **Enable definition** to re-enable batch runs from this job definition. |
| Clone definition | Creates a new job definition populated with the values from an existing job definition. From the action menu of the existing job definition, click **Clone definition**, update the fields as needed, and click **Save monitoring job definition**. Note that the **Jobs schedule** settings are turned off by default. |
| Delete definition | Deletes the job definition. Click **Delete definition**, and in the confirmation window, click **Delete definition** again. All scheduled jobs are cancelled. |
|
manage-monitoring-job-def
|
---
title: Monitoring jobs API
description: Use the Batch Monitoring API to create monitoring job definitions, allowing DataRobot to monitor deployments running and storing feature data and predictions outside of DataRobot.
---
# Monitoring jobs API
The Batch Monitoring API provides `batchMonitoringJobDefinitions` and `batchJobs` endpoints that allow you to create and manage monitoring jobs. Monitoring job [intake](intake-options) and [output](output-options) settings are configured using the same options as batch prediction jobs. Use the following routes, properties, and examples to create monitoring jobs:
## Monitoring job definition and batch job routes
### `batchMonitoringJobDefinitions` endpoints
Access endpoints for performing operations on batch monitoring job definitions:
Operation and endpoint | Description
----------------------------------------------------|------------
`POST /api/v2/batchMonitoringJobDefinitions/` | Create a monitoring job definition given a payload.
`GET /api/v2/batchMonitoringJobDefinitions/` | List all monitoring job definitions.
`GET /api/v2/batchMonitoringJobDefinitions/{monitoringJobDefinitionId}/` | Retrieve the specified monitoring job definition.
`DELETE /api/v2/batchMonitoringJobDefinitions/{monitoringJobDefinitionId}/` | Delete the specified monitoring job definition.
`PATCH /api/v2/batchMonitoringJobDefinitions/{monitoringJobDefinitionId}/` | Update the specified monitoring job definition given a payload.
### `batchJobs` endpoints
Access endpoints for performing operations on batch jobs:
Operation and endpoint | Description
----------------------------------------------|------------
`POST /api/v2/batchJobs/fromJobDefinition/` | Launch (run now) a monitoring job from a `monitoringJobDefinition`. The payload should contain the `monitoringJobDefinitionId`.
`GET /api/v2/batchJobs/` | List the full history of monitoring jobs, including running, aborted, and executed jobs.
`GET /api/v2/batchJobs/{monitoringJobId}/` | Retrieve a specific monitoring job.
`DELETE /api/v2/batchJobs/{monitoringJobId}/` | Abort a running monitoring job.
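As a sketch of the "run now" flow, the snippet below builds the launch call against the `fromJobDefinition` endpoint listed above. The endpoint path and payload key come from the tables in this section; the host, token, and definition ID are placeholders you would substitute for your own environment:

```python
import json
import urllib.request

API_BASE = "https://app.datarobot.com/api/v2"  # assumption: your DataRobot endpoint
API_TOKEN = "<api_token>"                      # placeholder credential

def build_run_now_request(definition_id: str) -> urllib.request.Request:
    """Build the POST that launches a monitoring job from an existing definition."""
    body = json.dumps({"monitoringJobDefinitionId": definition_id}).encode("utf-8")
    return urllib.request.Request(
        f"{API_BASE}/batchJobs/fromJobDefinition/",
        data=body,
        headers={
            "Authorization": f"Bearer {API_TOKEN}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_run_now_request("<monitoring_job_definition_id>")
# urllib.request.urlopen(req) would send the request; it is not executed here.
```

Once launched, poll `GET /api/v2/batchJobs/{monitoringJobId}/` to track the job's progress.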
## Monitoring job properties
### `monitoringColumns` properties
Define which columns to use for batch monitoring:
Property | Type | Description
-------------------------|--------|------------
`predictionsColumns` | string | (Regression) The column in the data source containing prediction values. You must provide this field and/or `actualsValueColumn`.
`predictionsColumns` | array | (Classification) The columns in the data source containing each prediction class. You must provide this field and/or `actualsValueColumn`. (Supports a maximum of 1000 items)
`associationIdColumn` | string | The column in the data source which contains the association ID for predictions.
`actualsValueColumn` | string | The column in the data source which contains actual values. You must provide this field and/or `predictionsColumns`.
`actualsTimestampColumn` | string | The column in the data source which contains the timestamps for actual values.
### `monitoringOutputSettings` properties
Configure the output settings specific to monitoring jobs:
Property | Type | Description
-----------------------------|--------|------------
`uniqueRowIdentifierColumns` | array | Columns from the data source that will serve as unique identifiers for each row. These columns are copied to the data destination to associate each monitored status with its corresponding source row. (Supports a maximum of 100 items)
`monitoredStatusColumn` | string | The column in the data destination containing the monitoring status for each row.
!!! note
For general batch job output settings, see the [Prediction output settings](output-options) documentation.
### `monitoringAggregation` properties
For external models with [large-scale monitoring](agent-use#enable-large-scale-monitoring) enabled, describe the retention policy and the amount of raw data retained for challengers. To support the use of challenger models, you must send raw features. For large datasets, you can report a small sample of raw feature and prediction data to support challengers and reporting; then, you can send the remaining data in aggregate format.
!!! important
If you define these properties, raw data is aggregated by the MLOps library. This means that the data isn't stored in the DataRobot platform. Stats aggregation only supports feature and prediction data, not actuals data. If you've defined `actualsValueColumn` or `associationIdColumn` (which means actuals will be provided later), DataRobot cannot aggregate data.
Property | Type | Description
------------------|---------|------------
`retentionPolicy` | string | Determines whether aggregation retains a number of samples or a percentage of the entire dataset. <br> enum: `['samples', 'percentage']`
`retentionValue` | integer | The amount of data to retain, either a number of samples or a percentage of the dataset.
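As an illustration of the constraints in the table above, this hypothetical helper (not part of any DataRobot client library) checks a `monitoringAggregation` fragment before it is added to a payload:

```python
def validate_monitoring_aggregation(aggregation: dict) -> list[str]:
    """Return a list of problems found in a monitoringAggregation fragment."""
    problems = []
    # retentionPolicy is an enum of 'samples' or 'percentage'.
    if aggregation.get("retentionPolicy") not in ("samples", "percentage"):
        problems.append("retentionPolicy must be 'samples' or 'percentage'")
    # retentionValue is an integer count of samples or a percentage.
    value = aggregation.get("retentionValue")
    if not isinstance(value, int) or value <= 0:
        problems.append("retentionValue must be a positive integer")
    return problems

# Keep a 10% sample of raw feature and prediction data; aggregate the rest.
assert validate_monitoring_aggregation(
    {"retentionPolicy": "percentage", "retentionValue": 10}
) == []
```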
## Monitoring job examples
=== "Example: Regression monitoring job payload"
``` json title="Regression"
{
"batchJobType": "monitoring",
"deploymentId": "<deployment_id>",
"intakeSettings": {
"type": "jdbc",
"dataStoreId": "<data_store_id>",
"credentialId": "<credential_id>",
"table": "lending_club_regression",
"schema": "SCORING_CODE_UDF_SCHEMA",
"catalog": "SANDBOX"
},
"outputSettings": {
"type": "jdbc",
"dataStoreId": "<data_store_id>",
"table": "lending_club_regression_out",
"catalog": "SANDBOX",
"schema": "SCORING_CODE_UDF_SCHEMA",
"statementType": "insert",
"createTableIfNotExists": true,
"credentialId": "<credential_id>",
"commitInterval": 10,
"whereColumns": [],
"updateColumns": []
},
"passthroughColumns": [],
"monitoringColumns": {
"predictionsColumns": "PREDICTION",
"associationIdColumn": "id",
"actualsValueColumn": "loan_amnt"
},
"monitoringOutputSettings": {
"monitoredStatusColumn": "monitored",
"uniqueRowIdentifierColumns": ["id"]
},
"schedule": {
"minute": [ 0 ],
"hour": [ 17 ],
"dayOfWeek": ["*" ],
"dayOfMonth": ["*" ],
"month": [ "*" ]
},
"enabled": true
}
```
=== "Example: Classification monitoring job payload"
``` json title="Classification"
{
"batchJobType": "monitoring",
"deploymentId": "<deployment_id>",
"intakeSettings": {
"type": "jdbc",
"dataStoreId": "<data_store_id>",
"credentialId": "<credential_id>",
"table": "lending_club_regression",
"schema": "SCORING_CODE_UDF_SCHEMA",
"catalog": "SANDBOX"
},
"outputSettings": {
"type": "jdbc",
"dataStoreId": "<data_store_id>",
"table": "lending_club_regression_out",
"catalog": "SANDBOX",
"schema": "SCORING_CODE_UDF_SCHEMA",
"statementType": "insert",
"createTableIfNotExists": true,
"credentialId": "<credential_id>",
"commitInterval": 10,
"whereColumns": [],
"updateColumns": []
},
"monitoringColumns": {
"predictionsColumns": [
{
"className": "True",
"columnName": "readmitted_True_PREDICTION"
},
{
"className": "False",
"columnName": "readmitted_False_PREDICTION"
}
],
"associationIdColumn": "id",
"actualsValueColumn": "loan_amnt"
},
"monitoringOutputSettings": {
"uniqueRowIdentifierColumns": ["id"],
"monitoredStatusColumn": "monitored"
},
"schedule": {
"minute": [ 0 ],
"hour": [ 17 ],
"dayOfWeek": ["*" ],
"dayOfMonth": ["*" ],
"month": [ "*" ]
},
"enabled": true
}
```
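Both payloads use the same cron-style `schedule` object, where each field takes a list of values and `"*"` matches every value for that field. As a sketch (field names taken from the examples above; the helper itself is hypothetical), the daily 17:00 schedule can be built like this:

```python
def daily_schedule(hour: int, minute: int = 0) -> dict:
    """Build a schedule object that runs a job once per day at hour:minute.

    Each key holds a list of allowed values; "*" is a wildcard that matches
    every value for that field, mirroring the example payloads above.
    """
    return {
        "minute": [minute],
        "hour": [hour],
        "dayOfWeek": ["*"],
        "dayOfMonth": ["*"],
        "month": ["*"],
    }

# The schedule used in the example payloads: every day at 17:00.
schedule = daily_schedule(17)
```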
|
api-monitoring-jobs
|
---
title: Create monitoring jobs
description: Use the job definition UI to create monitoring jobs, allowing DataRobot to monitor deployments running and storing feature data and predictions outside of DataRobot.
---
# Create monitoring jobs via the UI
In addition to the Batch Monitoring API, you can create monitoring job definitions through the DataRobot UI. You can then [view and manage](manage-monitoring-job-def) monitoring job definitions as you would any other job definition.
To create the monitoring jobs in DataRobot:
1. Click **Deployments** and select a deployment from the inventory.
2. On the selected deployment's **Overview**, click **Job Definitions**.
3. On the **Job Definitions** page, click **Monitoring Jobs**, and then click **Add Job Definition**.
4. On the **New Monitoring Job Definition** page, configure the following options:

| | Field name | Description |
|------------------------|------------|---------------|
|  | Monitoring job definition name | Enter the name of the monitoring job that you are creating for the deployment. |
|  | Monitoring data source | Set the [source type](#set-monitoring-data-source) and [define the connection](data-conn) for the data to be scored. |
|  | Monitoring options | Configure [monitoring options](#set-monitoring-options) and [aggregation options](#set-aggregation-options). |
|  | Data destination | (Optional) [Configure the data destination options](#set-output-monitoring-and-data-destination-options) if you enable output monitoring. |
|  | Jobs schedule | Configure whether to run the job immediately and whether to [schedule the job](#schedule-monitoring-jobs).|
|  | Save monitoring job definition | Click this button to save the job definition. The button changes to **Save and run monitoring job definition** if **Run this job immediately** is enabled. Note that this button is disabled if there are any validation errors. |
## Set monitoring data source {: #set-monitoring-data-source }
Select a monitoring source, called an [intake adapter](intake-options), and complete the appropriate authentication workflow for the source type. Select a connection type below to view field descriptions:
!!! note
When browsing for connections, invalid adapters are not shown.
**Database connections**
* [JDBC](intake-options#jdbc-scoring)
**Cloud Storage Connections**
* [Azure](intake-options#azure-blob-storage-scoring)
* [GCP](intake-options#google-cloud-storage-scoring) (Google Cloud Platform Storage)
* [S3](intake-options#s3-scoring)
**Data Warehouse Connections**
* [BigQuery](intake-options#bigquery-scoring)
* [Snowflake](intake-options#snowflake-scoring)
* [Synapse](intake-options#synapse-scoring)
**Other**
* [AI Catalog](intake-options#ai-catalog-dataset-scoring)
After you set your monitoring source, DataRobot validates that the data is applicable to the deployed model.
!!! note
DataRobot validates that a data source is compatible with the model when possible, but not in all cases. DataRobot validates for AI Catalog, most JDBC connections, Snowflake, and Synapse.
## Set monitoring options {: #set-monitoring-options }
The available monitoring options depend on the model type: regression or classification.
=== "Regression models"

Option | Description
-------------------------|-------------
Association ID column | Identifies the column in the data source containing the association ID for predictions.
Predictions column | Identifies the column in the data source containing prediction values. You must provide this field and/or **Actuals value column**.
Actuals value column | Identifies the column in the data source containing actual values. You must provide this field and/or **Predictions column**.
Actuals timestamp column | Identifies the column in the data source containing the timestamps for actual values.
=== "Classification models"

Option | Description
-------------------------|-------------
Association ID column | Identifies the column in the data source containing the association ID for predictions.
Predictions column | Identifies the columns in the data source containing each prediction class. You must provide this field and/or **Actuals value column**.
Actuals value column | Identifies the column in the data source containing actual values. You must provide this field and/or **Predictions column**.
Actuals timestamp column | Identifies the column in the data source containing the timestamps for actual values.
## Set aggregation options {: #set-aggregation-options }
For external models with [large-scale monitoring](agent-use#enable-large-scale-monitoring) enabled, you can enable the **Use aggregation** option and define the retention policy and the amount of raw data retained for challengers. To support the use of challenger models, you must send raw features. For large datasets, you can report a small sample of raw feature and prediction data to support challengers and reporting; then, you can send the remaining data in aggregate format.

Property | Description
------------------|------------
Retention policy | The policy definition determines if aggregation uses the number of **Samples** or a **Percentage** of the entire dataset.
Retention value | The amount of data to retain, either a percentage of the data or a number of samples.
!!! important
If you define these properties, raw data is aggregated by the MLOps library. This means that the data isn't stored in the DataRobot platform. Stats aggregation only supports feature and prediction data, not actuals data. If you've defined one or more of the **Association ID column**, **Actuals value column**, or **Actuals timestamp column**, DataRobot cannot aggregate data. If you enable the **Use aggregation** option, the association ID and actuals-related fields are disabled.

## Set output monitoring and data destination options {: #set-output-monitoring-and-data-destination-options }
After setting the prediction and actuals monitoring options, you can choose to enable **Output monitoring status** and configure the following options:

Option | Description
------------------------------|-------------
Monitored status column | Identifies the column in the data destination containing the monitoring status for each row.
Unique row identifier columns | Identifies the columns from the data source to serve as unique identifiers for each row. These columns are copied to the data destination to associate each monitored status with its corresponding source row.
With **Output monitoring status** enabled, you must also configure the **Data destination** options to specify where the monitored data results should be stored. Select a monitoring data destination, called an [output adapter](output-options), and complete the appropriate authentication workflow for the destination type. Select a connection type below to view field descriptions:
!!! note
When browsing for connections, invalid adapters are not shown.
**Database connections**
* [JDBC](output-options#jdbc-write)
**Cloud Storage Connections**
* [Azure](output-options#azure-blob-storage-write)
* [GCP](output-options#google-cloud-storage-write) (Google Cloud Platform Storage)
* [S3](output-options#s3-write)
**Data Warehouse Connections**
* [BigQuery](output-options#bigquery-write)
* [Snowflake](output-options#snowflake-write)
* [Synapse](output-options#synapse-write)
**Other**
* [Tableau](output-options#tableau-write)
## Schedule monitoring jobs {: #schedule-monitoring-jobs }
You can configure monitoring jobs to run automatically on a schedule. When outlining a monitoring job definition, enable **Run this job automatically on a schedule**, then specify the frequency (daily, hourly, monthly, etc.) and time of day to define the schedule on which the job runs.

For further granularity, select **Use advanced scheduler**. You can set the exact time (to the minute) you want to run the monitoring job.

After setting all applicable options, click **Save monitoring job definition**.
|
ui-monitoring-jobs
|
---
title: Settings
description: Edit general configuration details and sharing permissions, and view usage information for No-Code AI Apps.
---
# Settings {: #settings }
The **Settings** tab allows you to edit the application's general configuration details and sharing permissions, and view usage information.
To access this tab, make sure you're in **Build** mode and click **Settings**.

## General Configuration {: #general-configuration }
The **General Configuration** tab allows you to edit the following settings:

| Setting | Description |
| ---------- | ----------- |
| App name | Set the application name. |
| App description | Add a description for the application. |
| App logo | Upload a custom logo for the application. |
| Prediction decimal places | Set the number of decimal places displayed for predictions. Affects the All Rows widget, Prediction Explanations, and the What-if and Optimizer widgets. |
| CSV export | Toggle on **Include BOM** to include the byte order mark in exports. |
Click **Save** to apply any changes made to the general configuration settings.
### Add a custom logo {: #add-a-custom-logo }
You can add a custom logo to your application, allowing you to keep the branding of the app consistent with that of your company before sharing it either externally or internally.

To add a custom logo:
1. Under **App logo**, click **Browse**. Alternatively, you can drag-and-drop an image into the field.

!!! note "Upload requirements"
The image must be saved as a PNG, JPEG, or JPG, and the file size cannot exceed 100KB.
2. In the explorer window, locate and select the new image, and click **Open**.

A notification (1) appears in the upper-right corner to let you know the upload was successful, and both the image preview (2) and app logo (3) update to reflect the new image. To remove a custom logo and revert back to the DataRobot logo, click **Remove file** (4).
## Permissions {: #permissions }
The **Permissions** tab allows you to manage access to the application and share it with other users, groups, and organizations, including those without access to DataRobot. The options on this page vary depending on which option you've selected in the **Who can access the app** dropdown.
- If you select **Invited user only**, you can only share the application with users, groups, and organizations.

- If you select **Anyone with the Sharing Link**, you can share the application with users, groups, organizations, as well as users outside of DataRobot. Use the shareable link generated in the field below **Link sharing on** (1).

All users who access the app with this link have _Consumer_ permissions. Note that you can revoke access for users accessing the link by clicking **Generate new link** (2). You will need to share the new link to provide those users with access again.
For more information on user roles, see [Roles and permissions](roles-permissions#no-code-ai-app-roles).
## App Usage {: #app-usage }
The **App Usage** tab displays the number of users who viewed the application over the specified time range, as well as user activity.

To select a different time range for the chart, open the **Time range** dropdown and select a new option. The chart automatically updates.
|
app-settings
|
---
title: Widgets
description: Add and configure widgets in No-Code AI Apps to create visual, interactive, and purpose-driven end-user applications.
---
# Widgets {: #widgets}
Applications are composed of widgets that create visual, interactive, and purpose-driven end-user applications.
There are two main categories of widgets:
- **Default widgets** are included in every application, no matter the template, and cannot typically be removed.
- **Optional widgets** can be added to customize an application for your specific use case. All optional widgets, which add visualizations, surface insights, or filter content, must be configured before using an application. If a widget is not configured or is configured incorrectly, DataRobot displays an error message.
The tabs below further describe each widget type:
=== "Default"
Applications automatically include the following [default widgets](default-widgets) to make predictions and view prediction results. Note that [time series applications](ts-app) have a different set of default widgets.
| Widget | Description |
| ---------- | ---- |
| Add Data | Allows you to upload prediction files. |
| All Rows | Displays prediction history by row. |
| Add New Row | Allows you to make single record predictions. |
| General Information | Displays feature values you want to view for each prediction that don't necessarily impact the results. |
| Prediction Information | Displays feature values likely to impact the prediction, as well as Prediction Explanations. |
| Prediction Explanations | Displays a chart with prediction results and a table with Prediction Explanations. |
=== "Filter"
[Filter widgets](optional-widgets#filters) provide additional filtering options within an application. The table below describes the available filter widgets:
| Widget | Description |
| ---------- | ----------- |
| Categories | Filters by one or more categorical features. |
| Dates | Filters by date features. |
| Numbers | Filters by numeric features. You must define a Min and Max in the widget properties. |
=== "Chart"
[Chart widgets](optional-widgets#charts) add visualizations to an application and can be configured to surface important insights in your data and prediction results. The table below describes the available chart widgets:
| Widget | Description |
| ---------- | ----------- |
| Line | Displays a Line chart for the selected features—useful for visualizing trends, understanding the distribution of your data, comparing values in larger datasets, and understanding the relationship between value sets. |
| Bar | Displays a Bar chart for the selected features—useful for understanding the distribution of your data and comparing values in smaller datasets. |
| Line + Bar | Displays a Line and Bar chart for the selected features. You can toggle between the two in the open application. |
| Area | Displays an Area chart for the selected features—useful for visualizing the composition of data. |
| Donut | Displays a pie chart based on one dimension and one measure—useful for visualizing the composition of data, especially how individual parts compare to the whole. |
| Single Value | Displays the average value of the selected feature. |
=== "What-if and Optimizer"
The [What-if and Optimizer widget](whatif-opt) provides two tools for interacting with prediction results: a scenario comparison tool and a scenario optimizer, which can be enabled individually or together.
The initial configuration of this widget is based on the [template selected during app creation](create-app#template-options).
## Add widgets {: #add-widgets }
To add a widget to your application, open the **Widget** panel  in the upper-left corner, then drag-and-drop a widget from the left pane onto the canvas.

## Configure widgets {: #configure-widgets }
To configure a widget, either click to select it or hover over the widget and click the pencil icon. Once selected, a panel opens on the left with the tabs described below:
=== "Data tab"
The **Data** tab allows you to manage widget features, including adding and removing features, changing the feature display name, and setting constraints.

Name | Element | Description
---------- | ----------- | ----------
 | [Manage](#manage-widget-features) | Adds or removes features from the widget, and add tooltips.
 | [Set Constraints](whatif-opt#constraints) | Adds feature constraints that instruct the app to only include values falling in the range determined by numeric constraints or specific values for a categorical feature.
 | Dimensions / Measures | Displays the current feature selections for dimensions and measures. Click the pencil icon to change the display name of the feature in the widget.
 | Tooltips | Displays current feature names and any tooltips manually added in the **Manage Feature** window.
=== "Properties tab"
On the **Properties** tab, you can control the widget's behavior and appearance. You may use these customization options, for example, to fine-tune a widget to better suit your use case or change the appearance to match your company's branding.

!!! note
Configuration options are based on widget and project type, for example, multiclass projects include additional parameters for the What-if and Optimizer widget.
See also [Default widgets](default-widgets) or [Optional widgets](optional-widgets) for a complete list of customization options.
### Manage widget features {: #manage-widget-features }
Configuring widget features is an important and often necessary step when setting up an application. This controls, for example, chart widget visualizations and the features available when making single record predictions.
For many widgets, you must select both a dimension and a measure:
* **Dimensions:** Features that contain qualitative values used to categorize and reveal details in data.
* **Measures:** Features with numeric, quantitative values that can be measured.
To manage widget features:
1. Select the widget and click the **Data** tab in the left-hand panel.
2. Click **Manage**. The **Manage Feature** window opens.

3. In the **Dimensions** tab, click the orange arrows next to one or more features you’d like to visualize on the x-axis. You can select categorical, date, and boolean features.

??? tip "Viewing feature details"
In the **Manage Feature** window, click a feature to view a histogram of the feature values in the training data.

Instead of a histogram, location feature types display a static map with the training data represented by data points.

4. Click **Measures** and click the arrow next to the feature you'd like to measure on the y-axis. The **Measures** tab only displays numeric and custom features.

5. Click **Save** to apply the configuration.
!!! note
You must select at least one dimension and one measure to configure a widget (with the exception of the [**Single Value**](optional-widgets#single-value) widget).
If the widget displays a yellow error message stating there is no valid data, the application does not have access to the training data. You must create a project from the dataset in the **AI Catalog**.
#### Custom features {: #custom-features}
Similar to [feature transformations](feature-transforms) in the main DataRobot platform, you can create custom features for chart widgets in your application.
1. In the **Manage Feature** window, click **Add custom feature**.

2. Name the custom feature, then type the function and features using the [supported syntax](feature-transforms#transform-options-and-syntax). The example below measures the cost of shipments per kilogram.

3. Click **Create**. The custom feature appears in the **Measures** tab of the **Manage Features** window.

!!! note
You can only use numeric features to create custom feature expressions.
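Putting the steps above together, a custom expression for the shipping example might look like the following, using the same curly-brace syntax as other custom expressions (the feature names `shipment_cost` and `weight_kg` are hypothetical):

```
{shipment_cost} / {weight_kg}
```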
#### Add a predicted class for multiclass projects {: #add-a-predicted-class-for-multiclass-projects}
For multiclass projects, you can add the predicted class field to the **All Rows** widget.
1. On the **Home** page of the application, select the **All Rows** widget and click **Manage**.

2. Click the orange arrow next to the feature marked **(Predicted Class)**.

3. Click **Save**—the predicted class is now displayed as a column in the **All Rows** widget.

|
app-widgets
|
---
title: What-if and Optimizer
description: Describes how to configure the What-if and Optimizer widget—a scenario comparison and optimizer tool.
---
# What-if and Optimizer {: #what-if-and-optimizer}
The **What-if and Optimizer** widget provides two tools for interacting with prediction results:
* **What-if:** A decision-support tool that allows you to create and compare multiple prediction simulations to identify the option that provides the best outcome. You can also make a prediction, then change one or more inputs to create a new simulation, and see how those changes affect the target feature.
* **Optimizer:** Identifies the maximum or minimum predicted value for a target or custom expression by varying the values of a selection of flexible features in the model.
To edit the **What-if and Optimizer** widget, open the **Editing page** dropdown and select **Prediction Details/Optimizer Details**.

Select the widget and click the **Properties** tab. The app offers a number of settings that enhance the output of predicted values for your target. The settings displayed are based on which tools are enabled as well as the project type.
=== "Optimizer + What-if"

=== "Optimizer only"

=== "What-if only"

=== "Multiclass"

If you create an Optimizer application from a multiclass project, you must select a target for the optimizer from the predicted class values. To do so, click the **Properties** tab and use the dropdown to select your target. Note that **Enable scenario optimizer** must be toggled on.
The table below describes each configurable parameter:
Parameter | Description
---------- | -----------
What-if and Optimizer toggles | **Enable scenario what-if:** Toggle to enable or disable the comparison functionality.<br><br>**Enable scenario optimizer:** Toggle to enable or disable the optimizer functionality. If optimizer is enabled, you must select an option under **Outcome of optimal scenario** and can optionally include a custom optimization expression.
Select target for optimizer dropdown (multiclass only) | Sets a target for the optimizer from the predicted class values if **Enable scenario optimizer** is toggled on.
Outcome of optimal scenario | Sets whether to minimize or maximize the predicted values for the target feature. Minimizing leads to the lowest outcome (e.g., customer churn), and maximizing the highest (e.g., sale price).
[Custom optimization expression](#custom-optimization-expressions) | Creates an equation using curly braces that uses one or more features, such as `{converted} * {renewal_price}`.
Set optimization algorithm | Sets an algorithm, when enabled; otherwise leaves the optimization choice to DataRobot. Choose from the algorithms listed and determine the number of simulations to run. <br><br>**Grid Search** is an exhaustive, brute-force search of options on up to three flexible features. This may result in long run times because it tries many possibilities, even if prior iterations don't suggest a strong outcome. <br><br>**Particle Swarm** is a metaheuristic strategy that tests a large number of options with up to 30 flexible features. It can be effective for numeric flexible features but may not be as effective for flexible categorical features.<br><br>**Hyperopt** efficiently explores significantly fewer options on up to 20 flexible features. It is effective for categorical and numeric features. With this algorithm, you can set up to 400 simulations. More iterations may yield better results, but can result in longer run times as it takes many iterations to converge.
[Constrain sum of features](#constrain-sum-of-features) | Sets output constraints. Constraints ensure that each record’s optimization iterations don’t output results that exceed a given value for the target feature. For example, if you are optimizing the price of a home, you may want to expand the gross living area by finishing part of the basement or adding a bedroom. You can use a sum constraint to limit the space each project is allowed to occupy in sq/ft. Choose _Maximum_ (selected solutions must never exceed) or _Equality_ (selected solutions must be equal) to the constrain value.
Views | Displays the information as a chart, a table, or both.
!!! note
The default configuration is determined by the template selected during app creation. The **What-if and Optimizer** widget is disabled for the Predictor template but can be enabled by clicking the **Eye** icon.
??? tip "Reduce runtime by disabling Prediction Explanations"
Computing Prediction Explanations can increase prediction times, so if speed is more important than computing Prediction Explanations, turn off the **Enable prediction explanations** toggle.
## Custom optimization expressions {: #custom-optimization-expressions}
For batch prediction optimization, use a field defined in the batch upload as part of the custom optimization expression the same way you use a feature from the dataset that the app is deployed from. For example, if you label a field in a spreadsheet `net_profit`, and you have a `time_to_market` feature in the project's underlying dataset, the following would be a valid custom expression:
```
{net_profit}/{time_to_market}
```
??? tip "Add constraints to custom expressions"
Not only can you use custom expressions to specify an optimal outcome by modifying the distance to a target number (instead of just minimizing or maximizing the predicted value for the target), you can also use custom expressions to add constraints on conditions of other flexible features. To do this, you must add a constraint to the custom expression as a penalty term.
For example, your target feature is `sales` and you want to maximize sales while monitoring how much is being spent on marketing (e.g., YouTube and TV ads). To do this, you want to make sure overspending on marketing is penalized appropriately—overspending by $1.00 should be less penalized than overspending by $1,000,000.
If you have a marketing budget of $100,000 to split between YouTube and TV ads, the custom expression might look like this:
```
Maximize
Sales
- (((youtube_spend + tv_spend) > 100000) * factor1)
- ((youtube_spend + tv_spend) * factor2)
```
`factor1` represents how much you want to penalize for overspending, and `factor2` represents how much you want to penalize for spending in general.
`factor2` can be thought of as the marketing ROI. If your industry expects a 3% ROI for marketing, this value would be 1.03.
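As a sanity check on the arithmetic, the penalized objective above can be sketched in a few lines of Python. This is a hypothetical illustration only: the budget, factor values, and scenario numbers are assumptions, not app defaults, and in the app itself you would express this logic with the custom expression syntax shown above.

```python
# Hypothetical sketch of the penalized objective above.
# factor1 penalizes exceeding the budget; factor2 penalizes spend in general
# (here set to an assumed 3% expected marketing ROI).
def penalized_sales(sales, youtube_spend, tv_spend,
                    budget=100_000, factor1=50_000, factor2=1.03):
    total_spend = youtube_spend + tv_spend
    over_budget = 1 if total_spend > budget else 0
    return sales - over_budget * factor1 - total_spend * factor2

# A scenario within budget scores higher than one just over budget,
# because the over-budget penalty applies all at once.
score_within = penalized_sales(500_000, 60_000, 40_000)
score_over = penalized_sales(500_000, 60_000, 40_001)
```

The sketch shows why a larger `factor1` penalizes overspending more sharply: crossing the budget by even one dollar subtracts the full penalty term.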
## Constrain sum of features {: #constrain-sum-of-features}
To select the features that you want to be part of the sum:
1. Turn on **Constrain sum of features** at the bottom of the **Properties** tab. Under **Part of sum features**, click **Select**.

2. In the **Manage Features** window, check the box next to at least two fixed or flexible features—the selected features must be numeric. This option is not available if you use the [Hyperopt algorithm](#what-if-and-optimizer).

3. Click **Save** when you are finished selecting features.
## Flexible features {: #flexible-features}
The **Data** tab allows you to select the features that represent the factors you have control over when searching for your optimized outcome. For example, when optimizing the price of homes, some flexible features are the quality of the kitchen, the cost of the mortgage, and the size of the garage.
To manage flexible features:
1. Click **Manage**.

2. Use the orange arrows to add or remove features.

3. Click **Save** to confirm your flexible features.
## Constraints {: #constraints}
When you have selected flexible features, you can apply constraints to them. This instructs the app to only include values falling in the range determined by numeric constraints or specific values for a categorical feature.
To apply constraints to a feature, click **Set Constraints** on the **Data** tab.

Selecting a flexible feature expands a dropdown displaying the feature distribution.
=== "Categorical features"
For categorical features, open the **Search from categories** dropdown and choose which features to include in the simulation by checking the corresponding box.

=== "Numeric features"
For numeric features, you can enter individual values for the minimum and maximum numeric ranges or drag the boundaries on the histogram.

Toggle **Integer values** on to include only integer values (exclude decimals).
Click **Save** to confirm your feature constraints.
|
whatif-opt
|
---
title: Pages
description: Use pages in No-Code AI Apps to organize and group insights.
---
# Pages {: #pages }
Pages divide an application into separate sections that you can navigate between—allowing you to organize and group insights in a way that makes sense for your use case. By default, each non-time series application has the following pages:
| Page | Description |
| ---------- | ----------- |
| Home | View the application landing page, where you can upload batch predictions and view individual prediction rows. |
| Create Prediction / Create Optimization | Make single record predictions (non-time series only). |
| Prediction Details | View prediction results for individual prediction rows. |
To manage your pages, click the **Pages** panel icon on the left or open the **Editing page** dropdown and click **Manage pages**. The **Editing page** dropdown is also how you select the page you want to edit.

In the **Pages** panel, you can:
| | Element | Description |
|---|---|---|
| | + Add | Adds a new page to the application. |
|  | Reorder | Modifies the order of the pages. |
|  | Rename | Renames a page. |
|  | More options | Deletes or hides a page. |
|  | Editing page | Controls the application page you are currently editing. |
Pages are displayed at the top of the application.

## Create pages {: #create-pages }
In addition to the default pages described above (Home, Create, Prediction Details), you can customize applications by creating new pages. You may want to do this, for example, to more intuitively group insights for your specific use case.
To create a new page, open the **Pages** panel and click **+ Add**.

You can then click the pencil icon to rename the page (1) and drag-and-drop it to a new position (2).

After publishing your changes and leaving Build mode, your new page is displayed along the top of the application.

## Delete pages {: #delete-pages }
If you want to remove a page from the end-user application, you can either hide or delete the page. Hiding a page means it's no longer accessible when using the application, but the page and its contents are preserved. You may want to hide a page, for example, while you're working on it, until it's ready to be shared publicly. Deleting a page removes it from the application entirely; it cannot be restored.
To hide or remove a page, open the **Pages** panel and click the more actions icon. Then, select **Hide** or **Delete**.

!!! note
You cannot delete default pages; however, you can hide them from the end-user application.
|
app-pages
|
---
title: Edit applications
description: Modify the configuration of current No-Code AI Apps using widgets.
---
# Edit applications {: #edit-applications}
On the **Applications** tab, click **Open** next to the application you want to manage and click **Build**. The **Build** page allows you to modify the configuration of an application using widgets. Before the app opens, you must sign in with DataRobot and authorize access.
These sections describe the configurable elements of No-Code AI Apps:
Topic | Describes...
---------- | -----------
[Pages](app-pages) | Add or remove pages to organize and group application insights.
[Widgets](app-widgets) | Add, remove, and configure widgets—tools for surfacing insights, creating visualizations, and using applications.
[What-if and Optimizer widget](whatif-opt) | Configure the What-if and Optimizer widget—a single widget that provides scenario comparison and optimizer tools.
[Settings](app-settings) | Modify general configuration information and permissions, as well as view usage details.
## UI overview {: #ui-overview }

| | Element | Description |
|---|---|---|
|  | [Pages panel](app-pages) | Allows you to rename, reorder, add, hide, and delete application pages. |
|  | [Widget panel](app-widgets) | Allows you to add widgets to your application. |
|  | [Settings](app-settings) | Modifies general configurations and permissions as well as displays app usage. |
|  | Documentation | Opens the DataRobot documentation for No-Code AI Apps. |
|  | Editing page dropdown | Controls the application page you are currently editing. To view a different page, click the dropdown and select the page you want to edit. Click **Manage pages** to open the **Pages** panel. |
|  | Preview | Previews the application on different devices. |
|  | Go to app / Publish | Opens the end-user application, where you can [make new predictions](app-make-pred), as well as [view prediction results](app-analyze-result) and widget visualizations. After editing an application, this button displays **Publish**, which you must click to apply your changes.|
|  | Widget actions | Moves, hides, edits, and deletes widgets. |
|
index
|
---
title: Make predictions
description: Make single record or batch predictions in No-Code AI Apps.
---
# Make predictions {: #make-predictions }
There are two ways to make predictions in No-Code AI Apps: [batch predictions](#batch-predictions) or [single record predictions](#single-record-predictions).
!!! note
All prediction requests are sent to DataRobot for processing; therefore, applications have the same prediction limits as the main DataRobot platform.
## Batch predictions {: #batch-predictions}
To make multiple prediction requests at once from the **Home** page, click **choose file** or drag the files into the box.

!!! note
Anonymous users (i.e., those accessing an application through a sharing link) can only submit batch predictions using a local file (CSV), while signed-in users can submit batch predictions using a local file (XLSX or CSV) or the AI Catalog. When signed-in users submit a batch prediction using a local XLSX file, it is automatically registered in the catalog.
After adding new files, the application processes your predictions and displays them in the **All Rows** widget on the **Home** page. Click on any record to view the prediction results.

## Single record predictions {: #single-record-predictions}
To make a new prediction:
1. Click **Add new row**, bringing you to the **Create Prediction** page with the **Add New Row** widget, which displays the features available to make a prediction.
??? faq "Why aren't some of my features showing up?"
**Reason 1:** By default, the **Add New Row** widget only displays 10 features.
To display additional features, click **Show more** at the bottom of the widget. If there are still features missing, you must add them to the widget in **Build** mode. To add features, see the documentation on [managing widget features](app-widgets#manage-widget-features).
**Reason 2:** No-Code AI Apps only uses "prediction features," meaning features that impact the deployment's predictions.
2. Fill in the feature fields—at least one field must have a value—and the [association ID](accuracy-settings#select-an-association-id) if one was added for the deployment. If a field is left blank, the feature field displays _N/A_ on the prediction results page. Alternatively, you can click **Populate averages** to fill each numeric field with the feature's average value and each categorical field with the first value in alphabetical order.

??? note "Location features for geospatial projects"
If the dataset contains a location feature, a globe icon appears in the feature field. You can manually enter a feature value in the field, or click the globe icon to view a visual representation of the training data.

The geometry type of the location feature determines the appearance of the training data on the map and affects which draw tool—Point, Polygon, or Path—you can use to highlight your prediction. In the example below, the location feature uses point geometry, so use the **Point** tool to add a new point to the map. With the point selected, click **Save selected location**; the point is then converted to a geojson string to make your prediction.

Click **Add**. After DataRobot completes the request, the prediction results page opens.
To add or remove feature fields, click **Build** and navigate to the **Create Prediction** page.
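For reference, the geospatial note above mentions that a saved map point is converted to a GeoJSON string before the prediction is made. A minimal sketch of what such a string might look like, with made-up coordinates:

```python
import json

# A map point expressed as GeoJSON, then serialized to the kind of
# string used for the prediction (the coordinates are hypothetical).
point = {"type": "Point", "coordinates": [-71.06, 42.36]}
geojson_string = json.dumps(point)
print(geojson_string)  # {"type": "Point", "coordinates": [-71.06, 42.36]}
```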
|
app-make-pred
|
---
title: View prediction results
description: View prediction information and insights for individual predictions in No-Code AI Apps.
---
# View prediction results {: #view-prediction-results }
The prediction results page displays prediction information and insights based on the values entered for an individual prediction. This page automatically opens after making a single record prediction, but you can also view prediction results by selecting a row in the **All Rows** widget.
The **General Information** widget displays helpful values for features that don't necessarily impact the prediction results. The **Prediction Information** widget, on the other hand, displays the values for features likely to impact the prediction, as well as Prediction Explanations.

??? note "Add or remove fields"
To add or remove feature fields, see the documentation on [managing widget features](app-widgets#manage-widget-features).
## Prediction Explanations {: #prediction-explanations}
The **Prediction Explanations** widget displays a chart with your prediction results, as well as a table with [Prediction Explanations](pred-explain/index) (either [XEMP-based](xemp-pe) or, as shown in the example below, [SHAP-based](shap-pe))—a measure of how features in the model impact a prediction based on their relationship to the target.

| | Element |
| --- | ---------- |
| | The predicted value for the row. |
|  | The basis of the prediction classification, determined by the dataset the app deployed from. |
|  | A table displaying the top 10 Prediction Explanations as well as a visualization representing how the feature is distributed in the training data. |
!!! note
You must compute Prediction Explanations for the model before making a prediction.
### Prediction Explanation table {: #prediction-explanation-table }
The Prediction Explanation table displays the top 10 features with the largest impact on a prediction based on their relationship in the training data.

See the table below for a description of each field:
| Field | Description |
|---| ---|
| Impact | The measured impact of a given feature on the prediction. For a description of the icons used, see the ["qualitativeStrength" indicator](dep-predex#qualitativestrength-indicator) description. |
| Feature | The feature in the training data impacting the prediction. |
| Value | The feature value causing the feature's measured impact on the prediction. |
| Distribution| A histogram that represents the distribution of a given feature in the training data. Hover over the visualization for additional information that can then be used to add context to the Prediction Explanation. |
## What-if and Optimizer {: #what-if-and-optimizer}
The **What-if and Optimizer** widget displays various scenarios in a chart or a table view. To learn how to configure this widget, see the documentation on [building applications](whatif-opt).

The table below describes the components of the **What-if and Optimizer** widget:
| | Description |
|---|---|
|  | Opens the **Add scenario** pane. |
|  | Filters the display to show only actual, optimal, and all manually added scenarios. |
|  | Displays prediction insights in chart or table view. |
|  | Adds new scenarios to the widget. |
|  | Selects features for the x- and y-axis. |
!!! tip
Check **Show only manually added, optimal, and actual scenarios** to more easily find your simulations.

### Create simulations {: #create-simulations}
If the **What-if** functionality is enabled, you can manually add scenarios to the widget's display.
To create a simulation, click **Add scenario**.

Provide values for each variable selected on the **Build** page. When you have entered values for each feature, click **Add**. This triggers a prediction request to a DataRobot prediction server and returns a predicted value for your target feature.

Selecting a point on the chart (data point) opens a pane on the right that displays prediction results and feature values for the selected scenario. If Prediction Explanations are available for your prediction request, they appear in the **Prediction Explanations** tab in the same pane.

You can continue to make predictions by updating the variable inputs with new values and repeating this process. After making several predictions, click **Table view**. This view allows you to drag-and-drop scenarios for side-by-side comparison.

### Optimization simulation results {: #optimization-simulation-results}
If the **Optimizer** functionality is enabled, the chart displays an optimal scenario, as well as different outcomes based on the [flexible features](whatif-opt#flexible-features) added to the widget's configuration.
Once DataRobot completes running simulations, the chart populates the results. The y-axis measures the values of the target feature, and the x-axis indicates the simulation iteration. Each point on the graph represents the predicted value for each simulation run.
The orange data point represents your prediction, and the green data point represents the optimal scenario—the feature values that most often produce the optimal result for your target (the minimal or maximal, based on your settings). The selected values are those you chose for a given iteration, allowing you to compare the selected values for each iteration to the overall optimal values determined by the app.

|
app-analyze-result
|
---
title: Use applications
description: Test different No-Code AI App configurations before sharing the app with end-users.
---
# Use applications {: #use-applications}
On the **Applications** tab, click **Open** next to the application you want to launch—from here you can test different application configurations before sharing it with users.
!!! note
End-users must sign in with a DataRobot account or access the application via a [link that can be shared with users](app-settings#permissions) outside of DataRobot.
These sections describe the actions available when using No-Code AI Apps:
Topic | Describes...
---------- | -----------
[Make predictions](app-make-pred) | Make single record or batch predictions.
[Analyze prediction results](app-analyze-result) | Analyze prediction information and insights for individual predictions.
## UI overview {: #ui-overview }

| | Element | Description |
|---|---| ---|
|  | Application name | Displays the application name. Click to return to the app's Home page. |
|  | Pages | Navigates between application pages. |
|  | Build | Allows you to edit the application. |
|  | [Share](current-app#share-applications)| Shares the application with users, groups, or organizations within DataRobot. |
|  | Add new row | Opens the **Create Prediction** page, where you can make single record predictions. |
|  | Add Data | Uploads batch predictions—from the **AI Catalog** or a local file. |
|  | All rows | Displays a history of predictions. Select a row to view prediction results for that entry. |
|
index
|
---
title: Feature Discovery support in No-Code AI Apps
description: Create No-Code AI Apps from Feature Discovery projects.
section_name: Apps
maturity: public-preview
platform: cloud-only
---
# Feature Discovery support in No-Code AI Apps {: #feature-discovery-support-in-no-code-ai-apps }
!!! info "Availability information"
    Feature Discovery support in No-Code AI Apps is off by default. Contact your DataRobot representative or administrator for information on enabling the feature.

    <b>Feature flags:</b>

    - Enable Application Builder Feature Discovery Support
    - Enable Feature Cache for Feature Discovery
Now available for public preview, you can create No-Code AI Apps from Feature Discovery projects (i.e., projects built with multiple datasets) with feature cache enabled. [Feature cache](safer-ft-cache) instructs DataRobot to source data from multiple datasets and generate new features in advance, storing this information in a "cache" that DataRobot then draws from to make predictions.
Before [creating an application](create-app) from a project that contains multiple datasets, ensure feature cache is enabled in the deployment's **Settings** tab.

Once created, you can [use the app](use-apps/index) to build simulations and charts, run optimizations, and create what-if scenarios.
## Considerations {: #considerations }
Consider the following when enabling Feature Discovery support for No-Code AI Apps:
- Postgres DB must be installed for feature cache to work properly.
- Feature cache must be enabled to make single record predictions.
- Feature cache only caches features for the most recent period.
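The idea behind a feature cache can be sketched in a few lines. This is an assumption-laden illustration, not DataRobot's implementation: `engineered_features` stands in for the expensive cross-dataset feature generation, and the cache ensures that work runs once per entity rather than on every single-record prediction:

```python
import functools

# Counter so we can observe how often the expensive step actually runs.
calls = {"count": 0}

@functools.lru_cache(maxsize=None)
def engineered_features(customer_id):
    # Hypothetical stand-in for joining secondary datasets and
    # aggregating history; runs once per customer_id thanks to the cache.
    calls["count"] += 1
    return (1234.5, 7)  # e.g., (total_spend_90d, orders_90d)

def predict_single_record(customer_id):
    spend, orders = engineered_features(customer_id)  # cache hit after first call
    return 0.001 * spend + 0.05 * orders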
|
app-ft-cache
|
---
title: Prefill application templates
description: Prefill applications upon creation to more easily visualize the end-user experience.
section_name: Apps
maturity: public-preview
---
# Prefill application templates {: #prefill-application-templates }
!!! info "Availability information"
    Prefilled No-Code AI App templates are off by default. Contact your DataRobot representative or administrator for information on enabling the feature.

    <b>Feature flags:</b> Enable Prefill NCA Templates with Training Data
Now available for public preview, you can choose whether to prefill application insights with training data, allowing you to more easily visualize the experience for the end-user. Previously, application widgets appeared blank until the app was used to make a prediction; now, with prefilled No-Code App templates enabled, the application can use the project's training data to populate the widgets.
After you create a new application, DataRobot exports the training dataset to the **AI Catalog** and, if the dataset is larger than 50,000 rows, performs random downsampling. Once the dataset is registered, DataRobot runs a batch prediction job by scoring the training data.
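The downsampling step above can be sketched as follows. The 50,000-row threshold comes from the docs, but the sampling code itself is an illustrative assumption, not DataRobot's implementation:

```python
import random

MAX_PREFILL_ROWS = 50_000  # threshold stated in the docs

def downsample(rows, limit=MAX_PREFILL_ROWS, seed=42):
    """Return rows unchanged if at or under the limit; otherwise take a
    random sample of exactly `limit` rows (seeded for repeatability)."""
    if len(rows) <= limit:
        return rows
    return random.Random(seed).sample(rows, limit)

# A 60,000-row training set is sampled down to 50,000 before scoring.
training = [{"row_id": i} for i in range(60_000)]
prefill = downsample(training)
```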

The **All Rows** widget populates using the training data as the prediction data source. Additionally, new tabs have been added so you can view training data, prediction data, or both.

The **Predictions data** tab is only active after adding a batch prediction file or making a single-row prediction in the app.
!!! note "App prediction limit"
    The process of scoring the training data does not affect your application prediction limit.
## Set the prediction data source {: #set-the-prediction-data-source }
By default, the app uses only training data as the prediction data source. You can update the prediction data source in the app's settings.
To update the settings:
1. Click **Build** in the upper-right corner.
2. Click **Settings > General Configuration**.
3. Under **Prediction Data Source**, select a new option—Prediction Data Only, Training Data Only, or All Predictions Data.

## Feature considerations {: #feature-considerations }
Prefilled templates are not available for time series applications.
|
app-prefill
|