---
title: AI accelerators
description: Review comprehensive workflows, notebooks, and tutorials that provide complete examples of common data science and machine learning workflows.
---
# AI accelerators {: #ai-accelerators }
AI Accelerators are designed to help speed up model experimentation, development, and production using the DataRobot API. They codify and package data science expertise in building and delivering successful machine learning projects into repeatable, code-first workflows and modular building blocks. AI Accelerators are ready right out-of-the-box, work with the notebook of your choice, and can be combined to suit your needs.
AI accelerators cover a variety of topics, but primarily aim to assist you by:
* Providing curated templates for workflows that utilize best-in-class data science techniques to help frame your business problem (e.g., customize a data visualization to your liking or rank models by ROI).
* Getting you started quickly on a new AI or ML project by providing necessary insights, problem-framing, and use cases in notebooks.
* Fine-tuning your projects and getting the most value from your existing data and infrastructure investments, including third-party integrations.
| Topic | Describes... |
|---|---|
[Mastering tables in production ML](ml-tables) | Review an AI accelerator that uses a repeatable framework for a production pipeline from multiple tables.
[End-to-end ML workflow with Google Cloud Platform and BigQuery](ml-gcp) | Use Google Colaboratory to source data from BigQuery, build and evaluate a model using DataRobot, and deploy predictions from that model back into BigQuery and GCP. |
[End-to-end ML workflow with Databricks](ml-databricks) | Build models in DataRobot with data acquired and prepared in a Spark-backed notebook environment provided by Databricks.
[Deploy a model in AWS SageMaker](deploy-sagemaker) | Learn how to programmatically build a model with DataRobot and export and host the model in AWS SageMaker.
[Track ML experiments with MLFlow](mlflow) | Learn how to track and compare experiments for models built with DataRobot using MLflow. |
[End-to-end ML workflow with Snowflake](ml-snowflake) | Work with Snowflake and DataRobot's Python client to import data, build and evaluate models, and deploy a model into production to make new predictions.
[End-to-end ML workflow with AWS](ml-aws) | Work with AWS and DataRobot's Python client to import data, build and evaluate models, and deploy a model into production to make new predictions.
[Customize lift charts](custom-lift-chart) | Leverage popular Python packages with DataRobot's Python client to recreate and augment DataRobot's lift chart visualization.
[Select models using custom metrics](ai-custom-metrics) | Leverage DataRobot's Python client to extract predictions, compute custom metrics, and sort your DataRobot models accordingly. |
[End-to-end ML workflow with Azure](ml-azure) | Work with Azure and DataRobot's Python client to import data, build and evaluate models, and deploy a model into production to make new predictions.
[Tune blueprints for preprocessing and model hyperparameters](opt-grid) | Learn how to access, understand, and tune blueprints for both preprocessing and model hyperparameters.
[Fine-tune models with Eureqa](tune-eureqa) | Apply symbolic regression to your dataset in the form of the Eureqa algorithm.
[End-to-end time series demand forecasting workflow](demand-flow) | Perform large-scale demand forecasting using DataRobot's Python package.
[Cold start demand forecasting workflow](cold-start) | Compare several approaches for cold start modeling on series with limited or no history. |
[Use Gramian angular fields to improve datasets](gramian) | Generate advanced features used for high frequency data use cases.
[Migrate a model to a new cluster](model-migrate) | Download a deployed model from DataRobot cluster X, upload it to DataRobot cluster Y, and then deploy and make requests from it.
[Feature Reduction with FIRE](fire) | Learn about the benefits of Feature Importance Rank Ensembling (FIRE)—a method of advanced feature selection that uses a median rank aggregation of feature impacts across several models created during a run of Autopilot. |
[Creating Custom Blueprints with Composable ML](custom-bp-nb) | Customize models on the Leaderboard using the Blueprint Workshop. |
[Tackle churn before modeling](ml-churn) | Discover the problem-framing and data management steps required to successfully model for churn, using a B2C retail example and a B2B example based on DataRobot’s churn model. |
[Demand forecasting with the What-if app](ml-what-if) | Use the What-if app to explore demand forecasting scenarios and compare the resulting predictions. |
[Build a recommendation engine](rec-engine) | Explore how to use historical user purchase data in order to create a recommendation model, which will attempt to guess which products out of a basket of items the customer will be likely to purchase at a given point in time. |
[Prepare and leverage image data with Databricks](image-databricks) | Import image files using Spark and prepare them into a data frame suitable for ingest into DataRobot. |
[Gather churn prediction insights with the Streamlit app](streamlit-app) | Use the Streamlit churn predictor app to present the drivers and predictions of your DataRobot model. |
[Perform multi-model analysis](ml-analysis) | Use Python functions to aggregate DataRobot model insights into visualizations. |
[Enrich data using the Hyperscaler API](enrich-gcp) | Call the GCP API and enrich a modeling dataset that predicts customer churn. |
[Use feature engineering and Visual AI with acoustic data](ml-viz) | Generate image features in addition to aggregate numeric features for high frequency data sources. |
[Monitor AWS SageMaker models with MLOps](aws-mlops) | Train and host a SageMaker model that can be monitored in the DataRobot platform. |
[Integrate DataRobot and Snowpark by maximizing the data cloud](snowpark-data) | Leverage Snowflake for data storage and Snowpark for deployment, feature engineering, and model scoring with DataRobot. |
[Demand forecasting and retraining workflow](df-retrain) | Implement retraining policies with DataRobot MLOps demand forecast deployments. |
[Predict factory order quantities for new products](pred-products) | Build a model to improve decisions about initial order quantities using future product details and product sketches. |
[End-to-end workflow with SAP Hana](ml-sap) | Learn how to programmatically build a model with DataRobot using SAP Hana as the data source. |
[Use self-joins with panel data to improve model accuracy](self-joins) | Explore how to implement self-joins in panel data analysis. |
[Predict lumber prices with Ready Signal and time series forecasts](ready-signal) | Use Ready Signal to add external control data, such as census and weather data, to improve time series predictions. |
[Netlift modeling workflow](ml-uplift) | Leverage machine learning to find patterns around the types of people for whom marketing campaigns are most effective. |
[Create a trading volume profile curve with a time series model factory](ts-factory) | Use a framework to build models that will allow you to predict how much of the next day trading volume will happen at each time interval. |
[Zero-shot text classification for error analysis](zero-shot) | Use zero-shot text classification with large language models (LLMs), focusing on its application in error analysis of supervised text classification models. |
[Deploy Scoring Code as a microservice](sc-micro) | Follow a step-by-step procedure to embed Scoring Code in a microservice and prepare it as the Docker container for a deployment on customer infrastructure (it can be self- or hyperscaler-managed K8s). |
[No-show appointment forecasting](no-show) | Build a model that identifies patients most likely to miss appointments, with correlating reasons. |
---
title: Enrich data using the Hyperscaler API
description: Call the GCP API and enrich a modeling dataset that predicts customer churn.
---
# Enrich data using the Hyperscaler API {: #enrich-data-using-the-hyperscaler-api }
[Access this AI accelerator on GitHub <span style="vertical-align: sub">:material-arrow-right-circle:{.lg }</span>](https://github.com/datarobot-community/ai-accelerators/blob/main/advanced-experimentation/gcp_sentiment/GCP_enrich_sentiment.ipynb){ .md-button }
Many companies are recognizing the value of unstructured data, particularly in the form of text, and are looking for ways to extract insights from it. This data includes emails, social media posts, customer feedback, call transcripts, and more. One of the most powerful tools for analyzing text data is sentiment analysis.
Sentiment analysis is the process of identifying the emotional tone of a piece of text, such as positive, negative, or neutral. It is a valuable tool to enrich the dataset for building machine learning models. For example, the sentiment expressed through a customer's recent call transcript with customer service could be predictive of the customer's likelihood to churn.
However, building sentiment analysis models is not an easy task. It requires a significant investment of time, resources, and expertise, especially a large corpus of accurately labeled data for training. Most companies do not have the resources or expertise to develop their own sentiment analysis models.
Fortunately, there are now APIs available that provide sentiment analysis as a service. By using these APIs, companies can save time and money while still gaining the benefits of sentiment analysis. One of the most significant benefits of using hyperscaler APIs for sentiment analysis is their accuracy. The models behind the APIs are trained on large amounts of data, making them highly accurate at identifying emotions and sentiments in text data.
This accelerator demonstrates how easy it is to call the GCP API and enrich a modeling dataset that predicts customer churn. The sentiment scores retrieved from customers' call transcripts improve the model's performance.
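The enrichment pattern itself is straightforward: map each transcript through a sentiment scorer and append the result as a new modeling feature. Below is a minimal sketch of that pattern; `toy_scorer` is a hypothetical stand-in for the actual hyperscaler API call, and the column names are illustrative:

``` python
import pandas as pd

def enrich_with_sentiment(df, text_column, score_fn):
    """Append a sentiment score column derived from a text column.

    score_fn maps a string to a numeric score; in the accelerator it
    would wrap the hyperscaler sentiment API call.
    """
    out = df.copy()
    out[f"{text_column}_sentiment"] = out[text_column].map(score_fn)
    return out

# Trivial keyword scorer standing in for the real API call.
def toy_scorer(text):
    return 1.0 if "great" in text.lower() else -1.0

calls = pd.DataFrame({"transcript": ["Great service!", "Very frustrating call"]})
enriched = enrich_with_sentiment(calls, "transcript", toy_scorer)
```

Injecting the scoring function as a parameter keeps the enrichment step testable without network access; swap in the real API wrapper when running the accelerator end to end.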
---
title: Select models using custom metrics
description: This AI Accelerator demonstrates how one can leverage DataRobot's Python client to extract predictions, compute custom metrics, and sort their DataRobot models accordingly.
---
# Select models using custom metrics {: #select-models-using-custom-metrics }
[Access this AI accelerator on GitHub <span style="vertical-align: sub">:material-arrow-right-circle:{.lg }</span>](https://github.com/datarobot-community/ai-accelerators/blob/main/advanced-experimentation/custom_metrics/custom_metrics.ipynb){ .md-button }
When it comes to evaluating model performance, DataRobot provides many of the standard metrics [out-of-the-box](opt-metric), either on the [Leaderboard](tut-read-leaderboard) or as part of a [model insight](analyze-models/index).
However, depending on the industry, you may need to sort your DataRobot Leaderboard by a specific metric not natively supported by DataRobot. This AI Accelerator demonstrates how to leverage DataRobot's Python client to extract predictions, compute custom metrics, and sort your DataRobot models accordingly. The topics covered are as follows:
* Setup: import libraries and connect to DataRobot
* Build models with Autopilot
* Retrieve predictions and actuals
* Sort models by Brier Skill Score (BSS)
* Sort models by Rate@Top1%
* Sort models by return-on-investment (ROI)
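Once predictions and actuals are extracted, the custom metrics above reduce to a few lines of NumPy. The following is a simplified sketch; the function names and the 1% default are illustrative, not part of the accelerator:

``` python
import numpy as np

def brier_skill_score(y_true, y_prob):
    """BSS = 1 - BS / BS_ref, where BS_ref scores the base-rate
    (climatological) forecast. Higher is better; 0 means no skill."""
    y_true = np.asarray(y_true, dtype=float)
    y_prob = np.asarray(y_prob, dtype=float)
    bs = np.mean((y_prob - y_true) ** 2)
    bs_ref = np.mean((y_true.mean() - y_true) ** 2)
    return 1.0 - bs / bs_ref

def rate_at_top_pct(y_true, y_prob, pct=1.0):
    """Fraction of actual positives among the top pct% highest predictions."""
    y_true = np.asarray(y_true)
    y_prob = np.asarray(y_prob)
    n_top = max(1, int(round(len(y_prob) * pct / 100)))
    top_idx = np.argsort(y_prob)[::-1][:n_top]
    return float(y_true[top_idx].mean())
```

Computing either metric for each Leaderboard model and sorting by the result reproduces the custom ranking described in the notebook.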
---
title: Deploy Scoring Code as a microservice
description: Follow a step-by-step procedure to embed Scoring Code in a microservice and prepare it as the Docker container for a deployment on customer infrastructure (it can be self- or hyperscaler-managed K8s).
---
# Deploy Scoring Code as a microservice {: #deploy-scoring-code-as-a-microservice}
[Access this AI accelerator on GitHub <span style="vertical-align: sub">:material-arrow-right-circle:{.lg }</span>](https://github.com/datarobot-community/ai-accelerators/tree/main/end-to-end/scoring-code-as-microservice){ .md-button }
This accelerator guides you through the step-by-step procedure to embed Scoring Code in a microservice and prepare it as a Docker container for deployment on customer infrastructure (self-managed or hyperscaler-managed K8s). The K8s configuration and deployment on K8s are out of scope. The accelerator also includes an example Maven project with the Java code.
---
title: End-to-end ML workflow with Snowflake
description: Work with Snowflake and DataRobot's Python client to import data, build and evaluate models, and deploy a model into production to make new predictions.
---
# End-to-end ML workflow with Snowflake {: #end-to-end-ml-workflow-with-snowflake }
[Access this AI accelerator on GitHub <span style="vertical-align: sub">:material-arrow-right-circle:{.lg }</span>](https://github.com/datarobot-community/ai-accelerators/tree/main/end-to-end/Snowflake_End_to_End){ .md-button }
This AI accelerator walks through how to work with Snowflake (as a data source) and DataRobot's Python client to import data, build and evaluate models, and deploy a model into production to make new predictions. More broadly, the DataRobot API is a critical tool for data scientists to accelerate their machine learning projects with automation while integrating the platform's capabilities into their code-first workflows and coding environments of choice.
By using this accelerator, you will:
* Connect to DataRobot.
* Import data from Snowflake into DataRobot.
* Create a DataRobot project and run Autopilot.
* Select and evaluate the top performing model.
* Deploy the recommended model with MLOps model monitoring.
* Orchestrate scheduled batch predictions that write results back to Snowflake.
---
title: Migrate a model to a new cluster
description: Download a deployed model from DataRobot cluster X, upload it to DataRobot cluster Y, and then deploy and make requests from it.
---
# Migrate a model to a new cluster {: #migrate-a-model-to-a-new-cluster}
[Access this AI accelerator on GitHub <span style="vertical-align: sub">:material-arrow-right-circle:{.lg }</span>](https://github.com/datarobot-community/ai-accelerators/blob/main/advanced-experimentation/model_migration/Model_Migration_Example.ipynb){ .md-button }
Currently under development, an experimental DataRobot API allows administrators to download a deployed model from DataRobot cluster X, upload it to DataRobot cluster Y, and then deploy and make requests from it.
Note that this notebook will not work using https://app.datarobot.com.
### Prerequisites
* This notebook must be able to write to the model directory, located in the same directory as this accelerator's notebook. For best results, run this notebook from the local file system.
* Ensure that the model you choose to migrate is a deployed model.
* Provide API keys for both the source and destination clusters.
* The Source and Destination users must have the "Enable Experimental API access" feature flag enabled to follow this workflow.
* The notebook must have connectivity to the Source and Destination clusters.
* DataRobot versions on the clusters must be consistent with the Supported Paths above.
* For models on clusters of DataRobot v7.x, you must have SSH access to the App Node of the cluster.
* The Source and Destination DataRobot clusters must have the following in the config.yaml:
---
title: End-to-end time series demand forecasting workflow
description: Perform large-scale demand forecasting using DataRobot's Python package.
---
# End-to-end time series demand forecasting workflow {: #end-to-end-time-series-demand-forecasting-workflow}
[Access this AI accelerator on GitHub <span style="vertical-align: sub">:material-arrow-right-circle:{.lg }</span>](https://github.com/datarobot-community/ai-accelerators/blob/main/end-to-end/End_to_end_demand_forecasting/){ .md-button }
Demand forecasting models have many common challenges: large quantities of SKUs or series to predict, partial history or irregular history for many SKUs, multiple locations with different local or regional demand patterns, and cold-start prediction requests from the business for new products. The list goes on.
Time series in DataRobot, however, has a diverse range of functionality to help tackle these challenges. For example:
- Automatic feature engineering and creation of lagged variables across multiple data types, as well as training dataset creation.
- Diverse approaches for time series modeling with text data, learning from cross-series interactions and scaling to hundreds or thousands of series.
- Feature generation from an uploaded calendar of events file specific to your business or use case.
- Automatic backtesting controls for regular and irregular time-series.
- Training dataset creation for irregular series via custom aggregations.
- Segmented modeling, hierarchical clustering for multi-series models, multimodal modeling, and ensembling.
- Periodicity and stationarity detection, and automatic feature list creation with various differencing strategies.
- Cold start modeling on series with limited or no history.
- Insights for all of the above.
In this first installment of a three-part series on demand forecasting, this accelerator provides the building blocks for a time series experimentation and production workflow: a framework to inspect and handle common data and modeling challenges, guidance on common pitfalls in real-life time series data, and helper functions to scale experimentation with the tools mentioned above and more.
The dataset consists of 50 series (46 SKUs across 22 stores) over a two year period with varying series history, typical of a business releasing and removing products over time.
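For a sense of what the automatic lagged-variable creation replaces, here is a simplified pandas sketch; the `series_id`, `date`, and `sales` column names, and the lag choices, are assumptions for illustration:

``` python
import pandas as pd

def add_lag_features(df, series_id="series_id", date_col="date",
                     target="sales", lags=(1, 7, 28)):
    """Add per-series lagged copies of the target, the kind of
    time-aware feature DataRobot derives automatically."""
    df = df.sort_values([series_id, date_col]).copy()
    for lag in lags:
        # shift within each series so lags never leak across SKUs/stores
        df[f"{target}_lag_{lag}"] = df.groupby(series_id)[target].shift(lag)
    return df
```

DataRobot additionally handles irregular histories, backtest-safe training windows, and cross-series features, which a hand-rolled version like this would have to account for explicitly.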
---
title: End-to-end ML workflow with AWS
description: Work with AWS and DataRobot's Python client to import data, build and evaluate models, and deploy a model into production to make new predictions.
---
# End-to-end ML workflow with AWS {: #end-to-end-ml-workflow-with-aws}
Being one of the largest cloud providers in the world, AWS has multiple ways of storing data within its cloud.
You can use either of two AI accelerators that allow you to source data from S3 or Athena, build and evaluate a model using DataRobot, and send predictions from that model back to S3.
[Access the AI accelerator for S3 on GitHub <span style="vertical-align: sub">:material-arrow-right-circle:{.lg }</span>](https://github.com/datarobot-community/ai-accelerators/blob/main/end-to-end/AWS_End_to_End/Amazon_S3_End_to_End.ipynb){ .md-button }
[Access the AI accelerator for AWS Athena on GitHub <span style="vertical-align: sub">:material-arrow-right-circle:{.lg }</span>](https://github.com/datarobot-community/ai-accelerators/blob/main/end-to-end/AWS_End_to_End/AWS_Athena_End_to_End.ipynb){ .md-button }
Each AI accelerator will perform the following steps to help you integrate DataRobot with your data in AWS:
* **Import data for training**:
* In the S3 AI accelerator, take data in Parquet format, assemble it, and upload it to DataRobot's AI Catalog.
* In the Athena AI accelerator, create a JDBC data source within DataRobot to connect to Athena and then pull data in via a SQL query.
* **Build and evaluate models**: Using the DataRobot Python API, have DataRobot build up to 50 different machine learning models and evaluate how those models perform on the dataset.
---
title: Mastering tables in production ML
description: Review an AI accelerator that uses a repeatable framework for a production pipeline from multiple tables.
---
# Mastering tables in production ML {: #mastering-tables-in-production-ml }
[Access this AI accelerator on GitHub <span style="vertical-align: sub">:material-arrow-right-circle:{.lg }</span>](https://github.com/datarobot-community/ai-accelerators/tree/main/advanced-experimentation/AFD){ .md-button }
We've all been there: data for customer transactions are in one table, but the customer membership history is in another. Or, you have sensor-level data at the sub-second level in one table, machine errors in another table, and production demand in yet another table, all at different time frequencies. Electronic Medical Records (EMRs) are another common instance of this challenge. You have a use case for your business you want to explore, so you build a v0 dataset using simple aggregations, perhaps in a feature store. But moving past v0 is hard.
The reality is, the hypothesis space of relevant features explodes when considering multiple data sources with multiple data types in them. By dynamically exploring the feature space across tables, you minimize the risk of missing signal by feature omission and further reduce the burden of a priori knowledge of all possible relevant features.
Event-based data is present in every vertical and is becoming more ubiquitous across industries. Building the right features can drastically improve performance. However, understanding which joins and time horizons are best suited to your data is challenging, and also time-consuming and error-prone to explore.
In this accelerator, you'll find a repeatable framework for a production pipeline from multiple tables. This code uses Snowflake as a data source, but it can be extended to any supported database. Specifically, the accelerator provides a template to:
* Build time-aware features across multiple historical time-windows and datasets using DataRobot and multiple tables in Snowflake (or any database).
* Build and evaluate multiple feature engineering approaches and algorithms for all data types.
* Extract insights and identify the best feature engineering and modeling pipeline.
* Test predictions locally.
* Deploy the best-performing model and all data preprocessing/feature engineering in a Docker container, and expose a REST API.
* Score from Snowflake and write predictions back to Snowflake.
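The core of the time-aware feature derivation above is a trailing-window aggregation of a secondary table relative to each primary-table row, computed without look-ahead. A simplified single-key, single-window sketch (the column names and the brute-force loop are illustrative; DataRobot performs this at scale across many windows, tables, and aggregation types):

``` python
import pandas as pd

def window_aggregate(primary, transactions, key="customer_id",
                     time_col="date", value_col="amount", window_days=30):
    """For each primary row, sum transactions in the trailing window
    ending at that row's date (avoiding look-ahead leakage)."""
    feats = []
    for _, row in primary.iterrows():
        start = row[time_col] - pd.Timedelta(days=window_days)
        mask = ((transactions[key] == row[key])
                & (transactions[time_col] > start)
                & (transactions[time_col] <= row[time_col]))
        feats.append(transactions.loc[mask, value_col].sum())
    out = primary.copy()
    out[f"{value_col}_sum_{window_days}d"] = feats
    return out
```

Multiplying this by every candidate window, join key, and aggregation illustrates why exploring the feature space by hand is error-prone, and why automating it pays off.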
---
title: Feature Reduction with FIRE
description: Learn about the benefits of Feature Importance Rank Ensembling (FIRE)—a method of advanced feature selection that uses a median rank aggregation of feature impacts across several models created during a run of Autopilot.
---
# Feature Reduction with FIRE {: #feature-reduction-with-fire}
[Access this AI accelerator on GitHub <span style="vertical-align: sub">:material-arrow-right-circle:{.lg }</span>](https://github.com/datarobot-community/ai-accelerators/blob/main/advanced-experimentation/feature_reduction_with_fire/feature_reduction_with_fire.ipynb){ .md-button }
You can significantly reduce the number of features in your dataset by leveraging DataRobot's ability to train hundreds of high-quality models in a matter of minutes.
Feature Importance Rank Ensembling (FIRE) aggregates the rankings of individual features using Feature Impact from several blueprints on the leaderboard. This approach can provide greater accuracy and robustness over other feature reduction methods.
This accelerator shows how to apply FIRE to your dataset and dramatically reduce the number of features without impacting the performance of the final model.
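At its core, FIRE is a median-rank aggregation. A simplified sketch, assuming you have already pulled per-model Feature Impact into a `{model: {feature: impact}}` mapping (the function name and `top_k` parameter are illustrative):

``` python
import pandas as pd

def fire_select(impacts_by_model, top_k=5):
    """Median-rank aggregation of Feature Impact across models.

    impacts_by_model: {model_name: {feature: impact_score}}.
    Returns the top_k features by median rank (rank 1 = most impactful).
    """
    impact_df = pd.DataFrame(impacts_by_model)  # rows: features, cols: models
    ranks = impact_df.rank(ascending=False)     # rank 1 = highest impact per model
    median_rank = ranks.median(axis=1).sort_values()
    return list(median_rank.index[:top_k])
```

Aggregating ranks rather than raw impact scores makes the selection robust to models whose impact magnitudes are on different scales.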
---
title: API Quickstart
description: Learn how to begin using the DataRobot REST API to create projects and generate predictions.
---
# API quickstart {: #api-quickstart }
The DataRobot API provides a programmatic alternative to the web interface for creating and managing DataRobot projects. The API can be used via REST or with DataRobot's Python or R clients in Windows, UNIX, and OS X environments. This guide walks you through setting up your environment; you can then follow [a sample problem](#use-the-api-predicting-fuel-economy) that outlines an end-to-end workflow for the API.
!!! note
Note that the API quickstart guide uses methods for [3.x versions of DataRobot's Python client](pythonv3). If you are a Self-Managed AI Platform user, consult the [API resources table](api/index#self-managed-ai-platform-api-resources) to verify which versions of DataRobot's clients are supported for your version of the DataRobot application.
## Prerequisites {: #prerequisites }
Before proceeding, access and install the DataRobot client package for Python or R.
=== "Python"
The following prerequisites are for [3.x versions of DataRobot's Python client](pythonv3):
* Python >=3.7
* A registered DataRobot account
* pip
=== "R"
* R >= 3.2
* [httr](https://cran.r-project.org/web/packages/httr/index.html){ target=_blank } (≥ 1.2.0)
* [jsonlite](https://cran.r-project.org/web/packages/jsonlite/index.html){ target=_blank } (≥ 1.0)
* [yaml](https://cran.r-project.org/web/packages/yaml/index.html){ target=_blank } (≥ 2.1.19)
* A registered DataRobot account
## Install the client {: #install-the-client }
!!! note
Self-Managed AI Platform users may want to install a previous version of the client in order to match their installed version of the DataRobot application. Reference the [available versions](api/index#self-managed-ai-platform-api-resources) to map your installation to the correct version of the API client.
=== "Python"
`pip install datarobot`
Optionally, if you would like to build custom blueprints programmatically, install two additional packages: graphviz and blueprint-workshop.
**For Windows users:**
[Download the graphviz installer](https://www.graphviz.org/download/#windows){ target=_blank }
**For Ubuntu users:**
`sudo apt-get install graphviz`
**For Mac users:**
`brew install graphviz`
Once graphviz is installed, install the workshop:
`pip install datarobot-bp-workshop`
=== "R"
`install.packages("datarobot")`
## Configure your environment {: #configure-your-environment }
Using DataRobot APIs, you will execute a complete modeling workflow, from uploading a dataset to making predictions on a model deployed in a production environment.
### Create a DataRobot API key {: #create-a-datarobot-api-key }
1. From the DataRobot UI, click your user icon in the top right corner and select **Developer Tools**.
2. Click **Create new key**.
3. Name the new key, and click **Save**. This activates your new key, making it ready for use.

Once created, each individual key has three pieces of information:

| Label | Element | Description |
|---|---|---|
|  | Key | The name of the key, which you can edit. |
|  | Date | The date the key was last used. Newly created and not yet used keys display “Wasn’t used.” |
|  | Key value | The key itself. |
### Retrieve the API endpoint {: #retrieve-the-api-endpoint }
DataRobot provides several deployment options to meet your business requirements. Each deployment type has its own set of endpoints. Choose from the tabs below:
=== "AI Platform (US)"
The _AI Platform (US)_ offering is primarily accessed by US and Japanese users. It can be accessed at [https://app.datarobot.com](https://app.datarobot.com){ target=_blank }.
API endpoint root: `https://app.datarobot.com/api/v2`
=== "AI Platform (EU)"
The _AI Platform (EU)_ offering is primarily accessed by EMEA users. It can be accessed at [https://app.eu.datarobot.com](https://app.eu.datarobot.com){ target=_blank }.
API endpoint root: `https://app.eu.datarobot.com/api/v2`
=== "Self-Managed AI Platform"
For Self-Managed AI Platform users, the API root will be the same as your DataRobot UI root. Replace `{datarobot.example.com}` with your deployment endpoint.
API endpoint root: `https://{datarobot.example.com}/api/v2`
### Configure API authentication {: #configure-api-authentication }
To authenticate with DataRobot's API, your code needs to have access to an endpoint and token from the previous steps. This can be done in three ways:
=== "drconfig.yaml"
`drconfig.yaml` is a file that the DataRobot Python and R clients automatically look for; it is DataRobot's recommended authentication method. By default, the clients look for the file at `~/.config/datarobot/drconfig.yaml`, but you can point them to a different location or file name, which lets you maintain multiple config files. The example below demonstrates the format of the YAML file:
``` yaml
endpoint: 'https://app.datarobot.com/api/v2'
token: 'NjE3ZjA3Mzk0MmY0MDFmZGFiYjQ0MztergsgsQwOk9G'
```
Once created, you can test your access to the API.
**For Python:**
If the config file is located at `~/.config/datarobot/drconfig.yaml`, then all you need to do is import the library:
``` python
import datarobot as dr
```
Otherwise, use the following command:
``` python
import datarobot as dr
dr.Client(config_path="<file-path-to-drconfig.yaml>")
```
**For R:**
If the config file is located at `~/.config/datarobot/drconfig.yaml`, then all you need to do is load the library:
`library(datarobot)`
Otherwise, use the following command:
```
ConnectToDataRobot(configPath = "<file-path-to-drconfig.yaml>")
```
=== "Environment variables"
Set up an endpoint by setting environment variables in the UNIX shell:
``` bash
export DATAROBOT_ENDPOINT=https://app.datarobot.com/api/v2
export DATAROBOT_API_TOKEN=your_api_token
```
Once set, authenticate to connect to DataRobot.
**For Python:**
``` python
import datarobot as dr
dr.Project.list()
```
**For R:**
`library(datarobot)`
**For cURL:**
```
curl --location -X GET "${DATAROBOT_ENDPOINT}/projects" --header "Authorization: Bearer ${DATAROBOT_API_TOKEN}"
```
=== "Embed in your code"
Optionally, embed the endpoint and token directly in your code. Be careful never to commit your credentials to Git.
**For Python:**
``` python
import datarobot as dr
dr.Client(endpoint='https://app.datarobot.com/api/v2', token='NjE3ZjA3Mzk0MmY0MDFmZGFiYjQ0MztergsgsQwOk9G')
```
**For R:**
```
ConnectToDataRobot(endpoint = "https://app.datarobot.com/api/v2",
                   token = 'NjE3ZjA3Mzk0MmY0MDFmZGFiYjQ0MztergsgsQwOk9G')
```
**For cURL**:
```
curl --location -X GET "https://app.datarobot.com/api/v2/" --header "Authorization: Bearer DnwzBUNTOtKBO6Sp1hoUByG4YgZwCCw4"
```
## Use the API: Predicting fuel economy {: #use-the-api-predicting-fuel-economy }
Once you have configured your API credentials, endpoints, and environment, you can use the DataRobot API to follow this example. The example uses the Python client and the REST API (via cURL), so a basic understanding of Python 3 or cURL is required. It guides you through a simple problem: predicting miles-per-gallon fuel economy from known automobile data (e.g., vehicle weight, number of cylinders, etc.).
!!! note
Python client users should note that the following workflow uses methods introduced in version 3.0 of the client. Ensure that your client is up-to-date before executing the code included in this example.
The following sections provide sample code, for Python and cURL, that will:
1. **Upload** a dataset.
2. **Train** a model to learn from the dataset.
3. **Test** prediction outcomes on the model with new data.
4. **Deploy** the model.
5. **Predict** outcomes on the deployed model using new data.
### Upload a dataset {: #upload-a-dataset }
The first step in creating a project is uploading a dataset. This example uses the dataset `auto-mpg.csv`, which you can download [here](https://github.com/datarobot-community/quickstart-guide/tree/master/data){ target=_blank }.
=== "Python"
``` python
import datarobot as dr
dr.Client()
# Set to the location of your auto-mpg.csv and auto-mpg-test.csv data files
# Example: dataset_file_path = '/Users/myuser/Downloads/auto-mpg.csv'
training_dataset_file_path = ''
test_dataset_file_path = ''
# Load dataset
training_dataset = dr.Dataset.create_from_file(training_dataset_file_path)
# Create a new project based on dataset
project = dr.Project.create_from_dataset(training_dataset.id, project_name='Auto MPG DR-Client')
```
=== "R"
```
# Set to the location of your auto-mpg.csv and auto-mpg-test.csv data files
# Example: dataset_file_path = '/Users/myuser/Downloads/auto-mpg.csv'
training_dataset_file_path <- ""
test_dataset_file_path <- ""
training_dataset <- utils::read.csv(training_dataset_file_path)
test_dataset <- utils::read.csv(test_dataset_file_path)
head(training_dataset)
project <- SetupProject(dataSource = training_dataset, projectName = "Auto MPG DR-Client", maxWait = 60 * 60)
```
=== "cURL"
``` shell
DATAROBOT_API_TOKEN=${DATAROBOT_API_TOKEN}
DATAROBOT_ENDPOINT=${DATAROBOT_ENDPOINT}
location=$(curl -Lsi \
-X POST \
-H "Authorization: Bearer ${DATAROBOT_API_TOKEN}" \
-F 'projectName="Auto MPG"' \
-F "file=@${DATASET_FILE_PATH}" \
"${DATAROBOT_ENDPOINT}"/projects/ | grep -i 'Location: .*$' | \
cut -d " " -f2 | tr -d '\r')
echo "Uploaded dataset. Checking status of project at: ${location}"
while true; do
project_id=$(curl -Ls \
-X GET \
-H "Authorization: Bearer ${DATAROBOT_API_TOKEN}" "${location}" \
| grep -Eo 'id":\s"\w+' | cut -d '"' -f3 | tr -d '\r')
if [ "${project_id}" = "" ]
then
echo "Setting up project..."
sleep 10
else
echo "Project setup complete."
echo "Project ID: ${project_id}"
break
fi
done
```
### Train models {: #train-models }
Now that DataRobot has data, it can use the data to train and build models with Autopilot. Autopilot is DataRobot's "survival of the fittest" modeling mode that automatically selects the best predictive models for the specified target feature and runs them at increasing sample sizes. The outcome of Autopilot is not only a selection of best-suited models, but also identification of a recommended model—the model that best understands how to predict the target feature "mpg." Choosing the best model is a balance of accuracy, metric performance, and model simplicity. You can read more about the [model recommendation process](model-rec-process) in the UI documentation.
=== "Python"
``` python
# Use training data to build models
from datarobot import AUTOPILOT_MODE
# Set the project's target and initiate Autopilot (runs in Quick mode unless a different mode is specified)
project.analyze_and_model(target='mpg', worker_count=-1, mode=AUTOPILOT_MODE.QUICK)
# Open the project's Leaderboard to monitor progress in the UI
project.open_in_browser()
# Wait for the model creation to finish
project.wait_for_autopilot()
model = project.get_top_model()
```
=== "R"
```
# Set the project target and initiate Autopilot
SetTarget(project,
target = "mpg")
# Block execution until Autopilot is complete
WaitForAutopilot(project)
model <- GetRecommendedModel(project, type = RecommendedModelType$RecommendedForDeployment)
```
=== "cURL"
``` shell
response=$(curl -Lsi \
-X PATCH \
-H "Authorization: Bearer ${DATAROBOT_API_TOKEN}" \
-H "Content-Type: application/json" \
--data '{"target": "mpg", "mode": "quick"}' \
"${DATAROBOT_ENDPOINT}/projects/${project_id}/aim" | grep -i 'location: .*$' \
| cut -d " " -f2 | tr -d '\r')
echo "AI training initiated. Checking status of training at: ${response}"
while true; do
initial_project_status=$(curl -Ls \
-X GET \
-H "Authorization: Bearer ${DATAROBOT_API_TOKEN}" "${response}" \
| grep -Eo 'stage":\s"\w+' | cut -d '"' -f3 | tr -d '\r')
if [ "${initial_project_status}" = "" ]
then
echo "Setting up AI training..."
sleep 10
else
echo "Training AI."
echo "Grab a coffee or catch up on email."
break
fi
done
while true; do
project_status=$(curl -Lsi \
-X GET \
-H "Authorization: Bearer ${DATAROBOT_API_TOKEN}" \
"${DATAROBOT_ENDPOINT}/projects/${project_id}/status" \
| grep -Eo 'autopilotDone":\strue'
)
if [ "${project_status}" = "" ]
then
echo "Autopilot training in progress..."
sleep 60
else
echo "Autopilot training complete. Model ready to deploy."
break
fi
done
```
### Make predictions against the model {: #make-predictions-against-the-model }
After building models and identifying the top performers, you can further test a model by making predictions on new data. Typically, you would test predictions with a smaller dataset to ensure the model is behaving as expected before deploying the model to production. DataRobot offers several methods for making predictions on new data. You can read more about [prediction methods](predictions/index) in the UI documentation.
=== "Python"
This code makes predictions on the recommended model using the test dataset (`test_dataset_file_path`) that you specified in the first step, when you uploaded data.
``` python
# Test predictions on new data
predict_job = model.request_predictions(test_dataset_file_path)
predictions = predict_job.get_result_when_complete()
predictions.head()
```
=== "R"
This code makes predictions on the recommended model using the test dataset (`test_dataset_file_path`) that you specified in the first step, when you uploaded data.
```
# Uploading the testing dataset
scoring <- UploadPredictionDataset(project, dataSource = test_dataset)
# Requesting prediction
predict_job_id <- RequestPredictions(project, modelId = model$modelId, datasetId = scoring$id)
# Grabbing predictions (the type defaults to "response", which is appropriate for a regression target like mpg)
predictions <- GetPredictions(project, predictId = predict_job_id)
head(predictions)
```
=== "cURL"
This code makes predictions on the recommended model using the test dataset (`TEST_DATASET_FILE_PATH`) that you specified in the first step, when you uploaded data.
``` shell
# Test predictions on new data
# shellcheck disable=SC2089
prediction_location=$(curl -Lsi\
-X POST \
-H "Authorization: Bearer ${DATAROBOT_API_TOKEN}" \
-F "file=@${TEST_DATASET_FILE_PATH}" \
"${DATAROBOT_ENDPOINT}/projects/${project_id}/predictionDatasets/fileUploads/"\
| grep -i 'location: .*$' | cut -d " " -f2 | tr -d '\r')
echo "Uploaded prediction dataset. Checking status of upload at: ${prediction_location}"
while true; do
prediction_dataset_id=$(curl -Ls \
-X GET \
-H "Authorization: Bearer ${DATAROBOT_API_TOKEN}" "${prediction_location}" \
| grep -Eo 'id":\s"\w+' | cut -d '"' -f3 | tr -d '\r')
if [ "${prediction_dataset_id}" = "" ]
then
echo "Uploading predictions..."
sleep 10
else
echo "Predictions upload complete."
echo "Predictions dataset ID: ${prediction_dataset_id}"
break
fi
done
prediction_request_data="{\
\"modelId\":\"${recommended_model_id}\",\
\"datasetId\":\"${prediction_dataset_id}\"\
}"
predict_job=$(curl -Lsi \
-X POST \
-H "Content-Type: application/json" \
-H "Authorization: Bearer ${DATAROBOT_API_TOKEN}" \
--data "${prediction_request_data}" \
"${DATAROBOT_ENDPOINT}/projects/${project_id}/predictions/"\
| grep -i 'location: .*$' | cut -d " " -f2 | tr -d '\r')
while true; do
initial_job_status=$(curl -Ls \
-X GET \
-H "Authorization: Bearer ${DATAROBOT_API_TOKEN}" "${predict_job}" \
| grep -Eo 'status":\s"\w+' | cut -d '"' -f3 | tr -d '\r')
if [ "${initial_job_status}" = "inprogress" ]
then
echo "Generating predictions..."
sleep 10
else
echo "Predictions complete."
break
fi
done
curl -Ls \
-X GET \
-H "Authorization: Bearer ${DATAROBOT_API_TOKEN}" "${predict_job}"
```
### Deploy the model {: #deploy-the-model }
Deployment is the method by which you integrate a machine learning model into an existing production environment to make predictions with live data and generate insights. See the [machine learning model deployment overview](https://www.datarobot.com/wiki/machine-learning-model-deployment/){ target=_blank } for more information.
=== "Python (AI Platform Trial)"
```python
# Deploy model
deployment = dr.Deployment.create_from_learning_model(
model_id=model.id, label="MPG Prediction Server",
description="Deployed with DataRobot client")
# View deployment stats
service_stats = deployment.get_service_stats()
print(service_stats.metrics)
```
=== "Python (managed AI Platform [US or EU])"
```python
# Deploy model
prediction_server = dr.PredictionServer.list()[0]
deployment = dr.Deployment.create_from_learning_model(
model_id=model.id, label="MPG Prediction Server",
description="Deployed with DataRobot client",
default_prediction_server_id=prediction_server.id
)
# View deployment stats
service_stats = deployment.get_service_stats()
print(service_stats.metrics)
```
=== "R"
```
predictionServer <- ListPredictionServers()[[1]]
deployment <- CreateDeployment(model,
label = "MPG Prediction Server",
description = "Deployed with DataRobot client",
defaultPredictionServerId = predictionServer$id)
```
=== "cURL"
``` shell
recommended_model_id=$(curl -s \
-X GET \
-H "Authorization: Bearer ${DATAROBOT_API_TOKEN}" \
"${DATAROBOT_ENDPOINT}/projects/${project_id}/recommendedModels"\
"/recommendedModel/" \
| grep -Eo 'modelId":\s"\w+' | cut -d '"' -f3 | tr -d '\r')
server_data=$(curl -s -X GET \
-H "Authorization: Bearer ${DATAROBOT_API_TOKEN}" \
"${DATAROBOT_ENDPOINT}/predictionServers/")
default_server_id=$(echo $server_data \
| grep -Eo 'id":\s"\w+' | cut -d '"' -f3 | tr -d '\r')
server_url=$(echo $server_data | grep -Eo 'url":\s"[^"]*"' \
| cut -d '"' -f3 | tr -d '\r')
server_key=$(echo $server_data | grep -Eo 'datarobot-key":\s"[^"]*"' \
| cut -d '"' -f3 | tr -d '\r')
request_data="{\
\"defaultPredictionServerId\":\"${default_server_id}\",\
\"modelId\":\"${recommended_model_id}\",\
\"description\":\"Deployed with cURL\",\
\"label\":\"MPG Prediction Server\"\
}"
deployment_response=$(curl -Lsi -X POST \
-H "Authorization: Bearer ${DATAROBOT_API_TOKEN}" \
-H "Content-Type: application/json" \
--data "${request_data}" \
"${DATAROBOT_ENDPOINT}/deployments/fromLearningModel/")
deploy_response_code_202=$(echo $deployment_response | grep -Eo 'HTTP/2 202')
if [ "${deploy_response_code_202}" = "" ]
then
deployment_id=$(echo "$deployment_response" | grep -Eo 'id":\s"\w+' \
| cut -d '"' -f3 | tr -d '\r')
echo "Prediction server ready."
else
deployment_status=$(echo "$deployment_response" | grep -Eo 'location: .*$' \
| cut -d " " -f2 | tr -d '\r')
while true; do
deployment_ready=$(curl -Ls \
-X GET \
-H "Authorization: Bearer ${DATAROBOT_API_TOKEN}" "${deployment_status}" \
| grep -Eo 'id":\s"\w+' | cut -d '"' -f3 | tr -d '\r')
if [ "${deployment_ready}" = "" ]
then
echo "Waiting for deployment..."
sleep 10
else
deployment_id=$deployment_ready
echo "Prediction server ready."
break
fi
done
fi
```
### Make predictions against the deployed model {: #make-predictions-against-the-deployed-model }
When you have successfully deployed a model, you can use the DataRobot Prediction API to make predictions. This allows you to access advanced [model management](mlops/index) features such as data drift, accuracy, and service health statistics.
=== "Python"
You can also reference a Python prediction snippet from the UI. Navigate to the **Deployments** page, select your deployment, and go to **Predictions > Prediction API** to reference the snippet for making predictions.
``` python
import requests
from pprint import pprint
import json
import os
# JSON records for example autos for which to predict mpg
autos = [
{
"cylinders": 4,
"displacement": 119.0,
"horsepower": 82.00,
"weight": 2720.0,
"acceleration": 19.4,
"model year": 82,
"origin": 1,
},
{
"cylinders": 8,
"displacement": 120.0,
"horsepower": 79.00,
"weight": 2625.0,
"acceleration": 18.6,
"model year": 82,
"origin": 1,
},
]
# Create REST request for prediction API
prediction_server = deployment.default_prediction_server
prediction_headers = {
"Authorization": "Bearer {}".format(os.getenv("DATAROBOT_API_TOKEN")),
"Content-Type": "application/json",
"datarobot-key": prediction_server['datarobot-key']
}
predictions = requests.post(
f"{prediction_server['url']}/predApi/v1.0/deployments"
f"/{deployment.id}/predictions",
headers=prediction_headers,
data=json.dumps(autos),
)
pprint(predictions.json())
```
=== "R"
```
# Prepare to connect to the prediction server
URL <- paste0(deployment$defaultPredictionServer$url,
"/predApi/v1.0/deployments/",
deployment$id,
"/predictions")
USERNAME <- deployment$owners$preview$email # Your DataRobot account email address
API_TOKEN <- Sys.getenv("DATAROBOT_API_TOKEN") # This is configured implicitly when you first run `library(datarobot)`
# Invoke Predictions API with the test_dataset
response <- httr::POST(URL,
body = jsonlite::toJSON(test_dataset),
httr::add_headers("datarobot-key" = deployment$defaultPredictionServer$dataRobotKey),
httr::content_type_json(),
httr::authenticate(USERNAME, API_TOKEN, "basic"))
# Parse the results from the prediction server
predictionResults <- jsonlite::fromJSON(httr::content(response, as = "text"),
simplifyDataFrame = TRUE,
flatten = TRUE)$data
print(predictionResults)
```
=== "cURL"
``` shell
autos='[{
"cylinders": 4,
"displacement": 119.0,
"horsepower": 82.00,
"weight": 2720.0,
"acceleration": 19.4,
"model year": 82,
"origin":1
},{
"cylinders": 8,
"displacement": 120.0,
"horsepower": 79.00,
"weight": 2625.0,
"acceleration": 18.6,
"model year": 82,
"origin":1
}]'
curl -X POST \
-H 'Content-Type: application/json' \
-H "Authorization: Bearer ${DATAROBOT_API_TOKEN}" \
-H "datarobot-key: ${server_key}" \
--data "${autos}" \
"${server_url}/predApi/v1.0/deployments/${deployment_id}/predictions"
```
## Learn more {: #learn-more }
After getting started with DataRobot's APIs, navigate to the [user guide](../guide/index) for overviews, Jupyter notebooks, and task-based tutorials that help you find complete examples of common data science and machine learning workflows. You can also read the [reference documentation](reference/index) available for DataRobot's programmatic tools.
!!! note
{% include 'includes/github-sign-in-plural.md' %}
|
index
|
---
title: API reference documentation
description: Review the reference documentation available for DataRobot's programmatic tools.
---
# API reference documentation {: #api-documentation-home }
The table below outlines the reference documentation available for DataRobot's programmatic tools.
<!--private start-->
Resource | Description
-------- | -----------
 | **REST API**: The [DataRobot REST API](public-api/index) provides a programmatic alternative to the UI for creating and managing DataRobot projects. It allows you to automate processes and iterate more quickly, and lets you use DataRobot with scripted control. The API provides an intuitive modeling and prediction interface. You can also access the **[legacy REST API docs](/apidocs/)**. <br /> **Open API specification**: Reference the [OpenAPI specification](/api/v2/openapi.yaml) for the DataRobot REST API, which helps automate the generation of a client for languages that DataRobot doesn't directly support. It also assists with designing, implementing, and testing integrations with DataRobot's REST API using a variety of automated OpenAPI-compatible tools.
 | **Python client**: Installation, configuration, and reference documentation for working with the Python client library.<br /> <ul><li> <a href="https://pypi.org/project/datarobot/" target="_blank">Access the client package.</a> </li> <li> <a href="https://datarobot-public-api-client.readthedocs-hosted.com/" target="_blank">Read client documentation.</a> </li></ul>
 | **R client**: Installation, configuration, and reference documentation for working with the R client library.<br /> <ul><li> <a href="https://cran.r-project.org/package=datarobot" target="_blank">Access the R package</a> </li> <li> <a href="https://cran.r-project.org/web/packages/datarobot/datarobot.pdf" target="_blank">Read R client documentation</a> </li></ul>
 | **Blueprint workshop**: <a href="https://blueprint-workshop.datarobot.com/index.html" target="_blank">Construct and modify</a> DataRobot blueprints and their tasks using a programmatic interface.
 | **Prediction API**: [Generate predictions](dr-predapi) with a deployment by submitting JSON or CSV input data via a POST request.
 | **Batch prediction API**: [Score large datasets](batch/index) with flexible options for intake and output using the prediction servers you have deployed via the Batch Prediction API.
<!--private end-->
<!--public start-->
Resource | Description
-------- | -----------
 | **DataRobot REST API**: The [DataRobot REST API](reference/public-api/index) provides a programmatic alternative to the UI for creating and managing DataRobot projects. It allows you to automate processes and iterate more quickly, and lets you use DataRobot with scripted control.
 | **Python client**: Installation, configuration, and reference documentation for working with the Python client library.<br /> <ul><li> <a href="https://pypi.org/project/datarobot/" target="_blank">Access the client package.</a> </li> <li> <a href="https://datarobot-public-api-client.readthedocs-hosted.com/" target="_blank">Read client documentation.</a> </li></ul>
 | **R client**: Installation, configuration, and reference documentation for working with the R client library.<br /> <ul><li> <a href="https://cran.r-project.org/package=datarobot" target="_blank">Access the R package</a> </li> <li> <a href="https://cran.r-project.org/web/packages/datarobot/datarobot.pdf" target="_blank">Read R client documentation</a> </li></ul>
 | **Blueprint workshop**: <a href="https://blueprint-workshop.datarobot.com/index.html" target="_blank">Construct and modify</a> DataRobot blueprints and their tasks using a programmatic interface.
 | **Prediction API**: [Generate predictions](dr-predapi) with a deployment by submitting JSON or CSV input data via a POST request.
 | **Batch prediction API**: [Score large datasets](batch/index) with flexible options for intake and output using the prediction servers you have deployed via the Batch Prediction API.
<!--public end-->
|
index
|
---
title: Common use cases
description: Review Jupyter notebooks that outline common use cases for version 3.x of DataRobot's Python client.
---
# Common use cases {: #common-use-cases }
Review Jupyter notebooks that outline common use cases and machine learning workflows using version 3.x of DataRobot's Python client.
Topic | Describes... |
----- | ------ |
[Use cases for version 2.x](python2/index) | Notebooks for use cases that use methods for 2.x versions of DataRobot's Python client.
[Identify money laundering with anomaly detection](aml/index) | How to use a historical financial transaction dataset and train models that detect instances of money laundering. |
[Measure price elasticity of demand](elasticity/index) | A use case to identify relationships between price and demand, maximize revenue by properly pricing products, and monitor price elasticities for changes in price and demand. |
[Insurance claim triage](insurance/index) | How to evaluate the severity of an insurance claim in order to triage it effectively. |
[Predict loan defaults](loan-default/index) | A use case that reduces defaults and minimizes risk by predicting the likelihood that a borrower will not repay their loan. |
[No-show appointment forecasting](no-show-appt/index) | How to build a model that identifies patients most likely to miss appointments, with correlating reasons. |
[Predict late shipments](predict-shipment/index) | A use case that determines whether a shipment will be late or if there will be a shortage of parts. |
[Reduce 30-Day readmissions rate](readmission/index) | How to reduce the 30-day readmission rate at a hospital. |
[Predict steel plate defects](steel/index) | A use case that helps manufacturers significantly improve the efficiency and effectiveness of identifying defects of all kinds, including those for steel sheets. |
[Predict customer churn](customer-churn-v3.ipynb) | How to predict customers that are at risk to churn and when to intervene to prevent it. |
[Large scale demand forecasting](demand-v3.ipynb) | An end-to-end demand forecasting use case that uses DataRobot's Python package. |
[Predictions for fantasy baseball](fantasy-v3.ipynb) | An estimate of a baseball player's true talent level and their likely performance for the coming season.
[Lead scoring](lead-scoring-v3.ipynb) | A binary classification problem of whether a prospect will become a customer. |
[Forecast sales with multiseries modeling](multiseries-v3.ipynb) | How to forecast future sales for multiple stores using multiseries modeling. |
[Identify money laundering with anomaly detection](outlier-v3.ipynb) | How to train anomaly detection models to detect outliers. |
[Predict CO₂ levels with out-of-time validation modeling](otv-v3.ipynb) | How to use [out-of-time validation (OTV)](otv) modeling with DataRobot's Python client to predict monthly CO₂ levels for one of Hawaii's active volcanoes, Mauna Loa. |
[Predict equipment failure](part-fail-v3.ipynb) | A use case that determines whether or not equipment part failure will occur. |
[Predict fraudulent medical claims](pred-fraud-v3.ipynb) | The identification of fraudulent medical claims using the DataRobot Python package. |
[Generate SHAP-based Prediction Explanations](shap-nb.ipynb) | How to use DataRobot's SHAP Prediction Explanations to determine what qualities of a home drive sale value. |
|
index
|
---
title: Make Visual AI predictions via the API
dataset_name: N/A
description: Learn how to make predictions on Visual AI projects with API calls.
domain: DSX
expiration_date: 06-01-2022
owner: nathan.goudreault@datarobot.com
url: docs.datarobot.com/en/tutorials/using-the-api/vai-pred.html
---
# Make Visual AI predictions via the API {: #make-visual-ai-predictions-via-the-api }
This tutorial outlines how to make predictions on Visual AI projects with API calls. To complete this tutorial, you must have trained and deployed a Visual AI model.
## Takeaways {: #takeaways }
This tutorial shows how to:
* Configure scripting code for making batch predictions via the API
* Make an API call to get batch predictions for a visual AI model
* Format images to a base64 format
## Predictions workflow {: #predictions-workflow }
1. Prepare your data for Visual AI. Before making predictions, convert the images you want to score to [base64 format](vai-predictions#base64-encoding-format) (the standard format for handling images in API calls). Note that when the model returns prediction results, images will return in base64 format. To convert data, use DataRobot's Python package, described in the guide <a target="_blank" href="https://datarobot-public-api-client.readthedocs-hosted.com/page/reference/modeling/spec/binary_data.html#preparing-data-for-predictions">Preparing binary data for predictions</a>.
2. <a name="step-two"></a> After training and deploying a Visual AI model, navigate to the deployment and access the **Predictions > Prediction API** tab. This tab provides the scripting code used to make predictions via the API.
3. To configure the scripting code, select **Batch** as the prediction type and **API Client** as the interface type.

4. Copy the code and save it as a Python script (e.g., `datarobot-predict.py`). You can edit the script to incorporate additional steps. For example, add the `passthrough_columns_set` argument to `BatchPredictionJob` if you would like to include columns from the input file (e.g., `image_id`) to the output file.

5. Using the scripting code from [step two](#step-two) and a base64-converted image file (`InputDataConverted.csv`), make an API call to get predictions from the deployed model:
`python datarobot-predict.py InputDataConverted.csv Predictions.csv`
6. Access the output file (`Predictions.csv`) to view prediction results.

## Learn more {: #learn-more }
??? tip "Additional tools"
Reference the <a target="_blank" href="https://github.com/datarobot-community/visual-ai-data-prep/blob/master/visualai_data_prep.py">DataRobot Community GitHub</a> pages for data prep tools, including a script to help with the base64 conversion process. {% include 'includes/github-sign-in.md' %}
For more information on the scripting code used in this tutorial, refer to the <a target="_blank" href="https://datarobot-public-api-client.readthedocs-hosted.com/">Python Package documentation</a>.
??? tip "Multiclass example"
For the multiclass classification problem's prediction results in step 6, note that the prediction output file includes the probability of the image falling under each class, the class name with the highest probability, and all the other optional columns requested from the Python scoring script (such as `prediction_status` and `image_id`).
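As a sketch of working with such multiclass output, the snippet below builds a toy frame in the same shape and recovers the top class per image. The column names are hypothetical, not DataRobot's exact headers; match them to your actual `Predictions.csv`:
```python
import pandas as pd

# Toy rows mimicking a multiclass prediction output file
# (hypothetical column names for illustration only)
df = pd.DataFrame({
    "image_id": [0, 1],
    "class_scratch_PREDICTION": [0.7, 0.2],
    "class_dent_PREDICTION": [0.3, 0.8],
})
prob_cols = [c for c in df.columns if c.endswith("_PREDICTION")]
# idxmax picks the column holding the highest probability for each row;
# string slicing strips the "class_" prefix and "_PREDICTION" suffix
df["top_class"] = df[prob_cols].idxmax(axis=1).str[len("class_"):-len("_PREDICTION")]
print(df[["image_id", "top_class"]])
```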
## Documentation {: #documentation }
* [Visual AI overview](../../../modeling/special-workflows/visual-ai/index)
* [Making predictions with Visual AI](vai-predictions)
* [Prediction API overview](../../reference/predapi/index)
|
vai-pred
|
---
title: Make batch predictions with Azure Blob storage
dataset_name: N/A
description: Use the DataRobot Python Client package to set up a batch prediction job that reads an input file for scoring from Azure Blob storage and then writes the results back to Azure.
domain: DSX
expiration_date: 06-01-2022
owner: nathan.goudreault@datarobot.com
url: docs.datarobot.com/en/tutorials/using-the-api/azure-pred.html
---
# Make batch predictions with Azure Blob storage {: #make-batch-predictions-with-azure-blob-storage }
The DataRobot Batch Prediction API allows you to take in large datasets and score them against deployed models running on a prediction server. The API also provides flexible options for the intake and output of these files.
In this tutorial, you will learn how to use the DataRobot Python Client package (which calls the Batch Prediction API) to set up a batch prediction job. The job reads an input file for scoring from Azure Blob storage and then writes the results back to Azure. This approach also works for Azure Data Lake Storage Gen2 accounts because the underlying storage is the same.
## Requirements {: #requirements }
To use the code provided in this tutorial, make sure you have the following:
* Python 2.7 or 3.4+
* [The DataRobot Python package](https://pypi.org/project/datarobot){ target=_blank } (version 2.21.0+)
* [A DataRobot deployment](deploy-methods/index)
* An Azure storage account
* An Azure storage container
* A scoring dataset in the storage container to use with your DataRobot deployment
## Create stored credentials {: #create-stored-credentials }
Running batch prediction jobs requires the appropriate credentials to read and write to Azure Blob storage. You must provide the name of the Azure storage account and an access key.
1. To retrieve these credentials, select the **Access keys** menu in the Azure portal.

2. Click **Show keys** to retrieve an access key. You can use either of the keys shown (key1 or key2).

3. Use the following code to create a new credential object within DataRobot that can be used in the batch prediction job to connect to your Azure storage account.
```python
AZURE_STORAGE_ACCOUNT = "YOUR AZURE STORAGE ACCOUNT NAME"
AZURE_STORAGE_ACCESS_KEY = "AZURE STORAGE ACCOUNT ACCESS KEY"
DR_CREDENTIAL_NAME = "Azure_{}".format(AZURE_STORAGE_ACCOUNT)
# Create Azure-specific credentials
# You can also copy the connection string, which is found below the access key in Azure.
credential = dr.Credential.create_azure(
name=DR_CREDENTIAL_NAME,
azure_connection_string="DefaultEndpointsProtocol=https;AccountName={};AccountKey={};".format(AZURE_STORAGE_ACCOUNT, AZURE_STORAGE_ACCESS_KEY)
)
# Use this code to look up the ID of the credential object created.
credential_id = None
for cred in dr.Credential.list():
if cred.name == DR_CREDENTIAL_NAME:
credential_id = cred.credential_id
break
print(credential_id)
```
## Run the prediction job {: #run-the-prediction-job }
With a credential object created, you can now configure the batch prediction job as shown in the code sample below:
- Set `intake_settings` and `output_settings` to the `azure` type.
- For `intake_settings` and `output_settings`, set `url` to the files in Blob storage that you want to read and write to (the output file does not need to exist already).
- Provide the ID of the credential object that was created above.
The code sample creates and runs the batch prediction job. Once finished, it provides the status of the job. This code also demonstrates how to configure the job to return both Prediction Explanations and passthrough columns for the scoring data.
!!! note
You can find the deployment ID in the sample code output of the [**Deployments > Predictions > Prediction API**](code-py) tab (with **Interface** set to "API Client").
```python
DEPLOYMENT_ID = 'YOUR DEPLOYMENT ID'
AZURE_STORAGE_ACCOUNT = "YOUR AZURE STORAGE ACCOUNT NAME"
AZURE_STORAGE_CONTAINER = "YOUR AZURE STORAGE ACCOUNT CONTAINER"
AZURE_INPUT_SCORING_FILE = "YOUR INPUT SCORING FILE NAME"
AZURE_OUTPUT_RESULTS_FILE = "YOUR OUTPUT RESULTS FILE NAME"
# Set up our batch prediction job
# Input: Azure Blob Storage
# Output: Azure Blob Storage
job = dr.BatchPredictionJob.score(
deployment=DEPLOYMENT_ID,
intake_settings={
'type': 'azure',
'url': "https://{}.blob.core.windows.net/{}/{}".format(AZURE_STORAGE_ACCOUNT, AZURE_STORAGE_CONTAINER,AZURE_INPUT_SCORING_FILE),
"credential_id": credential_id
},
output_settings={
'type': 'azure',
'url': "https://{}.blob.core.windows.net/{}/{}".format(AZURE_STORAGE_ACCOUNT, AZURE_STORAGE_CONTAINER,AZURE_OUTPUT_RESULTS_FILE),
"credential_id": credential_id
},
# Optional: request up to five Prediction Explanations per row
max_explanations=5,
# Optional: include these input columns in the output
passthrough_columns=['column1','column2']
)
job.wait_for_completion()
job.get_status()
```
When the job completes successfully, you should see the output file in your Azure Blob storage container.
## Documentation {: #documentation }
* [Prediction API overview](../../reference/predapi/index)
* [DataRobot Batch Prediction API](../../reference/batch-prediction-api/index)
|
azure-pred
|
---
title: Python code examples
description: Review comprehensive workflows, notebooks, and tutorials that help you find complete examples of common data science and machine learning workflows.
---
# Python code examples {: #python-code-examples }
The API user guide includes overviews and workflows for DataRobot's Python client that outline complete examples of common data science and machine learning workflows. Be sure to review the [API quickstart guide](api-quickstart/index) before using the notebooks below.
Topic | Describes... |
----- | ------ |
[Feature selection notebooks](feat-select/index) | Notebooks that outline Feature Importance Rank Ensembling (FIRE) and advanced feature selection with Python. |
[Build a model factory](Build-a-Model-Factory.ipynb) | A system or a set of procedures that automatically generate predictive models with little to no human intervention. |
[Make Visual AI predictions via the API](vai-pred) | Scripting code for making batch predictions for a Visual AI model via the API. |
[Using the Batch Prediction API](batch-pred-api.ipynb) | DataRobot's batch prediction API to score large datasets with a deployed DataRobot model. |
[Configure datetime partitioning](datetime-v3.ipynb) | How to use datetime partitioning to guard a project against time-based target leakage. |
[Create and schedule JDBC prediction jobs](jdbc-nb.ipynb) | How to use DataRobot's Python client to schedule prediction jobs and write them to a JDBC database. |
[Migrate models](migrate-nb.ipynb) | How to transfer models from one DataRobot cluster to another as an .mlpkg file. |
[Create and deploy a custom model](custom-models.ipynb) | An end-to-end workflow for creating a custom model and deploying it to make predictions. |
[Make batch predictions with Azure Blob storage](azure-pred) | How to read input data from and write predictions back to Azure Blob storage. |
[Make batch predictions with Google Cloud Storage](gcs-pred) | How to read input data from and write predictions back to Google Cloud Storage. |
|
index
|
---
title: Make batch predictions with Google Cloud storage
dataset_name: N/A
description: Learn how to make and write predictions to Google Cloud Storage.
domain: DSX
expiration_date: 06-01-2022
owner: nathan.goudreault@datarobot.com
url: docs.datarobot.com/en/tutorials/using-the-api/gcs-pred.html
---
# Make batch predictions with Google Cloud Storage {: #make-batch-predictions-with-google-cloud-storage }
The DataRobot Batch Prediction API allows you to take in large datasets and score them against deployed models running on a prediction server. The API also provides flexible options for the intake and output of these files.
In this tutorial, you will learn how to use the DataRobot Python Client package (which calls the Batch Prediction API) to set up a batch prediction job. The job reads an input file for scoring from Google Cloud Storage (GCS) and then writes the results back to GCS.
## Requirements {: #requirements }
To use the code provided in this tutorial, make sure you have the following:
* Python 2.7 or 3.4+
* [The DataRobot Python package](https://pypi.org/project/datarobot/){ target=_blank } (version 2.21.0+)
* [A DataRobot deployment](deploy-methods/index)
* A GCS bucket
* A service account with access to the GCS bucket (detailed below)
* A scoring dataset that lives in the GCS bucket to use with your DataRobot deployment
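Before running the code below, the DataRobot Python client must be authenticated. One common approach, assuming the default configuration location and the managed cloud endpoint (adjust both for your installation), is a `drconfig.yaml` file that the client reads automatically:

```yaml
# ~/.config/datarobot/drconfig.yaml
endpoint: https://app.datarobot.com/api/v2
token: YOUR_API_TOKEN
```

Alternatively, you can pass the same `endpoint` and `token` values to `dr.Client()` directly in code.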
## Configure a GCP service account {: #configure-a-gcp-service-account }
Running batch prediction jobs requires the appropriate credentials to read and write to GCS. You must create a service account within the Google Cloud Platform that has access to the GCS bucket, then download a key for the account to use in the batch prediction job.
1. To retrieve these credentials, log into the Google Cloud Platform console and select **IAM & Admin > Service Accounts** from the sidebar.

2. Click **Create Service Account**. Provide a name and description for the account, then click **Create > Done**.
3. On the **Service Account** page, find the account that you just created, navigate to the **Details** page, and click **Keys**.
4. Go to the **Add Key** menu and click **Create new key**. Select JSON for the key type and click **Create** to generate a key and download a JSON file with the information required for the batch prediction job.

5. Return to your GCS bucket and navigate to the **Permissions** tab. Click **Add**, enter the email address for the service account user you created, and give the account the “Storage Admin” role. Click **Save** to confirm the changes. This grants your GCP service account access to the GCS bucket.
## Create stored credentials {: #create-stored-credentials }
After downloading the JSON key, use the following code to create a new credential object within DataRobot. This credential is used in the batch prediction job to connect to the GCS bucket. Open the JSON key file and copy its contents into the `key` variable; the DataRobot Python client reads the JSON data as a dictionary and parses it accordingly.
```python
import datarobot as dr

# Set a name for the GCP credential in DataRobot
DR_CREDENTIAL_NAME = "YOUR GCP DATAROBOT CREDENTIAL NAME"

# Create a GCP-specific credential
# NOTE: This cannot be done from the UI
# The key can be generated and downloaded from within GCP:
# 1. Go to IAM & Admin -> Service Accounts
# 2. Search for the service account you want to use (or create a new one)
# 3. Go to Keys
# 4. Click Add Key -> Create new key
# 5. Select the JSON key type
# 6. Copy the contents of the JSON file into the `key` dictionary below
key = {
    "type": "service_account",
    "project_id": "**********",
    "private_key_id": "***************",
    "private_key": "-----BEGIN PRIVATE KEY-----\n********\n-----END PRIVATE KEY-----\n",
    "client_email": "********",
    "client_id": "********",
    "auth_uri": "https://accounts.google.com/o/oauth2/auth",
    "token_uri": "https://oauth2.googleapis.com/token",
    "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
    "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/*********"
}

credential = dr.Credential.create_gcp(
    name=DR_CREDENTIAL_NAME,
    gcp_key=key
)

# Look up the ID of the credential object just created
credential_id = None
for cred in dr.Credential.list():
    if cred.name == DR_CREDENTIAL_NAME:
        credential_id = cred.credential_id
        break
print(credential_id)
```
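Rather than pasting the key's contents inline, you can load it from the downloaded file. Below is a minimal sketch; the helper function and file name are hypothetical, not part of the DataRobot client:

```python
import json

def load_gcp_key(path):
    """Load a GCP service-account key file into the dict expected by
    the gcp_key argument of dr.Credential.create_gcp."""
    with open(path) as f:
        key = json.load(f)
    # Basic sanity check before creating the DataRobot credential
    if key.get("type") != "service_account":
        raise ValueError(f"{path} does not look like a service-account key file")
    return key

# Example (hypothetical file name):
# key = load_gcp_key("service-account-key.json")
```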
## Run the prediction job {: #run-the-prediction-job }
With a credential object created, you can configure the batch prediction job. Set both `intake_settings` and `output_settings` to the `gcp` type and, for each, provide the URL of the file in GCS to read from or write to (the output file does not need to exist yet), along with the ID of the credential object created above. The code below creates and runs the batch prediction job and, once finished, reports the job's status. It also demonstrates how to configure the job to return both Prediction Explanations and passthrough columns for the scoring data.
!!! note
You can find the deployment ID in the sample code output of the [**Deployments > Predictions > Prediction API**](code-py) tab (with **Interface** set to "API Client").
```python
DEPLOYMENT_ID = 'YOUR DEPLOYMENT ID'

# Set GCP info
GCP_BUCKET_NAME = "YOUR GCS BUCKET NAME"
GCP_INPUT_SCORING_FILE = "YOUR INPUT SCORING FILE NAME"
GCP_OUTPUT_RESULTS_FILE = "YOUR OUTPUT RESULTS FILE NAME"

# Set up the batch prediction job
# Input: Google Cloud Storage
# Output: Google Cloud Storage
job = dr.BatchPredictionJob.score(
    deployment=DEPLOYMENT_ID,
    intake_settings={
        'type': 'gcp',
        'url': "gs://{}/{}".format(GCP_BUCKET_NAME, GCP_INPUT_SCORING_FILE),
        'credential_id': credential_id
    },
    output_settings={
        'type': 'gcp',
        'url': "gs://{}/{}".format(GCP_BUCKET_NAME, GCP_OUTPUT_RESULTS_FILE),
        'credential_id': credential_id
    },
    # Remove this argument if Prediction Explanations are not required
    max_explanations=5,
    # Remove this argument if passthrough columns are not required
    passthrough_columns=['column1', 'column2']
)
job.wait_for_completion()
print(job.get_status())
```
When the job completes successfully, you will see the output file in the GCS bucket.
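After downloading the output file from the bucket, you can inspect the scored CSV with the standard library. The sketch below assumes the output contains a prediction column named `prediction`; the exact column names in your file depend on the deployment and job settings:

```python
import csv

def summarize_predictions(path, prediction_column="prediction"):
    """Tally the values in a prediction column of a scored output CSV."""
    counts = {}
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            label = row.get(prediction_column)
            counts[label] = counts.get(label, 0) + 1
    return counts
```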
## Documentation {: #documentation }
* [Prediction API overview](../../reference/predapi/index)
* [DataRobot Batch Prediction API](../../reference/batch-prediction-api/index)
---
title: R client v2.29 reference documentation
description: Learn about the new methods in version 2.29 of DataRobot's R client available for public preview.
---
# R client v2.29 reference documentation
The table below outlines the major classes for the methods introduced in v2.29 of the R client.
Each endpoint has a corresponding function in the `datarobot.apicore` package, as well as a "wrapper function" in the `datarobot` package that is consistent with the existing interface. DataRobot recommends using the wrapper functions if you have used earlier versions of the API client.
All URIs are relative to *https://app.datarobot.com/api/v2*.
### AiCatalogApi
| Functions and Description | HTTP request |
| ------------- | ------------- |
| <ul><li>**CatalogItemsList**</li><li>datarobot::ListCatalogItems</li></ul>*List all catalog items accessible by the user.* | **GET** /catalogItems/ |
| <ul><li>**CatalogItemsPatch**</li><li>datarobot::PatchCatalogItems</li></ul>*Update the name, description, or tags for the requested catalog item.* | **PATCH** /catalogItems/{catalogId}/ |
| <ul><li>**CatalogItemsRetrieve**</li><li>datarobot::RetrieveCatalogItems</li></ul>*Retrieve the latest version information for a catalog item, by ID.* | **GET** /catalogItems/{catalogId}/ |
| <ul><li>**DataEngineQueryGeneratorsCreate**</li><li>datarobot::CreateDataEngineQueryGenerator</li></ul>*Create a data engine query generator* | **POST** /dataEngineQueryGenerators/ |
| <ul><li>**DataEngineQueryGeneratorsRetrieve**</li><li>datarobot::RetrieveDataEngineQueryGenerators</li></ul>*Retrieve a data engine query generator by ID.* | **GET** /dataEngineQueryGenerators/{dataEngineQueryGeneratorId}/ |
| <ul><li>**DataEngineWorkspaceStatesCreate**</li><li>datarobot::CreateDataEngineWorkspaceStates</li></ul>*Create Data Engine workspace state* | **POST** /dataEngineWorkspaceStates/ |
| <ul><li>**DataEngineWorkspaceStatesFromDataEngineQueryGeneratorCreate**</li><li>datarobot::CreateDatasetFromDataEngineQueryGenerator</li></ul>*Create Data Engine workspace state from a query generator* | **POST** /dataEngineWorkspaceStates/fromDataEngineQueryGenerator/ |
| <ul><li>**DataEngineWorkspaceStatesRetrieve**</li><li>datarobot::RetrieveDataEngineWorkspaceStates</li></ul>*Read Data Engine workspace state* | **GET** /dataEngineWorkspaceStates/{workspaceStateId}/ |
| <ul><li>**DatasetsAccessControlList**</li><li>datarobot::ListDatasetsAccessControl</li></ul>*List dataset access* | **GET** /datasets/{datasetId}/accessControl/ |
| <ul><li>**DatasetsAccessControlPatchMany**</li><li>datarobot::PatchManyDatasetsAccessControl</li></ul>*Modify dataset access* | **PATCH** /datasets/{datasetId}/accessControl/ |
| <ul><li>**DatasetsAllFeaturesDetailsList**</li><li>datarobot::ListDatasetsAllFeaturesDetails</li></ul>*Get dataset features* | **GET** /datasets/{datasetId}/allFeaturesDetails/ |
| <ul><li>**DatasetsDelete**</li><li>datarobot::DeleteDatasets</li></ul>*Delete dataset* | **DELETE** /datasets/{datasetId}/ |
| <ul><li>**DatasetsDeletedPatchMany**</li><li>datarobot::PatchManyDatasetsDeleted</li></ul>*Recover deleted dataset* | **PATCH** /datasets/{datasetId}/deleted/ |
| <ul><li>**DatasetsFeatureHistogramsRetrieve**</li><li>datarobot::RetrieveDatasetsFeatureHistograms</li></ul>*Get dataset feature histogram* | **GET** /datasets/{datasetId}/featureHistograms/{featureName}/ |
| <ul><li>**DatasetsFeatureTransformsCreate**</li><li>datarobot::CreateDatasetsFeatureTransforms</li></ul>*Create dataset feature transform* | **POST** /datasets/{datasetId}/featureTransforms/ |
| <ul><li>**DatasetsFeatureTransformsList**</li><li>datarobot::ListDatasetsFeatureTransforms</li></ul>*List dataset feature transforms* | **GET** /datasets/{datasetId}/featureTransforms/ |
| <ul><li>**DatasetsFeatureTransformsRetrieve**</li><li>datarobot::RetrieveDatasetsFeatureTransforms</li></ul>*Get dataset feature transform* | **GET** /datasets/{datasetId}/featureTransforms/{featureName}/ |
| <ul><li>**DatasetsFeaturelistsCreate**</li><li>datarobot::CreateDatasetsFeaturelists</li></ul>*Create dataset featurelist* | **POST** /datasets/{datasetId}/featurelists/ |
| <ul><li>**DatasetsFeaturelistsDelete**</li><li>datarobot::DeleteDatasetsFeaturelists</li></ul>*Delete dataset featurelist* | **DELETE** /datasets/{datasetId}/featurelists/{featurelistId}/ |
| <ul><li>**DatasetsFeaturelistsList**</li><li>datarobot::ListDatasetsFeaturelists</li></ul>*Retrieve dataset featurelists* | **GET** /datasets/{datasetId}/featurelists/ |
| <ul><li>**DatasetsFeaturelistsPatch**</li><li>datarobot::PatchDatasetsFeaturelists</li></ul>*Update dataset featurelist* | **PATCH** /datasets/{datasetId}/featurelists/{featurelistId}/ |
| <ul><li>**DatasetsFeaturelistsRetrieve**</li><li>datarobot::RetrieveDatasetsFeaturelists</li></ul>*Get dataset featurelist* | **GET** /datasets/{datasetId}/featurelists/{featurelistId}/ |
| <ul><li>**DatasetsFileList**</li><li>datarobot::ListDatasetsFile</li></ul>*Retrieve original dataset data* | **GET** /datasets/{datasetId}/file/ |
| <ul><li>**DatasetsFromDataEngineWorkspaceStateCreate**</li><li>datarobot::CreateDatasetsFromDataEngineWorkspaceState</li></ul>*Create dataset from Data Engine workspace* | **POST** /datasets/fromDataEngineWorkspaceState/ |
| <ul><li>**DatasetsFromDataSourceCreate**</li><li>datarobot::CreateDatasetsFromDataSource</li></ul>*Create dataset from data source* | **POST** /datasets/fromDataSource/ |
| <ul><li>**DatasetsFromFileCreate**</li><li>datarobot::CreateDatasetsFromFile</li></ul>*Create dataset from file* | **POST** /datasets/fromFile/ |
| <ul><li>**DatasetsFromHDFSCreate**</li><li>datarobot::CreateDatasetsFromHDFS</li></ul>*Create dataset from HDFS URL* | **POST** /datasets/fromHDFS/ |
| <ul><li>**DatasetsFromURLCreate**</li><li>datarobot::CreateDatasetsFromURL</li></ul>*Create dataset from URL* | **POST** /datasets/fromURL/ |
| <ul><li>**DatasetsList**</li><li>datarobot::ListDatasets</li></ul>*List datasets* | **GET** /datasets/ |
| <ul><li>**DatasetsPatch**</li><li>datarobot::PatchDatasets</li></ul>*Modify dataset* | **PATCH** /datasets/{datasetId}/ |
| <ul><li>**DatasetsPatchMany**</li><li>datarobot::PatchManyDatasets</li></ul>*Execute bulk dataset action* | **PATCH** /datasets/ |
| <ul><li>**DatasetsPermissionsList**</li><li>datarobot::ListDatasetsPermissions</li></ul>*Describe dataset permissions* | **GET** /datasets/{datasetId}/permissions/ |
| <ul><li>**DatasetsProjectsList**</li><li>datarobot::ListDatasetsProjects</li></ul>*Get dataset projects* | **GET** /datasets/{datasetId}/projects/ |
| <ul><li>**DatasetsRefreshJobsCreate**</li><li>datarobot::CreateDatasetsRefreshJobs</li></ul>*Schedule dataset refresh* | **POST** /datasets/{datasetId}/refreshJobs/ |
| <ul><li>**DatasetsRefreshJobsDelete**</li><li>datarobot::DeleteDatasetsRefreshJobs</li></ul>*Delete an existing dataset refresh job* | **DELETE** /datasets/{datasetId}/refreshJobs/{jobId}/ |
| <ul><li>**DatasetsRefreshJobsExecutionResultsList**</li><li>datarobot::ListDatasetsRefreshJobsExecutionResults</li></ul>*Retrieve the results of a dataset refresh job.* | **GET** /datasets/{datasetId}/refreshJobs/{jobId}/executionResults/ |
| <ul><li>**DatasetsRefreshJobsList**</li><li>datarobot::ListDatasetsRefreshJobs</li></ul>*Retrieve information about scheduled jobs for a given dataset.* | **GET** /datasets/{datasetId}/refreshJobs/ |
| <ul><li>**DatasetsRefreshJobsPatch**</li><li>datarobot::PatchDatasetsRefreshJobs</li></ul>*Update a dataset refresh job* | **PATCH** /datasets/{datasetId}/refreshJobs/{jobId}/ |
| <ul><li>**DatasetsRefreshJobsRetrieve**</li><li>datarobot::RetrieveDatasetsRefreshJobs</li></ul>*Get the configuration of a user-scheduled dataset refresh job by job ID* | **GET** /datasets/{datasetId}/refreshJobs/{jobId}/ |
| <ul><li>**DatasetsRelationshipsCreate**</li><li>datarobot::CreateDatasetsRelationships</li></ul>*Create dataset relationship.* | **POST** /datasets/{datasetId}/relationships/ |
| <ul><li>**DatasetsRelationshipsDelete**</li><li>datarobot::DeleteDatasetsRelationships</li></ul>*Delete dataset relationship.* | **DELETE** /datasets/{datasetId}/relationships/{datasetRelationshipId}/ |
| <ul><li>**DatasetsRelationshipsList**</li><li>datarobot::ListDatasetsRelationships</li></ul>*List related datasets* | **GET** /datasets/{datasetId}/relationships/ |
| <ul><li>**DatasetsRelationshipsPatch**</li><li>datarobot::PatchDatasetsRelationships</li></ul>*Update dataset relationship.* | **PATCH** /datasets/{datasetId}/relationships/{datasetRelationshipId}/ |
| <ul><li>**DatasetsRetrieve**</li><li>datarobot::RetrieveDatasets</li></ul>*Get dataset details* | **GET** /datasets/{datasetId}/ |
| <ul><li>**DatasetsSharedRolesList**</li><li>datarobot::ListDatasetsSharedRoles</li></ul>*List dataset shared roles* | **GET** /datasets/{datasetId}/sharedRoles/ |
| <ul><li>**DatasetsSharedRolesPatchMany**</li><li>datarobot::PatchManyDatasetsSharedRoles</li></ul>*Modify dataset shared roles* | **PATCH** /datasets/{datasetId}/sharedRoles/ |
| <ul><li>**DatasetsVersionsAllFeaturesDetailsList**</li><li>datarobot::ListDatasetsVersionsAllFeaturesDetails</li></ul>*Get dataset features* | **GET** /datasets/{datasetId}/versions/{datasetVersionId}/allFeaturesDetails/ |
| <ul><li>**DatasetsVersionsDelete**</li><li>datarobot::DeleteDatasetsVersions</li></ul>*Delete dataset version* | **DELETE** /datasets/{datasetId}/versions/{datasetVersionId}/ |
| <ul><li>**DatasetsVersionsDeletedPatchMany**</li><li>datarobot::PatchManyDatasetsVersionsDeleted</li></ul>*Recover deleted dataset version* | **PATCH** /datasets/{datasetId}/versions/{datasetVersionId}/deleted/ |
| <ul><li>**DatasetsVersionsFeatureHistogramsRetrieve**</li><li>datarobot::RetrieveDatasetsVersionsFeatureHistograms</li></ul>*Get dataset feature histogram* | **GET** /datasets/{datasetId}/versions/{datasetVersionId}/featureHistograms/{featureName}/ |
| <ul><li>**DatasetsVersionsFeaturelistsList**</li><li>datarobot::ListDatasetsVersionsFeaturelists</li></ul>*Retrieve dataset featurelists* | **GET** /datasets/{datasetId}/versions/{datasetVersionId}/featurelists/ |
| <ul><li>**DatasetsVersionsFeaturelistsRetrieve**</li><li>datarobot::RetrieveDatasetsVersionsFeaturelists</li></ul>*Get dataset featurelist* | **GET** /datasets/{datasetId}/versions/{datasetVersionId}/featurelists/{featurelistId}/ |
| <ul><li>**DatasetsVersionsFileList**</li><li>datarobot::ListDatasetsVersionsFile</li></ul>*Retrieve original dataset data* | **GET** /datasets/{datasetId}/versions/{datasetVersionId}/file/ |
| <ul><li>**DatasetsVersionsFromDataEngineWorkspaceStateCreate**</li><li>datarobot::CreateDatasetsVersionsFromDataEngineWorkspaceState</li></ul>*Create dataset version from Data Engine workspace* | **POST** /datasets/{datasetId}/versions/fromDataEngineWorkspaceState/ |
| <ul><li>**DatasetsVersionsFromDataSourceCreate**</li><li>datarobot::CreateDatasetsVersionsFromDataSource</li></ul>*Create dataset version from Data Source* | **POST** /datasets/{datasetId}/versions/fromDataSource/ |
| <ul><li>**DatasetsVersionsFromFileCreate**</li><li>datarobot::CreateDatasetsVersionsFromFile</li></ul>*Create dataset version from file* | **POST** /datasets/{datasetId}/versions/fromFile/ |
| <ul><li>**DatasetsVersionsFromHDFSCreate**</li><li>datarobot::CreateDatasetsVersionsFromHDFS</li></ul>*Create dataset version from HDFS URL* | **POST** /datasets/{datasetId}/versions/fromHDFS/ |
| <ul><li>**DatasetsVersionsFromLatestVersionCreate**</li><li>datarobot::CreateDatasetsVersionsFromLatestVersion</li></ul>*Create dataset version from data source* | **POST** /datasets/{datasetId}/versions/fromLatestVersion/ |
| <ul><li>**DatasetsVersionsFromURLCreate**</li><li>datarobot::CreateDatasetsVersionsFromURL</li></ul>*Create dataset version from URL* | **POST** /datasets/{datasetId}/versions/fromURL/ |
| <ul><li>**DatasetsVersionsFromVersionCreate**</li><li>datarobot::CreateDatasetsVersionsFromVersion</li></ul>*Create dataset version from previous version* | **POST** /datasets/{datasetId}/versions/{datasetVersionId}/fromVersion/ |
| <ul><li>**DatasetsVersionsList**</li><li>datarobot::ListDatasetsVersions</li></ul>*List dataset versions* | **GET** /datasets/{datasetId}/versions/ |
| <ul><li>**DatasetsVersionsProjectsList**</li><li>datarobot::ListDatasetsVersionsProjects</li></ul>*Get dataset projects by version* | **GET** /datasets/{datasetId}/versions/{datasetVersionId}/projects/ |
| <ul><li>**DatasetsVersionsRetrieve**</li><li>datarobot::RetrieveDatasetsVersions</li></ul>*Get dataset details by version* | **GET** /datasets/{datasetId}/versions/{datasetVersionId}/ |
| <ul><li>**UserBlueprintsBulkValidationsCreate**</li><li>datarobot::CreateUserBlueprintsBulkValidations</li></ul>*Validate many user blueprints.* | **POST** /userBlueprintsBulkValidations/ |
| <ul><li>**UserBlueprintsCreate**</li><li>datarobot::CreateUserBlueprints</li></ul>*Create a user blueprint.* | **POST** /userBlueprints/ |
| <ul><li>**UserBlueprintsDelete**</li><li>datarobot::DeleteUserBlueprints</li></ul>*Delete a user blueprint.* | **DELETE** /userBlueprints/{userBlueprintId}/ |
| <ul><li>**UserBlueprintsDeleteMany**</li><li>datarobot::DeleteManyUserBlueprints</li></ul>*Delete user blueprints.* | **DELETE** /userBlueprints/ |
| <ul><li>**UserBlueprintsFromBlueprintIdCreate**</li><li>datarobot::CreateUserBlueprintsFromBlueprintId</li></ul>*Clone a blueprint from a project.* | **POST** /userBlueprints/fromBlueprintId/ |
| <ul><li>**UserBlueprintsFromCustomTaskVersionIdCreate**</li><li>datarobot::CreateUserBlueprintsFromCustomTaskVersionId</li></ul>*Create a user blueprint from a single custom task.* | **POST** /userBlueprints/fromCustomTaskVersionId/ |
| <ul><li>**UserBlueprintsFromUserBlueprintIdCreate**</li><li>datarobot::CreateUserBlueprintsFromUserBlueprintId</li></ul>*Clone a user blueprint.* | **POST** /userBlueprints/fromUserBlueprintId/ |
| <ul><li>**UserBlueprintsInputTypesList**</li><li>datarobot::ListUserBlueprintsInputTypes</li></ul>*Retrieve input types.* | **GET** /userBlueprintsInputTypes/ |
| <ul><li>**UserBlueprintsList**</li><li>datarobot::ListUserBlueprints</li></ul>*List user blueprints.* | **GET** /userBlueprints/ |
| <ul><li>**UserBlueprintsPatch**</li><li>datarobot::PatchUserBlueprints</li></ul>*Update a user blueprint.* | **PATCH** /userBlueprints/{userBlueprintId}/ |
| <ul><li>**UserBlueprintsProjectBlueprintsCreate**</li><li>datarobot::CreateUserBlueprintsProjectBlueprints</li></ul>*Add user blueprints to a project.* | **POST** /userBlueprintsProjectBlueprints/ |
| <ul><li>**UserBlueprintsRetrieve**</li><li>datarobot::RetrieveUserBlueprints</li></ul>*Retrieve a user blueprint.* | **GET** /userBlueprints/{userBlueprintId}/ |
| <ul><li>**UserBlueprintsSharedRolesList**</li><li>datarobot::ListUserBlueprintsSharedRoles</li></ul>*Get a list of users, groups, and organizations that have access to this user blueprint* | **GET** /userBlueprints/{userBlueprintId}/sharedRoles/ |
| <ul><li>**UserBlueprintsSharedRolesPatchMany**</li><li>datarobot::PatchManyUserBlueprintsSharedRoles</li></ul>*Share a user blueprint with a user, group, or organization* | **PATCH** /userBlueprints/{userBlueprintId}/sharedRoles/ |
| <ul><li>**UserBlueprintsTaskParametersCreate**</li><li>datarobot::CreateUserBlueprintsTaskParameters</li></ul>*Validate task parameters.* | **POST** /userBlueprintsTaskParameters/ |
| <ul><li>**UserBlueprintsTasksList**</li><li>datarobot::ListUserBlueprintsTasks</li></ul>*Retrieve tasks for blueprint construction.* | **GET** /userBlueprintsTasks/ |
| <ul><li>**UserBlueprintsValidationsCreate**</li><li>datarobot::CreateUserBlueprintsValidations</li></ul>*Validate a user blueprint.* | **POST** /userBlueprintsValidations/ |
### AnalyticsApi
| Functions and Description | HTTP request |
| ------------- | ------------- |
| <ul><li>**EventLogsEventsList**</li><li>datarobot::ListEventLogsEvents</li></ul>*Retrieve all the available events. DEPRECATED API.* | **GET** /eventLogs/events/ |
| <ul><li>**EventLogsList**</li><li>datarobot::ListEventLogs</li></ul>*Retrieve one page of audit log records.* | **GET** /eventLogs/ |
| <ul><li>**EventLogsPredictionUsageList**</li><li>datarobot::ListEventLogsPredictionUsage</li></ul>*Retrieve prediction usage data.* | **GET** /eventLogs/predictionUsage/ |
| <ul><li>**EventLogsRetrieve**</li><li>datarobot::RetrieveEventLogs</li></ul>*Get audit record by ID.* | **GET** /eventLogs/{recordId}/ |
| <ul><li>**UsageDataExportsCreate**</li><li>datarobot::CreateUsageDataExports</li></ul>*Create a customer usage data artifact request. Requires the "CAN_ACCESS_USER_ACTIVITY" permission.* | **POST** /usageDataExports/ |
| <ul><li>**UsageDataExportsRetrieve**</li><li>datarobot::RetrieveUsageDataExports</li></ul>*Retrieve a prepared customer usage data artifact.* | **GET** /usageDataExports/{artifactId}/ |
| <ul><li>**UsageDataExportsSupportedEventsList**</li><li>datarobot::ListUsageDataExportsSupportedEvents</li></ul>*Describe supported available audit events with which to filter result data.* | **GET** /usageDataExports/supportedEvents/ |
| <ul><li>**UsersRateLimitUsageDelete**</li><li>datarobot::DeleteUsersRateLimitUsage</li></ul>*Reset resource usage for the resource* | **DELETE** /users/{userId}/rateLimitUsage/{resourceName}/ |
| <ul><li>**UsersRateLimitUsageDeleteMany**</li><li>datarobot::DeleteManyUsersRateLimitUsage</li></ul>*Reset resource usage for all resources* | **DELETE** /users/{userId}/rateLimitUsage/ |
| <ul><li>**UsersRateLimitUsageList**</li><li>datarobot::ListUsersRateLimitUsage</li></ul>*List resource usage for a user* | **GET** /users/{userId}/rateLimitUsage/ |
### ApplicationsApi
| Functions and Description | HTTP request |
| ------------- | ------------- |
| <ul><li>**ApplicationUserRoleRetrieve**</li><li>datarobot::RetrieveApplicationUserRole</li></ul>*Get application user role* | **GET** /applications/{applicationId}/userRole/ |
| <ul><li>**ApplicationsAccessControlList**</li><li>datarobot::ListApplicationsAccessControl</li></ul>*A list of users with access to this application* | **GET** /applications/{applicationId}/accessControl/ |
| <ul><li>**ApplicationsAccessControlPatchMany**</li><li>datarobot::PatchManyApplicationsAccessControl</li></ul>*Update access control for this application.* | **PATCH** /applications/{applicationId}/accessControl/ |
| <ul><li>**ApplicationsCreate**</li><li>datarobot::CreateApplications</li></ul>*Create an application* | **POST** /applications/ |
| <ul><li>**ApplicationsDelete**</li><li>datarobot::DeleteApplications</li></ul>*Delete an application* | **DELETE** /applications/{applicationId}/ |
| <ul><li>**ApplicationsDeploymentsCreate**</li><li>datarobot::CreateApplicationsDeployments</li></ul>*Links a deployment to an application* | **POST** /applications/{applicationId}/deployments/ |
| <ul><li>**ApplicationsDeploymentsDelete**</li><li>datarobot::DeleteApplicationsDeployments</li></ul>*Delete link between application and deployment.* | **DELETE** /applications/{applicationId}/deployments/{modelDeploymentId}/ |
| <ul><li>**ApplicationsDuplicateCreate**</li><li>datarobot::CreateApplicationsDuplicate</li></ul>*Create a duplicate of the application* | **POST** /applications/{applicationId}/duplicate/ |
| <ul><li>**ApplicationsList**</li><li>datarobot::ListApplications</li></ul>*Paginated list of applications created by the currently authenticated user.* | **GET** /applications/ |
| <ul><li>**ApplicationsPatch**</li><li>datarobot::PatchApplications</li></ul>*Update an application's name and/or description* | **PATCH** /applications/{applicationId}/ |
| <ul><li>**ApplicationsRetrieve**</li><li>datarobot::RetrieveApplications</li></ul>*Retrieve an application* | **GET** /applications/{applicationId}/ |
| <ul><li>**ApplicationsSharedRolesList**</li><li>datarobot::ListApplicationsSharedRoles</li></ul>*Get a list of users, groups, and organizations that have access to this application* | **GET** /applications/{applicationId}/sharedRoles/ |
| <ul><li>**ApplicationsSharedRolesPatchMany**</li><li>datarobot::PatchManyApplicationsSharedRoles</li></ul>*Share an application with a user, group, or organization* | **PATCH** /applications/{applicationId}/sharedRoles/ |
### BlueprintsApi
| Functions and Description | HTTP request |
| ------------- | ------------- |
| <ul><li>**ProjectsBlueprintsBlueprintChartList**</li><li>datarobot::GetBlueprintChart</li></ul>*Retrieve a blueprint chart by blueprint id.* | **GET** /projects/{projectId}/blueprints/{blueprintId}/blueprintChart/ |
| <ul><li>**ProjectsBlueprintsBlueprintDocsList**</li><li>datarobot::GetBlueprintDocumentation</li></ul>*Retrieve blueprint tasks documentation.* | **GET** /projects/{projectId}/blueprints/{blueprintId}/blueprintDocs/ |
| <ul><li>**ProjectsBlueprintsList**</li><li>datarobot::ListBlueprints</li></ul>*List blueprints* | **GET** /projects/{projectId}/blueprints/ |
| <ul><li>**ProjectsBlueprintsRetrieve**</li><li>datarobot::GetBlueprint</li></ul>*Retrieve a blueprint by its ID.* | **GET** /projects/{projectId}/blueprints/{blueprintId}/ |
| <ul><li>**ProjectsModelsBlueprintChartList**</li><li>datarobot::GetModelBlueprintChart</li></ul>*Retrieve a reduced model blueprint chart by model id.* | **GET** /projects/{projectId}/models/{modelId}/blueprintChart/ |
| <ul><li>**ProjectsModelsBlueprintDocsList**</li><li>datarobot::GetModelBlueprintDocumentation</li></ul>*Retrieve task documentation for a reduced model blueprint.* | **GET** /projects/{projectId}/models/{modelId}/blueprintDocs/ |
| <ul><li>**ProjectsModelsLogsList**</li><li>datarobot::ListProjectsModelsLogs</li></ul>*Retrieve an archive (tar.gz) of the logs produced and persisted by a model.* | **GET** /projects/{projectId}/models/{modelId}/logs/ |
| <ul><li>**ProjectsModelsTrainingArtifactList**</li><li>datarobot::ListProjectsModelsTrainingArtifact</li></ul>*Retrieve an archive (tar.gz) of the artifacts produced and persisted by a model.* | **GET** /projects/{projectId}/models/{modelId}/trainingArtifact/ |
| <ul><li>**UserBlueprintsBulkValidationsCreate**</li><li>datarobot::CreateUserBlueprintsBulkValidations</li></ul>*Validate many user blueprints.* | **POST** /userBlueprintsBulkValidations/ |
| <ul><li>**UserBlueprintsCreate**</li><li>datarobot::CreateUserBlueprints</li></ul>*Create a user blueprint.* | **POST** /userBlueprints/ |
| <ul><li>**UserBlueprintsDelete**</li><li>datarobot::DeleteUserBlueprints</li></ul>*Delete a user blueprint.* | **DELETE** /userBlueprints/{userBlueprintId}/ |
| <ul><li>**UserBlueprintsDeleteMany**</li><li>datarobot::DeleteManyUserBlueprints</li></ul>*Delete user blueprints.* | **DELETE** /userBlueprints/ |
| <ul><li>**UserBlueprintsFromBlueprintIdCreate**</li><li>datarobot::CreateUserBlueprintsFromBlueprintId</li></ul>*Clone a blueprint from a project.* | **POST** /userBlueprints/fromBlueprintId/ |
| <ul><li>**UserBlueprintsFromCustomTaskVersionIdCreate**</li><li>datarobot::CreateUserBlueprintsFromCustomTaskVersionId</li></ul>*Create a user blueprint from a single custom task.* | **POST** /userBlueprints/fromCustomTaskVersionId/ |
| <ul><li>**UserBlueprintsFromUserBlueprintIdCreate**</li><li>datarobot::CreateUserBlueprintsFromUserBlueprintId</li></ul>*Clone a user blueprint.* | **POST** /userBlueprints/fromUserBlueprintId/ |
| <ul><li>**UserBlueprintsInputTypesList**</li><li>datarobot::ListUserBlueprintsInputTypes</li></ul>*Retrieve input types.* | **GET** /userBlueprintsInputTypes/ |
| <ul><li>**UserBlueprintsList**</li><li>datarobot::ListUserBlueprints</li></ul>*List user blueprints.* | **GET** /userBlueprints/ |
| <ul><li>**UserBlueprintsPatch**</li><li>datarobot::PatchUserBlueprints</li></ul>*Update a user blueprint.* | **PATCH** /userBlueprints/{userBlueprintId}/ |
| <ul><li>**UserBlueprintsProjectBlueprintsCreate**</li><li>datarobot::CreateUserBlueprintsProjectBlueprints</li></ul>*Add user blueprints to a project.* | **POST** /userBlueprintsProjectBlueprints/ |
| <ul><li>**UserBlueprintsRetrieve**</li><li>datarobot::RetrieveUserBlueprints</li></ul>*Retrieve a user blueprint.* | **GET** /userBlueprints/{userBlueprintId}/ |
| <ul><li>**UserBlueprintsSharedRolesList**</li><li>datarobot::ListUserBlueprintsSharedRoles</li></ul>*Get a list of users, groups, and organizations that have access to this user blueprint* | **GET** /userBlueprints/{userBlueprintId}/sharedRoles/ |
| <ul><li>**UserBlueprintsSharedRolesPatchMany**</li><li>datarobot::PatchManyUserBlueprintsSharedRoles</li></ul>*Share a user blueprint with a user, group, or organization* | **PATCH** /userBlueprints/{userBlueprintId}/sharedRoles/ |
| <ul><li>**UserBlueprintsTaskParametersCreate**</li><li>datarobot::CreateUserBlueprintsTaskParameters</li></ul>*Validate task parameters.* | **POST** /userBlueprintsTaskParameters/ |
| <ul><li>**UserBlueprintsTasksList**</li><li>datarobot::ListUserBlueprintsTasks</li></ul>*Retrieve tasks for blueprint construction.* | **GET** /userBlueprintsTasks/ |
| <ul><li>**UserBlueprintsValidationsCreate**</li><li>datarobot::CreateUserBlueprintsValidations</li></ul>*Validate a user blueprint.* | **POST** /userBlueprintsValidations/ |
### CommentsApi
| Functions and Description | HTTP request |
| ------------- | ------------- |
| <ul><li>**CommentsCreate**</li><li>datarobot::CreateComments</li></ul>*Post a comment* | **POST** /comments/ |
| <ul><li>**CommentsDelete**</li><li>datarobot::DeleteComments</li></ul>*Delete a comment* | **DELETE** /comments/{commentId}/ |
| <ul><li>**CommentsList**</li><li>datarobot::ListComments</li></ul>*List comments* | **GET** /comments/{entityType}/{entityId}/ |
| <ul><li>**CommentsPatch**</li><li>datarobot::PatchComments</li></ul>*Update a comment* | **PATCH** /comments/{commentId}/ |
| <ul><li>**CommentsRetrieve**</li><li>datarobot::RetrieveComments</li></ul>*Retrieve a comment* | **GET** /comments/{commentId}/ |
### CredentialsApi
| Functions and Description | HTTP request |
| ------------- | ------------- |
| <ul><li>**CredentialsAssociationsListForCredential**</li><li>datarobot::ListForCredentialCredentialsAssociations</li></ul>*List all objects associated with the specified credentials.* | **GET** /credentials/{credentialId}/associations/ |
| <ul><li>**CredentialsAssociationsPatchMany**</li><li>datarobot::PatchManyCredentialsAssociations</li></ul>*Add objects associated with credentials* | **PATCH** /credentials/{credentialId}/associations/ |
| <ul><li>**CredentialsCreate**</li><li>datarobot::CreateCredentials</li></ul>*Store a new set of credentials which can be used for data source creation.* | **POST** /credentials/ |
| <ul><li>**CredentialsDelete**</li><li>datarobot::DeleteCredentials</li></ul>*Delete the credentials set.* | **DELETE** /credentials/{credentialId}/ |
| <ul><li>**CredentialsList**</li><li>datarobot::ListCredentials</li></ul>*List credentials.* | **GET** /credentials/ |
| <ul><li>**CredentialsPatch**</li><li>datarobot::PatchCredentials</li></ul>*Update specified credentials* | **PATCH** /credentials/{credentialId}/ |
| <ul><li>**CredentialsRetrieve**</li><li>datarobot::RetrieveCredentials</li></ul>*Retrieve the credentials set.* | **GET** /credentials/{credentialId}/ |
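Each row in the table above pairs a generated function name with a verb and a route template. One way to see that mapping, sketched as a small Python dispatch table over the core CRUD routes (the `resolve` helper is hypothetical, not part of the package):

```python
# Verb + route template for the core CredentialsApi functions above.
CREDENTIALS_ROUTES = {
    "CreateCredentials":   ("POST",   "/credentials/"),
    "ListCredentials":     ("GET",    "/credentials/"),
    "RetrieveCredentials": ("GET",    "/credentials/{credentialId}/"),
    "PatchCredentials":    ("PATCH",  "/credentials/{credentialId}/"),
    "DeleteCredentials":   ("DELETE", "/credentials/{credentialId}/"),
}

def resolve(name, **path_params):
    """Return (verb, concrete path) for one of the routes above."""
    verb, template = CREDENTIALS_ROUTES[name]
    return verb, template.format(**path_params)
```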
### CustomTasksApi
| Functions and Description | HTTP request |
| ------------- | ------------- |
| <ul><li>**CustomTaskVersionCreateFromLatest**</li><li>datarobot::CreateFromLatestCustomTaskVersion</li></ul>*Update custom task version files.* | **PATCH** /customTasks/{customTaskId}/versions/ |
| <ul><li>**CustomTasksAccessControlList**</li><li>datarobot::ListCustomTasksAccessControl</li></ul>*Get a list of users who have access to this custom task and their roles on it.* | **GET** /customTasks/{customTaskId}/accessControl/ |
| <ul><li>**CustomTasksAccessControlPatchMany**</li><li>datarobot::PatchManyCustomTasksAccessControl</li></ul>*Grant access or update roles for users on this custom task and appropriate learning data.* | **PATCH** /customTasks/{customTaskId}/accessControl/ |
| <ul><li>**CustomTasksCreate**</li><li>datarobot::CreateCustomTasks</li></ul>*Create a custom task* | **POST** /customTasks/ |
| <ul><li>**CustomTasksDelete**</li><li>datarobot::DeleteCustomTasks</li></ul>*Delete custom task.* | **DELETE** /customTasks/{customTaskId}/ |
| <ul><li>**CustomTasksDownloadList**</li><li>datarobot::ListCustomTasksDownload</li></ul>*Download the latest custom task version content.* | **GET** /customTasks/{customTaskId}/download/ |
| <ul><li>**CustomTasksFromCustomTaskCreate**</li><li>datarobot::CreateCustomTasksFromCustomTask</li></ul>*Clone custom task.* | **POST** /customTasks/fromCustomTask/ |
| <ul><li>**CustomTasksList**</li><li>datarobot::ListCustomTasks</li></ul>*List custom tasks.* | **GET** /customTasks/ |
| <ul><li>**CustomTasksPatch**</li><li>datarobot::PatchCustomTasks</li></ul>*Update custom task.* | **PATCH** /customTasks/{customTaskId}/ |
| <ul><li>**CustomTasksRetrieve**</li><li>datarobot::RetrieveCustomTasks</li></ul>*Get custom task.* | **GET** /customTasks/{customTaskId}/ |
| <ul><li>**CustomTasksVersionsCreate**</li><li>datarobot::CreateCustomTasksVersions</li></ul>*Create custom task version.* | **POST** /customTasks/{customTaskId}/versions/ |
| <ul><li>**CustomTasksVersionsDependencyBuildCreate**</li><li>datarobot::CreateCustomTasksVersionsDependencyBuild</li></ul>*Start a custom task version's dependency build.* | **POST** /customTasks/{customTaskId}/versions/{customTaskVersionId}/dependencyBuild/ |
| <ul><li>**CustomTasksVersionsDependencyBuildDeleteMany**</li><li>datarobot::DeleteManyCustomTasksVersionsDependencyBuild</li></ul>*Cancel dependency build.* | **DELETE** /customTasks/{customTaskId}/versions/{customTaskVersionId}/dependencyBuild/ |
| <ul><li>**CustomTasksVersionsDependencyBuildList**</li><li>datarobot::ListCustomTasksVersionsDependencyBuild</li></ul>*Retrieve the custom task version's dependency build status.* | **GET** /customTasks/{customTaskId}/versions/{customTaskVersionId}/dependencyBuild/ |
| <ul><li>**CustomTasksVersionsDependencyBuildLogList**</li><li>datarobot::ListCustomTasksVersionsDependencyBuildLog</li></ul>*Retrieve the custom task version's dependency build log.* | **GET** /customTasks/{customTaskId}/versions/{customTaskVersionId}/dependencyBuildLog/ |
| <ul><li>**CustomTasksVersionsDownloadList**</li><li>datarobot::ListCustomTasksVersionsDownload</li></ul>*Download custom task version content.* | **GET** /customTasks/{customTaskId}/versions/{customTaskVersionId}/download/ |
| <ul><li>**CustomTasksVersionsFromRepositoryCreate**</li><li>datarobot::CreateCustomTasksVersionsFromRepository</li></ul>*Create custom task version from remote repository.* | **POST** /customTasks/{customTaskId}/versions/fromRepository/ |
| <ul><li>**CustomTasksVersionsFromRepositoryPatchMany**</li><li>datarobot::PatchManyCustomTasksVersionsFromRepository</li></ul>*Create custom task version from remote repository with files from previous version.* | **PATCH** /customTasks/{customTaskId}/versions/fromRepository/ |
| <ul><li>**CustomTasksVersionsList**</li><li>datarobot::ListCustomTasksVersions</li></ul>*List custom task versions.* | **GET** /customTasks/{customTaskId}/versions/ |
| <ul><li>**CustomTasksVersionsPatch**</li><li>datarobot::PatchCustomTasksVersions</li></ul>*Update custom task version.* | **PATCH** /customTasks/{customTaskId}/versions/{customTaskVersionId}/ |
| <ul><li>**CustomTasksVersionsRetrieve**</li><li>datarobot::RetrieveCustomTasksVersions</li></ul>*Get custom task version.* | **GET** /customTasks/{customTaskId}/versions/{customTaskVersionId}/ |
### DataConnectivityApi
| Functions and Description | HTTP request |
| ------------- | ------------- |
| <ul><li>**ExternalDataDriverFileCreate**</li><li>datarobot::CreateExternalDataDriverFile</li></ul>*Upload JDBC driver from file.* | **POST** /externalDataDriverFile/ |
| <ul><li>**ExternalDataDriversConfigurationList**</li><li>datarobot::ListExternalDataDriversConfiguration</li></ul>*Driver configuration details.* | **GET** /externalDataDrivers/{driverId}/configuration/ |
| <ul><li>**ExternalDataDriversCreate**</li><li>datarobot::CreateExternalDataDrivers</li></ul>*Create a new JDBC driver.* | **POST** /externalDataDrivers/ |
| <ul><li>**ExternalDataDriversDelete**</li><li>datarobot::DeleteExternalDataDrivers</li></ul>*Delete the driver.* | **DELETE** /externalDataDrivers/{driverId}/ |
| <ul><li>**ExternalDataDriversList**</li><li>datarobot::ListDrivers</li></ul>*List drivers* | **GET** /externalDataDrivers/ |
| <ul><li>**ExternalDataDriversPatch**</li><li>datarobot::PatchExternalDataDrivers</li></ul>*Update properties of an existing JDBC Driver.* | **PATCH** /externalDataDrivers/{driverId}/ |
| <ul><li>**ExternalDataDriversRetrieve**</li><li>datarobot::GetDriver</li></ul>*Retrieve driver details.* | **GET** /externalDataDrivers/{driverId}/ |
| <ul><li>**ExternalDataSourcesAccessControlList**</li><li>datarobot::ListExternalDataSourcesAccessControl</li></ul>*Get data source's access control list* | **GET** /externalDataSources/{dataSourceId}/accessControl/ |
| <ul><li>**ExternalDataSourcesAccessControlPatchMany**</li><li>datarobot::PatchManyExternalDataSourcesAccessControl</li></ul>*Update data source's access controls* | **PATCH** /externalDataSources/{dataSourceId}/accessControl/ |
| <ul><li>**ExternalDataSourcesCreate**</li><li>datarobot::CreateDataSource</li></ul>*Create a data source.* | **POST** /externalDataSources/ |
| <ul><li>**ExternalDataSourcesDelete**</li><li>datarobot::DeleteDataSource</li></ul>*Delete the data source.* | **DELETE** /externalDataSources/{dataSourceId}/ |
| <ul><li>**ExternalDataSourcesList**</li><li>datarobot::ListDataSources</li></ul>*List data sources.* | **GET** /externalDataSources/ |
| <ul><li>**ExternalDataSourcesPatch**</li><li>datarobot::UpdateDataSource</li></ul>*Update the data source.* | **PATCH** /externalDataSources/{dataSourceId}/ |
| <ul><li>**ExternalDataSourcesPermissionsList**</li><li>datarobot::ListExternalDataSourcesPermissions</li></ul>*Describe data source permissions.* | **GET** /externalDataSources/{dataSourceId}/permissions/ |
| <ul><li>**ExternalDataSourcesRetrieve**</li><li>datarobot::GetDataSource</li></ul>*Data source details.* | **GET** /externalDataSources/{dataSourceId}/ |
| <ul><li>**ExternalDataSourcesSharedRolesList**</li><li>datarobot::ListExternalDataSourcesSharedRoles</li></ul>*Get data source's access control list* | **GET** /externalDataSources/{dataSourceId}/sharedRoles/ |
| <ul><li>**ExternalDataSourcesSharedRolesPatchMany**</li><li>datarobot::PatchManyExternalDataSourcesSharedRoles</li></ul>*Modify data source shared roles.* | **PATCH** /externalDataSources/{dataSourceId}/sharedRoles/ |
| <ul><li>**ExternalDataStoresAccessControlPatchMany**</li><li>datarobot::PatchManyExternalDataStoresAccessControl</li></ul>*Update data store's access controls* | **PATCH** /externalDataStores/{dataStoreId}/accessControl/ |
| <ul><li>**ExternalDataStoresColumnsCreate**</li><li>datarobot::CreateExternalDataStoresColumns</li></ul>*Retrieves a data store's data columns.* | **POST** /externalDataStores/{dataStoreId}/columns/ |
| <ul><li>**ExternalDataStoresCreate**</li><li>datarobot::CreateDataStore</li></ul>*Create a data store.* | **POST** /externalDataStores/ |
| <ul><li>**ExternalDataStoresCredentialsList**</li><li>datarobot::ListExternalDataStoresCredentials</li></ul>*List credentials associated with the specified data store.* | **GET** /externalDataStores/{dataStoreId}/credentials/ |
| <ul><li>**ExternalDataStoresDelete**</li><li>datarobot::DeleteDataStore</li></ul>*Delete the data store.* | **DELETE** /externalDataStores/{dataStoreId}/ |
| <ul><li>**ExternalDataStoresList**</li><li>datarobot::ListDataStores</li></ul>*List data stores.* | **GET** /externalDataStores/ |
| <ul><li>**ExternalDataStoresPatch**</li><li>datarobot::UpdateDataStore</li></ul>*Updates a data store configuration.* | **PATCH** /externalDataStores/{dataStoreId}/ |
| <ul><li>**ExternalDataStoresPermissionsList**</li><li>datarobot::ListExternalDataStoresPermissions</li></ul>*Describe data store permissions.* | **GET** /externalDataStores/{dataStoreId}/permissions/ |
| <ul><li>**ExternalDataStoresRetrieve**</li><li>datarobot::GetDataStore</li></ul>*Data store details.* | **GET** /externalDataStores/{dataStoreId}/ |
| <ul><li>**ExternalDataStoresSchemasCreate**</li><li>datarobot::GetDataStoreSchemas</li></ul>*Retrieves a data store's data schemas.* | **POST** /externalDataStores/{dataStoreId}/schemas/ |
| <ul><li>**ExternalDataStoresSharedRolesList**</li><li>datarobot::ListExternalDataStoresSharedRoles</li></ul>*Get data store's access control list* | **GET** /externalDataStores/{dataStoreId}/sharedRoles/ |
| <ul><li>**ExternalDataStoresSharedRolesPatchMany**</li><li>datarobot::PatchManyExternalDataStoresSharedRoles</li></ul>*Modify data store shared roles.* | **PATCH** /externalDataStores/{dataStoreId}/sharedRoles/ |
| <ul><li>**ExternalDataStoresTablesCreate**</li><li>datarobot::GetDataStoreTables</li></ul>*Retrieves a data store's database tables (including views).* | **POST** /externalDataStores/{dataStoreId}/tables/ |
| <ul><li>**ExternalDataStoresTestCreate**</li><li>datarobot::TestDataStore</li></ul>*Tests data store connection.* | **POST** /externalDataStores/{dataStoreId}/test/ |
| <ul><li>**ExternalDataStoresVerifySQLCreate**</li><li>datarobot::CreateExternalDataStoresVerifySQL</li></ul>*Verifies a SQL query for a data store.* | **POST** /externalDataStores/{dataStoreId}/verifySQL/ |
| <ul><li>**ExternalDriverConfigurationsList**</li><li>datarobot::ListExternalDriverConfigurations</li></ul>*List available driver configurations.* | **GET** /externalDriverConfigurations/ |
| <ul><li>**ExternalDriverConfigurationsRetrieve**</li><li>datarobot::RetrieveExternalDriverConfigurations</li></ul>*Driver configuration details.* | **GET** /externalDriverConfigurations/{configurationId}/ |
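Note that several data store introspection operations above (schemas, tables, columns, connection test, SQL verification) are `POST` requests on sub-routes of a single data store, since they take connection credentials in the request body. A sketch of those concrete routes for one data store ID:

```python
def data_store_routes(data_store_id):
    """Concrete introspection routes for one data store, per the
    DataConnectivityApi table above. All are POST because they carry
    credentials or a query in the request body."""
    prefix = f"/externalDataStores/{data_store_id}/"
    return {
        "test":      ("POST", prefix + "test/"),
        "schemas":   ("POST", prefix + "schemas/"),
        "tables":    ("POST", prefix + "tables/"),
        "verifySQL": ("POST", prefix + "verifySQL/"),
    }
```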
### DatetimePartitioningApi
| Functions and Description | HTTP request |
| ------------- | ------------- |
| <ul><li>**ProjectsDatetimePartitioningCreate**</li><li>datarobot::CreateProjectsDatetimePartitioning</li></ul>*Preview the fully specified datetime partitioning generated by the requested configuration.* | **POST** /projects/{projectId}/datetimePartitioning/ |
| <ul><li>**ProjectsDatetimePartitioningList**</li><li>datarobot::GetDatetimePartition</li></ul>*Retrieve datetime partitioning configuration.* | **GET** /projects/{projectId}/datetimePartitioning/ |
| <ul><li>**ProjectsOptimizedDatetimePartitioningsCreate**</li><li>datarobot::CreateProjectsOptimizedDatetimePartitionings</li></ul>*Create an optimized datetime partitioning configuration using the target.* | **POST** /projects/{projectId}/optimizedDatetimePartitionings/ |
| <ul><li>**ProjectsOptimizedDatetimePartitioningsList**</li><li>datarobot::ListProjectsOptimizedDatetimePartitionings</li></ul>*List all created optimized datetime partitioning configurations* | **GET** /projects/{projectId}/optimizedDatetimePartitionings/ |
| <ul><li>**ProjectsOptimizedDatetimePartitioningsRetrieve**</li><li>datarobot::RetrieveProjectsOptimizedDatetimePartitionings</li></ul>*Retrieve optimized datetime partitioning configuration* | **GET** /projects/{projectId}/optimizedDatetimePartitionings/{datetimePartitioningId}/ |
| <ul><li>**ProjectsTimeSeriesFeatureLogFileList**</li><li>datarobot::DownloadTimeSeriesFeatureDerivationLog</li></ul>*Retrieve a text file containing the time series project feature log* | **GET** /projects/{projectId}/timeSeriesFeatureLog/file/ |
| <ul><li>**ProjectsTimeSeriesFeatureLogList**</li><li>datarobot::GetTimeSeriesFeatureDerivationLog</li></ul>*Retrieve the feature derivation log content and log length for a time series project as JSON.* | **GET** /projects/{projectId}/timeSeriesFeatureLog/ |
### DeploymentsApi
| Functions and Description | HTTP request |
| ------------- | ------------- |
| <ul><li>**DeletedDeploymentsList**</li><li>datarobot::ListDeletedDeployments</li></ul>*List deleted deployments* | **GET** /deletedDeployments/ |
| <ul><li>**DeletedDeploymentsPatchMany**</li><li>datarobot::PatchManyDeletedDeployments</li></ul>*Erase deleted deployments* | **PATCH** /deletedDeployments/ |
| <ul><li>**DeploymentsAccuracyList**</li><li>datarobot::GetDeploymentAccuracy</li></ul>*Retrieve accuracy metric* | **GET** /deployments/{deploymentId}/accuracy/ |
| <ul><li>**DeploymentsAccuracyOverTimeList**</li><li>datarobot::ListDeploymentsAccuracyOverTime</li></ul>*Retrieve accuracy over time data for one single metric.* | **GET** /deployments/{deploymentId}/accuracyOverTime/ |
| <ul><li>**DeploymentsActualsFromDatasetCreate**</li><li>datarobot::CreateDeploymentsActualsFromDataset</li></ul>*Submit actuals values from AI Catalog* | **POST** /deployments/{deploymentId}/actuals/fromDataset/ |
| <ul><li>**DeploymentsActualsFromJSONCreate**</li><li>datarobot::SubmitActuals</li></ul>*Submit actuals values* | **POST** /deployments/{deploymentId}/actuals/fromJSON/ |
| <ul><li>**DeploymentsCapabilitiesList**</li><li>datarobot::ListDeploymentsCapabilities</li></ul>*Retrieve capabilities.* | **GET** /deployments/{deploymentId}/capabilities/ |
| <ul><li>**DeploymentsChallengerPredictionsCreate**</li><li>datarobot::CreateDeploymentsChallengerPredictions</li></ul>*Score challenger models* | **POST** /deployments/{deploymentId}/challengerPredictions/ |
| <ul><li>**DeploymentsChallengersCreate**</li><li>datarobot::CreateDeploymentsChallengers</li></ul>*Create challenger model* | **POST** /deployments/{deploymentId}/challengers/ |
| <ul><li>**DeploymentsChallengersDelete**</li><li>datarobot::DeleteDeploymentsChallengers</li></ul>*Delete challenger model* | **DELETE** /deployments/{deploymentId}/challengers/{challengerId}/ |
| <ul><li>**DeploymentsChallengersList**</li><li>datarobot::ListDeploymentsChallengers</li></ul>*List challenger models* | **GET** /deployments/{deploymentId}/challengers/ |
| <ul><li>**DeploymentsChallengersPatch**</li><li>datarobot::PatchDeploymentsChallengers</li></ul>*Update challenger model* | **PATCH** /deployments/{deploymentId}/challengers/{challengerId}/ |
| <ul><li>**DeploymentsChallengersRetrieve**</li><li>datarobot::RetrieveDeploymentsChallengers</li></ul>*Get challenger model* | **GET** /deployments/{deploymentId}/challengers/{challengerId}/ |
| <ul><li>**DeploymentsDelete**</li><li>datarobot::DeleteDeployment</li></ul>*Delete deployment* | **DELETE** /deployments/{deploymentId}/ |
| <ul><li>**DeploymentsFeatureDriftList**</li><li>datarobot::ListDeploymentsFeatureDrift</li></ul>*Retrieve feature drift scores* | **GET** /deployments/{deploymentId}/featureDrift/ |
| <ul><li>**DeploymentsFeatureDriftOverTimeList**</li><li>datarobot::ListDeploymentsFeatureDriftOverTime</li></ul>*Retrieve drift over time info for a feature of the deployment.* | **GET** /deployments/{deploymentId}/featureDriftOverTime/ |
| <ul><li>**DeploymentsFeaturesList**</li><li>datarobot::ListDeploymentsFeatures</li></ul>*Get deployment features* | **GET** /deployments/{deploymentId}/features/ |
| <ul><li>**DeploymentsFromLearningModelCreate**</li><li>datarobot::CreateDeployment</li></ul>*Create deployment from DataRobot model* | **POST** /deployments/fromLearningModel/ |
| <ul><li>**DeploymentsHumilityStatsList**</li><li>datarobot::ListDeploymentsHumilityStats</li></ul>*Retrieve humility stats* | **GET** /deployments/{deploymentId}/humilityStats/ |
| <ul><li>**DeploymentsHumilityStatsOverTimeList**</li><li>datarobot::ListDeploymentsHumilityStatsOverTime</li></ul>*Retrieve humility stats over time* | **GET** /deployments/{deploymentId}/humilityStatsOverTime/ |
| <ul><li>**DeploymentsList**</li><li>datarobot::ListDeployments</li></ul>*List deployments* | **GET** /deployments/ |
| <ul><li>**DeploymentsModelPatchMany**</li><li>datarobot::ReplaceDeployedModel</li></ul>*Model Replacement.* | **PATCH** /deployments/{deploymentId}/model/ |
| <ul><li>**DeploymentsModelSecondaryDatasetConfigurationHistoryList**</li><li>datarobot::ListDeploymentsModelSecondaryDatasetConfigurationHistory</li></ul>*List the secondary datasets configuration history for a deployment* | **GET** /deployments/{deploymentId}/model/secondaryDatasetConfigurationHistory/ |
| <ul><li>**DeploymentsModelSecondaryDatasetConfigurationList**</li><li>datarobot::ListDeploymentsModelSecondaryDatasetConfiguration</li></ul>*Retrieve secondary datasets configuration for a deployment.* | **GET** /deployments/{deploymentId}/model/secondaryDatasetConfiguration/ |
| <ul><li>**DeploymentsModelSecondaryDatasetConfigurationPatchMany**</li><li>datarobot::PatchManyDeploymentsModelSecondaryDatasetConfiguration</li></ul>*Update the secondary datasets configuration for the deployed model.* | **PATCH** /deployments/{deploymentId}/model/secondaryDatasetConfiguration/ |
| <ul><li>**DeploymentsModelValidationCreate**</li><li>datarobot::ValidateReplaceDeployedModel</li></ul>*Model Replacement Validation.* | **POST** /deployments/{deploymentId}/model/validation/ |
| <ul><li>**DeploymentsMonitoringDataDeletionsCreate**</li><li>datarobot::CreateDeploymentsMonitoringDataDeletions</li></ul>*Endpoint for deleting deployment monitoring data.* | **POST** /deployments/{deploymentId}/monitoringDataDeletions/ |
| <ul><li>**DeploymentsOnDemandReportsCreate**</li><li>datarobot::CreateDeploymentsOnDemandReports</li></ul>*Add report to execution queue.* | **POST** /deployments/{deploymentId}/onDemandReports/ |
| <ul><li>**DeploymentsPatch**</li><li>datarobot::PatchDeployments</li></ul>*Update deployment* | **PATCH** /deployments/{deploymentId}/ |
| <ul><li>**DeploymentsPredictionInputsFromDatasetCreate**</li><li>datarobot::CreateDeploymentsPredictionInputsFromDataset</li></ul>*Submit external deployment prediction data.* | **POST** /deployments/{deploymentId}/predictionInputs/fromDataset/ |
| <ul><li>**DeploymentsPredictionResultsList**</li><li>datarobot::ListDeploymentsPredictionResults</li></ul>*Retrieve predictions results.* | **GET** /deployments/{deploymentId}/predictionResults/ |
| <ul><li>**DeploymentsPredictionsCreate**</li><li>datarobot::CreateDeploymentsPredictions</li></ul>*Compute predictions synchronously* | **POST** /deployments/{deploymentId}/predictions/ |
| <ul><li>**DeploymentsRetrainingPoliciesCreate**</li><li>datarobot::CreateDeploymentsRetrainingPolicies</li></ul>*Endpoint for creating a deployment retraining policy.* | **POST** /deployments/{deploymentId}/retrainingPolicies/ |
| <ul><li>**DeploymentsRetrainingPoliciesDelete**</li><li>datarobot::DeleteDeploymentsRetrainingPolicies</li></ul>*Endpoint for deleting a deployment retraining policy.* | **DELETE** /deployments/{deploymentId}/retrainingPolicies/{retrainingPolicyId}/ |
| <ul><li>**DeploymentsRetrainingPoliciesList**</li><li>datarobot::ListDeploymentsRetrainingPolicies</li></ul>*Endpoint for fetching a list of deployment retraining policies.* | **GET** /deployments/{deploymentId}/retrainingPolicies/ |
| <ul><li>**DeploymentsRetrainingPoliciesPatch**</li><li>datarobot::PatchDeploymentsRetrainingPolicies</li></ul>*Endpoint for updating a deployment retraining policy.* | **PATCH** /deployments/{deploymentId}/retrainingPolicies/{retrainingPolicyId}/ |
| <ul><li>**DeploymentsRetrainingPoliciesRetrieve**</li><li>datarobot::RetrieveDeploymentsRetrainingPolicies</li></ul>*Endpoint for fetching a deployment retraining policy.* | **GET** /deployments/{deploymentId}/retrainingPolicies/{retrainingPolicyId}/ |
| <ul><li>**DeploymentsRetrainingPoliciesRunsCreate**</li><li>datarobot::CreateDeploymentsRetrainingPoliciesRuns</li></ul>*Endpoint for initiating a deployment retraining policy run.* | **POST** /deployments/{deploymentId}/retrainingPolicies/{retrainingPolicyId}/runs/ |
| <ul><li>**DeploymentsRetrainingPoliciesRunsList**</li><li>datarobot::ListDeploymentsRetrainingPoliciesRuns</li></ul>*Endpoint for fetching a list of deployment retraining policy runs.* | **GET** /deployments/{deploymentId}/retrainingPolicies/{retrainingPolicyId}/runs/ |
| <ul><li>**DeploymentsRetrainingPoliciesRunsPatch**</li><li>datarobot::PatchDeploymentsRetrainingPoliciesRuns</li></ul>*Endpoint for updating a single deployment retraining policy run.* | **PATCH** /deployments/{deploymentId}/retrainingPolicies/{retrainingPolicyId}/runs/{runId}/ |
| <ul><li>**DeploymentsRetrainingPoliciesRunsRetrieve**</li><li>datarobot::RetrieveDeploymentsRetrainingPoliciesRuns</li></ul>*Endpoint for fetching a single deployment retraining policy run.* | **GET** /deployments/{deploymentId}/retrainingPolicies/{retrainingPolicyId}/runs/{runId}/ |
| <ul><li>**DeploymentsRetrainingSettingsList**</li><li>datarobot::ListDeploymentsRetrainingSettings</li></ul>*Endpoint for fetching deployment retraining settings.* | **GET** /deployments/{deploymentId}/retrainingSettings/ |
| <ul><li>**DeploymentsRetrainingSettingsPatchMany**</li><li>datarobot::PatchManyDeploymentsRetrainingSettings</li></ul>*Endpoint for updating deployment retraining settings.* | **PATCH** /deployments/{deploymentId}/retrainingSettings/ |
| <ul><li>**DeploymentsRetrieve**</li><li>datarobot::GetDeployment</li></ul>*Retrieve deployment* | **GET** /deployments/{deploymentId}/ |
| <ul><li>**DeploymentsScoringCodeBuildsCreate**</li><li>datarobot::CreateDeploymentsScoringCodeBuilds</li></ul>*Build Java package containing Scoring Code with agent integration.* | **POST** /deployments/{deploymentId}/scoringCodeBuilds/ |
| <ul><li>**DeploymentsScoringCodeList**</li><li>datarobot::ListDeploymentsScoringCode</li></ul>*Retrieve Scoring Code* | **GET** /deployments/{deploymentId}/scoringCode/ |
| <ul><li>**DeploymentsServiceStatsList**</li><li>datarobot::GetDeploymentServiceStats</li></ul>*Retrieve service statistics* | **GET** /deployments/{deploymentId}/serviceStats/ |
| <ul><li>**DeploymentsServiceStatsOverTimeList**</li><li>datarobot::ListDeploymentsServiceStatsOverTime</li></ul>*Retrieve service statistics over time* | **GET** /deployments/{deploymentId}/serviceStatsOverTime/ |
| <ul><li>**DeploymentsSettingsList**</li><li>datarobot::GetDeploymentSettings</li></ul>*Get deployment settings* | **GET** /deployments/{deploymentId}/settings/ |
| <ul><li>**DeploymentsSettingsPatchMany**</li><li>datarobot::UpdateDeploymentSettings</li></ul>*Update deployment settings* | **PATCH** /deployments/{deploymentId}/settings/ |
| <ul><li>**DeploymentsSharedRolesList**</li><li>datarobot::ListDeploymentsSharedRoles</li></ul>*Get model deployment's access control list* | **GET** /deployments/{deploymentId}/sharedRoles/ |
| <ul><li>**DeploymentsSharedRolesPatchMany**</li><li>datarobot::PatchManyDeploymentsSharedRoles</li></ul>*Update model deployment's access controls* | **PATCH** /deployments/{deploymentId}/sharedRoles/ |
| <ul><li>**DeploymentsStatusPatchMany**</li><li>datarobot::PatchManyDeploymentsStatus</li></ul>*Change deployment status* | **PATCH** /deployments/{deploymentId}/status/ |
| <ul><li>**DeploymentsTargetDriftList**</li><li>datarobot::ListDeploymentsTargetDrift</li></ul>*Retrieve target drift* | **GET** /deployments/{deploymentId}/targetDrift/ |
| <ul><li>**PredictionServersList**</li><li>datarobot::ListPredictionServers</li></ul>*List prediction servers.* | **GET** /predictionServers/ |
### DocumentationApi
| Functions and Description | HTTP request |
| ------------- | ------------- |
| <ul><li>**AutomatedDocumentOptionsList**</li><li>datarobot::ListAutomatedDocumentOptions</li></ul>*List all available document types and locales.* | **GET** /automatedDocumentOptions/ |
| <ul><li>**AutomatedDocumentsCreate**</li><li>datarobot::CreateAutomatedDocuments</li></ul>*Request generation of automated document* | **POST** /automatedDocuments/ |
| <ul><li>**AutomatedDocumentsDelete**</li><li>datarobot::DeleteAutomatedDocuments</li></ul>*Delete automated document.* | **DELETE** /automatedDocuments/{documentId}/ |
| <ul><li>**AutomatedDocumentsList**</li><li>datarobot::ListAutomatedDocuments</li></ul>*List all generated documents.* | **GET** /automatedDocuments/ |
| <ul><li>**AutomatedDocumentsRetrieve**</li><li>datarobot::RetrieveAutomatedDocuments</li></ul>*Download generated document.* | **GET** /automatedDocuments/{documentId}/ |
| <ul><li>**ComplianceDocTemplatesCreate**</li><li>datarobot::UploadComplianceDocTemplate</li></ul>*Create a new compliance documentation template* | **POST** /complianceDocTemplates/ |
| <ul><li>**ComplianceDocTemplatesDefaultList**</li><li>datarobot::ListComplianceDocTemplatesDefault</li></ul>*Retrieve the default documentation template* | **GET** /complianceDocTemplates/default/ |
| <ul><li>**ComplianceDocTemplatesDelete**</li><li>datarobot::DeleteComplianceDocTemplate</li></ul>*Delete a compliance documentation template* | **DELETE** /complianceDocTemplates/{templateId}/ |
| <ul><li>**ComplianceDocTemplatesList**</li><li>datarobot::ListComplianceDocTemplates</li></ul>*List compliance documentation templates* | **GET** /complianceDocTemplates/ |
| <ul><li>**ComplianceDocTemplatesPatch**</li><li>datarobot::UpdateComplianceDocTemplate</li></ul>*Update an existing model compliance documentation template* | **PATCH** /complianceDocTemplates/{templateId}/ |
| <ul><li>**ComplianceDocTemplatesRetrieve**</li><li>datarobot::GetComplianceDocTemplate</li></ul>*Retrieve a documentation template* | **GET** /complianceDocTemplates/{templateId}/ |
| <ul><li>**ComplianceDocTemplatesSharedRolesList**</li><li>datarobot::ListComplianceDocTemplatesSharedRoles</li></ul>*Get template's access control list* | **GET** /complianceDocTemplates/{templateId}/sharedRoles/ |
| <ul><li>**ComplianceDocTemplatesSharedRolesPatchMany**</li><li>datarobot::PatchManyComplianceDocTemplatesSharedRoles</li></ul>*Update template's access controls* | **PATCH** /complianceDocTemplates/{templateId}/sharedRoles/ |
| <ul><li>**ModelComplianceDocsInitializationsCreateOne**</li><li>datarobot::CreateOneModelComplianceDocsInitializations</li></ul>*Initialize compliance documentation pre-processing for the model* | **POST** /modelComplianceDocsInitializations/{entityId}/ |
| <ul><li>**ModelComplianceDocsInitializationsRetrieve**</li><li>datarobot::RetrieveModelComplianceDocsInitializations</li></ul>*Check if compliance documentation pre-processing is initialized for the model* | **GET** /modelComplianceDocsInitializations/{entityId}/ |
### GovernanceApi
| Functions and Description | HTTP request |
| ------------- | ------------- |
| <ul><li>**ApprovalPoliciesCreate**</li><li>datarobot::CreateApprovalPolicies</li></ul>*Create a new Approval Policy.* | **POST** /approvalPolicies/ |
| <ul><li>**ApprovalPoliciesDelete**</li><li>datarobot::DeleteApprovalPolicies</li></ul>*Delete an Approval Policy.* | **DELETE** /approvalPolicies/{approvalPolicyId}/ |
| <ul><li>**ApprovalPoliciesList**</li><li>datarobot::ListApprovalPolicies</li></ul>*List Approval Policies.* | **GET** /approvalPolicies/ |
| <ul><li>**ApprovalPoliciesPut**</li><li>datarobot::PutApprovalPolicies</li></ul>*Update an Approval Policy.* | **PUT** /approvalPolicies/{approvalPolicyId}/ |
| <ul><li>**ApprovalPoliciesRetrieve**</li><li>datarobot::RetrieveApprovalPolicies</li></ul>*Retrieve an Approval Policy.* | **GET** /approvalPolicies/{approvalPolicyId}/ |
| <ul><li>**ApprovalPoliciesShareableChangeRequestsList**</li><li>datarobot::ListApprovalPoliciesShareableChangeRequests</li></ul>*Retrieve associated Change Requests Info.* | **GET** /approvalPolicies/{approvalPolicyId}/shareableChangeRequests/ |
| <ul><li>**ApprovalPolicyMatchList**</li><li>datarobot::ListApprovalPolicyMatch</li></ul>*Get policy ID matching the query* | **GET** /approvalPolicyMatch/ |
| <ul><li>**ApprovalPolicyTriggersList**</li><li>datarobot::ListApprovalPolicyTriggers</li></ul>*Get a list of available policy triggers.* | **GET** /approvalPolicyTriggers/ |
| <ul><li>**ChangeRequestsCreate**</li><li>datarobot::CreateChangeRequests</li></ul>*Create Change Request.* | **POST** /changeRequests/ |
| <ul><li>**ChangeRequestsList**</li><li>datarobot::ListChangeRequests</li></ul>*List Change Requests.* | **GET** /changeRequests/ |
| <ul><li>**ChangeRequestsPatch**</li><li>datarobot::PatchChangeRequests</li></ul>*Update Change Request.* | **PATCH** /changeRequests/{changeRequestId}/ |
| <ul><li>**ChangeRequestsRetrieve**</li><li>datarobot::RetrieveChangeRequests</li></ul>*Retrieve Change Request.* | **GET** /changeRequests/{changeRequestId}/ |
| <ul><li>**ChangeRequestsReviewsCreate**</li><li>datarobot::CreateChangeRequestsReviews</li></ul>*Create review.* | **POST** /changeRequests/{changeRequestId}/reviews/ |
| <ul><li>**ChangeRequestsReviewsList**</li><li>datarobot::ListChangeRequestsReviews</li></ul>*List Change Request reviews.* | **GET** /changeRequests/{changeRequestId}/reviews/ |
| <ul><li>**ChangeRequestsReviewsRetrieve**</li><li>datarobot::RetrieveChangeRequestsReviews</li></ul>*Retrieve review.* | **GET** /changeRequests/{changeRequestId}/reviews/{reviewId}/ |
| <ul><li>**ChangeRequestsStatusPatchMany**</li><li>datarobot::PatchManyChangeRequestsStatus</li></ul>*Resolve or Cancel the Change Request.* | **PATCH** /changeRequests/{changeRequestId}/status/ |
| <ul><li>**ChangeRequestsSuggestedReviewersList**</li><li>datarobot::ListChangeRequestsSuggestedReviewers</li></ul>*List suggested reviewers.* | **GET** /changeRequests/{changeRequestId}/suggestedReviewers/ |
### ImagesApi
| Functions and Description | HTTP request |
| ------------- | ------------- |
| <ul><li>**DatasetsImagesDataQualityLogFileList**</li><li>datarobot::ListDatasetsImagesDataQualityLogFile</li></ul>*Retrieve a text file containing the images data quality log.* | **GET** /datasets/{datasetId}/imagesDataQualityLog/file/ |
| <ul><li>**DatasetsImagesDataQualityLogList**</li><li>datarobot::ListDatasetsImagesDataQualityLog</li></ul>*Retrieve the images data quality log content and log length as JSON* | **GET** /datasets/{datasetId}/imagesDataQualityLog/ |
| <ul><li>**ImageAugmentationListsCreate**</li><li>datarobot::CreateImageAugmentationLists</li></ul>*Creates a new augmentation list based on the posted payload data.* | **POST** /imageAugmentationLists/ |
| <ul><li>**ImageAugmentationListsDelete**</li><li>datarobot::DeleteImageAugmentationLists</li></ul>*Delete an existing augmentation list by ID.* | **DELETE** /imageAugmentationLists/{augmentationId}/ |
| <ul><li>**ImageAugmentationListsList**</li><li>datarobot::ListImageAugmentationLists</li></ul>*List augmentation lists that match the specified query.* | **GET** /imageAugmentationLists/ |
| <ul><li>**ImageAugmentationListsPatch**</li><li>datarobot::PatchImageAugmentationLists</li></ul>*Update an existing augmentation list with the passed-in values.* | **PATCH** /imageAugmentationLists/{augmentationId}/ |
| <ul><li>**ImageAugmentationListsRetrieve**</li><li>datarobot::RetrieveImageAugmentationLists</li></ul>*Returns a single augmentation list with the specified id.* | **GET** /imageAugmentationLists/{augmentationId}/ |
| <ul><li>**ImageAugmentationListsSamplesCreate**</li><li>datarobot::CreateImageAugmentationListsSamples</li></ul>*Requests the creation of sample augmentations based on the augmentation list passed in.* | **POST** /imageAugmentationLists/{augmentationId}/samples/ |
| <ul><li>**ImageAugmentationListsSamplesList**</li><li>datarobot::ListImageAugmentationListsSamples</li></ul>*Retrieve the latest augmentation samples generated for a list.* | **GET** /imageAugmentationLists/{augmentationId}/samples/ |
| <ul><li>**ImageAugmentationOptionsRetrieve**</li><li>datarobot::RetrieveImageAugmentationOptions</li></ul>*List all available augmentation transformations supported in the system.* | **GET** /imageAugmentationOptions/{projectId}/ |
| <ul><li>**ImageAugmentationSamplesCreate**</li><li>datarobot::CreateImageAugmentationSamples</li></ul>*(Deprecated in v2.28) Requests the creation of sample augmentations based on the augmentation list parameters passed in. This route is deprecated; use the `POST /imageAugmentationLists/<augmentationId>/samples` route instead.* | **POST** /imageAugmentationSamples/ |
| <ul><li>**ImageAugmentationSamplesList**</li><li>datarobot::ListImageAugmentationSamples</li></ul>*(Deprecated in v2.28) Retrieve previously generated augmentation samples. This route is deprecated; use the `GET /imageAugmentationLists/<augmentationId>/samples` route instead.* | **GET** /imageAugmentationSamples/{samplesId}/ |
| <ul><li>**ProjectsDuplicateImagesList**</li><li>datarobot::ListProjectsDuplicateImages</li></ul>*Get a list of duplicate images containing the number of occurrences of each image.* | **GET** /projects/{projectId}/duplicateImages/{column}/ |
| <ul><li>**ProjectsImageActivationMapsList**</li><li>datarobot::ListProjectsImageActivationMaps</li></ul>*List all Image Activation Maps for the project.* | **GET** /projects/{projectId}/imageActivationMaps/ |
| <ul><li>**ProjectsImageBinsList**</li><li>datarobot::ListProjectsImageBins</li></ul>*List image bins and covers for every target value or range.* | **GET** /projects/{projectId}/imageBins/ |
| <ul><li>**ProjectsImageEmbeddingsList**</li><li>datarobot::ListProjectsImageEmbeddings</li></ul>*List all Image Embeddings for the project.* | **GET** /projects/{projectId}/imageEmbeddings/ |
| <ul><li>**ProjectsImageSamplesList**</li><li>datarobot::ListProjectsImageSamples</li></ul>*List all metadata for images in the EDA sample.* | **GET** /projects/{projectId}/imageSamples/ |
| <ul><li>**ProjectsImagesDataQualityLogFileList**</li><li>datarobot::ListProjectsImagesDataQualityLogFile</li></ul>*Retrieve a text file containing the images data quality log.* | **GET** /projects/{projectId}/imagesDataQualityLog/file/ |
| <ul><li>**ProjectsImagesDataQualityLogList**</li><li>datarobot::ListProjectsImagesDataQualityLog</li></ul>*Retrieve the images data quality log content and log length as JSON.* | **GET** /projects/{projectId}/imagesDataQualityLog/ |
| <ul><li>**ProjectsImagesFileList**</li><li>datarobot::ListProjectsImagesFile</li></ul>*Returns a file for a single image* | **GET** /projects/{projectId}/images/{imageId}/file/ |
| <ul><li>**ProjectsImagesList**</li><li>datarobot::ListProjectsImages</li></ul>*Returns a list of image metadata elements.* | **GET** /projects/{projectId}/images/ |
| <ul><li>**ProjectsImagesRetrieve**</li><li>datarobot::RetrieveProjectsImages</li></ul>*Returns metadata for a single image.* | **GET** /projects/{projectId}/images/{imageId}/ |
| <ul><li>**ProjectsModelsImageActivationMapsCreate**</li><li>datarobot::CreateProjectsModelsImageActivationMaps</li></ul>*Request the computation of image activation maps for the specified model.* | **POST** /projects/{projectId}/models/{modelId}/imageActivationMaps/ |
| <ul><li>**ProjectsModelsImageActivationMapsList**</li><li>datarobot::ListProjectsModelsImageActivationMaps</li></ul>*Retrieve Image Activation Maps for a feature of a model.* | **GET** /projects/{projectId}/models/{modelId}/imageActivationMaps/ |
| <ul><li>**ProjectsModelsImageEmbeddingsCreate**</li><li>datarobot::CreateProjectsModelsImageEmbeddings</li></ul>*Request the computation of image embeddings for the specified model.* | **POST** /projects/{projectId}/models/{modelId}/imageEmbeddings/ |
| <ul><li>**ProjectsModelsImageEmbeddingsList**</li><li>datarobot::ListProjectsModelsImageEmbeddings</li></ul>*Retrieve ImageEmbeddings for a feature of a model.* | **GET** /projects/{projectId}/models/{modelId}/imageEmbeddings/ |
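Several image insights in this table expose the same path for both computation and retrieval: a `POST` queues the calculation (for example, image embeddings or activation maps for a model) and a subsequent `GET` on the same path returns the results. A short Python sketch of that shared-path pattern, with a placeholder base URL and IDs (the helper is illustrative only, not part of any client):

```python
# Shared POST/GET path for per-model image insights such as
# imageEmbeddings and imageActivationMaps. BASE and the IDs are placeholders.
BASE = "https://app.datarobot.com/api/v2"

def model_insight_url(project_id: str, model_id: str, insight: str) -> str:
    """Build the URL used to both request (POST) and fetch (GET) an insight."""
    return f"{BASE}/projects/{project_id}/models/{model_id}/{insight}/"

embeddings_url = model_insight_url("proj1", "model1", "imageEmbeddings")
```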
### InfrastructureApi
| Functions and Description | HTTP request |
| ------------- | ------------- |
| <ul><li>**ClusterLicenseList**</li><li>datarobot::ListClusterLicense</li></ul>*Retrieve information about the cluster license.* | **GET** /clusterLicense/ |
| <ul><li>**ClusterLicensePutMany**</li><li>datarobot::PutManyClusterLicense</li></ul>*Create or replace the cluster license.* | **PUT** /clusterLicense/ |
| <ul><li>**ClusterLicenseValidationCreate**</li><li>datarobot::CreateClusterLicenseValidation</li></ul>*Check whether a license is valid.* | **POST** /clusterLicenseValidation/ |
| <ul><li>**VersionList**</li><li>datarobot::GetServerVersion</li></ul>*Retrieve version information.* | **GET** /version/ |
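`GET /version/` is a convenient connectivity check before using the rest of the API. A hedged sketch that builds (but does not send) an authenticated request with the standard library; the base URL and the bearer-token header scheme are assumptions for illustration:

```python
# Build an authenticated request for GET /version/ without sending it.
# The base URL and "Bearer" auth scheme are illustrative assumptions.
import urllib.request

def version_request(base: str, token: str) -> urllib.request.Request:
    req = urllib.request.Request(base.rstrip("/") + "/version/")
    req.add_header("Authorization", f"Bearer {token}")
    return req

req = version_request("https://app.datarobot.com/api/v2", "MY_TOKEN")
# Sending it would be: urllib.request.urlopen(req)
```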
### InsightsApi
| Functions and Description | HTTP request |
| ------------- | ------------- |
| <ul><li>**MultilabelInsightsHistogramList**</li><li>datarobot::ListMultilabelInsightsHistogram</li></ul>*Retrieve multicategorical feature histogram.* | **GET** /multilabelInsights/{multilabelInsightsKey}/histogram/ |
| <ul><li>**MultilabelInsightsPairwiseManualSelectionsCreate**</li><li>datarobot::CreateMultilabelInsightsPairwiseManualSelections</li></ul>*Save a list of manually selected labels for Feature Statistics matrix.* | **POST** /multilabelInsights/{multilabelInsightsKey}/pairwiseManualSelections/ |
| <ul><li>**MultilabelInsightsPairwiseManualSelectionsDelete**</li><li>datarobot::DeleteMultilabelInsightsPairwiseManualSelections</li></ul>*Delete label list.* | **DELETE** /multilabelInsights/{multilabelInsightsKey}/pairwiseManualSelections/{manualSelectionListId}/ |
| <ul><li>**MultilabelInsightsPairwiseManualSelectionsList**</li><li>datarobot::ListMultilabelInsightsPairwiseManualSelections</li></ul>*Get all label lists.* | **GET** /multilabelInsights/{multilabelInsightsKey}/pairwiseManualSelections/ |
| <ul><li>**MultilabelInsightsPairwiseManualSelectionsPatch**</li><li>datarobot::PatchMultilabelInsightsPairwiseManualSelections</li></ul>*Update label list's name.* | **PATCH** /multilabelInsights/{multilabelInsightsKey}/pairwiseManualSelections/{manualSelectionListId}/ |
| <ul><li>**MultilabelInsightsPairwiseStatisticsList**</li><li>datarobot::ListMultilabelInsightsPairwiseStatistics</li></ul>*Retrieve pairwise statistics for the given multilabel insights key.* | **GET** /multilabelInsights/{multilabelInsightsKey}/pairwiseStatistics/ |
| <ul><li>**ProjectsAnomalyAssessmentRecordsDelete**</li><li>datarobot::DeleteAnomalyAssessmentRecord</li></ul>*Delete the anomaly assessment record.* | **DELETE** /projects/{projectId}/anomalyAssessmentRecords/{recordId}/ |
| <ul><li>**ProjectsAnomalyAssessmentRecordsExplanationsList**</li><li>datarobot::GetAnomalyAssessmentExplanations</li></ul>*Retrieve explanations for the anomaly assessment record.* | **GET** /projects/{projectId}/anomalyAssessmentRecords/{recordId}/explanations/ |
| <ul><li>**ProjectsAnomalyAssessmentRecordsList**</li><li>datarobot::ListAnomalyAssessmentRecords</li></ul>*Retrieve anomaly assessment records.* | **GET** /projects/{projectId}/anomalyAssessmentRecords/ |
| <ul><li>**ProjectsAnomalyAssessmentRecordsPredictionsPreviewList**</li><li>datarobot::GetAnomalyAssessmentPredictionsPreview</li></ul>*Retrieve predictions preview for the anomaly assessment record.* | **GET** /projects/{projectId}/anomalyAssessmentRecords/{recordId}/predictionsPreview/ |
| <ul><li>**ProjectsBiasVsAccuracyInsightsList**</li><li>datarobot::ListProjectsBiasVsAccuracyInsights</li></ul>*List Bias vs Accuracy insights.* | **GET** /projects/{projectId}/biasVsAccuracyInsights/ |
| <ul><li>**ProjectsDatetimeModelsAccuracyOverTimePlotsList**</li><li>datarobot::GetAccuracyOverTimePlot</li></ul>*Retrieve the data for the Accuracy over Time plots.* | **GET** /projects/{projectId}/datetimeModels/{modelId}/accuracyOverTimePlots/ |
| <ul><li>**ProjectsDatetimeModelsAccuracyOverTimePlotsMetadataList**</li><li>datarobot::GetAccuracyOverTimePlotsMetadata</li></ul>*Retrieve the metadata for the Accuracy over Time plots.* | **GET** /projects/{projectId}/datetimeModels/{modelId}/accuracyOverTimePlots/metadata/ |
| <ul><li>**ProjectsDatetimeModelsAccuracyOverTimePlotsPreviewList**</li><li>datarobot::GetAccuracyOverTimePlotPreview</li></ul>*Retrieve the preview for the Accuracy over Time plots.* | **GET** /projects/{projectId}/datetimeModels/{modelId}/accuracyOverTimePlots/preview/ |
| <ul><li>**ProjectsDatetimeModelsAnomalyOverTimePlotsList**</li><li>datarobot::GetAnomalyOverTimePlot</li></ul>*Retrieve the data for the Anomaly over Time plots.* | **GET** /projects/{projectId}/datetimeModels/{modelId}/anomalyOverTimePlots/ |
| <ul><li>**ProjectsDatetimeModelsAnomalyOverTimePlotsMetadataList**</li><li>datarobot::GetAnomalyOverTimePlotsMetadata</li></ul>*Retrieve the metadata for the Anomaly over Time plots.* | **GET** /projects/{projectId}/datetimeModels/{modelId}/anomalyOverTimePlots/metadata/ |
| <ul><li>**ProjectsDatetimeModelsAnomalyOverTimePlotsPreviewList**</li><li>datarobot::GetAnomalyOverTimePlotPreview</li></ul>*Retrieve the preview for the Anomaly over Time plots.* | **GET** /projects/{projectId}/datetimeModels/{modelId}/anomalyOverTimePlots/preview/ |
| <ul><li>**ProjectsDatetimeModelsBacktestStabilityPlotList**</li><li>datarobot::ListProjectsDatetimeModelsBacktestStabilityPlot</li></ul>*Retrieve a plot displaying the stability of the datetime model across different backtests.* | **GET** /projects/{projectId}/datetimeModels/{modelId}/backtestStabilityPlot/ |
| <ul><li>**ProjectsDatetimeModelsDatasetAccuracyOverTimePlotsMetadataList**</li><li>datarobot::ListProjectsDatetimeModelsDatasetAccuracyOverTimePlotsMetadata</li></ul>*Retrieve the metadata of the Accuracy Over Time (AOT) chart for an external dataset.* | **GET** /projects/{projectId}/datetimeModels/{modelId}/datasetAccuracyOverTimePlots/{datasetId}/metadata/ |
| <ul><li>**ProjectsDatetimeModelsDatasetAccuracyOverTimePlotsPreviewList**</li><li>datarobot::ListProjectsDatetimeModelsDatasetAccuracyOverTimePlotsPreview</li></ul>*Retrieve a preview of the Accuracy Over Time (AOT) chart for an external dataset.* | **GET** /projects/{projectId}/datetimeModels/{modelId}/datasetAccuracyOverTimePlots/{datasetId}/preview/ |
| <ul><li>**ProjectsDatetimeModelsDatasetAccuracyOverTimePlotsRetrieve**</li><li>datarobot::RetrieveProjectsDatetimeModelsDatasetAccuracyOverTimePlots</li></ul>*Retrieve the Accuracy Over Time (AOT) chart data for an external dataset for a project.* | **GET** /projects/{projectId}/datetimeModels/{modelId}/datasetAccuracyOverTimePlots/{datasetId}/ |
| <ul><li>**ProjectsDatetimeModelsDatetimeTrendPlotsCreate**</li><li>datarobot::ComputeDatetimeTrendPlots</li></ul>*Computes Datetime Trend plots for time series and OTV projects.* | **POST** /projects/{projectId}/datetimeModels/{modelId}/datetimeTrendPlots/ |
| <ul><li>**ProjectsDatetimeModelsFeatureEffectsCreate**</li><li>datarobot::CreateProjectsDatetimeModelsFeatureEffects</li></ul>*Add a request to the queue to calculate Feature Effects for a backtest.* | **POST** /projects/{projectId}/datetimeModels/{modelId}/featureEffects/ |
| <ul><li>**ProjectsDatetimeModelsFeatureEffectsList**</li><li>datarobot::ListProjectsDatetimeModelsFeatureEffects</li></ul>*Retrieve Feature Effects for a model backtest.* | **GET** /projects/{projectId}/datetimeModels/{modelId}/featureEffects/ |
| <ul><li>**ProjectsDatetimeModelsFeatureEffectsMetadataList**</li><li>datarobot::ListProjectsDatetimeModelsFeatureEffectsMetadata</li></ul>*Retrieve Feature Effects metadata for each backtest. Response contains status and available sources for each backtest of the model.* | **GET** /projects/{projectId}/datetimeModels/{modelId}/featureEffectsMetadata/ |
| <ul><li>**ProjectsDatetimeModelsFeatureFitCreate**</li><li>datarobot::CreateProjectsDatetimeModelsFeatureFit</li></ul>*Add a request to the queue to calculate Feature Fit for a backtest.* | **POST** /projects/{projectId}/datetimeModels/{modelId}/featureFit/ |
| <ul><li>**ProjectsDatetimeModelsFeatureFitList**</li><li>datarobot::ListProjectsDatetimeModelsFeatureFit</li></ul>*Retrieve Feature Fit for a model backtest.* | **GET** /projects/{projectId}/datetimeModels/{modelId}/featureFit/ |
| <ul><li>**ProjectsDatetimeModelsFeatureFitMetadataList**</li><li>datarobot::ListProjectsDatetimeModelsFeatureFitMetadata</li></ul>*Retrieve Feature Fit metadata for each backtest. Response contains status and available sources for each backtest of the model.* | **GET** /projects/{projectId}/datetimeModels/{modelId}/featureFitMetadata/ |
| <ul><li>**ProjectsDatetimeModelsForecastDistanceStabilityPlotList**</li><li>datarobot::ListProjectsDatetimeModelsForecastDistanceStabilityPlot</li></ul>*Retrieve a plot displaying the stability of the time series model across different forecast distances.* | **GET** /projects/{projectId}/datetimeModels/{modelId}/forecastDistanceStabilityPlot/ |
| <ul><li>**ProjectsDatetimeModelsForecastVsActualPlotsList**</li><li>datarobot::GetForecastVsActualPlot</li></ul>*Retrieve the data for the Forecast vs Actual plots.* | **GET** /projects/{projectId}/datetimeModels/{modelId}/forecastVsActualPlots/ |
| <ul><li>**ProjectsDatetimeModelsForecastVsActualPlotsMetadataList**</li><li>datarobot::GetForecastVsActualPlotsMetadata</li></ul>*Retrieve the metadata for the Forecast vs Actual plots.* | **GET** /projects/{projectId}/datetimeModels/{modelId}/forecastVsActualPlots/metadata/ |
| <ul><li>**ProjectsDatetimeModelsForecastVsActualPlotsPreviewList**</li><li>datarobot::GetForecastVsActualPlotPreview</li></ul>*Retrieve the preview for the Forecast vs Actual plots.* | **GET** /projects/{projectId}/datetimeModels/{modelId}/forecastVsActualPlots/preview/ |
| <ul><li>**ProjectsDatetimeModelsMulticlassFeatureEffectsCreate**</li><li>datarobot::CreateProjectsDatetimeModelsMulticlassFeatureEffects</li></ul>*Compute feature effects for a multiclass datetime model.* | **POST** /projects/{projectId}/datetimeModels/{modelId}/multiclassFeatureEffects/ |
| <ul><li>**ProjectsDatetimeModelsMulticlassFeatureEffectsList**</li><li>datarobot::ListProjectsDatetimeModelsMulticlassFeatureEffects</li></ul>*Retrieve feature effects for each class in a multiclass datetime model.* | **GET** /projects/{projectId}/datetimeModels/{modelId}/multiclassFeatureEffects/ |
| <ul><li>**ProjectsDatetimeModelsMultiseriesHistogramsList**</li><li>datarobot::ListProjectsDatetimeModelsMultiseriesHistograms</li></ul>*Retrieve the histograms for series insights.* | **GET** /projects/{projectId}/datetimeModels/{modelId}/multiseriesHistograms/ |
| <ul><li>**ProjectsDatetimeModelsMultiseriesScoresCreate**</li><li>datarobot::RequestSeriesAccuracy</li></ul>*Request the computation of per-series scores for a multiseries model.* | **POST** /projects/{projectId}/datetimeModels/{modelId}/multiseriesScores/ |
| <ul><li>**ProjectsDatetimeModelsMultiseriesScoresFileList**</li><li>datarobot::ListProjectsDatetimeModelsMultiseriesScoresFile</li></ul>*Retrieve the CSV file for the series accuracy.* | **GET** /projects/{projectId}/datetimeModels/{modelId}/multiseriesScores/file/ |
| <ul><li>**ProjectsDatetimeModelsMultiseriesScoresList**</li><li>datarobot::GetSeriesAccuracyForModel</li></ul>*List the scores per individual series for the specified multiseries model.* | **GET** /projects/{projectId}/datetimeModels/{modelId}/multiseriesScores/ |
| <ul><li>**ProjectsExternalScoresCreate**</li><li>datarobot::CreateProjectsExternalScores</li></ul>*Compute model scores for external dataset.* | **POST** /projects/{projectId}/externalScores/ |
| <ul><li>**ProjectsExternalScoresList**</li><li>datarobot::ListProjectsExternalScores</li></ul>*List scores on prediction datasets for a project.* | **GET** /projects/{projectId}/externalScores/ |
| <ul><li>**ProjectsFeatureAssociationFeaturelistsList**</li><li>datarobot::ListProjectsFeatureAssociationFeaturelists</li></ul>*List all featurelists with feature association matrix availability flags for a project.* | **GET** /projects/{projectId}/featureAssociationFeaturelists/ |
| <ul><li>**ProjectsFeatureAssociationMatrixCreate**</li><li>datarobot::CreateProjectsFeatureAssociationMatrix</li></ul>*Compute feature association matrix.* | **POST** /projects/{projectId}/featureAssociationMatrix/ |
| <ul><li>**ProjectsFeatureAssociationMatrixDetailsList**</li><li>datarobot::GetFeatureAssociationMatrixDetails</li></ul>*Retrieve feature association plot data for a pair of features.* | **GET** /projects/{projectId}/featureAssociationMatrixDetails/ |
| <ul><li>**ProjectsFeatureAssociationMatrixList**</li><li>datarobot::GetFeatureAssociationMatrix</li></ul>*Retrieve pairwise feature association statistics.* | **GET** /projects/{projectId}/featureAssociationMatrix/ |
| <ul><li>**ProjectsFeaturesFrequentValuesList**</li><li>datarobot::ListProjectsFeaturesFrequentValues</li></ul>*Retrieve the frequent values information for a particular feature.* | **GET** /projects/{projectId}/features/{featureName}/frequentValues/ |
| <ul><li>**ProjectsGeometryFeaturePlotsCreate**</li><li>datarobot::CreateProjectsGeometryFeaturePlots</li></ul>*Create a map of one location feature.* | **POST** /projects/{projectId}/geometryFeaturePlots/ |
| <ul><li>**ProjectsGeometryFeaturePlotsRetrieve**</li><li>datarobot::RetrieveProjectsGeometryFeaturePlots</li></ul>*Retrieve a map of one location feature.* | **GET** /projects/{projectId}/geometryFeaturePlots/{featureName}/ |
| <ul><li>**ProjectsImageActivationMapsList**</li><li>datarobot::ListProjectsImageActivationMaps</li></ul>*List all Image Activation Maps for the project.* | **GET** /projects/{projectId}/imageActivationMaps/ |
| <ul><li>**ProjectsImageEmbeddingsList**</li><li>datarobot::ListProjectsImageEmbeddings</li></ul>*List all Image Embeddings for the project.* | **GET** /projects/{projectId}/imageEmbeddings/ |
| <ul><li>**ProjectsModelsAnomalyAssessmentInitializationCreate**</li><li>datarobot::InitializeAnomalyAssessment</li></ul>*Calculate the anomaly assessment insight.* | **POST** /projects/{projectId}/models/{modelId}/anomalyAssessmentInitialization/ |
| <ul><li>**ProjectsModelsAnomalyInsightsFileList**</li><li>datarobot::ListProjectsModelsAnomalyInsightsFile</li></ul>*Retrieve a CSV file of the raw data displayed with the anomaly score from the model.* | **GET** /projects/{projectId}/models/{modelId}/anomalyInsightsFile/ |
| <ul><li>**ProjectsModelsAnomalyInsightsTableList**</li><li>datarobot::ListProjectsModelsAnomalyInsightsTable</li></ul>*Retrieve a table of the raw data displayed with the anomaly score from the specific model.* | **GET** /projects/{projectId}/models/{modelId}/anomalyInsightsTable/ |
| <ul><li>**ProjectsModelsClusterInsightsCreate**</li><li>datarobot::CreateProjectsModelsClusterInsights</li></ul>*Compute Cluster Insights.* | **POST** /projects/{projectId}/models/{modelId}/clusterInsights/ |
| <ul><li>**ProjectsModelsClusterInsightsDownloadList**</li><li>datarobot::ListProjectsModelsClusterInsightsDownload</li></ul>*Download Cluster Insights result.* | **GET** /projects/{projectId}/models/{modelId}/clusterInsights/download/ |
| <ul><li>**ProjectsModelsClusterInsightsList**</li><li>datarobot::ListProjectsModelsClusterInsights</li></ul>*Retrieve Cluster Insights for all features.* | **GET** /projects/{projectId}/models/{modelId}/clusterInsights/ |
| <ul><li>**ProjectsModelsConfusionChartsClassDetailsList**</li><li>datarobot::ListProjectsModelsConfusionChartsClassDetails</li></ul>*Calculate and return how frequently each class is distributed among the other classes, for actual and predicted data.* | **GET** /projects/{projectId}/models/{modelId}/confusionCharts/{source}/classDetails/ |
| <ul><li>**ProjectsModelsConfusionChartsList**</li><li>datarobot::ListProjectsModelsConfusionCharts</li></ul>*Retrieve all available confusion charts for model.* | **GET** /projects/{projectId}/models/{modelId}/confusionCharts/ |
| <ul><li>**ProjectsModelsConfusionChartsMetadataList**</li><li>datarobot::ListProjectsModelsConfusionChartsMetadata</li></ul>*Retrieve metadata for the confusion chart of a model.* | **GET** /projects/{projectId}/models/{modelId}/confusionCharts/{source}/metadata/ |
| <ul><li>**ProjectsModelsConfusionChartsRetrieve**</li><li>datarobot::GetConfusionChart</li></ul>*Retrieve the confusion chart data from a single source.* | **GET** /projects/{projectId}/models/{modelId}/confusionCharts/{source}/ |
| <ul><li>**ProjectsModelsCrossClassAccuracyScoresCreate**</li><li>datarobot::CreateProjectsModelsCrossClassAccuracyScores</li></ul>*Start Cross Class Accuracy calculations.* | **POST** /projects/{projectId}/models/{modelId}/crossClassAccuracyScores/ |
| <ul><li>**ProjectsModelsCrossClassAccuracyScoresList**</li><li>datarobot::ListProjectsModelsCrossClassAccuracyScores</li></ul>*List Cross Class Accuracy scores.* | **GET** /projects/{projectId}/models/{modelId}/crossClassAccuracyScores/ |
| <ul><li>**ProjectsModelsDataDisparityInsightsCreate**</li><li>datarobot::CreateProjectsModelsDataDisparityInsights</li></ul>*Start insight calculations.* | **POST** /projects/{projectId}/models/{modelId}/dataDisparityInsights/ |
| <ul><li>**ProjectsModelsDataDisparityInsightsList**</li><li>datarobot::ListProjectsModelsDataDisparityInsights</li></ul>*Get Cross Class Data Disparity results.* | **GET** /projects/{projectId}/models/{modelId}/dataDisparityInsights/ |
| <ul><li>**ProjectsModelsDatasetConfusionChartsClassDetailsList**</li><li>datarobot::ListProjectsModelsDatasetConfusionChartsClassDetails</li></ul>*Calculate and return how frequently each class is distributed among the other classes, for actual and predicted data.* | **GET** /projects/{projectId}/models/{modelId}/datasetConfusionCharts/{datasetId}/classDetails/ |
| <ul><li>**ProjectsModelsDatasetConfusionChartsList**</li><li>datarobot::ListProjectsModelsDatasetConfusionCharts</li></ul>*List confusion chart objects on external datasets for a project, with an option to filter by dataset.* | **GET** /projects/{projectId}/models/{modelId}/datasetConfusionCharts/ |
| <ul><li>**ProjectsModelsDatasetConfusionChartsMetadataList**</li><li>datarobot::ListProjectsModelsDatasetConfusionChartsMetadata</li></ul>*Retrieve metadata for the confusion chart of a model on external dataset for a project.* | **GET** /projects/{projectId}/models/{modelId}/datasetConfusionCharts/{datasetId}/metadata/ |
| <ul><li>**ProjectsModelsDatasetConfusionChartsRetrieve**</li><li>datarobot::RetrieveProjectsModelsDatasetConfusionCharts</li></ul>*Retrieve Confusion Chart objects on external datasets for a project.* | **GET** /projects/{projectId}/models/{modelId}/datasetConfusionCharts/{datasetId}/ |
| <ul><li>**ProjectsModelsDatasetLiftChartsList**</li><li>datarobot::ListProjectsModelsDatasetLiftCharts</li></ul>*Retrieve a list of lift chart data on prediction datasets for a project.* | **GET** /projects/{projectId}/models/{modelId}/datasetLiftCharts/ |
| <ul><li>**ProjectsModelsDatasetMulticlassLiftChartsList**</li><li>datarobot::ListProjectsModelsDatasetMulticlassLiftCharts</li></ul>*Retrieve a list of multiclass lift chart data on prediction datasets for a project.* | **GET** /projects/{projectId}/models/{modelId}/datasetMulticlassLiftCharts/ |
| <ul><li>**ProjectsModelsDatasetResidualsChartsList**</li><li>datarobot::ListProjectsModelsDatasetResidualsCharts</li></ul>*List residuals chart objects on prediction datasets.* | **GET** /projects/{projectId}/models/{modelId}/datasetResidualsCharts/ |
| <ul><li>**ProjectsModelsDatasetRocCurvesList**</li><li>datarobot::ListProjectsModelsDatasetRocCurves</li></ul>*List ROC curve objects on prediction datasets for a project, with an option to filter by dataset.* | **GET** /projects/{projectId}/models/{modelId}/datasetRocCurves/ |
| <ul><li>**ProjectsModelsFairnessInsightsCreate**</li><li>datarobot::CreateProjectsModelsFairnessInsights</li></ul>*Start insight calculations.* | **POST** /projects/{projectId}/models/{modelId}/fairnessInsights/ |
| <ul><li>**ProjectsModelsFairnessInsightsList**</li><li>datarobot::ListProjectsModelsFairnessInsights</li></ul>*List calculated Per Class Bias insights.* | **GET** /projects/{projectId}/models/{modelId}/fairnessInsights/ |
| <ul><li>**ProjectsModelsFeatureEffectsCreate**</li><li>datarobot::CreateProjectsModelsFeatureEffects</li></ul>*Add a request to the queue to calculate Feature Effects.* | **POST** /projects/{projectId}/models/{modelId}/featureEffects/ |
| <ul><li>**ProjectsModelsFeatureEffectsList**</li><li>datarobot::ListProjectsModelsFeatureEffects</li></ul>*Retrieve Feature Effects for the model.* | **GET** /projects/{projectId}/models/{modelId}/featureEffects/ |
| <ul><li>**ProjectsModelsFeatureEffectsMetadataList**</li><li>datarobot::ListProjectsModelsFeatureEffectsMetadata</li></ul>*Retrieve Feature Effects metadata. Response contains status and available sources.* | **GET** /projects/{projectId}/models/{modelId}/featureEffectsMetadata/ |
| <ul><li>**ProjectsModelsFeatureFitCreate**</li><li>datarobot::CreateProjectsModelsFeatureFit</li></ul>*Add a request to the queue to calculate Feature Fit.* | **POST** /projects/{projectId}/models/{modelId}/featureFit/ |
| <ul><li>**ProjectsModelsFeatureFitList**</li><li>datarobot::ListProjectsModelsFeatureFit</li></ul>*Retrieve Feature Fit for the model.* | **GET** /projects/{projectId}/models/{modelId}/featureFit/ |
| <ul><li>**ProjectsModelsFeatureFitMetadataList**</li><li>datarobot::ListProjectsModelsFeatureFitMetadata</li></ul>*Retrieve Feature Fit metadata. Response contains status and available sources.* | **GET** /projects/{projectId}/models/{modelId}/featureFitMetadata/ |
| <ul><li>**ProjectsModelsFeatureImpactCreate**</li><li>datarobot::RequestFeatureImpact</li></ul>*Add a request to calculate feature impact to the queue.* | **POST** /projects/{projectId}/models/{modelId}/featureImpact/ |
| <ul><li>**ProjectsModelsFeatureImpactList**</li><li>datarobot::GetFeatureImpactForModel</li></ul>*Retrieve feature impact scores for features in a model.* | **GET** /projects/{projectId}/models/{modelId}/featureImpact/ |
| <ul><li>**ProjectsModelsFeatureListsClusterInsightsList**</li><li>datarobot::ListProjectsModelsFeatureListsClusterInsights</li></ul>*Retrieve Cluster Insights for a single featurelist.* | **GET** /projects/{projectId}/models/{modelId}/featureLists/{datasetId}/clusterInsights/ |
| <ul><li>**ProjectsModelsImageActivationMapsCreate**</li><li>datarobot::CreateProjectsModelsImageActivationMaps</li></ul>*Request the computation of image activation maps for the specified model.* | **POST** /projects/{projectId}/models/{modelId}/imageActivationMaps/ |
| <ul><li>**ProjectsModelsImageActivationMapsList**</li><li>datarobot::ListProjectsModelsImageActivationMaps</li></ul>*Retrieve Image Activation Maps for a feature of a model.* | **GET** /projects/{projectId}/models/{modelId}/imageActivationMaps/ |
| <ul><li>**ProjectsModelsImageEmbeddingsCreate**</li><li>datarobot::CreateProjectsModelsImageEmbeddings</li></ul>*Request the computation of image embeddings for the specified model.* | **POST** /projects/{projectId}/models/{modelId}/imageEmbeddings/ |
| <ul><li>**ProjectsModelsImageEmbeddingsList**</li><li>datarobot::ListProjectsModelsImageEmbeddings</li></ul>*Retrieve ImageEmbeddings for a feature of a model.* | **GET** /projects/{projectId}/models/{modelId}/imageEmbeddings/ |
| <ul><li>**ProjectsModelsLabelwiseRocCurvesList**</li><li>datarobot::ListProjectsModelsLabelwiseRocCurves</li></ul>*Retrieve labelwise ROC curves for model and given source.* | **GET** /projects/{projectId}/models/{modelId}/labelwiseRocCurves/{source}/ |
| <ul><li>**ProjectsModelsLiftChartList**</li><li>datarobot::ListProjectsModelsLiftChart</li></ul>*Retrieve all available lift charts for model.* | **GET** /projects/{projectId}/models/{modelId}/liftChart/ |
| <ul><li>**ProjectsModelsLiftChartRetrieve**</li><li>datarobot::RetrieveProjectsModelsLiftChart</li></ul>*Retrieve the lift chart data from a single source.* | **GET** /projects/{projectId}/models/{modelId}/liftChart/{source}/ |
| <ul><li>**ProjectsModelsMulticlassFeatureEffectsCreate**</li><li>datarobot::CreateProjectsModelsMulticlassFeatureEffects</li></ul>*Compute feature effects for a multiclass model.* | **POST** /projects/{projectId}/models/{modelId}/multiclassFeatureEffects/ |
| <ul><li>**ProjectsModelsMulticlassFeatureEffectsList**</li><li>datarobot::ListProjectsModelsMulticlassFeatureEffects</li></ul>*Retrieve feature effects for each class in a multiclass model.* | **GET** /projects/{projectId}/models/{modelId}/multiclassFeatureEffects/ |
| <ul><li>**ProjectsModelsMulticlassFeatureImpactList**</li><li>datarobot::ListProjectsModelsMulticlassFeatureImpact</li></ul>*Retrieve feature impact scores for each class in a multiclass model.* | **GET** /projects/{projectId}/models/{modelId}/multiclassFeatureImpact/ |
| <ul><li>**ProjectsModelsMulticlassLiftChartList**</li><li>datarobot::ListProjectsModelsMulticlassLiftChart</li></ul>*Retrieve all available lift charts for multiclass model.* | **GET** /projects/{projectId}/models/{modelId}/multiclassLiftChart/ |
| <ul><li>**ProjectsModelsMulticlassLiftChartRetrieve**</li><li>datarobot::RetrieveProjectsModelsMulticlassLiftChart</li></ul>*Retrieve the multiclass lift chart data from a single source.* | **GET** /projects/{projectId}/models/{modelId}/multiclassLiftChart/{source}/ |
| <ul><li>**ProjectsModelsMultilabelLiftChartsRetrieve**</li><li>datarobot::RetrieveProjectsModelsMultilabelLiftCharts</li></ul>*Retrieve labelwise lift charts for model and given source.* | **GET** /projects/{projectId}/models/{modelId}/multilabelLiftCharts/{source}/ |
| <ul><li>**ProjectsModelsPredictionExplanationsInitializationCreate**</li><li>datarobot::RequestPredictionExplanationsInitialization</li></ul>*Create a new prediction explanations initialization.* | **POST** /projects/{projectId}/models/{modelId}/predictionExplanationsInitialization/ |
| <ul><li>**ProjectsModelsPredictionExplanationsInitializationDeleteMany**</li><li>datarobot::DeletePredictionExplanationsInitialization</li></ul>*Delete an existing PredictionExplanationsInitialization.* | **DELETE** /projects/{projectId}/models/{modelId}/predictionExplanationsInitialization/ |
| <ul><li>**ProjectsModelsPredictionExplanationsInitializationList**</li><li>datarobot::GetPredictionExplanationsInitialization</li></ul>*Retrieve the current PredictionExplanationsInitialization.* | **GET** /projects/{projectId}/models/{modelId}/predictionExplanationsInitialization/ |
| <ul><li>**ProjectsModelsResidualsList**</li><li>datarobot::ListProjectsModelsResiduals</li></ul>*Retrieve all residuals charts for a model.* | **GET** /projects/{projectId}/models/{modelId}/residuals/ |
| <ul><li>**ProjectsModelsResidualsRetrieve**</li><li>datarobot::RetrieveProjectsModelsResiduals</li></ul>*Retrieve the residuals chart data from a single source.* | **GET** /projects/{projectId}/models/{modelId}/residuals/{source}/ |
| <ul><li>**ProjectsModelsRocCurveList**</li><li>datarobot::ListProjectsModelsRocCurve</li></ul>*Retrieve all available ROC curves for a model.* | **GET** /projects/{projectId}/models/{modelId}/rocCurve/ |
| <ul><li>**ProjectsModelsRocCurveRetrieve**</li><li>datarobot::RetrieveProjectsModelsRocCurve</li></ul>*Retrieve the ROC curve data from a single source.* | **GET** /projects/{projectId}/models/{modelId}/rocCurve/{source}/ |
| <ul><li>**ProjectsModelsShapImpactCreate**</li><li>datarobot::CreateProjectsModelsShapImpact</li></ul>*Create a SHAP-based Feature Impact.* | **POST** /projects/{projectId}/models/{modelId}/shapImpact/ |
| <ul><li>**ProjectsModelsShapImpactList**</li><li>datarobot::ListProjectsModelsShapImpact</li></ul>*Retrieve Feature Impact for a model.* | **GET** /projects/{projectId}/models/{modelId}/shapImpact/ |
| <ul><li>**ProjectsModelsWordCloudList**</li><li>datarobot::GetWordCloud</li></ul>*Retrieve word cloud data for a model.* | **GET** /projects/{projectId}/models/{modelId}/wordCloud/ |
| <ul><li>**ProjectsMulticategoricalInvalidFormatFileList**</li><li>datarobot::ListProjectsMulticategoricalInvalidFormatFile</li></ul>*Get file with format errors of potential multicategorical features.* | **GET** /projects/{projectId}/multicategoricalInvalidFormat/file/ |
| <ul><li>**ProjectsMulticategoricalInvalidFormatList**</li><li>datarobot::ListProjectsMulticategoricalInvalidFormat</li></ul>*Retrieve multicategorical data quality log.* | **GET** /projects/{projectId}/multicategoricalInvalidFormat/ |
| <ul><li>**ProjectsPayoffMatricesCreate**</li><li>datarobot::CreateProjectsPayoffMatrices</li></ul>*Create a payoff matrix.* | **POST** /projects/{projectId}/payoffMatrices/ |
| <ul><li>**ProjectsPayoffMatricesDelete**</li><li>datarobot::DeleteProjectsPayoffMatrices</li></ul>*Delete a payoff matrix in a project.* | **DELETE** /projects/{projectId}/payoffMatrices/{payoffMatrixId}/ |
| <ul><li>**ProjectsPayoffMatricesList**</li><li>datarobot::ListProjectsPayoffMatrices</li></ul>*List all payoff matrices for a project.* | **GET** /projects/{projectId}/payoffMatrices/ |
| <ul><li>**ProjectsPayoffMatricesPut**</li><li>datarobot::PutProjectsPayoffMatrices</li></ul>*Update a payoff matrix.* | **PUT** /projects/{projectId}/payoffMatrices/{payoffMatrixId}/ |
| <ul><li>**ProjectsPredictionExplanationsCreate**</li><li>datarobot::RequestPredictionExplanations</li></ul>*Create a new PredictionExplanations object (and its accompanying PredictionExplanationsRecord).* | **POST** /projects/{projectId}/predictionExplanations/ |
| <ul><li>**ProjectsPredictionExplanationsList**</li><li>datarobot::GetPredictionExplanationsPage</li></ul>*Retrieve stored Prediction Explanations.* | **GET** /projects/{projectId}/predictionExplanations/{predictionExplanationsId}/ |
| <ul><li>**ProjectsPredictionExplanationsRecordsDelete**</li><li>datarobot::DeletePredictionExplanations</li></ul>*Delete saved Prediction Explanations.* | **DELETE** /projects/{projectId}/predictionExplanationsRecords/{predictionExplanationsId}/ |
| <ul><li>**ProjectsPredictionExplanationsRecordsList**</li><li>datarobot::ListPredictionExplanationsMetadata</li></ul>*List PredictionExplanationsRecord objects for a project.* | **GET** /projects/{projectId}/predictionExplanationsRecords/ |
| <ul><li>**ProjectsPredictionExplanationsRecordsRetrieve**</li><li>datarobot::GetPredictionExplanationsMetadata</li></ul>*Retrieve a PredictionExplanationsRecord object.* | **GET** /projects/{projectId}/predictionExplanationsRecords/{predictionExplanationsId}/ |
| <ul><li>**ProjectsShapMatricesCreate**</li><li>datarobot::CreateProjectsShapMatrices</li></ul>*Calculate a matrix with SHAP-based prediction explanation scores.* | **POST** /projects/{projectId}/shapMatrices/ |
| <ul><li>**ProjectsShapMatricesList**</li><li>datarobot::ListProjectsShapMatrices</li></ul>*List SHAP matrix records.* | **GET** /projects/{projectId}/shapMatrices/ |
| <ul><li>**ProjectsShapMatricesRetrieve**</li><li>datarobot::RetrieveProjectsShapMatrices</li></ul>*Get a matrix with SHAP prediction explanation scores.* | **GET** /projects/{projectId}/shapMatrices/{shapMatrixId}/ |
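The insight endpoints above share a pattern: a list endpoint at the collection URL and a per-source retrieval at `.../{source}/`. A minimal sketch of addressing one of them over plain HTTP, assuming a standard `Bearer` token and the public `app.datarobot.com` host (both are placeholders; adjust for your installation):

```python
# Build the per-source ROC curve URL from the table above; the host and
# token below are placeholders, not values from this reference.
BASE = "https://app.datarobot.com/api/v2"
HEADERS = {"Authorization": "Bearer YOUR_API_TOKEN"}

def roc_curve_url(project_id: str, model_id: str, source: str) -> str:
    """GET this URL (with HEADERS) to retrieve ROC curve data for one
    source, e.g. source='validation'."""
    return f"{BASE}/projects/{project_id}/models/{model_id}/rocCurve/{source}/"

# With the `requests` package installed, the call would look like:
#   roc = requests.get(roc_curve_url(pid, mid, "validation"), headers=HEADERS).json()
```

The same URL-building shape applies to the lift chart, residuals, and word cloud routes; only the final path segment changes.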
### JobsApi
| Functions and Description | HTTP request |
| ------------- | ------------- |
| <ul><li>**StatusDelete**</li><li>datarobot::DeleteStatus</li></ul>*Delete a task* | **DELETE** /status/{statusId}/ |
| <ul><li>**StatusList**</li><li>datarobot::ListStatus</li></ul>*List tasks* | **GET** /status/ |
| <ul><li>**StatusRetrieve**</li><li>datarobot::RetrieveStatus</li></ul>*Get task status* | **GET** /status/{statusId}/ |
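Many POST endpoints in this reference run asynchronously and resolve through the `StatusRetrieve` route above. A hedged sketch of the polling loop; the `"status"` field name and its `INITIALIZED`/`RUNNING` values are assumptions to verify against your API version:

```python
import time

def wait_for_job(fetch_status, status_id: str, interval: float = 1.0,
                 timeout: float = 600.0):
    """Poll a status record until it leaves the in-progress states.

    `fetch_status` is any callable mapping a status ID to a dict, e.g. a
    wrapper around GET /status/{statusId}/ (response field names assumed).
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        record = fetch_status(status_id)
        if record.get("status") not in ("INITIALIZED", "RUNNING"):
            return record  # finished: completed, errored, or aborted
        time.sleep(interval)
    raise TimeoutError(f"job {status_id} did not finish within {timeout}s")
```

Passing the fetcher in as a callable keeps the loop testable without a live server.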
### MlopsApi
| Functions and Description | HTTP request |
| ------------- | ------------- |
| <ul><li>**MlopsPortablePredictionServerImageList**</li><li>datarobot::ListMlopsPortablePredictionServerImage</li></ul>*Download the latest Portable Prediction Server (PPS) Docker image* | **GET** /mlops/portablePredictionServerImage/ |
| <ul><li>**MlopsPortablePredictionServerImageMetadataList**</li><li>datarobot::ListMlopsPortablePredictionServerImageMetadata</li></ul>*Fetch the currently active PPS Docker image metadata* | **GET** /mlops/portablePredictionServerImage/metadata/ |
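The PPS image endpoint returns a large Docker image archive, so the response should be streamed to disk rather than buffered in memory. A small sketch; the chunk iterable stands in for something like a streamed HTTP response body:

```python
def save_image(chunks, dest: str) -> int:
    """Write an iterable of byte chunks (e.g. a streamed GET on
    /mlops/portablePredictionServerImage/) to dest; return bytes written."""
    total = 0
    with open(dest, "wb") as fh:
        for chunk in chunks:
            if chunk:  # skip keep-alive empty chunks
                fh.write(chunk)
                total += len(chunk)
    return total
```

Checking the metadata route first (image version, size) lets a script skip the download when the local copy is already current.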
### ModelsApi
| Functions and Description | HTTP request |
| ------------- | ------------- |
| <ul><li>**CustomInferenceImagesFeatureImpactCreate**</li><li>datarobot::CreateCustomInferenceImagesFeatureImpact</li></ul>*Create custom model feature impact.* | **POST** /customInferenceImages/{imageId}/featureImpact/ |
| <ul><li>**CustomInferenceImagesFeatureImpactList**</li><li>datarobot::ListCustomInferenceImagesFeatureImpact</li></ul>*Get custom model feature impact.* | **GET** /customInferenceImages/{imageId}/featureImpact/ |
| <ul><li>**CustomModelDeploymentsList**</li><li>datarobot::ListCustomModelDeployments</li></ul>*List custom model deployments.* | **GET** /customModelDeployments/ |
| <ul><li>**CustomModelLimitsList**</li><li>datarobot::ListCustomModelLimits</li></ul>*Get custom model resource limits.* | **GET** /customModelLimits/ |
| <ul><li>**CustomModelTestsCreate**</li><li>datarobot::CreateCustomModelTests</li></ul>*Create custom model test.* | **POST** /customModelTests/ |
| <ul><li>**CustomModelTestsDelete**</li><li>datarobot::DeleteCustomModelTests</li></ul>*Cancel custom model test.* | **DELETE** /customModelTests/{customModelTestId}/ |
| <ul><li>**CustomModelTestsList**</li><li>datarobot::ListCustomModelTests</li></ul>*List custom model tests.* | **GET** /customModelTests/ |
| <ul><li>**CustomModelTestsLogList**</li><li>datarobot::ListCustomModelTestsLog</li></ul>*Get custom model test log.* | **GET** /customModelTests/{customModelTestId}/log/ |
| <ul><li>**CustomModelTestsRetrieve**</li><li>datarobot::RetrieveCustomModelTests</li></ul>*Get custom model test.* | **GET** /customModelTests/{customModelTestId}/ |
| <ul><li>**CustomModelTestsTailList**</li><li>datarobot::ListCustomModelTestsTail</li></ul>*Get custom model test log tail.* | **GET** /customModelTests/{customModelTestId}/tail/ |
| <ul><li>**CustomModelsAccessControlList**</li><li>datarobot::ListCustomModelsAccessControl</li></ul>*Get a list of users who have access to this custom model and their roles on it.* | **GET** /customModels/{customModelId}/accessControl/ |
| <ul><li>**CustomModelsAccessControlPatchMany**</li><li>datarobot::PatchManyCustomModelsAccessControl</li></ul>*Grant access or update roles for users on this custom model and appropriate learning data.* | **PATCH** /customModels/{customModelId}/accessControl/ |
| <ul><li>**CustomModelsCreate**</li><li>datarobot::CreateCustomModels</li></ul>*Create custom model.* | **POST** /customModels/ |
| <ul><li>**CustomModelsDelete**</li><li>datarobot::DeleteCustomModels</li></ul>*Delete custom model.* | **DELETE** /customModels/{customModelId}/ |
| <ul><li>**CustomModelsDownloadList**</li><li>datarobot::ListCustomModelsDownload</li></ul>*Download the latest custom model version content.* | **GET** /customModels/{customModelId}/download/ |
| <ul><li>**CustomModelsFromCustomModelCreate**</li><li>datarobot::CreateCustomModelsFromCustomModel</li></ul>*Clone custom model.* | **POST** /customModels/fromCustomModel/ |
| <ul><li>**CustomModelsList**</li><li>datarobot::ListCustomModels</li></ul>*List custom models.* | **GET** /customModels/ |
| <ul><li>**CustomModelsPatch**</li><li>datarobot::PatchCustomModels</li></ul>*Update custom model.* | **PATCH** /customModels/{customModelId}/ |
| <ul><li>**CustomModelsPredictionExplanationsInitializationCreate**</li><li>datarobot::CreateCustomModelsPredictionExplanationsInitialization</li></ul>*Create a new prediction explanations initialization for custom model.* | **POST** /customModels/predictionExplanationsInitialization/ |
| <ul><li>**CustomModelsRetrieve**</li><li>datarobot::RetrieveCustomModels</li></ul>*Get custom model.* | **GET** /customModels/{customModelId}/ |
| <ul><li>**CustomModelsTrainingDataPatchMany**</li><li>datarobot::PatchManyCustomModelsTrainingData</li></ul>*Assign training data to custom model.* | **PATCH** /customModels/{customModelId}/trainingData/ |
| <ul><li>**CustomModelsVersionCreateFromLatest**</li><li>datarobot::CreateFromLatestCustomModelsVersion</li></ul>*Update custom model version files.* | **PATCH** /customModels/{customModelId}/versions/ |
| <ul><li>**CustomModelsVersionsConversionsCreate**</li><li>datarobot::CreateCustomModelsVersionsConversions</li></ul>*Generate a JAR file from the specified files.* | **POST** /customModels/{customModelId}/versions/{customModelVersionId}/conversions/ |
| <ul><li>**CustomModelsVersionsConversionsDelete**</li><li>datarobot::DeleteCustomModelsVersionsConversions</li></ul>*Stop a given custom model conversion.* | **DELETE** /customModels/{customModelId}/versions/{customModelVersionId}/conversions/{conversionId}/ |
| <ul><li>**CustomModelsVersionsConversionsList**</li><li>datarobot::ListCustomModelsVersionsConversions</li></ul>*Get a list of custom model conversions, or the latest one.* | **GET** /customModels/{customModelId}/versions/{customModelVersionId}/conversions/ |
| <ul><li>**CustomModelsVersionsConversionsRetrieve**</li><li>datarobot::RetrieveCustomModelsVersionsConversions</li></ul>*Get a given custom model conversion.* | **GET** /customModels/{customModelId}/versions/{customModelVersionId}/conversions/{conversionId}/ |
| <ul><li>**CustomModelsVersionsCreate**</li><li>datarobot::CreateCustomModelsVersions</li></ul>*Create custom model version.* | **POST** /customModels/{customModelId}/versions/ |
| <ul><li>**CustomModelsVersionsDependencyBuildCreate**</li><li>datarobot::CreateCustomModelsVersionsDependencyBuild</li></ul>*Start a custom model version's dependency build.* | **POST** /customModels/{customModelId}/versions/{customModelVersionId}/dependencyBuild/ |
| <ul><li>**CustomModelsVersionsDependencyBuildDeleteMany**</li><li>datarobot::DeleteManyCustomModelsVersionsDependencyBuild</li></ul>*Cancel dependency build.* | **DELETE** /customModels/{customModelId}/versions/{customModelVersionId}/dependencyBuild/ |
| <ul><li>**CustomModelsVersionsDependencyBuildList**</li><li>datarobot::ListCustomModelsVersionsDependencyBuild</li></ul>*Retrieve the custom model version's dependency build status.* | **GET** /customModels/{customModelId}/versions/{customModelVersionId}/dependencyBuild/ |
| <ul><li>**CustomModelsVersionsDependencyBuildLogList**</li><li>datarobot::ListCustomModelsVersionsDependencyBuildLog</li></ul>*Retrieve the custom model version's dependency build log.* | **GET** /customModels/{customModelId}/versions/{customModelVersionId}/dependencyBuildLog/ |
| <ul><li>**CustomModelsVersionsDownloadList**</li><li>datarobot::ListCustomModelsVersionsDownload</li></ul>*Download custom model version content.* | **GET** /customModels/{customModelId}/versions/{customModelVersionId}/download/ |
| <ul><li>**CustomModelsVersionsFeatureImpactCreate**</li><li>datarobot::CreateCustomModelsVersionsFeatureImpact</li></ul>*Create custom model feature impact.* | **POST** /customModels/{customModelId}/versions/{customModelVersionId}/featureImpact/ |
| <ul><li>**CustomModelsVersionsFeatureImpactList**</li><li>datarobot::ListCustomModelsVersionsFeatureImpact</li></ul>*Get custom model feature impact.* | **GET** /customModels/{customModelId}/versions/{customModelVersionId}/featureImpact/ |
| <ul><li>**CustomModelsVersionsFromRepositoryCreate**</li><li>datarobot::CreateCustomModelsVersionsFromRepository</li></ul>*Create custom model version from remote repository.* | **POST** /customModels/{customModelId}/versions/fromRepository/ |
| <ul><li>**CustomModelsVersionsFromRepositoryPatchMany**</li><li>datarobot::PatchManyCustomModelsVersionsFromRepository</li></ul>*Create custom model version from remote repository with files from previous version.* | **PATCH** /customModels/{customModelId}/versions/fromRepository/ |
| <ul><li>**CustomModelsVersionsList**</li><li>datarobot::ListCustomModelsVersions</li></ul>*List custom model versions.* | **GET** /customModels/{customModelId}/versions/ |
| <ul><li>**CustomModelsVersionsPatch**</li><li>datarobot::PatchCustomModelsVersions</li></ul>*Update custom model version.* | **PATCH** /customModels/{customModelId}/versions/{customModelVersionId}/ |
| <ul><li>**CustomModelsVersionsPredictionExplanationsInitializationCreate**</li><li>datarobot::CreateCustomModelsVersionsPredictionExplanationsInitialization</li></ul>*Create a new prediction explanations initialization for custom model version.* | **POST** /customModels/{customModelId}/versions/{customModelVersionId}/predictionExplanationsInitialization/ |
| <ul><li>**CustomModelsVersionsRetrieve**</li><li>datarobot::RetrieveCustomModelsVersions</li></ul>*Get custom model version.* | **GET** /customModels/{customModelId}/versions/{customModelVersionId}/ |
| <ul><li>**CustomTrainingBlueprintsCreate**</li><li>datarobot::CreateCustomTrainingBlueprints</li></ul>*Create a blueprint from a single custom training estimator.* | **POST** /customTrainingBlueprints/ |
| <ul><li>**CustomTrainingBlueprintsList**</li><li>datarobot::ListCustomTrainingBlueprints</li></ul>*List training blueprints.* | **GET** /customTrainingBlueprints/ |
| <ul><li>**ModelPackagesArchiveCreate**</li><li>datarobot::CreateModelPackagesArchive</li></ul>*Archive a model package.* | **POST** /modelPackages/{modelPackageId}/archive/ |
| <ul><li>**ModelPackagesCapabilitiesList**</li><li>datarobot::ListModelPackagesCapabilities</li></ul>*Retrieve capabilities.* | **GET** /modelPackages/{modelPackageId}/capabilities/ |
| <ul><li>**ModelPackagesFeaturesList**</li><li>datarobot::ListModelPackagesFeatures</li></ul>*Retrieve feature list.* | **GET** /modelPackages/{modelPackageId}/features/ |
| <ul><li>**ModelPackagesFromLearningModelCreate**</li><li>datarobot::CreateModelPackagesFromLearningModel</li></ul>*Create model package from DataRobot model.* | **POST** /modelPackages/fromLearningModel/ |
| <ul><li>**ModelPackagesList**</li><li>datarobot::ListModelPackages</li></ul>*List model packages* | **GET** /modelPackages/ |
| <ul><li>**ModelPackagesRetrieve**</li><li>datarobot::RetrieveModelPackages</li></ul>*Retrieve info about a model package.* | **GET** /modelPackages/{modelPackageId}/ |
| <ul><li>**ModelPackagesSharedRolesList**</li><li>datarobot::ListModelPackagesSharedRoles</li></ul>*Get model package's access control list* | **GET** /modelPackages/{modelPackageId}/sharedRoles/ |
| <ul><li>**ProjectsBiasMitigatedModelsCreate**</li><li>datarobot::CreateProjectsBiasMitigatedModels</li></ul>*Add a request to the queue to train a model with bias mitigation applied.* | **POST** /projects/{projectId}/biasMitigatedModels/ |
| <ul><li>**ProjectsBiasMitigatedModelsList**</li><li>datarobot::ListProjectsBiasMitigatedModels</li></ul>*List bias-mitigated models for the selected project.* | **GET** /projects/{projectId}/biasMitigatedModels/ |
| <ul><li>**ProjectsBiasMitigationFeatureInfoCreateOne**</li><li>datarobot::CreateOneProjectsBiasMitigationFeatureInfo</li></ul>*Submit a job to create bias mitigation data quality information for a given projectId and featureName.* | **POST** /projects/{projectId}/biasMitigationFeatureInfo/{featureName}/ |
| <ul><li>**ProjectsBiasMitigationFeatureInfoList**</li><li>datarobot::ListProjectsBiasMitigationFeatureInfo</li></ul>*Get bias mitigation data quality information for a given projectId and featureName.* | **GET** /projects/{projectId}/biasMitigationFeatureInfo/ |
| <ul><li>**ProjectsBlenderModelsBlendCheckCreate**</li><li>datarobot::IsBlenderEligible</li></ul>*Check if models can be blended.* | **POST** /projects/{projectId}/blenderModels/blendCheck/ |
| <ul><li>**ProjectsBlenderModelsCreate**</li><li>datarobot::RequestBlender</li></ul>*Create a blender from other models using a specified blender method.* | **POST** /projects/{projectId}/blenderModels/ |
| <ul><li>**ProjectsBlenderModelsList**</li><li>datarobot::ListProjectsBlenderModels</li></ul>*List all blenders in a project.* | **GET** /projects/{projectId}/blenderModels/ |
| <ul><li>**ProjectsBlenderModelsRetrieve**</li><li>datarobot::GetBlenderModel</li></ul>*Retrieve a blender.* | **GET** /projects/{projectId}/blenderModels/{modelId}/ |
| <ul><li>**ProjectsCombinedModelsList**</li><li>datarobot::ListProjectsCombinedModels</li></ul>*Retrieve all existing combined models for this project.* | **GET** /projects/{projectId}/combinedModels/ |
| <ul><li>**ProjectsCombinedModelsRetrieve**</li><li>datarobot::RetrieveProjectsCombinedModels</li></ul>*Retrieve an existing combined model.* | **GET** /projects/{projectId}/combinedModels/{combinedModelId}/ |
| <ul><li>**ProjectsCombinedModelsSegmentsDownloadList**</li><li>datarobot::ListProjectsCombinedModelsSegmentsDownload</li></ul>*Download Combined Model segments info.* | **GET** /projects/{projectId}/combinedModels/{combinedModelId}/segments/download/ |
| <ul><li>**ProjectsCombinedModelsSegmentsList**</li><li>datarobot::ListProjectsCombinedModelsSegments</li></ul>*Retrieve Combined Model segments info.* | **GET** /projects/{projectId}/combinedModels/{combinedModelId}/segments/ |
| <ul><li>**ProjectsDatetimeModelsBacktestsCreate**</li><li>datarobot::ScoreBacktests</li></ul>*Score all the available backtests of a datetime model.* | **POST** /projects/{projectId}/datetimeModels/{modelId}/backtests/ |
| <ul><li>**ProjectsDatetimeModelsCreate**</li><li>datarobot::RequestNewDatetimeModel</li></ul>*Train a new datetime model.* | **POST** /projects/{projectId}/datetimeModels/ |
| <ul><li>**ProjectsDatetimeModelsFromModelCreate**</li><li>datarobot::CreateProjectsDatetimeModelsFromModel</li></ul>*Retrain an existing datetime model with specified parameters.* | **POST** /projects/{projectId}/datetimeModels/fromModel/ |
| <ul><li>**ProjectsDatetimeModelsList**</li><li>datarobot::ListProjectsDatetimeModels</li></ul>*List datetime partitioned project models* | **GET** /projects/{projectId}/datetimeModels/ |
| <ul><li>**ProjectsDatetimeModelsRetrieve**</li><li>datarobot::GetDatetimeModel</li></ul>*Get datetime model* | **GET** /projects/{projectId}/datetimeModels/{modelId}/ |
| <ul><li>**ProjectsDeploymentReadyModelsCreate**</li><li>datarobot::CreateProjectsDeploymentReadyModels</li></ul>*Prepare a model for deployment* | **POST** /projects/{projectId}/deploymentReadyModels/ |
| <ul><li>**ProjectsEureqaDistributionPlotRetrieve**</li><li>datarobot::RetrieveProjectsEureqaDistributionPlot</li></ul>*Retrieve Eureqa model distribution plot.* | **GET** /projects/{projectId}/eureqaDistributionPlot/{solutionId}/ |
| <ul><li>**ProjectsEureqaModelDetailRetrieve**</li><li>datarobot::RetrieveProjectsEureqaModelDetail</li></ul>*Retrieve Eureqa model details.* | **GET** /projects/{projectId}/eureqaModelDetail/{solutionId}/ |
| <ul><li>**ProjectsEureqaModelsCreate**</li><li>datarobot::AddEureqaSolution</li></ul>*Create a new model from an existing eureqa solution.* | **POST** /projects/{projectId}/eureqaModels/ |
| <ul><li>**ProjectsEureqaModelsRetrieve**</li><li>datarobot::GetParetoFront</li></ul>*Retrieve the pareto front for the specified Eureqa model.* | **GET** /projects/{projectId}/eureqaModels/{modelId}/ |
| <ul><li>**ProjectsFrozenDatetimeModelsCreate**</li><li>datarobot::RequestFrozenDatetimeModel</li></ul>*Train a frozen datetime model.* | **POST** /projects/{projectId}/frozenDatetimeModels/ |
| <ul><li>**ProjectsFrozenModelsCreate**</li><li>datarobot::RequestFrozenModel</li></ul>*Train a new frozen model with parameters from an existing model.* | **POST** /projects/{projectId}/frozenModels/ |
| <ul><li>**ProjectsFrozenModelsList**</li><li>datarobot::ListProjectsFrozenModels</li></ul>*List all frozen models from a project.* | **GET** /projects/{projectId}/frozenModels/ |
| <ul><li>**ProjectsFrozenModelsRetrieve**</li><li>datarobot::GetFrozenModel</li></ul>*Look up a particular frozen model.* | **GET** /projects/{projectId}/frozenModels/{modelId}/ |
| <ul><li>**ProjectsModelJobsDelete**</li><li>datarobot::DeleteModelJob</li></ul>*Cancel a modeling job.* | **DELETE** /projects/{projectId}/modelJobs/{jobId}/ |
| <ul><li>**ProjectsModelJobsList**</li><li>datarobot::ListModelJobs</li></ul>*List modeling jobs* | **GET** /projects/{projectId}/modelJobs/ |
| <ul><li>**ProjectsModelJobsRetrieve**</li><li>datarobot::GetModelJob</li></ul>*Look up a specific modeling job* | **GET** /projects/{projectId}/modelJobs/{jobId}/ |
| <ul><li>**ProjectsModelsAdvancedTuningCreate**</li><li>datarobot::RunInteractiveTuning</li></ul>*Submit a job to make a new version of the model with different advanced tuning parameters.* | **POST** /projects/{projectId}/models/{modelId}/advancedTuning/ |
| <ul><li>**ProjectsModelsAdvancedTuningParametersList**</li><li>datarobot::GetTuningParameters</li></ul>*Retrieve information about all advanced tuning parameters available for the specified model.* | **GET** /projects/{projectId}/models/{modelId}/advancedTuning/parameters/ |
| <ul><li>**ProjectsModelsClusterNamesList**</li><li>datarobot::ListProjectsModelsClusterNames</li></ul>*Retrieve cluster names assigned to an unsupervised cluster model* | **GET** /projects/{projectId}/models/{modelId}/clusterNames/ |
| <ul><li>**ProjectsModelsClusterNamesPatchMany**</li><li>datarobot::PatchManyProjectsModelsClusterNames</li></ul>*Update cluster names assigned to an unsupervised cluster model* | **PATCH** /projects/{projectId}/models/{modelId}/clusterNames/ |
| <ul><li>**ProjectsModelsCreate**</li><li>datarobot::RequestNewModel</li></ul>*Train a new model* | **POST** /projects/{projectId}/models/ |
| <ul><li>**ProjectsModelsCrossValidationCreate**</li><li>datarobot::CrossValidateModel</li></ul>*Run Cross Validation on a model.* | **POST** /projects/{projectId}/models/{modelId}/crossValidation/ |
| <ul><li>**ProjectsModelsCrossValidationScoresList**</li><li>datarobot::GetCrossValidationScores</li></ul>*Get Cross Validation scores for each partition in a model.* | **GET** /projects/{projectId}/models/{modelId}/crossValidationScores/ |
| <ul><li>**ProjectsModelsDelete**</li><li>datarobot::DeleteModel</li></ul>*Delete a model from the leaderboard.* | **DELETE** /projects/{projectId}/models/{modelId}/ |
| <ul><li>**ProjectsModelsFeaturesList**</li><li>datarobot::ListModelFeatures</li></ul>*List the features used in a model.* | **GET** /projects/{projectId}/models/{modelId}/features/ |
| <ul><li>**ProjectsModelsFromModelCreate**</li><li>datarobot::CreateProjectsModelsFromModel</li></ul>*Retrain a model* | **POST** /projects/{projectId}/models/fromModel/ |
| <ul><li>**ProjectsModelsList**</li><li>datarobot::ListModels</li></ul>*List project models* | **GET** /projects/{projectId}/models/ |
| <ul><li>**ProjectsModelsMissingReportList**</li><li>datarobot::GetMissingValuesReport</li></ul>*Retrieve a summary of how the model's subtasks handle missing values.* | **GET** /projects/{projectId}/models/{modelId}/missingReport/ |
| <ul><li>**ProjectsModelsNumIterationsTrainedList**</li><li>datarobot::ListProjectsModelsNumIterationsTrained</li></ul>*Get number of iterations trained* | **GET** /projects/{projectId}/models/{modelId}/numIterationsTrained/ |
| <ul><li>**ProjectsModelsParametersList**</li><li>datarobot::GetModelParameters</li></ul>*Retrieve model parameters.* | **GET** /projects/{projectId}/models/{modelId}/parameters/ |
| <ul><li>**ProjectsModelsPatch**</li><li>datarobot::StarModel</li></ul>*Update a model's attributes.* | **PATCH** /projects/{projectId}/models/{modelId}/ |
| <ul><li>**ProjectsModelsPredictionIntervalsCreate**</li><li>datarobot::CreateProjectsModelsPredictionIntervals</li></ul>*Calculate prediction intervals for the specified percentiles for this model.* | **POST** /projects/{projectId}/models/{modelId}/predictionIntervals/ |
| <ul><li>**ProjectsModelsPredictionIntervalsList**</li><li>datarobot::ListProjectsModelsPredictionIntervals</li></ul>*Retrieve prediction intervals that are already calculated for this model.* | **GET** /projects/{projectId}/models/{modelId}/predictionIntervals/ |
| <ul><li>**ProjectsModelsPrimeInfoList**</li><li>datarobot::GetPrimeEligibility</li></ul>*Check a Model for Prime Eligibility* | **GET** /projects/{projectId}/models/{modelId}/primeInfo/ |
| <ul><li>**ProjectsModelsPrimeRulesetsCreate**</li><li>datarobot::RequestApproximation</li></ul>*Create Rulesets* | **POST** /projects/{projectId}/models/{modelId}/primeRulesets/ |
| <ul><li>**ProjectsModelsPrimeRulesetsList**</li><li>datarobot::GetRulesets</li></ul>*List Rulesets* | **GET** /projects/{projectId}/models/{modelId}/primeRulesets/ |
| <ul><li>**ProjectsModelsRetrieve**</li><li>datarobot::GetModel</li></ul>*Get model* | **GET** /projects/{projectId}/models/{modelId}/ |
| <ul><li>**ProjectsModelsScoringCodeList**</li><li>datarobot::DownloadScoringCode</li></ul>*Retrieve Scoring Code* | **GET** /projects/{projectId}/models/{modelId}/scoringCode/ |
| <ul><li>**ProjectsModelsSupportedCapabilitiesList**</li><li>datarobot::GetModelCapabilities</li></ul>*Get supported capabilities for a model.* | **GET** /projects/{projectId}/models/{modelId}/supportedCapabilities/ |
| <ul><li>**ProjectsPrimeFilesCreate**</li><li>datarobot::CreatePrimeCode</li></ul>*Create a Prime File* | **POST** /projects/{projectId}/primeFiles/ |
| <ul><li>**ProjectsPrimeFilesDownloadList**</li><li>datarobot::DownloadPrimeCode</li></ul>*Download Code* | **GET** /projects/{projectId}/primeFiles/{primeFileId}/download/ |
| <ul><li>**ProjectsPrimeFilesList**</li><li>datarobot::ListPrimeFiles</li></ul>*Get Prime Files* | **GET** /projects/{projectId}/primeFiles/ |
| <ul><li>**ProjectsPrimeFilesRetrieve**</li><li>datarobot::GetPrimeFile</li></ul>*Retrieve metadata about a DataRobot Prime file* | **GET** /projects/{projectId}/primeFiles/{primeFileId}/ |
| <ul><li>**ProjectsPrimeModelsCreate**</li><li>datarobot::RequestPrimeModel</li></ul>*Create a Prime Model from a Ruleset* | **POST** /projects/{projectId}/primeModels/ |
| <ul><li>**ProjectsPrimeModelsList**</li><li>datarobot::ListPrimeModels</li></ul>*List all Prime models in a project* | **GET** /projects/{projectId}/primeModels/ |
| <ul><li>**ProjectsPrimeModelsRetrieve**</li><li>datarobot::GetPrimeModel</li></ul>*Retrieve details of a Prime model* | **GET** /projects/{projectId}/primeModels/{modelId}/ |
| <ul><li>**ProjectsRatingTableModelsCreate**</li><li>datarobot::RequestNewRatingTableModel</li></ul>*Create New Models From A Rating Table* | **POST** /projects/{projectId}/ratingTableModels/ |
| <ul><li>**ProjectsRatingTableModelsList**</li><li>datarobot::ListRatingTableModels</li></ul>*List Rating Table Models* | **GET** /projects/{projectId}/ratingTableModels/ |
| <ul><li>**ProjectsRatingTableModelsRetrieve**</li><li>datarobot::GetRatingTableModel</li></ul>*Retrieve Rating Table Model* | **GET** /projects/{projectId}/ratingTableModels/{modelId}/ |
| <ul><li>**ProjectsRatingTablesCreate**</li><li>datarobot::CreateRatingTable</li></ul>*Upload Modified Rating Table File* | **POST** /projects/{projectId}/ratingTables/ |
| <ul><li>**ProjectsRatingTablesFileList**</li><li>datarobot::DownloadRatingTable</li></ul>*Retrieve Rating Table File* | **GET** /projects/{projectId}/ratingTables/{ratingTableId}/file/ |
| <ul><li>**ProjectsRatingTablesList**</li><li>datarobot::ListRatingTables</li></ul>*List Rating Tables For The Project* | **GET** /projects/{projectId}/ratingTables/ |
| <ul><li>**ProjectsRatingTablesPatch**</li><li>datarobot::RenameRatingTable</li></ul>*Update an uploaded rating table* | **PATCH** /projects/{projectId}/ratingTables/{ratingTableId}/ |
| <ul><li>**ProjectsRatingTablesRetrieve**</li><li>datarobot::GetRatingTable</li></ul>*Retrieve Rating Table Information* | **GET** /projects/{projectId}/ratingTables/{ratingTableId}/ |
| <ul><li>**ProjectsRecommendedModelsList**</li><li>datarobot::ListModelRecommendations</li></ul>*List recommended models for the project* | **GET** /projects/{projectId}/recommendedModels/ |
| <ul><li>**ProjectsRecommendedModelsRecommendedModelList**</li><li>datarobot::ListProjectsRecommendedModelsRecommendedModel</li></ul>*Get recommended model* | **GET** /projects/{projectId}/recommendedModels/recommendedModel/ |
| <ul><li>**ProjectsSegmentChampionPutMany**</li><li>datarobot::PutManyProjectsSegmentChampion</li></ul>*Update champion model for a segment project.* | **PUT** /projects/{projectId}/segmentChampion/ |
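Putting the custom-model rows above in order: a model is created, a version is added, its dependency environment is built, and only then can a test run against it. The sketch below encodes that sequence as (method, path, body) triples; the IDs and payload are placeholders, not values from this reference:

```python
def custom_model_test_plan(model_id: str, version_id: str, test_payload: dict):
    """Ordered REST calls from a fresh custom model version to a running
    test, following the ModelsApi table above (IDs are placeholders)."""
    return [
        ("POST", f"/customModels/{model_id}/versions/", None),
        ("POST", f"/customModels/{model_id}/versions/{version_id}/dependencyBuild/", None),
        # Poll the build status until it completes before starting the test:
        ("GET", f"/customModels/{model_id}/versions/{version_id}/dependencyBuild/", None),
        ("POST", "/customModelTests/", test_payload),
    ]
```

Expressing the sequence as data makes it easy to log, dry-run, or replay against different base URLs.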
### NotificationsApi
| Functions and Description | HTTP request |
| ------------- | ------------- |
| <ul><li>**NotificationChannelsCreate**</li><li>datarobot::CreateNotificationChannels</li></ul>*Create notification channel* | **POST** /notificationChannels/ |
| <ul><li>**NotificationChannelsDelete**</li><li>datarobot::DeleteNotificationChannels</li></ul>*Delete notification channel* | **DELETE** /notificationChannels/{channelId}/ |
| <ul><li>**NotificationChannelsList**</li><li>datarobot::ListNotificationChannels</li></ul>*List notification channels* | **GET** /notificationChannels/ |
| <ul><li>**NotificationChannelsPut**</li><li>datarobot::PutNotificationChannels</li></ul>*Update notification channel* | **PUT** /notificationChannels/{channelId}/ |
| <ul><li>**NotificationChannelsRetrieve**</li><li>datarobot::RetrieveNotificationChannels</li></ul>*Retrieve notification channel* | **GET** /notificationChannels/{channelId}/ |
| <ul><li>**NotificationEmailChannelVerificationCreate**</li><li>datarobot::CreateNotificationEmailChannelVerification</li></ul>*Send a 6-digit verification code to the user's email* | **POST** /notificationEmailChannelVerification/ |
| <ul><li>**NotificationEmailChannelVerificationStatusCreate**</li><li>datarobot::CreateNotificationEmailChannelVerificationStatus</li></ul>*Retrieve the status of whether the admin entered the code correctly* | **POST** /notificationEmailChannelVerificationStatus/ |
| <ul><li>**NotificationEventsList**</li><li>datarobot::ListNotificationEvents</li></ul>*List event types and groups the user can include in notification policies.* | **GET** /notificationEvents/ |
| <ul><li>**NotificationLogsList**</li><li>datarobot::ListNotificationLogs</li></ul>*List the notification logs* | **GET** /notificationLogs/ |
| <ul><li>**NotificationPoliciesCreate**</li><li>datarobot::CreateNotificationPolicies</li></ul>*Create notification policy* | **POST** /notificationPolicies/ |
| <ul><li>**NotificationPoliciesDelete**</li><li>datarobot::DeleteNotificationPolicies</li></ul>*Delete notification policy* | **DELETE** /notificationPolicies/{policyId}/ |
| <ul><li>**NotificationPoliciesList**</li><li>datarobot::ListNotificationPolicies</li></ul>*List notification policies* | **GET** /notificationPolicies/ |
| <ul><li>**NotificationPoliciesPut**</li><li>datarobot::PutNotificationPolicies</li></ul>*Update notification policy* | **PUT** /notificationPolicies/{policyId}/ |
| <ul><li>**NotificationPoliciesRetrieve**</li><li>datarobot::RetrieveNotificationPolicies</li></ul>*Retrieve notification policy* | **GET** /notificationPolicies/{policyId}/ |
| <ul><li>**NotificationPolicyMutesCreate**</li><li>datarobot::CreateNotificationPolicyMutes</li></ul>*Create a new ignored notification* | **POST** /notificationPolicyMutes/ |
| <ul><li>**NotificationPolicyMutesDelete**</li><li>datarobot::DeleteNotificationPolicyMutes</li></ul>*Delete an existing notification policy mute* | **DELETE** /notificationPolicyMutes/{muteId}/ |
| <ul><li>**NotificationPolicyMutesList**</li><li>datarobot::ListNotificationPolicyMutes</li></ul>*List ignored notifications, filtered by orgId if provided* | **GET** /notificationPolicyMutes/ |
| <ul><li>**NotificationWebhookChannelTestsCreate**</li><li>datarobot::CreateNotificationWebhookChannelTests</li></ul>*Test webhook notification channel* | **POST** /notificationWebhookChannelTests/ |
| <ul><li>**NotificationWebhookChannelTestsRetrieve**</li><li>datarobot::RetrieveNotificationWebhookChannelTests</li></ul>*Retrieve status of notification channel test* | **GET** /notificationWebhookChannelTests/{notificationId}/ |
| <ul><li>**NotificationsCreate**</li><li>datarobot::CreateNotifications</li></ul>*Resend a notification* | **POST** /notifications/ |
| <ul><li>**RemoteEventsCreate**</li><li>datarobot::CreateRemoteEvents</li></ul>*Post a remote deployment event* | **POST** /remoteEvents/ |
| <ul><li>**UserNotificationsDelete**</li><li>datarobot::DeleteUserNotifications</li></ul>*Delete user notification* | **DELETE** /userNotifications/{userNotificationId}/ |
| <ul><li>**UserNotificationsDeleteMany**</li><li>datarobot::DeleteManyUserNotifications</li></ul>*Delete all user notifications* | **DELETE** /userNotifications/ |
| <ul><li>**UserNotificationsList**</li><li>datarobot::ListUserNotifications</li></ul>*List user notifications* | **GET** /userNotifications/ |
| <ul><li>**UserNotificationsPatch**</li><li>datarobot::PatchUserNotifications</li></ul>*Mark as read* | **PATCH** /userNotifications/{userNotificationId}/ |
| <ul><li>**UserNotificationsPatchMany**</li><li>datarobot::PatchManyUserNotifications</li></ul>*Mark all as read* | **PATCH** /userNotifications/ |
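Each row above maps directly onto a plain HTTP call. As a minimal sketch of driving the user-notification routes (the `https://app.datarobot.com/api/v2` base URL, the `DATAROBOT_API_TOKEN` environment variable, and the Bearer auth scheme are assumptions here, not part of this table):

```python
# Sketch: build (method, URL, headers) tuples for the /userNotifications/
# routes listed above. Sending them is left to any HTTP client; the base
# URL and the token scheme below are assumptions, so adjust for your install.
import os

BASE = os.environ.get("DATAROBOT_ENDPOINT", "https://app.datarobot.com/api/v2")

def auth_headers():
    # Assumed: a bearer token stored in DATAROBOT_API_TOKEN.
    return {"Authorization": f"Bearer {os.environ.get('DATAROBOT_API_TOKEN', '')}"}

def list_user_notifications():
    """GET /userNotifications/ -- list the calling user's notifications."""
    return ("GET", f"{BASE}/userNotifications/", auth_headers())

def mark_all_read():
    """PATCH /userNotifications/ -- mark all user notifications as read."""
    return ("PATCH", f"{BASE}/userNotifications/", auth_headers())
```

Any HTTP client can then send the tuple, e.g. `method, url, headers = list_user_notifications()`.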
### PredictionsApi
| Functions and Description | HTTP request |
| ------------- | ------------- |
| <ul><li>**BatchPredictionJobDefinitionsCreate**</li><li>datarobot::CreateBatchPredictionJobDefinitions</li></ul>*Create a new Batch Prediction job definition* | **POST** /batchPredictionJobDefinitions/ |
| <ul><li>**BatchPredictionJobDefinitionsDelete**</li><li>datarobot::DeleteBatchPredictionJobDefinitions</li></ul>*Delete Batch Prediction job definition* | **DELETE** /batchPredictionJobDefinitions/{jobDefinitionId}/ |
| <ul><li>**BatchPredictionJobDefinitionsList**</li><li>datarobot::ListBatchPredictionJobDefinitions</li></ul>*List Batch Prediction job definitions* | **GET** /batchPredictionJobDefinitions/ |
| <ul><li>**BatchPredictionJobDefinitionsPatch**</li><li>datarobot::PatchBatchPredictionJobDefinitions</li></ul>*Update Batch Prediction job definition* | **PATCH** /batchPredictionJobDefinitions/{jobDefinitionId}/ |
| <ul><li>**BatchPredictionJobDefinitionsPortableList**</li><li>datarobot::ListBatchPredictionJobDefinitionsPortable</li></ul>*Retrieve a job definition snippet for Portable Batch Predictions (PBP)* | **GET** /batchPredictionJobDefinitions/{jobDefinitionId}/portable/ |
| <ul><li>**BatchPredictionJobDefinitionsRetrieve**</li><li>datarobot::RetrieveBatchPredictionJobDefinitions</li></ul>*Retrieve Batch Prediction job definition* | **GET** /batchPredictionJobDefinitions/{jobDefinitionId}/ |
| <ul><li>**BatchPredictionsCreate**</li><li>datarobot::CreateBatchPredictions</li></ul>*Create a new Batch Prediction job* | **POST** /batchPredictions/ |
| <ul><li>**BatchPredictionsCsvUploadFinalizeMultipartCreate**</li><li>datarobot::CreateBatchPredictionsCsvUploadFinalizeMultipart</li></ul>*Finalize a multipart upload* | **POST** /batchPredictions/{predictionJobId}/csvUpload/finalizeMultipart/ |
| <ul><li>**BatchPredictionsCsvUploadPartPut**</li><li>datarobot::PutBatchPredictionsCsvUploadPart</li></ul>*Upload CSV data in multiple parts* | **PUT** /batchPredictions/{predictionJobId}/csvUpload/part/{partNumber}/ |
| <ul><li>**BatchPredictionsCsvUploadPutMany**</li><li>datarobot::PutManyBatchPredictionsCsvUpload</li></ul>*Upload CSV scoring data for a Batch Prediction job* | **PUT** /batchPredictions/{predictionJobId}/csvUpload/ |
| <ul><li>**BatchPredictionsDelete**</li><li>datarobot::DeleteBatchPredictions</li></ul>*Cancel a Batch Prediction job* | **DELETE** /batchPredictions/{predictionJobId}/ |
| <ul><li>**BatchPredictionsDownloadList**</li><li>datarobot::ListBatchPredictionsDownload</li></ul>*Download the scored dataset of a Batch Prediction job* | **GET** /batchPredictions/{predictionJobId}/download/ |
| <ul><li>**BatchPredictionsFromExistingCreate**</li><li>datarobot::CreateBatchPredictionsFromExisting</li></ul>*Create a new Batch Prediction job based on an existing Batch Prediction job.* | **POST** /batchPredictions/fromExisting/ |
| <ul><li>**BatchPredictionsFromJobDefinitionCreate**</li><li>datarobot::CreateBatchPredictionsFromJobDefinition</li></ul>*Launch a Batch Prediction job for scoring* | **POST** /batchPredictions/fromJobDefinition/ |
| <ul><li>**BatchPredictionsList**</li><li>datarobot::ListBatchPredictions</li></ul>*List batch prediction jobs* | **GET** /batchPredictions/ |
| <ul><li>**BatchPredictionsPatch**</li><li>datarobot::PatchBatchPredictions</li></ul>*Update a Batch Prediction job* | **PATCH** /batchPredictions/{predictionJobId}/ |
| <ul><li>**BatchPredictionsRetrieve**</li><li>datarobot::RetrieveBatchPredictions</li></ul>*Retrieve Batch Prediction job* | **GET** /batchPredictions/{predictionJobId}/ |
| <ul><li>**ComputedTrainingPredictionsList**</li><li>datarobot::GetTrainingPredictions</li></ul>*Retrieve training predictions* | **GET** /projects/{projectId}/trainingPredictions/{predictionId}/ |
| <ul><li>**ProjectsModelsPredictionExplanationsInitializationCreate**</li><li>datarobot::RequestPredictionExplanationsInitialization</li></ul>*Create a new prediction explanations initialization.* | **POST** /projects/{projectId}/models/{modelId}/predictionExplanationsInitialization/ |
| <ul><li>**ProjectsModelsPredictionExplanationsInitializationDeleteMany**</li><li>datarobot::DeletePredictionExplanationsInitialization</li></ul>*Delete an existing PredictionExplanationsInitialization.* | **DELETE** /projects/{projectId}/models/{modelId}/predictionExplanationsInitialization/ |
| <ul><li>**ProjectsModelsPredictionExplanationsInitializationList**</li><li>datarobot::GetPredictionExplanationsInitialization</li></ul>*Retrieve the current PredictionExplanationsInitialization.* | **GET** /projects/{projectId}/models/{modelId}/predictionExplanationsInitialization/ |
| <ul><li>**ProjectsPredictJobsDelete**</li><li>datarobot::DeletePredictJob</li></ul>*Cancel a queued prediction job* | **DELETE** /projects/{projectId}/predictJobs/{jobId}/ |
| <ul><li>**ProjectsPredictJobsList**</li><li>datarobot::GetPredictJobs</li></ul>*List all prediction jobs for a project* | **GET** /projects/{projectId}/predictJobs/ |
| <ul><li>**ProjectsPredictJobsRetrieve**</li><li>datarobot::GetPredictJob</li></ul>*Look up a particular prediction job* | **GET** /projects/{projectId}/predictJobs/{jobId}/ |
| <ul><li>**ProjectsPredictionDatasetsDataSourceUploadsCreate**</li><li>datarobot::UploadPredictionDatasetFromDataSource</li></ul>*Upload a dataset for predictions from a `DataSource`.* | **POST** /projects/{projectId}/predictionDatasets/dataSourceUploads/ |
| <ul><li>**ProjectsPredictionDatasetsDatasetUploadsCreate**</li><li>datarobot::UploadPredictionDatasetFromCatalog</li></ul>*Create prediction dataset* | **POST** /projects/{projectId}/predictionDatasets/datasetUploads/ |
| <ul><li>**ProjectsPredictionDatasetsDelete**</li><li>datarobot::DeletePredictionDataset</li></ul>*Delete a dataset that was uploaded for prediction.* | **DELETE** /projects/{projectId}/predictionDatasets/{datasetId}/ |
| <ul><li>**ProjectsPredictionDatasetsFileUploadsCreate**</li><li>datarobot::CreateProjectsPredictionDatasetsFileUploads</li></ul>*Upload a file for predictions from an attached file.* | **POST** /projects/{projectId}/predictionDatasets/fileUploads/ |
| <ul><li>**ProjectsPredictionDatasetsList**</li><li>datarobot::ListPredictionDatasets</li></ul>*List predictions datasets uploaded to a project.* | **GET** /projects/{projectId}/predictionDatasets/ |
| <ul><li>**ProjectsPredictionDatasetsRetrieve**</li><li>datarobot::GetPredictionDataset</li></ul>*Get the metadata of a specific dataset. This only works for datasets uploaded to an existing project for prediction.* | **GET** /projects/{projectId}/predictionDatasets/{datasetId}/ |
| <ul><li>**ProjectsPredictionDatasetsUrlUploadsCreate**</li><li>datarobot::UploadPredictionDataset</li></ul>*Upload a file for predictions from a URL.* | **POST** /projects/{projectId}/predictionDatasets/urlUploads/ |
| <ul><li>**ProjectsPredictionExplanationsCreate**</li><li>datarobot::RequestPredictionExplanations</li></ul>*Create a new PredictionExplanations object (and its accompanying PredictionExplanationsRecord).* | **POST** /projects/{projectId}/predictionExplanations/ |
| <ul><li>**ProjectsPredictionExplanationsList**</li><li>datarobot::GetPredictionExplanationsPage</li></ul>*Retrieve stored Prediction Explanations.* | **GET** /projects/{projectId}/predictionExplanations/{predictionExplanationsId}/ |
| <ul><li>**ProjectsPredictionExplanationsRecordsDelete**</li><li>datarobot::DeletePredictionExplanations</li></ul>*Delete saved Prediction Explanations.* | **DELETE** /projects/{projectId}/predictionExplanationsRecords/{predictionExplanationsId}/ |
| <ul><li>**ProjectsPredictionExplanationsRecordsList**</li><li>datarobot::ListPredictionExplanationsMetadata</li></ul>*List PredictionExplanationsRecord objects for a project.* | **GET** /projects/{projectId}/predictionExplanationsRecords/ |
| <ul><li>**ProjectsPredictionExplanationsRecordsRetrieve**</li><li>datarobot::GetPredictionExplanationsMetadata</li></ul>*Retrieve a PredictionExplanationsRecord object.* | **GET** /projects/{projectId}/predictionExplanationsRecords/{predictionExplanationsId}/ |
| <ul><li>**ProjectsPredictionsCreate**</li><li>datarobot::RequestPredictions</li></ul>*Make new predictions.* | **POST** /projects/{projectId}/predictions/ |
| <ul><li>**ProjectsPredictionsList**</li><li>datarobot::ListPredictions</li></ul>*Get a list of prediction records.* | **GET** /projects/{projectId}/predictions/ |
| <ul><li>**ProjectsPredictionsMetadataList**</li><li>datarobot::ListProjectsPredictionsMetadata</li></ul>*Get a list of prediction metadata records.* | **GET** /projects/{projectId}/predictionsMetadata/ |
| <ul><li>**ProjectsPredictionsMetadataRetrieve**</li><li>datarobot::RetrieveProjectsPredictionsMetadata</li></ul>*Retrieve metadata for a set of predictions.* | **GET** /projects/{projectId}/predictionsMetadata/{predictionId}/ |
| <ul><li>**ProjectsPredictionsRetrieve**</li><li>datarobot::RetrieveProjectsPredictions</li></ul>*Get a completed set of predictions.* | **GET** /projects/{projectId}/predictions/{predictionId}/ |
| <ul><li>**ProjectsTrainingPredictionsCreate**</li><li>datarobot::RequestTrainingPredictions</li></ul>*Submit a job to compute predictions for training data* | **POST** /projects/{projectId}/trainingPredictions/ |
| <ul><li>**ScheduledJobsDelete**</li><li>datarobot::DeleteScheduledJobs</li></ul>*Delete scheduled job* | **DELETE** /scheduledJobs/{jobId}/ |
| <ul><li>**ScheduledJobsList**</li><li>datarobot::ListScheduledJobs</li></ul>*List scheduled deployment batch prediction jobs a user can view* | **GET** /scheduledJobs/ |
| <ul><li>**ScheduledJobsPatch**</li><li>datarobot::PatchScheduledJobs</li></ul>*Run or stop a previously created scheduled integration job* | **PATCH** /scheduledJobs/{jobId}/ |
| <ul><li>**ScheduledJobsRetrieve**</li><li>datarobot::RetrieveScheduledJobs</li></ul>*Retrieve a single scheduled deployment batch prediction job* | **GET** /scheduledJobs/{jobId}/ |
| <ul><li>**TrainingPredictionsList**</li><li>datarobot::ListTrainingPredictions</li></ul>*List training prediction jobs* | **GET** /projects/{projectId}/trainingPredictions/ |
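The multipart CSV upload routes above imply a fixed call sequence: create the job, `PUT` each numbered part, finalize, then download the scored result. A sketch of that sequence as (method, URL) pairs (the base URL, the job id, and zero-based part numbering are assumptions; the request bodies, such as intake settings, are not shown in this table):

```python
# Sketch: the request sequence for scoring a local CSV through the multipart
# Batch Prediction routes above. Only (method, URL) pairs are built; nothing
# is sent. The job id and part numbering are illustrative placeholders.
BASE = "https://app.datarobot.com/api/v2"  # assumed default endpoint

def multipart_scoring_plan(job_id, n_parts):
    plan = [("POST", f"{BASE}/batchPredictions/")]  # create the job first
    for part in range(n_parts):  # upload the CSV in numbered parts (assumed zero-based)
        plan.append(("PUT", f"{BASE}/batchPredictions/{job_id}/csvUpload/part/{part}/"))
    # finalize the upload, then download the scored dataset once the job completes
    plan.append(("POST", f"{BASE}/batchPredictions/{job_id}/csvUpload/finalizeMultipart/"))
    plan.append(("GET", f"{BASE}/batchPredictions/{job_id}/download/"))
    return plan

steps = multipart_scoring_plan("job123", 2)
```

The plan mirrors the `BatchPredictionsCreate`, `BatchPredictionsCsvUploadPartPut`, `BatchPredictionsCsvUploadFinalizeMultipartCreate`, and `BatchPredictionsDownloadList` rows above.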
### ProjectsApi
| Functions and Description | HTTP request |
| ------------- | ------------- |
| <ul><li>**AccessControlList**</li><li>datarobot::ListProjectsAccessControl</li></ul>*Get project's access control list* | **GET** /projects/{projectId}/accessControl/ |
| <ul><li>**AccessControlPatchMany**</li><li>datarobot::PatchManyProjectsAccessControl</li></ul>*Update project's access controls* | **PATCH** /projects/{projectId}/accessControl/ |
| <ul><li>**AutopilotCreate**</li><li>datarobot::PauseQueue</li></ul>*Pause or unpause Autopilot* | **POST** /projects/{projectId}/autopilot/ |
| <ul><li>**AutopilotsCreate**</li><li>datarobot::StartNewAutoPilot</li></ul>*Start autopilot* | **POST** /projects/{projectId}/autopilots/ |
| <ul><li>**BatchTypeTransformFeaturesCreate**</li><li>datarobot::BatchFeaturesTypeTransform</li></ul>*Create multiple new features by changing the type of existing features.* | **POST** /projects/{projectId}/batchTypeTransformFeatures/ |
| <ul><li>**BatchTypeTransformFeaturesResultRetrieve**</li><li>datarobot::RetrieveProjectsBatchTypeTransformFeaturesResult</li></ul>*Retrieve the result of a batch variable type transformation.* | **GET** /projects/{projectId}/batchTypeTransformFeaturesResult/{jobId}/ |
| <ul><li>**CalendarCountryCodesList**</li><li>datarobot::ListCalendarCountryCodes</li></ul>*List the country codes for which preloaded calendar generation can be requested.* | **GET** /calendarCountryCodes/ |
| <ul><li>**CalendarEventsList**</li><li>datarobot::ListProjectsCalendarEvents</li></ul>*List available calendar events for the project.* | **GET** /projects/{projectId}/calendarEvents/ |
| <ul><li>**CalendarsAccessControlList**</li><li>datarobot::ListCalendarsAccessControl</li></ul>*Get a list of users who have access to this calendar and their roles on the calendar.* | **GET** /calendars/{calendarId}/accessControl/ |
| <ul><li>**CalendarsAccessControlPatchMany**</li><li>datarobot::PatchManyCalendarsAccessControl</li></ul>*Update the access control for this calendar.* | **PATCH** /calendars/{calendarId}/accessControl/ |
| <ul><li>**CalendarsDelete**</li><li>datarobot::DeleteCalendar</li></ul>*Delete a calendar.* | **DELETE** /calendars/{calendarId}/ |
| <ul><li>**CalendarsFileUploadCreate**</li><li>datarobot::CreateCalendar</li></ul>*Create a calendar from a file.* | **POST** /calendars/fileUpload/ |
| <ul><li>**CalendarsFromCountryCodeCreate**</li><li>datarobot::CreateCalendarsFromCountryCode</li></ul>*Initialize generation of preloaded calendars.* | **POST** /calendars/fromCountryCode/ |
| <ul><li>**CalendarsFromDatasetCreate**</li><li>datarobot::CreateCalendarsFromDataset</li></ul>*Create a calendar from a dataset* | **POST** /calendars/fromDataset/ |
| <ul><li>**CalendarsList**</li><li>datarobot::ListCalendars</li></ul>*List all available calendars for a user.* | **GET** /calendars/ |
| <ul><li>**CalendarsPatch**</li><li>datarobot::UpdateCalendar</li></ul>*Update a calendar's name* | **PATCH** /calendars/{calendarId}/ |
| <ul><li>**CalendarsRetrieve**</li><li>datarobot::GetCalendar</li></ul>*Retrieve information about a calendar.* | **GET** /calendars/{calendarId}/ |
| <ul><li>**CleanupJobsCreate**</li><li>datarobot::CreateProjectCleanupJobs</li></ul>*Schedule Project Permadelete Job* | **POST** /projectCleanupJobs/ |
| <ul><li>**CleanupJobsDelete**</li><li>datarobot::DeleteProjectCleanupJobs</li></ul>*Cancel Scheduled Project Permadelete Job* | **DELETE** /projectCleanupJobs/{statusId}/ |
| <ul><li>**CleanupJobsDownloadList**</li><li>datarobot::ListProjectCleanupJobsDownload</li></ul>*Download a project's permadeletion report.* | **GET** /projectCleanupJobs/{statusId}/download/ |
| <ul><li>**CleanupJobsList**</li><li>datarobot::ListProjectCleanupJobs</li></ul>*List Project Permadelete job statuses* | **GET** /projectCleanupJobs/ |
| <ul><li>**CleanupJobsRetrieve**</li><li>datarobot::RetrieveProjectCleanupJobs</li></ul>*Retrieve Project Permadelete job status* | **GET** /projectCleanupJobs/{statusId}/ |
| <ul><li>**CleanupJobsSummaryList**</li><li>datarobot::ListProjectCleanupJobsSummary</li></ul>*Get a summary of a project cleanup job.* | **GET** /projectCleanupJobs/{statusId}/summary/ |
| <ul><li>**ClonesCreate**</li><li>datarobot::CloneProject</li></ul>*Clone a project* | **POST** /projectClones/ |
| <ul><li>**ConfigureAndStartAutopilot**</li><li>datarobot::SetTarget</li></ul>*Start modeling* | **PATCH** /projects/{projectId}/aim/ |
| <ul><li>**Create**</li><li>datarobot::SetupProject</li></ul>*Create project.* | **POST** /projects/ |
| <ul><li>**CrossSeriesPropertiesCreate**</li><li>datarobot::RequestCrossSeriesDetection</li></ul>*Validate columns for potential use as the group-by column for cross-series functionality.* | **POST** /projects/{projectId}/crossSeriesProperties/ |
| <ul><li>**Delete**</li><li>datarobot::DeleteProject</li></ul>*Delete a project* | **DELETE** /projects/{projectId}/ |
| <ul><li>**DeletedProjectsCountList**</li><li>datarobot::ListDeletedProjectsCount</li></ul>*Count soft-deleted projects.* | **GET** /deletedProjectsCount/ |
| <ul><li>**DeletedProjectsList**</li><li>datarobot::ListDeletedProjects</li></ul>*Retrieve a list of soft-deleted projects* | **GET** /deletedProjects/ |
| <ul><li>**DeletedProjectsPatch**</li><li>datarobot::PatchDeletedProjects</li></ul>*Recover soft-deleted project* | **PATCH** /deletedProjects/{projectId}/ |
| <ul><li>**DiscardedFeaturesList**</li><li>datarobot::RetrieveDiscardedFeaturesInformation</li></ul>*Get discarded features.* | **GET** /projects/{projectId}/discardedFeatures/ |
| <ul><li>**ExternalTimeSeriesBaselineDataValidationJobsCreate**</li><li>datarobot::CreateProjectsExternalTimeSeriesBaselineDataValidationJobs</li></ul>*Validate baseline data* | **POST** /projects/{projectId}/externalTimeSeriesBaselineDataValidationJobs/ |
| <ul><li>**ExternalTimeSeriesBaselineDataValidationJobsRetrieve**</li><li>datarobot::RetrieveProjectsExternalTimeSeriesBaselineDataValidationJobs</li></ul>*Retrieve Baseline Validation Job* | **GET** /projects/{projectId}/externalTimeSeriesBaselineDataValidationJobs/{baselineValidationJobId}/ |
| <ul><li>**FeatureDiscoveryDatasetDownloadList**</li><li>datarobot::ListProjectsFeatureDiscoveryDatasetDownload</li></ul>*Download the project dataset with features added by feature discovery* | **GET** /projects/{projectId}/featureDiscoveryDatasetDownload/ |
| <ul><li>**FeatureDiscoveryLogsDownloadList**</li><li>datarobot::ListProjectsFeatureDiscoveryLogsDownload</li></ul>*Retrieve a text file containing the feature discovery log* | **GET** /projects/{projectId}/featureDiscoveryLogs/download/ |
| <ul><li>**FeatureDiscoveryLogsList**</li><li>datarobot::ListProjectsFeatureDiscoveryLogs</li></ul>*Retrieve the feature discovery log content and log length* | **GET** /projects/{projectId}/featureDiscoveryLogs/ |
| <ul><li>**FeatureDiscoveryRecipeSQLsDownloadList**</li><li>datarobot::ListProjectsFeatureDiscoveryRecipeSQLsDownload</li></ul>*Download feature discovery SQL recipe* | **GET** /projects/{projectId}/featureDiscoveryRecipeSQLs/download/ |
| <ul><li>**FeatureDiscoveryRecipeSqlExportsCreate**</li><li>datarobot::CreateProjectsFeatureDiscoveryRecipeSqlExports</li></ul>*Generate feature discovery SQL recipe* | **POST** /projects/{projectId}/featureDiscoveryRecipeSqlExports/ |
| <ul><li>**FeatureHistogramsRetrieve**</li><li>datarobot::GetFeatureHistogram</li></ul>*Get feature histogram* | **GET** /projects/{projectId}/featureHistograms/{featureName}/ |
| <ul><li>**FeatureLineagesRetrieve**</li><li>datarobot::RetrieveProjectsFeatureLineages</li></ul>*Retrieve Feature Discovery Lineage* | **GET** /projects/{projectId}/featureLineages/{featureLineageId}/ |
| <ul><li>**FeaturelistsCreate**</li><li>datarobot::CreateFeaturelist</li></ul>*Create a new featurelist.* | **POST** /projects/{projectId}/featurelists/ |
| <ul><li>**FeaturelistsDelete**</li><li>datarobot::DeleteFeaturelist</li></ul>*Delete a specified featurelist.* | **DELETE** /projects/{projectId}/featurelists/{featurelistId}/ |
| <ul><li>**FeaturelistsList**</li><li>datarobot::ListFeaturelists</li></ul>*List featurelists* | **GET** /projects/{projectId}/featurelists/ |
| <ul><li>**FeaturelistsPatch**</li><li>datarobot::UpdateFeaturelist</li></ul>*Update an existing featurelist* | **PATCH** /projects/{projectId}/featurelists/{featurelistId}/ |
| <ul><li>**FeaturelistsRetrieve**</li><li>datarobot::GetFeaturelist</li></ul>*Retrieve a feature list* | **GET** /projects/{projectId}/featurelists/{featurelistId}/ |
| <ul><li>**FeaturesList**</li><li>datarobot::ListFeatureInfo</li></ul>*List project features* | **GET** /projects/{projectId}/features/ |
| <ul><li>**FeaturesMetricsList**</li><li>datarobot::GetValidMetrics</li></ul>*List feature metrics* | **GET** /projects/{projectId}/features/metrics/ |
| <ul><li>**FeaturesMultiseriesPropertiesList**</li><li>datarobot::GetMultiSeriesProperties</li></ul>*Retrieve potential multiseries ID columns to use with a particular datetime partition column.* | **GET** /projects/{projectId}/features/{featureName}/multiseriesProperties/ |
| <ul><li>**FeaturesRetrieve**</li><li>datarobot::GetFeatureInfo</li></ul>*Get project feature* | **GET** /projects/{projectId}/features/{featureName}/ |
| <ul><li>**HdfsProjectsCreate**</li><li>datarobot::CreateHdfsProjects</li></ul>*Create a project from an HDFS file source.* | **POST** /hdfsProjects/ |
| <ul><li>**JobsDelete**</li><li>datarobot::DeleteProjectsJobs</li></ul>*Cancel Job* | **DELETE** /projects/{projectId}/jobs/{jobId}/ |
| <ul><li>**JobsList**</li><li>datarobot::ListJobs</li></ul>*List project jobs* | **GET** /projects/{projectId}/jobs/ |
| <ul><li>**JobsRetrieve**</li><li>datarobot::GetJob</li></ul>*Get job* | **GET** /projects/{projectId}/jobs/{jobId}/ |
| <ul><li>**List**</li><li>datarobot::ListProjects</li></ul>*List projects* | **GET** /projects/ |
| <ul><li>**ModelingFeaturelistsCreate**</li><li>datarobot::CreateModelingFeaturelist</li></ul>*Create a new modeling featurelist.* | **POST** /projects/{projectId}/modelingFeaturelists/ |
| <ul><li>**ModelingFeaturelistsDelete**</li><li>datarobot::DeleteModelingFeaturelist</li></ul>*Delete a specified modeling featurelist.* | **DELETE** /projects/{projectId}/modelingFeaturelists/{featurelistId}/ |
| <ul><li>**ModelingFeaturelistsList**</li><li>datarobot::ListModelingFeaturelists</li></ul>*List all modeling featurelists from a project* | **GET** /projects/{projectId}/modelingFeaturelists/ |
| <ul><li>**ModelingFeaturelistsPatch**</li><li>datarobot::UpdateModelingFeaturelist</li></ul>*Update an existing modeling featurelist* | **PATCH** /projects/{projectId}/modelingFeaturelists/{featurelistId}/ |
| <ul><li>**ModelingFeaturelistsRetrieve**</li><li>datarobot::GetModelingFeaturelist</li></ul>*Retrieve a single modeling featurelist by ID* | **GET** /projects/{projectId}/modelingFeaturelists/{featurelistId}/ |
| <ul><li>**ModelingFeaturesFromDiscardedFeaturesCreate**</li><li>datarobot::RestoreDiscardedFeatures</li></ul>*Restore discarded time series features.* | **POST** /projects/{projectId}/modelingFeatures/fromDiscardedFeatures/ |
| <ul><li>**ModelingFeaturesList**</li><li>datarobot::ListProjectsModelingFeatures</li></ul>*List project modeling features.* | **GET** /projects/{projectId}/modelingFeatures/ |
| <ul><li>**ModelingFeaturesRetrieve**</li><li>datarobot::RetrieveProjectsModelingFeatures</li></ul>*Retrieve project modeling feature.* | **GET** /projects/{projectId}/modelingFeatures/{featureName}/ |
| <ul><li>**MultiseriesIdsCrossSeriesPropertiesList**</li><li>datarobot::ListProjectsMultiseriesIdsCrossSeriesProperties</li></ul>*Retrieve eligible cross-series group-by columns.* | **GET** /projects/{projectId}/multiseriesIds/{multiseriesId}/crossSeriesProperties/ |
| <ul><li>**MultiseriesNamesList**</li><li>datarobot::ListProjectsMultiseriesNames</li></ul>*List the series names of a multiseries project.* | **GET** /projects/{projectId}/multiseriesNames/ |
| <ul><li>**MultiseriesPropertiesCreate**</li><li>datarobot::RequestMultiSeriesDetection</li></ul>*Detect multiseries properties* | **POST** /projects/{projectId}/multiseriesProperties/ |
| <ul><li>**Patch**</li><li>datarobot::UpdateProject</li></ul>*Update project* | **PATCH** /projects/{projectId}/ |
| <ul><li>**RelationshipsConfigurationList**</li><li>datarobot::GetFeatureDiscoveryRelationships</li></ul>*Retrieve relationships configuration for a project* | **GET** /projects/{projectId}/relationshipsConfiguration/ |
| <ul><li>**RelationshipsConfigurationsCreate**</li><li>datarobot::CreateRelationshipsConfigurations</li></ul>*Create a relationships configuration* | **POST** /relationshipsConfigurations/ |
| <ul><li>**RelationshipsConfigurationsDelete**</li><li>datarobot::DeleteRelationshipsConfigurations</li></ul>*Delete a relationships configuration* | **DELETE** /relationshipsConfigurations/{relationshipsConfigurationId}/ |
| <ul><li>**RelationshipsConfigurationsPut**</li><li>datarobot::PutRelationshipsConfigurations</li></ul>*Replace a relationships configuration* | **PUT** /relationshipsConfigurations/{relationshipsConfigurationId}/ |
| <ul><li>**RelationshipsConfigurationsRetrieve**</li><li>datarobot::RetrieveRelationshipsConfigurations</li></ul>*Retrieve a relationships configuration* | **GET** /relationshipsConfigurations/{relationshipsConfigurationId}/ |
| <ul><li>**Retrieve**</li><li>datarobot::GetProject</li></ul>*Get project.* | **GET** /projects/{projectId}/ |
| <ul><li>**SecondaryDatasetsConfigurationsCreate**</li><li>datarobot::CreateProjectsSecondaryDatasetsConfigurations</li></ul>*Create secondary dataset configurations for a project.* | **POST** /projects/{projectId}/secondaryDatasetsConfigurations/ |
| <ul><li>**SecondaryDatasetsConfigurationsDelete**</li><li>datarobot::DeleteProjectsSecondaryDatasetsConfigurations</li></ul>*Soft-delete a secondary dataset configuration.* | **DELETE** /projects/{projectId}/secondaryDatasetsConfigurations/{secondaryDatasetConfigId}/ |
| <ul><li>**SecondaryDatasetsConfigurationsList**</li><li>datarobot::ListProjectsSecondaryDatasetsConfigurations</li></ul>*List all secondary dataset configurations for a project* | **GET** /projects/{projectId}/secondaryDatasetsConfigurations/ |
| <ul><li>**SecondaryDatasetsConfigurationsRetrieve**</li><li>datarobot::RetrieveProjectsSecondaryDatasetsConfigurations</li></ul>*Retrieve secondary dataset configuration by ID.* | **GET** /projects/{projectId}/secondaryDatasetsConfigurations/{secondaryDatasetConfigId}/ |
| <ul><li>**SegmentationTaskJobResultsRetrieve**</li><li>datarobot::RetrieveProjectsSegmentationTaskJobResults</li></ul>*Retrieve segmentation task statuses.* | **GET** /projects/{projectId}/segmentationTaskJobResults/{segmentationTaskId}/ |
| <ul><li>**SegmentationTasksCreate**</li><li>datarobot::CreateProjectsSegmentationTasks</li></ul>*Create segmentation tasks.* | **POST** /projects/{projectId}/segmentationTasks/ |
| <ul><li>**SegmentationTasksList**</li><li>datarobot::ListProjectsSegmentationTasks</li></ul>*List segmentation tasks.* | **GET** /projects/{projectId}/segmentationTasks/ |
| <ul><li>**SegmentationTasksMappingsList**</li><li>datarobot::ListProjectsSegmentationTasksMappings</li></ul>*Retrieve seriesId to segmentId mappings.* | **GET** /projects/{projectId}/segmentationTasks/{segmentationTaskId}/mappings/ |
| <ul><li>**SegmentationTasksRetrieve**</li><li>datarobot::RetrieveProjectsSegmentationTasks</li></ul>*Retrieve segmentation task.* | **GET** /projects/{projectId}/segmentationTasks/{segmentationTaskId}/ |
| <ul><li>**SegmentsPatch**</li><li>datarobot::PatchProjectsSegments</li></ul>*Update child segment project.* | **PATCH** /projects/{projectId}/segments/{segmentId}/ |
| <ul><li>**StatusList**</li><li>datarobot::GetProjectStatus</li></ul>*Check project status* | **GET** /projects/{projectId}/status/ |
| <ul><li>**TypeTransformFeaturesCreate**</li><li>datarobot::CreateDerivedFeatureFunctionMaker</li></ul>*Create a new feature by changing the type of an existing one.* | **POST** /projects/{projectId}/typeTransformFeatures/ |
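Taken together, the rows above describe the core modeling lifecycle: create a project (`Create`/`SetupProject`), set the target and start modeling (`ConfigureAndStartAutopilot`/`SetTarget`), then poll project status (`StatusList`/`GetProjectStatus`). A sketch of that flow at the HTTP level (the base URL and project id are illustrative assumptions; request bodies are not shown in this table):

```python
# Sketch: (method, URL) pairs for the core modeling lifecycle in the table
# above -- create a project, start modeling via the aim route, poll status.
BASE = "https://app.datarobot.com/api/v2"  # assumed default endpoint

def modeling_lifecycle(project_id):
    return [
        ("POST", f"{BASE}/projects/"),                    # Create / SetupProject
        ("PATCH", f"{BASE}/projects/{project_id}/aim/"),  # ConfigureAndStartAutopilot / SetTarget
        ("GET", f"{BASE}/projects/{project_id}/status/"), # StatusList / GetProjectStatus
    ]
```

In practice the project id comes from the response to the first call; here it is passed in up front only to keep the sketch self-contained.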
### SsoConfigurationApi
| Functions and Description | HTTP request |
| ------------- | ------------- |
| <ul><li>**SsoConfigurationsCreate**</li><li>datarobot::CreateSsoConfigurations</li></ul>*Create an SSO configuration for a specific organization* | **POST** /ssoConfigurations/ |
| <ul><li>**SsoConfigurationsList**</li><li>datarobot::ListSsoConfigurations</li></ul>*List SSO configurations.* | **GET** /ssoConfigurations/ |
| <ul><li>**SsoConfigurationsPatch**</li><li>datarobot::PatchSsoConfigurations</li></ul>*Update an SSO configuration for a specific organization.* | **PATCH** /ssoConfigurations/{configurationId}/ |
| <ul><li>**SsoConfigurationsRetrieve**</li><li>datarobot::RetrieveSsoConfigurations</li></ul>*Retrieve SSO configuration of a specific organization.* | **GET** /ssoConfigurations/{configurationId}/ |
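Every `{parameter}` in the HTTP request column is a path placeholder to substitute before calling. A small helper makes that explicit, shown here against the `{configurationId}` routes above (the configuration id and base URL are made-up examples):

```python
# Sketch: substitute {name} placeholders in the route templates above.
# Works for any row in these tables whose route carries path parameters.
def fill(route, **params):
    """Replace each {name} placeholder in a route with its keyword argument."""
    return route.format(**params)

# Illustrative values only; "cfg-123" is not a real configuration id.
url = "https://app.datarobot.com/api/v2" + fill(
    "/ssoConfigurations/{configurationId}/", configurationId="cfg-123")
```

`str.format` is sufficient here because these route templates contain no literal braces outside of their placeholders.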
### UseCaseApi
| Functions and Description | HTTP request |
| ------------- | ------------- |
| <ul><li>**UseCaseValueTemplatesCalculationList**</li><li>datarobot::ListUseCaseValueTemplatesCalculation</li></ul>*Calculate the value of a template with the given template parameters.* | **GET** /useCaseValueTemplates/{templateType}/calculation/ |
| <ul><li>**UseCaseValueTemplatesList**</li><li>datarobot::ListUseCaseValueTemplates</li></ul>*List available use case value templates.* | **GET** /useCaseValueTemplates/ |
| <ul><li>**UseCaseValueTemplatesRetrieve**</li><li>datarobot::RetrieveUseCaseValueTemplates</li></ul>*Get an individual use case value template by its name.* | **GET** /useCaseValueTemplates/{templateType}/ |
| <ul><li>**UseCasesActivitiesList**</li><li>datarobot::ListUseCasesActivities</li></ul>*Retrieve the activities of a use case.* | **GET** /useCases/{useCaseId}/activities/ |
| <ul><li>**UseCasesAttachmentsCreate**</li><li>datarobot::CreateUseCasesAttachments</li></ul>*Attach a list of resources to this use case.* | **POST** /useCases/{useCaseId}/attachments/ |
| <ul><li>**UseCasesAttachmentsDelete**</li><li>datarobot::DeleteUseCasesAttachments</li></ul>*Remove a resource from a use case.* | **DELETE** /useCases/{useCaseId}/attachments/{attachmentId}/ |
| <ul><li>**UseCasesAttachmentsList**</li><li>datarobot::ListUseCasesAttachments</li></ul>*Get a list of resources attached to this use case.* | **GET** /useCases/{useCaseId}/attachments/ |
| <ul><li>**UseCasesAttachmentsRetrieve**</li><li>datarobot::RetrieveUseCasesAttachments</li></ul>*Get a resource that is attached to a use case.* | **GET** /useCases/{useCaseId}/attachments/{attachmentId}/ |
| <ul><li>**UseCasesCreate**</li><li>datarobot::CreateUseCases</li></ul>*Create a new use case.* | **POST** /useCases/ |
| <ul><li>**UseCasesDelete**</li><li>datarobot::DeleteUseCases</li></ul>*Delete a use case* | **DELETE** /useCases/{useCaseId}/ |
| <ul><li>**UseCasesList**</li><li>datarobot::ListUseCases</li></ul>*List use cases the requesting user has access to.* | **GET** /useCases/ |
| <ul><li>**UseCasesPatch**</li><li>datarobot::PatchUseCases</li></ul>*Update a use case.* | **PATCH** /useCases/{useCaseId}/ |
| <ul><li>**UseCasesRealizedValueOverTimeList**</li><li>datarobot::ListUseCasesRealizedValueOverTime</li></ul>*Retrieve realized value information for a given use case over a period of time* | **GET** /useCases/{useCaseId}/realizedValueOverTime/ |
| <ul><li>**UseCasesRetrieve**</li><li>datarobot::RetrieveUseCases</li></ul>*Retrieve a use case.* | **GET** /useCases/{useCaseId}/ |
| <ul><li>**UseCasesSharedRolesList**</li><li>datarobot::ListUseCasesSharedRoles</li></ul>*Get a list of users, groups, and organizations that have access to this use case* | **GET** /useCases/{useCaseId}/sharedRoles/ |
| <ul><li>**UseCasesSharedRolesPatchMany**</li><li>datarobot::PatchManyUseCasesSharedRoles</li></ul>*Share a use case with a user, group or organization.* | **PATCH** /useCases/{useCaseId}/sharedRoles/ |
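The attachment rows above form a plain CRUD set keyed by `useCaseId` and `attachmentId`. A sketch of the four calls as (method, URL) pairs (the ids and base URL are placeholders; the POST body listing the resources to attach is not specified in this table):

```python
# Sketch: the use-case attachment routes above, as (method, URL) pairs.
# Nothing is sent; ids below are illustrative placeholders.
BASE = "https://app.datarobot.com/api/v2"  # assumed default endpoint

def attachment_requests(use_case_id, attachment_id):
    """Build the four attachment calls for one use case."""
    root = f"{BASE}/useCases/{use_case_id}/attachments/"
    return {
        "attach": ("POST", root),                        # UseCasesAttachmentsCreate
        "list": ("GET", root),                           # UseCasesAttachmentsList
        "get": ("GET", f"{root}{attachment_id}/"),       # UseCasesAttachmentsRetrieve
        "detach": ("DELETE", f"{root}{attachment_id}/"), # UseCasesAttachmentsDelete
    }
```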
### UserManagementApi
| Functions and Description | HTTP request |
| ------------- | ------------- |
| <ul><li>**GroupsCreate**</li><li>datarobot::CreateGroups</li></ul>*Create user group* | **POST** /groups/ |
| <ul><li>**GroupsDelete**</li><li>datarobot::DeleteGroups</li></ul>*Delete user group* | **DELETE** /groups/{groupId}/ |
| <ul><li>**GroupsDeleteMany**</li><li>datarobot::DeleteManyGroups</li></ul>*Delete multiple user groups* | **DELETE** /groups/ |
| <ul><li>**GroupsPatch**</li><li>datarobot::PatchGroups</li></ul>*Update user group* | **PATCH** /groups/{groupId}/ |
| <ul><li>**GroupsRetrieve**</li><li>datarobot::RetrieveGroups</li></ul>*Retrieve user group* | **GET** /groups/{groupId}/ |
| <ul><li>**GroupsUsersCreate**</li><li>datarobot::CreateGroupsUsers</li></ul>*Add users to group* | **POST** /groups/{groupId}/users/ |
| <ul><li>**GroupsUsersDeleteMany**</li><li>datarobot::DeleteManyGroupsUsers</li></ul>*Remove users from group* | **DELETE** /groups/{groupId}/users/ |
| <ul><li>**GroupsUsersList**</li><li>datarobot::ListGroupsUsers</li></ul>*List users in group* | **GET** /groups/{groupId}/users/ |
| <ul><li>**OrganizationsJobsList**</li><li>datarobot::ListOrganizationsJobs</li></ul>*List organization jobs* | **GET** /organizations/{organizationId}/jobs/ |
| <ul><li>**OrganizationsList**</li><li>datarobot::ListOrganizations</li></ul>*List organizations* | **GET** /organizations/ |
| <ul><li>**OrganizationsRetrieve**</li><li>datarobot::RetrieveOrganizations</li></ul>*Retrieve organization* | **GET** /organizations/{organizationId}/ |
| <ul><li>**OrganizationsUsersCreate**</li><li>datarobot::CreateOrganizationsUsers</li></ul>*Add user to an existing organization.* | **POST** /organizations/{organizationId}/users/ |
| <ul><li>**OrganizationsUsersDelete**</li><li>datarobot::DeleteOrganizationsUsers</li></ul>*Remove user from organization* | **DELETE** /organizations/{organizationId}/users/{userId}/ |
| <ul><li>**OrganizationsUsersList**</li><li>datarobot::ListOrganizationsUsers</li></ul>*List organization users* | **GET** /organizations/{organizationId}/users/ |
| <ul><li>**OrganizationsUsersPatch**</li><li>datarobot::PatchOrganizationsUsers</li></ul>*Patch organization's user* | **PATCH** /organizations/{organizationId}/users/{userId}/ |
| <ul><li>**OrganizationsUsersRetrieve**</li><li>datarobot::RetrieveOrganizationsUsers</li></ul>*Retrieve user from organization* | **GET** /organizations/{organizationId}/users/{userId}/ |
| <ul><li>**UserCleanupJobsCreate**</li><li>datarobot::CreateUserCleanupJobs</li></ul>*Users permanent delete.* | **POST** /userCleanupJobs/ |
| <ul><li>**UserCleanupJobsDelete**</li><li>datarobot::DeleteUserCleanupJobs</li></ul>*Cancel users perma-deletion.* | **DELETE** /userCleanupJobs/{statusId}/ |
| <ul><li>**UserCleanupJobsRetrieve**</li><li>datarobot::RetrieveUserCleanupJobs</li></ul>*Retrieve users perma-delete job status.* | **GET** /userCleanupJobs/{statusId}/ |
| <ul><li>**UserCleanupPreviewJobsCreate**</li><li>datarobot::CreateUserCleanupPreviewJobs</li></ul>*Users permanent delete preview.* | **POST** /userCleanupPreviewJobs/ |
| <ul><li>**UserCleanupPreviewJobsDelete**</li><li>datarobot::DeleteUserCleanupPreviewJobs</li></ul>*Cancel users perma-delete preview building.* | **DELETE** /userCleanupPreviewJobs/{statusId}/ |
| <ul><li>**UserCleanupPreviewJobsRetrieve**</li><li>datarobot::RetrieveUserCleanupPreviewJobs</li></ul>*Retrieve users perma-delete preview job status.* | **GET** /userCleanupPreviewJobs/{statusId}/ |
| <ul><li>**UserCleanupPreviewsContentList**</li><li>datarobot::ListUserCleanupPreviewsContent</li></ul>*Users permanent delete extended preview.* | **GET** /userCleanupPreviews/{reportId}/content/ |
| <ul><li>**UserCleanupPreviewsDelete**</li><li>datarobot::DeleteUserCleanupPreviews</li></ul>*Delete users permanent delete report.* | **DELETE** /userCleanupPreviews/{reportId}/ |
| <ul><li>**UserCleanupPreviewsDeleteParamsList**</li><li>datarobot::ListUserCleanupPreviewsDeleteParams</li></ul>*Users permanent delete report parameters.* | **GET** /userCleanupPreviews/{reportId}/deleteParams/ |
| <ul><li>**UserCleanupPreviewsStatisticsList**</li><li>datarobot::ListUserCleanupPreviewsStatistics</li></ul>*Users permanent delete preview statistics.* | **GET** /userCleanupPreviews/{reportId}/statistics/ |
| <ul><li>**UserCleanupSummariesContentList**</li><li>datarobot::ListUserCleanupSummariesContent</li></ul>*Users permanent delete extended summary.* | **GET** /userCleanupSummaries/{reportId}/content/ |
| <ul><li>**UserCleanupSummariesDelete**</li><li>datarobot::DeleteUserCleanupSummaries</li></ul>*Delete users permanent delete report.* | **DELETE** /userCleanupSummaries/{reportId}/ |
| <ul><li>**UserCleanupSummariesDeleteParamsList**</li><li>datarobot::ListUserCleanupSummariesDeleteParams</li></ul>*Users permanent delete report parameters.* | **GET** /userCleanupSummaries/{reportId}/deleteParams/ |
| <ul><li>**UserCleanupSummariesStatisticsList**</li><li>datarobot::ListUserCleanupSummariesStatistics</li></ul>*Users permanent delete summary statistics.* | **GET** /userCleanupSummaries/{reportId}/statistics/ |
| <ul><li>**UsersCreate**</li><li>datarobot::CreateUsers</li></ul>*Create a user and add them to an organization.* | **POST** /users/ |
| <ul><li>**UsersLimitsList**</li><li>datarobot::ListUsersLimits</li></ul>*Get the rate limits and account limits for a user* | **GET** /users/{userId}/limits/ |
| <ul><li>**UsersLimitsPatchMany**</li><li>datarobot::PatchManyUsersLimits</li></ul>*Update the rate limits and account limits for a user* | **PATCH** /users/{userId}/limits/ |
### UtilitiesApi
| Functions and Description | HTTP request |
| ------------- | ------------- |
| <ul><li>**StringEncryptionsCreate**</li><li>datarobot::CreateStringEncryptions</li></ul>*Encrypt a string which DataRobot can decrypt.* | **POST** /stringEncryptions/ |
|
r-ref
|
---
title: Public preview R client v2.29
description: Learn about the new features available for public preview in version 2.29 of DataRobot's R client.
---
# Public preview R client v2.29
Now available for public preview, DataRobot has released [version 2.29 of the R client](https://github.com/datarobot/rsdk/releases){ target=_blank }. This version brings parity between the R client and version 2.29 of the Public API. As a result, it introduces significant changes to common methods and usage of the client. These changes are encapsulated in a new library (in addition to the `datarobot` library): `datarobot.apicore`, which provides auto-generated functions to access the Public API. The `datarobot` package provides a number of "API wrapper functions" around the `apicore` package to make it easier to use.
In addition to the release notes outlined below, reference the [public preview documentation](r-ref) for an overview of the functions introduced with v2.29.
## Feature overview
New API Functions:
* Generated API wrapper functions are organized into categories based on their tags from the OpenAPI specification.
* These functions use camel-cased argument names to be consistent with the rest of the package.
* Most function names follow a `VerbObject` pattern based on the OpenAPI specification.
* Some function names match "legacy" functions that existed in v2.18 of the R client if they invoked the same underlying endpoint. For example, the wrapper function is called `GetModel`, not `RetrieveProjectsModels`, since the former is what was implemented in the R client for the endpoint `/projects/{projectId}/models/{modelId}`.
* Similarly, these functions use the same arguments as the corresponding "legacy" functions to ensure DataRobot does not break existing code that calls those functions.
* Added the `DownloadDatasetAsCsv` function to retrieve dataset as CSV using `catalogId`.
* Added the `GetFeatureDiscoveryRelationships` function to get the feature discovery relationships for a project.
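A minimal sketch of the two new functions follows. The IDs are placeholders, and the exact argument names beyond the documented `catalogId` and project ID are assumptions—check the package help pages for the installed signatures:

```R
library(datarobot)

# Download an AI Catalog dataset as CSV by its catalogId (placeholder ID;
# the destination-file argument name is an assumption).
DownloadDatasetAsCsv("600f45bba65b448826884d5f", "dataset.csv")

# Retrieve the Feature Discovery relationships for a project (placeholder ID).
relationships <- GetFeatureDiscoveryRelationships("60123abc456def7890123456")
```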
Other new features:
* The R client (both `datarobot` and `datarobot.apicore` packages) will output a warning when you attempt to access certain resources (projects, models, deployments, etc.) that are deprecated or disabled by the DataRobot platform migration to Python 3.
* Added the helper function `EditConfig` that allows you to interactively modify drconfig.yaml.
Many prominent features introduced in `datarobot.apicore` are listed below.
Feature | Method name
------- | -----------
Visual AI image learning | `datarobot.apicore::ImagesApi`
Location AI | `datarobot.apicore::InsightsApi`
Data Quality Assessment | `datarobot::RetrieveModelingFeatures()`
SHAP insights | `datarobot.apicore::InsightsApi`
Segment analysis for service health | `datarobot::ListDeploymentSettings()`
Model package replacement | `datarobot.apicore::DeploymentsApi$DeploymentsModelPatchMany()`
Download Scoring Code from deployments | `datarobot::ListDeploymentsScoringCode()`
Challenger Models | `datarobot::RetrieveDeploymentsChallengers()`
Feature Discovery | `datarobot.apicore::ProjectsApi`
Bias and Fairness | `datarobot::ListProjectsModelsFairnessInsights()`
Prediction environments | `datarobot.apicore::DeploymentsApi`
Visual AI image augmentation | `datarobot.apicore::ImagesApi`
Batch prediction cloud connectors | `datarobot.apicore::PredictionsApi`
Feature Discovery Snowflake integration | `datarobot.apicore::ProjectsApi`
DataRobot Pipelines | `datarobot.apicore::DataConnectivityApi`
Composable ML project linking and bulk training | `datarobot.apicore::BlueprintsApi`, `datarobot.apicore::CustomTasksApi`
Clustering | `datarobot.apicore::InsightsApi`
Multilabel classification | `datarobot.apicore::ProjectsApi`, `datarobot.apicore::InsightsApi`
Automated Retraining | `datarobot.apicore::DeploymentsApi`
Bias Mitigation functionality | `datarobot.apicore::ModelsApi`
“Uncensored” blueprints | `datarobot.apicore::BlueprintsApi`
Scoring Code in Snowflake | `datarobot::ListDeploymentsScoringCode()`
Segmented modeling for multiseries projects | `datarobot.apicore::ModelsApi`
Time series data prep | `datarobot.apicore::AiCatalogApi`
Scoring Code for time series | `datarobot.apicore::ModelsApi`, `datarobot.apicore::DeploymentsApi`
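Each `datarobot.apicore` entry above names an R6 API class; instantiate it with `$new()` and call methods corresponding to Public API endpoints. A minimal sketch, assuming default authentication is already configured:

```R
library(datarobot.apicore)

# Instantiate the Visual AI class; its methods map to the image endpoints.
imagesApi <- datarobot.apicore::ImagesApi$new()
```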
## Enhancements
* The function `RequestFeatureImpact` now accepts a `rowCount` argument, which will change the sample size used for Feature Impact calculations.
* The internal helper function `ValidateModel` was renamed to `ValidateAndReturnModel` and now works with model classes from the `apicore` package.
* The `quickrun` argument has been removed from the function `SetTarget`. Set `mode = AutopilotMode.Quick` instead.
* Removed files (code, tests, doc) representing parts of the Public API not present in v2.27-2.29.
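For example, code written against the removed `quickrun` argument migrates as follows. This is a sketch: `project` and `model` are placeholder objects, and the target name is illustrative:

```R
# Before v2.29: SetTarget(project, target = "Sales", quickrun = TRUE)
# From v2.29 on, request Quick mode via the mode argument instead:
SetTarget(project, target = "Sales", mode = AutopilotMode.Quick)

# Feature Impact can now be computed on a smaller sample via rowCount:
featureImpactJobId <- RequestFeatureImpact(model, rowCount = 1000)
```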
## Bugfixes
* The enum `ModelCapability` has been properly exported.
* Fixed `FullAverageDataset` function in the PartialDependence vignette to ignore `NA` when calculating the `min` and `max` of the data range.
* Fixed `RetrieveAutomatedDocuments` function to accept filename argument that is used to specify where to save the automated document.
* Fixed `datarobot.apicore` file upload functions to properly encode the payload as "multipart".
* Fixed `datarobot.apicore` JSON serialization bugs.
* Fixed tests exercising the `BuildPath` helper function.
## Dependency changes
* The `datarobot` package is now dependent on R >= 3.5 due to changes in the updated "Introduction to DataRobot" vignette.
* Added dependency on `AmesHousing` package for updated "Introduction to DataRobot" vignette.
* Removed dependency on `MASS` package.
* Client documentation is now explicitly generated with Roxygen2 v7.2.1.
## Documentation changes
* Package-level documentation for both packages has been updated to explain how to use package options.
* Updated "Introduction to DataRobot" vignette to use Ames, Iowa housing data instead of Boston, Massachusetts housing dataset.
* Compressed `extdata/Friedman1.csv` and updated vignettes dependent on that dataset.
* Removed `extdata/anomFrame.csv` as it was unused.
## Deprecations and deletions
Review the breaking changes introduced in version 2.29:
* The `quickrun` argument has been removed from the function SetTarget. Set `mode = AutopilotMode.Quick` instead.
* The Transferable Models functions have been removed. Note that the underlying endpoints were also removed from the Public API with the removal of the Standalone Scoring Engine (SSE). The affected functions are listed below:
* `ListTransferableModels`
* `GetTransferableModel`
* `RequestTransferableModel`
* `DownloadTransferableModel`
* `UploadTransferableModel`
* `UpdateTransferableModel`
* `DeleteTransferableModel`
Review the deprecations introduced in version 2.29:
* Compliance Documentation API is deprecated. Instead use the Automated Documentation API.
## Installation
The following sections outline how to install v2.29 of DataRobot's R client.
### Prerequisites
Install the client's dependencies:
```R
install.packages("jsonlite")
install.packages("httr")
install.packages("R6")
```
### Install the package
```R
library(remotes)
install_github("datarobot/rsdk", subdir = "datarobot.apicore", ref = github_release())
install_github("datarobot/rsdk", subdir = "datarobot", ref = github_release())
```
### Usage
```R
library(datarobot) # This will load the datarobot.apicore package as well
```
### Configuration options
The `datarobot.apicore` package can be configured with the following options (read them with `getOption()` and set them with `options()`):
`datarobot.apicore.returnS3` is a boolean that, if `TRUE`, will return all API responses to the caller as S3 objects. If `FALSE`, the responses are returned as R6 objects. The default configuration is `TRUE`.
To work with R6 objects, run the following snippet:
```R
options(datarobot.apicore.returnS3 = FALSE)
```
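You can confirm the current setting with `getOption()`, falling back to the documented default when the option has not been set:

```R
# Returns the configured value, or TRUE (the package default) if unset.
getOption("datarobot.apicore.returnS3", default = TRUE)
```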
|
index
|
---
title: DataRobot apicore package walkthrough
description: Learn how to access, install, and use DataRobot's apicore package for the R client.
---
# DataRobot apicore package walkthrough {: #datarobot-apicore-package-walkthrough }
In v2.29 of the R client, available for public preview, DataRobot has introduced a new dependency: the `datarobot.apicore` package. It provides access to DataRobot platform capabilities that were previously unavailable in the R client, and is generated from the OpenAPI specification of the DataRobot Public API.
Use the following snippet to install the two packages:
```R
# Install the packages directly from GitHub
library(remotes)
install_github("datarobot/rsdk", subdir = "datarobot.apicore", ref = github_release())
install_github("datarobot/rsdk", subdir = "datarobot", ref = github_release())

# Alternatively, download the tarballs directly from the GitHub Releases page at
# https://github.com/datarobot/rsdk/releases
# and then run install.packages() on them
```
The `datarobot.apicore` package is loaded into your R session when you load the `datarobot` package.
```R
library(datarobot) # This will load the datarobot.apicore package as well
#> Loading required package: datarobot.apicore
#> Authenticating with config at: /Users/druser/.config/datarobot/drconfig.yaml
#> Authentication token saved
```
The following code lets you check the version of the API server that you are connected to.
```R
iapi <- datarobot.apicore::InfrastructureApi$new()
iapi$VersionList()
#> $versionString
#> [1] "2.30.0"
#>
#> $minor
#> [1] 30
#>
#> $major
#> [1] 2
#>
#> attr(,"class")
#> [1] "VersionRetrieveResponse"
```
Next, try using the endpoint `GET /datasets`. It retrieves all of the datasets that you have access to in the AI Catalog (a new method introduced in v2.29).
```R
catalogapi <- datarobot.apicore::AiCatalogApi$new()
try(catalogapi$DatasetsList())
#> Error in private$DatasetsListWithHttpInfo(limit, offset, category, orderBy, :
#>   Missing required parameter `limit`.
```
v2.29 also introduces request parameter validation with helpful error messaging. Fill in the parameters used below (both required and optional).
```R
datasets <- try(catalogapi$DatasetsList(
  limit = 2,
  offset = 0,
  category = "TRAINING",
  orderBy = "created"
))
dataset <- datasets$data[[1]]
dataset[c("name", "datasetId", "datasetSize", "creationDate")]
#> $name
#> [1] "SPI 2016-2019.csv"
#>
#> $datasetId
#> [1] "600f45bba65b448826884d5f"
#>
#> $datasetSize
#> [1] 8795275
#>
#> $creationDate
#> [1] "2021-01-25 22:27:07 UTC"
```
The `datarobot.apicore` package is very expressive and can provide R-specific functionality around the DataRobot Public API. Generally speaking, though, you may not need this additional customization, so the `datarobot` package provides several conveniences to simplify your development.
## Access the API {: #access-the-api }
The `datarobot` environment hosts a singleton list, `dr`, containing instances of all of the different API classes in `datarobot.apicore`.
```R
exists("dr")
#> [1] TRUE
print(names(dr))
#>  [1] "AiCatalogApi"            "AnalyticsApi"
#>  [3] "ApplicationsApi"         "BlueprintsApi"
#>  [5] "CommentsApi"             "CredentialsApi"
#>  [7] "CustomTasksApi"          "DataConnectivityApi"
#>  [9] "DatetimePartitioningApi" "DeploymentsApi"
#> [11] "DocumentationApi"        "GovernanceApi"
#> [13] "ImagesApi"               "InfrastructureApi"
#> [15] "InsightsApi"             "JobsApi"
#> [17] "MlopsApi"                "ModelsApi"
#> [19] "NotificationsApi"        "PredictionsApi"
#> [21] "ProjectsApi"             "SsoConfigurationApi"
#> [23] "UseCaseApi"              "UserManagementApi"
#> [25] "UtilitiesApi"
```
You can use this list to quickly access API methods and avoid constructing new objects every time. Try checking the API server version again.
```R
# Server version
dr$InfrastructureApi$VersionList()
#> $versionString
#> [1] "2.30.0"
#>
#> $minor
#> [1] 30
#>
#> $major
#> [1] 2
#>
#> attr(,"class")
#> [1] "VersionRetrieveResponse"

# AI catalog datasets
datasets <- try(dr$AiCatalogApi$DatasetsList(
  limit = 2,
  offset = 0,
  category = "TRAINING",
  orderBy = "created"
))
dataset <- datasets$data[[1]]
dataset[c("name", "datasetId", "datasetSize", "creationDate")]
#> $name
#> [1] "SPI 2016-2019.csv"
#>
#> $datasetId
#> [1] "600f45bba65b448826884d5f"
#>
#> $datasetSize
#> [1] 8795275
#>
#> $creationDate
#> [1] "2021-01-25 22:27:07 UTC"
```
As the example above shows, the `dr` list saves a line of code and avoids constructing a new API object for each call.
!!! note
The API classes in the `dr` list all use the default authentication method, `ConnectToDataRobot()`.
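If you have no `drconfig.yaml`, you can authenticate explicitly before using `dr`; the endpoint and token below are placeholders:

```R
library(datarobot)

# Authenticate against your DataRobot instance (replace with your own values).
ConnectToDataRobot(
  endpoint = "https://app.datarobot.com/api/v2",
  token = "YOUR_API_TOKEN"
)

# The singleton list `dr` then uses this connection.
dr$InfrastructureApi$VersionList()
```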
## Convenience wrapper functions {: #convenience-wrapper-functions }
DataRobot provides a set of wrapper functions around every API endpoint. These functions:
* Follow the more intuitive `VerbObject()` naming convention that has existed in the R API client. For example, `ListDatasets()` rather than `DatasetsList()`.
* Reuse the old names for functions that were already implemented in the R API Client before v2.29. For example, `GetServerVersion()` rather than `VersionList()`.
* Set default values if they were provided in the OpenAPI spec.
Try checking the API server version one more time:
```R
GetServerVersion()
#> $major
#> [1] 2
#>
#> $minor
#> [1] 30
#>
#> $versionString
#> [1] "2.30.0"
#>
#> $releasedVersion
#> [1] "2.29.0"
```
Now, try looking up training datasets.
```R
trainingDatasets <- try(ListDatasets(
  category = "TRAINING",
  orderBy = "created",
  datasetVersionIds = c(),
  offset = 0,
  limit = 2
))
dataset <- trainingDatasets$data[[1]]
dataset[c("name", "datasetId", "datasetSize", "creationDate")]
#> $name
#> [1] "SPI 2016-2019.csv"
#>
#> $datasetId
#> [1] "600f45bba65b448826884d5f"
#>
#> $datasetSize
#> [1] 8795275
#>
#> $creationDate
#> [1] "2021-01-25 22:27:07 UTC"
```
DataRobot recommends you use whichever pattern you’re most comfortable with, but the convenience wrapper functions provide syntactic ease.
|
apicore
|
---
title: REST API code examples
description: Review comprehensive workflows, notebooks, and tutorials that help you find complete examples of common data science and machine learning workflows.
---
# REST API code examples {: #rest-api-code-examples }
The API user guide includes overviews and workflows for DataRobot's REST API that outline complete examples of common data science and machine learning workflows. Be sure to review the [API quickstart guide](api-quickstart/index) before using the notebooks below.
Topic | Describes... |
----- | ------ |
[Create a multiseries project](multi-rest.ipynb) | How to initiate a DataRobot project for a multiseries time series problem using the DataRobot REST API.
[Create a clustering project](rest-cluster.ipynb) | How to create a clustering project and initiate Autopilot in Manual mode via DataRobot's REST API. |
[Fetch metadata from prediction jobs](pred-metadata.ipynb) | How to retrieve metadata from prediction jobs with DataRobot's REST API. |
|
index
|
---
title: R code examples
description: Review comprehensive workflows, notebooks, and tutorials that help you find complete examples of common data science and machine learning workflows.
---
# R code examples {: #r-code-examples }
The API user guide includes overviews and workflows for DataRobot's R client that outline complete examples of common data science and machine learning workflows. Be sure to review the [API quickstart guide](api-quickstart/index) before using the notebooks below.
Topic | Describes... |
----- | ------ |
[Advanced feature selection with R](r-select.ipynb) | How to select features by creating aggregated feature impact. |
[Prediction Explanation clustering with R](pe-cluster.ipynb) | The analysis and identification of the clusters present in a DataRobot model's Prediction Explanations using the DataRobot R client. |
[Access blueprints with R](r-leaderboard-nb.ipynb) | How to access blueprints from either the Leaderboard or the Repository.
|
index
|
---
title: Price elasticity of demand modeling
description: Understand the impact that changes in price will have on consumer demand for a given product.
---
# Price elasticity of demand modeling {: #price-elasticity-of-demand-modeling }
The price elasticity of demand is a measurement for how demand for a product is affected by changes in its price, and is a crucial consideration for organizations that make pricing decisions. Generally, there’s an inverse relationship between price and demand; as price for a product or category increases, demand or the number of products sold tends to decrease. However, not all products are equally sensitive to changes in price. Factors such as product differentiation, competition, and promotional activities all play pivotal roles in the relationship between price and demand for individual products or categories. To maximize profits, companies must use price elasticities to determine the optimal balance between price and demand, as well as monitor these elasticities regularly to account for changes in the overall market environment.
[The included notebook](elasticity.ipynb) helps you understand the impact that changes in price will have on consumer demand for a given product. Business analysts who measure price elasticity, and business users who require elasticity as an input to pricing decisions, will benefit from this notebook.
Following this workflow will allow you to identify relationships between price and demand, maximize revenue by properly pricing products, monitor price elasticities for changes in price and demand, and reduce manual processes used to obtain and update price elasticities.
### Solution value {: #solution-value }
Traditional approaches to calculating price elasticity leave much to be desired:
* They are often managed by third-party analytics providers, obscuring methods and limiting experimentation.
* Category expertise is missing in selecting confounding variables.
* A single coefficient does not fully capture a price's relationship to demand; better model explainability is needed.
* Models are typically limited to linear methods, limiting their quality and accuracy.
* The traditional approach is a single point-in-time estimate, but with high-quality model monitoring and retraining this can be a living and dynamic analysis.
Using DataRobot for price elasticity modeling provides a much more flexible and accurate way to make pricing decisions.
The primary issues that this use case addresses include:
* Revenue loss due to non-optimally priced items.
* Increased cadence for analyzing pricing with greater user control and understanding.
* Decreased reliance on expensive third-party analytics vendors.
### Data overview {: #data-overview }
Each row in this use case's dataset represents a product, by day, in a single market, with sales and price information. The same product on the same day in a different market is represented in an independent row. Additionally, confounding variables related to active promotions, weather, and macroeconomic indicators are added to help isolate the relationship between price and sales. These should be reduced or augmented based on your category or product knowledge.
### Feature overview {: #feature-overview }
* `Date`: Calendar date, daily level
* `SKUName`: Unique identifier for each product
* `Sales`: Total dollar sales for that product on that date
* `PriceBaseline`: Price of the product without discounting
* `PriceActual`: Current price of the product with discounting
* `PctOnSale`: Percentage difference between `PriceBaseline` and `PriceActual`
* `HotDay`: Binary indicator for whether the temperature was above some threshold
* `sunnDay`: Binary indicator for whether the day was sunny in a given geography
* `EconChangeGDP`: Percentage change in US GDP, reported quarterly (blank on unreported dates)
* `EconJobsChange`: Percentage change in US unemployment insurance claims, available weekly (blank on unreported dates)
* `AnnualizedCPI`: Federal Reserve measure of national price inflation, available monthly
### Prepare data {: #prepare-data }
Use DataRobot’s EDA stages to proactively identify common impediments to model performance like outliers, missing values, and target leakage.
Build feature lists to approach modeling with the appropriate variables. For this project, DataRobot derives `Day of the Week`, `Day of the Month`, `Month`, and `Year` variables from the provided dates.

This use case removes `Year` and keeps all other variables. It also creates a feature list of variables that should always have a negative relationship with sales ([monotonically decreasing](monotonic)) and uses it to inform model training. In this case, the feature list enforces the idea that sales should always decrease as price increases.

### Model insights {: #modeling-and-insights }
When modeling completes, identify the top-performing model on the Leaderboard and use the **Understand** tab in the UI to learn more about the model's behavior.
In the example below, the **Feature Impact** chart shows that the product, its price, and the associated marketing efforts have the highest feature importance. Also, two of the included macroeconomic indicators have little to no impact on the model; experimenting with removing these two variables may improve model performance.

For this use case, the partial dependence plot shown in [**Feature Effects**](feature-effects#partial-dependence-calculations) may be the most important model insight. In this case, the plot shows the impact of price increases on sales at all price values. Rather than relying on a single coefficient, this acknowledges that not all units of price increase have the same impact on sales. For example, in this graph a price increase from $2.10 to $3 has almost no effect on sales, but increasing the price beyond $3 has a significant negative impact.

[Prediction Explanations](pred-explain/index) (XEMP in this example) are extremely useful to analysts answering "Why?" questions about their product or category. In the example below, you can see a product with low expected sales. The top three reasons given show that this is because the item is not on sale and has no marketing activity to promote it. The product is an ice cream, so a discounted brand with the same flavor is likely nearby, altering consumer decision making.

### Optimizing price {: #optimizing-price }
With an understanding of how prices and other factors impact sales, you can find an optimal price that maximizes revenue. To do so, you can simulate many possible pricing scenarios for the products you want to analyze. The starting data contains one row for each product on an upcoming date, providing as much confounding information as possible. The starting price is the currently planned price.

This example creates an observation of every product with 1% increments ranging from a 25% discount to a 25% price increase.
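The scenario grid can be sketched in R as follows, assuming a `products` data frame with a `PriceActual` column (both names are hypothetical):

```R
# Cross-join each product row with 51 price multipliers
# (a 25% discount through a 25% increase, in 1% steps).
steps <- data.frame(PriceMultiplier = seq(0.75, 1.25, by = 0.01))
scenarios <- merge(products, steps, by = NULL)  # by = NULL yields the Cartesian product
scenarios$PriceActual <- scenarios$PriceActual * scenarios$PriceMultiplier
```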

Use the top model from the Leaderboard to make predictions against the dataset and calculate the total revenue for each simulation.

Select a single product and observe its unique relationship between price and sales. In this graph, note the intersection point between sales and revenue.


Lastly, calculate the optimum price point for each product that maximizes its revenue.
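This last step can be sketched in R, assuming a `scenarios` data frame holding one row per product-price pair with predicted sales attached (the `SKUName`, `PriceActual`, and `PredictedSales` column names are assumptions):

```R
# Revenue for each simulated price point.
scenarios$Revenue <- scenarios$PriceActual * scenarios$PredictedSales

# For each product, keep the row whose price maximizes revenue.
optimal <- do.call(rbind, lapply(split(scenarios, scenarios$SKUName), function(d) {
  d[which.max(d$Revenue), ]
}))
```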

The final output shows a new optimal price point for each product under the column `PriceActual_y`. Note that Heck 97% Pork Sausages are actually being discounted too much:

### Single elasticity coefficient {: #single-elasticity-coefficient}
To generate a single coefficient for the `PriceActual` variable, use a different model from the Leaderboard. Find the initial Linear Regression blueprint and retrain it on 100% of the data using the Holdout partition.

Once the Linear Regression model finishes retraining, select it and navigate to [**Describe > Coefficients**](coefficients) to export its coefficients. Use the export button to download all coefficients and find the value associated with `PriceActual`.


### Demo {: #demo }
See the notebook outlining this use case [here](elasticity.ipynb).
|
index
|
---
title: Python v2.x use cases
description: Review Jupyter notebooks that outline common use cases for version 2.x of DataRobot's Python client.
---
# Python v2.x use cases {: #python-v2x-use-cases }
Review Jupyter notebooks that outline common use cases and machine learning workflows using DataRobot's Python client.
Topic | Describes... |
----- | ------ |
[Large scale demand forecasting](demand-forecast/index) | An end-to-end demand forecasting use case that uses DataRobot's Python package. |
[Predict loan defaults](loan-default/index) | A use case that reduces defaults and minimizes risk by predicting the likelihood that a borrower will not repay their loan. |
[Predict late shipments](predict-shipment/index) | A use case that determines whether a shipment will be late or if there will be a shortage of parts. |
[Predict steel plate defects](steel/index) | A use case that helps manufacturers significantly improve the efficiency and effectiveness of identifying defects of all kinds, including those for steel sheets.
[Reduce 30-Day readmissions rate](readmission/index) | A use case to reduce the 30-day readmission rate at a hospital. |
[Identify money laundering with anomaly detection](aml/index) | How to train anomaly detection models to detect outliers. |
[No-show appointment forecasting](no-show-appt/index) | How to build a model that identifies patients most likely to miss appointments, with correlating reasons. |
[Measure price elasticity of demand](elasticity/index) | A use case to identify relationships between price and demand, maximize revenue by properly pricing products, and monitor price elasticities for changes in price and demand. |
[Predict equipment failure](part-fail.ipynb) | A use case that determines whether equipment part failure will occur. |
[Identify money laundering with anomaly detection](outlier-detect.ipynb) | How to train anomaly detection models to detect outliers. |
[Lead scoring](lead-scoring.ipynb) | A binary classification problem of whether a prospect will become a customer. |
[Predict fraudulent medical claims](pred-fraud.ipynb) | The identification of fraudulent medical claims using the DataRobot Python package. |
[Predict CO₂ levels with out-of-time validation modeling](otv-nb.ipynb) | How to use [out-of-time validation (OTV)](otv) modeling with DataRobot's Python client to predict monthly CO₂ levels for one of Hawaii's active volcanoes, Mauna Loa. |
[Forecast sales with multiseries modeling](multiseries-nb.ipynb) | How to forecast future sales for multiple stores using multiseries modeling. |
[Predictions for fantasy baseball](fantasy.ipynb) | An estimate of a baseball player's true talent level and their likely performance for the coming season. |
[Predict customer churn](customer-churn.ipynb) | How to predict customers that are at risk to churn and when to intervene to prevent it. |
[Configure datetime partitioning](datetime-nb.ipynb) | How to use [datetime partitioning](datetime_partitioning) to guard a project against time-based target leakage. |
|
index
|
---
title: No-show appointment forecasting
description: Build a model that identifies patients most likely to miss appointments, with correlating reasons. This data can then be used by staff to target outreach on those patients and additionally to understand, and perhaps address, associated issues.
---
# No-show appointment forecasting {: #no-show-appointment-forecasting }
In this use case you will build a model that identifies patients most likely to miss appointments, with correlating reasons. This data can then be used by staff to target outreach on those patients and additionally to understand, and perhaps address, associated issues.
[Click here](no_show.ipynb) to jump directly to the notebook. Otherwise, the following several paragraphs describe the business justification and problem framing for this use case.
## Background {: #background }
Canceling a doctor's appointment need not be particularly problematic, provided the office gets appropriate notice. Patients who miss appointments without notice ("no-shows"), on the other hand, cost outpatient health centers a staggering 14% of anticipated daily revenue. Missed appointments result in lower utilization rates for doctors and nurses and also contribute unnecessarily to the overhead costs required to run outpatient centers. In addition, when a patient misses an appointment, they risk poorer health outcomes due to lack of timely care.
Many outpatient centers employ solutions such as phoning patients in advance, but this high-touch investment is often not prioritized for the highest-risk no-show patients. Low-touch solutions, such as automated texts, are effective tools for mass reminders but do not offer the personalization needed for a patient at the highest risk of missing an appointment.
Key takeaways:
* **Strategy/challenge**: Identify clients likely to miss appointments ("no-shows") and take action to prevent that from happening.
* **Business driver**: Grow revenue, increase customer LTV, and increase customer satisfaction.
* **Model solution**: Rank-order patients and build an outreach call list. Using the list can minimize revenue loss by increasing attendance (thereby also improving patient outcomes) and by identifying potential overbooking opportunities to prevent downtime.
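The rank-ordering step above can be sketched in a few lines. This is a minimal illustration, not DataRobot output; the patient IDs and no-show probabilities below are hypothetical stand-ins for model predictions.

```python
# Sketch: turn model scores into a ranked outreach call list.
# Patient IDs and no-show probabilities are hypothetical.
scored_patients = [
    {"patient_id": "P-001", "no_show_probability": 0.12},
    {"patient_id": "P-002", "no_show_probability": 0.81},
    {"patient_id": "P-003", "no_show_probability": 0.47},
    {"patient_id": "P-004", "no_show_probability": 0.66},
]

def build_call_list(patients, capacity):
    """Rank patients by no-show risk; keep only as many as staff can call."""
    ranked = sorted(patients, key=lambda p: p["no_show_probability"], reverse=True)
    return [p["patient_id"] for p in ranked[:capacity]]

print(build_call_list(scored_patients, capacity=2))  # highest-risk patients first
```

Staff then work the list from the top down, reserving high-touch outreach for the patients most likely to miss.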
## Using this notebook {: #using-this-notebook }
Topic | Description
:- | :-
**Use Case Type** | Health Care/No-show forecasting
**Skill set** | Business Analyst
**Desired Outcomes**| <ul><li>Prevent no-shows</li><li>Optimally target responses</li><li>Reduce costs from missed appointments/add revenue from booking more appointments</li></ul>
**Metrics / KPIs** | <ul><li>Current no-show rate is roughly 5% of all appointments.</li><li>Cost of missed visit on average is $150 per appointment.</li></ul>
**Sample Datasets** | This use case uses the following datasets:<ul><li><a href="no_show.csv">no_show</a> (base dataset of patient records)</li> <li><a href="clinics.csv">clinics</a> (latitude and longitude of identified clinics)</li> <li><a href="planning_neighborhoods.csv">planning_neighborhoods</a> (neighborhoods of San Francisco with WKT geodata polygons)</li> <li><a href="no_show_historical.csv">no_show_historical</a> (historical no-show rate by patient)</li></ul>
## Solution value {: #solution-value }
The purpose of this use case is to build a model that enables practice management staff to predict in advance which patients are likely to miss appointments. Using historical data to uncover patterns related to no-shows, the model not only identifies those more likely to no-show, but its visualizations also help staff understand the top reasons _why_. These predictions, and their explanations, help staff understand how various factors, such as a patient's distance from a clinic and the days they needed to wait for their appointments, influence the risk of no-show. Based on these predictions and insights, outpatient staff members can focus outreach on patients with the highest risk of missing and subsequently offer alternatives, such as rescheduling appointments or providing transportation.
The primary issues, and corresponding opportunities, that this use case addresses include:
ISSUE | OPPORTUNITY
:- | :-
Patient outcome | Ensuring attendance plays a critical role in patient health, since patients may suffer if they do not get required care.
Revenue loss | A degree of certainty about an open booking slot allows for preemptive filling by:<ul><li>Standard over-booking.</li><li>Contacting an alternative patient (using a "propensity for" model).</li></ul>
Staffing inefficiency | Correct staffing levels improve both patient and employee satisfaction.
## Work with data {: #work-with-data }
The primary dataset for this use case represents patient visits. Supplemental datasets allow aggregating features for more targeted responses.
### Shape {: #shape }
The dataset granularity is one row per visit. For best results, data should cover two years of historical appointments to provide a comprehensive sample of data, accounting for seasonality and other important factors like, most recently, the impacts of COVID. Sample by patient ID, not appointment ID, so that all appointments for a particular patient (within the time window) are represented.
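Sampling by patient ID can be sketched as a patient-level split: pick patients, not appointments, so every visit for a given patient lands in the same partition. The appointment rows below are hypothetical.

```python
# Sketch: split appointments at the patient level, not the appointment level,
# so all appointments for a particular patient stay together.
import random

appointments = [
    {"appointment_id": 1, "patient_id": "A"},
    {"appointment_id": 2, "patient_id": "A"},
    {"appointment_id": 3, "patient_id": "B"},
    {"appointment_id": 4, "patient_id": "C"},
    {"appointment_id": 5, "patient_id": "C"},
]

def patient_level_split(rows, holdout_fraction=0.4, seed=42):
    patients = sorted({r["patient_id"] for r in rows})
    rng = random.Random(seed)
    rng.shuffle(patients)
    n_holdout = int(len(patients) * holdout_fraction)
    holdout_patients = set(patients[:n_holdout])
    train = [r for r in rows if r["patient_id"] not in holdout_patients]
    holdout = [r for r in rows if r["patient_id"] in holdout_patients]
    return train, holdout

train, holdout = patient_level_split(appointments)
# No patient appears in both partitions:
assert not {r["patient_id"] for r in train} & {r["patient_id"] for r in holdout}
```

Splitting at the appointment level instead would leak a patient's behavior across partitions and inflate validation accuracy.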
### Features {: #features }
To apply this use case, your dataset should contain, minimally, the following features:
- Patient ID
- Binary classification target that represents attendance (`show/no-show`, `0/1`, `True/False`, etc.)
- Date/time of the scheduled appointment
- Date the appointment was made
- Number of days between scheduling and appointment
Other helpful features to include are:
- Distance between the patient and the clinic they are visiting
- Historical no-show history for the patient
- Reason for visit
- Scheduled clinic
- Scheduled doctor
- Patient age
- Patient gender
- Other patient descriptors (hypertension, diabetes, alcoholism, etc.)
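The "number of days between scheduling and appointment" feature in the list above can be derived from the two dates the dataset already requires. A minimal sketch with hypothetical dates:

```python
# Sketch: derive the scheduling lead-time feature from the date the
# appointment was made and the date of the scheduled appointment.
from datetime import date

def lead_time_days(date_scheduled: date, date_of_appointment: date) -> int:
    """Days between when the appointment was booked and when it occurs."""
    return (date_of_appointment - date_scheduled).days

print(lead_time_days(date(2023, 3, 1), date(2023, 3, 15)))  # 14
```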
## Demo {:#demo}
See the notebook [here](no_show.ipynb).
|
index
|
---
title: Predict steel plate defects
description: Help manufacturers significantly improve the efficiency and effectiveness of identifying defects of all kinds, including those for steel sheets.
---
# Predict steel plate defects {: #predict-steel-plate-defects }
Steel plates, whether used for general construction, industrial applications, or highly critical applications, must be of a structural quality to ensure safety and provide good formability and high strength. Every steel plate produced has some level of defects and faults. Normally, plates that don’t pass muster are scrapped, but not until the end of an expensive testing process. The time, money, and expertise spent on unusable product lowers the margins on each plate and adds complications to the manufacturing process.
## Business problem {: #business-problem }
Traditionally, each plate (or a representative sample) runs through a gauntlet of tests designed to reveal faults. The goal in terms of efficiency and cost is to accurately identify defective steel plates as early in the manufacturing process as possible, saving time and expense. Predictive modeling leverages data from that process, allowing you to target tests and identify faults in steel plates as early as possible. With good, accurate models, a plant can scrap bad plates earlier for higher efficiency.
The challenge, however, doesn’t end there. Plants make constant adjustments and improvements to their processes. If the predictive models aren’t updated with new data, they will quickly become outdated and suffer from a degradation in accuracy. It’s necessary to set up retraining processes that automatically identify the best model available based on new information.
Consider Michelle, who is responsible for maintaining the quality steel plates her company produces. She knows the ins and outs of the production process, like the temperature-to-pressure ranges needed to ensure pristine results. Automated machine learning offers a new way to build upon her expertise and leverage the data her team collects. Instead of worrying about hiring a data scientist to support her team, she and the other engineers can take their data and build their own models. Those models can then predict which steel plates will have the highest likelihood of faults and also identify the reasons that may be driving those defective results.
## Use case data {: #use-case-data }
This notebook uses a [dataset](http://archive.ics.uci.edu/ml/datasets/steel+plates+faults) (UCI Machine Learning Repository: Steel Plates Faults dataset) provided by Semeion, Research Center of Sciences of Communication, Via Sersale 117, 00128, Rome, Italy.
Before proceeding, review the [API quickstart guide](https://docs.datarobot.com/en/docs/api/api-quickstart/index.html) to ensure that your client and API credentials have been properly configured.
* [Download training data](https://s3.amazonaws.com/datarobot-use-case-datasets/steel_plates_fault_training.csv)
* [Download prediction data](https://s3.amazonaws.com/datarobot-use-case-datasets/steel_plates_fault_testing.csv)
|
index
|
---
title: Predict the likelihood of a loan default
description: AI models for predicting the likelihood of a loan default can be deployed within the review process to score and rank all new flagged cases.
---
# Predict the likelihood of a loan default {: #predict-the-likelihood-of-a-loan-default }
This page outlines the use case to reduce defaults and minimize risk by predicting the likelihood that a borrower will not repay their loan. This use case is captured in:
* A [Jupyter notebook](loan-default-nb.ipynb) that you can download and execute.
* A UI-based [business accelerator](loan-default).
{% include 'includes/loan-defaults-include.md' %}
### Demo {:#demo}
See the notebook [here](loan-default-nb.ipynb) or the UI-based accelerator [here](loan-default).
|
index
|
---
title: Anti-Money Laundering (AML) Alert Scoring
description: Build a model that uses historical data, including customer and transactional information, to identify which alerts resulted in a Suspicious Activity Report (SAR).
---
# Anti-Money Laundering (AML) Alert Scoring {: #anti-money-laundering-aml-alert-scoring }
In this use case you will build a model that uses historical data, including customer and transactional information, to identify which alerts resulted in a Suspicious Activity Report (SAR). The model can then be used to assign a suspicious activity score to future alerts and improve the efficiency of an AML compliance program using rank ordering by score.
Download the sample training dataset [here](https://s3.amazonaws.com/datarobot-use-case-datasets/DR_Demo_AML_Alert_train.csv).
[Click here](anti_money_laundering.ipynb) to jump directly to the notebook. Otherwise, the following several paragraphs describe the business justification and problem framing for this use case.
{% include 'includes/aml-1-include.md' %}
{% include 'includes/aml-2-include.md' %}
{% include 'includes/aml-3-include.md' %}
{% include 'includes/aml-4-include.md' %}
## Demo {:#demo}
See the notebook [here](anti_money_laundering.ipynb).
|
index
|
---
title: Predict late shipments
description: This page outlines a use case to predict whether a shipment will be late or if there will be a shortage of parts.
---
# Predict late shipments {: #predict-late-shipments }
This page outlines a use case to predict whether a shipment will be late or if there will be a shortage of parts in the shipment. This use case is captured in a [notebook](pred-ship.ipynb) that you can download and execute locally.
## Business problem {: #business-problem }
A critical component of any supply-chain network is to prevent parts shortages, especially when they occur at the last minute. Parts shortages not only lead to underutilized machines and transportation, but also cause a domino effect of late deliveries through the entire network. In addition, the discrepancies between the forecasted and actual number of parts that arrive on time prevent supply-chain managers from optimizing their materials plans.
Parts shortages are often caused by delays in their shipment. To mitigate the impact delays will have on their supply chain, manufacturers adopt approaches such as holding excess inventory, optimizing product designs for more standardization, and moving away from single-sourcing strategies. However, most of these approaches add up to unnecessary costs for parts, storage, and logistics.
In many cases, late shipments persist until supply-chain managers can evaluate root cause and then implement short term and long term adjustments that prevent them from occurring in the future. Unfortunately, supply-chain managers have been unable to efficiently analyze historical data available in MRP systems because of the time and resources required.
## Intelligent solution {: #intelligent-solution }
AI gives supply-chain managers a way to finally leverage the historical transaction data already captured in their MRP and ERP systems. By learning the patterns behind past late deliveries, a model can score each new order for its risk of arriving late or short, at a level of granularity that rule-based heuristics cannot match.
Once the risk is calculated, a strategy can be implemented to use this information for interventions. If you can predict that a shipment is likely to be late, you can take steps such as expediting the order, contacting the vendor in advance, or adjusting materials plans before the shortage cascades through the network.
## Value estimation {: #value-estimation }
**How do you measure return on investment (ROI) for your use case?**
The ROI for implementing this solution can be estimated by considering the following factors:
1. Starting with the manufacturing company and production line stoppage, the cycle time of the production process can be used to understand how much of the production loss relates to part shortages. For example, if the cycle time (time taken to complete one part) is 60 seconds and each day 15 minutes of production are lost to part shortages, then total production loss is equivalent to 15 products, which translates to a loss in profit of 15 products in a day. A similar calculation can be used to estimate annual loss due to part shortage.
2. For a logistics provider, predicting part shortages early can increase savings in terms of reduced inventory. This can be roughly measured by capturing the difference in maintaining parts' stock before and after implementation of the AI solution. The difference in stock when multiplied with holding and inventory cost per unit gives the overall ROI. Furthermore, in cases when the demand for parts is left unfulfilled (because of part shortages), the opportunity cost related to the unsatisfied demand would directly result in loss of respective business opportunity.
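The production-loss estimate in item 1 can be made concrete with a small helper. The profit-per-unit and working-days figures below are illustrative assumptions, not values from the use case.

```python
# Worked version of the production-loss estimate above:
# a 60-second cycle time and 15 minutes of daily stoppage -> 15 lost units/day.
def daily_production_loss(cycle_time_seconds, minutes_lost_per_day):
    """Units of production lost per day to part shortages."""
    return (minutes_lost_per_day * 60) // cycle_time_seconds

def annual_profit_loss(cycle_time_seconds, minutes_lost_per_day,
                       profit_per_unit, working_days=250):
    # profit_per_unit and working_days are illustrative assumptions
    units = daily_production_loss(cycle_time_seconds, minutes_lost_per_day)
    return units * profit_per_unit * working_days

print(daily_production_loss(60, 15))        # 15 units per day
print(annual_profit_loss(60, 15, 40.0))     # 150000.0 at an assumed $40 profit/unit
```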
## Tech implementation {: #tech-implementation }
### About the data {: #about-the-data }
For illustrative purposes, DataRobot uses a sample dataset provided by the President’s Emergency plan for AIDS relief (PEPFAR), which is publicly available on [Kaggle](https://www.kaggle.com/divyeshardeshana/supply-chain-shipment-pricing-data?select=SCMS_Delivery_History_Dataset.csv). This dataset provides supply chain health commodity shipment and pricing data. Specifically, the dataset identifies Antiretroviral (ARV) and HIV lab shipments to supported countries. In addition, the dataset provides the commodity pricing and associated supply chain expenses necessary to move the commodities to other countries for use. DataRobot uses this dataset to represent how a manufacturing or logistics company can leverage AI models to improve their decision-making.
### Problem framing {: #problem-framing }
The **target variable** for this use case is whether or not the shipment will be delayed (Binary; True or False, 1 or 0, etc.). The target (`Late_delivery`) makes this use case a **binary classification** problem. The distribution of the target variable is imbalanced, with 11.4% being 1 (late delivery) and 88.6% being 0 (on time delivery). See [here](https://www.datarobot.com/blog/how-to-tackle-imbalanced-data-with-datarobot/) for more information about imbalanced data in machine learning.
### Sample feature list {: #sample-feature-list }
**Feature Name** | **Data Type** | **Description** | **Data Source** | **Example**
--- | --- | --- | --- | ---
Supplier name | Categorical | Name of the vendor who is shipping the delivery. | Purchase order | Ranbaxy, Sun Pharma, etc. |
Part description | Text| The details of the part or item that is being shipped. | Purchase order | 30mg HIV test kit, 600mg Lamivudine capsules |
Order quantity | Numeric | The amount of item that was ordered. | Purchase order | 1000, 300, etc. |
Line item value | Numeric | The unit price of the line item ordered. | Purchase order | 0.39, 1.33 |
Scheduled delivery date | Date | The date at which the order is scheduled to be delivered. | Purchase order | 2-Jun-06 |
Delivery recorded date | Date | The date at which the order was eventually delivered. | ERP system | 2-Dec-06 |
Manufacturing site | Categorical | The site of the vendor where the manufacturing was done since the same vendor can ship parts from different sites. | Invoice | Sun Pharma, India
Product Group | Categorical | The category of the product that is ordered. | Purchase order | HRDT, ARV
Mode of delivery | Categorical | The mode of transport for part delivery. | Invoice | Air, Truck
Late Delivery | Target (Binary) | Whether the delivery was late or on-time. | ERP System, Purchase Order | 0 or 1
### Data preparation {: #data-preparation }
The dataset contains historical information on procurement transactions. Each row in the dataset is an individual order that is placed and whose delivery needs to be predicted. Every order has a scheduled delivery date and an actual delivery date, and the difference between these was used to define the target variable (`Late_delivery`). If the delivery date surpassed the scheduled date, the target variable had a value of 1; otherwise, 0. Overall, the dataset contains about 10,320 rows and 26 features, including the target variable.
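The target derivation described above can be sketched directly from the two date columns, using the `d-Mon-yy` format shown in the sample feature list:

```python
# Sketch: derive the Late_delivery target from the scheduled and
# recorded delivery dates (format matches the feature table, e.g. "2-Jun-06").
from datetime import datetime

def late_delivery(scheduled: str, delivered: str) -> int:
    fmt = "%d-%b-%y"
    return 1 if datetime.strptime(delivered, fmt) > datetime.strptime(scheduled, fmt) else 0

print(late_delivery("2-Jun-06", "2-Dec-06"))  # 1: delivered after schedule
print(late_delivery("2-Jun-06", "1-Jun-06"))  # 0: on time
```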
### Model training {: #model-training }
DataRobot Automated Machine Learning (AutoML) automates many parts of the modeling pipeline. Instead of hand-coding and manually testing dozens of models to find the one that best fits your needs, DataRobot automatically runs dozens of models and finds the most accurate one for you, all in a matter of minutes. In addition to training the models, DataRobot automates other steps in the modeling process, such as processing and partitioning the dataset.
Although this walkthrough jumps straight to interpreting the model results, you can take a look [here](gs-dr-fundamentals) to see how DataRobot works from start to finish, and to understand the data science methodologies embedded in its automation.
Note that because you are dealing with an imbalanced dataset, DataRobot automatically recommends `LogLoss` as the optimization metric for identifying the most accurate model, since it is an error metric that heavily penalizes confident wrong predictions.
For this dataset, DataRobot found the most accurate model to be `Extreme Gradient Boosting Tree Classifier` with unsupervised learning features using the open source XGBoost library.
### Interpret results {: #interpret-results }
#### Feature Impact {: #feature-impact }
To give transparency on how the model works, DataRobot provides both global and local levels of model explanations. In broad terms, the model can be understood by looking at the Feature Impact graph, which reveals the association between each feature and the model target. The technique adopted by DataRobot to build this plot is called *Permutation Importance*.
As you can see, the model identified `Pack Price`, `Country`, `Vendor`, `Vendor INCO Term`, and `Line item Insurance` as some of the most critical factors affecting delays in the parts shipments.

#### Prediction Explanations {: #prediction-explanation }
Moving to the local view of explainability, DataRobot also provides **Prediction Explanations** that enable you to understand the top 10 key drivers for each prediction generated. This offers you the granularity you need to tailor your actions to the unique characteristics behind each part shortage.
For example, if a particular country is a top reason for a shipment delay, such as Nigeria or South Africa, you can take actions by reaching out to vendors in these countries and closely monitoring the shipment delivery across these routes.
Similarly, if there are certain vendors that are amongst the top reasons for delays, you can reach out to these vendors upfront and take corrective actions to avoid any delayed shipments that would affect the supply-chain network. These insights help businesses make data-driven decisions to improve the supply chain process by incorporating new rules or alternative procurement sources.

#### Word Cloud {: #word-cloud }
For text variables, such as `Part description` (included in the dataset), you can look at **Word Clouds** to discover the words or phrases that are highly associated with delayed shipments. Text features are generally the most challenging and time-consuming to build models for, but with DataRobot, each individual text column is automatically fitted as an individual classifier and is directly preprocessed with NLP techniques (TF-IDF, n-grams, etc.). In this case, you can see that the items described as `nevirapine 10 mg` are more likely to get delayed in comparison to other items.

### Evaluate accuracy {: #evaluate-accuracy }
To evaluate the performance of the model, DataRobot by default ran five-fold cross-validation, and the resulting AUC score (for the ROC curve) was around 0.82. Since the AUC score on the holdout set (unseen data) was also around 0.82, you can be reassured that the model generalizes well and is not overfitting. AUC is the right evaluation metric here because it measures how well the model rank-orders the output (i.e., the probability of delayed shipment) rather than comparing raw predicted values. The Lift Chart below shows how the predicted values (blue line) compare to actual values (red line) when the data is sorted by predicted values. The model slightly under-predicts for the orders that are most likely to be delayed, but overall it performs well. Furthermore, depending on the problem being solved, you can review the confusion matrix for the selected model and, if required, adjust the prediction threshold to optimize for precision and recall.

## Business implementation {: #business-implementation }
### Decision environment {: #decision-environment }
After finding the right model that best learns patterns in your data, DataRobot makes it easy to deploy the model into your desired decision environment. _Decision environments_ are the ways in which the predictions generated by the model will be consumed by the appropriate stakeholders in your organization, and how these stakeholders will make decisions using the predictions to impact the overall process.
**Decision maturity**
Automation | **Augmentation** | Blend
The predictions from this use case can **augment** the decisions of supply-chain managers as they foresee upcoming delays in logistics. The model acts as an intelligent assistant that, combined with the managers' own judgment, helps improve the entire supply-chain network.
### Model deployment {: #model-deployment }
The model can be deployed using the DataRobot Prediction API. A REST API endpoint is used to bounce back predictions in near real-time when new scoring data from new orders are received.
Once the model has been deployed (in whatever way the organization decides), the predictions can be consumed in several ways. For example, a front-end application that acts as the supply chain’s reporting tool can be used to deliver new scoring data as an input to the model, which then bounces back predictions and Prediction Explanations in real-time.
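The front-end integration described above boils down to POSTing new scoring rows as JSON to the deployment's endpoint. The sketch below only builds the request; the endpoint URL, authorization scheme, and header names are placeholders, and the real request shape should come from the integration snippet generated for your actual deployment.

```python
# Sketch: package new orders as a JSON scoring request for a REST endpoint.
# The URL, token, and auth header below are placeholders, not real values.
import json
import urllib.request

def build_scoring_request(endpoint, api_token, orders):
    payload = json.dumps(orders).encode("utf-8")
    return urllib.request.Request(
        endpoint,
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_token}",  # placeholder auth scheme
        },
        method="POST",
    )

orders = [{"Supplier name": "Ranbaxy", "Order quantity": 1000, "Mode of delivery": "Air"}]
request = build_scoring_request("https://example.com/predictions", "YOUR_API_TOKEN", orders)
print(request.get_method())  # POST
```

Sending the request (e.g., with `urllib.request.urlopen`) would return predictions and Prediction Explanations for each order in near real time.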
### Decision stakeholders {: #decision-stakeholders }
The predictions and Prediction Explanations can be used by supply chain managers or logistic analysts to help them understand the critical factors or bottlenecks in the supply chain.
**Decision Executors**
Decision executors are the supply-chain managers and procurement teams who are empowered with the information they need to ensure that the supply-chain network is free from bottlenecks. These personnel have strong relationships with vendors and the ability to take corrective action using the model’s predictions.
**Decision Managers**
Decision managers are the executive stakeholders, such as the Head of Vendor Development, who manage large scale partnerships with key vendors. Based on the overall results, these stakeholders can perform quarterly reviews of the health of their vendor relationships to make strategic decisions on long-term investments and business partnerships.
**Decision Authors**
Decision authors are the business analysts or data scientists who would build this decision environment. These analysts could be the engineers/analysts from the supply chain, engineering, or vendor development teams in the organization who usually work in collaboration with the supply-chain managers and their teams.
### Decision process {: #decision-process }
The decisions that the managers and executive stakeholders take based on the predictions and Prediction Explanations for identifying potential bottlenecks include reaching out and collaborating with appropriate vendor teams in the supply-chain network based on data-driven insights. The decisions could be both long- and short-term based on the severity of the impact of shortages on the business.
### Model monitoring {: #model-monitoring }
One of the most critical components in implementing AI is having the ability to track the performance of the model for data drift and accuracy. With DataRobot MLOps, you can deploy, monitor, and manage all models across the organization through a centralized platform. Tracking model health is very important for proper model lifecycle management, similar to product lifecycle management.
### Implementation risks {: #implementation-risks }
One of the major risks in implementing this solution in the real world is adoption at the ground level. Having strong and transparent relationships with vendors is also critical in taking corrective action. The risk is that vendors may not be ready to adopt a data-driven strategy and trust the model results.
### Demo {:#demo}
See the notebook [here](pred-ship.ipynb).
|
index
|
---
title: Reduce 30-Day readmissions rate
description: This page outlines a use case to reduce the 30-day readmission rate at a hospital. This use case is captured in a Jupyter notebook that you can download and execute.
---
# Reduce 30-Day readmissions rate {: #reduce-30-day-readmissions-rate }
This page outlines a use case to reduce the 30-day readmission rate at a hospital. This use case is captured in a [Jupyter notebook](readmission.ipynb) that you can download and execute.
{% include 'includes/hospital-readmit-include.md' %}
### Demo {:#demo}
See the notebook version of this accelerator [here](readmission.ipynb).
|
index
|
---
title: Triage insurance claims
description: Evaluate the severity of an insurance claim in order to triage it effectively.
---
# Triage insurance claims {: #triage-insurance-claims }
This page outlines a use case that assesses claim complexity and severity as early as possible to optimize claim routing, ensure the appropriate level of attention, and improve claimant communications. This use case is captured in:
* A [Jupyter notebook](claims.ipynb) that you can download and execute.
* A UI-based [business accelerator](insurance-claims).
{% include 'includes/triage-insurance-claims-include.md' %}
## Demo {:#demo}
See the notebook outlining this use case [here](claims.ipynb).
|
index
|
---
title: Price elasticity of demand modeling
description: Understand the impact that changes in price will have on consumer demand for a given product.
---
# Price elasticity of demand modeling {: #price-elasticity-of-demand-modeling }
The price elasticity of demand is a measurement for how demand for a product is affected by changes in its price, and is a crucial consideration for organizations that make pricing decisions. Generally, there’s an inverse relationship between price and demand; as price for a product or category increases, demand or the number of products sold tends to decrease. However, not all products are equally sensitive to changes in price. Factors such as product differentiation, competition, and promotional activities all play pivotal roles in the relationship between price and demand for individual products or categories. To maximize profits, companies must use price elasticities to determine the optimal balance between price and demand, as well as monitor these elasticities regularly to account for changes in the overall market environment.
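The measurement itself can be written down compactly. A common formulation is the arc (midpoint) elasticity between two observed price/quantity points; the figures below are illustrative.

```python
# Arc (midpoint) formula for price elasticity of demand:
# percentage change in quantity divided by percentage change in price,
# each measured against the midpoint of the two observations.
def arc_elasticity(p1, q1, p2, q2):
    pct_change_q = (q2 - q1) / ((q1 + q2) / 2)
    pct_change_p = (p2 - p1) / ((p1 + p2) / 2)
    return pct_change_q / pct_change_p

# Illustrative: price rises from $2.00 to $2.50, daily units fall from 100 to 80.
e = arc_elasticity(2.00, 100, 2.50, 80)
print(round(e, 2))  # -1.0: demand here is roughly unit-elastic
```

Values below -1 indicate price-sensitive (elastic) demand, where a price increase reduces total revenue; values between -1 and 0 indicate inelastic demand.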
[The included notebook](elasticity.ipynb) helps you to understand the impact that changes in price will have on consumer demand for a given product. Business analysts that measure price elasticity and business users that require elasticity as an input to make pricing decisions will benefit from this notebook.
Following this workflow will allow you to identify relationships between price and demand, maximize revenue by properly pricing products, monitor price elasticities for changes in price and demand, and reduce manual processes used to obtain and update price elasticities.
### Solution value {: #solution-value }
Traditional approaches to calculating price elasticity leave much to be desired:
* They are often managed by third-party analytics providers, obscuring methods and limiting experimentation.
* Category expertise is missing in selecting confounding variables.
* Single coefficients do not fully capture a price’s relationship to demand and require better model explainability.
* Models are typically limited to linear methods, limiting their quality and accuracy.
* The traditional approach is a single point-in-time estimate, but with high-quality model monitoring and retraining this can be a living and dynamic analysis.
Using DataRobot for price elasticity modeling provides a much more flexible and accurate way to make pricing decisions.
The primary issues that this use case addresses include:
* Revenue loss due to non-optimally priced items.
* Increased cadence for analyzing pricing with greater user control and understanding.
* Decreased reliance on expensive third-party analytics vendors.
### Data overview {: #data-overview }
Each row in this use case's dataset represents a single product, on a single day, in a single market, with sales and price information. The same product on the same day in a different market is represented in a separate row. Additionally, confounding variables related to active promotions, weather, and macroeconomic indicators are included to help isolate the relationship between price and sales. These should be reduced or augmented based on your category or product knowledge.
### Feature overview {: #feature-overview }
* `Date`: Calendar date, at the daily level
* `SKUName`: Unique identifier for each product
* `Sales`: Total dollar sales for that product on that date
* `PriceBaseline`: Price of the product without discounting
* `PriceActual`: Current price of the product with discounting
* `PctOnSale`: Percentage difference between `PriceBaseline` and `PriceActual`
* `HotDay`: Binary indicator for whether the temperature was above some threshold
* `sunnDay`: Binary indicator for whether the day was sunny for a given geography
* `EconChangeGDP`: Percentage change in US GDP, reported quarterly (blank on unreported dates)
* `EconJobsChange`: Percentage change in US unemployment insurance claims, available weekly (blank on unreported dates)
* `AnnualizedCPI`: Federal Reserve measure of national price inflation, available monthly
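As a quick sanity check on the pricing fields, `PctOnSale` can be derived directly from the two price columns. A minimal sketch (function name and sample values are illustrative, not from the dataset):

```python
def pct_on_sale(price_baseline: float, price_actual: float) -> float:
    """Percentage difference between the baseline and the actual (discounted) price."""
    return round(100 * (price_baseline - price_actual) / price_baseline, 2)

# A product listed at $3.00 currently selling for $2.25 is 25% off
print(pct_on_sale(3.00, 2.25))  # 25.0
```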
### Prepare data {: #prepare-data }
Use DataRobot’s EDA stages to proactively identify common impediments to model performance like outliers, missing values, and target leakage.
Build feature lists to approach modeling with the appropriate variables. For this project, DataRobot uses the provided dates and creates variables for the `Day of the Week`, `Day of the Month`, the `Month`, and the `Year`.

This use case removes `Year` and keeps all other variables. This use case also creates a feature list of variables that should always have a negative relationship with sales ([monotonically decreasing](monotonic)) and uses its relationship to inform model training. In this case, the feature list enforces the idea that sales should always decrease as price increases.

### Model insights {: #modeling-and-insights }
When modeling completes, identify the top performing model in the Leaderboard and use the **Understand** tab in the UI to learn more about the model’s behavior.
In the example below, the **Feature Impact** chart shows that the product, its price, and the associated marketing efforts have the highest feature importance. Also, two of the included macroeconomic indicators have little to no impact on the model. This is a case where experimentation by removing these two variables may improve model performance.

For this use case, the partial dependence plot shown in [**Feature Effects**](feature-effects#partial-dependence-calculations) may be the most important model insight. In this case, the plot shows the impact of price increases on sales at all price values. Rather than considering a single coefficient, this approach acknowledges that not all units of price increase have the same impact on sales. For example, in this graph a price increase from $2.10 to $3 has almost no effect on sales, but increasing the price beyond $3 has a significant negative impact.

[Prediction Explanations](pred-explain/index) (XEMP in this example) are extremely useful for helping analysts answer "Why?" questions about their product or category. In the example below, you can see a product with low expected sales. The top three reasons given show that this is because the item is not on sale and has no marketing activity to promote it. This product is an ice cream, so there is likely a discounted brand with the same flavor close by, altering consumer decision making.

### Optimizing price {: #optimizing-price }
With an understanding of how prices and other factors impact sales, you can find an optimal price that maximizes revenue. To do so, you can simulate many possible pricing scenarios for the products you want to analyze. The starting data contains one row for each product on an upcoming date, providing as much confounding information as possible. The starting price is the currently planned price.

This example creates an observation for every product at 1% increments, ranging from a 25% discount to a 25% price increase.

Use the top model from the Leaderboard to make predictions against the dataset and calculate the total revenue for each simulation.

Select a single product and observe its unique relationship between price and sales. In this graph, note the intersection point between sales and revenue.


Lastly, calculate the optimum price point for each product that maximizes its revenue.

The final output shows a new optimal price point for each product under the column `PriceActual_y`. Note that Heck 97% Pork Sausages are actually being discounted too much:

### Single elasticity coefficient {: #single-elasticity-coefficient}
To generate a single coefficient for the `PriceActual` variable, use a different model from the Leaderboard. Find the initial Linear Regression blueprint and retrain it on 100% of the data using the Holdout partition.

Once the Linear Regression model finishes retraining, select it and navigate to [**Describe > Coefficients**](coefficients) to export its coefficients. Use the export button to download all coefficients and find the value associated with `PriceActual`.


### Demo {:#demo}
See the notebook outlining this use case [here](elasticity.ipynb).
---
title: No-show appointment forecasting
description: Build a model that identifies patients most likely to miss appointments, with correlating reasons. This data can then be used by staff to target outreach on those patients and additionally to understand, and perhaps address, associated issues.
---
# No-show appointment forecasting {: #no-show-appointment-forecasting }
In this use case you will build a model that identifies patients most likely to miss appointments, with correlating reasons. This data can then be used by staff to target outreach on those patients and additionally to understand, and perhaps address, associated issues.
[Click here](no_show.ipynb) to jump directly to the notebook. Otherwise, the following several paragraphs describe the business justification and problem framing for this use case.
## Background {: #background }
Canceling a doctor's appointment need not be particularly problematic—with appropriate notice to the office. Patients who miss appointments without notice (known as "no-shows"), on the other hand, cost outpatient health centers a staggering 14% of anticipated daily revenue. Missed appointments result in lower utilization rates for doctors and nurses and also contribute unnecessarily to the overhead costs required to run outpatient centers. In addition, when a patient misses an appointment, they risk poorer health outcomes due to lack of timely care.
Many outpatient centers employ solutions such as phoning patients in advance, but this high-touch investment is often not prioritized for the highest-risk no-show patients. Low-touch solutions, such as automated texts, are effective tools for mass reminders but do not offer the necessary personalization for a patient at the highest risk of missing an appointment.
Key takeaways:
* **Strategy/challenge**: Identify clients likely to miss appointments ("no-shows") and take action to prevent that from happening.
* **Business driver**: Grow revenue, increase customer LTV, increase customer satisfaction.
* **Model solution**: Rank-order patients and build an outreach call list. Using the list can minimize revenue loss by increasing attendance (and, thereby, improving patient outcomes) and identifying potential overbook opportunities to prevent downtime.
## Using this notebook {: #using-this-notebook }
Topic | Description
:- | :-
**Use Case Type** | Health Care/No-show forecasting
**Skill set** | Business Analyst
**Desired Outcomes**| <ul><li>Prevent no-shows</li><li>Optimally target responses</li><li>Reduce costs from missed appointments/add revenue from booking more appointments</li></ul>
**Metrics / KPIs** | <ul><li>Current no-show rate is roughly 5% of all appointments.</li><li>Cost of missed visit on average is $150 per appointment.</li></ul>
**Sample Datasets** | This use case uses the following datasets:<ul><li><a href="no_show.csv">no_show</a> (base dataset of patient records)</li> <li><a href="clinics.csv">clinics</a> (latitude and longitude of identified clinics)</li> <li><a href="planning_neighborhoods.csv">planning_neighborhoods</a> (neighborhoods of San Francisco with WKT geodata polygons)</li> <li><a href="no_show_historical.csv">no_show_historical</a> (historical no-show rate by patient)</li></ul>
## Solution value {: #solution-value }
The purpose of this use case is to build a model that enables practice management staff to predict in advance which patients are likely to miss appointments. Using historical data to uncover patterns related to no-shows, the model not only identifies those more likely to no-show but also, through its visualizations, helps staff understand the top reasons _why_. These predictions, and their explanations, help staff understand how various factors, such as a patient's distance from a clinic and the days they needed to wait for their appointments, influence the risk of no-show. Based on these predictions and insights, outpatient staff members can focus outreach on patients with the highest risk of missing and subsequently offer alternatives—rescheduling appointments or providing transportation.
The primary issues, and corresponding opportunities, that this use case addresses include:
ISSUE | OPPORTUNITY
:- | :-
Patient outcome | Ensuring attendance plays a critical role in patient health, since patients may suffer if they do not get required care.
Revenue loss | A degree of certainty about an open booking slot allows for preemptive filling by:<ul><li>Standard over-booking.</li><li>Contacting an alternative patient (using a "propensity for" model).</li></ul>
Staffing inefficiency | Correct staffing levels improve both patient and employee satisfaction.
## Work with data {: #work-with-data }
The primary dataset for this use case represents patient visits. Supplemental datasets allow aggregating features for more targeted responses.
### Shape {: #shape }
The dataset granularity is one row per visit. For best results, data should cover two years of historical appointments to provide a comprehensive sample of data, accounting for seasonality and other important factors like, most recently, the impacts of COVID. Sample by patient ID, not appointment ID, so that all appointments for a particular patient (within the time window) are represented.
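Sampling by patient ID rather than appointment ID keeps all of a patient's appointments together, so no patient's history is split across samples. A minimal sketch under an assumed record shape (dicts with a `patient_id` key; the function name is illustrative):

```python
import random

def sample_by_patient(appointments, frac: float = 0.8, seed: int = 42):
    """Sample patients (not individual appointments), keeping each patient's
    appointments together."""
    patients = sorted({a["patient_id"] for a in appointments})
    rng = random.Random(seed)
    chosen = set(rng.sample(patients, int(len(patients) * frac)))
    return [a for a in appointments if a["patient_id"] in chosen]

appts = [{"patient_id": p, "appt_id": i} for i, p in enumerate([1, 1, 2, 3, 3, 3, 4, 5])]
sampled = sample_by_patient(appts, frac=0.6)
# Every patient is either fully in or fully out of the sample
```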
### Features {: #features }
To apply this use case, your dataset should contain, minimally, the following features:
- Patient ID
- Binary classification target that represents attendance (`show/no-show`, `0/1`, `True/False`, etc.)
- Date/time of the scheduled appointment
- Date the appointment was made
- Number of days between scheduling and appointment
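The last feature in the list above, the lead time between scheduling and the appointment, can be derived from the two dates (a minimal sketch; the function name is illustrative):

```python
from datetime import date

def lead_time_days(scheduled_on: date, appointment_on: date) -> int:
    """Number of days between when the appointment was made and when it occurs."""
    return (appointment_on - scheduled_on).days

print(lead_time_days(date(2024, 3, 1), date(2024, 3, 15)))  # 14
```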
Other helpful features to include are:
- Distance between the patient and the clinic they are visiting
- Historical no-show history for the patient
- Reason for visit
- Scheduled clinic
- Scheduled doctor
- Patient age
- Patient gender
- Other patient descriptors (hypertension, diabetes, alcoholism, etc.)
## Demo {:#demo}
See the notebook [here](no_show.ipynb).
---
title: Predict steel plate defects
description: Help manufacturers significantly improve the efficiency and effectiveness of identifying defects of all kinds, including those for steel sheets.
---
# Predict steel plate defects {: #predict-steel-plate-defects }
Steel plates, whether used for general construction, industrial applications, or highly critical applications, must be of a structural quality to ensure safety and provide good formability and high strength. Every steel plate produced has some level of defects and faults. Normally, plates that don’t pass muster are scrapped, but not until the end of an expensive testing process. The time, money, and expertise spent on unusable product lowers the margins on each plate and adds complications to the manufacturing process.
## Business problem {: #business-problem }
Traditionally, each plate (or a representative sample) runs through a gauntlet of tests designed to reveal faults. The goal in terms of efficiency and cost is to accurately identify defective steel plates as early in the manufacturing process as possible, saving time and expense. Predictive modeling leverages data from that process, allowing you to target tests and identify faults in steel plates as early as possible. With good, accurate models, a plant can scrap bad plates earlier for higher efficiency.
The challenge, however, doesn’t end there. Plants make constant adjustments and improvements to their processes. If the predictive models aren’t updated with new data, they will quickly become outdated and suffer from a degradation in accuracy. It’s necessary to set up retraining processes that automatically identify the best model available based on new information.
Consider Michelle, who is responsible for maintaining the quality of the steel plates her company produces. She knows the ins and outs of the production process, like the temperature-to-pressure ranges needed to ensure pristine results. Automated machine learning offers a new way to build upon her expertise and leverage the data her team collects. Instead of worrying about hiring a data scientist to support her team, she and the other engineers can take their data and build their own models. Those models can then predict which steel plates will have the highest likelihood of faults and also identify the reasons that may be driving those defective results.
## Use case data {: #use-case-data }
This notebook uses a [dataset](http://archive.ics.uci.edu/ml/datasets/steel+plates+faults) (UCI Machine Learning Repository: Steel Plates Faults dataset) provided by Semeion, Research Center of Sciences of Communication, Via Sersale 117, 00128, Rome, Italy.
Before proceeding, review the [API quickstart guide](api-quickstart/index) to ensure that your client and API credentials have been properly configured.
* [Download training data](https://s3.amazonaws.com/datarobot-use-case-datasets/steel_plates_fault_training.csv)
* [Download prediction data](https://s3.amazonaws.com/datarobot-use-case-datasets/steel_plates_fault_testing.csv)
---
title: Predict the likelihood of a loan default
description: AI models for predicting the likelihood of a loan default can be deployed within the review process to score and rank all new flagged cases.
---
# Predict the likelihood of a loan default {: #predict-the-likelihood-of-a-loan-default }
This page outlines the use case to reduce defaults and minimize risk by predicting the likelihood that a borrower will not repay their loan. This use case is captured in:
* A [Jupyter notebook](loan-default-nb.ipynb) that you can download and execute.
* A UI-based [business accelerator](loan-default).
{% include 'includes/loan-defaults-include.md' %}
### Demo {:#demo}
See the notebook [here](loan-default-nb.ipynb) or the UI-based accelerator [here](loan-default).
---
title: Demand forecasting
description: Learn about an end-to-end demand forecasting use case that uses DataRobot's Python package.
---
# Large-scale demand forecasting {: #large-scale-demand-forecasting }
The notebooks listed below outline how to perform large-scale demand forecasting using DataRobot's Python package. No single model can handle extreme data diversity or forecast the complexity of human buying patterns at a detailed level. Complex demand forecasting typically requires deep statistical know-how and lengthy development projects around big data architectures. These notebooks build a model factory to automate this requirement by creating multiple projects "under the hood."
Follow the notebooks below in order to complete the demand forecasting workflow.
| Topic | Describes... |
| ------------| ----------------------------- |
| [Setup and upload data](demand-setup.ipynb) | Install and import the required libraries, connect to DataRobot, and curate the data for modeling. |
| [Cluster data](cluster-data.ipynb) | Break a dataset up into smaller datasets that group similar items together. |
| [Build models](demand-factory.ipynb) | Use segmented modeling to improve model performance and decrease time to deployment. |
| [Get model insights and create feature lists](demand-insights.ipynb) | Review insights for the top-performing model and create new feature lists. |
| [Deploy a model and make predictions](demand-pred.ipynb) | Test a model's prediction capabilities and deploy a model to a production environment to generate predictions. |
---
title: Anti-Money Laundering (AML) Alert Scoring
description: Build a model that uses historical data, including customer and transactional information, to identify which alerts resulted in a Suspicious Activity Report (SAR).
---
# Anti-Money Laundering (AML) Alert Scoring {: #anti-money-laundering-aml-alert-scoring }
In this use case you will build a model that uses historical data, including customer and transactional information, to identify which alerts resulted in a Suspicious Activity Report (SAR). The model can then be used to assign a suspicious activity score to future alerts and improve the efficiency of an AML compliance program using rank ordering by score.
Download the sample training dataset [here](https://s3.amazonaws.com/datarobot-use-case-datasets/DR_Demo_AML_Alert_train.csv).
[Click here](anti_money_laundering.ipynb) to jump directly to the notebook. Otherwise, the following several paragraphs describe the business justification and problem framing for this use case.
## Background {: #background }
A key pillar of any AML compliance program is to monitor transactions for suspicious activity. The scope of transactions is broad, including deposits, withdrawals, fund transfers, purchases, merchant credits, and payments. Typically, monitoring starts with a rules-based system that scans customer transactions for red flags consistent with money laundering. When a transaction matches a predetermined rule, an alert is generated and the case is referred to the bank’s internal investigation team for manual review. If the investigators conclude the behavior is indicative of money laundering, then the bank will file a Suspicious Activity Report (SAR) with FinCEN.
Unfortunately, the standard transaction monitoring system described above has costly drawbacks. In particular, the rate of false-positives (cases incorrectly flagged as suspicious) generated by this rules-based system can reach 90% or more. Since the system is rules-based and rigid, it cannot dynamically learn the complex interactions and behaviors behind money laundering. The prevalence of false-positives makes investigators less efficient as they have to manually weed out cases that the rules-based system incorrectly marked as suspicious.
Compliance teams at financial institutions can have hundreds or even thousands of investigators, and the current systems prevent investigators from becoming more effective and efficient in their investigations. The cost of reviewing an alert ranges between `$30~$70`. For a bank that receives 100,000 alerts a year, this is a substantial sum; on average, penalties imposed for proven money laundering amount to `$145` million per case. A reduction in false positives could result in savings of `$600,000~$4.2 million` per year.
Key takeaways:
* **Strategy/challenge**: Help investigators focus their attention on cases that have the highest risk of money laundering while minimizing the time they spend reviewing false-positive cases.
For banks with large volumes of daily transactions, improvements in the effectiveness and efficiency of their investigations ultimately results in fewer cases of money laundering that go unnoticed. This allows banks to enhance their regulatory compliance and reduce the volume of financial crime present within their network.
* **Business driver**: Improve efficiency of AML transaction monitoring and lower operational costs.
With its ability to dynamically learn patterns in complex data, AI significantly improves accuracy in predicting which cases will result in a SAR filing. AI models for anti-money laundering can be deployed into the review process to score and rank all new cases.
* **Model solution**: Assign a suspicious activity score to each AML alert, improving the efficiency in an AML compliance program.
Any case that exceeds a predetermined threshold of risk is sent to the investigators for manual review. Meanwhile, any case that falls below the threshold can be automatically discarded or sent to a lighter review. Once AI models are deployed into production, they can be continuously retrained on new data to capture any novel behaviors of money laundering. This data will come from the feedback of investigators.
Specifically, the model will use rules that trigger an alert whenever a customer requests a refund of any amount, since small refund requests could be the money launderer's way of testing the refund mechanism or trying to establish refund requests as a normal pattern for their account.
## Using this notebook {: #using-this-notebook }
The following table summarizes aspects of this use case.
Topic | Description
:- | :-
**Use case type** | Anti-money laundering (false positive reduction)
**Target audience** | Data Scientist, Financial Crime Compliance Team
**Desired outcomes**| <ul><li>Identify which customer data and transaction activity are indicative of a high risk for potential money laundering.</li><li>Detect anomalous changes in behavior or nascent money laundering patterns before they spread.</li><li>Reduce the false positive rate for the cases selected for manual review.</li></ul>
**Metrics/KPIs** | <ul><li>Annual alert volume</li><li>Cost per alert</li><li>False positive reduction rate</li></ul>
**Sample dataset** | https://s3.amazonaws.com/datarobot-use-case-datasets/DR_Demo_AML_Alert_train.csv
## Solution value {: #solution-value }
This use case builds a model that dynamically learns patterns in complex data and reduces false positive alerts.
Then, financial crime compliance teams can prioritize the alerts that legitimately require manual review and dedicate more resources to those cases most likely to be suspicious. By learning from historical data to uncover patterns related to money laundering, AI also helps identify which customer data and transaction activity are indicative of a high risk for potential money laundering.
The primary issues and corresponding opportunities that this use case addresses include:
Issue | Opportunity
:- | :-
Potential regulatory fine | Mitigate the risk of missing suspicious activities due to lack of competency with alert investigations. Use alert scores to more effectively assign alerts—high risk alerts to more experienced investigators, low risk alerts to more junior team members.
Investigation productivity | Increase investigators' productivity by making the review process more effective and efficient, and by providing a more holistic view when assessing cases.|
### Calculating ROI {: #calculating-roi }
ROI can be calculated as follows:
`Avoided potential regulatory fine + (Annual alert volume * False positive reduction rate * Cost per alert)`
A high-level measurement of the ROI equation involves two parts.
1. The total amount of `avoided potential regulatory fines` will vary depending on the nature of the bank and must be estimated on a case-by-case basis.
2. The second part of the equation is where AI can have a tangible impact on improving investigation productivity and reducing operational costs. Consider this example:
* A bank generates 100,000 AML alerts every year.
* DataRobot achieves a 70% false positive reduction rate without losing any historical suspicious activities.
* The average cost per alert is `$30~$70`.
Result: The annual ROI of implementing the solution will be `100,000 * 70% * ($30~$70) = $2.1MM~$4.9MM`.
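The second part of the ROI equation can be computed directly. A minimal sketch using the example figures above (the avoided-fine term is omitted because it must be estimated case by case; the function name is illustrative):

```python
def alert_cost_savings(annual_alerts: int, fp_reduction_rate: float, cost_per_alert: float) -> float:
    """Annual savings from alerts that no longer require manual review."""
    return annual_alerts * fp_reduction_rate * cost_per_alert

low = alert_cost_savings(100_000, 0.70, 30)   # roughly $2.1MM
high = alert_cost_savings(100_000, 0.70, 70)  # roughly $4.9MM
print(low, high)
```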
## Work with data {: #work-with-data }
The linked synthetic dataset illustrates a credit card company's AML compliance program. Specifically, the model detects the following money-laundering scenarios:
- Customer spends on the card but overpays their credit card bill and seeks a cash refund for the difference.
- Customer receives credits from a merchant without offsetting transactions and either spends the money or requests a cash refund from the bank.
The unit of analysis in this dataset is an individual alert, meaning a rule-based engine is in place to produce an alert to detect potentially suspicious activity consistent with the above scenarios.
### Problem framing {: #problem-framing }
The target variable for this use case is **whether or not the alert resulted in a SAR** after manual review by investigators, making this a binary classification problem. The unit of analysis is an individual alert—the model will be built on the alert level—and each alert will receive a score ranging from 0 to 1. The score indicates the probability of being a SAR.
The goal of applying a model to this use case is to lower the false positive rate, which means resources are not spent reviewing cases that are eventually determined to not be suspicious after an investigation.
In this use case, the False Positive Rate of the rules engine on the validation sample (1600 records) is:
Number of `SAR=0` divided by the total number of records = `1436/1600` = `90%`.
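That baseline calculation can be sketched as (the function name is illustrative):

```python
def false_positive_rate_pct(n_non_sar: int, n_total: int) -> float:
    """False positive rate of the rules engine, as a percentage."""
    return 100 * n_non_sar / n_total

print(false_positive_rate_pct(1436, 1600))  # 89.75, i.e., roughly 90%
```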
### Data preparation {: #data-preparation }
Consider the following when working with data:
* **Define the scope of analysis**: Collect alerts from a specific analytical window to start with; it’s recommended that you use 12–18 months of alerts for model building.
* **Define the target**: Depending on the investigation processes, the target definition could be flexible. In this walkthrough, alerts are classified as `Level1`, `Level2`, `Level3`, and `Level3-confirmed`. These labels indicate at which level of the investigation the alert was closed (i.e., confirmed as a SAR). To create a binary target, treat `Level3-confirmed` as SAR (denoted by 1) and the remaining levels as non-SAR alerts (denoted by 0).
* **Consolidate information from multiple data sources**: Below is a sample entity-relationship diagram indicating the relationship between the data tables used for this use case.

Some features are static information—`kyc_risk_score` and `state of residence` for example—these can be fetched directly from the reference tables.
For transaction behavior and payment history, the information will be derived from a specific time window prior to the alert generation date. This case uses 90 days as the time window to obtain the dynamic customer behavior, such as `nbrPurchases90d`, `avgTxnSize90d`, or `totalSpend90d`.
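The 90-day window aggregation can be sketched as follows. The transaction shape (a list of date/amount pairs) is a hypothetical simplification of the transaction table:

```python
from datetime import date, timedelta

def window_features(alert_date: date, transactions, days: int = 90):
    """Aggregate a customer's transactions in the `days` prior to the alert date.
    `transactions` is a list of (txn_date, amount) pairs."""
    start = alert_date - timedelta(days=days)
    in_window = [amt for d, amt in transactions if start <= d < alert_date]
    n = len(in_window)
    return {
        "nbrPurchases90d": n,
        "totalSpend90d": sum(in_window),
        "avgTxnSize90d": round(sum(in_window) / n, 2) if n else 0.0,
    }

txns = [(date(2023, 5, 1), 40.0), (date(2023, 6, 15), 60.0), (date(2022, 1, 1), 999.0)]
print(window_features(date(2023, 7, 1), txns))
```

The 2022 transaction falls outside the window and is excluded from all three features.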
Below is an example of one row in the training data after it is merged and aggregated (it is broken into multiple lines for easier visualization).

### Features and sample data {: #features-and-sample-data }
The features in the sample dataset consist of KYC (Know-Your-Customer) information, demographic information, transactional behavior, and free-form text information from the customer service representatives’ notes. To apply this use case in your organization, your dataset should contain, minimally, the following features:
- Alert ID
- Binary classification target (`SAR/no-SAR`, `1/0`, `True/False`, etc.)
- Date/time of the alert
- "Know Your Customer" score used at time of account opening
- Account tenure, in months
- Total merchant credit in the last 90 days
- Number of refund requests by the customer in the last 90 days
- Total refund amount in the last 90 days
Other helpful features to include are:
- Annual income
- Credit bureau score
- Number of credit inquiries in the past year
- Number of logins to the bank website in the last 90 days
- Indicator that the customer owns a home
- Maximum revolving line of credit
- Number of purchases in the last 90 days
- Total spend in the last 90 days
- Number of payments in the last 90 days
- Number of cash-like payments (e.g., money orders) in last 90 days
- Total payment amount in last 90 days
- Number of distinct merchants purchased at in the last 90 days
- Customer Service Representative notes and codes based on conversations with customer (cumulative)
### Implementation risks {: #implementation-risks }
When operationalizing this use case, consider the following, which may impact outcomes and require model re-evaluation:
* Change in the transactional behavior of the money launderers.
* Novel information introduced to the transaction and customer records that has not been seen by the machine learning models.
## Predict and deploy {: #predict-and-deploy }
Once you identify the model that best learns patterns in your data to predict SARs, DataRobot makes it easy to deploy the model into your alert investigation process. This is a critical step for implementing the use case, as it ensures that predictions are used in the real world to reduce false positives and improve efficiency in the investigation process. The following sections describe activities related to preparing and then deploying a model.
The following applications of the alert-prioritization score from the false positive reduction model both automate and augment the existing rule-based transaction monitoring system.
* If the FCC (Financial Crime Compliance) team is comfortable with removing the low-risk alerts (very low prioritization score) from the scope of investigation, then the binary threshold selected during the model building stage will be used as the cutoff to remove those no-risk alerts. The investigation team will only investigate alerts above the cutoff, which will still capture all the SARs based on what was learned from the historical data.
* Often regulatory agencies will consider auto-closure or auto-removal as an aggressive treatment to production alerts. If auto-closing is not the ideal way to use the model output, the alert prioritization score can still be used to triage alerts into different investigation processes, hence improving the operational efficiency.
## Deep dive: Imbalanced targets {: #deep-dive-imbalanced-targets }
In AML and Transaction Monitoring, the SAR rate is usually very low (1%–5%, depending on the detection scenarios); sometimes it could be even lower than 1% in extremely unproductive scenarios. In machine learning, such a problem is called _class imbalance_. The question becomes, how can you mitigate the risk of class imbalance and let the machine learn as much as possible from the limited known-suspicious activities?
DataRobot offers different techniques to handle class imbalance problems. Some techniques:
* Evaluate the model with <a target="_blank" rel="noopener noreferrer" href="https://docs.datarobot.com/en/docs/modeling/reference/model-detail/opt-metric.html#optimization-metrics"><b>different metrics</b></a>. For binary classification (the false positive reduction model here, for example), LogLoss is used as the default metric to rank models on the Leaderboard. Since the rule-based system is often unproductive, leading to a very low SAR rate, it's reasonable to look at a different metric, such as the SAR rate in the top 5% of alerts in the prioritization list. The objective of the model is to assign higher prioritization scores to higher-risk alerts, so it's ideal to have a higher rate of SARs in the top tier of the prioritization score. In the example shown in the image below, the SAR rate in the top 5% of prioritization scores is more than 70% (while the original SAR rate is less than 10%), which indicates that the model is very effective at ranking alerts by SAR risk.
* DataRobot also provides flexibility for modelers when tuning hyperparameters, which can also help with class imbalance. In the example below, the Random Forest Classifier is tuned by enabling `balance_boostrap` (randomly sampling an equal number of SAR and non-SAR alerts for each decision tree in the forest); you can see that the validation score of the new "Balanced Random Forest Classifier" model is slightly better than that of the parent model.

* You can also use <a target="_blank" rel="noopener noreferrer" href="https://docs.datarobot.com/en/docs/modeling/build-models/adv-opt/smart-ds.html#smart-downsampling"><b>Smart Downsampling</b></a> (from the Advanced Options tab) to intentionally downsample the majority class (i.e., non-SAR alerts) in order to build faster models with similar accuracy.
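The alternative metric described in the first bullet (the SAR rate among the top 5% of alerts ranked by prioritization score) can be sketched as follows, with hypothetical scores:

```python
def sar_rate_top_pct(scored_alerts, pct: float = 0.05) -> float:
    """SAR rate among the top `pct` of alerts ranked by prioritization score.
    `scored_alerts` is a list of (score, is_sar) pairs."""
    ranked = sorted(scored_alerts, key=lambda a: a[0], reverse=True)
    top = ranked[: max(1, int(len(ranked) * pct))]
    return sum(is_sar for _, is_sar in top) / len(top)

# Hypothetical scores: the model concentrates SARs at the top of the ranking
alerts = [(0.9, 1), (0.8, 1), (0.7, 0), (0.6, 0)] + [(0.1, 0)] * 36
print(sar_rate_top_pct(alerts))  # top 5% of 40 alerts = 2 alerts, both SARs -> 1.0
```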
## Deep Dive: Decision process {: #deep-dive-decision-process }
A review process typically consists of a deep-dive analysis by investigators. The data related to the case is made available for review so that the investigators can develop a 360-degree view of the customer, including their profile, demographic, and transaction history. Additional data from third-party data providers, and web crawling, can supplement this information to complete the picture.
For transactions that do not get auto-closed or auto-removed, the model can help the compliance team create a more effective and efficient review process by triaging their reviews. The predictions and their explanations also give investigators a more holistic view when assessing cases.
### Risk-based alert triage {: #risk-based-alert-triage }
Based on the prioritization score, the investigation team can adopt different investigation strategies. For example:
* No-risk or low-risk alerts can be reviewed on a quarterly basis, instead of monthly. The frequently alerted entities without any SAR risk can then be reviewed once every three months, which will significantly reduce the time of investigation.
* High-risk alerts with higher prioritization scores can have their investigation fast-tracked to the final stage in the alert escalation path. This will significantly reduce the effort spent on level 1 and level 2 investigation.
* Medium-risk alerts can follow the standard investigation process.
### Smart alert assignment {: #smart-alert-assignment }
For an alert investigation team that is geographically dispersed, the alert prioritization score can be used to assign alerts to different teams in a more effective manner. High-risk alerts can be assigned to the team with the most experienced investigators while low risk alerts can be handled by a less experienced team. This mitigates the risk of missing suspicious activities due to lack of competency with alert investigations.
For both approaches, the definition of high/medium/low risk can be either a set of hard thresholds (for example, High: score >= 0.5; Medium: 0.5 > score >= 0.3; Low: score < 0.3) or based on the percentile of the alert scores on a monthly basis (for example, High: above the 80th percentile; Medium: between the 50th and 80th percentiles; Low: below the 50th percentile).
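The percentile-based assignment described above can be sketched as a small helper. This is a simplified illustration (a production implementation would interpolate percentiles properly):

```python
def risk_tier_by_percentile(scores):
    """Assign High/Medium/Low tiers from monthly score percentiles
    (80th / 50th cutoffs, as in the example above)."""
    ranked = sorted(scores)

    def pct(p):
        # Nearest-rank percentile: value at position p * n in the sorted list
        return ranked[min(len(ranked) - 1, int(p * len(ranked)))]

    p50, p80 = pct(0.50), pct(0.80)
    return [
        "High" if s >= p80 else "Medium" if s >= p50 else "Low"
        for s in scores
    ]

tiers = risk_tier_by_percentile([0.05, 0.20, 0.35, 0.60, 0.90])
# → ['Low', 'Low', 'Medium', 'Medium', 'High']
```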
## Demo {: #demo }
See the notebook [here](anti_money_laundering.ipynb).
---
title: Predict late shipments
description: This page outlines a use case to predict whether a shipment will be late or if there will be a shortage of parts.
---
# Predict late shipments {: #predict-late-shipments }
This page outlines a use case to predict whether a shipment will be late or if there will be a shortage of parts in the shipment. This use case is captured in a [notebook](pred-ship.ipynb) that you can download and execute locally.
## Business problem {: #business-problem }
A critical component of any supply-chain network is to prevent parts shortages, especially when they occur at the last minute. Parts shortages not only lead to underutilized machines and transportation, but also cause a domino effect of late deliveries through the entire network. In addition, the discrepancies between the forecasted and actual number of parts that arrive on time prevent supply-chain managers from optimizing their materials plans.
Parts shortages are often caused by delays in their shipment. To mitigate the impact delays will have on their supply chain, manufacturers adopt approaches such as holding excess inventory, optimizing product designs for more standardization, and moving away from single-sourcing strategies. However, most of these approaches add up to unnecessary costs for parts, storage, and logistics.
In many cases, late shipments persist until supply-chain managers can evaluate root causes and then implement short-term and long-term adjustments that prevent them from recurring. Unfortunately, supply-chain managers have been unable to efficiently analyze the historical data available in MRP systems because of the time and resources required.
## Intelligent solution {: #intelligent-solution }
An AI-based approach lets manufacturers and logistics providers learn from the historical procurement and shipment data already captured in their MRP and ERP systems to assess, for every order, the risk that it arrives late or short. Once that risk is quantified, supply-chain managers can implement an intervention strategy. If you can predict that a shipment is likely to be delayed, you can take preemptive steps, such as expediting the order, alerting the affected production line, or sourcing the parts from an alternative vendor.
## Value estimation {: #value-estimation }
**How do you measure return on investment (ROI) for your use case?**
The ROI for implementing this solution can be estimated by considering the following factors:
1. Starting with the manufacturing company and production line stoppage, the cycle time of the production process can be used to understand how much of the production loss relates to part shortages. For example, if the cycle time (the time taken to complete one part) is 60 seconds and 15 minutes of production are lost to part shortages each day, then the daily production loss is equivalent to 15 products, which translates to the lost profit on 15 products per day. A similar calculation can be used to estimate the annual loss due to part shortages.
2. For a logistics provider, predicting part shortages early can increase savings in terms of reduced inventory. This can be roughly measured by capturing the difference in parts stock maintained before and after implementation of the AI solution. Multiplying that difference by the per-unit holding and inventory cost gives the overall ROI. Furthermore, when demand for parts is left unfulfilled because of shortages, the opportunity cost of the unsatisfied demand directly results in lost business.
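The cycle-time arithmetic in point 1 can be written out directly. The profit figure and working-day count below are assumptions for illustration:

```python
# Worked version of the cycle-time example above (illustrative numbers).
cycle_time_s = 60          # seconds to complete one part
downtime_s = 15 * 60       # 15 minutes of production lost per day
profit_per_unit = 40.0     # assumed profit per product, for illustration

units_lost_per_day = downtime_s // cycle_time_s        # 15 units
daily_profit_loss = units_lost_per_day * profit_per_unit
annual_profit_loss = daily_profit_loss * 250           # assumed working days/year
```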
## Tech implementation {: #tech-implentation }
### About the data {: #about-the-data }
For illustrative purposes, DataRobot uses a sample dataset provided by the President’s Emergency plan for AIDS relief (PEPFAR), which is publicly available on [Kaggle](https://www.kaggle.com/divyeshardeshana/supply-chain-shipment-pricing-data?select=SCMS_Delivery_History_Dataset.csv). This dataset provides supply chain health commodity shipment and pricing data. Specifically, the dataset identifies Antiretroviral (ARV) and HIV lab shipments to supported countries. In addition, the dataset provides the commodity pricing and associated supply chain expenses necessary to move the commodities to other countries for use. DataRobot uses this dataset to represent how a manufacturing or logistics company can leverage AI models to improve their decision-making.
### Problem framing {: #problem-framing }
The **target variable** for this use case is whether or not the shipment will be delayed (Binary; True or False, 1 or 0, etc.). The target (`Late_delivery`) makes this use case a **binary classification** problem. The distribution of the target variable is imbalanced, with 11.4% being 1 (late delivery) and 88.6% being 0 (on time delivery). See [here](https://www.datarobot.com/blog/how-to-tackle-imbalanced-data-with-datarobot/) for more information about imbalanced data in machine learning.
### Sample feature list {: #sample-feature-list }
**Feature Name** | **Data Type** | **Description** | **Data Source** | **Example**
--- | --- | --- | --- | ---
Supplier name | Categorical | Name of the vendor shipping the delivery. | Purchase order | Ranbaxy, Sun Pharma, etc.
Part description | Text | The details of the part or item being shipped. | Purchase order | 30mg HIV test kit, 600mg Lamivudine capsules
Order quantity | Numeric | The amount of the item that was ordered. | Purchase order | 1000, 300, etc.
Line item value | Numeric | The unit price of the line item ordered. | Purchase order | 0.39, 1.33
Scheduled delivery date | Date | The date on which the order is scheduled to be delivered. | Purchase order | 2-Jun-06
Delivery recorded date | Date | The date on which the order was eventually delivered. | ERP system | 2-Dec-06
Manufacturing site | Categorical | The vendor site where manufacturing was done, since the same vendor can ship parts from different sites. | Invoice | Sun Pharma, India
Product Group | Categorical | The category of the product that is ordered. | Purchase order | HRDT, ARV
Mode of delivery | Categorical | The mode of transport for part delivery. | Invoice | Air, Truck
Late Delivery | Target (Binary) | Whether the delivery was late or on time. | ERP system, Purchase order | 0 or 1
### Data preparation {: #data-preparation }
The dataset contains historical information on procurement transactions. Each row in the dataset is an individual order whose delivery needs to be predicted. Every order has a scheduled delivery date and an actual delivery date, and the difference between the two was used to define the target variable (`Late_delivery`): if the actual delivery date surpassed the scheduled date, the target variable has a value of 1, otherwise 0. Overall, the dataset contains about 10,320 rows and 26 features, including the target variable.
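As a sketch of this preparation step, the target can be derived from the two date columns. The `d-Mon-yy` format matches the examples in the feature list above; this is illustrative, not DataRobot's internal logic:

```python
from datetime import datetime

def late_delivery(scheduled, actual, fmt="%d-%b-%y"):
    """Derive the Late_delivery target: 1 if the actual delivery date
    falls after the scheduled date, else 0."""
    s = datetime.strptime(scheduled, fmt)
    a = datetime.strptime(actual, fmt)
    return int(a > s)

late_delivery("2-Jun-06", "2-Dec-06")  # → 1 (late)
late_delivery("2-Jun-06", "1-Jun-06")  # → 0 (on time)
```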
### Model training {: #model-training }
DataRobot Automated Machine Learning (AutoML) automates many parts of the modeling pipeline. Instead of hand-coding and manually testing dozens of models to find the one that best fits your needs, DataRobot automatically runs dozens of models and finds the most accurate one for you, all in a matter of minutes. In addition to training the models, DataRobot automates other steps in the modeling process, such as processing and partitioning the dataset.
Although this walkthrough jumps straight to interpreting the model results, you can take a look [here](gs-dr-fundamentals) to see how DataRobot works from start to finish, and to understand the data science methodologies embedded in its automation.
One thing to highlight: since you are dealing with an imbalanced dataset, DataRobot automatically recommends `LogLoss` as the optimization metric to identify the most accurate model, as it is an error metric that heavily penalizes confident but wrong predictions.
For this dataset, DataRobot found the most accurate model to be `Extreme Gradient Boosting Tree Classifier` with unsupervised learning features using the open source XGBoost library.
### Interpret results {: #interpret-results }
#### Feature Impact {: #feature-impact }
To give transparency on how the model works, DataRobot provides both global and local levels of model explanations. In broad terms, the model can be understood by looking at the Feature Impact graph, which reveals the association between each feature and the model target. The technique adopted by DataRobot to build this plot is called *Permutation Importance*.
As you can see, the model identified `Pack Price`, `Country`, `Vendor`, `Vendor INCO Term`, and `Line item Insurance` as some of the most critical factors affecting delays in the parts shipments.

#### Prediction Explanations {: #prediction-explanation }
Moving to the local view of explainability, DataRobot also provides **Prediction Explanations** that enable you to understand the top 10 key drivers for each prediction generated. This offers you the granularity you need to tailor your actions to the unique characteristics behind each part shortage.
For example, if a particular country is a top reason for a shipment delay, such as Nigeria or South Africa, you can take actions by reaching out to vendors in these countries and closely monitoring the shipment delivery across these routes.
Similarly, if there are certain vendors that are amongst the top reasons for delays, you can reach out to these vendors upfront and take corrective actions to avoid any delayed shipments that would affect the supply-chain network. These insights help businesses make data-driven decisions to improve the supply chain process by incorporating new rules or alternative procurement sources.

#### Word Cloud {: #word-cloud }
For text variables, such as `Part description` (included in the dataset), you can look at **Word Clouds** to discover the words or phrases that are highly associated with delayed shipments. Text features are generally the most challenging and time-consuming to build models for, but with DataRobot, each individual text column is automatically fitted as an individual classifier and is directly preprocessed with NLP techniques (TF-IDF, n-grams, etc.). In this case, you can see that the items described as `nevirapine 10 mg` are more likely to be delayed in comparison to other items.

### Evaluate accuracy {: #evaluate-accuracy }
To evaluate the performance of the model, DataRobot by default ran five-fold cross-validation; the resulting AUC score (from the ROC curve) was around 0.82. Since the AUC score on the holdout set (unseen data) was also around 0.82, you can be confident that the model generalizes well and is not overfitting. AUC is a useful evaluation metric here because it measures how well the model ranks the output (i.e., the probability of a delayed shipment) rather than comparing raw values. The Lift Chart below shows how the predicted values (blue line) compare to actual values (red line) when the data is sorted by predicted values. The model slightly under-predicts for the orders that are most likely to be delayed, but overall it performs well. Furthermore, depending on the problem being solved, you can review the confusion matrix for the selected model and, if required, adjust the prediction threshold to optimize for precision and recall.

## Business implementation {: #business-implementation }
### Decision environment {: #decision-environment }
After finding the right model that best learns patterns in your data, DataRobot makes it easy to deploy the model into your desired decision environment. _Decision environments_ are the ways in which the predictions generated by the model will be consumed by the appropriate stakeholders in your organization, and how these stakeholders will make decisions using the predictions to impact the overall process.
**Decision maturity**
Automation | **Augmentation** | Blend
The predictions from this use case can **augment** the decisions of supply-chain managers as they foresee upcoming delays in logistics. The model acts as an intelligent machine that, when combined with the decisions of the managers, helps improve your entire supply-chain network.
### Model deployment {: #model-deployment }
The model can be deployed using the DataRobot Prediction API. A REST API endpoint returns predictions in near real-time as scoring data from new orders is received.
Once the model has been deployed (in whatever way the organization decides), the predictions can be consumed in several ways. For example, a front-end application that serves as the supply chain's reporting tool can send new scoring data as input to the model, which then returns predictions and Prediction Explanations in real time.
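A scoring request to such a deployment might be assembled as sketched below. The URL, deployment ID, and token are placeholders, and the exact path, headers, and response shape depend on your DataRobot installation, so consult the Prediction API documentation for the specifics:

```python
import json

# Hypothetical endpoint and credentials -- substitute your own
# prediction server URL, deployment ID, and API token.
DEPLOYMENT_ID = "YOUR_DEPLOYMENT_ID"
URL = f"https://example.datarobot.com/predApi/v1.0/deployments/{DEPLOYMENT_ID}/predictions"
HEADERS = {
    "Authorization": "Bearer YOUR_API_TOKEN",
    "Content-Type": "application/json",
}

# A new order to score, using features from the sample feature list
payload = json.dumps([{
    "Supplier name": "Ranbaxy",
    "Order quantity": 1000,
    "Mode of delivery": "Air",
}])

# With the `requests` library installed, the call would look like:
# response = requests.post(URL, data=payload, headers=HEADERS)
# predictions = response.json()["data"]
```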
### Decision stakeholders {: #decision-stakeholders }
The predictions and Prediction Explanations can be used by supply chain managers or logistic analysts to help them understand the critical factors or bottlenecks in the supply chain.
**Decision Executors**
Decision executors are the supply-chain managers and procurement teams who are empowered with the information they need to ensure that the supply-chain network is free from bottlenecks. These personnel have strong relationships with vendors and the ability to take corrective action using the model’s predictions.
**Decision Managers**
Decision managers are the executive stakeholders, such as the Head of Vendor Development, who manage large scale partnerships with key vendors. Based on the overall results, these stakeholders can perform quarterly reviews of the health of their vendor relationships to make strategic decisions on long-term investments and business partnerships.
**Decision Authors**
Decision authors are the business analysts or data scientists who would build this decision environment. These analysts could be the engineers/analysts from the supply chain, engineering, or vendor development teams in the organization who usually work in collaboration with the supply-chain managers and their teams.
### Decision process {: #decision-process }
The decisions that the managers and executive stakeholders take based on the predictions and Prediction Explanations for identifying potential bottlenecks include reaching out and collaborating with appropriate vendor teams in the supply-chain network based on data-driven insights. The decisions could be both long- and short-term based on the severity of the impact of shortages on the business.
### Model monitoring {: #model-monitoring }
One of the most critical components in implementing AI is having the ability to track the performance of the model for data drift and accuracy. With DataRobot MLOps, you can deploy, monitor, and manage all models across the organization through a centralized platform. Tracking model health is very important for proper model lifecycle management, similar to product lifecycle management.
### Implementation risks {: #implementation-risks }
One of the major risks in implementing this solution in the real world is adoption at the ground level. Having strong and transparent relationships with vendors is also critical in taking corrective action. The risk is that vendors may not be ready to adopt a data-driven strategy and trust the model results.
### Demo {: #demo }
See the notebook [here](pred-ship.ipynb).
---
title: Reduce 30-Day readmissions rate
description: This page outlines a use case to reduce the 30-day readmission rate at a hospital. This use case is captured in a Jupyter notebook that you can download and execute.
---
# Reduce 30-Day readmissions rate {: #reduce-30-day-readmissions-rate }
This page outlines a use case to reduce the 30-day readmission rate at a hospital. This use case is captured in a [Jupyter notebook](loan-default-nb.ipynb) that you can download and execute.
## Overview {: #overview }
The following sections outline the business problem and intelligent solutions for this notebook.
### Business problem {: #business-problem }
A readmission occurs when a patient is readmitted into the hospital within 30 days of previously being discharged. Readmissions are not only a reflection of uncoordinated healthcare systems that fail to sufficiently understand patients and their conditions, but they are also a tremendous financial strain on both healthcare providers and payers. In 2011, the United States Government estimated there were approximately 3.3 million cases of 30-day all-cause hospital readmissions, costing healthcare organizations a total of $41.3 billion.
The foremost challenge in mitigating readmissions is accurately anticipating patient risk from the point of initial admission up until discharge. Although a readmission is caused by a multitude of factors, including a patient’s medical history, admission diagnosis, and social determinants, the existing methods (i.e., LACE and HOSPITAL scores) used to assess a patient’s likelihood of a readmission are unable to effectively consider the variety of factors involved. By only including a limited amount of considerations, these methods result in suboptimal health evaluations and outcomes.
### Intelligent solution {: #intelligent-solution }
AI provides clinicians and care managers with the information they need to nurture strong, lasting connections with their patients. AI helps reduce readmission rates by predicting which patients are at risk and allowing clinicians to prescribe intervention strategies before and after the patient is discharged. Unlike existing methods, AI models can ingest significant amounts of data and learn complex patterns behind why certain patients are likely to be readmitted. With advancements in model interpretability, AI offers personalized explanations for all its predictions, giving clinicians insight into the top risk drivers for every single patient at any given time.
By taking the form of an artificial clinician and augmenting the care they provide, along with other actions clinicians already take, AI enables them to conduct intelligent interventions to improve patient health. Using the information they learn, clinicians can decrease the likelihood of patient readmission by carefully walking through their discharge paperwork in-person, scheduling additional outpatient appointments (to give them more confidence about their health), and providing additional interventions that help reduce readmissions.
### Value estimation {: #value-estimation }
**What has return on investment (ROI) looked like for this use case?**
“[DataRobot] easily outperformed the LACE model with a 5% reduction in readmissions in the first quarter of the year.” — KLAS Report
Symphony Post Acute Care: Saved $500K in costs by reducing readmissions.
**How would you measure ROI for your use case?**
Current cost of readmissions = `Current readmissions annual rate x Annual hospital inpatient discharge volumes x Average cost of a hospital readmission`
New cost of readmissions = `New readmissions annual rate x Annual hospital inpatient discharge volumes x Average cost of a hospital readmission`
ROI (annual savings) = `Current cost of readmissions - New cost of readmissions`
**Value Estimates (Top-down calculation)**
`Current costs of readmissions x improvement in readmissions rate` = ROI
A top-down estimate of the cost of readmissions for each healthcare provider is `$41.3 billion / 6,210 US providers = ~$6.7 million`.
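The ROI formulas above can also be worked bottom-up. The figures below are illustrative only; substitute your provider's actual discharge volume, readmission rate, and cost per readmission:

```python
# Illustrative numbers -- substitute your provider's actual figures.
discharges_per_year = 10_000
avg_readmission_cost = 15_000        # dollars per readmission
current_rate = 0.15                  # 15% readmission rate today
new_rate = 0.14                      # after a 1-point improvement

current_cost = current_rate * discharges_per_year * avg_readmission_cost
new_cost = new_rate * discharges_per_year * avg_readmission_cost
annual_savings = current_cost - new_cost   # ≈ $1.5M per year
```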
## Tech implementation {: #tech-implentation }
### About the data {: #about-the-data }
For illustrative purposes, this tutorial uses a sample dataset provided by a [medical journal](https://www.hindawi.com/journals/bmri/2014/781670/#supplementary-materials){ target=_blank } that studied readmissions across 70,000 inpatients with diabetes. The researchers of the study collected this data from the Health Facts database provided by Cerner Corporation, which is a collection of clinical records across providers in the United States. Health Facts allows organizations that use Cerner’s electronic health system to voluntarily make their data available for research purposes. All the data was cleansed of PII in compliance with HIPAA.
### Problem framing {: #problem-framing }
The target variable for this use case is **whether or not the patient was readmitted to the hospital** (Binary: True or False, 1 or 0, etc.). This choice of target makes this a binary classification problem.
The features below represent key factors for predicting readmissions. They encompass each patient’s background, diagnosis, and medical history, which will help DataRobot find relevant patterns across the patient’s medical profile to assess their re-hospitalization risk.
Beyond the features listed below, DataRobot suggests incorporating any additional data your organization may collect that could be relevant to the use case. As you will see later, DataRobot is able to quickly differentiate important vs. unimportant features.
These features are generally stored across proprietary data sources available in your EMR system: Patient Data, Diagnosis Data, Admissions Data, and Prescription Data. Examples of EMR systems are Epic and Cerner.
Other external data sources that may also be relevant include: Seasonal Data, Demographic Data, and Social Determinants Data.
### Sample feature list {: #sample-feature-list }
**Feature Name** | **Data Type** | **Description** | **Data Source** | **Example**
--- | --- | --- | --- | ---
Readmitted | Binary (Target) | Whether or not the patient was readmitted within 30 days | Admissions Data | False |
Age | Numeric | Patient age group | Patient Data | 50-60 |
Weight | Categorical | Patient weight group | Patient Data | 50-75 |
Gender | Categorical | Patient gender | Patient Data | Female |
Race | Categorical | Patient race | Patient Data | Caucasian |
Admissions Type | Categorical | Patient state during admission (Elective, Urgent, Emergency, etc.) | Admissions Data | Elective |
Discharge Disposition | Categorical | Patient discharge condition (Home, home with health services, etc.) | Admissions Data | Discharged to home |
Admission Source | Categorical | Patient source of admissions (Physician Referral, Emergency Room, Transfer, etc.) | Admissions Data | Physician Referral |
Days in Hospital | Numeric | Length of stay in hospital | Admissions Data | 1 |
Payer Code | Categorical | Unique code of patient’s payer | Admissions Data | CP |
Medical Specialty | Categorical | Medical specialty that patient is being admitted into | Admissions Data | Surgery-Neuro |
Lab Procedures | Numeric | Total lab procedures in the past | Admissions Data | 35 |
Procedures | Numeric | Total procedures in the past | Admissions Data | 4
Outpatient Visits | Numeric | Total outpatient visits in the past | Admissions Data | 0 |
ER Visits | Numeric | Total emergency room visits in the past | Admissions Data | 0 |
Inpatient Visits | Numeric | Total inpatient visits in the past | Admissions Data | 0 |
Diagnosis | Numeric | Total diagnosis | Diagnosis Data | 9 |
ICD10 Diagnosis Code(s) | Categorical | Patient’s ICD10 diagnosis on their condition; could be more than one (additional columns) | Diagnosis Data | M4802 |
ICD10 Diagnosis Description(s) | Categorical | Description on patient’s diagnosis; could be more than one (additional columns) | Diagnosis Data | Spinal stenosis, cervical region |
Medications | Numeric | Total number of medications prescribed to the patient | Prescription Data | 21 |
Prescribed Medication(s) | Binary | Whether or not the patient is prescribed to a medication; could be more than one (additional columns) | Prescription Data | Metformin – No |
### Data preparation {: #data-preparation }
The original raw data consisted of 74 million unique visits that include 18 million unique patients across 3 million providers. This data originally contained both inpatient and outpatient visits, as it included medical records from both integrated health systems and standalone providers.
While the original data schema consisted of 41 tables with 117 features, the final dataset was filtered on relevant patients and features based on the use case. The patients included were limited to those with:
* Inpatient encounters
* Existing diabetic conditions
* 1–14 days of inpatient stay
* Lab tests performed during inpatient stay (or not)
* Medications were prescribed during inpatient stay (or not)
All other features were excluded due to lack of relevance and/or poor data integrity.
Reference the [DataRobot documentation](data/index) to see details on how to connect DataRobot to your data source, perform feature engineering, follow best practice data science techniques, and more.
### Model training {: #model-training }
DataRobot's Automated Machine Learning (AutoML) automates many parts of the modeling pipeline. Instead of hand-coding and manually testing dozens of models to find the one that best fits your needs, DataRobot automatically runs dozens of models and finds the most accurate one for you, all in a matter of minutes. In addition to training the models, DataRobot automates other steps in the modeling process, such as processing and partitioning the dataset.
For this use case we create one unified model that predicts the likelihood of readmission for patients with diabetic conditions. Each record in the data represents a unique patient visit. Reference the [DataRobot documentation](gs-dr-fundamentals) to see how to use DataRobot from start to finish and how to understand the data science methodologies embedded in its automation.
### Interpret results {: #interpret-results }
#### Feature Impact {: #feature-impact }
By taking a look at the [Feature Impact](feature-impact) chart, you can see that a patient’s number of past inpatient visits, discharge disposition, and the medical specialty of their diagnosis are the top three most impactful features that contribute to whether a patient will readmit.

#### Partial Dependence {: #partial-dependence }
In assessing the [partial dependence](feature-effects#partial-dependence-calculations) plots to further evaluate the marginal impact top features have on the predicted outcome, you can see that as a patient’s number of past inpatient visits increases from 0 to 2, their likelihood to readmit subsequently jumps from 37% to 53%. As the number of visits exceeds 4 the likelihood increases to about 59%.

#### Prediction Explanations {: #prediction-explanations }
DataRobot’s [Prediction Explanations](pred-explain/index) provide a more granular view to interpret the model results. Here, we see why a given patient was predicted to readmit or not, based on the top predictive features.

### Post-processing {: #post-processing }
For the prediction results to be intuitive for clinicians to consume, instead of displaying them as a probability or binary value, they can be post-processed into different labels based on where they fall relative to predefined prediction thresholds. For instance, patients can be labeled as high risk, medium risk, or low risk depending on their risk of readmission.
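This post-processing step can be sketched as a simple mapping. The thresholds below are illustrative only and should be tuned with clinicians for each site:

```python
def risk_label(p, high=0.5, medium=0.3):
    """Post-process a readmission probability into a clinician-friendly
    label (thresholds are illustrative, not clinically validated)."""
    if p >= high:
        return "High risk"
    if p >= medium:
        return "Medium risk"
    return "Low risk"

[risk_label(p) for p in (0.72, 0.41, 0.12)]
# → ['High risk', 'Medium risk', 'Low risk']
```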
## Business implementation {: #business-implementation }
### Decision environment {: #decision-environment }
After you are able to find the right model that best learns patterns in your data to predict readmissions, DataRobot makes it easy to deploy the model into your desired decision environment. *Decision environments* are the ways in which the predictions generated by the model will be consumed by the appropriate stakeholders in your organization, and how these stakeholders will make decisions using the predictions to impact the overall process.
This is a critical piece of implementing the use case as it ensures that predictions are used in the real world for reducing hospital readmissions and generating clinical improvements.
**Decision maturity**
Automation | **Augmentation** | Blend
At its core, DataRobot empowers your clinicians and care managers with the information they need to nurture strong and lasting connections with the people they care about most: their patients. While there are use cases where decisions can be automated in a data pipeline, a readmissions model is geared to *augment* the decisions of your clinicians. It acts as an intelligent machine that, combined with the expertise of your clinicians, will help improve your patients’ medical outcomes.
### Model deployment {: #model-deployment }
DataRobot provides your clinicians with complete transparency on the top risk-drivers for every single patient at any given time, enabling them to conduct intelligent interventions both before and after the patient is discharged. Reference the [DataRobot documentation](mlops/index) for an overview of model deployment.
Predictions can be *integrated into other systems* that are embedded in the provider’s day-to-day business workflow. Results can be integrated into the provider’s EMR system or BI dashboards. For the former, clinicians can easily see predictions as an additional column in the data they already view on a daily basis to monitor their assigned patients. They will be given transparent interpretability of the predictions to understand why the model predicts the patient to readmit or not.
Some common integrations:
* Display results through an Electronic Medical Record system (i.e., Epic)
* Display results through a business intelligence tool (i.e., Tableau, Power BI)
For this use case, DataRobot shows an example of how to integrate predictions with Microsoft Power BI to create a dashboard that can be accessed by clinicians to support decisions on which patients they should address to prevent readmissions.
The dashboard below displays the probability of readmission for each patient on the floor. It shows the patient’s likelihood to readmit and top factors on why the model made the prediction. Nurses and physicians can consume a dashboard similar to this one to understand which patients are likely to readmit and why, allowing them to implement a prevention strategy tailored to each patient’s unique needs.

### Decision stakeholders {: #decision-stakeholders }
**Decision executors** are the clinical stakeholders who will consume decisions on a daily basis to identify patients who are likely to readmit and understand the steps they can take to intervene.
* Nurses
* Physicians
* Care managers
**Decision managers** are the executive stakeholders who will monitor and manage the program to analyze the performance of the provider’s readmission improvement programs.
* Chief medical officer
* Chief nursing officer
* Chief population health officer
**Decision authors** are the technical stakeholders who will put the decision flow in place.
* Clinical operations analyst
* Business intelligence analyst
* Data scientists
### Decision process {: #decision-process }
You can set thresholds to determine whether a prediction constitutes a foreseen readmission or not. Assign clear action items for each level of threshold so that clinicians can prescribe the necessary intervention strategies.

**Low risk:** Send an automated email or text that includes discharge paperwork, warning symptoms, and outpatient alternatives.
**Medium risk:** Send multiple automated emails or texts that include discharge paperwork, warning symptoms, and outpatient alternatives, with multiple reminders. Follow up with the patient 10 days post-discharge through email to gauge their condition.
**High risk:** Clinician briefs patient on their discharge paperwork in person. Send automated emails or texts that include discharge paperwork, warning symptoms, and outpatient alternatives, with multiple reminders. Follow up with the patient on a weekly basis post discharge through telephone or email to gauge their condition.
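The thresholds and tiers above can be expressed as a simple post-processing step on the model's predicted probability. The cut points below are illustrative placeholders, not values prescribed by this use case; in practice they would be chosen with clinicians based on capacity and the cost of a readmission.

```python
def risk_tier(readmit_probability, low=0.3, high=0.6):
    """Map a predicted readmission probability to an intervention tier.

    The 0.3/0.6 cut points are placeholder assumptions for illustration.
    """
    if readmit_probability >= high:
        return "high"    # in-person briefing plus weekly follow-up
    if readmit_probability >= low:
        return "medium"  # automated reminders plus 10-day email follow-up
    return "low"         # single automated discharge email or text
```

Each tier then maps to the corresponding intervention strategy described above.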
### Model monitoring {: #model-monitoring }
**Decision Operators**: IT, system operations, and data scientists.
**Prediction Cadence**: Batch predictions generated on a daily basis.
**Model Retraining Cadence**: Models retrained once data drift reaches an assigned threshold; otherwise, retrain the models at the beginning of every new operating quarter.
### Implementation risks {: #implementation-risks }
* Failure to make prediction results easy and convenient for clinicians to access (e.g., requiring them to open a separate application outside the EHR they already use, or overloading them with information).
* Failure to make predictions intuitive for clinicians to understand.
* Failure to help clinicians interpret the predictions and why the model reached them.
* Failure to provide clinicians with prescriptive strategies to act on high-risk cases.
### Trusted AI {: #trusted-ai }
In addition to traditional risk analysis, the following elements of AI Trust may require attention in this use case.
**Target leakage:** Target leakage occurs when information that would not be available at the time of prediction is used to train the model. That is, particular features may leak information about the eventual outcome, artificially inflating the model's performance in training. This use case required aggregating data across 41 different tables and a wide timeframe, making it vulnerable to potential target leakage. When designing the model and preparing the data, it is pivotal to identify the point of prediction (discharge from the hospital) and ensure that no data from after that point is included. DataRobot additionally supports robust target leakage detection in the second round of exploratory data analysis and through the selection of the Informative Features feature list during Autopilot.
**Bias & Fairness:** This use case leverages features that may be categorized as protected or sensitive (age, gender, race). It may be advisable to assess the equivalency of error rates across these protected groups. For example, compare whether patients of different races have equivalent false negative and false positive rates. The risk is that the system predicts less accurately for a certain protected group, failing to identify those patients as at risk of readmission. Mitigation techniques may be explored at various stages of the modeling process, if determined necessary.
### Demo {:#demo}
See the notebook [here](readmission.ipynb).
|
index
|
---
title: Feature selection notebooks
description: Review notebooks that outline feature selection.
---
# Feature selection notebooks {: #feature-selection-notebooks }
DataRobot offers end-to-end code examples via Jupyter notebooks that help you find complete examples of common data science and machine learning workflows. Review the notebooks that outline feature selection below.
Topic | Describes... |
----- | ------ |
[Feature Importance Rank Ensembling](feat-select/Feature-Importance-Rank-Ensembling.ipynb) | Learn about the benefits of Feature Importance Rank Ensembling (FIRE)—a method of advanced feature selection that uses a median rank aggregation of feature impacts across several models created during a run of Autopilot. |
[Advanced feature selection with Python](python-select.ipynb) | Use Python to select features by creating aggregated Feature Impact. |
|
index
|
---
title: User AI Accelerators
description: Review user-submitted workflows, notebooks, and tutorials that help you find complete examples of common data science and machine learning workflows.
---
# User AI Accelerators {: #user-ai-accelerators }
!!! warning
    The code in the GitHub repos for these accelerators is sourced from the DataRobot user community and is not owned or maintained by DataRobot, Inc. You may need to make edits or updates for this code to function properly in your environment.

_Want to contribute your own accelerator?_ Visit the [GitHub accelerator repo](https://github.com/datarobot-community/ai-accelerators/tree/main){ target=_blank } to see how it's done.
This page provides quick descriptions of each customer-contributed AI Accelerator: notebooks with API code building blocks that help reduce the cycle time from hypothesis to insights with DataRobot. For downloadable code and full descriptions, see and clone the notebooks from the DataRobot [GitHub accelerator repo](https://github.com/datarobot-community/ai-accelerators/tree/main){ target=_blank }.
|
index
|
---
title: Troubleshooting Batch Prediction jobs
description: A list of common issues that occur with Batch Prediction jobs, and how to resolve them.
---
# Troubleshooting {: #troubleshooting }
The following lists some common issues and how to resolve them.
## A job is stuck in `INITIALIZING` {: #a-job-is-stuck-in-initializing }
If using local file intake, make sure you have made a `PUT` request with the scoring data for the job after the initial `POST` request.
DataRobot only processes one job at a time per prediction instance, so your job may be queued behind other jobs. Check the job log for details:
```shell
curl -X GET https://app.datarobot.com/api/v2/batchPredictions/:id/ \
-H 'Authorization: Bearer <YOUR_KEY>'
```
## A job is stuck in `RUNNING` {: #a-job-is-stuck-in-running }
The job may be running slowly, either because of a slow model or because the scoring data contains errors that the API is trying to identify. You can follow the progress of a job by requesting the job status:
```shell
curl -X GET https://app.datarobot.com/api/v2/batchPredictions/:id/ \
-H 'Authorization: Bearer <YOUR_KEY>'
```
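If you poll this endpoint from a script, a small wait loop is enough. The sketch below is illustrative: `fetch_status` stands in for whatever performs the authenticated `GET` shown above and returns the job JSON.

```python
import time

def wait_for_job(fetch_status, poll_seconds=5, timeout_seconds=600):
    """Poll a Batch Prediction job until it leaves INITIALIZING/RUNNING.

    `fetch_status` is any callable returning the job status JSON,
    e.g. a GET against /api/v2/batchPredictions/:id/ parsed as a dict.
    """
    deadline = time.monotonic() + timeout_seconds
    while time.monotonic() < deadline:
        job = fetch_status()
        if job["status"] not in ("INITIALIZING", "RUNNING"):
            return job
        time.sleep(poll_seconds)
    raise TimeoutError("job did not finish within the timeout")
```

Inspect the returned job's `status` and `logs` fields to see why a job finished in `ABORTED` rather than `COMPLETED`.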
## A job was `ABORTED` {: #a-job-was-aborted }
When a job is aborted, DataRobot logs the reason to the job status. You can check job status from an individual job URL:
```shell
curl -X GET https://app.datarobot.com/api/v2/batchPredictions/:id/ \
-H 'Authorization: Bearer <YOUR_KEY>'
```
Or from the listing view of all jobs:
```shell
curl -X GET https://app.datarobot.com/api/v2/batchPredictions/ \
-H 'Authorization: Bearer <YOUR_KEY>'
```
## `HTTP 406` was returned when uploading a CSV file for local file intake {: #http-406-was-returned-when-uploading-a-csv-file-for-local-file-intake }
You are missing the `Content-Type: text/csv` header.
## `HTTP 422` was returned when uploading a CSV file for local file intake {: #http-422-was-returned-when-uploading-a-csv-file-for-local-file-intake }
You either:
- Already pushed CSV data for this job. To submit new data, create a new job.
- Tried to push CSV data for a job that does not require you to push data (e.g., S3 intake).
- Didn't encode your CSV data in the UTF-8 character set and didn't specify a custom encoding in `csvSettings`.
- Didn't encode your CSV data in the proper CSV format and didn't specify a custom format in `csvSettings`.
- Tried to push an empty file.
In any of the above cases, the response and the job log will contain an explanation.
## Intake stream error due to date format mismatch in Oracle JDBC scoring data {: #intake-stream-error-due-to-date-format-mismatch-in-oracle-jdbc-scoring-data }
Oracle's DATE type contains a time component, which can cause issues with scoring time series data.
A model trained with dates in the `yyyy-mm-dd` format can produce an error when scoring Oracle JDBC data. By default, DataRobot reads dates from Oracle in the format `yyyy-mm-dd hh:mm:ss`, which causes an error when passed to a model expecting a different format.
Use one of the following workarounds to avoid this issue:
- Train the model using Oracle as the data source to ensure that the time format is the same when scored from Oracle.
- Use the `query` option instead of `table` and `schema` to allow for the use of SQL functions. Oracle's `TO_CHAR` function can be used to parse time columns before the data is scored.
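As a sketch of the second workaround, the helper below assembles such a query; the table and column names in the example are placeholders. The resulting string would go in the `query` field of your JDBC `intakeSettings` instead of `table` and `schema`.

```python
def oracle_scoring_query(table, date_column, other_columns):
    """Build an intake `query` that reformats an Oracle DATE column.

    TO_CHAR renders the timestamp as yyyy-mm-dd so it matches the
    format the model was trained on.
    """
    cols = ", ".join(other_columns)
    return (
        f"SELECT TO_CHAR({date_column}, 'YYYY-MM-DD') AS {date_column}, {cols} "
        f"FROM {table}"
    )
```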
## The network connection broke while uploading a dataset for local file intake {: #the-network-connection-broke-while-uploading-a-dataset-for-local-file-intake }
Create a new job and re-upload the dataset. Failed uploads cannot be resumed and will eventually time out.
## The network connection became unavailable while downloading the scoring data for local file output {: #the-network-connection-became-unavailable-while-downloading-the-scoring-data-for-local-file-output }
Restart the download. The scored data is available for 48 hours on the managed AI Platform (SaaS) and for a configurable period (48 hours by default) on the Self-Managed AI Platform (VPC or on-prem).
## `HTTP 404` was returned while trying to download scored data {: #http-404-was-returned-while-trying-to-download-scored-data }
You either:
- Tried to download the scored data for a job that does not have scored data available for download (e.g., S3 output).
- Started the download before the job had started scoring. In that case, wait until the `download` link becomes available in the job links and try again.
## `HTTP 406` was returned when trying to download scored data {: #http-406-was-returned-when-trying-to-download-scored-data }
Your client sent an `Accept` header that did not include `text/csv`. Either do not send the `Accept` header or include `text/csv` in it.
## `CREATE_TABLE` scoring fails due to unsupported output column name formats {: #create_table-scoring-fails-due-to-unsupported-output-column-name-formats }
You may be using a target database as your output adapter that does not support the way DataRobot generates the [output format](output-format) column names. Column names such as `name (actual)_PREDICTION`, produced when scoring time series models, might not be supported by all databases.
To work around this issue, use the [Column name remapping](output-format#column-name-remapping) functionality to rewrite the output column names to a form your target database supports.
For instance, to remove the spaces from a column name, make a request that adds `columnNamesRemapping` as follows:
```json
{
"deploymentId":"<id>",
"passthroughColumnsSet":"all",
"includePredictionStatus":true,
"intakeSettings":{
"type":"localFile"
},
"outputSettings":{
"type":"jdbc",
"dataStoreId":"<id>",
"credentialId":"<id>",
"table":"table_name_of_database",
"schema":"dbo",
"catalog":"test",
"statementType":"create_table"
},
"columnNamesRemapping":{
"name (actual)_PREDICTION":"name_actual_PREDICTION"
}
}
```
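If many columns need cleaning, you can generate the mapping instead of writing it by hand. This helper is an illustrative sketch; the replacement rule is an assumption, so adjust it to whatever characters your target database actually rejects:

```python
import re

def sanitize_remapping(column_names):
    """Build a columnNamesRemapping dict that replaces characters many
    databases reject (spaces, parentheses, etc.) with underscores."""
    remap = {}
    for name in column_names:
        cleaned = re.sub(r"[^0-9A-Za-z_]+", "_", name)
        cleaned = re.sub(r"_{2,}", "_", cleaned).strip("_")
        if cleaned != name:
            remap[name] = cleaned  # only remap columns that changed
    return remap
```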
## Possible causes for `HTTP 422` on job creation {: #possible-causes-for-http-422-on-job-creation }
These are the possible causes for an `HTTP 422` reply when creating a new Batch Prediction job:
- You sent an unknown job parameter
- You specified a job parameter with an unexpected type or value
- You specified an unknown credential ID in either your intake or output settings
- You are attempting to score from/to the same S3/Azure/GCP URL (not supported)
- You are attempting to ingest data from the **AI Catalog**, but your account does not have access to the **AI Catalog**
- You are attempting to ingest data from the **AI Catalog** and the **AI Catalog** dataset is not snapshotted (required for predictions) or has not been successfully ingested
- You are attempting to use a time series custom model (not currently supported)
- You are attempting to use a traditional time series (ARIMA) model (not currently supported)
- You requested Prediction Explanations for a multiclass or time series project (not currently supported)
- You requested prediction warnings for a project other than a regression project (not currently supported)
- You requested prediction warnings for a project that is not properly configured with prediction boundaries
|
batch-pred-tshoot
|
---
title: Output formats
description: Review the output formats for the predictions DataRobot returns in a columnar table format.
---
# Output format {: #output-format }
DataRobot returns predictions in a columnar table format. Each example value is followed by the data type it belongs to. The columns returned are determined by model type, as described below.
!!! note
DataRobot allows prediction output to many different databases that all have unique versions of a string (e.g., some may call it `TEXT` while others may call it `VARCHAR`).
    As a result, DataRobot cannot provide implementation-specific data types.
## Regression models {: #regression-models }
<table>
<th colspan="2" style="text-align:center;"> Prediction label</th>
<tr>
<td><b>Column name</b></td>
<td no-i18n="true"><target_name>_PREDICTION</td>
</tr>
<tr>
<td><b>Data type</b></td>
<td>Numeric</td>
</tr>
<tr>
<td><b>Example name</b></td>
<td no-i18n="true">revenue_PREDICTION</td>
</tr>
<tr>
<td><b>Example value</b></td>
<td no-i18n="true">493822.12</td>
</tr>
<tr>
<td><b>Description</b></td>
<td>The predicted value.</td>
</tr>
</table>
## Binary classification models {: #binary-classification-models }
<table>
<tr>
<th colspan="2" style="text-align:center;">Positive label</th>
</tr>
<tr>
<td><b>Column name</b></td>
<td no-i18n="true"><target_name>_<positive_label>_PREDICTION</td>
</tr>
<tr>
<td><b>Data type</b></td>
<td>Numeric</td>
</tr>
<tr>
<td><b>Example name</b></td>
<td no-i18n="true">isbadbuy_1_PREDICTION</td>
</tr>
<tr>
<td><b>Example value</b></td>
<td no-i18n="true">0.28</td>
</tr>
<tr>
<td><b>Description</b></td>
<td>The float probability of the positive label.</td>
</tr>
</table>
<table>
<tr>
<th colspan="2" style="text-align:center;">Negative label</th>
</tr>
<tr>
<td><b>Column name</b></td>
<td no-i18n="true"><target_name>_<negative_label>_PREDICTION</td>
</tr>
<tr>
<td><b>Data type</b></td>
<td>Numeric</td>
</tr>
<tr>
<td><b>Example name</b></td>
<td no-i18n="true">isbadbuy_0_PREDICTION</td>
</tr>
<tr>
<td><b>Example value</b></td>
<td no-i18n="true">0.72</td>
</tr>
<tr>
<td><b>Description</b></td>
<td>The float probability of the negative label.</td>
</tr>
</table>
<table>
<tr>
<th colspan="2" style="text-align:center;">Prediction label</th>
</tr>
<tr>
<td><b>Column name</b></td>
<td no-i18n="true"><target_name>_PREDICTION</td>
</tr>
<tr>
<td><b>Data type</b></td>
<td>Text</td>
</tr>
<tr>
<td><b>Example name</b></td>
<td no-i18n="true">isbadbuy_PREDICTION</td>
</tr>
<tr>
<td><b>Example value</b></td>
<td no-i18n="true">0</td>
</tr>
<tr>
<td><b>Description</b></td>
<td>The predicted label of the classification.</td>
</tr>
</table>
<table>
<tr>
<th colspan="2" style="text-align:center;">Threshold label</th>
</tr>
<tr>
<td><b>Column name</b></td>
<td no-i18n="true">THRESHOLD</td>
</tr>
<tr>
<td><b>Data type</b></td>
<td>Numeric</td>
</tr>
<tr>
<td><b>Example name</b></td>
<td no-i18n="true">THRESHOLD</td>
</tr>
<tr>
<td><b>Example value</b></td>
<td no-i18n="true">0.5</td>
</tr>
<tr>
<td><b>Description</b></td>
<td>The float prediction threshold used for determining the label.</td>
</tr>
</table>
<table>
<tr>
<th colspan="2" style="text-align:center;">Positive class label</th>
</tr>
<tr>
<td><b>Column name</b></td>
<td no-i18n="true">POSITIVE_CLASS</td>
</tr>
<tr>
<td><b>Data type</b></td>
<td>Text</td>
</tr>
<tr>
<td><b>Example name</b></td>
<td no-i18n="true">POSITIVE_CLASS</td>
</tr>
<tr>
<td><b>Example value</b></td>
<td no-i18n="true">1</td>
</tr>
<tr>
<td><b>Description</b></td>
<td>The label configured as the positive class.</td>
</tr>
</table>
## Multiclass classification models {: #multiclass-classification-models }
<table>
<tr>
<th colspan="2" style="text-align:center;">Prediction label</th>
</tr>
<tr>
<td><b>Column name</b></td>
<td no-i18n="true"><target_name>_PREDICTION</td>
</tr>
<tr>
<td><b>Data type</b></td>
<td>Text</td>
</tr>
<tr>
<td><b>Example name</b></td>
<td no-i18n="true">species_PREDICTION</td>
</tr>
<tr>
<td><b>Example value</b></td>
<td>lion</td>
</tr>
<tr>
<td><b>Description</b></td>
<td>The predicted label of the classification.</td>
</tr>
</table>
<table>
<tr>
<th colspan="2" style="text-align:center;">Prediction class label (for each class)</th>
</tr>
<tr>
<td><b>Column name</b></td>
<td no-i18n="true"><target_name>_<class_label>_PREDICTION</td>
</tr>
<tr>
<td><b>Data type</b></td>
<td>Numeric</td>
</tr>
<tr>
<td><b>Description</b></td>
<td>The float probability for each class.</td>
</tr>
</table>
<table>
<th colspan="2" style="text-align:center;">Example classifications</th>
<tr>
<th>Example name</th>
<th>Example value</th>
</tr>
<tr>
<td no-i18n="true">species_cat_PREDICTION</td>
<td no-i18n="true">0.28</td>
</tr>
<tr>
<td no-i18n="true">species_lion_PREDICTION</td>
<td no-i18n="true">0.24</td>
</tr>
<tr>
<td no-i18n="true">species_lynx_PREDICTION</td>
<td no-i18n="true">0.48</td>
</tr>
</table>
## Time series models {: #time-series-models }
!!! note
These output columns are available for time series regression, classification, and anomaly detection models.
Time series model columns | Description | Data type |
------------------------- | ----------- | --------- |
<SERIES_ID_COLUMN_NAME> | Contains the series ID the row belongs to.<br> <br>Functions as a passthrough column and returns the unaltered column name and values provided in the scoring data. | Text |
FORECAST_POINT | Contains the forecast point timestamp.<br><br>Unless you request [historical time series predictions](batch-pred-ts#time-series-batch-prediction-settings), the output value is the same for all rows with the same forecast point (but different for each unique forecast distance). | Date |
<TIME_COLUMN_NAME> | Contains the time series timestamp.<br><br>Functions as a passthrough column and returns the unaltered column name and values provided in the scoring data. (This returns the same value as the `originalFormatTimestamp` field returned by time series models.) | Date |
FORECAST_DISTANCE | Contains the numeric forecast distance returned by time series models. | Numeric |
## Prediction status {: #prediction-status }
<table>
<th colspan="2" style="text-align:center;">Prediction status label</th>
<tr>
<td><b>Column name</b></td>
<td no-i18n="true">prediction_status</td>
</tr>
<tr>
<td><b>Data type</b></td>
<td>Text</td>
</tr>
<tr>
<td><b>Description</b></td>
<td>A row-by-row status containing either <code>OK</code> or a string error message describing why the prediction did not succeed.</td>
</tr>
<tr>
<td><b>Example value</b></td>
<td>Could not convert date field to date format YYYY-MM-DD</td>
</tr>
<tr>
<td><b>Example value</b></td>
<td>OK</td>
</tr>
</table>
## Prediction warnings {: #prediction-warnings }
If prediction warnings are enabled for your job, DataRobot returns an additional column.
<table>
<th colspan="2" style="text-align:center;">Prediction warnings label</th>
<tr>
<td><b>Column name</b></td>
<td no-i18n="true">IS_OUTLIER_PREDICTION</td>
</tr>
<tr>
<td><b>Data type</b></td>
<td>Text</td>
</tr>
<tr>
<td><b>Description</b></td>
<td>Whether the prediction is outside the calculated prediction boundaries.</td>
</tr>
</table>
<table>
<tr>
<th colspan="2" style="text-align:center;">Example values</th>
</tr>
<tr>
<th>Column</th>
<th>Example value</th>
</tr>
<tr>
<td no-i18n="true">IS_OUTLIER_PREDICTION</td>
<td>True</td>
</tr>
<tr>
<td no-i18n="true">IS_OUTLIER_PREDICTION</td>
<td>False</td>
</tr>
</table>
## Deployment approval status {: #deployment-approval-status }
If the approval workflow is enabled for your deployment, the output schema will contain an extra column showing the deployment approval status.
<table>
<tr>
<th colspan="2" style="text-align:center;">Deployment status label</th>
</tr>
<tr>
<td><b>Column name</b></td>
<td no-i18n="true">DEPLOYMENT_APPROVAL_STATUS</td>
</tr>
<tr>
<td><b>Data type</b></td>
<td>Text</td>
</tr>
<tr>
<td><b>Description</b></td>
<td>Whether the deployment was approved.</td>
</tr>
<tr>
<td><b>Example value</b></td>
<td no-i18n="true">PENDING</td>
</tr>
</table>
## Prediction Explanations {: #prediction-explanations }
You can request Prediction Explanations be returned with your predictions by setting the `maxExplanations` job parameter to a non-zero value. You can also set thresholds for computing explanations. If you do not configure a threshold, DataRobot computes explanations for every row.
<table>
<th colspan="4" style="text-align:center;">Prediction Explanation parameters</th>
<tr>
<th>Job parameter</th>
<th>Description</th>
<th>Example value</th>
<th>Data type</th>
</tr>
<tr>
<td no-i18n="true">maxExplanations</td>
<td>Optional. Compute up to this number of explanations.</td>
<td no-i18n="true">10</td>
<td>Integer</td>
</tr>
<tr>
<td no-i18n="true">thresholdHigh</td>
<td>Optional. Limit explanations to predictions above this threshold.</td>
<td no-i18n="true">0.5</td>
<td>Float</td>
</tr>
<tr>
<td no-i18n="true">thresholdLow</td>
<td>Optional. Limit explanations to predictions below this threshold.</td>
<td no-i18n="true">0.15</td>
<td>Float</td>
</tr>
</table>
If Prediction Explanations are requested, DataRobot returns four extra columns for each explanation in the format `EXPLANATION_<n>_IDENTIFIER` (where `n` is the feature explanation index, from 1 to the maximum number of explanations requested). The returned columns are:
<table>
<thead>
<th colspan="3" style="text-align:center;">Prediction Explanation columns</th>
<tr>
<th>Column</th>
<th>Description</th>
<th>Data type</th>
</tr>
</thead>
<tbody>
<tr>
<td no-i18n="true">EXPLANATION_<n>_FEATURE_NAME</td>
<td>The feature name this explanation covers.</td>
<td>Text</td>
</tr>
<tr>
<td no-i18n="true">EXPLANATION_<n>_STRENGTH</td>
<td>The feature strength as a float.</td>
<td>Numeric</td>
</tr>
<tr>
<td no-i18n="true">EXPLANATION_<n>_QUALITATIVE_STRENGTH</td>
<td>The feature strength as a string, a plus or minus indicator from <code>+++</code> to <code>---</code>.</td>
<td>Text</td>
</tr>
<tr>
<td no-i18n="true">EXPLANATION_<n>_ACTUAL_VALUE</td>
<td>The feature associated with this explanation.</td>
<td>Text</td>
</tr>
</tbody>
</table>
### Prediction Explanation examples {: #prediction-explanation-examples }
<table>
<tr>
<th style="width: 55%;">Name</th>
<th>Value</th>
</tr>
<tr>
<td no-i18n="true">EXPLANATION_1_FEATURE_NAME</td>
<td no-i18n="true">loan_status</td>
</tr>
<tr>
<td no-i18n="true">EXPLANATION_1_ACTUAL_VALUE</td>
<td>Charged Off</td>
</tr>
<tr>
<td no-i18n="true">EXPLANATION_1_STRENGTH</td>
<td no-i18n="true">1.380291221709652</td>
</tr>
<tr>
<td no-i18n="true">EXPLANATION_1_QUALITATIVE_STRENGTH</td>
<td>+++</td>
</tr>
</table><table>
<tr>
<th style="width: 55%;">Name</th>
<th>Value</th>
</tr>
<tr>
<td no-i18n="true">EXPLANATION_1_FEATURE_NAME</td>
<td no-i18n="true">loan_status</td>
</tr>
<tr>
<td no-i18n="true">EXPLANATION_1_ACTUAL_VALUE</td>
<td>Fully Paid</td>
</tr>
<tr>
<td no-i18n="true">EXPLANATION_1_STRENGTH</td>
<td no-i18n="true">-1.2145340858375335</td>
</tr>
<tr>
<td no-i18n="true">EXPLANATION_1_QUALITATIVE_STRENGTH</td>
<td>---</td>
</tr>
</table>
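When post-processing scored output in code, these per-explanation columns can be regrouped into one structure per row. A minimal sketch, assuming each row has already been parsed into a dict of column name to value:

```python
def collect_explanations(row, max_explanations=10):
    """Gather the EXPLANATION_<n>_* columns of one scored row into a list."""
    explanations = []
    for n in range(1, max_explanations + 1):
        feature = row.get(f"EXPLANATION_{n}_FEATURE_NAME")
        if not feature:
            break  # fewer explanations were returned than the maximum
        explanations.append({
            "feature": feature,
            "strength": float(row[f"EXPLANATION_{n}_STRENGTH"]),
            "qualitative": row[f"EXPLANATION_{n}_QUALITATIVE_STRENGTH"],
            "value": row[f"EXPLANATION_{n}_ACTUAL_VALUE"],
        })
    return explanations
```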
## Passthrough columns {: #passthrough-columns }
Passthrough columns you request are passed verbatim. If they conflict with any of the above names, the job is rejected.
## Association ID {: #association-id }
If your deployment was configured with an [association ID for accuracy](accuracy-settings), all result sets will have that column passed through from the source data automatically.
## Output filters {: #output-filters }
Use the following job configuration properties to control whether to display only specific class probabilities or none at all.
<table>
<th colspan="4" style="text-align:center;">Output filter parameters</th>
<tr>
<th>Job parameter</th>
<th>Description</th>
<th>Example value</th>
<th>Data type</th>
</tr>
<tr>
<td no-i18n="true">includeProbabilities</td>
<td>Optional. Include probabilities for all classes; defaults to <code>true</code>.</td>
<td>true</td>
<td>Boolean</td>
</tr>
<tr>
<td no-i18n="true">includeProbabilitiesClasses</td>
<td>Optional. Include only probabilities for classes listed in the given array; defaults to an empty array <code>[]</code>.</td>
<td>['setosa', 'versicolor']</td>
<td>Array</td>
</tr>
<tr>
<td no-i18n="true">includePredictionStatus</td>
<td>Optional. Include the <code>prediction_status</code> column in the output; defaults to false.</td>
<td>true</td>
<td>Boolean</td>
</tr>
</table>
!!! note
For binary classification, `includeProbabilities` also controls the `THRESHOLD` and `POSITIVE_CLASS` columns.
## Column name remapping {: #column-name-remapping }
If your use case has a strict output schema that does not match the DataRobot output, you can rename and remove any columns from the output using the `columnNamesRemapping` job configuration property.
<table>
<th colspan="3" style="text-align:center;">Output column name remapping parameters</th>
<tr>
<th>Job parameter</th>
<th>Description</th>
<th>Example value</th>
</tr>
<tr>
<td no-i18n="true">columnNamesRemapping</td>
<td>Optional. Provide a list of items to remap (rename or remove columns from) the output of this job. Set a column's <code>outputName</code> to null or false to remove it from the output.</td>
<td><code>[{'inputName': 'isbadbuy_1_PREDICTION', 'outputName':'prediction'}, {'inputName': 'isbadbuy_0_PREDICTION', 'outputName': null}]</code></td>
</tr>
</table>
|
output-format
|
---
title: Prediction output options
description: Learn how to program your batch prediction job's output. You can use local file streaming, S3 scoring, an AI Catalog dataset, JDBC, Snowflake, Synapse, or Tableau scoring.
---
# Prediction output options {: #prediction-output-options }
You can configure a prediction destination using the **[Predictions > Job Definitions](batch-pred-jobs#set-up-prediction-destinations)** tab or the [Batch Prediction API](batch-prediction-api/index). This topic describes both the UI and API output options.
!!! note
For a complete list of supported output options, see the [data sources supported for batch predictions](batch-prediction-api/index#data-sources-supported-for-batch-predictions).
| Output option | Description |
|---------------|-------------|
| [Local file streaming](#local-file-streaming) | Stream scored data through a URL endpoint for immediate download when the job moves to a running state.|
| [HTTP write](#http-write) | Stream data to write to an absolute URL for scoring. This option can write data to pre-signed URLs for Amazon S3, Azure, and Google Cloud Platform. |
| **Database connections** | :~~: |
| [JDBC write](#jdbc-write) | Write prediction results back to a JDBC data source with data destination details supplied through a job definition or the Batch Prediction API. |
| **Cloud storage connections** | :~~: |
| [Amazon S3 write](#amazon-s3-write) | Write scored data to public or private S3 buckets with a DataRobot credential consisting of an access key (ID and key) and a session token (optional).|
| [Azure Blob Storage write](#azure-blob-storage-write) | Write scored data to Azure Blob Storage with a DataRobot credential consisting of an Azure Connection String.|
| [Google Cloud Storage write](#google-cloud-storage-write) | Write scored data to Google Cloud Storage with a DataRobot credential consisting of a JSON-formatted account key. |
| **Data warehouse connections** | :~~: |
| [BigQuery write](#bigquery-write) | Score data using BigQuery with data destination details supplied through a job definition or the Batch Prediction API. |
| [Snowflake write](#snowflake-write) | Score data using Snowflake with data destination details supplied through a job definition or the Batch Prediction API. |
| [Azure Synapse write](#azure-synapse-write) | Score data using Synapse with data destination details supplied through a job definition or the Batch Prediction API. |
| **Other connections** | :~~: |
| [Tableau write](#tableau-write) | Score data using Tableau with data destination details supplied through the Batch Prediction API. |
If you are using a custom [CSV format](batch-prediction-api/index#csv-format), any output option dealing with CSV will adhere to that format. The columns that appear in the output are documented in the section on [output format](output-format).
## Local file streaming {: #local-file-streaming }
If your job is configured with local file streaming as the output option, you can start downloading the scored data as soon as the job moves to a `RUNNING` state. In the example job data JSON below, the URL needed to make the local file streaming request is available in the `download` key of the `links` object:
``` json
{
"elapsedTimeSec": 97,
"failedRows": 0,
"jobIntakeSize": 1150602342,
"jobOutputSize": 107791140,
"jobSpec": {
"deploymentId": "5dc1a6a9865d6c004dd881ef",
"maxExplanations": 0,
"numConcurrent": 4,
"passthroughColumns": null,
"passthroughColumnsSet": null,
"predictionWarningEnabled": null,
"thresholdHigh": null,
"thresholdLow": null
},
"links": {
"download": "https://app.datarobot.com/api/v2/batchPredictions/5dc45e583c36a100e45276da/download/",
"self": "https://app.datarobot.com/api/v2/batchPredictions/5dc45e583c36a100e45276da/"
},
"logs": [
"Job created by user@example.org from 203.0.113.42 at 2019-11-07 18:11:36.870000",
"Job started processing at 2019-11-07 18:11:49.781000",
"Job done processing at 2019-11-07 18:13:14.533000"
],
"percentageCompleted": 0.0,
"scoredRows": 3000000,
"status": "COMPLETED",
"statusDetails": "Job done processing at 2019-11-07 18:13:14.533000"
}
```
If you download faster than DataRobot can ingest and score your data, the download may appear sluggish because DataRobot streams the scored data as soon as it arrives (in chunks).
Refer to [this sample use case](pred-examples#end-to-end-scoring-of-csv-files-from-local-files) for a complete example.
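As a minimal guard when scripting this flow, check for the `download` link before requesting it; the `job` argument below is the status JSON shown above:

```python
def download_url(job):
    """Return the scored-data download link from the job status JSON,
    raising if the link is not yet present (e.g., job still INITIALIZING)."""
    links = job.get("links", {})
    if "download" not in links:
        raise KeyError("download link not available yet for this job")
    return links["download"]
```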
## HTTP write {: #http-write }
You can point Batch Predictions at a regular URL, and DataRobot streams the data for scoring:
| Parameter | Example | Description |
|--------------|---------|---------------|
| `type` | `http` | Use HTTP for output. |
| `url` | `https://example.com/datasets/scored.csv` | An absolute URL that designates where the file is written. |
The URL can optionally contain a username and password such as: `https://username:password@example.com/datasets/scoring.csv`.
The `http` adapter can be used for writing data to pre-signed URLs from [S3](https://docs.aws.amazon.com/AmazonS3/latest/userguide/ShareObjectPreSignedURL.html){ target=_blank }, [Azure](https://docs.microsoft.com/en-us/azure/storage/common/storage-sas-overview){ target=_blank }, or [GCP](https://cloud.google.com/storage/docs/access-control/signed-urls){ target=_blank }.
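A sketch of a job payload using HTTP output (the deployment ID and URL are placeholders):

```python
job = {
    "deploymentId": "5dc1a6a9865d6c004dd881ef",  # placeholder deployment ID
    "intakeSettings": {"type": "localFile"},
    "outputSettings": {
        "type": "http",
        # Could also be a pre-signed S3, Azure, or GCP URL:
        "url": "https://example.com/datasets/scored.csv",
    },
}
```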
## JDBC write {: #jdbc-write }
DataRobot supports writing prediction results back to a JDBC data source. For this, the Batch Prediction API integrates with [external data sources](data-conn#add-data-sources) using [securely stored credentials](stored-creds).
Supply data destination details using the **[Predictions > Job Definitions](batch-pred-jobs#set-up-prediction-destinations)** tab or the [Batch Prediction API](batch-prediction-api/index) (`outputSettings`) as described in the table below.
| UI field | Parameter | Example | Description |
|----------|-----------|---------|-------------|
| Destination type | `type` | `jdbc` | Use a JDBC data store as output. |
| + Select connection | `dataStoreId` | `5e4bc5b35e6e763beb9db14a` | The external data source ID. |
| Enter credentials | `credentialId` | `5e4bc5555e6e763beb9db147` | Optional. The ID of a stored credential containing username and password. Refer to [storing credentials securely](batch-prediction-api/index#credentials). |
| Tables | `table` | `scoring_data` | The name of the database table where scored data will be written. |
| Schemas | `schema` | `public` | Optional. The name of the schema where scored data will be written. |
| Database | `catalog` | `output_data` | Optional. The name of the specified database catalog to write output data to. |
| **Write strategy options** | :~~: | :~~: | :~~: |
| Write strategy | `statementType` | `update` | The statement type, `insert`, `update`, or `insertUpdate`. |
| Create table if it does not exist <br> (for Insert or Insert + Update) | `create_table_if_not_exists` | `true` | Optional. If no existing table is detected, attempt to create it before writing data with the strategy defined in the `statementType` parameter. |
| Row identifier <br> (for Update or Insert + Update) | `updateColumns` | `['index']`| Optional. A list of strings containing the column names to be updated when `statementType` is set to `update` or `insertUpdate`. |
| Row identifier <br> (for Update or Insert + Update) | `where_columns` | `['refId']` | Optional. A list of strings containing the column names to be selected when `statementType` is set to `update` or `insertUpdate`. |
| **Advanced options** | :~~: | :~~: | :~~: |
| Commit interval | `commitInterval` | `600` | Optional. Defines a time interval, in seconds, between commits to the JDBC source. If set to `0`, the batch prediction operation will write the entire job before committing. Default: `600` |
!!! note
If your target database doesn't support the column naming conventions of DataRobot's [output format](output-format), you can use [Column Name Remapping](output-format#column-name-remapping) to re-write the output column names to a format your target database supports (e.g., remove spaces from the name).
### Statement types {: #statement-types }
When dealing with **Write strategy** options, you can use the following statement types to write data, depending on the situation:
| Statement type | Description |
|----------------|-------------|
| `insert` | Scored data rows are inserted in the target database as a new entry. Suitable for writing to an **empty** table. |
| `update` | Scored data entries in the target database matching the row identifier of a result row are updated with the new result (columns identified in `updateColumns`). Suitable for writing to an **existing** table. |
| `insertUpdate` | Entries in the target database matching the row identifier of a result row (`where_columns`) are updated with the new result (`update` queries). All other result rows are inserted as new entries (`insert` queries). |
| `createTable` (deprecated) | DataRobot no longer recommends `createTable`. Use a different option with `create_table_if_not_exists` set to `True`. If used, scored data rows are saved to a new table using `INSERT` queries. The table must not exist before scoring. |
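A sketch of `outputSettings` for a JDBC destination, with parameter names taken verbatim from the tables above (note that the API mixes camelCase and snake_case keys here); all IDs are placeholders:

```python
output_settings = {
    "type": "jdbc",
    "dataStoreId": "5e4bc5b35e6e763beb9db14a",   # placeholder external data source ID
    "credentialId": "5e4bc5555e6e763beb9db147",  # placeholder stored credential ID
    "table": "scoring_data",
    "schema": "public",
    "statementType": "insert",
    "create_table_if_not_exists": True,  # create the table on first write if missing
}
```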
### Source IP addresses for whitelisting {: #source-ip-addresses-for-whitelisting }
Any connection initiated from DataRobot originates from one of the following IP addresses:
{% include 'includes/whitelist-ip.md' %}
## Amazon S3 write {: #amazon-s3-write }
DataRobot can save scored data to both public and private buckets. To write to S3, you must set up a credential with DataRobot consisting of an access key (ID and key) and optionally a session token.
| UI field | Parameter | Example | Description |
|----------|--------------|----------------|--------|
| Destination type | `type` | `s3` | Use S3 for output. |
| URL | `url` | `s3://bucket-name/results/scored.csv` | An absolute URL for the file to be written. |
| Format | `format` | `csv` | CSV (default) or Parquet. |
| + Add credentials | `credentialId` | `5e4bc5555e6e763beb9db147` | In the UI, enable the **+ Add credentials** field by selecting **This URL requires credentials**. Required if explicit access credentials for this URL are required. Refer to [storing credentials securely](../index#credentials). |
| **Advanced options** | :~~: | :~~: | :~~: |
| Endpoint URL | `endpointUrl` | `https://s3.us-east-1.amazonaws.com` | Optional. Override the endpoint used to connect to S3, for example, to use an API gateway or another S3-compatible storage service. |
AWS credentials are encrypted and only decrypted when used to set up the client for communication with AWS during scoring.
!!! note
If running a Private AI Cloud within AWS, you can provide implicit credentials for your application instances using an IAM Instance Profile to access your S3 buckets without supplying explicit credentials in the job data. For more information, see the AWS article, [Create an IAM Instance Profile](https://docs.aws.amazon.com/codedeploy/latest/userguide/getting-started-create-iam-instance-profile.html){ target="_blank" }.
## Azure Blob Storage write {: #azure-blob-storage-write }
Azure Blob Storage is an option for scoring large files. To save a dataset to Azure Blob Storage, you must set up a credential with DataRobot consisting of an Azure Connection String.
| UI field | Parameter | Example | Description |
|----------|--------------|---------|---------------|
| Destination type | `type` | `azure` | Use Azure Blob Storage for output. |
| URL | `url` | `https://myaccount.blob.core.windows.net/datasets/scored.csv` | An absolute URL for the file to be written.|
| Format | `format` | `csv` | Optional. CSV (default) or Parquet. |
| + Add credentials | `credentialId` | `5e4bc5555e6e763beb488dba` | In the UI, enable the **+ Add credentials** field by selecting **This URL requires credentials**. Required if explicit access credentials for this URL are necessary (optional otherwise). Refer to [storing credentials securely](batch-prediction-api/index#store-credentials-securely). |
Azure credentials are encrypted and only decrypted when used to set up the client for communication with Azure during scoring.
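The corresponding `outputSettings` sketch (account, container, and credential ID are placeholders; the credential holds the Azure Connection String):

```python
output_settings = {
    "type": "azure",
    "url": "https://myaccount.blob.core.windows.net/datasets/scored.csv",
    "format": "csv",                             # optional; CSV is the default
    "credentialId": "5e4bc5555e6e763beb488dba",  # placeholder connection string credential
}
```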
## Google Cloud Storage write {: #google-cloud-storage-write }
DataRobot also supports the Google Cloud Storage adapter. To save a dataset to Google Cloud Storage, you must set up a credential with DataRobot consisting of a JSON-formatted account key.
| UI field | Parameter | Example | Description |
|----------|--------------|---------|---------------|
| Destination type | `type` | `gcp` | Use Google Cloud Storage for output. |
| URL | `url` | `gcs://bucket-name/datasets/scored.csv` | An absolute URL designating where the file is written.|
| Format | `format` | `csv` | Optional. CSV (default) or Parquet. |
| + Add credentials | `credentialId` | `5e4bc5555e6e763beb488dba` | Required if explicit access credentials for this URL are required, otherwise optional. Refer to [storing credentials securely](batch-prediction-api/index#store-credentials-securely). |
GCP credentials are encrypted and are only decrypted when used to set up the client for communication with GCP during scoring.
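For Google Cloud Storage, the `outputSettings` sketch looks like this (bucket name and credential ID are placeholders; the credential holds the JSON account key):

```python
output_settings = {
    "type": "gcp",
    "url": "gcs://bucket-name/datasets/scored.parquet",
    "format": "parquet",                         # optional; CSV is the default
    "credentialId": "5e4bc5555e6e763beb488dba",  # placeholder account key credential
}
```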
## BigQuery write {: #bigquery-write }
To use BigQuery for scoring, supply data destination details using the **[Predictions > Job Definitions](batch-pred-jobs#set-up-prediction-destinations)** tab or the [Batch Prediction API](batch-prediction-api/index) (`outputSettings`) as described in the table below.
| UI field | Parameter | Example | Description |
|----------|--------------|---------|---------------|
| Destination type | `type` | `bigquery` | Use Google Cloud Storage for output and the batch loading job to ingest data from GCS into a BigQuery table. |
| Dataset | `dataset` | `my_dataset` | The BigQuery dataset to use. |
| Table | `table` | `my_table` | The BigQuery table from the dataset to use for output. |
| Bucket name | `bucket` | `my-bucket-in-gcs` | The GCP bucket where data files are stored to be loaded into or unloaded from a BigQuery table. |
| + Add credentials | `credentialId` | `5e4bc5555e6e763beb488dba` | Required if explicit access credentials for this bucket are necessary (otherwise optional). In the UI, enable the **+ Add credentials** field by selecting **This connection requires credentials**. Refer to [storing credentials securely](batch-prediction-api/index#store-credentials-securely). |
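As a sketch, the `outputSettings` for BigQuery mirror the table above (all names and IDs are placeholders):

```python
output_settings = {
    "type": "bigquery",
    "dataset": "my_dataset",
    "table": "my_table",
    "bucket": "my-bucket-in-gcs",                # staging bucket for the batch load job
    "credentialId": "5e4bc5555e6e763beb488dba",  # placeholder credential for the bucket
}
```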
Refer to the [example section](pred-examples#end-to-end-scoring-with-bigquery) for a complete API example.
## Snowflake write {: #snowflake-write }
To use Snowflake for scoring, supply data destination details using the **[Predictions > Job Definitions](batch-pred-jobs#set-up-prediction-destinations)** tab or the [Batch Prediction API](batch-prediction-api/index) (`outputSettings`) as described in the table below.
| UI field | Parameter | Example | Description |
|----------|-----------|---------|-------------|
| Destination type | `type` | `snowflake` | Adapter type. |
| **Connection options** | :~~: | :~~: | :~~: |
| + Select connection | `dataStoreId` | `5e4bc5b35e6e763beb9db14a` | ID of Snowflake data source. |
| Enter credentials | `credentialId` | `5e4bc5555e6e763beb9db147` | Optional. ID of a stored credential containing username and password for Snowflake. |
| Tables | `table` | `RESULTS` | Name of the Snowflake table to store results. |
| Schemas | `schema` | `PUBLIC` | Optional. The name of the schema containing the table to be scored. |
| Database | `catalog` | `OUTPUT` | Optional. The name of the specified database catalog to write output data to. |
| **Use external stage options** | :~~: | :~~: | :~~: |
| Cloud storage type | `cloudStorageType` | `s3` | Optional. Type of cloud storage backend used in Snowflake external stage. Can be one of 3 cloud storage providers: `s3`/`azure`/`gcp`. The default is `s3`. In the UI, select **Use external stage** to enable the **Cloud storage type** field. |
| External stage | `externalStage` | `my_s3_stage` | [Snowflake external stage](https://docs.snowflake.com/en/sql-reference/sql/create-stage.html){ target=_blank }. In the UI, select **Use external stage** to enable the **External stage** field. |
| Endpoint URL (for S3 only) | `endpointUrl` | `https://www.example.com/datasets/` | Optional. Override the endpoint used to connect to S3, for example, to use an API gateway or another S3-compatible storage service. In the UI, for the **S3** option in **Cloud storage type**, click **Show advanced options** to reveal the **Endpoint URL** field.|
| + Add credentials | `cloudStorageCredentialId` | `6e4bc5541e6e763beb9db15c` | Optional. ID of stored credentials for a storage backend (S3/Azure/GCS) used in Snowflake stage. In the UI, enable the **+ Add credentials** field by selecting **This URL requires credentials**. |
| **Write strategy options (for fallback JDBC connection)** | :~~: | :~~: | :~~: |
| Write strategy | `statementType` | `insert` | If you're using a Snowflake external stage the `statementType` is `insert`. However, in the UI you have two configuration options: <ul><li>If you haven't configured an external stage, the connection defaults to **JDBC** and you can select **Insert** or **Update**. If you select **Update**, you can provide a **Row identifier**.</li><li>If you selected **Use external stage**, the **Insert** option is required.</li></ul> |
| Create table if it does not exist <br> (for Insert) | `create_table_if_not_exists` | `true` | Optional. If no existing table is detected, attempt to create one. |
| **Advanced options** | :~~: | :~~: | :~~: |
| Commit interval | `commitInterval` | `600` | Optional. Defines a time interval, in seconds, between commits to the JDBC source. If set to `0`, the batch prediction operation will write the entire job before committing. Default: `600` |
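A sketch of `outputSettings` for Snowflake output through an external stage, with parameter names taken from the tables above; all IDs and stage names are placeholders:

```python
output_settings = {
    "type": "snowflake",
    "dataStoreId": "5e4bc5b35e6e763beb9db14a",   # placeholder Snowflake data source ID
    "credentialId": "5e4bc5555e6e763beb9db147",  # placeholder Snowflake credential ID
    "table": "RESULTS",
    "schema": "PUBLIC",
    "externalStage": "my_s3_stage",              # Snowflake external stage
    "cloudStorageType": "s3",                    # s3 (default), azure, or gcp
    "cloudStorageCredentialId": "6e4bc5541e6e763beb9db15c",  # placeholder storage credential
    "statementType": "insert",                   # required when using an external stage
}
```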
Refer to the [example section](pred-examples#end-to-end-scoring-with-snowflake) for a complete API example.
## Azure Synapse write {: #azure-synapse-write }
To use Synapse for scoring, supply data destination details using the **[Predictions > Job Definitions](batch-pred-jobs#set-up-prediction-destinations)** tab or the [Batch Prediction API](batch-prediction-api/index) (`outputSettings`) as described in the table below.
| UI field | Parameter | Example | Description |
|----------|--------------|---------|---------------|
| Destination type | `type` | `synapse` | Adapter type. |
| **Connection options** | :~~: | :~~: | :~~: |
| + Select connection | `dataStoreId` | `5e4bc5b35e6e763beb9db14a` | ID of Synapse data source. |
| Enter credentials | `credentialId` | `5e4bc5555e6e763beb9db147` | Optional. ID of a stored credential containing username and password for Synapse. |
| Tables | `table` | `RESULTS` | Name of the Synapse table to keep results in. |
| Schemas | `schema` | `dbo` | Optional. Name of the schema containing the table to be scored. |
| **Use external stage options** | :~~: | :~~: | :~~: |
| External data source | `externalDatasource` | `my_data_source` | [Name of the identifier created in Synapse for the external data source](https://docs.microsoft.com/en-us/azure/synapse-analytics/sql/reference-collation-types){ target=_blank }. |
| + Add credentials | `cloudStorageCredentialId` | `6e4bc5541e6e763beb9db15c` | Optional. ID of a stored credential for Azure Blob storage. |
| **Write strategy options (for fallback JDBC connection)** | :~~: | :~~: | :~~: |
| Write strategy | `statementType` | `insert` | If you're using a Synapse external stage the `statementType` is `insert`. However, in the UI you have two configuration options: <ul><li>If you haven't configured an external stage, the connection defaults to **JDBC** and you can select **Insert**, **Update**, or **Insert + Update**. If you select **Update** or **Insert + Update**, you can provide a **Row identifier**.</li><li>If you selected **Use external stage**, the **Insert** option is required.</li></ul> |
| Create table if it does not exist <br> (for Insert or Insert + Update) | `create_table_if_not_exists` | `true` | Optional. If no existing table is detected, attempt to create it before writing data with the strategy defined in the `statementType` parameter. |
| **Advanced options** | :~~: | :~~: | :~~: |
| Commit interval | `commitInterval` | `600` | Optional. Defines a time interval, in seconds, between commits to the JDBC source. If set to `0`, the batch prediction operation will write the entire job before committing. Default: `600` |
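The Synapse `outputSettings` follow the same pattern; a sketch with placeholder IDs and names, using an external data source for staging:

```python
output_settings = {
    "type": "synapse",
    "dataStoreId": "5e4bc5b35e6e763beb9db14a",   # placeholder Synapse data source ID
    "credentialId": "5e4bc5555e6e763beb9db147",  # placeholder Synapse credential ID
    "table": "RESULTS",
    "schema": "dbo",
    "externalDatasource": "my_data_source",      # identifier created in Synapse
    "cloudStorageCredentialId": "6e4bc5541e6e763beb9db15c",  # Azure Blob credential
    "statementType": "insert",                   # required when using an external stage
}
```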
Refer to the [example section](pred-examples#end-to-end-scoring-with-synapse) for a complete API example.
!!! note
    Azure Synapse supports fewer collations than the default Microsoft SQL Server. For more information, reference the [Azure Synapse documentation](https://docs.microsoft.com/en-us/azure/synapse-analytics/sql/reference-collation-types){ target=_blank }.
## Tableau write {: #tableau-write }
To use Tableau for scoring, supply data destination details using the **[Predictions > Job Definitions](batch-pred-jobs#set-up-prediction-destinations)** tab or the [Batch Prediction API](batch-prediction-api/index) (`outputSettings`) as described in the table below.
| UI field | Parameter | Example | Description |
|----------|--------------|---------|---------------|
| Destination type | `type` | `tableau` | Use Tableau for output. |
| Tableau URL | `URL` | `https://xxxx.online.tableau.com` | The URL to your online Tableau server. |
| Site Name | `siteName` | `datarobottrial` | Your Tableau site name. |
| + Add credentials | `credentialId` | `5e4bc5555e6e763beb488dba` | Use the specified credential to access the Tableau URL. In the UI, enable the **+ Add credentials** field by selecting **This connection requires credentials**. Refer to [storing credentials securely](batch-prediction-api/index#store-credentials-securely). |
| + Select Tableau project and data source | `dataSourceId`| `0e470cc1-8178-4e8d-b159-6ae1db202394` | The ID of your Tableau data source. In the UI, select **Create a new data source** and add a new **Data source name**. Alternatively, select **Use existing data source**. |
| Output options | `overwrite` | `true` | Specify `true` to overwrite the dataset or `false` to append to the dataset. In the UI, select **Create a new data source** and select **Overwrite** or **Append**. |
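A sketch of `outputSettings` for Tableau, mirroring the table above (all values are placeholders; the lowercase `url` key is an assumption, following the other adapters' parameter naming):

```python
output_settings = {
    "type": "tableau",
    "url": "https://xxxx.online.tableau.com",    # your online Tableau server
    "siteName": "datarobottrial",
    "credentialId": "5e4bc5555e6e763beb488dba",  # placeholder Tableau credential
    "dataSourceId": "0e470cc1-8178-4e8d-b159-6ae1db202394",
    "overwrite": True,                           # False appends instead
}
```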
---
title: Batch Prediction API
description: The Batch Prediction API provides flexible options for scoring large datasets using the prediction servers you have already deployed.
---
# Batch Prediction API {: #batch-prediction-api }
The Batch Prediction API provides flexible options for intake and output when scoring large datasets using the prediction servers you have already deployed. The API is exposed through the DataRobot Public API and can be consumed using any REST-enabled client or the [DataRobot Python Public API bindings](https://datarobot-public-api-client.readthedocs-hosted.com/page/){ target=_blank }.
For more information about Batch Prediction REST API routes, view the [DataRobot REST API reference documentation](public-api/predictions).
<!--private start-->
You can also access the legacy [REST API documentation](https://app.datarobot.com/apidocs/index.html){ target=_blank }.
<!--private end-->
The main features of the API are:
* Flexible options for intake and output:
* Stream local files and start scoring while still uploading—while simultaneously downloading the results.
* Score large data sets from and to S3.
* Read datasets from the [**AI Catalog**](catalog).
* Connect to [external data sources](data-conn#add-data-sources) using JDBC with bidirectional streaming of scoring data and results.
* Mix intake and output options, for example scoring from a local file to an S3 target.
* Protection against prediction server overload with a concurrency control level option.
* Inclusion of Prediction Explanations (with an option to add thresholds).
* Support for passthrough columns to correlate scored data with source data.
* Addition of prediction warnings in the output.
* The ability to make predictions with [files greater than 1GB via the API](large-preds-api).
For more information about making batch prediction settings for time series, [reference the time series documentation](batch-pred-ts).
## Limits {: #limits }
| Item | AI Platform (SaaS) | Self-managed AI Platform (VPC or on-prem) |
|-----------------------------|-----------------|-----------------|
| Job runtime limit | 4 hours* | Unlimited |
| Local file intake size | Unlimited | Unlimited |
| Local file write size | Unlimited | Unlimited |
| S3 intake size | Unlimited | Unlimited |
| S3 write size | 100GB | 100GB (configurable) |
| Azure intake size | 4.75TB | 4.75TB |
| Azure write size | 195GB | 195GB |
| GCP intake size | 5TB | 5TB |
| GCP write size | 5TB | 5TB |
| JDBC intake size | Unlimited | Unlimited |
| JDBC output size | Unlimited | Unlimited |
| Concurrent jobs | 1 per prediction instance | 1 per installation |
| Stored data retention time <br><br> For local file adapters | 48 hours | 48 hours (configurable) |
*Feature Discovery projects have a job runtime limit of 6 hours.
## Concurrent jobs {: #concurrent-jobs }
To ensure that the prediction server does not get overloaded, DataRobot will only run one job per prediction instance.
Further jobs are queued and started as soon as previous jobs complete.
## Data pipeline {: #data-pipeline }
A Batch Prediction job is a data pipeline consisting of:
> **Data Intake > Concurrent Scoring > Data Output**
On creation, the job's `intakeSettings` and `outputSettings` define the data intake and data output part of the pipeline.
You can configure any combination of intake and output options.
For both, the defaults are local file intake and output, meaning you will have to issue a separate `PUT` request with the data to score and subsequently download the scored data.
### Data sources supported for batch predictions {: #data-sources-supported-for-batch-predictions }
<!--- When bumping versions, also update `datarobot_docs/en/data/connect-data/data-sources/index.md` --->
The following table shows the data source support for batch predictions.
| Name | Driver version | Intake support | Output support | DataRobot version validated |
|-----------------------|----------------|----------------|----------------|-----------------------------|
| AWS Athena 2.0 | 2.0.35 | yes | no | 7.3 |
| Exasol | 7.0.14 | yes | yes | 8.0 |
| Google BigQuery | 1.2.4 | yes | yes | 7.3 |
| InterSystems | 3.2.0 | yes | no | 7.3 |
| kdb+ | - | yes | yes | 7.3 |
| Microsoft SQL Server | 12.2.0 | yes | yes | 6.0 |
| MySQL | 8.0.32 | yes | yes | 6.0 |
| Oracle | 11.2.0 | yes | yes | 7.3 |
| PostgreSQL | 42.5.1 | yes | yes | 6.0 |
| Presto* | 0.216 | yes | yes | 8.0 |
| Redshift | 2.1.0.14 | yes | yes | 6.0 |
| SAP HANA | 2.15.10 | yes | no | 7.3 |
| Snowflake | 3.13.29 | yes | yes | 6.2 |
| Synapse | 8.4.1 | yes | yes | 7.3 |
| Teradata** | 17.10.00.23 | yes | yes | 7.3 |
| TreasureData | 0.5.10 | yes | no | 7.3 |
*Presto requires the use of `auto commit: true` for many of the underlying connectors which can delay writes.
**For output to Teradata, DataRobot only supports ANSI mode.
For further information, see:
* Supported [intake options](intake-options)
* Supported [output options](output-options)
* [Output format](output-format) schema
## Concurrent scoring {: #concurrent-scoring }
When scoring, the data you supply is split into chunks and scored concurrently on the prediction instance specified by the deployment.
To control the level of concurrency, modify the `numConcurrent` parameter at job creation.
## Job states {: #job-states }
When working with batch predictions, each prediction job can be in one of four states:
* `INITIALIZING`: The job has been successfully created and is either:
* Waiting for CSV data to be pushed (if local file intake).
* Waiting for a processing slot on the prediction server.
* `RUNNING`: Scoring the dataset on prediction servers has started.
* `ABORTED`: The job was aborted because either:
* It had an invalid configuration.
* DataRobot encountered 20% or 100MB of invalid scoring data that resulted in a prediction error.
* `COMPLETED`: The dataset has been scored and:
* You can now download the scored data (if local file output).
* Otherwise the data has been written to the destination.
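The four states above can be summarized in a small helper; this is a sketch that assumes only the job data shape documented earlier (`status`, `statusDetails`, `scoredRows`):

```python
TERMINAL_STATES = {"ABORTED", "COMPLETED"}

def describe_job(job_data):
    """Map a batch prediction job's status to a short summary string."""
    status = job_data["status"]
    if status == "INITIALIZING":
        return "waiting for data or a processing slot"
    if status == "RUNNING":
        return "scoring in progress"
    if status == "ABORTED":
        return "aborted: " + job_data.get("statusDetails", "unknown reason")
    if status == "COMPLETED":
        return "completed: %d rows scored" % job_data.get("scoredRows", 0)
    raise ValueError("unexpected status: %s" % status)
```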
## Store credentials securely {: #store-credentials-securely }
Some sources or targets for scoring may require DataRobot to authenticate on your behalf (for example, if your database requires that you pass a username and password for login). To ensure proper storage of these credentials, you must have [data credentials](stored-creds) enabled.
DataRobot uses the following credential types and properties:
| Adapter | Credential Type | Property |
|----------------------------|-----------------|----------|
| S3 intake / output | s3 |awsAccessKeyId <br> awsSecretAccessKey <br> awsSessionToken (optional)|
| JDBC intake / output | basic | username <br> password |
To use a stored credential, you must pass the associated `credentialId` in either `intakeSettings` or `outputSettings` as described below for each of the adapters.
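The two credential shapes from the table above can be sketched as payloads (all values are placeholders; the exact creation route, whether the credentials UI or API, is not shown here):

```python
# For S3 intake/output: an access key pair, with an optional session token.
s3_credential = {
    "credentialType": "s3",
    "awsAccessKeyId": "AKIA_PLACEHOLDER",
    "awsSecretAccessKey": "SECRET_PLACEHOLDER",
    # "awsSessionToken": "TOKEN_PLACEHOLDER",  # optional
}

# For JDBC intake/output: a basic username/password pair.
basic_credential = {
    "credentialType": "basic",
    "username": "scoring_user",
    "password": "PASSWORD_PLACEHOLDER",
}
# The resulting credentialId is then passed in intakeSettings or outputSettings.
```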
## CSV format {: #csv-format }
For any intake or output options that deal with reading or writing CSV files, you can use a custom format by specifying the following in `csvSettings`:
| Parameter | Example | Description |
|-------------|---------|-------|
| `delimiter` | `,` | Optional. The delimiter character to use. Default: `,` (comma). To specify TAB as a delimiter, use the string `tab`. |
| `quotechar` | `"` | Optional. The character to use for quoting fields containing the delimiter. Default: `"`. |
| `encoding` | `utf-8` | Optional. Encoding for the CSV file. For example (but not limited to): `shift_jis`, `latin_1` or `mskanji`. Default: `utf-8`. <br><br> Any [Python supported encoding](https://docs.python.org/3/library/codecs.html){ target=_blank } can be used. |
The same format will be used for both intake and output. See a [complete example](pred-examples#end-to-end-scoring-of-csv-files-from-local-files).
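A sketch of a job payload with custom `csvSettings`, here for tab-delimited Shift JIS data (the deployment ID is a placeholder):

```python
job = {
    "deploymentId": "5dc1a6a9865d6c004dd881ef",  # placeholder deployment ID
    "intakeSettings": {"type": "localFile"},
    "outputSettings": {"type": "localFile"},
    "csvSettings": {
        "delimiter": "tab",      # the literal string "tab" selects TAB as delimiter
        "quotechar": '"',
        "encoding": "shift_jis",
    },
}
```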
## Model monitoring {: #model-monitoring }
The Batch Prediction API integrates well with DataRobot's model monitoring capabilities:
* If you have enabled data drift tracking for your deployment, any predictions run through the Batch Prediction API will be tracked.
* If you have enabled target drift tracking for your deployment, the output will contain the desired [association ID](output-format#association-id) to be used for reporting actuals.
Should you need to run a non-production dataset against your deployment, you can turn off drift tracking for a single job by providing the following parameter:
| Parameter | Example | Description |
|---------------------|---------|--------|
|`skipDriftTracking`|`true`| Optional. Skip data and target drift tracking for this job. Default: `false`. |
## Override the default prediction instance {: #override-the-default-prediction-instance }
Under normal circumstances, the prediction server used for scoring is the default prediction server that your model was deployed to. However, if you have access to multiple prediction servers, you can override the default behavior by using the following properties in the `predictionInstance` option:
| Parameter | Example| Description |
|----------------|----------------------------------------|---------|
| `hostName` | `192.0.2.4` | Sets the hostname to use instead of the default hostname from the prediction server the model was deployed to. |
| `sslEnabled` | `false` | Optional. Use SSL (HTTPS) to access the prediction server. Default: `true`. |
| `apiKey` | `NWU...IBn2w` | Optional. Use an API key different from the job creator's key to authenticate against the new prediction server. |
| `datarobotKey` | `154a8abb-cbde-4e73-ab3b-a46c389c337b` | Optional. If running in a managed AI Platform environment, specify the per-organization DataRobot key for the prediction server. <br><br> Find the key on the [Deployments > Predictions > Prediction API](code-py) tab or by contacting your DataRobot representative. |
Here's a complete example:
```python
job_details = {
'deploymentId': deployment_id,
'intakeSettings': {'type': 'localFile'},
'outputSettings': {'type': 'localFile'},
'predictionInstance': {
'hostName': '192.0.2.4',
'sslEnabled': False,
'apiKey': 'NWUQ9w21UhGgerBtOC4ahN0aqjbjZ0NMhL1e5cSt4ZHIBn2w',
'datarobotKey': '154a8abb-cbde-4e73-ab3b-a46c389c337b',
},
}
```
## Consistent scoring with updated model {: #consistent-scoring-with-updated-model }
If you deploy a new model after a job has been queued, DataRobot will still use the model that was deployed at the time of job creation for the entire job. Every row will be scored with the same model.
## Template variables {: #template-variables }
Sometimes it can be useful to specify dynamic parameters in your batch jobs, such as in [Job Definitions](job-definitions). You can use [jinja's variable syntax](https://jinja.palletsprojects.com/en/3.0.x/templates/#variables){ target=_blank } (double curly braces) to print the value of the following parameters:
| Variable | Description |
|----------------|------------------------------------------------------|
| `current_run_time` | `datetime` object for current UTC time (`datetime.utcnow()`) |
| `current_run_timestamp` | Milliseconds from Unix epoch (integer) |
| `last_scheduled_run_time` | `datetime` object for the start of last job instantiated from the same job definition |
| `next_scheduled_run_time` | `datetime` object for the next scheduled start of job from the same job definition |
| `last_completed_run_time` | `datetime` object for when the previously scheduled job finished scoring |
The above variables can be used in the following fields:
| Field | Condition |
|----------------|------------------------------------------------------|
| `intake_settings.query` | For JDBC, Synapse, and Snowflake adapters |
| `output_settings.table` | For JDBC, Synapse, Snowflake, and BigQuery adapters, when the statement type is `create_table` or when `create_table_if_not_exists` is set to true |
| `output_settings.url` | For S3, GCP, and Azure adapters |
For example, specify a templated URL as: `gs://bucket/output-{{ current_run_timestamp }}.csv`.
!!! note
To ensure that most databases understand the replacements mentioned above, DataRobot strips microseconds off the ISO-8601 format timestamps.
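A sketch of a templated output URL as it would appear in a job definition; DataRobot renders the double-curly-brace variables server-side, so the local substitution below only illustrates the result and is not part of the API:

```python
output_settings = {
    "type": "gcp",
    "url": "gs://bucket/output-{{ current_run_timestamp }}.csv",
}

# Illustrative substitution only; DataRobot performs the real rendering.
rendered = output_settings["url"].replace(
    "{{ current_run_timestamp }}", "1573150294533"  # milliseconds since the Unix epoch
)
```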
## API Reference {: #api-reference }
### The Public API {: #the-public-api }
The Batch Prediction API is part of the DataRobot Public API which you can access in DataRobot by clicking the question mark on the upper right, and selecting **API Documentation**.
### The Python API Client {: #the-python-api-client }
You can use the [Python Public API Client](https://datarobot-public-api-client.readthedocs-hosted.com/){ target=_blank } to interface with the Batch Prediction API.
---
title: Prediction intake options
description: For batch prediction data intake, you can use local file streaming, S3 scoring, Azure Blob Storage scoring, Google Cloud Storage scoring, HTTP scoring, an AI Catalog dataset scoring, JDBC scoring, Snowflake scoring, Synapse scoring, or BigQuery scoring.
---
# Prediction intake options {: #prediction-intake-options }
You can configure a prediction source using the **[Predictions > Job Definitions](batch-pred-jobs#set-up-prediction-sources)** tab or the [Batch Prediction API](batch-prediction-api/index). This topic describes both the UI and API intake options.
!!! note
For a complete list of supported intake options, see the [data sources supported for batch predictions](batch-prediction-api/index#data-sources-supported-for-batch-predictions).
| Intake option | Description |
|--------------------------------------------------------------|-------------|
|[Local file streaming](#local-file-streaming) | Stream input data through a URL endpoint for immediate processing when the job moves to a running state. |
|[AI Catalog dataset scoring](#ai-catalog-dataset-scoring) | Read input data from a dataset snapshot in the DataRobot [AI Catalog](glossary/index#ai-catalog). |
|[HTTP scoring](#http-scoring) | Stream input data from an absolute URL for scoring. This option can read data from pre-signed URLs for Amazon S3, Azure, and Google Cloud Platform. |
| **Cloud storage intake** | :~~: |
|[Amazon S3 scoring](#amazon-s3-scoring) | Read input data from public or private S3 buckets with DataRobot credentials consisting of an access key (ID and key) and a session token (optional). This is the preferred intake option for larger files. |
|[Azure Blob Storage scoring](#azure-blob-storage-scoring) | Read input data from Azure Blob Storage with DataRobot credentials consisting of an Azure Connection String. |
|[Google Cloud Storage scoring](#google-cloud-storage-scoring) | Read input data from Google Cloud Storage with DataRobot credentials consisting of a JSON-formatted account key. |
| **Database intake** | :~~: |
|[JDBC scoring](#jdbc-scoring) | Read prediction data from a JDBC-compatible database with data source details supplied through a job definition or the Batch Prediction API. |
| **Data warehouse intake** | :~~: |
|[BigQuery scoring](#bigquery-scoring) | Score data using BigQuery with data source details supplied through a job definition or the Batch Prediction API. |
|[Snowflake scoring](#snowflake-scoring) | Score data using Snowflake with data source details supplied through a job definition or the Batch Prediction API. |
|[Synapse scoring](#synapse-scoring) | Score data using Synapse with data source details supplied through a job definition or the Batch Prediction API. |
If you are using a custom [CSV format](batch-prediction-api/index#csv-format), any intake option dealing with CSV will adhere to that format.
## Local file streaming {: #local-file-streaming }
Local file intake does not have any special options. This intake option requires you to upload the job's scoring data using a `PUT` request to the URL specified in the `csvUpload` link in the job data. This starts the job (or queues it for processing if the prediction instance is already occupied).
If there is no other queued job for the selected prediction instance, scoring will start while you are still uploading.
Refer to [this sample use case](pred-examples#end-to-end-scoring-of-csv-files-from-local-files).
!!! note
If you forget to send scoring data, the job remains in the INITIALIZING state.
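As a sketch, the upload step can be wrapped in a small helper. The `job_data` layout (`links["csvUpload"]`) and the injected `put` callable (e.g., `requests.put`, with your authorization headers added) are assumptions here, not the client's actual API; inspect your job's response data for the exact link location.

```python
# Minimal sketch of streaming a local CSV to a Batch Prediction job.
# Assumption: job_data carries the upload link under links["csvUpload"];
# `put` is any HTTP PUT callable, such as requests.put.

def upload_scoring_data(job_data, path, put):
    """PUT the scoring file to the job's csvUpload URL.

    This starts the job, or queues it if the prediction
    instance is already occupied.
    """
    upload_url = job_data["links"]["csvUpload"]
    with open(path, "rb") as f:
        return put(upload_url, data=f, headers={"Content-Type": "text/csv"})
```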
## AI Catalog dataset scoring {: #ai-catalog-dataset-scoring }
To read input data from an [**AI Catalog**](catalog) dataset, the following options are available:
| UI field | Parameter | Example | Description |
|----------|--------------|----------------|----------|
| Source type | `type` | `dataset` | In the UI, select **AI Catalog**. |
| + Select source from AI Catalog | `datasetId` | `5e4bc5b35e6e763beb9db14a` | The **AI Catalog** dataset ID.<br><br>In the UI, sort by Creation date, Name, or Description. You can filter to select from datasets that are snapshots, not snapshots, or all. Select the dataset, then click **Use the dataset**. |
| + Select version | `datasetVersionId` | `5e4bc5555e6e763beb488dba` | The **AI Catalog** dataset version ID (optional).<br><br>In the UI, enable the **+ Select version** field by selecting the **Use specific version** check box. Search for and select the version. If `datasetVersionId` is not specified, it defaults to the latest version for the specified dataset. |
!!! note
For the specified **AI Catalog** dataset, the version to be scored must have been successfully ingested, and it must be a snapshot.
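As a sketch, an `intakeSettings` payload using the fields above might be assembled like this (the helper name is hypothetical; the field casing follows the API parameters in the table):

```python
def dataset_intake_settings(dataset_id, dataset_version_id=None):
    """Build intakeSettings for AI Catalog dataset intake (hypothetical helper).

    If no version ID is given, the latest version of the
    dataset is scored, per the table above.
    """
    settings = {"type": "dataset", "datasetId": dataset_id}
    if dataset_version_id is not None:
        settings["datasetVersionId"] = dataset_version_id
    return settings
```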
## Amazon S3 scoring {: #amazon-s3-scoring }
For larger files, S3 is the preferred method for intake. DataRobot can ingest files from both public and private buckets. To score from Amazon S3, you must set up a credential with DataRobot consisting of an access key (ID and key) and, optionally, a session token.
| UI field | Parameter | Example | Description |
|----------|--------------|----------------|----------|
| Source type | `type` | `s3` | DataRobot recommends S3 for intake. |
| URL | `url` | `s3://bucket-name/datasets/scoring.csv` | An absolute URL for the file to be scored.|
| Format | `format` | `csv` | Optional. CSV (default) or Parquet. |
| + Add credentials | `credentialId` | `5e4bc5555e6e763beb488dba` | In the UI, enable the **+ Add credentials** field by selecting **This URL requires credentials**. Required if explicit access credentials for this URL are required. Refer to [storing credentials securely](batch-prediction-api/index#store-credentials-securely). |
AWS credentials are encrypted and only decrypted when used to set up the client for communication with AWS during scoring.
!!! note
If running a Private AI Cloud within AWS, it is possible to provide implicit credentials for your application instances using an IAM Instance Profile to access your S3 buckets without supplying explicit credentials in the job data. For more information, see the [AWS documentation](https://docs.aws.amazon.com/codedeploy/latest/userguide/getting-started-create-iam-instance-profile.html){ target=_blank }.
## Azure Blob Storage scoring {: #azure-blob-storage-scoring }
Another scoring option for large files is Azure. To score from Azure Blob Storage, you must configure credentials with DataRobot using an Azure Connection String.
| UI field | Parameter | Example | Description |
|----------|--------------|----------------|----------|
| Source type | `type` | `azure` | Use Azure Blob Storage for intake. |
| URL | `url` | `https://myaccount.blob.core.windows.net/datasets/scoring.csv` | An absolute URL for the file to be scored.|
| Format | `format` | `csv` | Optional. CSV (default) or Parquet. |
| + Add credentials | `credentialId` | `5e4bc5555e6e763beb488dba` | In the UI, enable the **+ Add credentials** field by selecting **This URL requires credentials**. Required if explicit access credentials for this URL are required; otherwise optional. Refer to [storing credentials securely](batch-prediction-api/index#store-credentials-securely). |
Azure credentials are encrypted and are only decrypted when used to set up the client for communication with Azure during scoring.
## Google Cloud Storage scoring {: #google-cloud-storage-scoring }
DataRobot also supports the Google Cloud Storage adapter. To score from Google Cloud Storage, you must set up a credential with DataRobot consisting of a JSON-formatted account key.
| UI field | Parameter | Example | Description |
|----------|--------------|----------------|----------|
| Source type | `type` | `gcp` | Use Google Cloud Storage for intake. |
| URL | `url` | `gcs://bucket-name/datasets/scoring.csv` | An absolute URL for the file to be scored.|
| Format | `format` | `csv` | Optional. CSV (default) or Parquet. |
| + Add credentials | `credentialId` | `5e4bc5555e6e763beb488dba` | In the UI, enable the **+ Add credentials** field by selecting **This URL requires credentials**. Required if explicit access credentials for this URL are required, otherwise optional. Refer to [storing credentials securely](batch-prediction-api/index#store-credentials-securely). |
GCP credentials are encrypted and are only decrypted when used to set up the client for communication with GCP during scoring.
## HTTP scoring {: #http-scoring }
In addition to the cloud storage adapters, you can also point batch predictions to a regular URL so DataRobot can stream the data for scoring:
| Parameter | Example | Description |
|--------------|---------|---------------|
| `type` | `http` | Use HTTP for intake. |
| `url` | `https://example.com/datasets/scoring.csv` | An absolute URL for the file to be scored.|
The URL can optionally contain a username and password, such as `https://username:password@example.com/datasets/scoring.csv`.
The `http` adapter can be used for ingesting data from pre-signed URLs from either [S3](https://docs.aws.amazon.com/AmazonS3/latest/userguide/ShareObjectPreSignedURL.html){ target=_blank }, [Azure](https://docs.microsoft.com/en-us/azure/storage/common/storage-sas-overview){ target=_blank }, or [GCP](https://cloud.google.com/storage/docs/access-control/signed-urls){ target=_blank }.
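A minimal sketch of building an `http` intake payload with optional embedded credentials, as described above (the helper name is hypothetical; only Python's standard library is used):

```python
from urllib.parse import quote, urlsplit, urlunsplit

def http_intake_settings(url, username=None, password=None):
    """Build intakeSettings for HTTP intake (hypothetical helper).

    Optionally embeds URL-encoded credentials into the URL,
    producing https://username:password@host/path.
    """
    if username is not None:
        scheme, netloc, path, query, fragment = urlsplit(url)
        netloc = f"{quote(username, safe='')}:{quote(password or '', safe='')}@{netloc}"
        url = urlunsplit((scheme, netloc, path, query, fragment))
    return {"type": "http", "url": url}
```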
## JDBC scoring {: #jdbc-scoring }
DataRobot supports reading from any JDBC-compatible database for Batch Predictions.
To use JDBC with the Batch Prediction API, specify `jdbc` as the intake type. Since no file is needed for a `PUT` request, scoring will start immediately, transitioning the job to RUNNING if preliminary validation succeeds. To support this, the Batch Prediction API integrates with [external data sources](data-conn#add-data-sources) using credentials securely stored in [data credentials](stored-creds).
Supply data source details using the **[Predictions > Job Definitions](batch-pred-jobs#set-up-prediction-sources)** tab or the [Batch Prediction API](batch-prediction-api/index) (`intakeSettings`) as described in the table below.
| UI field | Parameter | Example | Description |
|----------|-------------|---------|---------------|
| Source type | `type` | `jdbc` | Use a JDBC data store as intake. |
| + Select connection | `dataStoreId` | `5e4bc5b35e6e763beb9db14a` | The ID of an external data source. In the UI, select a data connection or click [add a new data connection](data-conn#add-data-sources). Complete account and authorization fields. |
| Enter credentials | `credentialId` | `5e4bc5555e6e763beb9db147` | The ID of a stored credential containing username and password. Refer to [storing credentials securely](batch-prediction-api/index#store-credentials-securely). |
| *Deprecated option* | `fetchSize` (deprecated) | `1000` | Deprecated: `fetchSize` is now inferred dynamically for optimal throughput and no longer needs to be set. Previously, this optional setting controlled the number of rows read at a time (range [1, 100000]; default 1000) to balance throughput and memory usage. |
| Tables | `table` | `scoring_data` | Optional. The name of the database table containing data to be scored. |
| Schemas | `schema` | `public` | Optional. The name of the schema containing the table to be scored. |
| SQL query | `query` | `SELECT feature1, feature2, feature3 AS readmitted FROM diabetes` | Optional. A custom query to run against the database. |
!!! note
You must specify either `table` and `schema` or `query`.
Refer to the [example section](batch-prediction-api/pred-examples#end-to-end-scoring-from-a-jdbc-postgresql-database) for a complete API example.
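As a sketch, the either/or rule from the note above can be enforced before submitting a job (the helper name is hypothetical; the field casing follows the API parameters in the table):

```python
def jdbc_intake_settings(data_store_id, credential_id,
                         table=None, schema=None, query=None):
    """Build JDBC intakeSettings (hypothetical helper).

    Enforces the rule above: specify either table and schema,
    or query -- not both, and not neither.
    """
    if query is not None and (table is not None or schema is not None):
        raise ValueError("Specify either table and schema, or query -- not both.")
    if query is None and (table is None or schema is None):
        raise ValueError("Specify both table and schema, or a query.")
    settings = {
        "type": "jdbc",
        "dataStoreId": data_store_id,
        "credentialId": credential_id,
    }
    if query is not None:
        settings["query"] = query
    else:
        settings["table"] = table
        settings["schema"] = schema
    return settings
```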
### Source IP addresses for whitelisting {: #source-ip-addresses-for-whitelisting }
Any connection initiated from DataRobot originates from one of the following IP addresses:
{% include 'includes/whitelist-ip.md' %}
## BigQuery scoring {: #bigquery-scoring }
To use BigQuery for scoring, supply data source details using the **[Predictions > Job Definitions](batch-pred-jobs#set-up-prediction-sources)** tab or the [Batch Prediction API](batch-prediction-api/index) (`intakeSettings`) as described in the table below.
| UI field | Parameter | Example | Description |
|----------|--------------|----------------|----------|
| Source type | `type` | `bigquery` | Use the BigQuery API to unload data to Google Cloud Storage and use it as intake. |
| Dataset | `dataset` | `my_dataset` | The BigQuery dataset to use. |
| Table | `table` | `my_table` | The BigQuery table or view from the dataset used as intake. |
| Bucket | `bucket` | `my-bucket-in-gcs` | Bucket where data should be exported. |
| + Add credentials | `credentialId` | `5e4bc5555e6e763beb488dba` | Required if explicit access credentials for this bucket are required (otherwise optional).<br><br>In the UI, enable the **+ Add credentials** field by selecting **This connection requires credentials**. Refer to [storing credentials securely](batch-prediction-api/index#store-credentials-securely). |
Refer to the [example section](pred-examples#end-to-end-scoring-with-bigquery) for a complete API example.
## Snowflake scoring {: #snowflake-scoring }
Using JDBC to transfer data can be costly in terms of IOPS (input/output operations per second) and expense for data warehouses. This adapter reduces the load on database engines during prediction scoring by using cloud storage and bulk insert to create a hybrid JDBC-cloud storage solution.
Supply data source details using the **[Predictions > Job Definitions](batch-pred-jobs#set-up-prediction-sources)** tab or the [Batch Prediction API](batch-prediction-api/index) (`intakeSettings`) as described in the table below.
| UI field | Parameter | Example | Description |
|-----------|-------------|---------|---------------|
| Source type | `type` | `snowflake` | Adapter type. |
| + Select connection | `dataStoreId` | `5e4bc5b35e6e763beb9db14a` | ID of Snowflake data source. In the UI, select a Snowflake data connection or click [add a new data connection](data-conn#add-data-sources). Complete account and authorization fields. |
| Enter credentials | `credentialId` | `5e4bc5555e6e763beb9db147` | ID of a stored credential containing username and password for Snowflake. |
| Tables | `table` | `SCORING_DATA` | Optional. Name of the Snowflake table containing data to be scored. |
| Schemas | `schema` | `PUBLIC` | Optional. Name of the schema containing the table to be scored. |
| SQL query | `query` | `SELECT feature1, feature2, feature3 FROM diabetes` | Optional. Custom query to run against the database. |
| Cloud storage type | `cloudStorageType` | `s3` | Type of cloud storage backend used in the Snowflake external stage. Can be one of three cloud storage providers: `s3`/`azure`/`gcp`. Default is `s3`. |
| External stage | `externalStage` | `my_s3_stage` | [Snowflake external stage](https://docs.snowflake.com/en/sql-reference/sql/create-stage.html){ target=_blank }. In the UI, toggle on **Use external stage** to enable the **External stage** field. |
| + Add credentials | `cloudStorageCredentialId` | `6e4bc5541e6e763beb9db15c` | ID of stored credentials for a storage backend (S3/Azure/GCS) used in Snowflake stage. In the UI, enable the **+ Add credentials** field by selecting **This URL requires credentials**. |
Refer to the [example section](pred-examples#end-to-end-scoring-with-snowflake) for a complete API example.
## Synapse scoring {: #synapse-scoring }
To use Synapse for scoring, supply data source details using the **[Predictions > Job Definitions](batch-pred-jobs#set-up-prediction-sources)** tab or the [Batch Prediction API](batch-prediction-api/index) (`intakeSettings`) as described in the table below.
| UI field | Parameter | Example | Description |
|----------|--------------|----------------|----------|
| Source type | `type` | `synapse` | Adapter type. |
| + Select connection | `dataStoreId` | `5e4bc5b35e6e763beb9db14a` | ID of Synapse data source. In the UI, select a Synapse data connection or click [add a new data connection](data-conn#add-data-sources). Complete account and authorization fields.|
| External data source | `externalDatasource` | `my_data_source` | [Name of the Synapse external data source](https://docs.microsoft.com/en-us/azure/synapse-analytics/sql/reference-collation-types){ target=_blank }. |
| Tables | `table` | `SCORING_DATA` | Optional. Name of the Synapse table containing data to be scored. |
| Schemas | `schema` | `dbo` | Optional. Name of the schema containing the table to be scored. |
| SQL query | `query` | `SELECT feature1, feature2, feature3 FROM diabetes` | Optional. Custom query to run against the database.|
| Enter credentials | `credentialId` | `5e4bc5555e6e763beb9db147` | ID of a stored credential containing username and password for Synapse. Credentials are required if explicit access credentials for this URL are required, otherwise optional. Refer to [storing credentials securely](batch-prediction-api/index#store-credentials-securely). |
| + Add credentials | `cloudStorageCredentialId` | `6e4bc5541e6e763beb9db15c` | ID of a stored credential for Azure Blob storage. In the UI, enable the **+ Add credentials** field by selecting **This external data source requires credentials**. |
Refer to the [example section](pred-examples#end-to-end-scoring-with-synapse) for a complete API example.
!!! note
Synapse supports fewer collations than the default Microsoft SQL Server. For more information, reference the [Synapse documentation](https://docs.microsoft.com/en-us/azure/synapse-analytics/sql/reference-collation-types){ target=_blank }.
---
title: Batch prediction use cases
description: Examine several end-to-end examples of scoring with API code for both CSV files and external services.
---
# Batch prediction use cases {: #batch-prediction-use-cases }
The following provides several end-to-end examples of scoring with API code for both CSV files and external services.
* [End-to-end scoring of CSV files from local files](#end-to-end-scoring-of-csv-files-from-local-files)
* [End-to-end scoring of CSV files on S3](#end-to-end-scoring-of-csv-files-on-s3)
* [AI Catalog-to-CSV file scoring](#ai-catalog-to-csv-file-scoring)
* [End-to-end scoring from a JDBC PostgreSQL database](#end-to-end-scoring-from-a-jdbc-postgresql-database)
* [End-to-end scoring with Snowflake](#end-to-end-scoring-with-snowflake)
* [End-to-end scoring with Synapse](#end-to-end-scoring-with-synapse)
* [End-to-end scoring with BigQuery](#end-to-end-scoring-with-bigquery)
!!! note
These use cases require the <a target="_blank" href="https://datarobot-public-api-client.readthedocs-hosted.com/">DataRobot</a> API client to be installed.
## End-to-end scoring of CSV files from local files {: #end-to-end-scoring-of-csv-files-from-local-files }
The following example scores a local CSV file, waits for processing to start, and then initializes the download.
```python
import datarobot as dr
dr.Client(
endpoint="https://app.datarobot.com/api/v2",
token="...",
)
deployment_id = "..."
input_file = "to_predict.csv"
output_file = "predicted.csv"
job = dr.BatchPredictionJob.score_to_file(
deployment_id,
input_file,
output_file,
passthrough_columns_set="all"
)
print("started scoring...", job)
job.wait_for_completion()
```
### Prediction Explanations {: #prediction-explanations }
You can include Prediction Explanations by adding the desired [Prediction Explanation parameters](output-format#prediction-explanations) to the job configuration:
```python
job = dr.BatchPredictionJob.score_to_file(
deployment_id,
input_file,
output_file,
max_explanations=10,
threshold_high=0.5,
threshold_low=0.15,
)
```
### Custom CSV format {: #custom-csv-format }
If your CSV file does not match the default CSV format, you can modify the expected CSV format by setting `csvSettings`:
```python
job = dr.BatchPredictionJob.score_to_file(
deployment_id,
input_file,
output_file,
csv_settings={
'delimiter': ';',
'quotechar': '\'',
'encoding': 'ms_kanji',
},
)
```
## End-to-end scoring of CSV files on S3 {: #end-to-end-scoring-of-csv-files-on-s3 }
```python
import datarobot as dr
dr.Client(
endpoint="https://app.datarobot.com/api/v2",
token="...",
)
deployment_id = "616d01a8ddbd17fc2c75caf4"
credential_id = "..."
s3_csv_input_file = 's3://my-bucket/data/to_predict.csv'
s3_csv_output_file = 's3://my-bucket/data/predicted.csv'
job = dr.BatchPredictionJob.score_s3(
deployment_id,
source_url=s3_csv_input_file,
destination_url=s3_csv_output_file,
credential=credential_id
)
print("started scoring...", job)
job.wait_for_completion()
```
The same functionality is available for `score_azure` and `score_gcp`. You can also specify the `credential` object itself, instead of a credential ID:
```python
credentials = dr.Credential.get(credential_id)
job = dr.BatchPredictionJob.score_s3(
deployment_id,
source_url=s3_csv_input_file,
destination_url=s3_csv_output_file,
credential=credentials,
)
```
### Prediction Explanations {: #prediction-explanations_1 }
You can include Prediction Explanations by adding the desired [Prediction Explanation parameters](./output-format#prediction-explanations) to the job configuration:
```python
job = dr.BatchPredictionJob.score_s3(
deployment_id,
source_url=s3_csv_input_file,
destination_url=s3_csv_output_file,
credential=credential_id,
max_explanations=10,
threshold_high=0.5,
threshold_low=0.15,
)
```
## AI Catalog-to-CSV file scoring {: #ai-catalog-to-csv-file-scoring }
When using the [**AI Catalog**](catalog) for intake, you need the <code>dataset_id</code> of an already created dataset.
```python
import datarobot as dr
dr.Client(
endpoint="https://app.datarobot.com/api/v2",
token="...",
)
deployment_id = "616d01a8ddbd17fc2c75caf4"
credential_id = "..."
dataset_id = "..."
dataset = dr.Dataset.get(dataset_id)
job = dr.BatchPredictionJob.score(
deployment_id,
intake_settings={
'type': 'dataset',
'dataset': dataset,
},
output_settings={
'type': 'localFile',
},
)
job.wait_for_completion()
```
## End-to-end scoring from a JDBC PostgreSQL database {: #end-to-end-scoring-from-a-jdbc-postgresql-database }
The following example reads a scoring dataset from the table `public.scoring_data` and saves the scored data back to `public.scored_data` (assuming that table already exists).
```python
import datarobot as dr
dr.Client(
endpoint="https://app.datarobot.com/api/v2",
token="...",
)
deployment_id = "616d01a8ddbd17fc2c75caf4"
credential_id = "..."
datastore_id = "..."
intake_settings = {
'type': 'jdbc',
'table': 'scoring_data',
'schema': 'public',
'data_store_id': datastore_id,
'credential_id': credential_id,
}
output_settings = {
'type': 'jdbc',
'table': 'scored_data',
'schema': 'public',
'data_store_id': datastore_id,
'credential_id': credential_id,
'statement_type': 'insert'
}
job = dr.BatchPredictionJob.score(
deployment_id,
passthrough_columns_set='all',
intake_settings=intake_settings,
output_settings=output_settings,
)
print("started scoring...", job)
job.wait_for_completion()
```
More details about JDBC scoring can be found [here](intake-options#jdbc-scoring).
## End-to-end scoring with Snowflake {: #end-to-end-scoring-with-snowflake }
The following example reads a scoring dataset from the table `PUBLIC.SCORING_DATA` and saves the scored data back to `PUBLIC.SCORED_DATA` (assuming that table already exists).
```python
import datarobot as dr
dr.Client(
endpoint="https://app.datarobot.com/api/v2",
token="...",
)
deployment_id = "616d01a8ddbd17fc2c75caf4"
credential_id = "..."
cloud_storage_credential_id = "..."
datastore_id = "..."
intake_settings = {
'type': 'snowflake',
'table': 'SCORING_DATA',
'schema': 'PUBLIC',
'external_stage': 'my_s3_stage_in_snowflake',
'data_store_id': datastore_id,
'credential_id': credential_id,
'cloud_storage_type': 's3',
'cloud_storage_credential_id': cloud_storage_credential_id
}
output_settings = {
'type': 'snowflake',
'table': 'SCORED_DATA',
'schema': 'PUBLIC',
'statement_type': 'insert',
'external_stage': 'my_s3_stage_in_snowflake',
'data_store_id': datastore_id,
'credential_id': credential_id,
'cloud_storage_type': 's3',
'cloud_storage_credential_id': cloud_storage_credential_id
}
job = dr.BatchPredictionJob.score(
deployment_id,
passthrough_columns_set='all',
intake_settings=intake_settings,
output_settings=output_settings,
)
print("started scoring...", job)
job.wait_for_completion()
```
More details about Snowflake scoring can be found in [intake](intake-options#snowflake-scoring) and [output](output-options#snowflake-write) documentation.
## End-to-end scoring with Synapse {: #end-to-end-scoring-with-synapse }
The following example reads a scoring dataset from the table `PUBLIC.SCORING_DATA` and saves the scored data back to `PUBLIC.SCORED_DATA` (assuming that table already exists).
```python
import datarobot as dr
dr.Client(
endpoint="https://app.datarobot.com/api/v2",
token="...",
)
deployment_id = "616d01a8ddbd17fc2c75caf4"
credential_id = "..."
cloud_storage_credential_id = "..."
datastore_id = "..."
intake_settings = {
'type': 'synapse',
'table': 'SCORING_DATA',
'schema': 'PUBLIC',
'external_data_source': 'some_datastore',
'data_store_id': datastore_id,
'credential_id': credential_id,
'cloud_storage_credential_id': cloud_storage_credential_id
}
output_settings = {
'type': 'synapse',
'table': 'SCORED_DATA',
'schema': 'PUBLIC',
'statement_type': 'insert',
'external_data_source': 'some_datastore',
'data_store_id': datastore_id,
'credential_id': credential_id,
'cloud_storage_credential_id': cloud_storage_credential_id
}
job = dr.BatchPredictionJob.score(
deployment_id,
passthrough_columns_set='all',
intake_settings=intake_settings,
output_settings=output_settings,
)
print("started scoring...", job)
job.wait_for_completion()
```
More details about Synapse scoring can be found in the [intake](intake-options#synapse-scoring) and [output](output-options#synapse-write) documentation.
## End-to-end scoring with BigQuery {: #end-to-end-scoring-with-bigquery }
The following example scores data from a BigQuery table and sends results to a BigQuery table.
```python
import datarobot as dr
dr.Client(
endpoint="https://app.datarobot.com/api/v2",
token="...",
)
deployment_id = "616d01a8ddbd17fc2c75caf4"
gcs_credential_id = "6166c01ee91fb6641ecd28bd"
intake_settings = {
'type': 'bigquery',
'dataset': 'my-dataset',
'table': 'intake-table',
'bucket': 'my-bucket',
'credential_id': gcs_credential_id,
}
output_settings = {
'type': 'bigquery',
'dataset': 'my-dataset',
'table': 'output-table',
'bucket': 'my-bucket',
'credential_id': gcs_credential_id,
}
job = dr.BatchPredictionJob.score(
deployment=deployment_id,
intake_settings=intake_settings,
output_settings=output_settings,
include_prediction_status=True,
passthrough_columns=["some_col_name"],
)
print("started scoring...", job)
job.wait_for_completion()
```
More details about BigQuery scoring can be found in the [intake](intake-options#bigquery-scoring) and [output](output-options#bigquery-write) documentation.
---
title: Schedule Batch Prediction jobs
description: How to create a definition and schedule the execution of a Batch Prediction job.
---
# Schedule Batch Prediction jobs {: #schedule-batch-prediction-jobs }
After [creating a job definition](job-definitions.md), you can execute it on a scheduled basis instead of triggering it manually through the `/batchPredictions/fromJobDefinition` endpoint.
A scheduled Batch Prediction job works just like a regular Batch Prediction job, except DataRobot handles the execution of the job.
For more information about Batch Prediction REST API routes, view the [DataRobot REST API reference documentation](public-api/predictions).
<!--private start-->
You can also access the legacy [REST API documentation](https://app.datarobot.com/apidocs/index.html){ target=_blank }.
<!--private end-->
## Schedule a job definition {: #schedule-a-job-definition }
The API accepts an `enabled` keyword as well as a `schedule` object:
`POST https://app.datarobot.com/api/v2/batchPredictionJobDefinitions`
```json
{
  "deploymentId": "<deployment_id>",
  "intakeSettings": {
    "type": "dataset",
    "datasetId": "<dataset_id>"
  },
  "outputSettings": {
    "type": "jdbc",
    "statementType": "insert",
    "credentialId": "<credential_id>",
    "dataStoreId": "<data_store_id>",
    "schema": "public",
    "table": "example_table",
    "createTableIfNotExists": false
  },
  "includeProbabilities": true,
  "includePredictionStatus": true,
  "passthroughColumnsSet": "all",
  "enabled": false,
  "schedule": {
    "minute": [0],
    "hour": [1],
    "month": ["*"],
    "dayOfWeek": ["*"],
    "dayOfMonth": ["*"]
  }
}
```
### `Schedule` payload {: #schedule-payload }
The `schedule` payload defines the intervals at which the job runs; its fields can be combined to construct complex schedules if needed. For each element of the object, you can supply either an asterisk `["*"]`, denoting every time denomination, or an array of integers (e.g., `[1, 2, 3]`) to define a specific interval.
<table>
<colgroup>
<col span="1" style="width: 20%;">
<col span="1" style="width: 17%;">
<col span="1" style="width: 17%;">
<col span="1" style="width: 46%;">
</colgroup>
<thead>
<tr>
<th style="text-align:center;">Key</th>
<th style="text-align:center;">Possible values</th>
<th style="text-align:center;">Example</th>
<th style="text-align:center;">Description</th>
</tr>
</thead>
<tbody>
<tr>
<td><b>minute</b></td>
<td><code>["*"]</code> or <code>[0 ... 59]</code></td>
<td><code>[15, 30, 45]</code></td>
<td>The job will run at these minute values for every hour of the day.</td>
</tr>
<tr>
<td><b>hour</b></td>
<td><code>["*"]</code> or <code>[0 ... 23]</code></td>
<td><code>[12,23]</code></td>
<td>The hour(s) of the day that the job will run.</td>
</tr>
<tr>
<td><b>month</b></td>
<td><code>["*"]</code> or <code>[1 ... 12]</code></td>
<td><code>["jan"]</code></td>
<td>
Strings, either 3-letter abbreviations or the full name of the month, can be used interchangeably (e.g., "jan" or "october"). <br/>
Months that are not compatible with <code>dayOfMonth</code> are ignored,
for example <code>{"dayOfMonth": [31], "month":["feb"]}.</code>
</td>
</tr>
<tr>
<td><b>dayOfWeek</b></td>
<td><code>["*"]</code> or <code>[0 ... 6]</code> where (Sunday=0)</td>
<td><code>["sun"]</code></td>
<td>
The day(s) of the week that the job will run. Strings, either 3-letter abbreviations or the full name
of the day, can be used interchangeably (e.g., "sunday", "Sunday", "sun", or "Sun", all map to <code>[0]</code>). <br/>
<br />
<strong>NOTE</strong>: This field is
additive with <code>dayOfMonth</code>, meaning the job will run both on the date specified by
<code>dayOfMonth</code> and the day defined in this field.
</td>
</tr>
<tr>
<td><b>dayOfMonth</b></td>
<td><code>["*"]</code> or <code>[1 ... 31]</code></td>
<td><code>[1, 25]</code></td>
<td>
The date(s) of the month that the job will run. Allowed values are either <code>[1 ... 31]</code> or <code>["*"]</code> for all
days of the month. <br/>
<br />
<strong>NOTE</strong>: This field is additive with <code>dayOfWeek</code>, meaning the job will run both on the date(s)
defined in this field and the day specified by <code>dayOfWeek</code> (for example, dates 1st, 2nd, 3rd, plus every Tuesday).
If <code>dayOfMonth</code> is set to <code>["*"]</code> and <code>dayOfWeek</code> is defined, the scheduler will trigger on every day of
the month that matches <code>dayOfWeek</code> (for example, Tuesday the 2nd, 9th, 16th, 23rd, 30th).
Invalid dates such as February 31st are ignored.
</td>
</tr>
</tbody>
</table>
!!! note
When specifying a time of day to run jobs, you must use UTC in the `schedule` payload—local time zones are not supported.
To account for DST (daylight savings time), update the schedule according to your local time.
### Examples {: #examples }
<table>
<colgroup>
<col span="1" style="width: 25%;">
<col span="1" style="width: 50%;">
<col span="1" style="width: 25%;">
</colgroup>
<thead>
<tr>
<th style="text-align:center;">Interval</th>
<th style="text-align:center;">Example</th>
<th style="text-align:center;">Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>Run every 5 minutes</td>
<td>
"schedule": {
"minute": [0, 5, 10, 15, 20, 25, 30, 35, 40, 45, 50, 55],
"hour": ["*"],
"month": ["*"]
"dayOfWeek": ["*"],
"dayOfMonth": ["*"],
}
</td>
<td>
Executes every time the minute dial of a clock reaches the number(s) defined in <code>minute</code>, since all other fields are asterisks ("every").
</td>
</tr>
<tr>
<td>Run every full hour</td>
<td>
"schedule": {
"minute": [0],
"hour": ["*"],
"month": ["*"]
"dayOfWeek": ["*"],
"dayOfMonth": ["*"],
}
</td>
<td>
Executes every time the clock reaches the minute(s) defined in <code>minute</code>. <br />
<br/>
This example executes every day at <code>1:00 AM</code>, <code>2:00 AM</code>,
<code>3:00 AM</code>, and so forth.
</td>
</tr>
<tr>
<td>Run right before noon every day</td>
<td>
"schedule": {
"minute": [59],
"hour": [11],
"month": ["*"]
"dayOfWeek": ["*"],
"dayOfMonth": ["*"],
}
</td>
<td>
Executes every time the minute dial of a clock reaches the minute(s) defined in <code>minute</code>, and the
same when the hour dial reaches the number(s) defined in <code>hour</code>. <br />
<br/>
This example executes every day at <code>11:59 AM</code>.
</td>
</tr>
<tr>
<td>Run every full hour once every half year</td>
<td>
"schedule": {
"minute": [0],
"hour": ["*"],
"month": [1, 6]
"dayOfWeek": ["*"],
"dayOfMonth": ["*"],
}
</td>
<td>
Executes every time the minute dial of a clock reaches the minute(s) defined in <code>minute</code>,
and only when the month is January (<code>1</code>) or June (<code>6</code>). </td>
</tr>
<tr>
<td>Run every full hour once every half year and only on Mondays and Sundays</td>
<td>
"schedule": {
"minute": [0],
"hour": ["*"],
"month": [1, 6]
"dayOfWeek": ["mon", "sun"],
"dayOfMonth": ["*"],
}
</td>
<td>
Same as above, but with <code>dayOfWeek</code> specified, the interval is only executed on the days specified.
</td>
</tr>
<tr>
<td>Run every full hour once every half year and only on Mondays and Sundays, <strong>but also</strong> on the 1st and 10th of the month</td>
<td>
"schedule": {
"minute": [0],
"hour": ["*"],
"month": [1, 6]
"dayOfWeek": ["mon", "sun"],
"dayOfMonth": [1, 10],
}
</td>
<td>
Same as above, but with <em>both</em> <code>dayOfWeek</code> and <code>dayOfMonth</code> specified, the two
values are <em>additive</em>, not restrictive. <br />
<br />
This example executes on the days defined in <code>dayOfWeek</code> <em>and</em> on the days defined in <code>dayOfMonth</code>,
not, as you might expect, only when the 1st or 10th falls on a Monday or Sunday.
</td>
</tr>
</tbody>
</table>
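The matching semantics described above, including the additive `dayOfWeek`/`dayOfMonth` behavior, can be sketched in Python. This is an illustration only, not DataRobot's actual scheduler implementation:

```python
from datetime import datetime

WEEKDAYS = ["mon", "tue", "wed", "thu", "fri", "sat", "sun"]

def matches(schedule, dt):
    """Sketch of the schedule-matching rules: "*" matches anything;
    dayOfWeek and dayOfMonth are additive (OR) when both are restricted."""
    def hit(field, value):
        values = schedule.get(field, ["*"])
        return "*" in values or value in values

    if not (hit("minute", dt.minute) and hit("hour", dt.hour) and hit("month", dt.month)):
        return False

    dow = schedule.get("dayOfWeek", ["*"])
    dom = schedule.get("dayOfMonth", ["*"])
    if "*" in dow and "*" in dom:
        return True
    # Additive: run if EITHER restricted day field matches.
    dow_hit = WEEKDAYS[dt.weekday()] in dow
    dom_hit = dt.day in dom
    return ("*" not in dow and dow_hit) or ("*" not in dom and dom_hit)

# The last example from the table above:
schedule = {
    "minute": [0], "hour": ["*"], "month": [1, 6],
    "dayOfWeek": ["mon", "sun"], "dayOfMonth": [1, 10],
}
print(matches(schedule, datetime(2020, 1, 10, 14, 0)))  # True: the 10th, even though it is a Friday
print(matches(schedule, datetime(2020, 1, 8, 14, 0)))   # False: a Wednesday that is neither the 1st nor the 10th
```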
## Disable a scheduled job {: #disable-a-scheduled-job }
Job definitions are only executed by the scheduler if `enabled` is set to `true`.
If a job definition was previously running as a scheduled job but should now be stopped, `PATCH` the endpoint with `enabled` set to `false`.
A job that is currently running will finish execution regardless.
`PATCH https://app.datarobot.com/api/v2/batchPredictionJobDefinitions/<job_definition_id>`
```json
{
"enabled": false
}
```
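The same `PATCH` can be sketched with Python's standard library. The job definition ID and API key are placeholders; substitute your own values before sending:

```python
import json
import urllib.request

# Placeholder credentials and ID; replace with your own.
job_definition_id = "<job_definition_id>"
req = urllib.request.Request(
    url=f"https://app.datarobot.com/api/v2/batchPredictionJobDefinitions/{job_definition_id}",
    data=json.dumps({"enabled": False}).encode("utf-8"),
    method="PATCH",
    headers={
        "Authorization": "Bearer <API key>",
        "Content-Type": "application/json",
    },
)
# urllib.request.urlopen(req)  # uncomment to actually send the request
```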
## Limitations {: #limitations }
The scheduler limits how often a job can run and how many jobs can run at once.
### Total runs per day {: #total-runs-per-day }
Each organization is limited to a number of job executions per day.
If you are a Self-Managed AI Platform user, you can change this limitation by changing the environment variable `BATCH_PREDICTIONS_JOB_SCHEDULER_MAX_NUMBER_OF_RUNS_PER_DAY_PER_ORGANIZATION`.
On cloud, this limit is `1000` by default.
Note that the limit applies across all scheduled jobs in an organization: if one scheduled job already runs `1000` times per day, no other scheduled jobs can be activated by that organization.
### Schedules are best-effort {: #schedules-are-best-effort }
Depending on the load from other definitions running at the same time across the organization, the scheduler cannot guarantee that every job executes at the exact second of its schedule. In most cases, however, the scheduler triggers the job within 5 seconds of the scheduled time.
### Running the same definition simultaneously {: #running-the-same-definition-simultaneously }
A job definition cannot run more than one instance at a time on a scheduled basis. If a scheduled job takes so long to execute that the next interval triggers before the first run finishes, the new run is rejected and aborted. This continues until the running job finishes.
### Automatic disablement of failing jobs {: #automatic-disablement-of-failing-jobs }
If a job definition cannot execute due to misconfiguration and is aborted, DataRobot automatically sets `enabled` to `false` after `5` consecutive failures.
It is therefore recommended that you first test the payload against the existing `/batchPredictions` endpoint, and only `POST` the confirmed working payload to `/batchPredictionJobDefinitions`.
For Self-Managed AI Platform customers, this cut-off point of consecutive failures can be adjusted by changing the `BATCH_PREDICTIONS_JOB_SCHEDULER_FAILURES_BEFORE_ABORT` environment variable.
---
title: Time series
description: Outlines how to set up batch predictions for time series models. Includes settings details and code examples.
---
# Time series {: #time-series }
Batch predictions for time series models work without any additional configuration. However, in most cases you need to either modify the default configuration or prepare the prediction dataset.
!!! note
Time series batch predictions are not generally available for cross-series projects or traditional time series models (such as ARIMA); however, this functionality is available as a [public preview](pp-ts-tts-lstm-batch-pred) feature.
## Time series batch prediction settings {: #time-series-batch-prediction-settings }
The default configuration can be overridden using the `timeseriesSettings` job configuration property:
|Parameter |Example| Description |
|------------|------------|-----------------|
| `type` | `forecast` | Must be either `forecast` (default) or `historical`. |
| `forecastPoint` | `2019-02-04T00:00:00Z` | Optional. By default, DataRobot infers the forecast point from the dataset. To configure, `type` must be set to `forecast`.|
| `predictionsStartDate` | `2019-01-04T00:00:00Z` | Optional. By default, DataRobot infers the start date from the dataset. To configure, `type` must be set to `historical`.|
| `predictionsEndDate` | `2019-02-04T00:00:00Z` | Optional. By default, DataRobot infers the end date from the dataset. To configure, `type` must be set to `historical`. |
|`relaxKnownInAdvanceFeaturesCheck`| `false` | Optional. If activated, missing values in the known in advance features are allowed in the forecast window at prediction time. If omitted or `false`, missing values are not allowed. Default: `false`. |
Here is a complete example job:
```json
{
"deploymentId": "5f22ba7ade0f435ba7217bcf",
"intakeSettings": {"type": "localFile"},
"outputSettings": {"type": "localFile"},
"timeseriesSettings": {
"type": "historical",
"predictionsStartDate": "2020-01-01",
"predictionsEndDate": "2020-03-31"
}
}
```
An example using Python SDK:
```python
import datarobot as dr
dr.Client(
endpoint="https://app.datarobot.com/api/v2",
token="...",
)
deployment_id = "..."
input_file = "to_predict.csv"
output_file = "predicted.csv"
job = dr.BatchPredictionJob.score_to_file(
deployment_id,
input_file,
output_file,
timeseries_settings={
"type": "historical",
"predictions_start_date": "2020-01-01",
"predictions_end_date": "2020-03-31",
},
)
print("started scoring...", job)
job.wait_for_completion()
```
## Prediction type {: #prediction-type }
When using `forecast` mode, DataRobot makes predictions using the `forecastPoint` or rows in the dataset without a target. In `historical` mode, DataRobot makes bulk predictions, calculating predictions for all possible forecast points and forecast distances within the `predictionsStartDate` and `predictionsEndDate` range.
## Requirements for the scoring dataset {: #requirements-for-the-scoring-dataset }
To ensure the Batch Prediction API can process your time series dataset, note the following requirements:
* Sort prediction rows by their timestamps, with the earliest row first.
* If using multiseries, the prediction rows must be sorted by series ID then timestamp.
* There is **no limit** on the number of series DataRobot supports. The only limit is the job timeout as mentioned in [Limits](batch-prediction-api/index#limits).
### Single series forecast dataset example {: #single-series-forecast-dataset-example }
The following is an example forecast dataset for a single series:
<table>
<tr>
<th>date</th>
<th>y</th>
</tr>
<tr>
<td>2020-01-01</td>
<td>9342.85</td>
</tr>
<tr>
<td>2020-01-02</td>
<td>4951.33</td>
</tr>
<tr>
<td colspan="2" align="center"><i>24 more historical rows</i></td>
</tr>
<tr>
<td>2020-01-27</td>
<td>4180.92</td>
</tr>
<tr>
<td>2020-01-28</td>
<td>5943.11</td>
</tr>
<tr>
<td>2020-01-29</td>
<td></td>
</tr>
<tr>
<td>2020-01-30</td>
<td></td>
</tr>
<tr>
<td>2020-01-31</td>
<td></td>
</tr>
<tr>
<td>2020-02-01</td>
<td></td>
</tr>
<tr>
<td>2020-02-02</td>
<td></td>
</tr>
<tr>
<td>2020-02-03</td>
<td></td>
</tr>
<tr>
<td>2020-02-04</td>
<td></td>
</tr>
</table>
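A dataset of this shape can be generated programmatically. The sketch below uses a hypothetical `forecast_rows` helper (not part of any DataRobot library) to append blank-target rows covering the forecast window:

```python
import csv
import io
from datetime import date, timedelta

def forecast_rows(history, forecast_days):
    """Append blank-target rows covering the forecast window.
    `history` is a list of (date, value) tuples sorted by date."""
    rows = [{"date": d.isoformat(), "y": v} for d, v in history]
    last = history[-1][0]
    for i in range(1, forecast_days + 1):
        rows.append({"date": (last + timedelta(days=i)).isoformat(), "y": ""})
    return rows

# 28 days of history (2020-01-01 through 2020-01-28), then 7 forecast rows.
history = [(date(2020, 1, 1) + timedelta(days=i), 100 + i) for i in range(28)]
rows = forecast_rows(history, 7)

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["date", "y"])
writer.writeheader()
writer.writerows(rows)
print(buf.getvalue().splitlines()[-1])  # 2020-02-04,
```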
### Multiseries forecast dataset example {: #multiseries-forecast-dataset-example }
If scoring multiple series, the data must be ordered by series and timestamp:
<table>
<tr>
<th>date</th>
<th>series</th>
<th>y</th>
</tr>
<tr>
<td>2020-01-01</td>
<td>A</td>
<td>9342.85</td>
</tr>
<tr>
<td>2020-01-02</td>
<td>A</td>
<td>4951.33</td>
</tr>
<tr>
<td colspan="3" align="center"><i>24 more historical rows</i></td>
</tr>
<tr>
<td>2020-01-27</td>
<td>A</td>
<td>4180.92</td>
</tr>
<tr>
<td>2020-01-28</td>
<td>A</td>
<td>5943.11</td>
</tr>
<tr>
<td>2020-01-29</td>
<td>A</td>
<td></td>
</tr>
<tr>
<td>2020-01-30</td>
<td>A</td>
<td></td>
</tr>
<tr>
<td>2020-01-31</td>
<td>A</td>
<td></td>
</tr>
<tr>
<td>2020-02-01</td>
<td>A</td>
<td></td>
</tr>
<tr>
<td>2020-02-02</td>
<td>A</td>
<td></td>
</tr>
<tr>
<td>2020-02-03</td>
<td>A</td>
<td></td>
</tr>
<tr>
<td>2020-02-04</td>
<td>A</td>
<td></td>
</tr>
<tr>
<td>2020-01-01</td>
<td>B</td>
<td>8477.22</td>
</tr>
<tr>
<td>2020-01-02</td>
<td>B</td>
<td>7210.29</td>
</tr>
<tr>
<td colspan="3" align="center"><i>24 more historical rows</i></td>
</tr>
<tr>
<td>2020-01-27</td>
<td>B</td>
<td>7400.21</td>
</tr>
<tr>
<td>2020-01-28</td>
<td>B</td>
<td>8844.71</td>
</tr>
<tr>
<td>2020-01-29</td>
<td>B</td>
<td></td>
</tr>
<tr>
<td>2020-01-30</td>
<td>B</td>
<td></td>
</tr>
<tr>
<td>2020-01-31</td>
<td>B</td>
<td></td>
</tr>
<tr>
<td>2020-02-01</td>
<td>B</td>
<td></td>
</tr>
<tr>
<td>2020-02-02</td>
<td>B</td>
<td></td>
</tr>
<tr>
<td>2020-02-03</td>
<td>B</td>
<td></td>
</tr>
<tr>
<td>2020-02-04</td>
<td>B</td>
<td></td>
</tr>
</table>
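A quick pre-flight check of the sort order can save a failed job. The helper below is a hypothetical illustration of the ordering requirement, not a DataRobot API call:

```python
def is_sorted_for_scoring(rows, series_key="series", date_key="date"):
    """Check that rows are ordered by series ID, then timestamp,
    as required for multiseries scoring data. ISO 8601 date strings
    sort correctly as plain strings."""
    keys = [(row[series_key], row[date_key]) for row in rows]
    return keys == sorted(keys)

rows = [
    {"date": "2020-01-01", "series": "A", "y": 9342.85},
    {"date": "2020-01-02", "series": "A", "y": 4951.33},
    {"date": "2020-01-01", "series": "B", "y": 8477.22},
    {"date": "2020-01-02", "series": "B", "y": 7210.29},
]
print(is_sorted_for_scoring(rows))  # True
```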
---
title: Predictions on large datasets
description: Walk through an example of making predictions on a large dataset using the Batch Prediction API.
---
# Predictions on large datasets {: #predictions-on-large-datasets }
[File size limits](pred-file-limits) vary depending on the prediction method—for predictions on large datasets, use the Batch Prediction API or real-time Prediction API.
The following example shows how to make predictions on a large dataset using the Batch Prediction API. See the [Prediction API](dr-predapi) for real-time predictions.
In this example, the prediction dataset is stored in the AI Catalog. The Batch Prediction API also supports predicting on data sourced from [other locations](intake-options). Note that for predicting with a dataset from the AI Catalog, the dataset must be snapshotted.
In addition to the API key sent in the header of all API requests, you need the following to use the Batch Prediction API:
1. `<deployment_id>`: The deployment ID for the model being used to make predictions against.
2. `<dataset_id>`: The dataset ID of the snapshotted AI Catalog dataset used by the model `<deployment_id>`.
The following steps show how to work with files greater than 100 MB using the `batchPredictions` API endpoint. In summary, you will:
1. Create a BatchPrediction job indicating the deployed model and dataset to use.
2. Check the status of that BatchPrediction job until it is complete.
3. Download the results.
### 1. Create a Batch Prediction job {: #1-create-a-batch-prediction-job }
`POST https://app.datarobot.com/api/v2/batchPredictions`
Sample request:
```json
{
"deploymentId": "<deployment_id>",
"intakeSettings": {
"type": "dataset",
"datasetId": "<dataset_id>"
}
}
```
Sample time series request (requires enabling the time series product and the public preview "Batch Predictions for time series" setting):
```json
{
"deploymentId": "<deployment_id>",
"intakeSettings": {
"type": "dataset",
"datasetId": "<dataset_id>"
},
"timeseriesSettings": {
"type": "forecast"
}
}
```
Sample response:
The `links.self` property of the response contains the URL used for the next two steps.
```json
{
"status": "INITIALIZING",
"skippedRows": 0,
"failedRows": 0,
"elapsedTimeSec": 0,
"logs": [
"Job created by user@example.com from 10.1.2.1 at 2020-02-19 22:41:00.865000"
],
"links": {
"download": null,
"self": "https://app.datarobot.com/api/v2/batchPredictions/a1b2c3d4x5y6z7/"
},
"jobIntakeSize": null,
"scoredRows": 0,
"jobOutputSize": null,
"jobSpec": {
"includeProbabilitiesClasses": [],
"maxExplanations": 0,
"predictionWarningEnabled": null,
"numConcurrent": 4,
"thresholdHigh": null,
"passthroughColumnsSet": null,
"csvSettings": {
"quotechar": "\"",
"delimiter": ",",
"encoding": "utf-8"
},
"thresholdLow": null,
"outputSettings": {
"type": "localFile"
},
"includeProbabilities": true,
"columnNamesRemapping": {},
"deploymentId": "<deployment_id>",
"abortOnError": true,
"intakeSettings": {
"type": "dataset",
"datasetId": "<dataset_id>"
},
"includePredictionStatus": false,
"skipDriftTracking": false,
"passthroughColumns": null
},
"statusDetails": "Job created by user@example.com from 10.1.2.1 at 2020-02-19 22:41:00.865000",
"percentageCompleted": 0.0
}
```
The `links.self` property `https://app.datarobot.com/api/v2/batchPredictions/a1b2c3d4x5y6z7/` is the variable `<batch_prediction_job_status_url>` in the Step 2 GET call, below.
### 2. Check the status of the batch prediction job {: #2-check-the-status-of-the-batch-prediction-job }
`GET <batch_prediction_job_status_url>`
Sample response:
```json
{
"status": "INITIALIZING",
"skippedRows": 0,
"failedRows": 0,
"elapsedTimeSec": 352,
"logs": [
"Job created by user@example.com from 10.1.2.1 at 2020-02-19 22:41:00.865000",
"Job started processing at 2020-02-19 22:41:16.192000"
],
"links": {
"download": "https://app.datarobot.com/api/v2/batchPredictions/a1b2c3d4x5y6z7/download/",
"self": "https://app.datarobot.com/api/v2/batchPredictions/a1b2c3d4x5y6z7/"
},
"jobIntakeSize": null,
"scoredRows": 1982300,
"jobOutputSize": null,
"jobSpec": {
"includeProbabilitiesClasses": [],
"maxExplanations": 0,
"predictionWarningEnabled": null,
"numConcurrent": 4,
"thresholdHigh": null,
"passthroughColumnsSet": null,
"csvSettings": {
"quotechar": "\"",
"delimiter": ",",
"encoding": "utf-8"
},
"thresholdLow": null,
"outputSettings": {
"type": "localFile"
},
"includeProbabilities": true,
"columnNamesRemapping": {},
"deploymentId": "<deployment_id>",
"abortOnError": true,
"intakeSettings": {
"type": "dataset",
"datasetId": "<dataset_id>"
},
"includePredictionStatus": false,
"skipDriftTracking": false,
"passthroughColumns": null
},
"statusDetails": "Job started processing at 2020-02-19 22:41:16.192000",
"percentageCompleted": 0.0
}
```
The `links.download` property `https://app.datarobot.com/api/v2/batchPredictions/a1b2c3d4x5y6z7/download/` is the variable `<batch_prediction_job_download_url>` in the Step 3 GET call, below.
### 3. Download the results of the batch prediction job {: #3-download-the-results-of-the-batch-prediction-job }
Continue polling the status URL above until the job status is COMPLETED and error-free. At that point, predictions can be downloaded.
`GET <batch_prediction_job_download_url>`
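The poll-then-download flow can be sketched as a small helper. `wait_for_job` is illustrative, the terminal status names other than `COMPLETED` are assumptions, and a fake status sequence stands in for live GET requests:

```python
import time

def wait_for_job(get_status, poll_interval=5, timeout=3600):
    """Poll a status callable until the job reaches a terminal state.
    `get_status` stands in for a GET on the job status URL that returns
    the `status` string from the response body. Terminal states other
    than COMPLETED are assumed here for illustration."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_status()
        if status in ("COMPLETED", "ABORTED", "FAILED"):
            return status
        time.sleep(poll_interval)
    raise TimeoutError("job did not finish within the timeout")

# Fake status sequence to illustrate the flow without a live deployment:
statuses = iter(["INITIALIZING", "RUNNING", "COMPLETED"])
print(wait_for_job(lambda: next(statuses), poll_interval=0))  # COMPLETED
```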
<!--private start-->
For complete descriptions of the above API commands and more advanced use cases, refer to the [DataRobot API documentation](/apidocs/autodoc/api_reference.html#batch-predictions).
<!--private end-->
---
title: Batch Prediction job definitions
description: How to submit a working Batch Prediction job. You must supply a variety of elements to the POST request payload depending on the type of prediction.
---
# Batch Prediction job definitions {: #batch-prediction-job-definitions }
To submit a working Batch Prediction job, you must supply a variety of elements to the `POST` request payload depending on what type of prediction is required. Additionally, you must consider the type of intake and output adapters used for a given job.
For more information about Batch Prediction REST API routes, view the [DataRobot REST API reference documentation](public-api/predictions).
<!--private start-->
You can also access the legacy [REST API documentation](https://app.datarobot.com/apidocs/index.html){ target=_blank }.
<!--private end-->
Every time you make a Batch Prediction, the prediction information is stored outside DataRobot and re-submitted for each prediction request, as described in detail in the [sample use cases section](pred-examples). One such request could be as follows:
`POST https://app.datarobot.com/api/v2/batchPredictions`
```json
{
"deploymentId": "<deployment_id>",
"intakeSettings": {
"type": "dataset",
"datasetId": "<dataset_ud>"
},
"outputSettings": {
"type": "jdbc",
"statementType": "insert",
"credentialId": "<credential_id>",
"dataStoreId": "<data_store_id>",
"schema": "public",
"table": "example_table",
"createTableIfNotExists": false
},
"includeProbabilities": true,
"includePredictionStatus": true,
"passthroughColumnsSet": "all"
}
```
## Job Definitions API {: #job-definitions-api }
If your use case requires the same, or close to the same, type of prediction to be done multiple times, you can choose to create a _Job Definition_ of the Batch Prediction job and store this inside DataRobot for future use.
The API for job definitions is identical to the existing `/batchPredictions/` endpoint, and can be used interchangeably by changing the `POST` endpoint to `/batchPredictionJobDefinitions`:
`POST https://app.datarobot.com/api/v2/batchPredictionJobDefinitions`
```json
{
"deploymentId": "<deployment_id>",
"intakeSettings": {
"type": "dataset",
"datasetId": "<dataset_ud>"
},
"outputSettings": {
"type": "jdbc",
"statementType": "insert",
"credentialId": "<credential_id>",
"dataStoreId": "<data_store_id>",
"schema": "public",
"table": "example_table",
"createTableIfNotExists": false
},
"includeProbabilities": true,
"includePredictionStatus": true,
"passthroughColumnsSet": "all"
}
```
This endpoint returns a payload confirming that the definition was successfully stored in DataRobot.
Optionally, you can supply a `name` parameter for easier identification. If you don't supply one, DataRobot will create one for you.
!!! warning
The <code>name</code> parameter must be unique across your organization. If you attempt to create multiple definitions with the same name, the request will fail. If you wish to free up a name, you must first send a <code>DELETE</code> request with the existing job definition ID you wish to delete.
## Execute a Job Definition {: #execute-a-job-definition }
To submit a stored job definition for scoring, you can either run it on a scheduled basis, as described in [job scheduling](job-scheduling.md), or manually submit the definition ID as the payload to the `/batchPredictions/fromJobDefinition` endpoint:
`POST https://app.datarobot.com/api/v2/batchPredictions/fromJobDefinition`
```json
{
"jobDefinitionId": "<job_definition_id>"
}
```
The endpoint supports the regular CRUD operations: `GET`, `POST`, `DELETE`, and `PATCH`.
---
title: DataRobot REST API
description: The DataRobot REST API provides a programmatic alternative to the web interface for creating and managing DataRobot projects. Use the DataRobot API to build highly accurate predictive models and deploy them into production environments.
---
# DataRobot REST API {: #datarobot-public-api }
The DataRobot REST API provides a programmatic alternative to the web interface for creating and managing DataRobot projects. Use the DataRobot API to build highly accurate predictive models and deploy them into production environments.
For information about specific endpoints, select one from the table of contents on the left.
---
title: Get a prediction server ID
description: Learn how to retrieve a prediction server ID using cURL commands from the REST API or by using the DataRobot Python client.
---
# Get a prediction server ID {: #get-a-prediction-server-id }
In order to make predictions from a deployment via DataRobot's [Prediction API](dr-predapi), you need a prediction server ID. In this tutorial, you'll learn how to retrieve the ID using cURL commands from the REST API or by using the DataRobot Python client. Once obtained, you can use the prediction server ID to deploy a model and make predictions.
!!! note
Before proceeding, note that an API key is required for this tutorial. Reference the [Create API keys](api-key-mgmt) tutorial for more information.
=== "cURL"
    ``` shell
    curl -v \
      -H "Authorization: Bearer API_KEY" \
      YOUR_DR_URL/api/v2/predictionServers/

    # Example
    API_KEY=YOUR_API_KEY
    ENDPOINT=YOUR_DR_URL/api/v2/predictionServers/
    curl -v \
      -H "Authorization: Bearer $API_KEY" \
      $ENDPOINT
    ```
=== "Python"
Before continuing with Python, be sure you have installed the DataRobot Python client and configured your connection to DataRobot as outlined in the [API quickstart guide](api-quickstart/index).
``` python
# Set up your environment
import os
import datarobot as dr
API_KEY = os.environ["API_KEY"]
YOUR_DR_URL = os.environ["YOUR_DR_URL"]
FILE_PATH = os.environ["FILE_PATH"]
ENDPOINT = YOUR_DR_URL+"/api/v2"
# Instantiate DataRobot instance
dr.Client(
token=API_KEY,
endpoint=ENDPOINT
)
prediction_server_id = dr.PredictionServer.list()[0].id
print(prediction_server_id)
```
## Documentation {: #documentation }
The following provides additional documentation for features mentioned in this tutorial.
* [API key management](api-key-mgmt#api-key-management)
* [DataRobot Developers portal](https://developers.datarobot.com/){ target=_blank }
* [DataRobot Prediction API](dr-predapi)
---
title: DataRobot Prediction API
description: This section describes how to use DataRobot's Prediction API to make predictions on a dedicated prediction server.
---
# DataRobot Prediction API {: #datarobot-prediction-api }
This section describes how to use DataRobot's Prediction API to make predictions on a dedicated prediction server. If you need Prediction API reference documentation, it is available [here](pred-ref/index).
You can use DataRobot's Prediction API to make predictions on a model deployment (by specifying the deployment ID). This gives you access to advanced [model management](mlops/index.md) features like target and data drift detection. DataRobot's model management features are safely decoupled from the Prediction API, so you gain their benefits without sacrificing prediction speed or reliability. See the [**Deploy**](deploy-methods/index) section for details on creating a model deployment.
Before generating predictions with the Prediction API, review the recommended [best practices](#best-practices-for-the-fastest-predictions) to ensure the fastest predictions.
## Making predictions {: #making-predictions }
To generate predictions on new data using the Prediction API, you need:
* The model's deployment ID. You can find the ID in the sample code output of the [**Deployments > Predictions > Prediction API**](code-py) tab (with **Interface** set to "API Client").
* Your [API key](api-key-mgmt#api-key-management).
!!! warning
If your model is an open-source R script, it will run considerably slower.
Prediction requests are submitted as POST requests to the resource, for example:
```shell
curl -i -X POST "https://example.datarobot.com/predApi/v1.0/deployments/<deploymentId>/predictions" \
-H "Authorization: Bearer <API key>" -F \
file=@~/.home/path/to/dataset.csv
```
!!! info "Availability information"
Managed AI Platform (SaaS) users must include the `datarobot-key` in the cURL header (for example, `curl -H "Content-Type: application/json" -H "datarobot-key: xxxx"`). Find the key on the [**Predictions > Prediction API**](code-py) tab or by contacting your DataRobot representative.
The order of the prediction response rows is the same as the order of the sent data.
The Response returned is similar to:
```
HTTP/1.1 200 OK
Content-Type: application/json
X-DataRobot-Execution-Time: 38
X-DataRobot-Model-Cache-Hit: true
{"data":[...]}
```
!!! note
The example above shows an arbitrary hostname (`example.datarobot.com`) as the Prediction API URL; be sure to use the correct hostname of your dedicated prediction server. The configured (predictions) URL is displayed in the sample code of the [**Deployments > Predictions > Prediction API**](code-py) tab. See your system administrator for more assistance if needed.
## Using persistent HTTP connections {: #using-persistent-http-connections }
All prediction requests are served over a secure connection (SSL/TLS), which can result in significant connection setup time. Depending on your network latency to the prediction instance, this can be anywhere from 30ms to upwards of 100-150ms.
To address this, the Prediction API supports HTTP Keep-Alive, enabling your systems to keep a connection open for up to a minute after the last prediction request.
Using the Python `requests` module, run your prediction requests from `requests.Session`:
```python
import json
import requests
data = [
json.dumps({'Feature1': 42, 'Feature2': 'text value 1'}),
json.dumps({'Feature1': 60, 'Feature2': 'text value 2'}),
]
api_key = '...'
api_endpoint = '...'
session = requests.Session()
session.headers = {
'Authorization': 'Bearer {}'.format(api_key),
'Content-Type': 'text/json',
}
for row in data:
print(session.post(api_endpoint, data=row).json())
```
Check the documentation of your favorite HTTP library for how to use persistent connections in your integration.
## Prediction inputs {: #prediction-inputs }
The API supports both JSON- and CSV-formatted input data (JSON is often the safer choice when produced by a well-tested serializer). Data can either be posted in the [request body](#request-schema) or via a [file upload](#file-input) (multipart form).
!!! note
When using the Prediction API, the only supported column separator in CSV files and request bodies is the comma (`,`).
### JSON input {: #json-input }
The JSON input is formatted as an array of objects where the key is the feature name and the value is the value in the dataset.
For example, a CSV file that looks like:
```
a,b,c
1,2,3
7,8,9
```
Would be represented in JSON as:
```json
[
{
"a": 1,
"b": 2,
"c": 3
},
{
"a": 7,
"b": 8,
"c": 9
}
]
```
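The conversion above can be done with Python's `csv` module. The `to_number` helper is an illustrative assumption added so the output matches the JSON example, which uses numeric values:

```python
import csv
import io
import json

csv_text = "a,b,c\n1,2,3\n7,8,9\n"

def to_number(value):
    # csv.DictReader yields strings; convert numeric-looking values
    # so the JSON matches the example above.
    try:
        return int(value)
    except ValueError:
        try:
            return float(value)
        except ValueError:
            return value

rows = [
    {k: to_number(v) for k, v in row.items()}
    for row in csv.DictReader(io.StringIO(csv_text))
]
print(json.dumps(rows))  # [{"a": 1, "b": 2, "c": 3}, {"a": 7, "b": 8, "c": 9}]
```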
Submit a JSON array to the Prediction API by sending the data to the `/predApi/v1.0/deployments/<deploymentId>/predictions` endpoint. For example:
```shell
curl -H "Content-Type: application/json" -X POST --data '[{"a": 4, "b": 5, "c": 6}\]' \
-H "Authorization: Bearer <API key>" \
https://example.datarobot.com/predApi/v1.0/deployments/<deploymentId>/predictions
```
### File input {: #file-input }
This example assumes a CSV file, `dataset.csv`, that contains a header and the rows of data to predict on. cURL automatically sets the content type.
```
curl -i -X POST "https://example.datarobot.com/predApi/v1.0/deployments/<deploymentId>/predictions" \
-H "Authorization: Bearer <API key>" -F \
file=@~/.home/path/to/dataset.csv
HTTP/1.1 200 OK
Date: Fri, 08 Feb 2019 10:00:00 GMT
Content-Type: application/json
Content-Length: 60624
Connection: keep-alive
Server: nginx/1.12.2
X-DataRobot-Execution-Time: 39
X-DataRobot-Model-Cache-Hit: true
Access-Control-Allow-Methods: OPTIONS, POST
Access-Control-Allow-Credentials: true
Access-Control-Expose-Headers: Content-Type,Content-Length,X-DataRobot-Execution-Time,X-DataRobot-Model-Cache-Hit,X-DataRobot-Model-Id,X-DataRobot-Request-Id
Access-Control-Allow-Headers: Content-Type,Authorization,datarobot-key
X-DataRobot-Request-ID: 9e61f97bf07903b8c526f4eb47830a86
{
"data": [
{
"predictionValues": [
{
"value": 0.2570950924,
"label": 1
},
{
"value": 0.7429049076,
"label": 0
}
],
"predictionThreshold": 0.5,
"prediction": 0,
"rowId": 0
},
{
"predictionValues": [
{
"value": 0.7631880558,
"label": 1
},
{
"value": 0.2368119442,
"label": 0
}
],
"predictionThreshold": 0.5,
"prediction": 1,
"rowId": 1
}
]
}
```
### In-body text input {: #in-body-text-input }
This example includes the CSV file content in the request body. With this format, you must set the `Content-Type` header to `text/plain`.
```shell
curl -i -X POST "https://example.datarobot.com/predApi/v1.0/deployments/<deploymentId>/predictions" --data-binary $'a,b,c\n1,2,3\n7,8,9\n'
-H "content-type: text/plain" \
-H "Authorization: Bearer <API key>" \
```
## Prediction objects {: #prediction-objects }
The following sections describe the content of the various prediction objects.
### Request schema {: #request-schema }
The **Request** schema is standard for all prediction types. The following headers are accepted:
| Name | Value(s) |
|------------------|-----------------------|
| Content-Type | text/csv;charset=utf8 |
| | application/json |
| | multipart/form-data |
| Content-Encoding | gzip |
| | bz2 |
| Authorization | Bearer <API key> |
Note the following:
* If you are submitting predictions as a raw stream of data, you can specify an encoding by adding `;charset=<encoding>` to the `Content-Type` header. See the <a target="_blank" href="https://docs.python.org/2.7/library/codecs#standard-encodings">Python standard encodings</a> for a list of valid values. DataRobot uses `utf8` by default.
* If you are sending an encoded stream of data, you should specify the `Content-Encoding` header.
* The ``Authorization`` field is a Bearer authentication HTTP authentication scheme that involves security tokens called bearer tokens. While it is possible to authenticate via pair username + API token (Basic auth) or just via API token, these authentication methods are deprecated and not recommended.
You can parameterize a request using URI query parameters:
| Parameter name | Type | Notes |
|------------------------|----------|-----------------------------|
| passthroughColumns | string | List of columns from a scoring dataset to return in the prediction response. |
| passthroughColumnsSet | string | If passthroughColumnsSet=all is passed, all columns from the scoring dataset are returned in the prediction response. |
Note the following:
* The `passthroughColumns` and `passthroughColumnsSet` parameters cannot both be passed in the same request.
* While there is no limit on the number of column names you can pass with the `passthroughColumns` query parameter, there is a limit on the <a target="_blank" href="https://www.w3.org/Protocols/rfc2616/rfc2616-sec5.html#sec5.1">HTTP request line</a> (currently 8192 bytes).
The following example illustrates the use of multiple passthrough columns:
```shell
curl -i -X POST "https://example.datarobot.com/predApi/v1.0/deployments/<deploymentId>/predictions?passthroughColumns=Latitude&passthroughColumns=Longitude" \
-H "Authorization: Bearer <API key>" \
-H "datarobot-key: <DataRobot key>" -F \
file=@~/.home/path/to/dataset.csv
```
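The repeated `passthroughColumns` query parameter can also be built programmatically, for example with Python's `urllib.parse.urlencode` and its `doseq` flag:

```python
from urllib.parse import urlencode

# doseq=True expands the list into one query parameter per value.
params = {"passthroughColumns": ["Latitude", "Longitude"]}
query = urlencode(params, doseq=True)
print(query)  # passthroughColumns=Latitude&passthroughColumns=Longitude
```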
### Response schema {: #response-schema }
The following is a sample **Response** body (also see the additional example of a [time series response body](#making-predictions-with-time-series)):
```json
{
"data": [
{
"predictionValues": [
{
"value": 0.6856798909,
"label": 1
},
{
"value": 0.3143201091,
"label": 0
}
],
"predictionThreshold": 0.5,
"prediction": 1,
"rowId": 0,
"passthroughValues": {
"Latitude": -25.433508,
"Longitude": 22.759397
}
},
{
"predictionValues": [
{
"value": 0.765656753,
"label": 1
},
{
"value": 0.234343247,
"label": 0
}
],
"predictionThreshold": 0.5,
"prediction": 1,
"rowId": 1,
"passthroughValues": {
"Latitude": 41.051128,
"Longitude": 14.49598
}
}
]
}
```
The table below lists custom DataRobot headers:
| Name | Value | Note |
|-----------------------------|---------------|---------------------|
| X-DataRobot-Execution-Time | numeric | Time for compute predictions (ms). |
| X-DataRobot-Model-Cache-Hit | true or false | Indication of in-memory presence of model (bool). |
| X-DataRobot-Model-Id | ObjectId | ID of the model used to serve the prediction request (only returned for predictions made on model deployments). |
| X-DataRobot-Request-Id | uuid | Unique identifier of a prediction request. |
The following table describes the *Response Prediction Rows* of the JSON array:
| Name | Type | Note |
|--------------------------|--------|-----------|
| predictionValues | array | An array of **PredictionValue** (schema described below). |
| predictionThreshold | float | The threshold used for predictions (applicable to binary classification projects only). |
| prediction | float | The output of the model for this row. |
| rowId | int | The row described. |
| passthroughValues | object | A JSON object where **key** is a column name and **value** is a corresponding value for a predicted row from the scoring dataset. This JSON item is only returned if either passthroughColumns or passthroughColumnsSet is passed. |
| adjustedPrediction | float | The exposure-adjusted output of the model for this row if the exposure was used during model building. The adjustedPrediction is included in responses if the request parameter **excludeAdjustedPredictions** is false. |
| adjustedPredictionValues | array | An array of exposure-adjusted **PredictionValue** (schema described below). The adjustedPredictionValues is included in responses if the request parameter **excludeAdjustedPredictions** is false. |
| predictionExplanations | array | An array of **PredictionExplanations** (schema described [below](#making-prediction-explanations)). This JSON item is only returned with Prediction Explanations. |
#### PredictionValue schema {: #predictionvalue-schema }
The following table describes the **PredictionValue** schema in the JSON Response array:
| Name | Type | Note |
|-------|-------|----------|
| label | - | Describes what the model output corresponds to. For regression projects, it is the name of the target feature. For classification projects, it is a label from the target feature. |
| value | float | The output of the prediction. For regression projects, it is the predicted value of the target. For classification projects, it is the probability associated with the label that is predicted to be most likely (implying a threshold of 0.5 for binary classification problems). |
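As a minimal sketch, a client might read the fields described above like this (the `top_prediction` helper is illustrative, not part of a DataRobot client library):

```python
import json

# A response body in the shape documented above (values copied from the sample).
body = json.loads("""
{
  "data": [
    {
      "predictionValues": [
        {"value": 0.6856798909, "label": 1},
        {"value": 0.3143201091, "label": 0}
      ],
      "predictionThreshold": 0.5,
      "prediction": 1,
      "rowId": 0
    }
  ]
}
""")

def top_prediction(row):
    """Return (label, probability) for the most likely class in a predicted row."""
    best = max(row["predictionValues"], key=lambda pv: pv["value"])
    return best["label"], best["value"]
```

For binary classification, the label returned by `top_prediction` matches the row's `prediction` field whenever the default 0.5 threshold applies.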
## Making predictions with time series {: #making-predictions-with-time-series }
!!! tip
Time series predictions are specific to time series projects, not all time-aware modeling projects. The CSV file must follow the format described in the [predictions section](ts-predictions#make-predictions-tab) of the time series modeling pages.
If you are making predictions with a forecast point, you can omit the forecast window rows from your prediction data, as DataRobot generates them automatically. This is called autoexpansion. Autoexpansion applies automatically if:
- Predictions are made for a specific forecast point and not a forecast range.
- The time series project has a regular time step and does not use Nowcasting.
When using autoexpansion, note the following:
- If you have [Known in Advance](ts-adv-opt#set-known-in-advance-ka) features that are important for your model, it is recommended that you manually create a forecast window to increase prediction accuracy.
- If you plan to use an [association ID](accuracy-settings#association-ids-for-time-series-deployments) other than the primary date/time column in your deployment to track accuracy, create a forecast window manually.
The URL for making predictions is the same for time series deployments and regular (non-time series) deployments.
The only difference is that you can optionally specify a forecast point, a prediction start and end date, or other time series-specific URL parameters.
Using the deployment ID, the server automatically detects the deployed model as a time series deployment and processes it accordingly:
```shell
curl -i -X POST "https://example.datarobot.com/predApi/v1.0/deployments/<deploymentId>/predictions" \
-H "Authorization: Bearer <API key>" -F \
file=@~/.home/path/to/dataset.csv
```
The following is a sample **Response** body for a multiseries project:
```
HTTP/1.1 200 OK
Content-Type: application/json
X-DataRobot-Execution-Time: 1405
X-DataRobot-Model-Cache-Hit: false
{
"data": [
{
"seriesId": 1,
"forecastPoint": "2018-01-09T00:00:00Z",
"rowId": 365,
"timestamp": "2018-01-10T00:00:00.000000Z",
"predictionValues": [
{
"value": 45180.4041874386,
"label": "target (actual)"
}
],
"forecastDistance": 1,
"prediction": 45180.4041874386
},
{
"seriesId": 1,
"forecastPoint": "2018-01-09T00:00:00Z",
"rowId": 366,
"timestamp": "2018-01-11T00:00:00.000000Z",
"predictionValues": [
{
"value": 47742.9432499386,
"label": "target (actual)"
}
],
"forecastDistance": 2,
"prediction": 47742.9432499386
},
{
"seriesId": 1,
"forecastPoint": "2018-01-09T00:00:00Z",
"rowId": 367,
"timestamp": "2018-01-12T00:00:00.000000Z",
"predictionValues": [
{
"value": 46394.5698978878,
"label": "target (actual)"
}
],
"forecastDistance": 3,
"prediction": 46394.5698978878
},
{
"seriesId": 2,
"forecastPoint": "2018-01-09T00:00:00Z",
"rowId": 697,
"timestamp": "2018-01-10T00:00:00.000000Z",
"predictionValues": [
{
"value": 39794.833199375,
"label": "target (actual)"
}
],
"forecastDistance": 1,
"prediction": 39794.833199375
}
]
}
```
### Request parameters {: #request-parameters }
You can parameterize the time series prediction request using URI query parameters.
For example, overriding the default inferred forecast point can look like this:
```shell
curl -i -X POST "https://example.datarobot.com/predApi/v1.0/deployments/<deploymentId>/predictions?forecastPoint=1961-01-01T00:00:00&relaxKnownInAdvanceFeaturesCheck=true" \
-H "Authorization: Bearer <API key>" -F \
file=@~/.home/path/to/dataset.csv
```
For the full list of time series-specific parameters, see [Time series predictions for deployments](time-pred).
### Response schema {: #response-schema_1 }
The response schema is consistent with that of [standard predictions](#response-schema) but adds the following fields to each `PredictionRow` object:
| Name | Type | Notes |
|-------------------|----------------------|---------------------|
| seriesId | string, int, or None | A multiseries identifier of a predicted row that identifies the series in a multiseries project. |
| forecastPoint | string | An <a target="_blank" href="https://www.iso.org/iso-8601-date-and-time-format.html">ISO 8601</a> formatted DateTime string corresponding to the forecast point for the prediction request, either user-configured or selected by DataRobot. |
| timestamp | string | An <a target="_blank" href="https://www.iso.org/iso-8601-date-and-time-format.html">ISO 8601</a> formatted DateTime string corresponding to the DateTime column of the predicted row. |
| forecastDistance | int | A [forecast distance](glossary/index#forecast-distance) identifier of the predicted row, or how far it is from forecastPoint in the scoring dataset. |
| originalFormatTimestamp | string | A DateTime string corresponding to the DateTime column of the predicted row. Unlike the timestamp column, this column will keep the same DateTime formatting as the uploaded prediction dataset. (This column is shown if enabled by your administrator.) |
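Because the response returns all series in one flat array, a client often wants to regroup rows by series. A sketch under the schema above (the `by_series` helper is illustrative; values are abbreviated from the sample response):

```python
from collections import defaultdict

# Predicted rows in the shape of the multiseries response above (fields abbreviated).
rows = [
    {"seriesId": 1, "forecastDistance": 1, "prediction": 45180.4041874386},
    {"seriesId": 2, "forecastDistance": 1, "prediction": 39794.833199375},
    {"seriesId": 1, "forecastDistance": 2, "prediction": 47742.9432499386},
]

def by_series(rows):
    """Group predictions by seriesId, ordered by forecast distance."""
    grouped = defaultdict(list)
    for row in sorted(rows, key=lambda r: (r["seriesId"], r["forecastDistance"])):
        grouped[row["seriesId"]].append(row["prediction"])
    return dict(grouped)
```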
## Making Prediction Explanations {: #making-prediction-explanations }
The DataRobot [**Prediction Explanations**](pred-explain/index) feature gives insight into which attributes of a particular input cause it to have exceptionally high or exceptionally low predicted values.
!!! tip
Two prerequisites must be met before you can request Prediction Explanations:
1. You must compute [Feature Impact](feature-impact) for the model.
2. You must generate predictions on the dataset using the selected model.
To initialize Prediction Explanations, use the [Prediction Explanations](pred-explain/index) tab.

Requesting Prediction Explanations is very similar to making a standard prediction request. Prediction Explanations requests are submitted as POST requests to the following resource:
```shell
curl -i -X POST "https://example.datarobot.com/predApi/v1.0/deployments/<deploymentId>/predictionExplanations" \
-H "Authorization: Bearer <API key>" -F \
file=@~/.home/path/to/dataset.csv
```
The following is a sample **Response** body:
```
HTTP/1.1 200 OK
Content-Type: application/json
X-DataRobot-Execution-Time: 841
X-DataRobot-Model-Cache-Hit: true
{
"data": [
{
"predictionValues": [
{
"value": 0.6634830442,
"label": 1
},
{
"value": 0.3365169558,
"label": 0
}
],
"prediction": 1,
"rowId": 0,
"predictionExplanations": [
{
"featureValue": 49,
"strength": 0.6194461777,
"feature": "driver_age",
"qualitativeStrength": "+++",
"label": 1
},
{
"featureValue": 1,
"strength": 0.3501610895,
"feature": "territory",
"qualitativeStrength": "++",
"label": 1
},
{
"featureValue": "M",
"strength": -0.171075409,
"feature": "gender",
"qualitativeStrength": "--",
"label": 1
}
]
},
{
"predictionValues": [
{
"value": 0.3565584672,
"label": 1
},
{
"value": 0.6434415328,
"label": 0
}
],
"prediction": 0,
"rowId": 1,
"predictionExplanations": []
}
]
}
```
### Request parameters {: #request-parameters_1 }
You can parameterize the Prediction Explanations prediction request using URI query parameters:
| Parameter name | Type | Notes |
|----------------------------|--------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| maxExplanations            | int    | Maximum number of explanations generated per prediction. The default is 3. (This parameter was previously called maxCodes.) |
| thresholdLow | float | Prediction Explanation low threshold. Predictions must be below this value (or above the thresholdHigh value) for Prediction Explanations to compute. This value can be null. |
| thresholdHigh | float | Prediction Explanation high threshold. Predictions must be above this value (or below the thresholdLow value) for Prediction Explanations to compute. This value can be null. |
| excludeAdjustedPredictions | string | Includes or excludes exposure-adjusted predictions in prediction responses if exposure was used during model building. The default value is 'true' (exclude exposure-adjusted predictions). |
The following is an example of a parameterized request:
```shell
curl -i -X POST "https://example.datarobot.com/predApi/v1.0/deployments/<deploymentId>/predictionExplanations?maxExplanations=2&thresholdLow=0.2&thresholdHigh=0.5" \
-H "Authorization: Bearer <API key>" -F \
file=@~/.home/path/to/dataset.csv
```
DataRobot's headers schema is the same as that for prediction responses. The [response schema](#response-schema) is consistent with standard predictions, but adds `predictionExplanations`, an array of `PredictionExplanations`, to each `PredictionRow` object.
#### PredictionExplanations schema {: #predictionexplanations-schema }
Response JSON Array of Objects:
| Name | Type | Notes |
|---------------------|--------|------------------------------------------------------------------------|
| label | – | Describes which output was driven by this Prediction Explanation. For regression projects, it is the name of the target feature. For classification projects, it is the class whose probability increasing would correspond to a positive strength of this Prediction Explanation. |
| feature | string | Name of the feature contributing to the prediction. |
| featureValue | - | Value the feature took on for this row. |
| strength | float | Amount this feature’s value affected the prediction. |
| qualitativeStrength | string | Human-readable description of how strongly the feature affected the prediction (e.g., `+++`, `--`, `+`). |
!!! tip
The prediction explanation `strength` value is not bounded to the values `[-1, 1]`; its interpretation may change as the number of features in the model changes. For normalized values, use `qualitativeStrength` instead. `qualitativeStrength` expresses the `[-1, 1]` range with visuals, with `---` representing `-1` and `+++` representing `1`. For explanations with the same `qualitativeStrength`, you can then use the `strength` value for ranking.
See the section on [interpreting Prediction Explanation output](xemp-pe#interpret-xemp-prediction-explanations) for more information.
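The ranking approach the tip describes can be sketched as follows. This assumes only the `+`/`-` buckets shown in the sample response above; the ordering map and helper name are illustrative, not part of the DataRobot API:

```python
# Map the qualitative buckets shown in the sample response onto a sortable scale.
# (Assumption: only these +/- markers appear; other markers would need entries too.)
QUALITATIVE_ORDER = {"---": -3, "--": -2, "-": -1, "+": 1, "++": 2, "+++": 3}

def rank_explanations(explanations):
    """Sort explanations from strongest positive to strongest negative effect,
    using qualitativeStrength first and raw strength as the tie-breaker."""
    return sorted(
        explanations,
        key=lambda e: (QUALITATIVE_ORDER[e["qualitativeStrength"]], e["strength"]),
        reverse=True,
    )

# Explanations copied from the sample response body above.
explanations = [
    {"feature": "gender", "qualitativeStrength": "--", "strength": -0.171075409},
    {"feature": "driver_age", "qualitativeStrength": "+++", "strength": 0.6194461777},
    {"feature": "territory", "qualitativeStrength": "++", "strength": 0.3501610895},
]
```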
## Making predictions with humility monitoring {: #making-predictions-with-humility-monitoring }
Predictions with [humility monitoring](humility-settings) allow you to monitor predictions using user-defined humility rules.
When a prediction falls outside the thresholds set for the "Uncertain Prediction" trigger, the action assigned to that trigger is applied.
When a trigger activates, a `humility` key is added to the body of the prediction response.
The following is a sample **Response** body for a regression project with an "Uncertain Prediction" trigger and the "No Operation" action:
```json
{
"data": [
{
"predictionValues": [
{
"value": 122.8034057617,
"label": "length"
}
],
"prediction": 122.8034057617,
"rowId": 99,
"humility": [
{
"ruleId": "5ebad4735f11b33a38ff3e0d",
"triggered": true,
"ruleName": "Uncertain Prediction Trigger"
}
]
}
]
}
```
The following is an example of a **Response** body for a regression model deployment. It uses the "Uncertain Prediction" trigger with the "Throw Error" action:
```
480 Error: {"message":"Humility ReturnError action triggered."}
```
The following is an example of a **Response** body for a regression model deployment. It uses the "Uncertain Prediction" trigger with the "Override Prediction" action:
```json
{
"data": [
{
"predictionValues": [
{
"value": 122.8034057617,
"label": "length"
}
],
"prediction": 5220,
"rowId": 99,
"humility": [
{
"ruleId": "5ebad4735f11b33a38ff3e0d",
"triggered": true,
"ruleName": "Uncertain Prediction Trigger"
}
]
}
]
}
```
### Response schema {: #response-schema_2 }
The response schema is consistent with that of [standard predictions](#response-schema) but adds a `humility` array with the following fields for each `Humility` object:
| Name | Type | Notes |
|-----------|---------|--------------------------------------------------------------------------------------------|
| ruleId    | string  | The ID of the humility rule assigned to the deployment. |
| triggered | boolean | `true` if the rule was triggered; otherwise `false`. |
| ruleName  | string  | The name of the rule, either defined by the user or auto-generated with a timestamp. |
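A small sketch of how a client might collect the rules that fired for a row, based on the schema above (the `triggered_rules` helper is illustrative):

```python
def triggered_rules(row):
    """Return the names of humility rules that fired for a predicted row."""
    return [rule["ruleName"] for rule in row.get("humility", []) if rule["triggered"]]

# A predicted row copied from the sample humility response above.
row = {
    "prediction": 122.8034057617,
    "rowId": 99,
    "humility": [
        {"ruleId": "5ebad4735f11b33a38ff3e0d", "triggered": True, "ruleName": "Uncertain Prediction Trigger"}
    ],
}
```

Rows without a `humility` key (no trigger activated) simply yield an empty list.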
## Error responses {: #error-responses }
Any error is indicated by a non-200 status code. 4XX codes indicate request errors (e.g., missing columns, wrong credentials, unknown model ID); for these, the `message` attribute gives a detailed description of the error. For example:
```
curl -H "Content-Type: application/json" -X POST --data '' \
-H "Authorization: Bearer <API key>" \
https://example.datarobot.com/predApi/v1.0/deployments/<deploymentId>
HTTP/1.1 400 BAD REQUEST
Date: Fri, 08 Feb 2019 11:00:00 GMT
Content-Type: application/json
Content-Length: 53
Connection: keep-alive
Server: nginx/1.12.2
X-DataRobot-Execution-Time: 332
X-DataRobot-Request-ID: fad6a0b62c1ff30db74c6359648d12fd
{
"message": "The requested URL was not found on the server. If you entered the URL manually, please check your spelling and try again."
}
```
Codes starting with 5XX indicate server-side errors. Retry the request or contact your DataRobot representative.
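A client-side sketch of this convention (the exception class names here are hypothetical, not part of any DataRobot library): treat 4XX as a request error and surface the `message` attribute, and treat 5XX as retryable:

```python
import json

class PredictionRequestError(Exception):
    """4XX: the request itself is wrong; fix it rather than retrying."""

class PredictionServerError(Exception):
    """5XX: a server-side failure; safe to retry."""

def check_response(status_code, body):
    """Return the parsed body on success; raise on non-200 responses."""
    if 200 <= status_code < 300:
        return json.loads(body)
    if 400 <= status_code < 500:
        raise PredictionRequestError(json.loads(body).get("message", "unknown request error"))
    raise PredictionServerError("server error %d; retry or contact DataRobot support" % status_code)
```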
## Knowing the limitations {: #knowing-the-limitations }
The following describes the size and timeout boundaries for dedicated prediction instances:
* Maximum data submission size for dedicated predictions is 50 MB.
* There is no limit on the number of rows, but timeout limits are as follows:
* Self-Managed AI Platform: configuration-dependent
* Managed AI Platform: 600s
If your request exceeds the timeout or you are trying to score a large file using dedicated predictions, consider using the [batch scoring package](python-batch-scoring).
* There is a limit on the size of the [HTTP request line](https://www.w3.org/Protocols/rfc2616/rfc2616-sec5.html#sec5.1){ target=_blank } (currently 8192 bytes).
* For managed AI Platform deployments, dedicated Prediction API servers automatically close persistent HTTP connections that are idle for more than 600 seconds. To use persistent connections, the client must handle these disconnects correctly. The following example configures the Python HTTP library `requests` to automatically retry HTTP requests on transport failure:
```python
import requests
import urllib3
# create a transport adapter that retries GET/POST/HEAD requests on failure, up to 3 times
adapter = requests.adapters.HTTPAdapter(
    max_retries=urllib3.Retry(
        total=3,
        allowed_methods=frozenset(['GET', 'POST', 'HEAD'])  # use method_whitelist on urllib3 < 1.26
    )
)
# create a Session (a pool of connections) and make it use the given adapter for HTTP and HTTPS requests
session = requests.Session()
session.mount('http://', adapter)
session.mount('https://', adapter)
# execute a prediction request that will be retried on transport failures, if needed
api_token = '<your api token>'
dr_key = '<your datarobot key>'
response = session.post(
'https://example.datarobot.com/predApi/v1.0/deployments/<deploymentId>/predictions',
headers={
'Authorization': 'Bearer %s' % api_token,
'DataRobot-Key': dr_key,
'Content-Type': 'text/csv',
},
data='<your scoring data>',
)
print(response.content)
```
### Model caching {: #model-caching }
The dedicated prediction server fetches models, as needed, from the DataRobot cluster. To speed up subsequent predictions that use the same model, DataRobot stores a certain number of models in memory (cache). When the cache fills, each new model request will require that one of the existing models in the cache be removed. DataRobot removes the least recently used model (which is not necessarily the model that has been in the cache the longest).
For Self-Managed AI Platform installations, the default size for the cache is 16 models, but it can vary from installation to installation. Please contact DataRobot support if you have questions regarding the cache size of your specific installation.
A prediction server runs multiple prediction processes, each with its own exclusive model cache; caches are not shared between processes. As a result, two consecutive requests to the same prediction server may each need to download the model data.
Each response from the prediction server includes the `X-DataRobot-Model-Cache-Hit` header, which is `true` if the model used was already in the cache and `false` otherwise.
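A minimal sketch of reading the custom DataRobot headers from a response's header mapping (the helper and the keys of the returned dict are illustrative):

```python
def parse_datarobot_headers(headers):
    """Pull the custom DataRobot headers described above out of a response header mapping."""
    return {
        "execution_ms": int(headers["X-DataRobot-Execution-Time"]),
        "cache_hit": headers["X-DataRobot-Model-Cache-Hit"] == "true",
        "request_id": headers.get("X-DataRobot-Request-Id"),
    }

# Header values copied from the sample time series response above.
headers = {
    "X-DataRobot-Execution-Time": "1405",
    "X-DataRobot-Model-Cache-Hit": "false",
    "X-DataRobot-Request-Id": "fad6a0b62c1ff30db74c6359648d12fd",
}
```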
## Best practices for the fastest predictions {: #best-practices-for-the-fastest-predictions }
The following checklist summarizes the suggestions above to help deliver the fastest predictions possible:
* *Implement [persistent HTTP connections](#using-persistent-http-connections)*: This reduces network round-trips, and thus latency, to the Prediction API.
* *Use CSV data:* Because JSON serialization of large amounts of data can take longer than using CSV, consider using CSV for your [prediction inputs](#prediction-inputs).
* *Keep the number of requested models low:* This allows the Prediction API to make use of [model caching](#model-caching).
* *Batch data together in chunks:* Batch as many rows together as possible without going over the [50 MB request limit](#knowing-the-limitations). If scoring larger files, consider using the [Batch Prediction API](batch-prediction-api/index), which, in addition to scoring local files, also supports scoring from/to S3 and databases.
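As a rough sketch of the batching suggestion (the helper and its simple size accounting are illustrative, not a DataRobot API), a client might split CSV rows into payloads under the request limit like this:

```python
MAX_REQUEST_BYTES = 50 * 1024 * 1024  # dedicated prediction request limit documented above

def chunk_csv_rows(header, rows, max_bytes=MAX_REQUEST_BYTES):
    """Yield CSV payloads, each holding as many data rows as fit under max_bytes."""
    batch, size = [header], len(header) + 1  # +1 accounts for the newline
    for row in rows:
        row_size = len(row) + 1
        if batch[1:] and size + row_size > max_bytes:
            yield "\n".join(batch)
            batch, size = [header], len(header) + 1
        batch.append(row)
        size += row_size
    if batch[1:]:
        yield "\n".join(batch)
```

Each yielded payload repeats the header row so that every chunk is a valid standalone CSV request body.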
|
dr-predapi
|
---
title: Make predictions with the API
description: DataRobot's Prediction API provides a mechanism for using your model for real-time predictions on a prediction server.
---
# Make predictions with the API {: #make-predictions-with-the-api }
DataRobot's Prediction API provides a mechanism for using your model for real-time predictions on an external application server. Follow [the guidelines](dr-predapi) for making predictions with the Prediction API. You can also review how to [retrieve a prediction server ID](pred-server-id) using cURL commands from the REST API or by using the DataRobot Python client to make predictions with a deployment.
|
index
|
---
title: Deprecated API routes
description: An overview of DataRobot's deprecated Prediction API routes, with a complete list of the specific deprecated POST and GET requests and their replacements.
---
# Deprecated API routes {: #deprecated-api-routes }
The Prediction API has changed significantly over time and accumulated a number of old routes. Even though these routes are already deprecated, they are still available in some installations since not all users have migrated to newer versions yet.
This page describes:
- all such deprecated routes
- deadlines for their complete removal
- new REST endpoints that should be used instead of old ones
- how Prediction Admins can capture usages of deprecated routes within their organization to safely upgrade DataRobot
Please refer to Prediction API [reference](../predapi/index) documentation for details on each specific route.
## Deprecated Prediction API routes {: #deprecated-prediction-api-routes }
The Prediction API has moved from "project/model" routes to "deployment-aware" routes. To support this transition, the following routes have been deprecated:
!!! warning
Availability of deprecated routes (those not using the deployment-aware model) is dependent on the initial DataRobot deployment version.
See the table below for complete details. Contact your DataRobot representative if you need help migrating to the new API routes.
| Deployment type | Installation timeline | Status and notes |
|------------------------|--------------------------------------------|-----------------------------------|
| Self-Managed AI Platform | New as of v6.0 or later | Disabled |
| Self-Managed AI Platform | Upgraded to v6.0 or v6.1 | Supported |
| Self-Managed AI Platform | v6.2 upgrade (future) | **All deprecated routes removed entirely** |
| AI Platform* | Migrated individually, contact your DataRobot representative | Migration is in progress |
\* Managed AI Platform accounts newer than May 2020 only have access to the new routes.
### The full list of deprecated routes {: #the-full-list-of-deprecated-routes }
#### Make AutoML predictions {: #make-automl-predictions }
*Deprecated route:* `POST /predApi/v1.0/<projectId>/<modelId>/predict`
*New route:* `POST /predApi/v1.0/deployments/<deploymentId>/predictions`
#### Make time series predictions {: #make-time-series-predictions }
*Deprecated routes:*
`POST /predApi/v1.0/<projectId>/<modelId>/timeSeriesPredict`
`POST /predApi/v1.0/deployments/<deploymentId>/timeSeriesPredictions`
*New route:* `POST /predApi/v1.0/deployments/<deploymentId>/predictions`
#### Prediction Explanations {: #prediction-explanations }
*Deprecated routes:*
`POST /predApi/v1.0/<projectId>/<modelId>/reasonCodesPredictions`
`POST /predApi/v1.0/<projectId>/<modelId>/predictionExplanations`
`POST /predApi/v1.0/deployments/<deploymentId>/predictionExplanations`
*New route:* `POST /predApi/v1.0/deployments/<deploymentId>/predictions`
#### Ping {: #ping }
*Deprecated route:* `GET /api/v1/ping`
*New route:* `GET /predApi/v1.0/ping`
#### List models {: #list-models }
*Deprecated route:* `GET /api/v1/<projectId>/models`
*New route:*
Use Public V2 API to fetch the list of models in a project
#### Using tokens {: #using-tokens }
*Deprecated routes:*
`GET /api/v1/api_token`
`POST /api/v1/api_token`
`GET /predApi/v1.0/api_token`
*New route:*
API tokens are superseded by API Keys and are managed by the public V2 API only. See the [UI platform documentation](api-key-mgmt) or the *Account > Developer Tools* section of the public V2 API documentation.
## Request examples for legacy Prediction API routes {: #request-examples-for-legacy-prediction-api-routes }
This section provides examples showing how to make predictions *using legacy and soon-to-be-disabled* Prediction API routes directly on a model by specifying the model's project and model IDs.
See the [table above](#deprecated-prediction-api-routes) for the deprecation timeline based on release status.
Generating predictions for classification and regression projects:
```shell
curl -i -X POST "https://example.datarobot.com/predApi/v1.0/<projectId>/<modelId>/predict" \
-H "Authorization: Bearer <API key>" -F \
file=@~/.home/path/to/dataset.csv
```
Generating predictions for time series projects:
```shell
curl -i -X POST "https://example.datarobot.com/predApi/v1.0/<projectId>/<modelId>/timeSeriesPredict" \
-H "Authorization: Bearer <API key>" -F \
file=@~/.home/path/to/dataset.csv
```
Generating Prediction Explanations:
```shell
curl -i -X POST "https://example.datarobot.com/predApi/v1.0/<projectId>/<modelId>/predictionExplanations" \
-H "Authorization: Bearer <API key>" -F \
file=@~/.home/path/to/dataset.csv
```
If you are using the managed AI Platform (SaaS), include the `datarobot-key` in the cURL header: `-H "datarobot-key: xxxx"`.
## Tracking deprecated routes usage {: #tracking-deprecated-routes-usage }
!!! info "Availability information"
This feature is not available for managed AI Platform (SaaS) users. Contact your DataRobot representative for information on handling migrations.
!!! note
This feature is available in v6.1 only. In v6.2 it will be deleted along with all deprecated routes.
For *prediction admins'* convenience, all deprecated route usage can be tracked from a single page.
This feature is only available to users with the "Enable Predictions Admin" permission enabled:

*Prediction admins* can access it via the **Manage Predictions** page:

To access deprecated route statistics, click the button in the top-right corner:

The statistics table looks like this:

The table has the following columns:
- **Last Used**: the last time this request was made (UTC)
- **Request**: the HTTP request that was made, including query parameters, if any
- **Username**: the name of the DataRobot user who made the request
- **Total Use Count**: the total number of times the request has been made since v6.1
Notes:
1. The table is sorted by *Last Used* descending, so the most recent requests appear at the top.
2. Two different users making the same request are counted separately.
3. The table shows at most the 100 most recent requests.
4. The table only shows requests to deprecated routes; requests to new routes are not shown.
|
deprecated-prediction-api
|
---
title: Time series predictions for deployments
description: Make time series predictions for a deployed model.
---
# Time series predictions for deployments {: #time-series-predictions-for-deployments }
**Endpoint:** `/deployments/<deploymentId>/predictions`
Makes time series predictions for a deployed model.
**Request Method:** `POST`
**Request URL:** deployed URL, for example: <br> `https://your-company.orm.datarobot.com/predApi/v1.0`
## Request parameters {: #request-parameters }
### Headers {: #headers }
| Key | Type | Description | Example(s) |
|------|----------------|--------------|------------|
| Datarobot-key | string | Required for managed AI Platform users only. An organization-specific secret used to access that organization's prediction servers. | `33257d41-fcc9-7c01-161c-3467df169a50` |
| Authorization | string | Required. <br> Three methods are supported: <ul><li> Bearer authentication</li><li>(deprecated) Basic authentication: User_email and API token</li><li>(deprecated) API token</li></ul> | <ul><li> Example for Bearer authentication method: `Bearer API_key-12345abcdb-xyz6789`</li><li>(deprecated) Example for User_email and API token method: `Basic Auth_basic-12345abcdb-xyz6789`</li><li>(deprecated) Example for API token method: `Token API_key-12345abcdb-xyz6789`</li></ul>|
**Datarobot-key:** This header is required only with the managed AI Platform. It is used as a precaution to secure user data from other verified DataRobot users. The key can also be retrieved with the following request to the DataRobot API: <br>`GET <URL>/api/v2/modelDeployments/<deploymentId>`
### Query arguments (time series models only) {: #query-arguments }
| Key | Type | Description | Example(s) |
|------|----------------|--------------|------------|
| forecastPoint | ISO-8601 string | An [ISO 8601](https://www.iso.org/iso-8601-date-and-time-format.html){ target=_blank } formatted DateTime string, without timezone, representing the [forecast point](glossary/index#forecast-point). This parameter cannot be used if `predictionsStartDate` and `predictionsEndDate` are passed. | `?forecastPoint=2013-12-20T01:30:00Z` |
| relaxKnownInAdvanceFeaturesCheck | bool | `true` or `false`. When `true`, missing values for known-in-advance features are allowed in the forecast window at prediction time. The default value is `false`. Note that the absence of known-in-advance values can negatively impact prediction quality. | `?relaxKnownInAdvanceFeaturesCheck=true` |
| predictionsStartDate | ISO-8601 string | The time in the dataset when bulk predictions begin generating. This parameter must be defined together with `predictionsEndDate`. The `forecastPoint` parameter cannot be used if `predictionsStartDate` and `predictionsEndDate` are passed. | `?predictionsStartDate=2013-12-20T01:30:00Z&predictionsEndDate=2013-12-20T01:40:00Z` |
| predictionsEndDate | ISO-8601 string | The time in the dataset when bulk predictions stop generating. This parameter must be defined together with `predictionsStartDate`. The `forecastPoint` parameter cannot be used if `predictionsStartDate` and `predictionsEndDate` are passed. | See above. |
It is possible to use standard URI parameters, including `passthroughColumns`, `passthroughColumnsSet`, and `maxExplanations`.
!!! note "XEMP-based explanations support"
Time series supports XEMP explanations. See [Prediction Explanations](dep-predex) for examples of the `maxExplanations` URI parameter.
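To avoid malformed query strings (for example, joining parameters with a second `?` instead of `&`), it can help to build the URL programmatically. A sketch using Python's standard library; the deployment ID in the URL is a placeholder:

```python
from urllib.parse import urlencode

# Hypothetical deployment URL; the parameter names are the ones documented above.
base = "https://example.datarobot.com/predApi/v1.0/deployments/abc123/predictions"
params = {
    "forecastPoint": "2013-12-20T01:30:00Z",
    "relaxKnownInAdvanceFeaturesCheck": "true",
}
url = "%s?%s" % (base, urlencode(params))  # joins with '&', percent-encodes ':' in the timestamp
```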
### Body {: #body }
| Data | Type | Example(s) |
|------------------------------|------|--------------------------------|
| Historic and prediction data | JSON | Raw data shown in image below. |

### Response 200 {: #response-200 }
#### Regression prediction example {: #regression-prediction-example }
```json
{
"data": [
{
"seriesId": null,
"forecastPoint": "2013-12-20T00:00:00Z",
"rowId": 35,
"timestamp": "2013-12-21T00:00:00.000000Z",
"predictionValues": [
{
"value": 2.3353628422,
"label": "sales (actual)"
}
],
"forecastDistance": 1,
"prediction": 2.3353628422
}
]
}
```
#### Binary classification prediction example {: #binary-classification-prediction-example }
```json
{
"data": [
{
"rowId": 147,
"prediction": "low",
"predictionThreshold": 0.5,
"predictionValues": [
{"label": "low", "value": 0.5158823954},
{"label": "high", "value": 0.4841176046}
],
"timestamp": "1961-04-01T00:00:00.000000Z",
"forecastDistance": 2,
"forecastPoint": "1961-02-01T00:00:00Z",
"seriesId": null
}
]
}
```
## Errors List {: #errors-list }
| HTTP Code | Sample error message | Reason(s) |
|--------------------------|--------------------|--------------|
| 400 BAD REQUEST | `{"message": "Based on the forecast point (10/26/08), there are no rows to predict that fall inside of the forecast window (10/27/08 to 11/02/08). Try adjusting the forecast point to an earlier date or appending new future rows to the data."}` | No empty rows were provided to predict on. |
| 400 BAD REQUEST | `{"message": "No valid output rows"}` | No historic information was provided; there's only 1 row to predict on. |
| 400 BAD REQUEST | `{"message": "The \"Time\" feature contains the value 'OCT-27', which does not match the original format %m/%d/%y (e.g., '06/24/19'). To upload this data, first correct the format in your prediction dataset and then try the import again. Because some software automatically converts the format for display, it is best to check the actual format using a text editor."}` | Prediction row has a different format than the rest of the data. |
| 400 BAD REQUEST | `{"message": "The following errors are found:\n • The prediction data must contain historical values spanning more than 35 day(s) into the past. In addition, the target cannot have missing values or missing rows which are used for differencing"}` | Provided dataset has fewer than the required 35 rows of historical data. |
| 400 BAD REQUEST | `{"message": {"forecastPoint": "Invalid RFC 3339 datetime string: "}}` | Provided an empty or non-valid forecastPoint. |
| 404 NOT FOUND | `{"message": "Deployment :deploymentId cannot be found for user :userId"}` | Deployment was removed or doesn’t exist.|
| 422 UNPROCESSABLE ENTITY | `{"message": "Predictions on models that are not time series models are not supported on this endpoint. Please use the predict endpoint instead."}` | Provided `deploymentId` that is not for a time series project.|
| 422 UNPROCESSABLE ENTITY | `{"message": {"relaxKnownInAdvanceFeaturesCheck": "value can't be converted to Bool"}}` | Provided an empty or invalid value for `relaxKnownInAdvanceFeaturesCheck`. |
|
time-pred
|
---
title: Ping health check
description: Health check to determine if the service is "alive".
---
# Ping health check {: #ping-health-check }
**Endpoint:** `/ping`
Health check to determine if the service is "alive".
**Request Method:** `GET`
**Request URL:** deployed URL, example: `https://your-company.orm.datarobot.com/predApi/v1.0`
### Request parameters {: #request-parameters }
None required.
### Response 200 {: #response-200 }
| Data | Type | Example(s) |
|----------|--------|------------|
| response | string | pong |
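As a minimal sketch, the pure helper below interprets a `/ping` response; the commented lines show a hypothetical call with the `requests` package against a placeholder URL (helper name and URL are ours, not part of the API):

```python
def ping_ok(status_code, body):
    """Interpret a /ping health-check response: 200 with 'pong' means alive."""
    return status_code == 200 and "pong" in body

# A hypothetical call sketch (requires the `requests` package and a real URL):
# import requests
# resp = requests.get("https://your-company.orm.datarobot.com/predApi/v1.0/ping", timeout=5)
# alive = ping_ok(resp.status_code, resp.text)
```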
|
ping
|
---
title: Predictions for deployments
description: Using a specified endpoint, calculates predictions based on user-provided data for a specific deployment.
---
# Predictions for deployments {: #predictions-for-deployments }
Using the endpoint below, you can provide the data necessary to calculate predictions for a specific deployment. If you need to make predictions for an unstructured custom inference model, see [Predictions for unstructured model deployments](dep-pred-unstructured).
**Endpoint:** `/deployments/<deploymentId>/predictions`
Calculates predictions based on user-provided data for a specific deployment. Note that this endpoint works only for deployed models.
!!! note
You can find the deployment ID in the sample code output of the [**Deployments > Predictions > Prediction API**](code-py) tab (with **Interface** set to **API Client**).
**Request Method:** `POST`
**Request URL:** deployed URL, for example: `https://your-company.orm.datarobot.com/predApi/v1.0`
## Request parameters {: #request-parameters }
### Headers {: #headers }
| Key | Description | Example(s) |
|------------|----------------|-----|
| Datarobot-key | Required for managed AI Platform users; string type <br><br> Once a model is deployed, see the code snippet in the DataRobot UI, [Predictions > Prediction API](code-py). | `DR-key-12345abcdb-xyz6789` |
| Authorization | Required; string <br><br> Three methods are supported: <ul><li> Bearer authentication </li><li> (deprecated) Basic authentication: User_email and API token </li><li> (deprecated) API token | <ul><li>Example for Bearer authentication method: `Bearer API_key-12345abcdb-xyz6789` </li><li>(deprecated) Example for User_email and API token method: `Basic Auth_basic-12345abcdb-xyz6789` </li><li>(deprecated) Example for API token method: `Token API_key-12345abcdb-xyz6789`</li></ul> |
| Content-Type | Optional; string type | <ul><li>`text/plain; charset=UTF-8`</li><li>`text/csv`</li><li>`application/json`</li><li>`multipart/form-data` (for files with data, i.e., .csv, .txt files)</ul> |
| Content-Encoding | Optional; string type <br><br> Currently supports only gzip-encoding with the default data extension. | `gzip` |
| Accept | Optional; string type <br><br> Controls the shape of the response schema. Currently JSON(default) and CSV are supported. See examples. | <ul><li>`application/json` (default)</li><li>`text/csv` (for CSV output)</li></ul> |
**Datarobot-key:** This header is required only with the managed AI Platform. It is used as a precaution to secure user data from other verified DataRobot users. The key can also be retrieved with the following request to the DataRobot API: <br>`GET <URL>/api/v2/modelDeployments/<deploymentId>`
### Query arguments {: #query-arguments }
| Key | Type | Description | Example(s) |
|------|---------|------------------------------|------------|
| passthroughColumns | list of strings | Optional. Controls which columns from a scoring dataset to expose (or copy over) in a prediction response. <br><br> The request may contain zero, one, or more columns. (There’s no limit on how many column names you can pass.) Column names must be passed as UTF-8 bytes and be percent-encoded (see the [HTTP standard](https://tools.ietf.org/html/rfc2616){ target=_blank } for this requirement). Make sure to use the exact name of a column as a value. | `/v1.0/deployments/<deploymentId>/predictions?passthroughColumns=colA&passthroughColumns=colB` |
| passthroughColumnsSet | string| Optional. Controls which columns from a scoring dataset to expose (or to copy over) in a prediction response. The only possible option is `all` and, if passed, all columns from a scoring dataset are exposed. | `/v1.0/deployments/deploymentId/predictions?passthroughColumnsSet=all` |
| predictionWarningEnabled | bool | Optional. DataRobot monitors unusual or anomalous predictions in real-time and indicates when they are detected. <br><br> If this argument is set to true, a new key is added to each prediction to specify the result of the Humility check. Otherwise, there are no changes in the prediction response. | `/v1.0/deployments/deploymentId/predictions?predictionWarningEnabled=true` <br><br> Response: <br><br> `{ "data": [ { "predictionValues": [ { "value": 18.6948852, "label": "y" } ], "isOutlierPrediction": false, "rowId": 0, "prediction": 18.6948852 } ] }` |
| decimalsNumber | integer | Optional. Configures the float precision in prediction results by setting the number of digits after the decimal point. <br><br> If there aren't any digits after the decimal point, rather than adding zeros, the float precision is less than the value set by `decimalsNumber`. | `?decimalsNumber=15` |
!!! note
The `passthroughColumns` and `passthroughColumnsSet` parameters are mutually exclusive and cannot both be passed in the same request. Also, while there isn't a limit on the number of column names you can pass with the `passthroughColumns` query parameter, there is a limit on the size of the [HTTP request line](https://www.w3.org/Protocols/rfc2616/rfc2616-sec5.html#sec5.1){ target=_blank } (currently 8192 bytes).
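For illustration, the repeated, percent-encoded `passthroughColumns` parameters can be built with Python's standard library (the helper name is ours, not part of the API):

```python
from urllib.parse import urlencode

def passthrough_query(columns):
    """Percent-encode column names (UTF-8) into repeated passthroughColumns params."""
    return urlencode([("passthroughColumns", c) for c in columns], encoding="utf-8")

query = passthrough_query(["colA", "colB"])
# 'passthroughColumns=colA&passthroughColumns=colB'
```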
### Body {: #body }
| Data | Type | Example(s) |
|-------|------|---------------------|
| Data to predict | <ul><li> raw text </li><li> form-data </li></ul> | <ul><li> `PassengerId,Pclass,Name,Sex,Age,SibSp,Parch,Ticket,Fare,Cabin,Embarked` <br> `892,3,"Kelly, Mr. James",male,34.5,0,0,330911,7.8292,,Q` <br> `893,3,"Wilkes, Mrs. James (Ellen Needs)",female,47,1,0,363272,7,,S` <br> `894,2,"Myles, Mr. Thomas Francis",male,62,0,0,240276,9.6875,,Q` </li><li> Key: `file`, value: `file_with_data_to_predict.csv` </li></ul> |
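To show how the pieces above fit together, here is a sketch that assembles the URL, headers, and query parameters for a prediction call. The helper name and all argument values are placeholders (ours, not part of the API); the actual POST, shown commented, would use a library such as `requests`:

```python
def build_prediction_request(base_url, deployment_id, api_key,
                             datarobot_key=None, decimals_number=None):
    """Assemble URL, headers, and query params for a deployment prediction call."""
    url = f"{base_url}/deployments/{deployment_id}/predictions"
    headers = {
        "Authorization": f"Bearer {api_key}",  # Bearer auth (recommended method)
        "Content-Type": "text/csv",            # scoring data sent as CSV text
    }
    if datarobot_key:                          # managed AI Platform only
        headers["Datarobot-key"] = datarobot_key
    params = {}
    if decimals_number is not None:
        params["decimalsNumber"] = decimals_number
    return url, headers, params

# url, headers, params = build_prediction_request(
#     "https://your-company.orm.datarobot.com/predApi/v1.0", "abc123", "API_KEY")
# resp = requests.post(url, headers=headers, params=params, data=csv_bytes)
```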
## Response 200 {: #response-200 }
### Binary prediction {: #binary-prediction }
**Label:** For binary classification tasks, the DataRobot API always returns 1 for the positive class and 0 for the negative class. Although the actual class values may differ depending on the data provided (like "yes"/"no"), the DataRobot API always returns 1/0. For regression tasks, the label is the name of the target feature. For multiclass classification, the DataRobot API returns the class value itself.
**Value:** Shows the probability of an event happening (where 0 and 1 are min and max probability, respectively). The user can adjust the threshold that links the value with the prediction label.
**PredictionThreshold** (*Applicable to binary classification projects only*): Threshold is the point that sets the class boundary for a predicted value. The model classifies an observation below the threshold as FALSE, and an observation above the threshold as TRUE. In other words, DataRobot automatically assigns the positive class label to any prediction exceeding the threshold. This can be configured manually through the UI (the [**Deploy** tab](deploy-model#deploy-from-the-leaderboard)), or through the DataRobot API (i.e., the `PATCH /api/v2/projects/(projectId)/models/(modelId)` route).
The actual response depends on the project type: binary classification, regression, or multiclass classification.
### Binary classification example {: #binary-classification-example }
```json
{
"data": [
{
"predictionValues": [
{
"value": 0.2789450715,
"label": 1
},
{
"value": 0.7210549285,
"label": 0
}
],
"predictionThreshold": 0.5,
"prediction": 0,
"rowId": 0
}
]
}
```
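Reading the example above in Python shows how `prediction` follows from the positive-class value and `predictionThreshold`:

```python
import json

# The binary classification response shown above, pasted as a string.
response = json.loads("""
{"data": [{"predictionValues": [{"value": 0.2789450715, "label": 1},
                                {"value": 0.7210549285, "label": 0}],
           "predictionThreshold": 0.5, "prediction": 0, "rowId": 0}]}
""")

row = response["data"][0]
# The positive class (label 1) is compared against the threshold.
positive = next(v["value"] for v in row["predictionValues"] if v["label"] == 1)
derived = 1 if positive > row["predictionThreshold"] else 0
# 0.279 is below the 0.5 threshold, so the negative class (0) is predicted.
```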
### Regression prediction example {: #regression-prediction-example }
```json
{
"data": [
{
"predictionValues": [
{
"value": 6754486.5,
"label": "revenue"
}
],
"prediction": 6754486.5,
"rowId": 0
}
]
}
```
### Multiclass classification prediction example {: #multiclass-classification-prediction-example }
```json
{
"data": [
{
"predictionValues": [
{
"value": 0.9999997616,
"label": "setosa"
},
{
"value": 2.433e-7,
"label": "versicolor"
},
{
"value": 1.997631915e-16,
"label": "virginica"
}
],
"prediction": "setosa",
"rowId": 0
}
]
}
```
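For multiclass responses, `prediction` carries the highest-probability class, which can be verified from `predictionValues`:

```python
import json

# The multiclass response shown above, pasted as a string.
response = json.loads("""
{"data": [{"predictionValues": [{"value": 0.9999997616, "label": "setosa"},
                                {"value": 2.433e-7, "label": "versicolor"},
                                {"value": 1.997631915e-16, "label": "virginica"}],
           "prediction": "setosa", "rowId": 0}]}
""")

row = response["data"][0]
# `prediction` is the class with the highest predicted probability.
top = max(row["predictionValues"], key=lambda v: v["value"])["label"]
```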
## Errors list {: #errors-list }
| HTTP Code | Sample error message | Reason(s) |
|---------|-------------|---------|
| 400 BAD REQUEST | `{"message": "Bad request"}` | Added external deployments that are unsupported.|
| 404 NOT FOUND | `{"message": "Deployment :deploymentId cannot be found for user :userId"}` | Provided an invalid :deploymentId (deleted deployment). |
|
dep-pred
|
---
title: Prediction Explanations for deployment
description: Using a specified endpoint, makes predictions on a given deployment and provides explanations.
---
# Prediction Explanations for deployments {: #prediction-explanations-for-deployments }
**Endpoint:** `/deployments/<deploymentId>/predictions?maxExplanations=<number>`
Prediction Explanations identify why a given model makes a certain prediction. To calculate Prediction Explanations, use the same endpoint used for calculating bare predictions with the `maxExplanations` URI parameter set to a positive integer value.
Prediction Explanations can be either:
* [XEMP](xemp-pe)-based (the default). To use XEMP-based explanations, first calculate feature impact and initialize Prediction Explanations to provide a [qualitative indicator (qualitativeStrength)](#qualitativestrength-indicator) of the effect variables have on the predictions. Explanations are computed for the top 50 features, ranked by feature impact scores (not including features with zero feature impact).
* [SHAP](shap-pe)-based if the [**Include only models with SHAP value support**](additional) advanced option is enabled prior to model building. Calculating feature impact is not required; note that the [qualitativeStrength](#qualitativestrength-indicator) indicator is not available for SHAP.
XEMP- and SHAP-based explanations are mutually exclusive—a model cannot have both XEMP and SHAP explanations. See the [Prediction Explanations](pred-explain/index) full documentation for specific calculation information.
Note the following:
* SHAP-based explanations are not available for time series or custom models.
* Neither XEMP nor SHAP explanations are available for images (that is, no [Image Explanations](xemp-pe#prediction-explanations-for-visual-ai)).
!!! warning "Performance considerations for XEMP-based explanations"
XEMP-based explanations can be 100x slower than regular predictions. Avoid them for low-latency critical use cases. SHAP-based explanations are much faster but can add some latency too.
!!! note "Multiclass support"
Prediction Explanations cannot be generated for XEMP- or SHAP-based multiclass projects.
More information to consider while working with explanations can be found [here](pred-explain/index#feature-considerations).
**Request Method:** `POST`
**Request URL:** deployed URL, for example: `https://your-company.orm.datarobot.com/predApi/v1.0`
## Request parameters {: #request-parameters }
### Headers {: #headers }
| Key | Description |Example(s) |
|------------------|---------------------|------------------|
| Datarobot-key | Required for managed AI Platform users; string type <br><br> Once a model is deployed, see the code snippet in the DataRobot UI, [Predictions > Prediction API](code-py). | `DR-key-12345abcdb-xyz6789`|
| Authorization | Required; string <br><br> Three methods are supported: <ul><li> Bearer authentication </li><li> (deprecated) Basic authentication: User_email and API token </li><li> (deprecated) API token | <ul><li> Example for Bearer authentication method: `Bearer API_key-12345abcdb-xyz6789` </li><li> (deprecated) Example for User_email and API token method: `Basic Auth_basic-12345abcdb-xyz6789` </li><li> (deprecated) Example for API token method: `Token API_key-12345abcdb-xyz6789` |
| Content-Type | Optional; string type | <ul><li>`text/plain; charset=UTF-8`</li><li>`text/csv`</li><li>`application/json`</li><li>`multipart/form-data` (for files with data, i.e., .csv, .txt files)</li></ul> |
| Content-Encoding | Optional; string type <br><br> Currently supports only gzip-encoding with the default data extension. | `gzip` |
**Datarobot-key:** This header is required only with the managed AI Platform. It is used as a precaution to secure user data from other verified DataRobot users. The key can also be retrieved with the following request to DataRobot API: <br>`GET <URL>/api/v2/modelDeployments/<deploymentId>`
### Query arguments (explanations specific) {: #query-arguments }
!!! note
To trigger prediction explanations, your request must send `maxExplanations=N` where N is greater than `0`.
| Key | Type | Description | Example(s) |
|------|----------------|--------------|------------|
| maxExplanations | int OR string | Optional. Limits the number of explanations returned by the server. Previously called `maxCodes` (deprecated). For SHAP explanations only, the special constant `all` is also accepted. | <ul><li>`?maxExplanations=5`</li><li>`?maxExplanations=all`</li></ul> |
| thresholdLow | float | Optional. The lower threshold for requiring a Prediction Explanation. Predictions must be below this value (or above the thresholdHigh value) for Prediction Explanations to compute. | `?thresholdLow=0.678` |
| thresholdHigh | float | Optional. The upper threshold for requiring a Prediction Explanation. Predictions must be above this value (or below the thresholdLow value) for Prediction Explanations to compute. | `?thresholdHigh=0.345` |
| excludeAdjustedPredictions | bool | Optional. Includes or excludes exposure-adjusted predictions in prediction responses if exposure was used during model building. The default value is `true` (exclude exposure-adjusted predictions). | `?excludeAdjustedPredictions=true` |
| explanationNumTopClasses | int | Optional. This argument is only for multiclass model explanations and it is mutually exclusive with `explanationClassNames`. <br><br> The number of top predicted classes to explain for each row. The default value is `1`. | `?explanationNumTopClasses=5` |
| explanationClassNames | list of string types | Optional. This argument is only for multiclass model explanations and it is mutually exclusive with `explanationNumTopClasses`. <br><br> A list of class names to explain for each row. Class names must be passed as UTF-8 bytes and must be percent-encoded (see the [HTTP standard](https://tools.ietf.org/html/rfc2616){ target=_blank } for this requirement). By default, `explanationNumTopClasses=1` is assumed. | `?explanationClassNames=classA&explanationClassNames=classB` |
The rest of the parameters like `passthroughColumns`, `passthroughColumnsSet`, and `predictionWarningEnabled` can also be used with Prediction Explanations.
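As an illustration of the `thresholdLow`/`thresholdHigh` semantics above, the sketch below decides whether a prediction falls outside the band and therefore gets an explanation (the function name and the strict-inequality boundary handling are assumptions of this sketch):

```python
def wants_explanation(prediction, threshold_low=None, threshold_high=None):
    """True when a prediction falls outside the (thresholdLow, thresholdHigh) band.

    Boundary handling (strict inequality) is an assumption of this sketch.
    """
    if threshold_low is None and threshold_high is None:
        return True  # no band configured: explain every row
    below = threshold_low is not None and prediction < threshold_low
    above = threshold_high is not None and prediction > threshold_high
    return below or above

# With ?thresholdLow=0.2&thresholdHigh=0.8, mid-band predictions are skipped:
# wants_explanation(0.5, 0.2, 0.8) -> False; wants_explanation(0.9, 0.2, 0.8) -> True
```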
### Body {: #body }
| Data | Type | Example(s) |
|---|---------|------------------|
| Data to predict | <ul><li> raw text </li><li> form-data </li></ul> | <ul><li> `PassengerId,Pclass,Name,Sex,Age,SibSp,Parch,Ticket,Fare,Cabin,Embarked` <br> `892,3,"Kelly, Mr. James",male,34.5,0,0,330911,7.8292,,Q` <br> `893,3,"Wilkes, Mrs. James (Ellen Needs)",female,47,1,0,363272,7,,S` <br> `894,2,"Myles, Mr. Thomas Francis",male,62,0,0,240276,9.6875,,Q` </li><li> Key: `file`, value: `file_with_data_to_predict.csv` </li></ul> |
### Response 200 {: #response-200 }
#### Binary XEMP-based explanation response example {: #binary-prediction-example }
```json
{
"data": [
{
"predictionValues": [
{
"value": 0.07836511,
"label": 1
},
{
"value": 0.92163489,
"label": 0
}
],
"predictionThreshold": 0.5,
"prediction": 0,
"rowId": 0,
"predictionExplanations": [
{
"featureValue": "male",
"strength": -0.6706725349,
"feature": "Sex",
"qualitativeStrength": "---",
"label": 1
},
{
"featureValue": 62,
"strength": -0.6325465255,
"feature": "Age",
"qualitativeStrength": "---",
"label": 1
},
{
"featureValue": 9.6875,
"strength": -0.353000328,
"feature": "Fare",
"qualitativeStrength": "--",
"label": 1
}
]
}
]
}
```
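When consuming an XEMP response like the one above, a common step is ranking explanations by the magnitude of `strength`:

```python
import json

# The XEMP explanation response shown above, pasted as a string.
response = json.loads("""
{"data": [{"prediction": 0, "rowId": 0, "predictionExplanations": [
    {"feature": "Sex", "featureValue": "male", "strength": -0.6706725349,
     "qualitativeStrength": "---", "label": 1},
    {"feature": "Age", "featureValue": 62, "strength": -0.6325465255,
     "qualitativeStrength": "---", "label": 1},
    {"feature": "Fare", "featureValue": 9.6875, "strength": -0.353000328,
     "qualitativeStrength": "--", "label": 1}]}]}
""")

# Rank explanations by the magnitude of their effect on this row's prediction.
ranked = sorted(response["data"][0]["predictionExplanations"],
                key=lambda e: abs(e["strength"]), reverse=True)
top_feature = ranked[0]["feature"]  # "Sex" has the strongest (negative) effect
```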
#### Binary SHAP-based explanation response example {: #binary-shap-prediction-example }
```json
{
"data":[
{
"deploymentApprovalStatus": "APPROVED",
"prediction": 0.0,
"predictionExplanations": [
{
"featureValue": "9",
"strength": 0.0534648234,
"qualitativeStrength": null,
"feature": "number_diagnoses",
"label": 1
},
{
"featureValue": "0",
"strength": -0.0490243586,
"qualitativeStrength": null,
"feature": "number_inpatient",
"label": 1
}
],
"rowId": 0,
"predictionValues": [
{
"value": 0.3111782477,
"label": 1
},
{
"value": 0.6888217523,
"label": 0.0
}
],
"predictionThreshold": 0.5,
"shapExplanationsMetadata": {
"warnings": null,
"remainingTotal": -0.089668474,
"baseValue": 0.3964062631
}
}
]
}
```
### "qualitativeStrength" indicator {: #qualitativestrength-indicator }
The `qualitativeStrength` value indicates the effect of a feature's value on the prediction, based on XEMP calculations. The following table describes each indicator. See the [XEMP calculation reference](xemp-calc) for full calculation details.
!!! note
This response is an XEMP-only feature.
| Indicator... | Description |
|---------------|-----------------------------------|
| +++ | Absolute score is > 0.75 and feature has positive impact. |
| --- | Absolute score is > 0.75 and feature has negative impact. |
| ++ | Absolute score is between (0.25, 0.75) and feature has positive impact. |
| -- | Absolute score is between (0.25, 0.75) and feature has negative impact. |
| + | Absolute score is between (0.001, 0.25) and feature has positive impact. |
| - | Absolute score is between (0.001, 0.25) and feature has negative impact. |
| <+ | Absolute score is between (0, 0.001) and feature has positive impact. |
| <- | Absolute score is between (0, 0.001) and feature has negative impact. |
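The table above can be encoded as a small lookup (a sketch; note that the score here is XEMP's internal absolute score, not necessarily the `strength` value returned in the response):

```python
def qualitative_strength(score):
    """Map a signed XEMP score to its qualitativeStrength symbol, per the table."""
    magnitude = abs(score)
    sign = "+" if score > 0 else "-"
    if magnitude > 0.75:
        return sign * 3        # "+++" or "---"
    if magnitude > 0.25:
        return sign * 2        # "++" or "--"
    if magnitude > 0.001:
        return sign            # "+" or "-"
    return "<" + sign          # "<+" or "<-"

# qualitative_strength(0.8) -> '+++'; qualitative_strength(-0.4) -> '--'
```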
## Errors list {: #errors-list }
| HTTP Code | Sample error message | Reason(s) |
|-----------|--------|----------------------|
| 404 NOT FOUND | `{"message": "Not found"}` | Provided an invalid :deploymentId (deleted deployment). |
| 404 NOT FOUND | `{"message": "Bad request"}` | Provided the wrong format for :deploymentId. |
| 422 UNPROCESSABLE ENTITY | `{"message": "{'max_codes': DataError(value can't be converted to int)}"}` | Provided `maxCodes` parameter in unsupported data type (i.e., non-integer values). |
| 422 UNPROCESSABLE ENTITY | `{"message": "{'threshold_high': DataError(value can't be converted to float)}"}` | Provided the `thresholdHigh` parameter in an unsupported data type (i.e., non-float values). |
| 422 UNPROCESSABLE ENTITY | `{"message": "{'threshold_low': DataError(value can't be converted to float)}"}` | Provided the `thresholdLow` parameter in an unsupported data type (i.e., non-float values). |
| 422 UNPROCESSABLE ENTITY | `{"message": "Multiclass models cannot be used for Prediction Explanations"}` | Provided a multiclass classification problem dataset, which is not supported for this endpoint. |
| 422 UNPROCESSABLE ENTITY | `{"message": "This endpoint does not support predictions on time series models. Please use the timeSeriesPredictions route instead."}` | Provided the deploymentId of a time series project, which is not supported for this endpoint. |
| 422 UNPROCESSABLE ENTITY | `{"message": "{'exclude_adjusted_predictions': DataError(value can't be converted to Bool)}"}` | Sent an empty or non-Boolean value with the `excludeAdjustedPredictions` parameter. |
| 422 UNPROCESSABLE ENTITY | `{"message": "'predictionWarningEnabled': value can't be converted to Bool"}` | Provided an invalid (non-boolean) value for `predictionWarningEnabled` parameter. |
|
dep-predex
|
---
title: Dedicated Prediction API reference
description: This reference provides additional documentation for the Prediction API. It lists the methods, input and output parameters, and errors that the API may return.
---
# Dedicated Prediction API reference {: #dedicated-prediction-api-reference }
This reference provides additional documentation for the Prediction API to help you successfully use the API. The following pages list the methods, input and output parameters, and errors that may be returned by the API. This reference supplements the information provided in the user guide's Prediction API pages. There, you can also find information about prerequisites and best practices and instructions for obtaining your configured predictions URL.
When using these examples, be sure to replace `https://your-company.orm.datarobot.com` with the name of your dedicated prediction instance. If you do not know whether you have a dedicated prediction instance, or its address, contact your DataRobot representative.
## General errors for Prediction API {: #general-errors-for-prediction-api }
These errors may be returned from any Prediction API calls, depending on the issue. They are common to all endpoints.
### Authorization {: #authorization }
| HTTP Code | Sample error message | Reason(s) |
|------------------|-----------------------|---------------|
| 401 UNAUTHORIZED | `{"message": "Invalid API token"}` | <ul><li> Provided an invalid or no API token key (Basic Auth). </li><li> Provided an invalid or no API token key (Bearer Auth). </li><li> Provided an invalid username with a valid API token key (Basic Auth). |
| 401 UNAUTHORIZED | `{"message": "Invalid Authorization header. No credentials provided."}` | Did not provide an API token key (Bearer Token Auth). |
| 401 UNAUTHORIZED | `{"message": "The datarobot-key header is missing"}` | <ul><li> Did not provide a DataRobot key parameter for a project that requires one. </li><li> Provided an empty DataRobot key parameter. </li><li> Provided an invalid DataRobot key parameter. |
### Parameters {: #parameters }
| HTTP Code | Sample error message | Reason(s) |
|---------|-----------------|------------|
| 400 BAD REQUEST | `{"message": "passthroughColumns do not match columns, columns expected but not found: [u'Name']"}` | Provided the name of a column that does not exist. |
| 422 UNPROCESSABLE ENTITY | `{"message": "'wd': wd is not allowed key"}` | Provided an unsupported parameter, e.g., `wd`. |
| 422 UNPROCESSABLE ENTITY | `{"message": "'passthroughColumns': blank value is not allowed"}` | Provided an empty value for the `passthroughColumns` parameter. |
| 422 UNPROCESSABLE ENTITY | `{"message": "'passthroughColumnsSet': value is not exactly 'all'"}` | Needed to provide the `all` value for the `passthroughColumnsSet` parameter, and provided some other value (or empty). |
| 422 UNPROCESSABLE ENTITY | `{"message": "'passthroughColumns' and 'passthroughColumnsSet' cannot be used together"}` | Passed parameters for both `passthroughColumns` and `passthroughColumnsSet` in the same request. Need to pass parameters in separate requests. |
| 422 UNPROCESSABLE ENTITY | `{"message": "'predictionWarningEnabled': value can't be converted to Bool"}` | Provided an invalid (non-boolean) value for predictionWarningEnabled parameter. |
### Payload {: #payload }
| HTTP Code | Sample error message | Reason(s) |
|--------------------------|---------------------|-------------------------|
| 400 BAD REQUEST | `{"message": "Submitted file '10k_diabetes.xlsx' has unsupported extension"}` | Provided a file with an unsupported extension, e.g., .xlsx. |
| 400 BAD REQUEST | `{"message": "Bad JSON format"}` | Provided raw text with content type `application/json`. |
| 400 BAD REQUEST | `{"message": "Mimetype '' not supported"}` | Provided an empty body with the `Text` mimetype. |
| 400 BAD REQUEST | `{"message": "No data was received"}` | Provided an empty body with `application/json` content-type. |
| 400 BAD REQUEST | `{"message": "Mimetype 'application/xml' not supported"}` | Provided a request with unsupported mimetype.|
| 400 BAD REQUEST | `{"message": "Requires non-empty JSON input"}`| Provided empty JSON, {}.|
| 400 BAD REQUEST | `{"message": "JSON uploads must be formatted as an array of objects"}`| Provided JSON was malformatted: `{"0": {"PassengerId": 892, "Pclass": 3}}`|
| 400 BAD REQUEST | `{"message": "Malformed CSV, please check schema and encoding.\nError tokenizing data. C error: Expected 11 fields in line 5, saw 12\n"}` | Provided CSV has issues: 1 row has more fields than expected (in this instance).|
| 413 REQUEST ENTITY TOO LARGE | `{"message": "Request is too large. The request size is $content_length bytes and the maximum message size allowed by the server is 50MB"}` | Provided file is too large. DataRobot accepts files of up to 50MB; if the file exceeds this limit, use the batch scoring tool instead. The same limit applies to archived datasets. |
| 422 UNPROCESSABLE ENTITY | `{"message": "No data to predict on"}`| Provided an empty request payload. |
| 422 UNPROCESSABLE ENTITY | `{"message": "Missing column(s): Age, Cabin, Embarked, Fare, Name, Parch, PassengerId, Pclass, Sex & SibSp"}`| Dataset is missing all required fields. Use a dataset from the project you try to predict on, with expected fields. |
## Prediction API infinity behavior {: #prediction-api-infinity-behavior }
[IEEE-754](https://en.wikipedia.org/wiki/IEEE_754){ target=_blank }, the standard for floating-point arithmetic, defines finite numbers, infinities, and a special NaN (not-a-number) value. According to [RFC 8259](https://datatracker.ietf.org/doc/html/rfc8259#section-6){ target=_blank }, infinities and NaN are not allowed in JSON. DataRobot tries to replace these values before they are returned in APIs using the following rules:
* `Inf` is replaced with `1.7976931348623157e+308` (double precision floating-point max).
* `-Inf` is replaced with `-1.7976931348623157e+308` (double precision floating-point min).
* `NaN` is replaced with `0.0`.
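The replacement rules above can be sketched in a few lines (a minimal illustration, not DataRobot's actual implementation):

```python
import math
import sys

def sanitize(value):
    """Apply the Inf/-Inf/NaN replacement rules before JSON serialization."""
    if math.isnan(value):
        return 0.0                        # NaN -> 0.0
    if math.isinf(value):
        # +/-Inf -> double-precision floating-point max/min
        return sys.float_info.max if value > 0 else -sys.float_info.max
    return value

# sys.float_info.max is exactly 1.7976931348623157e+308
```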
The Prediction API rounds floating-point numbers to 10 decimal places. However, rounding the double-precision maximum and minimum produces values just above and below those limits, respectively:
* `1.7976931348623157e+308` (double precision floating-point max) is returned as `1.797693135e+308` (greater than the maximum limit).
* `-1.7976931348623157e+308` (double precision floating-point min) is returned as `-1.797693135e+308` (lower than the minimum limit).
* Note that CPython’s built-in JSON parser parses such values as `inf` and `-inf`, respectively, but parsers in some other languages may fail on them.
|
index
|
---
title: Predictions for unstructured model deployments
description: Using a specified endpoint, calculates predictions based on user-provided data for a specific unstructured model deployment.
---
# Predictions for unstructured model deployments {: #predictions-for-unstructured-model-deployments }
Using the endpoint below, you can provide the data necessary to calculate predictions for a specific unstructured model deployment. If you need to make predictions for a standard model, see [Predictions for deployments](dep-pred).
**Endpoint:** `/deployments/<deploymentId>/predictionsUnstructured`
Calculates predictions based on user-provided data for a specific unstructured model deployment. This endpoint works _only_ for deployed custom inference models with an unstructured target type. For more information, see [Assemble unstructured custom models](unstructured-custom-models).
This endpoint does the following:
* Calls the `/predictUnstructured` route on the target custom inference model, allowing you to use the custom request and response schema, which may go beyond the standard DataRobot prediction API interface.
* Passes any payload and content type (MIME type and charset, if provided) to the model.
* Passes any model-returned payload, along with the content type (MIME type and charset, if provided), back to the caller.
In the [DRUM library](custom-model-drum), this call is handled by the [`score_unstructured()` hook](unstructured-custom-models#score).
!!! note
You can find the deployment ID in the sample code output of the [**Deployments > Predictions > Prediction API**](code-py) tab (with **Interface** set to **API Client**).
**Request Method:** `POST`
**Request URL:** deployed URL, for example: `https://your-company.orm.datarobot.com/predApi/v1.0`
## Request parameters {: #request-parameters }
### Headers {: #headers }
| Key | Description | Example(s) |
|-----|-------------|------------|
| Datarobot-key | Required for managed AI Platform users; string type <br><br> Once a model is deployed, see the code snippet in the DataRobot UI, [Predictions > Prediction API](code-py). | `DR-key-12345abcdb-xyz6789` |
| Authorization | Required; string <br><br> Three methods are supported: <ul><li> Bearer authentication </li><li> (deprecated) Basic authentication: User_email and API token </li><li> (deprecated) API token | <ul><li>Example for Bearer authentication method: `Bearer API_key-12345abcdb-xyz6789`</li><li>(deprecated) Example for User_email and API token method: `Basic Auth_basic-12345abcdb-xyz6789`</li><li>(deprecated) Example for API token method: `Token API_key-12345abcdb-xyz6789`</li></ul> |
| Content-Type | Optional; string type <br><br> Default: `application/octet-stream` <br><br> Any provided content type is passed to the model; however, the DRUM library has a built-in decoding mechanism for `text` content-types using the specified charset. <br><br> For more information, see [Assemble unstructured custom models](unstructured-custom-models). | <ul><li>`text/plain`</li><li>`text/csv`</li><li>`text/plain; charset=latin1`</li><li>`application/json; charset=UTF-8`</li><li>`custom/type`<li>`application/octet-stream`</li></ul> |
| Content-Encoding | Optional; string type <br><br> Currently supports only gzip-encoding with the default data extension. | `gzip` |
| Accept | Optional; string type | `*/*` (default) <br><br> The response is defined by the model output. |
### Query arguments {: #query-arguments }
Currently not supported for the `predictionsUnstructured` endpoint.
### Body {: #body }
| Data | Type | Example(s) |
|------|------|------------|
| Data to pass to the custom model | Bytes | <ul><li>`PassengerId,Pclass,Name,Sex,Age,SibSp,Parch,Ticket,Fare,Cabin,Embarked` <br> `892,3,"Kelly, Mr. James",male,34.5,0,0,330911,7.8292,,Q` <br> `893,3,"Wilkes, Mrs. James (Ellen Needs)",female,47,1,0,363272,7,,S` <br> `894,2,"Myles, Mr. Thomas Francis",male,62,0,0,240276,9.6875,,Q`</li><li>`{"data": [{"some": "json"}]}`</li><li>`Custom payload 123`</li><li>`<binary data>` (for example, image data)</li></ul> |
## Response 200 {: #response-200 }
The HTTP Response contains a payload returned by the custom model’s `/predictUnstructured` route and passed back as-is. The `Content-Type` header is passed to the caller. If the `Content-Type` header isn't provided, the `application/octet-stream` default is applied.
In the case of a DataRobot-acknowledged error in a request, an `application/json` error message is returned.
In the [DRUM library](custom-model-drum), the response payload and content type are generated by the [`score_unstructured()` hook](unstructured-custom-models#score).
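The content-type passthrough described above can be sketched as a small header builder (the helper name and placeholder values are ours; the actual POST, shown commented, would use a library such as `requests`):

```python
def unstructured_headers(api_key, content_type=None, datarobot_key=None):
    """Headers for a predictionsUnstructured call.

    The payload's content type is passed through to the model; when omitted,
    the `application/octet-stream` default applies.
    """
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": content_type or "application/octet-stream",
    }
    if datarobot_key:  # managed AI Platform only
        headers["Datarobot-key"] = datarobot_key
    return headers

# h = unstructured_headers("API_KEY", "text/plain; charset=UTF-8")
# resp = requests.post(f"{url}/deployments/{dep_id}/predictionsUnstructured",
#                      headers=h, data=b"Custom payload 123")
```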
## Errors list {: #errors-list }
| HTTP Code | Sample error message | Reason(s) |
|-----------|----------------------|-----------|
| 400 BAD REQUEST | `{"message":"Query parameters not accepted on this endpoint"}` | The request passed query parameters to the endpoint. |
| 404 NOT FOUND | `{"message": "Deployment :deploymentId cannot be found for user :userId"}` | The request provided an invalid `:deploymentId` (a deleted or non-existent deployment). |
| 422 UNPROCESSABLE ENTITY | `{"message": "Only unstructured custom models can be used with this endpoint. Use /predictions instead."}` | The request provided a `:deploymentId` for a deployment that isn't an unstructured custom inference model deployment. |
|
dep-pred-unstructured
|
---
title: Import data
description: DataRobot lets you import data using multiple methods, including uploading dataset files locally, uploading from URLs, and connecting to data sources.
---
# Import data {: #import-data }
Data can be ingested into DataRobot from your local system, a URL, and through data connections to common databases and data lakes. A critical part of the data ingestion process is [Exploratory Data Analysis (EDA)](eda-explained). EDA happens twice within DataRobot, once when data is ingested and again once a target has been selected and modeling has begun.
You can import data directly into the DataRobot platform or you can import into the AI Catalog, a centralized collaboration hub for working with data and related assets. The catalog allows you to seamlessly find, understand, share, tag, and reuse data. The following sections provide guidelines and steps for importing data.
Topic | Describes...
----- | ------
[Import directly into DataRobot](import-to-dr) | In DataRobot, you can import a dataset file, import from a URL, import from AWS S3, among other methods.
[Import into the AI Catalog](catalog) | Import data into the AI Catalog and from there, create a DataRobot project. In the catalog, you can transform the data using SQL, and create and schedule snapshots of your data.
[Import large datasets](large-data/index) | Methods of working with large datasets (greater than 10GB).
|
index
|
---
title: Import to DataRobot directly
description: This section provides detailed steps for importing without using the catalog, by drag and drop, URL, and HDFS.
---
# Import to DataRobot directly {: #import-data-to-DataRobot-directly }
This section describes detailed steps for importing data to DataRobot. Before you import data, review DataRobot's [data guidelines](file-types) to understand dataset requirements, including file formats and sizes.
!!! note
This section assumes that your data sources are configured. If not, see the [JDBC connection](data-conn) instructions on selecting a data connection and data source, as well as creating SQL queries.
## To get started {: #to-get-started }
The first step to building models is to import your data. To get started:
1. Create a new DataRobot project in either of the following ways:
* Sign in to DataRobot and click the DataRobot logo in the upper left corner.
* Open the [**Projects** folder](manage-projects#create-a-new-project) in the upper right corner and click the **Create New Project** link.
2. Once the new project page is open, [select a method to import](#import-methods) an acceptable file type to the page. (Accepted types are listed at the bottom of the screen.) If a particular upload method is disabled on your cluster, the corresponding ingest button will be grayed out.
## Import methods {: #import-methods }
Once you sign in to DataRobot, you can import data and start a project. The following are ways to import your data.
!!! note
Some import methods need to be configured before users in your organization can use them, as noted in the following sections.
| Method | Description |
|---|---|
| [Drag and drop](#drag-and-drop) | Drag a dataset into DataRobot to begin an upload. |
| [Use an existing data source](#use-an-existing-data-source) | Import from a configured data source. |
| [Import a dataset from a URL](#import-a-dataset-from-a-url) | Specify a URL from which to import data. |
| [Import local files](#import-local-files) | Browse to a local file and import. |
| [Import files from S3](#import-files-from-s3) | Upload from an AWS S3 bucket. |
| [Import files from Google Cloud Storage](#import-files-from-google-cloud-storage) | Import directly from Google Cloud. |
| [Import files from Azure Blob Storage](#import-files-from-azure-blob-storage) | Import directly from Azure Blob. |
!!! note
A particular upload method may be disabled on your cluster, in which case a button for that method does not appear. Contact your system administrator for information about the configured import methods.
For larger datasets, DataRobot provides [special handling](fast-eda) that lets you see your data earlier and select project options earlier.
### Drag and drop {: #drag-and-drop }
To use drag and drop, simply drag a file onto the app. Note, however, that when dropping large files (greater than 100MB) the upload process may hang. If that happens:
* Try again.
* Compress the file into a supported [format](file-types#data-formats) and then try again.
* Save the file to a remote data store (e.g., S3) and use URL ingest, which is more reliable for large files.
* If security is a concern, use a temporarily signed S3 URL.
### Use an existing data source {: #use-an-existing-data-source }
You can use this method if you have already configured data sources. If not, see the [JDBC](data-conn) connection instructions for details on selecting a data connection and data source, as well as creating SQL queries.
!!! note
When DataRobot ingests from the data source option, it makes a copy of the selected database rows for your use in the project.
To use an existing data source:
1. On the new project screen, click **Data Source**.
2. Select the desired data source and click **Next**.
3. Use saved credentials or enter new credentials for the database configured.
4. Click **Save and sign in**.
### Import a dataset from a URL {: #import-a-dataset-from-a-url }
To import data from a URL:
1. On the new project screen, click **URL**.
2. Enter the URL of the dataset. It can be [local](#import-local-files), HTTP, HTTPS, [Google Cloud Storage](#import-files-from-google-cloud-storage), [Azure Blob Storage](#import-files-from-azure-blob-storage), or [S3](#import-files-from-s3).
!!! note
The ability to import from Google Cloud, Azure Blob Storage, or S3 using a URL needs to be configured for your organization's installation.
3. Click **Create New Project** to create a new project.
### Import local files {: #import-local-files }
Instead of copying data to the client and then uploading it via the browser, you can specify the URL link as `file:///local/file/location`. DataRobot will then ingest the file from the network storage drive connected to the cluster. This import method needs to be configured for your organization's installation.
!!! note
The ability to load locally mounted files directly into DataRobot is not available for managed AI Platform users.
### Import files from S3 {: #import-files-from-s3 }
Self-Managed AI Platform installations with this import method configured can ingest S3 files via a URL by specifying the link to S3 as `s3://<bucket-name>/<file-name.csv>` (instead of, for example, `https://s3.amazonaws.com/bucket/file?AWSAccessKeyId...`). This allows you to ingest files from S3 without setting your object and buckets to public.
!!! note
This method is disabled for managed AI Platform users. Instead, import S3 files using one of the following methods:
* Using an Amazon S3 [data connection](data-conn).
* Generate a pre-signed URL allowing public access to S3 buckets with authentication, then you can use a direct URL to ingest the dataset.
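As an illustrative sketch (not part of the product), the helpers below build the `s3://` ingest form used by Self-Managed installations and, for the managed AI Platform workaround, generate a pre-signed URL with `boto3` (assumed to be installed and configured with AWS credentials):

```python
def s3_ingest_url(bucket: str, key: str) -> str:
    """Build the s3://<bucket-name>/<file-name> form used for direct S3 ingest."""
    return f"s3://{bucket}/{key}"

def presigned_ingest_url(bucket: str, key: str, expires_seconds: int = 3600) -> str:
    """Generate a time-limited HTTPS URL for a private S3 object.

    Requires boto3 and AWS credentials in the environment; the resulting URL
    can then be used with the URL import method.
    """
    import boto3  # deferred so the s3:// helper works without boto3 installed
    s3 = boto3.client("s3")
    return s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": bucket, "Key": key},
        ExpiresIn=expires_seconds,  # link expires after this many seconds
    )
```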
### Import files from Google Cloud Storage {: #import-files-from-google-cloud-storage }
You can configure DataRobot to directly import files stored in Google Cloud Storage using the link `gs://<bucket-name>/<file-name.csv>`. This import method needs to be configured for your organization's installation.
!!! note
The ability to import files using the `gs://<bucket-name>/<file-name.csv>` link is not available for managed AI Platform users.
### Import files from Azure Blob Storage {: #import-files-from-azure-blob-storage }
It is possible to directly import files stored in Azure Blob Storage using the link `azure_blob://<container-name>/<file-name.csv>`. This import method needs to be configured for your organization's installation.
!!! note
The ability to import files using the `azure_blob://<container-name>/<file-name.csv>` link is not available for managed AI Platform users.
## Project creation and analysis {: #project-creation-and-analysis }
After you select a data source and import your data, DataRobot creates a new project. This first [*exploratory data analysis*](eda-explained) step is known as *EDA1*. (See the section on ["Fast EDA"](fast-eda) to understand how DataRobot handles larger datasets.)
Progress messages indicate that the file is being processed.

When [EDA1](eda-explained#eda1) completes, DataRobot displays the **Start** screen. From here you can scroll down or click the **Explore** link to view a data summary. You can also [specify the target feature](model-data#set-the-target-feature) to use for predictions.

Once you're in the data section, you can:

* Click **View Raw Data** (1) to display a modal presenting up to a 1MB random sample of the raw data table that DataRobot will use to build models:

* [Set your target](model-data#set-the-target-feature) (2) by mousing over a feature name in the data display.
* Work with [feature lists](feature-lists) (3).
You can also [view a histogram](histogram) for each feature. The histogram provides several options for modifying the display to help explain the feature and its relationship to the dataset.
More information becomes available once you set a target feature and begin your [model build](model-data), which is the next step.
|
import-to-dr
|
---
title: Exploratory Data Analysis (EDA)
description: EDA is a two-stage process that DataRobot employs to first analyze datasets and summarize their main characteristics and then build models.
---
# Exploratory Data Analysis (EDA) {: #exploratory-data-analysis-eda }
Exploratory Data Analysis or _EDA_ is DataRobot's approach to analyzing datasets and summarizing their main characteristics. Generally speaking, there are two stages of EDA—EDA1 and EDA2. EDA1 provides summary statistics based on a sample of your data. EDA2 is the step used for model building and uses the entire dataset, based on the options selected (see below).
The following describes, in general terms, the DataRobot model building process for datasets under 1GB:
1. Import a dataset.
2. DataRobot launches EDA1 (and automatically creates [feature transformations](feature-transforms) if date features are detected).
3. Upon completion of EDA1, select a target and click **Start**.
* For [Feature Discovery](feature-discovery/index) projects, DataRobot:
* Loads secondary datasets.
* Discovers features from secondary datasets.
* Generates new features from the discovery.
* For time series projects, DataRobot applies the [feature derivation process](feature-eng) to create final features.
4. DataRobot partitions the data.
5. DataRobot launches EDA2 and starts model building when it completes.
The table below lists the components of EDA:
Analysis type | Analyzes...
---------- | -----------
Automatic data schema and data type | <ul><li>Numeric (numerical statistics, mean, standard deviation, median, min, max)</li><li>Categorical</li><li>Boolean</li><li>Text</li><li>Special feature types date</li><li>Currency</li><li>Percentage</li><li>Length</li><li>Image</li><li>Geospatial points</li><li>Geospatial lines or polygons</li></ul>
Data visualization | <ul><li>Histogram</li><li>Frequency distribution for top 50 items</li><li>Over time</li><li>Column validity for modeling (non-empty, non-duplicate)</li><li>Average value</li><li>Outliers</li><li>Feature correlation to the target</li></ul>
Data quality checks | <ul><li>Inliers</li><li>Outliers</li><li>Disguised missing values</li><li>Excess zeros</li><li>Target leakage</li><li>Missing images</li><li>Duplicate images</li></ul>
Feature association matrix | Support numerical and categorical data with metrics:<br><ul><li>Mutual information</li><li>Cramer's V</li><li>Pearson</li><li>Spearman</li></ul>
## EDA1 {: #eda1 }
DataRobot calculates EDA1 on up to 500MB of your dataset, after any applicable conversion or expansion. If the expanded dataset is under 500MB, DataRobot uses the entire dataset; otherwise, it uses a 500MB random sample.
!!! note
For larger datasets, Fast EDA runs during EDA1 and calculates early target selection using only a percentage of the input dataset. A message identifies the approximate percentage of data used. See [more information](large-data/fast-eda#fast-eda-and-early-target-selection) on early target selection for large datasets.
EDA1 returns:
* Feature type
* Numeric
* Categorical
* Boolean
* Image
* Text
* Special feature type
* Date
* Currency
* Percentage
* Length
* For numerics, numerical statistics
* Mean
* Standard deviation
* Median
* Min
* Max
* Frequency distribution for top 50 items
* Column validity for modeling (non-empty, non-duplicate)
## EDA2 {: #eda2 }
DataRobot calculates EDA2 on the portion of the data used for EDA1, excluding rows that are also in the holdout data (if there is a holdout) and rows where the target is `N/A`. DataRobot also does additional calculations on the target column using the entire dataset.
EDA2 returns:
* Recalculation of the numerical statistics originally calculated in EDA1.
* Feature correlation to the target (initial feature importance calculation). The target data used is from the sampled portion used for all the other columns.
Note that the following column types are flagged as "invalid/non-informative," cannot be transformed, and are not used in modeling:
* Duplicate column(s).
* Empty columns and columns lacking enough data to model.
* Columns consisting of only unique identifiers (reference ID columns).
|
eda-explained
|
---
title: Data Quality Assessment
description: The Data Quality Assessment automatically detects and often handles data quality issues such as outliers, leading or trailing zeros, target leakage, and many more.
---
# Data Quality Assessment {: #data-quality-assessment }
The Data Quality Assessment capability automatically detects and surfaces common data quality issues and, often, handles them with minimal or no action on the part of the user. The assessment not only saves time finding and addressing issues, but provides transparency into automated data processing (you can see the automated processing that has been applied). It includes a warning level to help determine issue severity.
See the associated [considerations](#feature-considerations) for important additional information.
As part of [EDA1](eda-explained#eda1), DataRobot runs checks on features that don’t require date/time and/or target information. Once EDA2 starts, DataRobot runs additional checks. In the end, the following checks are run:
* [Outliers](#outliers)
* [Multicategorical format errors](#multicategorical-format-errors)
* [Inliers](#inliers)
* [Excess zeros](#excess-zeros)
* [Disguised missing values](#disguised-missing-values)
* [Target leakage](#target-leakage)
* [Missing images](#missing-images) (for Visual AI projects)
Time series projects run all the baseline data quality checks as well as checks for:
* [Imputation leakage](#imputation-leakage)
* [Pre-derived lagged features](#pre-derived-lagged-feature)
* [Irregular time steps](#irregular-time-steps) (inconsistent gaps)
* [Leading or trailing zeros](#leading-or-trailing-zeros)
* [Infrequent negative values](#infrequent-negative-values)
* [New series in validation](#new-series-in-validation)
The [Visual AI project](visual-ai/index) Data Quality Assessment runs the same baseline checks and an additional missing image check:
* [Missing images](#missing-images)
Once EDA1 completes, the Data Quality Assessment appears just above the feature listing on the **Data** page.

In addition to the baseline data quality assessment, DataRobot provides additional detail for [time series](#time-series-assessment-details) and [Visual AI](#visual-ai-assessment-details) projects. Once model building completes, you can view the [Data Quality Handling Report](dq-report) for additional imputation information.
For more information, refer to the following reference material:
* [Detailed descriptions](#quality-check-descriptions) of each check.
* A summary of the [logic](#data-quality-check-logic-summary) behind each of the data quality checks.
## Overview {: #overview }
The Data Quality Assessment provides information about data quality issues that are relevant to your stage of model building. Initially run as part of EDA1 (data ingest), the results report on the **All Features** list. It runs again and updates after EDA2, displaying information for the selected feature list (or, by default, **All Features**). For checks that are not applicable to individual features (for example, Inconsistent Gaps), the report provides a general summary. Click **View Info** to view (and then **Close Info** to dismiss) the report:

Each data quality check provides issue status flags, a short description of the issue, and a recommendation message, if appropriate:
* Warning (): Attention or action required
* Informational (): No action required
* No issue ()
Because the results are feature-list based, it is possible that if you change the selected feature list on the **Data** page, new checks will appear or current checks will disappear from the assessment. For example, if feature list `List 1` contains a feature `problem`, which contains outliers, the outliers check will show in the assessment. If you change lists to `List 2` which does not include `problem` (or any other feature with outliers), the outliers check will report "no issue" ().
From within the assessment modal, you can filter by issue type to see which features triggered the checks. Toggle on **Show only affected features** and check boxes next to the check names to select which checks to display:

DataRobot then displays only features violating the selected data quality checks, and within the selected feature list, on the **Data** page. Hover on an icon for more detail:

For multilabel and Visual AI projects, **Preview Log** displays at the top if the assessment detects [multicategorical format errors](#multicategorical-format-errors) or [missing images](#missing-images) in the dataset. Click **Preview Log** to open a window with a detailed view of each error, so you can more easily find and fix them in the dataset.

## Explore the assessment {: #explore-the-assessment }
Once EDA1 completes and you have, perhaps, filtered the display, view the list of features impacted by the issues you are interested in investigating. To see the values that triggered a warning or information notification, expand a feature and review the **Histogram** and **Frequent Values** visualizations.
### Interpret the Histogram tab {: #interpret-the-histogram-tab }
{% include 'includes/histogram-include.md' %}
### Interpret Frequent Values {: #interpret-frequent-values }
The [Frequent Values](histogram#frequent-values-chart) chart, in addition to showing common values, reports inliers, disguised missing values, and excess zeros.

## More info... {: #more-info }
The following sections provide:
* [detailed descriptions](#quality-check-descriptions) of each check
* a summary of the [logic](#data-quality-check-logic-summary) behind each of the data quality checks.
### Quality check descriptions {: #quality-check-descriptions }
The sections below detail the checks DataRobot runs for the potential data quality issues. The [table that follows](#data-quality-check-logic-summary) summarizes this information.
* [Outliers](#outliers)
* [Multicategorical format errors](#multicategorical-format-errors)
* [Inliers](#inliers)
* [Excess zeros](#excess-zeros)
* [Disguised missing values](#disguised-missing-values)
* [Target leakage](#target-leakage)
* [Imputation leakage](#imputation-leakage)
* [Pre-derived lagged features](#pre-derived-lagged-feature)
* [Irregular time steps](#irregular-time-steps) (inconsistent gaps)
* [Leading or trailing zeros](#leading-or-trailing-zeros)
* [Infrequent negative values](#infrequent-negative-values)
* [New series in validation](#new-series-in-validation)
* [Missing images](#missing-images) (for Visual AI projects)
#### Outliers {: #outliers }
Outliers, the observation points at the far ends of the sample mean, may be the result of data variability. DataRobot automatically creates blueprints that handle outliers. Each blueprint applies an appropriate method for handling outliers, depending on the modeling algorithm used in the blueprint. For linear models, DataRobot adds a binary column inside of a blueprint to flag rows with outliers. Tree models handle outliers automatically.
**How they are detected**: DataRobot uses its own implementation of <a target="_blank" href="https://jsdajournal.springeropen.com/articles/10.1186/s40488-015-0031-y">Ueda's algorithm</a> for automatic detection of discordant outliers.
**How they are handled**: The data quality tool checks for outliers; to view outliers use the feature's [histogram](histogram#histogram-chart).
#### Multicategorical format errors {: #multicategorical-format-errors }
Multilabel modeling is a classification task that allows each row to contain one, several, or zero labels. To create a training dataset that can be used for multilabel modeling, you must follow the [requirements for multicategorical features](multilabel#create-the-dataset).
**How they are detected**: From a sampling of 100 random rows, DataRobot checks every feature that might qualify as multicategorical, looking for at least one value with the proper multicategorical format. If found, each row is checked to determine whether it complies with the multicategorical format. If there is at least one row that does not, the "multicategorical format error" is reported for the feature. The logic for the check is:
- Value must be a valid JSON.
- Value must represent a list of non-empty strings.
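The two rules above can be sketched as a small validator (illustrative only; this is not DataRobot's implementation):

```python
import json

def is_valid_multicategorical(value) -> bool:
    """Check one cell against the documented format rules: the value must
    parse as JSON and represent a list of non-empty strings. An empty list
    is valid, since a row may carry zero labels."""
    try:
        parsed = json.loads(value)
    except (TypeError, ValueError):
        return False  # not valid JSON at all
    return isinstance(parsed, list) and all(
        isinstance(item, str) and item for item in parsed
    )

# '["red", "blue"]' is valid; '["red", ""]' and 'red, blue' are not.
```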
**How they are handled**: A selection of errors are reported to the data quality tool. If a feature has a multicategorical format error, it is not detected as multicategorical. View the assessment log for details of the error:

#### Inliers {: #inliers }
Inliers are values that fall within the range of common values for a feature but are anomalously frequent compared to nearby values (for example, 55555 as a zip code value, entered by people who don't want to disclose their real zip code). If not handled, they could negatively affect model performance.
**How they are detected**: For each value recorded for a feature, DataRobot computes the value's frequency for that feature and makes an array of the results. Inlier <em>candidates</em> are the outliers in that array. To reduce false positives, DataRobot then applies another condition, keeping as inliers only those values for which:
`frequency > 50 * (number of non-missing rows in the feature) / (number of unique non-missing values in the feature)`
The algorithm allows inlier detection in numeric features with many unique values where, due to the number of values, inliers wouldn’t be noticeable in a [histogram](histogram#histogram-chart) plot. Note that this is a conservative approach for features with a smaller number of unique values. Additionally, it does not detect inliers in features with fewer than 50 unique values.
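The frequency condition above can be sketched as follows. Note this is illustrative only: the real check first finds outliers in the array of per-value frequencies and only then applies this threshold to reduce false positives, whereas the sketch applies the threshold directly.

```python
from collections import Counter

def inlier_candidates(values):
    """Flag values whose frequency exceeds the documented threshold:
    frequency > 50 * (non-missing rows) / (unique non-missing values)."""
    non_missing = [v for v in values if v is not None]
    counts = Counter(non_missing)
    if not counts:
        return []
    threshold = 50 * len(non_missing) / len(counts)
    return [v for v, freq in counts.items() if freq > threshold]
```

For example, a feature with 1,000 distinct zip codes plus 200 rows of 55555 would flag 55555 as an inlier candidate, while each singly-occurring value stays below the threshold.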
**How they are handled**: A binary column is automatically added inside of a blueprint to flag rows with inliers. This allows the model to incorporate possible patterns behind abnormal values. No additional user action is required.
#### Excess zeros {: #excess-zeros }
Repeated zeros in a column could be regular values but could also represent missing values. For example, sales could be zero for a given item either because there was no demand for the item or due to no stock. Using 0s to impute missing values is often suboptimal, potentially leading to decreased model accuracy.
**How they are detected**: Using the array described in [inliers](#inliers), if the frequency of the value 0 is an outlier, DataRobot flags the feature.
**How they are handled**: A binary column is automatically added inside of a blueprint to flag rows with excess zeros. This allows the model to incorporate possible patterns behind abnormal values. No additional user action is required.
#### Disguised missing values {: #disguised-missing-values }
A "disguised missing value" is the term applied to a situation when a value (for example, `-999`) is inserted to encode what would otherwise be a missing value. Because machine learning algorithms do not treat them automatically, these values could negatively affect model performance if not handled.
**How they are detected**: DataRobot finds values that both repeat with greater frequency than other values and are also detected outliers. To be considered a disguised missing value, repeated outliers must meet one of the following heuristics:
* All digits in the value are the same and repeat at least twice (e.g., 99, 88, 9999).
* The value begins with `1` and is then followed by two or more zeros.
* The value is equal to <a target="_blank" href="https://stats.idre.ucla.edu/wp-content/uploads/2016/02/bt115.pdf">-1, 98, or 97</a>.
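The three heuristics above can be sketched as a small predicate. This is illustrative only: in the real check, the rules are applied only to values already flagged as unusually frequent outliers, not to every value.

```python
def looks_like_disguised_missing(value: int) -> bool:
    """Apply the documented heuristics to a candidate value: all-same digits
    repeated at least twice (99, 88, 9999), a 1 followed by two or more
    zeros (100, 1000), or one of the sentinel values -1, 98, 97."""
    if value in (-1, 98, 97):
        return True
    s = str(abs(value))
    if len(s) >= 2 and len(set(s)) == 1:  # e.g., 99, 88, 9999, -999
        return True
    if s[0] == "1" and len(s) >= 3 and set(s[1:]) == {"0"}:  # e.g., 100, 1000
        return True
    return False
```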
**How they are handled**: Disguised missing values are handled in the same way as standard [missing values](model-ref#missing-values)—a median value is imputed and inserted and a binary column flags the rows where imputation occurred.
#### Target leakage {: #target-leakage }
The goal of predictive modeling is to develop a model that makes accurate predictions on new data, unseen during training. Because you cannot evaluate the model on data you don’t have, DataRobot estimates model performance on unseen data by saving off a portion of the historical dataset to use for evaluation.
A problem can occur, however, if the dataset uses information that is not known until the event occurs, causing <em>target leakage</em>. Target leakage refers to a feature whose value cannot be known at the time of prediction (for example, using the value for “churn reason” from the training dataset to predict whether a customer will churn). Including the feature in the model’s feature list would incorrectly influence the prediction and can lead to overly optimistic models.
**How they are detected**: DataRobot checks for target leakage during [EDA2](eda-explained#eda2) by calculating ACE importance scores (Gini Norm metric) for each feature with regard to the target. Features that exceed the moderate-risk (0.85) threshold are flagged; features exceeding the high risk (0.975) threshold are removed.
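The thresholds can be summarized in a short sketch. Computing the ACE importance scores themselves is out of scope here, so the function below assumes per-feature scores are already given; the variable names are illustrative.

```python
MODERATE_RISK = 0.85   # exceeded -> flagged with a warning
HIGH_RISK = 0.975      # exceeded -> removed from the generated feature list

def classify_leakage(importance_scores: dict) -> dict:
    """Bucket features by leakage risk given per-feature importance scores
    (Gini Norm) computed with regard to the target."""
    result = {"high": [], "moderate": [], "ok": []}
    for feature, score in importance_scores.items():
        if score > HIGH_RISK:
            result["high"].append(feature)
        elif score > MODERATE_RISK:
            result["moderate"].append(feature)
        else:
            result["ok"].append(feature)
    return result
```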
**How they are handled**: If the [advanced option](additional) for leakage removal is enabled (which it is by default), DataRobot automatically creates a feature list ([Informative Features - Leakage Removed](feature-lists#automatically-created-feature-lists)) that removes the high-risk problematic columns. Medium-risk features are marked with a yellow warning to alert you that you may want to investigate further.
After DataRobot detects leakage and creates <em>Informative Features - Leakage Removed</em>, it behaves according to the Advanced Option [“Run Autopilot on feature list with target leakage removed”](additional) setting. If enabled (the default):
* Quick, full, or Comprehensive Autopilot: DataRobot runs the newly created feature list <em>unless</em> you specified a user-created list. To run on one of the other default lists, rebuild models after the initial build with any list you select.
* Manual mode: DataRobot makes the list available so that you can apply it, at your discretion, from the Repository.
* The target leakage list will be available when adding models after the initial build.
If disabled, DataRobot applies the above to <em>Informative Features</em> (with potential target leakage remaining) or any user-created list you specified.
#### Imputation leakage {: #imputation-leakage }
The time series data prep tool can impute target and feature values for dates that are missing in the original dataset. The data quality check ensures that the imputed features are not leaking the imputed target. This is only a potential problem for features that are known in advance (KA), since the feature value is concurrent with the target value DataRobot is predicting.
**How they are detected**: DataRobot derives a binary classification target `is_imputed = (aggregated_row_count == 0)`. Prior to deriving time series features, it applies the [target leakage](#target-leakage) check to each KA feature, using `is_imputed` as the target.
**How they are handled**: Any features identified as high or moderate risk for imputation leakage are removed from the set of KA features. Subsequently, time series feature derivation proceeds as normal.
#### Pre-derived lagged feature {: #pre-derived-lagged-feature }
When a time series project starts, DataRobot automatically creates multiple date/time-related features, like lags and rolling statistics. There are times, however, when you do not want to automate time-based feature engineering (for example, if you have extracted your own time-oriented features and do not want further derivation performed on them). In this case, you should flag those features as [**Excluded from derivation**](ts-adv-opt#exclude-features-from-derivation) or [**Known in advance**](ts-adv-opt#set-known-in-advance-ka). The “Lagged feature” check helps to detect whether features that should have been flagged were not, which would lead to duplication of columns.
**How they are detected**: DataRobot compares each non-target feature with target(t-1), target(t-2) ... target(t-8).
**How they are handled**: All features detected as lags are automatically set as excluded from derivation to prevent "double derivation." Best practice suggests reviewing other uploaded features and setting all pre-derived features as “Excluded from derivation” or “Known in advance”, if applicable.
#### Irregular time steps {: #irregular-time-steps }
The “inconsistent gaps” check is flagged when a time series model has irregular [time steps](ts-flow-overview#time-steps). These gaps cause inaccurate rolling statistics. Some examples:
* Transactional data is not aggregated for a time series project and raw transactional data is used.
* Transactional data is aggregated into a daily sales dataset, and dates with zero sales are not added to the dataset.
**How they are detected**: DataRobot detects when there are expected timestamps missing.
It is important to understand that gaps could be consistent (for example, no sales for each weekend). DataRobot accounts for that and only detects inconsistent or unexpected gaps.
**How they are handled**: Because their inclusion is not good for rolling statistics, if greater than 20% of expected time steps are missing, the project runs in row-based mode (i.e., a regular project with out-of-time (OTV) validation). If that is not the intended behavior, make corrections in the dataset and recreate the project.
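The missing-step fraction can be sketched as below. This is a simplification: it assumes a single known regular step and, unlike the real check, does not distinguish consistent gaps (such as weekends) from inconsistent ones.

```python
from datetime import date, timedelta

def missing_step_fraction(timestamps, step=timedelta(days=1)):
    """Estimate the share of expected time steps missing from a series,
    assuming a known regular step between observations."""
    ts = sorted(set(timestamps))
    expected = int((ts[-1] - ts[0]) / step) + 1  # steps spanning first to last
    return 1 - len(ts) / expected

series = [date(2024, 1, d) for d in (1, 2, 3, 10)]
frac = missing_step_fraction(series)
# 10 expected daily steps, only 4 present -> 0.6 missing (> 20% -> row-based mode)
```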
#### Leading or trailing zeros {: #leading-or-trailing-zeros }
Just as for excess zeros, this check works to detect zeros that are used to fill in missing values. It works for the special case where 0s are used to fill in missing values in the beginning or end of series that started later or finished earlier than others.
**How they are detected**: DataRobot estimates a total rate for zeros in each series and performs a statistical test to identify the number of consecutive zeros that cannot be considered a natural sequence of zeros.
**How they are handled**: If that is not the intended behavior, make corrections in the dataset and recreate the project.
#### Infrequent negative values {: #infrequent-negative-values }
Data with excess zeros in the target can be modeled with a special [two-stage model](model-ref#two-stage-models) for zero-inflated cases. This model is only available when the min value of the target is zero (that is, a single negative value will invalidate its use). In sales data, for example, this can happen when returns are recorded along with sales. This data quality check identifies a negative value when two-stage models are appropriate and provides a warning to correct the target if the desire is to enable zero-inflated modeling and other additional blueprints.
**How they are detected**: DataRobot checks whether fewer than 2% of target values are negative; if so, it treats the project as zero-inflated.
**How they are handled**: DataRobot surfaces a warning message.
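A minimal sketch of the documented 2% rule (the function name and exact comparison are hypothetical):

```python
def infrequent_negative_warning(target):
    """Warn when a small share of negatives would block zero-inflated modeling.

    Two-stage (zero-inflated) blueprints require a minimum target of 0, so a
    handful of negatives (e.g., returns mixed into sales) disables them.
    Illustrative sketch of the documented 2% threshold.
    """
    share = sum(1 for v in target if v < 0) / len(target)
    return 0 < share < 0.02  # some negatives, but few enough to look accidental

sales = [0, 0, 14, 7, 0, 22, 9, 0, 11, 5] * 20 + [-3]  # one return in 201 rows
print(infrequent_negative_warning(sales))  # True
```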
### New series in validation {: #new-series-in-validation }
Depending on the project settings (training and validation partition sizes), a multiseries project might be configured so that a new series is introduced at the end of the dataset and therefore isn't part of the training data. For example, this could happen when a new store opens. This check returns an information message indicating that the new series is not within the training data.
**How they are detected**: DataRobot detects when more than 20% of series are new (meaning that they are not in the training data).
**How they are handled**: DataRobot surfaces an informational message.
### Missing images {: #missing-images }
When an image dataset is used to build a [Visual AI project](visual-ai/index), the CSV contains paths to images contained in the provided ZIP archive. These paths can be missing, refer to an image that does not exist, or refer to an invalid image. A missing path is not necessarily an issue as a row could contain a variable number of images or simply not have an image for that row and column. Click **Preview Log** for a more detailed view:

In this example, row 1 reports a referenced file name that did not exist in the uploaded file (1), and row 2 reports a row that was missing an image path (2). The log provides both the nature of the issue as well as the row in which the problem occurred. The log previews up to 100 rows; choose **Download** to export the log and view additional rows.
**How they are detected**: DataRobot checks each image path provided to ensure it refers to an image that exists and is valid.
**How they are handled**: For paths that fail to resolve, DataRobot attempts to find the intended image and replace the problematic path. In the event that an auto-correction is not possible, the problematic path is removed. If the image was invalid, the path is removed.
All missing images, paths that fail to resolve (even when automatically fixed), and invalid images are logged and [available for viewing](#visual-ai-assessment-details).
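To illustrate the kind of validation involved (a sketch, not DataRobot's logic), the check below resolves each path against the uploaded archive root and logs the problem type along with the row number; the function name and issue strings are hypothetical:

```python
import tempfile
from pathlib import Path

def audit_image_paths(rows, archive_root):
    """Log image-path problems per row: missing path, nonexistent file,
    or an unrecognized image extension. Illustrative only."""
    valid_ext = {".png", ".jpg", ".jpeg", ".gif", ".bmp", ".tiff"}
    log = []
    for i, path in enumerate(rows, start=1):
        if not path:
            log.append((i, "missing image path"))
            continue
        full = Path(archive_root) / path
        if not full.exists():
            log.append((i, "referenced file not found"))
        elif full.suffix.lower() not in valid_ext:
            log.append((i, "not a recognized image format"))
    return log

# Demo: an archive containing only cat.png, referenced by three CSV rows.
root = tempfile.mkdtemp()
Path(root, "cat.png").touch()
rows = ["cat.png", "", "dog.png"]
for row, issue in audit_image_paths(rows, root):
    print(row, issue)
# 2 missing image path
# 3 referenced file not found
```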
### Data quality check logic summary {: #data-quality-check-logic-summary }
The following table summarizes the logic behind each data quality check:
| Check / Run | Detection logic | Handling | Reported in... |
|----------------|-------------------|-----------|-----------------|
| Outliers / EDA2 | <a target="_blank" href="https://jsdajournal.springeropen.com/articles/10.1186/s40488-015-0031-y">Ueda's algorithm</a> | Linear: Flag added to feature in blueprint<br />Tree: Handled automatically | **Data > [Histogram](histogram#histogram-chart)** |
| Multicategorical format error / EDA1 | Meets any of the following three conditions: <ul><li>Value is not valid JSON</li> <li>Value does not represent a list</li> <li>List entry contains an empty string</li></ul> | Feature is not identified as multicategorical | Data Quality Assessment log |
| Inliers / EDA1 | Value is not an outlier; frequency is an outlier | Flag added to feature in blueprint | **Data > [Frequent Values](histogram#frequent-values-chart)** |
| Excess zeros / EDA1 | Frequency is an outlier; value is 0 | Flag added to feature in blueprint | **Data > [Frequent Values](histogram#frequent-values-chart)** |
| Disguised Missing Values / EDA1 | Meets the following three conditions: <ul><li>Value is an outlier </li> <li>Frequency is an outlier</li> <li>Value matches one of these patterns: <ul><li> Is two or more digits and all digits are the same </li> <li> Begins with “1”, followed by multiple zeros</li><li> Is -1, 98, or 97 </li></ul></li></ul> | Median imputed; flag added to feature in blueprint | **Data > [Frequent Values](histogram#frequent-values-chart)** |
| [Target leakage](#target-leakage) / EDA2 | [Importance](model-ref#data-summary-information) score for each feature, calculated using Gini Norm metric. Threshold levels for reporting are moderate-risk (0.85) or high-risk (0.975). | High-risk leaky features excluded from Autopilot (using "Leakage Removed" feature list) | **Data** page; optionally, filter by issue type |
| Missing images / EDA1 | Empty cell, missing file, broken link | Links are fixed automatically | Data Quality Assessment log |
| Imputation leakage / EDA2 (pre-feature derivation) | Target leakage with `is_imputed` as the target applied to KA features. Only checked for projects with time series data prep applied to the dataset. | Remove feature from KA features | **Data** page; optionally, filter by issue type |
| Pre-derived lagged features / EDA2 | Features equal to target(t-1), target(t-2) ... target(t-8) | Excluded from derivation | **Data** page; optionally, filter by issue type |
| Inconsistent gaps / EDA2 | [Irregular time steps](ts-flow-overview#time-steps) | Model runs in a row-based mode | Message in the time-aware modeling configuration |
| Leading/trailing zeros / EDA2 | For series starting/ending with 0, compute probability of consecutive 0s; flag series with <5% probability | User correction | **Data** page; optionally, filter by issue type |
| Infrequent negative values / EDA1 | Fewer than 2% of values are negative | User correction | Warning message |
| New series in validation / EDA1 | More than 20% of series not seen in training data | User correction | Informational message |
## Feature considerations {: #feature-considerations }
Consider the following when working with the Data Quality Assessment capability:
* For disguised missing values, inlier, and excess zero issues, automated _handling_ is only enabled for linear and Keras blueprints, where they have proven to reduce model error. Detection is applied to all blueprints.
* You cannot disable automated imputation handling.
* A public API is not yet available.
* Automated feature engineering runs on raw data (instead of removing all excess zeros and disguised missing values before calculating rolling averages).
|
data-quality
|
---
title: Exploratory Spatial Data Analysis (ESDA)
description: DataRobot Location AI provides tools to conduct ESDA. The tools let you interactively visualize and aggregate target, numeric, and categorical features on a map.
---
# Exploratory Spatial Data Analysis (ESDA) {: #exploratory-spatial-data-analysis-esda }
DataRobot Location AI provides a variety of tools for conducting ESDA within the DataRobot AutoML environment, including geometry map visualizations, categorical/numeric thematic maps, and smart aggregation of large geospatial datasets. Location AI’s modern web mapping tools allow you to interactively visualize, explore, and aggregate target, numeric, and categorical features on a map.
## Location visualization {: #location-visualization }
Within the **Data** tab, you can visualize and explore the spatial distribution of observations by expanding location features from the list and selecting the **Geospatial Map** link. Clicking **Compute feature over map** creates a chart showing the distribution on a map.
By default, Location AI displays a **Unique map** visualization depicting individual rows in the dataset as unique geometries. You can:
* Pan the map by holding left-click (or equivalent touch gesture) and moving it.
* Zoom in by double-clicking (or equivalent touch gesture).
* Use the zoom controls in the top-right corner of the map panel to zoom in and out.

Within the **Unique map** view, rows from the input dataset that are co-located in space are aggregated; the map legend in the top-left corner of the map panel displays a color gradient that represents counts of co-located points at a given location. Hovering over a geometry produces a pop-up displaying the count of co-located points and the coordinates of the location at that geometry. The opacity of the data can be controlled in **Visualization Settings**.
When the number or complexity of input geometries meets a certain threshold, Location AI automatically aggregates geometries into a **Kernel density map** to enhance the visualization experience and interpretability.
## Feature Over Space {: #feature-over-space }
In addition to visualizing the spatial distribution of the input geometries, Location AI also displays distributions of numeric and categorical variables on the **Geospatial Map**. Within the **Data** tab, navigate to any numeric or categorical feature, select **Geospatial Map**, and click **Calculate Feature Over Map** to create the visualization.

By default, the **Feature Over Space** visualization displays a thematic map of unique locations with feature values depicted as colors. For geometries that are co-located spatially, the average value for the co-located locations is displayed. For numeric variables, you can change the metric used for the display by selecting “min”, “max”, or “avg” from the **Aggregation** dropdown menu at the bottom-left of the map panel. For categorical variables, the mode of the co-located categories is displayed. When the number of unique geometries grows large, DataRobot automatically aggregates individual geometries to enhance the visualization.
## Kernel density map {: #kernel-density-map }
A **Kernel density map** collects multiple observations within each given kernel and displays aggregated statistics with a color gradient. For location features, the count, min, max, and average can be selected from the **Aggregation** dropdown. For numeric features, the min, max, or average is available. For categorical features, the mode is displayed. Several visualization customizations are available in **Visualization Settings**.

## Hexagon map {: #hexagon-map }
In addition to viewing kernel density and unique maps of features, you can also view hexagon map visualizations. Select **Hexagon map** from the **Visualization** dropdown at the bottom-left of the map panel. Once selected, the map visualization displays hexagon-shaped cells. For location features, the count, min, max, and average can be selected from the Aggregation dropdown. For numeric features, the min, max, or average is available. For categorical features, the mode is displayed. Use the **Visualization settings** in the bottom-right of the map panel to adjust the settings.

## Heat map {: #heat-map }
You can also view heat map visualizations for geometry and numeric features. Heat map visualization is not available for categorical features. Select **Heat map** from the **Visualization** dropdown at the bottom-left of the map panel. Use the **Visualization settings** in the bottom-right of the map panel to adjust the settings.

|
lai-esda
|
---
title: Feature Associations
description: How to navigate and use the Feature Associations tab, which provides a matrix to help you track and visualize associations within your data.
---
# Feature Associations {: #feature-associations }
Accessed from the **Data** page, the **Feature Associations** tab provides a matrix to help you track and visualize associations within your data. This information is derived from different metrics that:
* Help to determine the extent to which features depend on each other.
* Provide a protocol that partitions features into separate clusters or "families."
The matrix is:
* Created during [EDA2](eda-explained#eda2) using the [feature importance](model-ref#data-summary-information) score.
* Based on numeric and categorical features found in the [Informative Features](feature-lists#automatically-created-feature-lists) feature list.
To use the matrix, click the **Feature Associations** tab on the **Data** page.

The page displays a [matrix](#view-the-matrix) (1) with an accompanying [details pane](#details-pane) (2) for more specific information on clusters, general associations, and association pairs. From the details pane, you can [view associations](#feature-association-pairs) and relationships between specific feature pairs (3). Below the matrix is a set of [matrix controls](#control-the-matrix-view) (4) to modify the view.
The **Feature Associations** matrix provides information on [association strength](#what-are-associations) between pairs of numeric and categorical features (that is, num/cat, num/num, cat/cat) and feature clusters. <em>Clusters</em>, families of features denoted by color on the matrix, are features partitioned into groups based on their similarity. With the matrix's intuitive visualizations you can:
* Quickly perform association analysis and better understand your data.
* Gain understanding of the strength and nature of associations.
* Detect families of pairwise association clusters.
* Identify clusters of high-association features prior to model building (for example, to choose one feature in each group for model input while differencing the others).
## View the matrix {: #view-the-matrix }
Once EDA2 completes, the matrix becomes available. It lists up to the top 50 features, sorted by cluster, on both the X and Y axes. Look at the intersection of a feature pair for an indication of their level of co-occurrence. By default, the matrix displays [Mutual Information](#associations-tab) values.

The following are some general takeaways from looking at the default matrix:
* The target feature is bolded in white.
* Each dot represents the association between two features (a feature pair).
* Each cluster is represented by a different color.
* The opacity of color indicates the level of co-occurrence (association or dependence), from 0 to 1, between the feature pair. Levels are measured by the set metric, either [mutual information](#associations-tab) or [Cramer's V](#more-about-metrics).
* Shaded gray dots indicate that the two features, while showing some dependence, are not in the same cluster.
* White dots represent features that were not categorized into a cluster.
* The "Weaker ... Stronger" associations legend is a reminder that the opacity of the dots in the matrix represents the strength of the metric score.
Clicking points in the matrix updates the detail pane to the right. To reset to the default view, click again in the selected cell. Use the [controls](#control-the-matrix-view) beneath the matrix to change the display criteria.
You can also filter the matrix by importance, which instead ranks your top 50 features by ACE ([importance](model-ref#data-summary-information)) score for binary classification, regression, and multiclass projects.
### Work with the display {: #work-with-the-display }
Click on any point in the matrix to highlight the association between the two features:

Drag the cursor to outline any section of the matrix. DataRobot zooms the matrix to display only those points within your drawn boundary. Click **Reset Zoom** in the control pane to return to the full matrix view.

Note that you can export either the zoomed or full matrix by clicking  in the upper left.
## Details pane {: #details-pane }
By default, with no matrix cells selected, the details pane:
* Displays the strongest associations ([Associations](#associations-tab) tab) found, ranked by association [metric](#more-about-metrics) score.
* Displays a list of all identified clusters ([Clusters](#clusters-tab) tab) and their average [metric](#more-about-metrics) score.
* Provides access to charting of [feature pair association](#feature-association-pairs) details.

The listings are based on internal [calculations](#how-associations-are-calculated) DataRobot runs when creating the matrix.
### Associations tab {: #associations-tab }
Once a cell is selected in the matrix, the **Associations** tab updates to reflect information specific to the selected feature pair:

The table below describes the fields:
| Category | Description |
|---------------|-------------|
| **"*feature_1*" & "*feature_2*"** | :~~: |
| Cluster | The cluster that both features of the pair belong to, or if from different clusters, displays "None." |
| *Metric name* | A measure of the dependence features have on each other. The value is dependent on the metric set, either [Mutual Information](#more-about-metrics) or [Cramer's V](#more-about-metrics). |
| **Details for "*feature_1*" <br/>Details for "*feature_2*"** | :~~: |
| Importance | The normalized importance score, rounded to three digits, indicating a feature's importance to the target. This is the same value as that displayed on the **Data** page. |
| Type | The feature's data type, either numeric or categorical. |
| Mean | From the **Data** page, the mean of the feature value. |
| Min/Max | From the **Data** page, the minimum and maximum values of the feature. |
| **Strong associations with "*feature_1*"** | :~~: |
| *feature\_1* | When you select a feature's intersection with itself on the matrix, a list of the five most strongly associated features, based on metric score. |
### Clusters tab {: #clusters-tab }
By default DataRobot displays all found clusters, ranked by the average [metric](#more-about-metrics) score. These rankings illustrate the clusters with the strongest dependence on each other. The displayed name is based on the feature in the cluster with the highest importance score relative to the target. Clicking on a point in the matrix changes the **Clusters** tab display to report:
* Score details for the cluster.
* A list of all member features.

## Feature association pairs {: #feature-association-pairs }
Click **View Feature Association Pairs** to open a modal that displays plots of the individual association between the two features of a feature pair. From the resulting insights, you can see the values that are impacting the calculation, the "metrics of association." Initially, the plots auto-populate to the points selected in the matrix (which are also those highlighted in the details pane). For each display, DataRobot displays the cluster that the feature with the highest metric score belongs to as well as the metric association score for the feature pair. You can change features directly from the modal (and the cluster and score update):

The insight is the same whether accessed from the **Clusters** or the **Associations** tab. Once displayed, click **Download PNG** to save the insight.
There are three types of plots that display, type being dependent on the data type:
* Scatter plots for numeric vs. numeric features.
* Box and whisker plots for numeric vs. categorical features.
* Contingency tables for categorical vs. categorical features.
The following shows an example of each type, with a brief "reading" of what you can learn from the insight.
### Scatter plots {: #scatter-plots }
When comparing two numeric features, DataRobot displays a scatter plot with the X axis spanning the range of values. The dot size, or overlapping dots, represents the frequency of the value.

For example, in the chart above you might assume there's no discernible dependence of 12m_interest on reviews_seasonal, and as a result, the mutual information they share is very low.
### Box and whisker plots {: #box-and-whisker-plots }
Box and whisker plots graphically display the upper and lower quartiles for a group of data. They are useful for helping to determine whether a distribution is skewed and/or whether the dataset contains a problematic number of outliers. Depending on which feature sets the X or Y axis, the plot may rise vertically or lie horizontally. In either case, the end points represent the upper and lower extremes, with the box illustrating the highest occurrence of a value. DataRobot uses box and whisker plots to create insights for numeric and categorical feature pairs.

In the example above, the plot shows most of the variation of the online_sites feature occurs in the E1 locality. Among the other localities, there is very little dispersion.
### Contingency tables {: #contingency-tables }
When both features are categorical, DataRobot creates a contingency table, which shows a frequency distribution of values for the selected features. The table can contain up to six bins, each representing a unique feature value. For features with more than five unique values, the top five are displayed with the rest accumulated in a bin named Other.

Read the table as follows: The dots are all bigger in the 12 month bucket because there are more total reviews than in the 9 month bucket. Since there is not a lot of variation in the dot sizes across the reviews_department buckets, knowledge about the last_response doesn't improve knowledge about reviews_department. The result is a low metric score.
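The table construction described above (cross-tabulating two categorical columns and folding infrequent values into an "Other" bin) can be sketched as follows; the feature values and function name are illustrative:

```python
from collections import Counter

def contingency_table(feat_a, feat_b, top=5):
    """Cross-tabulate two categorical features, folding values outside each
    feature's top N into an "Other" bin. Illustrative sketch only."""
    def fold(col):
        keep = {v for v, _ in Counter(col).most_common(top)}
        return [v if v in keep else "Other" for v in col]
    a, b = fold(feat_a), fold(feat_b)
    return Counter(zip(a, b))  # (value_a, value_b) -> frequency

# Hypothetical categorical columns from the same dataset.
dept = ["toys", "toys", "books", "books", "games", "toys"]
resp = ["fast", "slow", "fast", "fast", "slow", "fast"]
for (x, y), count in sorted(contingency_table(dept, resp).items()):
    print(x, y, count)
# books fast 2
# games slow 1
# toys fast 2
# toys slow 1
```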
## Control the matrix view {: #control-the-matrix-view }
You can modify the matrix view by changing the sort criteria or the metric used to calculate the association. These controls are available below the matrix:

The **Sort by** option allows you to sort by:
* Cluster (the default).
* Importance to the target (value from the **Data** page).
* Alphabetically.
The [**Metric**](#more-about-metrics) selection determines how DataRobot calculates the association between feature pairs, using either the Mutual Information or Cramer's V correlation algorithms.
The **Feature List** selection allows you to compute feature association for any of the project's feature lists. If you select a list, the page refreshes and displays the matrix for the selected feature list.
Additionally, if you previously highlighted a section of the matrix for closer observation, click **Reset Zoom** to return to the full matrix view.
## More info... {: #more-info }
The following sections include:
* A general discussion about [associations](#what-are-associations).
* Understanding the mutual information and Cramer's V [metrics](#more-about-metrics).
* How associations are [calculated](#how-associations-are-calculated).
### What are associations? {: #what-are-associations }
There is a lot of terminology to describe the relationship between a feature pair—feature associations, mutual dependence, levels of co-occurrence, and correlations (although technically this is somewhat different) to name the more common examples. The **Feature Association** tab is a tool to help visualize the association, both through a wide-angle lens (the full matrix) and close up (both matrix zoom and feature association pair details).
Looking at the matrix, each dot tells you, "If I know the value of one of these features, how accurate will my guess be as to the value of the other?" The metric value puts a numeric value on that answer. The closer the metric value is to 0, the more independent the features are of each other. Knowing one doesn't tell you much about the other. A score of 1, on the other hand, says that if you know <em>X</em>, you know <em>Y</em>. Intermediate values indicate a pattern, but aren't completely reliable. The closer they are to "perfect mutual information" or 1, the higher their metric score and the darker their representation on the matrix.
### More about metrics {: #more-about-metrics }
The metric score is responsible for the ordering and positioning of clusters and features in the matrix and the detail pane. You can select either the Mutual Information (the default) or the Cramer's V metric. These metrics are well-documented on the internet:
* A technical overview of Mutual Information on <a target="_blank" href="https://en.wikipedia.org/wiki/Mutual_information">Wikipedia</a>.
* A longer discussion of Mutual Information on <a target="_blank" href="http://www.scholarpedia.org/article/Mutual_information">Scholarpedia</a>, with examples.
* A technical overview of Cramer's V on <a target="_blank" href="https://en.wikipedia.org/wiki/Cramér%27s_V">Wikipedia</a>.
* A Cramer's V <a target="_blank" href="https://www.spss-tutorials.com/cramers-v-what-and-why/">tutorial</a> of "what and why."
Both metrics measure dependence between features and selection is largely dependent on preference and familiarity. Keep in mind that Cramer's V is more sensitive and, as such, when features depend weakly on each other it reports associations that Mutual Information may not.
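For intuition, both metrics can be computed from the same contingency table of counts. The sketch below uses the textbook formulas (mutual information in nats; Cramer's V from the chi-squared statistic) and is not DataRobot's implementation:

```python
from math import log, sqrt

def mi_and_cramers_v(table):
    """Compute Mutual Information (nats) and Cramer's V from a contingency
    table of counts (list of rows). Plain-formula sketch, illustrative only."""
    n = sum(sum(row) for row in table)
    row_tot = [sum(row) for row in table]
    col_tot = [sum(col) for col in zip(*table)]
    mi = chi2 = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            exp = row_tot[i] * col_tot[j] / n  # expected count if independent
            chi2 += (obs - exp) ** 2 / exp
            if obs:
                mi += (obs / n) * log(obs * n / (row_tot[i] * col_tot[j]))
    v = sqrt(chi2 / (n * (min(len(row_tot), len(col_tot)) - 1)))
    return mi, v

# Perfectly associated features: knowing the row determines the column.
mi, v = mi_and_cramers_v([[50, 0], [0, 50]])
print(round(mi, 3), round(v, 3))  # 0.693 1.0  (MI = ln 2 for two balanced classes)
```

For independent features (for example, `[[25, 25], [25, 25]]`) both scores are 0, matching the interpretation above: a value near 0 means knowing one feature tells you little about the other.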
### How associations are calculated {: #how-associations-are-calculated }
When calculating associations, DataRobot selects the top 50 numeric and categorical features (or all features if fewer than 50). "Top" is defined as those features with the highest importance score, the value that represents a feature's association with the target. Data from those features is then randomly subsampled to a maximum of 10k rows.
Note the following:
* For associations, DataRobot performs quantile binning of numerical features and does no data imputation. Missing values are grouped as a new bin.
* Outlying values are excluded from correlational analysis.
* For clustering, features below an association threshold of 0.1 are eliminated.
* If all features are relatively independent of each other—no distinct families—DataRobot displays the matrix but all dots are white.
* Features missing over 90% of their values are excluded from calculations.
* High-cardinality categorical features with more than 2000 values are excluded from calculations.
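As a rough sketch of the subsampling and quantile-binning steps described above (the `prepare_for_association` name, bin count, and seed are illustrative, not DataRobot's parameters):

```python
import random
from statistics import quantiles

def prepare_for_association(values, n_bins=10, max_rows=10_000, seed=0):
    """Quantile-bin a numeric column for association analysis, keeping
    missing values (None) as their own bin. Illustrative sketch only."""
    random.seed(seed)
    if len(values) > max_rows:
        values = random.sample(values, max_rows)  # subsample large columns
    present = [v for v in values if v is not None]
    cuts = quantiles(present, n=n_bins)  # n_bins - 1 cut points

    def bin_of(v):
        if v is None:
            return "missing"          # no imputation: missing is a new bin
        return sum(v > c for c in cuts)  # bin index 0 .. n_bins - 1

    return [bin_of(v) for v in values]

binned = prepare_for_association([None, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10], n_bins=5)
print(binned[0])  # 'missing' -- the None value lands in its own bin
```

The binned columns can then be cross-tabulated and scored with a dependence metric such as mutual information.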
|
feature-assoc
|
---
title: Analyze data
description: Interpret the findings and visualizations created by DataRobot.
---
# Analyze data {: #analyze-data }
These sections help you interpret the findings and visualizations created by DataRobot.
Topic | Describes...
----- | ------
[Data Quality Assessment](data-quality) | Understand a dataset's Data Quality Assessment results, including the logic DataRobot applies to detect, and often repair, common data quality issues.
**Post data ingest analysis (EDA1)** | :~~:
[Feature details](histogram) | Interpret histograms, frequent values charts, and transformations.
[EDA1](eda-explained#eda1) | View summary statistics based on a sample of your data.
[ESDA](lai-esda) | Interactively visualize, explore, and aggregate target, numeric, and categorical features on a map.
[Fast EDA for large datasets](fast-eda) | Understand Fast Exploratory Data Analysis (EDA) for large datasets, and how to apply early target selection.
[Over Time chart](ts-leaderboard#understand-a-features-over-time-chart) | Review time-aware visualizations of how a feature changes over time (time-aware only).
**Post modeling analysis (EDA2)** | :~~:
[EDA2](eda-explained#eda2) | View summary statistics based on the portion of the data used for EDA1, excluding rows that are also in the holdout data and rows where the target is `N/A`.
[Feature Associations](feature-assoc) | Interpret feature correlations.
|
index
|
---
title: Over Time chart
description: How to use the Over Time chart, which helps identify trends and potential gaps in your data, for all time-aware projects (OTV, single series, and multiseries).
---
# Over Time chart {: #over-time-chart}
The **Over time** chart helps you identify trends and potential gaps in your data by displaying, for both the original modeling data and the derived data, how a feature changes over the primary date/time feature. It is available for all time-aware projects (OTV, single series, and multiseries). For time series, it is available for each user-configured forecast distance. See also [Understand a feature's Over Time chart](ts-leaderboard#understand-a-features-over-time-chart).
Using the page's tools, you can focus on specific time periods. Display options for OTV and single-series projects differ from those of multiseries projects. Note that to view the **Over time** chart you must first compute chart data. Once computed:
1. Set the chart's granularity. The resolution options are auto-detected by DataRobot. All project types allow you to set a resolution (this option is under **Additional settings** for multiseries projects).

2. Toggle the histogram display on and off to see a visualization of the bins DataRobot is using for EDA1.
3. Use the date range slider below the chart to highlight a specific region of the time plot. For smaller datasets, you can drag the sliders to a selected portion. Larger datasets use block pagination.

4. For multiseries projects, you can set both the forecast distance and an individual series (or average across series) to plot:

For time series projects, the **Data** page also provides a [Feature Lineage](ts-leaderboard#feature-lineage-tab) chart to help understand the creation process for derived features.
|
over-time
|
---
title: Feature details
description: How to work with a feature on the Data page, to view its details and also (in some cases) modify its type.
---
# Feature details {: #feature-details }
The **Data** page displays tags to indicate a variety of information that DataRobot uncovered while computing EDA1. You can also [click a feature name](#view-feature-details) to view its details.
## Data page informational tags {: #data-page-informational-tags }

Informational tags on the **Data** page include:
| Tag | Description |
|-------|----------------|
| Duplicate | A feature column is duplicated in the ingest dataset. |
| Empty | Column contains no values. |
| Few values | Too few values, relative to the size of the dataset, for DataRobot to extrapolate meaningful information from the feature. This is not an indicator of the number of unique values, but rather of domination by a single value, making the feature inappropriate for modeling. Specifically:<ul><li> A numeric with no missing values and only one unique value. </li><li> A variable in which >99.9% is the same value </li></ul> |
| Too many values | Too many values, relative to the size of the dataset, for DataRobot to extrapolate meaningful information from the feature. For categorical features, the label is applied if: `[ number of unique values ] > [ number of rows ] / 2` |
| Reference ID* | Column contains reference IDs (unique sequential numbers). |
| Associated with Target | Column was derived from target column. |
| [Target leakage](data-quality#target-leakage) | Indicates a feature whose value cannot be known at the time of prediction. |
??? note "* Reference ID calculations"
A feature is considered a reference ID if *all* of the following apply:
* The feature is an integer and not a date.
* The number of rows in the data is greater than 2000.
* Feature values are unique (`[ number of unique values] = [number of rows]`)
* Feature values are "compact." That is, the highest and lowest values are not more than `100 * rows` apart.
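These conditions translate directly into code. The sketch below applies the documented rules (omitting the not-a-date check for brevity); the function name is hypothetical and the real implementation may differ:

```python
def looks_like_reference_id(values):
    """Apply the documented Reference ID conditions to a numeric column.

    Sketch of the stated rules: integer values, more than 2000 rows, all
    values unique, and a "compact" range (max - min <= 100 * rows).
    The is-not-a-date condition is omitted for brevity.
    """
    n = len(values)
    if n <= 2000:
        return False
    if not all(isinstance(v, int) for v in values):
        return False
    if len(set(values)) != n:                    # values must be unique
        return False
    return max(values) - min(values) <= 100 * n  # values must be "compact"

ids = list(range(100_000, 102_500))  # 2,500 unique sequential integers
print(looks_like_reference_id(ids))  # True
```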
## View feature details {: #view-feature-details }
Once DataRobot displays features on the **Data** page, you can click a feature name to view its details and also (in some cases) modify its type. The options available are dependent on variable type:
| Option | Description | Variable Type |
|-----------|-------------|-----------------|
| _Tabs_ | :~~: | :~~: |
| [Histogram](#histogram-chart) | Buckets numeric feature values into equal-sized ranges to show a rough distribution of the variable. | numeric, summarized categorical, [multicategorical](multilabel#histogram-tab) |
| [Frequent Values](#frequent-values-chart) | Plots the counts of each individual value for the most frequent values of a feature. If there are more than 10 categories, DataRobot displays values that account for 95% of the data; the remaining 5% of values are bucketed into a single "All Other" category. | numeric, categorical, text, boolean |
| [Table](#table-tab) | Provides a table of feature values and their occurrence counts. Note that if the value displayed contains a leading space, DataRobot includes a tag, `leading space`, to indicate as much. This helps clarify why a particular value may show twice in the histogram (for example, ` 36 months` and `36 months` are both represented). | numeric, categorical, text, boolean, summarized categorical, multilabel |
| [Illustration](#illustration-table) | Shows how summarized categorical data—features that host a collection of categories—is represented as a feature. See also the [summarized categorical tab differences](#summarized-categorical-features) for information on Overview and Histogram. | summarized categorical |
| [Category Cloud](analyze-insights#category-cloud-insights) | After EDA2 completes, displays the keys most relevant to their corresponding feature in Word Cloud format. This is the same Word Cloud that is available from the Category Cloud on the **Insights** page. From the **Data** page you can more easily compare Clouds across features; on the **Insights** page you can compare Word Clouds for a project's categorically-based models. | summarized categorical |
| [Feature Statistics](multilabel#feature-statistics-tab) | Reports overall multilabel dataset characteristics, as well as pairwise statistics for pairs of labels and the occurrence percentage of each label in the dataset. | multilabel
| [Over Time (time-aware only)](ts-leaderboard#understand-a-features-over-time-chart) | Identifies trends and potential gaps in data by displaying, for both the original modeling data and the derived data, how a feature changes over the primary date/time feature. | numeric, categorical, text, boolean |
| Feature Lineage [(time series)](ts-leaderboard#feature-lineage-tab) or [(Feature Discovery)](fd-gen#project-data-tab) | Provides a visual description of how a derived feature was created. | numeric, categorical, text, boolean |
| _Actions_ | :~~: | :~~: |
| [Var Type Transform](feature-transforms#variable-type-transformations) | Provides a dialog to modify the variable type. (Not shown if the variable type for this feature was previously transformed.) | numeric, categorical, text |
| [Transformation](feature-transforms#create-transformations) | Shows details for a selected transformed feature and a comparison of the transformed feature with the parent feature. (Applies to transformed features only.) | numeric, boolean |
!!! note
The values and displays for a feature may differ between EDA1 and EDA2. For EDA1, the charts represent data straight from the dataset. After you have selected a target and built models, the data calculations may have fewer rows due to, for example, holdout or missing values. Additionally, after EDA2 DataRobot displays [average target values](#average-target-values) which are not yet calculated for EDA1.
## Histogram chart {: #histogram-chart }
{% include 'includes/histogram-include.md' %}
### Change the distribution and display {: #change-the-distribution-and-display }
DataRobot breaks the data into several bins; the size of each bin depends on the number of rows in your dataset. You can change the number of bins to change the distribution range. The bin options depend largely on the number of unique values in the dataset. To change the distribution range, use the dropdown:

For classification projects, you can also (after EDA2) change the basis of the display to fill bins based on the number of rows or percentage of target value. The displays of the histogram and average target value overlay also change to match your selection.
### Display summaries {: #display-summaries }
To see the details of a selected bin, hover over the bin until a popup displays:

| | Element | Description |
|---|---|---|
|  | Value | Displays the bin range located on the X-axis. |
|  | Rows | Displays the number of rows in the bin (located on the left Y-axis).|
|  | Percentage | Displays the [average target value](#average-target-values) (located on the right Y-axis). |
### Calculate outliers {: #calculate-outliers }
Outliers, observation points that lie far from the rest of the sample, may be the result of data variability. They can also represent data errors, in which case you may want to exclude them from the histogram. Outlier detection—run as part of [EDA1](eda-explained) using a combination of heuristics—is strictly a histogram visualization tool and does not influence the modeling process.
Outliers are generally calculated as a collection of two ranges:
* `p25` represents the values in the first quartile of a data distribution.
* `p75` represents the values in the third quartile of a data distribution.
* `IQR` is the Interquartile Range, equal to the third quartile minus the first quartile: `IQR = p75-p25`.
The ranges are then calculated as the first quartile minus IQR (`p25-IQR`) and the third quartile plus IQR (`p75+IQR`). Note that this is a general overview of outlier calculation. Additional calculations are required depending on how these ranges compare to the minimal and maximal values of the data distribution. There are also additional heuristics used for corner cases that cover how DataRobot calculates IQR and the final values of the outlier threshold.
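The general calculation above can be sketched in a few lines. Note that this is an approximation for illustration only; DataRobot's additional heuristics for corner cases are not shown here:

```python
def outlier_bounds(values):
    """Approximate the outlier thresholds described above: the quartiles,
    their difference (IQR), and the ranges p25-IQR and p75+IQR.
    Illustrative only; DataRobot applies extra heuristics not shown."""
    data = sorted(values)
    n = len(data)

    def quantile(q):
        # Linear interpolation between the closest ranks.
        pos = q * (n - 1)
        lo, hi = int(pos), min(int(pos) + 1, n - 1)
        frac = pos - lo
        return data[lo] * (1 - frac) + data[hi] * frac

    p25, p75 = quantile(0.25), quantile(0.75)
    iqr = p75 - p25
    return p25 - iqr, p75 + iqr

lower, upper = outlier_bounds([1, 2, 3, 4, 5, 6, 7, 8, 100])
```

For this sample, the bounds are `(-1.0, 11.0)`, so the value `100` falls outside the upper threshold and would be flagged as an outlier.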
Check the **Show outliers** box to initiate a calculation that identifies the rows containing outliers. DataRobot then re-displays the histogram with outliers included:

Check and uncheck the box to switch the histogram display between excluding and including outliers.
Note that DataRobot reshuffles the bin values based on the display. With outliers excluded, there are more bins, each containing a smaller number of rows. With outliers included, each bin contains a greater number of rows because it spans a wider range of values.
The bin selection dropdown works as usual, regardless of the outlier display setting.
## Frequent Values chart {: #frequent-values-chart }
The Frequent Values chart is the default display for categorical, text, and boolean features, although it is also available for other feature types. The display depends on the results of the [data quality](data-quality#interpret-the-histogram-tab) check. With no data quality issues:

In many cases, you can change the display using the **Sort by** dropdown. By default, DataRobot sorts by frequency (**Number of rows**), from highest to lowest. You can also sort by <<em>feature_name</em>>, which displays either alphabetically or, in the case of numerics, from low to high. The [**Export** link](export-results) allows you to download an image of the Frequent Values chart as a PNG file.
After EDA2 completes, the Frequent Values chart also displays an [average target value](#average-target-values) overlay.
## Summarized categorical features {: #summarized-categorical-features }
The summarized categorical variable type is used for features that host a collection of categories (for example, the count of a product by category or department). If your original dataset does not have features of this type, DataRobot creates them (where appropriate as described below) as part of EDA2. The summarized categorical variable type offers unique feature details in its [**Overview**](#overview-tab-for-summarized-categorical), [**Histogram**](#histogram-tab-for-summarized-categorical), [**Category Cloud**](#category-cloud-tab), and [**Table**](#table-tab) tabs.
!!! note
You cannot use summarized categorical features as your target for modeling.
### Required dataset formatting {: #required-dataset-formatting }
For features to be detected as the summarized categorical variable type (shown in the Var Type column on the **Data** tab), the column in your dataset must be a valid JSON-formatted dictionary:
`{"Key1": Value1, "Key2": Value2, "Key3": Value3, ...}`
* `"Key":` must be a string.
* `Value` must be numeric (an integer or floating point value) and greater than 0.
* Each key requires a corresponding value. If there is no value for a given key, the data will not be usable.
* The column must be JSON-serializable.
The following is an example of a <em>valid</em> summarized categorical column:
`{"Book1": 100, "Book2": 13}`
An <em>invalid</em> summarized categorical column can look like any of the following examples:
* `{'Book1': 100, 'Book2': 12}`
* The keys use single quotes instead of double quotation marks (not JSON-serializable).
* `{'Book1': 'rate', 'Book2': 'rate1'}`
* The values are strings instead of positive numeric values.
* `{“Book1”, “Book2”}`
* This example is not in JSON dictionary format.
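The rules above are straightforward to check mechanically. A minimal sketch of a validator for one cell (`is_valid_summarized_categorical` is a hypothetical helper for illustration, not a DataRobot API):

```python
import json

def is_valid_summarized_categorical(cell):
    """Check one cell against the formatting rules above: a JSON
    dictionary whose keys are strings and whose values are numbers
    greater than 0. Hypothetical helper, illustrative only."""
    try:
        parsed = json.loads(cell)
    except (TypeError, ValueError):
        return False  # not JSON-serializable, e.g. single-quoted keys
    if not isinstance(parsed, dict):
        return False  # e.g. {"Book1", "Book2"} is not a dictionary
    return all(
        isinstance(v, (int, float)) and not isinstance(v, bool) and v > 0
        for v in parsed.values()
    )

is_valid_summarized_categorical('{"Book1": 100, "Book2": 13}')        # valid
is_valid_summarized_categorical('{"Book1": "rate", "Book2": "rate1"}')  # invalid: string values
```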
### Overview tab for summarized categorical {: #overview-tab-for-summarized-categorical }
The **Overview** tab presents the top 50 most frequent keys for your feature. Each key displays the percentage of rows that it appears in, its mean, standard deviation, median, min, and max. You can sort the keys by any of these fields. Most of this information is available for other types of features in the columns on the **Data** page, but for summarized categorical features each individual key has its own values for these fields.

| | Element | Description |
|---|---|---|
|  | Export | Export the list of keys and their associated values as a PNG. You can choose to include the chart title in the image and edit the filename before you download it.|
|  | Page control | Move through pages of listed keys (10 keys per page). |
|  | Histogram icon | Access the histogram for a given key. |
### Histogram tab for summarized categorical {: #histogram-tab-for-summarized-categorical }
While most of the functionality for this tab is the same as described in the [working with histograms](#histogram-chart) section above, there are some differences unique to this variable type. The histograms displayed in this tab correspond to the individual labels (keys) of a feature instead of a feature itself. The list of keys can be sorted by percentage of occurrence in the dataset's rows or alphabetically.

| | Element | Description |
|---|---|---|
|  | Search | Searches for labels. |
|  | Showing | [Changes the bin distribution](#change-the-distribution-and-display). Select the number of bins to view. |
|  | Target values | Sets the basis of the [target value display](#change-the-distribution-and-display). |
|  | Scale Y-axis for large values | Reduces the number of rows measured in the Y-axis for [large values](#viewing-large-values).|
|  | Export | Exports the histogram. |
!!! note
DataRobot automatically filters out stopwords when calculating values for the histogram.
### Viewing large values {: #viewing-large-values }
The **Scale the Y-axis for large values** option reduces the number of rows measured in the Y-axis and improves the visualization of larger values—it is common that large numbers are only represented in a few rows. Resizing the histogram above results in:

By scaling the Y-axis, the largest measured value no longer dominates the chart. As a result, the number of rows across all values is more evenly represented.
### Category Cloud for summarized categorical {: #category-cloud-for-summarized-categorical }
The **Category Cloud** tab provides insights into [summarized categorical](histogram#summarized-categorical-features) features. It displays as a [word cloud](word-cloud) and shows the keys that are most relevant to their corresponding feature.

{% include 'includes/category-cloud-include.md' %}
## Illustration table {: #illustration-table }
The **Illustration** tab shows how summarized categorical data is represented as a feature. For example, in the image below, the **Values** column contains five summarized categorical values (selected at random), displayed in JSON dictionary format as described above.

Click **Summary** to display a box that visualizes how categorical values appeared in their initial state, prior to being engineered as summarized categorical features.

## Table tab {: #table-tab }
The **Table** tab, which is the default tab for [multilabel](multilabel) projects, displays a two-column table detailing counts for the top 50 most frequent label sets in the multicategorical feature.

The table lists each key in the **Values** column, and the respective key's count in the **Count** column.
!!! note "Unicode text in the Values column"
If you are using Unicode text and it appears abnormal in the Values column, make sure your text is UTF-8 encoded.
## Average target values {: #average-target-values }
After EDA2, DataRobot displays orange circles as graph overlays on the Histogram and Frequent Values charts. The circles indicate the average target value for a bin. (The circles are connected for numeric features but not for categorical features, since histograms display a continuous range of values while the ordering of categorical values is arbitrary.)
For example, consider the feature `num_lab_procedures`:

In this example, there are 846 people who had between 44 and 49.999999 lab procedures. The average target value represented by the circle (in this case, the percent readmitted) is 37.23%. (The orange dots correspond to the right axis of the histogram.)
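The overlay value is simply the mean of the target over the rows that fall in each bin. A small illustrative sketch (binary target, 1 = readmitted; the bin edges and sample rows are placeholders):

```python
def average_target_per_bin(rows, edges):
    """Bucket (feature_value, target) pairs into the bins defined by
    `edges` and return the average target per bin, i.e. the quantity
    the orange circles display. Illustrative only."""
    sums = [0.0] * (len(edges) - 1)
    counts = [0] * (len(edges) - 1)
    for value, target in rows:
        for i in range(len(edges) - 1):
            # Each bin is [edges[i], edges[i+1]); the last bin is closed.
            if edges[i] <= value < edges[i + 1] or (
                i == len(edges) - 2 and value == edges[-1]
            ):
                sums[i] += target
                counts[i] += 1
                break
    return [s / c if c else None for s, c in zip(sums, counts)]

# Two bins, 44-50 and 50-56; three of four patients fall in the first:
avgs = average_target_per_bin(
    [(45, 1), (46, 0), (47, 1), (52, 0)], edges=[44, 50, 56]
)
```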
### How Exposure changes output {: #how-exposure-changes-output }
If you used the [Exposure](additional#set-exposure) parameter when building models for the project, the **Histogram** and **Frequent Values** tabs display graphs adjusted for exposure. In this case, the charts display:

* The <em>number of rows</em> (1) in each bin.
* The <em>sum of exposure</em> (2) in each bin. That is, the sum of the weights for all rows weighted by exposure.
* The <em>sum of the target values</em> divided by the <em>sum of the exposure</em> (3) in each bin.
### How Weight changes output {: #how-weight-changes-output }
If you set the Weight parameter for a project, DataRobot weights the number of rows and average target values by weight.
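For a single bin, the exposure-adjusted statistics described above reduce to a weighted average. A minimal sketch (illustrative only, not DataRobot's internal implementation; the weight-adjusted case is analogous, with weights in place of exposure):

```python
def exposure_adjusted_bin(rows):
    """For the rows falling in one bin, return the three values the
    adjusted chart shows: row count, sum of exposure, and sum of target
    divided by sum of exposure. `rows` holds (target, exposure) pairs.
    Illustrative sketch only."""
    n = len(rows)
    exposure_sum = sum(exp for _, exp in rows)
    target_sum = sum(tgt for tgt, _ in rows)
    adjusted_avg = target_sum / exposure_sum if exposure_sum else None
    return n, exposure_sum, adjusted_avg

# E.g., two claims observed for half a year each and one for a full year:
count, exposure, avg = exposure_adjusted_bin([(1, 0.5), (0, 0.5), (1, 1.0)])
```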
|
histogram
|
---
title: Data connections
description: To enable integration with a variety of enterprise databases, DataRobot provides a “self-service” JDBC platform for database connectivity setup.
---
# DataRobot data connections {: #datarobot-data-connections }
!!! note
If your database is protected by a network policy that only allows connections from specific IP addresses, have an administrator add all [whitelisted IPs for DataRobot](#source-ip-addresses-for-whitelisting) to your network policy. If the problem persists, contact your DataRobot representative.
To enable integration with a variety of enterprise databases, DataRobot provides a “self-service” JDBC platform for database connectivity setup. Once configured, you can read data from production databases for model building and predictions. This allows you to quickly train and retrain models on that data, and avoids the unnecessary step of exporting data from your database to a CSV file for ingest into DataRobot. It allows access to more diverse data, which results in more accurate models.
The DataRobot JDBC database connectivity solution is a standardized, platform-independent solution that does not require complicated installation and configuration. Those users with the technical abilities and permissions can establish database connections; other users can leverage those connections to solve business problems.
!!! note
By default, only users with "Can manage JDBC database drivers" permission can add, update, or remove JDBC drivers. Only users with "Can manage connectors" permission can [add connections](#create-a-new-connection). See [Roles and permissions](roles-permissions) for details on permissions.
This section includes the following:
* An overview of the [database connectivity workflow](#database-connectivity-workflow).
* Steps for [creating new connections](#create-a-new-connection).
* Information about [data connections with OAuth](#data-connection-with-oauth).
* Steps for [adding data sources](#add-data-sources).
* Steps for [sharing data connections](#share-data-connections).
## Database connectivity workflow {: #database-connectivity-workflow }
By default, users can create, modify (depending on their [role](roles-permissions#shared-data-connection-and-data-asset-roles)), and share data connections (see below for [definitions](#database-connection-terms) of the terminology used in this section). You can also create [data sources](glossary/index#data-source).
DataRobot's database connectivity workflow, described below, has two fundamental components. First, the administrator uploads JDBC drivers and configures database connections for those drivers. Then, users can import data into DataRobot for project creation and predictions, as follows:
1. From the **Data Connections** page, create [data connection](#create-a-new-connection) configuration(s).
2. From the **Start** screen or the [**AI Catalog**](catalog#add-data-from-external-connections), create [data sources](#add-data-sources)—from the data connections—to use for modeling and predictions.
    Once configured, your data sources are available both for ingest from the **Start** screen and for predictions from the [Make Predictions](predict) tab.
3. Optionally, and depending on [role](roles-permissions#shared-data-connection-and-data-asset-roles), [share](#share-data-connections) data connections with others.
There are additional opportunities to launch the data source creation dialogs, but these instructions describe the process used in all cases.
### Source IP addresses for whitelisting {: #source-ip-addresses-for-whitelisting }
Any connection initiated from DataRobot originates from one of the following IP addresses:
{% include 'includes/whitelist-ip.md' %}
## Create a new connection {: #create-a-new-connection }
To create a new data connection:
1. From the account menu on the top right, select **Data Connections**.

2. Click **Add new data connection** to open the data store selection dialog box. You can also create a new data connection using the [**AI Catalog**](catalog) by selecting **Add to catalog** > **New Data Connection**.

3. Select the tile for the data store you wish to use.

??? note "Self-Managed AI Platform installations"
For Self-Managed AI Platform installations, you might not see any data stores listed. In that case, click **Add a new driver** and add a driver from the [list of supported databases](#supported-databases).

4. Complete the fields for the data store. They will vary slightly based on the data store selected.

| Field | Description |
|-----------------------|----------------|
| Data connection name | Provide a unique name for the connection. |
| Version | Select the version of the data store to use from the dropdown list. |
| Configuration: Parameters | Modify [parameters](#data-connection-with-parameters) for connections. |
| Configuration: URL | Enter the URL to the database/data store to connect to, in the form `jdbc:mysql://<HOST>:<PORT>/<NAME>`. You can include parameters in the URL if your connection requires them. |
5. Click **Add data connection** to save the configuration.
The new connection appears in the left-panel list of **Data Connections**.
!!! note
Any connection that you create is only available to you unless you [share](#share-data-connections) it with others.
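The JDBC URL entered in step 4 follows a standard pattern. A small sketch that assembles one, with optional parameters appended as a query string (the host, port, and database names are placeholders; check your database's documentation for the exact format it expects):

```python
def build_jdbc_url(dialect, host, port, database, params=None):
    """Assemble a JDBC URL of the form jdbc:<dialect>://<HOST>:<PORT>/<NAME>,
    optionally appending connection parameters. All values here are
    placeholders for illustration."""
    url = f"jdbc:{dialect}://{host}:{port}/{database}"
    if params:
        query = "&".join(f"{k}={v}" for k, v in params.items())
        url = f"{url}?{query}"
    return url

build_jdbc_url("mysql", "db.example.com", 3306, "sales")
# → "jdbc:mysql://db.example.com:3306/sales"
```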
#### Data connection with parameters {: #data-connection-with-parameters }
The parameters provided for modification in the data connection configuration screen are dependent on the selected driver. Available parameters are dependent on the configuration done by the administrator who added the driver.

Many other parameters are available in a searchable, expandable list. If a desired parameter is not listed, you can click **Add parameter** to include it.

Click the trash can icon () to remove a listed parameter from the connection configuration.
!!! note
Additional parameters may be required to establish a connection to your database. These parameters are not always pre-defined in DataRobot, in which case, they must be manually added.
For more information on the required parameters, see the [documentation for your database](data-sources/index).
#### Data connection with OAuth {: #data-connection-with-oauth }
Snowflake and Google BigQuery users can set up a data connection using OAuth single sign-on. Once configured, you can read data from production databases to use for model building and [predictions](batch-pred-jobs).
For information on setting up a data connection with OAuth, the required parameters, and troubleshooting steps, see the documentation for your database: [Snowflake](dc-snowflake) or [BigQuery](dc-bigquery).
### Test the connection {: #test-the-connection }
Once your data connection is created, test the connection by clicking the **Test connection** button in the upper right.

In the resulting dialog box, enter or [use stored](stored-creds) credentials for the database identified in the **JDBC URL** field or the parameter-based configuration of the data connection creation screen. Click **Sign in** and when the test passes successfully, click **Close** to return to the **Data Connections** page and create your [data sources](#add-data-sources).
### Modify a connection {: #modify-a-connection }
You can modify the name, JDBC URL, and, if the driver was configured with them, the parameters of an existing data connection.
1. Select the data connection in the left-panel connections list.
2. In the updated main window, click in the box of the element you want to edit and enter new text.
3. Click **Save changes**.
### Delete a connection {: #delete-a-connection }
You can delete any data connection that is not being used by an existing data source. If it is being used, you must first delete the dependencies. To delete a data connection:
1. From the **Data Connections** tab, select the data connection in the left-panel connections list.
2. Click the **Delete** button in the upper right ().
3. DataRobot prompts for confirmation. Click **Delete** to remove the data connection. If there are data sources dependent on the data connection, DataRobot returns a notification.

4. Once all dependent data sources are removed <a target="_blank" href="https://datarobot-public-api-client.readthedocs-hosted.com/en/v2.11.1/api/database_connectivity.html#datarobot.DataSource.update">via the API</a>, try again to delete the data connection.
## Add data sources {: #add-data-sources }
Your data sources specify, via SQL query or selected table and schema data, which data to extract from the data connection. It is the extracted data that you will use for modeling and predictions. You can point to entire database tables or use a SQL query to select specific data from the database. Any data source that you create is available only to you.
!!! note
Once data sources are created, they cannot be modified and can only be deleted <a target="_blank" href="https://datarobot-public-api-client.readthedocs-hosted.com/en/v2.25.0/entities/database_connectivity.html">via the API</a>.
To add a data source, do one of the following:
* From the **Start** screen, click **Data Source** and select the connection that holds the data you would like to add. See how to [import from an existing data source](import-to-dr#use-an-existing-data-source).
* From the [**AI Catalog**](catalog), select **Add to catalog** > **Existing Data Connection**. See how to [add data from external connections](catalog#add-data-from-external-connections).
## Share data connections {: #share-data-connections }
Because the user creating a data connection and the end user may not be the same, or there may be multiple end users for the data connection, DataRobot provides the ability to set user-level permissions for each entity. You can accomplish scenarios like the following:
* A user wants to set permissions on a selected data entity to control who has consumer-level, editor-level, or owner-level access. Or, the user wants to remove a particular user's access.
* A user that has had a data connection shared with them wants the shared entity to appear under their list of available entities.
When you invite a user, user group, or organization to share a data connection, DataRobot assigns the default role of Editor to each selected target (not all entities allow sharing beyond a specific user). You can change the role from the dropdown menu.
To share data connections:
1. From the account menu on the top right, select **Data Connections**, select a data connection, and click **Share**:

2. Enter the email address, group name, or organization you are adding and select a [role](roles-permissions). Check the box to grant sharing permission.
3. Click **Share** to add the user, user group, or organization.
4. Add any number of collaborators and when finished, click **Close** to dismiss the sharing dialog box.
Depending on your own permissions, you can remove any user or change access as described in the table of [roles and permissions](roles-permissions).
!!! note
There must be at least one Owner for each entity; you cannot remove yourself or remove your sharing ability if you are the only collaborating Owner.
|
data-conn
|
---
title: Stored data credentials
description: How to add and manage securely stored credentials for reuse in accessing secure data sources.
---
# Stored data credentials {: #stored-data-credentials }
!!! info "Availability information"
This feature is off by default for Self-Managed AI Platform users. To enable the feature, contact your system administrator or Support. It is enabled by default for managed AI Platform users.
Because there is often a need to access secure data sources, DataRobot provides an option to securely store associated credentials for reuse. This capability is particularly useful for automating workflows that require access to secure sources or when combining many tables and sources that would each otherwise require individual authentication. You can also remove stored credentials. You can manage all your stored credentials from the [**Credentials Management**](#credentials-management) page.
Application of stored credentials is available whenever you are prompted for them, such as when:
* [Creating a data source](#create-stored-credentials) from the [**Data Connections**](data-conn) page or testing connections.
* Using the [**Data Source**](import-to-dr#use-an-existing-data-source) option when beginning a project.
* Using a data source or S3 to make [batch predictions](../../api/reference/batch-prediction-api/index) via the API.
* Using a data connection to create a new dataset in the [**AI Catalog**](catalog#add-data-from-external-connections).
* Taking a [snapshot](catalog#create-a-snapshot) of a dataset in the **AI Catalog**.
* [Creating a project](catalog#create-a-project) from a dataset in the **AI Catalog**.
* Using the **AI Catalog** to select a prediction dataset using [**Make Predictions**](predict).
## Credentials Management {: #credentials-management }
This section describes adding, editing, and removing credentials, as well as managing their associated data connections. To access this page, click on your user icon and navigate to the **Credentials Management** page.
### Add new credentials {: #add-new-credentials }
From the **Credentials Management** page, click **+Add New**.

Enter a username and password, as well as an identifying label for the set (an account name), then click **Save and sign in**. The credential entry becomes available in the panel on the left.

Click **Add Associated connection** to add a data connection.

Select the data connection you would like associated to these credentials and click **Connect**.

### Modify credentials {: #modify-credentials }
To remove a set of credentials, select the account name you would like to delete from the panel on the left. Then, click **Delete** in the upper-right corner.

To edit credentials, select the account name you would like to edit from the panel on the left.
In the upper right corner, click **Edit Credentials** to modify the username, password, and/or account name.
Change the relevant fields and click **Save and sign in** to apply your changes. When you edit credentials, the credentials will be updated for all associated data connections.

### Manage associated connections {: #manage-associated-connections }
When you select credentials, all associated data connections show up on the right.

From the **Credentials Management** page, you can add a new associated connection and remove an existing association. If you are removing all data connections for the associated credentials, you will receive a message prompting you to delete the credentials. You can decide to remove the credentials or leave them without any associated connections.
## Managing credentials in Data Connections {: #managing-credentials-in-data-connections }
As an alternative to managing credentials from the **Credentials Management** page, you can also add and remove them from the [**Data Connections**](data-conn) page. To access this page, click on your user icon and navigate to **Data Connections**.
### Create stored credentials {: #create-stored-credentials }
Select your data connection and click **Test connection**.

Enter the new credentials and check **Securely save credentials**. Optionally, enter an account name, then click **Save and sign in**.

You can also click the **Credentials** tab and click **+Add New**.

### Use stored credentials {: #use-stored-credentials }
Select the desired account. DataRobot immediately initiates the appropriate action (adding data to the **AI Catalog** from a data connection in this example):

To use different credentials, select **Use different account**.
### Remove stored credentials {: #remove-stored-credentials }
From the [**Data Connections**](data-conn) tab, select the connection with associated credentials, click the **Credentials** tab, and click **Remove association**.

!!! note
You cannot edit stored credentials in the Data Connections tab. To edit stored credentials, go to [Credentials Management](#modify-credentials).
|
stored-creds
|
---
title: Share secure configurations
description: Allows IT admins to configure OAuth-based authentication parameters for a data connection, and then securely share them with other users without exposing sensitive fields.
---
# Share secure configurations {: #share-secure-configurations }
IT admins can configure OAuth-based authentication parameters for a data connection, and then securely share them with other users without exposing sensitive fields. This allows users to easily connect to their data warehouse without needing to reach out to IT for data connection parameters.
## IT admins {: #it-admins}
=== "SaaS"
!!! info "Availability information"
**Required user role:** Organization administrator
=== "Self-Managed"
!!! info "Availability information"
**Required user role:** System administrator
### Prerequisites {: #prerequisites }
Before proceeding, make sure you have the following parameters:
- Client ID
- Client Secret
- (optional) Scopes
- Authorization endpoint URL
- Token URL
For more information, see the [documentation for connecting to Snowflake](dc-snowflake).
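These parameters map onto a standard OAuth 2.0 authorization-code flow: the authorization endpoint and client ID start the sign-in, and the token URL plus client secret are later used to exchange the resulting code for tokens. As a hedged sketch, building the initial authorization URL (all endpoint, client, and scope values below are placeholders, not real DataRobot or Snowflake values):

```python
from urllib.parse import urlencode

def build_authorization_url(auth_endpoint, client_id, redirect_uri, scopes=None):
    """Assemble the authorization-endpoint URL that starts an OAuth 2.0
    authorization-code flow. Placeholder values only; consult your
    database's OAuth documentation for the exact parameters."""
    query = {
        "response_type": "code",
        "client_id": client_id,
        "redirect_uri": redirect_uri,
    }
    if scopes:
        query["scope"] = " ".join(scopes)  # scopes are space-delimited
    return f"{auth_endpoint}?{urlencode(query)}"

url = build_authorization_url(
    "https://account.example.com/oauth/authorize",
    client_id="my-client-id",
    redirect_uri="https://app.example.com/callback",
    scopes=["session:role-any"],
)
```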
### Create a configuration {: #create-a-configuration }
To create a secure configuration:
1. Click your user icon in the upper-right corner and select **Secure Configurations**.

2. Click **Add a secure configuration**.

3. Fill out the required parameters for your data connection. Make sure you enter a _unique_ name under **Secure configuration display name**.

4. Click **Save**.
### Share a configuration {: #share-a-configuration }
Other users cannot access a secure configuration when setting up a data connection until it's been shared with them.
To share a secure configuration:
1. On the **Secure Configurations** page, click the **Share** icon next to a configuration.
2. In the sharing modal, enter the user(s), group(s), or organization(s) you want to grant access to (1). Then, select the appropriate user role (2) and click **Share** (3).

Note that the role you select determines what configuration information the recipients can view. The table below describes each option:
Role | Description
----- | ---------
Consumer | Cannot view sensitive fields, including Client ID and Client Secret.
Editor / Consumer | Can view and update sensitive fields.
### Manage secure configurations {: #manage-secure-configurations }
Once you've created a secure configuration, you can:
=== "Update a configuration"
To update an existing configuration, click the name of the configuration you want to update. Update the fields that appear below the configuration name and click **Save**.

=== "Delete a configuration"
To delete an existing configuration, click the **More options** icon next to the configuration you want to remove, and select **Delete**.

=== "Revoke access"
To revoke access to a shared secure configuration, click the **Share** icon next to the configuration and click the **X** next to the user, group, or organization.

## Users {: #users }
With a shared secure configuration, you can quickly connect to an external database or data lake without going through the trouble of filling in the required fields and potentially exposing sensitive fields.
To remove a secure configuration after it's been associated with a data connection, see the documentation on [stored data credentials](stored-creds#remove-stored-credentials).
### Prerequisites {: #prerequisites }
Before you can add a data connection with a secure configuration, your IT admin must share it with you.
### Associate a secure configuration {: #associate-a-secure-configuration }
To use a secure connection configuration:
1. Click your user icon in the upper-right corner and select **Data Connections**.

2. Select a data connection.
3. Select **Credentials** and click **+ Add Credentials**.

4. In the **Add Credentials** modal, click **+ Create new**.

5. Fill in the available fields:
- For **Credential type**, select **OAuth**.
- Click **Share secure configurations**.
- Select a secure configuration from the dropdown.
- Enter a unique display name.

6. Click **Save and sign in**, and then sign in with your database credentials.
|
secure-config
|
---
title: Data connections
description: Connect to various data sources and manage stored credentials.
---
# Data connections {: #data-connections }
The AI Catalog is a browsable and searchable collection of registered objects that contains definitions and relationships between various object types. These definitions and relationships include data connections, data sources, dataset metadata, and blueprints.
Topic | Describes...
----- | ------
[Stored data credentials](stored-creds) | Add and manage securely stored credentials for reuse in accessing secure data sources.
[Share secure configurations](secure-config) | IT admins can configure OAuth-based authentication parameters for a data connection, and then securely share them with other users without exposing sensitive fields.
[Connect to data sources](data-conn) | Set up database connections using a “self-service” JDBC platform.
[Register data in the AI Catalog from a data connection](catalog#add-data-from-external-connections) | Register data in the catalog from a new or existing data connection.
[Supported databases](data-sources/index) | View a list of supported and deprecated databases, as well as the required parameters to connect to them in DataRobot.
|
index
|
---
title: Automatic transformations
description: Learn about DataRobot's automatic transformations. Transformed features do not replace the original features, but are added as new features for building models.
---
# Automatic transformations {: #automatic-transformations }
The following sections describe DataRobot's automatic transformations. Transformed features do not replace the original, raw features; rather, they are provided as new, additional features for building models. For information on automated feature transformations DataRobot performs during the modeling process, see the [Modeling process](model-ref#data-transformation-information) documentation.
!!! note
Transformed features (including numeric features created as user-defined functions) cannot be used for special variables, such as [Weight, Offset, Exposure, and Count of Events](additional#set-exposure).
When DataRobot identifies a feature column as variable type date, it automatically creates transformations for qualifying features (see the criteria below the table) after EDA1 completes. When complete, the dataset can have up to four new features for each date column:
| Feature variable | Description | Variable type |
|------------------|---------------|---------------|
| Hour of Day | Numeric value representing a 24-hour period, 0-23. Data must contain one or more date columns and at least three different hours in the date field. | Numeric |
| Day of Week | Numeric and text value representing the day of the week, where 0 corresponds to Monday (for example, 0: Monday, 2: Wednesday, 5: Saturday). Data must contain at least three different weeks. | Categorical |
| Day of Month | The day of the month, 1-31. Data must contain at least three different years. | Numeric |
| Month | Numeric value representing the month, 1-12. Data must contain at least three different years. | Categorical |
| Year | Data must contain at least three different years. | Numeric |
Date features are not automatically extracted if:
* there are 10 or more date and/or time columns in the dataset
* transformed features would not be informative (e.g., if there is only 1 year of data there is no need to extract year)
* transformed features risk overfitting (e.g., with 1 year of data, modeling on month cannot identify full seasonal effects)
The new derived features are included in the [Informative Features](feature-lists#feature-lists) feature list and used for Autopilot. DataRobot also maintains the original date column. Note, however, that the original raw date is excluded from Informative Features if all four features listed above were extracted (that is, the dataset included at least three years of data). The following is an example of a dataset that contains over 10 years' worth of data. As a result, DataRobot created new features for all four date columns:

If any of the automatically-transformed date features are duplicates of existing features in the dataset, they are not included in the Informative Features list. As an example, assume you add a date-type column containing the manufacturing year, "MfgYear", to the dataset prior to ingestion. DataRobot marks the transformed feature, "MfgYear (Year)", as a duplicate and excludes it from Informative Features. If, however, the automatically-transformed feature has a different type than the original column, it is included in Informative Features.
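The date derivations in the table above can be sketched with pandas. This is only an illustrative approximation of what DataRobot does internally during EDA1; the `sold_at` column and its values are hypothetical.

```python
import pandas as pd

# Hypothetical date column; DataRobot derives these features automatically
# during EDA1, so this sketch is only an approximation of that behavior.
df = pd.DataFrame({"sold_at": pd.to_datetime([
    "2019-03-04 08:15", "2020-07-19 13:40", "2021-11-30 21:05",
])})

df["sold_at (Hour of Day)"] = df["sold_at"].dt.hour       # 0-23, numeric
df["sold_at (Day of Week)"] = df["sold_at"].dt.dayofweek  # 0 = Monday, categorical
df["sold_at (Day of Month)"] = df["sold_at"].dt.day       # 1-31, numeric
df["sold_at (Month)"] = df["sold_at"].dt.month            # 1-12, categorical
df["sold_at (Year)"] = df["sold_at"].dt.year              # numeric
```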
|
auto-transform
|
---
title: Manual transformations
description: Manually create feature transformations using the natural logarithm, squaring, running functions on numeric data, or changing the variable type, if appropriate.
---
# Manual transformations {: #manual-transformations }
The following sections describe manual, user-created transformations. Transformed features do not replace the original, raw features; rather, they are provided as new, additional features for building models.
!!! note
Transformed features (including numeric features created as user-defined functions) cannot be used for special variables, such as [Weight, Offset, Exposure, and Count of Events](additional#additional-weighting-details).
## Create transformations {: #create-transformations }
DataRobot supports different transformations that you can apply to your data, including taking the natural logarithm, squaring, and running functions on numeric data. (You can also change the [variable type](#variable-type-transformations) for features.) These transformations are only available when appropriate to the feature type. The following steps describe creating a user transformation.
1. Hover over a feature available for transformation and click the orange arrow to the left of the feature name to expose the **Transformations** menu:

2. Select a transformation. If you select the natural log `log(<feature>)` or squaring `<feature>^2` option, the transformation is computed immediately and the new derived feature is created.
3. If you select the function option `f(<feature>)`, a dialog for adding a new transformation appears.

* In the **New feature name** field, type a name for this transformation. You can create multiple function-based transformations for a feature.
* Type the function and feature(s), using the [supported syntax](#transform-options-and-syntax).
* Click **Create** to create the transformation.
Note that you can also access this functionality from the menu:

The transformed feature appears under the original feature in the **Data** page (all features). It can be included in any new feature lists and can also be used for modeling. When you use a model that contains transformed features for predictions, DataRobot automatically derives the new feature for any uploaded dataset.

As with other features, you can view the [histogram](histogram), charted frequent values, and a table of values by clicking the feature name. However, instead of allowing further [variable type transformations](#variable-type-transformations), the display compares the transformed feature with the parent feature:

## Variable type transformations {: #variable-type-transformations }
DataRobot bases variable type assignment on the values seen during EDA and then lists the variable type for each feature in your dataset on the **Data** page. There are times, however, when you may need to change the type. For example, area codes may be interpreted as numeric but you would rather they map to categories. Or, a categorical feature may be encoded as a number that is intended to map to a category value (such as `1=yes, 2=no`) but, without transformation, is interpreted as numeric.
There are certain cases where variable type transforms are not available. These include columns that DataRobot has identified as [special columns](file-types#special-column-detection) for both integral and float values. (Date columns are a special case and do support transforms. See the description of [single feature transformations](#single-feature-transformations).) Additionally, a column that is all numeric except for a single unique non-numeric value is treated as special. In this case, DataRobot converts the unique value to [NaN](model-ref#missing-values) and disallows conversion to prevent losing the value.
!!! note
When converting from numeric variable types to categorical, be aware that DataRobot drops any values after the decimal point. In other words, the value is truncated to become an integer. Also, when transforming floats with missing values to categorical, the value is truncated, not rounded. For example, 9.9 becomes 9, not 10.
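The truncation rule above can be illustrated in plain Python (the values here are made up):

```python
# Numeric -> categorical conversion truncates; it does not round.
values = [9.9, 3.2, 10.0]
as_categories = [str(int(v)) for v in values]  # int() drops the decimal part
# 9.9 becomes "9", not "10"
```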
!!! tip
When making predictions, DataRobot expects the columns in the prediction data to be the same as in the original data. If a model uses both the original and the transformed variable, the prediction data must use the original feature name; DataRobot calculates the derived features internally.
You can transform the variable type of [many features](#multiple-feature-transformations) at the same time (using a batch transformation), or [one feature](#single-feature-transformations) at a time.
### Multiple feature transformations {: #multiple-feature-transformations }
To modify the variable type for multiple features as a single batch operation, use the **Change Variable Types** option from the menu. This option is useful, for example, if you want to transform all features of one variable type.
You can transform all features of a specific variable type, or a selection of them, to another variable type. For example, you could change all Categorical features to Text, or pick specific Categorical features to transform to Text. All new features created using batch variable type transformations are available from the **Data** page, in the **All Features** list. (You can start the transformation from any feature list, but when it completes, you must view all features on the **Data** page.) You can [add the new features to other feature lists](feature-lists#create-feature-lists).
!!! note
Keep the following in mind when transforming multiple features at the same time:
* A feature that is the result of a previous transformation operation cannot be selected for transformation.
* All features selected for batch transformation must be of the same variable type.
If DataRobot does not let you transform the features, you should correct the list of features and try the transformation again.
1. First, select the features to transform using one of the following methods:
* If you want to manually select features: Select each feature you want to transform (1). Make sure you do not select any previously-transformed features (2) and that all features you select are of the same variable type (3).

* If you want to select all features for a variable type: From the menu, under **Select Features by Var Type**, select the variable type you want to transform for the dataset. Only variable types present in the dataset can be selected.

All features of that variable type are shown as selected in the **Data** page (i.e., checks in the left-hand boxes).
2. From the menu, under **Actions**, click **Change Variable Types**. (If the link is disabled, there is an issue with the selected features. Hover over the disabled link to see the reason DataRobot cannot transform the selected features. See [Transform options and syntax](#transform-options-and-syntax) for details.)

The **Change Variable Type** dialog appears.

!!! note
DataRobot supports transforming up to 500 features at a time. If you see a message indicating more than 500 features are selected for transformation, you need to [deselect features](#feature-transformation-limit).
3. Configure how to create the new features for the selected variable type.
| Component | Description |
|-------------|--------------|
| Selected features (1) | Identifies the number of selected features, the variable type for the features, and the names of all features selected for transformation. |
| Change variable type option (2) | Shows the selected variable and prompts you to select the target variable type for the transformation. DataRobot performs specific transformations for numeric variable types. |
| Prefix for new features (3) | Provides a prefix to apply to the original feature names to create the transformed feature names. You can keep the default prefix (**Updated\_**) or create your own. If creating a prefix, do not include `- " . { } / \`. The names of the new features must have a suffix, prefix, or both; if a suffix is defined, then a prefix is not required. |
| Suffix for new features (4) | Provides a suffix to apply to the original feature names to create the transformed feature names. You can keep the default suffix, which is the new variable type, or create your own. If creating a suffix, do not include `- " . { } / \`. The names of the new features must have a suffix, prefix, or both; if a prefix is defined, then a suffix is not required. |
| New Feature Names (5) | Shows how the new (transformed) features will be named: `prefix_[original feature name]_suffix` (using the actual prefix and/or suffix). |
| Change (6) | Creates new features for all selected features, for the target variable type. |
When you click **Change**, DataRobot creates a list of the selected features and submits them for variable type transformation. A message indicates features have been selected and transformation has started:

DataRobot creates the transformed features in the background. As each new (transformed) feature finishes processing, it is shown in the **Data** page (all features). Depending on the number of features selected for transformation, it may take several minutes for all new features to finish transformation and become available. A message indicates when all transformations are complete:

#### Feature transformation limit {: #feature-transformation-limit }
DataRobot supports transforming up to 500 features at a time and will show a message in the **Change Variable Types** dialog if you select more than 500:

If this is the case, you need to deselect features so that only 500 or fewer features are selected. To do this, close the dialog and, in the **Data** page, deselect features:

Then, when 500 or fewer features are selected for transformation, select **Change Variable Types**.
### Single feature transformations {: #single-feature-transformations }
To modify the variable type for a single feature, use one of the following methods:
* View the **Transformations** menu for the feature and click **Change Var Type**, or
* View the [histogram](histogram) for the feature and click **Var Type Transform**.
Both methods open the same dialog, which will vary depending on the variable type for the selected feature.

The following table explains the settings for a categorical transformation:
| Component | Description |
|-----------|-------------|
| Current variable type transformation (1) | Displays the current variable type assigned to the feature. |
| Transformation options (2) | Selects a new feature type, via the dropdown, from the available variable types for the current feature. DataRobot performs specific transformations for numeric and categorical variable types. |
| New Feature Name (3) | Provides a field to rename the new feature. By default, DataRobot uses the existing feature name with the new variable type appended. |
| Feature list application (4) | Selects which feature list the new feature is added to. Choose **All Features** or use the dropdown (5) to add it to a specific list instead. |
| Feature list selection (5) | Provides a dropdown selection of feature lists from the project, allowing you to select which list to add the feature to. |
| Create Feature (6) | Creates the new feature. The new feature is then listed below the original on the **Data** page. |
You can create any number of transformations from the same feature. By default, DataRobot applies a unique name to each transformation. If you inadvertently create duplicate features, DataRobot marks them as such and ignores them in processing.
The following is an example of date transformation, which allows you to select which date-specific derivations to apply. You can also select whether the result should be considered a categorical or numeric value.

Here's an example of a numeric to categorical transformation:

## Transform options and syntax {: #transform-options-and-syntax }
DataRobot uses a subset of <a target="_blank" href="https://numexpr.readthedocs.io/projects/NumExpr3/en/latest/user_guide.html">Python's <code>Numexpr</code> package</a> to create user transformations of column values (features). When you select the function option, `f()`, from the [**Transformations**](#create-transformations) menu, a dialog for entering the user transformation syntax appears. The following describes DataRobot's application of `Numexpr` and provides some examples.
!!! note
The DataRobot API supports only variable type transformations.

To create transformations, enter feature name(s) within curly braces `{}` and apply the appropriate function and operator. DataRobot provides auto-completion for feature names; if you click after the initial curly brace, you can select from the list of displayed features.
Note that:
* Feature names are case-sensitive.
* You cannot transform features of variable type date. Instead, create new features out of the derived date features (for example, `Timestamp (Hour of Day)`).
* You cannot do a feature transformation on the target.
| Allowed functions | Description |
|--------------------|----------------|
| <code>log({<em>feature</em>})</code> | natural logarithm |
| <code>sqrt({<em>feature</em>})</code> | square root |
| <code>abs({<em>feature</em>})</code> | absolute value |
|<code>where({<em>feature1</em>} <em>operator</em> {<em>feature2</em>}, <em>value-if-true</em>, <em>value-if-false</em>)</code> | if-then-else functionality |
The following lists the allowed binary arithmetic operators. Use parentheses to group and order operations, for example `(1 + 2) * (3 + 4)`. You can reference multiple features in a single transform, for example `{number_inpatient} + {num_medications}`.
Supported arithmetic operators are:
* \+ (addition)
* \- (subtraction)
* \* (multiplication)
* / (division)
* \*\* (exponentiation)
You can also use comparison operators, but they must be wrapped within a `where()` function (i.e., `where({feature1} operator {feature2}, value-if-true, value-if-false)`). Supported comparison operators are:
* < , > (less than, greater than)
* == (equal to)
* != (not equal to)
* <= , >= (less than or equal to, greater than or equal to)
### Comparison operators with missing values {: #comparison-operators-with-missing-values }
If there are missing values (NaN) in the dataset, transformations that use comparison operators on those features require special consideration. If you write statements using comparison operators on a feature that contains NaN values, and you want the derived feature to return NaN where the original feature is NaN, be sure to compare the results against the expected behavior.
For example, transforming the feature `sales` to `excellent_sales` using the following statement always returns `0` (false) when `sales` is `NaN`. Even if there are missing values in the data for the feature `sales`, missing values will not be returned in the result:
`Excellent_Sales = where({Sales} > 300000, 1, 0)`
If this is not the desired result, consider an expression like the following:
`Excellent_Sales = where(~({Sales} > 300000) & ~({Sales} <= 300000), {Sales}, where({Sales} > 300000, 1, 0))`
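The NaN behavior described above can be demonstrated with NumPy's `where`, which behaves the same way as `Numexpr`'s for this expression. The feature name and values here are hypothetical.

```python
import numpy as np

sales = np.array([250_000.0, 400_000.0, np.nan])

# Simple form: the NaN row falls through to the false branch (0).
simple = np.where(sales > 300_000, 1, 0)

# NaN-preserving form: a value that is neither > nor <= the threshold
# can only be NaN, so the original (missing) value is passed through.
preserving = np.where(
    ~(sales > 300_000) & ~(sales <= 300_000),
    sales,
    np.where(sales > 300_000, 1, 0),
)
```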
Some transformation examples:
* `OrderOfMag = log({NumberOfLabs})`
* `Success = sqrt({sales} + 10)`
* `CostBreakdown = abs({sales} - {costs})`
* `IsRich = where({YearlyIncome} > 1000000, 1, 0)`
|
feature-transforms
|
---
title: Interaction-based transformations
description: Without a secondary dataset, Feature Discovery does not run. But with settings you can automatically create features using interactions in your primary dataset.
---
# Interaction-based transformations {: #interaction-based-transformations }
If your project has no secondary dataset, the [Feature Discovery](feature-discovery/index) process does not apply. For these cases, you can enable a search for interactions in the primary dataset, which automatically creates new features based on interactions between the features it contains.
These newly engineered features can provide additional insight that might be important for modeling. For example, if you were to provide the year a house was sold and the year a house was built, DataRobot could extract a new feature from the difference. This engineered feature, “age of house at sell date,” may prove more relevant than the build or sale dates alone.
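That "age of house at sell date" derivation amounts to a simple difference between two features. An illustrative pandas sketch, with made-up column names and values:

```python
import pandas as pd

# Hypothetical columns; DataRobot derives this kind of interaction automatically.
houses = pd.DataFrame({"year_built": [1990, 2005], "year_sold": [2015, 2020]})
houses["age_at_sale"] = houses["year_sold"] - houses["year_built"]
```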
The [search for interactions](#search-for-interactions) functionality, run as part of the EDA2 process, results in not only new features but also new feature lists, both default lists and custom. The new features are represented in the following tabs:
* [Feature Impact](feature-impact) if in the top 50 most impactful.
* [Feature Effects](feature-effects) if they have more than zero influence on the model (based on the feature importance score).
* [Prediction Explanations](../../../modeling/analyze-models/understand/pred-explain/index), if applicable to the displayed reasons.
If the search does not create any new features (or you have not enabled the option in [**Advanced options**](additional)), there are no changes to the **Data** page list of features and no new feature lists are created.
See the [considerations](#feature-considerations) for feature availability.
!!! note
DataRobot additionally provides automatic feature transformations for features of type date. This transformation, which occurs during EDA1 and is described as part of the [feature transformations](feature-transforms) section, requires no manual settings.
## Search for interactions {: #search-for-interactions }
To enable interaction search for a primary dataset, after selecting a target, expand the **Advanced options** link and select the **Additional** tab. In the **Automation Settings** section, select **Search for interactions**:

Return to the top of the page and click **Start**. As EDA2 runs, you can watch as the newly created features are added to the **Data** page. New features are named in a way that indicates the operation that created them:

Note the **Importance** score of the new features, showing the strength of their relationship to the target.
To improve efficiency, Autopilot does not search for differences/ratios for selected blueprints when it runs. This is because **Search for Interactions**, performed at the EDA2 stage (before Autopilot runs), has already done a similar search and added new features where applicable.
## Feature lists and created features {: #feature-lists-and-created-features }
DataRobot creates new feature lists—"Informative Features" and, if applicable, custom lists—with the created features and marks the lists with a plus (+) sign. Informative features:

A custom list:

When EDA2 completes, if DataRobot found and created new features, the selected modeling mode uses the new list to build models.
A few things to note about feature lists:
* The target feature is automatically added to every feature list.
* If Autopilot is set to run on the "Informative Features" list, DataRobot creates **Informative Features +**. If set to run on a custom list, DataRobot creates both **<<em>Custom_Features</em>> +** and **Informative Features +**.
* For custom lists, DataRobot only adds those features that make sense to the original content of the list. Also, DataRobot only creates a new custom list if the original custom list contains the parent of at least one newly derived feature.
* **Informative Features +** may or may not have the same number of features as the original. This is because when deriving the new feature from the old, keeping both may result in redundancy. If that is the case, DataRobot removes one of the parent features.
* **Informative Features +** is created based on the Informative Features with Leakage Removed feature list.
* **<<em>Custom_Features</em>> +** is created based on the features in the custom list and any engineered features whose parents are in the custom list.
## Explore new features {: #explore-new-features }
Once a new feature is created, the **Transformation** tab provides insights that explain the relationships. To view:
1. From the **Data** page click on the new feature name.
2. Select the **Transformation** tab. The display compares the transformed feature with the parent features and indicates the interaction (MINUS, EQUAL, or DIVIDED BY):

To further investigate the newly engineered features, and how newly derived features affect model predictions, find them in the following insights:
* [Feature Impact](feature-impact)
* [Feature Effects](feature-effects)
* [Prediction Explanations](pred-explain/index)
In general, DataRobot considers an interaction between a pair "useful" only when the interaction satisfies criteria of both interpretability and accuracy. This is achieved through high correlation and significance checks. DataRobot fits a Generalized Linear Model with the derived features and then determines the significance of that feature (for example, using p-values or other statistical criteria).
## Feature considerations {: #feature-considerations }
**Search for Interactions** typically adds insight, but can sometimes result in models that are slightly less accurate. That change in accuracy can lead to DataRobot selecting a different recommended model and can also change the runtime of the 80% model.
Search for interactions on primary datasets is supported for:
* Pure numeric
* Special numeric (date, percentage, currency, length)
It does not support the following:
* [Time series](time/index) projects
* [Multiclass](multiclass) modeling
|
feature-disc
|
---
title: Transform data
description: Perform transformations and feature discovery using DataRobot's feature engineering tools.
---
# Transform data {: #transform-data }
DataRobot supports multiple methods of feature engineering—automatic and manual feature transformations for single datasets, as well as Feature Discovery for multiple datasets. See the table below to learn about the feature transformation options in DataRobot.
Topic | Describes... | Dataset | Notes
----- | ------ | ---- | ---
**Automatic transformations** | :~~: | :~~: | :~~:
[Automatic feature transformations](auto-transform) | Understand date-type feature transformations generated by DataRobot. | Primary | Calculated during EDA1.
[Interaction-based transformations](feature-disc) | Transform features based on interactions within your primary dataset by enabling an [advanced option](additional). | Primary | Enabled in project and calculated during EDA2.
[Feature Discovery](feature-discovery/index) | Perform multi-dataset, interaction-based feature creation. | Secondary | Configured in project and calculated during EDA2.
[Automatic modeling transformations](model-ref#automated-feature-transformations) | Understand the automated feature engineering DataRobot performs as part of the modeling process. | All | Performed during modeling.
**Manual transformations** | :~~: | :~~: | :~~:
[Manual feature transformations](feature-transforms) | Manually transform features in your dataset, including variable type transformations. | Primary | Transformed in project.
**AI Catalog transformations** | :~~: | :~~: | :~~:
[Prepare data in AI Catalog with Spark SQL](spark) | Enrich, transform, shape, and blend together datasets using Spark SQL queries within the AI Catalog. | |
## What is feature engineering? {: #what-is-feature-engineering }
Feature engineering is the process of preparing a dataset for machine learning by changing existing features or deriving new features to improve model performance. DataRobot's Automated Feature Engineering uses AI to accelerate the transformation of data into machine learning assets, allowing you to build better machine learning models in less time.

Feature engineering takes place after data preparation and ingest, and before model building.

During EDA1, DataRobot analyzes and profiles every feature in each dataset—detecting feature types, [automatically transforming](auto-transform) date-type features, and assessing feature quality.
Before model building, you can take further advantage of DataRobot's Automated Feature Engineering by enabling [interaction-based transformations](feature-disc) for primary datasets or defining relationships between multiple datasets using [Feature Discovery](fd-overview). You can also [manually transform features](feature-transforms) in your dataset, including variable type transformations, with functions.
During EDA2, DataRobot uses these known interactions, or relationships, to discover relevant features for your ML models and automatically transforms them to address the unique requirements of each algorithm in the blueprint library.

After model building, navigate to the **Leaderboard** and select a model. There are a few places you can view which [transformations](model-ref#automated-feature-transformations) DataRobot performed for individual models during the modeling process:
Feature | Description | Location
------- | ----------- | ---------
[Blueprint](blueprints#blueprint-components) | Displays preprocessing, modeling algorithms, and post-processing tasks for the selected model. | Click **Describe > Blueprints**.
[Data Quality Handling report](dq-report) | Displays feature and imputation information for [supported blueprint tasks](dq-report#supported-tasks). | Click **Describe > Data Quality Handling**.
[Coefficients](coefficients#preprocessing-and-parameter-view) | Allows you to download coefficients and preprocessing information, including feature transformations, for [supported model types](coefficients#supported-model-types). | Click **Describe > Coefficients** and click **Export**. |
|
index
|
---
title: Prepare data with Spark SQL
description: On the Add menu, Prepare data with Spark SQL lets you prepare a new dataset from a single dataset or blend several datasets using a Spark SQL query.
---
# Prepare data in AI Catalog with Spark SQL {: #prepare-data-in-ai-catalog-with-spark-sql }
Using **Prepare data with Spark SQL** from the **Add** menu in the AI Catalog allows you to enrich, transform, shape, and blend datasets together using Spark SQL queries.
Supported dataset types:
* Static datasets created from local files.
* [Unmaterialized](glossary/index#unmaterialized) (dynamic) datasets created from JDBC [data connections](data-conn).
* [Snapshotted](glossary/index#snapshot) datasets created from JDBC data connections.
The following sections describe the process of data preparation with Spark SQL:
* [Creating blended datasets](#create-blended-datasets)
* [Creating a query](#create-a-query)
* [Previewing results](#preview-results)
* [Saving results](#save-results-to-the-ai-catalog) to the **AI Catalog**
* [Editing queries](#edit-queries)
## Create blended datasets {: #create-blended-datasets }
Using Spark SQL queries, you can pull in data from multiple sources to create a new dataset that can then be used for analysis and in visualizations. Blending datasets helps create more comprehensive datasets to compare relationships in data or address specific business problems, for example, combining highly related datasets to better predict customer behavior.
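For illustration only, here is what blending two datasets with a SQL join looks like, using Python's built-in `sqlite3` as a stand-in for Spark SQL (the JOIN itself is ordinary SQL; the table names and data are invented for this sketch):

```python
import sqlite3

# sqlite3 stands in for Spark SQL here; the JOIN logic is plain SQL.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER, region TEXT);
    CREATE TABLE orders (customer_id INTEGER, amount REAL);
    INSERT INTO customers VALUES (1, 'EMEA'), (2, 'AMER');
    INSERT INTO orders VALUES (1, 100.0), (1, 50.0), (2, 75.0);
""")
# Blend the two tables into one result set, as a blended dataset would.
blended = conn.execute("""
    SELECT c.region, SUM(o.amount) AS total
    FROM customers AS c
    JOIN orders AS o ON o.customer_id = c.id
    GROUP BY c.region
    ORDER BY c.region
""").fetchall()
print(blended)  # [('AMER', 75.0), ('EMEA', 150.0)]
```

In the AI Catalog, the same style of query runs against the dataset aliases described below instead of local tables.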
1. To create a new blended dataset, select Spark SQL from the **Add** menu:

2. In the resulting dialog box, click **Add data** to open the “Select tables from the catalog for blending” modal.

3. DataRobot opens available datasets in a new modal. Click **Select** next to one or more datasets from the list of assets. The right panel lists the selected datasets.

4. When you have finished adding datasets, click **Add selected data**.
5. [Enter credentials](stored-creds) for any datasets that require authentication and, when authenticated, click **Complete registration** to open the SQL editor.

### Add and edit datasets {: #add-and-edit-datasets }
After initially adding datasets, you can add more or modify the dataset alias:

* Click **Add** to re-open the “Select tables from the catalog for blending” modal. Check marks indicate the datasets that are already included; click **Select** to add new datasets. You do not need to use all added datasets as part of the query.
* Click **Edit** to rename the dataset alias or delete the dataset from the query. You can also do either of these tasks from the dataset's menu.
!!! note
To conform to Spark SQL naming conventions (no special characters or spaces), DataRobot generates an alias by which to refer to each dataset in the SQL code. You’re welcome to choose your own alias or use the generated one.
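DataRobot's exact alias-generation rule is not published; as a rough sketch of the stated convention (no special characters or spaces), a sanitizer might look like the following. The function name and leading-digit handling are assumptions for illustration:

```python
import re

def make_alias(name: str) -> str:
    """Illustrative only: derive a Spark SQL-friendly alias (letters,
    digits, underscores; no leading digit) from a dataset name."""
    alias = re.sub(r"[^0-9A-Za-z_]", "_", name)  # replace special chars and spaces
    if alias and alias[0].isdigit():
        alias = "_" + alias  # identifiers should not start with a digit
    return alias

print(make_alias("2023 Sales (EMEA).csv"))  # _2023_Sales__EMEA__csv
```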
## Create a query {: #create-a-query }
Once you have datasets loaded, the next step is to enter a valid <a target="_blank" href="https://spark.apache.org/docs/2.4.5/api/sql/index.html">Spark SQL</a> query in the **SQL** input section. To access the Spark SQL documentation in DataRobot, click **Spark Docs**.

To enter a query, you can either manually enter the SQL syntax into the editor or add some or all features using the menu next to the dataset name.
!!! note
You must surround alias or feature names that contain non-ASCII or special characters with backticks ( \` ). For example, a correctly escaped sequence might be \`alias%name\`\.\`feature@name\`.
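A helper that applies this escaping rule might look like the following sketch (`escape_identifier` is a hypothetical name, and the set of characters that trigger quoting is an assumption based on the example above):

```python
def escape_identifier(name: str) -> str:
    """Illustrative only: wrap an alias or feature name in backticks when
    it contains characters outside the plain ASCII identifier set."""
    if name.isascii() and name.replace("_", "").isalnum():
        return name  # plain identifier, no quoting needed
    return "`" + name + "`"

print(escape_identifier("alias%name") + "." + escape_identifier("feature@name"))
# `alias%name`.`feature@name`
```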
### Add features via the menu {: #add-features-via-the-menu }
Click the menu next to the dataset name.

DataRobot opens a pane that allows you to:
* add features individually by clicking the arrow to the right of the feature name (1).
* add a group of features by first selecting them in the checkbox to the left of the feature name and then choosing **Add selected features to SQL** (2).
* select or deselect all features.

When using the menu to add features, DataRobot moves the added feature(s) into the SQL editor at the point of your cursor.
## Preview results {: #preview-results }
When the query is complete, click **Run**; if there are several queries in the editor, highlight a specific query and then click **Run**. After computing completes, if successful, DataRobot opens the **Results** tab. Use the window-shade scroll to display more rows in the preview; if necessary, use the horizontal scroll bar to scroll through all columns of a row:

If the query was not successful, DataRobot displays a notification banner and returns details of the error in the **Console**:

### Preview considerations {: #preview-considerations }
When running a query, preview results from the **Run** action are limited to 10,000 rows and/or 16MB.
* If the preview exceeds 16MB, DataRobot returns: <em>command document too large</em>
* If the preview exceeds 10,000 rows, DataRobot returns the message: <em>Data engine query execution error: Output table is too large (more than 10000 rows). Please, use LIMIT or come up with another query</em>
<em>Saving to the **AI Catalog** is not subject to these limits.</em>
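These limits can also be anticipated client-side before running a query; a minimal sketch with the documented limits as constants (the function name is hypothetical):

```python
MAX_PREVIEW_ROWS = 10_000              # documented row limit for Run previews
MAX_PREVIEW_BYTES = 16 * 1024 * 1024   # documented 16MB limit

def preview_fits(row_count: int, size_bytes: int) -> bool:
    """Illustrative only: mirror the documented Run-preview limits."""
    return row_count <= MAX_PREVIEW_ROWS and size_bytes <= MAX_PREVIEW_BYTES

print(preview_fits(5_000, 1_000_000))   # True
print(preview_fits(12_000, 1_000_000))  # False
```

If a query would exceed the row limit, add a `LIMIT` clause as the error message suggests.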
## Save results to the AI Catalog {: #save-results-to-the-ai-catalog }
Before saving your query and resulting dataset, you can optionally provide a name and/or description for the new dataset from the **Settings** tab, overwriting the default name "Untitled (blended dataset)".

By default, DataRobot creates a [snapshot](catalog#create-a-snapshot) of the new dataset. Uncheck "Create snapshot" (in **Settings**) to prevent this behavior.
When you are satisfied with the naming and results, click **Save** to write the new blended dataset to the [**AI Catalog**](catalog) and start the registration process.
## Edit queries {: #edit-queries }
Once you have saved your data asset, you can both view and edit the query from the asset's **Info** tab. Any errors from your previous run are displayed at the top of the page.

To correct errors:
1. Click **Edit script** to return to the query editor. All results and errors from the previous run are preloaded below the editor.

2. Make changes to the query and **Run** to validate results.
3. Click **Save** and choose to either edit your script ("Save new version") or save as a new dataset.

* "Save new version" edits the script and reregisters a new version of the dataset. Visit the **Version History** tab to see all versions. Click a version to expand and see both the SQL query and the related data sources.

!!! note
When using "Save new version," the new version must conform to the original dataset schema. If you must change output schema as part of the edit, use the "Save new dataset" option instead.
* Choose "Save new dataset" to create a new dataset with the updated query. Provide a name and click **Save**.

DataRobot reregisters the dataset and adds it to the **AI Catalog**.
If registration fails using the new query, use the **Edit script** link to return to the SQL editor, correct any problems, and save as a new version.
### Create a new version {: #create-a-new-version }
Additionally, you can use the "Create a new dataset from script" link in the **Version History** tab. Click the link to return to the query editor. When you **Save**, the entry is saved as a new data asset.
|
spark
|
---
title: Schedule snapshots
description: To keep a dataset in sync with the data source, you can schedule snapshots at specified intervals through the AI Catalog.
---
# Schedule snapshots in the AI Catalog {: #schedule-snapshots-in-the-ai-catalog }
!!! info "Availability information"
The **AI Catalog** must be enabled in order to schedule snapshot refreshes. For Self-Managed AI Platform installations, the Model Management Service must also be installed.
To ensure that a dataset is always in sync with its data source, DataRobot provides an automated, scheduled refresh mechanism. Through the [AI Catalog](catalog), users with dataset access above the [consumer](roles-permissions) level can schedule snapshots at daily, weekly, monthly, and annual intervals. You can refresh any data asset type (HDFS, JDBC, Spark, and URL) except for files.

!!! note
Do not enable scheduled dataset refreshes unless you have the [stored credentials](stored-creds) capability enabled (or unless the data source in question does not require credentials, such as a URL or possibly HDFS).
## Schedule refresh tasks {: #schedule-refresh-tasks }
You can schedule multiple refresh tasks; [limits](#refresh-limit-settings) are applied to datasets and to users independently.
To schedule snapshots for a dataset:
1. From the main catalog listing, select the asset for which you want to schedule one or more refresh tasks.
2. Click the **Schedule refresh** link to expand the scheduler.
3. If the asset source is JDBC or HDFS, a login dialog appears. Select the account credentials associated with the asset. DataRobot uses these credentials each time it runs the scheduled task. Once credentials are accepted (or if they were not required), the scheduler opens:

4. Complete the fields to set your task:
|Field | Description |
|-----------------------------|-----------------------|
| Name (1) | Enter a name for the refresh job (or leave the default). |
| [Calendar picker](#use-the-calendar-picker) (2) | Sets the basis for the interval setting. |
| Interval (3) | Based on the calendar setting, the interval dropdown sets the frequency to daily, weekly, monthly, or annually. The time on the selected day is always set to the timestamp when the job was scheduled. |
| Summary (4) | Provides a summary of the selected scheduled task, including the interval and whether it is active or paused, supplied by DataRobot and updated with any changes to the job. |
5. Click **Save** to schedule a refresh for the asset. DataRobot reports the last execution status under the scheduled job name.

### Use the calendar picker {: #use-the-calendar-picker }
Use the calendar picker to select a date that will serve as the basis of the day-of-week, monthly date, or day of year for the refresh.

Refreshes will start on or after (depending on the time set) the specific date. For example, if June 21 is the date selected, refreshes will begin:
* Daily at <em>timestamp</em>, either that day or the next day (June 22).
* Weekly on the set day (every Sunday at <em>timestamp</em>).
* Monthly on that date of the month (the 21st of each month at <em>timestamp</em>).
* Annually on that date (every June 21 at <em>timestamp</em>).
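The anchoring behavior above can be sketched with Python's standard library (illustrative only; `first_run` and the interval strings are hypothetical names, and this sketch ignores the time-of-day component and day-of-month overflow):

```python
from datetime import date, timedelta

def first_run(base: date, interval: str, today: date) -> date:
    """Illustrative sketch of how a base date anchors the first refresh;
    not DataRobot's implementation."""
    if interval == "daily":
        return max(base, today)          # that day or later
    if interval == "weekly":
        # next occurrence of the base date's weekday
        days_ahead = (base.weekday() - today.weekday()) % 7
        return today + timedelta(days=days_ahead)
    if interval == "monthly":
        candidate = today.replace(day=base.day)
        if candidate < today:            # already passed this month
            month, year = (1, today.year + 1) if today.month == 12 else (today.month + 1, today.year)
            candidate = date(year, month, base.day)
        return candidate
    if interval == "annual":
        candidate = base.replace(year=today.year)
        return candidate if candidate >= today else base.replace(year=today.year + 1)
    raise ValueError(interval)

base = date(2023, 6, 21)
print(first_run(base, "monthly", date(2023, 7, 5)))  # 2023-07-21
print(first_run(base, "annual", date(2023, 7, 5)))   # 2024-06-21
```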

Click in the time picker. Use the arrows to change the time, setting the timestamp to the local time at which you want the snapshot to refresh. Click on the date to return to the full calendar view:

### Work with scheduled tasks {: #work-with-scheduled-tasks }
Once scheduled, you can modify the task in a variety of ways. Use the menu associated with the task to access the options.

* <b>Pause job</b>: Pauses the scheduled task indefinitely. When paused, the "Scheduled" label changes to "Paused" and the menu item changes to "Resume job". Use this action to re-enable the scheduled task. Paused jobs do not count against the [task limits](#refresh-limit-settings).
* <b>Edit</b>: Retrieves the scheduler interface, allowing you to change any aspect of the task configuration.
* <b>Manage credentials</b>: Opens the credentials selection modal, allowing you to change the credentials associated with the dataset.
* <b>Delete</b>: Deletes the scheduled task.
### Refresh limit settings {: #refresh-limit-settings }
The following table lists the defaults and maximums for refresh-related activities.
!!! info "Availability information"
For Self-Managed AI Platform installations, consider the maximum setting as the default.
| Number of... | Default | Maximum |
|------------------------------------------------------------------------|--------:|--------:|
| Enabled dataset refresh jobs for a user | 10 | 100 |
| Enabled dataset refresh jobs for a dataset | 5 | 100 |
| Stored snapshots until a dataset refresh job is automatically disabled | 25 | 1000 |
|
snapshot
|
---
title: Work with catalog assets
description: How to find, share, and delete data assets in the AI Catalog, work with metadata and feature lists, view relationships and version history, download datasets.
---
# Work with AI Catalog assets {: #work-with-ai-catalog-assets }
Data assets within the AI Catalog can be one of the following:
* Materialized snapshots of tables/views, meaning DataRobot has pulled from the data asset and is currently keeping a copy of it in the catalog.
* Dynamic connections, meaning that the whole dataset is ingested from your data source when you create a modeling project from it, thus allowing you to work with the most up-to-date data.
If the data is snapshotted, those snapshots can be automatically refreshed periodically, and are also automatically versioned to preserve dataset lineage and enhance the overall governance capabilities of DataRobot.
Additionally, when Composable ML is enabled, you can [save blueprints to the AI Catalog](cml-catalog). From the catalog, a blueprint can be edited, used to train models in compatible projects, or shared.
The following sections describe tools for working with the catalog assets:
* Understand [asset states](#asset-states).
* Update [asset details](#asset-details) (name, tags, and descriptions)
* Manage [feature lists](#work-with-feature-lists)
* View configured [relationships](#view-configured-relationships)
* View [version history](#view-version-history) and [add comments](#add-comments)
* [Share](roles-permissions) and [delete](#delete-assets) assets
* Perform [bulk actions](#bulk-actions-on-datasets) on assets
To add assets, see [Import and create projects in the AI Catalog](catalog).
## Find existing assets {: #find-existing-assets }
Once in the **AI Catalog**, there are a variety of tools to help quickly locate the data assets you want to work with. You can:
=== "Search"
Search for a specific asset using the search query box.

=== "Sort"
Use the dropdown to modify the order of all existing assets.

The default sort option is **Creation date**, except after searching for a specific asset, in which case the default is **Relevance**.
=== "Filter"
Under the search query box, you can filter assets by **Source**, **Tags**, and/or **Owner**.

For example, you can filter by any [tags](#asset-details) manually added to an asset:

## Asset states {: #asset-states }
DataRobot adds badges to catalog entries to indicate the state of the dataset. Either:
State | Description
---------- | -----------
Dynamic | A dataset that has no [snapshot](catalog#create-a-snapshot).
Spark | A dataset built from a [Spark query](catalog#use-a-sql-query).
Snapshot | A dataset that has a snapshot.
Static | A static file or URL-based dataset with a snapshot. <br><br>Datasets uploaded using data stages also display the STATIC badge; however, the FROM field displays `stage://{stageId}/(unknown)`.

!!! note "Versioning static assets"
Static assets can only be versioned by uploads of the same type; datasets created by local files are versioned from local file uploads, and datasets created from a data stage are versioned from data stage uploads.
??? faq "What happens if I create a snapshot from a dynamic dataset?"
In the AI Catalog, the dataset will be marked as `SNAPSHOT`; as with all `SNAPSHOT` datasets, you can still create new snapshots from it. Note that for such a dataset, only the snapshots are used to create projects.
## Asset details {: #asset-details }
When you add a dataset, DataRobot ingests the source data and runs [EDA1](eda-explained#eda1) to register the asset and make it available from the catalog:

Once registered, you can also view additional information and manage asset details using the tabs described below:
=== "Info"
The **Info** tab displays an overview of the asset's details as well as metadata.

| | Element | Description |
---|---|---
 | Name | Name the asset. By default, this is the file name uploaded.
 | Description | Enter a helpful description of the asset.
 | Tags | Add tags to help when filtering assets in the AI Catalog. DataRobot offers any predefined tags that match the characters you entered. Select one by clicking or continue typing to add a new tag of alphanumeric characters (special characters and symbols are invalid). Either click outside the entry box or in the dropdown to add the tag.
 | Overview | An overview of the asset, including the full row count, feature count, and feature types.
 | Metadata | Additional metadata, including size, owner, and dataset ID.
Click the pencil icons () to change the asset name, add a description, or add tags to aid in filtering, and then click anywhere outside of the box to save the change.

=== "Profile"
The **Profile** tab allows you to preview dataset column names and row data. It can be useful for finding or verifying column names when writing Spark SQL statements for [blended datasets](spark#create-blended-datasets).
!!! note "Info tab vs. Profile tab"
The **Info** tab displays the data's total row count, feature count, and size.
The **Profile** tab displays only a preview of the data based on a 1MB raw sample; feature types and details are based on a 500MB sample.
As a result, the row count observed on the **Profile** tab may not match that displayed on the **Info** tab.
Note that the preview is a random sample of up to 1MB of the data and may be ordered differently from the original data. To see the complete, original data, use the [**Download Dataset**](#download-datasets) option.
To preview a dataset, select it in the main catalog and click the pencil icon () to access dataset information (if available).
1. Click the **Profile** tab to preview the contents of the dataset:

2. Use the **Columns** dropdown to select the number of columns to display on the page and the scroll bars to scroll through those columns. Additionally, you can use the **Rows** dropdown to cycle through available data, 20 rows at a time.
The **Profile** tab also displays details for all features in the dataset. To view details for a particular feature, scroll to it in the display and click. The **Feature Details** listed in the right panel update to reflect statistics for the feature. (These are the same statistics as those displayed on the [**Data**](histogram) tab for EDA1.)

=== "Feature Lists"
You can create new lists and feature transformations for features of any dataset in the catalog. To work with the tools, select the dataset in the main catalog and **Feature Lists** in the left panel.
When you create feature lists, they are copied to a project upon creation. You can then set the list to use for the project from the **Feature List** dropdown at the top of the **Project Data** list. See the section on working with [**Feature Lists**](feature-lists) for complete details on creating, modifying, and understanding these lists.

The **Feature List** tab also provides access to a tool for creating variable type feature transformations. While DataRobot bases variable type assignments on the values seen during EDA, there are times when you may need to change the type. Refer to [feature transformations](feature-transforms) documentation for complete details.
To create a feature list:
1. Use the checkboxes to the left of feature names to select a set of features.
2. Click the **Create new feature list from selection** link, which becomes active after you select the first feature.

3. In the resulting dialog, provide a name for the new list and click **Submit**. The new list becomes available through the dropdown.
You can delete or rename any feature list you created. You cannot make any changes to the DataRobot [default feature lists](feature-lists#automatically-created-feature-lists).

=== "Relationships"
DataRobot’s Feature Discovery capability guides you through creating relationships, which define both the included datasets and how they are related to one another. The end product is a multitude of additional features that result from these links. The Feature Discovery engine analyzes the included datasets to determine a feature engineering “recipe” and, from that recipe, generates secondary features for training and predictions. Once these relationships are established, you can view them from the catalog.
To view relationships, select the dataset in the main catalog and click the **Relationships** tab to view, modify, or delete existing relationships:

See complete details of working with [relationships](fd-overview#define-relationships) before modifying relationship details.
=== "Version History"
The **Version History** tab lists all versions of a selected asset. The **Status** column indicates the snapshot status—green if successful, red if failed, gray if the original version did not have a snapshot.

Click a version to select it. Once selected, you can create a project from the version and download or delete the contents.
=== "Comments"
The **Comments** tab allows you to add comments to—even host a discussion around—any item in the catalog that you have access to. Comment functionality is available in the **AI Catalog** (illustrated below), and also as a model tab from the Leaderboard and in [use case tracking](value-tracker). With comments, you can:
* Tag other users in a comment; DataRobot will then send them an email notification.
* Edit or delete any comment you have added (you cannot edit or delete other users' comments).

## Asset actions {: #asset-actions }
In the AI Catalog, there are a number of ways you can interact with data assets, including downloading, sharing, and deleting datasets.
### Download datasets {: #download-datasets }
To download a dataset, select it from the catalog list. From the dropdown menu in the upper right, select **Download Dataset** () and, in the resulting dialog, browse to a download location and click **Save**.
!!! note
Only [snapshotted](catalog#create-a-snapshot) datasets can be downloaded. Additionally, there is a 10GB file size limit; attempting to download a dataset larger than 10GB will fail.

### Share assets {: #share-assets }
Assets in the **AI Catalog** can be shared to users, groups, and organizations.

| | Element | Description |
|---|---|---|
|  | Allow sharing | The user you're sharing with can share the asset with other users. |
|  | Can use data | The user you're sharing with can use the data. How they use the data depends on their role. |
|  | User list | Enter the user(s) with whom you are sharing the asset. |
|  | Access level | Select from **Users**, by default. If your instance has **Groups** and **Organizations** configured, you can select from these categories. |
|  | Role | Select a role for the users, groups, or organizations with which you are sharing the asset: <ul><li>**Owner**: Can view, edit, and administer the asset. </li><li>**Consumer**: Can view the asset. </li><li>**Editor**: Can view and edit the asset. </li></ul> |
|  | Share | Select **Share** to perform the operation. |
|  | Shared with | Shows the users, groups, and organizations the asset is shared with, along with their permission settings. |
!!! tip "Sharing with multiple users"
When sharing a catalog asset with multiple users, DataRobot suggests [creating a user group](manage-groups#manage-groups) first, and then sharing with that group instead of individual users.
The catalog uses the same [sharing](roles-permissions#sharing) window as other places in the application, with some fields specific to the data assets.
### Delete assets {: #delete-assets }
To delete a dataset, select the dataset from the catalog list. From the dropdown menu in the upper right, select **Delete Dataset** (). When prompted for confirmation, click **Delete**.
### Bulk actions on datasets {: #bulk-actions-on-datasets }
You can share, tag, or delete multiple datasets at once using the bulk action functionality in the AI Catalog. Start by selecting the box next to the asset(s) you want to manage; select at least one asset to enable the bulk actions at the top. A counter also displays how many assets are actively selected.

Once you're done selecting assets, choose the appropriate action from the following options: [Delete](#delete-assets), [Tag](#asset-details), or [Share](#share-assets).
|
catalog-asset
|
---
title: AI Catalog
description: DataRobot's AI Catalog is a collection of registered objects that contains definitions and relationships between various object types.
---
# AI Catalog {: #import-data }
The AI Catalog is a browsable and searchable collection of registered objects that contains definitions and relationships between various object types. Items stored in the catalog include data connections, data sources, data metadata, and blueprints.
Topic | Describes...
----- | ------
**Import datasets** | :~~:
[Import data and create projects](catalog) | Import data into the AI Catalog and from there, create a DataRobot project.
**Manage catalog assets** | :~~:
[Work with catalog assets](catalog-asset) | Understand the details of catalog assets and learn how to manage them, including downloading, deleting, and sharing.
[Schedule data snapshots](snapshot) | Set up schedules for data snapshots in the AI Catalog to keep a dataset in sync with its data source.
**Prepare data** | :~~:
[Prepare data with Spark SQL](spark) | Enrich, transform, shape, and blend together datasets using Spark SQL queries within the AI Catalog.
|
index
|
---
title: Load data and create projects
description: From the AI Catalog, you can add external data using JDBC or a SQL query, create a snapshot, configure fast registration, upload calendars for time series projects, then create a project.
---
# Import and create projects in the AI Catalog {: #import-and-create-projects-in-the-ai-catalog }
The AI Catalog enables seamlessly finding, sharing, tagging, and reusing data, helping to speed time to production and increase collaboration. The catalog provides easy access to the data needed to answer a business problem while ensuring security, compliance, and consistency. With the AI Catalog, you can:
* Execute simple data preparation, leveraging SQL scripts for pinpointed results.
* Create datasets without the full commitment of creating projects.
* Find, access, delete, and reuse the assets you need.
* Share data without sharing projects, decreasing risks and costs around data duplication.
* Support data security and governance, which reduces friction and speeds up model adoption, through selective addition to the catalog, role-based sharing, and an audit trail.
!!! info "Important"
For Self-Managed AI Platform users, DataRobot recommends enabling Elasticsearch for significantly improved search matches, relevancy, and rankings. Contact your DataRobot representative for help configuring and deploying Elasticsearch.

The AI Catalog is a centralized collaboration hub for working with data and related assets. The DataRobot landing page provides the option to start a project via the [legacy method](import-to-dr) or by using the AI Catalog.
The following sections describe importing data and creating projects from the AI Catalog:
* [Add](#add-data) new data
* From an external [data connection](#add-data-from-external-connections)
* Using a [SQL query](#use-a-sql-query)
* [Create a snapshot](#create-a-snapshot) from a connected data source
* [Create a project](#create-a-project) for a listed asset
Once in the catalog, use the [additional tools](catalog-asset) to view, modify, and share assets.
## Add data {: #add-data }
The starting point for adding assets to the catalog is either the application home page or the AI Catalog home page:

Import methods are the same for both legacy and catalog entry—that is, via local file, HDFS, URL, or JDBC data source. From the catalog, however, you can also add data by blending datasets with Spark SQL. When uploading through the catalog, DataRobot completes [EDA1](eda-explained#eda1) (for [materialized](glossary/index#materialized) assets), and saves the results for later re-use. For unmaterialized assets, DataRobot uploads and samples the data but does not save the results for later re-use. Additionally, you can [upload calendars](#upload-calendars) for use in time series projects.
To upload assets to the catalog:
1. Select the **AI Catalog** tab.
2. Click **Add to catalog** and select a source for the data:

* [Data Connection:](#add-data-from-external-connections) Add data from an existing external data connection (with or without [a snapshot](#create-a-snapshot)). If you do not have existing connections, you can [add](data-conn#create-a-new-connection) them from the AI Catalog.
* [Local File:](import-to-dr#import-local-files) Use the file browser to select a locally-stored file.
* [URL:](import-to-dr#import-a-dataset-from-a-url) Add data from a URL (HTTP, HTTPS, local, S3, Google Cloud Storage). Note that the types of URLs supported depend on how your installation is configured.
* [Spark SQL:](spark) Blend two or more datasets using Spark SQL.
### External data connections {: #external-data-connections }
Using JDBC, you can read data from external databases and add the data as assets to the AI Catalog for model building and predictions. See [Data connections](data-conn) for more information.
1. If you haven't already, [create the connections](data-conn#create-a-new-connection) and add data sources.
2. Select the **AI Catalog** tab, click **Add to catalog**, and select **Existing Data Connection**.

3. Click the connection that holds the data you would like to add.

4. Select an account. Enter [or use stored credentials](stored-creds) for the connection to authenticate.

5. Once validated, select a source for data.

| | Element | Description |
|---|---|---|
| | Schemas | Select **Schemas** to list all schemas associated with the database connection. Select a schema from the displayed list. DataRobot then displays all tables that are part of that schema. Click **Select** for each table you want to add as a data source. |
| | Tables | Select **Tables** to list all tables across all schemas. Click **Select** for each table you want to add as a data source. |
| | SQL Query | Select data for your project with a [SQL query](#use-a-sql-query). |
|  | Search | After you select how to filter the data sources (by schema, table, or SQL query), enter a text string to search.
| | Data source list | Click **Select** for data sources you want to add. Selected tables (datasets) display on the right. Click the `x` to remove a single dataset or **Clear all** to remove all entries. |
| | Policies | Select a policy:<ul><li>**Create snapshot**: DataRobot takes a snapshot of the data. </li><li>**Create dynamic**: DataRobot refreshes the data for future modeling and prediction activities. </li></ul>|
6. Once the content is selected, click **Proceed with registration**.

DataRobot registers the new tables (datasets) and you can then create projects from them or perform other operations, like sharing and querying with SQL.

### Use a SQL Query {: #use-a-sql-query }
You can use a SQL query to select specific elements of the named database and use them as your data source. DataRobot provides a web-based code editor with SQL syntax highlighting to help in query construction. Note that DataRobot’s SQL query option only supports SELECT-based queries. Also, SQL validation is only run on initial project creation. If you edit the query from the summary pane, DataRobot does not re-run the validation.
To use the query editor:

1. Once you have added data from an [external connection](#add-data-from-external-connections), click the **SQL query** tab. By default, the **Settings** tab is selected.
2. Enter your query in the SQL query box.
3. To validate that your entry is well-formed, make sure that the **Validate SQL Query** box below the entry box is checked.
!!! note
In some scenarios, it can be useful to disable syntax validation as the validation can take a long time to complete for some complex queries. If you disable validation, no results display. You can skip running the query and proceed to registration.
4. Select whether to create a [snapshot](#create-a-snapshot).
5. Click **Run** to create a results preview.
6. Select the **Results** tab after computing completes.
7. Use the window-shade scroll to display more rows in the preview; if necessary, use the horizontal scroll bar to scroll through all columns of a row:

When you are satisfied with your results, click **Proceed with registration**. DataRobot validates the query and begins data ingestion. When complete, the dataset is published to the catalog. From here you [can interact with the dataset](catalog-asset#work-with-ai-catalog-assets) as with any other asset type.

For more examples of working with the SQL editor, see [Prepare data in AI Catalog with Spark SQL](spark).
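As noted above, only SELECT-based queries are supported. A minimal client-side guard might look like the following sketch (this is not DataRobot's validator; the function name is hypothetical, and treating `WITH`, a CTE prefix, as SELECT-based is an assumption):

```python
def is_select_query(sql: str) -> bool:
    """Illustrative only: check that a query is SELECT-based by inspecting
    its first keyword. Not a full SQL parser."""
    stripped = sql.strip()
    if not stripped:
        return False
    first_word = stripped.split(None, 1)[0].upper()
    return first_word in ("SELECT", "WITH")  # WITH assumed to introduce a CTE

print(is_select_query("SELECT * FROM sales"))  # True
print(is_select_query("DROP TABLE sales"))     # False
```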
### Configure fast registration {: #configure-fast-registration }
Fast registration allows you to quickly register large datasets in the AI Catalog by specifying the first N rows to be used for registration instead of the full dataset. This gives you faster access to data to use for testing and Feature Discovery.
To configure fast registration:
1. In the **AI Catalog**, click **Add to catalog** and select your data source. Fast registration is only available when adding a dataset from a new data connection, an existing data connection, or a URL.

2. In the resulting window, enter the data source information (in this example, URL).

3. Select the appropriate policy for your use case—either **Create snapshot** or **Create dynamic**.
For both snapshot and dynamic policies, the AI Catalog dataset calculates EDA1 using only the specified number of rows, taken from the start of the dataset. For example, if you specify 1,000 rows, EDA1 is calculated using only the first 1,000 rows.
The two policies differ in what consumers of the dataset see: a consumer of a snapshot dataset (for example, when using it to create a project) sees only the specified number of rows, while a consumer of a dynamic dataset sees the full set of rows rather than the partial set.
4. Select the fast registration data upload option. For snapshot, select **Upload data partially**, and for dynamic, select **Use partial data for EDA**.
5. Specify the number of rows to use for data ingest during registration and click **Save**.
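The "first N rows" idea behind fast registration can be sketched locally. This is an illustrative sample, not DataRobot's ingest code; the CSV text and row count are invented for the example:

```python
import csv
import io
from itertools import islice

def first_n_rows(csv_text: str, n: int):
    """Read only the header plus the first n data rows, as fast registration does for EDA."""
    reader = csv.reader(io.StringIO(csv_text))
    header = next(reader)
    return header, list(islice(reader, n))

# A 10,000-row dataset, of which only the first 1,000 rows are ingested.
data = "id,amount\n" + "\n".join(f"{i},{i * 10}" for i in range(1, 10_001))
header, sample = first_n_rows(data, 1000)
```

Here `sample` holds exactly 1,000 rows taken from the start of the dataset, which is what EDA1 would be computed on.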
### Upload calendars {: #upload-calendars }
[Calendars for time series](ts-adv-opt#calendar-files) projects can be uploaded either:
* Directly to the catalog with the **Add to catalog** button, using any of the upload methods. Calendars uploaded as a local file are automatically added to the **AI Catalog**, where they can then be shared and downloaded.
* From within the project using the **Advanced options > Time Series** tab.
When adding from **Advanced options**, use the **choose file** dropdown and choose **AI Catalog**:

A modal appears listing the available calendars, which DataRobot determines based on the content of the dataset. Use the dropdown to sort the listing by type.

DataRobot determines whether a calendar is single-series or multiseries based on the number of columns: two columns, only one of which is a date, indicate a single-series calendar; three columns with a single date column indicate a multiseries calendar.
Click on any calendar dataset to see the associated details and select the calendar for use with the project.

The calendar file becomes part of the standard **AI Catalog** inventory and can be reused like any dataset. Calendars generated from **Advanced options** are saved to the catalog where you can then download them, apply further customization, and re-upload them.
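The column-count rule for calendar type detection can be expressed as a small predicate. This is an illustrative sketch of the rule as documented, not DataRobot's actual detector:

```python
def calendar_type(columns, date_columns):
    """Classify a calendar as single-series or multiseries from its column layout
    (illustrative; mirrors the documented rule, not DataRobot internals)."""
    if len(date_columns) != 1:
        raise ValueError("a calendar needs exactly one date column")
    if len(columns) == 2:
        return "single series"
    if len(columns) == 3:
        return "multiseries"
    raise ValueError("unexpected calendar shape")

# Two columns, one date: single series.
assert calendar_type(["date", "event"], ["date"]) == "single series"
# Three columns, one date: multiseries.
assert calendar_type(["date", "event", "series_id"], ["date"]) == "multiseries"
```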
## Create a snapshot {: #create-a-snapshot }
You can uncheck **Create Snapshot** when adding external data connections (to meet certain security requirements, for example). When deselected, DataRobot adds the database table to the catalog but does not take a snapshot, creating an [unmaterialized](glossary/index#unmaterialized) data entry: DataRobot pulls the data once, runs EDA to learn the data structure, and then deletes the data, pulling it again only when it is requested for modeling or predictions. Snapshotted [materialized](glossary/index#materialized) data is stored on disk; unmaterialized data remains in your remote source and is downloaded only when needed.
!!! note
You can schedule [automated snapshot refreshes](snapshot) to sync your dataset with your data source regularly.
To determine whether an asset has been snapshotted, click on its catalog entry and check the details on the right. If it has been snapshotted, the last snapshot date displays; if not, a notification appears:

To create a snapshot for unmaterialized data:
1. Select the asset from the main catalog listing.
2. Expand the menu in the upper right and select **Create Snapshot**.

You cannot update the snapshot parameters that were defined when the catalog entry was added; snapshots are based on the original SQL.
3. DataRobot prompts for any credentials needed to access the data source. Click **Yes, take snapshot** to proceed.
4. DataRobot runs EDA. New snapshots are available from the version history, with the newest ("latest") snapshot becoming the one used by default for the dataset.
Once EDA completes, the displayed status updates to "SNAPSHOT" and a message appears indicating that publishing is complete. If you want the asset to no longer be snapshotted, remove the asset and add it again, making sure to uncheck **Create Snapshot**.
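The behavioral difference between materialized (snapshot) and unmaterialized (dynamic) entries can be sketched as a toy model. The class and names below are invented for illustration and are not part of the DataRobot API:

```python
class CatalogEntry:
    """Toy model of snapshot (materialized) vs. dynamic (unmaterialized) assets."""

    def __init__(self, fetch, snapshot=True):
        self._fetch = fetch  # callable that pulls data from the remote source
        # Snapshot policy: pull once and keep a stored copy; dynamic: store nothing.
        self._cache = fetch() if snapshot else None

    @property
    def materialized(self):
        return self._cache is not None

    def data(self):
        # Materialized assets serve the stored copy; unmaterialized ones pull on demand.
        return self._cache if self.materialized else self._fetch()

calls = {"n": 0}

def fetch():
    calls["n"] += 1
    return [1, 2, 3]

snap = CatalogEntry(fetch, snapshot=True)   # one pull at creation time
dyn = CatalogEntry(fetch, snapshot=False)   # no pull yet
snap.data(); snap.data()                    # served from the snapshot; no extra pulls
dyn.data(); dyn.data()                      # each use pulls from the source
```

After this runs, the source was contacted three times: once for the snapshot and once per use of the dynamic entry.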
## Create a project {: #create-a-project }
You can create new projects directly from the **AI Catalog**; you can also use listed datasets as a source for [predictions](predict).
To create a project, from the catalog main listing, click on an asset to select it. In the upper right, click **Create project**.

DataRobot runs [EDA1](eda-explained#eda1) and loads the project. When complete, DataRobot displays the [**Start**](model-data) screen.
|
catalog
|
---
title: Creating and managing data connectors
description: Add data connectors to DataRobot and create a data connection from a connector.
section_name: Data
maturity: public-preview
---
# Creating and managing data connectors {: #creating-and-managing-data-connectors }
!!! info "Availability information"
Creating and managing data connectors is off by default. Contact your DataRobot representative or administrator for information on enabling the feature.
<b>Feature flag:</b> Enable DataRobot Connectors
Now available as a public preview feature, you can add data connectors to DataRobot and create a [data connection from a connector](data-conn). Additionally, Self-Managed AI Platform administrators can [manage data connectors](#managing-data-connectors) for an organization.
## Creating a new data connection {: #creating-a-new-data-connection }
Unless specifically disabled by the administrator, each user has permission to create a data connection. Any connection that you create is available only to you unless you [share](data-conn#share-data-connections) it with others.
To create a new data connection:
1. From the Account Settings dropdown, select **Data Connections**.

2. Click **Add new data connection** to open the data store selection dialog box. You can also create a new data connection using the [**AI Catalog**](catalog) by selecting **Add to catalog** > **New Data Connection**.

3. Select the **Connectors** tile in the dialog box.

4. Complete the fields for the data connection. They will vary slightly based on the data connection selected.

| Field | Description |
|------|-----------|
| Data connection name | Provide a unique name for the connection. |
| Connector | Select the connector for the data store to use from the dropdown list. |
| Configuration: Parameters | Indicate the name of the bucket in which the data is stored and modify [parameters](data-conn#data-connection-with-parameters) for the connection. |
5. Click **Add data connection** to save the configuration.
The new connection appears in the left-panel list of **Data Connections**.
Note that to authenticate an S3 data connection, you must always provide the AWS Access Key and AWS Secret Access Key:

#### Azure Data Lake Gen2 connectors {: #azure-data-lake-gen2-connectors }
In addition to S3, users can create data connections with Azure Data Lake Storage (ADLS) Gen2 connectors. The configuration process is similar to the workflow described above.
When indicating the configuration parameters, complete the two mandatory fields for Azure:

* Azure Account Storage Name: The Azure account that has the container with data you plan to use.
* File System Name: The name of the container storing the data.
To verify a data connection, you must use your Azure username, password, and account name.

!!! note
To authenticate an ADLS Gen 2 connector, users must have the appropriate Azure built-in role: [Storage Blob Data Owner](https://docs.microsoft.com/en-us/azure/role-based-access-control/built-in-roles#storage-blob-data-owner){ target=_blank }, [Storage Blob Data Contributor](https://docs.microsoft.com/en-us/azure/role-based-access-control/built-in-roles#storage-blob-data-contributor){ target=_blank }, or [Storage Blob Data Reader](https://docs.microsoft.com/en-us/azure/role-based-access-control/built-in-roles#storage-blob-data-reader){ target=_blank }. If the system displays an error and fails to authenticate, report the error to your IT team so they can check your user role in Azure.
!!! warning "Two-factor authentication restrictions"
You cannot set up an ADLS Gen2 connector in DataRobot if the account uses two-factor authentication in Azure.
Each configured user must also grant DataRobot permission to access data on their behalf. Click **Grant access** to do so for the account listed. To grant access for another account, click **Use different account** and enter a different set of credentials.

This brings you to a Microsoft login page that prompts you to log in with your Azure credentials and grant DataRobot permissions. After consenting, the data connection is fully verified.
## Self-Managed AI Platform admins {: #self-managed-ai-platform-admins }
The following is available only on the Self-Managed AI Platform.
### Managing data connectors {: #managing-data-connectors }
!!! info "Required permission"
"Can manage connectors"
In addition to data connections, system administrators can now also manage data connectors, which provide a way for users to ingest data from a database. The administrator can [upload connector files](#creating-a-new-data-connection) (JAR files) for their organization's users to access when creating data connections. Once uploaded, connectors can be modified or deleted only by administrators.
The steps below describe how to create a connector.
1. Click the profile icon in the top right corner of the application screen, and select **Data Connections** from the dropdown menu.
2. Select the **Connectors & JDBC Drivers** link:

3. In the left-panel **Connectors & JDBC Drivers** list, select **Connectors** and click **Add new connector**.

4. In the displayed dialog, click **Upload JAR** and select the file for the connector.
Once the JAR is uploaded, you can view the connector in the left panel:

|
connector
|
---
title: Create feature lists in the Relationship Editor
description: Create custom feature lists and transform features in DataRobot's Relationship Editor.
section_name: Data
maturity: public-preview
---
# Create feature lists in the Relationship Editor {: #create-feature-lists-in-the-relationship-editor }
!!! info "Availability information"
The ability to create feature lists in the Relationship Editor, available as a public preview feature, is off by default. Contact your DataRobot representative or administrator for information on enabling the feature.
<b>Feature flag:</b> Enable Feature List Creation from Relationship Editor
You can now create custom feature lists and transform features in the Relationship Editor.
## Access the Relationship Editor {: #access-the-relationship-editor }
Open the Relationship Editor in either of the following ways:
* [Create a Feature Discovery project](fd-overview). Use this method if you haven't yet added secondary datasets and created relationships among the datasets.
* Click **Edit relationships** on the right side of the **Secondary Datasets** section of the **Data** page. Use this method if you have already created a relationship configuration.
## Create a feature list {: #create-a-feature-list }
1. Hover over the menu on the right of the dataset tile from which you want to create a feature list and select **Configure dataset**.

2. In the **Dataset Editor**, click **+ Create new feature list**.

3. Enter a name for the new feature list, select the features to be included, and click **Create Feature List**.

A message indicating that the feature list has been successfully created displays in the top right.
??? tip
The list of features in the Dataset Editor window extends over multiple pages if necessary. To change the number of features displayed on each page, click the page control and select 5, 10, or 20 from the dropdown.

## Transform features {: #transform-features }
Just as you can [transform features](feature-transforms) on the **Data** tab, you can do so when you add them to a feature list in the Relationship Editor. To transform a feature in the Relationship Editor:
1. [Open the Relationship Editor](#access-the-relationship-editor), hover over the menu on the right of the dataset tile, and select **Configure dataset**.
2. In the **Dataset Editor**, click **+ Create a new feature list**.
3. Click the transform icon () for a feature you want to transform.

There are different settings depending on the type of feature you are transforming.
For example, for a categorical transform, you can choose to transform the category to a Text or Numeric feature.

For a date feature, you can extract portions of a date and transform them. In this case, the month portion of the date is transformed into a categorical feature.

4. Once you have selected your transformed feature settings, optionally update the generated feature name in the **New Feature Name** field.
In this example, a Numeric transform was selected.

By default, DataRobot names the feature using the original feature name followed by the transform data type in parentheses.
5. Click **Create Feature**.
A message indicating that the transformed feature has been successfully created displays in the top right. In the **Create feature list** window, you can hover over the "i" to see that the feature was created as a result of a transform.

|
safer-rel-editor-feature-lists
|
---
title: AI Catalog impact analysis
description: View additional details that show how AI Catalog entities in the DataRobot application are related to--or dependent on--the current asset.
section_name: Data
maturity: public-preview
---
# AI Catalog impact analysis {: #ai-catalog-impact-analysis }
!!! info "Availability information"
**AI Catalog** impact analysis is off by default. Contact your DataRobot representative or administrator for information on enabling the feature.
<b>Feature flag:</b> Enable **AI Catalog** item impact analysis
The [**AI Catalog**](catalog) serves as a centralized collaboration hub for working with data and related assets. Additionally, it provides at-a-glance details (metadata) about these assets.
Available as a public preview feature, you can view additional details that show how other entities in the application are related to—or dependent on—the current asset. This is useful for a number of reasons, allowing you to:
* View how popular an item is based on the number of projects in which it is used.
* Understand which other entities might be affected if you were to make changes or deletions.
* Gain an understanding of how the entity is used.
All of the following associations are reported (with frequency values) as applicable:
* Projects
* Prediction datasets
* Feature Discovery configurations
* Time series calendars
* Spark SQL queries
* External model packages
* Deployment retraining
To view details, click on the asset title and tiles relevant to the selection display:

Click on a tile for summary details and then click on the associated button for specific detail. For example, click on **Project** for a summary and **Open Project** for detail:

In this example, DataRobot opens to the **Start** screen if [EDA1](eda-explained) was the last step completed or the **Data** page if EDA2 completed.
Each tile type provides different (self-explanatory) details.
If you do not have permission to access an asset, you can view an entry that represents the asset, but the entry does not disclose any additional information.

This functionality is also available from the **Version History** tab for an asset:

|
catalog-impact
|
---
title: Data connection UI improvements
description: Set up data connections in DataRobot using a more intuitive workflow.
section_name: Data
maturity: public-preview
---
# Data connection UI improvements {: #data-connection-ui-improvements }
!!! info "Availability information"
The new data connection page is off by default. Contact your DataRobot representative or administrator for information on enabling the feature.
<b>Feature flag:</b> Enable New Data Connection UI
Now available for public preview, DataRobot introduces improvements to the data connection user interface that simplify the process of adding and configuring data connections. Instead of opening multiple windows to set up a data connection, after selecting a data store, you can configure parameters and authenticate credentials in the same window. For each data connection, only the required fields are displayed; however, you can define additional parameters under **Advanced Options** at the bottom of the page.
1. To access the new data connections page, go to the **AI Catalog**.
2. Click **Add to Catalog** and select **Data Connection**.

3. All existing connections are displayed on the left. If you select a configured connection, its configuration options are displayed in the center.

4. To add a new data connection, click **Add new connection**.

5. Select a connection type.

6. Name the data connection (1), select an authentication method (2), and fill in the required fields (see the documentation for your specific data store).
Note that the visible configuration options are the required parameters for the selected data store, so these options vary for each data store. You can add more parameters under **Show advanced options** (3).

??? note "Saved credentials"
If you previously added credentials for your datastore via the [**Credentials Management**](stored-creds#credentials-management) page, you can click **Select saved credentials** and choose them from the list instead of adding them manually.
7. Click **Add from connection** to open the **Schema** tab.
8. The **Schema** tab lists the available schemas for your database—select a schema from the list. Once selected, the **Tables** tab opens.

To use a SQL query to select specific elements of the named database, click the [**SQL query**](catalog#use-a-sql-query) tab.
9. Select the table(s) you want to register in the AI Catalog and click **Proceed to confirmation**. Each table will be registered as a separate catalog asset.

10. Under **Settings**, select the appropriate policy (1) and data upload amount (2), then review and confirm your selections by clicking **Register in the AI Catalog**.

View the existing workflow procedure in the [AI Catalog documentation](catalog#add-data-from-external-connections).
|
new-data-ui
|
---
title: BigQuery connection enhancements
description: A new BigQuery connector is now available for public preview, providing several performance and compatibility enhancements, as well as support for authentication using Service Account credentials.
section_name: Data
maturity: public-preview
---
# BigQuery connection enhancements {: #bigquery-connection-enhancements }
!!! info "Availability information"
BigQuery connector enhancements are off by default. Contact your DataRobot representative or administrator for information on enabling the feature.
<b>Feature flag:</b> Enable Native BigQuery Driver
A new BigQuery connector is now available for public preview, providing several performance and compatibility enhancements, as well as support for authentication using Service Account credentials.
## Add the driver {: #add-the-driver }
!!! info "Availability information"
Self-Managed AI Platform admins must add the new native BigQuery driver before creating a connection.
**Feature flag:** Can manage JDBC database drivers
To add the new BigQuery driver:
1. Click your User icon in the upper-right corner and select **Drivers**.
2. Click **+ Add new driver**.

3. In the **Add New Driver** dialog, select **Native** configuration, choose **BigQuery** from the dropdown, and click **Create Driver**.

## Create a data connection {: #create-a-data-connection }
To set up a data connection using the new BigQuery connector and Service Account credentials:
1. [Add a new connection](data-conn#create-a-new-connection){ target=_blank } via the **AI Catalog** or **Data Connections** tab:
- Go to the **AI Catalog**, click **Add to catalog**, and select **New Data Connection**.
- Click your User profile, select **Data Connections**, and click **+ Add new connection**.
2. In the **Select connection type** modal, select the new **BigQuery** connection. The previous version, _Google BigQuery - 2022 (Deprecated)_, is now disabled.

3. Under **Credential type**, select **Service Account** and fill in the required parameters for manual configuration (described in the table below):

Required field | Description | Notes
--------------- | ---------- | -----------
`Projectid` | A globally unique identifier for your project. | See the [Google Cloud documentation](https://cloud.google.com/resource-manager/docs/creating-managing-projects){ target=_blank }.
[Service Account Key](https://cloud.google.com/iam/docs/service-account-creds#key-types){ target=_blank } | The public/private RSA key pair associated with each service account that can be provided as a JSON string or loaded from a file. | See the Google Cloud documentation, [List and get service account keys](https://cloud.google.com/iam/docs/keys-list-get#get-key){ target=_blank }.
Display name | A unique identifier for your credentials in DataRobot. | You can access and manage these credentials under this display name.
4. When you're done, click **Create**.
For more information, see the documentation on [required configuration parameters and connecting to BigQuery in DataRobot](dc-bigquery){ target=_blank }.
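Before pasting a Service Account Key into the connection dialog, it can help to sanity-check that the JSON string really is a service account key. The sketch below is an illustrative local check (DataRobot performs its own validation); the project ID and email are invented sample values, and the key fields checked (`type`, `project_id`, `private_key`, `client_email`) are standard fields of a GCP service account key file:

```python
import json

def check_service_account_key(key_json: str) -> dict:
    """Minimal sanity check of a GCP service account key string (illustrative only)."""
    key = json.loads(key_json)
    missing = [f for f in ("type", "project_id", "private_key", "client_email") if f not in key]
    if missing or key.get("type") != "service_account":
        raise ValueError(f"not a service account key (missing fields: {missing})")
    return key

# Invented sample values for illustration; a real key file contains a full private key.
sample = json.dumps({
    "type": "service_account",
    "project_id": "my-project",
    "private_key": "-----BEGIN PRIVATE KEY-----\n...",
    "client_email": "svc@my-project.iam.gserviceaccount.com",
})
key = check_service_account_key(sample)
```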
|
bigq-service-acct
|
---
title: Data public preview features
description: Read preliminary documentation for data-related features currently in the DataRobot public preview pipeline.
section_name: Data
maturity: public-preview
---
# Data public preview features {: #data-public-preview-features }
{% include 'includes/pub-preview-notice-include.md' %}
## Available data public preview documentation {: #available-data-public-preview-documentation }
=== "SaaS"
Public preview for... | Describes...
----- | ------
[BigQuery connection enhancements](bigq-service-acct) | The new BigQuery connector provides several performance and compatibility enhancements, as well as support for authentication using Service Account credentials.
[Key pair authentication for Snowflake](snow-keypair) | Connect to Snowflake using key pair as the authentication method.
[Feature Discovery support in No-Code AI Apps](app-ft-cache) | Create No-Code AI Apps from Feature Discovery projects.
[Data connection UI improvements](new-data-ui) | Set up data connections in DataRobot using a more intuitive workflow.
[Relationship Quality Assessment speed improvements](rqa-speed) | Assess Feature Discovery relationship configurations faster by sampling approximately 10% of the primary dataset to run the Relationship Quality Assessment.
[AI Catalog impact analysis](catalog-impact)| View additional details that show how other entities in the **AI Catalog** are related to or dependent on an asset.
[Create and manage data connectors](connector) | Add data connectors to DataRobot and create a data connection from a connector.
[Enable governance workflows for Feature Discovery deployments](feat-disc-workflow) | Control updates to secondary dataset configurations with the deployment approval workflow for Feature Discovery projects.
[Create feature lists in the Relationship Editor](safer-rel-editor-feature-lists) | Create custom feature lists and transform features in the Relationship Editor.
=== "Self-Managed"
Public preview for... | Describes...
----- | ------
[BigQuery connection enhancements](bigq-service-acct) | The new BigQuery connector provides several performance and compatibility enhancements, as well as support for authentication using Service Account credentials.
[Key pair authentication for Snowflake](snow-keypair) | Connect to Snowflake using key pair as the authentication method.
[Data connection UI improvements](new-data-ui) | Set up data connections in DataRobot using a more intuitive workflow.
[Relationship Quality Assessment speed improvements](rqa-speed) | Assess Feature Discovery relationship configurations faster by sampling approximately 10% of the primary dataset to run the Relationship Quality Assessment.
[AI Catalog impact analysis](catalog-impact)| View additional details that show how other entities in the **AI Catalog** are related to or dependent on an asset.
[Create and manage data connectors](connector) | Add data connectors to DataRobot and create a data connection from a connector.
[Enable governance workflows for Feature Discovery deployments](feat-disc-workflow) | Control updates to secondary dataset configurations with the deployment approval workflow for Feature Discovery projects.
[Create feature lists in the Relationship Editor](safer-rel-editor-feature-lists) | Create custom feature lists and transform features in the Relationship Editor.
|
index
|
---
title: Governance workflow support for Feature Discovery deployments
description: DataRobot's Feature Discovery projects support the deployment approval workflow to control updates to secondary dataset configurations.
section_name: Data
maturity: public-preview
---
# Governance workflow support for Feature Discovery deployments {: #governance-workflow-support-for-feature-discovery-deployments }
!!! info "Availability information"
The governance workflow for Feature Discovery deployments, available as a public preview feature, is off by default. Contact your DataRobot representative or administrator for information on enabling the feature.
<b>Feature flags:</b> Enable Feature Discovery Global Approval Workflow for Model Deployments, Enable Global Approval Workflow, Can manage Global Approval Policies, Enable MLOps
Feature Discovery projects now support the [deployment approval workflow](dep-admin) to control updates to secondary dataset configurations. Before controlling secondary datasets with the approval workflow, an MLOps admin must go to **User Settings > Approval Policies** and set up the "Secondary datasets configuration changed" [approval policy](deploy-approval) trigger.

When the trigger is configured, [changing a secondary dataset](fd-predict#secondary-datasets-configurations-modal) in **Deployments > Settings** prompts the creation of a change request.

If you created the change request, a pending changes notification is displayed at the top of **Deployments > Overview**, and the request's status is listed under **History**.

If you are a reviewer of the change request, the pending changes notification at the top of **Deployments > Overview** includes the option to [**Add Review**](dep-admin).

|
feat-disc-workflow
|
---
title: Snowflake key pair authentication
description: Connect to Snowflake using key pair authentication.
section_name: Data
maturity: public-preview
---
# Snowflake key pair authentication {: #snowflake-key-pair-authentication }
!!! info "Availability information"
Key pair authentication for Snowflake is off by default. Contact your DataRobot representative or administrator for information on enabling the feature.
<b>Feature flag:</b> Enable Snowflake Key-pair Authentication
Now available for public preview, you can create a Snowflake data connection using the key pair authentication method—a Snowflake username and private key—as an alternative to [basic authentication](dc-snowflake).
## Prerequisites {: #prerequisites }
The following is required before connecting to Snowflake in DataRobot:
* A Snowflake account
* A private key file (for instructions on generating a private key, see the [Snowflake documentation](https://docs.snowflake.com/en/user-guide/key-pair-auth){ target=_blank })
## Set up the connection in DataRobot {: #set-up-a-connection-in-datarobot }
The tabs below show how to configure a Snowflake data connection using key pair authentication:
=== "DataRobot Classic"
When creating a Snowflake [data connection](data-conn#create-a-new-connection) in DataRobot Classic, select **Key-pair** as your credential type. Then, fill in the [required parameters](#required-parameters).

=== "Workbench"
When creating a Snowflake [data connection](wb-connect) in Workbench, select **Key-pair** as your credential type. Then, fill in the [required parameters](#required-parameters).

## Required parameters {: #required-parameters }
In addition to the required fields listed below, you can learn about other available configuration options in the [Snowflake documentation](https://docs.snowflake.com/en/user-guide/jdbc-parameters.html){ target=_blank }.
Required field | Description
--------------- | ----------
Username | A unique identifier of a user inside a Snowflake account (i.e., the name you use to log into Snowflake).
Private key | The string copied from your private key file.
Display name | A unique identifier for your Snowflake credentials within DataRobot.
For more information on Snowflake key pair authentication, including generating private keys and configuring key pair authentication in Snowflake, see the [Snowflake documentation](https://docs.snowflake.com/en/user-guide/key-pair-auth){ target=_blank }.
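The **Private key** field expects the string contents of your private key file. This hedged helper (an invented convenience, not part of any DataRobot or Snowflake SDK) simply loads the PEM text and guards against pasting the wrong file; the temporary file and placeholder key body exist only to make the example self-contained:

```python
import tempfile
from pathlib import Path

def private_key_string(path) -> str:
    """Load the PEM text to paste into the Private key field (illustrative helper)."""
    pem = Path(path).read_text()
    if "-----BEGIN" not in pem:
        raise ValueError("expected a PEM-encoded private key")
    return pem.strip()

# Placeholder key body for illustration only; a real file holds the full encoded key.
with tempfile.NamedTemporaryFile("w", suffix=".p8", delete=False) as f:
    f.write("-----BEGIN PRIVATE KEY-----\nMIIE...\n-----END PRIVATE KEY-----\n")
pem = private_key_string(f.name)
```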
{% include 'includes/data-conn-trouble.md' %}
DataRobot returns the following error message when attempting to authenticate Snowflake credentials: <br><br>_Incorrect username or password was specified._ | Confirm that your parameters are valid; if they are, use the recommended driver version. | Check the username, private key, and passphrase; if all parameters are valid, use the recommended driver version from the dropdown under **Show additional parameters > Driver**.<br><br>If you are using driver version 3.13.9:<br><ol><li>Click **Show additional parameters**.</li><li>Click **Add parameter** and select **account**.</li><li>Enter your account name in the field.</li></ol><br>For more information, see the [Snowflake community article](https://community.snowflake.com/s/article/JDBC-Driver-Spark-Connector-Getting-SnowflakeSQLException-Incorrect-username-or-password-was-specified-when-setting-correct-credentials){ target=_blank }.
|
snow-keypair
|
---
title: Relationship Quality Assessment speed improvements
description: Receive faster results from the Relationship Quality Assessment in Feature Discovery projects.
section_name: Data
maturity: public-preview
---
# Relationship Quality Assessment speed improvements {: #relationship-quality-assessment-speed-improvements }
!!! info "Availability information"
Speed improvements to the Relationship Quality Assessment, available as a public preview feature, are _on_ by default. Contact your DataRobot representative or administrator for information on disabling the feature.
<b>Feature flag:</b> Enables Feature Discovery Relationship Quality Assessment Speedup
The [Relationship Quality Assessment (RQA)](fd-overview#relationship-quality-assessment) allows you to test relationship configurations in Feature Discovery projects by verifying join keys, dataset selection, and time-aware settings; the results help you spot and fix configuration issues before EDA2 begins. Typically, finding the best configuration for a given use case requires multiple relationship iterations; previously, however, RQA run times were too long to make this feasible.
To improve run times, DataRobot now subsamples approximately 10% of the primary dataset, speeding up the computation without impacting the enrichment rate estimation accuracy or the results of the assessment. After the assessment is done, the sampling percentage is included at the top of the report.
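The subsampling idea can be sketched with the standard library. This is an illustrative local sample without replacement, not DataRobot's sampling implementation; the row data, fraction default, and seed are invented for the example:

```python
import random

def rqa_sample(rows, fraction=0.10, seed=0):
    """Sample roughly 10% of primary-dataset rows without replacement (illustrative)."""
    rng = random.Random(seed)  # fixed seed for a reproducible example
    k = max(1, round(len(rows) * fraction))
    return rng.sample(rows, k)

rows = list(range(10_000))
sample = rqa_sample(rows)
```

Here `sample` holds 1,000 distinct rows, about 10% of the 10,000-row input.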

|
rqa-speed
|
---
title: Fast EDA for large datasets
description: Overview of Fast Exploratory Data Analysis (EDA) for large datasets, and how to apply early target selection.
---
# Early target selection {: #early-target-selection }
The data ingestion process for large datasets can, optionally, differ from that used for smaller sets. (You can also use the <em>same</em> process by letting the project complete [EDA1](eda-explained#eda1).) When DataRobot optimizes for larger sets, it launches "[Fast (or preliminary) EDA](#fast-eda-application)," a subset of the full EDA1 process, which proceeds as follows:
1. Dataset import begins.
2. DataRobot detects the need for, and launches, Fast EDA.
3. When Fast EDA completes, there is a window of time in which you can choose to participate in [early target selection](#fast-eda-and-early-target-selection). This window is only valid between the time when Fast EDA completes and when EDA1 completes. As a result, for smaller datasets (less than 200MB) the window may be too small to take advantage of it.
4. If early target selection was enabled, DataRobot completes EDA1, partitions the data, and launches EDA2 using project criteria for early target selection. If it was not enabled, the standard ingest process resumes (select a target and options and press **Start**).
!!! tip
When working with large datasets, you cannot create GLM, ENET, or PLS [blender models](creating-addl-models#create-a-blended-model). Median and average blenders <i>are</i> available. Also, Fast EDA is disabled in some cases, such as when the dataset has too many columns or uses too much RAM during ingest.
## Fast EDA application {: #fast-eda-application }
A dataset qualifies for Fast EDA if it is larger than 5MB, has fewer than 10,000 columns, and, 10 seconds after data loading begins, the ingestion process is less than 75% complete. Note that the ingestion process is internal to DataRobot and may appear differently in the status bar. Fast EDA allows you to see preliminary EDA1 results and explore your data shortly after upload begins, while ingestion continues. Once Fast EDA completes, DataRobot continues calculating until full EDA1 completes.
Fast EDA is particularly helpful with large datasets because it allows you to:
* explore your data while ingestion continues. For example, a 10GB file may take 15 minutes to ingest, but with Fast EDA you can see data information much sooner.
* use [early target selection](#fast-eda-and-early-target-selection), described below, to set the target variable and advanced option settings earlier on in the upload process.
!!! note
Fast EDA is calculated on the first <i>X</i> rows of the dataset, not a random sample.
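Taken together, the qualification rules above can be sketched as a simple check. The thresholds come from this page; combining them as a single boolean test (and the exact percent/size units) is an assumption for illustration:

```python
def qualifies_for_fast_eda(size_mb: float, num_columns: int,
                           seconds_elapsed: float, percent_ingested: float) -> bool:
    """Heuristic from the docs: dataset >5MB, <10,000 columns,
    and ingestion <75% complete 10 seconds after loading begins."""
    return (size_mb > 5
            and num_columns < 10_000
            and seconds_elapsed >= 10
            and percent_ingested < 75)

print(qualifies_for_fast_eda(10_240, 500, 12, 40))  # ~10GB file, slow ingest: True
print(qualifies_for_fast_eda(2, 50, 12, 40))        # small file: False
```

In practice you never run this check yourself; DataRobot detects the conditions during ingest and launches Fast EDA automatically.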
## Fast EDA and early target selection {: #fast-eda-and-early-target-selection }
Fast EDA paves the way for early target selection. Once you have chosen the target, DataRobot populates the project options (partitioning, downsampling, number of workers, etc.) with default values based on your Fast EDA data. You can change and save the options, then set the project to auto-start at the completion of full EDA1. This way, you do not have to check repeatedly for ingestion completion, which can be time consuming with very large datasets. If there is any kind of error in the settings or ingest, DataRobot notifies you by email with an informative error message (if configured to do so). Once you set the target and any advanced options, DataRobot saves your project selections, even if you close your browser.
You can set the following at the completion of Fast EDA:
* [Target](model-data#set-the-target-feature)
* [Metric](additional#change-the-optimization-metric)
* [Initial number of workers](worker-queue)
* [Modeling mode](model-data#set-the-modeling-mode)
* [Advanced options](adv-opt/index)
* [Smart Downsampling](smart-ds)
* [Partitioning Method](partitioning)
* [Additional Parameters](additional)
Until full EDA1 completes, you cannot:
* View [feature importance](model-ref#data-summary-information)
* Create [feature lists](feature-lists#create-feature-lists)
* Perform [feature transformations](model-data#create-new-features)
## Apply early target selection {: #apply-early-target-selection }
To use early target selection, keep an eye on the **Start** screen and the processing status reported in the right sidebar. Fast EDA is part of the ingestion process, but if your dataset is too small for early target selection to make sense, you won't be able to modify these selections and EDA1 will go on to complete. If early target selection is applicable to your project, you will see a change in the start screen that indicates early target selection is an option:

To use early target selection:
1. [Import your dataset](import-to-dr) to DataRobot.
2. When Fast EDA completes (part-way through the full EDA1 process), you can enter a target variable. Scroll down to explore your data; a yellow information message indicates, approximately, the amount of data used for the preliminary results:

!!! note
The informational message disappears after completion of EDA1.
3. Enter a target variable. The **Data** page displays the auto-start toggle:

4. Click the [**Show Advanced options**](adv-opt/index) link to set additional parameters.
5. If you choose to auto-start the model build process, toggle the auto-start and [select a modeling mode](model-data#set-the-modeling-mode):

When full EDA1 completes, DataRobot launches the model building process using the criteria you set.
## More info... {: #more-info }
When working with large datasets, there are some differences in behaviors that you should note.
### Train into validation and holdout {: #train-into-validation-and-holdout }
If, when training models, you trained into the validation and/or holdout sets, those scores display `N/A` on the Leaderboard:

With large datasets, DataRobot disables internal cross-validation when you train models into validation/holdout. For anything over 800MB, DataRobot uses [TVH](data-partitioning#training-validation-and-holdout-tvh) as the default validation method. Because the validation/holdout rows are used to train the model, the scores are unlikely to be an accurate representation of model performance on unseen data (and thus display `N/A`).
Some additional considerations for models displaying `N/A`:
* They are not represented in the **Learning Curves** or **Speed vs Accuracy** tabs.
* The **Lift Chart** and **Feature Effects** tabs are unavailable.
* You cannot compute predictions using the **Make Predictions** tab.
* You cannot run DataRobot Prime.
### Change model sample size {: #change-model-sample-size }
You can change model sample size either from the [Leaderboard](creating-addl-models#use-add-new-model) or the [**Repository**](repository#create-a-new-model). You cannot, however, change the sample size of blender models directly. To change a blender's sample size:
1. Retrain each constituent model at the new sample size.
2. Blend the constituent models to [make a new blender](creating-addl-models#create-a-blended-model).
### Understand the messages {: #understand-the-messages }
DataRobot provides some notifications to help you interpret the preliminary data displayed and used for early target selection. For example:
The [Smart Downsampling](smart-ds) setting is available (for binary classification or zero-boosted regression problems) after you set your target. The notification about the feature indicates the subset of data, in number of rows, that DataRobot will use in modeling. You can change the value in **Advanced options**:

The dataset notification tells you the number of rows in your dataset that are missing the target variable (and are therefore excluded from model building/predictions):

Additionally, because DataRobot bases preliminary partitioning calculations on a subset of your dataset, auto-start returns a partitioning error if the cardinality of your partitioning column falls outside the expected range.
|
fast-eda
|
---
title: Work with large datasets
description: This section provides information on working with large datasets. Consider Fast EDA for large sets up to 10GB; use scalable ingest for sets up to 100GB.
---
# Large datasets {: #large-datasets }
The following sections provide additional information on working with large datasets. Consider Fast EDA for large sets up to 10GB; use scalable ingest for sets up to 100GB.
Topic | Describes...
----- | ------
[Fast EDA for large datasets](fast-eda) | Details of the Fast EDA process.
|
index
|
# Microsoft SQL Server {: #microsoft-sql-server }
## Supported authentication {: #supported-authentication }
- Username/password
## Prerequisites {: #prerequisites }
The following is required before connecting to Microsoft SQL Server in DataRobot:
- Microsoft SQL account
## Required parameters {: #required-parameters }
The table below lists the minimum required fields to establish a connection with Microsoft SQL Server:
Required field | Description | Documentation
--------------- | ---------- | -----------
`address` | The service endpoint used to connect to SQL Server.<br><br>**Example:**<br> `jdbc-cert-ms-sql-server-2016.cqz9ezetwbf4.us-east-1.rds.amazonaws.com:1489` | [Microsoft documentation](https://docs.microsoft.com/en-us/sql/connect/jdbc/using-the-jdbc-driver){ target=_blank }
{% include 'includes/data-conn-trouble.md' %}
|
dc-ms-sql-srvr
|
# Snowflake {: #snowflake }
## Supported authentication {: #supported-authentication }
- [Username/password](#username-password)
- [Snowflake OAuth](#snowflake-oauth)
- [External OAuth](#snowflake-external-oauth) with Okta or AzureAD
## Username/password {: #username-password}
### Prerequisites {: #prerequisites }
The following is required before connecting to Snowflake in DataRobot:
* A Snowflake account
!!! warning "OAuth with security integrations"
If you create a security integration when configuring OAuth, you must specify the `OAUTH_REDIRECT_URI` as `https://<datarobot_app_server>/account/snowflake/snowflake_authz_return`.
### Required parameters {: #required-parameters }
In addition to the required fields listed below, you can learn about other available configuration options in the [Snowflake documentation](https://docs.snowflake.com/en/user-guide/jdbc-parameters.html).
Required field | Description | Documentation
--------------- | ---------- | -----------
`address` | A connection object that stores a secure connection URL to connect to Snowflake.<br><br>**Example:** `{account_name}.snowflakecomputing.com` | [Snowflake documentation](https://docs.snowflake.com/en/user-guide/admin-account-identifier.html)
`warehouse` | A unique identifier for your virtual warehouse. | [Snowflake documentation](https://docs.snowflake.com/en/user-guide/snowflake-manager.html#warehouses-page)
`db` | A unique identifier for your database. | [Snowflake documentation](https://docs.snowflake.com/en/user-guide/snowflake-manager.html#databases-page)
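For illustration, the required fields combine into a standard Snowflake JDBC URL, as the sketch below shows. The account, warehouse, and database names are hypothetical, and this helper is not part of DataRobot—you enter these fields individually in the connection dialog:

```python
def snowflake_jdbc_url(account_name: str, warehouse: str, db: str) -> str:
    """Assemble a Snowflake JDBC connection string from the required fields."""
    address = f"{account_name}.snowflakecomputing.com"
    return f"jdbc:snowflake://{address}?warehouse={warehouse}&db={db}"

print(snowflake_jdbc_url("acme123", "DEMO_WH", "SANDBOX"))
# jdbc:snowflake://acme123.snowflakecomputing.com?warehouse=DEMO_WH&db=SANDBOX
```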
## Snowflake OAuth {: #snowflake-oauth }
### Prerequisites {: #prerequisites }
The following is required before connecting to Snowflake in DataRobot:
* A Snowflake account
* [Snowflake OAuth](https://docs.snowflake.com/en/user-guide/oauth-snowflake.html) configured
### Set up the connection in DataRobot {: #set-up-a-connection-in-datarobot }
When connecting with OAuth parameters, you must create a new data connection.
To set up a data connection using OAuth:
1. Follow the instructions for [creating a data connection](#create-a-new-connection) and [testing the connection](#test-the-connection).
2. After clicking **Test Connection**, a window appears and you must enter your Snowflake client ID and client secret, along with other required details retrieved from Azure app registration configurations (see the example below).

3. Click **Save and sign in**.

4. Enter your Snowflake username and password. Click **Sign in**.
![Sign in](images/snowflake-oauth-4.png)
5. To provide consent to the database client, click **Allow**.
If the connection is successful, the following message appears in DataRobot:

### Required parameters {: #required-parameters }
In addition to the required fields listed below, you can learn about other available configuration options in the [Snowflake documentation](https://docs.snowflake.com/en/user-guide/jdbc-parameters.html).
Required field | Description | Documentation
--------------- | ---------- | -----------
_Required fields for data connection_ | :~~: | :~~:
`address` | A connection object that stores a secure connection URL to connect to Snowflake.<br><br>**Example:**<br> `{account_name}.snowflakecomputing.com` | [Snowflake documentation](https://docs.snowflake.com/en/user-guide/admin-account-identifier.html){ target=_blank }
`warehouse` | A unique identifier for your virtual warehouse. | [Snowflake documentation](https://docs.snowflake.com/en/user-guide/snowflake-manager.html#warehouses-page){ target=_blank }
`db` | A unique identifier for your database. | [Snowflake documentation](https://docs.snowflake.com/en/user-guide/snowflake-manager.html#databases-page){ target=_blank }
_Required fields for credentials_ | :~~: | :~~:
Client ID | The public identifier for your application. | [Snowflake documentation](https://docs.snowflake.com/en/user-guide/oauth-custom.html){ target=_blank }
Client secret | A confidential identifier used to authenticate your application. | [Snowflake documentation](https://docs.snowflake.com/en/user-guide/oauth-custom.html){ target=_blank }
Snowflake account name | A unique identifier for your Snowflake account within an organization. | [Snowflake documentation](https://docs.snowflake.com/en/user-guide/admin-account-identifier.html){ target=_blank }
## Snowflake External OAuth {: #snowflake-external-oauth }
### Prerequisites {: #prerequisites }
The following is required before connecting to Snowflake in DataRobot:
=== "Okta"
* A Snowflake account.
* [External OAuth](https://docs.snowflake.com/en/user-guide/oauth-snowflake.html){ target=_blank } configured in Snowflake for Okta.
!!! warning "External OAuth with security integrations"
If using Okta as the external identity provider (IdP), you must specify `http://localhost/account/snowflake/snowflake_authz_return` as a **Sign-in redirect URI** when [creating a new App integration in Okta](#external-idp-setup).
=== "Azure AD"
* A Snowflake account.
* [External OAuth](https://docs.snowflake.com/en/user-guide/oauth-azure){ target=_blank } configured in Snowflake for Microsoft Azure AD.
!!! warning "External OAuth with security integrations"
If using Azure AD as the external identity provider (IdP), you must specify `https://<datarobot_app_server>/account/snowflake/snowflake_authz_return` as a **Redirect URI** when [registering both applications in Azure AD](#external-idp-setup).
### External IdP setup {: #external-idp-setup }
!!! note
This section uses example configurations for setting up an external IdP. For information on setting up an external IdP based on your specific environment and requirements, see the documentation for Okta or Azure AD.
In the appropriate external IdP, create the Snowflake application(s):
=== "Okta"
Create a new App Integration in Okta:
1. Go to **Applications > Applications**.
2. Click **Create App Integration**.
3. For the **Sign-in method**, select **OIDC - OpenID Connect**.
4. For the **Application type**, select **Web Application**.
5. Click **Next**.
6. Make sure the following options are selected:
* Client Credentials
* Authorization Code
* Refresh Token
* Require consent
7. Under **LOGIN**, add `http://localhost/account/snowflake/snowflake_authz_return` to the **Sign-in redirect URIs**.
8. This results in your `Client ID` and `Client secret`.
Now, create a new Authorization Server:
1. Go to **Security > API > Add Authorization Server**.
* Set **Audience** to `https://<partner_name>.snowflakecomputing.com/`. `<partner_name>` is the `datarobot_partner` for the current DataRobot Snowflake instance.
2. Go to **Scopes > Add Scope**.
* Set **Name** to `session:role:public` (refers to the Snowflake role).
* For **Check-in**, add `Require user consent for this scope` and `Block services from requesting this scope`.
* (Optional) Set the `offline_access` scope to require consent.
3. Go to **Access Policies > Add Rule** and add the following rules:
* Add Check-in `Client Credentials`.
* Add Check-in `Authorization Code`.
* Add the client integration (created above) to the `Assigned to clients` field.
4. Go to **Token** and click **Create token**.
5. This results in the following:
* `Issuer`, for example, `https://dev-11863425.okta.com/oauth2/aus15ca55wkdOxplJ5d7`.
* Auth `Token` for programmatic access to the Okta API.
* Auth server metadata JSON (found in **Settings > Metadata URI**).
**Okta API calls**
``` title="Get current user"
curl --location --request GET 'https://<OKTA_ACCOUNT>.okta.com/api/v1/users/me' \
--header 'Accept: application/json' \
--header 'Content-Type: application/json' \
--header 'Authorization: SSWS <TOKEN>'
```
``` title="Get the user's grants"
curl --location --request GET 'https://<OKTA_ACCOUNT>.okta.com/api/v1/users/<USER_ID>/clients/<CLIENT_ID>/grants' \
--header 'Accept: application/json' \
--header 'Content-Type: application/json' \
--header 'Authorization: SSWS <TOKEN>'
```
``` title="Revoke grant/consent"
curl --location --request DELETE 'https://<OKTA_ACCOUNT>.okta.com/api/v1/users/<USER_ID>/grants/<GRANT_ID>' \
--header 'Accept: application/json' \
--header 'Content-Type: application/json' \
--header 'Authorization: SSWS <TOKEN>'
```
=== "Azure AD"
Register an application for Snowflake Resource in Azure AD:
1. Go to **MS Azure > Azure AD > App registrations**.
2. Click **New registration**.
* Under **Name**, enter **Snowflake OAuth Resource**.
* Under **Supported account types**, select **Accounts in this organizational directory only**.
* Under **Redirect URI**, select **Web** and enter `https://app.datarobot.com/account/snowflake/snowflake_authz_return`.
* Click **Register**.
3. In the **Overview** section, copy the value of the **Application (client) ID** field; this is your `<OAUTH_CLIENT_ID>` value.
4. Click **Certificates & secrets**, and then **New client secret**.
5. Add a description of the secret.
6. Click **Add** and copy the secret; this is your `<OAUTH_CLIENT_SECRET>` value. Note that the secret is not available after this step.
7. Expose the API:
* Click the **set** link next to **Application ID URI** and confirm that it is a unique ID (no change is needed); this is your `<SNOWFLAKE_APPLICATION_ID_URI>` value.
* Click **Add a scope** to add a scope representing the Snowflake role.
* Enter `session:scope:public` as the scope name.
Register an application for Snowflake Client App in Azure AD:
1. Go to **MS Azure > Azure AD > App registrations**.
2. Click **New registration**.
* Under **Name**, enter **Snowflake OAuth Client**.
* Under **Supported account types**, select **Accounts in this organizational directory only**.
* Under **Redirect URI**, select **Web** and enter `https://app.datarobot.com/account/snowflake/snowflake_authz_return`.
* Click **Register**.
3. Go to `API Permission > Add Permission > My APIs > Snowflake Resource` and choose the scope created above for Snowflake Resource (`session:scope:public`).
4. For programmatic clients that request an access token on behalf of a user, configure delegated permissions as follows:
* Click **API Permissions**.
* Click **Add Permission**.
* Click **My APIs**.
* Click the **Snowflake OAuth Resource** registered above.
* Click the **Delegated Permissions** box.
* Check the permission related to the scope created in step 3 (`session:scope:public`).
* Click **Add Permissions**.
5. Collect additional information for the Snowflake integration:
* Click **App Registrations**.
* Click the **Snowflake OAuth Resource**.
* Click **Endpoints** in the Overview interface.
* Copy the **OAuth 2.0 token endpoint (v2)**; this is your `<AZURE_AD_OAUTH_TOKEN_ENDPOINT>` value.
* Copy the **OpenID Connect metadata document** URL and open it in a new window. Locate the `jwks_uri` parameter and copy its value; this is your `<AZURE_AD_JWS_KEY_ENDPOINT>` value (e.g., `https://login.microsoftonline.com/6064c47c-80e4-4a2b-82ee-1fc5643b37a2/discovery/v2.0/keys`).
* Copy the **Federation metadata document** URL and open it in a new window. Locate the `entityID` attribute; its value is your `<AZURE_AD_ISSUER>` value (e.g., `https://sts.windows.net/6064c47c-80e4-4a2b-82ee-1fc5643b37a2/`).
6. This results in the following:
* `Client ID` and `Client secret`, copied from **Snowflake OAuth Resource**.
* The Issuer URL, the `<AZURE_AD_ISSUER>` value copied in step 5.
* The `<AZURE_AD_JWS_KEY_ENDPOINT>` value, copied in step 5.
* The `<SNOWFLAKE_APPLICATION_ID_URI>` value, copied in the **Expose the API** step of the Snowflake Resource registration.
??? tip "Related reading"
- [How to: Create External OAuth Token Using Azure AD On Behalf Of The User](https://community.snowflake.com/s/article/External-oAuth-Token-Generation-using-Azure-AD){ target=_blank }
- [Configure Microsoft Azure AD for External OAuth](https://docs.snowflake.com/en/user-guide/oauth-azure.html){ target=_blank }
### Snowflake setup {: #snowflake-setup }
!!! note
This section uses example configurations for setting up an external IdP in Snowflake. For information on setting up an external IdP in Snowflake based on your specific environment and requirements, see the Snowflake documentation.
In Snowflake, create an integration for the appropriate external IdP:
=== "Okta"
```
create security integration external_oauth_okta_2
type = external_oauth
enabled = true
external_oauth_type = okta
external_oauth_issuer = '<OKTA_ISSUER>'
external_oauth_jws_keys_url = '<JWKS_URI>'
external_oauth_audience_list = ('<AUDIENCE>')
external_oauth_token_user_mapping_claim = 'sub'
external_oauth_snowflake_user_mapping_attribute = 'login_name';
CREATE OR REPLACE USER <user_name>
LOGIN_NAME = '<okta_user_name>';
alter user <user_name> set DEFAULT_ROLE = 'PUBLIC';
```
<br>
**Reference values:**
* `OKTA_ISSUER`: `https://dev-11863425.okta.com/oauth2/aus15ca55wkdOxplJ5d7`
* `AUDIENCE`: `https://hl91180.us-east-2.aws.snowflakecomputing.com/`
* `JWKS_URI`: `https://dev-11863425.okta.com/oauth2/aus15ca55wkdOxplJ5d7/v1/keys` (retrieved from Okta Auth server Metadata JSON)
* `okta_user_name` (retrieved from **Okta > Directory > People**, select a user, and then go to **Profile > Username/login** )
=== "Azure AD"
!!! note
You must have the `accountadmin` role, or a role with the global `CREATE INTEGRATION` privilege to create the integration below.
```
create security integration external_oauth_azure_1
type = external_oauth
enabled = true
external_oauth_type = azure
external_oauth_issuer = 'https://sts.windows.net/6064c47c-80e4-4a2b-82ee-1fc5643b37a2/'
external_oauth_jws_keys_url = 'https://login.microsoftonline.com/6064c47c-80e4-4a2b-82ee-1fc5643b37a2/discovery/v2.0/keys'
external_oauth_audience_list = ('api://8aa2572f-c9e6-4e91-9eb1-dcd84c856dd2')
external_oauth_token_user_mapping_claim = 'upn'
external_oauth_any_role_mode = 'ENABLE'
external_oauth_snowflake_user_mapping_attribute = 'login_name';
```
Grant access on the integration to the public role:
`grant USE_ANY_ROLE on integration external_oauth_azure_1 to PUBLIC;`
Ensure that the `LOGIN_NAME` of the user is the same as the Azure login. Verify using the following query in Snowflake:
`DESC USER <SNOWFLAKE_LOGIN_NAME>`
If the login names are different, Snowflake cannot validate the access token generated with Azure AD. In that case, use the command below to match Snowflake with Azure:
`ALTER USER <SNOWFLAKE_LOGIN_NAME> SET LOGIN_NAME='<EMAIL_USED_FOR_AZURE_LOGIN>'`
### Set up the connection in DataRobot {: #set-up-the-connection-in-datarobot }
When connecting with external OAuth parameters, you must create a new data connection.
To set up a Snowflake data connection using external OAuth:
1. Follow the instructions for [creating a data connection](#create-a-new-connection) and [testing the connection](#test-the-connection).
2. After clicking **Test Connection**, select your OAuth provider from the dropdown—either Okta or Azure AD—and fill in the [additional required fields](#required-parameters). Then, click **Save and sign in**.

3. Enter your Okta or Azure AD username and password. Click **Sign in**.
4. To provide consent to the database client, click **Allow**.
If the connection is successful, the following message appears in DataRobot:

### Required parameters {: #required-parameters }
In addition to the required fields listed below, you can learn about other available configuration options in the [Snowflake documentation](https://docs.snowflake.com/en/user-guide/jdbc-parameters.html){ target=_blank }.
Required field | Description | Documentation
--------------- | ---------- | -----------
_Required fields for data connection_ | :~~: | :~~:
`address` | A connection object that stores a secure connection URL to connect to Snowflake.<br><br>**Example:** `{account_name}.snowflakecomputing.com` | [Snowflake documentation](https://docs.snowflake.com/en/user-guide/admin-account-identifier.html){ target=_blank }
`warehouse` | A unique identifier for your virtual warehouse. | [Snowflake documentation](https://docs.snowflake.com/en/user-guide/snowflake-manager.html#warehouses-page){ target=_blank }
`db` | A unique identifier for your database. | [Snowflake documentation](https://docs.snowflake.com/en/user-guide/snowflake-manager.html#databases-page){ target=_blank }
_Required fields for credentials_ | :~~: | :~~:
Client ID | The public identifier for your application.<br><br>In the Okta Admin console, go to **Applications** > **Applications** > **Your OpenID Connect web app** > **Sign On** tab > **Sign On Methods**.<br><br>In Azure AD, this is also known as the `applicationID`. | [Okta](https://developer.okta.com/docs/guides/find-your-app-credentials/main/) or [Azure AD](https://docs.microsoft.com/en-us/azure/active-directory/develop/howto-create-service-principal-portal){ target=_blank } documentation
Client secret | A confidential identifier used to authenticate your application.<br><br>In the Okta Admin console, go to **Applications** > **Applications** > **Your OpenID Connect web app** > **Sign On** tab > **Sign On Methods**.<br><br>In Azure AD, this is also known as the `application secret`. | [Okta](https://developer.okta.com/docs/guides/find-your-app-credentials/main/){ target=_blank } or [Azure AD](https://docs.microsoft.com/en-us/azure/active-directory/develop/howto-create-service-principal-portal){ target=_blank } documentation
Snowflake account name | A unique identifier for your Snowflake account within an organization. | [Snowflake documentation](https://docs.snowflake.com/en/user-guide/admin-account-identifier.html){ target=_blank }
Issuer URL | A URL that uniquely identifies your SAML identity provider. "Issuer" refers to the Entity ID of your identity provider.<br><br>**Examples:**<br> <ul><li>Okta: `https://<your_company>.okta.com/oauth2/<auth_server_id>`</li><li>Azure AD:<br>`https://login.microsoftonline.com/<snowflake_resource_app_id>`</li></ul> | [Okta](https://developer.okta.com/docs/reference/api/oidc){ target=_blank } or [Azure AD](https://docs.microsoft.com/en-us/azure/app-service/configure-authentication-provider-aad){ target=_blank } documentation
Scopes | Contains the name of your Snowflake role. <br><br> **Examples:** <br>Parameters for a Snowflake Analyst. <br> <ul><li>Okta: `session:role:analyst`</li><li>Azure AD: `<client_app_id>/session:scope:analyst`</li></ul> | [Snowflake documentation](https://docs.snowflake.com/en/user-guide/oauth-ext-overview.html#scopes){ target=_blank }
Reach out to your administrator for the appropriate values for these fields.
## Caveats {: #caveats }
By default, Snowflake preserves the case of alphabetic characters when storing and resolving double-quoted identifiers; however, if you override this default in Snowflake, double-quoted identifiers are stored and resolved as uppercase letters. Because DataRobot is a case-sensitive platform, it's important to preserve the original case of the letters.
To avoid potential issues related to case-sensitivity, go to your Snowflake data connection in DataRobot, add the `QUOTED_IDENTIFIERS_IGNORE_CASE` parameter, and set the value to `FALSE`. See the [Snowflake documentation](https://docs.snowflake.com/en/sql-reference/parameters.html#quoted-identifiers-ignore-case){ target=_blank } for more details.
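As a sketch, if you manage connection parameters programmatically, adding the case-preservation parameter might look like the following. The helper and the parameter dictionary are illustrative assumptions; in DataRobot you add the parameter through the data connection UI:

```python
def with_case_preservation(params: dict) -> dict:
    """Return a copy of the connection parameters with
    QUOTED_IDENTIFIERS_IGNORE_CASE set to FALSE, so quoted
    identifiers keep their original case."""
    merged = dict(params)  # copy, so the caller's dict is untouched
    merged["QUOTED_IDENTIFIERS_IGNORE_CASE"] = "FALSE"
    return merged

base = {"warehouse": "DEMO_WH", "db": "SANDBOX"}
print(with_case_preservation(base)["QUOTED_IDENTIFIERS_IGNORE_CASE"])  # FALSE
```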
{% include 'includes/data-conn-trouble.md' %}
DataRobot returns the following message when testing external OAuth Snowflake connection with Azure AD: <br><br>_AADSTS700016: Application with identifier 'aa2572f-c9e6-4e91-9eb1-dcd84c856dd2' was not found in the directory 'Azure directory "datarobot" ("azuresupportdatarobot")'. This can happen if the application has not been installed by the administrator of the tenant or consented to by any user in the tenant. You may have sent your authentication request to the wrong tenant._ | Make sure scopes were created, granted, and assigned to the resource in Azure. | Refer to the [Snowflake setup](#snowflake-setup) section for more details.
DataRobot returns the following message when testing external OAuth after adding the data connection:<br><br>_JDBC connect failed for `jdbc:snowflake://datarobot_partner.snowflakecomputing.com?CLIENT_TIMESTAMP_TYPE_MAPPING=TIMESTAMP_NTZ&db=SANDBOX&warehouse=DEMO_WH&application=DATAROBOT&CLIENT_METADATA_REQUEST_USE_CONNECTION_CTX=false.` Original error: The role requested in the connection or the default role if none was requested in the connection (`ACCOUNTADMIN`) is not listed in the Access Token or was filtered. Please specify another role, or contact your OAuth Authorization server administrator._ | Make sure the user establishing a connection with Azure has a default role assigned. | The default role must be anything other than `ACCOUNTADMIN`, `ORGADMIN`, or `SECURITYADMIN`. If the `session:scope` is created with `scope:role-any`, the user can log in with any role other than the admin roles stated.
DataRobot returns the following message when testing the connection: <br><br>_Invalid Request: The request tokens do not match the user context. Do not copy the user context values (cookies; form fields; headers) between different requests or user sessions; always maintain the `ALL` of the supplied values across a complete single user flow. Failure Reasons:[Token values do not match;]_ | Make sure the login name of the user matches the login name in both Snowflake and Azure to map user and create access tokens. | You can alter the login name in Snowflake to match the username of Azure if it does not already match.
|
dc-snowflake
|
# Microsoft Azure {: #microsoft-azure }
## Supported authentication {: #supported-authentication }
- Azure SQL Server/Synapse username/password
- Active Directory username/password
## Prerequisites {: #prerequisites }
The following is required before connecting to Azure in DataRobot:
- Azure SQL account
## Required parameters {: #required-parameters }
The table below lists the minimum required fields to establish a connection with Azure:
Required field | Description | Documentation
--------------- | ---------- | -----------
`address` | The connection URL that supplies connection information for Azure. | [Microsoft documentation](https://docs.microsoft.com/en-us/sql/connect/jdbc/building-the-connection-url?view=azuresqldb-current){ target=_blank }
Learn about additional [configuration options for Azure](https://docs.microsoft.com/en-us/sql/connect/jdbc/setting-the-connection-properties?view=azuresqldb-current){ target=_blank }.
{% include 'includes/data-conn-trouble.md' %}
DataRobot returns the following error message when attempting to authenticate Azure credentials: <br><br>_Failed to authenticate. Check that this client has been granted access to your storage account and that your credentials are correct. Note that username/password authentication only works with organizational accounts. It will not work with personal accounts._ | Check the user account type and make sure the user or group to which the user belongs is granted access via the asset's Access Control List (ACL). | The user account must be in Microsoft Azure Active Directory (AD)—Service Principal is not supported.<br><br> If the user has an Azure AD account, [grant them access to the asset via its ACL](https://learn.microsoft.com/en-us/azure/storage/blobs/data-lake-storage-access-control){ target=_blank }.
|
dc-azure
|
# Treasure Data {: #treasure-data }
## Supported authentication {: #supported-authentication }
- Username/password
## Prerequisites {: #prerequisites }
The following is required before connecting to Treasure Data (TD-Hive) in DataRobot:
- Treasure Data account
## Required parameters {: #required-parameters }
The table below lists the minimum required fields to establish a connection with Treasure Data:
Required field | Description | Documentation
--------------- | ---------- | -----------
`address` | The connection URL that supplies connection information for Treasure Data. | [Treasure Data documentation](https://docs.treasuredata.com/display/public/PD/JDBC+Driver+for+Hive+Query+Engine)
`database` | A unique identifier for your database. | [Treasure Data documentation](https://docs.treasuredata.com/display/public/PD/Database+and+Table+Management)
Learn about additional [configuration options for Treasure Data](https://docs.treasuredata.com/display/public/PD/JDBC+Driver+for+Hive+Query+Engine).
{% include 'includes/data-conn-trouble.md' %}
|
dc-treasure
|
# PostgreSQL {: #postgresql }
## Supported authentication {: #supported-authentication }
- Username/password
## Prerequisites {: #prerequisites }
The following is required before connecting to PostgreSQL in DataRobot:
- PostgreSQL account
## Required parameters {: #required-parameters }
The table below lists the minimum required fields to establish a connection with PostgreSQL:
Required field | Description | Documentation
--------------- | ---------- | -----------
`address` | The server's address used to connect to PostgreSQL. | [PostgreSQL documentation](https://jdbc.postgresql.org/documentation/head/connect.html){ target=_blank }
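For reference, a PostgreSQL JDBC address follows the standard `jdbc:postgresql://host:port/database` form. The host and database names in this sketch are hypothetical, and the helper is for illustration only—you supply the address directly in the connection dialog:

```python
def postgres_jdbc_url(host: str, port: int, database: str) -> str:
    """Build a standard PostgreSQL JDBC connection URL."""
    return f"jdbc:postgresql://{host}:{port}/{database}"

print(postgres_jdbc_url("db.example.com", 5432, "analytics"))
# jdbc:postgresql://db.example.com:5432/analytics
```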
{% include 'includes/data-conn-trouble.md' %}
|
dc-postgresql
|