---
title: Visual AI reference
description: Read a brief overview and reference describing the technological components of Visual AI.
---
# Visual AI reference {: #visual-ai-reference }
The following sections provide a brief overview of the technological components of Visual AI.
A common approach to modeling image data is to build neural networks that take raw pixel values as input. However, a fully connected neural network applied to images quickly becomes enormous and difficult to work with. For example, a color (i.e., red, green, blue), 224x224 pixel image has 150,528 input features (224 x 224 x 3), so a network of this kind can have more than 150 million weights in the first layer alone. Additionally, because images contain so much "noise," it is very difficult to make sense of them by looking at individual pixels; pixels are most useful in the context of their neighbors. Since the position and rotation of the pixels representing an object can change from image to image, the network must be trained to, for example, detect a cat regardless of where it appears in the image. Visual AI provides automated and efficient techniques for solving these challenges, along with model interpretability, tuning, and predictions in a human-consumable and familiar workflow.
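The arithmetic behind these numbers is easy to verify. A minimal sketch, assuming an illustrative 1,000-unit first layer (not a DataRobot setting):

```python
# Back-of-the-envelope parameter count for a fully connected first layer
# on a 224x224 RGB image. The 1,000-unit layer width is an illustrative
# assumption, not a DataRobot default.
height, width, channels = 224, 224, 3
input_features = height * width * channels            # 150,528
hidden_units = 1000                                   # assumed layer width
first_layer_weights = input_features * hidden_units   # over 150 million

print(input_features)       # 150528
print(first_layer_weights)  # 150528000
```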
## Pretrained network architectures {: #pretrained-network-architectures }
To use images in a modeling algorithm, they must first be turned into numbers. In DataRobot blueprints, this is the responsibility of the blueprint tasks called "featurizers". The featurizer takes the binary content of an image file as input and produces a feature vector that represents key characteristics of that image at different levels of complexity. These feature vectors can further be combined with other features in the dataset (numeric, categorical, text, etc.) and used downstream as input to a modeler.
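The idea of combining a featurizer's output with other features can be sketched in a few lines. The `featurize` function below is a hypothetical stand-in for a pretrained network, included only to show the shape of the data flow:

```python
import numpy as np

# Hypothetical sketch: a featurizer reduces each image to a fixed-length
# feature vector, which is then concatenated with the row's other features
# before being passed to a downstream modeler.
def featurize(image_bytes: bytes, n_features: int = 512) -> np.ndarray:
    # Stand-in for a pretrained network: deterministic pseudo-random values.
    # Real featurizers produce learned features from the pixel content.
    rng = np.random.default_rng(len(image_bytes))
    return rng.random(n_features)

image_vec = featurize(b"...binary image content...")
tabular = np.array([3.5, 0.0, 1.0])               # numeric/categorical features
model_input = np.concatenate([image_vec, tabular])
print(model_input.shape)                          # (515,)
```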
Additionally, _fine-tuned_ featurizers train the neural network on the given dataset after initializing it with pretrained information, further customizing the output features. Fine-tuned featurizers are incorporated in a subset of blueprints and are only run by Autopilot in [Comprehensive mode](more-accuracy). You can run them from the Repository if the project was built using a different mode. Additionally, you can edit an existing blueprint using [Composable ML](cml-blueprint-edit) and replace a pretrained featurizer with a fine-tuned featurizer.
??? note "How do different types of featurizers handle domain adaptation?"
Separate sets of blueprints incorporate "fine-tuned" featurizers vs pretrained (non-fine-tuned) featurizers. Both are pretrained, but they handle transfer learning differently. The non-fine-tuned featurizer blueprints produce features of various complexity, but use the downstream ML algorithm to adapt to a different domain. Fine-tuned featurizer blueprints, on the other hand, adjust their own weights during training.
All DataRobot featurizers, fine-tuned classifiers/regressors, and fine-tuned featurizers are based on pretrained neural network architectures. _Architectures_ define the internal structure of the featurizer—the neural network—and they influence runtime and accuracy. DataRobot automates the selection of hyperparameters for these featurizers and fine-tuners to a certain extent, but it is also possible to further [customize](vai-tuning) them manually to optimize results.
There is, additionally, a baseline blueprint that answers the question "What results would I see if I didn't bother with a [neural network](#images-and-neural-networks)?" If selected, DataRobot builds a blueprint that contains a grayscale downscaled featurizer that is not a network. These models are faster but less accurate. They are useful for investigating target leakage (Is one class brighter than the others? Are there unique watermarks or visual patches for each class? Is accuracy too good to be true?).
Furthermore, DataRobot implements state-of-the-art neural network optimizations to run over popular architectures, making them significantly faster while preserving the same accuracy. DataRobot offers a _pruned_ version of several of the top featurizers which, if that architectural variant exists, is highly recommended (providing the same accuracy at up to three times the speed).
DataRobot Visual AI not only offers the best imaging architectures but also automatically selects the best neural network architecture and `featurizer_pooling` type for the dataset and problem type. Automatic selection—known as _Visual AI Heuristics_—is based on optimizing the balance between accuracy and speed. Additionally, when Autopilot concludes, the logic automatically retrains the best Leaderboard model using the _EfficientNet-B0-Pruned_ architecture for an accuracy boost.
DataRobot integrates state-of-the-art architectures, allowing you to select the best for your needs. The following lists the architectures DataRobot supports:
| Featurizer | Description |
|---------------|--------------|
| <a target="_blank" href="https://pjreddie.com/darknet/">Darknet</a> | This simple neural network consists of eight 3x3 convolutional blocks with batch normalization, Leaky ReLU activation, and pooling. The channel depth increases by a factor of two with each block. Including a final dense layer, the network has nine layers in total. |
| <a target="_blank" href="https://arxiv.org/abs/1905.11946">EfficientNet-b0, EfficientNet-b4</a> | The fastest network in the EfficientNet family, the b0 model notably outperforms ResNet-50 top-1 and top-5 accuracy on ImageNet while having ~5x fewer parameters. The main building block of the EfficientNet models is the mobile inverted residual bottleneck (MBConv) convolutional block, which constrains the number of parameters. The b4 neural network is likely to be the most accurate for a given dataset. The implementation of the b4 model scales up the width of the network (the number of channels in each convolution) by 1.4 and the depth of the network (the number of convolutional blocks) by 1.8, providing a more accurate but slower model than b0, with results comparable to ResNeXt-101 or PolyNet. EfficientNet-b4, while it takes longer to run, can deliver significant accuracy increases. |
| <a target="_blank" href="https://arxiv.org/pdf/1603.05027.pdf">Preresnet10</a> | Based on ResNet, except within each residual block the batch norm and ReLU activation happen *before* rather than after the convolutional layer. This implementation of the PreResNet architecture has four PreRes blocks with two convolutional blocks each, which yield 10 total layers when including an input convolutional layer and output dense layer. The model's computational complexity should scale linearly with the depth of the network, so this model should be about 5x faster than ResNet50. However, because the richness of the features generated can affect the fitting time of downstream modelers like XGB with Early Stopping, the time taken to train a model using a deeper featurizer like ResNet50 could be even more than 5x. |
| <a target="_blank" href="https://arxiv.org/abs/1512.03385">Resnet50</a> | This classic neural network is based on residual blocks containing skip-ahead layers, which in practice allow for very deep networks that still train effectively. In each residual block, the inputs to the block are run through a 3x3 convolution, batch norm, and ReLU activation—twice. That result is added to the inputs to the block, which effectively turns the result into a residual of the layer. This implementation of ResNet has an input convolutional layer, 48 residual blocks, and a final dense layer, which yield 50 total layers. |
| <a target="_blank" href="https://arxiv.org/abs/1602.07360">Squeezenet</a> | The fastest neural network in DataRobot, this network was designed to achieve the speed of AlexNet with 50x fewer parameters, allowing for faster training, prediction, and storage. It is based around the concept of fire modules, consisting of a combination of "squeeze" layers followed by "expand" layers, the purpose of which is to dramatically reduce the number of parameters used while preserving accuracy. This implementation of SqueezeNet v1.1 has an input convolutional layer followed by eight fire modules of three convolutions each, for a total of 25 layers. |
| <a target="_blank" href="https://arxiv.org/abs/1610.02357">Xception</a> | This neural network improves on the popular Inception V3 network; it has speed comparable to ResNet-50 but better accuracy on some datasets. It saves on parameters by learning spatial correlations separately from cross-channel correlations. The core building block is the depth-wise separable convolution (a depthwise convolution + pointwise convolution) with residual layers added (similar to PreResNet-10). This building block aims to "decouple" the learning happening across the spatial dimensions (height and width) from the learning happening across the channel dimensions (depth), so that they are handled in separate parameters whose interaction can be learned by other parameters downstream in the network. Xception has 11 convolutional layers in the "entry flow" where the width and height are reduced and the depth increases, then 24 convolutional layers where the size remains constant, for a total of 36 convolutional layers. |
| <a target="_blank" href="https://arxiv.org/abs/1905.02244">MobileNetV3-Small-Pruned</a> | MobileNet V3 is the latest in the MobileNet family of neural networks, which are specially designed for mobile phone CPUs and other low-resource devices. It comes in two variants: MobileNetV3-Large for high resource usage and MobileNetV3-Small for low resource usage. MobileNetV3-Small is 6.6% more accurate than the previous MobileNetV2 with the same or better latency. In addition to its lightweight blocks and operations, pruning is applied, resulting in faster feature extraction. This pruned version keeps the same architecture but with a significantly reduced number of layers (~50): each *Conv2D* or *DepthwiseConv2D* layer followed by *BatchNormalization* is merged into a single *Conv2D* layer. |
| <a target="_blank" href="https://pjreddie.com/darknet/">DarkNet-Pruned</a> | Based on the same architecture as DarkNet, the pruned version is optimized for inference speed: *Conv2D* layers followed by *BatchNormalization* layers are merged into a single *Conv2D* layer. |
| <a target="_blank" href="https://arxiv.org/abs/1905.11946">EfficientNet-b0-Pruned, EfficientNet-b4-Pruned</a> | Providing a modified version of EfficientNet-b0 and -b4, the pruned variant removes the *BatchNormalization* layers that follow each *Conv2D* layer and merges them into the preceding *Conv2D* layer. This results in a network with fewer layers but the same accuracy, providing faster inference on both CPU and GPU. |
| <a target="_blank" href="https://arxiv.org/abs/2104.00298">EfficientNetV2-S-Pruned</a> | EfficientNetV2-S-Pruned is the latest neural network in the EfficientNet family. It combines the insights from the EfficientNetV1 models (2019) and applies the new Fused-MBConv approach found by Google's Neural Architecture Search as follows:<ul><li>Replaces "DepthwiseConv2D 3x3 followed by Conv2D 1x1" with "Conv2D 3x3" (an operation called Fused-MBConv).</li><li>Improves training procedures. Models are now pretrained with more than 13M images from 21k+ classes ([ImageNet 21k](https://arxiv.org/abs/2104.10972){ target=_blank }).</li></ul>In addition, DataRobot applies a layer-reducing "pruning operation," removing the *BatchNormalization* layers after the *Conv2D* and *DepthwiseConv2D* layers. This results in a network with fewer layers, achieving the same accuracy but providing faster inference on both CPU and GPU. |
| <a target="_blank" href="https://arxiv.org/abs/1512.03385">ResNet50-Pruned</a> | The only difference between ResNet50 and ResNet50-Pruned is that the variant removes the *BatchNormalization* layers after the *Conv2D* layer and merges them into the preceding *Conv2D* layer. This results in a network with fewer layers but the same accuracy, providing faster inference for both CPU and GPU. |

## Images and neural networks {: #images-and-neural-networks }
Featurizers are deep convolutional neural networks made of sequential layers, each layer aggregating information from previous layers. The first layers capture low-level patterns made of a few pixels: points, edges, and corners. The next layers capture shapes and textures; the final layers capture objects. You can select the level of features you want to extract from the neural network model, [tuning](vai-tuning) and optimizing results (although the more layers enabled, the longer the run time).
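The idea of extracting features at several depths can be sketched with a toy network. Everything below (layer count, widths, which depths are selected) is illustrative, not DataRobot's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a featurizer: each "layer" transforms the previous
# activation. Layer sizes and count are illustrative only.
def forward(x, n_layers=4):
    activations = []
    for _ in range(n_layers):
        w = rng.standard_normal((x.size, 64))
        x = np.maximum(x @ w, 0)      # ReLU activation
        activations.append(x)
    return activations

pixels = rng.random(32)               # stand-in for flattened image input
acts = forward(pixels)

# "Feature granularity": pick which depths to extract, then concatenate.
selected = [acts[1], acts[3]]         # e.g., a mid-level and a high-level layer
features = np.concatenate(selected)
print(features.shape)                 # (128,)
```

Enabling more layers yields a richer (and larger) feature vector, which is why run time grows with the number of levels selected.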
### Keras models {: #keras-models }
The [Neural Network Visualizer](vai-insights#neural-network-visualizer), available from the Leaderboard, illustrates layer connectivity for each layer in a model's neural network. It applies to models where either a preprocessing step is a neural network (like in the case of Visual AI blueprints) or the algorithm making predictions is a Keras model (like with tabular Keras blueprints without images).
All Visual AI blueprints, except the Baseline Image Classifier/Regressor, use Keras for preprocessing images. Some Visual AI blueprints use Keras for preprocessing and another Keras model for making predictions—those blueprints have "Keras" in the name. There are also non-Visual AI blueprints that use Keras for making predictions; those blueprints also have "Keras" in the name.

## Convolutional Neural Networks (CNNs) {: #convolutional-neural-networks-cnns }
<a target="_blank" href="https://en.wikipedia.org/wiki/Convolutional_neural_network">CNNs</a> are a class of <a target="_blank" href="https://en.wikipedia.org/wiki/Deep_learning#Deep_neural_networks">deep learning networks</a> applied to image processing for the purpose of turning image input into machine learning output. (See also the <a target="_blank" href="https://www.kdnuggets.com/2019/07/convolutional-neural-networks-python-tutorial-tensorflow-keras.html">KDnuggets explanation</a> of CNNs.) With CNNs, instead of having all pixels connected to all other pixels, the network only connects pixels within local regions, and then regions to other regions. This local-connectivity pattern, implemented by convolutional layers (typically followed by an activation such as the rectified linear unit, or ReLU), significantly reduces the number of parameters.
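The parameter savings are easy to quantify. A minimal sketch comparing a dense first layer against a 3x3 convolution (layer widths here are illustrative assumptions):

```python
# Parameter-count comparison for the first layer on a 224x224 RGB image:
# a dense layer vs. a 3x3 convolution. Layer widths are illustrative.
input_features = 224 * 224 * 3

# Dense: every pixel connects to every unit (plus one bias per unit).
dense_units = 1000
dense_params = input_features * dense_units + dense_units

# Convolution: each filter sees only a 3x3 neighborhood of each channel,
# and the same weights are reused across the whole image.
kernel, in_channels, filters = 3, 3, 64
conv_params = kernel * kernel * in_channels * filters + filters

print(dense_params)   # 150529000
print(conv_params)    # 1792
```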
The drawbacks of CNNs are that they require millions of rows of labeled data to train accurate models. Additionally, for large images, feature extraction can be quite slow. As the amount of training data and the resolution of the data increases, the required computational resources to train can also increase dramatically.
To address these issues, Visual AI relies on [pretrained networks](#pretrained-network-architectures) to featurize images, speeding up processing because there is no need to train deep learning featurizers from scratch. Also, Visual AI requires much less training data: hundreds of images instead of thousands. By combining features from various layers, Visual AI is not limited to using the output of the pretrained featurizers only, which means the subsequent modeling algorithm (XGBoost, Linear model, etc.) can learn the specificity of the training images. This is DataRobot's application of transfer learning, allowing you to apply Visual AI to any kind of problem. The mathematics of transfer learning also makes it possible to combine image and non-image data in a single project.
## Visualizations {: #visualizations }
There are two model-specific [visualizations](vai-insights) available to help understand how Visual AI grouped images and which aspects of the image were deemed most important.
### Activation Maps {: #activation-maps }
[**Activation Maps**](vai-insights#activation-maps) illustrate which areas of an image the model is paying attention to. They are computed similarly to Feature Impact for numeric and categorical variables, relying on the permutation method and/or SHAP techniques to capture how a prediction changes when data is modified. The implementation itself leverages a modified version of <a target="_blank" href="https://arxiv.org/abs/1512.04150">Class Activation Maps</a>, highlighting the regions of interest.
These maps are important because they allow you to verify that the model is learning the right information for your use case, does not contain undesired bias, and is not overfitting on spurious details. Furthermore, convolutional layers naturally retain spatial information which is otherwise lost in fully connected layers. As a result, the last convolutional layers have the best compromise between high-level object recognition and detailed spatial information. These layers look for class-specific information in the image. Knowing the importance of each class' activation helps to better understand the deep model's focus.
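The permutation idea behind these maps (measure how a prediction changes when part of the input is modified) can be sketched with a toy occlusion pass. This is a simplified stand-in, not DataRobot's implementation, which is based on a modified Class Activation Maps approach:

```python
import numpy as np

# Occlusion-style saliency sketch: mask each image region and record how
# much the model score drops. The "model" here is a toy function that
# responds only to brightness in the top-left quadrant.
def model_score(img):
    return img[:4, :4].mean()

img = np.zeros((8, 8))
img[:4, :4] = 1.0                  # bright "object" in the top-left
base = model_score(img)

saliency = np.zeros((2, 2))
for i in range(2):
    for j in range(2):
        patched = img.copy()
        patched[i*4:(i+1)*4, j*4:(j+1)*4] = 0.0   # occlude one quadrant
        saliency[i, j] = base - model_score(patched)

print(saliency)   # only occluding the top-left quadrant changes the score
```

Regions whose occlusion changes the prediction the most are the regions the model is "paying attention to."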
### Image Embeddings {: #image-embeddings }
At the input layer, classes are quite tangled and less distinct. Visual AI uses the last layer of a pretrained neural network (because the last layer of the network represents a high-level overview of what the network knows about forming complex objects). This layer produces a new representation in which the classes are much more separated, allowing them to be projected into a two-dimensional space, defined by similarity, and inspected with the [Image Embeddings](vai-insights#image-embeddings) tab. DataRobot uses [Trimap](https://arxiv.org/pdf/1910.00204.pdf), a state-of-the-art unsupervised dimensionality reduction approach, for its image embedding implementation.
Image embeddings are projections: starting from the very high-dimensional space the images exist in (224x224x3, or 150,528 dimensions), DataRobot projects them into 2D space. Their proximity, while dependent on the data, can potentially aid in outlier detection.
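The projection idea can be sketched in a few lines. DataRobot uses Trimap; plain PCA via SVD is shown here only as a widely available stand-in, with synthetic data standing in for last-layer featurizer outputs:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two synthetic "classes" in a 512-dimensional feature space, standing in
# for last-layer featurizer outputs. DataRobot's implementation uses
# Trimap; PCA is used here only because it fits in a few lines.
a = rng.normal(0.0, 1.0, (50, 512))
b = rng.normal(4.0, 1.0, (50, 512))
features = np.vstack([a, b])

centered = features - features.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
embedding_2d = centered @ vt[:2].T    # project onto the top two components

print(embedding_2d.shape)             # (100, 2)
```

In the 2D embedding, points from well-separated classes form distinct groups, which is what makes the view useful for spotting outliers.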
---
title: Visual AI reference
description: Do a deep dive on DataRobot's Visual AI.
---
# Visual AI reference {: #visual-ai-reference }
These sections describe the workflow and reference materials for including images as part of your DataRobot project.
Topic | Describes...
----- | ------
[Visual AI reference](vai-ref) | Learn about technological components of Visual AI.
[Visual AI tuning walkthrough](vai-tuning-guide) | See an example of the [tuning section](vai-tuning) at work.
See considerations for working with [Visual AI](vai-model#feature-considerations).
---
title: Visual AI tuning guide
description: Step through several recommended methods for maximizing Visual AI classification accuracy.
---
# Visual AI tuning guide {: #visual-ai-tuning-guide }
In this guide, you will step through several recommended methods for maximizing Visual AI classification accuracy with a boat dataset containing nine classes and approximately 1,500 images. You can get the dataset [here](https://www.kaggle.com/datasets/clorichel/boat-types-recognition){ target=_blank }.

Start the project with the target of `class`. When the project builds, change the displayed metric from `LogLoss` to `Accuracy`. Under the cross-validation score, you'll see that the top model achieved 83.68% accuracy.

Use the steps below to improve the results:
## 1. Run with Comprehensive mode {: #1-run-with-comprehensive-mode }
The first modeling run, using Quick mode, generated results by exploring a limited set of available blueprints. There are, however, many more available if you run in Comprehensive mode. Click the [**Configure modeling settings**](more-accuracy#get-more-accuracy) option in the right pane and select Comprehensive modeling mode to re-run the modeling process and build additional models while prioritizing accuracy.

This results in a model with a much higher accuracy of 91.45%.

## 2. Explore other image featurizers {: #2-explore-other-image-featurizers }
Once images are turned into numbers (“[featurized](vai-ref#pretrained-network-architectures)”) as a task in the model's blueprint, they can be passed to a modeling algorithm and combined with other features (numeric, categorical, text, etc.). The featurizer takes the binary content of image files as input and produces a feature vector that represents key characteristics of that image at different levels of complexity. These feature vectors are then used downstream as input to a modeler. DataRobot provides several featurizers based on pretrained neural network architectures.
To explore improvements with other image featurizers, select the top model on the Leaderboard and view its blueprint, which shows the featurizer used.

From the [**Advanced Tuning**](adv-tuning) tab, scroll to the current network to bring up the menu of options.

Try different `network` hyperparameters (scroll to the bottom and select **Begin Tuning** after each change). After tuning the top Comprehensive mode model with each available image featurizer, you can further explore variations of those featurizers in the top-performing models.
## 3. Feature granularity {: #3-feature-granularity }
Featurizers are deep convolutional neural networks made up of sequential layers, each layer aggregating information from previous layers. The first layers capture low-level patterns made of a few pixels: points, edges, and corners. The next layers capture shapes and textures. The final layers capture objects. You can select the level of features you want to extract from the neural network model, tuning and optimizing results (although the more layers enabled, the longer the run time).

Toggles for feature granularity options (highest, high, medium, low) are found below the `network` section in the **Advanced Tuning** menu.

Any combination of these can be used, and the context of your problem/data can direct which features might provide the most useful information.
## 4. Image Augmentation {: #4-image-augmentation }
Following featurizer tuning, you can explore changes to your input data with [image augmentation](tti-augment/index) to improve model accuracy. By creating new images for training by randomly transforming existing images, you can build insightful projects with datasets that might otherwise be too small.
Image augmentation is available at project setup in the [**Image Augmentation**](ttia) advanced options or after modeling in **Advanced Tuning**.
Domain expertise can provide insight into which transformations could show the greatest impact. Otherwise, a good place to start is with `rotation`, then `rotation + cutout`, followed by other combinations.

## 5. Classifier hyperparameters {: #classifier-hyperparameters }
The training hyperparameters of the classifier, the component receiving the image feature encodings from the featurizer, are also exposed for tuning in the **Advanced Tuning** menu.

To set a new hyperparameter, enter a value in the **Enter value** field in one of the following ways:
* Select one of the prepopulated values (clicking any value listed in orange enters it into the value field).
* Type a value into the field. Refer to the **Acceptable Values** field, which lists either constraints for numeric inputs or predefined allowed values for categorical inputs (“selects”). To enter a specific numeric, type a value or range meeting the criteria of **Acceptable Values**:

In the screenshot above, you can enter various values between 0.00001 and 1, for example:
* 0.2 to select an individual value.
* 0.2, 0.4, 0.6 to list values that fall within the range; use commas to separate a list.
* 0.2-0.6 to specify the range and let DataRobot select intervals between the high and low values; use hyphen notation to specify a range.
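The three entry formats above expand into candidate value lists. The parser below is a hypothetical sketch; how DataRobot actually selects intervals within a range is internal, so evenly spaced values are an assumption here:

```python
# Hypothetical sketch of how the three entry formats might expand into
# candidate hyperparameter values. Evenly spaced range points are an
# assumption, not DataRobot's documented behavior.
def expand(entry: str, points: int = 3):
    entry = entry.strip()
    if "," in entry:                              # "0.2, 0.4, 0.6": a list
        return [float(v) for v in entry.split(",")]
    if "-" in entry.lstrip("-"):                  # "0.2-0.6": a range
        lo, hi = (float(v) for v in entry.split("-", 1))
        step = (hi - lo) / (points - 1)
        return [round(lo + i * step, 6) for i in range(points)]
    return [float(entry)]                         # "0.2": a single value

print(expand("0.2"))            # [0.2]
print(expand("0.2, 0.4, 0.6"))  # [0.2, 0.4, 0.6]
print(expand("0.2-0.6"))        # [0.2, 0.4, 0.6]
```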
### Fine-tuning tips {: #fine-tuning-tips }
* For speed improvements, reduce **Early Stopping Patience**. The default is 5; try setting it to 2, as a patience of 5 can sometimes lead to 40+ epochs.
* Change the loss function to **focal_loss** if the dataset is imbalanced, since **focal_loss** generalizes well to imbalanced data.
* For faster convergence, change **reduce_lr_patience** to 1 (the default is 3).
* Change **model_name** to `efficientnet-b0` if you are aiming for better fine-tuner accuracy. The default is `mobilenet-v3`.
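The reason focal loss helps on imbalanced data is visible in its formula: it down-weights examples the model already classifies confidently. A minimal sketch of the standard formulation (Lin et al., 2017), using `gamma=2.0` as the paper's common choice rather than a known DataRobot default:

```python
import math

# Standard focal loss vs. plain log loss for a single example, where
# p_true is the predicted probability of the true class.
def log_loss(p_true: float) -> float:
    return -math.log(p_true)

def focal_loss(p_true: float, gamma: float = 2.0) -> float:
    return -((1.0 - p_true) ** gamma) * math.log(p_true)

# An easy example (p=0.9) is down-weighted far more than a hard one
# (p=0.1), so mistakes on the rare class dominate training.
print(round(focal_loss(0.9) / log_loss(0.9), 4))   # 0.01
print(round(focal_loss(0.1) / log_loss(0.1), 4))   # 0.81
```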
[Set the search type](adv-tuning#set-the-search-type) by clicking **Select search option**, and selecting either:
* **Smart Search** (default) performs a sophisticated pattern search (optimization) that emphasizes areas where the model is likely to do well and skips hyperparameter points that are less relevant to the model.
* **Brute Force** evaluates each data point, which can be more time and resource intensive.
Recommended hyperparameters to search first vary by the classifier used. For example:
* Keras model: batch size, learning rate, hidden layers, and initialization
* XGBoost: number of variables, learning rate, number of trees, and subsample per tree
* ENet: alpha and lambda
## Bonus: Fine-tuned blueprints {: #bonus-fine-tuned-blueprints }
Fine-tuned model blueprints may prove useful for datasets that differ greatly from ImageNet, the dataset used for the pretrained image featurizers. These blueprints are found in the model repository and can be trained from scratch (random weight initialization) or with the pretrained weights. Many of the tuning steps described above also apply to these blueprints; however, keep in mind that fine-tuned blueprints require extended training times. In addition, pretrained blueprints achieve better scores than fine-tuned blueprints in the majority of cases, as they did with this boat dataset.

## Final results {: #final-results }
Running in Comprehensive mode improved the accuracy score from the top Quick mode model (83.68%) to 91.45%. Following the additional steps outlined here for maximizing model performance, using the most effective settings within the platform, resulted in a final accuracy of 92.92%.
---
title: Document AI insights
description: Use the Document AI visualizations to better understand the information contained in your documents.
section_name: AutoML
maturity: public-preview
---
# Document AI insights {: #document-ai-insights }
DataRobot provides a variety of visualizations to help better understand `document` features.
Insight | Description
------- | -----------
Prior to modeling | :~~:
AI Catalog [Profile](catalog-asset#asset-details) tab | Preview dataset column names and row data.
[Data Quality Assessment](doc-ai-ingest#data-quality) (DQA) | After EDA1, use the DQA to find potential issues with the modeling data.
Post-modeling | :~~:
[Document Insights](#document-insights) | Understand how DataRobot processed `document` features for modeling.
[Clustering Insights](#clustering-insights) | Show how text (of type `document`) is clustered, which can capture latent features or identify segments of content.
[Prediction Explanations](pred-explain/predex-text)* | Show extracted text from documents. Note that while you will see the `document` text for each row selected, and can get a preview of each feature, the highlighting that accompanies Text Explanations is not available.
[Word Cloud](word-cloud)* | Display the most relevant words and short phrases found in the project's `document` column.
[Lift Chart](lift-chart#lift-chart-drill-down)* | View bin data for actual and predicted values of the `document` feature.
[Blueprint](doc-ai-ingest#set-document-task) | View the text extraction process represented as part of the model [blueprint](blueprints).
* These insights work similarly to DataRobot's handling of `text` features, with minor differences.
## Document Insights {: #document-insights }
The **Document Insights** tab provides `document`-specific visualizations to help you see and understand the unique nature of a document's text elements. It lets you compare rendered pages of a document with the extracted text of the documents. There are several components to the screen:

| | Element | Description |
|---|---|---|
| | Filters | Sets the display to match the classes selected by the filters. Both actual and predicted filter values are applied as an `and` to the display. |
| | Task | Identifies the task used in the [text extraction process](doc-ai-ingest#set-document-task). |
| | High-level page preview | Scroll through or select the PDF documents used in the model. Click an entry to change the middle and right columns to reflect that document. |
| | Mid-level page view | Shows the content of the selected document, page by page, highlighting the areas that were extracted as text. Use the arrows below the page (if present) to cycle through the pages. |
| | Detailed page view | Shows the individual text rows. |
This insight is useful for double-checking which information DataRobot extracted from the document and whether you selected the correct task. For example, if you see that the information from an image is not available, and you need the text from within that image, you can then retry with the OCR task.

To use the insight:
1. Click a high-level page preview (1) to select a page. The mid-level and detailed pages update to reflect the selected page.
2. Select an individual line in the mid-level preview (2) and:
- Use the zoom in/zoom out features to change the view.
- Use pagination for documents with more than one page.
- Notice that the line is highlighted in the detailed page view.

3. Select a line in the detailed page view.
## Clustering Insights {: #clustering-insights }
Document AI also supports [Cluster Insights](cluster-insights). For each cluster based on `document` features, DataRobot displays the ngrams for features in the document column. Each ngram is listed according to importance. In the example below, the insight shows:
* Previews of the images in the cluster. Hover to enlarge the image.
* Ranked importance of the ngrams found. Hover on a feature for more details of its use within the document.

## Advanced Tuning {: #advanced-tuning }
The Tesseract OCR engine may not recognize documents with very small text (some footnotes, for example). If that happens and the text is necessary for model accuracy, use [**Advanced Tuning**](adv-tuning) to manually set model parameters.
When the **Tesseract OCR** task is present, a `Resolution` option becomes available through this tuning (as does a language option). The resolution sets the DPI used to convert document pages to images before they are processed with the Tesseract library. A higher value can improve OCR results; however, it also extends run times. In other words, if you notice that text is missed (in **Document Insights**, for example), you can increase the value and compare results.

---
title: Document ingest and modeling
description: Learn how to ingest PDF documents as an input to modeling.
section_name: AutoML
maturity: public-preview
---
# Document ingest and modeling {: #document-ingest-and-modeling }
PDF documents used for modeling are extracted by tasks within the blueprint and registered as a dataset made of a single text column, with each row in the column representing a single document and the value being the extracted text. These are features of type `document`.
The steps to build models are:
1. [Prepare your data](#prepare-pdf-data).
2. [Model with text](#model-with-text), including ingesting, converting PDF to text, and analyzing data.
## Prepare PDF data {: #prepare-pdf-data }
The following options describe methods of preparing PDF documents (with embedded text or scans) as data that can be imported into DataRobot for modeling. See the [deep dive](#deep-dive-text-handling-details) below for a more detailed description of both data handling methods.
* Include PDF documents as base64-encoded strings inside the dataset. (See the DataRobot Python client [utility methods](https://datarobot-public-api-client.readthedocs-hosted.com/en/latest-release/reference/modeling/spec/binary_data.html?highlight=binary#working-with-binary-data){ target=_blank } for assistance.)

* Upload an archive file (e.g., zip) with a dataset file that references PDF documents relative to the dataset (document columns in the dataset contain paths to documents).

* For binary or multiclass classification, separate the PDF document classes by folder, then compress the separated PDFs into an archive and upload them. DataRobot creates a column with the directory names, which you can use as the target.

* For unsupervised projects, include all PDF files in the root (no directories needed).
* Include a dataset along with your documents and other binary files (for example, images). In the dataset, you can reference the binary files by their relative path to the dataset file in the archive. This method works for any project type and allows you to combine the document feature type with all the other feature types DataRobot supports.
When uploading a ZIP file, you can also supply an accompanying CSV file to provide additional information to support the uploaded document. One column within the CSV must contain the document file name being referenced. All other values contained in the row are associated with the document and used as modeling features.
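The base64 approach above can be sketched with only the Python standard library (the `document` and `target` column names are illustrative placeholders, not required names):

```python
import base64
import csv
import io

def build_base64_dataset(pdf_paths: list[str], labels: list[str]) -> str:
    """Write a CSV where each row holds a base64-encoded PDF and its label.

    The 'document' and 'target' column names are hypothetical; match them
    to whatever your project expects.
    """
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["document", "target"])
    writer.writeheader()
    for path, label in zip(pdf_paths, labels):
        with open(path, "rb") as f:
            # Encode the raw PDF bytes as an ASCII-safe base64 string.
            encoded = base64.b64encode(f.read()).decode("ascii")
        writer.writerow({"document": encoded, "target": label})
    return buf.getvalue()
```

The DataRobot Python client's binary-data utility methods linked above provide a supported version of this conversion.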

!!! note "Text extraction with the Document Text Extractor task"
DataRobot extracts all text from _text_ PDF documents. If images contain text, that text may or may not be used, depending on how the image was created. To determine if text in an image can be used for modeling, open the image in a PDF editor and try to select it—if you can select the text, DataRobot will use it for modeling. To ensure DataRobot can extract text from any image, select the Tesseract OCR task.
## Model with text {: #model-with-text }
To start a project using Document AI:
1. Load your prepared [dataset file](#prepare-pdf-data), either via upload or the AI Catalog. Note that:
- Any `document` type feature is converted to text during modeling (in the blueprint).
- Each document is represented as a row.
- All extracted text from the document is represented within a cell for that row.
2. Verify that DataRobot is using the correct [document processing task](#document-settings) and set the language.
3. Examine your data after [EDA1](#data-quality) (ingest) to understand the content of the dataset.
4. Press start to begin model building.
5. Examine your data using the [Document AI insights](doc-ai-insights).
## Document settings {: #document-settings }
After setting the target, use the **Document Settings** advanced option to verify or modify the document task type and language.
### Set document task {: #set-document-task }
Select one of two document tasks to be used in the blueprints—**Document Text Extractor** or **Tesseract OCR**. During EDA1, if DataRobot detects embedded text, it applies **Document Text Extractor**; otherwise, it selects **Tesseract OCR**.
* For embedded text, the Document Text Extractor is recommended because it's faster and more accurate.
* To extract all visible text, including the text from images inside the documents, select the **Tesseract OCR** task.
* When PDFs contain scans, it is possible that the scans have quality issues—they contain "noise," the pages are rotated, or the contrast is not sharp. Once EDA1 completes, you can view the state of the scans by expanding the `Document` type entry in the data table:

### Set language {: #set-language }
It is important to verify and set the language of the document. The OCR engine must have the correct language set in order to set the appropriate pretrained language model. DataRobot's OCR engine supports 105 languages.
## Data quality {: #data-quality }
If the dataset was loaded to the AI Catalog, use the **Profile** tab for visual inspection:

After uploading, examine the data, which shows:
* The feature names or, if an archive file is separated into folders, a class column—the folder names from the ZIP file.

* The `document` type features.
* The Reference ID, which captures the file names so that you can later identify which predictions belong to which file when no dataset file was provided inside the archive.
Additionally, DataRobot's Data Quality assessment helps surface issues so that you can fix errors before modeling.

Click **Preview log**, and optionally download the log, to identify errors and fix the dataset. Some of the errors include:
* `There is no file with this name`
* `Found empty path`
* `File not in PDF format or corrupted`
* `The file extension indicates that this file is not of a supported document type`
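The same conditions can be checked locally before upload. This sketch mirrors the spirit of those log messages (the checks and wording are approximations, not DataRobot's actual validation logic); the `%PDF` magic-byte test relies on the header required by the PDF specification:

```python
from pathlib import Path

def validate_document_path(path_str: str) -> list[str]:
    """Pre-upload checks mirroring the kinds of errors the log reports."""
    errors = []
    if not path_str.strip():
        errors.append("Found empty path")
        return errors
    path = Path(path_str)
    if path.suffix.lower() != ".pdf":
        errors.append("Unsupported document type")
    if not path.is_file():
        errors.append("There is no file with this name")
    elif not path.read_bytes().startswith(b"%PDF"):
        # Every well-formed PDF begins with the '%PDF-' header.
        errors.append("File not in PDF format or corrupted")
    return errors
```

Running a check like this over every path referenced by the dataset can save a failed ingest.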
## Deep dive: Text handling details {: #deep-dive-text-handling-details }
DataRobot handles both embedded text and PDFs with scans. Embedded text documents are PDFs that allow you to select and/or search for text in your PDF viewer. PDFs with scans are processed via optical character recognition (OCR)—text cannot be searched or selected in your PDF viewer because the text is part of an image within the PDF.
### Embedded text {: #embedded-text }
The blueprints available in the [Repository](repository) are the same as those that would be available for a text variable. While text-based blueprints use text directly, in a blueprint with `document` variables, you can see the addition of a **Document Text Extractor** task. It takes the PDF files, extracts the text, and provides the text to all subsequent tasks.

### Scanned text (OCR) {: #scanned-text-ocr }
Because PDF documents with scans do not have the text embedded, the text is not directly machine-readable. DataRobot runs optical character recognition (OCR) on the PDF in an attempt to identify and extract text. Blueprints using OCR use the Tesseract OCR task:

The **Tesseract OCR** task opens the document, converts each page to an image and then processes the images with the Tesseract library to extract the text from them. The **Tesseract OCR** task then passes the text to the next blueprint task.
Use the [**Document Insights**](doc-ai-insights#document-insights) visualization after model building to see example pages and the detected text. Because the Tesseract engine can have issues with small fonts, use [**Advanced Tuning**](doc-ai-insights#advanced-tuning) to adjust the resolution.
### Base64 strings {: #base64-strings }
DataRobot also supports base64-encoded strings. For document (and [image](vai-model)) datasets, DataRobot converts PDF files to base64 strings and includes them in the dataset file during ingest. After ingest, instead of a ZIP file there is a single CSV file that includes the images and PDF files as base64 strings.
|
doc-ai-ingest
|
---
title: Document AI
description: Learn how to use documents as a data source without manual intervention to make documents available for modeling.
section_name: AutoML
maturity: public-preview
---
# Document AI {: #document-ai }
!!! info "Availability information"
Document AI modeling is off by default. Contact your DataRobot representative or administrator for information on enabling the feature.
<b>Feature flags:</b> Enable Document Ingest, Enable OCR for Document Ingest
Public preview for... | Describes...
----- | ------
[Workflow overview](doc-ai-overview) | Read background information and a simplified workflow overview.
[Document ingest and modeling](doc-ai-ingest) | Learn how to prepare raw documents as an input to modeling.
[Document AI insights](doc-ai-insights) | Use the Document AI-specific visualizations to better understand text within your documents.
[Making predictions](doc-ai-predictions) | Make real-time or batch predictions on Document AI models.
|
index
|
---
title: Predictions from documents
description: Multiple methods are available for making predictions with Document AI.
section_name: AutoML
maturity: public-preview
---
# Predictions from documents {: #predictions-from-documents }
The following prediction methods are available for Document AI.
Method | Description
------ | -----------
[UI](predict) | Upload an archive or dataset file for predictions from the UI.
For all methods other than deployment batch predictions, you must include PDF documents as base64-encoded strings. The public API client includes a utility function to help with conversion.
Method | Description
------ | -----------
[API](realtime/index) | Use scripting code and base64-converted document files to make an API call and get predictions from the deployed model. The output will be a CSV file with prediction results.
[Portable Prediction Server](port-pred/index) | Use base64-converted document files for either single-model or multi-model modes.
[Portable Batch Predictions](batch/index) | Use base64-converted document files for all supported adapters (filesystem, JDBC, AWS S3, Azure Blob, GCS, Snowflake, Synapse).
|
doc-ai-predictions
|
---
title: Document AI overview
description: Read background information and a simplified workflow overview.
section_name: AutoML
maturity: public-preview
---
# Document AI overview {: #document-ai-overview }
Analysts and data scientists often want to use the information contained in PDF documents to build models. However, manually intensive data preparation requirements present a challenging barrier to efficient use of documents as a data source. Often the volume of documents is large enough that reading through each or manually formatting and preparing them into tabular formats is not feasible. Information spread out in a large corpus of documents—in a variety of formats with inconsistencies—makes the frequently valuable text information contained within these documents inaccessible.
Document AI provides a way to build models on raw PDF documents without manually intensive data preparation steps. It provides end-to-end support for both PDFs with embedded, machine-readable text and scanned PDFs:
* DocumentTextExtractor (DTE): Extracts embedded text from a PDF document. Example: Save a document written on your computer as PDF, then upload it.
* Optical Character Recognition (OCR): Extracts scanned text. Example: You print out a document and then scan it and upload it as PDF. Content is seen as pixels (not as “known” text).
Document AI works with many project types, including regression, binary and multiclass classification, multilabel, clustering, and anomaly detection. The process extracts content and categorizes it as type `document` for modeling:

Projects can include not only one or more `document` features, but any other feature type that DataRobot supports.
## Workflow overview {: #workflow-overview }
Following is the Document AI workflow:
1. Create a [PDF-based dataset](doc-ai-ingest) for use in projects via the AI Catalog or local file upload.
2. Preview documents for potential [data quality](doc-ai-ingest#data-quality) issues.
3. Build models using the standard DataRobot workflow.
4. Evaluate models on the Leaderboard with [document-specific insights](doc-ai-insights).
5. Select a model to use for [making predictions](doc-ai-predictions) via Make Predictions, the DataRobot API, or batch predictions.
## Feature considerations {: #feature-considerations }
- Time series projects are not supported.
|
doc-ai-overview
|
---
title: Comments
description: With the Comments link, you can add comments to, or host a discussion around, any item you have access to in the catalog.
---
# Comments {: #comments }
{% include 'includes/comm-add.md' %}
|
index
|
---
title: Word Cloud
description: Word Cloud displays the most relevant words and short phrases in word cloud format.
---
# Word Cloud {: #word-cloud }
Text variables often contain words that are highly indicative of the response. The **Word Cloud** insight displays up to the 200 most impactful words and short phrases in word cloud format.
Select a model from the Leaderboard and click **Understand > Word Cloud** to display the chart:

{% include 'includes/word-cloud-include.md' %}
|
word-cloud
|
---
title: Feature Effects
description: Feature Effects (with partial dependence) conveys how changes to the value of each feature change model predictions.
---
# Feature Effects {: #feature-effects }
!!! warning
**Evaluate > Feature Fit** has been removed. Use **Feature Effects** instead, as it provides the same output.
Because of the complexity of many machine learning techniques, models can sometimes be difficult to interpret directly. **Feature Effects** ranks features based on the feature impact score.
## Feature Effects explained {: #feature-effects-explained }
**Feature Effects** shows the effect of changes in the value of each feature on the model’s predictions. It displays a graph depicting how a model "understands" the relationship between each feature and the target, with the features sorted by [**Feature Impact**](feature-impact). The insight is communicated in terms of [partial dependence](#partial-dependence-logic), which illustrates how a change in a feature's value, while keeping all other features as they were, impacts a model's predictions. Literally, "what is the feature's effect, how is *this* model using *this* feature?" To compare the model evaluation methods side by side:
* **Feature Impact** conveys the relative impact of each feature on a specific model.
* **Feature Effects** (with partial dependence) conveys how changes to the value of each feature change model predictions.
Clicking **Compute Feature Effects** causes DataRobot to first compute **Feature Impact** (if not already computed for the model) on all data. If you change the [data slice](sliced-insights) for **Feature Effects** or the [quick-compute](feature-impact#quick-compute) setting for **Feature Impact**, **Feature Effects** will still use the original **Feature Impact** settings. In other words, DataRobot does not change the basis of (recalculate) **Feature Effects** visualizations that have already been calculated. If you subsequently change the **Feature Impact** quick-compute setting, all new calculations will use the new **Feature Impact** calculations.
See below for [more information](#more-info) on how DataRobot calculates values, explanation of tips for using the displays, and how Exposure and Weight change the output.

The completed result looks similar to the following, with three main screen components:
* [Display options](#display-options)
* [List of top features](#list-of-features)
* [Chart of results](#feature-effects-results)
## Display options {: #display-options }

The following table describes the display control options for **Feature Effects**:
| | Element | Description |
|---|---|---|
|  | [Sort by](#sort-options) | Provides controls for sorting. |
|  | [Bins](#set-the-number-of-bins) | For qualifying feature types, sets the binning resolution for the feature value count display. |
|  | [Data Selection](#select-the-partition-fold) | Controls which partition fold is used as 1) the basis of the Predicted and Actual values and 2) the sample used for the computation of partial dependence. Options for [OTV projects](#select-the-partition-fold) differ slightly. |
|  | [Data slice](sliced-insights) | _Binary classification and regression only_. Selects the filter that defines the subpopulation to display within the insight.
| Not shown | [Class](#select-class) | _Multiclass only_. Provides controls to display graphed results for a particular class within the target feature. |
|  | [**More**](#more-options) | Controls whether to display missing values and changes the Y-axis scale.|
|  | [**Export**](#export) | Provides options for downloading data. |
!!! tip
{% include 'includes/slices-viz-include.md' %}
### Sort options {: #sort-options }
The **Sort by** dropdown provides sorting options for plot data. For categorical features, you can sort alphabetically, by frequency, or by size of the effect (partial dependence). For numeric features, sort is always numeric.
### Set the number of bins {: #set-the-number-of-bins }
The **Bins** setting allows you to set the binning resolution for the display. This option is only available when the selected feature is a numeric or continuous variable; it is not available for categorical features or numeric features with low unique values. Use the [feature value tooltip](#feature-value-tooltip) to view bin statistics.
### Select the partition fold {: #select-the-partition-fold }
You can set the partition fold used for predicted, actual, and partial dependence value plotting with the **Data Selection** dropdown—Training, Validation, and, if unlocked, Holdout. While it may not be immediately obvious, there are [good reasons](#training-data-as-the-viewing-subset) to investigate the training dataset results.

When you select a partition fold, that selection applies to all three display controls, whether or not the control is checked. Note, however, that while performed on the same partition fold, the [partial dependence calculation](#partial-dependence-calculations) uses a different range of the data.
Note that **Data Selection** options differ depending on whether or not you are investigating a time-aware project:
_For non-time-aware projects:_ In all cases you can select the Training or Validation set; if you have unlocked holdout, you also have an option to select the Holdout partition.
_For time-aware projects:_ For time-aware projects, you can select Training, Validation, and/or Holdout (if available) as well as a specific backtest. See the section on [time-aware Data Selection](#data-selection-for-time-aware-projects) settings for details.
### Select the class (multiclass only) {: #select-the-class-multiclass-only }
In a multiclass project, you can additionally set the display to chart per-class results for each feature in your dataset.

By default, DataRobot calculates effects for the top 10 features. To view per-class results for features ranked lower than 10, click **Compute** next to the feature name:

### Export {: #export }
The **Export** option allows you to [export](export-results) the graphs and data associated with the model's details and for individual features. If you choose to export a ZIP file, you will get all of the chart images and the CSV files for partial dependence and predicted vs actual data.
### More options {: #more-options }
The **Feature Effects** insight provides tools for re-displaying the chart to help you focus on areas of importance.
!!! note
This option is only available when one of the following conditions is met: there are missing values in the dataset, the chart's axis is scalable, or the project is binary classification.
Click the gear setting to view the choices:

Check or uncheck the following boxes to activate:
* **Show Missing Values**: Shows or hides the effect of missing values. This selection is available for numeric features only. The bin corresponding to missing values is labeled as **=Missing=**.
* **Auto-scale Y-axis**: Resets the Y-axis range, which is then used to chart the actual data, the prediction, and the partial dependence values. When checked (the default), the values on the axis span the highest and lowest values of the target feature. When unchecked, the scale spans the entire eligible range (for example, 0 through 1 for binary projects).
* **Log X-Axis**: Toggles between the different X-axis representations. This selection is available for highly skewed distributions (where one tail is longer than the other) of numeric features with values greater than zero.
## List of features {: #list-of-features }

The following table describes the feature list output of the **Feature Effects** display:
| | Element | Description |
|---|---|---|
|  | Search for features | Lists the top features that have more than zero influence on the model, based on the [**Feature Impact**](feature-impact) score. |
|  | Score | Reports the relevance to the target feature. This is the value displayed in the [**Feature Impact**](feature-impact) display. |
To the left of the graph, DataRobot displays a list of the top 500 predictors. Use the arrow keys or scroll bar to scroll through features, or the search field to find by name. If all the sample rows are empty for a given feature, the feature is not available in the list. Selecting a feature in the list updates the display to reflect results for that feature.
Each feature in the list is accompanied by its [feature impact](feature-impact) score. Feature impact measures, for each of the top 500 features, the importance of one feature on the target prediction. It is estimated by calculating the prediction difference before and after shuffling the selected rows of one feature (while leaving other columns unchanged). DataRobot normalizes the scores so that the value of the most important column is 1 (100%). A score of 0% indicates that there was no calculated relationship.
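The shuffle-based estimate described above can be sketched with a stand-in linear model (this is an illustration of the permutation technique, not DataRobot's exact implementation):

```python
import random
import statistics

def permutation_importance(predict, rows, seed=0):
    """Score each feature by how much shuffling its column changes predictions."""
    rng = random.Random(seed)
    baseline = [predict(r) for r in rows]
    raw = []
    for j in range(len(rows[0])):
        col = [r[j] for r in rows]
        rng.shuffle(col)  # shuffle one column, leave the others unchanged
        shuffled = [r[:j] + [v] + r[j + 1:] for r, v in zip(rows, col)]
        diffs = [abs(predict(r) - b) for r, b in zip(shuffled, baseline)]
        raw.append(statistics.mean(diffs))
    top = max(raw) or 1.0
    return [s / top for s in raw]  # normalize: most important feature scores 1.0

# Stand-in model that leans heavily on feature 0:
model = lambda r: 3.0 * r[0] + 0.1 * r[1]
data = [[float(i), float(i % 5)] for i in range(50)]
scores = permutation_importance(model, data)
```

Because the model depends mostly on feature 0, shuffling that column moves predictions far more than shuffling feature 1, so feature 0 receives the normalized score of 1.0.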
## Feature Effects results {: #feature-effects-results }

| | Element | Description |
|---|---|---|
|  | [Target range](#target-range-y-axis) | Displays the value range for the target; the Y-axis values can be adjusted with the [scaling](#more-options) option.|
|  | [Feature values](#feature-values-x-axis) | Displays individual values of the selected feature.|
|  | [Feature values tooltip](#feature-value-tooltip)| Provides summary information for a feature's binned values. |
|  | [Feature value count](#feature-value-count)| Shows, for the selected feature, the feature distribution for the selected partition fold. |
|  | [Display controls](#display-controls) | Sets filters that control the values plotted in the display (partial dependence, predicted, and/or actual). |
### Target range (Y-axis) {: #target-range-y-axis }
The Y-axis represents the value range for the target variable. For binary classification problems, this is a value between 0 and 1. For other project types, the axis displays from min to max values. Note that you can use the [scaling feature](#more-options) to change the Y-axis and bring greater focus to the display.
### Feature values (X-axis) {: #feature-values-x-axis }
The X-axis displays the values found for the feature selected in the [list of features](#list-of-features). The selected [sort order](#sort-options) controls how the values are displayed. See the section on [partial dependence calculations](#partial-dependence-calculations) for more information.
#### For numeric features {: #for-numeric-features }
The logic for a numeric feature depends on whether you are displaying predicted/actual or partial dependence.
#### Predicted/actual logic {: #predicted-actual-logic }
* If the value count in the selected partition fold is greater than 20, DataRobot bins the values based on their distribution in the fold and computes Predicted and Actual for each bin.
* If the value count is 20 or less, DataRobot plots Predicted/Actuals for the top values present in the fold selected.
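The per-bin averaging above can be sketched as follows (the equal-count bin construction is an assumption for the example; DataRobot's actual binning is based on the feature's distribution in the fold):

```python
import statistics

def bin_averages(values, actuals, predictions, n_bins=4):
    """Group rows into equal-count bins by feature value, then average the
    actual and predicted target within each bin."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    size = len(order) // n_bins
    out = []
    for b in range(n_bins):
        # The last bin absorbs any remainder rows.
        idx = order[b * size:(b + 1) * size] if b < n_bins - 1 else order[b * size:]
        out.append({
            "bin_range": (values[idx[0]], values[idx[-1]]),
            "avg_actual": statistics.mean(actuals[i] for i in idx),
            "avg_predicted": statistics.mean(predictions[i] for i in idx),
            "rows": len(idx),
        })
    return out
```

Each dictionary corresponds to one plotted bin: the value range on the X-axis and the averaged actual/predicted points above it.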
#### Partial dependence logic {: #partial-dependence-logic }
* If the value count of the feature in the entire dataset is greater than 99, DataRobot computes partial dependence on the percentiles of the distribution of the feature in the entire dataset.
* If the value count is 99 or less, DataRobot computes partial dependence on all values in the dataset (excluding outliers).
#### Chart-specific logic {: #chart-specific-logic }
Partial dependence feature values are derived from the percentiles of the distribution of the feature across the entire dataset. The X-axis may additionally display a `==Missing==` bin, which contains the effect of missing values. The partial dependence calculation always includes "missing values," even if the feature has no missing values in the dataset. The display shows what *would be* the average prediction if the feature were missing—DataRobot doesn't need the feature to actually be missing; it's just a "what if."
#### For categorical features {: #for-categorical-features }
For categorical features, the X-axis displays the 25 most frequent values for predicted, actual, and partial dependence in the selected partition fold. The categories can include, as applicable:
* `=All Other=`: For categorical features, a single bin containing all values other than the 25 most frequent values. No partial dependence is computed for `=All Other=`. DataRobot uses one-hot encoding and ordinal encoding preprocessing tasks to automatically group low-frequency levels.
For both tasks, you can use the `min_support` [advanced tuning](adv-tuning) parameter to group low-frequency values. By default, DataRobot uses a value of 10 for the one-hot encoder and 5 for the ordinal encoder. In other words, any category that occurs fewer than 10 times (one-hot encoder) or 5 times (ordinal encoder) is combined into one group.
* `==Missing==`: A single bin containing all rows with missing feature values (that is, NaN as the value of one of the features).
* `==Other Unseen==`: A single bin containing all values that were not present in the Training set. No partial dependence is computed for `==Other Unseen==`. See the [explanation below](#binning-and-top-values) for more information.

### Feature value tooltip {: #feature-value-tooltip }
For each bin, to display a feature's calculated values and row count, hover in the display area above the bin. For example, this tooltip:

Indicates:
For the feature `number diagnoses`, when the value is `7`, the partial dependence average was (roughly) `0.407` and the actual values average was `0.432`. These averages were calculated from `201` rows in the dataset (in which the number of diagnoses was seven). Select the **Predicted** label to see the predicted average.
### Feature value count {: #feature-value-count }
The bar graph below the X-axis provides a visual indicator, for the selected feature, of each of the feature's value frequencies. The bars are mapped to the feature values listed above them, and so changing the sort order also changes the bar display. This is the same information as that presented in the [**Frequent Values**](histogram#frequent-values-chart) chart on the **Data** page. For qualifying feature types, you can use the [**Bins**](#set-the-number-of-bins) dropdown to set the number of bars (determine the binning).
### Display controls {: #display-controls }
Use the display control links to set the display of plotted data. Actual values are represented by open orange circles, predicted values by blue crosses, and partial dependence points by solid yellow circles. In this way, points lie on top without blocking the view of each other. Click or unclick a label in the legend to focus on a particular aspect of the display. See below for information on how DataRobot [calculates](#partial-dependence-calculations) and displays the values.
## More info... {: #more-info }
The following sections describe:
* How DataRobot calculates [average values](#average-value-calculations) and [partial dependence](#partial-dependence-calculations)
* [Interpreting](#interpret-the-displays) the displays
* [Time-aware data selection](#data-selection-for-time-aware-projects)
* Understanding [unseen values](#binning-and-top-values)
* How [Exposure and Weight](#how-exposure-changes-output) change output
### Average value calculations {: #average-value-calculations }
For the predicted and actual values in the display, DataRobot plots the average values. The following simple example explains the calculation.
In the following dataset, Feature A has two possible values—1 and 2:
| Feature A | Feature B | Target |
|-----------:|----------:|-------:|
| 1 | 2 | 4 |
| 2 | 3 | 5 |
| 1 | 2 | 6 |
| 2 | 4 | 8 |
| 1 | 3 | 1 |
| 2 | 2 | 2 |
In this fictitious dataset, the X-axis would show two values: 1 and 2. When Feature A=1, DataRobot calculates the average as (4+6+1)/3. When A=2, the average is (5+8+2)/3. So the actual and predicted points on the graph show the average target for each aggregated feature value.
Specifically:
* For numeric features, DataRobot generates bins based on the feature domain. For example, for the feature `Age` with a range of 16-101, bins (the user selects the number) would be based on that range.
* For categorical features, for example `Gender`, DataRobot generates bins based on the top unique values (perhaps 3 bins—`M`, `F`, `N/A`).
DataRobot then calculates the average values of prediction in each bin and the average of the actual values of each bin.
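The toy calculation above can be reproduced directly:

```python
from collections import defaultdict
from statistics import mean

# The dataset from the table above, as (Feature A, Target) pairs.
rows = [(1, 4), (2, 5), (1, 6), (2, 8), (1, 1), (2, 2)]

def average_by_value(pairs):
    """Average the target within each aggregated feature value."""
    groups = defaultdict(list)
    for value, target in pairs:
        groups[value].append(target)
    return {value: mean(targets) for value, targets in groups.items()}

averages = average_by_value(rows)
# averages[1] == (4 + 6 + 1) / 3 and averages[2] == (5 + 8 + 2) / 3
```

These per-value averages are exactly the points plotted for the actual (and, with predictions in place of targets, the predicted) series.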
### Interpret the displays {: #interpret-the-displays }
In the **Feature Effects** display, categorical features are represented as points; numerical features are represented as connected points. This is because each numerical value can be seen in relation to the other values, while categorical features are not linearly related. A dotted line indicates that there were not enough values to plot.
!!! note
If you are using the [Exposure](additional#set-exposure) parameter feature available from the **Advanced options** tab, [line calculations differ](#how-exposure-changes-output).
Consider the following **Feature Effects** display:

The orange open circles depict, for the selected feature, the *average target value* for the aggregated **number_diagnoses** feature values. In other words, when the target is **readmitted** and the selected feature is **number_diagnoses**, a patient with two diagnoses has, on average, a roughly 23% chance of being readmitted. Patients with three diagnoses have, on average, a roughly 35% chance of readmittance.
The blue crosses depict, for the selected feature, the *average prediction* for a specific value. From the graph you can see that DataRobot averaged the predicted feature values and calculated a 25% chance of readmittance when **number_diagnoses** is two. Comparing the actual and predicted lines can identify segments where model predictions differ from observed data. This typically occurs when the segment size is small. In those cases, for example, some models may predict closer to the overall average.
The yellow **Partial Dependence** line depicts the marginal effect of a feature on the target variable after accounting for the average effects of all other predictive features. It indicates how, holding all other variables as they were, the value of this feature affects your prediction. The value of the feature of interest is reassigned to each possible value, calculating the average predictions for the sample at each setting. (In the simple example above, DataRobot calculates the average prediction when every row uses value 1 and then again when every row uses value 2.) These values help determine how the value of each feature affects the target. The shape of the yellow line "describes" the model’s view of the marginal relationship between the selected feature and the target. See the discussion of [partial dependence calculation](#partial-dependence-calculations) for more information.
Tips for using the displays:
* To evaluate model accuracy, uncheck the partial dependence box. You are left with a visual indicator that charts actual values against the model's predicted values.
* To understand partial dependence, uncheck the actual and predicted boxes. Set the sort order to **Effect Size**. Consider the partial dependence line carefully. Isolating the effect of important features can be very useful in optimizing outcomes in business scenarios.
* If there are not enough observations in the sample at a particular level, the partial dependency computation may be missing for a specific feature value.
* A dashed instead of solid predicted (blue) and actual (orange) line indicates that there are no rows in the bins created at the point in the chart.
* For numeric variables, if there are more than 18 values, DataRobot calculates partial dependence on values derived from the percentiles of the distribution of the feature across the entire data set. As a result, the value is not displayed in the hover tooltip.
#### Training data as the viewing subset {: #training-data-as-the-viewing-subset }
Viewing **Feature Effects** for training data provides a few benefits. It helps determine how well a trained model fits the data it was trained on, and it lets you compare model performance on seen versus unseen data. In other words, viewing the training results is a way to check the model against known values. If the predicted vs. actual results on the training set are weak, it is a sign that the model is not a good fit for the data.
When considering partial dependence, using training data means the values are calculated based on training samples and compared against the maximum possible feature domain. It provides the option to check the relationship between a single feature (by removing marginal effects from other features) and the target across the entire range of the data. For example, suppose the validation set covers January through June but you want to see partial dependence in December. Without that month's data in validation, you wouldn't be able to. However, by setting the data selection subset to **Training**, you could see the effect.
### Partial dependence calculations {: #partial-dependence-calculations }
Predicted/actual and partial dependence values are computed very differently for continuous data. The bins used for predicted/actual (for example, `(1-40], (40-50]...`) are created so that each contains enough rows to compute meaningful averages. DataRobot bins the values based on the distribution of the feature for the selected partition fold.
Partial dependence, on the other hand, uses single values (for example, `1`, `5`, `10`, `20`, `40`, `42`, `45...`) that are percentiles of the distribution of the feature across the entire dataset. It uses a sample of up to 1000 rows to determine the scale of the curve. To make the scale comparable with predicted/actual, the 1000 samples are drawn from the data of the selected fold. In other words, partial dependence is *calculated* for the maximum possible range of values from the entire dataset but scaled based on the **Data Selection** fold setting.
For example, consider a feature `year`. For partial dependence, DataRobot computes values based on all the years in the data. For predicted/actual, computation is based on the years in the selected fold. If the dataset dates range from 2001-01-01 to 2010-01-01, DataRobot uses that span for partial dependence calculations. Predicted/actual calculations, in contrast, contain only the data from the corresponding, selected fold/backtest. You can see this difference when viewing all three control displays for a selected fold:

??? tip "Deep dive: Partial dependence calculations"
The partial dependence plot shows the marginal effect a feature has on the predicted outcome of a machine learning model—or how the prediction varies if we just change one feature and keep everything else constant. The following calculation illustrates this for one feature, `X1`, on a sample of 1000 records of training data.
Assume that `X1` has 5 different values (for example, 0, 5, 10, 15, 20). For all 1000 records, DataRobot creates artificial datapoints by keeping all features constant except `X1`, which translates to 5,000 records (each row duplicated 5 times, once for each level of `X1`). Then it makes predictions for all 5,000 records and averages the predictions for each level of `X1`. This average prediction corresponds to the marginal effect of feature `X1`, as displayed on the partial dependence plot.
If there are 10 features, and each feature has 5 different values in a training dataset of 10K records, creating the marginal effect using all the data would require making predictions using 500k records (computationally expensive). Because it can obtain similar results for less "cost," DataRobot only uses a representative sample of the data to calculate partial dependence.
Why is the partial dependence plot short compared to the range of the actual data?
Note that because calculations are based on 1000 rows, it is quite possible that values from the tail ends of the distribution aren't captured by the sample. Also, the selected partition (holdout or validation) may not contain the full range of data, which is especially likely in the case of OTV or group partitioning. Finally, **Feature Effects** uses its own outlier logic to improve the clarity of the chart. If, in the given sample, the 4% tail ends represent more than 20% of the X-axis, DataRobot limits the calculations to the range between the 2nd and 98th percentiles.
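The deep-dive procedure above can be sketched in a few lines of Python. This is an illustrative toy implementation, not DataRobot's code; the `partial_dependence` helper, the linear toy model, and the grid values are all hypothetical:

```python
import numpy as np

def partial_dependence(model_predict, X, feature_idx, grid):
    """For each candidate value in `grid`, duplicate the sample with the
    feature of interest overwritten by that value, then average the
    predictions. All other features are held constant."""
    X = np.asarray(X, dtype=float)
    averages = []
    for value in grid:
        X_mod = X.copy()
        X_mod[:, feature_idx] = value  # reassign only the feature of interest
        averages.append(model_predict(X_mod).mean())
    return np.array(averages)

# Toy model: prediction = 2*x0 + x1, so the PD curve over x0 is linear.
predict = lambda X: 2 * X[:, 0] + X[:, 1]
rng = np.random.default_rng(0)
sample = rng.normal(size=(1000, 2))          # 1000-row sample, as in the text
pd_vals = partial_dependence(predict, sample, feature_idx=0,
                             grid=[0, 5, 10, 15, 20])
```

For this toy model, each step of 5 in `x0` raises the averaged prediction by 10, so the computed curve recovers the marginal effect exactly.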
### Data selection for time-aware projects {: #data-selection-for-time-aware-projects }
When working with time-aware projects, the **Data Selection** dropdown works a bit differently because of the backtests. Select the **Feature Effects** tab for your model of interest. If you haven't already computed values for the tab, you are prompted to compute for **Backtest 1** (Validation).
!!! note
If the model you are viewing uses start and end dates (common for the recommended model), backtest selection is not available.
When DataRobot completes the calculations, the insight displays with the following **Data Selection** setting:

#### Calculate backtests {: #calculate-backtests }
The results of clicking on the backtest name depend on whether backtesting has been run for the model. DataRobot automatically computes backtests for the highest scoring models; for lower-scoring models, you must select **Run** from the Leaderboard to initiate backtesting:

For comparison, the following illustrates when backtests have not been run and when they have:

When calculations are complete, you must then run **Feature Effects** calculations for each backtest you want to display, as well as for the Holdout fold, if applicable. From the dropdown, click a backtest that is not yet computed and DataRobot provides a button to initiate calculations.
#### Set the partition fold {: #set-the-partition-fold }
Once backtest calculations are complete for your needs, use the **Data Selection** control to choose the backtest and partition for display. The available partition folds are dependent on the backtest:
Options are:
* For numbered backtests: Validation and Training for each calculated backtest
* For the Holdout Fold: Holdout and Training
Click the down arrow to open the dialog and select a partition:

Or, click the right and left arrows to move through the options for the currently selected partition—Validation or Training—plus Holdout. If you move to an option that has yet to be computed, DataRobot provides a button to initiate the calculation:

#### Interpret days as numerics {: #interpret-days-as-numerics }
When interpreting the results of a Feature Effects chart within a time series project, the derived `Datetime (Day of Week) (actual)` feature correlates a day to a numeric. Specifically, Monday is always `0` in a Day of Week feature (Tuesday is `1`, etc.). DataRobot uses the Python [time access and conversion module](https://docs.python.org/3/library/time.html){ target=_blank } (`tm_wday`) for this time-related function.
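The day-to-number mapping can be verified directly with the Python `time` module that DataRobot references:

```python
import time

day_names = ["Monday", "Tuesday", "Wednesday", "Thursday",
             "Friday", "Saturday", "Sunday"]

# 2024-01-01 was a Monday; tm_wday maps Monday to 0.
t = time.strptime("2024-01-01", "%Y-%m-%d")
print(t.tm_wday, day_names[t.tm_wday])  # 0 Monday
```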
### Binning and top values {: #binning-and-top-values }
By default, DataRobot calculates the top features listed in **Feature Effects** using the training dataset. For categorical feature values, displayed as discrete points on the X-axis, the segmentation is affected if you select a different data source. To understand the segmentation, consider the illustration below and the table describing the segments:

| As illustrated in chart | Label in chart | Description |
|-------------------------|------------------|---------------|
| Top-*N* values | <*feature_value*\> | Values for the selected feature, with a maximum of 20 values. For any feature with more than 10 values, DataRobot further filters the results, as described in the example below. |
| Other values | `==All Other==` | A single bin containing all values other than the Top-*N* most frequent values. |
| Missing values | `==Missing==` | A single bin containing all records with missing feature values (that is, NaN as the value of one of the features). |
| Unseen values | <*feature_value*\> `(Unseen)` | Categorical feature values that were not "seen" in the Training set but qualified as Top-*N* in Validation and/or Holdout. |
| Unseen values | `==Other Unseen==` | Categorical feature values that were not "seen" in the Training set and did not qualify as Top-*N* in Validation and/or Holdout. |
A simple example to explain Top-*N*:
Consider a dataset with categorical feature `Population` and a world population of 100. DataRobot calculates Top-*N* as follows:
1. Ranks countries by their population.
2. Selects up to the top-20 countries with the highest population.
3. In cases with more than 10 values, DataRobot further filters the results so that cumulative frequency is >95%. In other words, DataRobot displays on the X-axis the countries whose cumulative population reaches 95% of the world population.
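The ranking-and-filtering rule above can be sketched as follows. This is an interpretation of the described behavior, not DataRobot code; the `top_n_values` helper and its parameter names are hypothetical:

```python
def top_n_values(counts, max_values=20, frequency_cutoff=0.95):
    """Pick X-axis values for a categorical feature: rank by frequency,
    keep at most `max_values`, and when more than 10 survive, trim to the
    shortest prefix whose cumulative frequency exceeds the cutoff."""
    ranked = sorted(counts, key=counts.get, reverse=True)[:max_values]
    if len(ranked) > 10:
        total = sum(counts.values())
        cumulative, kept = 0.0, []
        for value in ranked:
            kept.append(value)
            cumulative += counts[value] / total
            if cumulative > frequency_cutoff:
                break
        ranked = kept
    return ranked

# World population of 100: four large countries plus eight tiny ones.
counts = {"A": 50, "B": 25, "C": 15, "D": 6}
counts.update({chr(ord("E") + i): 0.5 for i in range(8)})
print(top_n_values(counts))  # ['A', 'B', 'C', 'D']
```

Here the cumulative population reaches 96% after the fourth country, so the eight tiny countries fall into the `==All Other==` bin.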
A simple example to explain *Unseen*:
Consider a dataset with the categorical feature `Letters`. The complete list of values for `Letters` is A, B, C, D, E, F, G, H. After filtering, DataRobot determines that Top-*N* equals three values. Note that, because the feature is categorical, there is no `==Missing==` bin.
| Fold/set | Values found | Top-3 values | X-axis values |
|----------------|--------------|--------------|----------------|
| Training set | A, B, C, D | A, B, C | A, B, C, `==All Other==` |
| Validation set | B, C, F, G+ | B, C, F* | B, C, F (Unseen), `==All Other==`, `==Other Unseen==`+ |
| Holdout set | C, E, F, H+ | C, E*, F* | C, E (Unseen), F (Unseen), `==All Other==`, `==Other Unseen==`+ |
<sup>*</sup> A new value in the top 3 but not present in the Training set, flagged as `Unseen`
<sup>+</sup> A new value not present in Training or in top-3, flagged as `Other Unseen`
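The labeling logic in the table above can be sketched as a small function. This is an illustration of the described rules, not DataRobot code; the `bin_labels` helper is hypothetical:

```python
def bin_labels(fold_top_n, fold_values, training_values):
    """Assign X-axis bin labels for a non-training fold: Top-N values
    absent from training are tagged "(Unseen)", remaining fold values
    missing from training fall into ==Other Unseen==, and everything
    else outside Top-N goes to ==All Other==."""
    labels = []
    for v in fold_top_n:
        labels.append(f"{v} (Unseen)" if v not in training_values else v)
    labels.append("==All Other==")
    if any(v not in training_values and v not in fold_top_n
           for v in fold_values):
        labels.append("==Other Unseen==")
    return labels

training = {"A", "B", "C", "D"}
# Validation row from the table: B, C in training; F unseen; G other unseen.
print(bin_labels(["B", "C", "F"], ["B", "C", "F", "G"], training))
# ['B', 'C', 'F (Unseen)', '==All Other==', '==Other Unseen==']
```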
### How Exposure changes output {: #how-exposure-changes-output }

If you used the [Exposure](additional#set-exposure) parameter when building models for the project, the **Feature Effects** tab displays the graph adjusted to exposure. In this case:
* The orange line depicts the <em>sum of the target divided by the sum of exposure</em> for a specific value. The label and tooltip display <em>Sum of Actual/Sum of Exposure</em>, which indicates that exposure was used during model building.
* The blue line depicts the <em>sum of predictions divided by the sum of exposure</em> and the legend label displays <em>Sum of Predicted/Sum of Exposure</em>.
* The marginal effect depicted in the yellow <em>partial dependence</em> is divided by the sum of exposure of the 1000-row sample. This adjustment is useful in insurance, for example, to understand the relationship between annualized cost of a policy and the predictors. The label tooltip displays <em>Average partial dependency adjusted by exposure</em>.
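Numerically, the exposure adjustment for the actual and predicted lines amounts to dividing sums by the summed exposure rather than taking simple averages. A minimal sketch (the `exposure_adjusted` helper is hypothetical):

```python
import numpy as np

def exposure_adjusted(actuals, predictions, exposure):
    """Per-bin values plotted when Exposure is set:
    Sum of Actual/Sum of Exposure and Sum of Predicted/Sum of Exposure."""
    actuals, predictions, exposure = map(np.asarray,
                                         (actuals, predictions, exposure))
    return actuals.sum() / exposure.sum(), predictions.sum() / exposure.sum()

# Two policies, each observed for two years of exposure.
print(exposure_adjusted([10, 20], [12, 18], [2, 2]))  # (7.5, 7.5)
```

Dividing by exposure converts the totals into per-unit-of-exposure (for example, annualized) quantities, which is why this view is common in insurance use cases.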
### How Weight changes output {: #how-weight-changes-output }
If you set the **Weight** parameter for the project, DataRobot weights the average and sum operations as described above.
|
feature-effects
|
---
title: Understand tabs
description: The Understand tabs—Feature Effects, Feature Impact, Prediction Explanations, and Word Cloud—explain what drives a model’s predictions.
---
# Understand {: #understand }
The **Understand** tabs explain what drives a model’s predictions:
Leaderboard tabs | Description | Source
------------------|-------------|------------
[Feature Impact](feature-impact) | Provides a high-level visualization that identifies which features are most strongly driving model decisions. | Training data
[Feature Effects](feature-effects) | Visualizes the effect of changes in the value of each feature on the model’s predictions. | Training data prior to v5.0; Training, Validation, Holdout (selectable) in v5.0+
[Cluster Insights](cluster-insights) | Visualizes the groupings of data that result from modeling in [clustering](clustering) mode, an [unsupervised learning](glossary/index#unsupervised-learning) technique. | Training data
[Prediction Explanations](pred-explain/index) | Illustrates what drives predictions on a row-by-row basis using XEMP or SHAP methodology. | Low and high thresholds and baseline prediction based on Validation data (XEMP) or training data (SHAP)
[Word Cloud](analyze-insights#word-cloud-insights) | Displays the most relevant words and short phrases in word cloud format. | Training data
|
index
|
---
title: Feature Impact
description: Feature Impact shows, on demand, which features are driving model decisions the most. It is rendered using permutation, SHAP, or tree-based importance.
---
# Feature Impact {: #feature-impact }
!!! note
To retrieve the SHAP-based **Feature Impact** visualization, you must enable the [Include only models with SHAP value support](additional) advanced option prior to model building.
**Feature Impact** shows, at a high level, which features are driving model decisions the most. By understanding which features are important to model outcome, you can more easily validate if the model complies with business rules. **Feature Impact** also helps to improve the model by providing the ability to identify unimportant or redundant columns that can be dropped to improve model performance.
!!! note
Be aware that **Feature Impact** differs from the [feature importance](model-ref#importance-score) measure shown in the **Data** page. The green bars displayed in the Importance column of the **Data** page are a measure of how much a feature, by itself, is correlated with the target variable. By contrast, **Feature Impact** measures how important a feature is in the context of a model.
There are three methodologies available for rendering **Feature Impact**—permutation, SHAP, and tree-based importance. Because the same insight can return different results depending on the methodology, the results are not displayed next to each other. The sections [below](#feature-impact-calculations) describe the differences and how to compute each.

**Feature Impact**, which is available for all model types, is calculated using training data. It is an on-demand feature, meaning that you must initiate a calculation to see the results. Once you have had DataRobot compute the feature impact for a model, that information is saved with the project (you do not need to recalculate each time you re-open the project). It is also available for [multiclass models](#feature-impact-with-multiclass-models-permutation-only) and offers unique functionality.
## Shared permutation-based Feature Impact {: #shared-permutation-based-feature-impact }
The **Feature Impact** and **[Prediction Explanations](pred-explain/index)** tabs share computational results (Prediction Explanations rely on the impact computation). If you calculate impact for one, the results are also available to the other. In addition to the **Feature Impact** tab, you can initiate calculations from the [**Deploy**](deploy-model) and [**Feature Effects**](feature-effects) tabs. Also, DataRobot automatically runs permutation-based **Feature Impact** for the top-scoring Leaderboard model.
## Interpret and use Feature Impact {: #interpret-and-use-feature-impact }
Feature Impact shows, at a high level, which features are driving model decisions. By default, features are sorted from the most to the least important. The impact of the most important feature is always normalized to 1.
**Feature Impact** informs:
* Which features are the most important—is it demographic data, transaction data, or something else that is driving model results? Does it align with the knowledge of industry experts?
* Are there opportunities to improve the model? There might be features with negative impact scores or [redundant features](#remove-redundant-features-automl). [Dropping them](#create-a-new-feature-list) might increase model accuracy and speed. Some features may have unexpectedly low importance, which may be worth investigating. Is there a problem in the data? Were data types defined incorrectly?
Consider the following when evaluating **Feature Impact**:
* **Feature Impact** is calculated using a sample of the model's training data. Because sample size can affect results, you may want to [recompute the values](#change-sample-size) on a larger sample size.
* Occasionally, due to random noise in the data, there may be features that have negative feature impact scores. In extremely unbalanced data, they may be largely negative. Consider removing these features.
* The choice of project metric can have a significant effect on permutation-based **Feature Impact** results. Some metrics, such as AUC, are less sensitive to small changes in model output and may therefore be less suitable for assessing how changing features affects model accuracy.
* Under some conditions, **Feature Impact** results can vary due to the algorithm used for modeling. This can happen, for example, in the case of multicollinearity: for algorithms using an L1 penalty—such as some linear models—impact is concentrated in one signal only, while for trees, impact is spread uniformly over the correlated signals.
## Feature Impact methodologies {: #feature-impact-methodologies }
There are three methodologies available for computing **Feature Impact** in DataRobot—permutation, SHAP, and tree-based importance.
* [_Permutation-based_](#permutation-based-feature-impact) shows how much the error of a model would increase, based on a sample of the training data, if values in the column are shuffled.
* [_SHAP-based_](#shap-based-feature-impact) shows how much, on average, each feature affects training data prediction values. For supervised projects, SHAP is available for AutoML projects only. See also the [SHAP reference](shap) and [SHAP considerations](pred-explain/index#feature-considerations).
* [_Tree-based_](#tree-based-variable-importance) variable importance uses node impurity measures (gini, entropy) to show how much gain each feature adds to the model.
Overall, DataRobot recommends using either permutation-based or SHAP-based **Feature Impact** as they show results for original features and methods are model agnostic.
Some notable differences between methodologies:
* Permutation-based impact offers a model-agnostic approach that works for all modeling techniques. Tree-based importance only works for tree-based models, SHAP only returns results for models that support SHAP.
* SHAP Feature Impact is faster and more robust on a smaller sample size than permutation-based Feature Impact.
* Both SHAP- and permutation-based **Feature Impact** show importance for original features, while tree-based impact shows importance for features that have been derived during modeling.
DataRobot uses permutation by default, unless you:
* Set the mode to SHAP in the [**Advanced options**](additional) link before starting a project.
* Are creating a project type that is unsupervised anomaly detection.
### Feature Impact for unsupervised projects {: #feature-impact-for-unsupervised-projects }
**Feature Impact** for [anomaly detection](anomaly-detection) is calculated by aggregating [SHAP values](shap) (for both AutoML and time series projects). This technique is used instead of permutation-based calculations because the latter requires a target column to calculate metrics. With SHAP, an approximation is computed for each row out-of-sample, and the values are then averaged per column. The sample is taken uniformly across the training data.
## Generate the Feature Impact chart {: #generate-the-feature-impact-chart }
!!! note
Time series models have [additional settings](#feature-impact-with-time-series-permutation-only) available.
For permutation- and SHAP-based **Feature Impact**:
1. Select the **Understand > Feature Impact** tab for a model.

2. Optionally, select whether to use [quick-compute](#quick-compute).
3. Click **Compute Feature Impact**. DataRobot displays the status of the computation in the right-pane, on the [**Worker Usage**](worker-queue#view-progress) panel. In addition, the **Compute** box is replaced with a status indicator reporting the percentage of completed features.
4. When DataRobot completes its calculations, the **Feature Impact** graph displays a chart of up to 25 of the model's most important features, ranked by importance. The chart lists feature names on the Y-axis and predictive importance (Effect) on the X-axis. It also indicates the number of rows used in the calculation.

DataRobot may report [redundant features](#remove-redundant-features-automl) in the output (indicated by an icon ). You can use the redundancy information to easily create special feature lists that remove those features.

5. By default, the chart displays features based on impact (importance), but you can also sort alphabetically. Click on the **Sort by** dropdown and select **Feature Name**.
6. Optionally, create or select a different [data slice](sliced-insights#recompute-feature-impact) to view a subpopulation of the data.
7. Optionally, click the [**Export**](export-results) button to download a CSV file containing up to 1000 of the model's most important features.
Tree-based variable importance information is available from the [Insights > Tree-based Variable Importance](analyze-insights#variable-importance) tab.
## Quick-compute {: #quick-compute }
When working with **Feature Impact**, the **Use quick-compute** option controls the sample size used in the visualization. The row count used to build the visualization is based on the toggle setting and whether a [data slice](sliced-insights) is applied.
When a data slice is applied:
* If on, DataRobot uses 2500 rows or the number of rows available after a slice is applied, whichever is smaller.
* If off, DataRobot uses 100,000 rows or the number of rows available after a slice is applied, whichever is smaller.
For unsliced **Feature Impact**, the quick-compute toggle replaces the **Adjust sample size** option:
* If on, DataRobot uses 2500 rows or the number of rows in the model training sample size, whichever is smaller.
* If off, DataRobot uses 100,000 rows or the number of rows in the model training sample size, whichever is smaller.
You may want to turn quick-compute off, for example, to compute **Feature Impact** at a sample size higher than the default 2500 rows (or fewer, if downsampled) in order to get more accurate and stable results.
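The row-count rules above reduce to taking the smaller of a cap and the available rows. A minimal sketch (the `feature_impact_rows` helper is hypothetical):

```python
def feature_impact_rows(available_rows, quick_compute=True):
    """Rows used for the Feature Impact sample. `available_rows` is the
    model's training sample size or, when a data slice is applied, the
    row count remaining after slicing."""
    cap = 2500 if quick_compute else 100_000
    return min(cap, available_rows)

print(feature_impact_rows(10_000))                       # 2500
print(feature_impact_rows(10_000, quick_compute=False))  # 10000
print(feature_impact_rows(500))                          # 500
```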
!!! note
When you run Feature Effects _before_ Feature Impact, DataRobot initiates the **Feature Impact** calculation first. In that case, the quick-compute option is available on the **Feature Effects** screen and sets the basis of the **Feature Impact** calculation.
## Create a new feature list {: #create-a-new-feature-list }
Once you have computed feature impact for a model, you may want to create one or more feature lists based on the top feature importances for that model or, for permutation-based projects, with [redundant features](#remove-redundant-features-automl) removed. (There is more information on feature lists [here](feature-lists).) You can then re-run the model using the new feature list, potentially creating even more accurate results. Note also that if the smaller list does not improve model performance, it is still valuable since models with fewer features run faster. To create a new feature list from the **Feature Impact** page:
1. After DataRobot completes the feature impact computation, click **Create Feature List**.

2. Enter the number of features to include in your list. These are the top _X_ features for impact (regardless of whether they are sorted alphabetically). You can select more than the 30 features displayed. To view more than the top 30 features, export a CSV and determine the number of features you want from that file.
3. Optionally, check **Exclude redundant features** to build a list with [redundant features](#remove-redundant-features-automl) removed. These are the features marked with the redundancy () icon.
4. After you complete the fields, click **Create feature list** to create the list. When you create the new feature list, it becomes available to the project in all feature list dropdowns and can be viewed in the [Feature List](feature-lists) tab of the **Data** page.
## Remove redundant features (AutoML) {: #remove-redundant-features-automl }
When you run permutation-based **Feature Impact** for a model, DataRobot evaluates a subset of training rows (2500 by default, or up to 100,000 by request), calculating their impact on the target. If two features change predictions in a similar way, DataRobot recognizes them as correlated and identifies the feature with lower feature impact as redundant (). Note that because model type and sample size have an effect on feature impact scores, redundant feature identification differs across models and sample sizes.
Once redundant features are identified, you can create a new feature list that excludes them, and optionally, that includes user-specified top-_N_ features. When you choose to exclude redundant features, DataRobot recalculates feature impact, which may result in different feature ranking, and therefore a different order of top features. Note that the new ranking does not update the chart display.
## Feature Impact with time series (permutation only) {: #feature-impact-with-time-series-permutation-only }
!!! note
Data slices are available as a [Public Preview](release/index#slices-for-time-aware-projects-classic) feature for OTV and time series projects.
For [time series models](time/index), you have an option to see results for original or derived features. When viewing original features, the chart shows all features derived from the original parent feature as a single entry. Hovering on a feature displays a tooltip showing the aggregated impact of the original and derived features (the sum of derived feature impacts).

Additionally, you can rescale the plot (ON by default), which will zoom in to show lower impact results, from the **Settings** link. This is useful in cases where the top feature has a significantly higher impact than other features, preventing the plot from displaying values for the lesser features.

Note that the **Settings** link is only available if scaling is available. The link is hidden or shown based on the ratio of **Feature Impact** values (whether they are high enough to need scaling). Specifically, it is only shown if `highest_score / second_highest_score > 3`.
### Remove redundant features (time series) {: #remove-redundant-features-time-series }
The **Exclude redundant features** option for time series works similarly to the [AutoML](#remove-redundant-features-automl) option, but applies it to date/time partitioned projects. For time series, the new feature list can be built from the derived features (the modeling dataset) and **Feature Impact** can then be recalculated to help improve modeling by using a selected set of impactful features.
## Feature Impact with multiclass models (permutation only) {: #feature-impact-with-multiclass-models-permutation-only }
For multiclass models, you can calculate **Feature Impact** to find out how important a feature is not only for the model in general, but also for each individual class. This is useful for determining how features impact training on a per-class basis.
After calculating **Feature Impact**, an additional **Select Class** dropdown appears with the chart.

The **Aggregation** option displays the **Feature Impact** chart like any other model; it displays up to 25 of the model's most important features, listed most important to least. Select an individual class to see its individual **Feature Impact** scores on a new chart.

Click the [**Export**](export-results) button to download an image of the chart and a CSV file containing the most important features of the aggregation or an individual class. You can download a ZIP file that instead contains the **Feature Impact** scores and charts for every class and the aggregation.
## Feature Impact calculations {: #feature-impact-calculations }
This section contains technical details on computation for each of the three available methodologies:
* Permutation-based **Feature Impact**
* SHAP-based **Feature Impact**
* Tree-based variable importance
### Permutation-based Feature Impact {: #permutation-based-feature-impact }
Permutation-based **Feature Impact** measures a drop in model accuracy when feature values are shuffled. To compute values, DataRobot:
1. Makes predictions on a sample of training records—2500 rows by default, maximum 100,000 rows.
2. Alters the training data (shuffles values in a column).
3. Makes predictions on the new (shuffled) training data and computes a drop in accuracy that resulted from shuffling.
4. Computes the average drop.
5. Repeats steps 2-4 for each feature.
6. Normalizes the results (i.e., the top feature has an impact of 100%).
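The steps above can be sketched as follows. This is a simplified single-pass illustration of the permutation technique, not DataRobot's implementation; the `permutation_impact` helper, toy model, and metric are hypothetical:

```python
import numpy as np

def permutation_impact(model_predict, X, y, metric, seed=0):
    """Permutation-based impact: the increase in error when each feature's
    column is shuffled, normalized so the top feature scores 1.0."""
    rng = np.random.default_rng(seed)
    baseline = metric(y, model_predict(X))
    drops = np.empty(X.shape[1])
    for j in range(X.shape[1]):
        X_shuffled = X.copy()
        rng.shuffle(X_shuffled[:, j])  # break the feature-target link
        drops[j] = metric(y, model_predict(X_shuffled)) - baseline
    return drops / drops.max()

# Toy model where x0 matters far more than x1.
predict = lambda X: 3 * X[:, 0] + 0.5 * X[:, 1]
mse = lambda y, p: np.mean((y - p) ** 2)
rng = np.random.default_rng(1)
X = rng.normal(size=(2500, 2))  # 2500 rows, matching the default sample
y = predict(X)
impact = permutation_impact(predict, X, y, mse)
```

Shuffling `x0` degrades accuracy much more than shuffling `x1`, so `x0` receives the normalized impact of 1.0.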
The sampling process corresponds to one of the following criteria:
* For balanced data, random sampling is used.
* For imbalanced binary data, smart downsampling is used; DataRobot attempts to make the distribution for imbalanced binary targets closer to 50/50 and adjusts the sample weights used for scoring.
* For zero-inflated regression data, smart downsampling is used; DataRobot groups the non-zero elements into the minority class.
* For imbalanced multiclass data, random sampling is used.
### SHAP-based Feature Impact {: #shap-based-feature-impact }
SHAP-based **Feature Impact** measures how much, on average, each feature affects training data prediction value. To compute values, DataRobot:
1. Takes a sample of records from the training data (5000 rows by default, with a maximum of 100,000 rows).
2. Computes SHAP values for each record in the sample, generating the local importance of each feature in each record.
3. Computes global importance by taking the average of `abs(SHAP values)` for each feature in the sample.
4. Normalizes the results (i.e., the top feature has an impact of 100%).
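Steps 3 and 4 of the aggregation can be sketched as follows, assuming per-row SHAP values have already been computed (the `shap_impact` helper is hypothetical):

```python
import numpy as np

def shap_impact(shap_values):
    """Global importance from a (rows x features) matrix of SHAP values:
    mean absolute value per feature, normalized so the top feature is 1.0."""
    global_importance = np.abs(np.asarray(shap_values)).mean(axis=0)
    return global_importance / global_importance.max()

# Two sample rows, two features: mean |SHAP| is [2, 3], normalized to [2/3, 1].
print(shap_impact([[1.0, -4.0], [3.0, 2.0]]))
```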
### Tree-based variable importance {: #tree-based-variable-importance }
[Tree-based variable importance](analyze-insights#tree-based-variable-importance) uses node impurity measures (gini, entropy) to show how much gain each feature adds to the model.
|
feature-impact
|
---
title: Cluster Insights
description: Learn how the Cluster Insights visualization helps you to understand the natural groupings in your data.
---
# Cluster Insights {: #cluster-insights }
With the Cluster Insights visualization, you can understand and name each cluster in a dataset. Use clustering to capture a latent feature in your data, to surface and communicate actionable insights quickly, or to identify segments in the data for further modeling.
!!! note
The maximum number of features computed for Cluster Insights is 100. The features are selected from the features used to train the model, based on the [Feature Impact](feature-impact) (high to low). The remaining features (those not used to train the model) are sorted alphabetically.
To analyze the clusters in your data:
1. Build a [clustering model](clustering#build-a-clustering-model) and expand the model you want to investigate.
2. Select **Understand > Cluster Insights**.

The following table describes the Cluster Insights visualization.
| | Element | Description |
|----|----|---|
|  | Select clusters | Click to [select clusters](#add-or-remove-clusters-from-the-display) to view or remove from view.
|  | Rename clusters | [Name clusters](#name-clusters) after you gain an understanding of what they represent. |
|  | Feature List | By default, DataRobot builds clustering models using the Informative Features list, although you can select another feature list to compare other features. Analyzing features not used to generate the clusters can still be useful, for example, to answer questions like "How does income distribute among my clusters, even if I'm not using it for clustering?" |
|  | Download CSV | Click to download the cluster insights. The CSV contains the information displayed in the Cluster Insights visualization, and more detailed feature data.
|  | Feature page control | Page through to [view more features](#view-features).
|  | Clusters | Clusters display in columns of features (four features display by default). Cluster sizes are shown above (in percentages). Clusters are sorted by size from largest to smallest. |
|  | Cluster arrow | Click to view more clusters. The rightmost cluster contains 100% as a baseline comparison. |
|  | Features | Features are sorted by feature importance. The Informative Feature list displays by default, but you can select another feature list. |
3. Evaluate the distribution of descriptive features across clusters and [the feature values in each cluster](#investigate-cluster-features).
## View features {: #view-features }
The features display in order of Feature Impact (most important to least).
To page through the features, click the right arrow above the clusters:

The display defaults to four features but you can view 10 features at a time by clicking the feature page control and selecting 10:

## Name clusters {: #name-clusters }
Once you get a sense of what your clusters represent, you can name them. Take a look at the data for obvious similarities and then name the cluster accordingly. The cluster names propagate to other insights and predictions, allowing you to further analyze the clusters.
1. Click **Rename clusters** and enter names for each cluster.

2. Click **Finish editing** and click **Proceed**.

## Add or remove clusters from the display {: #add-or-remove-clusters-from-the-display }
1. Click **Select clusters** to choose clusters to view or delete.

2. Click the down arrow to select a new cluster.

3. Click **+ Add cluster** to display additional clusters.
4. Click the trash can icon to remove a cluster from the display.
## Download cluster insights {: #download-cluster-insights }
You can download the cluster insights as a CSV file for further analysis by clicking **Download CSV** above the clusters.

## Investigate cluster features {: #investigate-cluster-features }
The following sections show the visualization tools used to investigate cluster features. The sample dataset contains features representing housing data:

This dataset *could* be run in supervised mode with `price` as the target feature, but for clustering mode, no target is specified.
The dataset contains the following feature types:
* [Numeric](#numeric-features) (`price`, `sq_ft`, etc.)
* [Categorical](#categorical-features) (`cooling`, `roof_type`, etc.)
* [Text](#text-features) (`amenities`)
* [Image](#image-features) (`exterior_image`)
* [Geospatial](#geospatial-location-features) (`zip_geometry`)
### Numeric features {: #numeric-features }
To view numeric features in Cluster Insights:
1. Locate a numeric feature on the **Cluster Insights** tab.

2. Click near the feature name to expand. Hover over the blue bar for each cluster to view the maximum, median, average, and minimum values, the percentage of missing values, and the 1st and 3rd quartiles.

### Categorical features {: #categorical-features }
To view categorical features in Cluster Insights:
1. Locate a categorical feature on the **Cluster Insights** tab.

Low-frequency labels for the feature are grouped in the Other category. For example, if only a small number of houses in the dataset have `floor_type` engineered wood, houses with engineered wood would be grouped into the Other category for the `floor_type` feature.
2. Hover over the bar for each cluster to see the breakdown within a cluster.

3. Click near the feature name to expand. This allows you to see more categories.
![](images/cluster-insights-12.png)
4. To drill into the categories, click the gear icon next to the feature name and select **High cardinality view**. Hover to see the percentage of records that have each value.

### Text features {: #text-features }
For text features, Cluster Insights shows [n-grams](glossary/index#n-gram) ranked by importance (highest to lowest). These are displayed as blue bars that represent the relative importance. To see the actual importance value, [download the CSV](#download-cluster-insights).
??? tip "Deep dive: Calculating importance scores"
Importance scores are an estimation, computed using an adaptation of the TF-IDF method. The basis of the methodology is:
1. N-grams that are common in every cluster will have lower importance.
2. N-grams that are common in a specific cluster, but missing or rare in other clusters, will have higher importance for the specific cluster.
3. The importance score is robust to clusters with different numbers of rows.
4. N-grams that frequently occur only in a single example in a cluster will not skew the importance higher.
Specifically, the estimation method used to compute the importance works as follows. Consider, for example:
**How frequent is n-gram _j_ in cluster _i_?**
`frequency_j_i = (num of docs containing n-gram j in cluster i ) / (num of docs in cluster i)`
Now, **How frequent is n-gram _j_ in another average cluster _k_?**
`frequency_j_not_i = [(num of docs containing n-gram j not in cluster i) + 1] / [(num of docs not in cluster i) * (num of clusters - 1)]`
Finally:
`importance of n-gram _j_ in cluster _i_ = frequency_j_i / frequency_j_not_i`
In the CSV download, the values associated with the text feature column will show the entire list of n-grams that exist in the dataset. If the n-gram exists for a cluster, it will contain an importance value; if it doesn't exist in the cluster, the importance field will be blank.
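As an illustrative sketch (with made-up counts, not DataRobot's actual implementation), the estimate above can be expressed as:

```python
# Hypothetical sketch of the n-gram importance estimate described above.
# All counts are illustrative, not taken from a real project.
def ngram_importance(docs_with_ngram_in_i, docs_in_i,
                     docs_with_ngram_not_i, docs_not_i, num_clusters):
    # How frequent is n-gram j in cluster i?
    freq_j_i = docs_with_ngram_in_i / docs_in_i
    # How frequent is n-gram j in an average "other" cluster?
    # (The +1 keeps the denominator from rewarding n-grams absent elsewhere
    # due only to a single stray document.)
    freq_j_not_i = (docs_with_ngram_not_i + 1) / (docs_not_i * (num_clusters - 1))
    return freq_j_i / freq_j_not_i

# An n-gram common in cluster i but rare elsewhere scores high;
# one common everywhere scores low:
print(ngram_importance(40, 50, 4, 150, 4))    # frequent here, rare elsewhere
print(ngram_importance(40, 50, 120, 150, 4))  # frequent everywhere -> lower
```

Note how the same in-cluster frequency (40 of 50 documents) yields a much lower score when the n-gram is also common outside the cluster.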
1. Locate a text feature on the **Cluster Insights** tab. Click near the feature name to expand.

!!! note
Missing values are imputed as blanks; `blank` is included as an n-gram if the missing values are scored as important.
2. Hover over an n-gram in a cluster to view sample strings that contain the word.

3. Click **See more context examples** to drill down.

The Context window displays ten random excerpts that contain the n-gram.
### Image features {: #image-features }
For image features, Cluster Insights displays sample images from each cluster. DataRobot uses the [Maximal Marginal Relevance](https://www.cs.cmu.edu/~jgc/publication/The_Use_MMR_Diversity_Based_LTMIR_1998.pdf){ target=_blank } criterion to choose images that are representative of the cluster, but also diverse within the cluster (so not all from the [centroid](glossary/index#centroid) of the cluster).
1. Locate an image feature on the **Cluster Insights** tab. By default, four images are displayed. Click near the feature name to show 10 images.

2. Hover over an image to zoom in.
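The Maximal Marginal Relevance criterion described above can be sketched as follows. This is a minimal, hypothetical illustration (the embeddings and weighting are made up), not DataRobot's implementation:

```python
import math

# Sketch of the Maximal Marginal Relevance idea: favor items close to the
# cluster centroid (representative) but far from items already chosen
# (diverse). The 2-D "embeddings" below are made up.
def mmr_select(embeddings, centroid, k, lam=0.3):
    def sim(a, b):
        return -math.dist(a, b)  # negative distance as a similarity

    chosen, remaining = [], list(range(len(embeddings)))
    while remaining and len(chosen) < k:
        def score(i):
            relevance = sim(embeddings[i], centroid)
            if not chosen:  # first pick: purely representative
                return relevance
            redundancy = max(sim(embeddings[i], embeddings[j]) for j in chosen)
            return lam * relevance - (1 - lam) * redundancy

        best = max(remaining, key=score)
        chosen.append(best)
        remaining.remove(best)
    return chosen

points = [(0, 0), (0.1, 0), (5, 5), (0.2, 0.1)]
print(mmr_select(points, centroid=(0, 0), k=2))  # → [0, 2]
```

The first pick is the point nearest the centroid; the second pick skips the near-duplicates and selects the distant point, which is why the sampled images are representative without all coming from the centroid.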
### Geospatial location features {: #geospatial-location-features }
To see a map of a geospatial location feature:
1. Locate a geospatial feature on the **Cluster Insights** tab.

DataRobot uses the [Maximal Marginal Relevance](https://www.cs.cmu.edu/~jgc/publication/The_Use_MMR_Diversity_Based_LTMIR_1998.pdf){ target=_blank } criterion to select representative points from the geospatial data.
!!! tip
DataRobot derives numeric features (such as area and coordinates) from geospatial features. Often the derived features appear in the **Informative Features** list, while the original geospatial feature does not. To view the geospatial map of the original geospatial feature, select **All Features** from the Feature List dropdown and locate the feature.
2. Click near the feature name to expand the map:

To view individual clusters, click the **Map legend** and click cluster names to hide clusters. The map visualization includes zoom buttons.
---
title: Insights
description: The Insights tab lets you view and analyze visualizations for your project, switching between models to make comparisons.
---
# Insights {: #insights }
The **Insights** tab lets you [view and analyze visualizations](#work-with-insights) for your project, switching between models to make comparisons.
The following table lists the insight visualizations, descriptions, and data sources used. Click the links for details on analyzing the visualizations. Note that availability of visualization tools is based on project type.
| Insight tiles | Description | Source |
|------------------|---------------|--------|
| [Activation Maps](#activation-maps) | Visualizes areas of images that a model is using when making predictions. | Training data |
| [Anomaly Detection](#anomaly-detection) | Provides a summary table of anomalous results sorted by score. | From training data, the most anomalous rows (those with the highest scores) |
| [Category Cloud](#category-clouds) | Visualizes relevancy of a collection of categories from summarized categorical features. | Training data |
| [Hotspots](#hotspots)| Indicates predictive performance. | Training data |
| [Image Embeddings](#image-embeddings) | Displays a projection of images onto a two-dimensional space defined by similarity. | Training data |
| [Text Mining](#text-mining) | Visualizes relevancy of words and short phrases. | Training data |
| [Tree-based Variable Importance](#tree-based-variable-importance) | Ranks the most important variables in a model. | Training data |
| [Variable Effects](#variable-effects) | Illustrates the magnitude and direction of a feature's effect on a model's predictions. | Validation data |
| [Word Cloud](#word-clouds) | Visualizes variable keyword relevancy. | Training data |
## Work with Insights {: #work-with-insights }
1. Navigate to **Models > Insights** and click an insight tile (see [Insight visualizations](#insights) for a complete list).

!!! note
The particular insights that display are dependent on the model type, which is in turn dependent on the project type. In other words, not every insight is available for every project.
2. From the **Model** dropdown menu on the right, select from the list of models.

!!! note
The models listed depend on the insight selected. In the example shown, the Tree-based Variable Importance insight is selected, so the **Model** list contains tree and forest models.
3. Select from other applicable insights using the **Insight** dropdown menu.

4. Click **Export** on the bottom right to download visualizations, then select the type of download (CSV, PNG, or ZIP).

See [Export charts and data](export-results) for more details.
## Activation maps {: #activation-maps }

Click the **Activation Maps** tile on the **Insights** tab to see which image areas the model is using when making predictions—which parts of the images are driving the algorithm prediction decision.
An activation map can indicate whether your model is looking at the foreground or background of an image or whether it is focusing on the right areas. For example, is it looking only at “healthy” areas of a plant when there is disease and because it does not use the whole leaf, classifying it as "no disease"? Is there a problem with [overfitting](glossary/index#overfitting) or [target leakage](glossary/index#target-leakage)? These maps help to determine whether the model would be more effective if it were [tuned](vai-tuning).

{% include 'includes/activation-map-include.md' %}
## Anomaly detection {: #anomaly-detection }

Click the **Anomaly Detection** tile on the **Insights** tab to see an anomaly score for each row, helping to identify unusual patterns that do not conform to expected behavior. The Anomaly Detection insight provides a summary table of these results sorted by score.

The display lists up to the top 100 rows from your training dataset with the highest anomaly scores, with a maximum of 1000 columns and 200 characters per column. Click the **Anomaly Score** column header on the top left to sort the scores from low to high. The **Export** button lets you download the complete listing of anomaly scores.
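The table's ordering can be sketched as a simple sort-and-truncate (the row IDs and scores below are hypothetical):

```python
# Hypothetical sketch of the summary table ordering: rows sorted by
# anomaly score, highest first, truncated to the top 100.
rows = [{"row_id": i, "anomaly_score": s}
        for i, s in enumerate([0.12, 0.97, 0.55, 0.80, 0.03])]

top = sorted(rows, key=lambda r: r["anomaly_score"], reverse=True)[:100]
print([r["row_id"] for r in top])  # → [1, 3, 2, 0, 4]
```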
See also [anomaly score insights](anomaly-detection#anomaly-score-insights) and [time series anomaly visualizations](anom-viz).
## Category Cloud {: #category-clouds }

Click the **Category Cloud** tile on the **Insights** tab to investigate [summarized categorical](histogram#summarized-categorical-features) features. The **Category Cloud** displays as a [word cloud](#word-cloud) and shows the keys most relevant to their corresponding feature.

{% include 'includes/category-cloud-include.md' %}
## Hotspots {: #hotspots }

Click the **Hotspots** tile on the **Insights** tab to investigate "hot" and "cold" spots that represent simple rules with high predictive performance. These rules are good predictors for data and can easily be translated and implemented as business rules.
??? info "Hotspots availability"
Hotspot insights are available only when you have:
* A project with a RuleFit Classifier or RuleFit Regressor blueprint trained on the training dataset but not yet trained on the validation or holdout datasets
* At least one numeric or categorical column
* Fewer than 100K rows

| | Element | Description |
|---|---|---|
|  | Hotspots | DataRobot uses the rules created by the RuleFit model to produce the hotspots plot. <ul><li>The size of a spot indicates the number of observations that follow the rule.</li><li>The color of the rule indicates the relative difference between the average target value for the group defined by the rule and the overall population. The hotspots range from blue to red, with blue indicating a negative effect (“cold”) and red indicating a positive effect (“hot”). Rules with a larger positive or negative effect display in deeper shades of red or blue, and those with a smaller effect display in lighter shades.</li></ul>|
|  | Hotspot rule tooltip | Displays the rule corresponding to the spot. Hover over a spot to display the rule. The rule is also shown in the table below. |
|  | Filter | Allows you to display only hotspots or coldspots by selecting or clearing the **Hot** and **Cold** check boxes. |
|  | Export | Allows you export the hotspot table as a CSV. |
|  | Rule | Lists the rules created by the RuleFit model. Each rule corresponds to a hotspot. Click the header to sort the rules alphabetically. |
|  | Hot/Cold bar | Displays as red for hotspots and blue for a coldspots. The magnitude of the bar represents the strength of the effect (red for a negative effect and blue for a positive effect). Click the header to sort the rules based on the magnitude of hot/cold effects. |
|  | Mean Relative to Target (MRT) | The ratio of the average target value for the subgroup defined by the rule to the average target value of the overall population. High values of MRT—i.e., red dots or “hotspots”—indicate groups with higher target values, whereas low values of MRT (blue dots or “coldspots”) indicate groups with lower target values. Click the header to sort the rules based on mean relative target. |
|  | Mean Target | Mean target value for the subgroup defined by the rule. Click the header to sort the rules based on mean target. |
|  | Observations[%]| Percentage of observations that satisfy the rule, calculated using data from the validation partition. Click the header to sort the rules based on observation percentages. |
??? tip "Example of a hotspot rule"
This example illustrates the average of the subgroup divided by the overall average: if the average readmission rate across your dataset is 40%, but for people with 10+ inpatient procedures it is 80%, then MRT is 2.00. That *does not* mean that people with 10+ inpatient procedures are twice as likely to be readmitted. Instead, it tells you that this rule is twice as effective at capturing positive instances as guessing at random using the overall sample mean.
Rules also exist for categorical features. They will include `x <= 0.5` or `x > 0.5`, which represent `x=0` or “No” for a given category, or `x=1` or `Yes`, respectively.
For example, consider a dataset that looks at admitted hospital patients. The categorical feature `Medical Specialty` identifies the specialty of the physician that attends to a patient (cardiology, surgery, etc.). This feature is included in the rule `MEDICAL_SPECIALTY-Surgery-General <= 0.5`. This rule captures all the rows in the dataset where the medical specialty of the attending physician is *not* “Surgery General”.
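The MRT arithmetic from the readmission example can be sketched as (the 40% and 80% rates are the example's illustrative values, not real data):

```python
# Mean Relative to Target (MRT): subgroup mean divided by overall mean.
# Rates below are the hypothetical values from the example above.
overall_readmission_rate = 0.40  # average across the whole dataset
subgroup_rate = 0.80             # rows matching the rule (10+ inpatient procedures)

mrt = subgroup_rate / overall_readmission_rate
print(mrt)  # → 2.0
```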
## Image Embeddings {: #image-embeddings }

Click the **Image Embeddings** tile on the **Insights** tab to view up to 100 images from the validation set projected onto a [two-dimensional plane](vai-ref#image-embeddings) (using a technique that preserves similarity among images). This visualization answers the questions: What does the featurizer consider to be similar? Does this match human intuition? Is the featurizer missing something obvious?

See the full description of the **Image Embeddings** insight in the section on [Visual AI model insights](vai-insights#image-embeddings).
## Text-based insights {: #text-based-insights }
Text variables often contain words that are highly indicative of the response. To help assess variable keyword relevancy, DataRobot provides the following text-based insights:
* [**Text Mining**](#text-mining)
* [**Word Cloud**](#word-clouds)
??? info "Text-based insight availability"
If you expected to see one of these text insights but do not, view the [**Log**](log) tab for error messages that help explain why the models may be missing.
One common reason that text models are not built is that DataRobot removes single-character "words" during model building. It does this because such words are typically uninformative (e.g., "a" or "I"). A side effect of this removal is that single-digit numbers are also removed. In other words, DataRobot removes "1" or "2" as well as "a" or "I". This is common practice in text mining (for example, the [Sklearn Tfidf Vectorizer](http://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.TfidfVectorizer#sklearn.feature_extraction.text.TfidfVectorizer){ target=_blank } selects tokens of 2 or more alphanumeric characters).
This can be an issue if you have encoded words as numbers (which some organizations do to anonymize data). For example, if you use "1 2 3" instead of "john jacob schmidt" and "1 4 3" instead of "john jingleheimer schmidt," DataRobot removes the single digits; the texts become "" and "". If DataRobot cannot find <em>any</em> words for features of type text (because they are all single digits), it returns an error.
If you need a workaround to avoid the error, here are two solutions:
* Start numbering at 10 (e.g., "11 12 13" and "11 14 13")
* Add a single letter to each ID (e.g., "x1 x2 x3" and "x1 x4 x3").
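The token-removal behavior and the workarounds can be demonstrated with scikit-learn's documented default token pattern, which keeps only tokens of two or more word characters:

```python
import re

# scikit-learn's documented default TfidfVectorizer token pattern:
# tokens must have two or more word characters, so single characters
# (including single digits) are dropped.
TOKEN_PATTERN = r"(?u)\b\w\w+\b"

print(re.findall(TOKEN_PATTERN, "1 2 3"))     # → [] (every token removed)
print(re.findall(TOKEN_PATTERN, "11 12 13"))  # workaround: start numbering at 10
print(re.findall(TOKEN_PATTERN, "x1 x2 x3"))  # workaround: prefix a letter
```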
### Text Mining {: #text-mining }

The **Text Mining** chart displays the most relevant words and short phrases in any variables detected as text. Text strings with a positive effect display in red and those with a negative effect display in blue.

| | Element | Description |
|---|---|---|
|  | Sort by | Lets you sort values by impact (Feature Coefficients) or alphabetically (Feature Name). |
|  | Select Class | For multiclass projects, use the **Select Class** dropdown to choose a specific class for the text mining insights. |
The most important words and phrases are shown in the text mining chart, ranked by their coefficient value (which indicates how strongly the word or phrase is correlated with the target). This ranking enables you to compare the strength of the presence of these words and phrases. The side-by-side comparison allows you to see how individual words can be used in numerous—and sometimes counterintuitive—ways, with many different implications for the response.
### Word Cloud {: #word-clouds }

The **Word Cloud** insight displays up to the 200 most impactful words and short phrases in word cloud format.

{% include 'includes/word-cloud-include.md' %}
## Tree-based Variable Importance {: #tree-based-variable-importance }

The **Tree-Based Variable Importance** insight shows the sorted relative importance of all key variables driving a specific model.

This view accumulates all the Importance charts for models in the project to make it easier to compare these charts across models. Use the **Sort by** dropdown to list features by ranked importance or alphabetically.
??? info "Tree-based Variable Importance availability"
The chart is only available for tree/forest models (for example, Gradient Boosted Trees Classifier or Random Forest).
The chart shows the relative importance of all key features making up the model. The importance of each feature is calculated relative to the most important feature for predicting the target. To calculate, DataRobot sets the relative importance of the most important feature to 100%, and all other features are a percentage relative to the top feature.
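The scaling described above can be sketched as follows (the feature names and raw scores are hypothetical):

```python
# Hypothetical sketch of relative-importance scaling: the most important
# feature is set to 100%, and every other feature is expressed as a
# percentage of it. Raw scores below are made up.
raw_importances = {"credit_score": 0.42, "income": 0.21, "age": 0.063}

top = max(raw_importances.values())
relative = {f: round(100 * v / top, 1) for f, v in raw_importances.items()}
print(relative)  # → {'credit_score': 100.0, 'income': 50.0, 'age': 15.0}
```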
Consider the following when interpreting the chart:
* Relative importance can be very useful, especially when one feature appears significantly more important for predictions than all the others. It is usually worth checking whether the values of this very important feature depend directly on the response (a sign of target leakage); if they do, consider excluding the feature when training the model. Not all models have a Coefficients chart, so for some models the Importance graph is the only way to visualize a feature's impact on the model.
* If a feature is included in only one model out of the dozens that DataRobot builds, it may not be that important. Excluding it from the feature set can optimize model building and future predictions.
* It is useful to compare how feature importance changes for the same model with different feature lists. Sometimes the features recognized as important on a reduced dataset differ substantially from the features recognized on the full feature set.
## Variable Effects {: #variable-effects }

While **Tree-Based Variable Importance** tells you the relevancy of different variables to the model, the **Variable Effects** chart shows the impact of each variable in the prediction outcome.

Use this chart to compare the impact of a feature for different Constant Spline models. It is useful for ensuring that the relative rank of feature importance does not vary wildly across models. If in one model a feature is regarded as very important with a positive effect, and in another with a negative effect, it is worth double-checking both the dataset and the model.
With **Variable Effects**, you can:
* Click **Variable Effects** to display the relative rank of features.
* Use the **Sort by** dropdown to sort values by impact (Feature Coefficients) or alphabetically (Feature Name).
??? info "Variable Effects availability"
**Variable Effects** are only available for full Autopilot models built using Constant Splines during preprocessing. To see the impact of each variable in the prediction outcomes for other model types, use the [**Coefficients**](coefficients) tab.
---
title: Bias vs Accuracy
description: The Bias vs Accuracy chart compares predictive accuracy and fairness, removing the need to manually note each model's accuracy and fairness scores.
---
# Bias vs Accuracy {: #bias-vs-accuracy }
The **Bias vs Accuracy** chart shows the tradeoff between predictive accuracy and fairness, removing the need to manually note each model's accuracy score and fairness score for the protected features. Consider your use case when deciding if the model needs to be more accurate or more fair. The **Bias vs Accuracy** display is based on the validation score, using the currently selected metric.
- The Y-axis displays the validation score of each model. To change this metric, switch to the Leaderboard, change the metric via the **Metric** dropdown, and then return to **Bias vs Accuracy**.
- The X-axis displays the fairness score of each model, that is, the lowest relative fairness score for any class in the protected feature.

## Bias vs Accuracy chart {: #bias-vs-accuracy-chart }
Consider the following when evaluating the **Bias vs Accuracy** chart:
- You must calculate [**Per-Class Bias**](per-class) for a model before it can be displayed on the chart.
- **Protected Features**, **Fairness metric**, and **Fairness threshold** were defined in [advanced options](fairness-metrics) prior to model building.
- Use the **Feature List** dropdown to compare the models trained on different feature lists.
- The left side highlights models with fairness scores below the fairness threshold, and the right side highlights models with scores above the threshold.
- Hover on any point to view scores for a specific model.
- Some models may not report scores because there is not enough data (as indicated with a tooltip). Requirements are:
- More than 100 rows.
- If between 100 and 1,000 rows, more than 10% of the rows must belong to the majority class (the class with the most rows of data).
---
title: Learning Curves
description: Use the Learning Curve graph to help determine whether getting additional data would be worth the expense if it increases model accuracy.
---
# Learning Curves {: #learning-curves }
Use the **Learning Curve** graph to help determine whether it is worthwhile to increase the size of your dataset. Getting additional data can be expensive, but may be worthwhile if it increases model accuracy. The **Learning Curve** graph illustrates, for the top-performing models, how model performance varies as the sample size changes. It is based on the current metric set to sort the Leaderboard. (See below for information on how DataRobot [calculates model selection](#learning-curves-additional-info) for display.)
After you have started a model build, select **Learning Curves** to display the graph, which shows the stages (sample sizes) used by Autopilot. The display updates as models finish building, reprocessing the graphs with the new model score(s). The image below shows a graph for an AutoML project. For [time-aware models](#learning-curves-with-otv), you can set the view (OTV only) and you cannot modify [sample size](#compute-new-sample-sizes).

To see the actual values for a data point, mouse over the point on the graph's line or the color bar next to the model name to the right of the graph:

Not all models show three sample sizes in the Learning Curves graph. This is because, as DataRobot reruns data with a larger sample size, only the highest scoring models from the previous run progress to the next stage. Additionally, blenders are only run on the highest sample percent (which is determined by partitioning settings). Also, the number of points for a given model depends on the number of rows in your dataset. Small datasets ([AutoML](model-ref#small-datasets) and [time-aware](multistep-ta#small-datasets)) also impact the number of stages run and shown.
## Interpret Learning Curves {: #interpret-learning-curves }
Consider the following when evaluating the **Learning Curves** graph:
* You must unlock holdout to display Validation scores.
* Study the model for any sharp changes or performance decrease with increased sample size. If the dataset or the validation set is small, there may be significant variation due to the exact characteristics of the datasets.
* Model performance can decrease with increasing sample size, as models may become overly sensitive to particular characteristics of the training set.
* In general, high-bias models (such as linear models) may do better at small sample sizes, while more flexible, high-variance models often perform better at large sample sizes.
* Preprocessing variations can increase model flexibility.
## Compute new sample sizes {: #compute-new-sample-sizes }
You can compute the **Learning Curves** graph for several models in a single click across a set of different sample sizes. By default, the graph auto-populates with sample sizes that map to the stages that were part of your modeling mode selection—three points (full Autopilot and the model recommended and prepared for deployment).
!!! note "Learning Curves with Quick Autopilot"
Because Quick Autopilot uses one-stage training, the **Learning Curves** graph that was initially populated will show only a single point. To use the Quick run as the basis for this visualization, you can manually run models at various sample sizes or run a different Autopilot mode.
To compute and display additional sample sizes with the **Compute Learning Curves** option:

Adding sample sizes and clicking **Compute** causes DataRobot to recompute for the newly entered sizes. Computation is run for all models or, if you selected one or more models from the list on the right, only for the selected model(s). While each request is limited to five sample sizes, you can display any number of points on the graph (using multiple requests). The sample size values you add via **Compute Learning Curves** are only remembered and auto-populated for that session; they do not persist if you navigate away from the page. To view anything above 64%, you must first unlock holdout.
Some notes on adding new sample sizes:
* If you trained on a new sample size from the Leaderboard (by clicking the plus (**+**) sign), any atypical size (a size not available from the snap-to choices in the dialog to add a new model) does not automatically display on the **Learning Curves** graph, although you can add it from the graph.
* Initially, the sample size field populates with the default snap-to sizes (usually 16%, 32%, and 64%). Because the field only accepts five sizes per request, if you have more than two additional custom sizes you can delete the defaults if they are already plotted. (Their availability on the graph is dependent on the modeling mode you used to build the project.)
## Learning Curves with OTV {: #learning-curves-with-otv }
The **Learning Curves** graph is based on the mode used for selecting rows ([rows, duration, or project settings](ts-date-time#bt-force)) and the sampling method (random or latest). Because these settings can result in different time periods in the training data, there are two views available to make the visualization meaningful to the mode—**History view** (charts top models by duration) and **Data view** (charts top models based on number of rows).

Switch between the views to see the one appropriate for your data. A straight line in history view suggests models were trained on datasets with the same observable history. For example, in **Project Settings** mode, selecting 25%/50%/100% randomly results in a different number of rows but the same time period (in other words, different data density within the time period). Models that use a start/end duration are not included in the **Learning Curves** graph because such durations cannot be directly compared: although two models may cover the same length of time, it could come from the start or the end of the dataset, and applying that against a backtest does not produce comparable results.
## Learning Curves additional info {: #learning-curves-additional-info }
The **Learning Curves** graph uses log loss (logarithmic loss) to plot model accuracy—the lower the log loss, the higher the accuracy. The display plots, for the top 10 performing models, log loss for each size data run. The resulting curves help predict how well each model will perform for a given quantity of training data.
**Learning Curves** charts how well a *model group* performs when it is computed across multiple sample sizes. This grouping represents a line in the graph, with each dot on the line representing the sample size and score of an individual model in that group.
DataRobot groups models on the Leaderboard by the blueprint ID and Feature List. So, for example, every Regularized Logistic Regression model, built using the *Informative Features* feature list, is a single model group. A Regularized Logistic Regression model built using a different feature list is part of a different model group.
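The grouping just described can be sketched as a dictionary keyed on (blueprint ID, feature list); the model records below are hypothetical:

```python
from collections import defaultdict

# Sketch of the model grouping described above: each (blueprint ID,
# feature list) pair is one line on the graph, and each (sample_pct,
# score) pair is a dot on that line. Records are made up.
models = [
    {"blueprint_id": "RLR-1", "feature_list": "Informative Features",
     "sample_pct": 16, "score": 0.52},
    {"blueprint_id": "RLR-1", "feature_list": "Informative Features",
     "sample_pct": 32, "score": 0.49},
    {"blueprint_id": "RLR-1", "feature_list": "Reduced Features",
     "sample_pct": 16, "score": 0.55},
]

groups = defaultdict(list)
for m in models:
    groups[(m["blueprint_id"], m["feature_list"])].append(
        (m["sample_pct"], m["score"]))

print(len(groups))  # → 2 (same blueprint, different feature list = new group)
```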
By default, DataRobot displays:
* up to the top 10 grouped models. There may be fewer than 10 models if, for example, one or more of the models highly diverges from the top model. To preserve graph integrity, that divergent model is treated as a kind of outlier and is not plotted.
* any blender models with scores that fall within an automatically determined threshold (one that emphasizes important data points and graph legibility).
If the holdout is locked for your project, the display only includes data points computed based on the size of your training set. If the holdout is unlocked, data points are computed on training and validation data.
### Filtering display by feature list {: #filtering-display-by-feature-list }
The **Learning Curves** graph plots using the [Informative Features](feature-lists#automatically-created-feature-lists) feature list. You can filter the graph to show models for a specific feature list that you created (and ran models for) by using the **Feature List** dropdown menu. The menu lists all feature lists that belong to the project. If you have not run models on a feature list, the option is displayed but disabled.

When you select an alternate feature list, DataRobot displays, for the selected feature list:
* the top 10 non-blended models
* any blender models with scores that fall within an automatically determined threshold (that emphasizes important data points and graph legibility).
### How Model Comparison uses actual value {: #how-model-comparison-uses-actual-value }
What follows is a very simple example to illustrate the meaning of actual value on the **Model Comparison** page (Lift and Dual Lift Charts):
Imagine a simple dataset of 10 rows and the Lift Chart is displaying using 10 bins. The value of the target for rows 1 through 10 is:
0, 0, 0, 0, 0, 1, 1, 1, 1, 1
Model A is perfect and predicts:
0, 0, 0, 0, 0, 1, 1, 1, 1, 1
Model B is terrible and predicts:
1, 1, 1, 1, 1, 0, 0, 0, 0, 0
Now, because DataRobot sorts before binning, Model B sorts to:
0, 0, 0, 0, 0, 1, 1, 1, 1, 1
As a result, the bin 1 prediction is `0` for both models. Model A is perfect, so the bin 1 *actual* is also 0. With Model B, however, the bin 1 actual is `1`.
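A minimal sketch of the sort-then-bin step, using the 10-row example above. With 10 rows and 10 bins, each bin holds a single (prediction, actual) pair:

```python
# Sort rows by predicted value before binning, keeping each row's actual alongside it.
actuals = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
model_a = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]  # perfect model
model_b = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]  # terrible model

def bin_pairs(predictions, actuals):
    # Sorting (prediction, actual) pairs orders rows by prediction.
    return sorted(zip(predictions, actuals))

bins_a = bin_pairs(model_a, actuals)
bins_b = bin_pairs(model_b, actuals)

print(bins_a[0])  # bin 1 for Model A: prediction 0, actual 0
print(bins_b[0])  # bin 1 for Model B: prediction 0, actual 1
```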
|
learn-curve
|
---
title: Other Leaderboard tabs
description: In addition to the model-specific tabs available, additional insights are available on the Leaderboard level.
---
# Other {: #other }
In addition to the model-specific tabs, the following insights are available at the Leaderboard level:
Insight tab | Description
------------------|-------------
[Insights](analyze-insights) | Provides graphical representations of model details.
[Learning Curves](learn-curve) | Helps to determine whether it is worthwhile to increase dataset size.
[Speed vs Accuracy](speed) | Illustrates the tradeoff between runtime and predictive accuracy.
[Model Comparison](model-compare) | Compares selected models by varying criteria.
[Bias vs Accuracy](bias-tab) | Illustrates the tradeoff between predictive accuracy and fairness.
|
index
|
---
title: Model Comparison
description: You can compare the business returns of built models in a project using the Model Comparison tab or the Leaderboard's Compare Selected.
---
# Model Comparison {: #model-comparison }
Comparing Leaderboard models can help identify the model that offers the highest business returns. It also can help select candidates for blender models; for example, you may blend two models with diverging predictions to improve accuracy—or two relatively strong models to improve your results further.
!!! note
Once you have selected the model that best fits your needs, you can [deploy it](deploy-model) directly from the model menu.
#### Model Comparison availability {: #model-comparison-availability }
The **Model Comparison** tab is available for all project types except:
* Multiclass (including extended multiclass and unlimited multiclass)
* Multilabel
* Unsupervised clustering
* Unsupervised anomaly detection for time-aware projects
* Parent projects in segmented modeling
If model comparison isn't supported, the tab does not display.
## Compare models {: #compare-models }
To compare models in a project with at least two models built, either:
* Select the **Model Comparison** tab.

* Select two models from the Leaderboard and use the Leaderboard menu's **Compare Selected** option.

Once on the page, select models from the dropdown. The associated model statistics update to reflect the currently selected model:

The **Model Comparison** page allows you to compare models using different evaluation tools. For the Lift Chart, ROC Curve, and Profit Curve, depending on the partitions available, you can select a data source to use as the basis of the display.
| Tool | Description |
|-------------------------|--------------------------|
| Accuracy metrics | Displays various accuracy metrics for the selected model. |
| Prediction time | Reports the time required for DataRobot to score the model's holdout. |
| [Dual Lift Chart](#dual-lift-chart) | Depicts model accuracy compared to actual results, based on the difference between the model prediction values. For each pair of models, click to compute the chart data. |
| [Lift Chart](#interpret-a-lift-chart) | Depicts how effective a model is at predicting the target, letting you visualize the model's effectiveness. |
| [ROC Curve](roc-curve-tab/index) | Helps to explore classification, performance, and statistics related to the selected models. On Model Comparison, it shows just the ROC Curve visualization and selected summary statistics for the selected models. It also allows you to view the prediction threshold used for modeling, predictions, and deployments. |
| [Profit Curve](profit-curve) | Helps compare the estimated business impact of the two selected models. The visualization includes both the payoff matrix and the accompanying graph. It also allows you to view the prediction threshold used for modeling, predictions, and deployments. See the tab for more complete information. |
| [Accuracy Over Time](aot) (OTV, time series only)* | Visualizes how predictions change over time for each model, helping to compare model performance. Hover on any point in the chart to see predicted and actual values for each model. You can modify the partition and forecast distance for the display. Values are based on the Validation partition; to see training data you must first compute training predictions from the **Evaluate > Accuracy Over Time** tab. |
| [Anomaly Over Time](anom-viz) (OTV, time series only) | Visualizes when anomalies occur across the timeline of your data, functioning like the **Accuracy Over Time** chart, but with anomalies. Hover on any point in the chart to see predicted anomaly scores for each model. You can modify the partition for the display as well as the anomaly threshold of each model. Values are based on the Validation partition; to see training data you must first compute training predictions from the **Evaluate > Anomaly Detection** tab. |
!!! note "Accuracy Over Time calculations*"
Because multiseries projects can have up to 1 million series and up to 1000 forecast distances, calculating accuracy charts for all series data can be extremely compute-intensive and often unnecessary. To avoid this, DataRobot provides [alternative calculation options](aot#display-by-series).
### Dual Lift Chart {: #dual-lift-chart }
The **Dual Lift** chart is a mechanism for visualizing how two competing models perform against each other—their degree of divergence and relative performance. How well does each model segment the population? Like the [Lift Chart](lift-chart), the Dual Lift also uses [binning](lift-chart#lift-chart-binning) as the basis for plotting. However, while the standard Lift Chart sorts <em>predictions</em> (or adjusted <em>predictions</em> if you set the [Exposure](additional#set-exposure) parameter in **Advanced options**) and then groups them, the Dual Lift Chart groups as follows:
1. Calculate the difference between each model's prediction score (or adjusted predictions score if Exposure is set).
2. Sort the rows according to the difference.
3. Group the results based on the number of bins requested.
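The three grouping steps above can be sketched as follows (with invented prediction scores and a small row count for readability):

```python
# Minimal sketch of Dual Lift grouping; values are illustrative, not DataRobot output.
preds_left  = [0.20, 0.80, 0.40, 0.90, 0.10, 0.70]
preds_right = [0.60, 0.50, 0.40, 0.30, 0.30, 0.80]
n_bins = 3

# 1. Calculate the difference between each model's prediction score.
diffs = [left - right for left, right in zip(preds_left, preds_right)]

# 2. Sort the row indices according to the difference.
order = sorted(range(len(diffs)), key=lambda i: diffs[i])

# 3. Group the sorted rows based on the number of bins requested.
rows_per_bin = len(order) // n_bins
bins = [order[i * rows_per_bin:(i + 1) * rows_per_bin] for i in range(n_bins)]

# Each bin is then summarized by the average prediction of each model.
avg_left_bin1 = sum(preds_left[i] for i in bins[0]) / len(bins[0])
print(bins)  # row indices grouped by prediction difference
```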

The Dual Lift Chart plots the following:
| Chart element | Description |
|---------------|---------------------|
| Top or bottom model prediction (1) | For each model, and color-coded to match the model name, the data points represent the average prediction score for the rows in that bin. These values match those shown in the Lift Chart. |
| Difference measurement (2) | Shading to indicate the difference between the left and right models. |
| [Actual value](lift-chart#lift-chart-binning) (3) | The actual percentage or value for the rows in the bin. |
| Frequency (4) | A measurement of the number of rows in each bin. Frequency changes as the number of bins changes. |
The Dual Lift Chart is a good tool for assessing candidates for ensemble modeling. Finding different models with large divergences in the target rate (orange line) could indicate good pairs of models to blend. That is, does a model show strength in a particular quadrant of the data? You might be able to create a strong ensemble by blending with a model that is strong in an opposite quadrant.
### Interpret a Lift Chart {: #interpret-a-lift-chart }
The points on the Lift Chart indicate either the average percentage (for binary classification) or the average value (for regression) in each bin. To compute the Lift Chart, the actuals are sorted based on the predicted value and binned, and then the average of the actuals in each bin is plotted on the chart for each model.
A Lift Chart is especially valuable for propensity modeling, for example, for finding which model is the best at identifying customers most likely to take action. For this, points on the Lift Chart indicate, for each model, the average actuals value in each bin. For a general discussion of a Lift Chart, see the [Leaderboard tab](lift-chart) description.
To understand the value of a Lift Chart, consider the sample model comparison Lift Chart below:

Both models make fairly similar predictions in bins 5, 8, 9, and 11. From this you can assume that around the mean of the distribution, both are predicting well. However, on the left side, the majority class classifier (yellow line) is consistently over-predicting relative to the SVM classifier (blue line). On the right side, you can see that the majority class classifier is under-predicting. Notice:
1. The blue model is better than the yellow model, because its lift curve is "steeper," and a steeper curve indicates a better model.
2. Now knowing that the blue model is better, you can notice that the yellow model is especially bad in the tails of the distribution. Look at bin 1 where the blue model is predicting very low and bin 10 where the blue model is predicting very high.
The takeaways? The yellow model (majority class classifier) does not suit this data particularly well. While the blue model (SVM classifier) trends in the right direction fairly consistently, the yellow model jumps around. The blue model is probably your better choice.
|
model-compare
|
---
title: Speed vs Accuracy
description: Predictive accuracy often requires longer prediction runtime. The Speed vs Accuracy plot shows the runtime/accuracy tradeoff to help you choose the best model.
---
# Speed vs Accuracy {: #speed-vs-accuracy }
Predictive accuracy often comes at the price of increased prediction runtime. The **Speed vs Accuracy** analysis plot shows the tradeoff between runtime and predictive accuracy and helps you choose the best model with the lowest overhead. The **Speed vs Accuracy** display is based on the validation score, using the currently selected metric.
* The Y-axis lists the metric currently selected on the Leaderboard. To change the metric, switch to the Leaderboard, change the metric via the **Metric** dropdown, and then return to **Speed vs Accuracy**.
* The X-axis displays the estimated time, in milliseconds, to make 1000 predictions. Total prediction times include a variety of factors and vary based on the implementation. Mouse over any point on the graph, or the model name in the legend to the right, to display the estimated time and the score.

!!! tip
If you re-order the Leaderboard display, for example to sort by cross-validation score, the <b>Speed vs Accuracy</b> graph continues to plot the top 10 models based on validation score.
|
speed
|
---
title: Model Compliance
description: Details and steps to generate the Model Compliance Document.
---
# Model Compliance {: #model-compliance }
DataRobot automates many critical compliance tasks associated with developing a model and, by doing so, [decreases the time-to-deployment](#generalized-model-validation-workflow) in highly regulated industries. You can generate, for each model, individualized documentation to provide comprehensive guidance on what constitutes effective model risk management. Then, you can download the report as an editable Microsoft Word document (`.docx`). The generated report includes the appropriate level of information and transparency necessitated by regulatory compliance demands.
The model compliance report is not prescriptive in format and content, but rather serves as a guide in creating sufficiently rigorous model development, implementation, and use documentation. The documentation provides evidence to show that the components of the model work as intended, the model is appropriate for its intended business purpose, and it is conceptually sound. As such, the report can help with completing the Federal Reserve System's [SR 11-7: Guidance on Model Risk Management](https://www.federalreserve.gov/supervisionreg/srletters/sr1107.htm){ target=_blank }.
!!! note
Using the [Python client](https://pypi.org/project/datarobot/#description){ target=_blank }, you can create custom compliance documentation templates. DataRobot’s customized templating capabilities provide flexibility to control the structure and contents of the generated documentation. Alternatively, DataRobot uses a default template to generate Compliance Documentation when no custom template is specified.
## Complete the Model Compliance document {: #complete-the-model-compliance-document }
From the **Compliance Documentation** tab:
1. Optionally, consider unlocking the project's holdout as explained in the **Documentation Content Improvement** note: "Consider unlocking the project's holdout to include additional model performance detail in the compliance documentation."

2. Select the format of your compliance documentation. Choose:
* **Automated Compliance Document** to use the default template provided by DataRobot.
* **Upload custom JSON file** to provide a custom template for DataRobot to use in generating the documentation.

3. After selecting a template, click **Generate Report** to initiate DataRobot's report production process, which results in creation of a DOCX file. You will see an indicator that DataRobot is creating the report.

When the report is completed successfully, this indicator changes to a checkmark.
!!! tip
You can also generate compliance documentation from the [Model Registry](reg-compliance).
4. After you have generated the model compliance report, click **Download** and save the DOCX file to your system. Open the file and complete it as follows:
* Areas of blue italic text are intended as guidance and instruction. They identify who should complete the section and provide detail of the required information.

* Areas of black text are DataRobot's automatically generated model compliance text—preprocessing, performance, impact, task-specific, and DataRobot general information.

## Model report updates {: #model-report-updates }
Once you generate the report, DataRobot stores it with your project for download at any time. In some cases, there are changes that would affect the report content (for example, [unlocking holdout](unlocking-holdout)). This is fairly uncommon because report generation usually happens <em>after</em> model selection, which generally happens <em>after</em> the model has been tested against holdout. If you view the **Compliance Documentation** tab and there are changes that affect the report content, you are prompted to generate the report again.
## Generalized model validation workflow {: #generalized-model-validation-workflow }
The following is a high-level workflow of a typical model validation process. It is repeated for each new model or for a material change to an existing model (e.g., model re-fit or re-estimation). DataRobot's report satisfies Step 2 described below and, by extension, expedites the remaining steps.
1. Model owner identifies a use case and business need; model developer builds the model.
2. Owner and developer collaborate on a comprehensive “model development, implementation, and use” document that summarizes the model development process in detail.
3. The model development documentation is given to the model risk management team, with any applicable code and data.
4. Using the documentation, the model validation team replicates the process and performs a series of predefined statistical, analytical, and qualitative tests.
5. Validation team writes a comprehensive report summarizing their findings. Reportable issues require remediation. Non-reportable suggestions or recommendations demonstrate an effective challenge of the model development process.
6. Upon validation team approval, the model governance team secures stakeholder approval, tracks the remediation process, and performs ongoing model performance monitoring.
|
compliance
|
---
title: Compliance
description: The Compliance tabs compile model development documentation that can be used for regulatory validation.
---
# Compliance {: #compliance }
!!! info "Availability information"
Availability of compliance documentation is dependent on your configuration. Contact your DataRobot representative for more information.
The **Compliance** tabs compile model development documentation that can be used for regulatory validation. Work with the report from the following Leaderboard tabs:
Leaderboard tab | Description | Source
------------------|-------------|------------
[Compliance Documentation](compliance) | Generate individualized documentation to provide comprehensive guidance on what constitutes effective model risk management. | Data sources used for insights vary across use cases.
[Template Builder](template-builder) | Create, edit, and share custom compliance documentation templates. | Data sources used for insights vary across use cases.
|
index
|
---
title: Template Builder for compliance reports
description: For regulated industries, how to use Template Builder to generate required documentation using the provided compliance template or a custom template.
---
# Template Builder {: #template-builder }
In some regulated industries, models have to go through a rigorous validation process which can be tedious, time-consuming, and can potentially block the deployment of models to production. DataRobot's [Automated Compliance Documentation](compliance) was designed to accelerate this process by automating the necessary documentation requirements that accompany such use cases. The Template Builder allows you to create, edit, and share custom documentation templates to fit your needs and speed up the validation process. With the click of a button, you can generate automated documentation using the provided compliance documentation template or a custom, user-defined template more closely aligned with your documentation requirements.
If enabled, the Template Builder is available from the user dropdown menu in the top-right corner of the application.

## Create and edit templates {: #create-and-edit-templates }
From the Template Builder homepage, select **Create Template** and choose between a **Time Series** and **Non-Time Series** template. This choice determines which sections and components are available for the template.
* **Sections** are predefined portions of content describing different aspects of the specified model. They usually include a visualization such as a chart or a table with associated explanation text. You can edit **Custom Sections** to display custom text and organization of the document.
* **Components** are more granular than sections. They only contain certain visualizations describing results of the model without the surrounding explanation text.

To rearrange the document into subsections, click the **Rearrange Sections** button, then drag and drop sections into the desired position.

After the template has been finalized and saved, you can use the preview option to see what a document rendered from the template would look like before assigning it to users in your organization to generate model documentation. The preview uses dummy data; it is not a finished document because it is not yet associated with a model.

## Share a template {: #share-a-template }
Once you have created a template you can share it with other users, groups, or organizations within the DataRobot platform. Navigate to the Template Builder homepage where all templates are listed and click on the action menu for the template you would like to share.

Once on the menu, you can view and edit the permissions for the template. To add permissions, search the name of the user, group, or organization you want to share with in the **Share with others** field. Choose the permission level (User or Owner) and then save the modifications.

|
template-builder
|
---
title: Coefficients (preprocessing)
description: How to use the Coefficients tab, which shows the positive or negative impact of important variables, to help you refine and optimize your models.
---
# Coefficients (preprocessing) {: #coefficients-preprocessing }
The **Coefficients** tab provides a visual indicator of information that can help you refine and optimize your models. For regression models, the tab displays a visual representation of the 30 most important variables, sorted (by default) in descending order of impact on the final prediction. Variables with a positive effect are displayed in red; variables with a negative effect are shown in blue. The tab also provides, for [supported models](#supported-model-types), a link that allows you to [export the parameters and coefficients](#preprocessing-and-parameter-view) DataRobot uses with the selected model to generate predictions. (Exported coefficients include <em>all</em> coefficients, not just the top 30.)
Note that the **Coefficients** tab is only available for a limited number of models because it is not always possible to derive the coefficients for complex models in short analytical form.
The **Coefficients** chart determines the following to help assess model results:
* Which features were chosen to form the prediction in the particular model?
* How important is each of these features?
* Which features have positive and negative impact?
!!! tip
The <b>Leaderboard > Coefficients</b> and <b>Insights > Variable Effects</b> charts display the same type of information. Use the <b>Coefficients</b> tab to display coefficient information while investigating an individual model; use the <b>[Variable Effects](analyze-insights#variable-effects)</b> chart to access, and compare, coefficient information for all applicable models in the project.

Time series projects have an additional option to filter the display based on forecast distance, as [described below](#chart-display-based-on-forecast-distance).
See [below](#understand-the-coefficient-chart) for more detailed ways to consider Coefficients chart output.
## Use the Coefficient chart {: #use-the-coefficient-chart }
The Coefficients chart opens when you click the **Coefficients** tab. Actions available include:
1. Click the **Sort By** dropdown to set the sort criteria, either **Feature Coefficients** or **Feature Name**:
* Feature Coefficients: Sorts in descending order of impact on the final prediction.
* Feature Name: Sorts features alphabetically.
2. Click the **Export** button to access a pop-up that allows download of a chart PNG, a CSV file containing feature coefficients, or both in a ZIP file.
!!! tip
If a model has the ability to produce rating tables (for example, GAM and GA2M), the CSV download option is not available. Use the [Rating Tables](rating-table) tab instead. (These models are indicated with the rating table icon on the Leaderboard.)
3. If the main model uses a [two-stage modeling process](model-ref#two-stage-models) (Frequency-Severity Elastic Net, for example), you can use the dropdown to select a stage. DataRobot then graphs parameters corresponding to the selected stage.
## Preprocessing and parameter view {: #preprocessing-and-parameter-view }
Exporting coefficients with preprocessing information provides the data needed to reproduce predictions for a selected model. With the click of a link, DataRobot generates a table of model parameters (coefficients and the values of the applied feature transformations) for the input data of [supported models](#supported-model-types). That is, while you can export coefficients for all models showing the **Coefficients** tab, not all models showing the tab allow you to export preprocessing information. DataRobot then builds a CSV table of a model's parameter, transformation, and coefficient information. There are [many reasons](#why-export) for using coefficients with preprocessing information, for example, to replicate results manually for verification of the DataRobot model.
### Generate output {: #generate-output }
DataRobot automatically generates coefficients with preprocessing information, and makes it available for export, for all [supported models](#supported-model-types) when you use any of the supported [modeling modes](model-data#set-the-modeling-mode).
!!! tip
Generalized Additive Models (GA2M) using pairwise interactions, typically used by the insurance industry, generate a different rating table for export. For more information, see the sections on [exporting](rating-table) and/or [interpreting](ga2m) export output for GA2M.
To use the feature:
1. Use the Leaderboard search feature to list all models with coefficient/preprocessing information available by searching the term "bi".
2. Select and expand the model for which you want to view model parameters.
3. Click the **Coefficients** tab to see a visual representation of the thirty most important variables. Click the **Export** button and select **.csv** from the available export options.

4. Inspect the parameter information displayed in the box. To save the contents in CSV format, click the **Download** button and select a location.
5. If your data contains text features, either all text or in combination with numerical and/or categorical features, continue to the section on [using coefficient/preprocessing information with text variables](#coefficientpreprocessing-information-with-text-variables).
See the information [below](#interpret-export-output) for a detailed description of the export output and how to interpret it.
## More info... {: #more-info }
The following sections provide information on:
* Setting the [forecast distance](#chart-display-based-on-forecast-distance) for time series projects.
* Additional ways to work with the [**Coefficient** chart](#understand-the-coefficient-chart).
* [Reasons for using](#why-export) coefficient/preprocessing export.
* [Supported model types](#supported-model-types).
* [Interpreting output](#interpret-export-output).
* Using coefficient/preprocessing information [with text variables](#coefficientpreprocessing-information-with-text-variables).
## Chart display based on forecast distance {: #chart-display-based-on-forecast-distance }
Because DataRobot creates so many additional features when building a time series modeling dataset with multiple forecasting distances, displaying parameters for all forecast distances at once would result in a difficult viewing experience. To simplify and make the view more meaningful, use the **Forecast distance** selector to view coefficients for a single distance. Set the distance by either clicking the down arrow to expand a dialog or clicking through the distance options with the right and left arrows.

Additionally, with multiseries modeling where Performance Clustered and Similarity Clustered models were built, the chart displays cluster information that includes the number of series found in each cluster (up to 20 clusters). This information can be derived from the coefficients and transparent parameters, and supports producing user-coded insights outside of DataRobot. For example, with this information you could reproduce most of the results with an XGB model by making a new dataset that includes only series from specific clusters. For other non-cluster multiseries models, the display is the same as described above.

To support datasets with a large number of series, where displaying per-cluster information in the UI would be visually overwhelming, use the export to CSV option. The resulting export will provide a complete mapping of all series IDs to the associated cluster.

## Understand the Coefficient chart {: #understand-the-coefficient-chart }
With the **Coefficients** chart open and sorted by rank, consider the following:
1. Look carefully at features that have a very strong influence on your model to ensure that they are not dependent upon the response. Consider excluding these features from the model to avoid target leakage.
2. Try to determine if a particular feature is included in only one of the dozens of models generated by DataRobot. If so, it may not be particularly important. Excluding it from the feature set might help optimize model-building and future predictions.
3. Examine, in both the dataset and the models, any features that have a strongly positive effect in one model and a strongly negative effect in another.
4. Reduce the number of features considered by a model, as it may change the relative importance of each remaining feature. You may find it useful to compare how the importance of each feature changes when a feature list is reduced.
## Why export? {: #why-export }
You may want—or be required—to view and export the coefficients DataRobot uses to generate predictions. This is an appropriate feature if you need to:
- observe regulatory constraints.
- roll out a prediction solution without using DataRobot. This might be the case in environments where DataRobot is prohibited or not possible, for example in offline deployments such as banks or video games.
- adjust coefficients to control model build.
- quickly verify parameter accuracy without computing it by hand and inspecting the transformation process.
**Example use case: greater model insights**
Coefficient/preprocessing information can help with modeling mortality rates for breast cancer survivors. From the parameters perhaps you can come to understand:
* which age ranges are grouped together as similar risks.
* which tumor sizes are grouped together as similar risks, and at exactly what point the risk suddenly increases.
**Example use case: regulatory disclosure**
A Korean regulator requires all model coefficients and data preprocessing steps used by banks. With DataRobot, the bank can send the coefficient output.
To reproduce the steps DataRobot takes (and illustrates in the model blueprint) to build a model, you must know the formulas used. The export available through the **Coefficients** tab provides the coefficients and transformation descriptions that paint a picture of how a model works.
**Example use case: text-based insights**
DataRobot can also work with datasets containing text columns, allowing you to download certain text preprocessing parameters. You may want to use this feature, for example, to align a marketing campaign message with the direct marketing customers selected by your DataRobot model. Using text preprocessing, you can investigate the derived features used in the modeling process to gain an intuitive understanding of selected clients.
### Supported model types {: #supported-model-types }
The coefficient/preprocessing export feature supports DataRobot's linear models, which are easy to describe in simple, portable tables of parameters. Such parameter tables might allow you to see, for example, that age is the most important variable for predicting a certain event. More complex, non-linear models can be inspected using DataRobot's other built-in tools, available from the [**Feature Impact**](feature-impact), [**Feature Effects**](feature-effects), and [**Prediction Explanations**](pred-explain/index) tabs.
DataRobot provides the export feature for regularized and non-regularized GLMs, specifically:
- Generalized Linear Model
- Elastic Net Classifier
- Elastic Net Regressor
- Regularized Logistic Regression
DataRobot supports the following transformations ([described in detail below](#details-of-preprocessing)):
- Numeric imputation
- Constant splines
- Polynomial and log transforms
- Standardize
- One-hot encoding
- Binning
- Matrix of token occurrences
In general, more complicated proprietary preprocessing techniques are not exportable. For example, an imputation is exportable, but a polynomial spline is not. In the example below, although both are the same model <em>type</em>, the second model uses Regularized Linear Model Processing, which, because of the preprocessing, is not exportable.

DataRobot supports equation exports for [Eureqa models](eureqa), but does not currently support coefficient exports.
### Interpret export output {: #interpret-export-output }
The following is a sample excerpt from coefficient/preprocessing output:
1 Intercept: 5.13039673557
2 Loss distribution: Tweedie Deviance
3 Link function: log
4
5 Feature Name Type Derived Feature Transform1 Value1 Transform2 Value2 Coefficient
6 a NUM STANDARDIZED_a Missing imputation 59.5000 Standardize (56.078125,31.3878483092) 0.3347
7 b NUM STANDARDIZED_b Missing imputation 24.0000 Standardize (24.71875,15.9133088463) 0.2421
In the example, the **Intercept**, **Loss distribution**, and **Link function** parameters describe the model in general and not any particular feature. Each row in the table describes a feature and the transformations DataRobot applies to it. For example, you can read the sample as follows:
1. Take the feature named "a" (line #6) and replace missing values with the number 59.5.
2. Apply the Standardize transform: subtract the mean (56.078125) from the value and divide by the standard deviation (31.3878483092).
3. Write the result, now a derived feature, to the column "STANDARDIZED_a".
4. Follow the same procedure for feature "b".
The resulting prediction from the model is then calculated with the following formula, where the `inverse_link_function` is the exponential (the inverse of log) and the standardized features are each multiplied by their coefficient (the model output) and then added to the intercept value:
resulting prediction = inverse_link_function( (STANDARDIZED_a * 0.3347) + (STANDARDIZED_b * 0.2421) + 5.13)
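As a minimal Python sketch of this calculation (function names here are illustrative, not part of any DataRobot API; the constants come from the sample export above):

```python
import math

# Model parameters from the sample export above
INTERCEPT = 5.13039673557
COEF_A, COEF_B = 0.3347, 0.2421

def standardize(x, mean, scale, impute):
    """Impute missing values, then standardize: (x - mean) / scale."""
    if x is None:
        x = impute
    return (x - mean) / scale

def predict(a, b):
    # Derived features, per lines 6 and 7 of the export
    std_a = standardize(a, 56.078125, 31.3878483092, impute=59.5)
    std_b = standardize(b, 24.71875, 15.9133088463, impute=24.0)
    # Linear predictor, then the inverse link (exp, since the link is log)
    eta = INTERCEPT + COEF_A * std_a + COEF_B * std_b
    return math.exp(eta)
```

A row where both features equal their means reduces to `exp(intercept)`, which is a quick sanity check when reproducing predictions outside of DataRobot.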
If the main model uses a [two-stage modeling process](model-ref#two-stage-models) (Frequency-Severity Elastic Net, for example), two additional columns—`Frequency_Coefficient` and `Severity_Coefficient`—provide the coefficients of each stage.
### Coefficient/preprocessing information with text variables {: #coefficientpreprocessing-information-with-text-variables }
Text-preprocessing transforms text found in a dataset into a form that can be used by a DataRobot model. Specifically, DataRobot uses the [Matrix of token occurrences](#matrix-of-token-occurrences) (also known as "bag of words" or "document-term matrix") transformation.
??? "Deepdive: Word Cloud coefficient values"
The coefficient value displayed is a rescaling of the linear model coefficients. That is, DataRobot models a row and then changes all its ngrams to be consistent with `minimum in the negative box = -1` and `maximum in the positive box = 1` coefficients. The coefficient value is then a percentage of those observations.
When generating coefficient/preprocessing output, DataRobot simply exports the text preprocessing parameters along with the other parameters.

When text preprocessing occurs, DataRobot reports the parameters it used in the header section, prefixed with the transform name. You will need these "instructions" to create dataset columns from new text rows. Possible values of the transform name (with and without <a target="_blank" href="https://en.wikipedia.org/wiki/Tf-idf">inverse document frequency</a> (IDF) weighting) are:
* Matrix of word-grams occurrences [with tfidf]
* Matrix of word-grams counts [with tfidf]
* Matrix of char-grams occurrences [with tfidf]
* Matrix of char-grams counts [with tfidf]
The following table describes the parameters (key-value fields) that DataRobot used to create the parameter export. These values are reported at the top of the file:
| Parameter name | Value | Description |
|----------------|---------|---------------|
| tokenizer | *name* | Specifies the external library used to perform the tokenization step (e.g., scikit-learn based tokenizer). |
| binary | True or False | If True, converts the term frequency to binary value. If False, no conversion occurs. |
| sublinear\_tf | True or False | If True, applies the transformation `1 + log(tf)` to the term frequency. If False, does not modify the term frequency count. |
| use\_idf | True or False | If True, applies IDF weighting to the term. If False, there is no change to the weighting factor. |
| norm | L1, L2, or None | If L1 or L2, applies row-wise normalization using the L1 or L2 norm. If None, no normalization is applied. |
Each row in the parameters table represents a token. To generate predictions on new data using the coefficients listed in the parameters table, you must first create a <a target="_blank" href="https://en.wikipedia.org/wiki/Document-term_matrix">document-term matrix</a> (a matrix of the extracted features).
To create features from text:
1. Count the number of occurrences (i.e., term frequencies, `tf`) of each token in the new dataset row. If binary is True, the value is 0 (not present) or 1 (present) for each token. If binary is False, the value is the actual token count.
2. If `sublinear_tf` is true, apply the transformation `1 + log(tf)` to the token count.
3. If `use_idf` is true, apply IDF weighting to the token. You can find the IDF weight for the transformation in the **Value** field of the export. For example, in the tuple (cardiac, 0.01), use the multiplier 0.01.
4. If normalization was used, normalize the resulting feature vector using the appropriate norm.
Once you have extracted the text features for your dataset, you can generate predictions using the coefficients of the linear model.
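The four steps above can be sketched as a small Python function (the function name and signature are hypothetical; per the parameter table, DataRobot's tokenization itself uses an external, scikit-learn-based tokenizer):

```python
import math

def text_features(tokens, idf_weights, binary=False, sublinear_tf=True,
                  use_idf=True, norm="L2"):
    """Build a document-term feature vector for one row, following
    steps 1-4 above. idf_weights maps token -> IDF weight, taken from
    the Value field of the export (e.g., {"cardiac": 0.01})."""
    # Step 1: term frequencies -- binary presence or raw counts
    tf = {}
    for t in tokens:
        if t in idf_weights:  # only tokens listed in the parameter table
            tf[t] = 1 if binary else tf.get(t, 0) + 1
    # Step 2: optional sublinear scaling, 1 + log(tf)
    if sublinear_tf and not binary:
        tf = {t: 1 + math.log(c) for t, c in tf.items()}
    # Step 3: optional IDF weighting
    if use_idf:
        tf = {t: c * idf_weights[t] for t, c in tf.items()}
    # Step 4: optional row-wise normalization
    if norm == "L2":
        denom = math.sqrt(sum(v * v for v in tf.values())) or 1.0
    elif norm == "L1":
        denom = sum(abs(v) for v in tf.values()) or 1.0
    else:
        denom = 1.0
    return {t: v / denom for t, v in tf.items()}
```

The resulting vector can then be multiplied by the token coefficients from the export, just like any other derived feature.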
### Details of preprocessing {: #details-of-preprocessing }
The following sections describe the routines DataRobot uses to reproduce predictions from the parameters table.
#### Numeric imputation {: #numeric-imputation }
Missing imputation imputes missing values on numeric variables with the number given in the **Value** field.
**Value**: number
**Value example**: 3.1415926
#### Standardize {: #standardize }
Standardize standardizes features by removing the mean and scaling to unit variance:
x' = (x - mean) / scale
**Value**: (mean, scale)
**Value example**: (0.124072727273, 0.733724343942)
#### Constant splines {: #constant-splines }
Constant splines converts numeric features into a piece-wise constant spline basis expansion. A derived feature will equal 1.0 if the original value `x` is within the interval:
a < x <= b
Additionally, an N/A in the original feature sets the derived feature to 1.0 if `value` ends with the "(default for NA)" marker.
**Value**: (a, b]
**Value examples**: (-inf, 8.5], (8.5, 12.5], (12.5, inf)
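A minimal sketch of this expansion (the function name is illustrative; which interval receives missing values depends on which Value carries the "(default for NA)" marker):

```python
import math

def constant_spline(x, intervals, na_default_index=0):
    """Encode x into piece-wise constant spline bases.

    intervals: list of (a, b) pairs, each meaning a < x <= b.
    na_default_index: the derived feature set to 1.0 when x is missing
    (the interval whose Value ends with "(default for NA)").
    """
    features = [0.0] * len(intervals)
    if x is None:
        features[na_default_index] = 1.0
        return features
    for i, (a, b) in enumerate(intervals):
        if a < x <= b:
            features[i] = 1.0
            break
    return features

# Intervals from the Value examples above
intervals = [(-math.inf, 8.5), (8.5, 12.5), (12.5, math.inf)]
```

Because the intervals are half-open, each value lands in exactly one derived feature.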
#### Polynomial and log transforms {: #polynomial-and-log-transforms }
Best transform applies the given formula to the original feature.
If the formula contains `log`, negative values are replaced with the median of the remaining positive values.
**Value**: formula operating on the original feature.
**Value examples**: log(a)^2, foo^3
!!! note
    If your target is log transformed, or if the model uses the log link (Gamma, Poisson, or Tweedie Regression, for example), the coefficients are on the log scale, not the linear scale.
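The log-handling rule above can be sketched as follows (the function name is hypothetical; as an assumption, zeros are replaced the same way as negatives, since `log(0)` is also undefined):

```python
import math
import statistics

def best_transform_log(values):
    """Apply log(x), first replacing non-positive values with the
    median of the remaining positive values (per the rule above;
    treating zero like a negative is an assumption)."""
    positives = [v for v in values if v > 0]
    median = statistics.median(positives)
    return [math.log(v if v > 0 else median) for v in values]
```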
#### One-hot encoding {: #one-hot-encoding }
One-hot (or dummy-variable) transformation of categorical features.
* If `value` is a string, the derived feature contains 1.0 whenever the original feature equals that value.
* If `value` is "Missing value," the derived feature contains 1.0 when the original feature is N/A.
* If `value` is "Other categories," the derived feature contains 1.0 when the original feature doesn't match any of the above.
**Value**: string, or `Missing value`, or `Other categories`
**Value example**: 'MA', Missing value
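The three cases can be sketched in Python (the function name and dictionary layout are illustrative):

```python
def one_hot(x, known_values):
    """One-hot encode a categorical value into the known categories
    plus "Missing value" and "Other categories" indicators."""
    features = {v: 0.0 for v in known_values}
    features["Missing value"] = 0.0
    features["Other categories"] = 0.0
    if x is None:
        features["Missing value"] = 1.0   # N/A in the original feature
    elif x in known_values:
        features[x] = 1.0                 # value matches a known category
    else:
        features["Other categories"] = 1.0  # unseen category
    return features
```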
#### Binning {: #binning }
Binning transforms numerical variables into non-uniform bins.
The boundary of each bin is defined by the two numbers specified in `value`. The derived feature will equal 1.0 if the original value `x` is within the interval:
a < x <= b
**Value**: (a, b]
**Value examples**: (-inf, 12.5], (12.5, 25], (25, inf)
#### Matrix of token occurrences {: #matrix-of-token-occurrences }
Convert raw text fields into a document-term matrix.
**Value**: token or (token, weight)
**Value example**: `apple` or, with <a target="_blank" href="https://en.wikipedia.org/wiki/Tf-idf">inverse document frequency</a> weighting, `(apple, 0.1)`
|
coefficients
|
---
title: Rating Tables
description: How to display a model’s Rating Table tab, and export the model's validated parameters. Validation ensures correct parameters and reproducible results.
---
# Rating Tables {: #rating-tables }
When a model displays the rating table  icon on the Leaderboard, you can export the model's complete, [validated](#rating-table-validation) parameters. Validation assures that the downloaded parameters are correct and that you can reproduce the model's performance outside of DataRobot. For organizations that have the capability enabled, you can modify the table coefficients and [apply the new table to the original (parent) model](#modify-rating-tables), resulting in a new "child" model available on the Leaderboard.
Note that, for GA2M models, you can [specify the pairwise interactions](ga2m) included in the model's output.
Before working with rating tables, review [the considerations](#feature-considerations) regarding file size and model availability.
## Download rating tables {: #download-rating-tables }
To export rating table coefficients:
1. From the Leaderboard, identify a model with this  icon, indicating that it produced a rating table.
2. Expand the model and click the **Rating Table** tab. (The screen may appear different, depending on your permissions.)
3. Click the **Download Table** link to save the CSV file. See this [additional information](ga2m) for help interpreting the rating table output.

4. Modify your rating table in a text editor or spreadsheet application. If applicable, you can next [upload the modified table](#modify-rating-tables) to the parent and create a new child model with the table.

## Modify rating tables {: #modify-rating-tables }
When you modify a rating table and upload it to the original parent model (and then run the model), DataRobot creates a child model with the modified version of the original parent model's rating table. Available from the Leaderboard, the new model has access to the same features as the parent (with [some exceptions](#child-model-considerations)).
The following, briefly and then [in detail](#detailed-workflow), describes the workflow for creating a new child model.
### Workflow overview {: #workflow-overview }
The following outlines the steps to iterate on building models with modified rating tables:
1. Download the rating table from the parent.
2. Modify the rating table outside of DataRobot using an [appropriate editor](#editing-considerations).
3. Upload the modified table to the parent model.
4. Score the new model, adding it to the Leaderboard.
5. Click **Open Child Model** to view the new model.
6. To iterate on rating table changes, download the child's rating table.
7. Modify the child's rating table outside of DataRobot.
8. Upload the newly modified table to the parent model.
9. Return to step 4 and repeat as necessary.
### Detailed workflow {: #detailed-workflow }
The following describes, in more detail, the steps for working with rating tables:
1. Select a model from the Leaderboard that displays the rating table icon. This is the parent model.
2. [Download](#download-rating-tables) the parent model's rating table.
3. Edit the [coefficients](coefficients) in the rating table CSV file using an [appropriate editor](#editing-considerations) or spreadsheet.
4. Once you have completed modifications to the exported rating table, drag-and-drop or browse to upload the new rating table:

All available (newly and previously uploaded) rating tables are listed under **Uploaded Tables**.
5. If desired, and only before you run the model, you can click the pencil icon to rename the uploaded table, up to 50 characters. Note that the child model's name is based on the name of the rating table it was created from. You can also rename the table outside of the application. If you specify an existing name, DataRobot appends a number to the table name.

6. Click the **Add to Leaderboard** link to create and score the new model. DataRobot first validates the new rating table and, after building completes, the new child model is available on the Leaderboard. A green check indicates a successfully validated and uploaded table; otherwise, DataRobot displays an error message indicating the issue. (You can monitor build status in the [Worker Queue](worker-queue).)
7. Once the build completes, click the **Open Child Model** link corresponding to the child model/rating table pair you would like to view. DataRobot opens (and places you in) the **Rating Tables** tab of the child model. The child model name is `Modified Rating Table: <rating_table_name>.csv` and is visible and accessible from the Leaderboard.

From the child model, you can do the following:
| Link | Action |
|--------|---------|
| Download Table | Download the rating table of the child model. To iterate on coefficient changes in a table, download the child's rating table, upload the modified child rating table to the parent, compare scores, and continue the process as necessary. |
| Open Parent Model | Move back to the **Rating Tables** tab of the parent (original) model. From there you can upload new tables, build new models, or open any built child models. |
!!! note
    You cannot upload a new rating table to the child model. You can only upload rating tables to the parent model.
## Rating table validation {: #rating-table-validation }
When DataRobot builds a model that produces rating tables (for example, GA2M), it runs validation on the model before making it available from the Leaderboard. For validation, DataRobot compares predictions made by the Java Scoring Code generated for that specific rating table against predictions made by the Python model inside the DataRobot application, which is independent of the rating table CSV file. If the predictions differ, the rating table fails validation, and DataRobot marks the model as errored.
## Feature considerations {: #feature-considerations}
Because rating models (GAM, GA2M, and Frequency/Severity) depend on DataRobot's specialized internal code generation for validation, they are limited to 8GB in-RAM datasets. Beyond this limit, the project may fail due to memory issues. If you encounter an out-of-memory (OOM) error, decrease the sample size and try again.
### Editing considerations {: #editing-considerations }
While editing, keep the following in mind:
* Rating table modification does not support changing the header row of the dataset or the data type of the columns. Some editors process data in a way that unintentionally makes these changes, for example, by truncating "000" to "0" or quoting every field so that coefficients are changed from numeric to string. This affects the table that is ultimately re-uploaded. Therefore, DataRobot strongly suggests using a text editor that does not alter the data, such as <a target="_blank" href="https://atom.io/">Atom</a> or Windows Notepad.
* If you are using a spreadsheet application, be careful that you do not convert column types (e.g., Num to Date).
* Rating tables are not created for models with Japanese text columns (they do not support the MeCab tokenizer).
* In the first section of the table (which defines model parameters and pairwise interactions), you can only modify the values of `Intercept` and `Base`.

* In the first line of the second section (which defines how each variable is used to derive the coefficient that contributes to the prediction), you can edit the value of any column <em>except</em>: `Feature Name`, `Type`, `Transform1`, `Value1`, `Transform2`, `Value2`, and `weight`.
* You can add extra columns to the table (for example, to add comments).
* The `Coefficient`, `Relativity`, `Intercept`, and `Base` values must be numeric.
* `Base` is the exponential of `Intercept` and is computed from the `Intercept` value.
* `Relativity` is the exponential of `Coefficient` for each row and is computed from the `Coefficient` value in the row.
* `Feature Strength` is computed from the modified `Coefficient` values.
* CSV encoding must be UTF-8.
Additionally, for Frequency/Severity models:
* The `Coefficient` value for each row is the sum of the `Frequency_Coefficient` and `Severity_Coefficient` values for the row, and is computed from them. `Relativity` is computed from `Coefficient` as described above.
* `Frequency_Relativity` is the exponential of `Frequency_Coefficient` for each row, and is computed from the `Frequency_Coefficient` value in the row.
* `Severity_Relativity` is the exponential of `Severity_Coefficient` for each row and is computed from the `Severity_Coefficient` value in the row.
* `Frequency_Coefficient`, `Severity_Coefficient`, `Frequency_Relativity`, and `Severity_Relativity` values must be numeric.
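Because the derived columns are computed from the coefficients, you can sanity-check an edited row before uploading it. A minimal sketch (the function name and dict-based row layout are illustrative; column names are from the rules above):

```python
import math

def check_row(row, tol=1e-6):
    """Verify the derived-column relationships for one
    Frequency/Severity rating table row."""
    coef_sum = row["Frequency_Coefficient"] + row["Severity_Coefficient"]
    return (
        # Coefficient is the sum of the frequency and severity coefficients
        abs(row["Coefficient"] - coef_sum) < tol
        # Each Relativity is the exponential of its coefficient
        and abs(row["Relativity"] - math.exp(row["Coefficient"])) < tol
        and abs(row["Frequency_Relativity"]
                - math.exp(row["Frequency_Coefficient"])) < tol
        and abs(row["Severity_Relativity"]
                - math.exp(row["Severity_Coefficient"])) < tol
    )
```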
### Child model considerations {: #child-model-considerations }
When DataRobot creates a child model with the modified version of the original parent model’s rating table, the new model has access to the same features as the parent, with these exceptions:
* The [**Advanced Tuning**](adv-tuning) tab is not available.
* In the [**Make Predictions**](predict) tab, the child model is unable to make predictions on data that was used to train the original. That is, making predictions on the Validation and Holdout partitions of the training data is only possible if those partitions were not used for training. Predictions on those partitions are available when using a newly uploaded dataset.
* You cannot rerun a child model (for example, with a different feature list or sample size).
* You cannot change row order when modifying a rating table; any changes result in an error.
* You cannot upload a new rating table to the child model. You can only upload rating tables to the parent model.
|
rating-table
|
---
title: Model Info
description: To view the Model Info tab, click a model on the Leaderboard, then click Model Info. The tab’s tiles report general model and performance information.
---
# Model Info {: #model-info }
To display model information, click a model on the Leaderboard list then click **Model Info**. The tab provides tiles that report general model and performance information.

For time series projects, the output also includes backtesting information, including execution time for each backtest.

The backtest summaries show partitioning against the full date range:
* Date ranges the model is trained on (blue)
* Validation (green) and Holdout (red) partitions
* Any configured gaps (yellow)
Training periods can be changed by clicking the plus sign next to a model on the [Leaderboard](otv#change-the-training-period).
If a model uses specified start and end dates, note that:
* Insights on validation and holdout sets are only available if the model was not trained into those sets (if data is out-of-sample).
* If _any_ partitions remain out-of-sample for the model, insights are provided for that partition.
* If any partition is wholly or partially in the training period for the model, insights are _not_ provided for that partition.
## Model File Size {: #model-file-size }
Model File Size reports the sum total of the cache files DataRobot uses to store the model data. It's generated from an internal storage mechanism and indicates your system footprint, which can be especially useful for Self-Managed AI Platform deployments.
## Prediction Time {: #prediction-time }
Prediction Time displays the estimated time, in seconds, to score 1000 rows of the dataset.
## Sample Size {: #sample-size }
Sample Size reports the number of observations used to train and validate the model (and also for each cross-validation repetition, if applicable). When [smart downsampling](smart-ds) is in play or DataRobot has downsampled the project, Sample Size reports the number of rows in the minority class rather than the total number of rows used to train the model.
|
model-info
|
---
title: Model log
description: How to display the model log, which shows the status of successful operations with green INFO tags and errors marked with red ERROR tags.
---
# Log {: #log }
The model log displays the status of successful operations with green INFO tags, along with information about errors marked with red ERROR tags. To display the model log, click a model on the Leaderboard list then click **Log**.
!!! note
    If you receive text-based insight model errors, see this [note](analyze-insights#text-based-insights) for a description of how DataRobot handles single-character "words."

|
log
|
---
title: Describe
description: Introduces the Leaderboard tabs, including Blueprint, Coefficients, Constraints, Data Quality Handling Reports, Eureqa Models, Log, Model Info, and Rating Table.
---
# Describe {: #describe }
The **Describe** tabs provide model building information and feature details:
Leaderboard tab | Description | Source
------------------|-------------|------------
[Blueprint](blueprints) | Provides a graphical representation of the data preprocessing and parameter settings via blueprint. | Blueprints can be DataRobot- or [user-](cml/index)generated. For DataRobot blueprints, the structure is decided once (after the partitioning stage), taking the dataset, project options, and column metadata into account. Values of `auto` hyperparameters may be decided later in the training process. Certain blueprint inputs and paths may be eliminated before training if the feature list does not have the corresponding feature types. For user-generated blueprints, the structure can be decided at any time.
[Coefficients](coefficients) | Provides, for select models, a visual representation of the most important variables and a coefficient export capability. | Training data
[Constraints](monotonic) | Forces certain XGBoost models to learn only monotonic (always increasing or always decreasing) relationships between specific features and the target. | Training, Validation data
[Data Quality Handling Report](dq-report) (Formerly Missing Values) | Provides transformation and imputation information for blueprints. | Training data
[Eureqa Models](eureqa) | Provides access to model blueprints for Eureqa generalized additive models (GAM), regression models, and classification models. | The Pareto front uses the Eureqa validation set, a subset of DataRobot training. The plots shown for regression and classification models use validation data.
[Log](log) | Lists operation status results. | N/A
[Model Info](model-info) | Displays model information. | Training data
[Rating Table](rating-table) | Provides access to an export of the model’s complete, validated parameters. | Training data
|
index
|
---
title: Eureqa Models
description: The Eureqa Models tab lets you inspect and compare the best models generated from a Eureqa blueprint, to balance predictive accuracy against complexity.
---
# Eureqa Models {: #eureqa-models }
The **Eureqa Models** tab provides access to model blueprints for Eureqa generalized additive models (Eureqa GAM), Eureqa regression, and Eureqa classification models. These blueprints use a proprietary Eureqa machine learning algorithm to construct models that balance predictive accuracy against complexity.

The Eureqa modeling algorithm is robust to noise and highly flexible, and performs well across a wide variety of datasets. Eureqa typically finds simple, easily interpretable models with exportable expressions that provide an accurate fit to your data.
Eureqa GAM blueprints, a Eureqa/XGBoost hybrid, are available for both regression and classification projects.
When DataRobot runs a Eureqa blueprint, the Eureqa algorithm tries millions of candidate models and selects a handful (of varying complexity) which represent the best fit to the data. From the **Eureqa Models** tab you can inspect and compare those models, and select one which best balances your requirements for complexity against predictive accuracy.
You can select one or more Eureqa GAM models to add to the Leaderboard for later deployment. Additionally, the ability to recreate Eureqa models enables you to fully reproduce their predictions outside of DataRobot. This is helpful for meeting requirements in regulated industries as well as for simplifying the steps to embed models in production software. Recreating a Eureqa model is as simple as copying and pasting the model expression to the target database or production environment. (Also, for GAM models only, [parameters can be exported](#export-model-parameters) to recreate models.)
See the associated [considerations](#feature-considerations) for additional information.
## Benefits of Eureqa models {: #benefits-of-eureqa-models }
There are a number of advantages to using Eureqa models:
* They return human-readable and interpretable analytic expressions, which are easily reviewed by subject matter experts.
* They are very good at feature selection because they are forced to reduce complexity during the model building process. For example, if the data had 20 different columns used to predict the target variable, the search for a simple expression would result in an expression that only uses the strongest predictors.
* They work well with small datasets, so they are very popular with scientific researchers who gather data from physical experiments that don’t produce massive amounts of data.
* They provide an easy way to incorporate domain knowledge. If you know the underlying relationship in the system that you're modeling, you can give Eureqa a "hint," (for example, the formula for heat transfer or how house prices work in a particular neighborhood) as a building block or a starting point to learn from. Eureqa will build machine learning corrections from there.
## Build a Eureqa model {: #build-a-eureqa-model }
Eureqa models are run in full Autopilot, not Quick, but can always be accessed from the model [**Repository**](repository). (See [the reference](#model-availability) on when models are available based on modeling mode and project type.) Additionally, you can disable them from running as part of Autopilot with the "Don't include Eureqa models in Autopilot" flag (ask your administrator).
If you ran Quick mode and Eureqa models were not built, or you chose manual mode, you can create them from the [**Repository**](repository). For [Comprehensive](more-accuracy) mode, all Eureqa models are created during Autopilot. Running a Eureqa blueprint creates a model of that name.
To run a blueprint:
1. Upload your dataset and select a target, select the [modeling mode](model-data#set-the-modeling-mode), and click **Start** to begin the model building process. If you used Manual mode to start your project, you see this message:

2. Click **Repository** in the message or select **Repository** from the menu to add a Eureqa blueprint. (Note that Autopilot mode automatically creates a Eureqa generalized additive model and makes it available from the Leaderboard.)
3. In the search box in the Repository, type `eureqa` to filter the display. Click **Add** from the dropdown for each Eureqa model you want to create.

4. When ready, click **Run Tasks**.

DataRobot then begins processing the selected model(s); you can follow the status in the Worker Queue. When the build completes, models are available from the Leaderboard.
## Eureqa Models tab {: #eureqa-models-tab }
To view details for a Eureqa model, select it from the Leaderboard and then select the **Eureqa Models** tab:

Display component | Description
----------------- | -----------
 [Eureqa Model summary](#eureqa-model-summary) | Displays the Leaderboard model’s Eureqa complexity, Eureqa error, and model expression.
 [Decimal rounding](#decimal-rounding) | Sets the number of decimal places to display for rounding in Eureqa constants.
 [**Models by Error vs Complexity**](#models-by-error-vs-complexity-graph) chart | Plots model error against model complexity.
 [**Selected Model Detail**](#selected-model-detail-graph) | Displays the mathematical expression and plot for the selected model.
 [Export](#export-model-parameters) link | Exports the Leaderboard model's preprocessing and parameter information to CSV (for GAM only).
Note that the tab's graphs and other UI elements update periodically as DataRobot creates and selects additional candidate models.
### Eureqa model summary {: #eureqa-model-summary }
The model summary information DataRobot displays in this upper section represents information for the Leaderboard model. It includes complexity and error scores as well as a mathematical representation of the model (i.e., model expression) and access to [model export](#export-model-parameters) (for GAM only).

!!! note
    When [customizing a Eureqa model](advanced-options) to configure a prior solution (<b>prior_solutions</b>), for example, you copy the model expression content to the right of the equal sign. Also, when using the model expression for a target expression string (<b>target_expression_string</b>), make sure to replace the original variable name with <code>Target</code>. For example, in the screenshot above the target expression would be:
    <br>
    <code>Target = High Cardinality and Text features Modeling +1.23938372292399*sqrt(perc_alumni) + 0.031847155305945*Top25perc*log(Enroll) + 0.000123426619061881*Outstate*log(Accept) - 23.3747552223482 - 0.00203437584904968*Personal</code>
The complexity score reports the complexity of this model, as represented in the [Models by Error vs. Complexity](#models-by-error-vs-complexity-graph) chart. The "Eureqa error" value provides a mechanism for comparing Eureqa models. Once you have selected the best-suited model, you can move that model to the Leaderboard to compare it against other DataRobot models. The model expression denotes the mathematical functions representing the model. The **Export** link opens a dialog for downloading model preprocessing and parameter data. See this [note](#more-info) on data partitioning and error metrics.
### Decimal rounding {: #decimal-rounding }
To improve readability, DataRobot shows constants to two decimal places of precision by default. You can change the precision displayed from the **Rounding** dropdown. Changes to the display do not affect the underlying model.

Default display:

With all points displayed:

### Models by Error vs. Complexity graph {: #models-by-error-vs-complexity-graph }
The left panel of the **Eureqa Models** display plots model error against model complexity. Each point on the resulting graph (known as a <a target="_blank" href="https://en.wikipedia.org/wiki/Pareto_efficiency#Pareto_frontier">Pareto front</a>) represents a different model created by Eureqa. The color of each point ranges from red for the simplest, lowest-accuracy model to blue for the most complex and accurate model.

The location of the Leaderboard entry—the “current model”—is indicated on the graph (). Hover over any other point to display a tooltip reporting the model’s Eureqa complexity and Eureqa error. Clicking a model (point) updates the **Selected Model Detail** graph on the right with details for that model.
### Selected Model Detail graph {: #selected-model-detail-graph }
The **Selected Model Detail** graph reports, for the selected model, the complexity and error scores, as well as the mathematical representation of the model.
Clicking a model (point) on the **Models by Error vs. Complexity** graph updates the **Selected Model Detail** graph. Additionally, selecting a different model activates the **Move to Leaderboard** button. Once you click the button, DataRobot creates a new, additional Leaderboard entry for the selected model. Because DataRobot already built the model, no new computations are needed.
The contents of the graphing portion are dependent on whether you are working with a [regression](#for-regression-projects) or [classification](#for-classification-projects) problem.
#### For regression projects {: #for-regression-projects }
The **Selected Model Detail** graph for regression problems displays a scatter plot fit to data for the selected model. Similar to the [Lift Chart](lift-chart), the orange points in the **Selected Model Detail** graph show the target value across the data; the blue line graphs model predictions. To see output for a different model, select a new model in the **Models by Error vs. Complexity** graph to the left.

Interpret the graph as follows:
Component | Description
-------- | -----------
 | Complexity values, error values, and model expression for the selected model.
 | Action to send the selected model to the Leaderboard. Because all available Eureqa models are built when first run, there is no additional processing necessary.
 | Tooltip displaying target and model values.
 | Dropdown to control row ordering along the X-axis.

The **Order by** dropdown has several options, including:
- Row (default): rows are ordered in the same order as the original data
- Data Values: rows are ordered by the target values
- Model Values: rows are ordered by the model predictions
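The three orderings amount to sorting the plotted rows by different keys. A minimal sketch with hypothetical row data (the field names are assumptions for illustration):

```python
# Hypothetical plotted rows: original index, target value, model prediction
rows = [
    {"row": 0, "actual": 24.0, "pred": 22.1},
    {"row": 1, "actual": 13.5, "pred": 15.0},
    {"row": 2, "actual": 31.2, "pred": 29.8},
]

by_row = sorted(rows, key=lambda r: r["row"])      # Row (default): original order
by_data = sorted(rows, key=lambda r: r["actual"])  # Data Values: ordered by target
by_model = sorted(rows, key=lambda r: r["pred"])   # Model Values: ordered by predictions

print([r["row"] for r in by_data])  # [1, 0, 2]
```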
#### For classification projects {: #for-classification-projects }
The **Selected Model Detail** graph for classification problems displays a distribution histogram—a confusion matrix—for the selected model. That is, it shows the percentage of model predictions that fall into each of *n* buckets, spaced evenly across the range of model predictions. For more information about understanding a [confusion matrix](confusion-matrix), see a general description in the **ROC Curve** details.
The histogram displays all predicted values applicable to the selected model. To see output for a different model, select a new model (different point) in the **Models by Error vs. Complexity** graph.
Interpret the graph as follows:
Component | Description
-------- | -----------
 | Complexity values, error values, and model expression for the selected model.
 | Action to send the selected model to the Leaderboard. Because all available Eureqa models are built when first run, there is no additional processing necessary.
 | Tooltip describing the content of the bucket, including total values, range of values, and breakdown of true/false counts.
 | Order by value for the rows along the X-axis. By default, rows are ordered by model predictions.

The histogram displays a vertical threshold line (`0.5` in the above example), dividing the plot into four regions. The top portion of the plot shows all rows where the target value was 1 while the bottom portion includes all rows where the target value was 0. All predictions to the left of the threshold were predicted _false_ (negative); lower left represents correct predictions, upper left incorrect predictions. Values to the right of the threshold are predicted to be true. Histogram counts are computed across the entire training dataset.
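The four regions can be sketched as a simple tally over actual labels and predicted probabilities. This is an illustrative helper, not DataRobot code; treating a prediction exactly at the threshold as positive is an assumption here.

```python
def confusion_counts(actuals, predictions, threshold=0.5):
    """Tally the four regions formed by the threshold line:
    predictions at or above the threshold are labeled positive (true),
    predictions below it are labeled negative (false)."""
    tp = fp = tn = fn = 0
    for y, p in zip(actuals, predictions):
        if p >= threshold:
            if y == 1:
                tp += 1  # upper right: predicted true, actually 1
            else:
                fp += 1  # lower right: predicted true, actually 0
        else:
            if y == 0:
                tn += 1  # lower left: predicted false, actually 0
            else:
                fn += 1  # upper left: predicted false, actually 1
    return {"TP": tp, "FP": fp, "TN": tn, "FN": fn}

print(confusion_counts([1, 0, 1, 0, 1], [0.9, 0.2, 0.4, 0.7, 0.8]))
# {'TP': 2, 'FP': 1, 'TN': 1, 'FN': 1}
```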
## Export model parameters {: #export-model-parameters }
!!! note
Although you can recreate GAM models using the <b>Export</b> button, consider that another simple way to recreate any GAM or non-GAM Eureqa model is by copying and pasting the model expression into the target environment directly (such as a SQL query, Python, Java, etc.).
The **Export** button opens a window allowing you to download the Eureqa preprocessing and parameter table for the selected Leaderboard entry. This export provides all the information necessary to recreate the GAM model outside of DataRobot. Interpret the output in the same way as you would the export available from the [**Coefficients**](coefficients#preprocessing-and-parameter-view) tab (with [GAM-specific information](ga2m) here), with the following differences:

* The first section of output shows the Eureqa model formula. This is the mathematical equation displayed at the top of the **Eureqa Models** tab, beginning with `Target=...`.
* The second section displays the DataRobot preprocessing parameters for each feature used in the model, which includes parameters for one or two input transformations (e.g., standardization). With Eureqa models, the `Coefficient` field is set to 0 when there are no text or high-cardinality features. "Coefficient" is used in linear models to denote the column's linearly fit coefficient.
* Eureqa model parameters can be exported to .csv format only (.png and .zip options are not selectable here).
## More info... {: #more-info }
With [traditional DataRobot model building](data-partitioning), data is split into training, validation, and holdout sets. Eureqa, by contrast, uses the training DataRobot split and then, to compute the Eureqa error, further splits that set using its own internal training/validation splitting logic.
### Model availability {: #model-availability }
The following table describes the conditions under which Eureqa models for AutoML and time series projects are available in Autopilot and the Repository.
!!! note
Running Eureqa models as part of Autopilot can be controlled with the "Don't include Eureqa models in Autopilot" flag; contact your administrator to disable it.
Eureqa model type | Autopilot | Repository
---------- | --------- | ----------
**_AutoML projects_** | :~~: |:~~:
Regressor/Classifier | <ul><li>Requires numeric or categorical features</li><li>Maximum dataset size 100,000 rows</li><li>Offset and exposure not set</li></ul> | <ul><li>Requires numeric or categorical features</li><li>No dataset size limitation</li><li>Offset and exposure not set</li></ul>
GAM | <ul><li>Maximum dataset size 1GB</li><li>Offset and exposure not set</li></ul> | <ul><li>Maximum dataset size 1GB</li><li>Offset and exposure not set</li></ul>
**_Time series projects_** |:~~: | :~~:
Regressor/Classifier | <ul><li>Number of rows is less than 100,000 **_and_**</li><li>Number of unique values for a categorical feature is less than 1,000</li></ul> | No restrictions
GAM | <ul><li>Number of rows is less than 100,000 **_or_**</li><li>Number of unique values for a categorical feature is less than 1,000</li></ul> | No restrictions
Eureqa With Forecast Distance Modeling | N/A | <ul><li>Number of forecast distances is less than 15</li><li>Maximum 100,000 rows **_or_** number of unique values for a categorical feature is less than 1,000</li></ul>
### Number of generations {: #number-of-generations }
The following table describes the number of generations performed, based on blueprint selected. Generation values are reflected in the blueprint name.

Eureqa model type | Autopilot generations | Repository generations
---------- | --------- | ----------
**_AutoML projects_** | :~~: |:~~:
Regressor/Classifier | 250| 40, 250, or 3000
GAM | Dynamic\* | 40, dynamic\*, or 10,000
**_Time series projects_** |:~~: | :~~:
Regressor/Classifier | 250 | 40 or 3000
GAM | 250 | 40, 250, dynamic\*
Eureqa With Forecast Distance Modeling (one model per forecast distance) | N/A | Number of generations is determined by the **Advanced Tuning** `task_size` parameter. Default is medium (1000 generations).
\* The dynamic option for the number of generations is based on the number of rows in the dataset. The value will be between 1000 and 3000 generations.
### Eureqa and stacked predictions {: #eureqa-and-stacked-predictions }
Because it would be too computationally "expensive" to do so, Eureqa blueprints don't support [stacked predictions](data-partitioning#what-are-stacked-predictions). Most models use stacking to generate predictions on the data that was used to create the project. When you generate Eureqa predictions on the training data, all predictions will come from a single Eureqa model, not from stacking.
This means the Eureqa error isn't exactly the error on the data; it's the error on a filtered version of the data. This explains why the reported Eureqa error can be lower than the Leaderboard error when the error metrics are the same. You cannot change the Eureqa error metric, although you can change the DataRobot [optimization metric](additional#change-the-optimization-metric) (the value DataRobot uses to rank models on the Leaderboard).
The following lists differences from non-Eureqa modeling due to lack of stacked predictions:
* In AutoML, blenders that train on predictions (for example, GLM or ENET) are disabled. Other blenders are available (such as AVG or MED).
* Validation and Cross-Validation scores are hidden for Eureqa and Eureqa GAM models trained into Validation and/or Holdout.
* Downloading predictions on training data is disabled.
### Model training process {: #model-training-process }
When training a Eureqa model, DataRobot executes either a new solution search or a refit:
* _New solution search_: The Eureqa evolution process does a complete search, looking for a new set of solutions. This mechanism is slower than refitting.
* _Refit_: Eureqa refits coefficients of the linear components. In other words, it takes the target expression from the existing solution, extracts linear components, and refits its coefficients using all the training data.
The following table describes, for each Eureqa model type, training behavior for validation/backtesting and [frozen runs](frozen-run):
Model type | Backtesting/Cross-Validation | Frozen run
---------- | -----------------------------| ----------
Eureqa Regressor/Classifier | Refits coefficients of existing solutions from the model trained on the first fold. | Refits coefficients of existing solutions from the parent model.
Eureqa GAM\* | Refits coefficients of existing solutions from the model trained on the first fold. | Freezes XGBoost hyperparameters; performs new solution search for Eureqa second-stage models.
Eureqa with Forecast Distance Modeling (selects the best solution—per strategy—for each forecast distance) | Performs a new solution search. | Performs a new solution search with fixed Eureqa building blocks.
\* Eureqa GAM consists of two stages—first stage is XGBoost, second stage is Eureqa approximating the XGBoost model but trained on a subset of the training data.
### Deterministic modeling {: #deterministic-modeling }
Like other DataRobot models, Eureqa's model-generation process is deterministic: if you run Eureqa twice against the same data, with the same configuration arguments, you will get the same model—same error, same complexity, same model equation. Because of Eureqa's unique model-generation process, if you make a very small change in its inputs, such as removing a single row or changing a tuning parameter slightly, it's possible that you will get a very different model equation.
!!! note
If the **sync_migrations** [Advanced Tuning parameter](../../reference/eureqa-ref/index) is set to *False*, then Eureqa's model-generation process will be non-deterministic. If this is the case, DataRobot may identify good Eureqa models more quickly (though this isn't guaranteed), and it will better utilize all available CPUs.
### Tune with error metrics {: #tune-with-error-metrics }
The metric used by Eureqa for Eureqa GAM (Mean Absolute Error) is a "surrogate" error, as the Eureqa GAM blueprint runs Eureqa on the output of XGBoost. It measures how well Eureqa could reproduce the raw output of XGBoost. For regression, you can change the loss function used in XGBoost in the advanced options, but you cannot change the Eureqa error metric. You can also change the DataRobot [optimization metric](additional#change-the-optimization-metric) (the value DataRobot uses to rank models on the Leaderboard). Changing the optimization metric affects the tuning of XGBoost and the default choice of XGBoost loss function, and leads to different results for Eureqa GAM.
### Advanced Tuning parameters {: #advanced-tuning-parameters }
You can tune your Eureqa models by modifying building blocks, customizing the target expression, and modifying other model parameters, such as support for building blocks, error metrics, row weighting, and data splitting. Eureqa models use expressions to represent mathematical relationships and transformations.
See the [reference guide](eureqa-ref/index) to Eureqa's Advanced Tuning options for more information.
## Feature considerations {: #feature-considerations }
The following considerations apply to working with both GAM and general Eureqa models and for working with Eureqa models in [time series](#additional-time-series-considerations) projects, specifically.
!!! note
Eureqa model blueprints are deterministic only if the number of cores in the training and validation environments is kept constant. If the configurations differ, the resulting Eureqa blueprints produce different results.
* There is no support for [multiclass](multiclass) modeling.
* Cross-validation can only be run from the Leaderboard (not from the Repository).
* For legacy Eureqa SaaS product users, accuracy may be comparatively reduced due to fewer cores. (Legacy users can contact their DataRobot representative to discuss options for addressing this.)
* Eureqa Scoring Code is available for both AutoML and time series projects. With time series, Scoring Code is supported for Eureqa regression and Eureqa GAM models only (no classification).
* There is no support for offsets with time series models.
|
eureqa
|
---
title: Monotonic constraints
description: How to force an XGBoost model to learn only monotonic (always increasing or always decreasing) relationships between chosen features and the target.
---
# Monotonic constraints {: #monotonic-constraints }
In some projects (typically insurance and banking), you may want to force the directional relationship between a feature and the target (for example, higher home values should always lead to higher home insurance rates). By training with <em>monotonic constraints</em>, you force certain XGBoost models to learn only monotonic (always increasing or always decreasing) relationships between specific features and the target. For example, increasing values in feature(s) that describe the value of the home have an always increasing relationship with the target, `claim losses`. You set the direction of the relationships—increasing or decreasing—by creating feature lists and applying them in [**Advanced options > Feature Constraints**](feature-con). You can also create lists and retrain models, from the Leaderboard menu, after the initial model run.
## General workflow {: #general-workflow }
Typically, working with monotonic constraints follows the workflow described below. Review the [modeling considerations](#feature-considerations) before starting, including special considerations for time series projects.
1. [Create feature lists](#create-feature-lists) for monotonically increasing and/or monotonically decreasing constraints.
2. Set the target and the modeling feature list.
3. From the **Advanced options** link, [configure feature constraints](feature-con).
4. Start the model build using any of the [modeling modes](model-data#set-the-modeling-mode).
5. When model building finishes, [investigate the results](#evaluate-results).
6. Optionally, [retrain model(s)](#retrain-models-with-new-constraints) using different monotonic constraints (or without constraints).
7. Compare training results between models using the [**Feature Effects**](feature-effects) partial dependence graph to help with final model selection.
## Create feature lists {: #create-feature-lists }
The first step in monotonic modeling is to [create feature lists](feature-lists#create-feature-lists) that identify monotonically constrained features. The feature(s) contained in these list(s) are those that have a forced directional relationship, increasing or decreasing, with the target. Note that use of monotonic feature lists does not replace the standard model building feature list. Instead, these lists identify features in the selected model building feature list that should be handled differently.
When creating lists, remember:
* Features must be of type numeric, percentage, length, or currency.
* There can be no feature overlap in the feature lists for monotonically increasing and monotonically decreasing constraints.
* After creating lists, be sure to change the Feature List dropdown to the list for the project model build (otherwise, it remains on the last created list).
* The target feature is automatically added to every feature list.
## Evaluate results {: #evaluate-results }
When model building is complete, there are several tools for investigating and evaluating the results. First, check the Leaderboard to identify the model(s) that are, or can be, trained with monotonic constraints (identified with the MONO  badge). To determine if a model was built with constraints, check the Leaderboard model description.

Expand a constrained model and, from the **Describe > Constraints** tab, review the features that were constrained:

There are cases where some, or all, of the features identified in the constraint list were not used in the final modeling feature list. This happens, for example, when DataRobot creates a reduced feature list for the recommended model. In essence, this also serves as a constraint.

Next, calculate and view the [**Feature Effects**](feature-effects) partial dependence graph, checking the results for monotonically constrained features. This graph helps determine whether a monotonically constrained feature maintains the monotonic relationship with the predicted target.
Partial dependence is calculated, based on the validation set, between each predictor and predicted target values—it calculates the relationship between average target values and predictor values of each bin. Since impacts from other predictors on predicted target values tend to be averaged out, partial dependence reflects the relationship between an individual predictor and the target. If a model is trained with monotonic constraints, the relationship between monotonically constrained features and predicted target tends to be monotonic.
For example, this graph plots partial dependence on a model trained without constraints:

When monotonically decreasing constraints are applied, partial dependence looks more like this:

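Whether a partial dependence curve honors a constraint can be sketched as a simple monotonicity check over the binned averages. This is an illustrative helper, not DataRobot code; the input is assumed to be the averaged partial-dependence value per bin, in bin order.

```python
def is_monotonic(values, direction="decreasing"):
    """Check whether a sequence of averaged partial-dependence values
    keeps a monotonic relationship with the predictor bins."""
    pairs = list(zip(values, values[1:]))
    if direction == "decreasing":
        return all(b <= a for a, b in pairs)
    return all(b >= a for a, b in pairs)

# Unconstrained model: partial dependence dips and then rises again
print(is_monotonic([0.8, 0.5, 0.6, 0.4]))  # False
# After a monotonically decreasing constraint is applied
print(is_monotonic([0.8, 0.6, 0.5, 0.4]))  # True
```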
## Retrain models with new constraints {: #retrain-models-with-new-constraints }
You may decide, after a model build, to change the monotonic feature list and rerun one or more models. To do this:
1. From the Leaderboard, select models that support constraints by checking the box to the left of the model name. Click the MONO badge to filter the Leaderboard display so that it shows only eligible models.
2. Expand the menu and select **Run Selected Model(s) with Constraints**. Note that this option is only available if all selected models support constraints.
3. In the upper part of the screen, the window expands and provides a dropdown for selecting monotonic increasing and decreasing feature lists.

4. Once set, click **Run Models**.
## Feature considerations {: #feature-considerations}
The following considerations apply to monotonic modeling.
* Blenders that contain monotonic models do not display the MONO label on the Leaderboard.
* Monotonic modeling is available in binary classification and regression projects.
* Monotonic constraints can only be applied between Numeric, Percentage, Length, or Currency type variables and the target.
* Generalized Additive Models, Frequency/Severity, and Frequency/Cost models don't support interactions with features trained with monotonic constraints.
* Only Extreme Gradient Boosted Trees, Generalized Additive Models, Frequency/Severity models (both the frequency and severity models based on XGBoost), and Frequency/Cost models (both the frequency and cost models based on XGBoost) support training with monotonic constraints.
* Extreme Gradient Boosted Trees that include `Search for differences` tasks (e.g., DIFF3) or `Search for ratio` tasks (e.g., RATIO3) don't support training with monotonic constraints. (This is because these tasks tend to change the monotonic correlation between features and predicted target.)
* Constraints for models used by Autopilot or from the Repository can only be set in [advanced options](feature-con#monotonicity) and can't be changed after a project is created.
* New constraints can only be set for models on the Leaderboard.
* If `Include only monotonic models` is selected in advanced options, Autopilot only runs the average blender.
* When using monotonic constraints with time series projects:
* Only XGB models are supported
* Prior to the feature derivation process (i.e., monotonic lists added before EDA2), features used for monotonicity will be set as "do not derive."
* When there is an offset in the blueprint, for example naive predictions, the predictions themselves may not be monotonic after offset is applied. (XGBoost _does_ honor monotonicity.)
* If the model is a collection of models (per-series XGBoost or Performance Clustered blueprint, for example) monotonicity is preserved per-series/cluster.
|
monotonic
|
---
title: GA2M output (from Rating Tables)
description: Overview and detailed explanations of the output for Generalized Additive Model (GA2M) models, available for download from the Rating Tables tab.
---
# GA2M output (from Rating Tables) {: #ga2m-output-from-rating-tables }
The following sections help you understand the output for Generalized Additive Model (GA2M) models. This output is available as a download from the [**Rating Tables**](rating-table) tab.
## Read model output {: #read-model-output }
When examining the output, note the following:
* Pairwise interactions found by the GA2M model have the following characteristics:
- when there is an interaction of two variables, there is an additional table heading labeled `(Var1 & Var2)`.
- table rows that describe preprocessing and coefficients of pairwise interactions have a **Type** of `2W-INT`.
* **Feature Strength** describes the strength of each feature and pairwise interaction. Interaction strength is marginal and doesn't include the main effects strength. The Feature Strength is equal to the weighted average of the absolute value of the centered coefficients.
* **Transform1** and **Value1** describe the preprocessing of the first variable in the pair; **Transform2** and **Value2** describe the preprocessing of the second variable in the pair. The coefficient applies to the product of the two values derived from the preprocessing of the two variables.
* **Weight** is the sum of observations for each row of the table. If the project is using a weight variable, the **Weight** column is the sum of weights. This can be used to quantify the (weighted) number of observations in the training data that correspond to each bin of numeric feature, each level of categorical feature, or each cell of pairwise interaction.
The following is a sample excerpt from Generalized Additive Model output:

In the sample table, the **Intercept**, **Base**, **Loss distribution**, and **Link** function parameters describe the model in general and not any particular feature. Each row in the table describes a feature and the transformations DataRobot applies to it. To compute the predictions, you can use either the `Coefficient` column or the `Relativity` column. Use the `Coefficient` column if you want the prediction to have the same precision as DataRobot predictions.
For example, assume CRIM value equals 0.9 and LSTAT equals 8.
Using the **Coefficient** column, read the sample as follows:
For... | Coefficient value | From line...
-------| ------------------| -------------
Intercept | 3.080070 | 1|
Coefficient for CRIM=0.9 | -0.005546 | 12 (bin includes CRIM values 0.60079503 to inf) |
Coefficient for LSTAT=8 | 0.257544 | 14 (bin includes LSTAT values -inf to 9.72500038) |
Get Coefficient for CRIM=0.9 and LSTAT=8 | 0.122927 | 20 (bin for Value1, CRIM, equal to 0.9 and Value2, LSTAT equal to 8)
**Prediction** = exp(3.08006971649 -0.00554623809222501 + 0.257543518013598 + 0.122926708231993) = 31.658089382684512
Using the **Relativity** column, read the sample as follows:
For... | Relativity value | From line...
---------|-----------------:|-----------------
Base | 21.7599 | 2
Relativity for CRIM=0.9 | 0.9945 | 12 (bin includes CRIM values 0.60079503 to inf)
Relativity for LSTAT=8 | 1.2937 | 14 (bin includes LSTAT values -inf to 9.72500038)
Relativity for CRIM=0.9 and LSTAT=8 | 1.1308 | 20 (bin for Value1, CRIM, equal to 0.9 and Value2, LSTAT equal to 8)
**Prediction** = 21.7599193685 * 0.994469113891232 * 1.29374811110316 * 1.13080153946617 = 31.65808938265751
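Both calculations can be reproduced directly. The sketch below uses the coefficient and relativity values from the sample table and confirms that the two forms agree:

```python
import math

# Values taken from the sample table rows referenced above
intercept = 3.08006971649
coef_crim = -0.00554623809222501   # line 12: bin containing CRIM=0.9
coef_lstat = 0.257543518013598     # line 14: bin containing LSTAT=8
coef_pair = 0.122926708231993      # line 20: CRIM & LSTAT interaction cell

# Coefficient form: sum terms on the link scale, then invert the log link
pred_coef = math.exp(intercept + coef_crim + coef_lstat + coef_pair)

# Relativity form: multiply the base value by each relativity
pred_rel = 21.7599193685 * 0.994469113891232 * 1.29374811110316 * 1.13080153946617

print(round(pred_coef, 6), round(pred_rel, 6))  # both ≈ 31.658089
```

As the documentation notes, use the coefficient form when the prediction must match DataRobot's precision exactly; the relativity form is equivalent up to rounding.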
If the main model uses a [two-stage modeling process](model-ref#two-stage-models) (Frequency-Severity Generalized Additive Model, for example), two additional columns—`Frequency_Coefficient` and `Severity_Coefficient`—provide the coefficients of each stage.
## Allowed pairwise interactions in GA2M {: #allowed-pairwise-interactions-in-ga2m }
You can control which pairwise interactions are included in GA2M output (available in the [**Rating Tables**](rating-table) tab) instead of using every interaction or none of them. This allows you to specify which features are permitted to interact during the training of a GA2M model in cases where certain features are not permitted to interact due to regulatory constraints.
{% include 'includes/pairwise-warning.md' %}
Use the [**Feature Constraints**](feature-con#pairwise-interactions) advanced option to specify the allowed pairwise interactions for a model.
## Define transformations for GA2M {: #define-transformations-for-ga2m }
The following sections describe the routines DataRobot uses to reproduce predictions from a GAM.
### One-hot encoding {: #one-hot-encoding }
**Name**: One-hot
**Value**: string, or `Missing value`, or `Other categories`
**Value example**: 'MA'
**Value example**: Missing value
One-Hot (or dummy-variable) transformation of categorical features:
* If `value` is a string, then the derived feature contains 1.0 whenever the original feature equals `value`.
* If the value of the original feature is missing, then the "One-hot" transformation with "Missing value" is equal to 1.0.
* If `value` is "Other categories", then the derived feature contains 1.0 when the original feature doesn't match any of the above.
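A minimal sketch of this transformation (illustrative only; the `one_hot` helper is an assumption, with `None` standing in for a missing value):

```python
def one_hot(value, categories):
    """One-hot encode a single raw value against the known categories,
    with extra indicators for 'Missing value' and 'Other categories'."""
    cols = {c: 0.0 for c in categories}
    cols["Missing value"] = 0.0
    cols["Other categories"] = 0.0
    if value is None:
        cols["Missing value"] = 1.0          # missing activates its own indicator
    elif value in categories:
        cols[value] = 1.0                    # exact match activates that category
    else:
        cols["Other categories"] = 1.0       # anything else falls through
    return cols

print(one_hot("MA", ["MA", "NY"]))
# {'MA': 1.0, 'NY': 0.0, 'Missing value': 0.0, 'Other categories': 0.0}
```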
### Dummy encoding {: #dummy-encoding }
**Name**: Dummy
**Value**: string
**Value example**: 'MA'
The derived feature contains 1.0 whenever the original feature equals `value`.
### 1-Dummy encoding {: #1-dummy-encoding }
**Name**: 1-Dummy
**Value**: string
**Value example**: 'NOT MA'
The derived feature contains 1.0 whenever the original feature is different from `value` with the leading four characters 'NOT ' removed (for example, 'NOT MA' encodes to 1.0 whenever the original feature is not 'MA').
### Binning {: #binning }
**Name**: Binning
**Value**: (a, b], or `Missing value`
**Value example**: (-inf, 12.5]
**Value example**: (12.5, 25]
**Value example**: (25, inf)
**Value example**: Missing value
Transform numerical variables into non-uniform bins.
The boundary of each bin is defined by the two numbers specified in `value`. The derived feature equals 1.0 if the original value `x` is within the given interval:
a < x <= b
If the value of the original feature is missing, then the "Binning" transformation with "Missing value" is equal to 1.0.
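A minimal sketch of the (a, b] indicator (illustrative only; `None` stands in for a missing value, which would instead activate the separate "Missing value" bin):

```python
def bin_indicator(x, a, b):
    """Indicator for the half-open interval (a, b]: 1.0 when a < x <= b.
    A missing value (None) does not fall into any interval bin."""
    if x is None:
        return 0.0
    return 1.0 if a < x <= b else 0.0

bins = [(float("-inf"), 12.5), (12.5, 25), (25, float("inf"))]
print([bin_indicator(18, a, b) for a, b in bins])  # [0.0, 1.0, 0.0]
print(bin_indicator(12.5, float("-inf"), 12.5))    # 1.0 (upper boundary is inclusive)
```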
|
ga2m
|
---
title: Data Quality Handling Report
description: How to use the Data Quality Handling Report, which reports on tasks and imputation methods.
---
# Data Quality Handling Report {: #data-quality-handling-report }
The Data Quality Handling Report can be found in a model's **Describe** division.

The report includes the following information based on the training data:
| Field | Description |
|----------|-------------|
| Feature Name | Displays the feature name. Every feature in the dataset is listed, as well as transformed and OTV derived features. |
| Variable Type | The feature's variable type. |
| Row Count | Reports the number of rows in which the feature is missing from the training data. Click the column heading to change the sort order. |
| Percentage | Reports, as a percentage, the number of rows in which the feature is missing from the training data. Click the column heading to change the sort order. |
| Data Transformation Information | Lists the imputation task applied to the feature as well as the [applied value](#imputation-information). If more than one imputation task applies, all tasks are listed. |
Additionally, you can:
* Use **Search** to find a specific feature.
* Filter by column header.
## Supported tasks {: #supported-tasks }
The Data Quality Handling Report tab reports on the following supported tasks:
* Numeric values imputed
* Numeric data cleansing
* Ordinal encoding of categorical variables
* Categorical Embedding
* Category Count
* One-Hot Encoding
* VW encoding of categorical variables
## Imputation information {: #imputation-information }
The task information that can be returned in the **Data Transformation Information** column includes:
* the name of the [task](#supported-tasks).
* the imputed value inserted in the place of the missing value. Different preprocessing tasks have different strategies for assigning the value to use for imputation. In some cases, this can be tuned on the [Advanced Tuning](adv-tuning) tab.
* if DataRobot created a missing indicator feature, it displays `Missing indicator treated as feature`. This indicates that DataRobot created a new feature inside the blueprint with 1s in the rows where values in the original feature were missing and 0s where the original feature had a value. Sometimes the pattern of rows containing missing values is predictive and can increase accuracy when input into the model.
* (categorical features only) if DataRobot treated missing values as <em>infrequent</em> values, it displays `Missing values treated as infrequent`. This means that a row with a missing value is handled as if that row had a categorical value that did not occur very often in the feature. Different blueprints may handle infrequent values in categorical features differently.
* (categorical features only) if DataRobot treated infrequent values as <em>missing</em> values, it displays `Infrequent values treated as missing`. This means that a row with an infrequent value is handled as if that row had a missing value for that feature.
* for categorical features, if missing values were ignored, DataRobot displays `Missing values ignored`.
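The missing-indicator behavior described above can be sketched as follows (illustrative helper, not DataRobot code; `None` stands in for a missing value):

```python
def missing_indicator(column):
    """Build the extra feature described above: 1 in the rows where the
    original value is missing, 0 where the original feature had a value."""
    return [1 if v is None else 0 for v in column]

print(missing_indicator([3.2, None, 7.1, None]))  # [0, 1, 0, 1]
```

When the pattern of missingness is itself predictive, this derived column gives the model a direct way to learn from it.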
|
dq-report
|
---
title: Blueprint
description: How to use blueprints, which show the high-level end-to-end procedure for fitting the model, including preprocessing steps, algorithms, and post-processing.
---
# Blueprint {: #blueprints }
During the course of building predictive models, DataRobot runs several different versions of each algorithm and tests thousands of possible combinations of data preprocessing and parameter settings. (Many of the models use DataRobot proprietary approaches to data preprocessing.) The result of this testing is provided in the **Blueprints** tab.
Blueprints are ML pipelines containing preprocessing steps, modeling algorithms, and post-processing steps. They can be generated either automatically as part of Autopilot or manually/programmatically. Blueprints are found in three places in the application:
1. From the Leaderboard, as a visualization available for each trained model (this tab).
2. From the [Repository](repository), which contains all blueprints generated by (although not necessarily built by) Autopilot for a project.
3. In the AI Catalog, under the **Blueprints** tab.
??? faq "What is the difference between a model and a blueprint?"
A *modeling* algorithm fits a model to data, which is just one component of a blueprint. A *blueprint* represents the high-level, end-to-end procedure for fitting the model, including any preprocessing steps, modeling, and post-processing steps.
## View blueprints {: #view-blueprints }
To view a graphical representation of a blueprint, click a model on the Leaderboard.

## Blueprint components {: #blueprint-components }
Each blueprint has a few key sections.
| Section | Description |
|-----------|-------------|
| `Data` | The incoming data, separated into each type (categorical, numeric, text, image, geospatial, etc.). |
| Transformations | The tasks that perform transformations on the data (for example, `Missing values imputed`). Different columns in the dataset require different types of preparation and transformation. For example, some algorithms recommend subtracting the mean and dividing by the standard deviation of the input data—but this would not make sense for text input data. The first step in the execution of a blueprint is to identify data types that belong together so they can be processed separately. |
| Model(s) | The model(s) making predictions or possibly supplying [stacked predictions](data-partitioning#what-are-stacked-predictions) to a subsequent model. |
| Post-processing | Any post-processing steps, such as `Calibration`. |
| `Prediction` | The data being sent as the final predictions. |
Each blueprint has nodes and edges (i.e., connections). A node will take in data, perform an operation, and output the data in its new form. An edge is a representation of the flow of data.
When two edges are <em>received</em> by a single node:

It is a representation of two sets of columns being received by the node— the two sets of columns are stacked horizontally. That is, the column count of the incoming data is the sum of the two sets of columns and the row count remains the same.
If two edges are <em>output</em> by a single node, it is a representation of two copies of the output data being sent to other nodes. Other nodes in the blueprint are other types of data transformations or models.

Click a blueprint node to display additional information, including access to model documentation.
## Blueprint controls {: #blueprint-controls }
From the blueprint canvas, you can:
* Click, hold, and drag to move the blueprint around the canvas.
* Add the blueprint to the AI Catalog for later editing, re-use, and sharing.
* [Copy and edit blueprints](cml-blueprint-edit).

|
blueprints
|
---
title: Series Insights (multiseries)
description: Available for multiseries projects, the Series Insights tab provides series-specific information in both charted and tabular format.
---
# Series Insights (multiseries) {: #series-insights-multiseries }
The **Series Insights** tab for time series [multiseries](multiseries) projects provides series-specific information. (A clustering-specific version of [**Series Insights**](series-insights) is also available.) For multiseries, insights are reported in both charted and tabular format:
* The [histogram](#series-insights-histogram) provides binned data representing accuracy, average scores, length, and start/end date distribution by count for each series. Clicking on any bar populates the table below it with results from that bin.
* The [table](#series-insights-table-view) displays basic information for each series that falls within the selected binned region from the chart.

Note that for large datasets, DataRobot computes scores and values after downsampling.
## Use Series Insights {: #use-series-insights }
To speed processing, **Series Insights** visualizations are initially computed for the first 1000 series (sorted by ID). You can, however, run calculations for the remaining series data. As each new calculation is computed, additional details become available. Complete information depends on accuracy calculations for all backtests.
On first opening a model from the Leaderboard, the chart defaults to binning by **Total Length**. At this point you can select from a variety of plot distributions and bin counts. Note however that sorting by accuracy is disabled.

Click either **Run** (under **All Backtests** on the Leaderboard) or **Compute remaining backtests** (above the table) to activate additional options:

Selecting either one changes both to indicate that backtest calculations are in progress. Once the run completes, backtests have been computed but accuracy has not. Click the **Compute accuracy scores** link above the table to compute accuracy. With accuracy calculations complete, the distribution options change, as described in the [chart interpretation](#series-insights-histogram) section below.

## Interpret the insights {: #interpret-the-insights }
The page insights display aggregated (chart) and individual (table) series information. The insights are available immediately upon opening the tab, but all accuracy calculations must be complete before the full functionality is available. The sections below describe how to understand the output.
### Series Insights histogram {: #series-insights-histogram }
The histogram provides an at-a-glance indication of the series distribution (for the first 1000 series, regardless of whether all series are computed) based on a variety of metrics. Initially you can set the distribution to length, start or end date, or target average. Use the dropdowns to set the method and the number of bins for the display. When you have calculated accuracy, selecting that distribution adds options.

If you select **Accuracy** as the distribution method, you can additionally filter the display by partition and metric:
* Partition: Sorts by accuracy score for Backtest 1, the average score across all backtests, or the Holdout score. Regardless of the number of backtests configured for a project, only Backtest 1 and an average value are available for selection.
* Metric: Selects the metric to base the accuracy score on. By default, the display uses the project metric.

Hover on a bin to show a tooltip that displays series counts and binned value. In this example, when displayed for accuracy, 105 series had scores between (roughly) .69 and .89 when the metric was RMSE:

Clicking on a bin updates the table display to include results only for those series within the selected bin.

### Series Insights table view {: #series-insights-table-view }
The table below the histogram provides series-specific information for either the first 1000 series (based on the histogram filters) or the series in a selected bin. The sort order defaults to series ID but you can click any column to re-sort. Some entries in the table may be missing values. This is most likely because you have not yet computed their scores or the individual series does not overlap with the selected partition.

Use the search function to view metrics for any series. Note in the example below that additional accuracy scores have not yet been computed:

The displayed table reports the following for each series:
| Component | Description |
|-------------|--------------|
|  | Opens the selected series in the **Accuracy Over Time** tab (further calculation in that tab may be required). |
| Total length | Displays the number of entries in the series. Use the **Options** link () above the table to set the view to rows or duration. |
| Start Date / End Date | Displays the first and last timestamps of the series in the dataset. |
| Target Average (regression) <br /> Positive Class (classification) | Regression: Displays the average value of the target over the range of the dataset in that series. <br /> Classification: Displays the fraction of the positive class the target makes up over the range of the dataset in that series. |
| Backtest 1 | Displays the average backtest score for Backtest 1 across the series. |
| All Backtests | Displays the average backtest score for all backtests across the series (requires having run backtests from the Leaderboard or via the **Compute remaining backtests** link). |
| Holdout | Displays the score for the Holdout fold, if unlocked. |
Use the **Options** () link to download the table data to a CSV file.
### Interpreting scaled metrics in Series Insights {: #interpreting-scaled-metrics-in-series-insights }
Series Insights handles the time series scaled metrics [MASE](opt-metric#mase) and [Theil's U](opt-metric#theils-u) differently from other metrics. MASE and Theil's U compare the model to a baseline model and are calculated as ratios. As ratios, they can result in values of infinity, so DataRobot caps these values at 100M.
To prevent these high values from distorting the Series Insights histogram plot, DataRobot filters them out of the display. They are, however, retained in the corresponding Series Insights table, where they display as the capped value of 100M in backtest columns. The following table displays accuracy using the MASE metric:

For the All Backtests column, DataRobot averages all backtest scores, which can lead to fractions of 100M. If only one of the backtests has an infinity cap, the values are in the range of `100M/number of backtests` to `100M` (e.g., ~50M for two backtests, ~33M for three backtests, etc.).
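A minimal sketch of this capping-and-averaging behavior (the 100M cap comes from this page; the function names are illustrative, not DataRobot internals):

```python
CAP = 100_000_000  # infinite ratio metrics are capped at 100M

def capped(score):
    # An infinite (or extremely large) MASE / Theil's U ratio is
    # replaced by the cap before it reaches the table.
    return min(score, CAP)

def all_backtests_average(scores):
    # The All Backtests column averages the capped per-backtest scores.
    return sum(capped(s) for s in scores) / len(scores)

# One of two backtests hit the infinity cap:
print(all_backtests_average([float("inf"), 1.2]))  # 50000000.6 (~50M)
```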
|
series-insights-multi
|
---
title: Anomaly visualizations
description: During unsupervised learning on time series, anomaly visualizations help to locate and analyze anomalies that occur across the timeline of your data.
---
# Anomaly visualizations {: #anomaly-visualizations }
For time series [anomaly detection](anomaly-detection), DataRobot provides the following additional visualizations to help view and understand anomaly scores.
* [Anomaly Over Time](#anomaly-over-time)
* [Anomaly Assessment](#anomaly-assessment)
## Anomaly Over Time {: #anomaly-over-time }
The **Anomaly Over Time** visualization helps to understand when anomalies occur across the timeline of your data. It functions similarly to the non-anomaly [**Accuracy Over Time**](aot) chart. See that chart description for details of the configurable elements (backtest, forecast distance, etc.) and controlling the display.
!!! note
Because multiseries projects can have up to 1 million series and up to 1000 forecast distances, calculating accuracy charts for all series data can be extremely compute-intensive and often unnecessary. To avoid this, DataRobot provides [alternative calculation options](aot#display-by-series).

This chart, in addition to handles that control the preview (1), provides an additional handle to control the anomaly threshold (2). Drag the handle up and down to set the threshold that defines whether plot values should be considered as anomalies. Points above the threshold are indicated in red, both in the upper chart and in the preview (3).
If you are using **Model Comparison** to visualize anomaly detection over time for two selected models, the page displays predicted anomalies in an **Anomaly Over Time** chart for each model and a **Summary** chart that visualizes where the anomaly models agree or disagree.

To control the anomaly threshold, drag the handle up and down independently for each model. Note that thresholds vary between models in the same project, meaning they do not need to be the same across the two charts to make an accurate comparison.

As the handle moves, the **Summary** chart updates to only display bins above the anomaly thresholds.

Select a date range of interest using the time selector at the bottom of the page. Both the **Anomaly Over Time** charts and **Summary** chart update to reflect the selected time window.
Comparing **Anomaly Over Time** is a good method for identifying two complementary models to blend, increasing the likelihood of capturing more potential issues. However, you must have a good understanding of where the actual anomalies are in your data; neither chart indicates if anomalies are correctly predicted. For example, while comparing the **Anomaly Over Time** of two models, you might find that one model is able to detect more issues, but another model is able to detect issues earlier. Training a blender out of these two models results in more efficient anomaly detection.
## Anomaly Assessment {: #anomaly-assessment }
The **Anomaly Assessment** tab plots data for the selected backtest and provides, below the visualization, [SHAP explanations](shap-pe) for up to 500 anomalous points. Red points on the chart indicate that explanations are calculated and available. Clicking on an explanation expands and computes the [Feature Over Time](#display-the-feature-over-time-chart) chart for the selected feature. The chart and explanations together provide some explanation for the source of an anomaly.
### Anomaly Assessment chart {: #anomaly-assessment-chart }
When you open the tab and click to compute the assessment, the most anomalous point in the validation data is selected by default (a white vertical bar) with corresponding explanations below. Hover on any point to see the prediction for that point; click elsewhere in the chart to move the bar. As the bar moves, the explanations below the chart update.

!!! note
    SHAP explanations are available for up to 500 anomalous points per backtest. When a selected backtest has more than 500 sample points, the display uses red to indicate points for which SHAP explanations are available and blue to show points without SHAP explanations. In other words, color coding, in this case, represents the availability of SHAP explanations, not the value of the anomaly score.
#### Control the chart display {: #control-the-chart-display }
The chart provides several controls that modify the display, allowing you to focus on areas of particular interest.

<b>Backtest / series selector</b>
Use the dropdown to select a specific backtest or the holdout partition. The chart updates to include only data from within that date range. For multiseries projects there is an additional dropdown that allows you to select the series of interest.

<b>Compute for training / Show training data</b>
Initially the **Anomaly Assessment** chart displays anomalies found in the validation data. Click **Compute for training** to calculate anomalous points in the training data. Note, however, that training data is not a reliable measure of a model's ability to predict on future data.
Once computed, use the **Show training data** option to show training and validation (box checked) or only validation data (box unchecked).
<b>Zoom to fit</b>
The **Zoom to fit** checkbox controls how DataRobot scales the chart's Y-axis. When unchecked, the chart scales to show the full possible range of target values:

When checked, it scales from the minimum to maximum values, which can change the relative difference in the anomaly score:

See the [example](aot#zoom-the-display) in the **Accuracy Over Time** description for more detail.
<b>Preview handles</b>
Use the handles on preview pane to narrow the display in the chart above. Gradient coloring, in both the preview and the chart, indicates division in partitions, if applicable. Changes to the preview pane also impact the display of the [**Feature Over Time**](#display-the-feature-over-time-chart) chart.

### Display anomaly information {: #display-anomaly-information }
Hover on any point in the chart to see a report of the date and prediction score for that point:

Click a point to move the vertical bar to that point, which in turn updates the displayed [SHAP scores](shap-pe). The SHAP score helps to understand how a feature is involved in the prediction.
#### List SHAP explanations {: #list-shap-explanations }
The white vertical bar in the main chart serves as a selector that controls the SHAP explanation display. As you click through the chart, you can notice that the list of explanations (scores) changes. For example, on 11/23/08 the anomaly score was fairly low with a derivation of the feature "Sales" having the most impact:

On 02/07/09, by contrast, a higher score is attributed to the actual precipitation on that day:

If a point is not anomalous, no SHAP scores are listed.
#### Display the Feature Over Time chart {: #display-the-feature-over-time-chart }
From the list of SHAP scores, click a feature to see its **Over Time** chart. (Read more about the [**Over Time**](ts-leaderboard#understand-a-features-over-time-chart) chart in the time series documentation.) The plot is computed for each backtest and series.
The white bar is based on the location set in the full chart. Note that if the selected anomaly point is in the training data, and **Show training data** is unchecked, the bar does not display.
Drag the handles in the preview pane to focus the display.

The chart is not available for text or categorical features.
|
anom-viz
|
---
title: Residuals
description: The Residuals tab helps you understand the predictive performance and validity of a regression model by letting you gauge how linearly your model scales.
---
# Residuals {: #residuals }
The Residuals tab is designed to help you clearly understand the predictive performance and validity of a regression model. It allows you to gauge how linearly your models scale relative to the actual values of the dataset used.
This tab provides multiple scatter plots and a histogram to assist your residual analysis:
* Predicted vs Actual
* Residual vs Actual
* Residual vs Predicted
* Residuals histogram
<em>Predicted</em> values are those predicted by the model, <em>actual</em> values are the real-world outcome data, and <em>residual</em> values represent the difference of `predicted value - actual value`.
!!! note
    Because these plots are created as part of the model fit process, this tab is only accessible for models created with version 5.2 and later (or after 7/1/2019 for managed AI Platform users). You must manually re-run existing models to view the Residuals tab for them. Additionally, the Residual vs Predicted plot and the Residuals histogram are only available for version 5.3 and later (or after 11/13/2019 for managed AI Platform users).
## Access the Residuals tab {: #access-the-residuals-tab }
The Residuals tab can be accessed from the **Leaderboard**. You can start a new project or [add a new model](creating-addl-models#adding-models-from-the-leaderboard) to the Leaderboard; either option triggers the model-fit process that creates the scatter plots.
Note that the Residuals tab is not available for frozen run models if there are no out-of-sample predictions. You are redirected to the **Residuals** tab of the parent model.

1. To begin, start a new project by importing a dataset.
2. Select a numeric target feature to build a regression model. Set modeling parameters and a build mode; <em>do not</em> enable time-aware modeling (if available).

3. When a model completes and is available on the **Leaderboard**, expand the model and select **Evaluate** > **Residuals** to display the scatter plot:

### Access individual plots {: #access-individual-plots }
From the **Residuals** tab, you can access each plot by selecting the appropriate distribution: **Predictions** or **Residuals**.

Select **Predictions distribution** to display the Predicted vs. Actual scatter plot.

Select **Residuals distribution** to view the Residual vs Actual plot, the Residual vs Predicted plot, and the Residuals histogram.

## Interpret plots and graphs {: #interpret-plots-and-graphs }
Each scatter plot has a variety of analytical components.
### Accuracy Parameters {: #accuracy-parameters }

The reported **Residual mean** value (1) is the mean (average) difference between the predicted value and the actual value.
The reported **Coefficient of determination** value (2), denoted by r^2, is the proportion of the variance in the dependent variable that is predictable from the independent variable.
The **Standard Deviation** value (3) measures variation in the dataset. A low value indicates that the data points tend to be close to the mean; a high value indicates that the data points are spread over a wider range of values.
!!! info "Availability information"
The standard deviation calculation for these scatter plots is only displayed for Self-Managed AI Platform users with version 5.3 or later.
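These three statistics can be computed as follows (a plain-Python sketch for illustration, not DataRobot's implementation; the standard deviation here is taken over the residuals):

```python
def residual_stats(predicted, actual):
    residuals = [p - a for p, a in zip(predicted, actual)]
    n = len(residuals)

    # (1) Residual mean: average of predicted - actual.
    residual_mean = sum(residuals) / n

    # (2) Coefficient of determination (r^2): proportion of variance
    # in the actual values explained by the predictions.
    actual_mean = sum(actual) / n
    ss_res = sum(r ** 2 for r in residuals)
    ss_tot = sum((a - actual_mean) ** 2 for a in actual)
    r_squared = 1 - ss_res / ss_tot

    # (3) Standard deviation of the residuals: spread around the mean.
    variance = sum((r - residual_mean) ** 2 for r in residuals) / n
    return residual_mean, r_squared, variance ** 0.5

# A perfect model has zero residual mean, r^2 of 1, and zero spread:
print(residual_stats([1, 2, 3], [1, 2, 3]))  # (0.0, 1.0, 0.0)
```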
### Plot and graph actions {: #plot-and-graph-actions }
!!! tip
{% include 'includes/slices-viz-include.md' %}
The **Residuals** plots and graphs have multiple actions available, including data selection, data slices, export, and settings.

Below each scatter plot, the **Data Selection** dropdown allows you to switch between data sources. Choose between Validation, Cross Validation, or Holdout data.
The **Export** button allows you to export the scatter plots as a PNG, CSV, or ZIP file:

The settings wheel icon allows you to adjust the scaling of the x- and y-axes. Select linear or log scaling for each axis, and all graphs will adjust accordingly.

For example, compare the Predicted vs. Actual plot with linear scaling (left) to log scaling (right):

To examine an area of any plot more closely, hover over the plot and zoom in or out.

Once zoomed in, click and drag the plot to examine different areas.
### Interact with the scatter plots {: #interact-with-the-scatter-plots }
You can highlight residuals `x` times greater than the standard deviation by toggling the check box on.

Enter a value to change the number of times greater the residuals must be than the standard deviation in order for the residuals to be highlighted. For example, if set to 3, the only points highlighted are those with values three times greater than the standard deviation. Highlighted residuals are represented by yellow points:

Hovering over individual points on the plots displays the **Data Point** bin. The bin allows you to compare the predicted or residual values to the actual values for a given blue dot. For the predicted vs actual plot, hover over a specific dot to compare how far the predicted value (represented by the blue dot) differs from that specific actual value (represented by the gray line).

For the Residual vs Actual plot, hover over a specific point to see the exact residual value for a given actual value. Each dot's coordinates are based on these values (residual for the y-axis coordinate and actual for the x-axis coordinate), and the distance from the horizontal gray line indicates the difference between the predicted and actual values. The greater the difference, the further a point is from the line.

The Residual vs Predicted plot is structured the same way, but compares the predicted values to residuals instead.

The Residuals histogram bins residuals by ranges of values, and measures the number of residuals in each bin.

|
residuals
|
---
title: Training Dashboard
description: Use the Training Dashboard to view a model's training and test loss, accuracy (for some projects), learning rate, and momentum, to learn how training went.
---
# Training Dashboard {: #training-dashboard }
!!! note
The Training Dashboard tab is currently available for Keras-based (deep learning) models only.
Use the **Training Dashboard** to get a better understanding about what may have happened during model training. The model training dashboard provides, for each executed iteration, information about a model's:
* training and test loss
* accuracy (classification, multiclass, and multilabel projects only)
* learning rate
* momentum
Running a large grid-search to find the best performing model without first performing a deeper assessment of the model is likely to result in a suboptimal model. From the Training Dashboard tab, you can assess the impact that each parameter has on model performance. Additionally, the tab provides visualizations of the learning rate and momentum schedules to ensure transparency.
Tracking both training and test (validation) data throughout the training procedure makes it easy to assess whether each candidate model is overfitting, is underfitting, or has a good fit.

The dashboard comprises the following functional sections:
* [Model selection](#model-selection) (1)
* [Loss iterations](#loss-graph) (2)
* [Accuracy iterations](#accuracy-graph) (3)
* [Learning rate](#learning-rate) (4)
* [Momentum](#momentum) (5)
* [Hyperparameter comparison chart](#hyperparameter-comparison-chart) (6)
* [Settings](#settings) (7)
Hover on any point in a graph to see the exact training (solid line) and, where applicable, test (dotted line) values for a specific iteration.

## Model selection {: #model-selection }
From the model selection controls, choose a model to highlight ("Model in focus") in the informative graphs by cycling through the available options or clicking a model name in the legend. Each graph, where applicable, brings results for that model to the foreground; other model results remain visible at low opacity to aid comparison.

The models available for selection are based on the automation DataRobot applied. For example, if DataRobot used internal heuristics to set hyperparameters and a [grid search](adv-tuning) was not required, you will see only “Candidate 1” and “Final”. If DataRobot performed a grid search or a Neural Architecture Search model was trained, multiple candidate models will be available for comparison.
Candidate models are trained on 85% of the training data; the remaining 15% is used to track performance. Candidates are sorted from lowest to highest, based on project metric performance. Once the best candidate is found, DataRobot creates a final model on the full training data.
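The candidate-selection procedure can be sketched generically (hypothetical `fit` and `score` callables stand in for DataRobot's internal training and metric evaluation):

```python
def select_final_model(candidates, rows, fit, score):
    # Candidates train on 85% of the training data; the remaining
    # 15% tracks each candidate's performance.
    cut = int(len(rows) * 0.85)
    train, track = rows[:cut], rows[cut:]

    # Rank candidates from lowest (best) to highest tracked score.
    ranked = sorted(candidates, key=lambda c: score(fit(c, train), track))

    # Refit the best candidate on the full training data.
    return fit(ranked[0], rows)
```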
## Loss graph {: #loss-graph }
Neural networks are iterative in nature; as weights are updated to minimize the loss (objective) function, accuracy should generally improve. The **Loss** graph illustrates, by number of iterations (X-axis), the results of the loss function (Y-axis). Understanding these loss curves is crucial to understanding whether a model is underfit or overfit.
When plotting the *training* dataset, the loss should reduce (the curve should lower and flatten) as the number of iterations increases.

**Interpretation notes:**
* For the *testing* dataset, loss may start increasing after a certain number of iterations. This might suggest that the model is overfitting and not generalizing well enough to the test set.
* Another underfitting warning sign is a graph where the test loss is equivalent to the training loss, and the training loss never approaches 0.
* A test loss that drops and then later rises is an indication that the model has likely started to overfit on the training dataset.
In the following example, notice how test loss almost immediately diverged from training loss, indicating overfitting. For general information on training neural networks and deep learning for tabular data, see this [YouTube video](https://www.youtube.com/watch?v=WPQOkoXhdBQ&t=4s){ target=_blank }.

## Accuracy graph {: #accuracy-graph }
_For classification, multiclass, and multilabel only_
The **Accuracy** graph measures how many rows are correctly labeled. When evaluating the graph, look for an increasing value. Is it moving toward 1 and how close does it get? How does it relate to **Loss**?

In the graphs above, the model becomes very accurate around iteration 50 (exceeding 95%), while the loss is still dropping considerably and is only about halfway to the bottom of the curve at that point.
If training stopped at that point, predictions would probably be close to 0.5 (for example, 0.55 for positive and 0.45 for negative). The goal, however, is 0 for negative and 1 for positive. The accuracy reading acts as a kind of "confidence". If you keep training to minimize the loss function, you can see log loss continue to fall while accuracy barely improves (if at all). This is a good heuristic that indicates the model will do better on out-of-sample data.
Examining accuracy generally provides similar information to examining loss. Loss functions, however, may incorporate other criteria for determining error (e.g., log loss provides a metric for confidence). Comparing accuracy and loss can help inform decisions regarding the loss function (is the loss function appropriate for the problem?). If you tune the `loss` parameter, the loss function may improve over time while accuracy only slightly increases. This would suggest that other hyperparameters might be better applied.
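The difference between the two metrics is easy to see with a small log-loss computation (illustrative numbers only):

```python
import math

def log_loss(probs, labels):
    # Mean negative log-likelihood: still penalizes low confidence
    # even when every predicted class is already correct.
    return -sum(
        math.log(p) if y == 1 else math.log(1 - p)
        for p, y in zip(probs, labels)
    ) / len(labels)

def accuracy(probs, labels):
    return sum((p >= 0.5) == (y == 1) for p, y in zip(probs, labels)) / len(labels)

labels = [1, 0, 1, 0]
early = [0.55, 0.45, 0.55, 0.45]  # correct but unconfident
late  = [0.95, 0.05, 0.95, 0.05]  # correct and confident

# Accuracy is identical, but loss keeps improving with confidence:
print(accuracy(early, labels), accuracy(late, labels))  # 1.0 1.0
print(round(log_loss(early, labels), 3), round(log_loss(late, labels), 3))
# 0.598 0.051
```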
## Learning rate {: #learning-rate }
The **Learning rate** graph illustrates how the learning rate varies over the course of training.
To determine how a neural network's weights should be updated after training on each batch of data, the gradient of the loss function is calculated and multiplied by a small number. That small number is the learning rate.
Using a high learning rate early in training can help regularize the network, but warming up the learning rate first (starting at a low learning rate and increasing) can help mitigate early overfitting. By cyclically varying between reasonable bounds, instead of monotonically decreasing the learning rate, saddle points can be handled more easily (due to their nature of producing very small gradients).
By default, DataRobot performs a specific cosine variant of the 1cycle method (using a shape found through many experiments) and exposes its parameters, providing full control over the learning rate schedule.
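A minimal sketch of a cosine one-cycle schedule (the exact shape and bounds DataRobot uses come from its internal experiments; the parameters below are illustrative):

```python
import math

def one_cycle_lr(step, total_steps, lr_min=1e-4, lr_max=1e-2, warmup=0.3):
    """Cosine warmup from lr_min to lr_max, then cosine anneal back."""
    peak = int(total_steps * warmup)
    if step < peak:  # warmup phase
        t = step / peak
        return lr_min + (lr_max - lr_min) * (1 - math.cos(math.pi * t)) / 2
    t = (step - peak) / (total_steps - peak)  # annealing phase
    return lr_min + (lr_max - lr_min) * (1 + math.cos(math.pi * t)) / 2

schedule = [one_cycle_lr(s, 100) for s in range(100)]
# Starts low (1e-4), peaks at step 30 (1e-2), and anneals back down.
```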
When learning rates are the same between candidate models, lines overlay each other. Click each candidate to see its values.

## Momentum {: #momentum }
The **Momentum** graph is only available for models with optimizers that use momentum. Momentum is used by the optimizer to adapt weight updates, using previous gradients to reduce the impact of noisy gradients. It automatically performs larger updates when gradients repeatedly move in one direction, and smaller updates when nearing a minimum.
In the following example, the model uses `adam` (the default optimizer); the graph illustrates how momentum varies over the course of training.

For any other optimizer, momentum does not vary over time and is therefore not shown in the chart.

This external public resource provides more information on popular <a target="_blank" href="https://ruder.io/optimizing-gradient-descent/">gradient-based optimization algorithms</a>.
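The smoothing effect described above can be sketched with the classic momentum update (a generic illustration, not DataRobot's optimizer code):

```python
def momentum_updates(gradients, beta=0.9):
    # Velocity is an exponential moving average of past gradients:
    # consistent gradients accumulate, noisy ones partly cancel out.
    velocity, history = 0.0, []
    for g in gradients:
        velocity = beta * velocity + (1 - beta) * g
        history.append(velocity)
    return history

steady = momentum_updates([1.0] * 10)       # grows toward 1.0
noisy  = momentum_updates([1.0, -1.0] * 5)  # stays near 0
```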
## Hyperparameter comparison chart {: #hyperparameter-comparison-chart }
The hyperparameter comparison chart shows—and compares—hyperparameter values for the active candidate model and a selected comparison model.
Use this information to inform decisions about which parameters to tune in order to improve the final model. For example, if a model improves in each candidate with an increased batch size, it might be worth experimenting with an even larger batch size.
Models with different values for a single parameter are highlighted by a yellow vertical bar. Use the chart's scroll bar to view all parameters, which are listed alphabetically.

Most hyperparameters are tunable. Clicking the parameter name in the chart opens the [**Advanced Tuning**](adv-tuning) tab, where you can change values to run more experiments.
## Settings {: #settings }
There are a variety of **Training Dashboard** settings that change the display and can help interpretability.
### Candidate selection {: #candidate-selection }
Use the **Models to show** option to choose which candidate models to display results for. You can manually select, search, select all, or choose the final model. By default, DataRobot displays up to four candidates and the final model. Candidates are ranked and then sorted from highest to lowest performance.

### More options {: #more-options }
Click **More** () to manage the display.
**Apply log scale for loss**
If it is difficult to fully interpret results, enable the log scale view. Default view:

With log scale applied:

**Smooth plots**
If the chart is very noisy, it is often useful to add smoothing. The noisier the plot, the more dramatic the effect.
**Reduce data points**
If many candidates are run, and/or thousands of iterations are shown in the graphs, you may experience performance degradation. When **Reduce data points** is enabled (the default), DataRobot reduces the number of data points, automatically determining a maximum that still provides a performant interface. If you require every data point included, disable the setting. For data-heavy charts, you should expect degraded performance when the option is disabled.
**Only show test data**
In displays where test loss is fully hidden behind the training loss, use **Only show test data** to remove training data results from the **Loss** graph.
|
training-dash
|
---
title: Forecast vs Actual
description: How to use Forecast vs Actual, which allows you to compare how different predictions behave from different forecast points to different times in the future.
---
# Forecast vs Actual {: #forecast-vs-actual }
Time series forecasting predicts multiple values for each point in time (forecast distances). While the [**Accuracy Over Time**](aot) chart displays a single forecast at a time, you can use the **Forecast vs Actual** chart to show multiple forecast distances in one view. For example, imagine forecasting the weather. Your forecast point might be today, and you can forecast out a day, or maybe a week. Predicting tomorrow’s weather from today will have a very different accuracy than predicting the weather a week from today. Those spans are called forecast “distances”.

**Forecast vs Actual** allows you to compare how different predictions behave from different forecast points to different times in the future. Use the chart to help answer what, for your needs, is the best distance to predict. Forecasting out only one day may provide the best results, but it may not be the most actionable for your business. Forecasting the next three days out, however, may provide relatively good accuracy and give your business time to react to the information provided. If your project included calendar data, those events are displayed on this chart, helping you to gain insight into the effects of those events.
The **Forecast vs Actual** chart is not available for OTV or unsupervised projects.
## Chart display options {: #chart-display-options }
The **Forecast vs Actual** chart has many similarities to the [**Accuracy Over Time**](aot) chart in its display controls. Other than the **Forecast Range** control (**Forecast Distance** in **Accuracy Over Time**), the following controls work the same.

See the **Accuracy Over Time** documentation for descriptions of:
* [Backtest](aot#change-the-displayed-backtest)
* [Series to plot](aot#display-by-series) (multiseries only), including use of the [bulk calculation](#compute-multiple-series) feature.
* [Compute for training](aot#compute-training-data)
Under additional settings:
* [Resolution](aot#change-the-binning-resolution)
* [Show full date range](aot#change-the-date-range)
* [Zoom to fit](aot#zoom-the-display)
* [Export](aot#export-data)
!!! note
Because multiseries projects can have up to 1 million series and up to 1000 forecast distances, calculating accuracy charts for all series data can be extremely compute-intensive and often unnecessary. To avoid this, DataRobot provides [alternative calculation options](aot#display-by-series).
As with other time series visualizations, drag the handles on the preview panel to bring specific areas into focus on the main chart.

## Forecast range {: #forecast-range }
Where **Accuracy Over Time** allows you to set a single forecast distance—one day from now, four days from now—use **Forecast vs Actual** to plot a range of distances (for example, one to seven days from now). There are three ways to set the start point for the range.

The start point is marked by a blue bar in the chart. If the **Forecast Range** is set to `+1 to +7 days`, for example, the chart will display forecasts for days 1, 2, 3...7 from the blue bar. When you change the date using one of the mechanisms, the change is reflected in the others.
1. Click anywhere in the chart to set that date as the start point (1).
2. Drag the handle to the start point (2).
3. Use the calendar picker to set a date (3).
!!! note
If you change the **Forecast range** to represent a single value, and that step is equivalent to the **Forecast distance** in [**Accuracy Over Time**](aot), the chart is available without further calculations.
## Interpret the insight {: #interpret-the-insight }
**Forecast vs Actual** helps to visualize how predictions change over time in the context of a forecast range. The open orange circle represents actual values from your data. Solid blue circles represent predicted values on dates contained within the forecast range. If you used a [calendar file](ts-adv-opt#calendar-files) when you created the project, the display also includes markers to indicate [calendar events](aot#identify-calendar-events).
Hover on any point in the chart for a tooltip listing the values for that bin. Information is available for all calculated points, regardless of whether or not they are included in the currently selected forecast distance:

The tooltip reports the average absolute residual value. This value is also represented in bar chart form at the bottom of the main chart:

Residuals measure the difference between actual and predicted values. They help to visualize whether there is an unexplained trend in your data that the model did not account for and how the model errors change over time.
??? info "How are absolute residual values calculated?"
The average absolute residual value represents the "error" between the actual and forecasted results. To calculate, DataRobot takes the average value of the absolute difference between actuals and forecast shown in the subsequent bins:

For example, in the image above, values are calculated as:
- First bin error: `(|A2 - F2| + |A3 - F3| + |A4 - F4| + |A5 - F5|) / 4`
- Second bin error: `(|A3 - F3| + |A4 - F4| + |A5 - F5|) / 3`
- Third bin error: `(|A4 - F4| + |A5 - F5|) / 2`
- Fourth bin error: `|A5 - F5|`
The last bin does not have an error because there are no subsequent bins.
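The calculation above can be sketched in Python (a minimal illustration of the rule, not DataRobot's implementation):

```python
def bin_errors(actuals, forecasts):
    """Average absolute residual for each bin, using all subsequent bins.

    For bin i, the error averages |A_j - F_j| over bins j > i; the last
    bin has no subsequent bins and therefore no error (None).
    """
    errors = []
    for i in range(len(actuals)):
        later = [abs(a - f) for a, f in zip(actuals[i + 1:], forecasts[i + 1:])]
        errors.append(sum(later) / len(later) if later else None)
    return errors

# Five bins: the first bin's error averages the four later residuals.
actuals = [10.0, 12.0, 11.0, 13.0, 12.0]
forecasts = [9.0, 13.0, 10.0, 15.0, 12.5]
errors = bin_errors(actuals, forecasts)
```

Here the first bin's error is `(1 + 1 + 2 + 0.5) / 4 = 1.125`, and the last bin's error is `None`.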
|
fore-act
|
---
title: Forecasting Accuracy
description: The Forecasting Accuracy tab provides a visual indicator of how well a model predicts at each forecast distance in the project's forecast window.
---
# Forecasting Accuracy {: #forecasting-accuracy }
The **Forecasting Accuracy** tab provides a visual indicator of how well a model predicts at each forecast distance in the project's forecast window. It is available for all [time series](time/index) projects (both single series and multiseries). Use it to help determine, for example, how much harder it is to accurately forecast four days out as opposed to two days out. The chart depicts how accuracy changes as you move further into the future.

For each forecast distance, the points represent:
* Green (Backtest 1): the validation score displayed on the Leaderboard, which represents the validation score of the first (most recent) backtest.
* Blue (All Backtests): the backtesting score displayed on the Leaderboard, which represents the average validation score across all backtests.
* Red (Holdout): the holdout score.
You can change the optimization metric from the Leaderboard to change the display.
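As a rough sketch of what the chart measures (assuming MAE stands in for the project's optimization metric), a per-forecast-distance score can be computed from prediction rows like this:

```python
from collections import defaultdict

def accuracy_by_distance(rows):
    """MAE at each forecast distance, from (distance, actual, predicted) rows.

    A minimal stand-in for the per-distance scoring the chart displays;
    DataRobot scores the validation subset with the project metric.
    """
    by_dist = defaultdict(list)
    for dist, actual, predicted in rows:
        by_dist[dist].append(abs(actual - predicted))
    return {d: sum(errs) / len(errs) for d, errs in sorted(by_dist.items())}

rows = [
    (1, 100, 98), (1, 105, 104),   # one-day-out forecasts
    (2, 100, 95), (2, 105, 112),   # two-days-out forecasts
]
scores = accuracy_by_distance(rows)  # {1: 1.5, 2: 6.0}
```

The typical pattern is that error grows with distance, as in this toy example.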
|
forecast-acc
|
---
title: Series Insights (clustering)
description: Available for clustering projects, the Series Insights tab provides series clustering information in both charted and tabular format.
---
# Series Insights (clustering) {: #series-insights-clustering }
The **Series Insights** tab for time series [clustering](ts-clustering) provides series clustering information, including the cluster to which the series belongs, as well as series row and date information. (A non-clustering version of [**Series Insights**](series-insights-multi) is also available for multiseries projects.) The insight is helpful in identifying which cluster any given series was assigned to, and also provides an at-a-glance check that no single series is inappropriately dominant.
For multiseries, insights are reported in both charted and tabular format:
* The [histogram](#series-insights-histogram) for each cluster, which includes the number of series, the number of total rows, and the percentage of the dataset that belongs to that cluster.
* The [table](#series-insights-table-view) displays basic information for each series, such as row count, dates, and cluster membership.

Note that for large datasets, DataRobot computes scores and values after downsampling.
## Use Series Insights {: #use-series-insights }
On first opening, the **Series Insights** chart displays a list of the series, sorted alphanumerically by series ID.

Click **Compute cluster scores** to run histogram calculations and populate the table _for the current model_:

### Histogram {: #histogram }
The histogram provides an at-a-glance indication of the makeup of each cluster.

The following table describes the tools for working with the histogram:
| | Element | Description |
|---|---|---|
|  | Plot | Bins clusters by number of rows, percentage of total rows, or number of series in the cluster. To see all values, hover on a bin. |
|  | Sort by | Sets the sorting criteria for the bins. |
|  | Table filter | Filters the bins represented in the table below the histogram. |
Hovering on a bin displays the values for each of the plot criteria:

### Table view {: #table-view }
The table below the histogram provides cluster-specific information for all series in the project dataset or, if filters are applied from the histogram, for series related to the selected clusters.

To work with the table:
* Use the search function to view metrics for any series.
* Use **Options** () to set whether the series length information is reported as a time step or number of rows. Additionally, you can download the table data.
* Note that the start and end dates display the first and last timestamps of the series in the dataset.
|
series-insights
|
---
title: Lift Chart
description: The Lift Chart depicts how well a model segments the target population and how capable it is of predicting the target, to show the model's effectiveness.
---
# Lift Chart {: #lift-chart }
The Lift Chart depicts how well a model segments the target population and how capable it is of predicting the target, letting you visualize the model's effectiveness. The chart is sorted by predicted values—riskiest to least risky, for example—so you can see how well the model performs for different ranges of values of the target variable. In the Lift Chart, the left side of the curve shows where the model predicted low scores for a section of the population, while the right side shows where it predicted high scores. In general, the steeper the actual line is, and the more closely the predicted line matches the actual line, the better the model is. A consistently increasing line is another good indicator.

From the **Leaderboard**, the Lift Chart displays the actual and predicted values, described in more detail below. (By comparison, the Lift Chart and Dual Lift Chart available on the [**Model Comparison** tab](model-compare#dual-lift-chart) group numeric feature values into equal sized "[bins](#lift-chart-binning)," which DataRobot creates by sorting predictions in increasing order and then grouping them.)
If you used the [Exposure](additional#set-exposure) parameter when building models for a regression project, the Lift Chart displays the graph adjusted to exposure (and the corresponding legend indicates the difference):
* The orange line depicts the *sum of the target divided by the sum of exposure* for a specific value.
* The blue line depicts the *sum of predictions divided by the sum of exposure*.
This adjustment is useful in insurance, for example, to understand the relationship between annualized cost of a policy and the predictors.
## Change the display {: #change-the-display }
!!! tip
{% include 'includes/slices-viz-include.md' %}
The Lift Chart offers several controls that impact the display:
| Element | Description |
|-------------|--------------|
| Data Selection | Changes the data source input. Changes affect the view of predicted versus actual results for the model in the specified run type. Options are dependent on the type(s) of validation completed—validation, cross-validation, or holdout, or you can access and use [external test](predict#make-predictions-on-an-external-dataset) datasets. [Time-aware modeling](ts-date-time) allows backtest-based selections. |
| [Data slice](sliced-insights) | _Binary classification and regression only_. Selects the filter that defines the subpopulation to display within the insight.|
| [Select Class](#lift-chart-with-multiclass-projects) | *Multiclass only*. Sets the class that the visualization displays results for. |
| Number of Bins | Adjusts the granularity of the displayed values. Set the number of bins you want predictions sorted into (10 bins by default); the more bins, the greater the detail. |
| Sort bins | Sets the bin sort order. |
| [Enable Drill Down](#drill-into-the-data) | Uses the predictions created during the model fit process. Drill down shows a total of 200 predictions—the top 100 and the bottom 100 predictions on the Lift Chart. Drill down is only supported on the **All Data** [slice](sliced-insights). |
| [Download Predictions](#drill-into-the-data) | When drill down is enabled, transfers to the **Make Predictions** tab. |
| Export | Downloads either a PNG of the chart, a CSV of the data, or a ZIP containing both. See the [section on exporting](export-results) for more details. |
|  | Indicates that the project was built with an optimization metric that lead to biased predictions. Hover on the icon for recommendations. |
| Bin summary tooltip | Hover over a bin to view the number of member rows as well as the average actual and average predicted target values for those rows. |
Once you've set the data source and number of bins, you can:
* download the data for each bin by clicking the [**Enable Drill Down**](#drill-into-the-data) link.
* view an inline table for certain bins by hovering over links in the table.
## Drill into the data {: #drill-into-the-data }
The Lift Chart only shows subsets of the data—just the predictions needed for the particular Lift Chart you are viewing based on the **Data Source** dropdown selection.
Click **Enable Drill Down** to set DataRobot to use the predictions created during the model fit process and append all of the columns of the dataset to those predictions. (This is the source of the raw data displayed when you click the bins in the Lift Chart.)

Once you enable drill down, DataRobot computes the data and when finished, the label changes to **Download Predictions**. Click **Download Predictions** and DataRobot transfers to the [**Make Predictions**](predict) tab to compute or download predictions. The option to compute predictions with the **Make Predictions** tab is for the entire dataset, not the subset selected with the **Data Source** dropdown.
## View raw data {: #view-raw-data }
After enabling drill down, you can display a table of the data available in a bin by clicking the plus sign in the graph. For those bins without a plus sign, you must download predictions to see the data for that bin.
If you used the [Exposure](additional#set-exposure) parameter when building models for a regression project, the **Prediction** column in the inline table shows predictions adjusted with exposure (i.e., predictions divided by exposure). The **Actual** column in the inline table displays the column value adjusted with exposure (i.e., *actual divided by exposure*). Accordingly, the names of the **Prediction** and **Actual** columns change to **Predicted/Exposure** and **Actual/Exposure**.

## Calculate raw data display {: #calculate-raw-data-display }
The drill-down shows only the 100 lowest and 100 highest ranked predictions. This corresponds to the far left and far right sides of the Lift Chart. Depending on the size of the data source being displayed, a varying number of highlighted bins are available to display that raw data, and the same number of bins display at each side of the chart. For large datasets, there may be only one highlighted bin on each side, as each bin can hold 100 predictions. (To test that, you can increase the number of bins and you most likely will see more highlighted segments.)
Consider the following example. The Validation subset contains 5000 rows. When you view the Lift Chart with 10 bins, each bin contains 500 rows. When you enable drill down, all 100 of the lowest predictions fall into bin 1. If you increase the number of bins to 60, each bin then contains 83 rows. Now, it takes two bins to contain 100 predictions and so the two left (and two rightmost) bins are highlighted.
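The bin arithmetic in this example can be checked with a short sketch (an illustration of the rule described above, not DataRobot's code):

```python
import math

def highlighted_bins(n_rows, n_bins, drill_down_count=100):
    """How many bins on each side of the Lift Chart hold the drill-down rows.

    Drill down keeps the 100 lowest and 100 highest predictions, so the
    number of highlighted bins per side is the count of bins needed to
    cover 100 rows.
    """
    rows_per_bin = n_rows / n_bins
    return math.ceil(drill_down_count / rows_per_bin)

low = highlighted_bins(5000, 10)   # 500 rows/bin -> 1 highlighted bin per side
high = highlighted_bins(5000, 60)  # ~83 rows/bin -> 2 highlighted bins per side
```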
## Lift Chart with multiclass projects {: #lift-chart-with-multiclass-projects }
!!! note
This feature is not backward compatible; you must retrain any model built before the feature was introduced to see its multiclass insights.
For multiclass projects, you can set the Lift Chart display to focus on individual target classes—in other words, to display a Lift Chart for each individual class. Use the **Select Class** dropdown below the chart to visualize how well a model segments the target population for a class and how capable it is of predicting the target. The dropdown offers the 20 most common classes for selection.

Use the **Export** button to export:
* a PNG of the selected class
* CSV data for the selected class
* a ZIP archive of data for all classes
## Lift Chart binning example {: #lift-chart-binning-example }
Sometimes called a *decile chart*, the Lift Chart is created by sorting predictions in increasing order and then grouping them into equal-sized bins. The results are plotted as the Lift Chart, where the x-axis plots the bin number and the y-axis plots the average value of predictions within each bin. It's a two-step process—first group rows by what the model thinks is the likelihood of your target, and then calculate the number of actual occurrences. Both values are plotted on the Leaderboard Lift Chart.
For example, if your dataset of loan default information has 100 rows, DataRobot sorts by the predicted score and then chunks those scores into the number of bins you select. If you have 10 bins, each group contains 10 rows. The first bin (or *decile*) contains the lowest prediction scores and is the least likely to default while the 10th bin, with the highest scores, is the most likely. Regardless of the number of bins (and the resulting number of rows per bin), the concept is the same—what percentage of people in that bin actually defaulted?
In terms of what the points on the chart mean, using the default example, each bin point tells you:
* On the **Leaderboard**, the number of people DataRobot predicts will have defaulted (blue line) and number that actually defaulted (orange line). Use this chart to evaluate the accuracy of your model.
* In **Model Comparison**, the number of people that actually defaulted for each model.
So what is the actual value? The actual value plotted on the Lift Chart is the number or percentage of rows, for the corresponding bin, in which the target value applies. This distinction is particularly important when considering models on the **Model Comparison** page. Because DataRobot sorts based on model scores, and then groups rows from that sorted list, the bin for each model contains different content. As a result, while the bins for each model contain the same number of entries, the actual value for each bin differs.
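A minimal sketch of this binning procedure, using simulated default probabilities (illustrative only, not DataRobot's implementation):

```python
import random

def lift_chart_bins(predicted, actual, n_bins=10):
    """Sort rows by predicted value, split into equal-sized bins, and
    return (average predicted, average actual) per bin."""
    rows = sorted(zip(predicted, actual))
    size = len(rows) // n_bins
    bins = []
    for i in range(n_bins):
        chunk = rows[i * size:(i + 1) * size]
        avg_pred = sum(p for p, _ in chunk) / len(chunk)
        avg_act = sum(a for _, a in chunk) / len(chunk)
        bins.append((avg_pred, avg_act))
    return bins

# 100 simulated default probabilities and outcomes, 10 bins of 10 rows each.
random.seed(0)
pred = [random.random() for _ in range(100)]
act = [1 if random.random() < p else 0 for p in pred]
bins = lift_chart_bins(pred, act, n_bins=10)
```

Because rows are sorted before binning, the average predicted value rises from the first bin to the last, matching the left-to-right shape of the chart.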
## Exposure and weight details {: #exposure-and-weight-details }
If [Exposure](additional#set-exposure) is set for regression projects, observations are sorted according to the "annualized" predictions adjusted with exposure (that is, predictions divided by exposure), and bin boundaries are determined based on these adjusted predictions. The y-axis plots the sum of adjusted predictions divided by the sum of exposure within the bin. Actuals are adjusted and plotted in the same way.
When exposure and sample weights are *both* specified, exposure is used to determine the bin boundaries as above, but sample weights are not. DataRobot uses a composite weight, `composite_weight = weight * exposure`, to calculate the weighted average of predictions and actuals in each bin. The y-axis then plots the weighted sum of adjusted predictions divided by the sum of the composite weights, and similarly for actuals.
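As a sketch of the rule above, a single bin's plotted values might be computed like this (the function name and row layout are assumptions for illustration):

```python
def weighted_bin_value(predictions, actuals, weights, exposures):
    """Exposure- and weight-adjusted values for one Lift Chart bin.

    Rows are assumed already assigned to the bin by their exposure-adjusted
    predictions (prediction / exposure). The plotted value uses the
    composite weight, weight * exposure, described above.
    """
    composite = [w * e for w, e in zip(weights, exposures)]
    adj_pred = [p / e for p, e in zip(predictions, exposures)]
    adj_act = [a / e for a, e in zip(actuals, exposures)]
    y_pred = sum(c * p for c, p in zip(composite, adj_pred)) / sum(composite)
    y_act = sum(c * a for c, a in zip(composite, adj_act)) / sum(composite)
    return y_pred, y_act

# With unit weights and exposures, this reduces to plain bin averages.
y_pred, y_act = weighted_bin_value([2.0, 4.0], [1.0, 3.0], [1.0, 1.0], [1.0, 1.0])
```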
|
lift-chart
|
---
title: Evaluate
description: The Evaluate tabs provide key plots and statistics needed to judge a model's effectiveness, including ROC Curve, Lift Chart, and Forecasting Accuracy.
---
# Evaluate {: #evaluate }
The **Evaluate** tabs provide key plots and statistics needed to judge and interpret a model’s effectiveness:
Leaderboard tab | Description | Source
------------------|-------------|------------
[Accuracy Over Space](lai-insights) | Provides a spatial residual mapping within an individual model. | Validation, Cross-Validation, Holdout (selectable)
[Accuracy over Time](aot) | Visualizes how predictions change over time. | Computed separately for each backtest and the Holdout fold and can be viewed in the UI. Plots can be computed on both Validation and Training data.
[Advanced Tuning](adv-tuning) | Lets you manually set model parameters, overriding the DataRobot selections, to experiment with improving performance. | Internal grid search set
[Anomaly Assessment](anom-viz) | Plots data for the selected backtest and provides SHAP explanations for up to 500 anomalous points. | Computed separately for each backtest and the Holdout fold and can be viewed in the UI. Plots can be computed on both Validation and Training data.
[Anomaly over Time](anom-viz) | Plots how anomalies occur across the timeline of your data. | Computed separately for each backtest and the Holdout fold and can be viewed in the UI. Plots can be computed on both Validation and Training data.
[Confusion Matrix for multiclass projects](multiclass) | Compares actual data values with predicted data values in multiclass projects. | Validation, Cross-Validation, or Holdout (selectable). For binary classification projects, use the [confusion matrix](confusion-matrix) on the [ROC Curve](roc-curve-tab/index) tab.
Feature Fit | Removed. See [**Feature Effects**](feature-effects). |
[Forecasting Accuracy](forecast-acc) | Provides a visual indicator of how well a model predicts at each forecast distance in the project’s forecast window. | Computed separately for each backtest and the Holdout fold; only the validation subset of each fold is scored. Validation predictions are filtered by the forecast distance and the metrics are computed on the filtered predictions. UI/API does not provide access to individual backtests but rather to validation (backtest 0=most recent backtest), backtesting (averaged across all backtests), and Holdout.
[Forecast vs Actual](fore-act) | Compares how different predictions behave from different forecast points to different times in the future. | Computed separately for each backtest and the Holdout fold and can be viewed in the UI. Plots can be computed on both Validation and Training data.
[Lift Chart](lift-chart) | Depicts how well a model segments the target population and how capable it is of predicting the target. | Validation, Cross-Validation, Holdout (selectable)
[Residuals](residuals) | Clearly visualizes the predictive performance and validity of a regression model. | Validation, Cross-Validation, Holdout (selectable)
[ROC Curve](roc-curve-tab/index) | Explores classification, performance, and statistics related to a selected model at any point on the probability scale. | Validation data
[Series Insights (clustering)](series-insights) | Provides information on the cluster to which each series belongs, along with series information, including rows and dates. Histograms for each cluster show the number of series, the number of total rows, and the percentage of the dataset that belongs to that cluster. | Computed for each series in the clustering backtest.
[Series Insights (multiseries)](series-insights-multi) | Provides series-specific information. | Computed separately for each backtest and the Holdout fold; only the validation subset of each fold is scored. Validation predictions are filtered by the forecast distance and the metrics are computed on the filtered predictions. UI/API does not provide access to individual backtests but rather to validation (backtest 0=most recent backtest), backtesting (averaged across all backtests), and Holdout.
[Stability](stability) | Provides an at-a-glance summary of how well a model performs on different backtests. | Computed separately for each backtest and the Holdout fold; only the validation subset of each fold is scored.
[Training Dashboard](training-dash) | Provides an understanding about training activity, per iteration, for Keras-based models. | Training, but validated on an internal holdout of the training data.
|
index
|
---
title: Stability
description: The Stability tab provides an at-a-glance summary of how well a model performs on different backtests, to understand whether a model is consistent across time.
---
# Stability {: #stability }
The **Stability** tab provides an at-a-glance summary of how well a model performs on different backtests. Use the results to understand whether a model is consistent across time, helping to evaluate the process of training and measuring performance. To use the tab, which is available for all [date/time partitioning](ts-date-time) projects, first compute all backtests for the model (from the Leaderboard, click **Backtesting > Run**). To include the information for the holdout partition, unlock holdout. The backtesting information in this chart is the same as that available from the [**Model Info**](ts-date-time#view-summary-information) tab.

The values in the chart represent the validation scores for each backtest and the holdout. Hover over a backtest or holdout to display the actual score and the range for the partition.

Changing the optimization metric from the Leaderboard changes the display, providing additional evaluation tools:

|
stability
|
---
title: Advanced Tuning
description: How to create models with Advanced Tuning, which lets you manually set model parameters to override the DataRobot selections and create a named “tune.”
---
# Advanced Tuning {: #advanced-tuning }
Advanced tuning allows you to manually set model parameters, overriding the DataRobot selections for a model, and create a named “tune.” In some cases, by experimenting with parameter settings you can improve model performance. When you create models with **Advanced Tuning**, DataRobot generates new, additional Leaderboard models that you can later blend together or further tune. To compute scores for tuned models, DataRobot uses an internal "grid search" partition inside of the training dataset. Typically the partition is an 80/20 training/validation split, although in some cases DataRobot applies five-fold cross-validation.
!!! note
See also [architecture, augmentation, and tuning](vai-tuning) options specific to Visual AI projects.

You can use **Advanced Tuning** with all models except the following:
* Blended models
* Prime models
* Open source/R models
* User-created models
Some models that do not fall into any of the categories above support advanced tuning but do not currently offer any tunable parameters (baseline models, for example).
!!! info "Availability information"
Managed AI Platform users have access to a subset of the preprocessing parameters potentially available for tuning. Those parameters that are proprietary to DataRobot's preprocessing steps will not appear on the **Advanced Tuning** tab.
To display the advanced parameter settings, expand a model on the Leaderboard list and click **Evaluate > Advanced Tuning**. A window opens displaying parameter settings on the left and a graph on the right.

The following table describes the fields of the Advanced Tuning page:
| Element | Description |
|-----------|-------------|
| [Parameters](#explore-advanced-tuning) (1) | Displays either all parameters searched or the single best value of all searched values for [preprocessing](#advanced-tuning-parameter-types) or final prediction model parameters. Refer to the acceptable values guidance to set a value for your next search. Click the **documentation** link in the upper right corner to access the documentation specific to the model type. See the [Eureqa Advanced Tuning Guide](../../reference/eureqa-ref/index) for information about Eureqa model tuning parameters. |
| [Search type](#set-the-search-type) (2) | Defines the search type, either Smart Search or Brute Force. The types set the level of search detail, which in turn affects resource usage. |
| [Naming](#run-the-tune) (3) | Appends descriptive text to the tune. |
| [Graph](#interpret-the-graph) (4) | Plots parameters against score. |
| [Begin tuning](#run-the-tune) (5) | Launches a new tuning session, using the parameters displayed in the New Search parameter list. |
See below for information on [exploring Advanced Tuning](#explore-advanced-tuning), including definitions of, and settings for, display options.
## Advanced Tuning parameter types {: #advanced-tuning-parameter-types }
DataRobot allows you to tune parameters not only for the final model used for prediction but for preprocessing tasks as well.

With Advanced Tuning, you can set values for the following preprocessing tasks:
* Number of clusters for K-Means Clustering
* Number of principal components for PCA
* Arbitrary value for arbitrary value impute
* Feature importance threshold for ExtraTrees feature selection
* Number of N-grams in Elastic Net models
When you tune any of the listed parameters, DataRobot adds that information below the model description on the Leaderboard.
### Project sharing with tuning {: #project-sharing-with-tuning }
The extent of your ability to tune preprocessing parameters depends on your organization's enabled features. If you have full access and create a project that tunes preprocessing parameters, it may turn out that not all users are necessarily able to see those parameters. Sharing a project with users having different permissions may result in any restricted user being unable to see the specific parameter that was tuned. Because DataRobot provides a visual indicator on the Leaderboard when you tune a project, these users <em>will</em> be able to see that there was parameter modification.
## Use Advanced Tuning {: #use-advanced-tuning }
Once you understand how to set options, you can create a tune. Advanced Tuning consists of three steps:
1. [Determining and setting new parameter values](#set-a-parameter).
2. [Setting the search type (optional)](#set-the-search-type).
3. [Running the tune](#run-the-tune).
!!! warning
The following situations can cause memory errors: <ul>
<li>Setting any advanced tuning parameter that accepts multiple values such that it results in a grid search that exceeds 25 grid points. Be aware that the search is multiplicative, not additive (for example, <code>max_depth=1,2,3,4,5</code> with <code>learning_rate=.1,.2,.3,.4,.5</code> results in 25 grid points, not 10).</li>
<li>Increasing the range of hyperparameters searched so that they result in larger model sizes (for example, by increasing the number of estimators or tree depth in an XGBoost model).</li></ul>
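The multiplicative growth described in the warning above is easy to verify:

```python
from itertools import product

# Two parameters with five values each produce 25 grid points (5 x 5),
# not 10 -- every combination of values is searched.
max_depth = [1, 2, 3, 4, 5]
learning_rate = [0.1, 0.2, 0.3, 0.4, 0.5]
grid = list(product(max_depth, learning_rate))
print(len(grid))  # 25
```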
### Set a parameter {: #set-a-parameter }
To change one of the parameter values:
1. Click the down arrow next to the parameter name:

2. Enter a new value in the **Enter value** field in one of the following ways:
- Select one of the pre-populated values (clicking any value listed in orange enters it into the value field).
- Type a value into the field. Refer to the **Acceptable Values** field, which lists either constraints for numeric inputs or predefined allowed values for categorical inputs (“selects”). To enter a specific numeric, type a value or range meeting the criteria of **Acceptable Values**:
!!! note
Categorical values inside of parameters that also accept other types (“multis”), as well as preprocessing parameters, are not tunable. For models created prior to the introduction of this feature, additional instances of selects may not be tunable.

In the screenshot above you can enter various values between 0.00001 and 1, for example:
* `0.2` to select an individual value.
* `0.2, 0.4, 0.6` to list values that fall within the range; use commas to separate a list.
* `0.2-0.6` to specify the range and let DataRobot select intervals between the high and low values; use hyphen notation to specify a range.
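A sketch of how the three input forms above might be interpreted (`parse_value_spec` is a hypothetical helper, not part of DataRobot, and splitting a range into five evenly spaced points is an assumption for illustration):

```python
def parse_value_spec(spec: str, n_range_points: int = 5) -> list[float]:
    """Hypothetical parser for the three accepted input forms:
    a single value, a comma-separated list, or a low-high range."""
    spec = spec.strip()
    if "," in spec:                      # e.g. "0.2, 0.4, 0.6"
        return [float(v) for v in spec.split(",")]
    if "-" in spec[1:]:                  # e.g. "0.2-0.6" (ignore a leading minus)
        low, high = (float(v) for v in spec.split("-", 1))
        step = (high - low) / (n_range_points - 1)
        return [round(low + i * step, 10) for i in range(n_range_points)]
    return [float(spec)]                 # e.g. "0.2"

print(parse_value_spec("0.2"))           # a single value
print(parse_value_spec("0.2, 0.4, 0.6")) # a comma-separated list
print(parse_value_spec("0.2-0.6"))       # a range split into intervals
```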
### Set the search type {: #set-the-search-type }
Click on the **Advanced** link to set your search type to either:
* **Smart Search** (default) performs a sophisticated <a target="_blank" href="https://en.wikipedia.org/wiki/Pattern_search">pattern search (optimization)</a> that emphasizes areas where the model is likely to do well and skips hyperparameter points that are less relevant to the model.
* **Brute Force** evaluates each data point, which can be more time and resource intensive.

There are situations, however, in which Brute Force outperforms Smart Search. This is because Smart Search is heuristic-based: it is designed to save time, not to increase accuracy, and it does not search the whole grid.
??? tip "Deep dive: Smart Search heuristics"
The following describes how DataRobot performs Smart Search:
1. Use a start value if specified. Otherwise, initialize the first grid pass as the cross-product of the 25th and 75th percentiles of each parameter's grid points, and score those points.
2. Find the best-performing grid point among those that were searched.
3. From the unsearched grid points, find the neighbors of the best-performing searched grid point and score those points.
4. Add those values to the "searched" list and repeat from Step 2.
5. If no neighbors are found, search all adjacent neighbors (`edges`). If neighbors are found, repeat from Step 2.
6. If still no neighbors are found, reduce the search radius for neighbors. If neighbors are found, repeat from Step 2.
7. Repeat Steps 2-6 until there are no new neighbors to search or the maximum number of iterations (`max_iter`) is reached.
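The steps above can be sketched as a toy neighbor-expanding search (the grid, scoring function, and seeding are simplified placeholders, not DataRobot internals; the edge and radius-reduction fallbacks are omitted):

```python
import itertools

def smart_search(grid, score_fn, max_iter=20):
    """Toy neighbor-expanding grid search: score percentile seed points,
    then repeatedly expand around the best-scoring point found so far."""
    names, axes = list(grid.keys()), list(grid.values())

    def to_params(idx):
        return {n: axes[i][j] for i, (n, j) in enumerate(zip(names, idx))}

    def neighbors(idx):
        for d in itertools.product(*([-1, 0, 1] for _ in idx)):
            cand = tuple(i + di for i, di in zip(idx, d))
            if cand != idx and all(0 <= c < len(a) for c, a in zip(cand, axes)):
                yield cand

    # Step 1: seed with the 25th/75th-percentile cross-product.
    seeds = itertools.product(*[(len(a) // 4, 3 * len(a) // 4) for a in axes])
    scored = {p: score_fn(to_params(p)) for p in set(seeds)}

    for _ in range(max_iter):
        best = max(scored, key=scored.get)                      # Step 2
        new = [p for p in neighbors(best) if p not in scored]   # Step 3
        if not new:
            break                                               # fallbacks omitted
        scored.update({p: score_fn(to_params(p)) for p in new}) # Step 4

    best = max(scored, key=scored.get)
    return to_params(best), scored[best]

grid = {"learning_rate": [0.01, 0.05, 0.1, 0.5, 1.0], "max_depth": [1, 2, 3, 4, 5]}
best, best_score = smart_search(
    grid, lambda p: -(p["learning_rate"] - 0.1) ** 2 - (p["max_depth"] - 3) ** 2)
print(best)  # converges on learning_rate=0.1, max_depth=3
```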
### Run the tune {: #run-the-tune }
To run your tune:
1. Optionally, use **Describe this tuning** to append text (for example, a name and comment) to the tune. DataRobot displays your comments on the Leaderboard when the model has finished, in small text underneath the model title.

2. Click **Begin Tuning**.
When you click **Begin Tuning**, a new model, with your selected parameters, begins to build. DataRobot displays progress in the right-side worker usage pane and adds the new model to the Leaderboard. The Leaderboard entry is only partially complete, however, until the model finishes running. The listing displays a **TUNED** badge to the right of the model name and any descriptive text from the **Describe this tuning** box in the line beneath the title:

### Interpret the graph {: #interpret-the-graph }
The graph(s) displayed by the **Advanced Tuning** feature map an individual parameter to a score and also provide parameter-to-parameter graphs for analyzing pairs of parameter values run together. The number and detail of the graphs vary based on model type.
## More info... {: #more-info }
The sections below describe the parameters section in more detail and also explain the [Advanced Tuning graph](#interpret-the-graph).
### Explore Advanced Tuning {: #explore-advanced-tuning }
The Parameters area provides three tabs—two different ways to view the parameters for the existing model and a third tab for launching a new model. Parameters are model-dependent but sample-size independent. Display options are:
* **Searched**: lists the parameter values used to run the current model, create the displayed graphs, and obtain the validation score shown on the Leaderboard.
* **Best of Searched**: displays, for each parameter, the single value that resulted in the optimal validation score.
* **New Search**: provides, for each parameter, an editable field where you can modify the parameter value used for the next search. You launch the tune from this tab.
!!! tip
Regardless of the section you are in, when you click the down arrow to modify parameters you are taken to the <b>New Search</b> tab.
Once you open the **Advanced Tuning** page, DataRobot displays the model’s parameters on the left (listed parameters are model-specific). Click on **Searched** and **Best of Searched** to display the parameter information DataRobot makes available.
In the example below, you can see that the values displayed for the parameters `max_features` and `max_leaf_nodes` differ.

**Searched** lists all the values searched; **Best of Searched** lists only the value that yielded the best model results—in this case, 0.4 for `max_features` and 500 for `max_leaf_nodes`.
Click on **New Search** to display the parameter values that will be used for the next tune. The values that populate **New Search** are, by default, the same as those in **Best of Searched**. DataRobot displays any changes from the **Best of Searched** parameter values in blue on the **New Search** screen.
### Advanced Tuning graph details {: #advanced-tuning-graph-details }
The following shows a sample Advanced Tuning graph:

DataRobot graphs those parameters that take a numeric or decimal value and displays them against a score. In the example above, the top two graphs each plot one of the parameters used to build the current model. The points on the graph indicate each value DataRobot tried for that parameter. The largest dot is the value selected, and dots are represented in a “warmer to colder” color scheme.
The third graph, in the bottom left, is a parameter-to-parameter graph that illustrates an analysis of co-occurrence. It plots the parameter values against each other. In the sample graph above, the comparison graph shows gamma on the Y axis and C on the X axis. The large dot in the comparison graph is the point of best score for the combination of those parameters.
This final parameter-to-parameter graph can be helpful when experimenting with parameter selection because it provides a visual indicator of the values that DataRobot tried. For example, to try something completely different, look for empty regions in the graph and set parameter values that fall within them. Or, to tweak something that you know did well, identify values in the region near the large dot that represents the best value.
If the main model uses a [two-stage modeling process](model-ref#two-stage-models) (Frequency-Severity eXtreme Gradient Boosted Trees, for example), you can use the dropdown to select a stage. DataRobot then graphs parameters corresponding to the selected stage.

|
adv-tuning
|
---
title: Accuracy Over Time
description: How to use the Accuracy Over Time tab, which becomes available when you specify date/time partitioning, to visualize how predictions change over time.
---
# Accuracy Over Time {: #accuracy-over-time }
The **Accuracy Over Time** tab helps to visualize how predictions change over time. By default, the view shows predicted and actual vs. time values for the training and validation data of the most recent (first) backtest. This is the model DataRobot deploys and uses to make predictions (in other words, the model for the validation set).
This visualization differs somewhat between OTV and time series modeling. With time series, in addition to the standard features of the tool, you can display based on forecast distances (the future values range you selected before running model building).
!!! note
If you are modeling a multiseries project, there is an additional dropdown that allows you to select which series to plot. Also, the charts reflect the validation data until training data is explicitly requested. See the [multiseries-specific details](#display-by-series), below.

The default view of the graph, in all cases, displays the validation data's forecast—actual values marked by open orange circles connected by a line and the model's predicted values with connected blue solid circles. If you uploaded a [calendar file](ts-adv-opt#calendar-files) when you created the project, the display also includes markers to [indicate calendar events](#identify-calendar-events).
Click the **Compute for training** link to add results for training data to the display:

!!! note
**Accuracy Over Time** training computation is disabled if the dataset exceeds the configured threshold after creation of the modeling dataset. The default threshold is 5 million rows.
**Accuracy Over Time** charts values for the selected period, similar to (but distinct from) the information provided by [Lift Charts](lift-chart). Both charts bin and then graph data. (Although the **Accuracy Over Time** bins are not displayed as a histogram beneath the chart, the binning information is available as [hover help](#identify-the-bin-data) on the chart itself.) Bins within the **Accuracy Over Time** tab are equal width—that is, each bin spans the same time range—while bins in the Lift Chart are equal sized, such that each bin contains the same number of rows.
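The difference between the two binning strategies can be sketched in plain Python (the timestamps and bin count are illustrative):

```python
from datetime import date

# Daily timestamps clustered early in the range: equal-width bins get
# uneven row counts, while equal-count bins get uneven time spans.
stamps = [date(2023, 1, 1)] * 6 + [date(2023, 1, 20), date(2023, 1, 31)]

def equal_width_counts(ts, n_bins):
    """Accuracy Over Time style: every bin spans the same time range."""
    start, end = min(ts), max(ts)
    width = (end - start) / n_bins
    counts = [0] * n_bins
    for t in ts:
        i = min(int((t - start) / width), n_bins - 1)
        counts[i] += 1
    return counts

def equal_count_bins(ts, n_bins):
    """Lift Chart style: every bin holds the same number of rows."""
    ts = sorted(ts)
    size = len(ts) // n_bins
    return [ts[i * size:(i + 1) * size] for i in range(n_bins)]

print(equal_width_counts(stamps, 4))                 # uneven rows per bin
print([len(b) for b in equal_count_bins(stamps, 4)]) # 2 rows in every bin
```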
There are two plots available in the **Accuracy Over Time** tab—the **Predicted & Actual** and the [**Residuals**](#interpret-the-residuals-chart) plots.
## Data used in the displays {: #data-used-in-the-displays }
The **Accuracy Over Time** tab and associated graphs are available for all models produced with [date/time partitioning](ts-date-time), although options differ for OTV vs. time series/multiseries modeling.
When you open the tab, the graph defaults to the **Predicted & Actual** plot for the validation set of the most recent (first) backtest. You can select a different, or all, backtests for display, although you must return to the Leaderboard **Run** button to compute the display for additional backtests. If holdout is unlocked, you can also click on the holdout partition to view holdout predictions. If it is locked, you can unlock it from the Leaderboard and return to this display to view the results.
With small amounts of data, the chart displays all data at once; use the date range slider in the preview below the chart to focus in on parts of the display.
For larger datasets (greater than approximately 500 rows), the preview renders all of the results but the chart itself displays only the selection encompassed by the slider. Slide the selector to see different regions of the data. By default, the selector covers the most recent 1000 time markers (dependent on the [resolution](#change-the-binning-resolution) you set).

The tab provides several options to change the display. For all date/time-partitioned projects you can:
1. Change the [displayed backtest](#change-the-displayed-backtest) or display all backtests.
2. Select the [series to plot](#display-by-series) (multiseries only).
3. Choose a [forecast distance](#change-the-forecast-distance) (time series and multiseries only).
4. [Compute and display training data](#compute-training-data).
5. Expand **Additional Settings**, if necessary, to [change the display resolution](#change-the-binning-resolution).
6. [Change the date range](#change-the-date-range).
7. [Zoom to fit](#zoom-the-display).
8. [Export](#export-data) the data.
9. View the [Residuals values chart](#interpret-the-residuals-chart).
10. Identify [calendar events](#identify-calendar-events).
## Predicted & Actual Over Time {: #predicted-actual-over-time }
The **Predicted & Actual Over Time** chart provides useful information regarding each backtest in your project. By comparing backtests, you can more easily identify and select the model that best suits your data. The following describes some things to note when viewing the chart:
### Understand line continuity {: #understand-line-continuity }
When viewing a single backtest, the lines may be discontinuous. This is because data may be missing in one of the binned time ranges. For example, there might be a lot of data in week 1, no data in week 2, and then more data in week 3. There is a discontinuity between week 1 and week 3, and it is reflected in the chart.
When [viewing all backtests](#all-backtests-option), there are basically three scenarios. Backtests can be perfectly contiguous: January 1-January 31, February 1-February 28, etc. Backtests can overlap: January 1-February 15, February 1-March 15, etc. And backtests can have one or more gaps (configured when you configured the [date/time partition](ts-date-time)). These backtest configuration options are reflected in the "all backtests" view, so backtest lines on the chart may overlap, be separated by a gap, or be contiguous.
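The three scenarios can be classified mechanically from adjacent backtest boundaries (a sketch assuming a daily time step; dates are illustrative):

```python
from datetime import date, timedelta

def backtest_relation(first_end: date, second_start: date,
                      step: timedelta = timedelta(days=1)) -> str:
    """Classify how two adjacent backtests relate on the timeline:
    contiguous, overlapping, or separated by a gap."""
    if second_start == first_end + step:
        return "contiguous"   # e.g. Jan 1-31 followed by Feb 1-28
    if second_start <= first_end:
        return "overlap"      # e.g. Jan 1-Feb 15 then Feb 1-Mar 15
    return "gap"              # a configured gap between backtests

print(backtest_relation(date(2023, 1, 31), date(2023, 2, 1)))  # contiguous
print(backtest_relation(date(2023, 2, 15), date(2023, 2, 1)))  # overlap
print(backtest_relation(date(2023, 1, 31), date(2023, 3, 1)))  # gap
```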
### Understand line color indicators {: #understand-line-color-indicators }
The **Predicted & Actual Over Time** chart represents actual values with open orange circles. Predicted values based on the validation set are represented by solid blue circles, which correspond to the blue in the backtest representation. You can additionally compute and include predictions on the [training data](#compute-training-data) for each backtest. The bar below the chart indicates the division between training and validation data.
## Change the Predicted & Actuals display {: #change-the-predicted-actuals-display }
There are several tips and toggles available to help you best evaluate your data.
### Change the displayed backtest {: #change-the-displayed-backtest }
While DataRobot defaults to the first backtest for display, you can change to a different backtest or even [all backtests](#all-backtests-option) in the **Backtest** dropdown. DataRobot runs all backtests when building the project, but you must individually train a backtest's model and compute its validation predictions before viewing in the **Accuracy Over Time** chart. Until you do so, the backtest is grayed out and unavailable for display. To view the chart for a different backtest, first compute predictions:

#### All Backtests option {: #all-backtests-option }
DataRobot initially computes and displays data for Backtest 1. To display values for all computed backtests, select **All Backtests** from the **Backtest** dropdown. You can either compute each backtest individually from the dropdown, or, to compute all backtests at once, click the **Run** button for the model on the Leaderboard.
When subsequent backtest(s) are computed, the chart expands to support the larger date range, showing each computed backtest in the context of the total range of the data. (Make sure **All Backtests** is still selected.)

When you select **All Backtests**, the display only includes predicted vs. actual values for the validation (and holdout, if unlocked) partitions across all the backtests. Even if you [computed training data](#compute-training-data), it does not display with this option.
Note that tooltips for the **All Backtests** view behave slightly differently than for an individual backtest. Instead of reporting on bin content, the tooltip highlights an individual backtest. Clicking focuses the chart on that backtest (which is the same as manually choosing the backtest via the dropdown).

### Change the forecast distance {: #change-the-forecast-distance }
For time series and multiseries projects, you can base the display on forecast distance (the future values range you selected before running model building):

Setting a different forecast distance modifies the display to visualize predictions for that distance. For example, "show me the predicted vs. actual validation data when predicting each point two days in advance." Click the left or right arrow to change the distance by a single increment (day, week, etc.); click the down arrow to open a dialog for setting the distance.
When working with large (downsampled) datasets or projects with wide forecast windows, DataRobot computes Accuracy Over Time on-demand, allowing you to specify the forecast distance of interest. For each distance you navigate to on the chart, you are prompted to compute the results and view the insight. In this way, you can determine the number of distances to check in order to confidently deploy models into production, without overburdening compute resources.

### Compute training data {: #compute-training-data }
Typically DataRobot models use only the validation predictions (and holdout, if unlocked) for model insights and assessing model performance. Because it can be helpful to view past history and trends, the date/time partitioning **Predicted & Actual** chart allows you to include training predictions in the display. Note, however, that training data predictions are not a reliable measure of the model's ability to predict future data.
Check **Show training data** to see the full results using training and validation data. This option is only available when an individual backtest is selected, <em>not</em> when you have selected **All Backtests** from the **Backtest** dropdown. The visualization captures not only the weekly variation, but the overall trend. Often with time series datasets the predictions lag slightly, but the **Accuracy Over Time** tab shows that this model is predicting quite well.
Computing with no training data:

Computing with training data:

### Identify calendar events {: #identify-calendar-events }
If you upload a [calendar file](ts-adv-opt#calendar-files) when you create a project, the **Accuracy Over Time** graph displays indicators that specify where the events listed in the calendar occurred. These markers provide context for the actual and predicted values displayed in the chart. Hover on a marker to display event information.
For multiseries projects, events may be series-specific. To view those events, select the series to plot, locate the event on the timeline, and hover for information including the series ID and event name:

### Identify the bin data {: #identify-the-bin-data }
The **Accuracy Over Time** tab uses binning to segment and plot data. With date/time partitioning models, bins are equal width (same time range, defined by the [resolution](#change-the-binning-resolution)) and often contain different numbers of data points. You can hover over a bin to see a summary of the average actual and predicted values (or "<em>missing</em>" as appropriate), as well as a row count and timestamp:

!!! note
In cases where the amount of data is small enough, DataRobot plots each predicted and actual point individually on the chart.
### Change the binning resolution {: #change-the-binning-resolution }
By default, DataRobot displays the most granular binning resolution. You can, however, change the resolution from the **Resolution** dropdown (in **Additional Settings** for time series and multiseries). Selecting a coarser resolution further aggregates the data and surfaces higher-level trends. This is useful if the data is not evenly distributed across time. For example, if your data has many points in one week and no points for the next two weeks, aggregating at a monthly resolution visually compresses gaps in the data. The resolution options available are determined by the data's detected time steps.

Backtest 1 daily:

Backtest 1 weekly:

Backtest 1 monthly:

Note, however, that the bin start dates might not be the same as the dataset dates (even if the dataset has a regular time step). This is because **Accuracy Over Time** bins are aligned to always include the end date of the dataset. This may mean that they are shifted by a single <em>time unit length</em> to ensure the final datapoint is included, even if this means that the bins no longer align with the periodicity in the dataset.
For example, consider a dataset based on weekly data (aggregation of data from Monday through Sunday) where Monday is always the start of the week. Even though the data is spaced every seven days on Monday, the **Accuracy Over Time** bins may span Tuesday to Tuesday (instead of Monday to Monday) to ensure that the final Monday is included.
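A sketch of this end-anchored binning, stepping bin edges backward from the dataset's final timestamp (the one-day right-edge convention and the dates are assumptions for illustration):

```python
from datetime import date, timedelta

def end_anchored_edges(start: date, end: date, width: timedelta):
    """Illustrative only: lay out bin edges by stepping backward from the
    dataset's end date, so the final data point always falls in a bin."""
    edges = [end + timedelta(days=1)]  # right edge just past the final point
    while edges[-1] > start:
        edges.append(edges[-1] - width)
    return list(reversed(edges))

# Weekly Monday data: 2023-01-02 (a Monday) through 2023-03-27 (a Monday).
edges = end_anchored_edges(date(2023, 1, 2), date(2023, 3, 27), timedelta(days=7))
print(edges[-1].weekday())  # 1 (Tuesday): edges shift off the Monday period
print(edges[-2] <= date(2023, 3, 27) < edges[-1])  # the final Monday is included
```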
### Change the date range {: #change-the-date-range }
Using the **Show full date range** toggle, you can change the chart scale to match the range of the entire data set. In other words, rescaling to the full range contextualizes how much of your data you're using for validation and/or training. For example, let's say you upload a dataset covering January 1 2017 to December 30 2017. If you create backtests for October/November and November/December, the full range plot shows the size of those backtests relative to the complete dataset.

If you select **All Backtests**, the chart displays the validation data for the entire data set, marking each backtest in the range:

### Focus the display {: #focus-the-display }
Use the date range slider below the chart to highlight a specific region of the time plot, selecting a subset of the data. For smaller parts of displayed data (a backtest or a higher resolution, for example), you can move the slider to a selected portion—drag the edges of the box to resize and click within the box and drag to move—focusing in on parts of the display. The full display:

A focused display:

For larger amounts of data, the preview renders the full results for the selected backtest(s) while the chart reflects only the data contained within the slider selection. Drag the slider to select a subset of the data for further inspection. The slider selection, by default, contains up to 1000 bins. If your data results in more than 1000 bins, the display shows the most recent 1000 bins. You can make the slider selection smaller than 1000 bins by dragging the edges, but if you try to make it larger, the selection highlights the most recent 1000 bins (right-most in the preview) and the chart updates accordingly.

### Zoom the display {: #zoom-the-display }
The **Zoom to fit** box (in **Additional Settings** for time series and multiseries projects), when checked, modifies the chart's Y-axis values to the minimum and maximum of the target values. When off, the chart scales to show the full possible range of target values. For binary classification projects, zoom is disabled by default, meaning the Y-axis range displays 0 to 1. Enabling **Zoom to fit** shows the chart within the range of both actual and predicted values for the backtest (and series, if multiseries) that is currently selected.
For example, suppose the target (`sales`) has possible values between 0 and 150,000. Maybe all the predicted and/or actual values, however, fall between roughly 15,000 and 60,000. When **Zoom to fit** is checked, the Y-axis display will plot from the low of approximately 15,000 to approximately 60,000, the highest known value.

When unchecked, the Y-axis spans from 0-150,000, with all data points grouped between roughly 15,000 and 60,000.

Note that if the maximum and minimum of the prediction values are equal to (or close to) the maximum and minimum of the target, checking the box may not cause a change to the display. The preview slider below the plot always displays zoomed to fit (it does not match the scaling used in the main chart).
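In axis terms, the toggle switches between the target's full possible range and the observed min/max of the plotted values (the numbers below reuse the `sales` example and are illustrative):

```python
# Predicted and actual values plotted on the chart (illustrative).
predicted = [18_200, 24_100, 41_000, 58_800]
actual = [15_300, 22_000, 39_500, 60_100]

zoomed = (min(predicted + actual), max(predicted + actual))  # Zoom to fit on
full = (0, 150_000)                                          # Zoom to fit off

print(zoomed)  # Y-axis spans only the observed values
print(full)    # Y-axis spans the target's full possible range
```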
### Export data {: #export-data }
Click the **Export** link to download the data behind the chart you are currently viewing. DataRobot presents a dialog box that allows you to copy or download CSV data for the selected backtest, forecast distance, and series (if applicable), as well as the average or absolute value residuals.
## Display by series {: #display-by-series }
If your project is [multiseries](multiseries), the plot controls are dependent on the number of series. Because projects can have up to 1 million series and up to 1000 forecast distances, calculating accuracy charts for all series data can be extremely compute-intensive and often unnecessary. To avoid this, DataRobot calculates either all or a portion of the series—the first _x_ series, sorted by name—at a single forecast distance.
!!! note
Calculations apply to the _validation_ data; training data calculations can also be run, but are done separately and for each series.
The number of series calculated is based on the projected space and memory requirements. As a result, the landing page for multiseries **Accuracy Over Time** can be one of three options:
* If the dataset (number of series) at each configured forecast distance is relatively small and will not exceed a base threshold, DataRobot calculates **Accuracy Over Time** during model building. When you open the tab, the charts are available for each series at each distance.
* If the dataset is large enough that the memory and storage requirements would cause a noticeable delay when building models, but not so large that bulk calculations are applicable, you are prompted to [run select calculations](#compute-a-selected-series) from the landing page, similar to:

* If calculations for all series would exceed an even higher threshold—one that prevents potential excessive compute time—the landing page adds an option allowing you to calculate per-series and also [in bulk](#compute-multiple-series):

The methodology DataRobot uses for per-series calculations is applicable to the following functionality:
* Accuracy Over Time
* [Forecast vs Actual](fore-act)
* [Anomaly Over Time](anom-viz#anomaly-over-time)
* [Model Comparison](model-compare) for Accuracy/Anomaly Over Time-based comparisons
### Compute a selected series {: #compute-a-selected-series }
Compute **Accuracy Over Time** on-demand in the following circumstances:
* The project exceeds the base threshold for calculation.
* You have changed the forecast distance for an on-demand calculated series.
* The project triggered [bulk calculations](#compute-multiple-series) but you want results for specific series (you do not want to consume the resources that running all series would require).
Note that you can search the desired series in the **Series to plot** dropdown, regardless of whether or not calculations have been run for the series.
To calculate for a selected series:
1. Select the series of interest to plot:

Or, plot the average across all calculated series:

!!! note
If the project triggered the bulk calculation option, selecting _Average_ for **Series to plot** sets DataRobot to first calculate accuracy for the number of series identified in the bulk series limit value.
2. [Change the forecast distance](#change-the-forecast-distance), if desired.
3. Click one of the buttons to initiate calculations; options are dependent on which backtests you want to compute. Calculations apply for all series, but only for the selected forecast distance. Select either:
Button | Description
------ | -----------
Compute Forecast Distance _X_ / All Backtests | Computes insight data for all backtests with the selected settings (series, forecast distance). This may be compute-intensive, depending on the project configuration.
Compute Forecast Distance _X_ / Backtest _X_ | Computes insight data for the selected series, but only for the selected backtest at the selected forecast distance.
### Compute multiple series {: #compute-multiple-series }
When a project exceeds the mid-range threshold, DataRobot provides an option to calculate series in bulk (**Compute multiple series**). Because these calculations can take significant time, DataRobot applies a storage threshold so even the bulk action may not compute all series. Help text above the activation button provides information on the total number of series as well as the number that will be calculated with each computation request:

In this example, DataRobot found 2300 series in the project but, in order to stay below the threshold, will only calculate the first 943 series (the series limit, ordered alphanumerically) in one computation. The bulk computation method does not allow calculation for more than one backtest at a time.
!!! note
The number of series available for calculation is different in the [**Forecast vs Actual**](fore-act) tab. This is because **Accuracy Over Time** calculates for a single forecast distance, while **Forecast vs Actual** calculates for a range of distances. As a result, the series limit for **Forecast vs Actual** is the number shown here divided by the number of steps in the forecast range.
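A worked version of that arithmetic (the 943-series limit comes from the example above; the 7-step forecast window is an illustrative assumption):

```python
# Accuracy Over Time computes one forecast distance per series, while
# Forecast vs Actual computes every distance in the forecast range.
aot_series_limit = 943      # from the help text in the example above
forecast_range_steps = 7    # illustrative: a 7-step forecast window

fva_series_limit = aot_series_limit // forecast_range_steps
print(fva_series_limit)  # 134 series fit in the same compute budget
```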
To work with bulk calculations, select the series, backtest, and forecast distance to plot. Computation options differ depending on your selection.
=== "Single series"
If you want insights for all or a selected backtest for a *single series*:

If you choose *Average* as the series to plot, DataRobot runs calculations for the maximum series limit, and allows for a selected backtest. Be aware that this can be extremely compute-intensive:

=== "Multiple series"
If you want accuracy calculations for the maximum number of series, use the bulk option:

Once bulk calculations complete, **Accuracy Over Time** results are available for the number of series, processed in alphanumeric order, indicated in the help text. If you search for a series that has not yet been calculated, the option to compute that series and the next _x_ series displays.
To view accuracy charts for any series beyond the identified limit, run the calculation for the first batch and select a series outside of that range. Once selected, the bulk activation button returns, with an option to calculate the next _x_ series.
## Interpret the Residuals chart {: #interpret-the-residuals-chart }
The **Residuals** chart plots the difference between actual and predicted values. It helps to visualize whether there is an unexplained trend in your data that the model did not account for and how the model errors change over time. Using the same controls as those available for the **Predicted & Actual** tab, you can modify the display to investigate specific areas of your data.
The chart also reports the Durbin-Watson statistic, a numerical way of evaluating residual charts. Calculated against validation data, Durbin-Watson is a test statistic used for detecting autocorrelation in the residuals from a statistical regression analysis. The value of the statistic is always between 0 and 4, where 2 indicates no autocorrelation in the sample.
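The statistic itself is straightforward to compute from an ordered residual series (a sketch of the standard formula, not DataRobot's implementation):

```python
def durbin_watson(residuals):
    """DW = sum of squared successive differences / sum of squared residuals.
    Ranges 0-4; values near 2 indicate no first-order autocorrelation."""
    num = sum((residuals[t] - residuals[t - 1]) ** 2
              for t in range(1, len(residuals)))
    den = sum(e ** 2 for e in residuals)
    return num / den

# Alternating residuals are negatively autocorrelated (DW well above 2);
# a constant run of residuals is positively autocorrelated (DW near 0).
print(round(durbin_watson([1, -1, 1, -1, 1, -1]), 2))  # 3.33
print(durbin_watson([2, 2, 2, 2]))                     # 0.0
```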
By default the chart plots the average residual error (Y-axis) against the primary date/time feature (X-axis):

Check the **Absolute value residuals** box to view the residuals as absolute values:

Some things to consider when evaluating the **Residuals** chart:
* When the residual is positive (and **Absolute value residuals** is unchecked), it means the actual value is greater than the predicted value.
* If you see unexpected variation, consider adding features to your model that may do a better job of accounting for the trend.
* Look for trends that may be easily explained, such as "we always under-predict holidays and over-predict summer sales."
* Consider adding [known in advance](ts-adv-opt#set-known-in-advance-ka) features that may help account for the trend.
|
aot
|
---
title: Confusion Matrix (for multiclass models)
description: The multiclass confusion matrix compares actual and predicted data values, so you can see if any mislabeling has occurred and with which values.
---
# Confusion Matrix (for multiclass models) {: #confusion-matrix-for-multiclass-models }
!!! info "Availability information"
Availability of unlimited classes in multiclass projects is dependent on your DataRobot package. If it is not enabled for your organization, the class limit is set to 100. Contact your DataRobot representative to increase this limit.
For multiclass models, DataRobot provides a multiclass confusion matrix to help evaluate model performance. The confusion matrix compares actual data values with predicted data values, making it easy to see if any mislabeling has occurred and with which values.

See [considerations](#feature-considerations) for working with multiclass models.
## Background {: #background }
In general, there are two types of prediction problems—regression and classification. Regression problems predict continuous values (1.7, 6, 9.8…). Classification problems, by contrast, classify values into discrete, final outputs or _classes_ (buy, sell, hold...).
Classification can be broken down into binary and multiclass problems.
* In a binary classification problem, there are only two possible classes. Some examples include predicting whether or not a customer will pay their bill on time (yes or no) or if a patient will be readmitted to the hospital (true or false). The model generates a predicted probability that a given observation falls into the "positive" class (`readmitted=yes` in the last example). By default, if the predicted probability is 50% or greater, then the predicted class is "positive."
* Multiclass classification problems, on the other hand, answer questions that have more than two possible outcomes (classes). For example, which of five competitors will a customer turn to (instead of simply whether or not they are likely to make a purchase)? Or, to which department should a call be routed (instead of simply whether or not someone is likely to make a call)? In this case, the model generates a predicted probability that a given observation falls into each class; the predicted class is the one with the highest predicted probability. (This is also called [_argmax_](https://machinelearningmastery.com/argmax-in-machine-learning/){ target=_blank }.) With additional class options for multiclass classification problems, you can ask more “which one” questions, which result in more nuanced models and solutions.
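The argmax step can be sketched in a few lines (the class labels and probabilities here are hypothetical):

```python
# Minimal sketch: the predicted class is the one with the highest
# predicted probability (argmax). Classes/probabilities are made up.
probs = {"buy": 0.2, "sell": 0.5, "hold": 0.3}

predicted_class = max(probs, key=probs.get)  # argmax over class probabilities
print(predicted_class)  # sell
```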
Depending on the number of values for a given target feature, DataRobot automatically determines the project type and whether a project is standard, extended, or unlimited multiclass. The following table describes how DataRobot assigns a default problem type for numeric and non-numeric target data types:
Target data type | Number of unique target values | Default problem type | Use multiclass?
------------------|------------------|----------------------|-------------------
Numeric | 3-10 | Regression | Yes, optional
Numeric | > 10 | Regression | Yes, optional (extended multiclass)
Non-numeric | 2 | Binary | No
Non-numeric | 3-100 | Multiclass | Yes, automatic
Non-numeric, numeric | 100+ | [Unlimited multiclass](#unlimited-multiclass) | Yes, automatic, if enabled
## Build multiclass models {: #build-multiclass-models }
Multiclass modeling uses the same general [model building workflow](model-data#model-building-workflow) as binary or regression projects.
1. Import a dataset and specify a target.
2. [Change regression project to multiclass](#change-regression-projects-to-multiclass), if applicable.
3. For unlimited multiclass projects with more than 1,000 classes, you can [modify the aggregation settings](feature-con#aggregate-target-classes). Otherwise, by default, DataRobot keeps the top 999 most frequent classes and aggregates the remainder into a single "other" bucket.
4. Use the [**Confusion Matrix**](#confusion-matrix-overview) to evaluate model performance.
### Change regression projects to multiclass {: #change-regression-projects-to-multiclass }
Once you enter a target feature, DataRobot classifies the project type and indicates the default with a tag next to the target feature:

If the project is classified as regression, and eligible for multiclass conversion, DataRobot provides a **Switch To Classification** link below the target entry box. Clicking the link changes the project to a classification project (values are interpreted as classes instead of continuous values). If the number of unique values falls outside the allowable range, the **Switch To Classification** link is not available.
??? tip "What is eligible for multiclass?"
Whether a project is considered "eligible for multiclass" is dependent on settings. If unlimited multiclass is enabled, all projects can be converted. Without unlimited multiclass, you can convert from numeric to multiclass when there are up to 100 unique numeric values.
Click **Switch To Regression** to switch the project type from classification back to the default regression setting.
With the training method set, verify or [change](additional#change-the-optimization-metric) the metric, choose a [modeling mode](model-data#set-the-modeling-mode), and click **Start**.
## Unlimited multiclass {: #unlimited-multiclass}
If enabled for your organization, unlimited multiclass is available to handle projects with a target feature containing more than 100 classes. For projects whose target contains more than 1,000 classes, DataRobot employs multiclass aggregation to reduce the number of modeled classes to 1,000.
### Set unlimited multiclass aggregation {: #set-unlimited-multiclass-aggregation }
To support more than 1,000 classes, DataRobot automatically aggregates classes, based on frequency, to 1,000 unique labels. You can, however, configure the aggregation parameters to ensure all classes necessary to your project are represented.
DataRobot handles the breakdown based on the number of classes detected:
* If there are between 101 and 1,000 classes, modeling continues as usual.
* If there are more than 1,000 classes, a warning appears below the target entry field:

If this warning appears, you can allow DataRobot to handle the aggregation. In this case, the 999 most frequent classes are kept and all other classes are binned into a 1,000th class, "other." You can, however, configure the aggregation settings in the [**Feature Constraints**](feature-con#aggregate-target-classes) advanced setting. See **Feature Constraints** for field descriptions and an aggregation example.
!!! note
Aggregation settings are also available for multiclass projects with fewer than 1,000 classes.
### Changes to Feature Impact {: #changes-to-feature-impact }
In projects with more than 100 classes, the [**Feature Impact**](feature-impact) visualization charts only the *aggregated* feature impact, not per-class impact. This is because:
1. Using only aggregated classes improves runtime.
2. Because each class has a comparatively low row count, per-class scores are less reliable than the aggregated score.
As a result, the **Select Class** dropdown is not available on the chart.
## Confusion Matrix overview {: #confusion-matrix-overview }
For each classification project type, DataRobot builds a confusion matrix to help evaluate model performance. The name "confusion matrix" refers to how a model can confuse two or more classes by consistently mislabeling (confusing) one class as another. The confusion matrix compares actual data values with predicted data values, making it easy to see if any mislabeling has occurred and with which values.
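Conceptually, a confusion matrix is just a tally of (actual, predicted) pairs; a minimal sketch (with hypothetical labels) might look like this:

```python
from collections import Counter

# Minimal sketch: tally (actual, predicted) pairs into a confusion matrix.
# The labels and rows below are hypothetical, purely for illustration.
actuals   = ["cat", "dog", "cat", "bird", "dog", "cat"]
predicted = ["cat", "cat", "cat", "bird", "dog", "dog"]

matrix = Counter(zip(actuals, predicted))
print(matrix[("cat", "cat")])  # 2 correct "cat" predictions
print(matrix[("dog", "cat")])  # 1 "dog" mislabeled (confused) as "cat"
```

Off-diagonal cells such as `("dog", "cat")` are exactly the "confusions" the matrix visualizes.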
A confusion matrix specific to the problem type is available for both binary classification (in the [ROC Curve](roc-curve-tab/index)) and multiclass problems. To access the multiclass confusion matrix, first build your models and then select the **Confusion Matrix** tab from the **Evaluate** division.
The tab displays two confusion matrix tables for each multiclass model: the *Multiclass Confusion Matrix* and the *Selected Class Confusion Matrix*. Both matrices compare predicted and actual values for each class, based on the results of the training data used to build the project, and use graphic elements to illustrate mislabeling of classes. The Multiclass Confusion Matrix provides an overview of every class found for the selected target, while the Selected Class Confusion Matrix analyzes a specific class. From these comparisons, you can determine how well DataRobot models are performing.
The following describes the components available in the **Confusion Matrix** tab.

| | Option | Description |
| - |-------|------------ |
|  | [Matrix](#large-confusion-matrix) | Overview of every found class. |
|  | [Data selection](#data-selection) | Data partition selector. |
|  | [Display modes](#modes) | Modes that impact display. |
|  | [Display options](#display-options) | Menu for display options. |
|  | [Matrix detail](#matrix-detail) | Numeric frequency details. |
|  | [Class selector](#class-selector) | Individual class selector. |
|  | [Selected class confusion matrix](#selected-class-confusion-matrix) | Class-specific matrix. |
|  | [Extended-class confusion matrix thumbnail](#extended-class-confusion-matrix-thumbnail) | Thumbnail for extended classes. |
### Large confusion matrix {: #large-confusion-matrix }
This matrix provides an overview of every class (value) that DataRobot recognized for the selected target in the dataset. It reports class prediction results using different colored and sized circles. Color indicates prediction accuracy—green circles represent correct predictions while red circles represent incorrect predictions. The size of a circle is a visual indicator of the occurrence (based on row count) of correct and incorrect predictions (for example, the number of rows in which “product problem” was predicted but the actual value was “bad support”).
The default size of the matrix depends on the type of multiclass:
* Up to 100 classes, the matrix is 10 classes by 10 classes.
* More than 100 classes, the matrix is 25 classes by 25 classes.
Click on any of the **correct predictions** (green circles) in the Multiclass Confusion Matrix to view and analyze additional details for that class in the display to the right of the matrix.
### Data selection {: #data-selection }
The data used to build the Multiclass Confusion Matrix is dependent on your project type and can be changed using the **Data Selection** dropdown. The option you choose changes the display to reflect the selected subset of the project's historical (training) data:
* For non time-aware projects, it is sourced from the validation, cross-validation, or holdout (if unlocked) [partitions](data-partitioning).
* For time-aware projects, it is sourced from an individual backtest, all backtests, or holdout (if unlocked).
Additionally, you can add an [external test dataset](pred-test#make-predictions-on-an-external-test-dataset) to help evaluate model performance.

### Modes {: #modes }
There are three mode options—**Global**, **Actual**, and **Predicted**—that provide detailed information about each class within the target column. Changing the mode updates the full matrix, the selected class matrix, and the details for the selected class.

The following table describes each of the Multiclass Confusion Matrix modes. See the [metrics](metrics#metrics-explained) documentation or the <a target="_blank" href="https://developers.google.com/machine-learning/crash-course/classification/precision-and-recall">Google developers foundation course</a> for descriptions of Recall and Precision.
| Mode | Description | Hover over a cell on the matrix grid to display... |
|-----------|--------------|----------------------------------------------------|
| Global | Provides F1 Score, Recall, and Precision metrics for each selected class. | <ul><li>Total row count</li><li>Total row count compared to *total row count* in the selected partition (%)</li></ul> |
| Actual | Provides details of the Recall score as well as a partial list of classes that the model confused with the selected class. Click **Full List** to see Recall score for all confused classes.* | <ul><li> Total row count</li><li> Total row count compared to the total row count of *actual class values* in the selected partition (%)</li></ul> |
| Predicted | Provides details of the Precision score (how often the model accurately predicted the selected class). Click **Full List** to see Precision score for all confused classes.* | <ul><li> Total row count</li><li> Total row count compared to the total row count of *predicted class values* in the selected partition (%) </li></ul> |
Clicking **Full List** opens the Feature Misclassification popup, which lists scores for all classes and allows you to switch between the Actual and Predicted modes.

### Display options {: #display-options }
The gear icon provides a menu of options for sorting and orienting the Multiclass Confusion matrix into different formats.

Display options include:
* Orientation of Actuals: sets the axis (rows or columns) for the Actual values display.
* Sort by: sets the sort order, either alphabetically, by actual or predicted frequency, or by F1 Score.
* Order: orders the matrix display in either ascending or descending order.
For example, to view the lowest Predicted Frequency values, select the Predicted Frequency and Ascending order options to display those values at the top of the matrix.
### Matrix detail {: #matrix-detail }
The blue bars that border the right and bottom sides of the Multiclass Confusion Matrix display numeric frequency details for each class and help determine DataRobot’s accuracy. For any class, click a bar opposite the Actual axis to see actual frequency or opposite the Predicted axis to see predicted frequency.
The example below reports the actual frequency for the class `[50-60)` of the feature `age`. In this case, based on the training data, there were 264 instances (at this sample size) in which the `[50-60)` class was the value of the target `age`. Those 264 rows make up 16.5% of the total dataset:

!!! tip
You can view frequency details for any class, regardless of which class is currently selected, by hovering over any of the blue bars.
### Class selector {: #class-selector }
The dropdown selects an individual class and provides details based on the active mode.

### Selected Class Confusion Matrix {: #selected-class-confusion-matrix }
The smaller matrix provides accuracy details for a single class. Changing the mode or the selected class (whether through the dropdown or by clicking a green circle in the full matrix) dynamically updates the Selected Class Confusion Matrix. The class displayed on the Selected Class Confusion Matrix is simultaneously highlighted on the full matrix, and the frequency percentages are displayed in the labeled quadrants. Hover over a circle in the matrix to view its contribution to the total number of rows in that sample (for the selected partition); the row counts across the four quadrants sum to the total dataset. For example, there are 1600 instances where `Bad Support` was the value of the target ChurnReasons. Hover over each quadrant to view a count of each outcome (the accuracy) of the DataRobot prediction.

The Selected Class Confusion Matrix is divided into four quadrants, summarized in the following table:
| Quadrant | Description |
|------------|-------------|
| True Positive | For all rows in the dataset that were actually ClassA, how many (what percent) did DataRobot correctly predict as ClassA? This quadrant is equal to the value reflected in the full matrix. |
| True Negative | For all rows in the dataset that were *not* ClassA, how many (what percent) did DataRobot correctly predict as not ClassA? This quadrant is equal to the value reflected in the full matrix. |
| False Positive | For all rows in the dataset that DataRobot predicted as ClassA, how many (what percent) were not ClassA? This is the sum of all incorrect predictions for the class in the full matrix. |
| False Negative | For all rows in the dataset that were ClassA, how many (what percent) did DataRobot incorrectly predict as something other than ClassA? This quadrant shows the sum of all rows that should have been the selected class in the full matrix but were not. |
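The four quadrants for a selected class follow directly from the full multiclass matrix, as in this minimal sketch (the class labels and counts are hypothetical):

```python
# Minimal sketch: derive the one-vs-rest quadrants for class "A" from a
# multiclass confusion matrix of {(actual, predicted): row_count}.
# The labels and counts below are hypothetical.
matrix = {
    ("A", "A"): 50, ("A", "B"): 10,
    ("B", "A"): 5,  ("B", "B"): 30,
    ("C", "C"): 5,
}
total = sum(matrix.values())

tp = matrix.get(("A", "A"), 0)                                   # True Positive
fn = sum(n for (a, p), n in matrix.items() if a == "A" and p != "A")  # False Negative
fp = sum(n for (a, p), n in matrix.items() if a != "A" and p == "A")  # False Positive
tn = total - tp - fn - fp                                        # True Negative

print(tp, fn, fp, tn)  # 50 10 5 35 -- the four quadrants sum to the dataset
```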
### Extended-class Confusion Matrix thumbnail {: #extended-class-confusion-matrix-thumbnail }
For extended-class (between 11 and 100) multiclass projects, DataRobot provides a thumbnail pagination tool to allow you a more detailed inspection of your results. The thumbnail is a smaller representation of the full multiclass matrix. The blue dots in the thumbnail indicate locations that contain the most predictions (whether classified correctly or incorrectly) and therefore might be the most interesting to investigate.
Clicking an area in the thumbnail updates the larger matrix to display the 10x10 area surrounding your selection. The final frame (lower right corner) displays only the columns remaining beyond the last full 10x10 frame (for example, a dataset with 83 classes shows only three entries in the final frame). The full matrix functions in the same way as the non-extended multiclass matrix described above. Statistics for each cell shown in the larger 10x10 matrix are calculated across the full confusion matrix represented by the thumbnail.

You can navigate the thumbnail either using the arrows along the outside or by clicking in a specific box; row and column numbers help identify the current matrix position:

A thumbnail displaying blue dots roughly on the diagonal from upper left to lower right potentially indicates a good model—there are many correct predictions. However, it is also possible that, because categories are not ordered, the dots indicate misses that are gathered by chance and so it is important to fully investigate each square to check performance.
## Feature considerations {: #feature-considerations }
The following notes apply to working with multiclass models generally. These sections provide details specific to more than 10 classes:
* [Working with more than 11 classes](#more-than-11-classes)
* [Working with more than 100 classes](#more-than-100-classes)
* If you do not have unlimited multiclass enabled, DataRobot supports up to 100 classes in multiclass projects. If you create a project with more than 100 classes, the **Data** page will indicate that the target is unsuitable for modeling by displaying an “Invalid target” badge next to its name.
* When using the **Leaderboard > Lift Chart** visualization, selecting a class is not backward compatible; you must retrain any model built before the feature was introduced to see its multiclass insights.
* Stratified partitioning and smart downsampling are not supported.
* Exposures, offsets, and event counts are not supported.
* Advanced preprocessing steps are not supported (e.g., auto-encoders, k-means, cosine similarity, credibility intervals, extra-trees-based feature selection, search for best transform, search for differences/ratios).
* The following tabs and tools are not supported:
* ROC Curve
* SHAP-based Prediction Explanations (XEMP is supported)
* Rating Table
* Hotspots and Variable Effects insights
* When working with the Text Mining and Word Cloud insights or data from the Coefficients tab, multiclass projects with more than 20 classes will only display insights for the 20 classes that appear the most often in the training data.
* You cannot make multiclass predictions using the legacy Prediction API `/api/v1`. You must use the new `/predApi/v1.0` API.
* User and open-source models are not supported (and are deprecated).
* The Confusion Matrix for multiclass projects that are run with slim-run (no [stacked predictions](data-partitioning#what-are-stacked-predictions)) is disabled when the model was trained into Validation.
* You cannot use anomaly detection with multiclass models.
### More than 11 classes {: #more-than-11-classes }
The following considerations apply when your project has 11 or more classes:
* [Stacked predictions](data-partitioning#what-are-stacked-predictions) are disabled (if trained into Validation and/or Holdout, those scores display N/A on the Leaderboard).
* ExtraTrees Classifier models have a row limit of 500K.
* Maximum derived text features are set to 20,000 to prevent OOM errors on text-heavy datasets.
* Some models can take significantly longer to train, depending on the dataset. On average, training time scales up with the number of classes.
### More than 100 classes {: #more-than-100-classes }
The following considerations apply when your project has 100 or more classes:
* Per-class Feature Impact is unavailable.
* The Confusion Matrix uses a 25x25 grid instead of the standard 10x10 grid.
* The public API response for the Confusion Matrix does not include `classMetrics` due to response size limitations. All metrics there can be derived from the Confusion Matrix data itself.
|
multiclass
|
---
title: Per-Class Bias
description: How to use Per-Class Bias, which helps to identify if a model is biased, and if so, how much and whom it's biased towards or against.
---
# Per-Class Bias {: #per-class-bias }
**Per-Class Bias** helps to identify *if* a model is biased, and if so, *how much* and *who* it's biased towards or against. Click **Per-Class Bias** to view the per-class bias chart.
The **Per-Class Bias** tab uses the fairness threshold and fairness score of each class to determine if certain classes are experiencing bias in the model's predictive behavior. Any class with a fairness score below the threshold is likely to be experiencing bias. Once these classes have been identified, use the [**Cross-Class Data Disparity**](cross-data) tab to determine where in the training data the model is learning bias.

## Per-Class Bias Chart {: #per-class-bias-chart }
The **Per-Class Bias** chart displays individual class values for the selected protected feature on the Y-axis. The class' respective fairness score, calculated using DataRobot's [fairness metrics](bias-ref#fairness-metrics), is displayed on the X-axis. Scores can be viewed as either [absolute](#show-absolute-values) or [relative](#show-relative-values) values.

The blue bar indicates a class is above the fairness threshold; red indicates a class is below that threshold and is therefore likely to be experiencing model bias. A gray bar indicates that there is not enough data for the class due to one of the following reasons:
* It contains fewer than 100 rows.
* It contains between 100 and 1,000 rows, but fewer than 10% of the rows belong to the majority class (the class with the most rows of data).
Hover over a class to see additional details, including both absolute and relative fairness scores, the number of values for the class, and a summary of the fairness test results.

Use the information in this chart to identify if there is bias in the outcomes between protected classes. Then, from the [**Cross-Class Data Disparity**](cross-data) tab, evaluate which features are having the largest impact on this bias.
### Control the chart display {: #control-the-chart-display }
This chart provides several controls that modify the display, allowing you to focus on information of particular interest.

#### Prediction threshold {: #prediction-threshold }
The [prediction threshold](threshold)—as seen in the [**ROC Curve**](roc-curve-tab/index) tab tools—is the dividing line for interpreting results in binary classification models. The default threshold is 0.5, and every prediction at or above this dividing line is assigned the positive class label.
For imbalanced datasets, a threshold of 0.5 can result in a validation partition without any positive class predictions, preventing the calculation of fairness scores on the **Per-Class Bias** tab. To recalculate and surface fairness scores, modify the prediction threshold to resolve the dataset imbalance.

All fairness metrics (except prediction balance) use the model's prediction threshold when calculating fairness scores. Changing this value recalculates the fairness scores and updates the chart to display the new values.
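The thresholding step itself can be sketched in a few lines (the probabilities below are hypothetical):

```python
# Minimal sketch: apply a prediction threshold to positive-class
# probabilities. The probability values are hypothetical.
positive_probs = [0.9, 0.45, 0.62, 0.3]

def label(p, threshold=0.5):
    return "positive" if p >= threshold else "negative"

print([label(p) for p in positive_probs])
# Lowering the threshold (e.g., for an imbalanced dataset) surfaces
# more positive-class predictions:
print([label(p, threshold=0.4) for p in positive_probs])
```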
#### Fairness metric {: #fairness-metric }
Use the **Metric** dropdown menu to change which fairness metric DataRobot uses to calculate the fairness score displayed on the X-axis.

#### Show absolute values {: #show-absolute-values }
Select **Show absolute values** to display the raw score each class received for the selected fairness metric.

#### Show relative values {: #show-relative-values }
Select **Show relative values** to scale the class with the highest fairness score to 1, and scale all other classes' fairness scores relative to it.

In this view, the fairness threshold is visible on the chart because DataRobot uses relative fairness scores to check against the fairness threshold.
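The relative scaling and threshold check can be sketched as follows (the class scores and the 0.8 threshold below are hypothetical):

```python
# Minimal sketch: scale per-class fairness scores relative to the
# top-scoring class, then flag classes below a fairness threshold.
# The scores and threshold are hypothetical.
scores = {"A": 0.5, "B": 0.4, "C": 0.3}
threshold = 0.8

top = max(scores.values())
relative = {c: s / top for c, s in scores.items()}
below = [c for c, r in relative.items() if r < threshold]

print(relative)  # {'A': 1.0, 'B': 0.8, 'C': 0.6}
print(below)     # ['C'] -- likely experiencing model bias
```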
#### Protected feature {: #protected-feature }
All protected features configured during project setup are listed on the left. Select a different protected feature to display its individual class values and fairness scores.

|
per-class
|
---
title: Cross-Class Accuracy
description: How to use the Cross-Class Accuracy table to understand the model's accuracy performance for each protected class.
---
# Cross-Class Accuracy {: #cross-class-accuracy }
The **Cross-Class Accuracy** tab calculates, for each protected feature, evaluation metrics and ROC curve-related scores segmented by class. Use these metrics to better understand how well the model is performing, and its behavior on a given protected feature/class segment.

## Cross-Class Accuracy table {: #cross-class-accuracy-table }
Use the **Cross-Class Accuracy** table to understand the model's accuracy performance for each protected class. Change the protected feature using the dropdown at the top.
The table below describes each accuracy metric:
| Metric | Description |
|-------------|------------------|
| [Optimization metric](opt-metric) (LogLoss in this example) | Displays the optimization metric selected on the **Data** page before model building. |
| [F1](metrics) | Reports the model's accuracy score, computed based on precision and recall. |
| [AUC (Area under the curve)](roc-curve#area-under-the-roc-curve) | Measures how well the model can distinguish between classes. |
| [Accuracy](metrics) | Measures the percentage of correctly classified instances. |
The above example compares LogLoss (the project's optimization metric) between male and female. The score for females is lower, meaning the model predicts salary rate more accurately for females than for males.
|
cross-acc
|
---
title: Bias and fairness
description: Introduces the Bias and Fairness tabs, which identify if a model is biased and why the model is learning bias from the training data.
---
# Bias and Fairness {: #bias-and-fairness }
The **Bias and Fairness** tabs identify if a model is biased and why the model is learning bias from the training data. The following sections provide additional information on using the tabs:
Leaderboard tab | Description | Source
------------------|-------------|------------
[Per-Class Bias](per-class) | Identify if a model is biased, and if so, how much and who it's biased towards or against. | Validation data
[Cross-Class Data Disparity](cross-data) | Depict why a model is biased, and where in the training data it learned that bias from. | Validation data
[Cross-Class Accuracy](cross-acc) | Measure the model's accuracy for each class segment of the protected feature. | Validation data
[Settings](fairness-metrics#configure-metrics-and-mitigation-post-autopilot) | Configure fairness tests from the Leaderboard. | N/A
If you did not configure **Bias and Fairness** prior to model building, you can [configure fairness tests for Leaderboard models](fairness-metrics#configure-metrics-and-mitigation-post-autopilot) in **Bias and Fairness** > **Settings**.
See the [Bias and Fairness reference](bias-ref) for a description of the methods used to calculate fairness for a machine learning model and to identify any biases from the model's predictive behavior.
## Bias and Fairness considerations {: #bias-and-fairness-considerations }
Consider the following when using the **Bias and Fairness** tab:
- Bias and fairness testing is only available for binary classification projects.
- Protected features must be categorical features in the dataset.
|
index
|
---
title: Cross-Class Data Disparity
description: How to use the Cross-Class Data Disparity insight, which shows why the model is biased, and where in the training data it learned the bias from.
---
# Cross-Class Data Disparity {: #cross-class-data-disparity }
The **Cross-Class Data Disparity** insight shows *why* the model is biased, and *where* in the training data it learned the bias from.
To view cross-class data disparity charts, click **Cross-Class Data Disparity**. Select a protected feature and two class values of that feature to measure for data disparities. The page updates to display a **Data Disparity vs Feature Importance** chart and a **Feature details** chart based on your selections. Use these charts in conjunction to perform root-cause analysis of the model's bias for the selected classes—the **Data Disparity vs Feature Importance** chart to identify which features in the dataset impact bias most, and the **Feature details** chart to investigate where the bias exists within the feature.

## Data Disparity vs Feature Importance chart {: #data-disparity-vs-feature-importance-chart }
The **Data Disparity vs Feature Importance** chart helps identify major disparities between two class values of the protected feature. The chart plots up to 100 features with the largest impact on the selected class pair of the protected feature. To change the number of features displayed, click the settings icon.

Each point on the graph represents a single feature. The placement of the point along the X-axis measures the [impact of the feature](feature-impact), and the Y-axis measures the disparity of that feature's data distribution between the two protected classes. This value is a calculation of the [Population Stability Index (PSI)](https://www.listendata.com/2015/05/population-stability-index.html){ target=_blank }, a measure of difference in distribution over time.
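As a rough illustration of how PSI quantifies a distribution difference, here is a minimal sketch comparing a feature's binned value distributions for two classes (the bin fractions are hypothetical, and this is the standard PSI formula rather than DataRobot's exact implementation):

```python
import math

# Minimal sketch: Population Stability Index between two binned
# distributions (fraction of rows per bin for each class).
# The distributions below are hypothetical.
dist_class1 = [0.5, 0.3, 0.2]
dist_class2 = [0.4, 0.4, 0.2]

psi = sum((p - q) * math.log(p / q)
          for p, q in zip(dist_class1, dist_class2))
print(round(psi, 4))  # 0.0511 -- higher PSI means larger disparity
```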
The color of each point represents a combination of the two axes: red indicates high-importance, high-disparity features; green indicates low-importance, low-disparity features; and yellow represents everything in between.

An additional border around a point specifies the project's target feature, as seen below:

Hover over any point to view the feature name as well as the calculated importance and data disparity scores. Note that the importance scores measure feature impact and can also be found on the **Understand > Feature Impact** tab.
After identifying features with a major impact on the disparity between two class segments, use the **Feature details** chart to investigate the disparity by viewing the distribution of its values across the two classes.
## Feature details chart {: #feature-details-chart }
The **Feature details** chart displays a feature's value distribution across the two class segments of the protected feature. The dropdown includes the 10 features from the **Data Disparity vs Feature Importance** chart.
Click to select a point on the **Data Disparity vs Feature Importance** chart or choose a feature from the dropdown, and the **Feature details** chart updates to display the differences in distribution between the two class values.

To investigate how the model interprets the relationship between each feature, click **View Feature Effects** to go to the [**Feature Effects**](feature-effects) tab.

|
cross-data
|
---
title: Portable Predictions
description: Learn about DataRobot's available methods for portable predictions.
---
# Portable Predictions {: #portable-predictions}
{% include 'includes/port-pred-options.md' %}
|
port-pred
|
---
title: Deploy tab
description: How to deploy a model from the Leaderboard
---
# Deploy tab {: #deploy-tab }
You can deploy models you build with DataRobot AutoML from the Leaderboard. In most cases, before deployment, you should unlock holdout and [retrain your model](creating-addl-models#retrain-a-model) at 100% to improve predictive accuracy. DataRobot automatically runs [**Feature Impact**](feature-impact) for the model (this also calculates **Prediction Explanations**, if available).
{% include 'includes/deploy-leaderboard.md' %}
|
deploy
|
---
title: Predict
description: Details on the Leaderboard Predict tab's capabilities.
---
# Predict {: #predict }
The **Predict** tab allows you to download various model assets and test predictions. For more information about the prediction methods in DataRobot, see [Predictions Overview](../../../predictions/index).
The following sections describe the components of the **Predict** tab.
Tab | Description
------ | ------------
[Make Predictions](predict) | Use the model to test predictions on up to 1GB of data before deploying it.
[Deploy](deploy) | Deploy a model from the Leaderboard to make predictions in a production environment.
[Downloads](download) | Download artifacts, such as model packages and a ZIP archive of model information.
[Portable Predictions](port-pred) | Execute predictions outside of the DataRobot application using [Scoring Code](port-pred/scoring-code/index) or the [Portable Prediction Server](port-pred/pps/index).
|
index
|
---
title: Make predictions before deploying a model
description: Learn how to make predictions on models that are not yet deployed and how to make predictions using an external dataset or your training data.
---
# Make predictions before deploying a model {: #make-predictions-before-deploying-a-model }
This section describes the Leaderboard's **Make Predictions** tab used to test predictions for models that are not yet deployed. Once you verify that a model can successfully generate predictions, DataRobot recommends [deploying](deploy-model) the model to [make predictions in a production environment](predictions/index). To make predictions before deploying a model, you can follow one of the [workflows for testing predictions](#workflows-for-testing-predictions).
!!! note
Review the [Predictions Overview](predictions/index) to learn about the best prediction for your needs. Additionally, the [Predictions Reference](pred-file-limits) outlines important considerations for all prediction methods. When working with time series predictions, the <b>Make Predictions</b> tab works slightly differently than with traditional modeling. Continue on this page for a general description of using <b>Make Predictions</b>; see the [time series documentation](ts-predictions#make-predictions-tab) for details unique to time series modeling.
## Workflows for testing predictions {: #workflows-for-testing-predictions }
Before deploying a model, you can use the following workflows to test predictions:
* [Make predictions on a new model](#make-predictions-on-a-new-model)
* [Make predictions on an external test dataset](pred-test#make-predictions-on-an-external-test-dataset)
* [Make predictions on training data](pred-test#make-predictions-on-training-data)
!!! note
A particular upload method may be disabled on your cluster. If a method is not available, the corresponding ingest option will be grayed out (contact your system administrator for more information, if needed). There are slight differences in the **Make Predictions** tab depending on your project type. For example, binary classification projects include a prediction threshold setting that is not applicable to regression projects.
## Make predictions on a new model {: #make-predictions-on-a-new-model }
1. On the Leaderboard, select the model you want to make predictions on and click **Predict > Make Predictions**.

2. <a name="upload-pred-dataset"></a>Upload your test data to run against the model. Drag and drop a file onto the screen or click **Choose file** to upload a local file (browse), specify a URL, choose a configured [data source](data-conn) (or create a new one), or select a dataset from the AI Catalog. If you choose the **Data source** option, you will be prompted for database login credentials.

!!! tip
The example above shows importing data for a binary classification project. In a regression project, there is no need to set a [prediction threshold](threshold) (the value that determines a cutoff for assignment to the positive class), so the field does not display.
3. Once the file is uploaded, click **Compute predictions** for the selected dataset. The **Compute predictions** button disappears and job status appears in the Worker Queue on the right sidebar.

4. <a name="step-add-columns"></a>When the prediction is complete, you can append up to five columns to the prediction dataset by clicking in the field below **Optional Features (0 of 5)**. Type the first few characters of the column name; the name autocompletes and you can select it. To add more columns, click in the field, type the first few characters, and select.

!!! note
    <ul><li>You can append a column only if it was present in the original dataset; the column need not have been included in the feature list used to build the model.</li><li>The **Optional Features (0 of 5)** feature is not available via the API.</li></ul>
5. Click **Download predictions** to save prediction results to a CSV file. To upload and run predictions on additional datasets, use the **Choose file** dropdown menu again. To delete a prediction dataset, click the trash icon.
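For binary classification projects, the prediction threshold mentioned in the tip above determines class assignment. The sketch below illustrates the idea only; the 0.5 default, the inclusive boundary, and the class labels are assumptions, not DataRobot's exact implementation:

```python
def assign_class(probability, threshold=0.5, positive="True", negative="False"):
    """Assign the positive class when the predicted probability meets or
    exceeds the threshold; otherwise assign the negative class.
    The inclusive boundary here is an illustrative choice."""
    return positive if probability >= threshold else negative

# The same probability can flip classes as the threshold moves:
print(assign_class(0.42, threshold=0.5))  # False
print(assign_class(0.42, threshold=0.3))  # True
```

This is why the threshold field appears only for binary classification: a regression model outputs a continuous value directly, so no cutoff is needed.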
|
predict
|
---
title: Downloads tab
description: Understand how to use the Downloads tab to export models for transfer and download exportable charts.
---
# Downloads tab {: #downloads-tab }
The **Downloads** tab allows you to download model artifacts—chart/graph PNGs and model data—in a single ZIP file. To access and download these artifacts, select a model on the **Leaderboard** and click **Predict > Downloads**.

!!! note
The **Downloads** tab previously contained Scoring Code for downloading. Scoring Code is now available from the [Leaderboard](sc-download-leaderboard) or a [deployment](sc-download-deployment). The artifacts available depend on your installation and which features are enabled.
=== "SaaS and Self-Managed"
**Download Exportable Charts**: Click the **Download** link to download a single ZIP archive containing charts, graphs, and data for the model. Charts and graphs are exported in PNG format; model data is exported in CSV format. To save individual charts and graphs, use the [**Export**](export-results) function.
!!! note
If Feature Effects is computed, you can export the chart image for individual features. If you choose to export a ZIP file, you will get all of the chart images and the CSV files for partial dependence and predicted vs. actual data.
=== "Self-Managed"
**MLOps package**: Download a package for DataRobot MLOps containing all the information needed to create a deployment. To use the model in a separate DataRobot instance, download the model package and upload it to the Model Registry of your other instance.
Accessing the MLOps package directs you to the **Deploy** tab. From there, you can download your model as a model package file (`MLPKG`). Once downloaded, you can use the model in a separate DataRobot instance by uploading it to the **Model Registry** of your other instance.
For full details, see the section on [model transfer](reg-transfer).
!!! info "Availability information"
DataRobot’s exportable models and independent prediction environment option, which allows a user to export a model from a model building environment to a dedicated and isolated prediction environment, is not available for managed AI Platform deployments.
|
download
|
---
title: Text Prediction Explanations
description: Helps to understand the importance (both negative and positive impacts) a model places on words and phrases.
---
# Text Prediction Explanations {: #text-prediction-explanations }
DataRobot has several visualizations that help to understand which features are most predictive. While this is sufficient for most variable types, text features are more complex. With text, you need to understand not only the text feature that is impactful, but also which specific words within a feature are impactful.
Text Prediction Explanations help to understand, at the word level, the text and its influence on the model—which data points within those text features are actually important.
Text Prediction Explanations evaluate n-grams (contiguous sequences of *n* items from a text sample—phonemes, syllables, letters, or words). With detailed n-gram-based importances available to explore after model building (as well as after deploying a model), you can understand what causes a negative or positive prediction. You can also confirm that the model is learning from the right information, does not contain undesired bias, and is not overfitting on spurious details in the text data.
Consider a movie review. Each row in the dataset includes a review of the movie, but the `review` column contains a varying number of words and symbols. Instead of saying simply that the review, in general, is why DataRobot made a prediction, with Text Prediction Explanations you can identify on a more granular level which _words_ in the review led to the prediction.
Text Prediction Explanations are available for both XEMP and SHAP. While the Leaderboard insight displays quantitative indicators in a different visual format, based on different calculation methodologies, the specific explanation modal is largely consistent (and is described [below](#understand-the-output)).
## Access text explanations {: #access-text-explanations }
Access either [XEMP-based](xemp-pe) or [SHAP](shap-pe) Prediction Explanations from a Leaderboard model's **Understand > Prediction Explanations** tab. Functionality is generally the same as for non-text explanations. However, instead of showing the raw text in the value column, you can click the open () icon to access a modal with deeper text explanations.
For XEMP:

For SHAP:

## View text explanations {: #view-text-explanations }
Once the modal is open, the information provided is the same for both methodologies, with the exception of one value:
* XEMP reports an [impact](xemp-pe#interpret-xemp-prediction-explanations) of the explanation’s strength using `+` and `-` symbols.
* SHAP reports the [contribution](shap-pe#interpret-shap-prediction-explanations) (how much is the feature responsible for pushing the target away from the average?).
### Understand the output {: #understand-the-output }
Text Explanations help to visualize the impact of different n-grams by color (the n-gram impact scores). The brighter the color, the higher the impact, whether positive or negative. The color palette is the same as the spectrum used for the [Word Cloud](word-cloud) insight, where blue represents a negative impact and red indicates a positive impact. In the example below, the text shown represents the content of one row (row 47) in the feature column "review."

Hover on an n-gram and notice that the color is emphasized on the color bar. Use the scroll bar to view all text for the row and feature.
Check the **Unknown ngrams** box to easily determine (via gray highlight) those n-grams not recognized by the model (most likely because they were not seen during training). In other words, the gray-highlighted n-grams were not fed into the modeler for the blueprint.

Showing **Unknown ngrams** helps prevent misinterpreting a model's usefulness in cases where tokens are shown as neutrally attributed when they are expected to have a strong positive or negative attribution. Again, this is because the model did not see them during training.
!!! note
Text is shown in its original format, without modification by a tokenizer. This is because a tokenizer can distort the original text when run through preprocessing. These modifications can render the explanation distorted as well. Additionally, for Text Prediction Explanations downloads and API responses, DataRobot provides the location of each ngram token using starting and ending indexes with reference to the text data. This allows you to replicate the same view externally, if required. In Python, when data is text, use (`text[starting_index: ending_index]`) to return the referenced text ngram token.
## Compute and download predictions {: #compute-and-download-predictions }
Compute explanations as you would with standard Prediction Explanations. You can upload additional data using the same model, calculate explanations, and then download a CSV of results. The output for XEMP and SHAP differs slightly.
### XEMP Text Explanations downloaded {: #xemp-text-explanations-downloaded }
After computing, you can download a CSV that looks similar to:

The JSON-encoded output of the per-n-gram explanation contains all the information needed to recreate what was visible in the UI—attribution scores, impact symbols—according to starting and ending indexes. The original text is included as well.
View the [XEMP compute and download](xemp-pe#compute-and-download-predictions) documentation for more detail.
### SHAP Text Explanations downloaded {: #shap-text-explanations-downloaded }
Downloaded SHAP Text Explanations also show the information described above for XEMP downloads. When a row has no value, Text Explanations returns:

Compare this to a row with JSON-encoded data:

View the [SHAP compute and download](shap-pe#computing-and-downloading-explanations) documentation for more detail.
## Explanations from a deployment {: #explanations-from-a-deployment }
When you [calculate predictions from a deployment](batch-pred#set-prediction-options) (**Deployments > Predictions > Make Predictions**), you:
1. Upload the dataset.
2. Toggle on **Include prediction explanations**.
3. Check the **Number of ngrams explanations** box to make available CSV output that includes Text Explanations.
From the [**Prediction API**](code-py) tab, you can generate Text Explanations using scripting code from any of the interface options. In the resulting snippet, you must enable:
* `maxExplanations`
* `maxNgramExplanations`
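As a rough sketch, the two parameters above might be assembled into a request's query string as shown below. The parameter values and the helper function are illustrative assumptions; copy the generated snippet from the **Prediction API** tab for the authoritative URL, headers, and parameter placement:

```python
# Illustrative only: treat the snippet generated on the Prediction API tab
# as the source of truth for parameter names and values.
params = {
    "maxExplanations": 3,        # per-row prediction explanations to return
    "maxNgramExplanations": 10,  # hypothetical value enabling text explanations
}

def build_query_string(params):
    """Join parameters into a URL query string."""
    return "&".join(f"{key}={value}" for key, value in params.items())

print(build_query_string(params))
# maxExplanations=3&maxNgramExplanations=10
```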
## Additional support {: #additional-support }
Text Explanations are supported for a deployed model in a [Portable Prediction Server](portable-pps). They are exported as an `mlpkg` file, where the language data associated with the dataset is saved.
If the explanations are XEMP-based, they are supported for [custom models](custom-models/index) and [custom tasks](cml-custom-tasks).
|
predex-text
|
---
title: XEMP Prediction Explanations
description: To view XEMP-based Prediction Explanations, which work for all models, first calculate feature impact on the Prediction Explanations or Feature Impact tabs.
---
# XEMP Prediction Explanations {: #xemp-prediction-explanations }
This section describes XEMP-based Prediction Explanations. See also the general description of Prediction Explanations for an overview of [SHAP and XEMP methodologies.](pred-explain/index)
See the associated [considerations](pred-explain/index#feature-considerations) for important additional information.
## Prediction Explanations overview {: #prediction-explanations-overview }
The following steps provide a general overview of using the **Prediction Explanations** tab. You can also upload and compute explanations for [additional datasets](#upload-a-dataset).
!!! note
One significant difference between methodologies is that XEMP can additionally generate Prediction Explanations for [multiclass projects](#multiclass-prediction-explanations). The basic function and interpretation are the same, with the addition of multiclass-specific filtering and viewing options.
1. Click **Prediction Explanations** for the selected model.
2. If Feature Impact has not already been calculated for the model, click the **Compute Feature Impact** button. (You can calculate impact from either the **Prediction Explanations** or [**Feature Impact**](feature-impact) tabs—they share computational results.)

3. Once the computation completes, DataRobot displays the **Prediction Explanations** preview, using the default values (described below):

| | Component | Description |
|-------------|-----------------|-------|
| | [Computation inputs](#change-computation-inputs) | Sets the number of explanations to return for each record and toggles whether to apply low and/or high ranges to the selection. |
| | [Change threshold values](#change-threshold-values) | Sets low and high validation score thresholds for prediction selection. |
|  | [Prediction Explanations preview](#prediction-explanations-overview) | Displays a preview of explanations, from the validation data, based on the input and threshold settings. |
|  | [Calculator](#compute-and-download-predictions)  | Initiates computation of predictions and then explanations for the full selected prediction set, using the selected criteria. |
4. If desired, change the [computation inputs](#change-computation-inputs) and/or [threshold values](#change-threshold-values) and update the preview.
5. Compute using the new values and download the results.
!!! note
Additional elements for [Visual AI](visual-ai/index) projects, described [below](#prediction-explanations-for-visual-ai), are available to support the unique quality of image features.
DataRobot applies the default or user-specified baseline thresholds to all datasets (training, validation, test, prediction) using the same model. Whenever you modify the baseline, you must update the preview and recompute Prediction Explanations for the uploaded datasets.
## Interpret XEMP Prediction Explanations {: #interpret-xemp-prediction-explanations }
A sample preview looks as follows:

A simple way to explain this result is:
> The prediction value of 0.894 can be found in row 4936. For that value, the six listed features had the highest positive impact on the prediction.
From the example above, you could answer "Why did the model give one of the patients an 89.4% probability of being readmitted?" The explanations indicate that the patient's weight, number of emergency visits (3), and 25 medications all had a strong positive effect on the (also positive) prediction, along with the other listed reasons.
For each prediction, DataRobot provides an ordered list of explanations, with the number of explanations based on the [setting](#change-computation-inputs). Each explanation is a feature from the dataset and its corresponding value, accompanied by a qualitative indicator of the explanation’s strength. A positive influence is represented as +++ (strong), ++ (medium) or + (weak), and a negative influence is represented as --- (strong), -- (medium) or - (weak). For more information, see the description of [how qualitative strength is calculated](xemp-calc) for XEMP.
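The qualitative indicators can be read programmatically with a sketch like the following. The numeric cutoffs are invented for illustration only; see the linked description of how qualitative strength is calculated for the real XEMP method:

```python
def strength_symbol(score):
    """Map a signed impact score to XEMP-style qualitative symbols.
    The 0.2 and 0.5 cutoffs are hypothetical illustration values."""
    sign = "+" if score > 0 else "-"
    magnitude = abs(score)
    if magnitude >= 0.5:
        return sign * 3  # strong influence
    if magnitude >= 0.2:
        return sign * 2  # medium influence
    return sign          # weak influence

print(strength_symbol(0.7))   # +++
print(strength_symbol(-0.3))  # --
print(strength_symbol(0.1))   # +
```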
Scroll through the prediction values to see results for other patients:

### Notes on explanations {: #notes-on-explanations }
Consider the following:
* If the data points are very similar, the explanations can list the same rounded values.
* It is possible to have an explanation state of MISSING if a “missing value” was important (a strong indicator) in making the prediction.
* Typically, the top explanations for a prediction have the same direction as the outcome, but it is possible that with interaction effects or correlations among variables, an explanation could, for instance, have a strong positive impact on a negative prediction.
* The number in the ID column is the row number ID from the imported dataset.
* It is possible that a high-probability prediction shows an explanation of negative influence (or, conversely, a low score prediction shows a variable with high positive effect). In this case, the explanation is indicating that if the value of the variable were different, the prediction would likely be even higher.
For example, consider predicting hospital readmission risk for a 107-year-old woman with a broken hip, but with excellent blood pressure. She undoubtedly has a high likelihood of readmission, but due to her blood pressure she has a lower risk score (even though the overall risk score is very high). The Prediction Explanations for blood pressure indicate that if the variable were different, the prediction would be higher.
??? info "How are explanations calculated for a model trained on 100%?"
The question arises because validation data is necessary for the calculation of Prediction Explanations, but the 100% model uses validation for training. However, because partitions are defined at the project level, the same rows are used in the validation partition for every model. Those are the rows used to pick “exemplar” values for the features in XEMP explanations—they are the same for every model, including 100% models.
If you use the validation rows for predictions on that model—for example, in calculating metrics—the resulting predictions will then be [“in-sample”](glossary/index#in-sample-predictions). In that case, approach the results with an appropriate amount of uncertainty regarding whether they will generalize to new data, as there is a risk of target leakage.
In the case of Prediction Explanations on a deployed model that makes predictions on _new_ data, predictions are not on the training data. Instead they use the new data and synthetic rows based on the new data plus the “exemplar” values. Although the exemplars come from the validation rows, DataRobot does not use predictions from those rows in the explanation, so the risk of leakage is remote.
## Modify the preview {: #modify-the-preview }
DataRobot computes a preview of up to 10 Prediction Explanations for a maximum of six predictions from your training data (i.e., from the validation set).
The following are DataRobot's default settings for the **Prediction Explanations** tab:
| Component | Default value | Notes |
|-------------|-----------------|-------|
| Number of Prediction Explanations | 3 | Set any number of explanations between 1 and 10. |
| Number of predictions | 6 maximum | The number of preview predictions shown is capped by the number of data points in the specified range. If there are only four in the specified range, for example, only four rows are shown in the preview. |
| Low threshold checkbox | Selected | NA |
| High threshold checkbox | Selected | NA |
| Prediction threshold range | Top and bottom 10% of the prediction distribution | Drag to change. |
The *training* data is automatically available for prediction and explanation preview. When you upload your *prediction* dataset, DataRobot computes Prediction Explanations for the full set of predictions.
If you modify the [computation inputs](#change-computation-inputs) and/or [threshold values](#change-threshold-values), DataRobot prompts you to update the preview:

Click **Update** to redisplay the preview with the new settings; click **Undo Changes** to restore the previous settings. Updating the preview generates a new set of explanations with the given parameters for up to six predictions from within the highlighted range.
### Change computation inputs {: #change-computation-inputs }
There are three inputs you can set for DataRobot to use when computing Prediction Explanations—a low or high prediction threshold [value](#change-threshold-values) (when checked) or no threshold when unchecked, and the number of explanations for each prediction.
To change the number of explanations, type (or use the arrows in the box) to set a value between one and 10. Check the low and high threshold boxes and use the sliders to set the range from which to view Prediction Explanations. Modifying the inputs prompts you to update the preview.

!!! tip
You must click **Update** any time you modify the thresholds (and want to save the changes).
### Change threshold values {: #change-threshold-values }
The threshold values demarcate a range in the prediction distribution from which DataRobot pulls the predictions. To change the threshold values, drag the low and/or high threshold bar to your desired location and update the preview.
You can apply low and high threshold filters to speed up computation. When at least one is specified, DataRobot only computes Prediction Explanations for the selected outlier rows. Rows are considered to be outliers if their predicted value (in the case of regression projects) or probability of being the positive class (in classification projects) is less than the low or greater than the high value. If you toggle both filters off, DataRobot computes Prediction Explanations for all rows.
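The outlier selection described above amounts to a simple filter over the predicted values. The following sketch makes that concrete (the thresholds and prediction values are made up):

```python
def select_outlier_rows(predictions, low=None, high=None):
    """Return indexes of rows whose prediction falls below the low
    threshold or above the high threshold. With both thresholds off
    (None), every row is selected, mirroring the 'filters off' case."""
    if low is None and high is None:
        return list(range(len(predictions)))
    selected = []
    for row_index, value in enumerate(predictions):
        below_low = low is not None and value < low
        above_high = high is not None and value > high
        if below_low or above_high:
            selected.append(row_index)
    return selected

predictions = [0.05, 0.40, 0.55, 0.92]
print(select_outlier_rows(predictions, low=0.10, high=0.90))  # [0, 3]
print(select_outlier_rows(predictions))                       # [0, 1, 2, 3]
```

Because explanations are computed only for the selected rows, narrowing the thresholds directly reduces computation time.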
If [Exposure](additional#set-exposure) is set (for regression projects), the distribution shows the distribution of *adjusted* predictions (e.g., *predictions divided by the exposure*). Accordingly, the label of the distribution graph changes to **Validation Predictions/Exposure** and the prediction column name in the preview table becomes **Prediction/Exposure**.
## Compute and download predictions {: #compute-and-download-predictions }
DataRobot automatically previews Prediction Explanations for up to six predictions from your training data's validation set. These are shown in the initial display. You can, however, [compute and download](#compute-full-explanations) explanations for the full training partition (1), for the complete dataset (2), or for [new datasets](#upload-a-dataset) (3):

### Upload a dataset {: #upload-a-dataset }
Once you are satisfied that the thresholds are returning the types and range of explanations that you are interested in, upload one or more prediction datasets. To do so:
1. Click **+ Upload new dataset**. DataRobot transfers you to the [**Make Predictions**](predict) tab, where you can browse, import, or drag datasets for upload. Optionally, [append columns](#append-columns).
2. Import the new dataset. When import completes, click again on the **Understand > Prediction Explanations** tab to return.
### Append columns {: #append-columns }
Sometimes you may want to append columns to your prediction results. Appending is a useful tool, for example, to help minimize any additional post-processing work that may be required. Because by default the target feature is not included in the explanation output, appending it is a common action.
The append action is independent of other actions, so you can append at any point in the Prediction Explanation workflow (before or after uploading new datasets or running calculations). When you initiate a download, DataRobot appends the columns you added to the output.
To append features (you can only append a column that was present when the model was built), either switch to the [**Make Predictions**](predict) tab or click **Upload a new dataset** to be taken to that tab automatically. Follow the instructions there beginning with [step 5](predict#step-5).
## Compute full explanations {: #compute-full-explanations }
Although by default the insight that you see reflects validation data, you can view predictions and explanations for all data points in the project's training data. To do so, click the compute button () next to the dataset named _Training data_. This dataset is automatically available for every model.
### Generate and download Prediction Explanations {: #generate-and-download-prediction-explanations }
You can generate explanations on predictions from any uploaded dataset. First though, DataRobot must calculate explanations for *all* predictions, not just the six from the preview. To compute and download predictions once your dataset is uploaded:
1. If DataRobot has not calculated explanations for all predictions in a dataset, click the calculator icon () to the right of the dataset to initiate explanation computation.
2. Complete the fields in the **Compute explanations** modal to set the parameters and click **Compute** to compute explanations for each row in the corresponding dataset:

DataRobot begins calculating explanations; track the progress in the Worker Queue.
3. When calculations complete, the dataset is marked as ready for download.

4. Click the download icon () to export all of the dataset's predictions and corresponding explanations in CSV format.
Note that predictions outside the selected range are included in the data but do not contain explanations.
5. If you update the settings (change the thresholds or number of explanations), you must first click the **Update** button and then recalculate the explanations by clicking the calculator:
!!! note
Only the most recent version of explanations is saved for a dataset. To compare parameter settings, download the Prediction Explanations CSV for a setting and rerun them for the new setting.
## Multiclass Prediction Explanations {: #multiclass-prediction-explanations }
Prediction Explanations for multiclass classification projects are available from both a Leaderboard model or [a deployment](#explanations-from-a-deployment).
### Explanations from the Leaderboard {: #explanations-from-the-leaderboard }
In multiclass projects, DataRobot returns a prediction value for each class—multiclass Prediction Explanations describe why DataRobot determined that prediction value for any class that explanations were requested for. So if you have classes `A`, `B`, and `C`, with values of `0.4`, `0.1`, `0.5` respectively, you can request the explanations for why DataRobot assigned class `A` a prediction value of `0.4`.
### View explanations preview {: #view-explanations-preview }
1. Access [XEMP-based](xemp-pe) Prediction Explanations from a Leaderboard model's **Understand > Prediction Explanations** tab.
2. Use the **Class** dropdown to view training data-based explanations for the class. Each class has its own distribution chart (1) and its own set of samples (2).

??? tip "Deep dive: Multiclass preview "
Preview data is available for a subset of the most frequent model classes. The selection is derived from the [Lift Chart](lift-chart#lift-chart-with-multiclass-projects) distribution and typically represents the top 20 classes. Although multiclass supports an unlimited number of classes, the display supports just the 20 available in the Lift Chart.<br/>
Some models don't have a Lift Chart calculated. Most often this happens for slim-run projects (for example, GB+ dataset sizes or multiclass projects with >10 classes) trained into validation (>64% for default parameters). In those cases, although the chart isn't available, DataRobot can still calculate explanations. This is not unique to multiclass projects; multiclass just has additional corner cases in which there can be no distribution chart for some classes—when a class is rare and wasn't present in the training data, for example.<br/>
When calculating the multiclass preview, DataRobot selects a limited number of classes to display (there can be up to 1000) in support of better UX and faster calculation times. As a result, the available display is a selection of those classes that do have Lift Chart calculations (DataRobot calculates 20 classes for a multiclass model). If the model doesn't have any Lift Chart data, DataRobot selects the first 20 classes alphabetically.
### Calculate explanations {: #calculate-explanations }
You can calculate explanations either for the full training data set or for new data. [The process](xemp-pe#compute-and-download-predictions) is generally the same as for classification and regression projects, with a few multiclass-specific differences. This is because DataRobot calculates explanations separately for each class. Clicking the calculator opens a modal that controls which classes explanations are generated for:

The **Classes** setting controls the method for selecting which classes are used in explanation computation. The **Number of classes** setting configures how many classes, per row, DataRobot computes explanations for. For example, consider a dataset with 6 classes. Choosing **Predicted** and **3** classes generates explanations for the 3 classes—of the 6—with the highest prediction values. To maximize response and readability, the maximum number of classes to compute explanations for is 10. (This is a different value than what is supported in the prediction preview chart.)
The **Classes** options include:
Class | Description
----- | -----------
Predicted | Selects classes based on prediction value. For each row in the prediction dataset, compute explanations for the number of classes set by the **Number of classes** value.
Actual | Compute explanations from classes that are known values. For each row, explain the class that is the "ground truth." This option is only available when using the training dataset.
List of classes | Selects specific classes from a list of classes. For each row, explain only the classes identified in the list.

Once explanations are computed, hover on the info icon () to see a summary of the computed explanations:

### Download explanations {: #download-explanations }
Click the download icon () to export all of a dataset's predictions and corresponding explanations in CSV format. Explanations for multiclass projects contain additional fields for each explained class—a class label and a list of explanations (based on your computation settings) for each.
Consider this sample output:

Some notes:
* Each row has each predicted class explained (1).
* The first class column is the top predicted class.
* If you've used the **List of classes** option, the output shows just those classes. This is useful when you want specific classes explained and are less interested in the predicted values.
When a dataset shows prediction percentages that are close in value, the explanations become very important for understanding why DataRobot predicted a given class—they help you compare the predicted class against the challenger class(es).
## Explanations from a deployment {: #explanations-from-a-deployment }
When you [calculate predictions from a deployment](batch-pred#set-prediction-options) (**Deployments > Predictions > Make Predictions**), DataRobot adds the **Classes** and **Number of classes** fields to the options available for non-multiclass projects:

## Prediction Explanations for Visual AI {: #prediction-explanations-for-visual-ai }
Prediction Explanations for [Visual AI](visual-ai/index) projects, also known as Image Explanations, allow you to retrieve explanations for datasets that include features of type "image". Visual AI Image Explanations support all the features described above, with some additions. For explanation size limitations for Visual AI prediction datasets, see the [considerations](vai-model#feature-considerations).
Once calculated, notice the addition of an icon, indicating that an image was an important part of the explanation:

Click on the icon () to drill down into the image explanation:

Toggle on the [Activation Map](vai-insights#activation-maps) to see what the model "looked at" in the image.

### Compute and download explanations {: #compute-and-download-explanations }
As with Prediction Explanations, you can [compute predictions and download explanations](#compute-and-download-predictions) for every row in your dataset. When you download the Image Explanations archive, it contains:
* a predictions CSV file (1)
* a folder of images (2)

Open the CSV and notice that for image features that are part of the explanation, the image file name is listed as the feature's value.

Open the image folder to find and view a rendered (heat-mapped) photo of the associated image.

|
xemp-pe
|
---
title: Prediction Explanations
description: Index page for SHAP, XEMP, and Text Prediction Explanations.
---
# Prediction Explanations {: #prediction-explanations }
The following sections provide an overview and describe the alternate methodologies for working with Prediction Explanations:
Topic | Describes...
----- | ------
[Prediction Explanations overview](predex-overview) | Describes the SHAP and XEMP methodologies, including benefits and tradeoffs.
[SHAP Prediction Explanations](shap-pe) | Describes how to work with SHAP-based Prediction Explanations.
[XEMP Prediction Explanations](xemp-pe) | Describes how to work with XEMP-based Prediction Explanations.
[Text Prediction Explanations](predex-text) | Helps to interpret the output of text-based explanations.
|
index
|
---
title: Prediction Explanations overview
description: SHAP and XEMP Prediction Explanations give a quantitative indicator of how variables affect predictions by row. Text explanations identify which specific words within a feature are impactful.
---
# Prediction Explanations overview {: #prediction-explanations-overview }
**Prediction Explanations** illustrate what drives predictions on a row-by-row basis—they provide a quantitative indicator of the effect variables have on the predictions, answering why a given model made a certain prediction. It helps to understand why a model made a particular prediction so that you can then validate whether the prediction makes sense. It's especially important in cases where a human operator needs to evaluate a model decision and also when a model builder needs to confirm that the model works as expected. For example, "why does the model give a 94.2% chance of readmittance?" (See more examples [below](#examples).)
DataRobot offers two methodologies for computing Prediction Explanations: SHAP (based on Shapley Values) and XEMP (eXemplar-based Explanations of Model Predictions).
!!! note
To avoid confusion when the same insight could be produced by both methodologies yet return different results, you must enable SHAP in [**Advanced options**](additional) prior to project start.
DataRobot also provides [Text Prediction Explanations](predex-text) specific to text features, which help to understand, at the word level, the text and its influence on the model. Text Prediction Explanations support both XEMP and SHAP methodologies.
To access and enable Prediction Explanations, select a model on the Leaderboard and click **Understand > Prediction Explanations**.

See these [things to consider](#feature-considerations) when working with Prediction Explanations.
## SHAP or XEMP-based methodology? {: #shap-or-xemp-based-methodology }
Both SHAP and XEMP methodologies estimate which features have a stronger or weaker impact on the target for a particular row. While the two methodologies usually provide similar results, the explanation _values_ differ (because the methodologies differ). The table below illustrates some differences:
Characteristic | SHAP | XEMP
-------------- | ---- | ----
Open-source? | Open source algorithm provides regulators an easy audit path. | Uses a well-supported DataRobot proprietary algorithm.
Model support | Works for [linear models, Keras deep learning models, and tree-based models, including tree ensembles](shap#shap-compatibility-matrix). | XEMP works for all models.
Column/value limits | No column or value limits. | Up to 10 values in up to the top 50 columns.
Speed | 5-20 times faster than XEMP. | —
Accuracy | Supports all _key_ blueprints, so accuracy is often the same. | Because XEMP supports all blueprints included in Autopilot, results may have slightly higher accuracy.
Measure | Multivariate, measuring the effect of varying multiple features at once. Additivity allocates the total effect across individual features that were varied. | Univariate, measuring the effect of varying a single feature at a time.
Best use case | Explaining exactly how you get from an average outcome to a specific prediction amount. | Explaining which individual features have the greatest impact on the outcome versus an average input value (i.e., which feature has a value that most changed the prediction versus an average data row).
Additional notes | SHAP is additive, making it easy to see how much top-N features contribute to a prediction. | —
!!! note
While Prediction Explanations provide several quantitative indicators for why a prediction was made, the calculations do not fully explain _how_ a prediction is computed. For that information, use the [coefficients with preprocessing information](coefficients#coefficientpreprocessing-information-with-text-variables) from the **Coefficients** tab.
See the [XEMP](xemp-pe) or [SHAP](shap-pe) pages for a methodology-based description of using and interpreting Prediction Explanations.
## Examples {: #examples }
A common question when evaluating data is "why is a certain data point considered high-risk (or low-risk) for a certain event?"
A sample case for **Prediction Explanations**:
> Sam is a business analyst at a large manufacturing firm. She does not have a lot of data science expertise, but has been using DataRobot with great success to predict the likelihood of product failures at her manufacturing plant. Her manager is now asking for recommendations for reducing the defect rate, based on these predictions. Sam would like DataRobot to produce Prediction Explanations for the expected product failures so that she can identify the key drivers of product failures based on a higher-level aggregation of explanations. Her business team can then use this report to address the causes of failure.
Other common use cases and possible reasons include:
* What are indicators that a transaction could be at high risk for fraud? Possible explanations include transactions out of a cardholder's home area, transactions out of their “normal usage” time range, and transactions that are too large or small.
* What are some reasons for setting a higher auto insurance price? The applicant is single, male, under 30 years old, and has received a DUI or multiple tickets. A married homeowner may receive a lower rate.
_SHAP_ estimates how much a feature is responsible for a given prediction being different from the average. Consider a credit risk example that builds a simple model with two features—number of credit cards and employment status. The model predicts that an unemployed applicant with 10 credit cards has a 50% probability of default, while the average default rate is 5%. SHAP estimates how each feature contributed to the 50% default risk prediction, determining that 25% is attributed to the number of cards and 20% is due to the customer's lack of a job.
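The credit risk example above can be restated numerically. The values come directly from the text; the decomposition below simply shows that the per-feature SHAP contributions, added to the average (base) rate, reproduce the row's prediction.

```python
# Values from the credit-risk example: 5% average default rate, with SHAP
# attributing +25 points to the card count and +20 points to unemployment.
baseline_default_rate = 0.05
shap_contributions = {
    "number_of_credit_cards": 0.25,
    "employment_status": 0.20,
}

prediction = baseline_default_rate + sum(shap_contributions.values())
print(f"{prediction:.0%}")  # 50%
```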
## Feature considerations {: #feature-considerations }
Consider the following when using Prediction Explanations. See also [time-series specific](ts-consider#accuracy) considerations.
* Predictions requested with Prediction Explanations will typically take longer to generate than predictions without explanations, although actual speed is model-dependent. Computation runtime is affected by the number of features, blenders (only supported for XEMP), and text variables. You can try to increase speed by reducing the number of features used, or by avoiding blenders and text variables.
* [Image Explanations](xemp-pe#prediction-explanations-for-visual-ai)—or Prediction Explanations for images—are not available from a deployment (for example, Batch predictions or the Predictions API). See also [SHAP considerations](#shap) below.
* Once you set an explanation method (XEMP or SHAP), insights are only available for that method. (For example, you cannot build with XEMP and then request the SHAP endpoint via the API.)
* Anomaly detection models trained from DataRobot blueprints always compute Feature Impact using SHAP. For anomaly detection models from user blueprints, Feature Impact is computed using the permutation-based approach.
* The deployment [Data Export](data-export) tab doesn't store the Prediction Explanations for export, even when Prediction Explanations are requested while making predictions through that deployment.
### Prediction Explanation and Feature Impact methods {: #prediction-explanation-and-feature-impact-methods }
Prediction Explanations and Feature Impact are calculated in multiple ways depending on the project and target type:
!!! note
SHAP Impact is an aggregation of SHAP explanations. For more information, see [SHAP-based Feature Impact](feature-impact#shap-based-feature-impact).
=== "Non-time series / Out-of-time validation"
Target type | Feature Impact method | Prediction Explanations method
-------------------------------|--------------------------------------------|-------------------------------
Regression | Permutation Impact or SHAP Impact | XEMP or [SHAP (opt-in)](additional)
Binary | Permutation Impact or SHAP Impact | XEMP or [SHAP (opt-in)](additional)
Multiclass | Permutation Impact | XEMP
Unsupervised Anomaly Detection | SHAP Impact | XEMP
Unsupervised Clustering | Permutation Impact | XEMP
=== "Time series"
Target type | Feature Impact method | Prediction Explanations method
-------------------------------|--------------------------------------------|-------------------------------
Regression | Permutation Impact | XEMP
Binary | Permutation Impact | XEMP
Multiclass | N/A* | N/A*
Unsupervised Anomaly Detection | SHAP Impact | XEMP**
Unsupervised Clustering | N/A* | N/A*
_* This project type isn't available._
_** For the time series unsupervised anomaly detection visualizations, the [Anomaly Assessment chart](anom-viz#anomaly-assessment) uses SHAP to calculate explanations for anomalous points._
### XEMP {: #xemp }
Consider the following when using XEMP (which is based on permutation-based Feature Importance scores):
* Prediction Explanations are compatible with models trained before the feature was introduced.
* There must be at least 100 rows in the validation set for Prediction Explanations to compute.
* Prediction Explanations work for all variable types (numeric, categorical, text, date, time, image) except geospatial.
* DataRobot uses a maximum of 50 features for Prediction Explanations computation, limiting the computational complexity and improving the response time. Features are selected in order of their Feature Impact ranking.
* The maximum number of Prediction Explanations via the UI is 10 and via the prediction API is 50.
### SHAP {: #shap }
* Multiclass classification Prediction Explanations are not supported for SHAP (but are available for XEMP).
* SHAP-based Prediction Explanations for models trained into Validation and Holdout are in-sample, not stacked.
* For AutoML, SHAP is only supported by linear, tree-based, and Keras deep learning blueprints. Most of the non-blender AutoML blueprints that typically appear at the top of the Leaderboard are supported (see the [compatibility matrix](shap#shap-compat)).
* SHAP is not supported for time series projects.
* SHAP does not work with sliced insights.
* SHAP is not supported for Scoring Code Prediction Explanations.
* SHAP does not support image feature types. As a result, [Image Explanations](xemp-pe#prediction-explanations-for-visual-ai) are not available.
* When a link function is used, SHAP is additive in the margin space (`sum(shap) = link(p)-link(p0)`). The recommendation is:
* When you require additive qualities of SHAP, use blueprints that don’t use a link function (e.g., a tree-based model).
* When log is used as a link function, you could also explain predictions using `exp(shap)`.
* Limits on the number of explanations available:
* Backend/API: You can retrieve SHAP values for all features (using `shapMatrices` API).
* UI: Prediction Explanations preview is limited to five explanations; however, you can [download](shap-pe#compute-and-download) up to 100 via CSV or use the API if you need access to more than 100 explanations. Feature Impact preview is limited to 25 features, but you can export up to 1000 to CSV.
* Unsupervised anomaly detection models:
* Feature Impact is calculated using SHAP.
* Prediction Explanations are calculated using XEMP.
* As with supervised mode, SHAP is not calculated for blenders.
* SHAP is not available for models built on feature lists with greater than 1000 features.
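The link-function note above can be illustrated numerically. This is a hypothetical sketch, not DataRobot output: with a log link, SHAP values are additive in margin space (`sum(shap) = log(p) - log(p0)`), so each `exp(shap_i)` acts as a multiplicative factor on the baseline prediction.

```python
import math

# Hypothetical baseline prediction and per-feature SHAP values in margin space.
p0 = 0.05
shap_margin = [0.8, 0.4, -0.2]

# Back-transform: the prediction is the baseline scaled by exp() of each SHAP value.
p = p0 * math.exp(sum(shap_margin))
factors = [math.exp(s) for s in shap_margin]  # per-feature multiplicative effect

# Additivity holds in margin (log) space, not in probability space.
assert abs(sum(shap_margin) - (math.log(p) - math.log(p0))) < 1e-12
```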
|
predex-overview
|
---
title: SHAP Prediction Explanations
description: Enable SHAP-based Prediction Explanations prior to building tree- and linear-based models to understand which features drive each model decision.
---
# SHAP Prediction Explanations {: #shap-prediction-explanations }
!!! note
This section describes SHAP-based Prediction Explanations. See also the general description of Prediction Explanations for an overview of [SHAP and XEMP methodologies](pred-explain/index).
To retrieve SHAP-based Prediction Explanations, you must enable the [**Include only models with SHAP value support**](additional) advanced option prior to model building.
SHAP Prediction Explanations estimate how much each feature contributes to a given prediction differing from the average. They are intuitive, unbounded (computed for all features), fast, and, due to the open source nature of SHAP, transparent. These benefits not only help you better understand model behavior—and quickly—but also allow you to easily validate whether a model adheres to business rules. See the [SHAP reference](shap) for additional technical detail.
See the associated SHAP [considerations](pred-explain/predex-overview#feature-considerations) for important additional information.
Use SHAP to understand, for each model decision, which features are key. What drives a particular customer's decision to buy—age? gender? buying habits?—and what is the magnitude of each factor's effect on the decision?
## Preview Prediction Explanations {: #preview-prediction-explanations }
SHAP-based **Prediction Explanations**, when previewed, display the top five features for each row. This provides a general "intuition" of model performance. You can then quickly [compute and download](#computing-and-downloading-explanations) explanations for the entire training dataset to perform deeper analytics. See [SHAP calculations](#prediction-explanation-calculations) for more detail.

You can also:
* Upload [external datasets](#upload-a-dataset) and manually compute (and download) explanations.
* Access explanations via the API, for both deployed and Leaderboard models.
## Interpret SHAP Prediction Explanations {: #interpret-shap-prediction-explanations }
Open the **Prediction Explanations** tab to see an interactive preview of the top five features that contribute most to the difference from the average (base) prediction value. In other words, how much does each feature explain the difference? For example:

The elements describe:
| | Element | Value in example |
| ------ | -------- | ----------- |
|  | Base (average) prediction value | 43.11 |
|  | Prediction value for the row | 67.5 |
|  | Contribution, or how much each feature explains the difference between the base and prediction values | Varies from row to row and from feature to feature |
|  | Top 5 features | Varies from row to row |
Subtract the base prediction value from the row prediction value to determine the difference from the average, in this case **24.4**. The contribution then describes how much each listed feature is responsible for pushing the target away from the average (the allocation of 24.4 between the features).
SHAP is _additive_ which means that the sum of all contributions for all features equals the difference between the base and row prediction values. (See additivity details [here](shap#additivity-in-prediction-explanations).)
Some additional notes on interpreting the visualization:
* Contributions can be either positive or negative. Features that push the predictive value to be higher display in red and are positive numbers. Features that reduce the prediction display in blue and are negative numbers.
* The arrows on the plot are proportional to the SHAP values, whether positively or negatively impacting the observed prediction.
* The "Sum of all other features" is the sum of features that are not part of the top five contributors.
See the SHAP reference for information on additivity (including [possible breakages](shap#additivity-in-prediction-explanations)).
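The additivity property can be checked numerically. The base (43.11) and row (67.50) predictions below come from the example above; the per-feature contribution values are hypothetical, invented only to show that contributions sum exactly to the difference between the two.

```python
# Base and row predictions from the example; contributions are illustrative.
base_prediction = 43.11
row_prediction = 67.50

contributions = {
    "feature_a": 14.20,
    "feature_b": 6.90,
    "feature_c": -2.10,
    "feature_d": 3.80,
    "feature_e": 0.50,
    "sum_of_all_other_features": 1.09,
}

# Additivity: contributions account for the full difference from the base.
difference = row_prediction - base_prediction
assert abs(sum(contributions.values()) - difference) < 1e-9
```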
??? tip "Deep dive: SHAP preview"
The SHAP preview is based on validation data, even when the model is trained into the validation partition. When the model has been trained into validation, the option to download SHAP explanations for the full validation data is not available because models trained into validation make in-sample predictions (predictions on rows that were also used to train the model). Because this type of prediction does not accurately represent what the model will do on new, unseen data, DataRobot does not provide it and instead uses a technique called [stacked predictions](data-partitioning#what-are-stacked-predictions). It is not possible to calculate SHAP explanations for Holdout data.
To understand this, consider that during cross-validation, DataRobot generates a series of models, equal to the number of CV [partitions](data-partitioning) (or "folds," five by default). If you download training predictions, you receive predictions on all rows. These predictions are “stacked”—each row is predicted by whichever of the sub-models did not use it for training.
By contrast, all SHAP explanations are based on the “primary” model—the model trained on CV fold 1 (or a portion of it) and tested on the Validation fold. Applying that model to training rows outside of the Validation fold would result in predictions and explanations being in-sample. For the same reason, if the model is trained into validation, predictions on validation become in-sample instead of the usual out-of-sample.
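The "stacked" idea above can be sketched in a few lines. This is a conceptual illustration, not DataRobot's CV machinery: rows are split into folds, and each row is scored only by the fold model that did not see it during training, so every stacked prediction is out-of-sample.

```python
# Hypothetical rows and a round-robin 5-fold partitioning.
rows = list(range(10))
n_folds = 5
folds = [rows[i::n_folds] for i in range(n_folds)]

def train_model(training_rows):
    # Placeholder "model": it only remembers which rows it was trained on,
    # and reports whether a scored row is out-of-sample for it.
    trained_on = set(training_rows)
    return lambda row: row not in trained_on

stacked = {}
for holdout in folds:
    train = [r for r in rows if r not in holdout]
    model = train_model(train)
    for row in holdout:              # score only the out-of-fold rows
        stacked[row] = model(row)    # True == prediction is out-of-sample

# Every stacked prediction is out-of-sample for the model that produced it.
assert all(stacked.values())
```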
### View points in the distribution {: #view-points-in-the-distribution }
Use the prediction distribution component to click through a range of prediction values and understand how the top and bottom values are explained. In the chart, the Y-axis shows the prediction value, while the X-axis indicates the frequency.

Notice that if you look at a point near the bottom of the distribution, the contribution values show more blue than red (more negative than positive contributions). This is because the majority of key features are pushing the prediction value lower.
## Computing and downloading explanations {: #computing-and-downloading-explanations }
While DataRobot automatically computes the explanations for selected records, you can compute explanations for all records by clicking the calculator () icon. DataRobot computes the remaining explanations and when ready, activates a download button. Click to save the list of explanations as a CSV file. Note that the CSV will only contain the top 100 explanations for each record. To see all explanations, use the API.

## Upload a dataset {: #upload-a-dataset }
To compute explanations for additional data using the same model, click **Upload new dataset**:

DataRobot opens the [**Make Predictions**](predict) tab where you can upload a new, external dataset. When complete, return to **Prediction Explanations**, where the new dataset is listed in the download area.

Compute () and then download explanations in the same way as with the training dataset. DataRobot runs computations for the entire external set.
## Prediction Explanation calculations {: #prediction-explanation-calculations }
DataRobot automatically computes SHAP Prediction Explanations. In the UI, SHAP initially returns the five most important features in each previewed row. Additional features are bundled and reported in `Sum of all other features`. (You can [compute for all features](#computing-and-downloading-explanations) as described above.) In the API, explanations for a given row are limited to the top 100 most important features in that row. If there are more features, they get bundled together in the `shapRemainingTotal` value. See the public API documentation for more detail.
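The top-N bundling behavior described above can be sketched as follows. This is a hypothetical illustration of the concept, not the API's implementation; the function and dictionary structure are invented, and only the `shapRemainingTotal` field name comes from the text.

```python
# Keep the N largest-magnitude explanations; report the rest as one total.
def bundle_explanations(shap_values, top_n=100):
    ranked = sorted(shap_values.items(), key=lambda kv: abs(kv[1]), reverse=True)
    top = dict(ranked[:top_n])
    remaining = sum(value for _, value in ranked[top_n:])
    return {"explanations": top, "shapRemainingTotal": remaining}

result = bundle_explanations({"a": 2.0, "b": -1.5, "c": 0.3, "d": 0.1}, top_n=2)
# Top two explanations are kept; the other contributions are summed.
```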
|
shap-pe
|
---
title: Profit curve
description: The ROC Curve tab in DataRobot lets you generate profit curves that help you estimate the business impact of a selected model.
---
# Profit curve {: #profit-curve }
Like the other visualization tools on the **[ROC Curve](roc-curve-tab-use)** tab, profit curves are available for binary classification problems.
Profit curves help you estimate the business impact of a selected model. For many classification problems, there is asymmetry between the benefit of correct predictions and/or the penalty (or cost) of incorrect predictions. The average profit chart helps you assess a model based on your supplied costs or benefits so that you can see how those profits change with different inputs.
## Generate a profit curve {: #generate-a-profit-curve }
To generate a profit curve, first create a payoff matrix using:
* A confusion matrix that reports how actual versus predicted values were classified.
* Payoff values—a set of values that represent business impact (independent of currency). For example, "if I identify who will default on a loan, what will the cost or benefit be for each observation for both correct and incorrect predictions?"
??? tip "Deep dive: Profit curve"
The metrics that DataRobot reports are good for understanding both the absolute and relative performance of models in a machine learning context, and are generalizable for many different contexts. For example, it is not immediately apparent how much money you may gain or lose from deploying a model by looking only at the ROC curve (see a further comparison [below](#relationship-of-profit-curves-to-roc-curves)). With the profit curve, even using average payoff values, you can make a quick evaluation of a model's direct application in your business setting. Specifically, the curve can help you understand where to set your classification threshold and how much you stand to gain based on average costs/benefits associated with correct classification or misclassification.
There are two main interactive components needed to set up a profit curve:
* A payoff matrix configuration that determines the profit calculation used to update the profit curve
* The profit curve visualization
Together, the matrix and payoff values create the average profit chart. Creating matrices with different matrix values allows you to compare different cost scenarios, for example, an optimistic and pessimistic cost.
To generate a profit curve:
1. Select a model on the Leaderboard and navigate to **Evaluate > ROC Curve**.
2. Select a [data source](threshold#select-data-for-visualizations) and set the [display threshold](threshold#set-the-display-threshold).
3. In the Matrix pane on the right, create a payoff matrix by clicking **+ Add payoff**.

4. Enter the name of the payoff matrix.

Before you create the payoff matrix, the displayed payoff values are `1` for correct classifications and `-1` for incorrect classifications—this is not really a matrix, but instead a "placeholder" set of values to provide an initial curve visualization.
5. Enter payoff values for each category (TN, FP, FN, and TP).

The payoff values determine the profit calculation that generates the profit curve.
6. Click **Save**.
!!! tip
The new payoff matrix becomes available to all models in the project. You can edit or delete the matrix as needed; these changes are also reflected across the project. You can create up to six matrices.
7. Set the **Chart** pane to **Average Profit** and for **Display Threshold**, select **Maximize profit**.

This is the maximum profit that can be achieved using the selected payoff matrix.
8. Click the circle on the profit curve to see the average profit at that threshold. Click other areas along the curve to see how the average profit changes. Take a look at the payoff matrix to see how the TN, FP, FN, and TP counts change based on the display threshold.

The total profit (or loss) is [calculated](#matrix-formulas-for-profit-curves) based on the matrix settings and reflected in the curve. In other words, the total profit/loss is the sum of the correct and incorrect classifications multiplied by the benefit or loss from each.
## View the average profit metric {: #view-the-average-profit-metric }
To view the average profit metric:
1. Click **Select metrics** and choose **Average Profit (for Payoff Matrix)**.
2. View the average profit in the **Metrics** pane:

## Profit curve explained {: #profit-curve-explained }
The average profit curve plots the average profit against the classification threshold. The average profit curve visualization is based on two inputs:
* The [confusion matrix](confusion-matrix), which categorizes correct and incorrect predictions, and the [display threshold](threshold#set-the-display-threshold).
* The [payoff matrix](#compare-models-based-on-a-payoff-matrix), which assigns costs and benefits to the different types of correct and incorrect predictions (true positives/true negatives and false positives/false negatives).
Consider the following average profit curve:

The following table describes elements of the display:
| | Element | Description |
|---|----------|-------------|
|  | Threshold (Probability) | The focus of the display, which plots profit against the classification point of positive versus negative. This is the point used as the basis for counts in the payoff matrix. You can set the [prediction threshold](threshold#prediction-threshold) to this display value. |
|  | Profit (Average) | Determined at each threshold from the sum of the product of each pair of confusion matrix and [payoff matrix](#create-a-payoff-matrix) elements (with formulas described [below](#matrix-formulas-for-profit-curves)). DataRobot generates the profit/loss based off the "right and wrong" numbers combined with configured payoff values. |
|  | Display threshold | Circle that denotes the threshold on the profit curve. You can set the display threshold to the maximum profit by selecting **Maximize profit** in the **Display Threshold** pulldown above the Prediction Distribution graph.|
|  | Profit/loss line | A line that always orients to 0 to help visualize the break even point. It indicates where values are positive versus negative based on the selected data partition. |
## Compare models based on a payoff matrix {: #compare-models-based-on-a-payoff-matrix }
Use the [**Model Comparison**](model-compare) tab to compare how two different models handle the data. Results are based on the payoff matrix, so you must have created at least one matrix before using the comparison. Some information to evaluate in the comparison include:
* How different is the shape between the two models?
* Is there a large difference in the max profit?
* Where do the thresholds occur?

The comparison uses the same controls (data selection, graph scale, and matrix) as the individual model visualizations.
## Matrix formulas for profit curves {: #matrix-formulas-for-profit-curves }
The profit curve plots the profit against the classification threshold. Profit is determined at each threshold from the sum of the product of each pair of confusion matrix and payoff matrix elements. Using this matrix as an example, with a total profit/loss of 186:

Total profit/loss:
* True Negative (TN) = 133
* False Negative (FN) = 16
* False Positive (FP) = 8
* True Positive (TP) = 3
And corresponding payoff (P) matrix:
* P<sub>TN</sub> = 2
* P<sub>FN</sub> = –5
* P<sub>FP</sub> = –3
* P<sub>TP</sub> = 8
the net profit is the sum of the products of corresponding elements of the two matrices, calculated as follows:
<code>Profit = (TN * P<sub>TN</sub>) + (FP * P<sub>FP</sub>) + (FN * P<sub>FN</sub>) + (TP * P<sub>TP</sub>)</code>
In this example:
<code>(133 * 2) + (8 * (-3)) + (16 * (-5)) + (3 * 8)</code>
or
<code>266 – 24 – 80 + 24 = 186</code>
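The worked example above translates directly into code: profit is the element-wise product of the confusion matrix counts and the payoff values, summed.

```python
# Confusion matrix counts and payoff values from the example above.
confusion = {"TN": 133, "FN": 16, "FP": 8, "TP": 3}
payoff    = {"TN": 2,   "FN": -5, "FP": -3, "TP": 8}

# Profit = (TN * P_TN) + (FP * P_FP) + (FN * P_FN) + (TP * P_TP)
profit = sum(confusion[k] * payoff[k] for k in confusion)
print(profit)  # 186
```

Sweeping the display threshold changes the confusion matrix counts, which is what traces out the profit curve.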
## Relationship of profit curves to ROC curves {: #relationship-of-profit-curves-to-roc-curves }
A profit curve is most useful for determining an optimal classification probability threshold, supplemental to the metrics of a [**ROC curve**](roc-curve). That is, while the ROC curve can help you find the “best” threshold based on the various statistics or your domain expertise, a profit curve helps you pick a threshold based on the costs of true and false positive and negative predictions. It provides a sense of model sensitivity in the context of your business problem—a gentle sloping curve suggests more flexibility, while a sharp pitch tells you what threshold area to avoid. The shape depends on the selected model and the payoff values assigned.
By adding [payoff values in the profit matrix](#create-a-payoff-matrix), you create a multiplicative effect that can give you total profit/loss estimates, with varying inputs to allow comparison. The profit curve uses the same data as the ROC curve, meaning that when the threshold is the same, the confusion matrix counts in each visualization are the same. The threshold set for prediction output is shared between the profit curve and ROC Curve.
## Profit Curve considerations {: #profit-curve-considerations }
* Because you cannot change the Prediction Threshold value after a model has been downloaded or deployed, there is a slight delay in displaying the threshold while DataRobot checks the model status.
* Using the profit curve is not recommended for baseline (majority class classifier) models.
* The payoff matrix shows weighted counts (and those weighted counts are used to calculate profit).
|
profit-curve
|
---
title: Cumulative Charts
description: Cumulative Charts available in the DataRobot ROC Curve tab help you to assess model performance by exploring the model's cumulative characteristics.
---
# Cumulative charts {: #cumulative-charts }
The Chart pane (on the **[ROC Curve](roc-curve-tab-use)** tab) allows you to generate cumulative charts. These charts help you to assess model performance by exploring the model's cumulative characteristics—how successful would you be with the model compared to without it? Cumulative charts allow you to identify the advantages of using a predictive model.
??? tip "Common use case for cumulative charts"
Suppose you want to run a marketing campaign targeted at existing customers. Your customer database has 1,000 names. Because you know that the campaigns historically have only had a 20% response rate, you do not want to pay for printing and mailing to the entire base; instead you want to target only those most likely to respond positively.
* Use a predictive model to determine the probability of a positive or negative reaction from each customer.
* Sort by probability of a positive reaction.
* Target those customers with the highest probability.
## Use cumulative charts {: #use-cumulative-charts }
Use the **Cumulative Gain** and **Cumulative Lift** charts to determine how successful your model is. The X-axis of the charts shows the threshold value cutoffs of the model's predictions, and the Y-axis, either gain or lift, is calculated based on that percentage. The model shows the gain or lift (improvement over using no model) for each percentage cutoff level.
To view a cumulative chart:
1. Select a model on the Leaderboard and navigate to **Evaluate > ROC Curve**.
2. Select a type of cumulative chart from the **Chart** dropdown. You can select a Cumulative Gain or Cumulative Lift chart for either the positive or negative class:

3. View the chart, in this case, a positive class Cumulative Gain curve:

| | Element | Description |
|---|---|---|
|  | Display threshold | Corresponds to the [display threshold](threshold#set-the-display-threshold) that governs the **ROC Curve** tab visualization tools. You can select a new display threshold by hovering over the curve and moving the circle to a new point on the curve, then clicking to select a new threshold. |
|  | Best curve | Represents the theoretically best model. |
|  | Actual curve | Represents the actual lift or gain curve. This example shows a positive Cumulative Lift curve. |
|  | Worst curve | Represents a random model. |
## Cumulative Gain and Lift charts explained {: #cumulative-gain-and-lift-charts-explained }
The following sections provide an explanation of cumulative gains and lift. For more detailed examples, see [Cumulative Gains and Lift Charts](http://www2.cs.uregina.ca/~dbd/cs831/notes/lift_chart/lift_chart.html){ target=_blank }.
### Cumulative Gain chart {: #cumulative-gain-chart }
*Cumulative gain* shows how many instances of a particular class you will identify when you look at different cutoff levels of your most confident predictions. For example:
Let's say there are 100 NCAA basketball teams and only 50 of them will be chosen for March Madness. You want to predict which 50 teams will make it. If you examined only your top 10 most confident guesses (predictions) and they all turned out to be perfect, you would have found 20% of the teams making the tournament.
* You got 10 correct picks out of the 50 teams making it, `10 / 50 = 20% Gain`
* Your gain at the top 10% of your most confident guesses is 20%
If you chose a different cutoff level and looked at your top 20 most confident guesses (and you were still perfect), your gain would be 40%.
* 20 correct picks out of the 50 teams making it, `20 / 50 = 40% Gain`
By extension, if you can predict all the teams correctly, your gain is 100% when cutting off at the top 50% of your guesses.
* 50 correct picks out of 50 teams, `50 / 50 = 100% Gain`
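The basketball example can be sketched as a short Python function. This is a hedged illustration of the gain calculation, not DataRobot's code; the perfect-predictor list is purely for demonstration.

```python
# Cumulative gain: with outcomes ordered by descending prediction confidence,
# gain at a cutoff is the share of all positives found among the top guesses.
def cumulative_gain(sorted_actuals, cutoff_fraction):
    k = int(len(sorted_actuals) * cutoff_fraction)
    total_positives = sum(sorted_actuals)
    return sum(sorted_actuals[:k]) / total_positives

# 100 teams, 50 make the tournament; a perfect predictor ranks all 50 first.
perfect = [1] * 50 + [0] * 50
print(cumulative_gain(perfect, 0.10))  # 0.2 -> 10 correct picks of 50 = 20% gain
print(cumulative_gain(perfect, 0.50))  # 1.0 -> all 50 found = 100% gain
```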
### Random baseline {: #random-baseline }
On the other hand, if you had no guessing skill and were basically choosing randomly, your top 10 most confident guesses might only get 5 teams correct (based on the same accuracy as the underlying distribution of the groups).
* 5 correct picks out of 50 teams, `5 / 50 = 10% Gain`
The framework “assumes” that a random "baseline" will get the same accuracy as the underlying distribution of the groups. As a result, the random baseline prediction level will have a gain equal to whatever the cutoff level is (10% correct picks, 10% gain).
### Cumulative Lift chart {: #cumulative-lift-chart }
*Cumulative lift* compares how much gain you've achieved relative to the random baseline. In the basketball example, your top 10 most confident guesses result in a lift of 2.0.
* 10 correct picks out of the 50 teams making it, `10 / 50 = 20% Gain`
* 20% gain for your predictions (your model) divided by 10% gain for the baseline (random model), `20 / 10 = 2.0 Lift`
In other words, you'll get two times more correct guesses using your model than the random model.
If you have a situation where the two classes are evenly balanced, lift will max out at 2.0.
* Top 10% most confident predictions, `20% Gain → 20 / 10 = 2.0 Lift`
* Top 50% most confident predictions, `100% Gain → 100 / 50 = 2.0 Lift`
If you're predicting at the baseline random level, lift will be 1; if you're predicting worse than random, lift will be less than 1.
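Since lift is simply gain divided by the random baseline's gain at the same cutoff, the arithmetic above reduces to one line. A minimal sketch for illustration:

```python
# Cumulative lift: gain divided by the cutoff fraction (the gain a random
# baseline achieves at that cutoff).
def cumulative_lift(gain, cutoff_fraction):
    return gain / cutoff_fraction

print(cumulative_lift(0.20, 0.10))  # 2.0 -- twice the hits of the random model
print(cumulative_lift(1.00, 0.50))  # 2.0 -- the cap for balanced classes
print(cumulative_lift(0.10, 0.10))  # 1.0 -- random-level predictions
```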
## Interpret the insight {: #interpret-the-insight }
Cumulative Lift and Cumulative Gain both consist of a lift curve and a baseline ("[random](#random-baseline)"). The greater the area between the two, the better the model. The baseline is always a diagonal line, representing uniformly distributed overall response: if we contact x% of customers then we will receive x% of total positive responses. The charts display based on the selected class; by choosing the class, you choose whether to display predictions with scores higher (positive) or lower (negative) than the [classification threshold](threshold).
The **Theoretical Best** curve is determined by the class distribution. For example, for balanced classes, [TPR](metrics) @ 10% would be 20% (`10 / 50`). This is because perfectly balanced classes mean there are twice as many rows overall as rows of the class you're interested in (there are only 2 classes and both have exactly the same number of rows). Therefore, a random predictor correctly returns half of any sample, while an ideal predictor returns all of it (`1 / 0.5 = 2`).
If you were predicting a minority class, for example 40% of the labels, TPR @ 10% would be 25% (`10 / 40`). Generally, the smaller the minority class, the steeper the Theoretical Best curve (it takes fewer perfect predictions to get total recall).
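The Theoretical Best curve described above can be expressed as a one-line formula. This is a sketch for intuition, not DataRobot's exact computation:

```python
# Theoretical best gain: a perfect model finds positives at the maximum
# possible rate, so gain at cutoff x is min(x / positive_fraction, 1).
def theoretical_best_gain(cutoff_fraction, positive_fraction):
    return min(cutoff_fraction / positive_fraction, 1.0)

print(theoretical_best_gain(0.10, 0.50))  # 0.2  -- balanced: 10% -> 20% gain
print(theoretical_best_gain(0.10, 0.40))  # 0.25 -- 40% minority: 10% -> 25%
print(theoretical_best_gain(0.50, 0.40))  # 1.0  -- all positives already found
```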
For both charts, the X-axis displays, at each point, the percentage of the data sample that is predicted to be categorized as the selected class (all the possible thresholds you can act on).
=== "Cumulative Gain explained"
Cumulative Gain represents the [sensitivity and specificity](metrics) values for different percentages of predicted data. That is, the ratio of the cumulative number of targets (events) up to a certain threshold to the total number of targets (events) in the dataset. As a result, the model can help with various use cases, for example, targeting customers for a marketing campaign. If you can sort customers according to the probability of a positive reaction, you can then run the campaign only for the percentage of customers with the highest probability of response (instead of random targeting). In other words, if the model indicates that 80% of targets are covered in the top 20% of data, you can just send mail to 20% of total customers.
=== "Cumulative Lift explained"
Cumulative Lift, derived from the Cumulative Gain chart, illustrates the effectiveness of a predictive model. It is calculated as the ratio between the results obtained with and without the model. In other words, lift measures how much better you can expect your predictions to be when using the model.
    Technically speaking, Cumulative Lift is the ratio of gain percentage to the random expectation percentage, measured at various threshold value levels. For example, a Cumulative Lift of 4.0 at the top 20% threshold means that when selecting 20% of the records based on the model, you can expect 4.0 times the total number of targets (events) you would have found by randomly selecting 20% of data without the model. In other words, it shows how many times the model is better than a random choice of cases. To calculate exact values, divide the gain of the model by the baseline.
### Interpret Cumulative Gain {: #interpret-cumulative-gain }
For the Cumulative Gain chart, the Y-axis displays the percentage of the selected class that the model correctly classified with the current threshold ([sensitivity](metrics) for positive selected class, specificity for negative).

In the chart above, you can see:
* If you act on 60% of model predictions, you capture just below 80% of true positives.
* According to the theoretical best line, there are 40% of predictions falling into the positive class in your data. In other words, you would only have to act on 40% of data to catch all occurrences of the positive class (*if* you have an ideal predictor, which is extremely rare).
### Interpret Cumulative Lift {: #interpret-cumulative-lift }
In the Cumulative Lift chart, the Y-axis shows the coefficient of improvement over a random model. For example, if you pick 10% of rows randomly, you expect to catch 10% of the class you're interested in. If the model's top 10% of predictions catch 28% of the selected class, the lift is indicated as 28/10 or 2.8. Because the values for Cumulative Lift are divided by the baseline, the random baseline becomes a horizontal line at a value of 1.0 (it is divided by itself).

The chart above uses the same data as that used for Cumulative Gain—the lines are the same, the values are just adjusted. So, using roughly 60% of model predictions will result in roughly 1.3 times more positive class responses than using random selection.
|
cumulative-charts
|
---
title: Select data and display threshold
description: Thresholds in the ROC Curve tab in DataRobot set the class boundary for a predicted value. The display threshold updates the visualizations and the prediction threshold changes the threshold for all predictions made using the model.
---
# Select data and display threshold {: #select-data-and-display-threshold }
To use [ROC Curve tab](roc-curve-tab-use) visualizations, you [select a data source](#select-data-for-the-visualizations) and a [display threshold](#set-the-display-threshold). These values drive the ROC Curve visualizations:
* [Confusion matrix](confusion-matrix)
* [Prediction Distribution graph](pred-dist-graph)
* [ROC curve](roc-curve)
* [Profit curve](profit-curve)
* [Cumulative charts](cumulative-charts)
* [Custom charts](custom-charts)
* [Metrics](metrics)
## Select data for visualizations {: #select-data-for-visualizations }
To select the data source reflected in ROC Curve visualizations:
1. Select a model on the Leaderboard and navigate to **Evaluate > ROC Curve**.
2. Click the **Data Selection** dropdown menu above the Prediction Distribution graph and select a data source to view in the visualizations.
!!! note
The **Data Selection** list includes only the [partitions that have been enabled](partitioning#partitioning-methods) and run. The list includes all test datasets that have been added to the project; test dataset selections are inactive until they are run. [Time-aware modeling](ts-date-time#lift-roc) allows backtest-based selections.
| Selection | Description |
|---|---|
| Holdout | Visualizations use the [holdout](glossary/index#holdout) partition. **Holdout** does not appear in the selection list if [holdout has not been unlocked](unlocking-holdout) for the model and run. |
| Cross Validation | Visualizations use the [cross-validation](glossary/index#cross-validation) partition. DataRobot "stacks" the [cross-validation folds](data-partitioning#k-fold-cross-validation-cv) (5 by default) and computes the visualizations on the combined data. |
| Validation | Visualizations use the [validation](glossary/index#validation) partition. |
| External test data | Visualizations use the data for an external test you have run. If you've added a test dataset but have not yet run it, that test dataset selection is inactive. |
| Add external test data | If you select **Add external data**, the [**Predict > Make Predictions**](predict#make-predictions-on-an-external-dataset) tab displays. Use the tab to add test data and run an external test. Then return to the ROC Curve tab, click **Data Selection**, and select the test data you ran. |
3. View the ROC Curve tab visualizations. Update the [display threshold](#set-the-display-threshold) as necessary to meet your modeling goals.
## Set the display threshold {: #set-the-display-threshold }
The display threshold is the basis for several visualizations on the **ROC Curve** tab. The threshold you set updates the Prediction Distribution graph, as well as the Chart, Matrix, and Metrics panes described in the following sections. Experiment with the threshold to meet your modeling goals.
??? tip "Deep dive: Threshold"
A threshold for a classification model is the point that sets the class boundary for a predicted value. The model classifies an observation below the threshold as "false," and an observation above the threshold as "true." In other words, DataRobot automatically assigns the positive class label to any prediction exceeding the threshold.
There are two thresholds you can modify:
* The [display threshold](#set-the-display-threshold): Updates the visualizations on the **[ROC Curve](roc-curve-tab/index)** tab.
* The [prediction threshold](#prediction-threshold): Changes the threshold (and thus, the label) for all predictions made using this model.
You have a choice of two bases for the display threshold—a prediction value (0-1) or a prediction percentage. The prediction value represents the numeric value used to determine the class boundary. The percentage option allows you to set the top or bottom *n*% of records that are categorized as one class or another. You may want to do this, for example, to filter top predictions and compute recall using that boundary. Then, you can use the value as a comparison metric or simply to inspect the top percentage of records.
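Converting a "top *n*%" cutoff into a concrete probability threshold amounts to ranking predictions and reading off the score at that rank. The sketch below illustrates the idea with made-up scores; it is not DataRobot's implementation.

```python
# Find the score that separates the top n% of predictions from the rest.
def top_percent_to_threshold(scores, top_percent):
    ranked = sorted(scores, reverse=True)
    k = max(1, round(len(ranked) * top_percent / 100))
    return ranked[k - 1]  # lowest score still inside the top n%

scores = [0.05, 0.15, 0.35, 0.55, 0.75, 0.80, 0.85, 0.90, 0.95, 0.99]
print(top_percent_to_threshold(scores, 20))  # 0.95 -- top 2 of 10 predictions
print(top_percent_to_threshold(scores, 10))  # 0.99 -- top 1 of 10
```

This is also why the displayed threshold snaps to the nearest data point: the boundary must fall on an actual prediction score.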
To set the display threshold:
1. On the ROC Curve tab, click the **Display Threshold** dropdown menu.

| | Element | Description |
|---|---|---|
|  | Display Threshold | Displays the threshold value you set. Click to select the threshold settings. Note that you can also update the display threshold by clicking in the [Prediction Distribution](pred-dist-graph) graph. The Display Threshold defaults to maximize F1. <br><br>If you switch to a different model, the Display Threshold updates to maximize F1 for the new model. This allows you to easily compare classification results between models. If you select a different data source (by selecting **Holdout**, **Cross Validation**, or **Validation** in the [**Data Selection** list](roc-curve-tab/index#roc-curve-tab-components)), the Display Threshold updates to maximize F1 for the new data. |
|  | Threshold | Drag the slider or enter a display threshold value; the visualization tools update accordingly. |
|  | Maximize option | Select a threshold that maximizes [metrics](metrics) such as the F1 score, MCC (Matthews Correlation Coefficient), or profit. To maximize for profit, first set a payoff by clicking **+Add payoff** on the **Matrix** pane. <br><br>{% include 'includes/max-metrics-roc.md' %}|
|  | Use as Prediction Threshold | Click to set the **Prediction Threshold** to the current value of the **Display Threshold**. By doing so, at prediction time, the threshold value serves as the boundary between positive and negative classifications—observations above the threshold receive the positive class's label and those below the threshold receive the negative class's label. The **Prediction Threshold** is used when you generate [profit curves](profit-curve) and when you [make predictions](predictions/index). |
|  | View Prediction Threshold | Click to reset the visualization components (graphs and charts) to the model's prediction threshold. |
|  | Threshold Type | Select **Top % of highest predictions** or a **Prediction value (0 - 1)**. See [Threshold Type](#threshold-type) for details. |
In this example, the **Display Threshold** is set to 0.2396, which maximizes the F1 score.
2. View the updated visualizations. Valid input for the Display Threshold changes the following page elements:
* Updated values are displayed in the [Metrics](metrics) pane and the [confusion matrix](confusion-matrix) (in the Matrix pane).
* The dividing line on the [Prediction Distribution](pred-dist-graph) graph moves to the selected value and is marked with a circle.
* On the current curve displayed in the Charts pane—for example, a [ROC curve](roc-curve) or a [profit curve](profit-curve)—the new point is selected (indicated by a circle). Some curves also have line intercepts corresponding to the point.
!!! note
The display for the visualizations represents the closest data point to the specified threshold (i.e., if you enter 20%, the display might actually show something like 20.7%). The box reports the exact value after you press Enter.
### Methods of setting the display threshold {: #methods-of-setting-the-display-threshold }
Click a tab to view alternative methods of setting the display threshold:
=== "Specify the threshold"
1. On the **ROC Curve** tab, click the **Display Threshold** dropdown menu.
2. Use the slider or enter a value to set the display threshold.

If the **Threshold Type** is **Top %**, enter a value between 0 and 100 (which will update to the exact point after entry). If the **Threshold Type** is **Prediction value**, enter a number between 0.0 and 1.0. If the input is not valid, a warning appears to the right.
3. Click outside of the dropdown to view the effects of the display threshold on the visualization tools.
=== "Set to maximized metric"
1. Select a [metric](metrics) maximum to use for the display threshold. Choose from F1, MCC, or profit. The metrics' maximum values display:

!!! note
You must set the **Matrix** pane to a Payoff Matrix to be able to maximize profit. Otherwise, the **Maximize profit** option is grayed out.
2. Click outside of the dropdown to view the effects of the display threshold on the visualization tools.
=== "Prediction Distribution graph"
1. Hover over the Prediction Distribution graph until a "ghost" line appears with the corresponding value above it.

2. Click to automatically update the display threshold to the new selected value.
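The **Maximize F1** behavior described above can be sketched as a search over candidate thresholds. This is an illustrative Python sketch with made-up scores, not DataRobot's implementation:

```python
# F1 at one threshold, computed from the resulting confusion matrix counts.
def f1_at(scores, actuals, threshold):
    tp = sum(1 for s, y in zip(scores, actuals) if s >= threshold and y == 1)
    fp = sum(1 for s, y in zip(scores, actuals) if s >= threshold and y == 0)
    fn = sum(1 for s, y in zip(scores, actuals) if s < threshold and y == 1)
    return 2 * tp / (2 * tp + fp + fn) if tp else 0.0

scores  = [0.2, 0.4, 0.5, 0.7, 0.9]  # illustrative predictions
actuals = [0,   1,   0,   1,   1]

# Candidate thresholds are the distinct predicted scores; keep the best one.
best = max(set(scores), key=lambda t: f1_at(scores, actuals, t))
print(best)  # 0.4
```

Maximizing MCC or profit works the same way, just with a different scoring function in the `key`.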
## Set the prediction threshold {: #set-the-prediction-threshold }
Prediction requests for binary classification models return both a probability of the positive class and a label. Although DataRobot automatically calculates a threshold (the [display threshold](#set-the-display-threshold)), when applying the label at prediction time, the threshold value defaults to `0.5`. In the resulting predictions, records with values above the threshold receive the positive class's label (in addition to the probability). If the default threshold would force you to post-process predictions to apply the correct label, you can avoid that step by changing the prediction threshold.
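The labeling step is straightforward: scores above the prediction threshold get the positive class label. A minimal sketch, with illustrative class names (not the API's actual response format):

```python
# Turn probabilities into class labels using a prediction threshold.
def apply_threshold(probabilities, threshold, positive="True", negative="False"):
    return [positive if p > threshold else negative for p in probabilities]

probs = [0.12, 0.47, 0.51, 0.86]
print(apply_threshold(probs, 0.5))   # ['False', 'False', 'True', 'True']
print(apply_threshold(probs, 0.45))  # ['False', 'True', 'True', 'True']
```

Lowering the threshold relabels borderline records as positive without changing their probabilities, which is exactly what moving the Prediction Threshold does at prediction time.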
To set the prediction threshold:
1. On the **ROC Curve** tab, click the **Display Threshold** dropdown menu.
2. Update the [display threshold](#set-the-display-threshold) if necessary.
3. Select **Use as Prediction Threshold**.

Once deployed, all predictions made with this model that fall above the new threshold will return the positive class label.
The **Prediction Threshold** value set here is also saved to the following tabs:
* [**Make Predictions**](predict)
* [**Deploy**](deploy-model)
Changing the value in any of these tabs writes the new value back to all the tabs. Once a model is deployed, the threshold cannot be changed within that deployment.
To return the setting to the default threshold value of `0.5`, click **View Prediction Threshold**.
|
threshold
|
---
title: Custom charts
description: The ROC Curve tab lets you create custom charts that help you explore classification, performance, and statistics related to a selected machine learning model.
---
# Custom charts {: #custom-charts }
The Chart pane in the **[ROC Curve](roc-curve-tab-use)** tab allows you to create your own charts to explore classification, performance, and statistics related to a selected model.
## Create a custom chart {: #create-a-custom-chart }
Create a custom chart by selecting values for the X- and Y-axes:
1. In the **Chart** pane, select **Custom Chart**.

2. In the **X-Axis** dropdown list, select the value to display on the X-Axis. Do the same for the **Y-Axis** and click **Apply**.

The custom chart displays in the Chart pane.

Hover over the circle on the graph to see the values at the display threshold.
## Data available for custom charts {: #data-available-for-custom-charts }
Click below to view the data available for custom charts.
=== "X-axis values"
* False Positive Rate (Fallout)
* True Positive Rate (Sensitivity)
* True Negative Rate (Specificity)
* Fraction Predicted as Positive
* Fraction Predicted as Negative
* Threshold (Probability)
=== "Y-axis values"
* False Positive Rate (Fallout)
* True Positive Rate (Sensitivity)
* True Negative Rate (Specificity)
* Cumulative Lift (Positive)
* Cumulative Lift (Negative)
* Fraction Predicted as Positive
* Fraction Predicted as Negative
* Profit (Overall)
* Profit (Average)
* Threshold (Probability)
* F1 Score
* Negative Predictive Value
* Positive Predictive Value
* Accuracy
* Matthews Correlation Coefficient
|
custom-charts
|
---
title: Prediction Distribution graph
description: The Prediction Distribution graph on the ROC Curve tab helps you evaluate classification models by showing the distribution of actual values in relation to the prediction threshold.
---
# Prediction Distribution graph {: #prediction-distribution-graph }
The Prediction Distribution graph (on the **[ROC Curve](roc-curve-tab-use)** tab) illustrates the distribution of actual values in relation to the [display threshold](threshold#set-the-display-threshold) (a dividing line for interpreting results).
To use the Prediction Distribution graph:
1. Select a model on the Leaderboard and navigate to **Evaluate > ROC Curve**.
2. Select a [data source](threshold#select-data-for-visualizations) and set the [display threshold](threshold#set-the-display-threshold). The Prediction Distribution graph updates, showing the display threshold line.

Every prediction to the left of the dividing line is classified as "false" and every prediction to the right of the dividing line is classified as "true."
The Prediction Distribution graph visually expresses model performance for the selected data source. Based on [Classification use case 2](roc-curve-tab-use#classification-use-case-2), this Prediction Distribution graph shows the predicted probabilities for the two groups of patients (readmitted and not readmitted), illustrating how well your model discriminates between them. The colors correspond to the rows of the confusion matrix—red represents patients not readmitted, blue represents readmitted patients. You can see that both red and blue fall on either side of the [display threshold](threshold#set-the-display-threshold).
3. Interpret the graph using this table:
| Color on graph | Location | State |
|---|---|---|
| red | left of the threshold | true negative (TN) |
| blue | left of the threshold | false negative (FN) |
| red | right of the threshold | false positive (FP) |
| blue | right of the threshold | true positive (TP) |
Note that the gray represents the overlap of red and blue.
With a classification problem, each prediction corresponds to a single observation (readmitted or not, in this example). The Prediction Distribution graph shows the overall distribution of the predictions for all observations in the selected data source.
4. Select one of the following from the **Y-Axis** dropdown. The **Y-Axis** distribution selector allows you to choose between showing the Prediction Distribution graph as a density or frequency curve:
=== "Density"
The chart displays an equal area underneath both the positive and negative curves.

=== "Frequency"
The area underneath each curve varies and is determined by the number of observations in each class.

The distribution curves are based on the data source and/or distribution selection. Alternating between **Frequency** and **Density** changes the curves but does not change the threshold or any values in the associated page elements.
## Experiment with the Prediction Distribution graph {: #experiment-with-the-prediction-distribution-graph }
Try the following changes and observe the results.
1. Pass your cursor over the Prediction Distribution graph. The threshold value displays in white text as you move your cursor.

For curves displayed in the **Chart** pane (a ROC curve shown here), DataRobot displays a circle that dynamically moves to correspond with the threshold value.
2. Click on the Prediction Distribution graph to select a new threshold value.

The new value appears in the **Display Threshold** field. The circle and intercept lines on the Prediction Distribution graph update to the new threshold value. The Metrics pane, the Chart pane (set to **ROC Curve** here), and the Matrix pane (set to **Confusion matrix** here) also update to reflect the new threshold.
Alternatively, you can [change the threshold setting](threshold#set-the-display-threshold) by typing a new value in the threshold field.
3. Click the **Y-Axis** dropdown to switch the prediction's distribution between displaying a **Density** or **Frequency** curve. This change does not impact other page elements.
|
pred-dist-graph
|
---
title: Confusion matrix
description: The confusion matrix available on the DataRobot ROC Curve tab lets you evaluate accuracy by comparing actual versus predicted values.
---
# Confusion matrix {: #confusion-matrix }
The **[ROC Curve](roc-curve-tab-use)** tab provides a confusion matrix, a table that lets you evaluate accuracy by comparing actual versus predicted values. The name "confusion matrix" is used because the matrix shows whether the model is confusing two classes (consistently mislabeling one class as another).
## Evaluate accuracy using the confusion matrix {: #evaluate-accuracy-using-the-confusion-matrix }
1. Select a model on the Leaderboard and navigate to **Evaluate > ROC Curve**.
2. Select a [data source](threshold#select-data-for-visualizations) and set the [display threshold](threshold#set-the-display-threshold). The confusion matrix displays on the right side of the ROC Curve tab.

## Analyze the confusion matrix {: #analyze-the-confusion-matrix }
The sample confusion matrix above is based on [use case 2](roc-curve-tab-use#classification-use-case-2).
* Each column of the matrix represents the instances in a *predicted* class (predicted not readmitted, predicted readmitted).
* Each row represents the instances in an *actual* class (actually not readmitted, actually readmitted). If you look at the **Actual** axis on the left in the example above, **True** corresponds to the blue row and represents the positive class (1 or readmitted), while **False** corresponds to the red row and represents the negative class (0 or not readmitted).
* The matrix displays totals by row and column:

* Total correct predictions are TP + TN; total incorrect predictions are FP + FN. You can interpret the sample matrix as follows (reading left to right, top to bottom) for use case 2:
| Value | Model prediction |
|-------|---------------------|
| True Negative (TN) | 1207 patients predicted to not readmit that actually did not readmit. |
| False Positive (FP) | 3594 patients predicted to readmit, but actually did not readmit. |
| False Negative (FN) | 1504 patients predicted to not readmit, but actually did readmit. |
| True Positive (TP) | 6496 patients predicted to readmit that actually readmitted. |
!!! note
The [Prediction Distribution graph](pred-dist-graph) uses these same values and definitions.
The confusion matrix facilitates more detailed analysis than relying on accuracy alone. Accuracy yields misleading results if the dataset is unbalanced (great variation in the number of samples in different classes), so it is not always a reliable metric for the real performance of a classifier.
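The point about unbalanced data is easy to see with numbers. In this hedged sketch, a degenerate model that always predicts the majority class still posts high accuracy even though it never finds a positive:

```python
# Accuracy from confusion matrix counts: correct predictions over all rows.
def accuracy(tn, fp, fn, tp):
    return (tn + tp) / (tn + fp + fn + tp)

# 950 negatives, 50 positives; the "model" predicts negative for every row.
tn, fp, fn, tp = 950, 0, 50, 0
print(accuracy(tn, fp, fn, tp))  # 0.95 -- yet the true positive count is 0
```

The full confusion matrix exposes the failure (TP = 0, FN = 50) that the single accuracy number hides.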
When [smart downsampling](smart-ds) is enabled, the confusion matrix totals may differ slightly from the size of the data partitions (validation, cross-validation, and holdout). This is largely due to a rounding error. In actuality, rows from the minority class are always assigned a "weight" of 1 (not to be confused with the weight set in [**Advanced options**](additional)) and therefore never removed during downsampling. Only rows from the majority class get a "weight" greater than 1 and are potentially downsampled.
??? tip "To view total counts"
If you hover over a cell in the matrix (for example, the True Negative cell in the top left), you can see the total count as a numeric or percentage (total count as a numeric shown here):

|
confusion-matrix
|
---
title: ROC Curve tools
description: The ROC Curve tools help you explore classification, performance, and statistics related to a selected model at any point on the probability scale.
---
# ROC Curve tools {: #roc-curve-tools }
The **ROC Curve** tab provides tools for exploring classification, performance, and statistics related to a selected model at any point on the probability scale. The following topics show how to use these tools:
| Topic | Describes... |
|---|---|
| [Use the ROC Curve tools](roc-curve-tab-use) | Accessing the ROC Curve tab and understanding its components. |
| [Select data and display threshold](threshold) | Setting the data source and display threshold used for ROC Curve visualizations. |
| [Confusion matrix](confusion-matrix) | Using a confusion matrix to evaluate model accuracy by comparing actual versus predicted values. |
| [Prediction Distribution graph](pred-dist-graph) | Viewing the distribution of actual values in relation to the display threshold. |
| [ROC curve](roc-curve) | Using a ROC curve to view a plot of the true positive rate against the false positive rate for given data source. |
| [Profit curve](profit-curve) | Generating a profit curve to estimate the business impact of a selected model. |
| [Cumulative charts](cumulative-charts) | Generating charts to help assess a model's cumulative characteristics. |
| [Custom charts](custom-charts) | Generating your own charts to explore classification, performance, and statistics for a model. |
| [Metrics](metrics) | Viewing statistics that describe model performance at the selected display threshold. |
|
index
|
---
title: ROC curve
description: The ROC curve visualization in DataRobot helps you explore classification, performance, and statistics for a selected model. ROC curves plot the true positive rate against the false positive rate for a given data source.
---
# ROC curve {: #roc-curve }
The ROC curve visualization (on the **[ROC Curve](roc-curve-tab/index)** tab) helps you explore classification, performance, and statistics for a selected model. ROC curves plot the true positive rate against the false positive rate for a given data source.
## Evaluate a model using the ROC curve {: #evaluate-a-model-using-the-roc-curve }
1. Select a model on the Leaderboard and navigate to **Evaluate > ROC Curve**.
2. Select a [data source](threshold#select-data-for-visualizations) and set the [display threshold](threshold#set-the-display-threshold). The ROC curve displays in the center of the ROC Curve tab.

The curve is highlighted with the following elements:
* Circle—Indicates the currently selected threshold value. Each time you set a new [display threshold](threshold#set-the-display-threshold), the position of the circle on the curve changes.
* Gray intercepts—Provide a visual reference for the selected threshold.
* 45-degree diagonal—Represents the "random" prediction model.
## Analyze the ROC curve {: #analyze-the-roc-curve }
View the ROC curve and consider the following:
* [The shape of the curve](#roc-curve-shape)
* [The area under the curve (AUC)](#area-under-the-roc-curve)
* [The Kolmogorov-Smirnov (KS) metric](#kolmogorov-smirnov-ks-metric)
### ROC curve shape {: #roc-curve-shape }
Use the ROC curve to assess model quality. The curve, drawn based on each value in the dataset, plots the true positive rate against the false positive rate. Some takeaways from an ROC curve:
* An ideal curve grows quickly for small x-values, and slows for values of x closer to 1.
* The curve illustrates the tradeoff between sensitivity and specificity. An increase in sensitivity results in a decrease in specificity.
* A "perfect" ROC curve yields a point in the top left corner of the chart (coordinate (0,1)), indicating a true positive rate of 1 and a false positive rate of 0 (no false negatives and no false positives).
* The closer the curve comes to the 45-degree diagonal of the ROC space, the less accurate the model and the closer it is to a random assignment model.
* The shape of the curve is determined by the overlap of the classification distributions.
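To make the plotted quantities concrete, the following sketch computes (false positive rate, true positive rate) points by sweeping a threshold over predicted scores. The labels and scores are made-up illustrative values, not DataRobot output:

```python
# Illustrative only: compute ROC curve points by sweeping a threshold.
def roc_points(labels, scores, thresholds):
    """Return one (FPR, TPR) pair per threshold."""
    pos = sum(labels)            # count of actual positives
    neg = len(labels) - pos      # count of actual negatives
    points = []
    for t in thresholds:
        tp = sum(1 for y, s in zip(labels, scores) if s >= t and y == 1)
        fp = sum(1 for y, s in zip(labels, scores) if s >= t and y == 0)
        points.append((fp / neg, tp / pos))
    return points

labels = [1, 1, 1, 0, 0, 1, 0, 0]                   # hypothetical actuals
scores = [0.9, 0.8, 0.7, 0.6, 0.55, 0.5, 0.3, 0.1]  # hypothetical probabilities
print(roc_points(labels, scores, [0.2, 0.5, 0.75]))
# [(0.75, 1.0), (0.5, 1.0), (0.0, 0.5)]
```

Raising the threshold moves the point toward the lower left (fewer false positives, but also fewer true positives); lowering it moves the point toward the upper right.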
### Area under the ROC curve {: #area-under-the-roc-curve }
The **AUC** (area under the curve) is, literally, the area under the plotted **ROC Curve** (the shaded region below and to the right of the curve).

!!! note
AUC does not display automatically in the Metrics pane. Click **Select metrics** and select **Area Under the Curve (AUC)** to display it.
AUC is a metric for binary classification that considers all possible thresholds and summarizes performance in a single value, reported in the bottom right of the graph. The larger the area under the curve, the more accurate the model; however:
- An AUC of 0.5 suggests that predictions based on this model are no better than a random guess.
- An AUC of 1.0 suggests that predictions based on this model are perfect, and because a perfect model is highly uncommon, it is likely flawed (target leakage is a common cause of this result).
[StackExchange](http://stats.stackexchange.com/questions/132777/what-does-auc-stand-for-and-what-is-it?){ target=_blank } provides an excellent explanation of AUC.
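One way to see why AUC is threshold-independent: it equals the probability that a randomly chosen positive instance receives a higher score than a randomly chosen negative one (ties counted as half). A minimal sketch with made-up scores:

```python
# Illustrative only: AUC as the pairwise ranking probability.
def auc(labels, scores):
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 1, 0, 0, 1, 0, 0]
scores = [0.9, 0.8, 0.7, 0.6, 0.55, 0.5, 0.3, 0.1]
print(auc(labels, scores))      # 0.875: ranks most positives above negatives
print(auc([1, 0], [0.4, 0.4]))  # 0.5: no separation, like a random guess
```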
### Kolmogorov-Smirnov (KS) metric {: #kolmogorov-smirnov-ks-metric }
For binary classification projects, the KS optimization metric measures the maximum distance between two non-parametric distributions.

The KS metric evaluates and ranks models based on the degree of separation between true positive and false positive distributions.
!!! note
The KS metric does not display automatically in the Metrics pane. Click **Select metrics** and select **Kolmogorov-Smirnov Score** to display it.
For a complete description of the Kolmogorov–Smirnov test (K–S test or KS test), see the [Wikipedia](https://en.wikipedia.org/wiki/Kolmogorov-Smirnov_test){ target=_blank } article on the topic.
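Conceptually, the KS score is the largest vertical gap between the empirical cumulative distributions of predicted scores for the actual-positive and actual-negative classes. A small sketch with illustrative values:

```python
# Illustrative only: KS as the maximum gap between two empirical CDFs.
def ks_score(labels, scores):
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    def cdf(xs, t):
        # fraction of values at or below t
        return sum(1 for x in xs if x <= t) / len(xs)
    return max(abs(cdf(pos, t) - cdf(neg, t)) for t in set(scores))

labels = [1, 1, 1, 0, 0, 1, 0, 0]
scores = [0.9, 0.8, 0.7, 0.6, 0.55, 0.5, 0.3, 0.1]
print(ks_score(labels, scores))  # 0.75: large separation between the classes
```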
|
roc-curve
|
---
title: Metrics
description: The Metrics pane in the DataRobot ROC Curve tab helps you explore statistics related to a selected machine learning model.
---
# Metrics {: #metrics }
The Metrics pane, on the bottom right of the **[ROC Curve](roc-curve-tab-use)** tab, contains standard statistics that DataRobot provides to help describe model performance at the selected display threshold.
## View metrics {: #view-metrics }
1. Select a model on the Leaderboard and navigate to **Evaluate > ROC Curve**.
2. Select a [data source](threshold#select-data-for-visualizations) and set the [display threshold](threshold#set-the-display-threshold).
3. View the **Metrics** pane on the bottom right:

    The Metrics pane initially displays the F1 Score, True Positive Rate (Sensitivity), and Positive Predictive Value (Precision). You can display up to six metrics at a time.
4. To view different metrics, click **Select metrics** and select a new metric.
!!! note
    You can select up to six metrics to display. If you change the selection, the new metrics display the next time you access the **ROC Curve** tab for any model, until you change them again.

??? tip "ROC curve metrics calculations"
{% include 'includes/max-metrics-roc.md' %}
## Metrics explained {: #metrics-explained }
The following table provides a brief description of each statistic, using [Classification use case 1](roc-curve-tab-use#classification-use-case-1) to illustrate.
| Statistic | Description | Sample (from [use cases](roc-curve-tab-use#classification-use-cases)) | Calculation |
|------|-------|------|---------|
| F1 Score | A measure of the model's accuracy, computed based on precision and recall. | N/A |  |
| True Positive Rate (TPR) | *Sensitivity* or *recall*. The ratio of true positives (correctly predicted as positive) to all actual positives. | What percentage of diabetics did the model correctly identify as diabetics? |  |
| False Positive Rate (FPR) | *Fallout*. The ratio of false positives to all actual negatives. | What percentage of healthy patients did the model incorrectly identify as diabetics? |  |
| True Negative Rate (TNR) | *Specificity*. The ratio of true negatives (correctly predicted as negative) to all actual negatives. | What percentage of healthy patients did the model correctly predict as healthy? |  |
| Positive Predictive Value (PPV) | *Precision*. For all the positive predictions, the percentage of cases in which the model was correct.| What percentage of the model’s predicted diabetics are actually diabetic? |  |
| Negative Predictive Value (NPV) | For all the negative predictions, the percentage of cases in which the model was correct. | What percentage of the model’s predicted healthy patients are actually healthy? |  |
| Accuracy | The percentage of correctly classified instances. | What is the overall percentage of the time that the model makes a correct prediction? |  |
| Matthews Correlation Coefficient | Measure of model quality when the classes are of very different sizes (unbalanced). | N/A | [formula](https://en.wikipedia.org/wiki/Matthews_correlation_coefficient){ target=_blank } |
| Average Profit | Estimates the business impact of a model. Displays the average profit based on the [payoff matrix](profit-curve) at the current [display threshold](threshold#set-the-display-threshold). If a payoff matrix is not selected, displays N/A. | What is the business impact of readmitting a patient? | Average of the payoff values assigned to each prediction outcome, taken over all predictions. |
| Total Profit | Estimates the business impact of a model. Displays the total profit based on the [payoff matrix](profit-curve) at the current [display threshold](threshold#set-the-display-threshold). If a payoff matrix is not selected, displays N/A. | What is the business impact of readmitting a patient? | Sum of the payoff values assigned to each prediction outcome, taken over all predictions. |
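The threshold-dependent statistics in the table reduce to simple arithmetic on the four confusion-matrix counts. A sketch with illustrative counts (not taken from either use case):

```python
# Illustrative only: metrics derived from confusion-matrix counts.
tp, fp, tn, fn = 80, 20, 70, 30  # hypothetical counts at some threshold

tpr = tp / (tp + fn)                        # sensitivity / recall
fpr = fp / (fp + tn)                        # fallout
tnr = tn / (tn + fp)                        # specificity
ppv = tp / (tp + fp)                        # precision
npv = tn / (tn + fn)
accuracy = (tp + tn) / (tp + fp + tn + fn)
f1 = 2 * ppv * tpr / (ppv + tpr)            # harmonic mean of precision and recall

print(f"TPR={tpr:.3f} PPV={ppv:.3f} accuracy={accuracy:.3f} F1={f1:.3f}")
```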
|
metrics
|
---
title: Use the ROC Curve tools
description: Learn how to access the visualization tools available on the ROC Curve tab.
---
# Use the ROC Curve tools {: #use-the-roc-curve-tools }
The **ROC Curve** tools provide visualizations and metrics to help you determine whether the classification performance of a particular model meets your specifications. It is important to understand that the **ROC Curve** and other charts on that tab are based on a _sample_ of calculated thresholds. That is, DataRobot calculates thresholds for all data and then, because sampling returns results in the UI more quickly, uses a maximum of 120 thresholds—a quantile-based representative selection—for the visualizations. Manual calculations are slightly more precise; therefore, the initial auto-generated calculations and the manually generated ones will not match exactly.
??? tip "Benefits of ROC curve visualizations"
    Models produce probabilities—values anywhere from 0 to 1, with millions, if not tens or hundreds of millions, of unique possibilities. The action you take based on model results, however, is binary—you either send an offer or you don’t, for example. To "convert" all those probabilities to a binary action, you select a threshold ("any probability above X% and I send the offer"). The threshold creates two values—false positives and true positives—which can be plotted on a graph. The numbers of false and true positives vary as you change the threshold. A very low threshold results in many false positives, whereas a very high threshold produces almost none.
    Conceptually, the best outcome (and what a good model gives you) is few false positives and many true positives. If you look at a ROC curve, points in the upper left are what you want, as they represent a low false positive rate and a high true positive rate. A model that does no better than a coin flip shows a 45-degree line.
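The probability-to-action conversion described above can be sketched in a few lines. The probabilities and actuals below are made-up values:

```python
# Illustrative only: turning probabilities into a binary action at a threshold.
probs  = [0.95, 0.80, 0.60, 0.40, 0.20, 0.05]  # hypothetical model output
actual = [1,    1,    0,    1,    0,    0]     # hypothetical outcomes

def positives(threshold):
    """Count (true positives, false positives) at a given threshold."""
    tp = sum(1 for p, y in zip(probs, actual) if p >= threshold and y == 1)
    fp = sum(1 for p, y in zip(probs, actual) if p >= threshold and y == 0)
    return tp, fp

print(positives(0.1))  # (3, 2): low threshold, more true AND false positives
print(positives(0.7))  # (2, 0): high threshold, fewer of both
```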
## Access the ROC Curve tools {: #access-the-roc-curve-tools }
1. To access the **ROC Curve** tab, navigate to the **Leaderboard**, select the model you want to evaluate, then click **Evaluate > ROC Curve**. The **ROC Curve** tab contains the set of interactive graphical displays described below.

!!! tip
{% include 'includes/slices-viz-include.md' %}
| | Element | Description |
|---|---|---|
|  | Data Selection | [Select the data source](threshold#select-data-for-visualizations) for your visualization. Data sources can be partitions—**Holdout**, **Cross Validation**, and **Validation**—as well as external test data. Once you select a data source, the ROC curve visualizations update to reflect the new data.|
|  | [Data slice](sliced-insights) | _Binary classification only_. Selects the filter that defines the subpopulation to display within the insight.|
|  | Display Threshold | Select a [display threshold](threshold#set-the-display-threshold) that separates predictions classified as "false" from predictions classified as "true." |
|  | Export | Export to a CSV, PNG, or ZIP file: <ul><li>Download the data from your generated ROC Curve or Profit Curve as a CSV file.</li><li>Download a PNG of a ROC Curve, Profit Curve, Prediction Distribution graph, Cumulative Gain chart, or a Cumulative Lift chart.</li><li> Download a ZIP file containing all of the CSV and PNG files.</li></ul> See also [Export charts and data](export-results#export-charts-and-data). |
|  | Prediction Distribution | Use the [Prediction Distribution](pred-dist-graph) graph to evaluate how well your classification model discriminates between the positive and negative classes. The graph separates predictions classified as "true" from predictions classified as "false" based on the prediction threshold you set. |
|  | Chart selector | Select a type of chart to display. Choose from ROC Curve (default), Average Profit, Precision Recall, Cumulative Lift (Positive/Negative), and Cumulative Gain (Positive/Negative). You can also create your own [custom chart](custom-charts).|
|  | Matrix selector | Select a type of matrix to display. By default, a [confusion matrix](confusion-matrix) displays. You can choose to display the confusion matrix data by instance counts or percentages. You can instead create a payoff matrix so that you can generate and view a [profit curve](profit-curve).|
|  | + Add payoff | Enter payoff values to generate a [profit curve](profit-curve) so that you can estimate the business impact of the model. Clicking **Add payoff** displays a **Payoff Matrix** in the **Matrix** pane if not already displayed. Adjust the **Payoff** values in the matrix and set the **Chart** pane to **Average Profit** to view the impact. |
|  | Metrics | View summary statistics that describe model performance at the selected threshold. Use the **[Select metrics](metrics)** menu to choose up to six metrics to display at one time. |
2. To use these components, select a [data source and a display threshold](threshold) that separates predictions classified as "true" from those classified as "false"—the components work together to provide an interactive snapshot of the model's classification behavior at that threshold.

!!! note
    Several [Wikipedia pages](https://en.wikipedia.org/wiki/Receiver_operating_characteristic){ target=_blank } provide thorough descriptions of many of the elements on the **ROC Curve** tab. Some are summarized in the sections that follow.
## Classification use cases {: #classification-use-cases }
The following sections use one of two binary classification use cases to illustrate the concepts described. In both cases, each row in the dataset represents a single patient, and the features (columns) contain descriptive variables about the patient's medical condition.
The ROC curve is a graphical means of illustrating classification performance for a model as the display threshold moves across the probability scale and the relevant performance statistics change. To understand the reported statistics, you must understand the four possible outcomes of a classification problem; these outcomes are the basis of the [confusion matrix](confusion-matrix).
### Classification use case 1 {: #classification-use-case-1 }
Use case 1 asks "Does a patient have diabetes?" This hypothetical dataset has both categorical and numeric values and describes whether a patient has diabetes. The target variable, `has_diabetes`, is a categorical value that describes whether the patient has the disease (`has_diabetes=1`) or does not have the disease (`has_diabetes=0`). Numeric and other categorical variables describe factors like blood pressure, payer code, number of procedures, days in hospital, and more. For use case 1:
| Outcome | Description |
|-------------|--------------|
| True positive (TP) | A positive instance that the model correctly classifies as positive. For example, a diabetic patient correctly identified as diabetic. |
| False positive (FP) | A negative instance that the model incorrectly classifies as positive. For example, a healthy patient incorrectly identified as diabetic. |
| True negative (TN) | A negative instance that the model correctly classifies as negative. For example, a healthy patient correctly identified as healthy. |
| False negative (FN) | A positive instance that the model incorrectly classifies as negative. For example, a diabetic patient incorrectly identified as healthy. |
The following points provide some statistical reasoning behind using the outcomes:
* Correct predictions: 
* Incorrect predictions: 
* Total scored cases: 
* Error rate: 
* Overall accuracy (probability a prediction is correct): 
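The outcome arithmetic above, restated as a short sketch with illustrative counts:

```python
# Illustrative only: overall accuracy and error rate from outcome counts.
tp, fp, tn, fn = 60, 10, 25, 5    # hypothetical outcome counts

correct    = tp + tn              # correct predictions
incorrect  = fp + fn              # incorrect predictions
total      = correct + incorrect  # total scored cases
error_rate = incorrect / total
accuracy   = correct / total

print(error_rate, accuracy)       # 0.15 0.85
```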
### Classification use case 2 {: #classification-use-case-2 }
Use case 2 is a model that tries to determine whether a diabetic patient will be readmitted to the hospital (the target feature). This hypothetical dataset has both categorical and numeric values and describes whether a patient will be readmitted to the hospital within 30 days (target variable `readmitted`). This categorical value describes whether the patient is readmitted within 30 days (`readmitted=1`) or is not readmitted within that time frame (`readmitted=0`); other categorical values include things like admission ID and payer code. Numeric variables describe things like blood pressure, number of procedures, days in hospital, and more.
!!! note
DataRobot displays the **ROC Curve** tab only for models created for a binary classification target (a target with two unique values).
|
roc-curve-tab-use
|
---
title: Multiseries segmentation visual overview
dataset_name: N/A
description: Provides a visual overview, using a simple retail example, of multiseries modeling with segmentation in DataRobot.
domain: time series
expiration_date: 3-10-2022
owner: anatolii.stehnii@datarobot.com
url: docs.datarobot.com/docs/modeling/time/ts-reference/segmented-qs.html
---
# Multiseries segmentation visual overview {: #multiseries-segmentation-visual-overview }
<span style="font-size: 1rem">Imagine that you sell avocados—different kinds (SKUs).</span>

<span style="font-size: 1rem">You want to predict avocado sales, so your target is **Sales**.</span>

<span style="font-size: 1rem">You sell these avocados in different stores, in different regions of the country. So your series ID is **store**. </span>

<span style="font-size: 1rem">Of course, store sales don’t always have anything to do with one another. Maybe avocados sell often in hot places, and less often in cold places.</span>

<span style="font-size: 1rem">What you really need is a way to group series (stores in different regions) and forecast avocado sales based on that grouping. You can group the series ("stores") based on location and set that as the segment ID ("region"). </span>

<span style="font-size: 1rem">Now you can build the right model for every segment, instead of one model for all. For example, you can model avocados that don’t sell very often with a Zero-Inflated XGBoost model.</span>

<span style="font-size: 1rem">You may even benefit from using a different metric per segment. Metrics are automatically selected based on target distribution.</span>

<span style="font-size: 1rem">How? Multiseries modeling with segmentation.</span>

|
segmented-qs
|
---
title: Segmented modeling FAQ
dataset_name: N/A
description: Provides a list of frequently asked questions and brief answers about multiseries modeling with segmentation in DataRobot. Answers link to more complete documentation.
domain: time series
expiration_date: 3-10-2022
owner: anatolii.stehnii@datarobot.com
url: docs.datarobot.com/docs/modeling/time/ts-reference/segmented-faq.html
---
# Segmented modeling FAQ {: #segmented-modeling-faq }
??? faq "What is an example of a segment vs. a series?"
Imagine you sell avocados. Your target is "avocado_sales" and the series ID is “stores selling avocados.” The segment ID is “Region of the country." Think of a segment as a group of series. Let's look at the _Northwest_ segment. Because avocado sales in Alaskan stores don't resemble California stores’ sales, assigning and building around a segment ID is like building a business rule cluster. Instead of predicting avocado sales, you are predicting avocado sales in the _Northwest_ region. See also the [visual quickstart](segmented-qs).
??? faq "Is DataRobot building one model per series?"
DataRobot builds multiple models *per segment* (a group of series); every segment has its own Leaderboard. DataRobot then selects and prepares a champion model from each segment Leaderboard.
??? faq "Does DataRobot pick the segment champion?"
DataRobot recommends one model from the segment Leaderboard, prepares it for deployment, and marks it as segment champion. That model then represents the segment in the Combined Model. You can, however, reset the champion to any model on the segment's Leaderboard.
??? faq "What are your dataset file size constraints?"
    Regular time series [dataset file size](file-types#time-series-file-import-sizes) constraints apply. Segmented modeling supports up to 100 segments, but the segment sizes cannot exceed the total training set size limitation. However, consider carefully how many segments you create, because each segment is effectively its own Autopilot. In other words, if you aren't prepared to run 100 instances of Autopilot, don’t start a segmented modeling project with 100 segments.
??? faq "Do segmented projects use the same feature engineering?"
The internal time series feature engineering process makes different features for each segmented project, based on what it finds useful in that segment. There is likely some overlap between segments, but the full list of generated features per segment will differ.
??? faq "Where do I set the Forecast Window and Feature Derivation Window?"
    The flow is the same as setting a series ID, except that now you set a segment ID before configuring windows. You can also go back and edit your segment ID if you need to.
??? faq "Can the same column be used for the series ID and the segment ID?"
    No, the series ID and segment ID must be different columns. If you want one series per segment (a single-series segmented project), duplicate the series ID column, giving it a new name, and set the segment ID to that column name. DataRobot then generates the segments using the series ID.
    If you don't want to create a new column, you can often work within the original data. For example, if you previously set `customer_unique_id` as your series ID to predict sales for different product IDs, try using `customer_unique_id` as your segment ID and `product_id` as your series ID.
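    A minimal sketch of the duplicate-column workaround, shown on a plain list of records (the column names here are hypothetical):

    ```python
    # Illustrative only: copy the series ID into a new column to use as segment ID.
    rows = [
        {"date": "2024-01-01", "store": "A", "sales": 10},
        {"date": "2024-01-01", "store": "B", "sales": 7},
    ]
    for row in rows:
        row["store_segment"] = row["store"]  # segment ID mirrors the series ID
    print(rows[0]["store_segment"])          # A
    ```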
??? faq "What are some ways to think about creating segments?"
Here are some favorites—segment by:
* Customer size
* SKUs, grouped by sales velocity
* Region
* Areas, by temperature
* Series by size (small, medium, large)
* Target distribution
??? faq "What kind of partitioning does segmented modeling use?"
    Segmented modeling uses automated partitioning based on the size of the data, running different partitioning for each project. This ensures that backtests are neither too long nor so short that they contain no data.
??? faq "How are segmented models treated as a deployment?"
The Combined Model created with segmented modeling is treated as one deployment.
??? faq "How are metric scores computed?"
    DataRobot runs Autopilot (full or Quick) on each segment independently, providing better accuracy. When modeling is complete for all child projects, metrics become available for the Combined Model and are displayed on the Leaderboard. The metric scores of the segment champion models are aggregated as a weighted sum. When you change a champion model, scores are recomputed. Available metrics are MAD, MAE, MAPE, MASE, RMSE, RMSLE, SMAPE, and Theil’s U.
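    The weighted-sum aggregation can be sketched as below. Weighting each champion's score by its segment's row count is an assumption for illustration; the exact weights DataRobot uses are not specified here:

    ```python
    # Illustrative only: combine segment-champion scores as a weighted sum.
    # Row-count weighting is an assumed scheme, not a documented formula.
    segments = [
        {"name": "Northwest", "rmse": 12.0, "rows": 400},
        {"name": "Southwest", "rmse": 8.0,  "rows": 600},
    ]
    total_rows = sum(s["rows"] for s in segments)
    combined_rmse = sum(s["rmse"] * s["rows"] for s in segments) / total_rows
    print(combined_rmse)  # 9.6
    ```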
|
segmented-faq
|
---
title: Time-aware considerations
description: This page describes considerations to be aware of when working with DataRobot time series modeling.
---
# Time-aware considerations {: #time-aware-considerations }
Both time-aware modeling mechanisms—OTV and automated time series—are implemented using [date/time partitioning](ts-date-time). Therefore, the date/time partitioning notes apply to all time-aware modeling. See also:
* [Time series-specific](#time-series-specific-considerations) considerations
* [Multiseries](#multiseries-considerations) considerations
* [Clustering (time series-specific)](#clustering-considerations) considerations
* [Segmented modeling](#segmented-modeling-considerations)
See the documented [file requirements](file-types) for information on file size and series limit considerations.
!!! note
Considerations are listed beginning with newest additions for easier identification.
## Date/time partitioning considerations {: #datetime-partitioning-considerations }
{% include 'includes/dt-consider.md' %}
## Time series-specific considerations {: #time-series-specific-considerations }
In addition to the above items, consider the following when working with time series projects:
* [Accuracy](#accuracy)
* [Anomaly Detection](#anomaly-detection)
* [Data prep tool](#data-prep-tool)
* [Data Quality](#data-quality)
* [Monotonic constraints](#monotonic-constraints)
* [Productionalization](#productionalization)
* [Scale](#scale)
* [Trust](#trust)
### Accuracy {: #accuracy }
* DeepAR:
* Regression only
* Feature lists must contain latest naive baseline feature
    * Supports covariates, but only those that are available at prediction time (e.g., date-derived, known-in-advance, and calendar features)
* Target cannot be DND
    * Not available in FW=0 mode
* Not available in unsupervised mode
* Training dataset cannot be sampled
* Temporal hierarchical models:
* Supports regression projects only
* Target cannot be DND
* Not available in FW=0 mode
* Not available for row-based projects
* Nowcasting:
    * Because MASE and Theil’s U require a derived target, these metrics are only available for regression projects with a derived target.
* Feature Effects, Compliance documentation, and Prediction Explanations are not supported for autoregressive models (Traditional Time Series (TTS) and deep learning models). This includes:
* All ARIMA:
* Per Series nonseasonal AUTOARIMA with Fixed Error Terms (required feature flags: Enable Multiseries Scoring Code Developer Blueprints + Enable Scoring Code)
* Per-Series nonseasonal AUTOARIMA
* Per-Series nonseasonal AUTOARIMA with Fourier terms
* Non-seasonal AUTOARIMA
* AUTOARIMA with naive prediction offset
* All VAR:
* Multiseries VARMAX
* Multiseries VARMAX with Fourier terms
* All RNN and LSTM (DeepAR, Sequence to Sequence, etc.)
* Other autoregressive modelers such as Prophet, TBATs, and ETS.
### Anomaly Detection {: #anomaly-detection }
* Model comparison:
* External test sets are not available.
* The “All backtests” option is not available.
* Multistage OTV is not available for unsupervised projects.
* The anomaly threshold for the **Anomaly Over Time** chart is fixed at 0.5 for per-series blueprints. Non-per-series blueprints use a dynamically computed threshold.
* The Anomaly Assessment Insight:
* Does not work for unsupervised AutoML
    * Displays a maximum of 500 of the most anomalous points per source (this limit can be reconfigured)
* Is not available for blenders
    * Is not computed for the training data if the training data is too large.
### Data prep tool {: #data-prep-tool }
Consider the following when doing gap handling and aggregation:
* Data prep is not supported for deployments or for use with the API.
* Only numeric targets are supported.
* Only numeric, categorical, text, and primary date columns are included in the output.
* The smallest allowed time step for aggregation is one minute.
* Datasets added to the AI catalog prior to introduction of the data prep tool are not eligible. Re-upload datasets to apply the tool.
* Shared deployments do not support automatic application of the transformed data prep dataset for predictions.
### Data Quality {: #data-quality }
* The check for leading/trailing zeros runs only when less than 80% of target values are zeros.
### Monotonic constraints {: #monotonic-constraints }
* XGBoost is the only supported model.
* While you can create a monotonic feature list after project creation with any numeric post-derivation feature, if you specified a raw feature list as monotonic before project creation, all features in it will be marked as Do not Derive (DND).
* When there is an offset in the blueprint (for example, naive predictions), the final predictions may not be monotonic after the offset is applied. XGBoost itself honors monotonicity.
* If the model is a collection of models (like per-series XGBoost or a performance-clustered blueprint), monotonicity is preserved per series/cluster.
### Productionalization {: #productionalization }
* Prediction Explanations:
* Are not available for AutoRegressive Models (LSTM/ARIMA/VARMAX) or blenders containing them.
* Are defined relative to the training dataset, not the recent history.
* Require at least 100 rows of validation data.
* For a model trained into Holdout as part of Autopilot are not available until the holdout is unlocked.
* For blenders created directly from frozen start/end models trained into Validation are not available. They are available if a blender of the parent models is retrained into Validation or Holdout.
* Are not supported for series-scaling models in cross-series projects or the blenders containing them.
* Are only available using the XEMP methodology.
* ARIMA, LSTM, and DeepAR models cannot be deployed to prediction servers. Instead, deploy using either:
* the **Portable Predictions Server**—an execution environment for DataRobot model packages (.mlpkg files).
* the **Make Predictions** tab (for datasets up to 1GB).
* DataRobot **Scoring Code** (ARIMA only).
* [Scoring code support](download#scorecode-intro) requires the following feature flags: Enable Scoring Code, Enable Scoring Code Support for Time Series, Enable Scoring Code support for Keras Models (if needed), Enable Multiseries Scoring Code Developer Blueprints (if needed)
* Time series batch predictions are not available for cross-series projects or traditional time series models (such as ARIMA).
### Scale {: #scale }
* For temporal hierarchical models, the **Feature Over Time** chart may look different from the data used at the edges of the partitions for the temporal aggregate.
* When using configurable model parallelization (Customizable FD splits), if one parallel job is deleted during Autopilot, the remaining model split jobs will error.
* 10GB OTV requires multistep OTV be enabled.
### Trust {: #trust }
* Model Comparison (over time) shows the first 1000 series only. The insight does not support synchronization with job computation status and can only show fully precomputed data.
* Forecast vs Actuals (FvsA) chart:
* UI is limited to showing and computing a maximum of 100 forecast distances at a time
* UI is limited to showing 1000 bins at a time
* API is not public
* Training CSV export is not available
* PNG and ZIP export are not available
    * The chart may be slow on large datasets with wide FDs
* FvsA chart is not available for projects with [0,0] forecast window
* Calculation for any particular backtest/source will remove any previously calculated Accuracy Over Time (AOT) data for this backtest/source. However, AOT will be recalculated with FvsA for the selected forecast distance range.
* Accuracy over Time (AOT) chart:
* UI is limited to showing 1000 bins at a time
* When handling data quality issues in Numeric Data Cleansing, some models can experience performance regression.
* CSV export is not available for “All Backtests” in the Forecast vs Actuals chart.
## Multiseries considerations {: #multiseries-considerations }
In addition to the general time series considerations above, be aware:
* The Feature Association Matrix is not supported.
* Most multiseries UI insights and plots support up to 1000 series. For large datasets, however, some insights must be calculated on-demand, per series.
* Multiseries supports a single (1) series ID column.
* Multiseries ID values should be either all numeric or all strings. Blank or float data type series ID values are not fully supported.
* Multiseries does not support Prophet blueprints.
## Clustering considerations {: #clustering-considerations }
* Clustering is only available for multiseries time series projects. Your data must contain a time index and at least 10 series.
* To create X _clusters_, you need at least X _series_, each with 20+ time steps. (For example, if you specify 3 clusters, at least three of your series must each span 20 or more time steps.)
* The union of all selected series must collectively span at least 35 time steps.
* At least two clusters must be discovered for the clustering model to be used in a segmented modeling run.
??? info "What does it mean to "discover" clusters?"
    To build clusters, DataRobot must be able to group data into two or more distinct groups. For example, if a dataset has 10 series but they are all copies of the same single series, DataRobot would not be able to discover more than one cluster. In a more realistic example, very slight time shifts of the same data will also not be discoverable. If all the data is so mathematically similar that it cannot be separated into distinct clusters, it cannot subsequently be used by segmentation.
The "closeness" of the data is model-dependent—the convergence conditions are different. Velocity clustering would not converge if a project has 10 series, all with the same means. That, however, does not imply that K-means itself wouldn't converge.
Note, however, the restrictions are less strict if clusters are _not_ being used for segmentation.
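As a quick sanity check, the numeric prerequisites above can be sketched in a few lines of Python (the `can_cluster` helper is purely illustrative, not part of the DataRobot API):

```python
def can_cluster(n_clusters, series_lengths, union_span):
    """Check the documented prerequisites for time series clustering.

    series_lengths: number of time steps in each series.
    union_span: time steps collectively spanned by the union of all series.
    """
    if len(series_lengths) < 10:
        return False  # multiseries clustering needs at least 10 series
    if n_clusters < 2:
        return False  # at least two clusters must be discoverable
    # To create X clusters, at least X series need 20+ time steps
    if sum(1 for n in series_lengths if n >= 20) < n_clusters:
        return False
    # The union of the selected series must span at least 35 time steps
    return union_span >= 35

# 10 series of 25 steps each, spanning 40 steps overall: OK for 3 clusters
print(can_cluster(3, [25] * 10, union_span=40))            # True
# Only two series are long enough, so 3 clusters cannot be created
print(can_cluster(3, [25, 25] + [5] * 8, union_span=40))   # False
```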
## Segmented modeling considerations {: #segmented-modeling-considerations }
* Projects are limited to 100 segments; all segments must total less than 1GB (5GB with a feature flag; contact your DataRobot representative).
* Predictions are only available when using the **Make Predictions** tab on the Combined Model's Leaderboard or via the API.
* If you manually assigned segments by selecting a segment ID (instead of using discovered clusters), the prediction dataset must not contain a new segment ID that does not appear in the training dataset.
* The prediction dataset must fulfill historical data requirements for each segment. Segment projects with detected seasonality require more historical rows than those without seasonality.
* Time series clustering projects are supported. See the associated [considerations](clustering#clustering-for-time-aware-projects).
### Combined Model deployment considerations {: #combined-model-deployment-considerations }
{% include 'includes/deploy-combined-model-include.md' %}
## Release 6.0 and earlier {: #release-60-and-earlier }
* For the [**Make Predictions**](ts-predictions#make-predictions-tab) tab:
* The Forecast Settings modal appears only if the dataset was uploaded after release 5.3. The automatically generated extended prediction file template is available only if the dataset was uploaded after release 6.0.
* If a dataset exceeds the [upload file size limit](file-types) after expansion it will not be expanded.
* When a prediction dataset requires automatic expansion and also contains rows without a target, the expanded rows might have dates that duplicate those in the rows without a target (and will fail to predict). To resolve this, remove the rows without a target before uploading the file.
* DataRobot displays a warning when KA values are missing but does not itemize the specific missing values per forecast point.
* Classification models are not optimized for rare events, and should have >15% frequency for their minority label.
* Run Autoregressive models using the "Baseline Only" feature list. Using other feature lists could cause Feature Effects or compliance documentation to fail, as the autoregressive models do not use the additional features that are part of the larger default lists and they are not designed to work with them.
* Feature Effects and Compliance documentation are disabled for LSTM/DeepAR blueprints.
* Eureqa with Forecast Distance is limited to 15 FD values. These blueprints only run on smaller datasets with fewer than 100K rows, or if the total number of levels across the categorical features is less than 1000. Their grid search plots in Advanced Tuning mark only the single best grid search point, independent of the FD value. The blueprint can take a long time to complete if the `task size` parameter is set too large.
* Forecast distance blenders are limited to projects with a maximum of 50 FDs.
* The "Forecast distance" selector on the **Coefficients** tab is not available for backtests and models that do not use ForecastDistanceMixin, for example, ARIMA models.
* Monthly differencing on daily datasets can only be triggered through detection. Currently, there is no support to specify monthly seasonality via an advanced option in the UI or API.
* RNN-based blueprints (long short-term memory (LSTM) and gated recurrent unit (GRU)) support a maximum of 1000 categorical levels (to prevent out-of-memory errors). High-cardinality features are truncated beyond this limit.
* The training partition for the holdout row in the flexible backtesting configuration is not directly editable. The duration of the first backtest’s training partition is used as the duration for the training partition of the holdout.
* For Repository blueprints, selecting a best-case default feature list is available for ARIMA models only.
* Hierarchical modeling requires the data’s series to be aligned in time (specifically 95% of series must appear on 95% of the timestamps in the data).
* Hierarchical and series-scaled blueprints require the target to be non-negative.
* Series-scaled blueprints only support squared loss (no log link).
* Hierarchical and LSTM blueprints do not support projects that require sampling.
* Model-per-series blueprints (XGBoost, ENET) support up to 50 series. They are not available for Advanced Tuning if the number of series is more than 10.
* ARIMA per-series blueprints are limited to 15K rows per series (i.e., 150K rows for 10 series) and support up to 40 series. The blueprint runs in Autopilot when the number of series is less than 10. Due to a refit for every prediction, the series accuracy computation can take a long time.
* Clustered blueprints are not available for classification. Similarity-based clustering is time-consuming to train and uses large amounts of memory (use the default performance-based clustering for large datasets).
* Zero-inflated blueprints are enabled if the target’s minimum value is 0.
* Zero-inflated blueprints only support the “nonzero average baseline” feature list.
* Setting the target to do-not-derive still derives the simple naive target feature for regression projects.
* Hierarchical and zero-inflated models cannot be used when a target is set to do-not-derive because the feature derivation process does not generate the target derived features required for zero-inflated & hierarchical models.
* The group ID for cross-series features cannot have blank or missing values; they cannot mix numeric and non-numeric values, similar to the series ID constraints.
* Prediction Explanations are not available for XGBoost-based hierarchical and two-stage models.
* Series scaling blueprints may have poor accuracy when predicting new series.
* The Feature Association Matrix is not supported in multiseries projects.
* Timestamps can be irregularly spaced but cannot contain duplicate dates within a series.
* Time series datasets cannot contain dates past the year 2262.
* To ensure backtests have enough rows in highly irregular datasets, use row-count partitioning instead of duration partitioning.
* VARMAX and VAR blueprints do not support log-transform/exponential modeling.
* ARIMA, VARMAX, and VAR blueprint predictions require history back to the end of the training data when making predictions.
* For non-forecasting time series models (those that allow predicting the current target, `FW=[0, 0]`):
* Forecast window FW=[0,0] is allowed but not FW=[0, N] where N>0
* Forecast window FW=[0,0] will not generate any lags of the target (similar to OTV)
* Loss families have changed for time series blenders, which may slightly change blending results. Specifically:
* When the target is exponential and metric is RMSE, MASE, or Theil's U, the loss family is Poisson or Gamma.
* When the target is not exponential, the loss family is Gaussian.
* Binary classification projects have somewhat [different options available](ts-flow-overview#project-types) than regression projects. Additionally, classification projects:
* are not optimized for rare events (they should have >15% frequency of the minority label).
* must have examples of all labels in all backtest partitions.
* do not support differencing, ARIMA, or detecting seasonality.
* can show error bars beyond the 0-1 range in the prediction preview plot.
* Millisecond datasets:
* Can only specify training and partitioning boundaries at the second level.
* Must span multiple seconds for partitioning to work.
* Row-based projects require a primary date column.
* Calendar event files:
* cannot be updated in an active project. Specify all future calendar events at project start; otherwise, train a new project.
* require clearing and re-uploading the dataset if you change the series ID after uploading a multiseries calendar.
* must be under 10MB.
* When running blueprints from the Repository, the _Time Series Informative Features_ list (the default selection if you do not override it) is not optimal. Preferably, select one of the "with differencing" or "no differencing" feature lists.
* The [Forecast Window](glossary/index#forecast-window) must be 1000 forecast distances (FDs)/time steps or fewer for small datasets.
* You cannot modify R code for Prophet blueprints; also, they do not support calendar events and cannot use known in advance features.
* Only Accuracy Over Time, Stability, Forecasting Accuracy, and Series Insights plots are available for export; other time series plots are not exportable from the UI or available through the public API.
* Large datasets with many forecast distances are down-sampled after feature derivation to <25GB.
* [Accuracy Over Time](aot) training computation is disabled if the dataset exceeds the configured threshold after creation of the modeling dataset. The default threshold is 5 million rows.
* Seasonal AUTOARIMA uses large amounts of memory for large seasonality and, due to Python 2.7 issues, could fail on large datasets.
* Seasonality is only detected automatically if the periodicity fits inside the feature derivation window.
* TensorFlow neural network blueprints (in the Repository) do not support text features or making predictions on new series not in the training data.
|
ts-consider
|
---
title: Time series reference
description: This section provides deep-dive reference material for DataRobot time series modeling.
---
# Time series reference {: #time-series-reference }
This section provides deep-dive reference material for DataRobot time series modeling.
Topic | Describes...
----- | ------
[Time series framework](ts-framework) | The framework DataRobot uses to build time series models.
[Time series feature derivation](feature-eng) | The feature derivation process and intermediate and final features.
[Autopilot in time-aware projects](multistep-ta) | Modeling modes as they apply to time aware projects.
[Time series feature lists](ts-feature-lists) | Details of the feature lists used for time series modeling.
Multiseries with segmentation | An [FAQ](segmented-faq) and [visual example](segmented-qs) to help understand multiseries segmentation.
[Feature considerations](ts-consider) | Considerations to keep in mind when working with time-aware modeling.
|
index
|
---
title: Autopilot in time-aware projects
description: DataRobot's modeling modes are different in time series projects where the modeling mode defines the set of blueprints run but not the amount data to train on.
---
# Autopilot in time-aware projects {: #autopilot-in-time-aware-projects }
!!! note
See the [AutoML modeling mode](model-data#set-the-modeling-mode) description for non-time-aware modeling.
Modeling modes define the automated model-building strategy—the set of blueprints run and the sampling size used. DataRobot selects and runs a predefined set of blueprints, based on the specified target and date/time feature, and then trains the blueprints on an ever-increasing portion of the training backtest partition. Running more models in the early stages and advancing only the top models to the next stage allows for greater model diversity and faster Autopilot runtimes.
The default, Quick (Autopilot), is a shortened and optimized version of the full Autopilot mode. Comprehensive mode, which can be quite time-intensive, runs all Repository blueprints. Manual mode allows you to choose blueprints and sample sizes. The sample percentage sizes used are based on the selected mode, which are described in the table below.
!!! note
For time series projects, the modeling mode defines the set of blueprints run but not the feature reduction process. Using Quick mode has additional implications for [time series](#feature-reduction-with-time-series) (not OTV) projects.

The following table defines the modeling percentages for the selectable modes for OTV projects. Time series projects run on 100% of data. All modes, by default, run on these feature lists:
* [Informative Features](feature-lists#automatically-created-feature-lists) (OTV)
* [Time Series Informative Features](ts-feature-lists) (time series)
Percentages listed refer to the percentage of total rows (rows are defined by duration or row count of the partition). Maximum number of rows is determined by [project type](ts-flow-overview#project-types). You can, however, train any model to any sample size from the **Repository**. Or, from the Leaderboard, [retrain models](creating-addl-models) to any size or change the training range and sample size using the [**New training period**](ts-customization#change-the-training-period) option.
| Start mode | Blueprint selection | Sample size for each partition |
|-----------------|-------------------------|---------------------------------|
| Quick (default) | Runs a subset of blueprints, based on the specified target feature and performance metric, to provide a base set of models and insights quickly. | Models are directly trained at the maximum training size for each backtest, defined by the project's date/time partitioning.|
| Autopilot | Runs on a larger selection of blueprints. | Runs using sample sizes beginning with 25%, then 50%, and finally 100% on highest accuracy models of the previous phase. |
| [Comprehensive](more-accuracy) | Runs all Repository blueprints on the maximum sample size (100%) to ensure highest accuracy for models. This mode results in extended build times. Not available for time series or unsupervised projects. | 100% |
| Manual | Runs [EDA2](eda-explained) and then provides a link to the blueprint Repository for full control over which models to run and at what sample size. | Custom |
Sample sizes differ when working with [smaller datasets](#small-datasets).
For example, when you start full Autopilot for an OTV project, DataRobot first selects blueprints optimized for your project based on the target and date/time feature selected. It then runs models using 25% of the data in Backtest 1. When those models are scored, DataRobot selects the top models and reruns them on 50% of the data. Taking the top models from that run, DataRobot runs on 100% of the data. Results of all model runs, at all sample sizes, are displayed on the Leaderboard. The data that comprises those samples is determined by the [sampling method](ts-date-time#bt-force), either _random_ (random _x_% rows within the same range) or _latest_ (_x_% of latest rows within the backtest for row count or selected time period for duration).
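The survivor-of-the-fittest progression can be sketched as a toy loop (the `score` callback and `keep_top` ratio are illustrative stand-ins, not DataRobot's actual selection logic):

```python
def run_autopilot(blueprints, score, stages=(25, 50, 100), keep_top=0.5):
    """Train candidates at increasing sample sizes; only the top scorers of
    each stage advance. All runs stay on the leaderboard."""
    leaderboard, candidates = [], list(blueprints)
    for pct in stages:
        scored = sorted(((score(bp, pct), bp, pct) for bp in candidates),
                        reverse=True)
        leaderboard.extend(scored)
        n_keep = max(1, int(len(scored) * keep_top))
        candidates = [bp for _, bp, _ in scored[:n_keep]]
    return leaderboard

# Toy scorer: pretend accuracy is fixed per blueprint (a real run scores backtests)
accuracy = {"ENET": 0.7, "XGBoost": 0.9, "LightGBM": 0.85, "Ridge": 0.6}
board = run_autopilot(accuracy, lambda bp, pct: accuracy[bp])
print(board[-1])  # (0.9, 'XGBoost', 100): the best survivor, trained at 100%
```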
### Small datasets {: #small-datasets }
Autopilot changes the sample percentages run depending on the number of rows in the dataset. The following table describes the criteria:
| Number of rows | Percentages run |
|-----------------------|----------------------------------------------|
| Less than 2000 | Final Autopilot stage only (100%) |
| Between 2001 and 3999 | Final two Autopilot stages (50% and 100%) |
| 4000 and larger | All stages of Autopilot (25%, 50%, and 100%) |
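Assuming the boundary at exactly 2000 rows falls into the middle tier (the table leaves it unspecified), the staging logic can be sketched as:

```python
def autopilot_stages(n_rows):
    """Return the sample percentages Autopilot runs, per the table above."""
    if n_rows < 2000:
        return [100]            # final stage only
    if n_rows < 4000:           # 2001-3999 in the table
        return [50, 100]        # final two stages
    return [25, 50, 100]        # all stages

print(autopilot_stages(1500))   # [100]
print(autopilot_stages(3000))   # [50, 100]
print(autopilot_stages(4000))   # [25, 50, 100]
```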
## Why the sampling method matters {: #why-the-sampling-method-matters }
When you configure the [backtest sampling method](ts-date-time#bt-force), the selection has an impact on backtesting configuration, model blending, and selecting the best model. Unlike AutoML, the model trained on the highest sample size might not be the best model. When using *Random* sampling, observable history remains the same on all sample sizes. In that case, DataRobot's behavior is similar to AutoML and Autopilot prefers models trained on higher sample sizes.
By contrast, using the latest sampling method implies a level of importance of historical data in model training. This is because in time-aware projects, going further back into historical data can have a significant effect on accuracy, either boosting it or introducing additional noise. When using *Latest*, Autopilot considers models trained on any sample size during its various stages (e.g., when retraining the best model on a reduced feature list or preparing for deployment).
When using duration or customized backtest ("project settings mode"), DataRobot uses a percentage of the time window sample. For row count mode, it uses the maximum rows used by the smallest backtest. You can see the mode/sampling/training type [listed on the Leaderboard](ts-leaderboard#time-aware-models-on-the-leaderboard).
## Other aspects of multistep OTV {: #other-aspects-of-multistep-otv }
The following sections describe aspects specific to time-aware modeling.
### Blending {: #blending }
With multistep OTV, blenders can combine models trained on different sample sizes, because the top models may have been trained at different sizes. DataRobot does not blend models that use the same blueprint and feature list (but different sample sizes), even if they are the highest scoring models.
### Preparing for deployment {: #preparing-for-deployment }
When preparing the best model for deployment, DataRobot retrains it on the most recent data by shifting the training period to the end of dataset and freezing parameters. The [sampling method](ts-date-time#set-rows-or-duration) can affect how the model is prepared for deployment in a following manner:
* if *random*, the model prepared for deployment uses the largest possible sample. For example, if the best model was trained on `P1Y @ 50% (Random)`, the resulting model will be trained on the last P1Y in the dataset, with no sampling.
* if *latest*, the exact training parameters are preserved. (In the same case above, the resulting model would be trained on `P1Y @ 50% (Latest)`.)
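The two cases can be summarized in a small sketch (the tuple format and function name are illustrative, not a DataRobot API):

```python
def deployment_training_spec(duration, pct, sampling):
    """Shift training to the end of the dataset; random sampling expands to
    the full sample, while latest sampling preserves the exact parameters."""
    if sampling == "random":
        # e.g., P1Y @ 50% (Random) -> last P1Y in the dataset, no sampling
        return (duration, 100, None)
    return (duration, pct, "latest")

print(deployment_training_spec("P1Y", 50, "random"))  # ('P1Y', 100, None)
print(deployment_training_spec("P1Y", 50, "latest"))  # ('P1Y', 50, 'latest')
```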
### Downscaling {: #downscaling }
When running Autopilot, DataRobot initially caps the sample size and downscales the dataset to 500 MB. If the estimated training size exceeds that amount, downscaling happens proportionately. In downscaled projects with random sampling, the model prepared for deployment is still trained on 100% of the data to maximize accuracy (despite the fact that Autopilot's maximum sample size is smaller). An additional frozen model is trained on 100% of the data within the backtest to provide insights as close as possible to the model prepared for deployment. Note that you can train any model to any sample size (exceeding 500 MB) from the Repository, or retrain models to any size from the Leaderboard.
### Feature reduction with time series {: #feature-reduction-with-time-series }
When using Quick mode in time series (not OTV) modeling, DataRobot applies a more aggressive feature reduction strategy, resulting in fewer derived features and therefore different types of blueprints available in the Repository.
This does not apply to unsupervised time series projects. In unsupervised, blueprint choice is the same between full Autopilot and Quick modes. The only difference for Quick is that the feature reduction threshold used affects the number of derived features used for the [SHAP-based Reduced Features](ts-feature-lists) list.
|
multistep-ta
|
---
title: Time series framework
description: Gain a deeper understanding of the framework DataRobot uses to build time series models.
---
# Time series framework {: #time-series-framework }
This section describes the basic time series framework, window-created gaps, and common data patterns for time series problems.
## Basic time series framework {: #basic-time-series-framework }
The simple time series modeling framework can be illustrated as follows:

* The [_Forecast Point_](glossary/index#forecast-point) defines an arbitrary point in time for making a prediction.
* The [_Feature Derivation Window (FDW)_](glossary/index#feature-derivation-window), to the left of the Forecast Point, defines a rolling window of data that DataRobot uses to derive new features for the modeling dataset.
* Finally, the [_Forecast Window (FW)_](glossary/index#forecast-window), to the right of the Forecast Point, defines the range of future values you want to predict (known as the [_Forecast Distances (FDs)_](glossary/index#forecast-distance)). The Forecast Window tells DataRobot, "Make a prediction for each day inside this window."
Note that the values specified for the Forecast Window are inclusive. For example, if set to +2 days through +7 days, the window includes days 2, 3, 4, 5, 6, and 7. By contrast, the Feature Derivation Window does not include the left boundary but does include the right boundary. (In the image above, DataRobot uses from 7 days before the Forecast Point to 27 days before, but not day 28). This is important to consider when setting the window because it means that DataRobot sets lags exclusive of the left (older) side, but inclusive of the right (newer) side. Be aware that when using a differenced feature list at prediction time, you need to account for the difference. For example, if a model uses 7-day differencing, and the feature derivation window spanned [-28 to 0] days, the effective derivation window would be [-35 to 0] days.
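A minimal sketch of these inclusivity rules, using signed day offsets relative to the Forecast Point (helper names are illustrative):

```python
def forecast_window_days(start, end):
    """Forecast Window is inclusive on both ends: FW [+2, +7] covers days 2-7."""
    return list(range(start, end + 1))

def derivation_window_days(left, right):
    """Feature Derivation Window excludes the left (older) boundary but
    includes the right: FDW [-28, -7] uses days -27 through -7."""
    return list(range(left + 1, right + 1))

def effective_fdw(left, right, diff_days=0):
    """A d-day differenced feature needs d extra days of history on the left."""
    return (left - diff_days, right)

print(forecast_window_days(2, 7))           # [2, 3, 4, 5, 6, 7]
print(derivation_window_days(-28, -7)[0])   # -27 (day -28 is excluded)
print(effective_fdw(-28, 0, diff_days=7))   # (-35, 0)
```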
The time series framework captures the business logic of how your model will be used by encoding the amount of recent history required to make new predictions. Setting the recent history configures a rolling window used for creating features, the forecast point, and ultimately, predictions. In other words, it sets a minimum constraint on the feature creation process and a minimum history requirement for making predictions.
In the framework illustrated above, for example, DataRobot uses data from the previous 28 days and as recent as up to 7 days ago. The forecast distances the model will report are for days 2 through 7—your predictions will include one row for each of those days. The Forecast Window provides an objective way to measure the total accuracy of the model for training, where total error can be measured by averaging across all potential Forecast Points in the data and the accuracy for each forecast distance in the window.
## Window-created gaps {: #window-created-gaps }
Now, add the gaps that are inherent to time series problems.

This illustration includes the "blind history" (1) and "can't operationalize" (2) periods.
"Blind history" captures the gap created by the delay of access to recent data (e.g., "most recent" may always be one week old). It is defined as the period between the near (most recent) edge of the Feature Derivation Window and the Forecast Point. A gap of zero means "use data up to, and including, today;" a gap of one means "use data starting from yesterday," and so on.
The "can't operationalize" period defines the gap of time immediately after the Forecast Point and extending to the beginning of the Forecast Window. It represents the time required once a model is trained, deployed to production, and starts making predictions—the period of time that is too near-term to be useful. For example, predicting staffing needs for tomorrow may be too late to allow for taking action on that prediction.
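Using signed day offsets relative to the Forecast Point at 0 (helper names are illustrative), the two gaps can be computed as:

```python
def window_gaps(fdw, fw):
    """Given FDW (left, right) and FW (start, end) relative to the
    Forecast Point at 0, return the two gaps described above."""
    blind_history = 0 - fdw[1]       # delay in access to recent data
    cant_operationalize = fw[0] - 0  # lead time before predictions are useful
    return blind_history, cant_operationalize

# FDW of [-28, -7] and FW of [+2, +7]: data is a week old, and the
# first useful prediction is two days out.
print(window_gaps((-28, -7), (2, 7)))  # (7, 2)
```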
### Common patterns of time series data {: #common-patterns-of-time-series-data }
Time series models are built based on consideration of common patterns in time series data:
1. _Linearity_: A specific type of trend. Searching on the term "machine learning," you see an increase over time. The following shows the linear trend (you could also view it as a non-linear trend) created by the search term, showing that interest may fluctuate but is growing over time:

2. _Seasonality_: Searching on the term "Thanksgiving" shows periodicity. In other words, spikes and dips are closely related to calendar events (for example, each year starting to grow in July, falling in late November):

3. _Cycles_: Cycles are similar to seasonality, except that they do not necessarily have a fixed period and generally require a minimum of four years of data to qualify as such. Usually related to global macroeconomic events or changes in the political landscape, cycles can be seen as a series of expansions and recessions:

4. _Combinations_: Data can combine patterns as well. Consider searching the term "gym." Search interest spikes every January with lows over the holidays. Interest, however, increases over time. In this example you can see both seasonality and a linear trend:

|
ts-framework
|
---
title: Time series feature derivation
description: A comprehensive reference of the DataRobot time series feature derivation process.
---
# Time series feature derivation {: #time-series-feature-derivation }
The following tables document the feature derivation process—operators used and feature names created—that create the time series modeling dataset. For additional information, see the descriptions of:
* [Intra-month seasonality detection](ts-feature-lists#intra-month-seasonality-detection)
* [Zero-inflated models](ts-feature-lists#zero-inflated-models)
* [Automatically created feature lists](ts-feature-lists#automatically-created-feature-lists)
## Process overview {: #process-overview }
When deriving new features, DataRobot passes each feature through zero or more preprocessors (some features are not preprocessed), then passes the result though one or more extractors, and then finally through postprocessors.
Preprocessors are run only for target, date, and text columns (not for other feature columns), and this step can be skipped:
dataset --> preprocessor --> extractor --> postprocessor --> final
Feature columns move from input to extractor or postprocessor:
dataset --> extractor --> postprocessor --> final
dataset --> extractor --> final
In more detail, DataRobot:
1. Applies [automatic feature transformation](auto-transform) on date features during EDA1. These features are excluded from the EDA2 feature derivation process described below; only the original undergoes the process (i.e., transformed features are not further transformed).
2. Applies a preprocessor (e.g., CrossSeriesBinaryPreprocessor, NextEventType, Transform, and others).
3. Creates an "intermediate feature" for the target, date, or text features—a feature where preprocessing was applied but will not be complete until application of the post-processing operation. For example, `Sales (log)` is an intermediate step to the final `Sales (log) (diff) (14 day min)` and by itself is not a valid feature.
4. Uses an extractor step to consume the input from either the original dataset or the intermediate feature. The postprocessor (next step) consumes this output as input.
5. Applies postprocessing to the results of the extractor, creating the "final feature" to use for modeling.
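The naming convention can be illustrated with a small sketch: each step appends its tag, so the final feature name records the whole preprocessor, extractor, and postprocessor chain (the helper is illustrative only):

```python
def derive_name(base, *steps):
    """Each pipeline step appends its tag in parentheses, so the final
    feature name records the full chain of transformations."""
    for step in steps:
        base = f"{base} ({step})"
    return base

intermediate = derive_name("sales", "log")                # preprocessor output
final = derive_name(intermediate, "diff", "14 day min")   # extractor + postprocessor
print(intermediate)  # sales (log)
print(final)         # sales (log) (diff) (14 day min)
```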
See the [visual representation](#feature-reference) of feature generation for a quick reference.
### Feature reference {: #feature-reference }
The following provides a general overview of the derivation process.
A sample <span style="color:blue;">input dataset</span>:
| <span style="color:blue;"> Date </span> | <span style="color:blue;">Target </span> |
|----------|:---------:|
| <span style="color:blue;">1/1/20 </span> | <span style="color:blue;">1 </span> |
| <span style="color:blue;">2/1/20 </span> | <span style="color:blue;">2 </span> |
| <span style="color:blue;">3/1/20 </span> | <span style="color:blue;"> 3 </span> |

The resulting <span style="color:green;">time series modeling dataset</span>:
|<span style="color:green;"> Date (actual) </span> | <span style="color:green;">Target (actual) </span> | <span style="color:green;">Forecast distance </span>|
|----------------|:-----------------:|:--------------------:|
| <span style="color:green;"> 1/1/20 </span> | <span style="color:green;"> 1 </span>| <span style="color:green;"> 1 </span>|
| <span style="color:green;">2/1/20</span> | <span style="color:green;"> 2 </span>| <span style="color:green;"> 1 </span>|
| <span style="color:green;">3/1/20 </span> | <span style="color:green;"> 3 </span>| <span style="color:green;"> 1 </span>|
| <span style="color:green;">1/1/20 </span> | <span style="color:green;"> 1 </span>| <span style="color:green;"> 2 </span>|
| <span style="color:green;">2/1/20 </span> | <span style="color:green;"> 2 </span>| <span style="color:green;"> 2 </span>|
| <span style="color:green;"> 3/1/20 </span> | <span style="color:green;"> 3 </span>| <span style="color:green;"> 2 </span>|
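The expansion from input to modeling dataset is essentially a cross join of the source rows with the forecast distances, which can be sketched as:

```python
from itertools import product

source = [("1/1/20", 1), ("2/1/20", 2), ("3/1/20", 3)]
forecast_distances = [1, 2]

# One modeling row per (forecast distance, source row), as in the table above
modeling = [(date, target, fd)
            for fd, (date, target) in product(forecast_distances, source)]

for row in modeling:
    print(row)
# ('1/1/20', 1, 1) ... ('3/1/20', 3, 2): 3 dates x 2 distances = 6 rows
```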
**Example of target-derived feature**

**Example of numeric feature**

**Example of categorical feature**

**Example of text feature**

**Example of date feature**

## Feature types {: #feature-types }
Feature derivation acts on features based on their type. The examples and explanations below use these variables (for example, `<target>`) to describe the interactions.
| Component | Description |
|------------|-------------|
| [<target> (intermediate)](#intermediate-features)<br />[<target> (final)](#final-features) | The feature selected at project start as the feature to predict. |
| [<feature> (final)](#final-features) | Any feature or target column from the dataset that is not of type date or text. Processing is the same as that done to the target if the feature is numeric; if the feature is categorical, there are differences (noted in the tables below). DataRobot does not apply preprocessing to non-target features. |
| [<primary\_date> (intermediate)](#intermediate-features)<br />[<primary\_date> (final)](#final-features) | The primary date/time feature selected to enable time-aware modeling at project start. |
| [<date> (intermediate)](#intermediate-features)<br />[<date> (final)](#final-features) | Any date feature, other than automatically transformed features during EDA1, that is not a primary date/time feature. |
| [<text> (intermediate)](#intermediate-features) | A text column. |
The tables include information on:
* Feature name patterns—the feature type followed by the pattern tag ("actual" if the feature is from the original uploaded dataset). This is the resulting feature name after all transformations are complete (for example: `<target> (diff)`).
* Tags—characteristics of the feature.
* Examples of the post-processed feature.
## Intermediate features {: #intermediate-features }
The sections below detail the intermediate features created for target, primary date, date, and text features.
=== "`<target>`"
The sections below list each name pattern for target features.
---
<span style="color:red;font-size: 1rem"> `<target> (log)`</span>
**Description:** A log-transformed target.
**Project type:** Regression, multiplicative trend
**Tags:**
* Target-derived
* Numeric
* Multiplicative
**Example(s):**
```
sales (log) (naive latest value)
sales (log) (diff) (1st lag)
sales (log) (7 day diff) (35 day max)
sales (log) (1 month diff) (2nd lag)
```
---
<span style="color:red;font-size: 1rem"> `<target> (diff)`</span>
**Description:** A diff-transformed target, created by calculating the difference between the current value and the previous single time step value. Time step is based on the interval in the uploaded dataset. Example: A quarterly dataset has a time step of 3 months.
**Project type:** Regression, non-stationary
**Tags:**
* Target-derived
* Numeric
* Stationarity
**Example(s):**
```
sales (diff) (1st lag)
sales (diff) (7 day mean)
```
---
<span style="color:red;font-size: 1rem"> `<target> (<period> diff)`</span>
**Description:** A diff-transformed target, created by calculating the difference between the current value and the previous `<period>` value.
**Project type:** Regression, seasonality
**Tags:**
* Target-derived
* Numeric
* Seasonal
**Example(s):**
```
sales (7 day diff) (1st lag)
sales (7 day diff) (14 day mean)
```
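Both the single-step and periodic diffs amount to subtracting a lagged value. A minimal sketch (not DataRobot's implementation):

```python
def diff(series, period=1):
    """Current value minus the value `period` time steps earlier.
    The first `period` rows have no prior value to subtract."""
    return [None] * period + [series[i] - series[i - period]
                              for i in range(period, len(series))]

daily_sales = [10, 12, 15, 11, 14, 13, 16, 20]
print(diff(daily_sales))            # 1-step diff: [None, 2, 3, -4, 3, -1, 3, 4]
print(diff(daily_sales, period=7))  # 7 day diff: [None, ..., None, 10]
```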
---
<span style="color:red;font-size: 1rem"> `<target> (1 month diff)`</span>
**Description:** A diff-transformed target, created by calculating the difference between the current value and the previous month (same day of month) value.
**Project type:** Regression, intramonth seasonality
**Tags:**
* Target-derived
* Numeric
* Seasonal
**Example(s):**
```
sales (1 month diff) (35 day mean)
sales (1 month diff) (1st lag)
```
---
<span style="color:red;font-size: 1rem"> `<target> (1 month match end diff)`</span>
**Description:** A diff-transformed target, created by calculating the difference between the current value and the previous month (aligned to the end of the month) value.
**Project type:** Regression, intramonth seasonality
**Tags:**
* Target-derived
* Numeric
* Seasonal
**Example(s):**
```
sales (1 month match end diff) (2nd lag)
sales (1 month match end diff) (35 day max)
```
---
<span style="color:red;font-size: 1rem"> `<target> (1 month match weekly diff)`</span>
**Description:** A diff-transformed target, created by calculating the difference between the current value and the previous month (aligned to the week of the month and weekday) value.
**Project type:** Regression, intramonth seasonality
**Tags:**
* Target-derived
* Numeric
* Seasonal
**Example(s):**
```
sales (1 month match weekly diff) (3rd lag)
sales (1 month match weekly diff) (35 day mean)
```
---
<span style="color:red;font-size: 1rem"> `<target> (1 month match weekly diff from end)`</span>
**Description:** A diff-transformed target, created by calculating the difference between the current value and the previous month (aligned to the weekday and the "week of the month from the end of the month") value.
**Project type:** Regression, intramonth seasonality
**Tags:**
* Target-derived
* Numeric
* Seasonal
**Example(s):**
```
sales (1 month match weekly diff from end) (2nd lag)
sales (1 month match weekly diff from end) (35 day min)
```
---
<span style="color:red;font-size: 1rem"> `<target> (total)`</span>
**Description:** Total target, for the given time, across all series.
**Project type:** Cross series regression, total aggregation
**Tags:**
* Target-derived
* Numeric
* Cross series
**Example(s):**
```
sales (total) (2nd lag)
sales (total) (35 day mean)
sales (total) (3rd lag) (diff 35 day mean)
sales (total) (7 day diff) (35 day mean)
```
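A cross-series total can be sketched as a per-timestamp sum broadcast back to every row (illustrative pandas only; the `date`, `series`, and `sales` columns are hypothetical):

```python
import pandas as pd

# Two series, two timestamps
df = pd.DataFrame({
    "date":   ["2024-01-01", "2024-01-01", "2024-01-02", "2024-01-02"],
    "series": ["A", "B", "A", "B"],
    "sales":  [10, 20, 30, 40],
})

# For each timestamp, sum the target across all series and attach the
# result to every row at that timestamp.
df["sales (total)"] = df.groupby("date")["sales"].transform("sum")
```

A `(<groupby> total)` would group by the date plus the groupby column (e.g., `["date", "region"]`), and a weighted total would sum `sales * weight` instead.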
---
<span style="color:red;font-size: 1rem"> `<target> (weighted total)`</span>
**Description:** Weighted total target, for the given time, across all series.
**Project type:** Cross series regression, total aggregation, user-specified weights
**Tags:**
* Target-derived
* Numeric
* Cross series
* Weighted
**Example(s):**
```
sales (weighted total) (2nd lag)
sales (weighted total) (35 day mean)
sales (weighted total) (3rd lag) (diff 35 day mean)
sales (weighted total) (7 day diff) (35 day mean)
```
---
<span style="color:red;font-size: 1rem"> `<target> (<groupby> total)`</span>
**Description:** Total target, for the given time, across all series within the same user-specified group.
**Project type:** Cross series regression, total aggregation, user-specified groupby feature
**Tags:**
* Target-derived
* Numeric
* Cross series
**Example(s):**
```
sales (region total) (2nd lag)
sales (region total) (35 day mean)
sales (region total) (3rd lag) (diff 35 day mean)
sales (region total) (7 day diff) (35 day mean)
```
---
<span style="color:red;font-size: 1rem"> `<target> (<groupby> weighted total)`</span>
**Description:** Weighted total target, for the given time, across all series within the same user-specified group.
**Project type:** Cross series regression, total aggregation, user-specified groupby feature, user-specified weights
**Tags:**
* Target-derived
* Numeric
* Cross series
* Weighted
**Example(s):**
```
sales (region weighted total) (2nd lag)
sales (region weighted total) (35 day mean)
sales (region weighted total) (3rd lag) (diff 35 day mean)
sales (region weighted total) (7 day diff) (35 day mean)
```
---
<span style="color:red;font-size: 1rem"> `<target> (average)`</span>
**Description:** Target average, for the given time, across all series.
**Project type:** Cross series regression, average aggregation
**Tags:**
* Target-derived
* Numeric
* Cross series
**Example(s):**
```
sales (average) (2nd lag)
sales (average) (35 day mean)
sales (average) (3rd lag) (diff 35 day mean)
sales (average) (7 day diff) (35 day mean)
```
---
<span style="color:red;font-size: 1rem"> `<target> (weighted average)`</span>
**Description:** Weighted target average, for the given time, across all series.
**Project type:** Cross series regression, average aggregation, user-specified weights
**Tags:**
* Target-derived
* Numeric
* Cross series
* Weighted
**Example(s):**
```
sales (weighted average) (2nd lag)
sales (weighted average) (35 day mean)
sales (weighted average) (3rd lag) (diff 35 day mean)
sales (weighted average) (7 day diff) (35 day mean)
```
---
<span style="color:red;font-size: 1rem"> `<target> (<groupby> average)`</span>
**Description:** Target average, for the given time, across all series within the same group.
**Project type:** Cross series regression, average aggregation, user-specified cross-series groupby feature
**Tags:**
* Target-derived
* Numeric
* Cross series
**Example(s):**
```
sales (region average) (2nd lag)
sales (region average) (35 day mean)
sales (region average) (3rd lag) (diff 35 day mean)
sales (region average) (7 day diff) (35 day mean)
```
---
<span style="color:red;font-size: 1rem"> `<target> (<groupby> weighted average)`</span>
**Description:** Weighted target average, for the given time, across all series within the same group.
**Project type:** Cross series regression, total aggregation, user-specified groupby feature and weights
**Tags:**
* Target-derived
* Numeric
* Cross series
* Weighted
**Example(s):**
```
sales (region weighted average) (2nd lag)
sales (region weighted average) (35 day mean)
sales (region weighted average) (3rd lag) (diff 35 day mean)
sales (region weighted average) (7 day diff) (35 day mean)
```
---
<span style="color:red;font-size: 1rem"> `<target> (proportion)`</span>
**Description:** Numeric target that specifies the proportion of the target across all series.
**Project type:** Cross series regression, total aggregation, nonnegative target, sufficiently consistent series presence across timestamps
**Tags:**
* Target-derived
* Numeric
* Cross series
**Example(s):**
```
sales (proportion) (1st lag)
sales (proportion) (14 day mean)
sales (proportion) (30 day max) (diff 7 day mean)
sales (proportion) (7 day diff) (1st lag)
sales (proportion) (7 day diff) (30 day min)
```
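The proportion features divide each series' value by the cross-series total at the same timestamp. A minimal sketch (hypothetical data; assumes a nonnegative target, as the entry above requires):

```python
import pandas as pd

df = pd.DataFrame({
    "date":   ["2024-01-01", "2024-01-01", "2024-01-02", "2024-01-02"],
    "series": ["A", "B", "A", "B"],
    "sales":  [10, 30, 20, 20],
})

# Each series' share of the total across all series at the same timestamp
total = df.groupby("date")["sales"].transform("sum")
df["sales (proportion)"] = df["sales"] / total
```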
---
<span style="color:red;font-size: 1rem"> `<target> (weighted proportion)`</span>
**Description:** Numeric target that specifies the weighted proportion of the target across all series.
**Project type:** Cross series regression, total aggregation, nonnegative target, sufficiently consistent series presence across timestamps, user-specified weights
**Tags:**
* Target-derived
* Numeric
* Cross series
* Weighted
**Example(s):**
```
sales (weighted proportion) (1st lag)
sales (weighted proportion) (14 day mean)
sales (weighted proportion) (30 day max) (diff 7 day mean)
sales (weighted proportion) (7 day diff) (1st lag)
sales (weighted proportion) (7 day diff) (30 day min)
```
---
<span style="color:red;font-size: 1rem"> `<target> (<groupby> proportion)`</span>
**Description:** Numeric target that specifies the proportion of the target across all series within the same group.
**Project type:** Cross series regression, total aggregation, nonnegative target, sufficiently consistent series presence across timestamps, user-specified cross-series groupby feature
**Tags:**
* Target-derived
* Numeric
* Cross series
**Example(s):**
```
sales (region proportion) (naive latest value)
sales (region proportion) (2nd lag)
sales (region proportion) (7 day mean)
sales (region proportion) (1st lag) (diff 7 day mean)
sales (region proportion) (7 day diff) (1st lag)
sales (region proportion) (7 day diff) (30 day min)
```
---
<span style="color:red;font-size: 1rem"> `<target> (<groupby> weighted proportion)`</span>
**Description:** Numeric target that specifies the weighted proportion of the target across all series within the same group.
**Project type:** Cross series regression, total aggregation, nonnegative target, sufficiently consistent series presence across timestamps, user-specified cross-series groupby feature and weights
**Tags:**
* Target-derived
* Numeric
* Cross series
* Weighted
**Example(s):**
```
sales (region weighted proportion) (naive latest value)
sales (region weighted proportion) (2nd lag)
sales (region weighted proportion) (7 day mean)
sales (region weighted proportion) (1st lag) (diff 7 day mean)
sales (region weighted proportion) (7 day diff) (1st lag)
sales (region weighted proportion) (7 day diff) (30 day min)
```
---
<span style="color:red;font-size: 1rem"> `<target> (total equal <label>)`</span>
**Description:** Total target-equals-`<label>` boolean flag, for a given time, across all series.
**Project type:** Cross-series classification, total aggregation
**Tags:**
* Target-derived
* Binary
* Cross series
**Example(s):**
`is_zero_sales (total equal 1) (1st lag)`
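For classification targets, the same cross-series aggregation applies to a target-equals-label flag: a per-timestamp count of matching series (a weighted variant would sum `flag * weight`). A sketch with hypothetical data:

```python
import pandas as pd

df = pd.DataFrame({
    "date":          ["2024-01-01", "2024-01-01", "2024-01-02", "2024-01-02"],
    "series":        ["A", "B", "A", "B"],
    "is_zero_sales": [1, 0, 1, 1],
})

# "(total equal 1)": number of series whose target equals 1 at each timestamp
flag = (df["is_zero_sales"] == 1).astype(int)
df["is_zero_sales (total equal 1)"] = flag.groupby(df["date"]).transform("sum")
```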
---
<span style="color:red;font-size: 1rem"> `<target> (weighted total equal <label>)`</span>
**Description:** Weighted total target-equals-`<label>` boolean flag, for a given time, across all series.
**Project type:** Cross-series classification, total aggregation, user-specified weights
**Tags:**
* Target-derived
* Binary
* Cross series
* Weighted
**Example(s):**
```
is_zero_sales (weighted total equal 1) (1st lag)
is_zero_sales (weighted total equal 1) (1st lag) (diff 35 day mean)
```
---
<span style="color:red;font-size: 1rem"> `<target> (<groupby> total equal <label>)`</span>
**Description:** Total target-equals-`<label>` boolean flag, for a given time, across all series within the same group.
**Project type:** Cross-series classification, total aggregation, user-specified groupby feature
**Tags:**
* Target-derived
* Binary
* Cross series
**Example(s):**
```
is_zero_sales (region total equal 1) (1st lag)
is_zero_sales (region total equal 1) (1st lag) (diff 35 day mean)
```
---
<span style="color:red;font-size: 1rem"> `<target> (<groupby> weighted total equal <label>)`</span>
**Description:** Weighted total target-equals-`<label>` boolean flag, for a given time, across all series within the same group.
**Project type:** Cross-series classification, total aggregation, user-specified cross-series groupby feature and weights
**Tags:**
* Target-derived
* Binary
* Cross series
* Weighted
**Example(s):**
```
is_zero_sales (region weighted total equal 1) (1st lag)
is_zero_sales (region weighted total equal 1) (1st lag) (diff 35 day mean)
```
---
<span style="color:red;font-size: 1rem"> `<target> (fraction equal <label>)`</span>
**Description:** Average target-equals-`<label>` (also called fraction) boolean flag, for a given time, across all series.
**Project type:** Cross-series classification, average aggregation
**Tags:**
* Target-derived
* Binary
* Cross series
**Example(s):**
```
is_zero_sales (fraction equal 1) (1st lag)
is_zero_sales (fraction equal 1) (1st lag) (diff 35 day mean)
```
---
<span style="color:red;font-size: 1rem"> `<target> (weighted fraction equal <label>)`</span>
**Description:** Weighted average target-equals-`<label>` (also called fraction) boolean flag, for a given time, across all series.
**Project type:** Cross-series classification, average aggregation, user-specified weights
**Tags:**
* Target-derived
* Binary
* Cross series
* Weighted
**Example(s)**
```
is_zero_sales (weighted fraction equal 1) (3rd lag)
is_zero_sales (weighted fraction equal 1) (3rd lag) (diff 35 day mean)
```
---
<span style="color:red;font-size: 1rem"> `<target> (<groupby> fraction equal <label>)`</span>
**Description:** Average target-equals-`<label>` (also called fraction) boolean flag, for a given time, across all series within the same group.
**Project type:** Cross-series classification, average aggregation, user-specified cross-series groupby feature
**Tags:**
* Target-derived
* Binary
* Cross series
**Example(s):**
```
is_zero_sales (region fraction equal 1) (3rd lag)
is_zero_sales (region fraction equal 1) (3rd lag) (diff 35 day mean)
```
---
<span style="color:red;font-size: 1rem"> `<target> (<groupby> weighted fraction equal <label>)`</span>
**Description:** Weighted average target-equals-`<label>`(also called fraction) boolean flag, for a given time, across all series within the same group.
**Project type:** Cross-series classification, average aggregation, user-specified cross-series groupby feature and weights
**Tags:**
* Target-derived
* Binary
* Cross series
* Weighted
**Example(s):**
```
is_zero_sales (region weighted fraction equal 1) (3rd lag)
is_zero_sales (region weighted fraction equal 1) (3rd lag) (diff 35 day mean)
```
---
<span style="color:red;font-size: 1rem"> `<target> (is zero)`</span>
**Description:** Boolean flag that indicates whether the target equals zero (used by zero-inflated tree-based models).
**Project type:** Regression, minimum target equals zero
**Tags:**
* Target-derived
* Numeric
* Zero-inflated
**Example(s):**
```
sales (is zero) (1st lag)
sales (is zero) (7 day fraction equal 1)
sales (is zero) (naive binary) (35 day fraction equal 1)
sales (is zero) (1st lag) (diff 35 day mean)
```
---
<span style="color:red;font-size: 1rem"> `<target> (nonzero)`</span>
**Description:** Replaces zero target value with missing value (used by zero-inflated tree-based models).
**Project type:** Regression, minimum target equals zero
**Tags:**
* Target-derived
* Numeric
* Zero-inflated
**Example(s):**
```
sales (nonzero) (log) (1st lag) (diff 35 day mean)
sales (nonzero) (7 day max) (log) (diff 35 day mean)
sales (nonzero) (35 day average baseline) (log)
```
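The two zero-inflated transforms above can be sketched together (illustrative only): the flag marks zeros, and the masked series leaves them missing so a second-stage model fits only nonzero rows.

```python
import pandas as pd

sales = pd.Series([0, 5, 0, 8], name="sales")

is_zero = (sales == 0).astype(int)  # sales (is zero): 1 where target is 0
nonzero = sales.mask(sales == 0)    # sales (nonzero): zeros become NaN
```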
---
<span style="color:red;font-size: 1rem"> `<target> (<time_unit> aggregation)`</span>
**Description:** Aggregates target data to a higher time unit (used by temporal hierarchical models).
**Project type:** Regression
**Tags:**
* Target-derived
* Numeric
**Example(s):**
`sales (week aggregation) (actual)`
---
<span style="color:red;font-size: 1rem"> `<target> (weighted <time_unit> aggregation)`</span>
**Description:** Weighted target data, aggregated to a higher time unit (used by temporal hierarchical models).
**Project type:** Regression, user-specified weights
**Tags:**
* Target-derived
* Numeric
**Example(s):**
`sales (weighted week aggregation) (actual)`
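Temporal aggregation rolls the target up to a higher time unit; a weighted variant would aggregate `sales * weight` instead. A pandas resample sketch with hypothetical data:

```python
import pandas as pd

# 14 days of a constant hypothetical target, starting on a Monday
idx = pd.date_range("2024-01-01", periods=14, freq="D")
sales = pd.Series(1, index=idx, name="sales")

# "(week aggregation)"-style rollup: daily values summed into weekly bins
weekly = sales.resample("W").sum()
```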
=== "`<primary_date>`"
The sections below list each name pattern for the primary date/time feature.
---
<span style="color:red;font-size: 1rem"> `<primary_date> (previous calendar event type)`</span>
**Description:** Value of the previous calendar event. For example, if the calendar file has two events—Christmas and New Year—all observations between December 25 and January 1 will have the previous calendar event type equal to “Christmas,” and all observations between January 1 and December 25 will have it equal to “New Year.” If there is no previous value, the feature will be null.
**Project type:** Uploaded event calendar
**Tags:**
* Date
* Calendar
**Example(s):**
`date (previous calendar event type) (actual)`
---
<span style="color:red;font-size: 1rem"> `<primary_date> (next calendar event type)`</span>
**Description:** Value of the next calendar event. For example, if the calendar file has two events—Christmas and New Year—all observations between December 25 and January 1 will have the next calendar event type equal to “New Year,” and all observations between January 1 and December 25 will have it equal to “Christmas.”
**Project type:** Uploaded event calendar
**Tags:**
* Date
* Calendar
**Example(s):**
`date (next calendar event type) (actual)`
---
<span style="color:red;font-size: 1rem"> `<primary_date> (calendar event type <N> day(s) before)`</span>
**Description:** Feature that specifies a calendar event _N_ days before the date of the observation. For example, if the observation date is December 27, the feature `date (calendar event type 2 days before) (actual)` will be equal to "Christmas." The feature `date (calendar event type 1 day before) (actual)` will be null.
If event types are not provided in the calendar file, this feature will take (1) or (0) values, specifying whether there is a calendar event _N_ days before.
**Project type:** Uploaded event calendar
**Tags:**
* Date
* Calendar
**Example(s):**
```
date (calendar event type 1 day before) (actual)
date (calendar event type 2 days before) (actual)
```
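One way to derive such a feature is to shift the calendar's event dates forward by _N_ days and join on the observation date. A sketch with a hypothetical calendar and column names:

```python
import pandas as pd

calendar = pd.DataFrame({
    "event_date": pd.to_datetime(["2023-12-25"]),
    "event":      ["Christmas"],
})
obs = pd.DataFrame({"date": pd.to_datetime(["2023-12-26", "2023-12-27"])})

# "calendar event type N days before": shift event dates forward by N,
# then left-join on the observation date.
n = 2
shifted = calendar.assign(date=calendar["event_date"] + pd.Timedelta(days=n))
obs = obs.merge(shifted[["date", "event"]], on="date", how="left")
```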
---
<span style="color:red;font-size: 1rem"> `<primary_date> (calendar event type <N> day(s) after)`</span>
**Description:** Feature that specifies a calendar event _N_ days after the date of the observation. For example, if the observation date is December 23, feature `date (calendar event type 2 days after) (actual)` will be equal to "Christmas." Feature `date (calendar event type 3 days after) (actual)` will be null.
If event types are not provided in the calendar file, this feature will take (1) or (0) values specifying whether there is a calendar event _N_ days after.
**Project type:** Uploaded event calendar
**Tags:**
* Date
* Calendar
**Example(s):**
```
date (calendar event type 1 day after) (actual)
date (calendar event type 2 days after) (actual)
```
---
<span style="color:red;font-size: 1rem"> `<primary_date> (<time_unit>(s) from previous calendar event)`</span>
**Description:** Numeric feature that specifies the number of time units since a previously known calendar event. Time units depend on the dataset time step (e.g., for daily datasets, time units are in days). For example, if the observation date is December 28, this feature will be equal to 3 (in days).
**Project type:** Uploaded event calendar
**Tags:**
* Date
* Calendar
**Example(s):**
`date (days from previous calendar event) (actual)`
---
<span style="color:red;font-size: 1rem"> `<primary_date> (<time_unit>(s) to next calendar event)`</span>
**Description:** Numeric feature that specifies the number of time units until the next known calendar event. Time units depend on the dataset time step (e.g., for daily datasets, time units are in days). For example, if the observation date is December 27, this feature will be equal to 5 (in days).
**Project type:** Uploaded event calendar
**Tags:**
* Date
* Calendar
**Example(s):**
`date (days to next calendar event) (actual)`
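Both distance-to-event features reduce to an as-of join against the calendar followed by a date difference. A sketch of the "from previous" direction, reproducing the December 28 → 3 days example above (hypothetical data):

```python
import pandas as pd

calendar = pd.DataFrame(
    {"event_date": pd.to_datetime(["2023-12-25", "2024-01-01"])}
)
obs = pd.DataFrame({"date": pd.to_datetime(["2023-12-28", "2023-12-30"])})

# Backward as-of join finds the most recent event on or before each date.
joined = pd.merge_asof(obs, calendar, left_on="date", right_on="event_date")
obs["days from previous calendar event"] = (
    joined["date"] - joined["event_date"]
).dt.days
```

The "to next" direction is symmetric: pass `direction="forward"` to `pd.merge_asof` and subtract the observation date from the event date.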
---
<span style="color:red;font-size: 1rem"> `<primary_date> (calendar event type)`</span>
**Description:** Specifies calendar events happening on the same date as the observation. For example, for an observation on December 25, the feature will be equal to “Christmas.” For December 26, the feature will be null.
**Project type:** Uploaded event calendar
**Tags:**
* Date
* Calendar
**Example(s):**
`date (calendar event type) (actual)`
---
<span style="color:red;font-size: 1rem"> `<primary_date> (calendar event)`</span>
**Description:** Specifies whether there is a calendar event on the date. The value is (1) if there is a calendar event on the same date as the observation, otherwise (0).
**Project type:** Uploaded event calendar
**Tags:**
* Date
* Calendar
**Example(s):**
`date (calendar event) (actual)`
---
<span style="color:red;font-size: 1rem"> `<primary_date> (hour of week)`</span>
**Description:** Equals (day of week * 24 + hour) of the primary date. The result enumerates hours from the beginning of the week to the end of the same week.
**Project type:** Detected weekly seasonality, 24-hour seasonality
**Tags:**
* Date
**Example(s):**
`date (hour of week) (actual)`
---
<span style="color:red;font-size: 1rem"> `<primary_date> (common event)`</span>
**Description:** Specifies whether a sample is expected at the primary date. For example, in a Monday-to-Friday dataset, all samples with a primary date from Monday through Friday (inclusive) have the value true; samples with a weekend primary date have the value false.
**Project type:** Samples regularly missing on certain days of the week or hours of the day (e.g., a Monday-to-Friday dataset)
**Tags:**
* Date
**Example(s):**
`date (common event) (actual)`
=== "`<date>`"
The sections below list each name pattern for date features other than the primary date/time feature.
---
<span style="color:red;font-size: 1rem"> `<date> (<time_unit>s from <primary_date>)`</span>
**Description:** Numeric feature that specifies the number of time units from the input date feature to the primary date/time. The output of this preprocessor is a numeric feature; the input is a date feature.
**Project type:** Any, with at least one date feature that is neither low-information nor the primary date/time.
**Tags:**
* Date
**Example(s):**
```
due_date (days from date) (1st lag)
due_date (days from date) (7 day mean)
```
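The computation is a per-row date subtraction. A sketch (hypothetical columns; the sign convention shown here is an assumption):

```python
import pandas as pd

df = pd.DataFrame({
    "date":     pd.to_datetime(["2024-01-10", "2024-01-11"]),  # primary date
    "due_date": pd.to_datetime(["2024-01-15", "2024-01-11"]),
})

# Number of days between the input date feature and the primary date
df["due_date (days from date)"] = (df["due_date"] - df["date"]).dt.days
```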
=== "`<text>`"
The section below lists each name pattern for text features.
---
<span style="color:red;font-size: 1rem"> `<text> Length`</span>
**Description:** Numeric feature that specifies the number of characters in a text column. Output of this preprocessor is a numeric feature. Input is a text feature.
**Project type:** Numeric, minimum one non-low-info text input
**Tags:**
* Text
**Example(s):**
```
(description Length) (1st lag)
(description Length) (7 day mean)
```
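A text-length feature is simply a character count per row, e.g. (hypothetical column):

```python
import pandas as pd

description = pd.Series(["red shoes", "", "blue hat"], name="description")

# "description Length": number of characters in each text value
description_length = description.str.len()
```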
## Final features {: #final-features }
The sections below detail the final features created using target-only, feature/target/intermediate, primary date, and date features during the feature engineering process.
=== "`<target>`, `<feature>`, and `<intermediate>`"
The sections below list each name pattern for features that can be either a target, a non-target feature, or an intermediate feature.
---
<span style="color:red;font-size: 1rem"> `<feature_or_target_or_intermediate> (actual)`</span>
**Description:** Simple passthrough feature that, for a specific date, has the same value as in the raw dataset. These features are considered to be known in advance and can be copied as-is from the raw to the derived dataset. For non-target features, it is used when the feature is available at prediction time. Examples are date, date-derived, calendar, or user-specified known-in-advance (a priori) features. For the target or derived target column, it is used as the target to fit the model.
**Tags:**
* Known-in-advance
* Calendar
* Date-derived
* Target
* Target-derived
**Example(s):**
```
sales (actual)
date (actual)
date (Month of Year) (actual)
date (calendar event) (actual)
sales (week aggregation) (actual)
```
---
<span style="color:red;font-size: 1rem"> `<feature_or_target_or_intermediate> (<N> lag)`</span>
**Description:** Feature extracts the _N_th most recent value in the feature derivation window. The minimum number of lags for any project is 1. For projects with a zero forecast distance (FDW=[-n, 0] and FW=[0]), the last value in the feature derivation window is the value at the forecast point, so the first lag is equivalent to the actual value known at the forecast point.
**Tags:**
* Lag
**Example(s):**
```
sales (2nd lag)
sales (region average) (1st lag)
sales (region total) (4th lag)
sales (diff 7 day) (2nd lag)
```
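A lag feature shifts each series' history so that row _t_ sees the value _N_ steps earlier. A sketch over a hypothetical multiseries frame:

```python
import pandas as pd

df = pd.DataFrame({
    "series": ["A", "A", "A", "B", "B", "B"],
    "sales":  [1, 2, 3, 10, 20, 30],
})

# Shift within each series so row t holds the value from t-N
df["sales (1st lag)"] = df.groupby("series")["sales"].shift(1)
df["sales (2nd lag)"] = df.groupby("series")["sales"].shift(2)
```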
---
<span style="color:red;font-size: 1rem"> `<feature_or_target_or_intermediate> (<window> <time_unit> <categorical_method>)`</span>
**Description:** Feature extracts categorical statistics within the most recent `<window> <time_unit>` of the feature derivation window. The categorical statistics include "most_frequent" (returns item with the highest frequency), "n_unique" (returns number of unique values) and "entropy" (measure of uncertainty).
**Tags:**
* Category
**Example(s):**
```
product_type (7 day most_frequent)
product_type (7 day n_unique)
product_type (7 day entropy)
```
---
<span style="color:red;font-size: 1rem"> `<feature_or_target_or_intermediate> (same <matching_period>) (<window> <time_unit> <categorical_method>)`</span>
**Description:** Feature extracts the categorical statistics of the same period within the most recent `<window> <time_unit>` of the feature derivation window. The categorical statistics include "most_frequent" (returns item with the highest frequency), "n_unique" (returns number of unique values) and "entropy" (measure of uncertainty). For example, the feature `product_type (same weekday) (35 day entropy)` computes the entropy of `product_type` on the same weekday as the forecast point over the last 5 weeks.
**Tags:**
* Category
**Example(s):**
```
product_type (same weekday) (35 day most_frequent)
product_type (same weekday) (35 day n_unique)
product_type (same weekday) (35 day entropy)
```
---
<span style="color:red;font-size: 1rem"> `<feature_or_target_or_intermediate> (<window> <time_unit> <fraction>)`</span>
**Name patterns:**
`<feature_or_target_or_intermediate> (<window> <time_unit> fraction empty)`
`<feature_or_target_or_intermediate> (<window> <time_unit> fraction equal <label>)`
**Description:** Feature computes the fraction of `<feature> equals <label>`. If `<label>` is an empty string, `<feature> equals <label>` becomes `fraction empty` within the most recent `<window> <time_unit>` of the feature derivation window. For example, `is_raining (7 day fraction empty)` computes the fraction of the `is_raining` feature equal to an empty string over the last 7 days.
**Tags:**
* Binary
**Example(s):**
```
is_holiday (35 day fraction equal True)
is_raining (7 day fraction empty)
```
---
<span style="color:red;font-size: 1rem"> `<feature_or_target_or_intermediate> (same <matching_period>) (<window> <time_unit> <fraction>)`</span>
**Name patterns:**
`<feature_or_target_or_intermediate> (same <matching_period>) (<window> <time_unit> fraction empty)`
`<feature_or_target_or_intermediate> (same <matching_period>) (<window> <time_unit> fraction equal <label>)`
**Description:** Feature computes the fraction of `<feature> equals <label>`. If `<label>` is an empty string, `<feature> equals <label>` becomes `fraction empty` of the same period within the most recent `<window> <time_unit>` of the feature derivation window. For example, `is_raining (same weekday) (35 day fraction equal True)` computes the fraction of the `is_raining` feature equal to True on the same weekday over the last 35 days.
**Tags:**
* Binary
**Example(s):**
```
is_raining (same weekday) (35 day fraction equal True)
is_holiday (same weekday) (35 day fraction empty)
```
---
<span style="color:red;font-size: 1rem"> `<feature_or_target_or_intermediate> (<window> <time_unit> <method>)`</span>
**Description:** Feature computes the numerical statistic `<method>` within the most recent `<window> <time_unit>` of the feature derivation window. The numeric statistics include "max," "min," "mean," "median," "std," and "robust zscore."
**Tags:**
* Numeric
**Example(s):**
```
sales (7 day max)
sales (7 day min)
sales (7 day mean)
sales (7 day median)
sales (7 day std)
```
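These window statistics correspond to rolling aggregations over the trailing window. A sketch with a 3-step window and made-up values (a "7 day" window on daily data would use `window=7`):

```python
import pandas as pd

sales = pd.Series([1, 2, 3, 4, 5, 6, 7, 8], name="sales")

rolling_mean = sales.rolling(window=3).mean()  # e.g. a "(3 day mean)"
rolling_max = sales.rolling(window=3).max()    # e.g. a "(3 day max)"
```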
---
<span style="color:red;font-size: 1rem"> `<feature_or_target_or_intermediate> (same <matching_period>) (<window> <time_unit> <method>)`</span>
**Description:** Feature computes the numerical statistic `<method>` of the same period within the most recent `<window> <time_unit>` of the feature derivation window. For example, the feature `sales (same weekday) (35 day mean)` computes the mean value of `sales` on the same weekday over the last 35 days.
**Tags:**
* Numeric
**Example(s):**
```
sales (same weekday) (35 day max)
sales (same weekday) (35 day min)
sales (same weekday) (35 day mean)
sales (same weekday) (35 day median)
```
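"Same weekday" statistics restrict the window to rows sharing the forecast point's weekday. A sketch: group by weekday, then take a rolling mean over the last 5 matching days, which is what a 35 day window contains (illustrative only; the data is made up):

```python
import pandas as pd

# 35 hypothetical daily values starting on a Monday
idx = pd.date_range("2024-01-01", periods=35, freq="D")
sales = pd.Series(range(35), index=idx, name="sales", dtype=float)

# Rolling mean computed separately per weekday, then realigned to dates
same_weekday_mean = sales.groupby(sales.index.weekday).transform(
    lambda s: s.rolling(window=5, min_periods=1).mean()
)
```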
=== "`<target>`-only"
The sections below list each name pattern for target-only features.
---
<span style="color:red;font-size: 1rem"> `<target> (naive or match) <strategy>`</span>
**Name patterns:**
`<target> (naive latest value)`
`<target> (naive <period> seasonal value)`
`<target> (naive 1 month seasonal value)`
`<target> (match end of month) (naive 1 month seasonal value)`
`<target> (match weekday from start of month) (naive 1 month seasonal value)`
`<target> (match weekday from end of month) (naive 1 month seasonal value)`
**Description:** Feature selects value from history to forecast the future based on different strategies. Naive latest prediction uses the latest history value to forecast the rows in the forecast window. Naive seasonal prediction extracts the previous season's target value in the history to forecast.</br>
For example, for a given Monday-Friday dataset, naive latest prediction on Monday uses the target value of last Friday as the forecast for Monday. For the naive 7-day prediction, it uses the target value of last Monday. If a multiplicative trend is detected in the dataset, the naive prediction is in log scale.
**Tags:**
* Numeric
* Naive/baseline
**Example(s):**
```
sales (naive latest value)
sales (naive 7 day seasonal value)
sales (naive 1 month seasonal value)
sales (match end of month) (naive 1 month seasonal value)
sales (match weekday from start of month) (naive 1 month seasonal value)
sales (match weekday from end of month) (naive 1 month seasonal value)
sales (log) (naive latest value)
sales (log) (naive 7 day seasonal value)
sales (log) (naive 1 month seasonal value)
sales (log) (match end of month) (naive 1 month seasonal value)
sales (log) (match weekday from start of month) (naive 1 month seasonal value)
sales (log) (match weekday from end of month) (naive 1 month seasonal value)
```
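Naive baselines are shifts of the target history: the latest value is a 1-step shift, and a P-period seasonal value is a P-step shift. A sketch with hypothetical daily values:

```python
import pandas as pd

sales = pd.Series([5, 6, 7, 8, 9, 10, 11, 12, 13, 14], name="sales")

naive_latest = sales.shift(1)         # most recent observed value
naive_7day_seasonal = sales.shift(7)  # same point one 7 day season ago
```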
---
<span style="color:red;font-size: 1rem"> `<target> (last month <strategy>)`</span>
**Name patterns:**
`<target> (last month average baseline)`
`<target> (last month weekly average)`
`<target> (match end of month) (last month weekly average)`
**Description:** Feature computes the previous month average target value, or previous month weekly average target value, with respect to the forecast point.
For example, `sales (last month average baseline)` computes the average target value in the previous month, `sales (last month weekly average)` computes the weekly average target value of the same week in the previous month, and `sales (match end of month) (last month weekly average)` computes the weekly average target value of the same week (aligned to the end of month) in the previous month. If a multiplicative trend is detected in the dataset, a log transform is applied after the average value is computed.
**Tags:**
* Numeric
* Naive/baseline
**Example(s):**
```
sales (last month average baseline)
sales (last month weekly average)
sales (match end of month) (last month weekly average)
sales (last month average baseline) (log)
sales (last month weekly average) (log)
sales (match end of month) (last month weekly average) (log)
```
---
<span style="color:red;font-size: 1rem"> `<target> (last month <fraction_strategy>)`</span>
**Name patterns:**
`<target> (last month fraction empty)`
`<target> (match end of month) (last month weekly fraction empty)`
`<target> (last month fraction equal <label>)`
`<target> (match end of month) (last month weekly fraction equal <label>)`
**Description:**
Feature computes the fraction of the boolean flag that indicates whether the target equals `<label>`. `fraction empty` is used when the label is empty. All rows that fall within the previous month are used to compute the fraction.
**Tags:**
* Binary
**Example(s):**
```
sales (last month fraction empty)
sales (match end of month) (last month weekly fraction empty)
sales (last month fraction equal True)
sales (match end of month) (last month weekly fraction equal True)
```
---
<span style="color:red;font-size: 1rem"> `<target> (last month weekly <fraction>)`</span>
**Name patterns:**
`<target> (last month weekly fraction empty)`
`<target> (last month weekly fraction equal <label>)`
**Description:** Feature computes the fraction of the boolean flag that indicates whether the target equals `<label>`. `fraction empty` is used when the label is empty. All rows that fall within the same week of the previous month are used to compute the fraction.
**Tags:**
* Binary
**Example(s):**
```
sales (last month weekly fraction empty)
sales (last month weekly fraction equal True)
```
---
<span style="color:red;font-size: 1rem"> `<target> (naive binary) (<match_and_fraction>)`</span>
**Name patterns:**
`<target> (naive binary) (last month fraction empty)`
`<target> (naive binary) (last month weekly fraction empty)`
`<target> (naive binary) (match end of month) (last month weekly fraction empty)`
`<target> (naive binary) (last month fraction equal <label>)`
`<target> (naive binary) (last month weekly fraction equal <label>)`
`<target> (naive binary) (match end of month) (last month weekly fraction equal <label>)`
**Description:** Feature has the same value as the one without "naive binary" (for example, `<target> (naive binary) (last month fraction empty)` has the same value as `<target> (last month fraction empty)`). The distinction is that it can be used for naive binary predictions.
**Tags:**
* Binary
* Naive/baseline
**Example(s):**
```
is_raining (naive binary) (last month fraction empty)
is_raining (naive binary) (last month weekly fraction empty)
is_raining (naive binary) (match end of month) (last month weekly fraction empty)
is_raining (naive binary) (last month fraction equal True)
is_raining (naive binary) (last month weekly fraction equal True)
is_raining (naive binary) (match end of month) (last month weekly fraction equal True)
```
---
<span style="color:red;font-size: 1rem"> `<target> (naive binary) (<window> <time_unit> <fraction>)`</span>
**Name patterns:**
`<target> (naive binary) (<window> <time_unit> fraction empty)`
`<target> (naive binary) (<window> <time_unit> fraction equal <label>)`
**Description:** Feature has the same value as a feature without "naive binary" (for example, `<target> (naive binary) (<window> <time_unit> fraction empty)` has the same value as `<target> (<window> <time_unit> fraction empty)`). The distinction is that it can be used for naive binary predictions.
**Tags:**
* Binary
* Naive/baseline
**Example(s):**
```
is_raining (naive binary) (35 day fraction equal True)
is_raining (naive binary) (35 day fraction equal empty)
```
---
<span style="color:red;font-size: 1rem"> `<target> (<window> <time_unit> mean baseline)`</span>
**Description:** Feature is the same as `<target> (<window> <time_unit> mean)`. The distinction is that it can be used for naive predictions.
**Tags:**
* Numeric
**Example(s):**
`sales (7 day mean baseline)`
---
<span style="color:red;font-size: 1rem"> `<target> (last month weekly average baseline)`</span>
**Description:** Feature computes the average of `<target> (last month weekly average)` and `<target> (match end of the month) (last month weekly average)`.</br>
For example, `sales (last month weekly average baseline)` is the average of `sales (last month weekly average)` (the average of sales in the same week of the previous month, counting weeks from the start of the month) and `sales (match end of the month) (last month weekly average)` (the same average, but counting weeks from the end of the month).
**Tags:**
* Numeric
**Example(s):**
`sales (last month weekly average baseline)`
=== "`<primary_date>`"
<span style="color:red;font-size: 1rem"> `<primary_date> (<naive_boolean>)`</span>
**Name patterns:**
`<primary_date> (No History Available)`
`<primary_date> (naive <period> prediction is missing)`
`<primary_date> (naive 1 month prediction is missing)`
`<primary_date> (match end of month) (naive 1 month prediction is missing)`
`<primary_date> (match weekday from start of month) (naive 1 month prediction is missing)`
`<primary_date> (match weekday from end of month) (naive 1 month prediction is missing)`
**Description:** Boolean flag feature that specifies whether its corresponding naive prediction is missing. For example, a 7-day naive prediction on _this_ Friday is missing if the shop was closed _last_ Friday. In this case, the boolean feature value is true on this Friday. Each of these boolean features is related to different naive predictions. `<primary_date> (No History Available)` is related to naive latest predictions whereas the rest of the boolean features are related to different types of naive seasonal predictions.
**Tags:**
* Numeric
* Multiseries
**Example(s):**
```
date (No History Available)
date (naive 7 day prediction is missing)
date (naive 1 month prediction is missing)
date (match end of month) (naive 1 month prediction is missing)
date (match weekday from start of month) (naive 1 month prediction is missing)
date (match weekday from end of month) (naive 1 month prediction is missing)
```
=== "`<target-derived>`"
<span style="color:red;font-size: 1rem"> `<target_derived> (diff <strategy>)`</span>
**Name patterns:**
`<target_derived> (diff <window> <time_unit> mean)`
`<target_derived> (diff last month weekly mean)`
`<target_derived> (diff last month mean)`
**Description:** Feature computes the difference between the target-derived and baseline features.</br>
For example:</br> • `sales (1st lag) (diff 7 day mean)` is the difference between `sales (1st lag)` and `sales (7 day mean baseline)`</br> • `sales (35 day max) (diff last month weekly mean)` is the difference between `sales (35 day max)` and `sales (last month weekly average baseline)` </br> • `sales (7 day mean) (diff last month mean)` is the difference between `sales (7 day mean)` and `sales (last month average baseline)`.
**Tags:**
* Numeric
**Example(s):**
```
sales (1st lag) (diff 7 day mean)
sales (35 day max) (diff last month weekly mean)
sales (7 day mean) (diff last month mean)
```
=== "`<date>`"
<span style="color:red;font-size: 1rem"> `<date> (<time_unit>s between 1st forecast distance and last observable row)`</span>
**Description:** Feature computes the time delta (in terms of integer number of time units) between the date/time of the first forecast distance and the date/time of the last row in the feature derivation window.
**Tags:**
* Numeric
* Row-based
**Example(s):**
`date (days between 1st forecast distance and last observable row)`
---
title: Clustering algorithms
description: A deep dive into the DTW, Velocity, and K-means algorithms used in clustering.
---
# Clustering algorithms {: #clustering-algorithms }
Clustering is the ability to cluster time series within a multiseries dataset and then directly apply those clusters as segment IDs within a [segmented modeling](ts-segmented) project. It allows you to group the most similar time series together to create more accurate segmented models, reducing time-to-value when creating complex multiseries projects.
DataRobot uses the following algorithms for time series clustering:
* [Velocity clustering](#velocity-clustering)
* [K-means clustering with Dynamic Time Warping (DTW)](#k-means-with-dtw)
The following table compares Velocity and K-means:
Characteristic | Velocity clustering | K-means clustering
-------------- | ------------------- | ------------------
Speed | Fast | Slow
Robust to irregular time steps | Yes | No
Clusters based on series shape | No | Yes
Series need to be the same length | No | No
While a single series may contain many unique values that vary over time, the time series variant looks for similarities across the different series to identify which series most closely relate to one another. Series are classified by being most closely associated with a specific barycenter (the time series clustering equivalent of a centroid), which is derived from the original dataset.
## Velocity clustering {: #velocity-clustering }
Velocity clustering is an unsupervised learning technique that splits series up based on summary statistics. The goal, as with most clustering, is to put similar series in the same cluster and series that differ significantly in different clusters. Specifically, it groups time series based on statistical properties such as the mean, standard deviation, and the percentage of zeros. The benefit of this approach is that time series with similar values within the feature derivation window are grouped together so that during segmented modeling, these features within the FDW have more signal.
Calculation for each clustering feature is as follows:
1. Perform the given aggregation (i.e., mean, standard deviation) on all series.
2. Divide the resulting aggregations into quantiles representing the number of desired clusters.
3. Determine in which quantile the feature’s aggregation falls and assign the feature to that cluster.

The cluster assigned the most features is the final cluster for the series.
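The quantile-binning steps above can be sketched as follows (a simplified illustration under the assumption of one summary statistic per series; `velocity_clusters` and its signature are hypothetical, not DataRobot's implementation):

```python
import numpy as np

def velocity_clusters(series_dict, n_clusters, agg=np.mean):
    """Assign each series to a cluster by quantile-binning an aggregation.

    series_dict maps series IDs to 1-D arrays of target values; agg is the
    summary statistic (mean, standard deviation, fraction of zeros, ...).
    """
    ids = list(series_dict)
    # Step 1: aggregate every series down to a single summary number.
    stats = np.array([agg(series_dict[s]) for s in ids])
    # Step 2: cut the aggregations into n_clusters quantile bins.
    edges = np.quantile(stats, np.linspace(0, 1, n_clusters + 1))
    # Step 3: the bin a series' aggregation falls into is its cluster.
    bins = np.clip(np.searchsorted(edges, stats, side="left") - 1, 0, n_clusters - 1)
    return dict(zip(ids, bins.tolist()))

series = {
    "store_A": np.array([1.0, 2.0, 1.5]),     # low mean
    "store_B": np.array([1.2, 1.8, 1.4]),     # low mean
    "store_C": np.array([90.0, 110.0, 95.0]), # high mean
}
clusters = velocity_clusters(series, n_clusters=2)
```

With these made-up values, the two low-mean stores land in one cluster and the high-mean store in the other.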
DataRobot implements four types of Velocity clustering:
* Mean Aggregations
* Standard Deviation Aggregations
* Zero-Inflated & Mean Aggregations
* Zero-Inflated & Standard Deviation Aggregations
## K-means with DTW {: #k-means-with-dtw }
Understanding the implementation of K-means clustering requires understanding how DataRobot measures the distance between two time series. This is done with Dynamic Time Warping (DTW). See the [K-Means deep dive](#deep-dive-k-means-dtw-clustering) (as it relates to clustering and DTW), below.
### Single-feature DTW {: #single-feature-dtw }
DTW produces a similarity metric between two series by attempting to match the "shapes." The graph below shows an example where the [matching](#dtw-match-requirements) between the indices is not 1-to-1—it's warped.

### Multi-feature DTW {: #multi-feature-dtw }
The simple diagram above illustrates how to measure the distance between series focusing on a single feature. However, what if there are multiple features? For example, to calculate the distance between store A and store B using daily sales data and daily customer count data, do you calculate the average of the "DTW distance of sales data" and the "DTW distance of customer count data"?
The DTW distance is calculated independently for each feature. The resulting distances are then combined and paired with K-means to do the actual clustering.
Think of a series as a matrix of shape `m x n`, where `m` is the number of features and `n` is the number of points in time. Each series has its own `m x n` matrix, and a cluster is a collection of such series. The distance matrix is calculated across the series for each `m` and `n` point. Note that the distance matrix itself is _not_ the error metric; the error metric is calculated on top of the distance matrix.
Features are kept independent within DTW, resulting in a `2D` DTW representation instead of the `1D` representation in the image above. The actual K-means is an optimization of the resulting `2D` distance representations for each cluster.
### DTW match requirements {: #dtw-match-requirements }
There are four requirements to create a match:
1. Every index from the series `A` must be matched with at least one index from series `B` and vice versa.
2. The first index of sequence `A` must be matched with the first index of sequence `B` (it may also match other indices).
3. The last index of sequence `A` must be matched with the last index of sequence `B`.
4. The mapping of indices from sequence `A` to sequence `B` (and vice versa) must be monotonically increasing, to prevent index mappings from crossing.
How does DataRobot know if the match is accurate? To calculate distance with DTW:
1. Identify all matches between the two time series.
2. Calculate the sum of the squared differences between the values for each match.
3. Find the minimum squared sum across all matches.
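The distance calculation above can be sketched with the classic dynamic-programming recurrence (an illustration of the idea with squared differences, not DataRobot's implementation):

```python
import numpy as np

def dtw_distance(a, b):
    """Squared-difference DTW between two 1-D series (lengths may differ)."""
    n, m = len(a), len(b)
    # cost[i, j] = minimum cumulative cost of aligning a[:i+1] with b[:j+1]
    cost = np.full((n, m), np.inf)
    cost[0, 0] = (a[0] - b[0]) ** 2
    for i in range(n):
        for j in range(m):
            if i == 0 and j == 0:
                continue
            best_prev = min(
                cost[i - 1, j] if i > 0 else np.inf,  # a[i] also matches b[j]
                cost[i, j - 1] if j > 0 else np.inf,  # b[j] also matches a[i]
                cost[i - 1, j - 1] if i > 0 and j > 0 else np.inf,  # advance both
            )
            cost[i, j] = (a[i] - b[j]) ** 2 + best_prev
    return cost[-1, -1]

# A shifted copy of a series: nonzero pointwise (Euclidean) error, zero DTW cost.
a = np.array([0.0, 1.0, 2.0, 1.0, 0.0, 0.0])
b = np.array([0.0, 0.0, 1.0, 2.0, 1.0, 0.0])
print(dtw_distance(a, b))    # 0.0 -- the warping path absorbs the shift
print(np.sum((a - b) ** 2))  # 4.0 -- rigid pointwise comparison penalizes it
```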

## Deep dive: K-Means DTW clustering {: #deep-dive-k-means-dtw-clustering }
K-means is an unsupervised learning algorithm that clusters data by trying to separate samples into _K_ groups of equal variance, minimizing a criterion known as “inertia” or “within-cluster sum-of-squares.” This algorithm requires the number of clusters to be specified in advance. It scales well to large numbers of samples and has been used across a broad range of application areas in many different fields.
To apply the K-means algorithm to a sequence, DataRobot initializes a cluster by selecting a series to serve as a barycenter. Specifically, DataRobot:
1. Identifies how many clusters to create (K).
2. Initializes clusters by randomly selecting K series as the barycenters.
3. Calculates the sum of the squared distance from each time series to all barycenters.
4. Assigns the time series to the cluster with the smallest sum.
5. Recalculates the barycenter of each cluster.
6. Repeats steps 4 & 5 until convergence or maximum iterations reached.
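The loop above can be sketched as follows. This is a deliberately simplified, medoid-style variant: the barycenter of a cluster is taken to be the member series minimizing the total distance to the other members, whereas DataRobot's barycenters are derived series rather than members; `kmeans_series` and its signature are hypothetical. Any distance function, including DTW, can be plugged in as `dist`:

```python
import random
import numpy as np

def kmeans_series(series, k, dist, n_iter=20, seed=0):
    """Simplified k-means over whole series with a pluggable distance."""
    rng = random.Random(seed)
    centers = rng.sample(series, k)  # step 2: random initialization
    assign = [0] * len(series)
    for _ in range(n_iter):
        # steps 3-4: assign each series to its nearest barycenter
        new_assign = [min(range(k), key=lambda c: dist(s, centers[c]))
                      for s in series]
        if new_assign == assign:
            break  # step 6: converged
        assign = new_assign
        # step 5: recompute each barycenter (here: the cluster medoid)
        for c in range(k):
            members = [s for s, a in zip(series, assign) if a == c]
            if members:
                centers[c] = min(members,
                                 key=lambda x: sum(dist(x, o) for o in members))
    return assign

# Demo with squared Euclidean distance on made-up, well-separated series.
sq = lambda a, b: float(np.sum((np.asarray(a) - np.asarray(b)) ** 2))
data = [[1.0, 1.0, 1.0], [1.1, 0.9, 1.0], [9.0, 9.0, 9.0], [8.8, 9.2, 9.1]]
labels = kmeans_series(data, k=2, dist=sq)
```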
### DTW distance calculations {: #dtw-distance-calculations }
A barycenter is a time series that is derived from other series, with the explicit goal of minimizing the distance between itself and other series. A barycenter (`B`) is mathematically defined as the minimum sum of the squared distances (`d`) to a reference time series (`b`), when you are given a new time series dataset (`x`):

Time series clustering algorithms such as K-means compute these distances using different distance measures, such as Euclidean distance and DTW.
With DTW K-means, DTW is used as the similarity measure between different series. One major advantage of DTW is that series do not need to be the same length, or aligned with one another in time. Instead, two series can be compared after an optimal temporal alignment is found between them.
Compare this to Euclidean distance, where each point is compared only with the point that occupies the same position in time in the other series (`T_0` is always compared with `T_0`). With DTW, however, if `T_10` is a peak in series A and `T_30` is a peak in series B, then `T_10` is compared with `T_30`; Euclidean distance would compare `T_10` with `T_10`. While DTW is slightly slower than other methods, it provides a much more robust similarity metric. This is especially important considering that most multiseries datasets have a wide variety of characteristics, including differing start and stop times for each series.
---
title: Time series feature lists
description: Understand DataRobot's feature lists that are specialized for time series modeling.
---
# Time series feature lists {: #time-series-feature-lists }
DataRobot automatically constructs time series features based on the characteristics of the data (e.g., stationarity and periodicities). Multiple periodicities can result in several possibilities when it comes to constructing the features—both “Sales (7 day diff) (1st lag)” and “Sales (24 hour diff) (1st lag)” can make sense, for example. In some cases, it is better not to transform the target by differencing at all. The choice that yields optimal accuracy often depends on the data.
After constructing time series features for the data, DataRobot creates multiple feature lists (the target is automatically included in each). Then, at project start, DataRobot automatically runs blueprints using multiple feature lists, selecting the list that best suits the model type. With non-time series projects, by contrast, blueprints run on a single feature list (typically <em>Informative Features</em>).
Time series feature lists can be viewed from the **Data > Derived Modeling Data** page, for example:

These lists are different, and more targeted, than those [created by non-time series](feature-lists#automatically-created-feature-lists) projects.

## Exclude features from feature lists {: #exclude-features-from-feature-lists }
There are times when you cannot exclude features from derivation because other features rely on those features. Instead, you can exclude them from a feature list. In that way, they are still used in initial feature derivation but are excluded from modeling.
Note the following behavior that results from excluding certain special features from feature lists:
* **Target column**: DataRobot will not derive target-derived features.
* **Primary date/time column**: DataRobot will not derive calendar and duration features. Also, the feature list without the date/time column will not be available for modeling.
!!! info
You may still want to create a list that excludes the primary date/time feature for use with [monotonic](monotonic) modeling.
* **Series ID column**: DataRobot will not generate any models that depend on the series ID, including per-series, series-level effects, or hierarchical models.
## MASE and baseline models {: #mase-and-baseline-models }
The baseline model is a model that uses the most recent value that matches the longest periodicity. That is, while a project could have multiple different naïve predictions with different periodicity, DataRobot uses the longest naïve predictions to compute the [MASE score](opt-metric#mase). MASE is a measure of the accuracy of forecasts, and is a comparison of one model to a naïve baseline model—the simple ratio of the MAE of a model over the baseline model.
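The MASE ratio described above can be sketched with made-up numbers (here the naive predictions are assumed to be values lagged by the longest periodicity):

```python
import numpy as np

actuals     = np.array([100.0, 110.0, 105.0, 120.0])
model_preds = np.array([102.0, 108.0, 107.0, 118.0])
naive_preds = np.array([ 98.0, 100.0, 110.0, 105.0])  # lagged baseline values

mae_model = np.mean(np.abs(actuals - model_preds))  # MAE of the model
mae_naive = np.mean(np.abs(actuals - naive_preds))  # MAE of the naive baseline
mase = mae_model / mae_naive
print(mase)  # a value below 1 means the model beats the naive baseline
```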
On the Leaderboard, DataRobot identifies the model being used as the baseline model using a BASELINE [indicator](leaderboard-ref#tags-and-indicators).
??? tip "No baseline model on the Leaderboard?"
To generate the baseline model by default, use [comprehensive Autopilot mode](more-accuracy). If you run in [Quick Autopilot mode](model-ref#quick-autopilot), you won't see the baseline model on the Leaderboard; however, DataRobot generates a blueprint of the baseline model in the background. To build a baseline model from the Repository, search for `Baseline Predictions Using Most Recent Value` and train it on the `Baseline Only` feature list that has the longest seasonality.

## Automatically created feature lists {: #automatically-created-feature-lists }
The following table describes the feature lists automatically created for time series modeling available from the **Feature List** dropdown:
| Feature list | Description |
|-----------------|---------------|
| All Time Series Features | Not actually a feature list, this is the dropdown setting that displays all derived features. |
| Baseline Only (<*period*>) | Naïve predictions column matching the period; used for Baseline Predictions blueprints. |
| Date Only | All features of type Date; used for trend models that only depend on the date. |
| No differencing | <ul><li> All available naïve predictions features</li><li> Time series features derived using the raw target (not differenced) </li><li> All other non-target derived features </li></ul>|
| Target Derived Only With Differencing | <ul><li> naïve predictions column matching the period </li><li> Time series features derived using differenced target matching the period </li></ul> Note that this list is not run by default. |
| Target Derived Only Without Differencing (<*period*>) | <ul><li> All available naïve predictions features </li><li> Time series features derived using the raw target (not differenced) </li></ul> Note that this list is not run by default.|
| Time Series Extracted Features | A feature list version of `All Time Series Features`; that is, all derived features. |
| Time Series Informative Features* | All time series features that are considered [informative](feature-lists#automatically-created-feature-lists) (includes features based on all differencing periods). |
| Time Series Retraining Features | A copy of the feature list used by the original model, to ensure that the [retrained model](set-up-auto-retraining#set-up-retraining-policies) is as close to origin as possible. |
| [Univariate Selections](feature-lists#automatically-created-feature-lists) | Features that meet a certain threshold for non-linear correlation with the selected target; same as non-time series projects. |
| With Differencing (<*period*>) | <ul><li> naïve predictions column matching the period </li><li> Time series features derived using differenced target matching the period </li><li> All other non-target derived features </li></ul> |
| With Differencing ([average baseline](glossary/index#average-baseline)) | <ul><li> naïve predictions using average baseline </li><li> Target-derived features that capture deviation from the average baseline </li><li> All other non-target derived features </li></ul> |
| With Differencing ([EWMA baseline](ts-adv-opt#exponentially-weighted-moving-average)) | <ul><li> naïve predictions using average baseline with smoothing applied to the baseline</li><li> Target-derived features that capture deviation from the smoothed average baseline</li><li> All other non-target EWMA derived features </li></ul> |
| With Differencing ([intra-month seasonality detection](#intra-month-seasonality-detection)) | Multiple feature list options to leverage detected seasonalities (see below). |
| With Differencing ([nonzero average baseline](#zero-inflated-models)) | <ul><li> naïve predictions using nonzero average baseline (zero values are removed when computing the average) </li><li> Target-derived features that capture deviation from the average baseline </li><li> Target-derived features that capture lags and statistics of the target flag (whether or not it is zero) </li><li> All other non-target derived features </li></ul> |
\* The _Time Series Informative Features_ list is not optimal. Preferably, select one of the “with differencing” or the “no differencing” feature lists.
### Feature lists for unsupervised time series projects {: #feature-lists-for-unsupervised-time-series-projects }
The following table describes the feature lists automatically created for time series projects that use [unsupervised mode](anomaly-detection#anomaly-detection-feature-lists-for-time-series) (anomaly detection). See the referenced section for details on how DataRobot manages these lists for point anomalies and anomaly windows detection:
| Feature list | Description |
|-----------------|---------------|
| Time Series Extracted Features | A feature list version of `All Time Series Features`; that is, all derived features. |
| Time Series Informative Features | All time series features that are considered informative for time series anomaly detection. For example, DataRobot excludes features it determines are low information or redundant, such as duplicate columns or a column containing empty values. |
| Actual Values and Rolling Statistics | Actual values of the dataset together with the derived statistical information (e.g., mean, median, etc.) of the corresponding feature derivation windows. These features are selected from time series anomaly detection and are applicable to both point anomalies and anomaly windows. |
| Robust z-score Only | Selected rolling statistics from time series derived features but containing only the derived robust z-score values. These features are useful for evaluating point anomalies. |
| SHAP-based Reduced Features | A subset of features based on the Isolation Forest SHAP value scores. |
| Actual Values Only | Selected actual values from the dataset. These features are useful for evaluating point anomalies. |
| Rolling Statistics Only | Selected rolling statistics from time series derived features. These features are useful for evaluating anomaly windows. |
### Feature lists for Repository blueprints {: #feature-lists-for-repository-blueprints }
When building models from the [Repository](repository), you can select a specific feature list to run—either the default lists or any lists you created. However, because some blueprints require specific features be present in the feature list, using a feature list without those features can cause model build failure. This may happen, for example, if you created a feature list independent of the model type. To prevent this type of failure, DataRobot checks feature list and blueprint compatibility before starting the model build and returns an error message if appropriate.
Additionally, because DataRobot can identify a preferable feature list type for some blueprints, it suggests that list by default. See the [time series considerations](ts-consider) for a list of applicable blueprints.
### Zero-inflated models {: #zero-inflated-models }
When the project target is positive and has at least one zero value, DataRobot always creates a nonzero average baseline feature list and uses it to build optimized zero-inflated models to reflect the data. These models may provide higher accuracy because the specialized algorithms model the zero and count distributions separately.
The nonzero average baseline feature list, with differencing, appends `(nonzero)` or `(is zero)` to the target name. Specifically:
* For (nonzero): features are derived by treating any zero target value as an instance of a missing value.
* For (is zero): features are derived by substituting target values with a boolean flag (whether the target is zero or not).
The transformed target values ("<<em>target</em>> (nonzero)" and "<<em>target</em>> (is zero)") are not used in modeling. To avoid target leakage during modeling, DataRobot only uses derived transformed target values (lags and statistics). In addition, the "With Differencing (nonzero average baseline)" feature list is only used for zero-inflated model blueprints, which are prefixed with "Zero-Inflated" (for example, Zero-Inflated eXtreme Gradient Boosted Trees Regressor). Note that not all model types have a zero-inflated counterpart.
#### Zero-inflated modeling considerations {: #zero-inflated-modeling-considerations }
When working with the zero-inflated model and/or feature list, keep the following in mind:
* You can use the zero-inflated feature list to train non-zero-inflated models and expect decent (if not optimal) performance.
* If you use a different feature list to retrain a zero-inflated model, model performance may be poor since the model expects the target derived features in log scale.
### Intra-month seasonality detection {: #intra-month-seasonality-detection }
Intra-month seasonality is the periodic variation that repeats in the same day/week number or weekday/week number each month. Detecting patterns in seasonality is important for building accurate models—how do you define the date needed from the previous month? Are you counting up from the beginning of the month or down from the end?
Some examples:
| Repeat patterns | Time unit | Example |
|------------------|-----------|-----------|
| Same day of month | Day | A payment is due on a specific day of the month— "payment due on the 15th." |
| Same week of month and day of week | Day | Payday is on a certain position within the month—"payday is the second Friday." |
| Week of month | Week | High sales for a retail dataset the last week of each month—"sales quota for the month is calculated on the last day." |
To provide better handling of seasonality, DataRobot detects and generates appropriate feature lists and then resulting features. These additions are based on whether, when executing the feature engineering that creates the [modeling dataset](glossary/index#modeling-dataset), DataRobot detects intra-month seasonality and a Feature Derivation Window greater than a certain threshold. The feature lists run by Autopilot are based on the characteristics of the data, as described in the table below.
!!! note
"FDW covers at least <em>X</em> days" is equal to <code>fdw_end - fdw_start >= X</code>.
| Condition | Description | Example |
| ------- | ----------- | ------- |
| *With Differencing (monthly)* | :~~: | :~~: |
| Detected intra-month seasonality and feature derivation window covers at least 31 days | <ul><li> naïve predictions column matching the period (align to the beginning of the month) </li><li> Time series features derived using the differenced target matching the period (align to the beginning of the month) </li><li> All other non-target derived features </li></ul> | Use the first *N*th day of the previous month target value as the prediction of the first *N*th day of current month—March 5th will use the target value of Feb 5th. Or, in the case of March 30th, the list will use the value of Feb 28 (the last day of February). |
| *With Differencing (monthly, same day from end)* | :~~: | :~~: |
| Detected intra-month seasonality and minimum feature derivation window covers at least 31 days | <ul><li> naïve predictions column matching the period (align to the end of the month) </li><li> Time series features derived using the differenced target matching the period (align to the end of the month)</li><li> All other non-target derived features </li></ul> | Use the last *N*th day of the previous month target value as the prediction of the last *N*th day of the current month—March 31st will use the target value of February 28th (or February 29th in leap years). |
| *With Differencing (monthly, same day of week, same week from start)*| :~~: | :~~: |
| Detected intra-month seasonality, FDW start ≥ 35, FDW end ≤ 21, FDW window covers at least 29 days | <ul><li> naïve predictions column matching the period (align to the week of the month and weekday) </li><li> Time series features derived using the differenced target matching the period (align to the week number and weekday)</li><li> All other non-target derived features </li></ul> | Use the target of the first *X*-day of last month as the prediction of the first *X*-day of the current month—March 5th (Monday) will use the target value of the February Monday that falls between February 1-7.|
| *With Differencing (monthly, same day of week, same week from end)* | :~~: | :~~: |
| Detected intra-month seasonality, FDW start ≥ 35, FDW end ≤ 21, FDW window covers at least 29 days | <ul><li>naïve predictions column matching the period (align to the weekday and the "week of the month from the end of the month")</li><li> Time series features derived using the differenced target matching the period (align to the week number and weekday)</li><li>All other non-target derived features</li></ul> | Use the target of the last *X*-day of last month as the prediction of the last *X*-day of current month—March 31st (Tuesday) will use the target value of the February Tuesday that falls between February 22-28.|
| *With Differencing (monthly, average of previous month)* | :~~: | :~~: |
| Detected intra-month seasonality, FDW start ≥ 62, FDW end ≤ 21, FDW window covers at least 29 days | <ul><li> naïve predictions using average of the previous month </li><li> Target-derived features that capture deviation from the previous month’s average baseline </li><li> All other non-target derived features </li></ul> | Use the average target value of the previous month as the naïve prediction of days in the next month—June 7 will use May 1-31, or the average target value of February as the naïve prediction of days in March. (Requires a longer FDW.) |
| *With Differencing (monthly, average of same week of previous month)* | :~~: | :~~: |
| Detected intra-month seasonality, FDW start ≥ 37, FDW end ≤ 21, FDW window covers at least 29 days | <ul><li>naïve predictions using weekly average of the previous month </li><li> Target-derived features that capture deviation from the previous month’s weekly average baseline </li><li> All other non-target derived features </li></ul> | Use the first week average of last month as the predictions of the first week of the current month--see below for detail.|
| *With Differencing (monthly, average of [nonzero values](#zero-inflated-models) of previous month)* | :~~: | :~~: |
| Detected intra-month seasonality, FDW start ≥ 62, FDW end ≤ 21, FDW window covers at least 29 days with minimum target value of 0| <ul><li>naïve predictions using nonzero average of the previous month (zero values are removed in computing the average)</li><li> Target-derived features that capture deviation from the previous month nonzero average baseline </li><li>Target-derived features that capture lags and statistics of the target flag (whether or not it is zero)</li><li>All other non-target derived features </li></ul> | Use the average nonzero target value of February as the naïve prediction of days in March. |
| *With Differencing (previous week of the month [nonzero values](#zero-inflated-models) average baseline)* | :~~: | :~~: |
| Detected intra-month seasonality, minimum FDW start ≥ 37, maximum FDW end ≤ 21, FDW window covers at least 29 days with minimum target value of 0 | <ul><li> naïve predictions using weekly nonzero average of previous month (zero values are removed in computing the average) </li><li>Target-derived features that capture deviation from the previous month weekly nonzero average baseline</li><li> Target-derived features that capture lags and statistics of the target flag (whether or not it is zero) </li><li> All other non-target derived features </li></ul> | Use weekly nonzero average of previous month--see below for detail. |
#### Monthly, average of same week from start of previous month {: #monthly-average-of-same-week-from-start-of-previous-month }
The following details calculations for naïve prediction for March:
1. Compute the weekly average from the start of the month:
- March 1-7 uses the average value of February 1-7...March 22-31 (last day of month) uses the average value of February 22-28. This feature is called <code>y (last month weekly average)</code>.
2. Compute the weekly average from the end of the month:
- March 25-31 uses the average value of February 22-28...March 1-10 uses the average value of February 1-7. This feature is called <code>y (match end of the month) (last month weekly average)</code>.
3. Compute the average of the above two features to calculate the naïve predictions of the current month:
- 0.5 * <code>y (last month weekly average)</code> + 0.5 * <code>y (match end of the month) (last month weekly average)</code>
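As an illustration only (not DataRobot's internal implementation), the week matching and 50/50 blend described above can be sketched in Python:

```python
import calendar
from datetime import date

def prev_month(day):
    """Return (year, month) of the month before `day`'s month."""
    return (day.year - 1, 12) if day.month == 1 else (day.year, day.month - 1)

def weekly_avg(values, day, align="start"):
    """Average of the matching week of the previous month.

    align="start" counts weeks from the 1st of the month; align="end"
    counts them back from the last day. `values` maps date -> value.
    Illustrative sketch; the derived features are named in the text above.
    """
    py, pm = prev_month(day)
    plen = calendar.monthrange(py, pm)[1]           # days in the previous month
    if align == "start":
        week = min((day.day - 1) // 7, 3)           # days 22+ share the last full week
        days = range(week * 7 + 1, min(week * 7 + 8, plen + 1))
    else:
        mlen = calendar.monthrange(day.year, day.month)[1]
        week = min((mlen - day.day) // 7, 3)        # weeks counted back from month end
        days = range(max(plen - week * 7 - 6, 1), plen - week * 7 + 1)
    vals = [values[date(py, pm, d)] for d in days if date(py, pm, d) in values]
    return sum(vals) / len(vals)

def naive_prediction(values, day):
    """Blend the start- and end-aligned weekly averages, as in step 3."""
    return 0.5 * weekly_avg(values, day, "start") + 0.5 * weekly_avg(values, day, "end")
```

For example, with daily February data, `naive_prediction` for March 1-7 returns the February 1-7 average, matching the calculation above.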
#### Monthly, average nonzero values in same week from start {: #monthly-average-nonzero-values-in-same-week-from-start }
The following details calculations for naïve prediction for March for [nonzero values](#zero-inflated-models):
1. Compute the weekly nonzero average from the start of the month:
- March 1-7 uses the nonzero average value of February 1-7...March 22-31 (last day of month) uses the nonzero average value of February 22-28. This feature is called <code>y (nonzero)(last month weekly average)</code>.
2. Compute the weekly nonzero average from the end of the month:
- March 25-31 uses the nonzero average value of February 22-28...March 1-10 uses the nonzero average value of February 1-7. This feature is called <code>y (nonzero)(match end of the month) (last month weekly average)</code>.
3. Compute the average of the above two features to compute the naïve predictions of the current month:
- 0.5 * <code>y (nonzero)(last month weekly average)</code> + 0.5 * <code>y (nonzero)(match end of the month) (last month weekly average)</code>
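The only difference from the previous calculation is that zeros are removed before averaging. A minimal sketch of that averaging step (the `0.0` fallback for an all-zero week is an assumption, not documented behavior):

```python
def nonzero_average(values):
    """Average that ignores zero values, as used for the zero-inflated
    baselines above. Sketch only; falls back to 0.0 if every value is zero."""
    nonzero = [v for v in values if v != 0]
    return sum(nonzero) / len(nonzero) if nonzero else 0.0
```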
|
ts-feature-lists
|
---
title: Batch predictions for TTS and LSTM models
description: Make batch predictions for Traditional Time Series (TTS) and Long Short-Term Memory (LSTM) models.
section_name: Time Series
maturity: public-preview
---
# Batch predictions for TTS and LSTM models {: #batch-predictions-for-tts-and-lstm-models }
!!! info "Availability information"
Batch predictions for TTS and LSTM models, available as a public preview feature, are off by default. Contact your DataRobot representative or administrator for information on enabling the feature.
**Feature flag**: Enable TTS and LSTM Time Series Model Batch Predictions
Traditional Time Series (TTS) and Long Short-Term Memory (LSTM) models—sequence models that use autoregressive (AR) and moving average (MA) methods—are common in time series forecasting. Both AR and MA models typically require a complete history of past forecasts to make predictions. In contrast, other time series models only require a single row after feature derivation to make predictions. Previously, batch predictions couldn't accept historical data beyond the effective [feature derivation window (FDW)](glossary/index#feature-derivation-window) if the history exceeded the maximum size of each batch, while sequence models required complete historical data beyond the FDW. These requirements made sequence models incompatible with batch predictions. Enabling this public preview feature removes those limitations to allow batch predictions for TTS and LSTM models.
!!! note
Time series Autopilot still doesn't include TTS or LSTM model blueprints; however, you can access the model blueprints in the [model Repository](repository).
To allow batch predictions with TTS and LSTM models, this feature:
* Updates batch predictions to accept historical data up to the maximum batch size (equal to 50MB or approximately a million rows of historical data).
* Updates TTS models to allow refitting on an incomplete history (if the complete history isn't provided).
!!! warning
If you don't provide sufficient historical data at prediction time, you could encounter prediction inconsistencies. For more information on maintaining accuracy in TTS and LSTM models, see the [prediction accuracy considerations](#prediction-accuracy-considerations) below.
Without this feature enabled, the **Predictions > Make Predictions** and **Predictions > Job Definitions** tabs are unavailable for TTS and LSTM models.
=== "Without feature flag"

=== "With feature flag"

With this feature enabled, you can access the [**Predictions > Make Predictions**](batch-pred) and [**Predictions > Job Definitions**](batch-pred-jobs) tabs of a deployed TTS or LSTM model.
??? tip "What models are impacted by this change?"
The following model types can now make batch predictions:
* Recurrent Neural Network Regressor using Keras
* Recurrent Neural Network Regressor using Keras for each forecast distance
* Univariate Recurrent Neural Network Regressor using Keras
* Multi-Step Recurrent Neural Network Regressor using Keras
* Sequence to sequence Recurrent Neural Network Regressor using Keras
* DeepAR Recurrent Neural Network Regressor using Keras
* Seasonal AUTOARIMA model based on statsmodels SARIMAX model
* Nonseasonal AUTOARIMA model based on statsmodels SARIMAX model
* Per Series Nonseasonal AUTOARIMA model based on statsmodels SARIMAX model
* AUTOARIMA model based on statsmodels SARIMAX model using baseline offset for each forecast distance
* Nonseasonal AUTOARIMA model based on statsmodels SARIMAX model using Fourier Terms
* Per-series Nonseasonal AUTOARIMA model based on statsmodels SARIMAX model using Fourier Terms
* Error-Trend-Seasonal (ETS) exponential smoothing model
* Per-series AutoETS model based on statsmodels ETS model
* TBATS forecasting regressor
* Prophet
* Per-series Prophet
* Vector Autoregressive Model (VAR)
* VARMAX model based on stats model (Multiseries VARMAX model)
* Multiseries VARMAX model with Fourier features
* Per-series TBATS forecasting regressor
## Prediction accuracy considerations {: #prediction-accuracy-considerations }
To measure the impact of using an incomplete feature derivation window history when making batch predictions with sequence models, DataRobot calculates the percentage difference in RMSE between "full history" and "incomplete history" predictions. Based on testing, DataRobot recommends applying the following guidelines to maintain prediction accuracy:
* ARIMA and ETS: These models use a smooth method (based on Kalman filtering), which does not change model parameters and uses the original model if refitting fails. To maintain accuracy, provide at least 20 points of historical data. It is particularly important to provide sufficient historical data to effectively smooth the new data with existing parameters when the FDW is small and the forecast isn't seasonal.
* TBATS and PROPHET: These models use a warm-start method, which uses the existing model parameters as an initial "guess" and completes the refit with more data. The model parameters can change, and the accuracy results are less consistent. To maintain accuracy, provide at least 40 points of historical data.
* LSTM: There are two groups of LSTM models.
* The following models have the same requirements at prediction time as most time series models—prediction data must contain an effective feature derivation window history. These models are:
* Recurrent Neural Network Regressor using Keras
* Recurrent Neural Network Regressor using Keras for each forecast distance
* The following models require additional history to ensure accurate and consistent predictions for all prediction methods. While they return predictions as long as the historical data covers the effective feature derivation window, the most accurate predictions require additional historical rows. The amount can be calculated as:
`2 * (feature derivation window start + forecast window end + 1)`.
For example, if features are derived on a [7, 0] days interval and the forecast window is [2, 5] days, the amount of history required is `2 * (7 + 5 + 1)`, or 26 days.
These models are:
* Univariate Recurrent Neural Network Regressor using Keras
* Multi-Step Recurrent Neural Network Regressor using Keras
* Sequence to Sequence Recurrent Neural Network Regressor using Keras (LSTM)
* DeepAR Recurrent Neural Network Regressor using Keras
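The history requirement above reduces to a one-line calculation:

```python
def lstm_required_history_rows(fdw_start, forecast_window_end):
    """Recommended historical rows for the second group of LSTM models,
    per the formula above: 2 * (FDW start + forecast window end + 1)."""
    return 2 * (fdw_start + forecast_window_end + 1)
```

For the [7, 0] / [2, 5] example above, `lstm_required_history_rows(7, 5)` returns 26.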
|
pp-ts-tts-lstm-batch-pred
|
---
title: Period Accuracy
description: Period Accuracy provides the ability to compute error metric values for specific periods of the backtest validation source.
section_name: Time Series
maturity: public-preview
---
# Period Accuracy {: #period-accuracy }
!!! info "Availability information"
The Period Accuracy insight is off by default. Contact your DataRobot representative or administrator for information on enabling the feature.
**Feature flag:** Period Accuracy Insight
In some use cases, certain time periods carry more significance than others. This is particularly true for financial markets—for example, a trader may only be interested in a model's performance over the first 4 hours of each trading day. **Period Accuracy** lets you specify which periods within your training dataset matter most; DataRobot then computes aggregate accuracy metrics for those periods and surfaces the results on the Leaderboard.
Using a selected optimization (accuracy) metric, the **Period Accuracy** insight compares these specified periods against the metric score of the model as a whole. In the trading example above, the RMSE for a model's full validation period reveals little about how that model performs when it matters most to the trader.

To use the insight:
1. Upload a period file and import it to a project after models are built.
2. Set filters for calculating period performance.
The insight is available for both OTV and single- and multiseries time series projects in **Evaluate > Period Accuracy**.
## Create a period file {: #create-a-period-file }
The first step in using **Period Accuracy** is to create a period file. Similar to a calendar file, the period file indicates each period's name and start date/time (and, by extension, its duration). Unlike calendar files, which support ranges, the period file is a two-column CSV that includes:
1. **Column 1:** The date/time column.
This is the feature used to build the project; its label must match the name of the feature exactly. The data populating the date/time feature column should represent all the time steps you want to visualize in the insight. For example, if the project has daily data from January 30, 2022 through February 8, 2023, and you want to visualize all of that data, the first column would contain 375 entries, one per date in that range.
2. **Column 2:** The period column.
The period column represents how you would like to group the data in the insight—it represents the core of what the insight should visualize, giving more information about the accuracy of the model within the defined subset of the data, so define it based on how you want to understand your data. In the above example, you could:
* Mark all dates in January as members of the January bucket by entering the string `January` in column 2 for every applicable date. Next, mark all dates in February as `February`, etc.
* Group by weekday by labeling each Sunday with the string `Sunday`, each Monday with the string `Monday`, etc.
* Represent dates corresponding to Monday through Friday as the string `weekday` and the dates corresponding to Saturday and Sunday as `weekend`.

Once the period file is created, save it locally or upload it to the AI Catalog.
### Time steps in a period file {: #time-steps-in-a-period-file }
Defining specific time periods within a date feature is dependent on the granularity of your data (e.g., you need hourly data to view hourly predictions). To show results that match data granularity, add multiple rows in the period file to match the times of interest. For example:
Your date/time feature is `date` and you have hourly data for each day. You are interested in sales between 11:00am and 1:00pm each weekday. Your period file would look like:

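A period file like the one above can be generated programmatically. The following sketch labels weekday rows between 11:00am and 1:00pm; the header `date` and the period label `midday_sales` are assumptions—column 1 must exactly match your project's date/time feature name, and the timestamp format must match your data:

```python
import csv
from datetime import datetime, timedelta

def write_period_file(path, start, end):
    """Write a two-column period file labeling weekday 11:00am-1:00pm rows.

    Hypothetical sketch: adjust the header, label, hour range, and
    timestamp format to match your own dataset.
    """
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["date", "period"])
        t = start
        while t <= end:
            if t.weekday() < 5 and 11 <= t.hour < 13:  # Mon-Fri, 11:00 and 12:00 rows
                writer.writerow([t.strftime("%Y-%m-%d %H:%M:%S"), "midday_sales"])
            t += timedelta(hours=1)
```

For example, `write_period_file("periods.csv", datetime(2023, 1, 2, 0, 0), datetime(2023, 1, 8, 23, 0))` writes two rows per weekday for that week.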
## Generate Period Accuracy {: #generate-period-accuracy }
**Period Accuracy** must be computed for each model in a project. However, once a period file is uploaded to one model in the project, it is available to all models. You can upload multiple period files to a project, which may be useful for examining data in different ways (for example, each day, weekday vs weekend, etc.).
To view insights, open a model's **Period Accuracy** tab and, using the dropdowns, set filters for calculating period performance. Only project-applicable filters are visible.
Filter | Description
------ | -----------
Period file | Select a [period file](#create-a-period-file). From there, you can also: <br /> <ul><li> Upload a new period file, either directly or from the AI Catalog. </li><li>Remove an uploaded file from the insight. This action will not delete the file from the AI Catalog.</li></ul>
Backtest | Select the backtest to display results for. Although DataRobot runs all backtests when building a project, you must individually train a backtest's model and compute its validation predictions before viewing period insights for that backtest. If you select a backtest that is not yet calculated, DataRobot will prompt to run calculations.
Series to plot (_multiseries only_) | If the project is multiseries, select a series to plot.
Forecast distance (_time series and multiseries only_) | Set the window of time to base the visualization on. See more details in [**Accuracy Over Time**](aot#change-the-forecast-distance).
Click **Compute period insights** to start calculations. Once computed, changing any filter—other than series, where applicable—requires rerunning the calculations.
## Interpret Period Accuracy {: #interpret-period-accuracy }
When calculations are complete, DataRobot displays a table reflecting results based on the validation data. You can also generate over time histograms.
Field | Description
----- | -----------
Period name | The name of the period, identified by column 2 in the period file.
Observations | The number of data points that fall within the defined period. The period is based on the applied period file and filters (backtest, series, and forecast distance, as applicable).
Earliest/latest date | The first and last timestamp found in the period.
Predicted/Actual | The average predicted and actual values observed in the selected backtest.
Metric `<metric>`* | The performance of the observations in the period. In other words, if you were to create a project with just this period in the validation data, this is the value that would display on the Leaderboard. The red/green values below the score indicate the percentage variance from the Leaderboard score. Note that whether red/green (up/down) indicates a better or worse score depends on the metric type.
**Plot over time** | A link to display the Over Time chart for the selected period. Click and scroll down to see the histogram.
\* You can change the reported metric using the Leaderboard dropdown:

When you click **Plot over time**, the histogram shows a point for each observation in the selected period, visualizing actual and predicted values. This helps to understand how the model performs on each row of the period of interest.

## Considerations {: #considerations }
Consider the following when working with **Period Accuracy**:
- Only the first 1000 series are computed.
- Maximum period file size is 5MB. An unlimited number of files are allowed.
- Insight export is not supported.
|
ts-period-accuracy
|
---
title: Time series public preview features
description: Read preliminary documentation for time series features currently in the DataRobot public preview pipeline.
section_name: Time Series
maturity: public-preview
---
# Time series public preview features {: #time-series-public-preview-features }
{% include 'includes/pub-preview-notice-include.md' %}
Public preview for... | Describes...
----- | ------
[Period Accuracy](ts-period-accuracy) | View model performance over periods within the training dataset.
[Time series model package prediction intervals](pp-ts-pred-intervals-mlpkg) | Export time series models with prediction intervals in model package (.mlpkg) format.
[Batch predictions for TTS and LSTM models](pp-ts-tts-lstm-batch-pred) | Make batch predictions for Traditional Time Series (TTS) and Long Short-Term Memory (LSTM) models.
|
index
|
---
title: Time series model package prediction intervals
description: Export time series models with prediction intervals in model package (.mlpkg) format.
section_name: Time Series
maturity: public-preview
---
# Time series model package prediction intervals {: #time-series-model-package-prediction-intervals }
!!! info "Availability information"
Time series prediction interval support for model packages, available as a public preview feature, is off by default. Contact your DataRobot representative or administrator for information on enabling the feature.
**Feature flag**: Enable computation of all Time-Series Intervals for .mlpkg
With this public preview feature, you can enable the computation of a model's time series prediction intervals (from 1 to 100) during model package generation. To run a DataRobot time series model in a remote prediction environment, you download a model package (.mlpkg file) from the model's [deployment](#deployment-model-package-download) or the [Leaderboard](#leaderboard-model-package-download). In both locations, you can now choose to **Compute prediction intervals** during model package generation. You can then run prediction jobs with a [portable prediction server (PPS)](portable-pps) outside DataRobot.
!!! note
The **Compute prediction intervals** option is off by default because the computation and inclusion of prediction intervals can significantly increase the amount of time required to generate a model package.
## Deployment model package download {: #deployment-model-package-download }
To download a model package with prediction intervals from a deployment, ensure that your deployment supports model package downloads. The deployment must have a DataRobot build environment and an *external* prediction environment, which you can verify using the [**Governance Lens**](gov-lens) in the deployment inventory:

1. In the external deployment, click **Predictions > Portable Predictions**.
2. Click **Compute prediction intervals**, then click **Download model package (.mlpkg)**.
The download appears in the downloads bar when complete.

3. Once the PPS download completes, use the provided code snippet to launch the Portable Prediction Server with the downloaded model package.
## Leaderboard model package download {: #leaderboard-model-package-download }
To download a model package with prediction intervals from a model on the Leaderboard, you can use the **Predict > Deploy** or **Predict > Portable Predictions** tab.
=== "Deploy tab download"
To download from the **Predict > Deploy** tab, take the following steps:
1. Navigate to the model in the **Leaderboard**, then click **Predict > Deploy**.
2. Click **Compute prediction intervals**, and then click **Download .mlpkg**.
The download appears in the downloads bar when complete.

=== "Portable Prediction Server tab download"
!!! info "Availability information"
The ability to download a model package from the Portable Predictions tab depends on the [MLOps configuration](pricing) for your organization.
To download from the **Predict > Portable Predictions** tab, take the following steps:
1. Navigate to the model in the **Leaderboard**, then click **Predict > Portable Predictions**.
2. Click **Compute prediction intervals**, and then click **Download .mlpkg**.
The download appears in the downloads bar when complete.

3. Once the PPS download completes, use the provided code snippet to launch the Portable Prediction Server with the downloaded model package.
## PPS prediction interval configuration {: #pps-prediction-interval-configuraton }
After you've enabled prediction intervals for a model package and deployed the model to a portable prediction server, you can configure the prediction intervals percentile and exponential trend in the `.yaml` PPS configuration file or through the use of PPS environment variables. For more information on PPS configuration, see the [Portable Prediction Server](portable-pps) documentation.
!!! note
The environment variables below are only used if the YAML configuration isn't provided.
| YAML Variable / Environment Variable | Description | Type | Default |
|--------------------------------------|---------------|------|---------|
`prediction_intervals_percentile` / `MLOPS_PREDICTION_INTERVALS_PERCENTILE` | Sets the percentile to use when defining the prediction interval range. | integer | `80` |
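For example, a minimal `.yaml` fragment setting the interval percentile might look like the following (the key name comes from the table above; the file's location and any other keys depend on your PPS setup):

```yaml
# PPS prediction interval configuration (sketch)
prediction_intervals_percentile: 95
```

Alternatively, if no YAML configuration is provided, set the `MLOPS_PREDICTION_INTERVALS_PERCENTILE` environment variable in the PPS container environment.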
|
pp-ts-pred-intervals-mlpkg
|
---
title: Create the modeling dataset
description: Understand how DataRobot's feature derivation process creates a new modeling dataset for time series projects.
---
# Create the modeling dataset {: #create-the-modeling-dataset }
The time series modeling framework extracts relevant features from time-sensitive data, modifies them based on user-configurable forecasting needs, and creates an entirely new dataset derived from the original. DataRobot then uses standard, as well as time series-specific, machine learning algorithms for model building. This section describes:
* [Reviewing data and new features](#review-data-and-new-features)
* [Understanding the Feature Lineage tab](ts-leaderboard#feature-lineage-tab)
* [Downsampling in time series projects](#downsampling-in-time-series-projects)
* [Handling missing values](#handle-missing-values)
You cannot influence the *type* of new features DataRobot creates, but the application adds a variety of new columns including (but not limited to): average value over *x* days, max value over past *x* days, median value over *x* days, rolling most frequent label, rolling entropy, average length of text over *x* days, and many more.
Additionally, with time series date/time partitioning, DataRobot scans the configured rolling window and calculates summary statistics (not typical with traditional partitioning approaches). At prediction time, DataRobot automatically handles recreating the new features and verifies that the framework is respected within the new data.
Time series modeling features are the features derived from the original data you uploaded but with rolling windows applied—lag statistics, window averages, etc. Feature names are based on the original feature name, with parenthetical detail to indicate how it was derived or transformed. Clicking any derived feature displays the same type of information as an original feature. You can look at the [**Importance**](model-ref#data-summary-information) score, calculated using the same algorithms as with traditional modeling, to see how useful (generally, very) the new features are for predicting. See the [time series feature engineering reference](feature-eng) for a list of operators used and feature names created by the feature derivation process.
## Review data and new features {: #review-data-and-new-features }
Once you click **Start**, DataRobot derives new time series features based on your time series configuration, creating the time series modeling data. By default DataRobot displays the **Derived Modeling Data** panel, a feature summary that displays the settings used for deriving time series features, dataset expansion statistics, and a link to view the derivation log. (To see your original data, click **Original Time Series Data**.)
When [sampling](#downsampling-in-time-series-projects) is required, that information is also included. Click **View more info** to see the derivation log, which lists the decisions made during feature creation and is downloadable.

Within the log, you can see that every candidate derived feature is assigned a priority level (`Generating feature "Sales (35 day mean)" from "Sales" (priority: 11)` for example). When deciding which of the candidates to keep after time series feature derivation completes, DataRobot picks a priority threshold and excludes features outside that threshold. When a candidate feature is removed, the feature derivation log displays the reason:
`Removing feature "y (1st lag)" because it is a duplicate of the simple naïve of target`
or
`Removing feature "y (42 row median)" because the priority (7) is lower than the allowed threshold (7)`
## Downsampling in time series projects {: #downsampling-in-time-series-projects }
Because feature derivation creates so many additional features, the dataset size can grow exponentially. Downsampling is a technique DataRobot applies to ensure that the derived modeling dataset is manageable and optimized for speed, memory use, and model accuracy. (This sampling method is not the same as the [smart downsampling](smart-ds) option, which downsamples the majority class (for classification) or zero values (regression).)
Growth in a time series dataset is based on the number of columns and the length of the forecast window (i.e., the number of forecast distances within the window). The derived features are then sampled across the backtests and holdout and the sampled data provides the basis of related insights (Leaderboard scores, Forecasting Accuracy, Forecasting Stability, Feature Effects, Feature Over Time). DataRobot reports that information in the additional info modal accessible from the **Derived Modeling Data** panel:

With multiseries modeling, the number of series, as well as the length of each series, also contribute to the number of new features in the derived dataset. [Multiseries projects](multiseries#sampling-in-multiseries-projects) have a slightly different approach to sampling; the **Series Insights** tab does not use the sampled values because the result may be too few values for accurate representation.
## Handle missing values {: #handle-missing-values }
DataRobot handles missing value imputation differently with time series projects. The following describes the process.
*Consider the following from a time series dataset, which is missing a row:*
```
Date,y
2001-01-01,1
2001-01-02,2
2001-01-04,4
2001-01-05,5
2001-01-06,6
```
In this example, the row for `2001-01-03` is missing.
For ARIMA models, DataRobot attempts to make the time series more regular and uses forward filling. This applies when the Feature Derivation Window and Forecast Window use a time unit. When these windows are row-based, DataRobot skips the history regularization process (no forward filling) and keeps the original data.
For non-ARIMA models, DataRobot uses the data as is and does not allow modeling to start if it is too irregular.
*Consider the following—the dataset is missing a target or date/time value:*

```
Date,y
2001-01-01,1
2001-01-02,2
,3
2001-01-04,
2001-01-05,5
```
In this example, the third row is missing `Date` and the fourth is missing `y`. DataRobot drops those rows, since they have no target or date/time value.
Consider the case of missing feature values, in this example `2001-01-02,,2`:
```
Date,feat1,y
2001-01-01,1,1
2001-01-02,,2
2001-01-03,3,3
2001-01-04,4,4
```
* At the feature level, the derived features (rolling statistics) will ignore the missing value.
* At the blueprint level, it is dependent on the blueprint. Some blueprints can handle a missing feature value without any issue. For others (for example, some ENET-related blueprints), DataRobot may use median value imputation for the missing feature value.
There is one additional special circumstance—the naïve prediction feature, which is used for differencing. In this case, DataRobot uses a seasonal forward fill (which falls back on median if not available).
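The row-dropping and forward-fill behaviors described above can be sketched as follows (illustrative only, not DataRobot's implementation; rows are modeled as tuples with `None` for missing values):

```python
def drop_incomplete_rows(rows):
    """Drop rows missing the date or the target, as described above.
    `rows` is a list of (date, target) tuples; None marks a missing value."""
    return [(d, y) for d, y in rows if d is not None and y is not None]

def forward_fill(series):
    """Forward-fill missing values, as in ARIMA history regularization
    (sketch; values before the first observation remain None)."""
    filled, last = [], None
    for v in series:
        if v is not None:
            last = v
        filled.append(last)
    return filled
```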
|
ts-create-data
|
---
title: Data prep for time series
description: For time series projects, DataRobot's data quality detection evaluates whether the time step is irregular and provides tools to correct the dataset.
---
# Data prep for time series {: #data-prep-for-time-series }
When starting a time series project, DataRobot's data quality detection evaluates whether the [time step is irregular](data-quality#irregular-time-steps). This can result in significant gaps in some series and precludes the use of seasonal differencing and cross-series features that can improve accuracy. To avoid the inaccurate rolling statistics these gaps can cause, you can:
* Let DataRobot use row-based partitioning.
* Fix the gaps with the time series data prep tool by using duration-based partitioning.
Generally speaking, the data prep tool first aggregates the dataset to the selected time step, and, if there are still missing rows, imputes the target value. It allows you to choose aggregation methods for numeric, categorical, and text values. You can also use it to explore modeling at different time scales. The resulting dataset is then published to the **AI Catalog**.
## Access the data prep tool {: #access-the-data-prep-tool }
Access the data prep tool from the [Start screen](#access-data-prep-from-a-project) in a project or directly from the [AI Catalog](#access-data-prep-from-the-ai-catalog).
The method to modify a dataset in the AI Catalog is the same regardless of whether you start from the Start screen or from the catalog.
### Access data prep from a project {: #access-data-prep-from-a-project }
From the Start screen, the data prep tool becomes available after initial setup (target, date/time feature, forecasting or nowcasting, series ID, if applicable). Click **Fix Gaps For Duration-Based** to use the tool when DataRobot detects that the time steps are irregular:

Or, even if the time steps are regular, use it to apply dataset customizations:

Click **Time series data prep**.
!!! warning
A message displays to warn you that the current project and any manually created feature transformations or feature lists in the project are lost when you access the time series data prep tool from the project:

Click **Go to time series data prep** to open and modify the dataset in the AI Catalog. Click **Cancel** to continue working in the current project.
### Access data prep from the AI Catalog {: #access-data-prep-from-the-ai-catalog }
In the [AI Catalog](catalog), open the dataset from the inventory and, from the menu, select **Prepare time series dataset**:

For the **Prepare time series dataset** option to be enabled for a dataset in the AI Catalog, you must have permission to modify it. Additionally, the dataset must:
* Have a status of static or Spark.
* Have at least one date/time feature.
* Have at least one numeric feature.
## Modify a dataset {: #modify-a-dataset }
Use the following mechanisms to modify a dataset using the time series data prep tool:
* Set [manual options](#set-manual-options) using dropdowns and selectors to generate code that sets the aggregation and imputation methods.
* Optionally, modify the [Spark SQL query](#edit-the-spark-sql-query) generated from the manual settings. (To instead create a dataset from a blank Spark SQL query, use the **AI Catalog**'s [**Prepare data with Spark SQL**](spark) functionality.)
### Set manual options {: #set-manual-options }
Once you [open time series data prep](#access-the-data-prep-tool), the **Manual settings** page displays.

Complete the fields that will be used as the basis for the imputation and aggregation that DataRobot computes. You cannot save the query or edit it in Spark SQL until all required fields are complete. (See additional information on [imputation](#imputing-values), below.)
| Field | Description | Required? |
|-----------|--------------|---------------|
| Target feature | Numeric column in the dataset to predict. | Yes |
| Primary date/time feature | Time feature used as the basis for partitioning. Use the dropdown or select from the identified features. | Yes |
| [Series ID](multiseries) | Column containing the series identifier, which allows DataRobot to process the dataset as a separate time series.| No |
| Series start date (only available once series ID is set) | Basis for the series start date, either the earliest date for each series (per series) or the earliest date found for any series (global). | Defaults to per-series |
| Series end date (only available once series ID is set) | Basis for the series end date, either the last entry date for each series (per series) or the latest date found for any series (global). | Defaults to per-series |
| Target and numeric feature aggregation & imputation | Aggregate the target using either **mean & most recent** or **sum & zero**. In other words, the time step's aggregation is created using either the sum or the mean of the values. If there are still missing target values after aggregating, those values are imputed with zero (if `sum`) or the most recent value (if `mean`). | Yes |
| Categorical feature aggregation & imputation | Aggregate categorical features using the most frequent value or the last value within the aggregation time step. Imputation only applies to features that are constant within a series (for example, the cross-series groupby column), which are imputed so that they remain constant within the series. | Yes |
| Text feature aggregation & imputation (only available if text features are present.) | Choose `ignore` to skip handling of text features or aggregate by:<br />• `most frequent` text value <br />• `last` text value<br />• `concatenate all` text values<br />• `total text length`<br />• `mean text length` | Yes |
|**Time Step:** The components—frequency and unit—that make up the detected median time delta between rows in the *new* dataset. For example, 15 (frequency) days (unit). | :~~: | :~~: |
| Frequency | Number of (time) units that comprise the time step. | Defaults to detected |
| Unit | Time unit (seconds, days, months, etc.), selected from the dropdown, that comprises the time step. | Defaults to the detected unit |
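The aggregation and imputation behavior described in the table can be illustrated with a minimal pandas sketch. This is not DataRobot's implementation—the column names and values are hypothetical—but it shows what the **mean & most recent** method does: rows sharing a time step are averaged, and empty time steps are imputed with the most recent value.

```python
import pandas as pd

# Hypothetical irregular single-series data; column names are illustrative.
df = pd.DataFrame({
    "date": pd.to_datetime(["2024-01-01", "2024-01-01", "2024-01-04"]),
    "sales": [10.0, 14.0, 9.0],
})

# Aggregate to a 1-day time step: rows that share a time step are averaged.
agg = df.set_index("date").resample("1D")["sales"].mean()

# "Most recent" imputation fills time steps that had no rows at all.
imputed = agg.ffill()

print(imputed.tolist())  # [12.0, 12.0, 12.0, 9.0]
```

Here January 2 and 3 had no rows, so both take the January 1 mean (12.0); choosing **sum & zero** would instead sum within each step and fill the gaps with `0`.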
Once all required fields are complete, three options become available:
* Click **Run** to [preview the first 10,000 results](spark#preview-results) of the query (the resulting dataset).
!!! note
The preview can fail to execute if the output is too large, instead returning an alert in the console. You can, however, still save the dataset to the AI Catalog.
* Click **Save** to create a new Spark SQL dataset in the **AI Catalog**. DataRobot opens the **Info** tab for that dataset; the dataset is available to be used to create a new project or for any other options available to a Spark SQL dataset in the **AI Catalog**. If [50% or more of the rows are imputed](#imputation-warning), DataRobot provides a warning message.
* Click **Edit Spark SQL query** to open the Spark SQL editor and modify the initial query.
### Edit the Spark SQL query {: #edit-the-spark-sql-query }
When you complete the **Manual** settings and click **Edit Spark SQL query**, DataRobot populates the edit window with an initial query based on the manual settings. The script is customizable, just like any other Spark SQL query, allowing you to create a new dataset or a new version of the existing dataset.

When you have finished making changes, click **Run** to preview the results. If satisfied, click **Save** to add the new dataset to the **AI Catalog**, or click **Back to manual settings** to return to the dropdown-based entry. Because switching back to **Manual** settings from the Spark SQL query configuration discards all Spark SQL dataset preparation, you can use it as a method of undoing modifications. If [50% or more of the rows are imputed](#imputation-warning), DataRobot provides a warning message.
!!! note
If you update a query and then try to save it as a new AI Catalog Spark SQL item, you will not be able to use it for predictions. DataRobot provides a warning message in this case. You can choose to save the updated query, save with the initial query, or close the window without taking action. If you save the updated query, the dataset is saved as a standard Spark dataset.
### Imputing values {: #imputing-values }
Keep in mind these imputation considerations:
* *Known in advance*: Because the time series data prep tool imputes target values, there is a risk of target leakage. This is due to a correlation between the imputation of target and feature values when features are [known in advance (KA)](ts-adv-opt#set-known-in-advance-ka). All KA features are [checked for imputation leakage](data-quality#imputation-leakage) and, if leakage is detected, removed from KA before running time series feature derivation.
* *Numeric features in a series*: When numeric features are constant within a series, handling them with sum aggregation can cause issues. For example, if dates in an output dataset will aggregate multiple input rows, the result may make the numeric column ineligible to be a cross series groupby column. If your project requires that the value remain constant within the series instead of aggregated, [convert the numeric](feature-transforms#single-feature-transformations) to categorical prior to running the data prep tool.
#### Feature imputation {: #feature-imputation}
It is best practice to review the dataset before using it for training or predictions to ensure that changes will not impact accuracy. To do this, select the "new" dataset in the AI Catalog and open the [**Profile**](catalog-asset#view-asset-data) tab. A new column will be present—`aggregated_row_count`. Scroll through the column values; a `0` in a row indicates the value was imputed.

Notice that other, non-target features also have no missing values (with the possible exception of leading values at the start of each series where there is no value to forward fill). DataRobot's feature imputation uses forward filling to enable imputation for all features (target and others) when applying time series data prep.

#### Imputation warning {: #imputation-warning }
When changes made with the data prep tool result in more than 50% of target rows being imputed, DataRobot alerts you with both:
* An alert on the catalog item's **Info** page

* A badge on the dataset in the AI Catalog inventory:

## Build models {: #build-models }
Once the dataset is prepped, you can use it to create a project. Notice that when you upload the new dataset from the AI Catalog, after EDA1 completes, the warning indicating irregular time steps is gone and the forecast window setting shows duration, not rows.
To ensure that there is no target leakage from [known in advance features](ts-adv-opt#set-known-in-advance-ka) due to imputation during data prep, DataRobot runs an imputation leakage check. The check is run during EDA2 and is surfaced as part of the [data quality assessment](data-quality#imputation-leakage).

The check looks at the KA features to see if they leak information about which rows were imputed. It is similar to the target leakage check but instead uses `is_imputed` as the target. If leakage is found for a feature, that feature's known in advance status is removed and the project proceeds.
## Make predictions {: #make-predictions }
When a project is created from a dataset that was modified by the data prep tool, you can automatically apply the transformations to a corresponding prediction dataset. On the [**Make Predictions**](predict) tab, toggle the option to make your selection:

When on, DataRobot applies the same transformations to the dataset that you upload. Click **Review transformations in AI Catalog** to view a read-only version of the manual and Spark SQL settings, for example:

Once the dataset is uploaded, configure the [forecast settings](ts-customization#forecast-settings). (Use *forecast point* to select a specific date or *forecast range* to predict on all forecast distances within the selected range.)
!!! note
You are required to specify a forecast point for forecast point predictions. DataRobot does not apply the most recent valid timestamp (the default when not using the tool).

When you deploy a model built from a prepped dataset, the [**Make Predictions**](batch-pred#make-predictions-with-a-time-series-deployment) tab in the **Deployments** section also allows you to apply time series data prep transformations.
See also the [considerations](ts-consider#time-series-data-prep) for working with the time series data prep tool.
<!-- filename: ts-data-prep -->
---
title: Restore features removed by reduction
description: DataRobot then runs a feature reduction algorithm, removing features it detects as low impact, but you can add these features back into your available derived modeling data.
---
# Restore features removed by reduction {: #restore-features-removed-by-reduction }
In any time series project, DataRobot generates derived features based on the window settings at project start. DataRobot then runs a feature reduction algorithm, removing features it detects as low impact. Sometimes, however, the algorithm may remove some important features during the [feature reduction process](ts-adv-opt#use-supervised-feature-reduction)—features that you want included in the generated feature lists or evaluated for feature impact. Some examples of this are certain calendar-derived features or a particular numeric statistic of a financial variable. After [EDA2](eda-explained#eda2) completes, you can add these features back into your available derived modeling data.
!!! note
Even if you disable supervised reduction in advanced options, DataRobot may still remove features based on extractor priority. These features can also be restored with the restoration process.
## Identify removed features {: #identify-removed-features }
The easiest way to determine whether features were removed in the feature reduction process is to review the [feature derivation log](ts-create-data#review-data-and-new-features) after EDA2 completes.

Depending on the dataset size, it is likely you need to download the log. This is because the reduction process runs last (is at the end of the file) and may be truncated from the preview.
## Restore pruned features {: #restore-pruned-features }
The following describes how to restore removed features (identified from the derivation log) to the modeling dataset. You can use this option repeatedly, until you have restored all features or have reached the maximum supported features, which may be constrained by data ingest limits.
1. On the **Data > Derived Modeling Data** tab, select **Restore pruned features** from the menu:

2. In the **Restore pruned features** window, begin typing to select features for restoration. DataRobot indicates the number of features that can be added back.

3. Click **Add features** when all desired features are listed. DataRobot reports progress:

And then success:

4. To verify the restoration, click the index column. DataRobot re-sorts the features, listing the restored features first and marking them with a restoration icon.

!!! note
Feature restoration does not change the feature lists created during EDA2. To use the restored features for modeling, [create new feature lists](#create-new-feature-lists).
## Create new feature lists {: #create-new-feature-lists }
When features are restored, they are not added into existing feature lists. To use the new features as part of your modeling dataset, you must create new feature lists that incorporate them. For example:
1. From the **Derived Modeling Data** tab, select the best performing feature list. Check the **Feature Name** box to select all features in that list.

2. Change to the **All Time Series Features** list (selections from the previous action are preserved).
3. Select the restored features you would like to add.
4. Click **Create feature list** to add the new list.
Once one or more new lists are created that contain the restored features, build models with them (individually or by rerunning Autopilot). Compare model performance between lists to see if there is value in including the restored features as part of the model to use for making predictions.
<!-- filename: restore-features -->
---
title: Time series modeling data
description: This topic describes the creation and management of the modeling dataset that is a result of the feature derivation process.
---
# Time series modeling data {: #time-series-modeling-data }
This topic describes the creation and management of the modeling dataset that is a result of the feature derivation process.
Topic | Describes...
----- | ------
[Create the modeling dataset](ts-create-data) | How DataRobot's feature derivation process creates a new modeling dataset for time series projects.
[Data prep for time series](ts-data-prep) | Using the time series data prep tool to correct data quality time step issues.
[Restore features removed by reduction](restore-features) | Adding features back into your available derived modeling data after running EDA2.
<!-- filename: index -->
---
title: Date/time partitioning advanced options
description: Date/time partitioning sets up the underlying structure that supports time-aware modeling.
---
# Date/time partitioning advanced options {: #configure-ts-date-time-partitioning-advanced-options }
DataRobot's default partitioning settings are optimized for the specific dataset and target feature selected. For most users, the defaults that DataRobot selects provide optimized modeling. See the time series [customization documentation](ts-customization) to gain an understanding of how DataRobot calculates partitions and advice on setting window sizes.
If you do choose to change partitioning, the content below describes changing backtest partitions.
## Advanced options {: #advanced-options }
Expand the **Show Advanced options** link to set details of the partitioning method. When you enable time-aware modeling, **Advanced options** opens to the date/time partitioning method by default. The **Backtesting** section of date/time partitioning provides tools for configuring backtests for your time-aware projects.

{% include 'includes/date-time-include-1.md' %}
{% include 'includes/date-time-include-6.md' %}
<!-- filename: ts-date-time -->
---
title: Customizing time series projects
description: Describes how DataRobot calculates training partitions and the partitioning requirements for time series modeling.
---
# Customizing time series projects {: #customizing-time-series-projects }
DataRobot provides default window settings ([Feature Derivation](glossary/index#feature-derivation-window) and [Forecast](glossary/index#forecast-window) windows) and [partition sizes](ts-date-time) for time series projects. These settings are based on the characteristics of the dataset and can generally be left as-is—they will result in robust models.
If you choose to modify the default configurations, keep in mind that setting up a project requires matching your actual prediction requests with your work environment. Modifying project settings out of context to increase accuracy independent of your use case often results in disappointing outcomes.
The following reference material describes how DataRobot determines defaults, requirements, and a selection of other settings, specifically covering:
* Guidance on [setting window values](#set-window-values).
* Understanding backtests, including:
* [Backtest settings](#setting-backtests).
* [Backtest importance](#backtest-importance).
* The logic behind [row selection and usage](#deep-dive-default-partition) from training data.
* Understanding [duration and row count](#duration-and-row-count).
* How DataRobot handles [training and validation](#handle-training-and-validation-folds) folds.
* How to [change the training period](#change-the-training-period).
Additionally, see the guidance on [model retraining](ts-predictions#retrain-before-deployment) before deploying.
!!! tip
Read the [real-world example](#window-and-gap-settings-example) to understand how gaps and window settings relate to data availability and production environments.
## Set window values {: #set-window-values }
Use the [Feature Derivation Window](glossary/index#fdw) to configure the periods of data that DataRobot uses to [derive features](ts-create-data) for the modeling dataset.

On the left, the Feature Derivation Window (1) constrains the time history used to derive features. That is, it defines how many values to look at, which determines how much data you need to provide to make a prediction. (It does _not_ constrain the time history used for modeling—that is determined by the training partitions.) In the example above, DataRobot will use the most recent 35 days of data.
DataRobot auto-suggests values for the windows based on the dataset's time unit. For example, a common period for the Feature Derivation Window is a roughly one-month setting ("35 to 0 days"), or for minute-based data, an hourly span (60 to 0 minutes). These settings provide enough coverage to view some important lags but not so much that irrelevant lagged features are derived. The feature reduction process removes those lags that don't produce good information anyway, so creating a too-wide Feature Derivation Window increases build time without necessarily adding benefit. (DataRobot creates a maximum of five lags, regardless of the Feature Derivation Window size, in order to limit the window used for rolling statistics.)
Unless you are dealing with data in which the time unit is yearly, it is very rare that what happened in, for example, February of last year is relevant to what will happen in February of this year. There are instances where certain past-year dates are relevant; in those cases, using a calendar is a better solution for determining yearly importance and yearly lags. For daily data, the primary seasonality is more likely to be weekly, so a Feature Derivation Window of 365 days is not necessary and not likely to improve results.
??? info "What is a lag?"
A lagged feature contains information about that feature from previous time steps. Each lag shifts the feature back in time, so that at any time step you can see the value of that feature from the past.
For example, if you have the following series:
Date | Target
---- | ------
8/1 | 10
8/2 | 11
8/3 | 12
The resulting lags would be:
Date | Target | Lag1 | Lag2
---- | ------ | ---- | ----
8/1 | 10 | NaN | NaN
8/2 | 11 | 10 | NaN
8/3 | 12 | 11 | 10
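The lag table above can be reproduced with a short pandas sketch (illustrative only—DataRobot derives lags as part of its own feature derivation process):

```python
import pandas as pd

# The series from the table above.
df = pd.DataFrame({"date": ["8/1", "8/2", "8/3"], "target": [10, 11, 12]})

# Each lag shifts the target back one more time step, so every row can
# see the target's value from the past.
df["lag1"] = df["target"].shift(1)
df["lag2"] = df["target"].shift(2)

print(df["lag1"].tolist())  # [nan, 10.0, 11.0]
print(df["lag2"].tolist())  # [nan, nan, 10.0]
```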
!!! tip "Tip"
Ask yourself, "how much data do I realistically have access to that is _also_ relevant at the time I am making predictions?" Just because you have, for example, six years of data, that does not mean you should widen the Feature Derivation Window to six years. If your Forecast Window is relatively small, your Feature Derivation Window should be compatibly reasonable. With time series data, because new data is always flowing in, feeding "infinite" history will not result in more accurate models.
Keep in mind the distinction between the Feature Derivation Window and the training window. It might be very reasonable to train on six years of data, but the _derived_ features should typically focus on the most recent data on a time scale that is similar to the Forecast Window (for example, a few multiples of it).
On the right, the Forecast Window (2) sets the time range of predictions that the model outputs. The example configures DataRobot to make predictions on days 1 through 7 after the [forecast point](glossary/index#forecast-point). The time unit displayed (days, in this case) is based on the unit detected when you selected a date/time feature.
!!! tip "Tip"
It is not uncommon to think you need a larger Forecast Window than you actually need. For example, if you only need 2 weeks of predictions and a "comparison result" from 30 days ago, it is better practice to configure your operationalized model for two weeks and create a separate project for the 30-day result.
Predicting from 1-30 days is suboptimal because the model will optimize to be as accurate as possible for each of the 1-30 predictions. In reality, though, you only need accuracy for days 1-14 and day 30. Splitting the project up ensures the model you are using is best for the specific need.
You can specify either the time unit detected or a number of rows for the windows. DataRobot calculates rolling statistics using that selection (e.g., `Price (7 days average)` or `Price (7 rows average)`). If the time-based option is available, you should use that. Note that when you configure for row-based windows, DataRobot does not detect common event patterns or seasonalities. DataRobot provides special handling for datasets with irregularly spaced date/time features, however. If your dataset is irregular, the window settings default to row-based.
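The difference between time-based and row-based rolling statistics is easy to see on an irregular series. The sketch below (hypothetical data, not DataRobot's featurizer) computes a "2 days average" and a "2 rows average"; they diverge at the gap, where the time-based window finds only one recent point but the row-based window still reaches back two rows:

```python
import pandas as pd

# Hypothetical daily price series with a gap between 1/2 and 1/5.
idx = pd.to_datetime(["2024-01-01", "2024-01-02", "2024-01-05"])
price = pd.Series([10.0, 20.0, 30.0], index=idx)

# Time-based window: "2 days average" looks back 2 calendar days.
by_time = price.rolling("2D").mean()

# Row-based window: "2 rows average" looks back 2 rows regardless of dates.
by_rows = price.rolling(2, min_periods=1).mean()

print(by_time.tolist())  # [10.0, 15.0, 30.0]
print(by_rows.tolist())  # [10.0, 15.0, 25.0]
```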
!!! tip "When to use row-based mode"
Row-based mode (that is, using [**Row Count**](#duration-and-row-count)) is a method for modeling when the [dataset is irregular](ts-flow-overview#time-steps).
You can change these values (and notice that the visualization updates to reflect your change). For example, you may not have real-time access to the data or don't want the model to be dependent on data that is too new. In that case, change the Feature Derivation Window (FDW in the calculation below)—move it to the end of the most recent data that will be available. If you don't care about tomorrow's prediction because it is too soon to take action on, change the Forecast Window (FW) to the point from which you want predictions forward. This changes how DataRobot optimizes models and ranks them on the Leaderboard, as it only compares for accuracy against the configured range.
!!! note "Deep dive"
DataRobot's default suggestion will always be `FDW=[-n, -0]` and `FW=[1, m]` time units/rows. Consider adjusting the -0 and 1 so that DataRobot can optimize the models to match the data that will be available relative to the forecast point when the model is in production.
### Understanding gaps {: #understanding-gaps }
When you set the Feature Derivation Window and Forecast Window, you create time period gaps (not to be confused with the [**Gap Length** setting](#gap-length) gaps):
* _Blind history_ is the time window between when you actually acquire/receive data and when the forecast is actually made. Specifically, it extends from the most recent time in the Feature Derivation Window to the forecast point.
* _Can't operationalize_ represents the period of time that is too near-term to be useful. Specifically, it extends from immediately after the Forecast Point to the beginning of the Forecast Window.

#### Blind history example {: #blind-history-example }
Blind history accounts for the fact that the predictions you are making are using data that has some delay. Setting the blind history gap is mainly relevant to the state of your data _at the time of predictions_. If you misunderstand where your data comes from, and the associated delay, it is likely that your actual predictions will not be as accurate as the model suggested they would be.
Put another way: you can look at your data and say “Ah, I have two years of daily data, and for any day in my data set there is always a previous day of data, so I have no gap!” From the _training_ perspective, this is true—it is unlikely that you will have a visible or obvious gap in your data if you are viewing it from the perspective of "now" relative to your _training_ data.
The key is to understand where your actual, prediction data comes from. Consider:
* Your company processes all data available in a large batch job that runs once weekly.
* You collect historical data and train a model, but are unaware of this delay.
* You build a model that makes 2 weeks of projections for the company to act on.
* Your model goes into production, but you can't figure out why the predictions are so "off."
The hard thing to understand here is this:
***At any given time, if the most reliable data you have available is _X_ days old, nothing newer than that can be reliably used for modeling.***
??? "Let's get more specific about how to set the gap"
If your company:
* Collects data every Monday, but that data isn’t processed and available for making predictions until Friday—“blind history” = 5 days.
* Makes real-time predictions, where the data you receive comes in and your model predicts on the next point—“blind history” = 0 days.
* Collects data every day, the processing for predictions takes a day (or two days) to appear, and data is logged as it is _processed_—“blind history” = 1 (or 2) days.
* Collects data every day, and the data takes the same 1 (or 2) days to process, but everything is retroactively dated—“blind history” = 0 days.
You can also use blind history to cut out short term biases. For example, if the previous 24 hours of your data is very volatile, use the blind gap to ignore that input.
#### Can't operationalize example {: #cant-operationalize-example }
Let's say you are making predictions for how much bread to stock in your store and there is a period of time that is too near-term to be useful.
* It takes 2 days to fulfill a bread order (have the stock arrive at your store).
* Predictions for 1 and 2 days out aren't useful—you cannot take any action on those predictions.
* The "can't operationalize" gap = 2 days.
* With this setting, the forecast is 3 days from when you generate the prediction so that there is enough time to order and stock the bread.
## Setting backtests {: #setting-backtests }
Backtesting simulates the real prediction environments a model may see. Use backtests to evaluate accuracy given the constraints of your use case. Be sure not to build backtests in a way that gives you accuracy but are not representative of real life. (See the description of [date/time partitioning](ts-date-time) for details of the configuration settings.) The following describes considerations when changing those settings.

The following sections describe:
* Setting [**Validation Length**](#validation-length)
* Setting [**Number of Backtests**](#number-of-backtests)
* Setting [**Gap Length**](#gap-length)
* [Backtest importance](#backtest-importance)
### Validation length {: #validation-length }
Validation length describes the frequency with which you retrain your models. It is the most important setting to factor in if you change the default configuration.
For example, if you plan to retrain every 2 weeks, set the validation length to 14 days; if you will retrain every quarter, set it to 3 months or 90 days. Setting a validation length to an arbitrary number that doesn’t match your retraining plan will not provide the information you need to understand model performance.
### Number of backtests {: #number-of-backtests }
Setting the number of backtests is somewhat of a "gut instinct." The question you are trying to answer is "What do I need to feel comfortable with the predictions based on my retraining schedule?" Ultimately, you want a number of backtests such that the validation lengths cover, at a minimum, the full forecast window. Coverage beyond this depends on what time period you need to feel comfortable with the performance of the model.
While more backtests provide better validation of the model, they take longer to train, and you will likely run into a limit on how many backtests you can configure based on the amount of historical data you have. For a given validation duration, the training duration will shrink as you increase the number of backtests, and at some point the training duration may get too small.
Select the validation duration based on the retraining schedule and then select the number of backtests to balance your confidence in the model against the time to train and availability of training data to support more backtests.
If you retrain often, you shouldn't care if the model performs differently across different backtests, as you will be retraining on more recent data more often. In this case you can have fewer backtests, maybe only Backtest 1. But if you expect to retrain less often, volatility in backtest performance could impact your model's accuracy in production. In this case, you want more backtests so that you get models that are validated to be more stable across time. Backtesting assures that you select a model that generalizes well across different time periods in your historical data. The longer you plan to leave a model in production before retraining, the more of a concern this is.
Also, setting the number depends on whether the Forecast Distance is less or more than the retraining period.
If... | Number of backtests equals...
----- | -------------------
`FW < retraining period` | The number of retraining periods over which you want to minimize volatility. While volatility should always be low, consider the _minimum_ time over which you want to minimize it.
* Example: You have a Forecast Window of one week and will retrain each month. Based on historical sales, you want to feel comfortable that the model is stable over a quarter of the year. Because `FW = [1,7]` and the retraining period is 1M, select the validation duration to match that. To be comfortable that the model is stable over 3M, select 3 backtests.
If the Forecast Window is longer than the retraining period, you can consider the previous example but you also want to make sure you have enough backtests to account for the entire Forecast Window.
If... | Number of backtests equals...
----- | -------------------
`FW > retraining period` | The number of backtests from above, multiplied by the ratio of the Forecast Window to the retraining period.
* Example: You have a Forecast Window of 30 days and will retrain every 15 days. You need a minimum of two backtests to validate that. To feel comfortable that the model is stable over a quarter of the year, like the last example, use six backtests.
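One way to turn this guidance into arithmetic is sketched below. This is an illustration consistent with the two examples above, not DataRobot's algorithm: you need enough backtests for the validation partitions to cover both the full Forecast Window and the period over which you want to verify stability.

```python
import math

def suggested_backtests(fw_days, validation_days, stability_days):
    """Illustrative heuristic: cover the full Forecast Window and the
    stability period with validation partitions of the given length."""
    cover_fw = math.ceil(fw_days / validation_days)
    cover_stability = math.ceil(stability_days / validation_days)
    return max(cover_fw, cover_stability)

# FW of 7 days, monthly validation/retraining, quarterly stability -> 3.
print(suggested_backtests(7, 30, 90))
# FW of 30 days, retraining every 15 days, quarterly stability -> 6.
print(suggested_backtests(30, 15, 90))
```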
### Gap Length {: #gap-length }
The Gap Length set as part of date/time partitioning (not to be confused with the [blind history or "can't operationalize"](#understanding-gaps) periods) helps to address the issue of delays in getting a model to production. For example, if you train a model and it takes five days to get the model running in production, then you would want a gap of five days. Ask yourself: as of today, how many days old is the most recent data point that contains _actual_ and _usable_ data? Few companies have the capacity to train, deploy, and begin using the deployments immediately in their production environments.
### Backtest importance {: #backtest-importance }
All backtests are not equal.

**Backtest 1** is the most important, because when you set up a project you are trying to simulate reality. Backtest 1 simulates what happens if you train and validate on the data that would actually be available to you at the most recent time period you have available. This is the _most recent_ and likely _most relevant_ data that you would be training on, and so the accuracy of Backtest 1 is extremely important.
**Holdout** simulates what's going to happen when your model goes into production—the best possible "simulation" of what would be happening during the time you are using the model between retrainings. Accuracy is important, but should be used more as a guideline of what to expect of the model. Determine whether there are drastic differences in performance between Holdout and Backtest 1, as this could point to over- or under-fitting.
Other backtests are designed to give you confidence that the model performs reliably across time. While they are important, having a perfect “All backtests” score is a lot less important than the scores for Backtest 1 and Holdout. These tests can provide guidance on how often you need to retrain your data due to volatility in the performance across time. If Backtest 1 and Holdout have good/similar accuracy, but the other backtests have low accuracy, this may simply mean that you need to retrain the model more often. In other words, don't try to get a model that performs well on all backtests at the cost of a model that performs well on Backtest 1.
### Valid backtest partitions {: #valid-backtest-partitions }
When configuring backtests for time series projects, the number of backtests you select must result in the following minimum rows _for each backtest_:
* A minimum of 20 rows in the training partition.
* A minimum of four rows in the validation and holdout partitions.
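The minimums above can be expressed as a quick sanity check. The function below is an illustrative sketch (not part of DataRobot) for validating a planned backtest:

```python
def backtest_rows_ok(training_rows, validation_rows, holdout_rows):
    """Check the stated per-backtest minimums: at least 20 training
    rows and at least 4 rows in each of the validation and holdout
    partitions."""
    return (
        training_rows >= 20
        and validation_rows >= 4
        and holdout_rows >= 4
    )

# A backtest with 25 training, 5 validation, and 4 holdout rows is valid;
# one with only 3 validation rows is not.
print(backtest_rows_ok(25, 5, 4))
print(backtest_rows_ok(25, 3, 4))
```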
When setting up partitions:
* Consider the boundaries from a predictions point-of-view and make sure to set the appropriate [gap length](#gap-length).
* If you are collecting data in real time to make predictions, a feature derivation window that ends at `0` will suffice. However, most users find that the blind history gap is more realistically anywhere from 1 to 14 days.
### Deep dive: default partition {: #deep-dive-default-partition }
It is important to understand how DataRobot calculates the default training partitions for time series modeling before configuring backtest partitions. The following assumes you meet the [minimum row requirements](#valid-backtest-partitions) in the training partition of each backtest.
Note that for projects with more than 10 forecast distances, DataRobot excludes a number of rows from the training partition, with the amount determined by the forecast distance. As a result, the dataset requires more rows than the stated minimum; the number of additional rows depends on the depth of the forecast distance.
!!! note
DataRobot uses rows outside of the training partitions to calculate features as part of the time series [feature derivation process](feature-eng). That is, the rows removed are still used to calculate new features.
To reduce bias to certain forecast distances, and to leave room for validation, holdout, and gaps, DataRobot does not include all dataset rows in the backtest training partitions. The number of rows in your dataset that DataRobot _does not include_ in the training partitions is dependent on the elements described below.
Calculations described below using the following terminology:
Term | Link to description
---- | -------------------
BH | ["Blind history"](glossary/index#blind-history)
FD | [Forecast Distance](glossary/index#forecast-distance)
FDW | [Feature Derivation Window](glossary/index#feature-derivation-window)
FW | [Forecast Window](glossary/index#forecast-window)
CO | ["Can't operationalize"](glossary/index#cant-operationalize-period)
Holdout | [Holdout](data-partitioning)
Validation | [Validation](data-partitioning)
The following are calculations for the number of rows *not included* in training.
#### Single series and <= 10 FDs
For a single series with 10 or fewer forecast distances, DataRobot calculates excluded rows as follows:
FDW + 1 + BH + CO + Validation + Holdout
??? Info "In other words"
`Feature Derivation Window` +
`1` +
`Gap between the end of FDW and the start of FW` +
`Gap between training and validation` +
`Validation` +
`Holdout`
#### Multiseries or > 10 FDs
When a project has a Forecast Distance `> 10`, DataRobot adds the length of the Forecast Window to the rows removed. For example, if a project has 20 Forecast Distances, DataRobot removes 20 rows from consideration in the training set. In other words, the greater the number of Forecast Distances, the more rows removed from training consideration (and thus the more data you need to have in the project to maintain the 20-row minimum).
For a multiseries project, or a single series with greater than 10 forecast distances, DataRobot calculates excluded rows as follows:
FDW + 1 + FW + BH + CO + Validation + Holdout
??? Info "In other words"
`Feature Derivation Window` +
`1` +
`Forecast window` +
`Gap between the end of FDW and the start of FW` +
`Gap between training and validation` +
`Validation` +
`Holdout`
#### Projects with seasonality
If there is seasonality (i.e., you selected the [**Apply differencing**](ts-adv-opt#apply-differencing) advanced option), replace `FDW + 1` with `FDW + Seasonal period`. Note that if the option is not selected, DataRobot tries to detect seasonality by default. In other words, DataRobot calculates excluded rows as follows:
* Single series and <= 10 FDs:
`FDW + Seasonal period + BH + CO + Validation + Holdout`
* Multiseries or >10 FDs:
`FDW + Seasonal period + FW + BH + CO + Validation + Holdout`
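For illustration, the formulas above can be combined into a single calculation. This sketch (the function name and parameters are hypothetical, not a DataRobot API) computes the number of rows excluded from each backtest training partition:

```python
def excluded_rows(fdw, bh, co, validation, holdout, fw,
                  multiseries=False, n_forecast_distances=1,
                  seasonal_period=None):
    """Rows excluded from each backtest training partition, following
    the formulas above. Illustrative only, not DataRobot's internal code."""
    # With differencing (seasonality), "FDW + 1" becomes "FDW + Seasonal period".
    base = fdw + (seasonal_period if seasonal_period else 1)
    # Multiseries projects, or more than 10 forecast distances,
    # also exclude the full Forecast Window.
    if multiseries or n_forecast_distances > 10:
        base += fw
    return base + bh + co + validation + holdout

# Single series, <= 10 FDs: FDW + 1 + BH + CO + Validation + Holdout
print(excluded_rows(fdw=28, bh=7, co=3, validation=10, holdout=10, fw=7))
# Multiseries: the Forecast Window is excluded as well
print(excluded_rows(fdw=28, bh=7, co=3, validation=10, holdout=10, fw=7,
                    multiseries=True))
```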
### Duration and Row Count {: #duration-and-row-count }
If your data is evenly spaced, **Duration** and **Row Count** give the same results. It is not uncommon, however, for date/time datasets to have unevenly spaced data with noticeable gaps along the time axis. This can impact how **Duration** and **Row Count** affect the training data for each backtest. If the data has gaps:
* **Row Count** results in an even number of rows per backtest (although some of them may cover longer time periods). Row Count models can, in certain situations, use more RAM than Duration models over the same number of rows.
* **Duration** results in a consistent length-of-time per backtest (but some may have more or fewer rows).
Additionally, these values have different meanings depending on whether they are being applied to training or validation.
For _irregular_ datasets, note that the setting for **Training Window Format** defaults to **Row Count**. Although you can change the setting to **Duration**, it is highly recommended that you leave it as-is, since changing it may result in unexpected training windows or model errors.
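To see how the two settings diverge on gappy data, consider a daily series with a two-week hole in it. This short sketch (plain Python, not DataRobot code) selects training data both ways:

```python
from datetime import date, timedelta

# Daily series: 30 days in January, a two-week gap, then 10 days in February.
stamps = [date(2022, 1, 1) + timedelta(days=d) for d in range(30)]
stamps += [date(2022, 2, 14) + timedelta(days=d) for d in range(10)]

# Row Count: always the same number of rows, but the time span varies.
by_rows = stamps[-15:]
print(len(by_rows), "rows spanning", (by_rows[-1] - by_rows[0]).days, "days")

# Duration: always the same time span, but the row count varies.
cutoff = stamps[-1] - timedelta(days=15)
by_duration = [s for s in stamps if s > cutoff]
print(len(by_duration), "rows spanning 15 days")
```

Because of the gap, the 15-row window stretches across 28 calendar days, while the 15-day window captures only 10 rows.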
#### Handle training and validation folds {: #handle-training-and-validation-folds }
The values for **Duration** and **Row Count** in training data are set in the training window format section of the Time Series Modeling configuration.
When you select **Duration**, DataRobot selects a default fold size—a particular period of time—to *train* models, based on the duration of your training data. For example, you can tell DataRobot "always use three months of data." With **Row Count**, models use a specific number of rows (e.g., always use 1000 rows) for training models. The training data will have exactly that many rows.
For example, consider a dataset that includes fraudulent and non-fraudulent transactions where the frequency of transactions is increasing over time (the number is increasing per time period). Set **Row Count** if you want to keep the number of training examples constant through the backtests in the training data. It may be that the first backtest is only trained on a short time period. Select **Duration** to keep the time period constant between backtests, regardless of the number of rows. In either case, models will not be trained on data more recent than the start of the holdout data.

_Validation_ is always set in terms of duration (even if training is specified in terms of rows). When you select **Row Count**, DataRobot sets the **Validation Length** based on the row count.
### Change the training period {: #change-the-training-period }
!!! note
Consider [retraining your model on the most recent data](#retrain-before-deployment) before final deployment.
You can change the training range and sampling rate and then rerun a particular model for date-partitioned builds. Note that you cannot change the duration of the validation partition once models have been built; that setting is only available from the **Advanced options** link before the building has started. Click the plus sign (**+**) to open the **New Training Period** dialog:

The **New Training Period** box has multiple selectors, described in the table below:

| | Selection | Description |
|---|---|---|
|  | Frozen run toggle | [Freeze the run](frozen-run). |
|  | Training mode | Rerun the model using a different training period. Before setting this value, see [the details](ts-customization#duration-and-row-count) of row count vs. duration and how they apply to different folds. |
|  | Snap to | "Snap to" predefined points, to facilitate entering values and avoid manually scrolling or calculation. |
|  | [Enable time window sampling](#time-window-sampling) | Train on a subset of data within a time window for a duration or [start/end](#setting-the-start-and-end-dates) training mode. Check to enable and specify a percentage. |
|  | [Sampling method](#set-rows-or-duration) | Select the sampling method used to assign rows from the dataset. |
| | Summary graphic | View a summary of the observations and testing partitions used to build the model. |
|  | Final Model | View an image that changes as you adjust the dates, reflecting the data to be used in the model you will make predictions with (see the [note](#about-final-models) below). |
Once you have set a new value, click **Run with new training period**. DataRobot builds the new model and displays it on the Leaderboard.
## Window and gap settings example {: #window-and-gap-settings-example }
There is an important difference between the [**Gap Length**](#gap-length) setting (the time required to get a model into production) and the [gap periods](#understanding-gaps) (the time needed to make data available to the model) created by window settings.
The following description provides concrete examples of how these values each impact the final model in production.
At the highest level, there are two discrete actions involved in modeling a time series project:
1. You build a model, which involves data processing, and the model ultimately makes predictions.
2. Once built, you productionalize the model and it starts contributing value to your organization.
Let's start with the more straightforward piece. Item #2 is the [**Gap Length**](#gap-length)—the one-time delay between you completing your work and the time it takes, for example, the IT department to actually move the latest model into production. (In some heavily regulated environments, this can be 30 days or more.)
For item #1, you must understand these fundamental things about your data:
* How often is your data collected or aggregated?
* What does the data's timestamp _really_ represent? A _very_ common real-world kind of example:
* The database system runs a refresh process on the data and, for this example, let's say it becomes usable on 9/9/2022 ("today").
* The refresh process is, however, actually being applied to data from the previous Friday, in this example, 9/2/2022.
* The database timestamps the update as the day the processing happened (9/9/2022) even though this is actually the data for 9/2/2022.
??? info "Why all the delays?"
In most systems there is a grace period between data being collected in real time and data being processed for analytics or predictions. This is because often you have a lot of complex systems that feed into a database. After the initial collection, the data is moved through a "cleaning and processing" stage. In these complex systems, each step can take hours or days, creating a time lag. From the example, this is how data might be processed and made usable Friday, but the original, raw data may have arrived days before. This is why you need to understand the lag—you need to know both when the data was created and when it became usable.
In another example:
* The database system runs a refresh process on the data but timestamps it with the actual date (9/2/2022) in the most recent data refresh.
!!! important
In both of these examples you know the latest _actual_ age of data on 9/9/2022 is really 9/2/2022. You **_must_** understand your data, as it is critically important for properly understanding your project setup.
Once you know what is happening with your data acquisition, you can adjust your project to account for it. For training, the timestamp/delay issue isn't a problem. You have a dataset over time and every day has values. But this in itself is also a challenge, as you and the system you are using need to account for the difference in training and production data. Another example:
* Today is Friday 9/9/2022, and you received a refresh of data. You need to make predictions for Monday. How should you set up the Feature Derivation Window and Forecast Window in this situation?
* You know the most recent data you have is _actually_ from 7 days ago.
* As of 9/9/2022, the predictions you need to make each day are 3 days in the future, and you want to project a week out.
* The Feature Derivation Window start date could be any time, but for the example the "length" is 28 days.
Think about everything from the _point of prediction_. As of any prediction date, the example can be summarized as:
* Most recent data is actually 7 days in the past.
* The Feature Derivation Window is 28 days.
* You want to predict three days ahead of today.
* You want to know what will happen over the next 7 days.
* After building the model, it will take your company 30 days to deploy it to production.
How do you configure this? In this scenario, settings would be as follows:
* Feature Derivation Window: -35 to -7 days
* Forecast Window: 3 to 10 days
* Gap Length: 30 days
Another way to express it is that blind history is 7 days, the "can't operationalize" period is 3 days, and the Gap Length is 30.

|
ts-customization
|
---
title: Time series advanced modeling
description: This topic provides deep-dive reference material for DataRobot time series modeling.
---
# Time series advanced modeling {: #time-series-advanced-modeling }
This section provides information for DataRobot time series modeling.
Topic | Describes...
----- | ------
[Time Series advanced options](ts-adv-opt) | The many advanced features available for customizing time series projects.
[Clustering advanced options](ts-cluster-adv-opt) | How to set the number of clusters DataRobot automatically discovers.
[Date/Time for time series](ts-date-time) | Non-default time formats and backtesting.
[Customizing time series projects](ts-customization)| How DataRobot calculates training partitions as well as changing default partitioning and window values.
|
index
|
---
title: Time series advanced options
description: Describes the settings available from the Time Series advanced option tab, where you can set features known in advance, exponential trends, and differencing for time series projects.
---
# Time Series advanced options {: #time-series-advanced-options }
{% include 'includes/ts-adv-opt-include.md' %}
|
ts-adv-opt
|
---
title: Clustering advanced options
description: Allows you to set the number of clusters that DataRobot automatically discovers during time series clustering.
---
# Clustering advanced options {: #clustering-advanced-options }
{% include 'includes/ts-cluster-adv-opt-include.md' %}
|
ts-cluster-adv-opt
|
---
title: Error metric guidance
description: Identifies the top Eureqa model error metrics for different types of problems.
---
# Error metric guidance {: #error-metric-guidance }
The good news is that there are common error metrics that work well on a large majority of problem types. Starting with one of these error metrics is usually a safe bet. This section identifies the top error metrics for different types of problems.
## Numeric and time series models {: #numeric-and-time-series-models }
### Mean Absolute Error {: #mean-absolute-error }
* By minimizing the absolute residual error, rather than squared residual error, Mean Absolute Error is also a good general-purpose error metric, but is more permissive to outliers than Mean Squared Error and R^2 Goodness of Fit Error. This can be a good choice if outliers in the data are likely to be due to noise, or if capturing an overall trend is more important than avoiding a few large errors. Mean Absolute Error can also be interpreted as "on average, predictions are off by this amount."
### Mean Absolute Percentage Error {: #mean-absolute-percentage-error }
* Mean Absolute Percentage Error is a common error metric for time series forecasting. It can be interpreted as the average absolute percentage by which predicted values deviate from the actuals. It can be a good choice when relative errors are more important than absolute error values. Mean Absolute Percentage Error may not be a good choice if there are very small actual values in the dataset since small errors on these rows may dominate the metric calculation; any rows where the actual value is 0 are not included in the error metric calculation.
### R^2 Goodness of Fit Error (R^2) {: #r2-goodness-of-fit-error-r2 }
* R^2 is a standard measure of model fitness. It can be interpreted as the “percent of variance explained”. As a percentage, the R^2 can be compared across models and datasets since the scale is not dependent on the scale of the data.
!!! note
R^2 Goodness of Fit Error is the Eureqa default for numeric and time series models.
### Mean Squared Error {: #mean-squared-error }
* Mean Squared Error is a common error metric. Optimizing for Mean Squared Error will be equivalent to optimizing for R^2; however, Mean Squared Error values will depend on the scale of the data. Because they depend on squared error, both Mean Squared Error and R^2 tend to be sensitive to outliers and a good choice when there is strong incentive to avoid individual large errors.
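As a point of reference, these four metrics can be computed directly. The implementations below follow the standard textbook definitions (not DataRobot's or Eureqa's internal code), including the rule noted above that MAPE excludes rows where the actual value is 0:

```python
def mae(actual, pred):
    """Mean Absolute Error: on average, predictions are off by this amount."""
    return sum(abs(a - p) for a, p in zip(actual, pred)) / len(actual)

def mse(actual, pred):
    """Mean Squared Error: sensitive to large individual errors."""
    return sum((a - p) ** 2 for a, p in zip(actual, pred)) / len(actual)

def mape(actual, pred):
    """Mean Absolute Percentage Error: rows with actual == 0 are excluded."""
    pairs = [(a, p) for a, p in zip(actual, pred) if a != 0]
    return 100 * sum(abs(a - p) / abs(a) for a, p in pairs) / len(pairs)

def r_squared(actual, pred):
    """R^2: percent of variance explained, independent of data scale."""
    mean_a = sum(actual) / len(actual)
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, pred))
    ss_tot = sum((a - mean_a) ** 2 for a in actual)
    return 1 - ss_res / ss_tot

actual = [3.0, 5.0, 2.0, 7.0]
pred = [2.5, 5.0, 3.0, 8.0]
print(round(mae(actual, pred), 4))  # 0.625
print(round(mse(actual, pred), 4))  # 0.5625
```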
## Classification models {: #classification-models }
### Mean Squared Error for classification {: #mean-squared-error-for-classification }
* Mean Squared Error for Classification is the default error metric for classification problems in Eureqa. This metric optimizes Mean Squared Error but with internal optimizations for classification problems. The output values of logistic models that have been optimized for Mean Squared Error can be interpreted as the probability of a 1 outcome. Mean Squared Error may not be the best error metric when trying to classify rare events since it attempts to minimize overall error rather than separation between positive and negative cases.
### Area Under ROC Curve Error (AUC) {: #area-under-roc-curve-error-auc }
* AUC is a common error metric for classification and works by optimizing the ability of a model to separate the 1s from the 0s. AUC is not sensitive to the relative number of 0s and 1s in the target variable and can be a good choice for skewed classes. When optimized for AUC, predicted values will effectively order inputs from the most to least likely to be 1; however, they _cannot_ be interpreted as a predicted probability.
## Error metrics and noise {: #error-metrics-and-noise }
One consideration for choosing an error metric is the expected amount of noise in the data. Different error metrics effectively make different assumptions about the distribution of the noise in the observed output. For example, for very noisy systems you might select an error metric that would give relatively less weight to some large errors (e.g., Mean Absolute Error, IQR Error, Median Error) under the assumption that these large errors may be due to noise in the input data rather than poor model fit. Conversely, when input data is expected to have very low noise you might select an error metric which heavily penalizes large errors (e.g., R^2 or Maximum Error).
### Noisy systems {: #noisy-systems }
#### Mean Logarithm Squared Error {: #mean-logarithm-squared-error }
* Mean Logarithm Squared Error uses the log function to squash error values and decrease the impact of large errors.
#### Interquartile Mean Absolute Error (IQME) {: #interquartile-mean-absolute-error-iqme }
* By ignoring the smallest 25% and largest 25% of error values, IQME will not be impacted by a significant number of outliers and may work well if you are most interested in “on average” performance.
#### Median Absolute Error {: #median-absolute-error }
* By ignoring all residual values except for the median, Median Absolute Error is the most permissive of outliers.
### Low-noise systems {: #low-noise-systems }
#### Maximum Absolute Error {: #maximum-absolute-error }
* By ignoring all but the maximum error value, Maximum Absolute Error can work well if you are expecting a perfect or nearly perfect fit; for example, if you are using the Eureqa model for symbolic simplification.
## Error Metrics for classification {: #error-metrics-for-classification }
In addition to the common classification error metrics outlined above, the error metrics described in this section are specific to classification problems.
### Additional error metrics for classification {: #additional-error-metrics-for-classification }
#### Log Loss Error {: #log-loss-error }
* Log Loss Error is a common metric for classification problems. The log transformation on the errors heavily penalizes a high confidence in wrong predictions.
#### Maximum Classification Accuracy {: #maximum-classification-accuracy }
* Maximum Classification Accuracy optimizes the overall ability of a model to make correct 0 or 1 predictions. It may not work well for skewed classes (e.g., when only 1% of the data is ‘1’), since in these cases sometimes the highest predictive accuracy is achieved by simply predicting 0 all of the time.
#### Hinge Loss Error {: #hinge-loss-error }
* Hinge Loss Error is used to optimize classification models that will be used for 0 or 1 predictions. It is a one-sided metric that increasingly penalizes wrong predictions as they get more confident, but treats all true predictions identically after they reach a minimum threshold value. When optimizing Hinge Loss Error, logistic() (**building_block__logistic_function**) should not be used in the target expression since this metric expects a large range of predicted score values.
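The contrast between these penalties is easy to see in code. The implementations below follow the standard definitions (not Eureqa's internal code); for hinge loss, 0/1 labels are mapped to -1/+1 and scores are raw model outputs rather than probabilities:

```python
import math

def log_loss(y_true, p_pred, eps=1e-15):
    """Log Loss: the log transform heavily penalizes confident
    wrong predictions. p_pred values are probabilities in (0, 1)."""
    total = 0.0
    for y, p in zip(y_true, p_pred):
        p = min(max(p, eps), 1 - eps)  # clip to avoid log(0)
        total += -(y * math.log(p) + (1 - y) * math.log(1 - p))
    return total / len(y_true)

def hinge_loss(y_true, scores):
    """Hinge Loss: one-sided; wrong predictions are penalized more as
    confidence grows, while correct predictions past the margin of 1
    all incur zero loss."""
    total = 0.0
    for y, s in zip(y_true, scores):
        sign = 1 if y == 1 else -1
        total += max(0.0, 1 - sign * s)
    return total / len(y_true)

# One confident wrong prediction dominates log loss:
print(round(log_loss([1, 0], [0.9, 0.9]), 4))
# A score of 2.0 for a positive is past the margin (zero loss);
# a score of 0.5 still incurs a small penalty:
print(hinge_loss([1, 1], [2.0, 0.5]))
```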
## Use case-specific error metrics {: #use-case-specific-error-metrics }
### Predicting rank {: #predicting-rank }
Rank Correlation will measure a model based on its ability to rank-order observations rather than to predict a particular value. This can be useful when looking for a model that can predict a relative ranking, such as the finishing order of contestants in a race.
|
guidance
|
---
title: Tune Eureqa models
description: Customize Eureqa models by modifying various Advanced Tuning parameters and creating custom target expressions.
---
# Tune Eureqa models {: #tune-eureqa-models }
You can customize Eureqa models by modifying various Advanced Tuning parameters and creating custom target expressions. Parameters you can adjust for your models include [building blocks](#building-blocks), [target expressions](#target-expressions), [error metrics](#error-metrics), [row weighting](#row-weights), and [prior solutions](#prior-solutions). Additionally, you can customize how DataRobot [partitions data for the Eureqa model](#data-partitioning-for-training-and-cross-validation).
## Building blocks {: #building-blocks }
Eureqa model expressions use building blocks (discrete sets of mathematical functions) for combining variables and creating new features from a dataset. Building blocks range from simple arithmetic functions (addition, subtraction) to complex functions (logistic or gaussian) and more.
DataRobot creates Eureqa models using default sets of building blocks for preset problem types; however, certain problems may require different sets of building blocks. Advanced users dealing with systems that already have known or expected behavior may want to encourage certain model structures in DataRobot. For example, if you think that seasonality or some other cyclical trend may be a factor in your data, including the building blocks sin(x) and cos(x) will let DataRobot know to test those types of interactions against the data.
See [Configuring building blocks](building-blocks) for information on selecting building blocks for the target expression.
### Building block complexity {: #building-block-complexity }
Complexity settings are additional weights DataRobot can apply to specific building blocks and terms to penalize related aspects of a given model. Changing the complexity given to certain building blocks or terms will affect which models will appear on the pareto frontier (in the [**Eureqa Models** tab](eureqa)) with the focus on finding the simplest possible models that achieve increasing levels of accuracy.
The default complexity settings typically work well; however, if you have prior knowledge of the system you are trying to model, you may want to modify those settings. If there are particular building blocks that you know will be, or expect to be, part of a solution that accurately captures the core dynamics of a system, you might lower the complexity values of those building blocks to make it more likely that they will appear in the related Eureqa models. Similarly, if there are building blocks that you don't want to appear unless they significantly improve the fit of the models, you might raise the complexity values of those building blocks.
See [Setting building block complexity](building-blocks#building-block-complexity) for more information.
## Target expressions {: #target-expressions }
The target expression tells DataRobot how to create the Eureqa model. Target expressions are comprised of variables that exist in your dataset and mathematical "building blocks". DataRobot creates the default target expression for a model using the selected target variable modeled as a function of all input variables.
Here's an example default target expression:

You can customize the expression (model formula) to specify the type of relationship you want to model and incorporate your domain expertise of the fundamental behavior of the system. Complex expressions are possible and give you the power to tune for complex relationships, including: differential equations, polynomial equations, and binary classification.
See [Customizing target expressions](custom-expressions) for more information.
## Error metrics {: #error-metrics }
DataRobot uses error metrics to guide how the quality of potential solutions is assessed. Each Eureqa model has default error metrics settings; however, advanced users can choose to optimize for different error metrics. Changing the error metric will change how DataRobot optimizes the solutions.
See [Configuring error metrics](error-metrics) for more information.
## Row weights {: #row-weights }
You can designate one of your variables as an indicator of how much relative weight (i.e., importance) you want Eureqa to give to the data in each row. For example, if the designated row weight variable has a value of 10 in the first row and 20 in the second row, data in the second row will be given twice the weight of the data in the first row when Eureqa is calculating how well a model performs on the data. Row weight can be specified by using a row weight variable or by using a row weight expression.
See [Configuring row weighting blocks](row-weighting) for more information.
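The weighting described above amounts to a weighted average of per-row errors. The sketch below (illustrative only, not Eureqa's implementation) shows a row-weighted mean squared error in which a row with weight 20 counts twice as much as a row with weight 10:

```python
def weighted_mse(actual, pred, weights):
    """Row-weighted mean squared error: each row's squared error is
    scaled by its relative weight before averaging."""
    num = sum(w * (a - p) ** 2 for a, p, w in zip(actual, pred, weights))
    return num / sum(weights)

# The second row (weight 20) has twice the influence of the first (weight 10):
print(weighted_mse([1.0, 2.0], [0.0, 0.0], [10, 20]))
```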
## Prior solutions {: #prior-solutions }
Prior solutions "seed" DataRobot with solutions or partial solutions that express relationships that you believe will play some role in an eventual solution. Entering prior solutions for a Eureqa model may speed search performance by initializing that model with known information. The Prior Solutions parameter, **prior_solutions**, is available within the Prediction Model Parameters and can be specified as part of tuning your Eureqa models. You can specify multiple expressions, one per line, where each expression is a valid target expression (such as from a previous Eureqa model).
The following shows an example of two prior solutions (expressions), **sin(x1 - x2)** and **sin(x2)**, set for a model:

If you have entered a custom target expression that uses multiple functions ([as explained here](custom-expressions#multiple-functions)), enter a sub-expression for each function. Each f() is listed with its sub-expression, separated by a comma, on the same line. For example, if the expression contains two functions, such as **Target = f0(x) * f1(x)**, and the prior model is **Target = (x-1) * sin(2 * x)**, you will enter the prior solution as:
**Target = (x - 1), f1 = sin(2 * x)**
To specify multiple expressions from prior models, enter each set of functions on a new line. You can enter expressions for only some of the functions that exist in the target expression; if this is the case, DataRobot will fill in '0' as the seed for other functions.
For example, if you enter:
**f1 = sin(2 * x)**
DataRobot will translate this to:
**f0 = 0, f1 = sin(2 * x)**
## Data partitioning for training and cross-validation {: #data-partitioning-for-training-and-cross-validation }
DataRobot performs its standard process for data partitioning ([as explained here](data-partitioning)) for each Eureqa model. Then, it further subdivides the training set data into two more sets: a Eureqa internal training set and a Eureqa internal validation set. The data for these Eureqa internal sets is derived from the original DataRobot training set. (The original DataRobot validation set is never used as part of the Eureqa data partitioning process.)
DataRobot uses the Eureqa internal training set to drive the core Eureqa evolutionary algorithm, and uses both the Eureqa internal training and validation sets to select which models are the "best" and, therefore, selected for inclusion in the final Eureqa Pareto Front (within the [Eureqa Models tab](eureqa)).
### Random split {: #random-split }
A random split will randomly assign rows for Eureqa internal training and Eureqa internal validation. Rows (within the original training set) are split based on the Eureqa internal training and Eureqa internal validation percentages. If the training and validation percentages total more than 100%, the overlapping percentage of rows will be assigned to both training and validation. Random split with 50% of data for Eureqa internal training and 50% for Eureqa internal validation is recommended for most (non-Time Series) modeling problems.
For very small data sets (e.g., under a few hundred points) it is usually best to use overlapping Eureqa internal training/Eureqa internal validation datasets. When the data is extremely small, or has very little or no noise, you may want to use 50% of the original DataRobot training data for Eureqa internal training and 100% for Eureqa validation. In extreme cases, you may want to include 100% of the data for both Eureqa internal training/Eureqa internal validation datasets, and then limit your model selection to those with lower complexities.
For large data sets (e.g., over 1,000 points) it is usually best to use a smaller fraction of data for the Eureqa training set. It is recommended to choose a fraction such that the size of the Eureqa training data is approximately 10,000 rows or less. Then, use all remaining data for the Eureqa validation set.
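The sizing guidance for large datasets is simple to express. This sketch (a hypothetical helper, not a DataRobot parameter) picks a training fraction that caps the Eureqa internal training set near 10,000 rows, leaving the remainder for internal validation:

```python
def eureqa_training_fraction(n_training_rows, target_rows=10_000):
    """Choose a fraction so the Eureqa internal training set is roughly
    target_rows or fewer; capped at 1.0 for small datasets."""
    fraction = min(1.0, target_rows / n_training_rows)
    return round(fraction, 4)

print(eureqa_training_fraction(50_000))  # large data: use a small fraction
print(eureqa_training_fraction(8_000))   # already under the target: use it all
```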
### In-order split {: #in-order-split }
An in-order split maintains the original order of the input data (i.e., the original DataRobot training set) and selects a percentage of rows, starting with the first row, to use for the Eureqa internal training set and a different percentage of rows, starting with the last row, to use for the Eureqa internal validation set. If the training and validation percentages total more than 100%, the overlapping percentage of rows will be assigned to both the Eureqa internal training and internal Eureqa validation sets.
This option can be used if you have pre-arranged your data with rows you want to use for the Eureqa internal training set at the beginning of the dataset and rows you want to use for the Eureqa internal validation set at the end.
In-order split is applied by default when performing data partitioning for Time Series and OTV models, as explained [here](#eureqa-data-partitioning).
### Split by variable {: #split-by-variable }
Split by variable allows you to manually indicate which rows to use for training and which to use for validation using variables that have been pre-defined in your project dataset. Rows are selected if the indicator variable has a value greater than 0. By default, the Eureqa internal training rows will be selected as the inverse of the Eureqa internal validation rows, unless a separate indicator is provided for training rows.
You may include a validation data variable and/or a training data variable in your data before uploading it to Eureqa, or use Eureqa to create a derived variable that will be used to split the data.
### Split by expression {: #split-by-expression }
Split by expression allows you to manually identify which rows to use for Eureqa internal training and which rows to use for Eureqa internal validation using expressions entered as part of the target expression. Rows are selected if the expression has a value greater than 0. By default, the Eureqa internal training set rows will be selected as the inverse of the Eureqa internal validation set rows, unless a separate expression is provided for training rows.
### Eureqa data partitioning {: #eureqa-data-partitioning }
You can modify the default Data Partitioning settings using the **training_fraction** and **validation_fraction** parameters. To adjust how DataRobot splits the data for the model, modify the **split_mode** parameter. Finally, to direct DataRobot to create the Eureqa internal training and internal validation sets based on custom expressions (rather than the default settings, explained previously), add those expressions to the **training_split_expr** and/or **validation_split_expr** parameters, as applicable.
The default Eureqa data partitioning process (to create the Eureqa internal training and internal validation sets) differs between non-Time Series and Time Series models:
* For non-Time Series models: DataRobot performs a 50/50 random split of the shuffled training set data and then uses the first half as the Eureqa internal training set and the second half as the Eureqa internal validation set (where **split_mode = 1**, for random).
* For datetime-partitioned models (i.e., models created with either Time Series modeling or Out-of-Time Validation (OTV)): DataRobot performs a 70/30 in-order split of the chronologically sorted training set data and then uses the first 70% as the Eureqa internal training set and the last 30% as the Eureqa internal validation set (where **split_mode = 2**, for in-order).
!!! tip
If you selected random partitioning when you started your project (using Advanced options), it is strongly recommended that you do _not_ select in-order split mode when tuning Eureqa models.
### Data Partitioning for cross-validation {: #data-partitioning-for-cross-validation }
When performing cross-validation for Eureqa models, DataRobot uses only the first CV split for training; therefore, only that training data (from the first CV split) is split further into Eureqa internal training data and Eureqa internal validation data.
## Advanced tuning {: #advanced-tuning }
Advanced tuning allows you to manually set Eureqa model parameters, overriding the DataRobot defaults. Through advanced tuning you can control how gridsearch searches when multiple Eureqa hyperparameters are available for selection. Search types are `brute force` and `smart search`, as described in the general modeling [**Advanced Tuning**](adv-tuning#set-the-search-type) search type section.
---
title: Error metrics
description: Eureqa error metrics measure how well a Eureqa model fits your data; DataRobot supports a variety of different error metrics for Eureqa models.
---
# Error metrics {: #error-metrics }
Eureqa error metrics are measures of how well a Eureqa model fits your data. When DataRobot performs Eureqa model tuning, it searches for models that optimize both error and complexity. The error metric that best defines a well-fit model depends on the nature of the data and the objectives of the modeling exercise. DataRobot supports a variety of error metrics for Eureqa models.
Error metric selection and configuration determines how DataRobot will assess the quality of potential solutions. DataRobot sets default error metrics for each model, but advanced users can choose to optimize for different error metrics. Changing error metrics settings changes how DataRobot optimizes its solutions.
The Error Metric parameter, **error_metric**, is available within the Prediction Model Parameters and can be modified as part of tuning your Eureqa models. Available error metrics are listed for selection.

## DataRobot versus Eureqa {: #datarobot-versus-eureqa }
!!! note
There is some overlap in the DataRobot [optimization metrics](opt-metric) and Eureqa error metrics. You may notice, however, that in some cases the metric formulas are expressed differently. For example, predictions may be expressed as `y^` versus `f(x)`. Both are correct, with the nuance being that `y^` often indicates a prediction generally, regardless of how you got there, while `f(x)` indicates a function that may represent an underlying equation.
DataRobot provides the [optimization metrics](opt-metric) for setting error metrics at the project level. The optimization metric is used to evaluate the error values shown on the Leaderboard entry for all models (including Eureqa), to compare different models, and to sort the Leaderboard.
By contrast, the Eureqa error metric specifically governs how the Eureqa algorithm is optimized for the related solution and is not a project-level setting. When configuring these metrics, keep in mind they are fully independent and, in general, setting either metric does not influence the other metric.
## Choose a Eureqa error metric {: #choose-a-eureqa-error-metric }
The best error metric for your problem will depend on your data and the objectives of your modeling analysis. For many problems there isn't one correct answer; therefore, DataRobot recommends running models with several different error metrics to see what types of models will be produced and which results best align with the modeling objectives.
When choosing an error metric, consider these [suggestions for setting and configuring error metrics](guidance).
## Error metric parameters {: #error-metric-parameters }
### Mean Absolute {: #mean-absolute }
The mean of the absolute value of the residuals. A smaller value indicates a lower error.
**Details:** Mean Absolute Error is calculated as:

**How it's used:** To minimize the residual errors. Mean Absolute Error is a good general purpose error metric, similar to Mean Squared Error but more tolerant of outliers. It is a common measure of forecast error in time series analyses. The value of this metric can be interpreted as the average distance predictions are from the actual values.
**Considerations:**
* Assumes the noise follows a double exponential distribution.
* Compared to Mean Squared Error, Mean Absolute Error tends to be more permissive of outliers and can be a good choice if outliers are being given too much weight when optimizing for MSE.
* Can be interpreted as the average error between predictions and actuals of the model.
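The calculation itself is a direct transcription of the formula; a minimal pure-Python version (illustrative only, not DataRobot code):

```python
def mean_absolute_error(actuals, predictions):
    """MAE: the mean of the absolute residuals |y_i - f(x_i)|."""
    return sum(abs(y - p) for y, p in zip(actuals, predictions)) / len(actuals)

# Residuals of 1, 2, 1, 2 -> predictions are, on average, 1.5 units off.
mae = mean_absolute_error([3, 5, 2, 7], [2, 7, 1, 9])  # 1.5
```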
### Mean Absolute Percentage {: #mean-absolute-percentage }
The mean of the absolute percentage error. A smaller value indicates a lower error. Note that rows for which the actual value of the target variable = 0 are excluded from the error calculation.
**Details:** Mean Absolute Percentage Error is calculated as:

**How it's used:** To minimize the absolute percentage error. MAPE is a common measure of forecast error in time series forecasting analyses. The value of this metric can be interpreted as the average percentage that predictions vary from the actual values.
**Considerations:**
* Mean Absolute Percentage Error will be undefined when the actual value is zero. Eureqa’s calculation excludes rows for which the actual value is zero.
* Mean Absolute Percentage Error is extremely sensitive to very small actual values - high percentage errors on small values may dominate the error metric calculation.
* Can be interpreted as the average percentage that predicted values vary from actual values.
* Mean Absolute Percentage Error may bias a model to underestimate actual values.
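A sketch of the calculation, including the zero-actual exclusion described above (illustrative, not Eureqa's implementation):

```python
def mean_absolute_percentage_error(actuals, predictions):
    """MAPE over rows with nonzero actuals; rows where the actual value
    is 0 are excluded from the calculation."""
    pairs = [(y, p) for y, p in zip(actuals, predictions) if y != 0]
    return 100 * sum(abs((y - p) / y) for y, p in pairs) / len(pairs)

# The zero-actual row is skipped; the remaining errors are 10% and 50%.
mape = mean_absolute_percentage_error([10, 0, 2], [9, 5, 1])  # ~30.0
```

Note how the 50% error on the small actual value (2) dominates the 10% error on the larger one, illustrating the sensitivity to small actuals.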
### Mean Squared {: #mean-squared }
The mean of the squared value of the residuals. A smaller value indicates a lower error.
**Details:** Mean Squared Error is calculated as:

**How it's used:** to minimize the squared residual errors. Mean Squared Error is the most common error metric. It emphasizes the extreme errors, so it's more useful if you have concerns about large errors with greater consequences.
**Considerations:**
* It assumes that the noise follows a normal distribution.
* It's tolerant of small deviations, and sensitive to outliers.
* For classification problems, logistic models optimized for Mean Squared Error produce values which can be interpreted as predicted probabilities.
* Optimizing for Mean Squared Error is equivalent to optimizing for R^2.
### Root Mean Squared {: #root-mean-squared }
The square root of the mean of the squared residuals. A smaller value indicates a lower error.
**Details:** Root Mean Squared Error is calculated as:

**How it's used:** to minimize the squared residual errors. Root Mean Squared Error is used similarly to Mean Squared Error. Root Mean Squared Error de-emphasizes extreme errors as compared to Mean Squared Error, so it is less likely to be swayed by outliers but more likely to favor models that have many records that do a little better and a few that do a lot worse.
**Considerations:**
* It assumes that the noise follows a normal distribution.
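Both metrics can be sketched together; RMSE is simply the square root of the MSE, and the single large residual below dominates both scores (compare with an MAE of only 1.5 on the same data):

```python
import math

def mean_squared_error(actuals, predictions):
    """MSE: the mean of the squared residuals."""
    return sum((y - p) ** 2 for y, p in zip(actuals, predictions)) / len(actuals)

def root_mean_squared_error(actuals, predictions):
    """RMSE: the square root of the MSE."""
    return math.sqrt(mean_squared_error(actuals, predictions))

actuals, predictions = [1, 2, 3, 10], [1, 2, 3, 4]
mse = mean_squared_error(actuals, predictions)        # 9.0
rmse = root_mean_squared_error(actuals, predictions)  # 3.0
```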
### R^2 Goodness of Fit (R^2) {: #r2-goodness-of-fit-r2 }
The percentage of variance in the target variable that can be explained by the model. A higher value indicates a better fit.
**Details:** R^2 is calculated as:

to give the fraction of variance explained. This value is multiplied by 100 to give the percentage of variance explained.
SStot is proportional to the total variance, and SSres is the residual sum of squares (proportional to the unexplained variance).


**How it's used:** to maximize the explained variance. It is equivalent to optimizing Mean Squared Error, except that the result is reported as a percentage. It is a good default error metric. Like Mean Squared Error, R^2 penalizes large errors more than small errors, so it's useful if you have concerns about large errors with greater consequences.
**Considerations:**
* It assumes that the noise follows a normal distribution.
* It has the same interpretation regardless of the scale of your data.
* When R^2 is negative, it suggests that the model is not picking up a signal, is too simple, or is not useful for prediction.
* Optimizing for R^2 is equivalent to optimizing for Mean Squared Error.
* R^2 is closely related to the square of the correlation coefficient.
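The fraction-of-variance calculation can be written directly from SStot and SSres (a plain-Python sketch, not DataRobot code):

```python
def r_squared(actuals, predictions):
    """R^2 = 1 - SSres / SStot, as a fraction; multiply by 100 for the
    percentage of variance explained."""
    mean_y = sum(actuals) / len(actuals)
    # SStot is proportional to the total variance of the target.
    ss_tot = sum((y - mean_y) ** 2 for y in actuals)
    # SSres is the residual sum of squares (unexplained variance).
    ss_res = sum((y - p) ** 2 for y, p in zip(actuals, predictions))
    return 1 - ss_res / ss_tot

r_squared([1, 2, 3], [1, 2, 3])  # 1.0: a perfect fit explains all variance
r_squared([1, 2, 3], [2, 2, 2])  # 0.0: no better than predicting the mean
```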
### Correlation Coefficient {: #correlation-coefficient }
Measures how closely predictions made by the model and the actual target values follow a linear relationship. A higher value indicates a stronger positive correlation, with 0 representing no correlation, 1 representing a perfect positive correlation, and -1 representing a perfect negative correlation.
**Details:** Correlation Coefficient is measured as:

Where **sf(x)** and **sy** are the uncorrected sample standard deviations of the model predictions and the target variable, respectively.
**How it's used:** to maximize normalized covariance. A commonly used error metric for feature exploration, to find patterns that explain the shape of the data. It's faster to optimize than other metrics because it does not require models to discover the scale and offset of the data.
**Considerations:**
* It ignores the magnitude and the offset of errors. This means that a model optimized for the correlation coefficient alone will try to fit the same shape as the target variable; however, the actual predicted values may not be close to the actual target values.
* It is always on a [-1, 1] scale regardless of the scale of your data.
* 1 is a perfect positive correlation, 0 is no correlation, and -1 is a perfect inverse correlation.
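A direct transcription of the formula, using uncorrected standard deviations (illustrative only, not DataRobot code):

```python
import math

def correlation_coefficient(predictions, actuals):
    """Pearson correlation between model outputs f(x) and targets y."""
    n = len(actuals)
    mean_f = sum(predictions) / n
    mean_y = sum(actuals) / n
    cov = sum((f - mean_f) * (y - mean_y) for f, y in zip(predictions, actuals)) / n
    s_f = math.sqrt(sum((f - mean_f) ** 2 for f in predictions) / n)
    s_y = math.sqrt(sum((y - mean_y) ** 2 for y in actuals) / n)
    return cov / (s_f * s_y)

# Scale and offset are ignored: predictions of 2y + 5 still correlate
# perfectly with y, even though the values themselves are far apart.
r = correlation_coefficient([7, 9, 11], [1, 2, 3])  # ~1.0
```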
### Maximum Absolute {: #maximum-absolute }
The value of the largest absolute error. Smaller values are better.
**Details:** Maximum Absolute Error is computed as:

**How it's used:** to minimize the largest error. It is used when you only care about the single highest error, and when you are looking for an exact fit, such as symbolic simplification. It would typically be used when there is no noise in the data (e.g., processed, simulated, or generated data).
**Considerations:**
* The whole model's evaluation depends on a single data point.
* It is best when the input data has little to no noise.
### Mean Logarithm Squared {: #mean-logarithm-squared }
The mean of the logs of the squared residuals. A smaller value indicates a lower error.
**Details:** Mean Logarithm Squared Error is computed as:

where log is the natural log.
**How it's used:** to *minimize* the log of the squared error. It decreases the effect of large errors, which can help to decrease the role of outliers in shaping your model.
**Considerations:**
* It assumes diminishing marginal disutility of error with error size.
### Median Absolute {: #median-absolute }
The median of the absolute values of the residuals. A smaller value indicates lower error.
**Details:** Median Absolute Error is calculated as:

!!! note
    For performance reasons, Eureqa models use an estimated (rather than exact) median value. For small datasets, the estimated median value may differ significantly from the actual value.
**How it's used:** to *minimize* the median error value. If you expect your residuals to have a very skewed distribution, it is best at handling high noise and outliers.
**Considerations:**
* The scale of errors on either side of the median will have no effect.
* It is the most robust error metric to outliers.
* The estimated median may be inaccurate for very small datasets.
### Interquartile Mean Absolute {: #interquartile-mean-absolute }
The mean of the absolute error within the interquartile range (the middle 50%) of the residual errors. A smaller value indicates lower error.
**Details:** The Interquartile Mean Absolute Error is calculated by taking the Mean Absolute Error of the middle 50% of residuals.
**How it's used:** to minimize the error of the middle 50% error values. It is used when the target variable may contain large outliers, or when you care most about "on average" performance. It is similar to the median error in that it is very robust to outliers.
**Considerations:**
* It ignores the smallest and largest errors.
### AIC Absolute {: #aic-absolute }
The Akaike Information Criterion (AIC) based on the Mean Absolute Error. It is a combination of the normalized Mean Absolute Error and a penalty based on the number of model parameters. A lower value indicates a better quality model. Unlike other error metrics, AIC may be a negative value, approaching negative infinity, for increasingly accurate models.
**Details:** AIC Absolute Error is calculated as:

where **log** is the natural logarithm, **sy** is the standard deviation of the target variable, **MAE** is the mean of the absolute value of the residuals:

and **k** is the number of parameters in the model (including terms with a coefficient of 1).
**How it's used:** to *minimize* the residual error and number of parameters. It penalizes complexity by including the number of parameters in the loss function. This metric can be useful if you want to directly limit and penalize the number of parameters in a model, for example when modeling physical systems or for other problems where solutions with fewer free parameters are preferred. It is similar to the AIC-Mean Squared Error metric but is more tolerant to outliers.
**Considerations:**
* Searches using this metric may produce fewer solutions and limit the number of very complex solutions.
* Optimizing for this metric ensures you only get AIC-optimal models.
* Assumes the noise follows a double exponential distribution.
### AIC Squared {: #aic-squared }
The Akaike Information Criterion (AIC) based on the Mean Squared Error. It is a combination of the normalized Mean Squared Error and a penalty based on the number of model parameters. A lower value indicates a better quality model. Unlike other error metrics, the value may be negative, approaching negative infinity, for increasingly accurate models.
**Details:** AIC Squared Error is calculated as:

where **log** is the natural logarithm, **sy** is the standard deviation of the target variable, **MSE** is the mean of the squared value of the residuals:

and **k** is the number of parameters in the model (including terms with a coefficient of 1).
**How it's used:** to minimize the squared residual error and number of parameters. It penalizes complexity by including number of parameters in the loss function. This metric can be useful if you want to directly limit and penalize the number of parameters in a model, for example when modeling physical systems or for other problems where solutions with fewer free parameters are preferred.
**Considerations:**
* Searches using this metric may produce fewer solutions and limit the number of very complex solutions.
* Optimizing for this metric ensures you only get AIC-optimal models.
* It assumes the residuals have a normal distribution.
### Rank Correlation (Rank-r) {: #rank-correlation-rank-r }
The Correlation Coefficient between the ranks of the predicted values and the actual values. A higher value indicates a stronger positive correlation, with 1 representing a perfect rank correlation, 0 representing no correlation, and -1 representing a perfect inverse rank correlation.
**Details:** The rank correlation is calculated as the correlation coefficient between the ranks of the predicted values and the ranks of the actual values, meaning it favors models whose outputs can be used to rank rows. Ranks are computed by sorting the values in ascending order and assigning incrementing rank values to each row based on its sorted position.
**How it's used:** to maximize the correlation between predicted and actual rank. Use for problems where it is important that a model be able to predict a relative ordering between points, but where the actual predicted values are not important.
**Considerations:**
* The variables must be ordinal, interval, or ratio.
* A rank correlation of 1 indicates that there is a monotonic relationship between the two variables.
* It is always on a [-1,1] scale, regardless of the scale of your data.
* If the ranges of the actual and prediction values are vastly different, the resulting plots may not display as expected. For example, prediction values of (29, 30, 32, 584, 9999, 10000) - or even (100001, 100002, 100003, 100004, 100005, 100006) - and actual values of (1, 2, 3, 4, 5, 6) have the same rank order, and so result in zero error. However, because the range of the predictions varies so widely from that of the actuals, the predictions may not be visible on a plot of the actuals.
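The (29, 30, 32, 584, 9999, 10000) example can be verified with a small sketch that computes the correlation on ranks (an illustration of the idea, not Eureqa's implementation; ties here are broken by position):

```python
import math

def ranks(values):
    """0-based ranks after an ascending sort (ties broken by position)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    result = [0] * len(values)
    for rank, index in enumerate(order):
        result[index] = rank
    return result

def rank_correlation(predictions, actuals):
    """Pearson correlation computed on ranks instead of raw values."""
    rp, ra = ranks(predictions), ranks(actuals)
    n = len(ra)
    mp, ma = sum(rp) / n, sum(ra) / n
    cov = sum((p - mp) * (a - ma) for p, a in zip(rp, ra))
    sp = math.sqrt(sum((p - mp) ** 2 for p in rp))
    sa = math.sqrt(sum((a - ma) ** 2 for a in ra))
    return cov / (sp * sa)

# Wildly different scales but identical ordering -> perfect rank correlation.
r = rank_correlation([29, 30, 32, 584, 9999, 10000], [1, 2, 3, 4, 5, 6])  # ~1.0
```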
### Hinge Loss {: #hinge-loss }
The mean “hinge loss” for classification predictions. Hinge loss increasingly penalizes wrong predictions as they get more confident, but treats correct predictions identically once they reach a minimum threshold value. A smaller value indicates lower error.
**Details:** Hinge Loss Error is computed as:

where, for this metric, binary classes typically represented as 0 and 1 are re-scaled to be -1 and 1 respectively:

and

With this calculation, when the prediction **f(xi)** and the actual **yi** have the same sign (a correct prediction) and **yi * f(xi) >= 1**, the error is 0; when they have different signs (an incorrect prediction), the error increases linearly with the magnitude of **f(xi)**.
**How it's used:** to minimize classification error. Hinge Loss Error is a one-sided metric that increasingly penalizes wrong predictions as they get more confident, but treats all correct predictions identically once they reach a minimum threshold value.
**Considerations:**
* When using for classification problems, the target expression should _not_ include the logistic() function. This error metric expects that predicted values will have a larger range.
* It assumes a threshold value of 0.5 when turning a predicted score into a 0 or 1 prediction.
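A sketch of the calculation with the 0/1-to-±1 rescaling described above (illustrative only, not Eureqa's implementation):

```python
def hinge_loss(actuals01, predictions):
    """Mean hinge loss: 0/1 classes are rescaled to -1/+1, and each row
    contributes max(0, 1 - y * f(x))."""
    total = 0.0
    for y01, fx in zip(actuals01, predictions):
        y = 2 * y01 - 1  # 0 -> -1, 1 -> +1
        total += max(0.0, 1 - y * fx)
    return total / len(actuals01)

# Confident correct rows (y * f(x) >= 1) cost 0; only the under-confident
# middle row contributes, so the loss is (0 + 0.5 + 0) / 3.
loss = hinge_loss([1, 1, 0], [2.0, 0.5, -3.0])
```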
### Slope Absolute {: #slope-absolute }
The mean absolute error of the predicted row-to-row deltas and the actual row-to-row deltas. A smaller value indicates a lower error.
**Details:** Slope Absolute Error is computed as:

For each side of the model equation (predicted values and actual values), this metric computes row-to-row "deltas" between the value of each row and its previous row. The Slope Absolute Error is the Mean Absolute Error between the actual deltas and the predicted deltas.
**How it's used:** to minimize the error of the deltas. It is used for time series analysis where you are trying to predict the change from one time period to the next.
**Considerations:**
* It is an experimental (non-standard) error metric.
* It fits the shape of a dataset; actual predicted values may not be in the same range as the actual values.
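A sketch of the delta computation; note how a constant offset in the predictions does not affect the score, matching the shape-fitting consideration above (illustrative only):

```python
def slope_absolute_error(actuals, predictions):
    """MAE between the row-to-row deltas of actuals and predictions."""
    actual_deltas = [b - a for a, b in zip(actuals, actuals[1:])]
    predicted_deltas = [b - a for a, b in zip(predictions, predictions[1:])]
    return sum(abs(da - dp)
               for da, dp in zip(actual_deltas, predicted_deltas)) / len(actual_deltas)

# Predictions offset by +100 but with the same shape score a zero error.
err = slope_absolute_error([1, 2, 4], [101, 102, 104])  # 0.0
```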
---
title: Custom target expressions
description: Describes the many ways to customize target expressions for Eureqa models.
---
# Custom target expressions {: #custom-target-expressions }
Customizing target expressions provides one way to custom tune Eureqa models. Expressions may be any nested combination of [Eureqa model building blocks](building-blocks). For example, if a, b, and c are input variables, example expressions might include:
* **Target = 10 * a + b * c**
* **Target = if( a > 10, b, c) + 15**
## What is the target expression? {: #what-is-the-target-expression }
The target expression tells DataRobot what type of model to create. By default, the target expression is an equation that models your target variable as a function of all input variables.
The target expression must be created in the form **"Target = "** regardless of the actual target variable name. For example, to create a target expression for target variable loan_is_bad (or default_rate, Sales, purchase_price, and so forth) you use the format **"Target = f(...)"**. DataRobot automatically fills out the target expression for the selected Eureqa model type, using the defined target variable and input variables defined in the dataset. For a given target variable **Target** and 1 to *n* input variables of **x**, the default target expression for each of the search templates is:
* Numeric search template: **Target = f(x1 ... x<sub>n</sub>)**
* Classification search template: **Target = logistic(f0() + f2() * f1(x1 ... x<sub>n</sub>))**
* Exponential search template: **Target = exp(f0(x1 ... x<sub>n</sub>)) + f1()**
More complex expressions are also possible and give advanced users the power to specify and search for complex relationships, including the modeling of polynomial equations, and binary classification.
The Target Expression parameter, **target_expression_string**, is available within the Prediction Model Parameters and can be modified as part of tuning Eureqa models.
### Exponential search template {: #exponential-search-template }
When DataRobot detects an exponential trend in the dataset for a Eureqa model, it applies the **exp()** function. As part of this process, DataRobot automatically takes the log() of all input variables, manipulates the transformed variables to get the final target value, and then uses exp() to invert the log transform.
The exp() building block is *Disabled* by default. If you are customizing the target expression in a model in a project whose data has an exponential trend, you may want to enable exp() for the model so that DataRobot will consider it during model building.
!!! tip
For Eureqa GAM models only: If you enable exp() support, you will want to select _exponential_ as the variable for the **EUREQA_target_expression_format** [parameter](#constrain-the-target-expression-format-gam-only).
## Example expressions {: #example-expressions }
The following are some examples of basic and advanced expressions you could create as target expressions. The examples below assume the dataset contains four variables named: w, x, y, and z.
### Basic examples {: #basic-examples }
Model the **Target** variable as a function of variable **x**:
**Target = f(x)**
Model the **Target** variable as a function of two variables **x** and **z**:
**Target = f(x, z)**
Model the **Target** variable as a function of **x** and an expression, **sin(z)**:
**Target = f(x, sin(z))**
!!! note
As shown in this example, including **sin(z)** and not **z** means DataRobot has access to the data in variable **z** only after it passes through the sine function.
### Multiple functions {: #multiple-functions }
To incorporate multiple functions into the target expression, use numbered functions starting with **f0()**. For example:
**Target = f0(x) + f1(w, z)**
Model the **Target** variable as a function of **x**, **w**, and the power law relationship:
**Target = f(x, w, x^f1(), w^f2())**
Find a mechanism change with two known models:
**Target = if(x > f1(),exp(f2() * x), exp(f3() * x))**
### Constrain the target expression format (GAM only) {: #constrain-the-target-expression-format-gam-only }
For GAM models, you can enable the parameter **EUREQA_target_expression_format** if you want to constrain the expression format for the model. By default, there are no constraints to the expression format.
* **exponential** constrains the target expression to an exponential format, similar to the following:
**Target = exp(f(...))**
For example, if the default target expression would have been: **Target = f(var1, var2, var3)**, the same target expression constrained to an exponential format would be: **Target = exp(f(var1, var2, var3))**.
* **feature_interaction** constrains the target expression to contain 2-way interactions detected between features as functions. This ensures the feature interaction is declared explicitly. For example, if the model detects interaction of features **x** and **y**, the expression will be:
**Target = n + f(x, y)**
(where *n* identifies other features of the dataset)
### Fit coefficients {: #fit-coefficients }
You can represent an unknown constant or coefficient as a function with no arguments, **f()**. You can use multiple, no-argument functions, such as **f1()** to fit the coefficients of arbitrary nonlinear equations. For example, if you are looking for a polynomial of the form:
**Target = a * x + b * x^2 + c * x^3**
use the following target expression:
**Target = f0() * x + f1() * x^2 + f2() * x^3**
### Nested functions {: #nested-functions }
Model the **Target** variable as the output of a recursive or iterated function to a depth of 3:
**Target = f(f(f(x)))**
## Binary classification {: #binary-classification }
If **y** is a binary variable filled with 0s and 1s, model it using a squashing function, such as the logistic function. Using DataRobot for classification has a few advantages:
* Finding models requires less data
* Models can often extrapolate extremely well
* Resulting models are simple to analyze, refit, and reuse
* The structure of the models gives insight into the classification problem, allowing you to both predict as well as learn something about how the classification works
### Basic binary classification {: #basic-binary-classification }
Model the **Target** variable as a binary function of **x** and **w**:
**Target = logistic(f(x, w))**
Keep in mind that the logistic function will produce intermediate values between 0 and 1, such as 0.77 and 0.0001; therefore, you will need to threshold the value to get final 0 or 1 outputs.


## Model constraints {: #model-constraints }
You can also use the target expression to constrain the model output. You can include **require** and/or **contains** functions in target expressions to force specific model building or output behaviors.
!!! tip
Be aware that **require** and **contains** are very advanced, "experimental" settings that make it harder for DataRobot to find solutions and may significantly slow model search. To use these settings, we strongly suggest that you contact DataRobot for assistance as output behavior cannot be guaranteed.
### Add variables or terms {: #add-variables-or-terms }
If you need to force a certain variable or term to appear in Eureqa models, add a term that nests **require** or **contains** functions.
Model **y** as a function of **x**, with all solutions required to contain an **x^2** term:
**Target = f(x) + 0 * require(contains(f(x),x^2))**
For this to work, the first term of the **contains** operator must exactly match the functional term you are trying to fit (f(x) in this case). By multiplying the second term by 0, you guarantee that it won't impact the value produced by a particular solution f(x).
### Add a constraint {: #add-a-constraint }
You can also enforce a constraint on the model output if there are certain realities that need to be followed (e.g., price > cost). To do this, add a term with the **require** function.
Model **y** as a function of **x**, with all solutions required to output values greater than 0:
**Target = f(x) + require( f(x) > 0 )**
The following is a faster alternative:
**Target = max( f(x), 0 )**
### Force a condition {: #force-a-condition }
There may be known relationships in your data that do not fully explain the data. For example, if you have a model that was generated based on academic theory, you can use DataRobot to fit the residual between that known model and the actual data.
Model **Target** as a function of the input variables **a** through **e**, using existing knowledge of an **x^2** relationship to model the residual:
**Target = f0(a, b, c, d, e) + f1() * x^2**
DataRobot will interpret **f1()** as a coefficient and fit the term appropriately.
---
title: Row weighting blocks
description: Use the Row Weighting parameter as part of tuning Eureqa models.
---
# Row weighting blocks {: #row-weighting-blocks }
You can configure row weight to help improve performance for your models. The Row Weighting parameter, **weight_expr**, is available within the Prediction Model Parameters and can be modified as part of tuning Eureqa models.
!!! note
This Eureqa model-level row weight is separate from the DataRobot project-level row weighting (as set from [**Advanced options**](additional)). If set, the DataRobot project-level row weight affects how DataRobot calculates the validation score for the Eureqa models (e.g., performance on out-of-sample data) but has no effect on how these models are optimized.
## When to use a row weight {: #when-to-use-a-row-weight }
The following are some common scenarios for which using a row weight may help to improve performance:
* Suppose for each data point you have a confidence value that you determined while collecting the data or computed in some other program. Create a variable (i.e., a column) containing those confidence values and designate it as the row weight variable. DataRobot will weight the data accordingly, giving more weight to those values with higher confidence.
* Suppose you want to give extra weight to a few important data points. You could give those points more weight by adding a new column to your data before you upload it to DataRobot. This new variable should label important rows with 10, 100, or 1000 (or some other weight) and set the remaining rows to 1.
* Suppose you want to balance your data by giving more weight to rare events than to common ones. More specifically, suppose you want to model credit card fraud, and 99.99% of the data points are legitimate transactions while 0.01% are fraudulent. You could create a variable whose value is 1 in rows representing legitimate transactions and 9999 (i.e., 99.99% / 0.01%) in rows representing fraud, thereby creating equal pressure to model both legitimate and fraudulent cases.
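The class-balancing idea in the last scenario can be sketched as a derived weight column (a hypothetical helper, not DataRobot code; the up-weighting ratio is taken from the observed class counts):

```python
from collections import Counter

def balance_weights(labels):
    """Weight each row so every class exerts equal total pressure:
    rows in rarer classes are up-weighted by (majority count / class count)."""
    counts = Counter(labels)
    majority = max(counts.values())
    return [majority / counts[label] for label in labels]

labels = [0] * 9999 + [1]          # 99.99% legitimate, 0.01% fraud
weights = balance_weights(labels)  # legitimate rows: 1.0, the fraud row: 9999.0
```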
## Row weight variable {: #row-weight-variable }
Include a row weight variable in your dataset before it is uploaded to DataRobot to reference it as a row weight variable during model creation. Then, when tuning the model, type the name of that variable as the row weight variable. This tells DataRobot to weight the rest of the data in each row in proportion to the value of the row weight variable in that row.
## Row weight expression {: #row-weight-expression }
Some row weighting schemes can be more easily achieved with a row weight expression than with a row weight variable. When a row weight expression is defined, DataRobot evaluates it against the values in each row and weights that row with the result.
### 1 / occurrences (variable_name) {: #1-occurrences-variable_name }
This expression provides a quick way to balance data. To illustrate, let's imagine a toy dataset containing just three values of one variable:
| x | 1 / occurrences (x)|
|----:|-------------------:|
| 99 | 0.5 |
| 99 | 0.5 |
| 86 | 1 |
The value returned by **occurrences(x)** is the number of times a particular value of **x** occurs in the dataset; in this case, it would return **2** in the first row, **2** in the second row, and **1** in the third row. Selecting **1/occurrences(x)** as your row weight would therefore give the first row a weight of **1/2**, the second row a weight of **1/2**, and the third row a weight of **1**.
Returning to the credit card fraud example [(shown above)](#when-to-use-a-row-weight), you could create variable **z** with a value of **0** in rows representing legitimate transactions and **1** in rows representing fraudulent ones. Selecting a row weight of **1/occurrences(z)** would then automatically create equal pressure to model legitimate and fraudulent transactions. If new data is added, weights are automatically adjusted to maintain the balance.
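A minimal sketch of the **1/occurrences(x)** weighting applied to the toy dataset in the table above (pure Python; `Counter` stands in for DataRobot's internal occurrence count):

```python
# Compute 1/occurrences(x) weights for the three-row toy dataset above.
from collections import Counter

x = [99, 99, 86]
occ = Counter(x)                         # occurrences(x) for each value of x
weights = [1 / occ[value] for value in x]
# weights == [0.5, 0.5, 1.0], matching the table
```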
### &lt;row&gt; {: #row }
The special variable **<row>** takes on the value of the row number. Using this as the row weight will give the first row a weight of **1**, the second row a weight of **2**, and so on.
Row weighting can improve results for sparse datasets in which the target behavior only happens very rarely (such as for fraud and failures). Using this row weighting expression will help isolate and highlight those sparse signals in the data.
## Other row weight expressions {: #other-row-weight-expressions }
Aside from the special row weighting variable, the best option for creating a custom row weight is to derive a new variable (feature), use a custom expression to populate the column automatically with the desired row weights, and use that new derived variable as the row weight variable directly. For information on deriving a new variable, see the documentation for [feature transformations](feature-transforms).
The following example expressions assume the dataset contains variables x and y:
* **abs( x )** gives row weights in proportion to the absolute value of x.
* **1 / abs( x-y )** gives row weights in inverse proportion to the difference between x and y.
* **1 / <row>** gives row 1 a weight of 1, row 2 a weight of 1/2, row 3 a weight of 1/3, ...
* **0.5 + 0.5 * ( <row> <= 100 )** gives rows 1 through 100 a weight of 1 and the remaining rows a weight of 0.5. (Note that **<=** returns 1 if satisfied, 0 if unsatisfied.)
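The last two expressions above can be sketched directly in Python, where a boolean comparison likewise evaluates to 1 or 0 in arithmetic (the variable names here are illustrative):

```python
# Sketch of the <row>-based expressions above; <row> is the 1-based row number.
rows = range(1, 6)

inv_row = [1 / r for r in rows]                   # 1 / <row>
stepped = [0.5 + 0.5 * (r <= 100) for r in rows]  # 0.5 + 0.5 * (<row> <= 100)

# inv_row starts 1.0, 0.5, ...; stepped is 1.0 for every row up to 100.
```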
|
row-weighting
|
---
title: Eureqa advanced tuning
description: Eureqa models use expressions to represent mathematical relationships and transformations. DataRobot provides specialized workflows for tuning Eureqa models.
---
# Eureqa advanced tuning {: #eureqa-advanced-tuning }
Eureqa models use expressions to represent mathematical relationships and transformations. You can tune your Eureqa models by modifying building blocks, customizing the target expression, and modifying other model parameters, such as support for building blocks, error metrics, row weighting, and data splitting. To customize a Eureqa model, select the model from the Leaderboard and then click **Evaluate > Advanced Tuning**.
The following sections detail specialized workflows for tuning Eureqa models:
Topic | Describes...
----- | ------
[Tune Eureqa models](advanced-options) | Customize Eureqa models by modifying **Advanced Tuning** parameters.
[Configure building blocks](building-blocks) | Combine and configure building blocks to create a new target expression.
[Building blocks reference](eureqa-reference) | Definitions and usage of building blocks available to Eureqa models.
[Customize target expressions](custom-expressions) | Custom tune Eureqa models by modifying the target expression.
[Configure error metrics](error-metrics) | Optimize for different error metrics.
[Guidance for using error metrics](guidance) | Understand the top error metrics for different problem types.
[Configure row weighting](row-weighting) | Improve model performance with weighting.
|
index
|
---
title: Configure building blocks
description: Combine and configure building blocks to create a new Target Expression for Eureqa models.
---
# Configure building blocks {: #configure-building-blocks }
Building blocks are components of Eureqa models. As part of tuning a Eureqa model, you can combine and configure building blocks to create a new Target Expression for that model. For example, if you think that seasonality or some other cyclical trend may be a factor in your data, you can use building block configurations to create target expressions that account for those types of trends.
Building blocks provide mathematical functions and operators, including:
* Constants
* Variables
* Operators (addition, subtraction, multiplication, division)
* Functions (sin(), sqrt(), logistic(), etc.)
## How to choose building blocks {: #how-to-choose-building-blocks }
The default building blocks were selected based on extensive testing and are a good starting point. If you do decide to change the selection, let expert knowledge guide you: Which building blocks are typically used in your domain? Which appear in solutions to problems related to yours? Which are suggested by graphs of your data? Which simply seem like good candidates based on your intuition (expert or otherwise)?

## Disable building blocks {: #disable-building-blocks }
If there are building blocks you know you don't want DataRobot to include in model building, you can set them to **Disabled**. Reducing the number of building blocks speeds up the search and may increase the likelihood that DataRobot finds an exact solution; however, disabling too many blocks can preclude an exact solution if a necessary operation is unavailable. Because the DataRobot model building engine is extremely fast and iterates quickly, running additional iterations is recommended when deciding whether to disable a block.
As needed, draw on your domain expertise when adjusting the building block settings to better reflect your use case and help DataRobot find models with greater explanatory power.
## Building block complexity {: #building-block-complexity }
Each building block is assigned a default complexity value which factors into the overall complexity of any model containing that block. The complexity weights indicate to DataRobot the relative importance of model interpretability vs. accuracy. If two different models have the same accuracy, but one uses building blocks with higher complexity values, DataRobot will favor the less complex model. If you know that some building blocks are more common within your specific problem type (such as power laws) than the default value would suggest, you can manually set the building block complexity value to a lower value.
This value is also the “complexity penalty” for a building block. A building block complexity weight (or penalty) of 0 means the model can use that building block as needed without being penalized. Take care when setting building block complexity: models that use building blocks repeatedly and unnecessarily can become “cluttered”.
To calculate model complexity, DataRobot will iterate over every building block in the model's target expression, read their individual complexity weights, and calculate the total complexity weight sum for that model.
To set the complexity weight for a building block:
* Make sure the block is not set to **Disabled**.
* Type the numeric value to use as the complexity weight. Typical complexity values are from 0-4. You should assign low complexity weights (such as 0) for blocks that can be used repeatedly during model processing without penalty, and assign higher complexity weights to blocks that should be used infrequently.

### Example of complexity metrics {: #example-of-complexity-metrics }
Assuming the following building block complexities:
* Input variable: 1
* Addition: 1
* Multiplication: 0
| Expression | Complexity |
|------------|-----------:|
| **a + b** | 3 |
| **a * b** | 2 |
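The complexity calculation above can be sketched as a simple sum over the blocks in an expression. This is an illustrative helper (the token names, `COMPLEXITY` table, and `expression_complexity` function are hypothetical, not DataRobot internals):

```python
# Hedged sketch: total model complexity as the sum of per-block weights,
# using the example complexities above (variable: 1, addition: 1, multiplication: 0).

COMPLEXITY = {"var": 1, "add": 1, "mul": 0}

def expression_complexity(tokens):
    """Sum the complexity weight of every building block in the expression."""
    return sum(COMPLEXITY[t] for t in tokens)

# a + b -> two input variables plus one addition
print(expression_complexity(["var", "add", "var"]))  # 3
# a * b -> two input variables plus one multiplication
print(expression_complexity(["var", "mul", "var"]))  # 2
```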
## Available building blocks {: #available-building-blocks }
For information about all building blocks available to Eureqa models, including definitions and usage, refer to the [Building blocks reference page](eureqa-reference). To access further information about building block configuration, click the **Documentation** link provided in any Eureqa model blueprint (found in the upper right corner of any building block under the **Advanced Tuning** tab).
|
building-blocks
|
---
title: Building blocks reference
description: Provides the definitions and usage of all building blocks available to Eureqa models.
---
# Building blocks reference {: #building-blocks-reference }
This page provides the definitions and usage of all building blocks available to Eureqa models. To access further information about building block configuration, click the **Documentation** link provided in any Eureqa model blueprint (found in the upper right corner of any building block under the **Advanced Tuning** tab).
## Arithmetic {: #arithmetic }
| Building Block | Definition | Usage |
|----------------|-----------|--------|
| Addition | Returns the sum of `x` and `y`. | `x+y` or `add(x,y)` |
| Constant | `c` is a real valued constant. | `c` |
| Division | Returns the quotient of `x` and `y`. `y` must be non-zero. | `x/y` or `div(x,y)` |
| Input Variable | `x` is a variable in your prepared dataset. | `x` |
| Integer Constant | `c` is an integer constant. | `c` |
| Multiplication | Returns the product of `x` and `y`. | `x*y` or `mul(x,y)` |
| Negation | Returns the negation (additive inverse) of `x`. | `-x` |
| Subtraction | Returns the difference of `x` and `y`. | `x-y` or `sub(x,y)` |
## Exponential {: #exponential }
| Building Block | Definition | Usage |
|-------------------|--------------|-------|
| Exponential | Returns `e^x`. | `exp(x)` |
| Factorial | Returns the product of all positive integers from 1 to `x`. | `factorial(x)` or `x!` |
| Natural Logarithm | Returns the natural logarithm (base `e`) of `x`. | `log(x)` |
| Power | Returns `x` raised to the power of `y`. `x` and `y` can be any expression. | `x^y` or `pow(x,y)` |
| Square Root | Returns the square root of `x`. `x` must be positive. | `sqrt(x)` |
## Squashing functions {: #squashing-functions }
Squashing functions take a continuous input variable and map it to a constrained output range.
Recommended use: Depending on the shape of the particular squashing function, that function may be useful in identifying transition points in the data, and/or limiting the total impact of a particular term.
| Building Block | Definition | Usage |
|----------|-------------------|-------------|
| Complementary Error Function | `1.0 - erf(x)` where `erf(x)` is the integral of the normal distribution. Returns a value between 0 and 2. | `erfc(x)` |
| Error Function | Integral of the normal distribution. Returns a value between -1 and +1. | `erf(x)` |
| Gaussian Function | Returns `exp(-x^2)`. This is a bell-shaped squashing function. | `gauss(x)` |
| Hyperbolic Tangent | The hyperbolic tangent of `x`. Hyperbolic tangent is a common squashing function that returns a value between -1 and +1. | `tanh(x)` |
| Logistic Function | Returns `1/(1+exp(-x))`. This is a common sigmoid (s-shaped) squashing function that returns a value between 0 and 1. | `logistic(x)` |
| Step Function | Returns 1 if `x` is positive, 0 otherwise. | `step(x)` |
| Sign | Returns -1 if `x` is negative, +1 if `x` is positive, and 0 if `x` is zero. | `sgn(x)` |
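A few of the squashing blocks above have direct Python equivalents, shown here as an illustrative sketch (function names match the table's usage column; these are not DataRobot code):

```python
# Illustrative Python equivalents of three squashing building blocks.
import math

def logistic(x):
    """Sigmoid squashing function; maps any real x into (0, 1)."""
    return 1 / (1 + math.exp(-x))

def gauss(x):
    """Bell-shaped squashing function exp(-x^2); peaks at 1 when x = 0."""
    return math.exp(-x ** 2)

def step(x):
    """Returns 1 if x is positive, 0 otherwise."""
    return 1 if x > 0 else 0

print(logistic(0))  # 0.5
print(gauss(0))     # 1.0
print(step(-2))     # 0
```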
## Comparison/Boolean functions {: #comparisonboolean-functions }
| Building Block | Definition | Usage |
|--------------------------|-----------------|-------|
| Equal To | Returns 1 if `x` is numerically equal to `y`, 0 otherwise. | `equal(x,y)` or `x=y` |
| Greater Than | Returns 1 if `x>y`, 0 otherwise. | `greater(x,y)` or `x>y` |
| Greater Than or Equal To | Returns 1 if `x>=y`, 0 otherwise. | `greater_or_equal(x,y)` or `x>=y` |
| If-Then-Else | Returns `y` if `x` is greater than 0, `z` otherwise. If `x` is `nan`, the function returns `z`. | `if(x,y,z)` |
| Less Than | Returns 1 if `x<y`, 0 otherwise. | `less(x,y)` or `x<y` |
| Less Than or Equal To | Returns 1 if `x<=y`, 0 otherwise. | `less_or_equal(x,y)` or `x<= y` |
| Logical And | Returns 1 if both `x` and `y` are greater than 0, 0 otherwise. | `and(x,y)` |
| Logical Exclusive Or | Returns 1 if `(x<=0 and y>0)` or `(x>0 and y<=0)`, 0 otherwise. | `xor(x,y)` |
| Logical Not | Returns 0 if `x` is greater than 0, 1 otherwise. | `not( x )` |
| Logical Or | Returns 1 if either `x` or `y` are greater than 0, 0 otherwise. | `or(x,y)` |
## Trigonometric functions {: #trigonometric-functions }
Trigonometric functions are functions of an angle; they relate the angles of a triangle to the length of its sides.
Recommended use: When modeling data from physical systems containing angles as inputs.
| Building Block | Definition | Usage |
|-------------------|----------------------|---------|
| Cosine | The standard trigonometric cosine function. The angle `(x)` is in radians. | `cos(x)` |
| Hyperbolic Cosine | The standard trigonometric hyperbolic cosine function. | `cosh(x)` |
| Hyperbolic Sine | The standard trigonometric hyperbolic sine function. | `sinh(x)` |
| Sine | The standard trigonometric sine function. The angle `(x)` is in radians. | `sin(x)` |
| Tangent | The standard trigonometric tangent function. The angle `(x)` is in radians. | `tan(x)` |
## Inverse trigonometric functions {: #inverse-trigonometric-functions }
| Building Block | Definition | Usage |
|------------------------|--------------------|------------|
| Arccosine | The standard trigonometric arccosine function. | `acos(x)` |
| Arcsine | The standard trigonometric arcsine function. | `asin(x)` |
| Arctangent | The standard trigonometric arctangent function. | `atan(x)` |
| Inverse Hyperbolic Cosine | The standard inverse hyperbolic cosine function. | `acosh(x)` |
| Inverse Hyperbolic Sine | The standard inverse hyperbolic sine function. | `asinh(x)` |
| Inverse Hyperbolic Tangent | The standard inverse hyperbolic tangent function. | `atanh(x)` |
| Two-Argument Arctangent | The standard trigonometric two-argument arctangent function. | `atan2(y,x)` |
## Other functions {: #other-functions }
| Building Block | Definition | Usage |
|----------------|--------------|----------|
| Absolute Value | Returns the positive value of `x`, without regard for its sign. | `abs(x)` |
| Ceiling | Returns the smallest integer not less than `x`. | `ceil(x)` |
| Floor | Returns the largest integer not greater than `x`. | `floor(x)` |
| Maximum | Returns the maximum (signed) result of `x` and `y`. | `max(x,y)` |
| Minimum | Returns the minimum (signed) result of `x` and `y`. | `min(x,y)` |
| Modulo | Returns the remainder of `x/y`. | `mod(x,y)` |
| Round | Returns `x` rounded to the nearest integer. | `round(x)` |
|
eureqa-reference
|