---
title: Profile settings
description: How to edit your DataRobot profile settings, including account information, security settings, system settings, and notifications.
---

# Profile settings {: #profile-settings }

!!! info "Availability information"

    If your organization uses LDAP as an external account management system for single sign-on, you can't edit your profile in DataRobot. Contact your system administrator for assistance.

To view your available profile settings, click your profile avatar (or the default avatar ![](images/icon-gen-settings.png)) in the upper-right corner of DataRobot and click **Profile**. On the **Profile** page, you can access the following tabs:

| Tab | Description |
|--------|-------------|
[Account](#edit-account-information) | View and edit your private account information, avatar, and public profile.|
[Security](#configure-security-settings) | Configure your login password and two-factor authentication.|
[System](#configure-system-settings) | Configure your DataRobot language setting, color theme, and CSV export byte order mark (BOM) inclusion.|
[Notifications](#configure-notification-settings) | Mute email notifications and configure Autopilot completion notifications.|

## Edit account and profile information {: #edit-account-and-profile-information }

To personalize your DataRobot account, edit your account information and public profile on the **Account** tab.

### Edit account information {: #edit-account-information }

To edit your DataRobot account information, edit any of the following fields and then click **Save changes**:

* **First name**
* **Last name**
* **Phone number**
* **Login or email address**

### Change avatar {: #change-avatar }

!!! info "Availability information"

    The custom avatar feature is available only for DataRobot managed AI Platform deployments.

To upload or change your avatar, upload a Gravatar at the top of the **Account** tab:

1. Click **Upload Picture** to open [Gravatar](https://en.gravatar.com/){ target=_blank } in a new window. Log in to Gravatar or create an account.
2. Confirm that you registered your DataRobot email address with Gravatar.
3. If you haven't already, upload an avatar to your account. (Click **My Gravatars** > **Add a new image**.)
4. From the **[Manage Gravatars](https://en.gravatar.com/emails/){ target=_blank }** page, select a default image and click **Confirm**.
5. Return to DataRobot and refresh the page. If your new avatar does not appear, [verify that the emails match](https://en.gravatar.com/site/check/){ target=_blank }.

!!! tip

    To see your new avatar, you may need to clear your browser cache and refresh the page. If your avatar still does not appear, wait approximately 10 minutes and try again (Gravatar doesn't serve new images instantaneously).

### Edit public profile {: #edit-public-profile }

If you choose to provide public profile information, under **Public Profile**, add any of the following information, and then click **Save changes**:

* **Display name**
* **Company**
* **Industry**
* **About me**

### Edit professional networks {: #edit-professional-networks }

If you choose to provide professional network information, under **Professional Networks**, add any of the following information, and then click **Save changes**:

* **LinkedIn profile**
* **Kaggle profile**

## Configure security settings {: #configure-security-settings }

To secure your DataRobot account, configure your password and two-factor authentication settings on the **Security** tab.

### Change password {: #change-password }

To change your password, enter your current password, enter a new password, and then confirm the new password. If your current password is incorrect, or the new password and confirmation fields don't match, you receive an error message.

DataRobot passwords must meet the following requirements:

* Only printable ASCII characters
* Minimum one uppercase letter
* Minimum one number
* Minimum 8 characters
* Maximum 512 characters
* Username and password cannot be the same

### Enable two-factor authentication (2FA) {: #enable-two-factor-authentication-2fa }

Two-factor authentication (2FA) is an opt-in feature that provides additional security for DataRobot users. See the section on [2FA](2fa) for setup information.

## Configure system settings {: #configure-system-settings }

To configure system settings such as display language, color theme, and CSV export, use the **System settings** tab.

### Change display language {: #change-display-language }

To change the language DataRobot displays, open the **Language** dropdown menu and click your preferred language. DataRobot reloads to display in your language.

!!! note

    Changing the DataRobot display language does not affect uploaded data, model names, and some other UI elements.

### Change display theme {: #change-display-theme }

To change the color of the application display (dark theme by default), use the **Themes** dropdown menu to select **Dark** or **Light**.

### Include byte order mark (BOM) {: #include-byte-order-mark-bom }

The [byte order mark (BOM)](https://en.wikipedia.org/wiki/Byte_order_mark){ target=_blank } is a byte sequence that indicates encoding by adding three bytes to the beginning of a file. DataRobot allows you to enable or disable inclusion of this marker in your profile settings. Software that recognizes the BOM can then display files correctly.

When exported CSV files include non-ASCII characters, use the BOM to ensure compatibility with file editors that don't verify encoding. For example, without the BOM, Excel may misrepresent characters from languages other than English. Modern versions of Excel correctly display international characters if you include the BOM.
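The effect of the setting can be illustrated outside of DataRobot. As a minimal sketch (the file names here are hypothetical), enabling the BOM amounts to prepending the three UTF-8 BOM bytes `EF BB BF` to the exported file:

```shell
# Hypothetical exported CSV containing non-ASCII characters.
printf 'name,city\nRené,Zürich\n' > export.csv

# Prepend the UTF-8 BOM (bytes EF BB BF, written in octal for portability)
# so editors such as Excel can detect the encoding without inspecting content.
printf '\357\273\277' > export_bom.csv
cat export.csv >> export_bom.csv

# Inspect the first three bytes of the new file.
head -c 3 export_bom.csv | od -An -tx1
```

Opening `export_bom.csv` in Excel renders the accented characters correctly; `export.csv` may not, depending on the editor's encoding detection.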
To enable or disable BOM insertion in exported CSV files, under **CSV export**, switch the **Include BOM** toggle. This setting is disabled by default.

## Configure notification settings {: #configure-notification-settings }

To mute email notifications and configure Autopilot completion notifications, use the **Notifications** tab.

### Mute email notifications {: #mute-email-notifications }

To mute all DataRobot email notifications, such as Autopilot notifications and deployment monitoring notifications, enable the **Mute all email notifications** setting.

### Configure Autopilot notifications {: #configure-autopilot-notifications }

If you close your browser or log out, DataRobot continues building models in any projects that have started the model building phase. To have DataRobot send a notification when Autopilot completes (via browser alert or email), use the **Enable email notification when Autopilot has finished** and **Enable browser notification when Autopilot has finished** settings.
---
title: Manage account settings
description: How to access your DataRobot account settings, change your password, update your profile, find your API key, update your display language, and more.
---

# Manage account settings {: #manage-account-settings }

To access profile information, [two-factor authentication](2fa), developer tools, data sources, and membership assignments, click your [profile icon](profile-settings) (or the default avatar ![](images/icon-gen-settings.png)) in the upper-right corner of DataRobot.

![](images/help-options.png)

!!! note

    The options available in the dropdown depend on your DataRobot permissions.

The following table summarizes the options:

=== "SaaS"

    Topic | Describes...
    ------|-------------
    **Profile** | :~~:
    [Account](profile-settings#edit-account-information) | Update your account information and avatar.
    [Security](profile-settings#configure-security-settings) | Change your password and configure two-factor authentication.
    [System](profile-settings#configure-system-settings) | Update your display language, color theme, and CSV export settings.
    [Notifications](profile-settings#configure-notification-settings) | Mute email notifications and update Autopilot completion notifications.
    **Developer tools** | :~~:
    [API keys](api-key-mgmt#api-key-management) | Access and manage your API keys.
    [MLOps agent tarball](api-key-mgmt#mlops-agent-tarball) | Download the MLOps agent tarball.
    [Portable Prediction Server](api-key-mgmt#portable-prediction-server-docker-image) | Download the Portable Prediction Server Docker image.
    **Data Connections** | :~~:
    [Data Connections](data-conn) | Add, delete, modify, and test data connections.
    **Credentials Management** | :~~:
    [Credentials Management](stored-creds) | Add and manage securely stored credentials to reuse when accessing secure data sources.
    **Membership** | :~~:
    [Membership](view-memberships) | View the organizations and groups you belong to and join groups.
    **Admin only** | :~~:
    [Settings](user-settings) | Enable and disable product features and permissions. (Requires "Can manage own settings" permission.)
    [User Activity Monitor](main-uam-overview) | Access and analyze usage data and prediction statistics.
    [Maintenance Notifications](webhooks/index) | Integrate flexible and centralized notifications into an organization's processes around change and incident management.
    [Groups](manage-groups) | Create and delete groups, and configure organization membership.
    [Users](manage-users) | Create and deactivate users, and configure their permissions. (Requires "Can manage users" permission.)

=== "Self-Managed"

    Topic | Describes...
    ------|-------------
    **Profile** | :~~:
    [Account](profile-settings#edit-account-information) | Update your account information and avatar.
    [Security](profile-settings#configure-security-settings) | Change your password and configure two-factor authentication.
    [System](profile-settings#configure-system-settings) | Update your display language, color theme, and CSV export settings.
    [Notifications](profile-settings#configure-notification-settings) | Mute email notifications and update Autopilot completion notifications.
    **Developer tools** | :~~:
    [API keys](api-key-mgmt#api-key-management) | Access and manage your API keys.
    [MLOps agent tarball](api-key-mgmt#mlops-agent-tarball) | Download the MLOps agent tarball.
    [Portable Prediction Server](api-key-mgmt#portable-prediction-server-docker-image) | Download the Portable Prediction Server Docker image.
    **Data Connections** | :~~:
    [Data Connections](data-conn) | Add, delete, modify, and test data connections.
    **Credentials Management** | :~~:
    [Credentials Management](stored-creds) | Add and manage securely stored credentials to reuse when accessing secure data sources.
    **Membership** | :~~:
    [Membership](view-memberships) | View the organizations and groups you belong to and join groups.
    **Admin only** | :~~:
    [Settings](user-settings) | Enable and disable product features and permissions. (Requires "Can manage own settings" permission.)
    [Resource Monitor](resource-monitor) | Examine DataRobot's active modeling workers across the installation: view general information about the current state of the application and specific information about the status of components.
    [User Activity Monitor](main-uam-overview) | Access and analyze usage data and prediction statistics.
    [Organizations](manage-orgs) | Create and delete organizations.
    [Maintenance Notifications](webhooks/index) | Integrate flexible and centralized notifications into an organization's processes around change and incident management.
    [Groups](manage-groups) | Create and delete groups, and configure organization membership.
    [Users](manage-users) | Create and deactivate users, and configure their permissions. (Requires "Can manage users" permission.)
---
title: Developer tools
description: How to use DataRobot's developer tools, including API key management, the MLOps agent tarball, and the Portable Prediction Server Docker image.
---

# Developer tools {: #developer-tools }

DataRobot provides multiple developer tools for you to use when making prediction requests or engaging with the DataRobot API. Currently available tools are:

* [API key management](#api-key-management)
* [The MLOps agent tarball](#mlops-agent-tarball)
* [Portable Prediction Server Docker image](#portable-prediction-server-docker-image)

Click your user icon and navigate to **Developer Tools** to access these features:

![](images/api-key-1.png)

## API key management {: #api-key-management }

API keys are the preferred method for authenticating web requests to the DataRobot API and Prediction API; they replace the legacy API token method. You include the API key in the request header, and the application authenticates you as that user. All DataRobot API endpoints use API keys as a mode of authentication.

Generating multiple API keys allows you to create a new key, update your existing integrations, and then revoke the old key, all <em>without disruption of service</em> of your API calls. This also allows you to generate distinct keys for different integrations (script A using key A, script B using key B).

Note that if you were previously using an API token, that token has been upgraded to an API key. All existing integrations will continue to function as expected, both for the DataRobot API and the [Prediction API](dr-predapi).

??? tip "Additional resources"

    * The [developer portal](https://developers.datarobot.com/){ target=_blank }.
    * GitHub [community repositories](https://github.com/datarobot-community){ target=_blank }.

{% include 'includes/github-sign-in.md' %}

### Access API key management {: #access-api-key-management }

You can register and manage (name, create, and delete) multiple API keys.
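For example, an authenticated request carries the key in an `Authorization` header. This is a sketch only: the key value below is a placeholder, and the endpoint URL is illustrative; substitute your own DataRobot URL and a key copied from **Developer Tools**.

```shell
# Placeholder key; substitute a key copied from Developer Tools.
api_key="example-api-key"

# Every DataRobot REST endpoint accepts the key in this header, for example:
#   curl -H "Authorization: Token ${api_key}" https://app.datarobot.com/api/v2/projects/
auth_header="Authorization: Token ${api_key}"
echo "${auth_header}"
```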
To access this page, click your user icon and navigate to the **Developer Tools** page. The **API Keys** section lists your API keys with a variety of options available to you:

![](images/api-key-2.png)

* Search (1) the list of API keys by name or date created.
* Create (2) a new key.
* Access (3):
    * API documentation (REST API, Python client, and R client).
    * The [Developer Portal](https://developers.datarobot.com){ target=_blank }.
    * [GitHub community repositories](https://github.com/datarobot-community){ target=_blank }.
* Manage API keys (4):
    * Copy (![](images/icon-copy.png)) a key to your clipboard (contents blurred out in this image) to paste elsewhere.
    * Edit (![](images/icon-rename.png)) the name of a key.
    * Delete (![](images/icon-delete.png)) a key.

Each individual key has three pieces of information:

![](images/api-key-3.png)

* The name of the key (1), which you can edit.
* The date the key was last used (2). Newly created keys instead display "Wasn't used."
* The key itself (3).

### Create a new key {: #create-a-new-key }

1. Click your user icon in the top right corner and navigate to the **Developer Tools** page.
2. To generate a new key, click **Create new key**.
3. Name the new key, and click **Save**. This activates your new key, making it ready for use.

![](images/api-key-4.png)

Once created, each individual key has three pieces of information:

![](images/api-key-3.png)

### Delete an existing key {: #delete-an-existing-key }

To delete an existing key, click the trash can icon next to the key you wish to delete. A dialog box warns you about the impacts of deletion. Click **Delete** to remove your key.

![](images/api-key-5.png)

## MLOps agent tarball {: #mlops-agent-tarball }

!!! info "Availability information"

    Contact your DataRobot representative for information on enabling the MLOps agent and accessing the agent tarball.

DataRobot offers the [MLOps agent](mlops-agent/index) as a solution for monitoring external models outside of DataRobot and reporting statistics back. To monitor a deployment of this kind, you must first implement the following software components, provided by DataRobot:

* The MLOps library (available in Python, Java, and R)
* The MLOps agent

These components are part of an installer package available as a tarball in the DataRobot application.

### Download the MLOps agent tarball {: #download-the-mlops-agent-tarball }

The MLOps agent tarball can be accessed from two locations: the **Developer Tools** section and the [Predictions > Monitoring](code-py#monitoring-snippet) tab for a deployment.

Click your user icon and navigate to **Developer Tools**. Under the **External Monitoring Agent** header, click the download icon. Additional documentation for setting up the agent is included in the tarball.

![](images/api-key-6.png)

!!! note

    You can also download the MLOps Python libraries from the public [Python Package Index site](https://pypi.org){ target=_blank }. Download and install the [DataRobot MLOps metrics reporting library](https://pypi.org/project/datarobot-mlops){ target=_blank } and the [DataRobot MLOps Connected Client](https://pypi.org/project/datarobot-mlops-connected-client){ target=_blank }. These pages include instructions for installing the libraries.

## Portable Prediction Server Docker image {: #portable-prediction-server-docker-image }

!!! info "Availability information"

    The Portable Prediction Server image may not be available in some installations. Review the [availability guidelines](portable-pps#obtain-the-pps-docker-image) for more information.

Download the Portable Prediction Server Docker image from the **Developer Tools** page:

![](images/api-key-7.png)

You can see some important information about the image:

| Element | Description |
|---------|-------------|
| Image name (1) | The name of the image archive file that will be downloaded. |
| Image creation date (2) | The date that the image was built. |
| File size (3) | The size of the *compressed image* to be downloaded. Be aware that the uncompressed image size can exceed 12GB. |
| Docker `Image ID` (4) | A shortened version of the Docker `Image ID`, as displayed by the `docker images` command. It is content-based, so regardless of the image tag, this value remains the same. Use it to compare versions with the image you are currently running. |
| Hash (5) | Hash algorithm and content hash sum. Use it to check file integrity after download (see the example below). Currently, `SHA256` is used as the hash algorithm. |

Click the download icon and wait for the file to download. Due to image size, the download may take minutes (or even hours) depending on your network speed. Once the download completes, check the file integrity using its hash sum. For example, on Linux:

```shell-session
sha256sum datarobot-portable-prediction-api-7.0.0-r1736.tar.gz
5bafef491c3575180894855164b08efaffdec845491678131a45f1646db5a99d datarobot-portable-prediction-api-7.0.0-r1736.tar.gz
```

If the checksum matches the value displayed in the image information (Hash value (5), above), the image was downloaded successfully and can be safely [loaded to Docker](portable-pps#load-the-image-to-docker).
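The comparison can also be scripted. This sketch reuses the example file name and hash from above; substitute the values shown on your own **Developer Tools** page:

```shell
# Example values from the listing above; replace with the file name and
# hash shown on your Developer Tools page.
image="datarobot-portable-prediction-api-7.0.0-r1736.tar.gz"
expected="5bafef491c3575180894855164b08efaffdec845491678131a45f1646db5a99d"

# Compute the hash of the downloaded file and compare it to the expected value.
actual="$(sha256sum "${image}" 2>/dev/null | awk '{print $1}')"
if [ "${actual}" = "${expected}" ]; then
    echo "Checksum OK: the image can be loaded into Docker."
else
    echo "Checksum mismatch: re-download the image."
fi
```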
---
title: Membership assignments
description: How to see which organization and groups your system administrator has assigned you to.
---

# Membership assignments {: #membership-assignments }

From the **Membership** page, you can see which organization and groups your system administrator has assigned you to, and join other groups.

![](images/membership.png)

Organization and group assignments allow system administrators to manage users, control project sharing, apply actions across user populations, etc. You may be a member of up to ten groups or not a member of any group. Likewise, you may be a member of a single organization or not a member of any organization. System administrators choose how they want to configure user membership assignments; only they can create and modify these memberships.
---
title: Monitor resource usage
description: How to monitor allocation and availability of modeling workers (compute resources) in a SaaS or standalone cluster environment.
platform: self-managed-only
---

# Monitor resources {: #monitor-resources }

!!! info "Availability information"

    **Required permission:** Enable Resource Monitor

The Resource Monitor provides visibility into DataRobot's active modeling workers across the installation, providing general information about the current state of the application and specific information about the status of components. With this service in place, you can easily track user activity on each project and know when DataRobot has available resources. Specifically, the Resource Monitor provides the number of currently running jobs, the number of allowed concurrent jobs, and the number of jobs waiting for a worker. The tool also reports which specific users are employing system resources. Additionally, monitoring resources over time helps to determine whether your organization has the correct number of modeling workers to meet usage needs.

## Modeling worker terminology {: #modeling-worker-terminology }

The Resource Monitor reports on the system's queue and workers, both overall and for individual users. See the [overview](admin-overview#what-are-workers) for a discussion of workers; the following table describes the terminology used to describe DataRobot activity:

| Term | Description |
|------|--------------|
| Jobs | The tasks DataRobot completes with workers, such as model building and certain calculations. Statistics are based on jobs. |
| Modeling worker | The workers that run jobs displayed in the Worker Queue, such as model building or Feature Impact calculations. Other worker tasks (for example, EDA) do not use modeling workers. |
| In Progress (or running) | A job that has received a worker and is currently executing on that worker. |
| Waiting for resources (or waiting for worker) | A job that is ready to execute but has not yet received a worker. These jobs appear as "Waiting for worker" in the Processing section of the Worker Queue. |
| Queued | A job that is in the queue but is not ready to execute. These jobs appear in the Queue section of the Worker Queue. |
| Active User | A user that is the owner of at least one in-progress or waiting job. |

## Monitor resources: standalone {: #monitor-resources-standalone }

The Resource Monitor in a standalone installation provides an at-a-glance view of modeling worker job requests.

![](images/resource-monitor-docker.png)

The following table describes the fields displayed by the standalone version of the Resource Monitor. Use the [refresh option](#refresh-the-resource-monitor) to redisplay results.

| **Field** | **Displays...** |
|-----------|-----------------|
| Total | Total number of workers allocated to the installation. |
| In use | Number of workers currently in use across the system. |
| Not in use | Number of workers not currently in use and therefore available. |
| Jobs waiting | Number of queued jobs. |
| Users waiting | Number of individual users with at least one job waiting. |
| Demand | Number of jobs trying to run vs. the number allowed by the organization's license (capacity). To calculate, add all *In progress* and *Waiting* jobs for all active users and divide that value by the total number of available workers. |
| Active now | Number of users with a job running or waiting that requires a modeling worker. This value matches the total of the *In use* and *Jobs waiting* fields. |
| Worker usage by user | Graphic and breakdown of per-active-user usage. (See [Interpret "Worker usage by user"](#interpret-worker-usage-by-user).) |

## Interpret "Worker usage by user" {: #interpret-worker-usage-by-user }

The <em>Users and current activity</em> section reports on users that are actively using the system. An active user is one that has at least one running or waiting job. The bar graph is a quick visual indicator of usage, with active users listed below it. For each user name, DataRobot displays:

* <em>In progress</em>: the number of in-progress jobs. The number of jobs a user can have in progress is determined by both system availability and the individual allowance.
* <em>Waiting</em>: the number of jobs awaiting an available worker.
* <em>Max workers</em>: an individual's worker allowance. This value corresponds to the maximum setting of the **Workers** value at the top of the Worker Queue.

![](images/resource-monitor-usage.png)

## Refresh the Resource Monitor {: #refresh-the-resource-monitor }

DataRobot refreshes the Resource Monitor display at the interval selected from the dropdown. Expand the dropdown to change the interval, or click the **Refresh Now** button to immediately update the page.
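The *Demand* calculation described above can be sketched with shell arithmetic; the job and worker counts here are hypothetical:

```shell
# Hypothetical snapshot of the Resource Monitor counts.
in_progress=12    # jobs currently running on workers
waiting=6         # jobs waiting for a worker
total_workers=20  # workers allocated to the installation

# Demand = (in-progress + waiting jobs) / total workers, shown as a percentage.
demand_pct=$(( (in_progress + waiting) * 100 / total_workers ))
echo "Demand: ${demand_pct}%"   # prints: Demand: 90%
```

A value above 100% indicates that more jobs are trying to run than there are workers available to run them.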
---
title: Apply a DataRobot license
description: Learn how to apply a DataRobot license using either the user interface or the DataRobot API.
platform: self-managed-only
---

# Apply a DataRobot license {: #apply-a-datarobot-license }

This page explains how to apply a DataRobot license using either the application interface or the DataRobot API. Note that this workflow is exclusive to administrators and requires you to be logged in to DataRobot with admin permissions.

## Apply a license in the DataRobot application {: #apply-a-license-in-the-datarobot-application }

1. Access the DataRobot cluster in your browser.
2. Click the profile icon in the top-right corner of the page and select **License**.

    ![](images/license-1.png)

3. On the **License** page, enter the license key in the field and click **Validate**.

    ![](images/license-2.png)

4. After validating, DataRobot lists the features and functionality included in the subscription (blurred in the image below).

    ![](images/license-3.png)

5. Confirm that the validated license lists the correct subscription features, then click **Submit**.

## Apply a license via the API {: #apply-a-license-via-the-api }

Alternatively, administrators can validate a DataRobot license using DataRobot's REST API. To do so, provide your information in the snippet below and execute the REST API calls:

```shell
# Set the URI of the DataRobot App node
# Ex. https://datarobot.example.com
# Ex. http://10.2.3.4
dr_app_node="http://10.2.3.4"

# If you have a username and password, start here
# Set the initial username
admin_username=localadmin@datarobot.com

# Set the local administrator user password
admin_password=""

# Read the license
ldata=$(cat ./license.txt)

# Apply the license with the following commands
# Log in to the App Node
curl --silent -X POST \
  --cookie "/tmp/cookies.txt" \
  --cookie-jar "/tmp/cookies.txt" \
  -H "Content-Type: application/json" \
  -d '{"username":"'"${admin_username}"'","password":"'"${admin_password}"'"}' \
  ${dr_app_node}/account/login

# Get an API key
api_key=$(curl --silent -X POST \
  --cookie "/tmp/cookies.txt" \
  --cookie-jar "/tmp/cookies.txt" \
  -H "Content-Type: application/json" \
  -d '{"name":"apiKey"}' \
  ${dr_app_node}/api/v2/account/apiKeys/ | cut -d ',' -f 5 | cut -d '"' -f 4)

# Apply the license
curl --silent -w "%{http_code}\n" -X PUT \
  -H "Content-Type: application/json" \
  -H "Authorization: Token ${api_key}" \
  -d '{"licenseKey":"'"${ldata}"'"}' \
  ${dr_app_node}/api/v2/clusterLicense/

# Expected output: 200
```
---
title: Delete and restore projects
description: For administrators, how to permanently delete a project, or restore a project temporarily deleted by a user.
platform: self-managed-only
---

# Delete/restore projects {: #deleterestore-projects }

!!! info "Availability information"

    **Required permission:** Can delete/restore projects

As an administrator, you can delete and restore (or recover) projects. Deleting projects is a valuable tool that can help clear space for new projects. When the project owner deletes a project, it is not _permanently_ deleted; only an administrator can permanently delete projects, or restore them if needed.

## Permanently delete projects {: #permanently-delete-projects }

To permanently delete a project:

1. Expand the profile icon located in the upper right and click **Manage Deleted Projects**:

    ![](images/manage-deletedprojects-dropdown.png)

    The **Manage Deleted Projects** tab shows all deleted projects for all users.

    ![](images/perm-delete-project.png)

2. Permanently delete one or more projects at a time, as follows:

    * To delete one project, click the "X" icon for that project (shown above).
    * To delete multiple projects at one time, select the checkbox for each project to delete, or from **Menu** choose **Select All** (to delete all projects). Then, from **Menu**, choose **Permanently Delete Selected Projects**.

    ![](images/perm-multidelete-projects.png)

    A warning message prompts you to confirm that you want to delete the project(s).

    ![](images/confirm-perm-delete-project.png)

3. Click to delete the project(s).

!!! note

    Once you delete a project from the **Manage Deleted Projects** page, it is permanently deleted and cannot be recovered.

## Restore projects {: #restore-projects }

Any project listed on the **Manage Deleted Projects** page can be restored.

1. Open the **Manage Deleted Projects** page (if not open). The **Manage Deleted Projects** tab shows all deleted projects for all users.

    ![](images/recover-deleted-projects.png)

2. Restore one or more projects at a time, as follows:

    * To restore one project, click the circular arrow pointing in the counter-clockwise direction (shown above).
    * To restore multiple projects at one time, select the checkbox for each project to restore, or from **Menu** choose **Select All** (to restore all projects). Then, from **Menu**, choose **Recover Selected Projects**.

    A warning message prompts you to confirm that you want to restore the project(s).

    ![](images/confirm-project-restoration.png)

3. Click to restore the project(s).

Restored projects are immediately available to the owner and to anyone sharing the project.
--- title: Manage access and licenses description: How to manage access to DataRobot, including setting up a DataRobot license or a custom user agreement, and configuring SSO SAML for users. platform: self-managed-only --- # Manage access {: #manage-access } To manage access to DataRobot, you may need to: * Manage your DataRobot license. * Create a custom user agreement. * Configure SSO SAML for users. ## Manage licenses {: #manage-licenses } !!! info "Availability information" **Required permission:** Enable Application Management A valid, unexpired DataRobot license is required for model building. Use the **License** page to apply a new license key to the deployed cluster when the current license is expiring or close to expiring. Contact Support to obtain a new DataRobot license key, when required. The application banner shows messages, visible to all users, when: * The current license is expiring within 30 days (by default); users can click the banner to snooze for four days at a time. * The license has expired. If your license expires before applying the new license, users continue to have access to DataRobot and can make predictions using existing models. They cannot build new models or new insights or generate Compliance Documentation; existing elements are still available in the UI, however. If model building (EDA2) is running for a project when the license expires, the current round finishes. When you apply the new license, existing projects resume and users can again create new models and use all features. ### Apply a new license {: #apply-a-new-license } !!! note If you do not already have the new license key, you must first request it from Support. Once you have the license: 1. Expand the profile icon located in the upper right and click **License**. 
    The page shows license details, including expiration date, concurrent workers limit, maximum active users, prepaid deployment limit, maximum deployment limit, and in some instances, subscription features:

    ![](images/admin-license-tab2.png)

2. Copy the license key you received from Support, paste it into the **License Key** field, and click **Validate**.

    ![](images/admin-license-tab5.png)

3. Review the details associated with the provided license key and, if correct, click **Submit**.

    ![](images/admin-license-tab3.png)

When successful, a message in the application banner shows that the license is valid, and the tooltip shows the new license expiration date. It may take a few moments for the license change to take effect across your deployed cluster.

!!! note

    The details shown for a specific license are based on your subscription. For more information on license details, contact Support.

## Create a user agreement {: #create-a-user-agreement }

!!! note

    **Required permissions:** "Can manage users", "Can manage user agreement"

The custom user agreement (also known as a "clickthrough agreement") requires users to accept terms in order to access DataRobot. If the deployed cluster is configured to require a clickthrough agreement, each new user created for v5.2 or later is presented with the user agreement by default. For non-LDAP authentication integrations only, you can [modify the default settings for a user](manage-users#create-user-accounts) so they don't see a user agreement when they log in.

If the deployed cluster requires a user agreement but a custom agreement was not created, the standard DataRobot Subscription Agreement is presented to new users when they log in. Users who do not accept the user agreement terms are redirected back to the login page. These users cannot log in and access DataRobot until they explicitly accept the license terms (i.e., they click the **Agree** button, or the equivalent).

!!! note

    Users authenticated to use DataRobot prior to version 5.2 do not see the agreement and continue to have access to DataRobot as before.

To create the agreement:

1. Expand the profile icon in the upper right and click **Manage User Agreement** from the dropdown menu. By default, the agreement template appears with the content of the DataRobot Subscription Agreement; you can change some or all of this information to create your custom user agreement.

    ![](images/admin-clickthrough-default.png)

2. Replace the default information as needed to create your custom agreement.

    | Field | Description |
    |---------------------|-------------|
    | Title | Title for the custom user agreement, formatted as plain text (required; maximum of 200 characters). |
    | Body | Body content for the agreement; the recommended format is HTML (required; maximum of 200,000 characters). If provided as plain text, there is no formatting, including no line breaks. |
    | Accept Button Text | Label for the Accept button, formatted as plain text (required; maximum of 30 characters). |
    | Decline Button Text | Label for the Decline button, formatted as plain text (required; maximum of 30 characters). |

3. Click **Update** to submit the user agreement to DataRobot for immediate use. When successful, you see the message "User agreement was successfully updated." New users configured to see the user agreement will see the updated version when they log in.

## Configure SSO SAML {: #configure-sso-saml }

!!! note

    **Required cluster configuration:** "Enable SAML SSO configuration management"

!!! warning

    SSO SAML will be deprecated in an upcoming release. You must configure [Enhanced SAML Single Sign-on](sso-ref) before DataRobot 7.1.
    If you need help configuring Enhanced SAML SSO, contact your DataRobot representative.

DataRobot can use external services (Identity Providers, or IdPs) for user authentication through Single Sign-On (SSO) technology. DataRobot supports SSO using the SAML (Security Assertion Markup Language) standard protocol. When users log in to a DataRobot cluster configured for SSO, they click a Single Sign-On button that redirects them to the IdP's authentication page. After signing in successfully, users are redirected back to DataRobot.

If supported, configure the identity provider integration settings as follows:

1. Click the profile icon in the top right corner and select **APP ADMIN > Manage Users** from the dropdown menu.

2. Click **Manage SSO** and configure the SSO settings described in the table below. When finished, click **Create**.

| Field | Description |
|-------|-------------|
| Entity Id | Unique identifier, provided by the Identity Provider. |
| IdP Metadata URL | Link to an XML document with integration-specific information. |
| Single Sign-On URL | DataRobot URL to redirect the user to after successful authentication on the Identity Provider's side. |
| Single Sign-Out URL | DataRobot URL to redirect the user to after sign-out on the Identity Provider's side. |
| Verify IdP Metadata HTTPS Certificate | If selected, the public certificate of the Identity Provider is required. |
| User Session Length (sec) | Session cookie expiration time. The default is one month. |
| SP Initiated Method | SAML method used to start authentication negotiation. |
| IdP Initiated Method | SAML method used to move the user to DataRobot after successful authentication. |
| Identity Provider Metadata | XML document with integration-specific information (needed if the Identity Provider doesn't provide an IdP Metadata URL). |
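If your Identity Provider doesn't publish a metadata URL, you can paste the metadata XML directly into the **Identity Provider Metadata** field. As a rough illustration only, a SAML 2.0 IdP metadata document typically looks like the following; the entity ID, endpoint URL, and certificate contents are placeholders, and your IdP generates the real document:

```xml
<EntityDescriptor xmlns="urn:oasis:names:tc:SAML:2.0:metadata"
                  entityID="https://idp.example.com/metadata">
  <IDPSSODescriptor protocolSupportEnumeration="urn:oasis:names:tc:SAML:2.0:protocol">
    <!-- Public signing certificate goes here (placeholder) -->
    <KeyDescriptor use="signing"/>
    <SingleSignOnService
        Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect"
        Location="https://idp.example.com/sso"/>
  </IDPSSODescriptor>
</EntityDescriptor>
```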
---
title: Manage the cluster
description: Summarizes the admin tasks for managing a deployed cluster, such as connecting to external systems, and monitoring resources, activity, access, and licenses.
platform: self-managed-only
---

# Manage the cluster {: #manage-the-cluster }

!!! info "Availability information"

    Only Self-Managed AI Platform admins can manage their cluster.

To manage the deployed cluster, you can perform the following activities.

| Topic | Description |
|-------|-------------|
| [Configure external systems](sys-config) | Use the configuration management interface to ensure compatible configuration for LDAP and SMTP. |
| [Manage JDBC drivers](manage-drivers) | Add, delete, and control JDBC access. |
| [Manage access](manage-access) | Manage access to the cluster, including licenses, custom user agreements, and configuring SAML SSO. |
| [Monitor resources](resource-monitor) | Monitor resources in a standalone (Docker) deployment. |
| [Delete/restore projects](delete-restore) | Learn to delete and restore (or recover) projects. |
| [Apply a DataRobot license](license) | Apply a DataRobot license for a user via the UI or the API. |
---
title: Configure external systems
description: How to configure external LDAP and SMTP systems easily, by using DataRobot's configuration management interface.
platform: self-managed-only
---

# Configure external systems {: #configure-external-systems }

!!! info "Availability information"

    **Required permission:** Enable System Configuration

To minimize iteration time when configuring external LDAP and SMTP systems, DataRobot provides a configuration management interface to ensure compatible configuration options. Using the **System Configuration** interface, configuration values related to LDAP and SMTP can be controlled dynamically without the need to reconfigure the application, saving time and making the process more user friendly. All changes made through the interface are also recorded in the audit logs for future reference.

To work with the interface, select **System Configuration** from the **Settings** dropdown:

![](images/sysconfig-1.png)

## Change configuration settings {: #change-configuration-settings }

The method for setting values is the same for all configuration options. See the full lists of [LDAP](#ldap-settings) and [SMTP](#smtp-settings) settings below. To change settings:

1. Select an option from the **System Configuration** page (LDAP or SMTP). The list of configuration settings updates to display those specific to that option. The current configuration setting value is automatically populated in the **OPTION** field. The displayed value is based on the default and/or the `config.yaml` value, and is displayed in read-only mode.

2. To override a current configuration setting value, toggle the setting's **OVERRIDE**. When toggled on, the **OPTION** field becomes editable.

    ![](images/sysconfig-2.png)

3. Change the **OPTION** field to the desired value and click **SAVE**. The system immediately updates to use the new configuration value across the application (and the toggle remains on).

4. To revert settings to the default/`config.yaml` value:

    - For an individual value, toggle the **OVERRIDE** option off.
    - For all configuration values for a given option (i.e., LDAP or SMTP), click **Reset**.

    ![](images/sysconfig-3.png)

## Configuration-specific notes {: #configuration-specific-notes }

The following sections provide option-specific details.

### LDAP and SMTP notes {: #ldap-and-smtp-notes }

For LDAP and SMTP options, in addition to the **Save** and **Reset** buttons, there is also a **Test** button. Use it to validate the current configuration setting values and catch errors that could result from an invalid configuration (e.g., incorrect LDAP or SMTP authentication settings). It is *highly recommended* that you use the **Test** option to confirm there are no errors before saving changes to LDAP- and SMTP-related configuration settings.

!!! note

    Saving invalid configuration settings could result in users (including the Admin user):

    - LDAP: being locked out of the application, requiring a fix from your support representative.
    - SMTP: losing the ability to generate email notifications.

## Configuration settings {: #configuration-settings }

The following sections list settings by type.
### LDAP settings {: #ldap-settings }

- `USER_AUTH_LDAP_ATTR_EMAIL_ADDRESS`
- `USER_AUTH_LDAP_ATTR_FIRST_NAME`
- `USER_AUTH_LDAP_ATTR_LAST_NAME`
- `USER_AUTH_LDAP_ATTR_UNIX_USER`
- `USER_AUTH_LDAP_BIND_PASSWORD`
- `USER_AUTH_LDAP_CONNECTION_OPTIONS`
- `USER_AUTH_LDAP_DIST_NAME_TEMPLATE`
- `USER_AUTH_LDAP_GLOBAL_OPTIONS`
- `USER_AUTH_LDAP_GROUP_SEARCH_BASE_DN`
- `USER_AUTH_LDAP_ORGANIZATION_NAME_ACCOUNT_ATTRIBUTE`
- `USER_AUTH_LDAP_REQUIRED_GROUP`
- `USER_AUTH_LDAP_REQUIRED_GROUP_ACCOUNT_ATTR`
- `USER_AUTH_LDAP_REQUIRED_GROUP_MEMBER_ATTR`
- `USER_AUTH_LDAP_SEARCH_BASE_DN`
- `USER_AUTH_LDAP_SEARCH_FILTER`
- `USER_AUTH_LDAP_SEARCH_SCOPE`
- `USER_AUTH_LDAP_URI`
- `USER_AUTH_SERVICE_USERNAMES`
- `USER_AUTH_TYPE`

### SMTP settings {: #smtp-settings }

- `DEFAULT_SENDER`
- `DEFAULT_SUPPORT`
- `SMTP_CONNECTION_TIMEOUT_SECONDS`
- `SMTP_MODE`
- `SMTP_PASSWORD`
- `SMTP_PORT`
- `SMTP_USER`
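As a rough sketch of how these settings relate to `config.yaml` defaults: the interface shows each key's current value, which may originate from entries like the following. All values below are illustrative placeholders, and the exact layout of `config.yaml` depends on your deployment.

```yaml
# Illustrative placeholders only -- confirm the key layout against
# your deployment's config.yaml before changing anything.
USER_AUTH_TYPE: LDAP
USER_AUTH_LDAP_URI: ldaps://ldap.example.com:636
USER_AUTH_LDAP_SEARCH_BASE_DN: ou=users,dc=example,dc=com
USER_AUTH_LDAP_ATTR_EMAIL_ADDRESS: mail
USER_AUTH_LDAP_ATTR_FIRST_NAME: givenName
USER_AUTH_LDAP_ATTR_LAST_NAME: sn
```

Values overridden through the **System Configuration** interface take effect immediately, while the `config.yaml` values remain the fallback used when an override is toggled off.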
---
title: Manage JDBC drivers
description: How to create predefined JDBC driver configurations, upload, modify, and delete drivers, and restrict access to JDBC data stores.
platform: self-managed-only
---

# Manage JDBC drivers {: #manage-jdbc-drivers }

You manage Java Database Connectivity (JDBC) drivers by:

* Working with JDBC drivers
* Restricting access for Kerberos authentication systems

!!! info "Availability information"

    **Required permission:** Can manage JDBC database drivers

A driver provides a way for users to ingest data into DataRobot from a database via JDBC. The administrator can [upload JDBC driver files](#upload-drivers) (JAR files) for their organization's users to access when creating data connections. As part of driver creation, the administrator can also upload additional JAR files containing library dependencies. Once uploaded, drivers can be modified or deleted only by administrators.

By default, all users have permission to create, modify (depending on their roles), and share data connections and data sources. (See more information about user [data connection roles](roles-permissions).) If needed, you can prevent access to data stores and data sources for a specific user with the ["Disable Database Connectivity"](manage-users#additional-permissions-options) user permission; this prevents that user from creating new JDBC data connections or importing data from any defined JDBC data sources (from the new project page). Additionally, for cluster deployments using Kerberos authentication, you can control access to data stores through [validation and variable substitution](#restrict-access-to-jdbc-data-stores).

## Predefined driver configurations {: #predefined-driver-configurations }

When users create data connections for a selected JDBC driver, they specify how to retrieve the data. This may be a defined JDBC URL or a set of parameters.
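For context, a JDBC URL bundles the host, port, database, and driver-specific options into a single string. The examples below use standard URL formats for common drivers; the hosts, accounts, and database names are placeholders:

```text
jdbc:postgresql://db.example.com:5432/sales
jdbc:sqlserver://db.example.com:1433;databaseName=sales
jdbc:snowflake://myaccount.snowflakecomputing.com/?db=SALES&warehouse=ANALYTICS
```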
Because creating the JDBC URLs for data connections can be complicated, DataRobot provides predefined configurations for some supported drivers that have parameter support. Driver configurations specify the information users need to provide to retrieve data from their data sources. Each predefined configuration includes the typical information needed to create connections using that type of driver. For example, while connections using the Presto driver typically require the catalog and schema, connections using the Snowflake driver often need the database and warehouse.

Predefined configurations are available for the following drivers:

- AWS Athena
- Azure SQL
- Azure Synapse
- Google BigQuery
- Intersystems
- kdb+
- Microsoft SQL Server
- MySQL
- Postgres
- Presto
- Redshift
- SAP HANA
- Snowflake
- Treasure Data: Hive

When you add a new driver, you can select to use a predefined configuration (if one exists for that driver), or you can create a custom driver, which does not include a configuration.

## Upload drivers {: #upload-drivers }

The steps below describe how to create a driver instance.

1. Click the profile icon in the top right corner of the application screen, and select **Data Connections** from the dropdown menu.

2. Select the **JDBC Drivers** link:

    ![](images/jdbc-drivers.png)

3. In the left-panel **JDBC Drivers** list, click **+ Add new driver**.

    ![](images/admin-jdbc-addnewdriver.png)

4. In the displayed dialog, select the type of configuration you are using for this driver:

    - [**Predefined**](#predefined-driver-configurations)&mdash;use a configuration provided by DataRobot
    - **Custom**&mdash;create a configuration when creating data connections

5. If you are adding a driver that has a configuration (**Predefined**), complete the following fields as prompted:

    ![](images/jdbc-driver-predefined-add.png)

    | Field | Description |
    |-------|-------------|
    | Configuration | Select the configuration to use for the driver you are creating. Configurations for some supported drivers with parameter support in DataRobot are provided for selection. If you don't see the configuration you want, create this as a *custom* driver and specify the driver name and driver class. |
    | Class name (Predefined) | Shows the driver class name defined in this configuration. |
    | Version | Enter the version (user-defined) for the driver. This value is required. The combination of driver name and version is used to identify the driver configuration for users. |
    | Driver Files * | Click **+ UPLOAD JAR** to add the driver JAR file. Follow the same process for each additional library dependency. When uploaded successfully, the JAR filename(s) appear in this field. |

6. If you are adding a driver that does not include a configuration (**Custom**), complete the following fields as prompted:

    ![](images/jdbc-driver-custom-add.png)

    | Field | Description |
    |-------|-------------|
    | Driver name | Enter a display name for the driver. |
    | Class name | Enter the driver class name. If unknown, refer to the driver documentation. |
    | Driver Files * | Click **+ UPLOAD JAR** to add the driver JAR file. Follow the same process for each additional library dependency. When uploaded successfully, the JAR filename(s) appear in this field. |

    \* The JAR driver and dependency file size limit is 100MB. That is, each file upload can be a maximum of 100MB, but totals may exceed that limit.

    !!! note

        DataRobot does not validate uploaded drivers (other than simple extension checking).

7. Click **Create driver** to add the driver. The new driver is shown in the left-panel **JDBC Drivers** list, and the driver configuration is now available to all users in your organization. Drivers created with predefined configurations are named in the format *driver name (version)*.

    ![](images/admin-jdbc-predefineddriver.png)

### Modify drivers {: #modify-drivers }

If you are modifying drivers that use *predefined* configurations, you can change only the version and JAR files for the driver configuration. (The driver name is created automatically as a combination of the selected configuration and version.) Also, note that removing the predefined configuration for a driver makes it a driver with a custom configuration (i.e., connections using this driver will require a JDBC URL).

If you are modifying drivers that use *custom* configurations, you can change the driver name, configuration, class name, or JAR file(s). Adding a configuration to a driver makes it a driver with a predefined configuration. If you do select to add a configuration for the driver, DataRobot automatically verifies that any JDBC URLs for existing data connections are not affected. If that verification is not necessary, you can select to skip the URL validation.

!!! note

    DataRobot recommends that you notify your users about driver configuration modifications that will affect JDBC URLs for existing data connections, so they can recreate them, if needed.

1. Select the driver from the left-panel **JDBC Drivers** list. The information for the driver configuration is added to the main window.

2. If this driver has a predefined configuration:

    * You can modify the version.
    * You can remove the configuration. This changes it to a *custom* driver, with no configuration.

3. If this driver has a custom configuration:

    * You can modify the driver name or class name.
    * You can add a (predefined) configuration to the driver (**+ ADD CONFIGURATION**). This changes it to a driver with a *predefined* configuration. If there are existing connections to the driver and the new configuration will affect the JDBC URLs (and you still want to make this change), select **Skip data connections verification**. This ensures DataRobot does not validate JDBC URLs for existing data connections. As shown in the image below, if existing JDBC URLs are affected by the new configuration and **Skip data connections verification** is *not* selected, the configuration cannot be added to the driver.

    ![](images/admin-jdbc-addconfig-skip.png)

4. For either type of driver, you can click **UPDATE JAR FILE** to replace the JAR file(s).

5. Click **Save changes** to save modifications to this driver. You see the updated driver listing in the left-panel **JDBC Drivers** list. Any existing data connections for this driver are updated with these changes automatically.

### Delete drivers {: #delete-drivers }

You can delete any driver that is not being used by existing data connections. If it is being used, you must first delete the dependencies.

1. Select the driver in the left-panel **JDBC Drivers** list.

2. In the upper right, click **DELETE DRIVER**. If there are no existing data connections for the driver, DataRobot prompts for confirmation.

3. Click **Delete** to remove the driver.

If there are existing data connections using the driver, DataRobot returns an error:

![](images/jdbc-driver-delete-2.png)

If you see this message, those dependencies first need to be removed (in the **Data Connections** tab). Then, you can try again to delete the driver.

## Restrict access to JDBC data stores {: #restrict-access-to-jdbc-data-stores }

!!! info "Availability information"

    **Required permissions:** Can manage users, Can manage JDBC database drivers

When using Kerberos authentication for JDBC, you can control access to data stores through validation and variable substitution.

### Control user access {: #control-user-access }

You can restrict the ability to create and modify JDBC data stores that utilize impersonation to only those users with the [admin setting](user-settings) "Can create impersonated Data Store." Within the cluster configuration (`config.yaml`), you can define impersonation keywords that are used by any installed drivers that support impersonation. These keywords could be used to define operations considered "dangerous" and therefore permitted to only select users. By default, no impersonation keywords are defined.

When a user attempts to create or modify a JDBC data store and there are impersonation keywords defined for the installation, DataRobot determines whether the URI includes any of the keywords. If the URI includes one or more of the keywords and the "Can create impersonated Data Store" admin setting is enabled for the user, they are allowed to create or modify it. If keywords are included in the URI but the user does not have the "Can create impersonated Data Store" setting enabled, then DataRobot does not allow the request to create or modify it.

### Variable substitutions {: #variable-substitutions }

The variable substitution syntax, `${datarobot_read_username}`, provides another way to control access to drivers. If that variable is included in the URI when trying to ingest from the data source/data store, DataRobot replaces it with the impersonated account associated with the logged-in user.
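For example, a data connection URI might embed the variable where the driver expects the impersonated user. The host, port, and impersonation parameter below are illustrative (here, Hive's proxy-user parameter); the actual parameter name depends on your driver:

```text
jdbc:hive2://gateway.example.com:10000/default;hive.server2.proxy.user=${datarobot_read_username}
```

At ingest time, DataRobot substitutes the variable with the impersonated account for the logged-in user before the connection is made.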
---
title: Notification channels and policies
description: Admins with permission can create, edit, and delete notification channels, view logs, and create, edit, enable, disable, or delete notification policies.
---

# Notification channels and policies {: #notification-channels-and-policies }

Although notifications are visible to all users, their configuration is limited to system and organization admins with permission to manage policies.

* System-level admins are exclusive to Self-Managed AI Platform users.
* Organization admins are exclusive to managed AI Platform users.

This page describes how to configure various webhook, email, and Slack elements. For notification channels, you can:

* Create a notification channel.
* Edit a notification channel.
* View notification logs.
* Delete a notification channel.

For notification policies, you can:

* Create a notification policy.
* Edit a notification policy.
* Enable or disable a notification policy.
* Delete a notification policy.

Before proceeding, review the [considerations](webhooks/index#considerations).

## Create a channel {: #create-a-channel }

Notification channels, created by admins, are the mechanisms for delivering notifications. DataRobot supports email, webhook, and Slack notifications. You may want to set up several channels for each type of notification; for example, a webhook with a URL for deployment-related events, and a webhook for all project-related events.

To create a notification channel:

1. Click your user icon and navigate to the **Notification Channels** dashboard. This page is also accessible from the app administrator page.

    ![](images/webhooks-1.png)

2. Select **Add channel**.

Before proceeding, select the channel type: [webhook](#add-a-webhook-channel), [email](#add-an-email-channel), or [Slack](#add-a-slack-channel). Then, fill out the corresponding fields.

### Add a webhook channel {: #add-a-webhook-channel }

A dialog box prompts you to provide information about the new webhook channel.
![](images/webhooks-2.png)

| Field | Description |
|-------|-------------|
| Channel name | The name of the notification channel being added. |
| Payload URL | The URL of the server that will receive the webhook POST requests. For example: `http://localhost:3527/payload` |
| Content type | Method for serializing the payload. Webhooks can be sent as different content types: **json** delivers the JSON payload directly as the body of the POST request; **form** sends the JSON payload as a form parameter called `payload`. |
| Secret token | A hashed secret token used to secure the connection of the webhook. It ensures that POST requests sent to the payload URL are from DataRobot. |
| Enable SSL verification | Toggle on for DataRobot to validate SSL certificates against a certificate authority. Toggle off to allow unvalidated (or "self-signed") certificates. |

These settings can also be changed after the channel is saved.

Select **Show advanced options** to add a custom header. Custom headers allow you to describe the event type that triggered the webhook delivery, provide a way to identify the delivery, and more. Select **Add header** and provide a name and value for the header you want to add.

![](images/webhooks-3.png)

For example, you can use a custom header to flag that a specific dataset with loan information was shared. The header would be formatted as: `loan-dataset-shared: 1`

When your fields are completed, click the **Test connection** link. DataRobot tests the webhook connection and allows you to view the results before saving the completed fields. For the connection test, the body of the POST serves as the event template merged with placeholder values; you cannot modify this content. The webhook is not saved until it is successfully called in the test.

When you have completed the required fields and passed the connection test, click **Add channel**. Your channel is available for use in the **Notification Channels** dashboard.
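On the receiving side, the payload URL simply needs to accept POST requests and, optionally, check the secret token. The sketch below uses only the Python standard library and assumes an HMAC-SHA256 signature delivered in a request header; the header name (`X-Signature`) and digest choice are placeholder assumptions, not DataRobot's documented scheme, so confirm the actual header against your channel's delivery logs.

```python
import hashlib
import hmac
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical shared secret -- use the "Secret token" configured on the channel.
SECRET_TOKEN = b"example-secret"

def verify_signature(body: bytes, received_sig: str, secret: bytes = SECRET_TOKEN) -> bool:
    """Compare an HMAC-SHA256 hex digest of the body against the received signature."""
    expected = hmac.new(secret, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, received_sig)

class WebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        # "X-Signature" is a placeholder header name -- check your channel's
        # delivery logs for the header that is actually sent.
        sig = self.headers.get("X-Signature", "")
        if not verify_signature(body, sig):
            self.send_response(403)
            self.end_headers()
            return
        event = json.loads(body)  # "json" content type: the payload is the body
        print("received:", event.get("event_type"))
        self.send_response(200)
        self.end_headers()

# To listen on the example payload URL from the table above (blocks the process):
# HTTPServer(("localhost", 3527), WebhookHandler).serve_forever()
```

Returning a 2xx status tells the sender the delivery succeeded; failed deliveries show up in the channel's notification logs, where they can be redelivered.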
### Add an email channel {: #add-an-email-channel }

Select the **Email** tab; a dialog box prompts you to provide information about the new email channel.

![](images/webhooks-14.png)

| Field | Description |
|-------|-------------|
| Channel name | The name of the notification channel being added. |
| Email address | The email address that receives notifications. Only one email address can be entered into this field. If you need to send notifications to multiple emails, you must set up a separate notification channel for each address. |

You must verify the recipient email address. Click **Send verification code to email**.

![](images/webhooks-16.png)

After receiving the email with the verification code, enter it in the corresponding field and click **Verify**.

![](images/webhooks-15.png)

When you have completed the required fields and verified the email address, click **Add channel**. Your channel is available for use in the **Notification Channels** dashboard.

### Add a Slack channel {: #add-a-slack-channel }

Select the **Slack** tab. A dialog box prompts you to provide information about the new Slack notification channel.

![](images/slack-1.png)

| Field | Description |
|-------|-------------|
| Channel name | The name of the notification channel being added. |
| Slack Incoming Webhook URL | A URL generated by Slack, found on Slack's workspace settings page, that DataRobot uses to send notifications to a specific workspace. In Slack's workspace settings, indicate a Slack app and a Slack channel where DataRobot will send notifications. Reference the <a target="_blank" href="https://api.slack.com/messaging/webhooks">Slack documentation</a> for more information. |

When your fields are completed, click the **Test connection** link. DataRobot tests the Slack workspace connection and allows you to view the results before saving the completed fields.
If configured successfully, DataRobot delivers a Slack notification (`connection test`) to the channel you configured on the Slack workspace settings page. The channel does not save until the connection test passes. If the test fails, verify that the webhook URL is correct and check the Slack workspace settings.

When the connection test notification is successful, click **Add channel**. Your channel is available for use in the **Notification Channels** dashboard.

### Edit a channel {: #edit-a-channel }

You can edit the fields for an existing channel by selecting it and navigating to the **Configuration** tab. Make the desired changes and either retest your connection (for webhook notifications) or verify the new email address (for email notifications). When your test passes, click **Save Changes**.

![](images/webhooks-4.png)

### View notification logs {: #view-notification-logs }

Notification logs allow you to view the status of any system using a specific webhook, the number of times the system transmitted an event, and whether the transmission was successful. Notification logs are essential for debugging webhooks. Note that notification logs are not available for email notification channels.

The notification logs list the 25 most recent [trigger](#choose-a-trigger) events with their send date and time. Logs also include a copy of the HTTP request, the response code, and the time to delivery.

You can view notification logs for a channel by selecting it and navigating to the **Logs** tab. This tab lists every policy using the channel (1), the last 25 notification deliveries (2), and the header and payload for requests (3) and responses (4). If you have a failed delivery, you can choose to redeliver that notification (5).

![](images/webhooks-10.png)

### Delete a channel {: #delete-a-channel }

If you no longer wish to use a notification channel, you can remove it from the dashboard.
Locate the channel you want to remove and select the trash can icon ![](images/icon-delete.png) to delete it.

## Create a notification policy {: #create-a-notification-policy }

A notification policy is a group of one or more alert conditions. A policy has two settings that apply to all of its conditions&mdash;incident preference and notification channels. The order of activity is as follows:

1. Create a notification channel.
2. Create a notification policy.
3. Add policy conditions.

Policies can be organized by:

* **Architecture**: Organization-based policy structure. For example: Production website, Staging website, Production databases.
* **Teams**: Team-based policy structure. For example: Team: Data Scientists, Team: IT Ops, Team: Model Validators.
* **Individuals**: Notification policies set up for specific individuals. This configuration is useful when users want to personally track a particular resource or metric.

To create a notification policy:

1. Click your user icon and navigate to the **Notification Channels** dashboard. This page is also accessible from the app administrator page.

    ![](images/webhooks-5.png)

2. Select **Add policy**. A dialog box prompts you to provide information to configure the new policy.

#### Choose a trigger {: #choose-a-trigger }

Choose a single event or a group of events that will trigger a notification.

* The **Event group** tab displays groups of events organized by type. Select a group if you want the notification policy to be triggered by any of the events that belong to it. For example, the event group "Dataset-related events" consists of dataset creation, dataset sharing, and dataset deletion events.

    ![](images/webhooks-6.png)

* The **Single event** tab displays every individual event available as a trigger for the notification policy. You can only select one.

    ![](images/webhooks-7.png)

Once you have selected the event group or single event you wish to use to configure the notification policy, click **Next**.
#### Choose a channel {: #choose-a-channel }

Select the [notification channel](#add-a-webhook-channel) for the policy to use from the dropdown. All notification channels that have been created or shared with you are available for selection (1). You can use the search bar (2) to find specific channels.

![](images/webhooks-8.png)

When you have selected the channel you want to use, click **Next**.

#### Name and review a policy {: #name-and-review-a-policy }

Review the selected trigger and channel for the policy you are creating. If you want to go back and edit either of the selections, click the edit icon (![](images/icon-rename.png)) next to each selection. Then provide a name for your new policy.

![](images/webhooks-9.png)

When you are satisfied with your policy configuration, click **Create policy**. Your policy is available for use in the **Notification Channels** dashboard.

### Edit a policy {: #edit-a-policy }

You can edit an existing policy to reconfigure the trigger and channel it uses, or to rename it. Locate the policy in the **Notification Policies** dashboard and select the edit icon (![](images/icon-rename.png)). You can make any desired edits from the dialog box that appears.

### Enable or disable a policy {: #enable-or-disable-a-policy }

Policies can be enabled or disabled at any time, allowing you to prevent notification fatigue or bring attention to additional events. Locate the policy in the **Notification Policies** dashboard. Select the play icon to enable a policy and the pause icon to disable it.

### Delete a policy {: #delete-a-policy }

If you no longer wish to use a policy, you can remove it from the dashboard. Locate the policy you want to remove and select the trash can icon ![](images/icon-delete.png) to delete it.
--- title: Notification service description: How to use the Notification service to integrate flexible, centralized notifications into your organization's processes as webhooks, emails, and Slack messages. --- # Notification service {: #notification-service } The Notification service allows you to integrate flexible and centralized notifications into your organization's processes around change and incident management. [Notification channels](web-notify), available as webhooks, emails, and Slack messages, allow users in an organization to subscribe to certain DataRobot events. When a notification event is triggered, an HTTP POST payload is sent to the webhook's configured URL, or an email is sent. Various DataRobot events, such as sharing projects, deployment activity, or Autopilot completing, generate notifications. When you configure a notification channel, you can choose which events you want to receive notifications for. Each event relates to a unique action within DataRobot. Choose to opt in to all events for a configuration, or subscribe to specific events. !!! note The [notification center](getting-help#notification-center), available to users from the management tools in the upper right, is based on this notification service but is determined by a default system policy and is not configurable. These sections describe: | Topic | Description | |-------|-------------| | [Notification channels and policies](web-notify) | Create notification channels and policies. | | [Webhook event payloads](web-events) | Read about event payload configurations available for DataRobot webhooks. | | [Mute deployment notifications](web-mute) | Stop receiving notifications for deployment-specific events tied to a configured policy.| | [Maintenance notifications](maintenance-notes) | Configure notifications that communicate service interruption. 

## Considerations {: #considerations }

Consider the following when using the Notification service:

- Webhook channels do not support customizing the webhook payload, so you cannot send notifications with specifically formatted messages.
- Notification channels do not support the Adaptive Card format for webhook messages, which means Microsoft Teams integration isn't currently possible.
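When a subscribed event fires, the notification arrives at the channel's configured URL as an HTTP POST with a JSON body. As a minimal sketch using only the Python standard library (the port, handler name, and `serve` helper are illustrative, not part of DataRobot), a receiving service might look like:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class NotificationHandler(BaseHTTPRequestHandler):
    """Accept the HTTP POST payload that a webhook channel sends."""

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        # Every payload names the DataRobot event that triggered it.
        print("received event:", payload.get("event_type"))
        self.send_response(200)
        self.end_headers()

    def log_message(self, *args):
        pass  # keep the example quiet

def serve(port=8080):
    # The webhook channel's configured URL must resolve to this server.
    HTTPServer(("", port), NotificationHandler).serve_forever()
```

Register the server's public URL as the channel URL; `serve()` then blocks and handles each notification as it arrives.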
index
---
title: Webhook event payloads
description: Admins can configure notification channels to subscribe to some or all DataRobot event notifications, delivered by webhooks. Includes code samples.
---

# Webhook event payloads {: #webhook-event-payloads }

Events generate notifications delivered by webhooks. When you configure a [notification channel](web-notify#create-a-channel), you can choose which events you want to receive notifications for. Each event relates to a unique action within DataRobot. Choose to opt in to all events for a configuration, or subscribe to specific events that are useful for you.

This page details the event payload configurations available for DataRobot webhooks. Each event category includes an example. Before proceeding, review the [considerations](webhooks/index#considerations).

### Project events {: #project-events }

There are four available project event types:

| Action | Payload format |
|-----------------------|----------------------|
| Project created | `project.created` |
| Project deleted | `project.deleted` |
| Project shared | `project.shared` |
| Autopilot completed | `autopilot.complete` |

#### Example: Project deleted event {: #example-project-deleted-event }

```json
{
  "event": {
    "deleted_by": "123a456b7c8e9f",
    "deletion_time": 1581504952,
    "entity_id": "123a456b7c8e9f",
    "uid": "<User_ID>"
  },
  "event_type": "project.deleted",
  "project": {
    "active": 1,
    "default_dataset_id": "123a456b7c8e9f",
    "original_name": "https://s3.amazonaws.com/datarobot_public_datasets/DR_Demo_Store_Sales_Forecast_Train.xlsx",
    "project_id": "<project_ID>",
    "project_name": "DR_Demo_Store_Sales_Forecast_Train.xlsx",
    "stage": "modeling:"
  },
  "timestamp": 1581504953
}
```

#### Example: Autopilot complete event {: #example-autocomplete-finished-event }

```json
{
  "event": {
    "dataset_id": "123a456b7c8e9f",
    "entity_id": "123a456b7c8e9f",
    "uid": "<User_ID>"
  },
  "event_type": "autopilot.complete",
  "project": {
    "active": 1,
    "default_dataset_id": "123a456b7c8e9f",
    "original_name": "advanced_options.csv",
    "project_id": "<project_ID>",
    "project_name": "test-tvh-no-holdout-f2c6607d-544d-4e94-a488-c282b6aaa192",
    "stage": "modeling:"
  },
  "timestamp": 1581507975
}
```

### Mongo fields: project events {: #mongo-fields-project-events }

The following table details all possible fields that can be included in project event payloads.

| Field in Mongo | Required | Description |
|----------------------|----------|---------------------------------------------------------------------------------|
| uid | | N/A |
| created | | N/A |
| active | ✔ | Indicates whether the project is active. |
| default\_dataset\_id | ✔ | Indicates the origin of the dataset in the **AI Catalog**. |
| holdout\_unlocked | | N/A |
| originalName | ✔ | Contains the name of the file when it was uploaded to DataRobot. |
| project\_name | ✔ | Identifies the project name. |
| stage | ✔ | Indicates the stage the project was in when the action was taken. |
| is\_deleted | | N/A |
| deletion\_time | ✔ | Indicates the deletion time (useful for troubleshooting delayed notifications). |
| deleted\_by | ✔ | Indicates the user who deleted the project. |

### Dataset events {: #dataset-events }

There are three available dataset event types:

| Action | Payload format |
|-------------------|-------------------|
| Dataset created | `dataset.created` |
| Dataset deleted | `dataset.deleted` |
| Dataset shared | `dataset.shared` |

#### Example: Dataset shared event {: #example-dataset-shared-event }

```json
{
  "dataset": {
    "catalog_type": "non_materialized_dataset",
    "dataset_id": "123a456b7c8e9f",
    "latest_catalog_version_id": "123a456b7c8e9f",
    "original_name": "amazon_de_reviews_small_80.csv",
    "version": 1
  },
  "event": {
    "entity_id": "123a456b7c8e9f",
    "shared_uids": [
      "<Shared_user_ID>",
      "<Shared_user_ID>",
      "<Shared_user_ID>"
    ],
    "uid": "<User_ID>"
  },
  "event_type": "dataset.shared",
  "timestamp": 1581508736
}
```

### Mongo fields: dataset events {: #mongo-fields-dataset-events }

The following table details all possible fields that can be included in dataset event payloads.

| Field in Mongo | Required | Description |
|------------------------------|----------|---------------------------------------------------------------------------------|
| uid | | N/A |
| created | | N/A |
| latest\_catalog\_version\_id | ✔ | Indicates the latest catalog version ID of the dataset. |
| originalName | ✔ | Contains the name of the file when it was uploaded to DataRobot. |
| last\_modified | | N/A |
| last\_modified\_uid | | N/A |
| catalog\_type | ✔ | Determines the project type based on **AI Catalog** information. |
| version | ✔ | Indicates the version of the dataset used. |
| is\_deleted | | N/A |
| deletion\_time | ✔ | Indicates the deletion time (useful for troubleshooting delayed notifications). |
| deleted\_by | ✔ | Indicates the user who deleted the dataset. |

### Model deployment events {: #model-deployment-events }

There are ten available deployment event types:

| Action | Payload format |
|------------------------------------------------------|---------------------------------------------------------|
| Model Deployment Shared | `model_deployments.deployment_sharing` |
| Model Deployment Replaced | `model_deployments.model_replacement` |
| Model Deployment Created | `model_deployments.deployment_creation` |
| Model Deployment Deleted | `model_deployments.deployment_deletion` |
| Deployment Service Health Change: Green to Yellow | `model_deployments.service_health_yellow_from_green` |
| Deployment Service Health Change: Red | `model_deployments.service_health_red` |
| Deployment Data Drift Change: Green to Yellow | `model_deployments.data_drift_yellow_from_green` |
| Deployment Data Drift Change: Red | `model_deployments.data_drift_red` |
| Deployment Accuracy Health Change: Green to Yellow | `model_deployments.accuracy_yellow_from_green` |
| Deployment Accuracy Health Change: Red | `model_deployments.accuracy_red` |

#### Example: Deployment creation event {: #example-deployment-creation-event }

```json
{
  "event": {
    "entity_id": "123a456b7c8e9f",
    "model_id": "123a456b7c8e9f",
    "performer_uid": "<Performer_ID>",
    "status": "active"
  },
  "event_type": "model_deployments.deployment_creation",
  "deployment": {
    "deployment_id": "123a456b7c8e9f",
    "model_id": "123a456b7c8e9f",
    "model_package_id": "123a456b7c8e9f",
    "project_id": "<project_ID>",
    "status": "active",
    "type": "dedicated",
    "user_id": "<User_ID>"
  },
  "timestamp": 1581505115
}
```

### Mongo fields: deployment events {: #mongo-fields-deployment-events }

The following table details all possible fields that can be included in deployment event payloads.
| Field in Mongo | Required | Description |
|------------------|----------|-------------|
| created\_at | | N/A |
| deployed | | N/A |
| description | | N/A |
| export\_target | | N/A |
| instance\_id | | N/A |
| label | | N/A |
| model\_id | ✔ | N/A |
| organization\_id | | N/A |
| project\_id | ✔ | N/A |
| service\_id | | N/A |
| updated\_at | | N/A |
| user\_id | ✔ | N/A |
| deleted | | N/A |
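A consuming service typically branches on the `event_type` field of the payload. The helper below is an illustrative sketch (the function and its routing rules are hypothetical, not part of the DataRobot API), built around the payload shapes shown on this page:

```python
import json

def summarize_event(raw_body: str) -> str:
    """Route a webhook payload by its event_type field (illustrative only)."""
    payload = json.loads(raw_body)
    event_type = payload["event_type"]
    if event_type.startswith("model_deployments."):
        return f"{event_type} for deployment {payload['deployment']['deployment_id']}"
    if event_type.startswith("dataset."):
        return f"{event_type} for dataset {payload['dataset']['original_name']}"
    # project.* and autopilot.complete payloads both carry a "project" object.
    return f"{event_type} for project {payload['project']['project_name']}"

body = json.dumps({"event_type": "autopilot.complete",
                   "project": {"project_name": "demo"},
                   "timestamp": 1581507975})
print(summarize_event(body))  # autopilot.complete for project demo
```

Because every payload carries `event_type` at the top level, a single dispatch point like this can fan events out to whatever downstream handling your organization needs.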
web-events

---
title: Mute deployment notifications
description: Learn how to mute notifications for a specific deployment, to stop receiving notifications for events tied to a configured notification policy.
---

# Mute deployment notifications {: #mute-deployment-notifications }

You can mute notifications for a specific deployment, allowing you to stop receiving notifications for the deployment-specific events tied to a configured [notification policy](web-notify#create-a-notification-policy). Deployments that frequently change health status or encounter issues with data drift or anomalous predictions can generate a large number of notifications.

To mute deployment notifications, navigate to **Deployments**, select a deployment, and go to the **Settings > Notifications** tab.

![](images/webhooks-11.png)

This tab lists all of the notification policies applied to the deployment.

![](images/webhooks-12.png)

Identify the notification policy you wish to mute and turn on the **Mute for channel** toggle. Once the toggle is on, you no longer receive notifications applied by the selected policy.

![](images/webhooks-13.png)
web-mute

---
title: Maintenance notifications
description: Learn how you as an administrator can configure notifications that communicate service interruptions to users.
---

# Maintenance notifications {: #maintenance-notifications }

Administrators can configure notifications that communicate service interruptions to users. Admins create a banner to notify users when the system is planning maintenance or is currently impacted by an incident. The banner communicates the incident start time, scheduled end time (optional), and a link for more details (also optional). Users see the banner if they are logged in during the incident or during the configured notification window.

## Create a new notification {: #create-a-new-notification }

To create a maintenance notification:

Click on your user icon and navigate to the **Maintenance Notifications** dashboard. This page is also accessible from the app administrator page.

![](images/maintenance-1.png)

Select **Add notification**. A dialog box prompts you to provide information about the new notification.

![](images/maintenance-2.png)

| Field | Description |
|------|------------------|
| Event type | The type of event to notify users about. Select "Incident" or "Maintenance" from the dropdown. |
| Event start | The time when the event starts. Use the calendar modal to indicate the date and time. |
| Event end (optional) | The time when the event ends. Use the calendar modal to indicate the date and time. |
| Display notifications on | The notification window during which users see the banner. |
| "Read more" link (optional) | The URL users can select to learn more about the maintenance event. Appears as a clickable "Read more" link at the end of the notification. |

When you have fully configured the fields, click **Add**. These settings can also be reconfigured after the notification is saved. Your notification is available in the **Maintenance Notifications** dashboard.

### Edit a notification {: #edit-a-notification }

After you have created maintenance notifications, you can edit them from the dashboard. Select a notification to expand it and edit the fields.

![](images/maintenance-4.png)

When you have finished editing, click **Save**. If you wish to abandon the changes, click **Discard changes**. Select **Preview** to view the notification banner that will display at the configured time.

![](images/maintenance-3.png)

### Notification actions {: #notification-actions }

Notifications have two actions available: preview and deletion.

* Select the eye icon (![](images/icon-eye.png)) to preview the notification banner.
* Select the trash icon (![](images/icon-trash.png)) to permanently delete a notification.
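The display window described above (a start time plus an optional end time) can be captured in a small predicate. This is an illustrative sketch, not DataRobot's implementation; the function name is hypothetical:

```python
from datetime import datetime
from typing import Optional

def banner_active(now: datetime, start: datetime,
                  end: Optional[datetime] = None) -> bool:
    """Show the banner once the event starts and, if an end time was
    configured, hide it after that time passes (illustrative only)."""
    if now < start:
        return False
    return end is None or now <= end
```

With no end time configured, this sketch keeps the banner up indefinitely; the actual product behavior is governed by the notification settings above.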
maintenance-notes

---
title: Role-based access control
description: Admins can control access to the DataRobot application by assigning users roles with designated privileges.
---

# Role-based access control {: #role-based-access-control }

<a target="_blank" href="https://en.wikipedia.org/wiki/Role-based_access_control">Role-based access control</a> (RBAC) controls access to the DataRobot application by assigning users roles with designated privileges. Role-based permissions and role-role relationships make it simple to assign the appropriate permissions for the specific ways in which users intend to use the application.

You can assign a role to specific users in [User Permissions](manage-users#rbac-for-users), or to all members in a group in [Group Permissions](manage-groups#rbac-for-groups). The assigned role controls both what the user sees when using the application and which objects they have access to. RBAC is additive, so a user's permissions are the sum of all permissions set at the user and group level.

The following roles can be assigned:

* Data Scientist
* Viewer
* MLOps Admin
* Apps Consumer
* Apps Admin
* Project Admin
* Prediction-only
* Data Consumer
* Data Admin

The following objects also use the RBAC framework in the DataRobot application:

* Projects
* Deployments
* Database Connectivity
* Datasets
* Dataset metadata
* Custom Models and Environments
* Execution Environments
* AI Applications
* Model Packages

The sections below describe the permissions applied for each role provided with role-based access control.

### Tiers of access {: #tiers-of-access }

Each role is granted a different degree of access for the various object types available within the application:

* **Read** access to an object type allows the user to access that area of the application for viewing, but they cannot create these objects.
* **Write** access to an object type allows the user to create objects in that area of the application.
There are no restrictions applied with write access aside from administrative permissions. * **Admin** access to an object type grants a user access to all objects of a given type that belong to the user's organization. For example, if a user has admin access to projects, they can view every project created within their organization and make edits to them. * **No Access** disables a user's access to an object type. This is indicated by the red "X" label displayed for a given permission. They will be unable to access that part of the application, create that type of object, or gain access to any of the objects of that type. ### Data Scientist {: #data-scientist } Access: Can build or add models in the platform, both using AutoML and creating custom or remote models. Notes: Cannot perform any actions that will break production systems. This type of user can also build AI applications. | Object | Admin | Read | Write | | ---------- | ----------- | ----| -----| | Application | | ✔ | ✔ | | Custom Environment | | ✔ | | | Custom Model | | ✔ | ✔ | | Dataset Data | | ✔ | ✔ | | Dataset Info | | ✔ | ✔ | | Deployment | | ✔ | | | Model Package | | ✔ | ✔ | | Prediction Environment | | ✔ | | | Project | | ✔ | ✔ | ### Viewer {: #viewer } Access: Can view any object across the system that they have access to, but cannot perform any actions beyond viewing datasets. | Object | Admin | Read | Write | | ---------- | ----------- | ----| -----| | Application | | ✔ | | | Custom Environment | | ✔ | | | Custom Model | | ✔ | | | Dataset Data | | ✔ | | | Dataset Info | | ✔ | | | Deployment | | ✔ | | | Model Package | | ✔ | | | Prediction Environment | | ✔ | | | Project | | ✔ | | ### MLOps Admin {: #mlops-admin } Access: Can access every MLOps object on the system—deployments, model packages, custom models, and custom environments. Useful for: Debugging and reporting usage and activity for any MLOps object created in their organization. 
| Object | Admin | Read | Write | | ---------- | ----------- | ----| -----| | Application | | ✔ | ✔ | | Custom Environment | ✔ | ✔ | ✔ | | Custom Model | ✔ | ✔ | ✔ | | Dataset Data | | ✔ | ✔ | | Dataset Info | | ✔ | ✔ | | Deployment | ✔ | ✔ | ✔ | | Model Package | ✔ | ✔ | ✔ | | Prediction Environment | ✔ | ✔ | ✔ | | Project | | ✔ | ✔ | ### Apps Consumer {: #apps-consumer } Access: Can consume the DataRobot AI-powered applications that are shared with them to help make business decisions. | Object | Admin | Read | Write | | ---------- | ----------- | ----| -----| | Application | | ✔ | | | Custom Environment | | | | | Custom Model | | | | | Dataset Data | | ✔ | | | Dataset Info | | ✔ | | | Deployment | | | | | Model Package | | | | | Prediction Environment | | | | | Project | | | | ### Apps Admin {: #apps-admin } Access: Can access every AI Application created across the system with admin permissions. Useful for: Debugging and reporting on usage and activity for any AI Application created in their organization. | Object | Admin | Read | Write | | ---------- | ----------- | ----| -----| | Application | ✔ | ✔ | ✔ | | Custom Environment | | | | | Custom Model | | | | | Dataset Data | | ✔ | ✔ | | Dataset Info | | ✔ | ✔ | | Deployment | | ✔ | ✔ | | Model Package | | ✔ | ✔ | | Prediction Environment | | | | | Project | | ✔ | ✔ | ### Project Admin {: #project-admin } Access: Can access every modeling project created across the system. Useful for: Debugging and reporting on usage and activity for any modeling project created in their organization. | Object | Admin | Read | Write | | ---------- | ----------- | ----| -----| | Application | | ✔ | ✔ | | Custom Environment | | | | | Custom Model | | | | | Dataset Data | | ✔ | ✔ | | Dataset Info | | ✔ | ✔ | | Deployment | | | | | Model Package | | | | | Prediction Environment | | | | | Project | ✔ | ✔ | ✔ | ### Prediction-only {: #prediction-only } Access: Can make predictions on a specified deployment and no other. 
| Object | Admin | Read | Write | | ---------- | ----------- | ----| -----| | Application | | | | | Custom Environment | | | | | Custom Model | | | | | Dataset Data | | ✔ | | | Dataset Info | | ✔ | | | Deployment | | ✔ | | | Model Package | | | | | Prediction Environment | | ✔ | | | Project | | | | ### Data Consumer {: #data-consumer } Access: Can consume the datasets created across the system. Notes: To restrict users from being able to upload local files to a project directly, combine this role with the "Enable AI Catalog as File Source Limitation" feature flag. | Object | Admin | Read | Write | | ---------- | ----------- | ----| -----| | Application | | ✔ | ✔ | | Custom Environment | | ✔ | | | Custom Model | | ✔ | ✔ | | Dataset Data | | ✔ | | | Dataset Info | | ✔ | | | Deployment | | ✔ | ✔ | | Model Package | | ✔ | ✔ | | Prediction Environment | | ✔ | | | Project | | ✔ | ✔ | ### Data Admin {: #data-admin } Access: Can access every dataset created across the system with admin permissions, including all metadata associated with each dataset. Useful for: Debugging and reporting on usage and activity for any data asset pulled into the **AI Catalog**. | Object | Admin | Read | Write | | ---------- | ----------- | ----| -----| | Application | | | | | Custom Environment | | | | | Custom Model | | | | | Dataset Data | ✔ | ✔ | ✔ | | Dataset Info | ✔ | ✔ | ✔ | | Deployment | | | | | Model Package | | | | | Prediction Environment | | | | | Project | | | |
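Because RBAC is additive, a user's effective access can be modeled as the strongest level that any of their user- or group-level roles grants per object type. The sketch below is illustrative only; the data structures, names, and level ordering are hypothetical, not a DataRobot API:

```python
# Access levels in increasing order of privilege, per the tiers above.
LEVELS = ["none", "read", "write", "admin"]

def effective_access(*role_grants):
    """Combine roles additively: keep the strongest level per object type."""
    combined = {}
    for grants in role_grants:
        for obj, level in grants.items():
            current = combined.get(obj, "none")
            if LEVELS.index(level) > LEVELS.index(current):
                combined[obj] = level
    return combined

viewer = {"Project": "read", "Deployment": "read"}
mlops_admin = {"Deployment": "admin", "Project": "write"}
print(effective_access(viewer, mlops_admin))
# {'Project': 'write', 'Deployment': 'admin'}
```

A user holding both roles above ends up with write access to projects and admin access to deployments, matching the "sum of all permissions" behavior described earlier.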
rbac-ref
--- title: Reference --- # Reference {: #reference } The information in this section provides reference information for managing your DataRobot account. Topic | Describes... ----- | ------------ [Role-based access control (RBAC)](rbac-ref) | View descriptions of each role in RBAC. [Custom RBAC roles](custom-roles) | System and organization administrators can create roles and define access at a more granular level, and assign them to users and groups. [User Activity Monitor reference](uam-ref) | View descriptions of the fields included in each activity report.
index

---
title: Custom RBAC roles
description: Custom role-based access control (RBAC) roles are a solution for organizations with use cases that are not addressed by default roles in DataRobot.
section_name: Administrator
platform: cloud-only
---

# Custom RBAC roles {: #custom-rbac-roles }

Custom role-based access control (RBAC) roles are a solution for organizations with use cases that are not addressed by [default roles](rbac-ref) in DataRobot. System and organization administrators can create roles, define access at a more granular level, and assign the roles to users and groups.

You can access custom RBAC roles from **User Settings** > **User Roles**. The **User Roles** page lists each available role an admin can assign to a user in their organization, including DataRobot default roles. Before creating a custom role, review the [considerations](#feature-considerations).

![](images/custom-rbac-1.png)

## Create custom roles {: #create-custom-roles }

To create a new role, click **+ Add a user role**. Enter a name for the role and assign permissions. DataRobot features are listed on the left, and role access options&mdash;[read, write, admin](roles-permissions#role-definitions), and entity-specific permissions&mdash;are listed on the right. Select an object, then select an access level for the object under _Permissions_.

![](images/custom-rbac-2.png)

## Manage roles {: #manage-roles }

Unlike default DataRobot roles, you can edit and delete any custom role created for an organization from the **User Roles** page. To determine if a role can be modified, refer to the **Custom Role** column; custom roles display _Yes_ and default roles display _No_.

![](images/custom-rbac-3.png)

From this page, you can:

| &nbsp; | Element | Description |
| ---------- | ----------- | ---------- |
| ![](images/icon-1.png) | Edit | Select a custom user role to modify its permissions. |
| ![](images/icon-2.png) | Delete | Click the **Menu** icon and select **Delete**. If the role is currently assigned to a user, a warning message appears before deleting the role. |
| ![](images/icon-3.png) | Duplicate | Click the duplicate icon and name the new role, then select the role to modify permissions. |

## Feature considerations {: #feature-considerations }

* The RBAC feature flag must be enabled.
* When system administrators create a custom role, the role is enabled system-wide across all organizations. When organization administrators create a custom role, the role is only enabled for that organization.
* The feature flag for custom roles must be enabled system- or organization-wide.
* Feature flags cannot be configured through roles.
* Multiple roles cannot be assigned to a user, group, or organization at this time.
custom-roles
--- title: User Activity Monitor reference description: Use the User Activity Monitor to preview or download the Admin Usage and Prediction Usage reports. You can filter, hide sensitive fields, export CSV, and more. --- # User Activity Monitor reference {: #user-activity-monitor-reference } The following sections describe the fields returned for the **User Activity Monitor** (UAM), based on the selected report view. See the [**User Activity Monitor** overview](main-uam-overview) for information on using the tool. Each row of both the online preview and download of reports relates to a single audited event. Data is presented in ascending order, with the earliest data appearing at the top of the report. You can download the reports using [Export CSV](main-uam-overview#download-activity-data). When exporting reports, you are prompted to filter records for report download. The filters you apply when previewing the report apply only to the online preview. ## Hide sensitive information {: #hide-sensitive-information } When viewing App Usage or Prediction Usage reports, you can hide or display identifying information with the "Include identifying fields" option. You may want to hide the information, for example, if Customer Support will be accessing a report. If unchecked, the columns display in the report without values. The fields considered sensitive are marked with an asterisk in the tables below. ## Admin Usage activity report {: #admin-usage-activity-report } The Admin Usage activity output reports the following data about administrator operations and activities. | Report field | Description | |----------------|--------------------| | Timestamp—UTC | Timestamp (UTC time standard) when the administrator event occurred | | Event | Type of administrator event, such as Create Account, Organization created, Change Password, Update Account, etc. 
| | UID | ID for this user | | Username (*) | Username for this user | | Admin Org ID | ID of the administrator organization | | Admin Org Name | Name of the administrator's organization | | Org ID | ID of the user's organization | | Org Name (*) | Name of the user's organization | | Group ID | ID for this user's group | | Group Name | Name of the user's group | | Admin UID | ID for this administrator (if applicable) | | Admin Username | Username for this administrator (if applicable) | | Old values (*) | User account settings before the administrator made changes. For example, if the administrator activity changed the workers for the related user, this field shows the "max\_workers" value *before* the change. | | New values (*) | User account settings after the administrator made changes. For example, if the administrator activity changed the workers for the related user, this field shows the "max\_workers" value *after* the change. | (\*) denotes an [identifying field](#hide-sensitive-information) for this report ## App Usage activity report {: #app-usage-activity-report } The App Usage activity output reports the following data about application events. | Report field | Description | |----------------|--------------------| | Timestamp—UTC | Timestamp (UTC time standard) when the application event occurred | | Event | Type of application event, such as Add Model, Compliance Doc Generated, aiAPI Portal Login, Dataset Upload, etc. 
| | UID | ID of the user | | Username (*) | Name of the user | | Project ID | ID of the project | | Project Name (*) | Name of the project | | Org ID | ID of the user's organization | | Org Name (*) | Name of the user's organization | | Group ID | ID of the user's group | | Group Name (*) | Name of the user's group | | User Role | Role for the user who initiated the event; values include OWNER, USER, OBSERVER | | Project Type | Type for the related project; possible values include Binary Classification, Regression, Time Series—Regression, Multiclass Classification, etc. | | Metric | Optimization metric for the related project; potential values include LogLoss, RMSE, AUC, etc. | | Partition Method | Partition method for the related project (i.e., how data is partitioned for this project) | | Target Variable (\*) | Target variable for the related project (i.e., what DataRobot will predict) | | Model ID | ID of the model | | Model Type | Type for the model; this also is the name of the model or blueprint | | Blender Model Types | Type of blender model, if applicable | | Sample Type | Type of training sample for the project; values may include Sample Percent, Row Count, Duration, etc. | | Sample Length | Amount of sample data for training the project; values are based on Sample Type and may be percentage, number of rows, or length of time | | Model Fit Time | Amount of time (in seconds) used to build the model | | Recommended Model | Identifies if this is the recommended model for deployment (true) or not (false) | | Insight Type | Type of insight requested for this model; possible values may include Variable Importance, Compute Series Accuracy, Compute Accuracy Over Time, Dual Lift, etc. 
| | Custom Template | Identifies if the compliance document (for the event) was developed with a custom template (true) or not (false); applies to Compliance Doc Generated events | | Deployment ID | ID for the deployment; applies to Replaced Model events | | Deployment Type | Type of deployment; applies to events such as Replaced Model and Deployment Added, and possible values include Dedicated Prediction (deployment to a dedicated prediction server) or Secure Worker (in-app modeling workers used for predictions) | | Client Type | Client (DataRobotPythonClient or DataRobotRClient) used to interface with DataRobot; applies to events such as DataSet Upload, Project Created, Project Target Selected, Select Model Metric, etc. | | Client Version | Version of the related client (DataRobotPythonClient or DataRobotRClient)| | Catalog ID | The ID of an item in the catalog. It can be used to address an individual item. Catalog ID is the same as a Dataset ID when the catalog item is a dataset. | | Catalog Version ID | An ID indicating a specific version of a catalog item. Catalog items can have multiple versions; by default the catalog uses the latest version. To work with an earlier version, you must use the specific version ID. | | Dataset ID | ID of dataset | | Dataset Name (*) | Name of dataset | | Dataset Size | Size of project dataset | | Snapshotted State | A dataset can be [snapshotted](glossary/index#snapshot) (materialized) or not. If it is snapshotted, the data has been stored locally. If it is not, the data is requested from the source whenever it is used. | | Grantee | The UID of the user sharing the asset | | With Grant | Indicates whether the user receiving the new role is allowed to share with others | (\*) denotes an [identifying field](#hide-sensitive-information) for this report ## Prediction Usage activity report {: #prediction-usage-activity-report } The Prediction Usage activity output reports the following data about prediction statistics. 
| Report field | Description | |----------------|--------------------| | Timestamp—UTC | Timestamp (UTC time standard) when the prediction event occurred | | UID| ID for the user who initiated this event | | Username (*) | Name for the user who initiated this event | | Project ID | ID of the project | | Org ID | ID of the user's organization | | Org Name (*) | Name of the user's organization | | Group ID | ID of the user's group | | Group Name (*) | Name of the user's group | | User Role | Role for the user who initiated the prediction event; values include OWNER, USER, OBSERVER | | Model ID | ID of the model | | Model Type | Type for the model; this also is the name of the model or blueprint | | Blender Model Types | Type of blender model, if applicable | | Recommended Model | Identifies if this is the recommended model for deployment (true) or not (false) | | Project Type | Type for the project; possible values include Binary Classification, Regression, Time Series—Regression, Multiclass Classification, etc. 
| | Deployment ID | ID of the deployment | | Deployment Type | Type of deployment; possible values include dedicated (deployment to a dedicated prediction server) or Secure Worker (in-app modeling workers used for predictions) | | Dataset ID | ID of the dataset | | Prediction Method | Method for making predictions for the related project; possible values are Modeling Worker (predictions using modeling workers) or Dedicated Prediction (predictions using dedicated prediction server) | | Prediction Explanations | Identifies if Prediction Explanations were computed for this project model (true) or not (false) | | # of Requests | Number of prediction requests the deployment has received (where a single request can contain multiple prediction requests); provides statistics for deployment service health | | # Rows Scored | Number of dataset rows scored (for predictions) for this deployment | | User Errors | Number of user errors (4xx errors) for this deployment; provides statistics for deployment service health | | Server Errors | Number of server errors (5xx errors) for this deployment; provides statistics for deployment service health | | Average Execution Time | Average time (in milliseconds) DataRobot spent processing prediction requests for this deployment; provides statistics for deployment service health | (\*) denotes an [identifying field](#hide-sensitive-information) for this report ## Self-Managed AI Platform admins {: #self-managed-ai-platform-admins } The system information report is available only on the Self-Managed AI Platform. ### System Information report {: #system-information-report } System information is available for download only, using [Export CSV](main-uam-overview#download-activity-data) (online preview is not available). The System Information report provides information (key:value pairs) specific to the deployed cluster. The category of information is dependent on the type of deployed cluster. 
| Report field | Description | |----------------|--------------------| | deployment | Python version used to deploy the cluster. | | install type | The type of installation for the deployed cluster. | | mongo | Mongo database configuration. Identifies whether secrets are enforced when communicating with the Mongo database and also database availability. | | redis | Redis queue service configuration. Identifies whether secrets are enforced when communicating with a Redis database and also database availability. | | postgresql | Postgres database configuration. Identifies whether secrets are enforced when communicating with the Postgres database. | | elasticsearch | Elasticsearch configuration. Identifies whether secrets are enforced when communicating with Elasticsearch and lists configuration parameters and their availability. | | tls config | Identifies which services are set up to use transport layer security (TLS). | | modeling workers | Reports the number of modeling workers configured. If a cluster has unlimited modeling workers, this field is empty. | | dedicated predictions | Identifies whether a dedicated prediction environment is configured and if audit logs are enabled on the environment. | | smtp | Identifies whether SMTP integration is configured. | | product type | Lists DataRobot products (i.e. MLOps, AutoML, Time series) enabled for the deployed cluster. | Typical information for a report is shown below: ![](images/sysinfo.png)
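Because exported reports are plain CSV, they can be post-processed with standard tooling. The sketch below is illustrative (the helper function is hypothetical); it sums the `User Errors` and `Server Errors` columns of a Prediction Usage export, using the column names from the table above:

```python
import csv
from io import StringIO

def total_errors(report_csv: str) -> int:
    """Sum user (4xx) and server (5xx) errors across a Prediction Usage export."""
    reader = csv.DictReader(StringIO(report_csv))
    return sum(int(row["User Errors"]) + int(row["Server Errors"])
               for row in reader)

# Tiny inline sample standing in for a downloaded Export CSV file.
sample = "User Errors,Server Errors\n2,1\n0,3\n"
print(total_errors(sample))  # 6
```

In practice you would read the downloaded export from disk (for example with `open(...)` in place of the inline sample) and could aggregate by `Deployment ID` or `Org ID` the same way.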
{% if section(page, "dr-notebooks") %}
You can create and run code, text, and chart cells within your notebook. Before proceeding, be sure to review the guidelines for [configuring and initializing a notebook environment](dr-env-nb).
{% else %}
You can create and run code, text, and chart cells within your notebook. Before proceeding, be sure to review the guidelines for [configuring and initializing a notebook environment](wb-env-nb).
{% endif %}

### Use the DataRobot API {: #use-the-datarobot-api }

Within DataRobot Notebooks, you can easily use the DataRobot API. The environment images DataRobot provides come with the respective DataRobot Python and R clients preinstalled. DataRobot automatically fetches your endpoint and your API token and sets them as environment variables (`DATAROBOT_ENDPOINT` and `DATAROBOT_API_TOKEN` for Python and `DATAROBOT_API_ENDPOINT` and `DATAROBOT_API_TOKEN` for R) within your notebook container, so that when the session starts up, it automatically handles client instantiation without requiring manual authentication.

=== "Python"

    Import the DataRobot package to start using the Python client:

    `import datarobot as dr`

=== "R"

    Import the DataRobot library to start using the R client:

    `library(datarobot)`

### Create code cells

The default image for the notebooks is a pre-built Python image with all the dependencies and common open-source software (OSS) libraries preinstalled. To see the list of all packages available in the default image, hover over that image in the **Environment** tab:

{% if section(page, "dr-notebooks") %}
![](images/nb-8.png)
{% else %}
![](images/wb-nb-8.png)
{% endif %}

!!! note

    DataRobot notebooks are not polyglot; run Python code or R code in a notebook, but do not intermingle both. The language supported will depend on the image you’ve selected for the notebook’s environment configuration.
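The automatic authentication described above relies only on those environment variables. A minimal sketch of what the notebook container provides (the values below are placeholders standing in for what DataRobot sets for you; do not hard-code real tokens):

```python
import os

# Placeholder values simulating what DataRobot sets automatically
# in the notebook container at session start.
os.environ.setdefault("DATAROBOT_ENDPOINT", "https://app.datarobot.com/api/v2")
os.environ.setdefault("DATAROBOT_API_TOKEN", "<your-api-token>")

endpoint = os.environ["DATAROBOT_ENDPOINT"]
token = os.environ["DATAROBOT_API_TOKEN"]

# With these variables present, `import datarobot as dr` authenticates
# automatically; an explicit client would be:
#   dr.Client(endpoint=endpoint, token=token)
```

Outside the notebook environment, you would set these two variables yourself before importing the client.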
{% if section(page, "dr-notebooks") %}
For more information about configuring a notebook's environment, read about [environment management](dr-env-nb).
{% else %}
For more information about configuring a notebook's environment, read about [environment management](wb-env-nb).
{% endif %}

### Add libraries {: #add-libraries }

Follow the instructions below to install additional libraries ad hoc in Python or R.

!!! note

    Notebooks support rich outputs similar to Jupyter, so you can use plotting libraries of your choice (such as matplotlib, seaborn, etc.) and see the plotted charts inline within the cell output.

You can run other shell commands for Python as well by using the `!` notation. Note that when the session restarts, these ad-hoc installations do not persist, as the environment reverts back to the original state of the pre-built image.

=== "Python"

    To install and use a Python package that is not included in the default image, run the following within a code cell:

    `!pip install <your-package>`

=== "R"

    You can work with the R language in a notebook instead of Python by selecting the pre-built R image from the **Environment** tab. You can then write and execute R code in the code cells of the notebook with the R kernel.

    To install and use additional R packages, use [devtools](https://cran.r-project.org/web/packages/devtools/devtools.pdf){ target=_blank } from a code cell or the following code:

    `install.packages(<package_name>)`

### Create text cells {: #create-text-cells }

Markdown text cells support traditional Markdown syntax and [GitHub-flavored Markdown](https://docs.github.com/en/get-started/writing-on-github/getting-started-with-writing-and-formatting-on-github/basic-writing-and-formatting-syntax){ target=_blank } syntax. Render Markdown cells by executing the cell. To edit a rendered Markdown cell, double-click the cell.
{% if section(page, "dr-notebooks") %}
![](images/nb-9.png)
{% else %}
![](images/wb-nb-9.png)
{% endif %}

### Create chart cells

DataRobot allows you to create built-in, code-free chart cells within DataRobot Notebooks, enabling you to quickly visualize your data without coding your own plotting logic.

!!! note

    Chart cells are only supported for Python notebooks. They are unavailable for R notebooks.

To add a chart cell, hover your cursor outside of a cell in the notebook and select the **Chart** option.

{% if section(page, "dr-notebooks") %}
![](images/nb-32.png)
{% else %}
![](images/wb-nb-32.png)
{% endif %}

When you add a chart cell to a notebook, you first need to specify the data you want to visualize by selecting a DataFrame. Select the variable name corresponding to the DataFrame from the dropdown list in the chart cell. The cell lists all of the DataFrame objects that are currently in memory.

!!! note

    DataRobot only plots the first 5,000 rows of the DataFrame.

{% if section(page, "dr-notebooks") %}
![](images/nb-33.png)
{% else %}
![](images/wb-nb-33.png)
{% endif %}

After selecting a DataFrame, pick the type of chart you want to create. DataRobot offers configurations for bar charts, line charts, scatter plots, and area charts. They all require the same configuration fields, detailed below.

{% if section(page, "dr-notebooks") %}
![](images/nb-34.png)
{% else %}
![](images/wb-nb-34.png)
{% endif %}

Field | Description
---------- | -----------
Show title | Displays a title for the chart. Once enabled, provide a name for the title.
Display tooltips | Shows tooltips with more details on the data when hovering over the chart.
Show legend | Displays legend mapping.
X-axis (Dimension) | Select a column from the DataFrame (provided as a dropdown list) to encode as the X-axis.
Y-axis (Measure) | Select a column from the DataFrame (provided as a dropdown list) to encode as the Y-axis.
Color | Choose the color for the chart contents.
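The chart cell's axis settings include an Aggregation option that rolls measure values up per dimension. A minimal stdlib sketch of what mean-aggregation of a Y column per X category amounts to (the data and column pairing are hypothetical, standing in for a DataFrame's columns):

```python
from statistics import mean

# Hypothetical (dimension, measure) rows, like the X and Y columns a
# chart cell reads from a DataFrame; only the first 5,000 rows are plotted.
rows = [("A", 10), ("A", 14), ("B", 8), ("B", 12), ("B", 10)][:5000]

# Group measure values per dimension, then aggregate -- mean shown here;
# median, sum, or standard deviation would work the same way.
groups = {}
for x, y in rows:
    groups.setdefault(x, []).append(y)

aggregated = {x: mean(ys) for x, ys in groups.items()}
```

In the chart cell itself this grouping happens for you; the sketch only illustrates the transformation the Aggregation setting applies before plotting.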
In addition to the configuration above, you can modify aspects of the X- and Y-axes by selecting the settings wheel next to each field.

{% if section(page, "dr-notebooks") %}
![](images/nb-35.png)
{% else %}
![](images/wb-nb-35.png)
{% endif %}

Toggle | Description
---------- | -----------
Hide axis label | Hides the label for the axis from the chart display.
Hide grid | Removes the grid pattern in the background of the chart.
Hide in tooltip | Hides the value for the axis in the tooltip shown when hovering on points of the chart.
Show point markers | Displays point markers along the points for the axis in the chart.
Aggregation | Sets whether to aggregate values for the axis. You can aggregate by a variety of values, including the median, mean, sum, standard deviation, and more.

After configuring the chart, you can edit how the cell displays using the icons in the top-right corner.

{% if section(page, "dr-notebooks") %}
![](images/nb-36.png)
{% else %}
![](images/wb-nb-36.png)
{% endif %}

{% if section(page, "dr-notebooks") %}
* Select the pencil icon to hide the editor menu for the chart cell.
* Click the download icon to save a local copy of the chart as an SVG file.
* Select the trash can icon to delete the chart cell.
* Click the menu icon to access the [actions](dr-action-nb) available for the cell.
{% else %}
* Select the pencil icon to hide the editor menu for the chart cell.
* Click the download icon to save a local copy of the chart as an SVG file.
* Select the trash can icon to delete the chart cell.
* Click the menu icon to access the [actions](wb-action-nb) available for the cell.
{% endif %}

### Table of contents {: #table-of-contents }

As you create and execute cells to develop your notebook, DataRobot automatically generates a table of contents for easier navigation. To access it, click the table of contents icon (![](images/icon-toc.png)) in the sidebar. The **Markdown** tab provides an autogenerated table of contents.
Each entry maps to headings in the notebook's Markdown cells, corresponding to the Markdown heading level (#, ##, ###, etc.). Entries are hyperlinked, so clicking an entry navigates you to the corresponding cell in the notebook.

{% if section(page, "dr-notebooks") %}
![](images/nb-23.png)

The **Cells** tab provides an overview of all cells within the notebook (both code and Markdown cells). You can perform [cell actions](dr-action-nb) (and move cells around via drag-and-drop) from the cell list in the table of contents.
{% else %}
![](images/wb-nb-23.png)

The **Cells** tab provides an overview of all cells within the notebook (both code and Markdown cells). You can perform [cell actions](wb-action-nb) (and move cells around via drag-and-drop) from the cell list in the table of contents.
{% endif %}

{% if section(page, "dr-notebooks") %}
![](images/nb-24.png)
{% else %}
![](images/wb-nb-24.png)
{% endif %}

### Cell settings {: #cell-settings }

{% if section(page, "dr-notebooks") %}
You can configure [notebook-wide cell settings](dr-settings-nb#notebook-settings) by accessing the gear icon menu at the top right of the Notebook header.

![](images/nb-4.png)
{% else %}
You can configure [notebook-wide cell settings](wb-settings-nb#notebook-settings) by accessing the gear icon menu at the top right of the Notebook header.

![](images/wb-nb-4.png)
{% endif %}
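The mapping from Markdown heading levels to table-of-contents entries described above can be sketched with a few lines of code (cell contents are hypothetical; this only illustrates the level-to-entry mapping, not DataRobot's implementation):

```python
import re

# Hypothetical Markdown cell sources; each heading becomes a TOC entry
# at the level implied by its number of leading '#' characters.
markdown_cells = [
    "# Project overview",
    "Some narrative text with no headings.",
    "## Data preparation\n### Cleaning",
]

toc = []
for cell in markdown_cells:
    for line in cell.splitlines():
        match = re.match(r"^(#{1,6})\s+(.+)$", line)
        if match:
            toc.append((len(match.group(1)), match.group(2)))

# Indent each entry by its heading level, as a nested TOC would.
for level, title in toc:
    print("  " * (level - 1) + title)
```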
### Access to notebooks {: #access-to-notebooks }

Notebooks offer an in-browser editor to create and execute code for data science analysis and modeling. They also display computation results in various formats, including text, images, graphs, plots, tables, and more. You can customize output display by using open-source plugins. Cells can also contain Markdown rich text for commentary and explanation of the coding workflow.

!!! info "Availability information"

    Support for notebooks in DataRobot is off by default. Contact your DataRobot representative or administrator for information on enabling the feature.

    <b>Feature flag:</b> Enable Notebooks

{% if section(page, "dr-notebooks") %}
For frequently asked questions, feature considerations, and additional reading, view [Notebook reference](dr-notebook-ref).
{% endif %}

{% if section(page, "workbench") %}
For frequently asked questions, feature considerations, and additional reading, view [Notebook reference](wb-notebook-ref).
{% endif %}

### Notebook workflow overview {: #notebook-workflow-overview }

``` mermaid
graph TB
  A[Create a DataRobot notebook]
  A --> |New notebook|C[Add a new notebook]
  A --> |Existing notebook|D[Upload an .ipynb notebook]
  C --> E{Configure the environment}
  D --> E
  E --> F[Start the notebook session]
  F --> G[Edit the notebook]
  G --> |Writing guidelines?|H[Create and edit Markdown cells]
  G --> |Coding?|I[Reference code snippets and create code cells]
  H --> J[Run the notebook]
  I --> J
  J --> K[Create a revision history]
```

### DataRobot use case notebooks {: #datarobot-use-case-notebooks }

You can view, download, and import notebooks that outline common DataRobot use cases and workflows from the [API User Guide](api/guide/index){ target=_blank }.

### Notebook management {: #notebook-management }

{% if section(page, "dr-notebooks") %}
Topic | Describes...
----- | ------
[Create notebooks](dr-create-nb) | How to create, import, and export notebooks.
[Notebook settings](dr-settings-nb) | The settings available for notebooks.
[Notebook versioning](dr-revise-nb) | How notebooks are versioned, and how to view the revision history of a notebook.
{% endif %}

{% if section(page, "workbench") %}
Topic | Describes...
----- | ------
[Create notebooks](wb-create-nb) | How to create, import, and export notebooks.
[Notebook settings](wb-settings-nb) | The settings available for notebooks.
[Notebook versioning](wb-revise-nb) | How notebooks are versioned, and how to view the revision history of a notebook.
{% endif %}

### Notebook coding experience {: #notebook-coding-experience }

{% if section(page, "dr-notebooks") %}
Topic | Describes...
----- | ------
[Environment management](dr-env-nb) | How to configure and start the notebook's environment.
[Create and execute cells](dr-cell-nb) | How to create and execute code and Markdown cells in a notebook, and how to integrate DataRobot's API into your coding workflow.
[Cell actions](dr-action-nb) | Understand the actions and keyboard shortcuts available in a notebook.
[Code intelligence](dr-code-int) | Learn about the code intelligence features provided throughout the notebook coding experience.
{% endif %}

{% if section(page, "workbench") %}
Topic | Describes...
----- | ------
[Environment management](wb-env-nb) | How to configure and start the notebook's environment.
[Create and execute cells](wb-cell-nb) | How to create and execute code and Markdown cells in a notebook, and how to integrate DataRobot's API into your coding workflow.
[Cell actions](wb-action-nb) | Understand the actions and keyboard shortcuts available in a notebook.
[Code intelligence](wb-code-int) | Learn about the code intelligence features provided throughout the notebook coding experience.
{% endif %}
!!! info "Availability information"

    The OpenAI integration is off by default. Contact your DataRobot representative or administrator for information on enabling this feature.

    <b>Feature flag:</b> Enable Notebooks OpenAI Integration

You can power your code development workflows in DataRobot Notebooks by applying OpenAI large language models to assist with code generation. With the Azure OpenAI Service integration in DataRobot Notebooks, you can leverage state-of-the-art generative models with Azure's enterprise-grade security and compliance capabilities.

## Use the Code Assistant {: #use-the-code-assistant }

When working in a code cell in a DataRobot Notebook, you can access the Code Assistant by selecting **Assist**.

{% if section(page, "dr-notebooks") %}
![](images/nb-41.png)
{% else %}
![](images/wb-nb-41.png)
{% endif %}

Once selected, complete the prompt by telling the assistant what you want to do. The example below requests that Azure OpenAI Service write code to create a Pandas DataFrame with the Iris dataset and include comments. After completing the prompt, click **Run OpenAI**.

{% if section(page, "dr-notebooks") %}
![](images/nb-42.png)
{% else %}
![](images/wb-nb-42.png)
{% endif %}

Allow some time for the assistant to run&mdash;it then generates the result of the prompt in the cell. The prompt is maintained as the first comment in the cell.

{% if section(page, "dr-notebooks") %}
![](images/nb-43.png)
{% else %}
![](images/wb-nb-43.png)
{% endif %}

As the Code Assistant dynamically materializes the generated code in the cell's code editor, you can choose to speed up the process by clicking **Finish Up!**.

{% if section(page, "dr-notebooks") %}
![](images/nb-44.png)
{% else %}
![](images/wb-nb-44.png)
{% endif %}

You can iterate on the same code cell after generating the initial result. Click **Assist**, provide another prompt in the same code cell, and select **Run OpenAI**.
For example, in the cell displayed above, you can prompt the Code Assistant to add comments in Spanish to each line.

{% if section(page, "dr-notebooks") %}
![](images/nb-45.png)
{% else %}
![](images/wb-nb-45.png)
{% endif %}

The resulting cell provides comments for each line of code in Spanish.

{% if section(page, "dr-notebooks") %}
![](images/nb-46.png)
{% else %}
![](images/wb-nb-46.png)
{% endif %}

When generation completes, you can evaluate the helpfulness of the Code Assistant's generation. Select the smiley face to provide an evaluation.

{% if section(page, "dr-notebooks") %}
![](images/nb-47.png)
{% else %}
![](images/wb-nb-47.png)
{% endif %}

From the modal, select whether the generated code was helpful, not helpful, or contains inappropriate or harmful content.

![](images/nb-48.png)
{% if section(page, "dr-notebooks") %}
These pages outline the coding experience when using DataRobot notebooks. Learn about common cell actions, keyboard shortcuts, and how to run notebooks.

Topic | Describes...
----- | ------
[Environment management](dr-env-nb) | How to configure and start the notebook environment.
[Create and execute cells](dr-cell-nb) | How to create and execute code and Markdown cells in a notebook.
[Cell actions](dr-action-nb) | The actions and keyboard shortcuts available in a notebook.
[Code intelligence](dr-code-int) | The code intelligence features provided throughout the notebook coding experience.
[Notebook terminals](dr-terminal-nb) | Use integrated terminals to execute commands such as .py scripts or package installations.
[Azure OpenAI Service integration](dr-openai-nb) | Leverage Azure's OpenAI Code Assistant to generate code in DataRobot Notebooks using ChatGPT.
{% else %}
These pages outline the coding experience when using DataRobot notebooks. Learn about common cell actions, keyboard shortcuts, and how to run notebooks.

Topic | Describes...
----- | ------
[Environment management](wb-env-nb) | How to configure and start the notebook environment.
[Create and execute cells](wb-cell-nb) | How to create and execute code and Markdown cells in a notebook.
[Cell actions](wb-action-nb) | The actions and keyboard shortcuts available in a notebook.
[Code intelligence](wb-code-int) | The code intelligence features provided throughout the notebook coding experience.
[Notebook terminals](wb-terminal-nb) | Use integrated terminals to execute commands such as .py scripts or package installations.
[Azure OpenAI Service integration](wb-openai-nb) | Leverage Azure's OpenAI Code Assistant to generate code in DataRobot Notebooks using ChatGPT.
{% endif %}
This page outlines how to configure and start the notebook environment.

## Manage the notebook environment {: #manage-the-notebook-environment }

Before you create and execute code, click the environment icon to configure the notebook's environment. The [environment image](#built-in-environment-images) determines the coding language, dependencies, and open-source libraries used in the notebook.

The default image for DataRobot Notebooks is a pre-built Python image. To see the list of all packages available in the default image, hover over that image in the **Environment** tab:

{% if section(page, "dr-notebooks") %}
![](images/nb-8.png)
{% else %}
![](images/wb-nb-8.png)
{% endif %}

The screenshot and table below outline the configuration options available for a notebook environment.

{% if section(page, "dr-notebooks") %}
![](images/nb-6.png)

| | Element | Description |
|---|---|---|
| ![](images/icon-1.png) | Start environment toggle | Starts or stops the notebook environment's kernel, which allows you to execute the notebook's code cells. |
| ![](images/icon-2.png) | Resource type | Represents a machine preset, specifying the CPU and RAM available on the machine where the environment's kernel runs. |
| ![](images/icon-3.png) | Image | Determines the coding language and associated libraries that the notebook uses. |
| ![](images/icon-4.png) | Runtime | Indicates the CPU usage, RAM usage, and elapsed runtime for the notebook's environment during an active session. |
| ![](images/icon-5.png) | Session timeout | Limits the amount of inactivity time allowed before the environment stops running and the underlying machine is shut down. The default timeout on inactivity is 60 minutes, and the maximum configurable timeout is 180 minutes. |
{% else %}
![](images/wb-nb-6.png)

| | Element | Description |
|---|---|---|
| ![](images/icon-1.png) | Start environment toggle | Starts or stops the notebook environment's kernel, which allows you to execute the notebook's code cells. |
| ![](images/icon-2.png) | Resource type | Represents a machine preset, specifying the CPU and RAM available on the machine where the environment's kernel runs. |
| ![](images/icon-3.png) | Environment | Determines the coding language and associated libraries that the notebook uses. |
| ![](images/icon-4.png) | Runtime | Indicates the CPU usage, RAM usage, and elapsed runtime for the notebook's environment during an active session. |
| ![](images/icon-5.png) | Session timeout | Limits the amount of inactivity time allowed before the environment stops running and the underlying machine is shut down. The default timeout on inactivity is 60 minutes, and the maximum configurable timeout is 180 minutes. |
{% endif %}

To begin a notebook session where you create and run code, start the environment by toggling it on in the toolbar.

{% if section(page, "dr-notebooks") %}
![](images/nb-49.png)
{% else %}
![](images/wb-nb-49.png)
{% endif %}

Wait a moment for the environment to initialize; once it displays the **Started** status, you can begin editing.

{% if section(page, "dr-notebooks") %}
![](images/nb-50.png)
{% else %}
![](images/wb-nb-50.png)
{% endif %}

If you upgrade any of the existing packages in the notebook environment during your session and want the upgraded version to be recognized, you need to restart the kernel. To do so, click the circular arrow icon in the toolbar. Note that restarting the kernel is different from restarting the environment session: when you stop the environment session (using the session toggle), DataRobot stops the container your notebook is running in. The notebook state and any packages installed at runtime are lost, as the next time you start the session, a new container is spun up.
{% if section(page, "dr-notebooks") %}
![](images/nb-51.png)
{% else %}
![](images/wb-nb-51.png)
{% endif %}

Note that the session automatically shuts down when the inactivity timeout is reached. The session is considered inactive when the notebook has no running cells and no changes have been made to the notebook contents in the time set by the session timeout value.

{% if section(page, "dr-notebooks") %}
From the notebook dashboard, you can view the status of all notebook environments.

![](images/nb-7.png)

!!! note

    Note that you can only run up to two active notebook sessions at the same time.
{% endif %}

### Built-in environment images

DataRobot maintains a set of built-in Docker images that you can select from to use as the container image for a given notebook. DataRobot provides the following images:

* Python 3.9 image: Contains Python version 3.9, the [DataRobot Python client](https://datarobot-public-api-client.readthedocs-hosted.com/en/latest/index.html){ target=_blank }, and a suite of common data science libraries.
* Python 3.8 image: Contains Python version 3.8, the DataRobot Python client, and a suite of common data science libraries. Most notably, this image comes preinstalled with the dependencies needed to work with Snowflake. These libraries include the [Snowflake Python Connector](https://docs.snowflake.com/en/user-guide/python-connector.html) and [Snowpark](https://docs.snowflake.com/en/developer-guide/snowpark/index.html) (which requires Python 3.8).
* R 4.2 image: Contains R version 4.2, the [DataRobot R client](https://cran.r-project.org/web/packages/datarobot/index.html){ target=_blank }, and a suite of common data science libraries.

## Environment variables {: #environment-variables }

If you need to reference sensitive strings in a notebook, rather than storing them in plain text within the notebook, you can use environment variables to securely store the values. These values are stored encrypted by DataRobot.
Environment variables are useful if you need to specify credentials for connecting to an external data source within your notebook, for instance. Whenever you start a notebook session, DataRobot sets the notebook's associated environment variables in the container environment, so you can reference them from your notebook code using the following code:

=== "Python"

    ```
    import os
    KEY = os.environ['KEY']  # KEY variable now references your VALUE
    ```

=== "R"

    ```
    KEY = Sys.getenv("KEY")
    ```

To access environment variables, click the lock icon in the sidebar.

{% if section(page, "dr-notebooks") %}
![](images/nb-15.png)
{% else %}
![](images/wb-nb-15.png)
{% endif %}

Click **Create new entry**. In the dialog box, enter the key and value for a single entry, and provide an optional description.

{% if section(page, "dr-notebooks") %}
![](images/nb-16.png)
{% else %}
![](images/wb-nb-16.png)
{% endif %}

If you want to add multiple variables, select **Bulk import**. Use the following format on each line in the field:

`KEY=VALUE # DESCRIPTION`

{% if section(page, "dr-notebooks") %}
![](images/nb-17.png)
{% else %}
![](images/wb-nb-17.png)
{% endif %}

!!! note

    Any existing environment variable with the same key will have its value overwritten by the new value specified.

When you have finished adding environment variables, click **Save**.

## Edit existing variables {: #edit-existing-variables }

You can also edit and delete a notebook’s associated environment variables from the **Environment variables** panel:

{% if section(page, "dr-notebooks") %}
![](images/nb-18.png)
{% else %}
![](images/wb-nb-18.png)
{% endif %}

* Click the pencil icon on a variable to edit it.
* Select the eye icon to view a hidden value.
* Click the trash can icon to delete a variable.
* Click **Insert all** to insert a code snippet that retrieves all of the notebook's environment variables and includes them in the notebook.
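The bulk-import format described above (`KEY=VALUE # DESCRIPTION`) is simple to decompose. A minimal sketch with hypothetical keys, showing how each line splits into its three parts:

```python
# Hypothetical bulk-import lines (KEY=VALUE # DESCRIPTION);
# the description after "#" is optional.
lines = [
    "DB_HOST=db.example.com # hostname of the warehouse",
    "DB_USER=analyst",
]

entries = {}
for line in lines:
    # Split off the optional description first, then the key/value pair.
    body, _, description = line.partition("#")
    key, _, value = body.strip().partition("=")
    entries[key] = (value.strip(), description.strip())
```

Each parsed entry mirrors what the **Bulk import** dialog stores: a key, its value, and an optional description.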
DataRobot notebooks support integrated terminal windows. When you have a notebook session running, you can open one or more integrated terminals to execute terminal commands such as running .py scripts or installing packages. Terminal integration also gives you full support for a system shell (bash) so you can run installed programs.

## Create a terminal window

{% if section(page, "dr-notebooks") %}
To create a terminal window in a DataRobot notebook, first ensure that the [notebook environment is running](dr-env-nb#manage-the-notebook-environment).

![](images/nb-37.png)

From the sidebar, select the terminal icon at the bottom of the page to create a new terminal window.

![](images/nb-38.png)

After creating a window, the notebook page divides into two sections: one for the notebook itself, and another for the terminal.

![](images/nb-40.png)
{% else %}
To create a terminal window in a DataRobot notebook, first ensure that the [notebook environment is running](wb-env-nb#manage-the-notebook-environment).

![](images/wb-nb-37.png)

From the sidebar, select the terminal icon at the bottom of the page to create a new terminal window.

![](images/wb-nb-38.png)

After creating a window, the notebook page divides into two sections: one for the notebook itself, and another for the terminal.

![](images/wb-nb-40.png)
{% endif %}

!!! note

    Note that terminal windows only last for the duration of the notebook session; they do not persist when you access the notebook at a later time.

### Manage a terminal window

You can manage the terminal window in a variety of ways:

{% if section(page, "dr-notebooks") %}
![](images/nb-39.png)

| | Element | Description |
|---|---|---|
| ![](images/icon-1.png) | Rename terminal window | To rename the terminal window, select the pencil icon next to the window name. |
| ![](images/icon-2.png) | Add new terminal window | To add an additional terminal session, click the plus sign (**+**) next to the terminal window name. This creates an additional terminal window as a tab, allowing you to work in multiple windows simultaneously. |
| ![](images/icon-3.png) | Page divider | Use the page divider to manage the size of the notebook and terminal sections of the page. |
| ![](images/icon-4.png) | Fullscreen | Click to view the terminal on the full page and hide the notebook in session. |
{% else %}
![](images/wb-nb-39.png)

| | Element | Description |
|---|---|---|
| ![](images/wb-icon-1.png) | Rename terminal window | To rename the terminal window, select the pencil icon next to the window name. |
| ![](images/wb-icon-2.png) | Add new terminal window | To add an additional terminal session, click the plus sign (**+**) next to the terminal window name. This creates an additional terminal window as a tab, allowing you to work in multiple windows simultaneously. |
| ![](images/wb-icon-3.png) | Page divider | Use the page divider to manage the size of the notebook and terminal sections of the page. |
| ![](images/wb-icon-4.png) | Fullscreen | Click to view the terminal on the full page and hide the notebook in session. |
{% endif %}

!!! note

    Terminal instances and the notebook itself share the same current working directory.
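Because the terminal and the notebook share a working directory, a file created from the terminal is immediately visible to notebook code at the same path. A small illustration (the script name is hypothetical):

```shell
# From an integrated terminal: write a small script, then run it.
# A notebook cell could read or import hello.py from the same path.
echo 'print("hello from a script")' > hello.py
python3 hello.py
```

The same property applies in reverse: files a notebook cell writes to the working directory can be inspected or processed from the terminal.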
# Notebook reference {: #notebook-reference }

## FAQ {: #faq }

??? faq "Are DataRobot Notebooks available in Classic and Workbench?"

    Yes, DataRobot Notebooks are available in both Workbench and DataRobot Classic.

??? faq "Why should I use DataRobot Notebooks?"

    DataRobot Notebooks offer you the flexibility to develop and run code in the language of your choice using both your preferred open-source ML libraries and the DataRobot API for streamlining and automating your DataRobot workflows&mdash;all within the DataRobot platform. With a fully managed, hosted platform for creating and executing notebooks with auto-scaling capabilities, you can focus on data science work rather than infrastructure management. You can easily organize, collaborate, and share notebooks and related assets among your teams in one unified environment with centralized governance via DataRobot Use Cases.

??? faq "What’s different about DataRobot Notebooks compared to Jupyter?"

    Working locally with open-source Jupyter Notebooks can be challenging to scale within an enterprise, whereas DataRobot Notebooks help make data science a team sport by serving as a central repository for notebooks and data science assets, enabling your team to collaborate and make progress on complex problems. Since DataRobot Notebooks are a fully managed solution, you can work with notebooks backed by scalable resources without having to manage the infrastructure yourself. DataRobot Notebooks also provide enhanced features beyond the classic Jupyter offering, such as built-in revision history, credential management, built-in visualizations, and more. You have the full flexibility of a code-first environment while also leveraging DataRobot’s suite of other ML offerings, including automated machine learning.

??? faq "Are DataRobot Notebooks compatible with Jupyter?"

    Yes, notebooks are Jupyter compatible and utilize Jupyter kernels. DataRobot supports import from and export to the .ipynb standard file format for notebook files, so you can easily bring existing workloads into the platform without concern for vendor lock-in of your IP. The user interface is also closely aligned with Jupyter (e.g., modal editor, keyboard shortcuts), so you can easily onboard without a steep learning curve.

??? faq "Can I install any packages I need within the notebook environment?"

    Yes, you can install any additional packages you need into your environment at runtime during your notebook session. You can use Jupyter's magic commands (e.g., `!pip install <package-name>`) from a notebook cell; however, when your session shuts down, packages installed at runtime do not persist. You must reinstall them the next time you restart the session.

??? faq "Can I share notebooks?"

    Although you cannot share notebooks directly with other users, in Workbench, you can share Use Cases that contain notebooks. Therefore, to share a notebook with another user, you must [share the entire Use Case](wb-build-usecase#share) so that they have access to all associated assets.

??? faq "How can I access datasets in my Use Case that I have not yet loaded into my notebook?"

    Access the dataset you want to include in the notebook from the Use Case dashboard. The ID is included in the dataset URL (after `/prepare/`); it is the same ID stored for the dataset in the AI Catalog.

    ![](images/nb-faq-1.png)

## Feature considerations {: #feature-considerations }

Review the tables below to learn more about the limitations for DataRobot Notebooks.

### CPU and memory limits {: #cpu-and-memory-limits }

Review the limits below based on the machine size you are using.
Machine size | CPU limit | Memory limit |
------------ | --------- | ------------ |
XS | 1 CPU | 4 GB |
S | 2 CPU | 8 GB |
M* | 4 CPU | 16 GB |
L* | 8 CPU | 32 GB |

\* M and L machine sizes are available for customers depending on their pricing tier (up to M for Enterprise tier customers, and up to L for Business Critical tier customers).

### Cell limits {: #cell-limits }

The table below outlines limits for cell execution time, cell source size, and output size.

Limit | Value |
------------ | --------- |
Max cell execution time | 8 hours |
Max cell output size | 10 MB |
Max notebook cells count | 1000 cells |
Max cell source code size | 2 MB |

## Read more {: #read-more }

DataRobot offers a library of common use cases that outline data science and machine learning workflows to solve problems. You can view a selection of notebooks below that use v3.0 of DataRobot's Python client.

Topic | Describes... |
----- | ------ |
[Use cases for version 2.x](python2/index){ target=_blank } | Notebooks for use cases that use methods for 2.x versions of DataRobot's Python client. |
[Identify money laundering with anomaly detection](aml/index){ target=_blank } | How to use a historical financial transaction dataset and train models that detect instances of money laundering. |
[Measure price elasticity of demand](elasticity/index){ target=_blank } | A use case to identify relationships between price and demand, maximize revenue by properly pricing products, and monitor price elasticities for changes in price and demand. |
[Insurance claim triage](insurance/index){ target=_blank } | How to evaluate the severity of an insurance claim in order to triage it effectively. |
[Predict loan defaults](loan-default/index){ target=_blank } | A use case that reduces defaults and minimizes risk by predicting the likelihood that a borrower will not repay their loan. |
[No-show appointment forecasting](no-show-appt/index){ target=_blank } | How to build a model that identifies patients most likely to miss appointments, with correlating reasons. |
[Predict late shipments](predict-shipment/index){ target=_blank } | A use case that determines whether a shipment will be late or if there will be a shortage of parts. |
[Reduce 30-day readmissions rate](readmission/index){ target=_blank } | How to reduce the 30-day readmission rate at a hospital. |
[Predict steel plate defects](steel/index){ target=_blank } | A use case that helps manufacturers significantly improve the efficiency and effectiveness of identifying defects of all kinds, including those for steel sheets. |
[Predict customer churn](customer-churn-v3.ipynb){ target=_blank } | How to predict customers that are at risk to churn and when to intervene to prevent it. |
[Large scale demand forecasting](demand-v3.ipynb){ target=_blank } | An end-to-end demand forecasting use case that uses DataRobot's Python package. |
[Predictions for fantasy baseball](fantasy-v3.ipynb){ target=_blank } | An estimate of a baseball player's true talent level and their likely performance for the coming season. |
[Lead scoring](lead-scoring-v3.ipynb){ target=_blank } | A binary classification problem of whether a prospect will become a customer. |
[Forecast sales with multiseries modeling](multiseries-v3.ipynb){ target=_blank } | How to forecast future sales for multiple stores using multiseries modeling. |
[Identify money laundering with anomaly detection](outlier-v3.ipynb){ target=_blank } | How to train anomaly detection models to detect outliers. |
[Predict CO₂ levels with out-of-time validation modeling](otv-v3.ipynb){ target=_blank } | How to use [out-of-time validation (OTV)](otv){ target=_blank } modeling with DataRobot's Python client to predict monthly CO₂ levels for one of Hawaii's active volcanoes, Mauna Loa. |
[Predict equipment failure](part-fail-v3.ipynb){ target=_blank } | A use case that determines whether or not equipment part failure will occur. |
[Predict fraudulent medical claims](pred-fraud-v3.ipynb){ target=_blank } | The identification of fraudulent medical claims using the DataRobot Python package. |
[Generate SHAP-based Prediction Explanations](shap-nb.ipynb){ target=_blank } | How to use DataRobot's SHAP Prediction Explanations to determine what qualities of a home drive sale value. |
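As the FAQ earlier on this page notes, a dataset's ID appears in its Workbench URL after the `/prepare/` path segment, and it matches the dataset's AI Catalog ID. A small standard-library sketch of pulling the ID out of such a URL (the URL and ID below are made-up examples, not real resources):

```python
from urllib.parse import urlparse

def dataset_id_from_url(url: str) -> str:
    """Extract the dataset ID that follows the /prepare/ path segment."""
    parts = urlparse(url).path.strip("/").split("/")
    try:
        return parts[parts.index("prepare") + 1]
    except (ValueError, IndexError):
        raise ValueError(f"no /prepare/<dataset-id> segment in {url!r}")

# Hypothetical Workbench URL; only the /prepare/<id> segment matters here.
url = "https://app.datarobot.com/usecases/abc123/prepare/64f1c0ffee0123456789abcd"
print(dataset_id_from_url(url))  # -> 64f1c0ffee0123456789abcd
```

The extracted string is what you would pass wherever the client expects a dataset ID.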
notebook-ref
Select the settings wheel to configure notebook settings.

{% if section(page, "dr-notebooks") %}
![](images/nb-4.png)
{% else %}
![](images/wb-nb-4.png)
{% endif %}

These settings allow you to control the display of:

* Line numbers in code blocks.
* Cell titles and outputs.
* Cell output scrolling.

Select the actions menu ![](images/icon-menu.png) to access additional notebook actions:

* Download the notebook as an `.ipynb` file.
* Duplicate the notebook.
* Delete the notebook. Note that deleting a notebook will also delete its associated assets, such as the notebook's environment variables and any revision history.

### Notebook metadata {: #notebook-metadata }

Click the info icon ![](images/icon-info-circle.png) to edit the notebook's metadata:

{% if section(page, "dr-notebooks") %}
![](images/nb-3.png)
{% else %}
![](images/wb-nb-3.png)
{% endif %}

* **Tags**: (Optional) Enter one or more descriptive tags that you can use to filter notebooks when viewing them in the dashboard.
* **Description**: (Optional) Enter a description of the notebook.
settings-nb
{% if section(page, "dr-notebooks") %}
This section outlines the management capabilities available for DataRobot Notebooks. Access the dashboard that hosts the currently available notebooks and all functionality from the top-level **Notebooks** tab.

![](images/nb-28.png)

Topic | Describes...
----- | ------
[Add notebooks](dr-create-nb) | How to create new notebooks, import existing notebooks, and export notebooks as `.ipynb` files.
[Notebook settings](dr-settings-nb) | The settings available for notebooks.
[Notebook versioning](dr-revise-nb) | How notebooks are versioned and how to load the revision history of a notebook.
{% else %}
This section outlines the management capabilities available for DataRobot Notebooks. Select a Use Case and then click the **Notebooks** tab to access the dashboard that displays all notebooks for that Use Case.

![](images/wb-nb-28.png)

Topic | Describes...
----- | ------
[Add notebooks](wb-create-nb) | How to create new notebooks, import existing notebooks, and export notebooks as `.ipynb` files.
[Notebook settings](wb-settings-nb) | The settings available for notebooks.
[Notebook versioning](wb-revise-nb) | How notebooks are versioned and how to load the revision history of a notebook.
{% endif %}
manage-index
You can snapshot iterations of a notebook, allowing you to maintain older versions that you can revert back to in the future. These snapshots are called revisions (also known as checkpoints). Each saved revision contains the notebook's cells and their output in their state at the time of the snapshot.

### Revision types {: #revision-types }

DataRobot Notebooks support both automatic and manual revisions.

*Automatic revisions* are saved each time a notebook's active session is shut down (either from your own termination of the session or via an inactivity timeout). Automatic revisions are identified by their creation timestamp and include an "Autosaved" label.

![](images/nb-29.png)

You can create a *manual revision* at any time to save a checkpoint of a notebook. To do so, navigate to the **Revision history** tab in the sidebar.

{% if section(page, "dr-notebooks") %}
![](images/nb-19.png)
{% else %}
![](images/wb-nb-19.png)
{% endif %}

Click **Create new revision**. Provide a name for the revision in the dialog box. If no name is specified, the timestamp of the checkpoint creation is used. Once named, click **Create**.

{% if section(page, "dr-notebooks") %}
![](images/nb-20.png)
{% else %}
![](images/wb-nb-20.png)
{% endif %}

### Revision history {: #revision-history }

View a list of all revisions (manual and automatic) for a notebook from the **Revision history** tab in the left panel. Click a revision entry to view a preview of that version of the notebook:

{% if section(page, "dr-notebooks") %}
![](images/nb-21.png)
{% else %}
![](images/wb-nb-21.png)
{% endif %}

You can restore your current notebook to a specific revision by clicking **Restore this revision** at the bottom of the preview. Access additional actions for managing revisions by clicking the menu icon on the right of each entry. Note that you can update the names of manual revisions, while autosaved revisions only have their timestamp as a name.
![](images/nb-22.png)

This menu gives you options to:

* **Restore revision**: Restore the current notebook to the state of the selected revision. After restoring to a given revision, saving will create new revisions at the top of the revision list.
* **Clone revision**: Create a copy of the revision as a new notebook.
* **Delete revision**: Permanently delete an entry from the notebook's revision history.
revise-nb
This page describes the coding intelligence features included in the code cells of DataRobot Notebooks.

### Code snippets {: #code-snippets }

DataRobot provides a set of pre-defined code snippets, inserted as cells in a notebook, for commonly used methods in the DataRobot API as well as other data science tasks. These include connecting to external data sources, deploying a model, creating a model factory, and more. Access code snippets by selecting the code icon in the sidebar.

![](images/nb-14.png)

### Docstrings reference {: #docstrings-reference }

When using a specific method or class, use the `Shift + Tab` keyboard shortcut directly from the code editor to query docstrings. Additional documentation appears as an overlay.

![](images/nb-13.png)

### Autocomplete {: #autocomplete }

{% if section(page, "dr-notebooks") %}
When [editing code](dr-action-nb#modal-editor), you can autocomplete lines using the **Tab** key. To activate autocompletion, you must first [configure and start the notebook environment](dr-env-nb).
{% else %}
When [editing code](wb-action-nb#modal-editor), you can autocomplete lines using the **Tab** key. To activate autocompletion, you must first [configure and start the notebook environment](wb-env-nb).
{% endif %}

### Syntax highlighting {: #syntax-highlighting }

When writing code in code cells, DataRobot highlights syntax for the language set in the notebook environment. Below is a sample of syntax highlighting for Python:

![](images/nb-27.png)
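As a plain-Python aside, the documentation surfaced by the `Shift + Tab` overlay is the same docstring and signature information Python attaches to objects, so you can always fall back to inspecting them directly in a code cell:

```python
import inspect

# The overlay shows an object's docstring; getdoc returns the same text.
print(inspect.getdoc(sorted).splitlines()[0])

# Parameter hints come from the callable's signature.
print(inspect.signature(enumerate))
```

Nothing here is DataRobot-specific; it works in any Python environment.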
code-int
{% if section(page, "dr-notebooks") %}
To add notebooks to DataRobot, navigate to the **Notebooks** page. This brings you to the notebook dashboard, which hosts all notebooks currently available.

![](images/nb-28.png)

### Create notebooks {: #create-notebooks }

You can create and manage notebooks across DataRobot. To get started, create a notebook.

1. Click **Notebooks > Create new notebook**.

    ![](images/nb-1.png)
{% endif %}
{% if section(page, "workbench") %}
To add notebooks to a Use Case via Workbench, first select and navigate to the Use Case.

### Create notebooks {: #create-notebooks }

You can create and manage notebooks across DataRobot. To get started, create a notebook.

1. Click **Add new > Add notebook**.

    ![](images/wb-nb-1.png)
{% endif %}

2. When you create a notebook, you are brought to the notebook editing interface.

{% if section(page, "dr-notebooks") %}
![](images/nb-2.png)
{% endif %}
{% if section(page, "workbench") %}
![](images/wb-nb-2.png)
{% endif %}

| | Element | Description |
|---|---|---|
| ![](images/icon-1.png) | Notebook name | The name of the notebook displayed at the top of the page and in the dashboard. Click the pencil icon ![](images/icon-pencil.png) to rename it. |
| ![](images/icon-2.png) | Sidebar | Hosts options for configuring notebook settings as well as additional notebook capabilities. |
| ![](images/icon-3.png) | Cell | The body of the notebook, where you write and edit code with full syntax highlighting (code cells) or include explanatory and procedural text (Markdown cells). |
| ![](images/icon-4.png) | Menu bar | Provides the notebook environment status, display options, notebook management options, and a button to run the notebook. |

{% if section(page, "dr-notebooks") %}
Reference the DataRobot documentation for more information about [notebook settings](dr-settings-nb) and [cell actions](dr-action-nb).
{% else %}
Reference the DataRobot documentation for more information about [notebook settings](wb-settings-nb) and [cell actions](wb-action-nb).
{% endif %}

### Import notebooks {: #import-notebooks }

{% if section(page, "dr-notebooks") %}
To import existing Jupyter notebooks (in `.ipynb` format) into DataRobot, click **Notebooks > Upload notebook**.

![](images/nb-5.png)
{% endif %}
{% if section(page, "workbench") %}
To import existing Jupyter notebooks (in `.ipynb` format) into DataRobot, click **Add new > Upload notebook**.

![](images/wb-nb-5.png)
{% endif %}

Upload the notebook by providing a URL or selecting a local file. If using a URL, note that it must be a public URL that is not blocked by authentication. Once uploaded, the notebook appears in the dashboard alongside those you have already created.

### Export notebooks {: #export-notebooks }

To export a notebook, download it as an `.ipynb` file. Select the actions menu ![](images/icon-menu.png) inside the notebook and then click **Download**.

{% if section(page, "dr-notebooks") %}
![](images/nb-25.png)
{% endif %}
{% if section(page, "workbench") %}
![](images/wb-nb-25.png)
{% endif %}
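An `.ipynb` file is plain JSON in Jupyter's nbformat. If an upload is rejected, a quick sanity check is confirming the file parses as a notebook; a minimal v4 skeleton looks roughly like this:

```python
import json

# Minimal nbformat v4 notebook: format metadata plus a list of cells.
notebook = {
    "nbformat": 4,
    "nbformat_minor": 5,
    "metadata": {},
    "cells": [
        {
            "cell_type": "code",
            "metadata": {},
            "execution_count": None,
            "outputs": [],
            "source": ["print('hello')\n"],
        }
    ],
}

# Round-trip through JSON, as an uploader or validator would.
serialized = json.dumps(notebook, indent=1)
print(json.loads(serialized)["cells"][0]["cell_type"])  # -> code
```

Real notebooks carry more metadata (kernel spec, language info), but any file missing the `nbformat` and `cells` keys is not a valid notebook.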
create-nb
You can perform a variety of actions with each cell in a notebook, including most of the cell actions supported in classic Jupyter notebooks. To see the set of available cell actions in the notebook editor, click the menu icon in the upper-right corner of the cell.

{% if section(page, "dr-notebooks") %}
![](images/nb-10.png)
{% else %}
![](images/wb-nb-10.png)
{% endif %}

## Modal editor {: #modal-editor }

DataRobot notebooks include a modal editor that changes keyboard shortcuts and cell actions based on which mode the notebook is in: [edit mode](#edit-mode) or [command mode](#command-mode).

### Edit mode {: #edit-mode }

Enter edit mode by clicking the text editor portion of a cell. When a cell is in edit mode, you can enter text into the cell. This mode is indicated by a green border around the selected cell and a text cursor inside of it:

{% if section(page, "dr-notebooks") %}
![](images/nb-30.png)
{% else %}
![](images/wb-nb-30.png)
{% endif %}

### Command mode {: #command-mode }

When you are in command mode, you can execute notebook-wide and cell-wide actions; however, you cannot enter text into individual cells. Enter command mode by clicking the empty space outside of a DataRobot notebook's cells (where the pointer is in the following screenshot), or on the header or footer portion of the cell. This mode is indicated by a blue border around a cell:

{% if section(page, "dr-notebooks") %}
![](images/nb-31.png)
{% else %}
![](images/wb-nb-31.png)
{% endif %}

When a cell is currently selected in edit mode, you can switch to command mode via the keyboard shortcut ++esc++.

## Keyboard shortcuts {: #keyboard-shortcuts }

Some cell actions have corresponding keyboard shortcuts. It is important to note that *there are two different sets of keyboard shortcuts*, one corresponding to edit mode and another set for command mode.
Edit mode primarily contains shortcuts for editing text, while command mode shortcuts control actions on the cell itself (and not the cell's text contents). The majority of the DataRobot Notebook keyboard shortcuts align with the classic Jupyter shortcuts you may already be familiar with.

The tables below include a summary of all available cell actions. To view the shortcuts in the notebook editor, select the keyboard icon from the sidebar.

=== "Mac keyboard shortcuts"

    Cell action | Keyboard shortcut | Description |
    ----------- | ----------------- | ---------- |
    **Command mode** | :~~: | :~~: |
    Switch to edit mode | ++enter++ | Exits command mode and uses edit mode shortcuts. |
    Run selected cells | ++shift+enter++ | Runs the currently selected cell. |
    Run selected cells without advancing | ++cmd+enter++ | Runs the selected cell without progressing to the following cell. |
    Insert code cell above | ++a++ | Inserts a code cell above the currently selected cell. |
    Insert code cell below | ++b++ | Inserts a code cell below the currently selected cell. |
    Change cell to code | ++y++ | Changes a Markdown cell into a code cell. |
    Change cell to Markdown | ++m++ | Changes a code cell into a Markdown cell. |
    Stop execution (Interrupt kernel) | ++cmd+shift+c++ | Terminates the cell currently running in a notebook, as well as all queued cells. |
    Select cell above | ++k+up++ | Selects the cell above the one currently selected. |
    Select cell below | ++j+down++ | Selects the cell below the one currently selected. |
    Delete cell(s) | ++x++ | Deletes the currently selected cell. |
    Copy selected cell(s) | ++c++ | Copies the selected cell(s) to your clipboard. |
    Paste cell(s) below | ++v++ | Pastes copied cells below the currently selected cell. |
    Paste cell(s) above | ++shift+v++ | Pastes copied cells above the currently selected cell. |
    Undo delete | ++d+d++ | Undoes the previous cell deletion action. |
    Merge cells | ++shift+m++ | Merges the selected cell with the cell above or below. |
    Toggle line numbers | ++l++ | Shows or hides the line numbers for the selected code cell. |
    Toggle all line numbers | ++shift+l++ | Shows or hides the line numbers for all cells. |
    Toggle cell output | ++o++ | Shows or hides the output of a code cell. |
    Toggle all outputs | ++shift+o++ | Shows or hides the output of all code cells. |
    Toggle code display | ++e++ | Shows or hides the code input for the currently selected cell. If hidden, only the cell output is displayed. |
    Toggle all code displays | ++shift+e++ | Shows or hides the code input for all cells. |
    **Edit mode** | :~~: | :~~: |
    Switch to command mode | ++esc++ | Exits edit mode and uses command mode shortcuts. |
    Autocomplete code | ++tab++ | Triggers code completion suggestions. |
    Select all text | ++cmd+a++ | Selects all text in a cell. |
    Undo text | ++cmd+z++ | Undoes the previously entered text. |
    Split cell | ++cmd+shift+minus++ | Splits the contents of a cell into two separate cells at the cursor location. |
    Run selected cells | ++shift+enter++ | Runs the currently selected cell. |
    Run selected cells without advancing | ++cmd+enter++ | Runs the selected cell without progressing to the following cell. |
    Show inline documentation | ++shift+tab++ | For code cells, displays the docstrings tooltip for function documentation and parameters. |
    Comment | N/A | Inserts a comment on the selected line. |
    Move cursor up | ++up++ | :~~: |
    Move cursor down | ++down++ | :~~: |
    Move one word right | ++opt+right++ | :~~: |
    Move one word left | ++opt+left++ | :~~: |
    Delete a line | ++cmd+d++ | :~~: |

=== "Windows keyboard shortcuts"

    Cell action | Keyboard shortcut | Description |
    ----------- | ----------------- | ---------- |
    **Command mode** | :~~: | :~~: |
    Switch to edit mode | ++enter++ | Exits command mode and uses edit mode shortcuts. |
    Run selected cells | ++shift+enter++ | Runs the currently selected cell. |
    Run selected cells without advancing | ++ctrl+enter++ | Runs the selected cell without progressing to the following cell. |
    Insert code cell above | ++a++ | Inserts a code cell above the currently selected cell. |
    Insert code cell below | ++b++ | Inserts a code cell below the currently selected cell. |
    Change cell to code | ++y++ | Changes a Markdown cell into a code cell. |
    Change cell to Markdown | ++m++ | Changes a code cell into a Markdown cell. |
    Stop execution (Interrupt kernel) | ++ctrl+shift+c++ | Terminates the cell currently running in a notebook, as well as all queued cells. |
    Select cell above | ++k+up++ | Selects the cell above the one currently selected. |
    Select cell below | ++j+down++ | Selects the cell below the one currently selected. |
    Delete cell(s) | ++x++ | Deletes the currently selected cell. |
    Copy selected cell(s) | ++c++ | Copies the selected cell(s) to your clipboard. |
    Paste cell(s) below | ++v++ | Pastes copied cells below the currently selected cell. |
    Paste cell(s) above | ++shift+v++ | Pastes copied cells above the currently selected cell. |
    Undo delete | ++d+d++ | Undoes the previous cell deletion action. |
    Merge cells | ++shift+m++ | Merges the selected cell with the cell above or below. |
    Toggle line numbers | ++l++ | Shows or hides the line numbers for the selected code cell. |
    Toggle all line numbers | ++shift+l++ | Shows or hides the line numbers for all cells. |
    Toggle cell output | ++o++ | Shows or hides the output of a code cell. |
    Toggle all outputs | ++shift+o++ | Shows or hides the output of all code cells. |
    Toggle code display | ++e++ | Shows or hides the code input for the currently selected cell. If hidden, only the cell output is displayed. |
    Toggle all code displays | ++shift+e++ | Shows or hides the code input for all cells. |
    **Edit mode** | :~~: | :~~: |
    Switch to command mode | ++esc++ | Exits edit mode and uses command mode shortcuts. |
    Autocomplete code | ++tab++ | Triggers code completion suggestions. |
    Select all text | ++ctrl+a++ | Selects all text in a cell. |
    Undo text | ++ctrl+z++ | Undoes the previously entered text. |
    Split cell | ++ctrl+shift+minus++ | Splits the contents of a cell into two separate cells at the cursor location. |
    Run selected cells | ++shift+enter++ | Runs the currently selected cell. |
    Run selected cells without advancing | ++ctrl+enter++ | Runs the selected cell without progressing to the following cell. |
    Show inline documentation | ++shift+tab++ | For code cells, displays the docstrings tooltip for function documentation and parameters. |
    Comment | N/A | Inserts a comment on the selected line. |
    Move cursor up | ++up++ | :~~: |
    Move cursor down | ++down++ | :~~: |
    Move one word right | ++ctrl+right++ | :~~: |
    Move one word left | ++ctrl+left++ | :~~: |
    Delete a line | ++ctrl+d++ | :~~: |

### Additional actions {: #additional-actions }

In addition to the actions outlined above, DataRobot offers some additional cell control capabilities.

Cell action | Description
----------- | -----------
Move cell | Moves a cell to a new location in the notebook via the drag icon in the top center of the cell. |
Run cells above | Runs all the cells above the currently selected cell (exclusive). |
Run cells below | Runs the currently selected cell and all cells below it (inclusive). |
Disable run | Disables a cell from executing after running. |
Move up | Moves the currently selected cell up one cell. |
Move down | Moves the currently selected cell down one cell. |
Duplicate cell | Duplicates the currently selected cell. |
Insert markdown cell above | Inserts a Markdown cell above the currently selected cell. |
Insert markdown cell below | Inserts a Markdown cell below the currently selected cell. |
Clear output | Clears the output of the selected code cell. |
Merge cell above or below | Merges the selected cell with the cell above or below. |

### Multi-cell selection {: #multi-cell-selection }

Similar to Jupyter, you can select multiple adjacent cells at once to perform bulk actions on those cells. To select multiple cells, either:

1. Hold down the Shift key and use your mouse to select the cells.
2. Hold down Cmd-Shift and use the Up/Down arrows to select the cells.
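The edit/command modal behavior described earlier on this page can be pictured as a tiny two-state machine (illustrative Python only, not DataRobot code; event names are invented for the sketch):

```python
# The two editor modes and the events that switch between them.
TRANSITIONS = {
    ("command", "Enter"): "edit",          # Enter drops into the cell's text
    ("command", "click_editor"): "edit",   # so does clicking the editor area
    ("edit", "Esc"): "command",            # Esc returns to command mode
    ("edit", "click_outside"): "command",  # so does clicking outside the cell
}

def next_mode(mode: str, event: str) -> str:
    """Return the mode after an event; unknown events leave the mode unchanged."""
    return TRANSITIONS.get((mode, event), mode)

mode = "command"
for event in ["Enter", "typing", "Esc"]:
    mode = next_mode(mode, event)
print(mode)  # -> command
```

The point of the sketch: while a cell is in edit mode, keystrokes become text; only after Esc do single-key shortcuts like ++a++ or ++x++ act on cells.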
action-nb
---
title: Self-Managed AI Platform releases
description: An archive of release announcements published for DataRobot's Self-Managed AI Platform (on-premise and VPC).
---

# Self-Managed AI Platform releases {: #self-managed-ai-platform-releases }

View details, by release version, of AutoML, time series, MLOps, and Data Prep feature announcements, enhancements, and fixed and known issues. These notes were provided for DataRobot's Self-Managed AI Platform deployment options.

Version | Release date
------- | -------------
[DataRobot 9.0](v9.0/index) | _March 29, 2023_
[DataRobot 8.0](v8.0/index) | _March 14, 2022_
[DataRobot 7.3](v7.3/index) | _December 13, 2021_
[DataRobot 7.2](v7.2/index) | _September 13, 2021_
[DataRobot 7.1](v7.1/index) | _June 14, 2021_
[DataRobot 7.0](v7.0/index) | _March 15, 2021_
index
---
title: Public preview features
description: Read preliminary documentation for data, modeling, time series, and MLOps features currently in the DataRobot public preview pipeline.
---

# Public preview features {: #public-preview-features }

{% include 'includes/pub-preview-notice-include.md' %}

## Available public preview documentation {: #available-public-preview-documentation }

Public preview documentation is divided into five sections:

Topic | Describes...
----- | ------
[Data](data-preview/index) | Public preview features that ingest, transform, and store your data.
[AutoML](automl-preview/index) | Public preview features that build and analyze models.
[Time series (AutoTS)](autots-preview/index) | Public preview features that support DataRobot's time-aware functionality to forecast future values.
[MLOps](mlops-preview/index) | Public preview features that deploy, monitor, manage, and govern models in production environments.
[Administrator](admin-preview/index) | Public preview features that support platform and user account management.
index
---
title: February 2023
description: Read release note announcements for DataRobot's generally available and public preview features released in February 2023.
---

# February 2023 {: #february-2023 }

_February 22, 2023_

This page provides announcements of newly released features available in DataRobot's SaaS single- and multi-tenant AI Platform, with links to additional resources. With the February deployment, DataRobot's AI Platform delivered the following new GA and Public Preview features. From the release center you can also access:

* [AI Platform announcement history](cloud-history/index)
* [Public preview features](public-preview/index)
* [Self-Managed AI Platform release notes](archive-release-notes/index)

## February release {: #february-release }

The following table lists each new feature. See the [deployment history](cloud-history/index) for past feature announcements and also the [**deprecation notices**](#deprecation-announcements), below.

??? abstract "Features grouped by capability"

    Name | GA | Public Preview
    ---------- | ---- | ---
    **Data** | :~~: | :~~:
    [Connect to Snowflake using external OAuth](#connect-to-snowflake-using-external-oauth) | ✔ | |
    **Modeling** | :~~: | :~~:
    [Quick Autopilot improvements now available for time series](#quick-autopilot-improvements-now-available-for-time-series) | ✔ | |
    [Retraining Combined Models now faster](#retraining-combined-models-now-faster) | ✔ | |
    [Add custom logos to No-Code AI Apps](#add-custom-logos-to-no-code-ai-apps) | ✔ | |
    [Multiclass support in No-Code AI Apps](#multiclass-support-in-no-code-ai-apps) | ✔ | |
    [Sliced insights show a subpopulation of model data](#sliced-insights-show-a-subpopulation-of-model-data) | | ✔ |
    [Period Accuracy allows focus on specific periods in training data](#period-accuracy-allows-focus-on-specific-periods-in-training-data) | | ✔ |
    **Predictions and MLOps** | :~~: | :~~:
    [Python and Java Scoring Code snippets](#python-and-java-scoring-code-snippets) | ✔ | |
    [Export deployment data](#export-deployment-data) | ✔ | |
    [Create custom metrics](#create-custom-metrics) | ✔ | |
    [Drill down on the Data Drift tab](#drill-down-on-the-data-drift-tab) | ✔ | |
    [Monitor deployment data processing](#monitor-deployment-data-processing) | ✔ | |
    [Deployment creation workflow redesign](#deployment-creation-workflow-redesign) | ✔ | |
    [View Service Health and Accuracy history](#view-service-health-and-accuracy-history) | | ✔ |
    [Create monitoring job definitions](#create-monitoring-job-definitions) | | ✔ |
    [Automate deployment and replacement of Scoring Code in Snowflake](#automate-deployment-and-replacement-of-scoring-code-in-snowflake) | | ✔ |
    [Define runtime parameters for custom models](#define-runtime-parameters-for-custom-models) | | ✔ |

### GA {: #ga }

#### Quick Autopilot improvements now available for time series {: #quick-autopilot-improvements-now-available-for-time-series }

With this month's release, [Quick](multistep-ta) Autopilot has been streamlined for time series projects, speeding experimentation. In the new version of Quick, to maximize runtime efficiency, DataRobot no longer automatically generates and fits the DR Reduced Features list, as fitting requires retraining models. Models are still trained at the maximum sample size for each backtest, defined by the project's date/time partitioning. The specific number of models run varies by project and target type. See the documentation on the [model recommendation process](model-rec-process) for alternate methods to build a reduced feature list.

#### Retraining Combined Models now faster {: #retraining-combined-models-now-faster }

Now generally available, time series segmented models support retraining on the same feature list and blueprint as the original model without the need to rerun Autopilot or feature reduction. Previously, rerunning Autopilot was the only way to retrain this model type.
This new support creates parity between retraining a non-segmented time series model and retraining a segmented model. Because the improvement ensures that retraining leverages the feature reduction computations from the original, only newly introduced features need to go through that process, saving time and adding flexibility. Note that retraining retrains the champion of a segment; it does not rerun the project and select a new champion.

#### Python and Java Scoring Code snippets {: #python-and-java-scoring-code-snippets }

Now generally available, DataRobot allows you to [use Scoring Code via Python and Java](sc-download-leaderboard). Although the underlying Scoring Code is based on Java, DataRobot now provides the [DataRobot Prediction Library](https://pypi.org/project/datarobot-predict/){ target=_blank } to make predictions using various prediction methods supported by DataRobot via a Python API. The library provides a common interface for making predictions, making it easy to swap out any underlying implementation. Access Scoring Code for Python and Java from a model in the Leaderboard or from a deployed model that supports Scoring Code.

![](images/scoring-code-python-rn.png)

#### Export deployment data {: #export-deployment-data }

Now generally available, on a deployment's Data Export tab, you can export stored training data, prediction data, and actuals to compute and monitor custom business or performance metrics on the [Custom Metrics tab](custom-metrics) or outside DataRobot. You can export the available deployment data for a specified model and time range. To export deployment data, make sure your deployment stores prediction data, generate data for the required time range, and then view or download that data.

![](images/dep-data-export.png)

!!! note
    The initial release of the deployment data export feature enforces some row count limitations. For details, review the [considerations in the feature documentation](data-export#prediction-data-and-actuals-considerations).

For more information, see the [Data Export tab](data-export) documentation.

#### Create custom metrics {: #create-custom-metrics }

Now generally available, on a deployment's Custom Metrics tab, you can use the data you collect from the [Data Export tab](data-export) (or data calculated through other custom metrics) to compute and monitor up to 25 custom business or performance metrics. After you add a metric and upload data, a configurable dashboard visualizes a metric's change over time and allows you to monitor and export that information. This feature enables you to implement your organization's specialized metrics to expand on the insights provided by DataRobot's built-in [Service Health](service-health), [Data Drift](data-drift), and [Accuracy](deploy-accuracy) metrics.

![](images/upload-custom-metric-data.png)

!!! note
    The initial release of the custom metrics feature enforces some row count and file size limitations. For details, review the [considerations in the feature documentation](custom-metrics).

For more information, see the [Custom Metrics tab](custom-metrics) documentation.

#### Drill down on the Data Drift tab {: #drill-down-on-the-data-drift-tab }

Now generally available on the [Data Drift tab](data-drift), the new Drill Down visualization tracks the difference in distribution over time between the training dataset of the deployed model and the datasets used to generate predictions in production. The drift away from the baseline established with the training dataset is measured using the Population Stability Index (PSI). As a model continues to make predictions on new data, the change in the drift status over time is visualized as a heat map for each tracked feature.
This heat map can help you identify data drift and compare drift across features in a deployment to identify correlated drift trends:

![](images/rn-drill-down-heat-map.png)

In addition, you can select one or more features from the heat map to view a Feature Drift Comparison chart, comparing the change in a feature's data distribution between a reference time period and a comparison time period to visualize drift. This information helps you identify the cause of data drift in your deployed model, including data quality issues, changes in feature composition, or changes in the context of the target variable:

![](images/rn-drill-down-drift-compare.png)

For more information, see the [Drill down on the Data Drift tab](data-drift#drill-down-on-the-data-drift-tab) documentation.

#### Monitor deployment data processing {: #monitor-deployment-data-processing }

Now generally available, the Usage tab reports on prediction data processing for the [Data Drift](data-drift) and [Accuracy](deploy-accuracy) tabs. Monitoring a deployed model's data drift and accuracy is a critical task to ensure that the model remains effective; however, it requires processing large amounts of prediction data and can be subject to delays or rate limiting. The information on the Usage tab can help your organization identify these data processing issues.

The Prediction Tracking chart, a bar chart of the prediction processing status over the last 24 hours or 7 days, tracks the number of processed, rate-limited, and missing association ID prediction rows:

![](images/rn-prediction-usage-tracking.png)

On the right side of the page are the processing delays for Predictions Processing (Champion) and Actuals Processing (the delay in actuals processing applies to *all* models in the deployment):

![](images/predictions-processing-delay.png)

For more information, see the [Usage tab](deploy-usage) documentation.
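The Population Stability Index used by the Data Drift features above has a simple closed form: the sum over bins of `(expected% - actual%) * ln(expected% / actual%)`, where the expected proportions come from the baseline (training) distribution and the actual proportions from production data. A rough sketch of the computation (the equal-width binning here is illustrative, not DataRobot's exact implementation):

```python
import math

def psi(expected: list[float], actual: list[float], bins: int = 10) -> float:
    """Population Stability Index between two samples of a numeric feature."""
    lo, hi = min(expected), max(expected)

    def proportions(values):
        counts = [0] * bins
        for v in values:
            # Equal-width bins over the baseline range; clamp out-of-range values.
            i = int((v - lo) / (hi - lo) * bins) if hi > lo else 0
            counts[max(0, min(i, bins - 1))] += 1
        # Small floor avoids log(0) for empty bins.
        return [max(c / len(values), 1e-6) for c in counts]

    p, q = proportions(expected), proportions(actual)
    return sum((pi - qi) * math.log(pi / qi) for pi, qi in zip(p, q))

baseline = [float(i % 10) for i in range(1000)]       # stable distribution
shifted = [float(i % 10) + 3.0 for i in range(1000)]  # drifted distribution
print(round(psi(baseline, baseline), 6))  # 0.0, i.e. no drift
print(psi(baseline, shifted) > 0.25)      # True; PSI above ~0.25 is commonly read as major drift
```

The 0.1/0.25 rule-of-thumb thresholds are a common industry convention for "moderate" and "major" drift, not a DataRobot-specific setting.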
#### Deployment creation workflow redesign {: #deployment-creation-workflow-redesign } Now generally available, the redesigned deployment creation workflow provides a better-organized and more intuitive interface. Regardless of where you create a new deployment (the Leaderboard, the Model Registry, or the Deployments inventory), you are directed to this new workflow. The new design clearly outlines the capabilities of your current deployment based on the data provided, grouping the settings and capabilities logically and providing immediate confirmation when you enable a capability, or guidance when you’re missing required fields or settings. A new sidebar provides details about the model being used to make predictions for your deployment, in addition to information about the deployment review policy, deployment billing details (depending on your organization settings), and a link to the deployment information documentation. ![](images/deploy-create-settings-1.png) For more information, see the [Configure a deployment](add-deploy-info) documentation. #### Connect to Snowflake using external OAuth {: #connect-to-snowflake-using-external-oauth } Now generally available, Snowflake users can set up a Snowflake data connection in DataRobot using an external identity provider (IdP)—either Okta or Azure Active Directory—for user authentication through OAuth single sign-on (SSO). For more information, see the [Snowflake External OAuth](dc-snowflake#snowflake-external-oauth) documentation. #### Add custom logos to No-Code AI Apps {: #add-custom-logos-to-no-code-ai-apps } Now generally available, you can add a custom logo to your No-Code AI Apps, allowing you to keep the branding of the AI App consistent with that of your company before sharing it either externally or internally. ![](images/app-logo-5.png) To upload a new logo, open the application you want to edit and click **Build**.
Under **Settings > Configuration Settings**, click **Browse** and select a new image, or drag-and-drop an image into the **New logo** field. ![](images/app-logo-3.png) For more information, see the [No-Code AI App](app-settings#add-a-custom-logo) documentation. #### Multiclass support in No-Code AI Apps {: #multiclass-support-in-no-code-ai-apps } No-Code AI Apps now support multiclass classification deployments across all three template types—Predictor, Optimizer, and What-If. This gives users the ability to create applications that solve a broader range of business problems. ### Public Preview {: #public-preview } #### Sliced insights show a subpopulation of model data {: #sliced-insights-show-a-subpopulation-of-model-data } Now available as public preview, slices allow you to define filters based on categorical features, numeric features, or both. Viewing and comparing insights based on segments of a project’s data helps to understand how models perform on different subpopulations. You can also compare a slice against the “global” slice (all training data), depending on the insight. Configuring a slice allows you to choose a feature and set operators and values to narrow the data returned. ![](images/sliced-insights-7.png) Sliced insights are available for Lift Chart, ROC Curve, Residual, and Feature Impact visualizations. **Required feature flag**: Enable Sliced Insights Public preview [documentation](sliced-insights). #### Period Accuracy allows focus on specific periods in training data {: #period-accuracy-allows-focus-on-specific-periods-in-training-data } Available as public preview for OTV and time series projects, the Period Accuracy insight lets you define periods within your dataset and then compare their metric scores against the metric score of the model as a whole. Periods are defined in a separate CSV file that identifies rows to group based on the project’s date/time feature.
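To make the general shape of such a file concrete, the sketch below generates a period-definition file with Python's standard library. The column names (`date`, `period`) and layout are hypothetical illustrations, not DataRobot's exact schema; consult the Period Accuracy documentation for the expected format:

```python
import csv
from datetime import date, timedelta

# Hypothetical schema: one row per date, labeled with the period it belongs to.
# The first column should correspond to the project's date/time feature.
start = date(2023, 11, 1)
with open("periods.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["date", "period"])  # illustrative column names
    for offset in range(61):  # November and December 2023
        day = start + timedelta(days=offset)
        label = "holiday_season" if day >= date(2023, 11, 23) else "regular"
        writer.writerow([day.isoformat(), label])
```

Each period label groups the rows whose date/time values fall within it, so the insight can score, for example, the holiday season separately from the rest of the data.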
![](images/period-accuracy-2.png) Once uploaded, and with the insight calculated, DataRobot provides a table of period-based results and an “over time” histogram for each period. ![](images/period-accuracy-1.png) **Required feature flag**: Period Accuracy Insight Public preview [documentation](ts-period-accuracy). #### View Service Health and Accuracy history {: #view-service-health-and-accuracy-history } Now available as a public preview feature, when analyzing a deployment's [Service Health](service-health) and [Accuracy](deploy-accuracy), you can view the History tab, providing critical information about the performance of current and previously deployed models. This tab improves the usability of service health and accuracy analysis, allowing you to view up to five models in one place and on the same scale, making it easier to directly compare model performance. === "Service Health history" On a deployment's [**Service Health > History**](pp-deploy-history#service-health-history) tab, you can access visualizations representing the service health history of up to five of the most recently deployed models, including the currently deployed model. This history is available for each metric tracked in a model's service health, helping you identify bottlenecks and assess capacity, which is critical to proper provisioning. ![](images/service-health-history-details.png) === "Accuracy history" On a deployment's [**Accuracy > History**](pp-deploy-history#accuracy-history) tab, you can access visualizations representing the accuracy history of up to five of the most recently deployed models, including the currently deployed model, allowing you to compare their accuracy directly. These accuracy insights are rendered based on the problem type and its associated optimization metrics. ![](images/accuracy-history-details.png) **Required feature flag**: Enable Deployment History Public preview [documentation](pp-deploy-history).
#### Create monitoring job definitions {: #create-monitoring-job-definitions } Now available as a public preview feature, monitoring job definitions enable DataRobot to monitor deployments that run and store feature data and predictions outside of DataRobot, integrating deployments more closely with external data sources. For example, you can create a monitoring job to connect to Snowflake, fetch raw data from the relevant Snowflake tables, and send the data to DataRobot for monitoring purposes. This integration extends the functionality of the existing [Prediction API](predictions) routes for `batchPredictionJobDefinitions` and `batchPredictions`, adding the `batch_job_type: monitoring` property, which allows you to create monitoring jobs. In addition to the Prediction API, you can create monitoring job definitions through the DataRobot UI. You can then view and manage monitoring job definitions as you would any other job definition. **Required feature flag**: Monitoring Job Definitions For more information, see the [Prediction monitoring jobs](pred-monitoring-jobs/index) documentation. #### Automate deployment and replacement of Scoring Code in Snowflake {: #automate-deployment-and-replacement-of-scoring-code-in-snowflake } Now available as a public preview feature, you can create a DataRobot-managed Snowflake prediction environment to deploy DataRobot Scoring Code in Snowflake.
With the [Managed by DataRobot option](pp-snowflake-sc-deploy-replace#create-a-snowflake-prediction-environment) enabled, the model deployed externally to Snowflake has access to MLOps management, including automatic Scoring Code replacement: ![](images/pred-env-settings.png) Once you've created a Snowflake prediction environment, you can [deploy a Scoring Code-enabled model to that environment from the Model Registry](pp-snowflake-sc-deploy-replace#deploy-a-model-to-the-snowflake-prediction-environment): ![](images/sf-deploy-target.png) **Required feature flag**: Enable the Automated Deployment and Replacement of Scoring Code in Snowflake Public preview [documentation](pp-snowflake-sc-deploy-replace). #### Define runtime parameters for custom models {: #define-runtime-parameters-for-custom-models } Now available as a public preview feature, you can add runtime parameters to a custom model through the model metadata, making your custom model code easier to reuse. To define runtime parameters, you can add the following `runtimeParameterDefinitions` in `model-metadata.yaml`: Key | Value ---------------|------ `fieldName` | The name of the runtime parameter. `type` | The data type the runtime parameter contains: `string` or `credentials`. `defaultValue` | (Optional) The default string value for the runtime parameter (the `credentials` type doesn't support default values). `description` | (Optional) A description of the purpose or contents of the runtime parameter. When you add a `model-metadata.yaml` file with `runtimeParameterDefinitions` to DataRobot while creating a custom model, the **Runtime Parameters** section appears on the **Assemble** tab for that custom model: ![](images/assemble-tab-runtime-params.png) **Required feature flag**: Enable the Injection of Runtime Parameters for Custom Models Public preview [documentation](pp-cus-model-runtime-params).
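Based on the key table above, a `runtimeParameterDefinitions` fragment of `model-metadata.yaml` might look like the following sketch (the parameter names, default value, and descriptions are illustrative assumptions, not canonical DataRobot examples):

```yaml
runtimeParameterDefinitions:
  # Illustrative string parameter with a fallback value
  - fieldName: SCORING_ENDPOINT
    type: string
    defaultValue: https://example.com/score
    description: Endpoint the model calls at prediction time.
  # Illustrative credentials parameter; defaultValue is not supported for this type
  - fieldName: WAREHOUSE_CREDENTIALS
    type: credentials
    description: Credentials used to fetch reference data.
```

When a file like this accompanies the custom model, the parameters it defines appear in the **Runtime Parameters** section of the **Assemble** tab.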
### Deprecation announcements {: #deprecation-announcements } #### DataRobot Prime model creation removed {: #datarobot-prime-model-creation-removed } With this deployment, the ability to create new DataRobot Prime models has been removed from the application. This does not affect existing Prime models or deployments. RuleFit models, which differ from Prime only in that they use raw data for their prediction target rather than predictions from a parent model, support Java/Python source code export.
february2023-announce
--- title: May 2023 description: Read release announcements for DataRobot's generally available and public preview features released in May 2023. --- # May 2023 {: #may-2023 } _May 24, 2023_ With the latest deployment, DataRobot's AI Platform delivered the new GA and Public Preview features listed below. From the release center you can also access: * [Monthly deployment announcement history](cloud-history/index) * [Public preview features](public-preview/index) * [Self-Managed AI Platform release notes](archive-release-notes/index) ### May release {: #may-release } The following table lists each new feature: ??? abstract "Features grouped by capability" Name | GA | Public Preview ---------- | ---- | --- **Data** | :~~: | :~~: [Data connection browsing improvements](#data-connection-browsing-improvements) | | ✔ | [Improvements to wrangling preview](#improvements-to-wrangling-preview) | | ✔ | **Modeling** | :~~: | :~~: [Lift Chart now available in Workbench](#lift-chart-now-available-in-workbench) |✔ | | [Upgrades to Pandas libraries](#upgrades-to-pandas-libraries) | ✔ | | [Date/time partitioning now available in Workbench](#datetime-partitioning-now-available-in-workbench) | | ✔ | [Backend date/time functionality simplification](#backend-datetime-functionality-simplification) | ✔ | | [Feature Effects now supports slices](#feature-effects-now-supports-slices) | | ✔ | **Notebooks** | :~~: | :~~: [Azure OpenAI Service integration for DataRobot Notebooks](#azure-openai-service-integration-for-datarobot-notebooks) | | ✔ | **Predictions and MLOps** | :~~: | :~~: [Automate deployment and replacement of Scoring Code in AzureML](#automate-deployment-and-replacement-of-scoring-code-in-azureml) | | ✔ | [MLOps reporting for unstructured models](#mlops-reporting-for-unstructured-models) | | ✔ | **API enhancements** | :~~: | :~~: [Python client v3.1.1](#python-client-version-311) | ✔ | | [Feature Fit removed from the API](#feature-fit-removed-from-the-api) | ✔ | | ### GA {: #ga }
#### Lift Chart now available in Workbench {: #lift-chart-now-available-in-workbench } With this deployment, the [Lift Chart](wb-experiment-evaluate#insights) has been added to the list of available insights in Workbench experiments. #### Upgrades to Pandas libraries {: #upgrades-to-pandas-libraries } The Pandas library has been upgraded from version 0.23.4 to 1.3.5. There are multiple updates and bug fixes since the previous version, summarized below: * The aggregated summation logic for XEMP Prediction Explanation insights was improved in Pandas, increasing calculation accuracy. * The floating-point precision change in Pandas slightly affects the precision of normalized SHAP impact values. However, the difference is minimal. For example, compare values before and after this version change: * Version 0.23.4: `saledate (Day of Month),0.7861313684206008` * Version 1.3.5: `saledate (Day of Month),0.786131368420601` * The logic of the Pandas resample API has been improved, yielding better accuracy for the start and end dates of Feature Over Time insight previews. #### Backend date/time functionality simplification {: #backend-datetime-functionality-simplification } With this release, the mechanisms that support date/time partitioning have been simplified to provide greater flexibility by relaxing certain guardrails and streamlining the backend logic. While there are no specific user-facing changes, you may notice: * When the default partitioning does not have enough rows, DataRobot automatically expands the validation duration (the portion of data leading up to the beginning of the training partition that is reserved for feature derivation). * DataRobot automatically disables holdout when there are insufficient rows to cover both validation and holdout. * DataRobot includes the forecast window when reserving data for feature derivation before the start of the training partition in all cases. Previously, this was only applied to multiseries or wide forecast windows.
### Public Preview {: #public-preview } #### Data connection browsing improvements {: #data-connection-browsing-improvements } This release introduces improvements to the [data connection browsing experience](wb-connect) in Workbench: * If a Snowflake database is not specified during configuration, you can browse and select a database after saving your configuration. Otherwise, you are brought directly to the schema list view. * DataRobot has reduced the time it takes to display results when browsing for databases, schemas, and tables in Snowflake. #### Improvements to wrangling preview {: #improvements-to-wrangling-preview } This release includes several improvements for data wrangling in Workbench: * You can now [reorder operations](wb-add-operation#reorder-operations) in your wrangling recipe. * If the addition of an operation results in an error, use the new **Undo** button to revert your changes. * The live preview now features infinite scroll for seamless browsing of up to 1000 columns. #### Date/time partitioning now available in Workbench {: #datetime-partitioning-now-available-in-workbench } With this deployment, [date/time partitioning](wb-experiment-create#modify-partitioning) becomes available if your experiment is eligible for time-aware modeling, as reported in the experiment summary. ![](images/wb-exp-16.png) Select **Date/time** as the partitioning method to expose the options for setting up backtests and other time-aware modeling options. As with non-time-aware experiments, you can train on new settings to add models to your Leaderboard. Additionally, the Accuracy Over Time and Stability visualizations are available for date/time experiments. **Required feature flag:** Enable Date/Time Partitioning (OTV) in Workbench #### Feature Effects now supports slices {: #feature-effects-now-supports-slices } Sliced insights provide the option to view a subpopulation of a model's data based on feature values—either raw or derived.
With this deployment, slices have been made available in the **Feature Effects** insight (joining the Lift Chart, ROC Curve, Residuals, and Feature Impact insights). **Required feature flag:** Enable Sliced Insights Public preview [documentation](sliced-insights). #### Azure OpenAI Service integration for DataRobot Notebooks {: #azure-openai-service-integration-for-datarobot-notebooks } Now available for public preview, you can power code development workflows in DataRobot Notebooks by applying OpenAI large language models to assist with code generation. With the Azure OpenAI Service integration in DataRobot Notebooks, you can leverage state-of-the-art generative models with Azure's enterprise-grade security and compliance capabilities. By selecting **Assist** in a DataRobot notebook, you can provide a prompt that the Code Assistant uses to generate code in a cell. ![](images/wb-nb-43.png) **Required feature flag:** Enable Notebooks OpenAI Integration Public preview [documentation](dr-openai-nb). #### Automate deployment and replacement of Scoring Code in AzureML {: #automate-deployment-and-replacement-of-scoring-code-in-azureml } Now available for public preview, you can create a DataRobot-managed AzureML prediction environment to deploy DataRobot Scoring Code in AzureML. With the [Managed by DataRobot option enabled](azureml-sc-deploy-replace#create-an-azure-prediction-environment), the model deployed externally to AzureML has access to MLOps management, including automatic Scoring Code replacement: ![](images/azure-pred-env-settings.png) Once you've created an AzureML prediction environment, you can [deploy a Scoring Code-enabled model to that environment from the Model Registry](azureml-sc-deploy-replace#deploy-a-model-to-the-azure-prediction-environment): ![](images/azure-deploy-target.png) **Required feature flag:** Enable the Automated Deployment and Replacement of Scoring Code in AzureML Public preview [documentation](azureml-sc-deploy-replace).
#### MLOps reporting for unstructured models {: #mlops-reporting-for-unstructured-models } Now available for public preview, you can report MLOps statistics for Python custom inference models [created in the Custom Model Workshop](custom-inf-model) with an Unstructured (Regression), Unstructured (Binary), or Unstructured (Multiclass) target type: ![](images/unstructured-model-reporting.png) With this feature enabled, when you [assemble an unstructured custom inference model](unstructured-custom-models), you can [use new unstructured model reporting methods in your Python code](mlops-unstructured-models#unstructured-custom-model-reporting-methods) to report deployment statistics and predictions data to MLOps. For an example of an unstructured Python custom model with MLOps reporting, see the [DataRobot User Models repository](https://github.com/datarobot/datarobot-user-models/tree/master/model_templates/python3_unstructured_with_mlops_reporting){ target=_blank }. **Required feature flag:** Enable MLOps Reporting from Unstructured Models Public preview [documentation](mlops-unstructured-models). ### API enhancements {: #api-enhancements } #### Python client version 3.1.1 {: #python-client-version-311 } Python client version 3.1.1 is now available and introduces the following configuration changes: * Removed the dependency on the [contextlib2](https://pypi.org/project/contextlib2/) package, as it is no longer needed on Python 3.7+. * Updated [typing-extensions](https://pypi.org/project/typing-extensions/) to allow versions from 4.3.0 up to, but not including, 5.0.0. #### Feature Fit removed from the API {: #feature-fit-removed-from-the-api } Feature Fit has been removed from DataRobot's API. DataRobot recommends using Feature Effects instead, as it provides the same output.
may2023-announce
--- title: April 2023 description: Read release note announcements for DataRobot's generally available and public preview features released in April 2023. --- # April 2023 {: #april-2023 } _April 22, 2023_ With the latest deployment, DataRobot's AI Platform delivered the following new GA and Public Preview features. From the release center you can also access: * [Monthly deployment announcement history](cloud-history/index) * [Public preview features](public-preview/index) * [Self-Managed AI Platform release notes](archive-release-notes/index) ### In the spotlight {: #in-the-spotlight } #### Time series clustering metrics and insights {: #time-series-clustering-metrics-and-insights } To help in comparing and evaluating clustering models, two new Public Preview optimization metrics are available via the UI and the API for time series clustering projects. Previously, [Silhouette scores](opt-metric#silhouette-score) were the only supported metric. The DTW (Dynamic Time Warping) Silhouette Score measures the average similarity of objects within a cluster and their DTW distance to other objects in the other clusters. (It’s an alternative to the Euclidean distance measure for time series.) The Calinski-Harabasz Score describes, for all clusters, the ratio of between-cluster dispersion to within-cluster dispersion. You can set these metrics when configuring DataRobot to discover clusters. Note that when using clustering in the API, you can enable additional insights and metrics as Public Preview. These additional metrics are automatically computed. However, DTW metrics are not automatically computed for datasets with a large number of series (over 500) due to a risk of out-of-memory errors. Although you can request to compute these metrics manually, they are prone to failure without a significant amount of memory available. ### April release {: #april-release } The following table lists each new feature.
See the [deployment history](cloud-history/index) for past feature announcements and also the [**deprecation notices**](#deprecation-announcements), below. ??? abstract "Features grouped by capability" Name | GA | Public Preview ---------- | ---- | --- **Data** | :~~: | :~~: [Fast Registration in the AI Catalog](#fast-registration-in-the-ai-catalog) | ✔ | [Workbench adds new operations to data wrangling capabilities](#workbench-adds-new-operations-to-data-wrangling-capabilities) | | ✔ [Snowflake key pair authentication](#snowflake-key-pair-authentication) | | ✔ [New driver version](#new-driver-version) | ✔ | **Modeling** | :~~: | :~~: [Time series clustering metrics](#time-series-clustering-metrics-and-insights)| | ✔ [Workbench expands validation/partitioning settings in experiment setup](#workbench-expands-validation-partitioning-settings-in-experiment-set-up)| | ✔ **Notebooks** | :~~: | :~~: [Integrated terminals](#integrated-notebook-terminals)| | ✔ [Built-in visualization charting](#built-in-visualization-charting)| | ✔ [DataRobot Notebooks now available for EU](#datarobot-notebooks-are-now-available-in-the-eu)| | ✔ **Predictions and MLOps** | :~~: | :~~: [Deployment settings redesign](#deployment-settings-redesign)| ✔ | ### GA {: #ga } #### Fast Registration in the AI Catalog {: #fast-registration-in-the-ai-catalog } Now generally available, you can quickly register large datasets in the AI Catalog by specifying the first N rows to be used for registration instead of the full dataset—giving you faster access to data to use for testing and Feature Discovery. In the **AI Catalog**, click **Add to catalog** and select your data source. Fast registration is only available when adding a dataset from a new data connection, an existing data connection, or a URL. ![](images/fast-reg-2.png) For more information, see [Configure Fast Registration](catalog#configure-fast-registration).
#### New driver version {: #new-driver-version } With this release, the following driver version has been updated: - Snowflake==3.13.28 See the complete list of [supported driver versions](data-sources/index) in DataRobot. #### Deployment settings redesign {: #deployment-settings-redesign } The new deployment settings workflow enhances the deployment configuration experience by providing the required options for each MLOps feature directly on the deployment tab for that feature. This new organization also provides improved tooltips and additional links to documentation to help you enable the functionality your deployment requires. The new workflow separates the categories of deployment configuration tasks into dedicated settings on the following tabs: * [Service Health](service-health-settings) * [Predictions](predictions-settings) * [Data Drift](data-drift-settings) * [Accuracy](accuracy-settings) * [Fairness](fairness-settings) * [Humility](humility-settings) * [Challengers](challengers-settings) * [Retraining](retraining-settings) The **Deployment > Settings** tab is now deprecated. During the deprecation period, a warning appears on the **Settings** tab to provide links to the new settings pages: ![](images/rn-deploy-settings.png) In addition, on each deployment tab with a **Settings** page, you can click the settings icon to access the required configuration options: ![](images/rn-deploy-settings-1.png) For more information, see the [Deployment settings](deployment-settings/index) documentation. ### Public Preview {: #public-preview } #### Workbench expands validation/partitioning settings in experiment setup {: #workbench-expands-validation-partitioning-settings-in-experiment-set-up } Workbench now supports the ability to [define](wb-experiment-create#modify-partitioning) the validation type when setting up an experiment.
With the addition of training-validation-holdout (TVH), users can experiment with building models on more data to maximize accuracy, without impacting run time. **Required feature flag:** No flag required #### Workbench adds new operations to data wrangling capabilities {: #workbench-adds-new-operations-to-data-wrangling-capabilities } With this release, three new operations have been added to DataRobot’s wrangling capabilities in Workbench: 1. **De-duplicate rows:** Automatically remove all duplicate rows from your dataset. 2. **Rename features:** Quickly change the name of one or more features in your dataset. 3. **Remove features:** Remove one or more features from your dataset. To access new and existing operations, register data from Snowflake to a Workbench Use Case and then click **Wrangle**. When you publish the recipe, the operations are then applied to the source data in Snowflake to materialize an output dataset. **Required feature flag:** No flag required See the Workbench public preview [documentation](wb-add-operation#add-operations). #### Snowflake key pair authentication {: #snowflake-key-pair-authentication } Now available for public preview, you can create a Snowflake data connection in DataRobot Classic and Workbench using the key pair authentication method&mdash;a Snowflake username and private key&mdash;as an alternative to basic authentication. **Required feature flag:** Enable Snowflake Key-pair Authentication Public preview [documentation](snow-keypair). #### Integrated notebook terminals {: #integrated-notebook-terminals } Now available for public preview, DataRobot notebooks support integrated terminal windows. When you have a notebook session running, you can open one or more integrated terminals to execute terminal commands, such as running .py scripts or installing packages. Terminal integration also allows you to have full support for a system shell (bash) so you can run installed programs.
When you create a terminal window in a DataRobot Notebook, the notebook page divides into two sections: one for the notebook itself, and another for the terminal. ![](images/wb-nb-40.png) **Required feature flag:** Enable Notebooks Public preview [documentation](wb-terminal-nb). #### Built-in visualization charting {: #built-in-visualization-charting } Now available for public preview, DataRobot allows you to create built-in, code-free chart cells within DataRobot Notebooks, enabling you to quickly visualize your data without coding your own plotting logic. Create a chart by selecting a DataFrame in the notebook, choosing the type of chart to create, and configuring its axes. ![](images/wb-nb-34.png) **Required feature flag:** Enable Notebooks Public preview [documentation](wb-cell-nb#create-chart-cells). #### DataRobot Notebooks are now available in the EU {: #datarobot-notebooks-are-now-available-in-the-eu } Now available for public preview, EU users can access DataRobot Notebooks. [DataRobot Notebooks](dr-notebooks/index) offer an enhanced code-first experience in the application. Notebooks play a crucial role in providing a collaborative environment, using a code-first approach to accelerate the machine learning lifecycle. Reduce hundreds of lines of code, automate data science tasks, and accommodate custom code workflows specific to your business needs. ![](images/rn-notebooks-spotlight.png) **Required feature flag:** Enable Notebooks Public preview [documentation](dr-notebooks/index). ### API enhancements {: #api-enhancements } #### New time series clustering metrics and insights {: #new-time-series-clustering-metrics-and-insights } To help in comparing and evaluating clustering models, two new [Public Preview optimization metrics](#time-series-clustering-metrics-and-insights) are available via the API for time series clustering projects. When using clustering in the API, you can enable additional insights and metrics as Public Preview. 
These additional metrics are automatically computed. However, DTW metrics are not automatically computed for datasets with a large number of series (over 500) due to a risk of out-of-memory errors. Although you can request to compute these metrics manually, they are prone to failure without a significant amount of memory available.
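For reference, the Calinski-Harabasz score mentioned above has a simple closed form: the ratio of between-cluster dispersion to within-cluster dispersion, each scaled by its degrees of freedom. A minimal pure-Python sketch for a one-dimensional clustering, for illustration only (DataRobot computes the score internally):

```python
def calinski_harabasz(points, labels):
    """Calinski-Harabasz score for a 1-D clustering:
    (between-cluster dispersion / (k - 1)) / (within-cluster dispersion / (n - k))."""
    n = len(points)
    clusters = {}
    for x, lab in zip(points, labels):
        clusters.setdefault(lab, []).append(x)
    k = len(clusters)
    overall_mean = sum(points) / n

    # Between-cluster dispersion: size-weighted squared distance of each
    # cluster mean from the overall mean.
    between = sum(len(c) * (sum(c) / len(c) - overall_mean) ** 2
                  for c in clusters.values())
    # Within-cluster dispersion: squared distance of each point from its
    # own cluster mean.
    within = sum((x - sum(c) / len(c)) ** 2
                 for c in clusters.values() for x in c)
    return (between / (k - 1)) / (within / (n - k))
```

Higher scores indicate denser, better-separated clusters, which is why the metric is useful for comparing competing clusterings of the same series.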
april2023-announce
--- title: October 2022 description: Read about DataRobot's new public preview and generally available features released in October 2022. --- # October 2022 {: #october-2022 } _October 26, 2022_ DataRobot's managed AI Platform deployment for October delivered the following new GA and Public Preview features. See [this month's release announcements](release/index) as well as the [deployment history](cloud-history/index) for past feature announcements. See also: * [**Deprecation notices**](#deprecation-announcements) ??? abstract "Features grouped by capability" Name | GA | Public Preview ---------- | ---- | --- **Data and integrations** | :~~: | :~~: [Added support for manual transforms of text features](#added-support-for-manual-transforms-of-text-features) |✔ | | **Modeling** | :~~: | :~~: [Add custom logos to No-Code AI Apps](#add-custom-logos-to-no-code-ai-apps) | | ✔ | **Predictions and MLOps** | :~~: | :~~: [Deployment Usage tab](#deployment-usage-tab) | | ✔ | [Drill down on the Data Drift tab](#drill-down-on-the-data-drift-tab) | | ✔ | [Model logs for model packages](#model-logs-for-model-packages) | | ✔ | ### GA {: #ga } #### Added support for manual transforms of text features {: #added-support-for-manual-transforms-of-text-features } With this release, DataRobot now allows manual, user-created variable type transformations from categorical-to-text even when a feature is flagged as having "too many values". These transformed variables will not be included in the Informative Features list, but can be manually added to a feature list for modeling. ### Public Preview {: #public-preview } #### Deployment Usage tab {: #deployment-usage-tab } After deploying a model and making predictions in production, monitoring model quality and performance over time is critical to ensure the model remains effective.
This monitoring occurs on the [Data Drift](data-drift) and [Accuracy](deploy-accuracy) tabs and requires processing large amounts of prediction data. Prediction data processing can be subject to delays or rate limiting. On the left side of the **Usage** tab is the **Prediction Tracking** chart, a bar chart of the prediction processing status over the last 24 hours or 7 days, tracking the number of processed, missing association ID, and rate-limited prediction rows. Depending on the selected view (24-hour or 7-day), the histogram's bins are hour-by-hour or day-by-day: ![](images/rn-prediction-usage-tracking.png) To view additional information on the **Prediction Tracking** chart, hover over a column to see the time range during which the predictions data was received and the number of rows that were **Processed**, **Rate Limited**, or **Missing Association ID**: ![](images/prediction-tracking-details.png) On the right side of the **Usage** tab are the processing delays for **Predictions Processing (Champion)** and **Actuals Processing** (the delay in actuals processing is for _ALL_ models in the deployment): ![](images/predictions-processing-delay.png) The **Usage** tab recalculates the processing delays without reloading the page. You can check the **Updated** value to determine when the delays were last updated. **Required feature flag:** Enable Deployment Processing Info Public preview [documentation](deploy-usage). #### Drill down on the Data Drift tab {: #drill-down-on-the-data-drift-tab } The **Data Drift** > **Drill Down** chart visualizes the difference in distribution over time between the training dataset of the deployed model and the datasets used to generate predictions in production. The drift away from the baseline established with the training dataset is measured using the Population Stability Index (PSI). 
As a model continues to make predictions on new data, the change in the drift status over time is visualized as a heat map for each tracked feature. This heat map can help you identify [data drift](glossary/index#data-drift) and compare drift across features in a deployment to identify correlated drift trends:

![](images/rn-drill-down-heat-map.png)

In addition, you can select one or more features from the heat map to view a **Feature Drift Comparison** chart, comparing the change in a feature's data distribution between a reference time period and a comparison time period to visualize drift. This information helps you identify the cause of data drift in your deployed model, including data quality issues, changes in feature composition, or changes in the context of the target variable:

![](images/rn-drill-down-drift-compare.png)

**Required feature flag:** Enable Drift Drill Down Plot

Public preview [documentation](data-drift#drill-down-on-the-data-drift-tab).

#### Model logs for model packages {: #model-logs-for-model-packages }

A model package's model logs display information about the operations of the underlying model. This information can help you identify and fix errors. For example, compliance documentation requires DataRobot to execute many jobs, some of which run sequentially and some in parallel. These jobs may fail, and reading the logs can help you identify the cause of the failure (e.g., the Feature Effects job fails because a model does not handle null values).

!!! important

    In the Model Registry, a model package's **Model Logs** tab _only_ reports the operations of the underlying model, not the model package operations (e.g., model package deployment time).

In the **Model Registry**, access a model package, and then click the **Model Logs** tab:

![](images/pp-model-pkg-logs.png)

| | Information | Description |
|-|-------------|-------------|
| ![](images/icon-1.png) | Date / Time | The date and time the model log event was recorded.
| ![](images/icon-2.png) | Status | The status the log entry reports: <ul><li><span style="color:#3BC169">INFO</span>: Reports a successful operation.</li><li><span style="color:#E74D4D">ERROR</span>: Reports an unsuccessful operation.</li></ul> |
| ![](images/icon-3.png) | Message | The description of the successful operation (INFO) or the reason for the failed operation (ERROR). This information can help you troubleshoot the root cause of the error. |

If you can't locate the log entry for the error you need to fix, it may be an older entry not shown in the current view. Click **Load older logs** to expand the **Model Logs** view.

![](images/pp-model-pkg-logs-load.png)

!!! tip

    Look for the older log entries at the top of the **Model Logs** view; they are added to the top of the existing log history.

**Required feature flag:** Enable Model Logs for Model Packages

Public preview [documentation](pp-model-pkg-logs).

#### Add custom logos to No-Code AI Apps {: #add-custom-logos-to-no-code-ai-apps }

Now available for public preview, you can add a custom logo to your No-Code AI Apps, keeping an AI App's branding consistent with your company's before you share it externally or internally.

![](images/app-logo-5.png)

To upload a new logo, open the application you want to edit and click **Build**. Under **Settings > Configuration Settings**, click **Browse** and select a new image, or drag and drop an image into the **New logo** field.

![](images/app-logo-3.png)

**Required feature flag:** Enable Application Builder Custom Logos

### Deprecation announcements {: #deprecation-announcements }

#### DataRobot Pipelines to be removed in November {: #datarobot-pipelines-to-be-removed-in-november }

As of November 2022, DataRobot is retiring Pipeline Workspaces and will no longer support them.
If you are currently using this offering, contact [support@datarobot.com](mailto:support@datarobot.com) or visit [support.datarobot.com](http://support.datarobot.com/).

#### User/Open source models disabled in November {: #user-open-source-models-disabled-in-november }

As of November 2022, DataRobot disabled all models containing User/Open source (“user”) tasks. See the [release announcement](june2022-announce#user-open-source-models-deprecated-and-soon-disabled) for full information on identifying these models. Use the [Composable ML](cml/index) functionality to create custom models.

#### DataRobot Prime models to be deprecated {: #datarobot-prime-models-to-be-deprecated }

[DataRobot Prime](prime/index), a method for creating a downloadable, derived model for use outside of the DataRobot application, will be removed in an upcoming release. It is being replaced by the ability to export Python or Java code from Rulefit models using the [Scoring Code](scoring-code/index) capabilities. Rulefit models differ from Prime only in that they use raw data for their prediction target rather than predictions from a parent model. There is no change in the availability of Java Scoring Code for other blueprint types, and any existing Prime models will continue to function.

#### Automodel functionality to be removed {: #automodel-functionality-to-be-removed }

The October release deployment removes the "Automodel" Public Preview functionality. There is no impact on existing projects, but the feature is no longer accessible from the product.

_All product and company names are trademarks&trade; or registered&reg; trademarks of their respective holders. Use of them does not imply any affiliation with or endorsement by them_.
---
title: November 2022
description: Read about DataRobot's new public preview and generally available features released in November 2022.
---

## November 2022 {: #november-2022 }

_November 22, 2022_

With the latest deployment, DataRobot's managed AI Platform delivered the following new GA and Public Preview features. See the [deployment history](cloud-history/index) for past feature announcements. See also:

* [**API enhancements**](#api-enhancements)
* [**Deprecation notices**](#deprecation-announcements)

??? abstract "Features grouped by capability"

    Name | GA | Public Preview
    ---------- | ---- | ---
    **Modeling** | :~~: | :~~:
    [Text Prediction Explanations now GA](#text-prediction-explanations-now-ga) | ✔ | |
    [Changes to blender model defaults](#changes-to-blender-model-defaults) | ✔ | |
    [Japanese compliance documentation now generally available, more complete](#japanese-compliance-documentation-now-generally-available-more-complete) | ✔ | |
    [Prediction Explanations for cluster models](#prediction-explanations-for-cluster-models) | | ✔ |
    [Use Cases tab renamed to Value Tracker](#use-cases-tab-renamed-to-value-tracker) | ✔ | |
    **Predictions and MLOps** | :~~: | :~~:
    [Environment limit management for custom models](#environment-limit-management-for-custom-models) | ✔ | |
    [Dynamically load required agent spoolers in a Java application](#dynamically-load-required-agent-spoolers-in-a-java-application) | ✔ | |
    **API enhancements** | :~~: | :~~:
    [R client v2.29](#public-preview-r-client-v229) | | ✔ |

### NumPy library to be upgraded in December {: #numpy-library-to-be-upgraded-in-december }

DataRobot is upgrading the Python `numpy` library during the week of December 11, 2022. Users should not experience backward compatibility issues as a result of this change. The `numpy` library handles various numerical transformations related to data processing and preparation in the platform.
Upgrading the `numpy` library is a proactive step to address common vulnerabilities and exposures (CVEs). DataRobot regularly upgrades libraries to improve speed, security, and predictive performance.

Testing indicates that a subset of users may experience small changes in model predictions as a result of the upgrade. Only users who have trained and deployed a model using `.xpt` or `.xport` file formats may see predictions change. In cases where predictions change, the difference in prediction values is typically less than 1%. These changes are due to incremental differences in the treatment of floats between the current and target upgrade versions of the `numpy` library.

### GA {: #ga }

#### Text Prediction Explanations now GA {: #text-prediction-explanations-now-ga }

[Text Prediction Explanations](predex-text) help you understand how individual words (n-grams) in a text feature influence predictions, helping to validate the model and the importance it places on words. They use the standard color bar spectrum of blue (negative) to red (positive) impact to visualize your text, and display n-grams not recognized by the model in grey. Text Prediction Explanations, either XEMP or SHAP, run by default when text is present in a dataset.

![](images/text-predex-unknown.png)

#### Changes to blender model defaults {: #changes-to-blender-model-defaults }

This release brings changes to the default behavior of [blender models](leaderboard-ref#blender-models). A blender (or ensemble) model combines the predictions of two or more models, potentially improving accuracy. DataRobot can automatically create these models at the end of Autopilot when the [**Create blenders from top models** advanced option](additional) is enabled. Previously, the default was to create blenders automatically; now, the default is not to build these models.
Additionally, the number of models allowed when creating blenders, either automatically or manually, has changed. Previously there was no limit on the number of contributory models (later, a three-model maximum); the limit is now eight models per blender.

Finally, the automatic creation of advanced blenders has been removed. These blenders used a backwards stage-wise process to eliminate models when doing so benefited the blend's cross-validation score:

* Advanced Average (AVG) Blend
* Advanced Generalized Linear Model (GLM) Blend
* Advanced Elastic Net (ENET) Blend

The following blender types are currently in the process of deprecation:

Blender | Deprecation status
--------| -------------------
Random Forest Blend (RF) | Existing RF blenders continue to work; you cannot create new RF blenders.
Light Gradient Boosting Machine Blend (LGBM) | Existing LGBM blenders continue to work; you cannot create new LGBM blenders.
TensorFlow Blend (TF) | Existing TF blenders do not work; you cannot create new TF blenders.

These changes have been made in response to customer feedback. Because blenders can extend build times and cause deployment issues, the changes ensure that these impacts affect only those users who need the capability. Testing has determined that, in most cases, the accuracy gain does not justify the extended runtimes imposed on Autopilot. For data scientists who need blender capabilities, manual blending is not affected.

#### Japanese compliance documentation now generally available, more complete {: #japanese-compliance-documentation-now-generally-available-more-complete }

With this release, [model compliance documentation](compliance/index) is now generally available in Japanese. Japanese-language users can generate, for each model, individualized documentation that provides comprehensive guidance on what constitutes effective model risk management, and download it as an editable Microsoft Word document.
In the public preview version, some sections were untranslated and therefore removed from the report. The following previously untranslated sections are now translated and available for binary classification and multiclass projects:

* Bias and Fairness
* Lift Chart
* Accuracy

Anomaly detection compliance information is not yet translated and is not included; it is available in English if required. Compliance Reports are a premium feature; contact your DataRobot representative for information on availability.

#### Environment limit management for custom models {: #environment-limit-management-for-custom-models }

The execution environment limit allows administrators to control how many custom model environments a user can add to the [Custom Model Workshop](custom-model-workshop/index). In addition, the execution environment _version_ limit allows administrators to control how many versions a user can add to _each_ of those environments. These limits can be:

1. **Directly applied to the user**: Set in a user's permissions. Overrides the limits set in the group and organization permissions (if the user limit value is lower).
2. **Inherited from a user group**: Set in the permissions of the group a user belongs to. Overrides the limits set in organization permissions (if the user group limit value is lower).
3. **Inherited from an organization**: Set in the permissions of the organization a user belongs to.

If the environment or environment version limits are defined for an organization or a group, the users within that organization or group inherit the defined limits. However, a more specific definition of those limits at a lower level takes precedence. For example, an organization may have the environment limit set to 5, a group to 4, and the user to 3; in this scenario, the final limit for the individual user is 3.
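Because a lower, more specific limit overrides broader ones, the precedence rules above amount to taking the lowest limit defined at any level. A minimal sketch of that resolution logic (a hypothetical helper, not DataRobot's code):

``` py
def effective_environment_limit(user_limit=None, group_limit=None, org_limit=None):
    """Return the effective limit: the lowest value defined at any level wins,
    since a lower, more specific limit overrides broader ones."""
    defined = [v for v in (user_limit, group_limit, org_limit) if v is not None]
    return min(defined) if defined else None

# Example from the text: org limit 5, group limit 4, user limit 3 -> effective limit 3
```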
For more information on adding custom model execution environments, see the [Custom model environment](custom-environments) documentation.

=== "View environment limits"

    Any user can view their environment and environment version limits. On the [**Custom Models** > **Environments** tab](custom-environments), next to the **Add new environment** and **New version** buttons, a badge indicates how many environments (or environment versions) you've added and how many you can add based on the environment limit:

    ![](images/rn-env-ver-limits.png)

    The following status categories are available for this badge:

    Badge | Description
    ------|------------
    ![](images/env-limit-badge.png){: style="height:22px; width:auto;"} | The number of environments (or versions) is less than 75% of the limit.
    ![](images/env-limit-badge-alert.png){: style="height:22px; width:auto;"} | The number of environments (or versions) is equal to or greater than 75% of the limit.
    ![](images/env-limit-badge-warn.png){: style="height:22px; width:auto;"} | The number of environments (or versions) has reached the limit.

=== "Set environment limits (for administrators)"

    With the correct permissions, an administrator can set these limits at a [user](manage-users#manage-execution-environment-limits) or [group](manage-groups#manage-execution-environment-limits) level. For a user or a group, on the **Permissions** tab, click **Platform**, and then click **Admin Controls**. Next, under **Admin Controls**, set either or both of the following settings:

    * **Execution Environments limit**: The maximum number of custom model execution environments users in this group can add.
    * **Execution Environments versions limit**: The maximum number of versions users in this group can add to each custom model execution environment.
![](images/execution-env-controls.png)

For more information, see the [Manage user execution environment limits](manage-users#manage-execution-environment-limits) documentation (or the [Manage group execution environment limits](manage-groups#manage-execution-environment-limits) documentation).

#### Dynamically load required agent spoolers in a Java application {: #dynamically-load-required-agent-spoolers-in-a-java-application }

Dynamically loading third-party Monitoring Agent spoolers in your Java application improves security by removing unused code. This functionality works by loading a separate JAR file for the [Amazon SQS](spooler#amazon-sqs), [RabbitMQ](spooler#rabbitmq), [Google Cloud Pub/Sub](spooler#google-cloud-pubsub), and [Apache Kafka](spooler#apache-kafka) spoolers, as needed. The natively supported file system spooler is still configurable without loading a JAR file. Previously, the `datarobot-mlops` and `mlops-agent` packages included all spooler types by default.

To use a third-party spooler in your MLOps Java application, include the required spoolers as dependencies in your POM (Project Object Model) file, along with `datarobot-mlops`:

``` xml title="Dependencies in a POM file"
<properties>
    <mlops.version>8.3.0</mlops.version>
</properties>

<dependencies>
    <dependency>
        <groupId>com.datarobot</groupId>
        <artifactId>datarobot-mlops</artifactId>
        <version>${mlops.version}</version>
    </dependency>
    <dependency>
        <groupId>com.datarobot</groupId>
        <artifactId>spooler-sqs</artifactId>
        <version>${mlops.version}</version>
    </dependency>
</dependencies>
```

The spooler JAR files are included in the [MLOps agent tarball](monitoring-agent/index#mlops-agent-tarball). They are also available individually as downloadable JAR files in the public Maven repository for the [DataRobot MLOps Agent](https://mvnrepository.com/artifact/com.datarobot/mlops-agent){ target=_blank }.
To use a third-party spooler with the executable agent JAR file, add the path to the spooler to the classpath:

``` shell title="Classpath with Kafka spooler"
java ... -cp path/to/mlops-agent-8.3.0.jar:path/to/spooler-kafka-8.3.0.jar com.datarobot.mlops.agent.Agent
```

The `start-agent.sh` script provided as an example automatically performs this task, adding any spooler JAR files found in the `lib` directory to the classpath. If your spooler JAR files are in a different directory, set the `MLOPS_SPOOLER_JAR_PATH` environment variable.

For more information, see the [Dynamically load required spoolers in a Java application](spooler#dynamically-load-required-spoolers-in-a-java-application) documentation.

#### Use Cases tab renamed to Value Tracker {: #use-cases-tab-renamed-to-value-tracker }

With this release, the **Use Cases** tab at the top of the DataRobot application is now the **Value Tracker**. While the functionality remains the same, all instances of “use cases” in this feature have been replaced by “value tracker.”

![](images/usecase-valuetrack.png)

See the [Value Tracker documentation](value-tracker) for more information.

### Public Preview {: #public-preview }

#### Prediction Explanations for cluster models {: #prediction-explanations-for-cluster-models }

Now available as Public Preview, you can use Prediction Explanations with clustering to uncover which factors most contributed to any given row’s cluster assignment. With this insight, you can easily explain clustering model outcomes to stakeholders and identify high-impact factors to help focus their business strategies. Functioning much like multiclass Prediction Explanations&mdash;but reporting on clusters instead of classes&mdash;cluster explanations are available from both the Leaderboard and deployments when enabled. They are available for all XEMP-based clustering projects and are not available with time series.
**Required feature flag**: Enable Clustering Prediction Explanations

Public preview [documentation](cluster-pe).

![](images/pe-clustering-3.png)

### API enhancements {: #api-enhancements }

The following is a summary of new API features and enhancements. Go to the [API documentation user guide](guide/index){ target=_blank } for more information on each client.

!!! tip

    DataRobot highly recommends updating to the latest API client for Python and R.

#### Public Preview: R client v2.29 {: #public-preview-r-client-v229 }

Now available for public preview, DataRobot has released [version 2.29 of the R client](https://github.com/datarobot/rsdk/releases){ target=_blank }. This version brings parity between the R client and version 2.29 of the Public API. As a result, it introduces significant changes to common methods and usage of the client. These changes are encapsulated in a new library (in addition to the `datarobot` library): `datarobot.apicore`, which provides auto-generated functions to access the Public API. The `datarobot` package provides a number of API wrapper functions around the `apicore` package to make it easier to use.

Reference the [v2.29 documentation](r-pub-prev/index) for more details on the new R client, including installation instructions, detailed method overviews, and [reference documentation](r-ref).

##### New R Functions {: #new-r-functions }

* Generated API wrapper functions are organized into categories based on their tags from the OpenAPI specification, which were themselves redone for the entire DataRobot Public API in v2.27.
* API wrapper functions use camel-cased argument names to be consistent with the rest of the package.
* Most function names follow a `VerbObject` pattern based on the OpenAPI specification.
* Some function names match "legacy" functions that existed in v2.18 of the R client if they invoked the same underlying endpoint.
    For example, the wrapper function is called `GetModel`, not `RetrieveProjectsModels`, since the former is what was implemented in the R client for the endpoint `/projects/{pId}/models/{mId}`.

* Similarly, these functions use the same arguments as the corresponding "legacy" functions to ensure DataRobot does not break existing code calling those functions.
* The R client (both `datarobot` and `datarobot.apicore` packages) outputs a warning when you attempt to access certain resources (projects, models, deployments, etc.) that are deprecated or disabled by the DataRobot platform migration to Python 3.
* Added the helper function `EditConfig` that allows you to interactively modify `drconfig.yaml`.
* Added the `DownloadDatasetAsCsv` function to retrieve a dataset as a CSV file using `catalogId`.
* Added the `GetFeatureDiscoveryRelationships` function to get the feature discovery relationships for a project.

##### R enhancements {: #r-enchancements }

* The function `RequestFeatureImpact` now accepts a `rowCount` argument, which changes the sample size used for Feature Impact calculations.
* The internal helper function `ValidateModel` was renamed to `ValidateAndReturnModel` and now works with model classes from the `apicore` package.
* The `quickrun` argument has been removed from the function `SetTarget`. Set `mode = AutopilotMode.Quick` instead.
* The Transferable Models family of functions (`ListTransferableModels`, `GetTransferableModel`, `RequestTransferableModel`, `DownloadTransferableModel`, `UploadTransferableModel`, `UpdateTransferableModel`, `DeleteTransferableModel`) has been removed.
    The underlying endpoints&mdash;long deprecated&mdash;were removed from the Public API with the removal of the Standalone Scoring Engine (SSE).

* Removed files (code, tests, docs) representing parts of the Public API not present in v2.27&ndash;2.29.

##### R deprecations {: #r-deprecations }

Review the breaking changes introduced in version 2.29:

* The `quickrun` argument has been removed from the function `SetTarget`. Set `mode = AutopilotMode.Quick` instead.
* The Transferable Models functions have been removed. Note that the underlying endpoints were also removed from the Public API with the removal of the Standalone Scoring Engine (SSE). The affected functions are listed below:
    * `ListTransferableModels`
    * `GetTransferableModel`
    * `RequestTransferableModel`
    * `DownloadTransferableModel`
    * `UploadTransferableModel`
    * `UpdateTransferableModel`
    * `DeleteTransferableModel`

Review the deprecations introduced in version 2.29:

* The Compliance Documentation API is deprecated. Use the Automated Documentation API instead.

### Deprecation announcements {: #deprecation-announcements }

#### Current status of Python 2 deprecation and removal {: #current-status-of-python-2-deprecation-and-removal }

As of the November 2022 release, the following describes the state of the Python 2 removal:

* Python 2 projects and models are disabled and no longer support Leaderboard predictions.
* Python 2-based model deployments are disabled, with the exception of organizations that requested an extension for the frozen runtime.

See the [guide](python2) for detailed information on Python 2 deprecation and migration to Python 3.
---
title: August 2022
description: Read about DataRobot's new public preview and generally available features released in August 2022.
---

# August 2022 {: #august-2022 }

_August 24, 2022_

With the latest deployment, DataRobot's managed AI Platform delivered the following new GA and Public Preview features. See the [deployment history](cloud-history/index) for past feature announcements. See also:

* [**Deprecation notices**](#deprecation-announcements)

??? abstract "Features grouped by capability"

    Name | GA | Public Preview
    ---------- | ---- | ---
    **Data and integrations** | :~~: | :~~:
    [UI/UX improvements to No-Code AI Apps](#uiux-improvements-to-no-code-ai-apps) | ✔ |
    [New data connection UI](#new-data-connection-ui) | | ✔ |
    **Predictions and MLOps** | :~~: | :~~:
    [Clear deployment statistics](#clear-deployment-statistics) | ✔ |
    [Challenger insights for multiclass and external models](#challenger-insights-for-multiclass-and-external-models) | ✔ |
    [Remote repository file browser for custom models and tasks](#remote-repository-file-browser-for-custom-models-and-tasks) | | ✔ |
    [Deployment prediction and training data export for custom metrics](#deployment-prediction-and-training-data-export-for-custom-metrics) | | ✔ |

### GA {: #ga }

#### UI/UX improvements to No-Code AI Apps {: #uiux-improvements-to-no-code-ai-apps }

This release introduces the following improvements to No-Code AI Apps:

* An in-app tour has been added to help you set up Optimizer applications. Click the **?** in the upper-right and select **Show Optimizer Guide**.
* Applications now open in Consume mode instead of Build mode.
* In **Consume > Optimization Details**, the What-if and Optimizer widgets have been moved toward the top of the page.
* In Optimizer applications, you previously needed to select a prediction row to calculate an optimization.
    Now, you can click the **Optimize Row** button in the All Rows widget to calculate and display the optimized prediction without leaving the page.

* In Build mode, widgets no longer display an example.

#### Clear deployment statistics {: #clear-deployment-statistics }

Now generally available, you can clear monitoring data by model version and date range. If your organization has enabled the [deployment approval workflow](dep-admin), approval must be given before any monitoring data can be cleared from the deployment. This feature allows you to remove monitoring data that was sent inadvertently or during the integration testing phase of deploying a model.

From the deployment inventory, choose the deployment whose statistics you want to clear. Click the actions menu and select **Clear statistics**.

![](images/reset-dep-1.png)

Complete the settings in the **Clear Deployment Statistics** window to configure the conditions of the reset.

![](images/reset-dep-2.png)

After fully configuring the settings, click **Clear statistics**. DataRobot clears the monitoring data from the deployment for the indicated date range. For more information, see the [Clear deployment statistics documentation](actions-menu#clear-deployment-statistics).

#### Challenger insights for multiclass and external models {: #challenger-insights-for-multiclass-and-external-models }

Now generally available, you can compute challenger model insights for multiclass models and external models.

* Multiclass classification projects only support accuracy comparison.
* External models (regardless of project type) require an external challenger comparison dataset.

=== "Add external challenger comparison dataset"

    To compare an external model challenger, you need to provide a dataset that includes the actuals *and* the prediction results. When you upload the comparison dataset, you can specify a column containing the prediction results.
    To add a comparison dataset for an external model challenger, follow the [Generate model comparisons](challengers#generate-model-comparisons) process, and on the **Model Comparison** tab, upload your comparison dataset with a **Prediction column** identifier. Make sure the prediction dataset you provide includes the prediction results generated by the external model in the column identified by **Prediction column**.

    ![](images/ext-champ-6.png)

=== "View insights by comparison tab"

    Once you compute model insights, the **Model Insights** page displays comparison tabs depending on the project type:

    <table>
      <tr>
        <th></th>
        <th scope="col">Accuracy</th>
        <th scope="col">Dual lift</th>
        <th scope="col">Lift</th>
        <th scope="col">ROC</th>
        <th scope="col">Predictions Difference</th>
      </tr>
      <tr>
        <th scope="row">Regression</th>
        <td>✔</td> <td>✔</td> <td>✔</td> <td></td> <td>✔</td>
      </tr>
      <tr>
        <th scope="row">Binary</th>
        <td>✔</td> <td>✔</td> <td>✔</td> <td>✔</td> <td>✔</td>
      </tr>
      <tr>
        <th scope="row">Multiclass</th>
        <td>✔</td> <td></td> <td></td> <td></td> <td></td>
      </tr>
      <tr>
        <th scope="row">Time series</th>
        <td>✔</td> <td>✔</td> <td>✔</td> <td></td> <td>✔</td>
      </tr>
    </table>

    For more information, see the [View model comparisons documentation](challengers#view-model-comparisons).

### Public Preview {: #public-preview }

#### New data connection UI {: #new-data-connection-ui }

Now available for public preview, DataRobot introduces improvements to the data connection user interface that simplify the process of adding and configuring data connections from the **AI Catalog > Data Connection** page. Instead of opening multiple windows to set up a data connection, after selecting a data store, you can configure parameters and authenticate credentials in the same window. For each data connection, only the required fields are displayed; however, you can define additional parameters under **Advanced Options** at the bottom of the page.
![](images/connect-ui-5.png)

Additionally, using credentials to connect to data sources has been simplified. Once you enter credentials when configuring a data connection, DataRobot automatically applies those credentials when you create a new AI Catalog dataset from the connection.

**Required feature flag:** Enable New Data Connection UI

Public preview [documentation](new-data-ui).

#### Remote repository file browser for custom models and tasks {: #remote-repository-file-browser-for-custom-models-and-tasks }

Now available as a public preview feature, you can browse the folders and files in a remote repository to select the files you want to add to a custom model or task. When you [add a model](custom-inf-model#create-a-new-custom-model) or [add a task](cml-custom-tasks) to the Custom Model Workshop, you can add files to that model or task from a wide range of repositories, including Bitbucket, GitHub, GitHub Enterprise, S3, GitLab, and GitLab Enterprise. After you [add a repository to DataRobot](custom-model-repos#add-a-remote-repository), you can pull files from the repository and include them in the custom model or task.

When you [pull from a remote repository](custom-model-repos#pull-files-from-the-repository), in the **Pull from GitHub repository** dialog box, you can select the checkbox for any files or folders you want to pull into the custom model. In addition, you can click **Select all** to select every file in the repository, or, after you select one or more files, click **Deselect all** to clear your selections.

!!! note

    This example uses GitHub; however, the process is the same for each repository type.

![](images/pp-custom-model-repo-browse.png)

**Required feature flag:** Enable File Browser for Pulling Model or Task Files from Remote Repositories

Public Preview [documentation](pp-remote-repo-file-browser).
#### Deployment prediction and training data export for custom metrics {: #deployment-prediction-and-training-data-export-for-custom-metrics }

Now available as a public preview feature, you can export a deployment's stored training and prediction data&mdash;both the scoring data and the prediction results&mdash;to compute and monitor custom business or performance metrics outside DataRobot.

To export a deployment's stored prediction and training data:

1. In the top navigation bar, click **Deployments**.

2. On the **Deployments** tab, click the deployment you want to open and export stored prediction or training data from.

    !!! note

        To access the **Data Export** tab, the deployment must store prediction data. Ensure that you [enable prediction rows storage for challenger analysis](challengers-settings) in the deployment settings.

3. In the deployment, click the **Data Export** tab.

=== "Download training data"

    To open or download training data:

    * Under **Training Data**, click the open icon ![](images/icon-open.png) to open the training data in the AI Catalog.
    * Click the download icon ![](images/icon-down.png) to download the training data.

=== "Download prediction data"

    To open or download prediction data:

    1. Configure the following settings to specify the stored prediction data you want to export:

        ![](images/stored-dep-data-export-settings.png)

        | | Setting | Description |
        |-|---------|-------------|
        | ![](images/icon-1.png) | Model | Select the deployment's model, current or previous, to export prediction data for. |
        | ![](images/icon-2.png) | Range (UTC) | Select the start and end dates of the period you want to export prediction data from. |
        | ![](images/icon-3.png) | Resolution | Select the granularity of the date slider: hourly, daily, weekly, or monthly, based on the time range selected. If the time range is longer than 7 days, hourly granularity is not available.
| ![](images/icon-4.png) | Reset | Reset the data export settings to the default. |

    2. Under **Prediction Data**, click **Generate Prediction Data**.

        ??? "Prediction data generation considerations"

            When generating prediction data, consider the following:

            * When generating prediction data, you can export up to 200,000 rows. If the time range you set exceeds 200,000 rows of prediction data, decrease the range.

            * In the AI Catalog, you can have up to 100 prediction export items. If generating prediction data for export would cause the number of prediction export items in the AI Catalog to exceed that limit, delete old prediction export AI Catalog items.

            * When generating prediction data for time series deployments, two prediction export items are added to the AI Catalog: one for the prediction data and the other for the prediction results. The Data Export tab links to the prediction results. The prediction data export appears in the table below.

    3. After the prediction data is generated:

        * Click the open icon ![](images/icon-open.png) to open the prediction data in the AI Catalog.

        * Click the download icon ![](images/icon-down.png) to download the prediction data.

To use the exported deployment data to create your own custom metrics, you can implement a script that reads the CSV file containing the exported data and then calculates metrics from the resulting values, including columns automatically generated during the export process.

=== "Create a custom metric"

    This example uses the exported prediction data to calculate and plot the change in the `time_in_hospital` feature over a 30-day period, using the DataRobot prediction timestamp (`DR_RESERVED_PREDICTION_TIMESTAMP`) as the DataFrame index (or row labels).
    It also uses the exported training data as the plot's baseline:

    ``` py
    import pandas as pd

    feature_name = "<numeric_feature_name>"

    # Use the feature's mean in the training data as the plot's baseline
    training_df = pd.read_csv("<path_to_training_data_csv>")
    baseline = training_df[feature_name].mean()

    # Parse the prediction timestamp and use it as the DataFrame index
    prediction_df = pd.read_csv("<path_to_prediction_data_csv>")
    prediction_df["DR_RESERVED_PREDICTION_TIMESTAMP"] = pd.to_datetime(
        prediction_df["DR_RESERVED_PREDICTION_TIMESTAMP"]
    )
    predictions = (
        prediction_df.set_index("DR_RESERVED_PREDICTION_TIMESTAMP")[feature_name]
        .sort_index()  # time-based rolling windows require a sorted index
    )

    # Plot the 30-day rolling mean of the feature against the training baseline
    ax = predictions.rolling("30D").mean().plot()
    ax.axhline(y=baseline, color="C1", label="training data baseline")
    ax.legend()
    ax.figure.savefig("feature_over_time.png")
    ```

=== "DataRobot column reference"

    DataRobot automatically adds the following columns to the prediction data generated for export:

    Column | Description
    -------|------------
    DR_RESERVED_PREDICTION_TIMESTAMP | Contains the prediction timestamp.
    DR_RESERVED_PREDICTION | Identifies regression prediction values.
    DR_RESERVED_PREDICTION_{Label} | Identifies classification prediction values.

**Required feature flag:** Enable Training and Prediction Data Export for Deployments

Public Preview [documentation](data-export).

## Deprecation announcements {: #deprecation-announcements }

#### USER/Open Source models deprecated and soon disabled {: #user-open-source-models-deprecated-and-soon-disabled }

With this release, all models containing USER/Open source (“user”) tasks are deprecated. The exact process of deprecating existing models will be rolling out over the next few months and implications will be announced in subsequent releases. See the full announcement in the [June Cloud Announcements](june2022-announce).

<hr>

_All product and company names are trademarks&trade; or registered&reg; trademarks of their respective holders. Use of them does not imply any affiliation with or endorsement by them_.
---
title: July 2022
description: Read about DataRobot's new public preview and generally available features released in July 2022.
---

# July 2022 {: #july-2022 }

_July 27, 2022_

With the latest deployment, DataRobot's managed AI Platform delivered the following new GA and Public Preview features. See the [deployment history](cloud-history/index) for past feature announcements. See also:

* [**API enhancements**](#api-enhancements)
* [**Deprecation notices**](#deprecation-announcements)

=== "SaaS"

    ??? abstract "Features grouped by capability"

        Name | GA | Public Preview
        ---------- | ---- | ---
        **Modeling** | :~~: | :~~:
        [Native Prophet and Series Performance blueprints in Autopilot](#native-prophet-and-series-performance-blueprints-in-autopilot) | ✔ |
        [Text AI parameters now generally available via Composable ML](#text-AI-parameters-now-generally-available-via-composable-ML) | ✔ |
        [Composable ML task categories refined](#composable-ml-task-categories-refined) | ✔ |
        [NLP Autopilot with better language support now GA](#nlp-autopilot-with-better-language-support-now-ga) | ✔ |
        [No-Code AI App header enhancements](#no-code-ai-app-header-enhancements) | ✔ |
        [Project duplication, with settings, for time series projects](#project-duplication-with-settings-for-time-series-projects) | | ✔ |
        [Multiclass support in No-Code AI Apps](#multiclass-support-in-no-code-ai-apps) | | ✔ |
        [Details page added to time series Predictor applications](#details-page-added-to-time-series-predictor-applications) | | ✔ |
        [Support for Manual mode introduced to segmented modeling](#support-for-manual-mode-introduced-to-segmented-modeling) | ✔ |
        [Blueprint toggle allows summary and detailed views from Leaderboard](#blueprint-toggle-allows-summary-and-detailed-views-from-leaderboard) | | ✔ |
        [Scoring code for time series projects](#scoring-code-for-time-series-projects) | ✔ |
        **Predictions and MLOps** | :~~: | :~~:
        [Text Prediction Explanations illustrate impact on an n-gram level](#text-prediction-explanations-illustrate-impact-on-an-n-gram-level) | | ✔ |

=== "Self-Managed"

    ??? abstract "Features grouped by capability"

        Name | GA | Public Preview
        ---------- | ---- | ---
        **Modeling** | :~~: | :~~:
        [Native Prophet and Series Performance blueprints in Autopilot](#native-prophet-and-series-performance-blueprints-in-autopilot) | ✔ |
        [Text AI parameters now generally available via Composable ML](#text-AI-parameters-now-generally-available-via-composable-ML) | ✔ |
        [Composable ML task categories refined](#composable-ml-task-categories-refined) | ✔ |
        [NLP Autopilot with better language support now GA](#nlp-autopilot-with-better-language-support-now-ga) | ✔ |
        [No-Code AI App header enhancements](#no-code-ai-app-header-enhancements) | ✔ |
        [Project duplication, with settings, for time series projects](#project-duplication-with-settings-for-time-series-projects) | | ✔ |
        [Multiclass support in No-Code AI Apps](#multiclass-support-in-no-code-ai-apps) | | ✔ |
        [Details page added to time series Predictor applications](#details-page-added-to-time-series-predictor-applications) | | ✔ |
        [Support for Manual mode introduced to segmented modeling](#support-for-manual-mode-introduced-to-segmented-modeling) | ✔ |
        [Blueprint toggle allows summary and detailed views from Leaderboard](#blueprint-toggle-allows-summary-and-detailed-views-from-leaderboard) | | ✔ |
        [Scoring code for time series projects](#scoring-code-for-time-series-projects) | ✔ |
        **Predictions and MLOps** | :~~: | :~~:
        [Text Prediction Explanations illustrate impact on an n-gram level](#text-prediction-explanations-illustrate-impact-on-an-n-gram-level) | | ✔ |

### GA {: #ga }

#### Native Prophet and Series Performance blueprints in Autopilot {: #native-prophet-and-series-performance-blueprints-in-autopilot }

Support for native Prophet, ETS, and TBATS models for single and multiseries time series projects was announced as generally available in the June release.
(A detailed model description can be found for each model by accessing the model [blueprint](blueprints).) With this release, these models no longer run as part of Quick Autopilot. DataRobot runs them, as appropriate, in full Autopilot, and they are also available from the model repository.

#### Text AI parameters now generally available via Composable ML {: #text-AI-parameters-now-generally-available-via-composable-ML }

The ability to modify certain Text AI preprocessing tasks (Lemmatizer, PosTagging, and Stemming) has moved from the Advanced Tuning tab to blueprint tasks accessible via Composable ML. The new Text AI preprocessing tasks unlock additional pathways to create unique text blueprints. For example, you can now use lemmatization in any text model that supports that preprocessing task instead of being limited to TF-IDF blueprints. Previously available as a Public Preview feature, these tasks are now available without a feature flag.

#### Composable ML task categories refined {: #composable-ml-task-categories-refined }

In response to feedback and the widespread adoption of Composable ML and [blueprint editing](cml-blueprint-edit), this release brings some refinements to task categorization. For example, boosting tasks are now available under the specific project/model type:

![](images/cml-blue-14.png)

#### NLP Autopilot with better language support now GA {: #nlp-autopilot-with-better-language-support-now-ga }

A host of natural language processing (NLP) improvements are now generally available. The most impactful is the application of FastText for language detection at data ingest, which:

* Allows DataRobot to generate the appropriate blueprints with parameters optimized for that language.
* Adapts tokenization to the detected language for better word clouds and interpretability.
* Triggers specific blueprint training heuristics so that accuracy-optimized Advanced Tuning settings are applied.
This feature works with multilingual use cases as well; Autopilot will detect multiple languages and adjust various blueprint settings for the greatest accuracy.

The following NLP enhancements are also now generally available:

* New pre-trained BPE tokenizer (which can handle any language).
* Refined Keras blueprints for NLP for improved accuracy and training time.
* Various improvements across other NLP blueprints.
* New Keras blueprints (with the BPE tokenizer) in the Repository.

#### No-Code AI App header enhancements {: #no-code-ai-app-header-enhancements }

This release introduces improvements to the layout and header of [No-Code AI Apps](app-builder/index). Toggle between the tabs below to view the improvements made to the UI when editing and using an application:

=== "Editing an application"

    ![](images/app-build-1.png)

    | &nbsp; | Element | Description |
    |---|---|---|
    | ![](images/icon-1.png) | Pages panel | Allows you to rename, reorder, add, hide, and delete application pages. |
    | ![](images/icon-2.png) | Widget panel | Allows you to add widgets to your application. |
    | ![](images/icon-3.png) | Settings | Modifies general configurations and permissions, and displays app usage. |
    | ![](images/icon-4.png) | Documentation | Opens the DataRobot documentation for No-Code AI Apps. |
    | ![](images/icon-5.png) | Editing page dropdown | Controls the application page you are currently editing. To view a different page, click the dropdown and select the page you want to edit. Click **Manage pages** to open the **Pages** panel. |
    | ![](images/icon-6.png) | Preview | Previews the application on different devices. |
    | ![](images/icon-7.png) | Go to app / Publish | Opens the end-user application, where you can make new predictions, as well as view prediction results and widget visualizations. After editing an application, this button displays **Publish**, which you must click to apply your changes. |
    | ![](images/icon-8.png) | Widget actions | Moves, hides, edits, and deletes widgets. |

=== "Using an application"

    ![](images/use-app-1.png)

    | &nbsp; | Element | Description |
    |---|---|---|
    | ![](images/icon-1.png) | Application name | Displays the application name. Click to return to the app's Home page. |
    | ![](images/icon-2.png) | Pages | Navigates between application pages. |
    | ![](images/icon-3.png) | Build | Allows you to edit the application. |
    | ![](images/icon-4.png) | Share | Shares the application with users, groups, or organizations within DataRobot. |
    | ![](images/icon-5.png) | Add new row | Opens the **Create Prediction** page, where you can make single record predictions. |
    | ![](images/icon-6.png) | Add Data | Uploads batch prediction data&mdash;from the **AI Catalog** or a local file. |
    | ![](images/icon-7.png) | All rows | Displays a history of predictions. Select a row to view prediction results for that entry. |

#### Support for Manual mode introduced to segmented modeling {: #support-for-manual-mode-introduced-to-segmented-modeling }

With this release, you can now use Manual mode with [segmented modeling](ts-segmented). Previously, you could only choose Quick or full Autopilot. When using Manual mode with segmented modeling, DataRobot creates individual projects per segment and completes preparation as far as the modeling stage. However, DataRobot does not create per-project models. It does create the Combined Model (as a placeholder), but does not select a champion. Manual mode gives you full control over which models are trained in each segment and which are selected as champions, without waiting for Autopilot to build models.

#### Scoring code for time series projects {: #scoring-code-for-time-series-projects }

Now generally available, you can export time series models in a Java-based Scoring Code package.
[Scoring Code](scoring-code/index) is a portable, low-latency method of utilizing DataRobot models outside the DataRobot application. You can download a model's time series Scoring Code from the following locations: * [Download from the Leaderboard](sc-download-leaderboard) (**Leaderboard > Predict > Portable Predictions**) * [Download from the deployment](sc-download-deployment) (**Deployments > Predictions > Portable Predictions**) === "Download for segmented modeling projects" With [segmented modeling](ts-segmented), you can build individual models for segments of a multiseries project. DataRobot then merges these models into a Combined Model. You can [generate Scoring Code for the resulting Combined Model](sc-time-series#scoring-code-for-segmented-modeling-projects). To generate and download Scoring Code, each segment champion of the Combined Model must have Scoring Code: ![](images/sc-segmented-scoring-code.png) After you ensure each segment champion of the Combined Model has Scoring Code, you can download the Scoring Code [from the Leaderboard](sc-download-leaderboard) or you can deploy the Combined Model and download the Scoring Code [from the deployment](sc-download-deployment). === "Download with prediction intervals" You can now [include prediction intervals in the downloaded Scoring Code JAR](sc-time-series#prediction-intervals-in-scoring-code) for a time series model. You can download Scoring Code with prediction intervals [from the Leaderboard](sc-download-leaderboard) or [from a deployment](sc-download-deployment). ![](images/sc-prediction-intervals.png) === "Score data at the command line" You can score data at the command line using the downloaded time series Scoring Code. This release introduces efficient batch processing for time series Scoring Code to support scoring larger datasets. For more information, see the [Time series parameters for CLI scoring](sc-time-series#time-series-parameters-for-cli-scoring) documentation. 
For more details on time series Scoring Code, see [Scoring Code for time series projects](sc-time-series).

### Public Preview {: #public-preview }

#### Project duplication, with settings, for time series projects {: #project-duplication-with-settings-for-time-series-projects }

Now available for public preview, you can duplicate ("clone") any DataRobot project type, including unsupervised and time-aware projects like time series, OTV, and segmented modeling. Previously, [this capability](manage-projects#duplicate-a-project) was only available for AutoML projects (non-time-aware regression and classification).

Duplicating a project provides an option to select the dataset only&mdash;which is faster than re-uploading it&mdash;or a dataset and project settings. For time-aware projects, this means cloning the target, the feature derivation and forecast window values, any selected calendars, known in advance (KA) features, series IDs&mdash;all time series settings. If you used the [data prep](ts-data-prep#data-prep-for-time-series) tool to address irregular time step issues, cloning uses the modified dataset (which is the one used for model building in the parent project).

You can access the **Duplicate** option from either the projects dropdown (upper right corner) or the Manage Project page.

![](images/proj-dup.png)

**Required feature flag:** Enable Cloning Time-Aware and Unsupervised Projects with Project Settings

#### Multiclass support in No-Code AI Apps {: #multiclass-support-in-no-code-ai-apps }

In addition to binary classification and regression problems, No-Code AI Apps now support multiclass classification deployments across all three templates&mdash;Predictor, Optimizer, and What-if. This lets you leverage No-Code AI Apps for a broader range of business problems across several industries, expanding their benefits and value.
**Required feature flag:** Enable Application Builder Multiclass Support #### Details page added to time series Predictor applications {: #details-page-added-to-time-series-predictor-applications } In time series Predictor No-Code AI Apps, you can now view prediction information for specific predictions or dates, allowing you to not only see the prediction values, but also compare them to other predictions that were made for the same date. Previously, you could only view values for the prediction, residuals, and actuals, as well as the top three Prediction Explanations. To drill down into the prediction details, click on a prediction in either the **Predictions vs Actuals** or **Prediction Explanations** chart. This opens the Forecast details page, which displays the following information: ![](images/ts-pred-2.png) ![](images/ts-pred-detail-3.png) | | Description | |---|---| | ![](images/icon-1.png) | The average prediction value in the forecast window. | | ![](images/icon-2.png) | Up to 10 Prediction Explanations for each prediction. | | ![](images/icon-3.png) | Segmented analysis for each forecast distance within the forecast window. | | ![](images/icon-4.png) | Prediction Explanations for each forecast distance included in the segmented analysis. | **Required feature flag:** Enable Application Builder Time Series Predictor Details Page Public preview [documentation](ts-app#forecast-details-page). #### Text Prediction Explanations illustrate impact on an n-gram level {: #text-prediction-explanations-illustrate-impact-on-an-n-gram-level } With Text Prediction Explanations, you can understand how the individual words (n-grams) in a text feature influence predictions, helping to validate and understand the model and the importance it is placing on words. Previously, DataRobot evaluated the impact of text in a dataset as the impact of a text feature as a whole, potentially requiring reading the full text for best understanding. 
With Text Prediction Explanations, which use the standard color bar spectrum of blue (negative) to red (positive) impact, you can easily visualize and understand your text. An option to display unknown n-grams helps to identify, via gray highlight, those n-grams not recognized by the model (most likely because they were not seen during training).

![](images/text-predex-unknown.png)

**Required feature flag:** Enable Text Prediction Explanations

For more information, see the [Text Prediction Explanations](predex-text) documentation.

#### Blueprint toggle allows summary and detailed views from Leaderboard {: #blueprint-toggle-allows-summary-and-detailed-views-from-leaderboard }

Blueprints viewed from the Leaderboard’s **Blueprint** tab display, by default, in a read-only, summarized view, showing only those tasks used in the final model.

![](images/blue-toggle-1.png)

However, the original modeling algorithm often contains many more “branches,” which DataRobot prunes when they are not applicable to the project data and feature list. Now, you can toggle to see a detailed view while in read-only mode. Previously, viewing the full blueprint required entering the blueprint editor's edit mode.

![](images/blue-toggle-2.png)

**Required feature flag:** Enable Blueprint Detailed View Toggle

Public preview [documentation](blueprint-toggle).

### API enhancements {: #api-enhancements }

The following is a summary of new API features and enhancements. Go to the [API Documentation home](https://docs.datarobot.com/en/docs/api/index.html){ target=_blank } for more information on each client.

!!! tip

    DataRobot highly recommends updating to the latest API client for Python and R.

#### Calculate Feature Impact for each backtest {: #calculate-feature-impact-for-each-backtest }

Feature Impact provides a transparent overview of a model and is especially useful in a model's compliance documentation.
Time-dependent models trained on different backtests and holdout partitions can have different Feature Impact calculations for each backtest. Now generally available, you can calculate Feature Impact for each backtest using DataRobot's REST API, allowing you to inspect model stability over time by comparing Feature Impact scores from different backtests. ## Deprecation announcements {: #deprecation-announcements } #### Excel add-in removed {: #excel-add-in-removed } With deprecation announced in June, the existing DataRobot Excel Add-In is now removed from the product. Users who have already downloaded the add-in can continue using it, but it will not be supported or further developed. #### USER/Open Source models deprecated and soon disabled {: #user-open-source-models-deprecated-and-soon-disabled } With this release, all models containing USER/Open source (“user”) tasks are deprecated. The exact process of deprecating existing models will be rolling out over the next few months and implications will be announced in subsequent releases. See the full announcement in the [June Cloud Announcements](june2022-announce). #### Feature Fit insight disabled {: #feature-fit-insights-disabled } The **Feature Fit** visualization has been disabled. Any existing projects will no longer show the option from the Leaderboard, and new projects will not create the chart. Organization admins can re-enable it for their users until the tool is removed completely. Use the [**Feature Effects**](feature-effects) insight in place of **Feature Fit**, as it provides the same output. #### Auto-Tuned Word N-gram Text Modeler blueprints removed from the Leaderboard {: #auto-tuned-word-n-gram-text-modeler-blueprints-removed-from-the-leaderboard } With this release, Auto-Tuned Word N-gram Text Modeler blueprints are no longer run as part of Autopilot for binary classification, regression, and multiclass/multimodal projects. The modeler blueprints remain available in the repository. 
Currently, LightGBM (LGBM) models run these auto-tuned text modelers for each text column, and for each, a new blueprint is added to the Leaderboard. However, these Auto-Tuned Word N-gram Text Modelers are not linked to the original LGBM model (i.e., modifying them does not affect the original LGBM model). Now, Autopilot creates a single, larger blueprint for all Auto-Tuned Word N-gram Text Modeler tasks instead of one for each text column. Note that this change has no backward-compatibility issues; it applies to new projects only.

#### Hadoop deployment and scoring removed {: #hadoop-deployment-and-scoring-removed }

With this release, Hadoop deployment and scoring, including the Standalone Scoring Engine (SSE), is fully removed and unavailable.

_All product and company names are trademarks&trade; or registered&reg; trademarks of their respective holders. Use of them does not imply any affiliation with or endorsement by them_.
---
title: January 2023
description: Read release note announcements for DataRobot's generally available and public preview features released in January 2023.
---

# January 2023 {: #january-2023 }

_January 25, 2023_

This page provides announcements of newly released features available in the managed AI Platform, with links to additional resources. With the January deployment, DataRobot's managed AI Platform delivered the following new GA and Public Preview features. From the release center you can also access:

* [Cloud announcement history](cloud-history/index)
* [Public preview features](public-preview/index)
* [Self-Managed AI Platform release notes](archive-release-notes/index)

## In the spotlight {: #in-the-spotlight }

[DataRobot Notebooks](dr-notebooks/index) offer an enhanced code-first experience in the application. Notebooks provide a collaborative, code-first environment that accelerates the machine learning lifecycle. Reduce hundreds of lines of code, automate data science tasks, and accommodate custom code workflows specific to your business needs. See the full description [below](#datarobot-notebooks).

![](images/rn-notebooks-spotlight.png)

## January release {: #january-release }

The following table lists each new feature. See the [deployment history](cloud-history/index) for past feature announcements and the [**deprecation notices**](#deprecation-announcements), below.

??? abstract "Features grouped by capability"

    Name | GA | Public Preview
    ---------- | ---- | ---
    **Modeling** | :~~: | :~~:
    [Quick Autopilot mode improvements for faster experimentation](#quick-autopilot-mode-improvements-speed-experimentation) | ✔ | |
    [Time series clustering experience improvements](#time-series-clustering-experience-improvements) | ✔ | |
    [Time series 5GB support](#time-series-5gb-support) | ✔ | |
    [Time series project cloning goes GA](#time-series-project-cloning-goes-ga) | ✔ | |
    [Create AI Apps from Leaderboard models](#create-ai-apps-from-models-on-the-leaderboard) | ✔ | |
    [Feature Discovery memory improvements](#feature-discovery-memory-improvements) | ✔ | |
    **Predictions and MLOps** | :~~: | :~~:
    [Batch predictions for TTS and LSTM models](#batch-predictions-for-tts-and-lstm-models) | | ✔ |
    [Compliance documentation for models that don’t support null imputation](#compliance-documentation-for-models-that-dont-support-null-imputation) | ✔ | |
    [Feature drift word cloud for text features](#feature-drift-word-cloud-for-text-features) | ✔ | |
    [MLOps deployment logs](#mlops-deployment-logs) | ✔ | |
    [Model package artifact creation workflow](#model-package-artifact-creation-workflow) | | ✔ |
    [GitHub Actions for custom models](#github-actions-for-custom-models) | | ✔ |
    [**Public Preview: DataRobot Notebooks**](#datarobot-notebooks) | :~~: | :~~:

### GA {: #ga }

#### Quick Autopilot mode improvements speed experimentation {: #quick-autopilot-mode-improvements-speed-experimentation }

With this month’s release, [Quick](model-ref#quick-autopilot) Autopilot mode now uses a one-stage modeling process to build models and populate the Leaderboard in AutoML projects. In the new version of Quick, all models are trained at a max sample size&mdash;typically 64%. The specific number of Quick models run varies by project and target type.
DataRobot selects which models to run based on a variety of criteria, including target and performance metric, but as its name suggests, chooses only models with relatively short training runtimes to support quicker experimentation. Note that to maximize runtime efficiency, DataRobot no longer automatically generates and fits the [DR Reduced Features](feature-lists#automatically-created-feature-lists) list. (Fitting the reduced list requires retraining models.)

![](images/start-with-options.png)

#### Time series clustering experience improvements {: #time-series-clustering-experience-improvements }

This release brings enhancements to [time series clustering](ts-clustering), which was initially released as generally available in September 2022. Clustering enables you to easily group similar series to get a better understanding of your data or use them as input to time series [segmented modeling](ts-segmented). Clustering enhancements include:

* A toggle to control the 10% clustering buffer if you aren’t using the result for segmented modeling.
* Clarified project setup that removes extraneous feature lists and window setup.
* Clustering models, and their resulting segmented models, use a uniform quantity of data for predictions (with the size based on the training size for the original clustering model).

![](images/cluster-segmented-buffer.png)

#### Time series 5GB support {: #time-series-5gb-support }

With this deployment, time series projects on the DataRobot managed AI Platform can support datasets up to 5GB. Previously, the limit for time series projects on the cloud was 1GB. For more project- and platform-based information, see the [dataset requirements](file-types#time-series-file-import-sizes) reference.

#### Time series project cloning goes GA {: #time-series-project-cloning-goes-ga }

Now generally available, you can duplicate ("clone") unsupervised, time series, OTV, and segmented modeling projects.
Previously, [this capability](manage-projects#duplicate-a-project) was only available for AutoML regression and classification projects. Use the duplication feature to copy just the dataset or a variety of project settings and assets for faster project experimentation. ![](images/duplicate-1.png) #### Create AI Apps from models on the Leaderboard {: #create-ai-apps-from-models-on-the-leaderboard } You can now create No-Code AI Apps directly from trained models on the Leaderboard. To do so, select the model, click the new **Build app** tab, and select the template that best suits your use case. ![](images/app-leaderboard-2.png) Then, name the application, select an access type, and click **Create**. ![](images/app-leaderboard-3.png) The new app appears in the **Build app** tab of the Leaderboard model as well as the **Applications** tab. For more information, see the [documentation](create-app#from-the-leaderboard) for No-Code AI Apps. #### Feature Discovery memory improvements {: #feature-discovery-memory-improvements } Feature discovery projects now use less memory, improving overall performance and reducing the risk of error. #### Compliance documentation for models that don’t support null imputation {: #compliance-documentation-for-models-that-dont-support-null-imputation } To generate the Sensitivity Analysis section of the default **Automated Compliance Document** template, your custom model must support null imputation (the imputation of NaN values), or compliance documentation generation will fail. If the custom model doesn't support null imputation, you can use a specialized template to generate compliance documentation. In the **Report template** drop-down list, select **Automated Compliance Document (for models that do not impute null values)**. This template excludes the Sensitivity Analysis report and is only available for custom models. 
For more information, see [generating compliance documentation](reg-compliance#generate-compliance-documentation).

![](images/rn-comp-doc-no-imputation.png)

!!! note

    If this template option is not available for your version of DataRobot, you can download the <a href="../../mlops/deployment/registry/custom-template-for-models-without-null-imputation-regression.json" download>custom template for regression models</a> or the <a href="../../mlops/deployment/registry/custom-template-for-models-without-null-imputation-binary.json" download>custom template for binary classification models</a>.

#### Feature drift word cloud for text features {: #feature-drift-word-cloud-for-text-features }

The [Feature Details](data-drift#feature-details-chart) chart plots the differences in a feature's data distribution between the training and scoring periods, providing a bar chart to compare the percentage of records a feature value represents in the training data with the percentage of records in the scoring data. For text features, the feature drift bar chart is replaced with a word cloud, visualizing data distributions for each token and revealing how much each individual token contributes to data drift in a feature.

To access the feature drift word cloud for a text feature, open the **Data Drift** tab of a [drift-enabled](data-drift-settings) deployment. On the **Summary** tab, in the **Feature Details** chart, select a text feature from the dropdown list:

![](images/pp-drift-word-cloud-details.png)

!!! note

    Next to the **Export** button, you can click the settings icon (![](images/icon-gear.png)) and clear the **Display text features as word cloud** checkbox to disable the feature drift word cloud and view the standard chart:

    ![](images/pp-drift-word-cloud-disable.png)

For more information, see the Feature Details chart’s [Text features](data-drift#text-features) documentation.
#### MLOps deployment logs {: #mlops-deployment-logs } On the new **MLOps Logs** tab, you can view important deployment events. These events can help diagnose issues with a deployment or provide a record of the actions leading to the current state of the deployment. Each event has a type and a status. You can filter the event log by event type, event status, or time of occurrence, and you can view more details for an event on the Event Details panel. To access MLOps logs: 1. On a deployment's **Service Health** page, scroll to the **Recent Activity** section at the bottom of the page. 2. In the **Recent Activity** section, click **MLOps Logs**. 3. Under **MLOps Logs**, configure the log filters. 4. On the left panel, the **MLOps Logs** list displays deployment events with any selected filters applied. For each event, you can view a summary that includes the event name and status icon, the timestamp, and an event message preview. 5. Click the event you want to examine and review the **Event Details** panel on the right. ![](images/mlops-logs-details.png) For more information, see the Service Health tab’s [View MLOps Logs](service-health#view-mlops-logs) documentation. ### Public Preview {: #public-preview } #### DataRobot Notebooks {: #datarobot-notebooks } The DataRobot application now includes an in-browser editor to create and execute notebooks for data science analysis and modeling. Notebooks display computation results in various formats, including text, images, graphs, plots, tables, and more. You can customize output display by using open-source plugins. Cells can also contain Markdown rich text for commentary and explanation of the coding workflow. As you develop and edit a notebook, DataRobot stores a history of revisions that you can return to at any time. DataRobot Notebooks offer a dashboard that hosts notebook creation, upload, and management. 
Individual notebooks have containerized, built-in environments with commonly used machine learning libraries that you can easily set up in a few clicks. Notebook environments seamlessly integrate with DataRobot's API, allowing a robust coding experience supported by keyboard shortcuts for cell functions, in-line documentation, and saved environment variables for secrets management and automatic authentication. ![](images/notebooks-rn.png) Public preview [documentation](dr-notebooks/index). #### Batch predictions for TTS and LSTM models {: #batch-predictions-for-tts-and-lstm-models } Traditional Time Series (TTS) and Long Short-Term Memory (LSTM) models&mdash;sequence models that use autoregressive (AR) and moving average (MA) methods&mdash;are common in time series forecasting. Both AR and MA models typically require a complete history of past forecasts to make predictions. In contrast, other time series models only require a single row after feature derivation to make predictions. Previously, batch predictions couldn't accept historical data beyond the effective [feature derivation window (FDW)](glossary/index#feature-derivation-window) if the history exceeded the maximum size of each batch, while sequence models required complete historical data beyond the FDW. These requirements made sequence models incompatible with batch predictions. Enabling this public preview feature removes those limitations to allow batch predictions for TTS and LSTM models. Time series Autopilot still doesn't include TTS or LSTM model blueprints; however, you can access the model blueprints in the model [Repository](repository). To allow batch predictions with TTS and LSTM models, this feature: * Updates batch predictions to accept historical data up to the maximum batch size (equal to 50MB or approximately a million rows of historical data). * Updates TTS models to allow refitting on an incomplete history (if the complete history isn't provided). 
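Given the limits above, a client could sanity-check the size of a CSV history payload before submitting a batch prediction request. This helper is purely illustrative (it is not part of the DataRobot Python client); the 50MB and one-million-row figures come from this note:

```python
MAX_BATCH_BYTES = 50 * 1024 * 1024  # 50MB maximum batch size
MAX_BATCH_ROWS = 1_000_000          # approximately a million rows of history

def fits_in_batch(csv_payload: str) -> bool:
    """Return True if a CSV payload stays within both batch limits."""
    n_bytes = len(csv_payload.encode("utf-8"))
    n_rows = max(csv_payload.count("\n") - 1, 0)  # exclude the header row
    return n_bytes <= MAX_BATCH_BYTES and n_rows <= MAX_BATCH_ROWS

# Ten days of history for one series, well inside both limits.
history = "date,series_id,sales\n" + "\n".join(
    f"2022-01-{day:02d},A,{100 + day}" for day in range(1, 11)
) + "\n"
```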
If you don't provide sufficient forecast history at prediction time, you could encounter prediction inconsistencies. For more information on maintaining accuracy in TTS and LSTM models, see the [prediction accuracy considerations](pp-ts-tts-lstm-batch-pred#prediction-accuracy-considerations). With this feature enabled, you can access the [**Predictions > Make Predictions**](batch-pred) and [**Predictions > Job Definitions**](batch-pred-jobs) tabs of a deployed TTS or LSTM model. ![](images/pp-tts-lstm-batch-pred1.png) **Required feature flag**: Enable TTS and LSTM Time Series Model Batch Predictions Public preview [documentation](pp-ts-tts-lstm-batch-pred). #### Model package artifact creation workflow {: #model-package-artifact-creation-workflow } Now available as a public preview feature, the improved model package artifact creation workflow provides a clearer and more consistent path to model deployment with visible connections between a model and its associated model packages in the [Model Registry](reg-create). Using this new approach, when you deploy a model, you begin by providing model package details and adding the model package to the Model Registry. After you create the model package and allow the build to complete, you can deploy it by [adding the deployment information](add-deploy-info). === "Leaderboard workflow" 1. From the **Leaderboard**, select the model to use for generating predictions and then click **Predict > Deploy**. To follow best practices, DataRobot recommends that you first [prepare the model for deployment](model-rec-process#prepare-a-model-for-deployment). This process runs **Feature Impact**, retrains the model on a reduced feature list, and trains on a higher sample size, followed by the entire sample (latest data for date/time partitioned projects). 2. On the **Deploy model** tab, provide the required model package information, and then click **Add to Model Registry**. 3. Allow the model to build. 
The **Building** status can take a few minutes, depending on the size of the model. A model package must have a **Status** of **Ready** before you can deploy it. ![](images/pp-model-artifact-creation-building.png) 4. In the **Model Packages** list, locate the model package you want to deploy and click **Deploy**. ![](images/pp-model-artifact-creation5.png) 5. Add [deployment information and create the deployment](add-deploy-info). === "Model Registry workflow" 1. Click **Model Registry** > **Model Packages**. 2. Click the **Actions** menu for the model package you want to deploy, and then click **Deploy**. The **Status** column shows the build status of the model package. ![](images/pp-model-artifact-creation6.png) If you deploy a model package that has a **Status** of **N/A**, the build process starts: ![](images/pp-model-artifact-creation7.png) 3. Add [deployment information and create the deployment](add-deploy-info). !!! tip You can also open a model package from the Model Registry and deploy it from the **Package Info** tab. **Required feature flag**: Enable .mlpkg Artifact Creation for Model Packages Public preview [documentation](pp-model-pkg-artifact-creation). #### GitHub Actions for custom models {: #github-actions-for-custom-models } The custom models action manages custom inference models and their associated deployments in DataRobot via GitHub CI/CD workflows. These workflows allow you to create or delete models and deployments and modify settings. Metadata defined in YAML files enables the custom model action's control over models and deployments. Most YAML files for this action can reside in any folder within your custom model's repository. The YAML is searched, collected, and tested against a schema to determine if it contains the entities used in these workflows. For more information, see the [custom-models-action repository](https://github.com/datarobot-oss/custom-models-action){ target=_blank }. 
A [quickstart example](custom-model-github-action#github-actions-quickstart), provided in the documentation, uses a [Python Scikit-Learn model template](https://github.com/datarobot/datarobot-user-models/tree/master/model_templates/python3_sklearn){ target=_blank } from the [datarobot-user-models repository](https://github.com/datarobot/datarobot-user-models/tree/master/model_templates){ target=_blank }. After you configure the workflow and create a model and a deployment in DataRobot, you can access the commit information from the model's version info and package info and the deployment overview: === "Model version info" ![](images/pp-cus-model-github2.png) === "Model package info" ![](images/pp-cus-model-github4.png) === "Deployment overview" ![](images/pp-cus-model-github1.png) **Required feature flag**: Enable Custom Model GitHub CI/CD For more information, see the GitHub Actions for custom models [documentation](custom-model-github-action). ### Deprecation announcements {: #deprecation-announcements } #### Current status of Python 2 deprecation and removal {: #current-status-of-python-2-deprecation-and-removal } As of the January 2023 release, the following describes the state of the Python 2 removal: * Python 2 has been completely removed from the platform. * All Python 2 projects are disabled and compute workers are no longer able to process Python 2-related jobs. * All Python 2 deployments are now disabled and will, unless managed under a DataRobot-implemented individualized migration plan, return an HTTP 405 response to prediction requests. * The Portable Prediction Server (PPS) image no longer contains Python 2 and is not capable of serving Python 2 models using dual inference mode. The PPS image will only serve prediction requests for Python 3 models.
january2023-announce
--- title: AI Platform releases description: A monthly record of the new Public Preview and GA features announced for DataRobot's managed AI Platform. --- # AI Platform releases {: #ai-platform-releases } A monthly record of the new Public Preview and GA features announced for DataRobot's managed AI Platform. Deprecation announcements are also included and link to deprecation guides, as appropriate. * [May 2023 release announcements](may2023-announce) * [April 2023 release announcements](april2023-announce) * [March 2023 release announcements](march2023-announce) * [February 2023 release announcements](february2023-announce) * [January 2023 release announcements](january2023-announce) * [November 2022 release announcements](november2022-announce) * [October 2022 release announcements](october2022-announce) * [September 2022 release announcements](sept2022-announce) * [August 2022 release announcements](august2022-announce) * [July 2022 release announcements](july2022-announce) * [June 2022 release announcements](june2022-announce) * [May 2022 release announcements](may2022-announce) _All product and company names are trademarks&trade; or registered&reg; trademarks of their respective holders. Use of them does not imply any affiliation with or endorsement by them_.
index
--- title: May 2022 description: Read about DataRobot's new public preview and generally available features released in May 2022. --- # May 2022 {: #may-2022 } _May 24, 2022_ With the latest deployment, DataRobot's managed AI Platform delivered the following new GA and Public Preview features. See the [deployment history](cloud-history/index) for past feature announcements. ??? abstract "Features grouped by capability" Name | GA | Public Preview ---------- | ---- | --- **Data and integrations** | :~~: | :~~: [Bulk action capabilities added to the AI Catalog](#bulk-action-capabilities-added-to-the-ai-catalog) | ✔ | [Apache Kafka environment variables for Azure Event Hubs spoolers](#apache-kafka-environment-variables-for-azure-event-hubs-spoolers) | ✔ | [Improved performance when importing large datasets to the AI Catalog](#improved-performance-when-importing-large-datasets-to-the-ai-catalog) | | ✔ **Modeling** | :~~: | :~~: [Bias Mitigation functionality](#bias-mitigation-functionality) | ✔ | [Visual AI Image Embeddings visualization adds new filtering capabilities](#visual-ai-embeddings-visualization-adds-new-filtering-capabilities) | ✔ | [Reorder scheduled modeling jobs](#reorder-scheduled-modeling-jobs) | ✔ | [Text AI parameters now available via Composable ML](#text-ai-parameters-now-available-via-composable-ai) | | ✔ [Prediction Explanations for multiclass projects](#prediction-explanations-for-multiclass-projects) | | ✔ [Clustering for segmented modeling](#clustering-for-segmented-modeling) | | ✔ [New blueprints for time series projects](#new-blueprints-for-time-series-projects) | | ✔ **Predictions and MLOps** | :~~: | :~~: [Creation date sort for the Deployments inventory](#creation-date-sort-for-the-deployment-inventory) | ✔ | [Model and Deployment IDs on the Overview tab](#model-and-deployment-ids-on-the-overview-tab) | ✔ | [Rate limit enforcement events in the MLOps agent event log](#rate-limit-errors-in-the-mlops-agent-event-log) | | ✔ 
**Deprecation notices** | :~~: | :~~: [H2O and SparkML Scaleout blueprints](#deprecated-h20-and-sparkml-scaleout-blueprints) | | [Custom task unaltered data feature lists](#deprecated-custom-task-unaltered-data-feature-lists) | | ## GA {: #ga } These features have become generally available since the last release. ### Bulk action capabilities added to the AI Catalog {: #bulk-action-capabilities-added-to-the-ai-catalog } With this release, you can share, tag, download, and delete multiple AI Catalog assets at once, making working with these assets more efficient. In the AI Catalog, select the box to the left of the asset(s) you want to manage, then select the appropriate action at the top. ![](images/ai-bulk-action.png) For more information, see the documentation for [managing catalog assets](catalog-asset#bulk-actions-on-datasets). ### Apache Kafka environment variables for Azure Event Hubs spoolers {: #apache-kafka-environment-variables-for-azure-event-hubs-spoolers } The `MLOPS_KAFKA_CONFIG_LOCATION` environment variable was removed and replaced by new environment variables for Apache Kafka spooler configuration. These new environment variables eliminate the need for a separate configuration file and simplify support for Azure Event Hubs as a spooler type. For more information on Apache Kafka spooler configuration, see the [Apache Kafka](spooler#apache-kafka) environment variables reference. For more information on leveraging the Apache Kafka spooler type to use a Microsoft Azure Event Hubs spooler, see the [Azure Event Hubs](spooler#azure-event-hubs) spooler configuration reference. ### Bias Mitigation functionality {: #bias-mitigation-functionality } Bias mitigation is now available as a generally available feature for binary classification projects. 
To clarify relationships between the parent model and any child models with mitigation applied, this release adds a table&mdash;**Models with Mitigation Applied**&mdash;accessible from the parent model on the Leaderboard. Bias Mitigation works by augmenting blueprints with a pre- or post-processing task, causing the blueprint to then attempt to reduce bias across classes in a protected feature. You can apply mitigation either automatically (as part of Autopilot) or manually (after Autopilot completes). When run automatically, you set mitigation criteria as a part of the Bias and Fairness advanced option settings. Autopilot then applies mitigation to the top three Leaderboard models. Or, once Autopilot completes, you can apply mitigation to any non-blender, unmitigated model available from the Leaderboard. Finally, compare mitigated versus unmitigated models from the Bias vs Accuracy insight. ![](images/bias-mit-13.png) See the [Bias Mitigation](fairness-metrics#set-mitigation-techniques) documentation for more information. ### Visual AI Image Embeddings visualization adds new filtering capabilities {: #visual-ai-embeddings-visualization-adds-new-filtering-capabilities } The **Understand > Image Embeddings** tab helps you visualize the predicted results for your project. Now, DataRobot calculates predicted values for the images and allows you to filter by those predictions. In addition, for select project types you can modify the prediction threshold (which may change the predicted label) and filter based on the new results. The image below shows all filtering options&mdash;new and existing&mdash;for all supported project types. ![](images/image-embeddings-all.png) In addition, usability enhancements for clusters make exploring Visual AI results easier. With clustering, images display colored borders to indicate the predicted cluster. 
### Reorder scheduled modeling jobs {: #reorder-scheduled-modeling-jobs } You can now change the order of scheduled modeling and prediction jobs in your project’s Worker Queue&mdash;allowing you to run more important jobs sooner. For more information, see the [Worker Queue](worker-queue#reorder-workers) documentation. ### Creation date sort for the Deployments inventory {: #creation-date-sort-for-the-deployment-inventory } The deployment inventory on the **Deployments** page is now sorted by creation date (from most recent to oldest, as reported in the new **Creation Date** column). You can click a different column title to sort by that metric instead. A blue arrow appears next to the sort column's header, indicating whether the order is ascending or descending. ![](images/rn-deploy-tab-sort-create-date.png) !!! note When you sort the deployment inventory, your most recent sort selection persists in your local settings until you clear your browser's local storage data. As a result, the deployment inventory is usually sorted by the column you selected last. For more information, see the [Deployment inventory](deploy-inventory) documentation. ### Model and Deployment IDs on the Overview tab {: #model-and-deployment-ids-on-the-overview-tab } The **Content** section of the **Overview** tab lists a deployment's model and environment-specific information, now including the following IDs: ![](images/rn-deploy-overview-ids.png) * **Model ID:** Copy the ID number of the deployment's current model. * **Deployment ID:** Copy the ID number of the current deployment. In addition, you can find a deployment's model-related events under **History** > **Logs**, including the creation and deployment dates and any model replacement events. From this log, you can copy the **Model ID** of any previously deployed model. ![](images/rn-deploy-log-ids.png) For more information, see the deployment [Overview tab](dep-overview) documentation. 
## Public Preview {: #public-preview } These features have entered the Public Preview program since the last release. Contact your DataRobot representative or administrator for information on enabling any of them. ### Improved performance when importing large datasets to the AI Catalog {: #improved-performance-when-importing-large-datasets-to-the-ai-catalog } When uploading a large dataset to the AI Catalog via the REST API, you can use a [data stage](glossary/index#data-stage)&mdash;intermediary storage that supports multipart upload of large datasets&mdash;to reduce the chance of failure. Once the dataset is whole and finalized in the data stage, it can then be pushed to the AI Catalog. ### Text AI parameters now available via Composable ML {: #text-ai-parameters-now-available-via-composable-ai } The ability to modify certain Text AI preprocessing tasks (Lemmatizer, PosTagging, and Stemming) is moving from the Advanced Tuning tab to blueprint tasks accessible via Composable ML. The new Text AI preprocessing tasks unlock additional pathways to create unique text blueprints. For example, you can now use lemmatization in any text model that supports that preprocessing task instead of being limited to TF-IDF blueprints. **Required feature flag:** Enable Text AI Composable Vertices ### Prediction Explanations for multiclass projects {: #prediction-explanations-for-multiclass-projects } DataRobot now calculates explanations for each class in an XEMP-based multiclass classification project, both from the Leaderboard and from deployments. With multiclass, you can set the number of classes to compute for as well as select a mode from predicted or actual (if using training data) results or specify to see only a specific set of classes: ![](images/mc-predex-3.png) This capability helps especially with projects that require “humans-in-the-loop” to review multiple options. 
Previously, comparisons required building several binary classification models and using scripting to evaluate them. When building a multiclass project, Prediction Explanations can help improve models by highlighting, for example, where a model is too accurate (potential leakage?), where residuals are too large (some data could be missing?), or where a model can’t clearly distinguish two classes (some data could be missing?). ### Updated NLP Autopilot with better language support {: #updated-nlp-autopilo-with-better-language-support } This release brings a host of natural language processing (NLP) improvements, the most impactful of which is the application of FastText for language detection at data ingest. Then, DataRobot generates the appropriate blueprints, with parameters optimized for that language. It adapts tokenization to the detected language, for better word clouds and interpretability. Additionally, specific blueprint training heuristics are triggered, so that accuracy-optimized Advanced Tuning settings are applied. This feature works with multilingual use cases as well; Autopilot will detect multiple languages and adjust various blueprint settings for the greatest accuracy. Additionally, the following NLP enhancements are part of this release: * New pre-trained BPE tokenizer (which can handle any language). * Refined Keras blueprints for NLP for improved accuracy and training time. * Various improvements across other NLP blueprints. * New Keras blueprints (with the BPE tokenizer) in the Repository. ### Clustering for segmented modeling {: #clustering-for-segmented-modeling } Clustering, an unsupervised learning technique, can be used to identify natural segments in your data. DataRobot now allows you to use clustering to discover the segments to be used for [segmented modeling](ts-segmented). ![](images/rn-pp-ts-cluster-segmented-new.png) This workflow builds a clustering model and uses the model to help define the segments for a segmented modeling project. 
![](images/rn-pp-ts-cluster-segmented-cluster-tile.png) A new **Use for Segmentation** tab lets you enable the clusters to be used in the segmented modeling project. ![](images/rn-pp-ts-cluster-segmented-use-for-segmentation.png) The clustering model is saved as a model package in the Model Registry, so that you can use it for subsequent segmented modeling projects. Alternatively, you can save the clustering model to the Model Registry explicitly, without creating a segmented modeling project immediately. In this case, you can later create a segmented modeling project using the saved clustering model package. ![](images/rn-pp-ts-cluster-segmented-existing.png) **Required feature flag:** Enable Time Series Clustering to Segmentation Flow [Now GA](ts-clustering). ### New blueprints for time series projects {: #new-blueprints-for-time-series-projects } DataRobot now supports native Prophet, ETS, and TBATS models for single series projects. For multiseries projects, these models can be used per series. To view a detailed description for a model, open the model blueprint (Models > Describe > Blueprint), click the model block in the blueprint, and click DataRobot Model Docs. **Required feature flags:** * Enable Native Prophet Blueprints for Time Series Projects * Enable Series Performance Blueprints for Time Series Projects ### Rate limit enforcement events in the MLOps agent event log {: #rate-limit-errors-in-the-mlops-agent-event-log } Now available for public preview, on a deployment's **Service Health** tab, under **Monitoring** events, you can view MLOps agent events indicating that the API **Rate limit was enforced**. If you haven't installed and configured the MLOps agent, see the [Installation and configuration](agent) guide. **Required feature flag:** Enable MLOps management agent. Read the [documentation](agent-event-log). 
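When the event log reports that the rate limit was enforced, the usual client-side remedy is to slow down and retry. Exponential backoff is a generic pattern for this; the sketch below is plain Python for illustration and is not an MLOps agent configuration option:

```python
def backoff_delays(max_retries=5, base=1.0, cap=60.0):
    """Delay (in seconds) before each retry: base * 2^attempt, capped."""
    return [min(cap, base * (2 ** attempt)) for attempt in range(max_retries)]

delays = backoff_delays()  # delays double each attempt until the cap is reached
```

In practice, clients often add random jitter to these delays so that many throttled reporters don't retry in lockstep.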
## Deprecation notices {: #deprecation-notices } The following deprecation announcements help track the state of changes to DataRobot's managed AI Platform. ### Deprecated: H2O and SparkML Scaleout blueprints {: #deprecated-h20-and-sparkml-scaleout-blueprints } In June 2021, DataRobot deprecated scaleout functionality by disallowing blueprint creation. Now, scaleout models have been fully disabled. All actions&mdash;including running a scaleout blueprint or getting predictions from a scaleout model&mdash;are no longer available from the product. ### Deprecated: Custom task unaltered data feature lists {: #deprecated-custom-task-unaltered-data-feature-lists } This deprecation notice refers to unaltered-data feature lists (also previously known as super raw feature lists) which are currently available when using custom tasks. These lists will no longer be available beginning on October 25, 2022. _What’s changing?_ Previously, when a blueprint contained only custom tasks, DataRobot created an unaltered-data feature list (in other words, all features from the dataset) for the project. After October 25, 2022, the unaltered-data feature lists will no longer be available. _How does this affect you?_ After the disablement date, you will no longer be able to select unaltered-data feature lists and existing unaltered-data feature lists will be removed. This may impact your existing projects, so carefully examine projects that are currently using unaltered-data feature lists and migrate them as appropriate. _All product and company names are trademarks&trade; or registered&reg; trademarks of their respective holders. Use of them does not imply any affiliation with or endorsement by them_.
may2022-announce
--- title: June 2022 description: Read about DataRobot's new public preview and generally available features released in June 2022. --- # June 2022 {: #june-2022 } _June 28, 2022_ With the latest deployment, DataRobot's managed AI Platform delivered the following new GA and Public Preview features. See the [deployment history](cloud-history/index) for past feature announcements. See also: * [**API enhancements**](#api-enhancements) * [**Deprecation notices**](#deprecation-announcements) ??? abstract "Features grouped by capability" Name | GA | Public Preview ---------- | ---- | --- **Data and integrations** | :~~: | :~~: [Improved join feature type compatibility in Feature Discovery](#improved-join-feature-type-compatibility-in-feature-discovery) | ✔ | [Feature Discovery explores Latest features within an FDW by default](#feature-discovery-explores-latest-features-within-an-fdw-by-default) | ✔ | **Modeling** | :~~: | :~~: [“Uncensored” blueprints now available to all users](#uncensored-blueprints-now-available-to-all-users) | ✔ | [New Leaderboard and Repository filtering options](#new-leaderboard-and-repository-filtering-options) | ✔ | [Multiclass Prediction Explanations for XEMP](#multiclass-prediction-explanations-for-xemp) | ✔ | [New metric support for segmented projects](#new-metric-support-for-segmented-projects) | ✔ | [Native Prophet and Series Performance blueprints](#native-prophet-and-series-performance-blueprints) | ✔ | [Multiclass support in No-Code AI Apps](#multiclass-support-in-no-code-ai-apps) | | ✔ **Predictions and MLOps** | :~~: | :~~: [Autoexpansion of time series input in Prediction API](#autoexpansion-of-time-series-input-in-prediction-api) | ✔ | [MLOps management agent](#mlops-management-agent) | ✔ | [Large-scale monitoring with the MLOps library](#large-scale-monitoring-with-the-mlops-library) | ✔ | [MLOps Java library and agent public release](#mlops-java-library-and-agent-public-release) | ✔ | [MLOps monitoring agent event 
log](#mlops-monitoring-agent-event-log) | ✔ | [Prediction API cURL scripting code](#prediction-api-curl-scripting-code) | ✔ | [Deployment for time series segmented modeling](#deployment-for-time-series-segmented-modeling) | | ✔ ### GA {: #ga } #### “Uncensored” blueprints now available to all users {: #uncensored-blueprints-now-available-to-all-users } Previously, depending on an organization’s configuration, DataRobot users had visibility to either censored or uncensored blueprints. The difference between the settings was reflected in the preprocessing details shown in a model’s [**Blueprint**](blueprints) tab (the graphical representation of the data preprocessing and parameter settings). With this release, all users will be able to see the specific algorithms DataRobot uses. (Note that there is no functional change for those who already have uncensored blueprints.) Additional capabilities with uncensored blueprints: * More options from within [Composable ML](cml/index). * Access to the [Data Quality Handling Report](dq-report). * More complete model documentation (by clicking DataRobot Model docs from inside the blueprint’s tasks). ![](images/blueprint-uncensored.png) #### Improved join feature type compatibility in Feature Discovery {: #improved-join-feature-type-compatibility-in-feature-discovery } In Feature Discovery projects, you can now join secondary datasets using columns of different types. Previously, columns had to be the same type to execute a join. For information on join compatibility, see the [Feature Discovery](fd-overview#set-join-conditions) documentation. #### Feature Discovery explores Latest features within an FDW by default {: #feature-discovery-explores-latest-features-within-an-fdw-by-default } As part of the Feature Discovery process, DataRobot now defaults to a new setting, _Latest within window_, when performing feature engineering. 
This new setting explores _Latest_ values within the defined feature discovery window (FDW), as opposed to _Latest_, which generates _Latest_ values by exploring all historical data up until the end point of any defined FDWs. You can change the default settings in **Feature Discovery Settings > Feature Engineering**. ![](images/safer-feature-engineering-controls.png) For more information, see the [Feature Discovery](fd-overview#feature-engineering-controls) documentation. #### New Leaderboard and Repository filtering options {: #new-leaderboard-and-repository-filtering-options } With this release, you can now limit the Leaderboard or Repository to display models/blueprints matching the selected filters. Leaderboard filters allow you to set options categorized as: sample size&mdash;or for time series projects, training period&mdash;model family, model characteristics, feature list, and more. Repository filtering includes blueprint characteristics, families, and types. The new, enhanced filtering options are centralized in a single modal (one for the Leaderboard and one for the Repository), where previously, the more limited methods for filtering were in separate locations. ![](images/leaderboard-filter-1.png) See the [Leaderboard reference](leaderboard-ref#use-leaderboard-filters) for more information. #### Multiclass Prediction Explanations for XEMP {: #multiclass-prediction-explanations-for-xemp } Now generally available, DataRobot calculates explanations for each class in an XEMP-based multiclass classification project, both from the Leaderboard and from deployments. With multiclass, you can set the number of classes to compute for as well as select a mode from predicted or actual (if using training data) results or specify to see only a specific set of classes. ![](images/mc-predex-3.png) See the section on [XEMP Prediction Explanations for Multiclass](xemp-pe#multiclass-prediction-explanations) for more information. 
#### New metric support for segmented projects {: #new-metric-support-for-segmented-projects } Combined Models, the main umbrella project that acts as a collection point for all segments in a time series [segmented modeling project](ts-segmented), introduces support for RMSE-based metrics. In addition to earlier support for MAD, MAE, MAPE, MASE, and SMAPE, segmented projects now also support RMSE, RMSLE, and Theil’s U (weighted and unweighted). #### Native Prophet and Series Performance blueprints {: #native-prophet-and-series-performance-blueprints } For time series projects, support for native Prophet, ETS, and TBATS models for single and multiseries projects is now generally available. A detailed model description can be found for each model by accessing the model [blueprint](blueprints). #### Autoexpansion of time series input in Prediction API {: #autoexpansion-of-time-series-input-in-prediction-api } When making predictions with time series models via the API using a forecast point, you can now skip the forecast window in your prediction data. DataRobot generates a forecast point automatically via autoexpansion. Autoexpansion applies automatically if predictions are made for a specific forecast point and not a forecast range. It also applies if a time series project has a regular time step and does not use Nowcasting. #### MLOps management agent {: #mlops-management-agent } Now generally available, the MLOps management agent provides a standard mechanism for automating model deployments in any type of environment or infrastructure. The management agent supports models trained on DataRobot, or models trained with open source tools on external infrastructure. The agent, accessed from the DataRobot application, ships with an assortment of example plugins that support custom configurations. Use the management agent to automate the deployment and monitoring of models to ensure your machine learning pipeline is healthy and reliable. 
This release introduces usability improvements to the management agent, including [deployment status reporting](mgmt-agent-events-status#deployment-status), [deployment relaunch](mgmt-agent-relaunch), and the option to [force the deletion of a management agent deployment](mgmt-agent-delete).

![](images/mgmt-agent-1.png)

For more information on agent installation, configuration, and operation, see the [MLOps management agent](mgmt-agent/index) documentation.

#### Large-scale monitoring with the MLOps library {: #large-scale-monitoring-with-the-mlops-library }

To support large-scale monitoring, the MLOps library provides a way to calculate statistics from raw data on the client side. Then, instead of reporting raw features and predictions to the DataRobot MLOps service, the client can report anonymized statistics without the feature and prediction data. Reporting prediction data statistics calculated on the client side is the optimal method compared to reporting raw data, especially at scale (with billions of rows of features and predictions). In addition, because client-side aggregation only sends aggregates of feature values, it is suitable for environments where you don't want to disclose the actual feature values.

The large-scale monitoring functionality is available for the Java Software Development Kit (SDK) and the MLOps Spark Utils Library:

=== "Java SDK"
    Replace calls to `reportPredictionsData()` with calls to `reportAggregatePredictionsData()`.

=== "MLOps Spark Utils Library"
    Replace calls to `reportPredictions()` with calls to `predictionStatisticsParameters.report()`. You can find an example of this use case in the agent `.tar` file in `examples/java/PredictionStatsSparkUtilsExample`.

!!! note
    To support the use of challenger models, you must send raw features. For large datasets, you can report a small sample of raw feature and prediction data to support challengers and reporting; then, you can send the remaining data in aggregate format.
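The idea behind client-side aggregation can be illustrated with a short, self-contained sketch. Note that this is a conceptual illustration only, not the MLOps library's actual API; the function and field names below are hypothetical:

```python
from collections import Counter

def aggregate_feature_stats(rows, numeric_features, categorical_features):
    """Conceptual sketch: summarize raw prediction rows into aggregate
    statistics, so that only aggregates (not raw feature values) need to
    be reported to the monitoring service."""
    stats = {}
    for feature in numeric_features:
        # Collect non-missing values and reduce them to summary statistics.
        values = [row[feature] for row in rows if row.get(feature) is not None]
        stats[feature] = {
            "count": len(values),
            "min": min(values),
            "max": max(values),
            "sum": sum(values),
        }
    for feature in categorical_features:
        # For categoricals, report per-category counts instead of raw values.
        stats[feature] = dict(
            Counter(row[feature] for row in rows if row.get(feature) is not None)
        )
    return stats
```

With billions of rows, reporting a small dictionary of summaries per feature instead of the raw rows is what makes monitoring tractable at scale.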
This use case is described in the [Monitoring agent use case](agent-use#enable-large-scale-monitoring) documentation.

#### MLOps Java library and agent public release {: #mlops-java-library-and-agent-public-release }

You can now download the MLOps Java library and agent from the public [Maven Repository](https://mvnrepository.com/){ target=_blank } with a `groupId` of `com.datarobot` and an `artifactId` of `datarobot-mlops` (library) and `mlops-agent` (agent). In addition, you can access the [DataRobot MLOps Library](https://mvnrepository.com/artifact/com.datarobot/datarobot-mlops){ target=_blank } and [DataRobot MLOps Agent](https://mvnrepository.com/artifact/com.datarobot/mlops-agent){ target=_blank } artifacts in the Maven Repository to view all versions and to download and install the JAR files.

#### MLOps monitoring agent event log {: #mlops-monitoring-agent-event-log }

Now generally available, on a deployment's **Service Health** tab, under **Recent Activity**, you can view **Management** events (e.g., deployment actions) and **Monitoring** events (e.g., spooler channel and rate limit events). **Monitoring** events can help you quickly diagnose MLOps agent issues. For example, spooler channel error events can help you diagnose and fix [spooler configuration](spooler) issues, while rate limit enforcement events can help you identify whether service health stats or data drift and accuracy values aren't updating because you exceeded the API request rate limit.

![](images/rn-mlops-agent-events.png)

To view **Monitoring** events, you must provide a `predictionEnvironmentID` in the agent configuration file (`conf/mlops.agent.conf.yaml`). If you haven't already installed and configured the MLOps agent, see the [Installation and configuration](agent) guide. For more information on enabling and reading the monitoring agent event log, see the [Monitoring agent event log](agent-event-log) documentation.
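The `predictionEnvironmentID` setting mentioned above might look like the following in the agent configuration file. This is a minimal, hypothetical fragment: all other required agent settings are omitted, the value is a placeholder, and the exact key spelling may vary by agent version, so check the configuration reference for your installed release.

```yaml
# conf/mlops.agent.conf.yaml (fragment; placeholder value shown)
predictionEnvironmentID: "<your-prediction-environment-id>"
```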
#### Prediction API cURL scripting code {: #prediction-api-curl-scripting-code }

The Prediction API Scripting Code section on a deployment's **Predictions** > **Prediction API** tab now includes a **cURL** scripting code snippet for **Real-time** predictions. cURL is a command-line tool for transferring data using various network protocols, available by default in most Linux distributions and in macOS. For more information on Prediction API cURL scripting code, see the [Real-time prediction snippets](code-py#real-time-prediction-snippet-settings) documentation.

### Public Preview {: #public-preview }

#### Multiclass support in No-Code AI Apps {: #multiclass-support-in-no-code-ai-apps }

In addition to binary classification and regression problems, No-Code AI Apps now support multiclass classification deployments across all three templates&mdash;Predictor, Optimizer, and What-if. This lets you leverage No-Code AI Apps for a broader range of business problems across several industries, expanding their benefits and value.

**Required feature flag:** Enable Application Builder Multiclass Support

#### Deployment for time series segmented modeling {: #deployment-for-time-series-segmented-modeling }

Now available for public preview, you can deploy Combined Models as you would deploy any other time series model, allowing you to fully leverage the value of segmented modeling. After selecting the champion model for each included project, you can deploy the Combined Model to bring predictions into production. Creating a deployment allows you to use DataRobot MLOps for accuracy monitoring, prediction intervals, and challenger models.

!!! note
    Time series segmented modeling deployments do not support data drift monitoring, prediction explanations, or retraining.
##### Deploy a time series Combined Model {: #deploy-a-time-series-combined-model }

After you complete the [segmented modeling workflow](ts-segmented#segmented-modeling-workflow), you can deploy the resulting Combined Model to bring its predictions into production. Once Autopilot has finished, the **Models > Leaderboard** tab contains one model. This model is the completed **Combined Model**. To deploy, click the **Combined Model**, click **Predict > Deploy**, and then click **Deploy model**.

![](images/ts-segmented-deploy-2.png)

##### Modify and clone a deployed Combined Model {: #modify-and-clone-a-deployed-combined-model }

Once a Combined Model is deployed, changing a segment's champion requires cloning the deployed Combined Model and modifying the clone. This process is automatic and occurs when you attempt to change a segment's champion within a deployed Combined Model. The cloned model you can modify becomes the **Active Combined Model**. This process ensures stability in the deployed model while allowing you to test changes within the same segmented project.

!!! note
    Only one Combined Model in a project can be the **Active Combined Model** (marked with a badge).

Once a **Combined Model** is deployed, it is labeled **Prediction API Enabled**. To modify this model, click the active and deployed **Combined Model**, and then, in the **Segments** tab, click the segment you want to modify.

![](images/ts-segmented-deploy-3.png)

Next, [reassign the segment champion](ts-segmented#reassign-the-champion-model), and in the dialog box that appears, click **Yes, create new combined model**.

![](images/ts-segmented-deploy-4.png)

On the segment's **Leaderboard**, you can now access and modify the **Active Combined Model**.

![](images/ts-segmented-deploy-5.png)

**Required feature flag**: Enable Time Series Segmented Deployments Support

For more information, see the segmented modeling [documentation](ts-segmented#deploy-a-combined-model).
### API enhancements {: #api-enhancements }

The following is a summary of new API features and enhancements. Go to the [API Documentation home](https://docs.datarobot.com/en/docs/api/index.html){ target=_blank } for more information on each client.

!!! tip
    DataRobot highly recommends updating to the latest API clients for Python and R.

#### Access DataRobot REST API documentation from docs.datarobot.com {: #access-datarobot-rest-api-documentation-from-docs-datarobot-com }

DataRobot now offers REST API documentation [directly from the public documentation hub](https://docs.datarobot.com/en/docs/api/reference/public-api/index.html){ target=_blank }. Previously, REST API docs were only accessible through the application. Now, you can access information about REST endpoints and parameters in the API reference section of the public documentation site.

#### Public preview: Set default credentials in the credential store {: #public-preview-set-default-credentials-in-the-credential-store }

For a given resource in the credential store, you can make the associated credentials the default set. When calling the REST API directly, you can request the default credentials using the newly implemented API routes for credential associations:

- `PUT /api/v2/credentials/(credentialId)/associations/(associationId)/`
- `GET /api/v2/credentials/associations/(associationId)/`

## Deprecation announcements {: #deprecation-announcements }

#### Auto-Tuned Word N-gram Text Modeler blueprints removed from Leaderboard {: #auto-tuned-word-n-gram-text-modeler-blueprints-removed-from-leaderboard }

On July 6, 2022, Auto-Tuned Word N-gram Text Modeler blueprints will no longer run as part of Autopilot for binary classification, regression, and multiclass/multimodal projects. The modeler blueprints will remain available in the repository. Currently, Light GBM (LGBM) models run these auto-tuned text modelers for each text column, and for each, a new blueprint is added to the Leaderboard.
However, these Auto-Tuned Word N-gram Text Modelers are not linked to the original LGBM model (i.e., modifying them does not affect the original LGBM model). Once disabled, Autopilot will create a single, larger blueprint for all Auto-Tuned Word N-gram Text Modeler tasks instead of one for each text column. Note that this change has no backward-compatibility issues; it applies to new projects only.

#### Feature Fit insight to be disabled in July {: #feature-fit-insight-to-be-disabled-in-july }

Beginning in July 2022, the **Evaluate > Feature Fit** insight will be disabled. Existing projects will no longer show the option on the Leaderboard, and new projects will not create the chart. Organization admins can re-enable it for their users until the tool is removed completely. The [**Feature Effects**](feature-effects) insight can be used in place of **Feature Fit**, as it provides the same output.

#### USER/Open Source models deprecated and soon disabled {: #user-open-source-models-deprecated-and-soon-disabled }

With this release, all models containing USER/Open Source ("user") tasks are deprecated. The exact process of deprecating existing models will roll out over the next few months, and the implications will be announced in subsequent releases.

??? tip "Identifying affected models"
    To determine whether your model is deprecated, you can open the blueprint, where you will see the task name:

    ![](images/rn-oss-deprecate.png)

    Additionally, you can see the task listed in the model description on the Leaderboard.

DataRobot is making this change now because [Composable ML](cml/index) allows you to create custom models instead of using a USER model. Separately, there are native solutions for all currently supported open-source models. Eliminating and replacing existing USER/Open Source models addresses any potential security concerns.
At this time, any users who have generated predictions (via the Leaderboard or a deployment) with the deprecated models in the last six months have been contacted and provided with a migration plan. If you believe you use such models for predictions and have not been contacted, get in touch with your DataRobot representative.

#### Excel add-in deprecated, to be removed July 2022 {: #excel-add-in-deprecated-to-be-removed-July-2022 }

The existing DataRobot Excel Add-In is deprecated and will be removed in July 2022. Although users who have already downloaded the add-in can continue using it, it will not be supported or further developed. If you need the add-in, you can download it until July 20, 2022.

_All product and company names are trademarks&trade; or registered&reg; trademarks of their respective holders. Use of them does not imply any affiliation with or endorsement by them._
---
title: March 2023
description: Read release note announcements for DataRobot's generally available and public preview features released in March 2023.
---

# March 2023 {: #march-2023 }

_March 22, 2023_

This page provides announcements of newly released features available in DataRobot's SaaS single- and multi-tenant AI Platform, with links to additional resources. With the March deployment, DataRobot's AI Platform delivered the following new GA and Public Preview features. From the release center, you can also access:

* [Monthly deployment announcement history](cloud-history/index)
* [Public preview features](public-preview/index)
* [Self-Managed AI Platform release notes](archive-release-notes/index)

## March release {: #march-release }

The following table lists each new feature. See the [deployment history](cloud-history/index) for past feature announcements, as well as the [**deprecation notices**](#deprecation-announcements) below.

??? abstract "Features grouped by capability"

    Name | GA | Public Preview
    ---------- | ---- | ---
    **Data** | :~~: | :~~:
    [New driver versions](#new-driver-versions) | ✔ |
    **Modeling** | :~~: | :~~:
    [Reduced feature lists restored in Quick Autopilot mode](#reduced-feature-lists-restored-in-quick-autopilot) | ✔ |
    [Details page added to time series applications](#details-page-added-to-time-series-applications) | ✔ |
    [Increased prediction limit for No-Code AI Apps](#increased-prediction-limit-for-no-code-ai-apps) | | ✔
    **Predictions and MLOps** | :~~: | :~~:
    [Assign training data to a custom model version](#assign-training-data-to-a-custom-model-version) | ✔ |
    **API enhancements** | :~~: | :~~:
    [Python client v3.1](#python-client-v31) | ✔ |

### GA {: #ga }

#### Reduced feature lists restored in Quick Autopilot mode {: #reduced-feature-lists-restored-in-quick-autopilot }

With this release, Quick mode reintroduces the creation of a reduced feature list when [preparing a model for deployment](model-rec-process).
In January, DataRobot made Quick mode enhancements for [AutoML](january2023-announce#quick-autopilot-mode-improvements-speed-experimentation); in February, the improvement was made available for [time series projects](february2023-announce#quick-autopilot-improvements-now-available-for-time-series). At that time, DataRobot stopped automatically generating and fitting the DR Reduced Features list, as fitting required retraining models. Now, based on user requests, when recommending and preparing a model for deployment, DataRobot once again creates the reduced feature list. The process, however, does not include model fitting. To apply the list to the recommended model—or any Leaderboard model—you can manually retrain it.

#### Details page added to time series applications {: #details-page-added-to-time-series-applications }

In the [Time Series Forecasting widget](ts-app#forecast-details-page), you can now view prediction information for specific predictions or dates, allowing you to not only see the prediction values, but also compare them to other predictions made for the same date. To drill down into the prediction details, click a prediction in either the **Predictions vs Actuals** or **Prediction Explanations** chart. This opens the Forecast details page, which displays the following information:

![](images/ts-pred-2.png)

![](images/ts-pred-detail-3.png)

| | Description |
|---|---|
| ![](images/icon-1.png) | The average prediction value in the forecast window. |
| ![](images/icon-2.png) | Up to 10 Prediction Explanations for each prediction. |
| ![](images/icon-3.png) | Segmented analysis for each forecast distance within the forecast window. |
| ![](images/icon-4.png) | Prediction Explanations for each forecast distance included in the segmented analysis. |

#### New driver versions {: #new-driver-versions }

With this release, the following driver versions have been updated:

- AWS Athena==2.0.35
- SAP Hana==2.15.10

See the complete list of [supported driver versions](data-sources/index) in DataRobot.

#### Assign training data to a custom model version {: #assign-training-data-to-a-custom-model-version }

To enable feature drift tracking for a custom model deployment, you must add training data. Currently, when you add training data, you assign it directly to the custom model. As a result, every version of that model uses the same data. In this release, the assignment of training data directly to a custom model is deprecated and scheduled for removal, replaced by the assignment of training data to each custom model version. To support backward compatibility, the deprecated method of training data assignment remains the default during the deprecation period, even for newly created models. To assign training data to a custom model's versions, you must convert the model. On the **Assemble** tab, locate the **Training data for model versions** alert and click **Permanently convert**:

![](images/convert-custom-model.png)

!!! warning
    Converting a model's training data assignment method is a one-way action. It _cannot_ be reverted. After conversion, you can't assign training data at the model level. This change applies to the UI _and_ the API. If your organization has any automation that depends on "per model" training data assignment, update that automation to support the new workflow before you convert a model. As an alternative, you can create a new custom model to convert to the "per version" training data assignment method and maintain the deprecated "per model" method on the model required by the automation; however, you should update your automation before the deprecation process is complete to avoid gaps in functionality.
After you convert the model, you can assign training data to a custom model version: * If the model was already assigned training data, the **Datasets** section contains information about the existing training dataset. To replace existing training data, click the edit icon (![](images/icon-pencil.png)). In the **Change Training Data** dialog box, click the delete icon (![](images/icon-delete.png)) to remove the existing training data, then upload new training data. * If the model version doesn't have training data assigned, click **Assign**, then, in the **Add Training Data** dialog box, upload training data. When you create a new custom model version, you can **Keep training data from previous version**. This setting is enabled by default to bring the training data from the current version to the new custom model version: ![](images/cmodel-17.png) For more information, see [Add training data to a custom model](custom-model-training-data) and [Add custom model versions](custom-model-versions). ### Public Preview {: #public-preview } #### Increased prediction limit for No-Code AI Apps {: #increased-prediction-limit-for-no-code-ai-apps } Now available for public preview, you can make up to 50K predictions in an application. Previously, and without the flag enabled, applications supported only 5K predictions. With or without the flag, a message will indicate how many predictions remain. Note that the limit applies to individual apps, not to individual users. This means that if you share the app, any predictions that a user makes are deducted from the remainder. 
**Required feature flag**: Enable Increased Prediction Row Limit

### API enhancements {: #api-enhancements }

#### Python client v3.1 {: #python-client-v31 }

The following API enhancements are introduced with version 3.1 of DataRobot's Python client:

* Added new methods `BatchPredictionJob.apply_time_series_data_prep_and_score` and `BatchPredictionJob.apply_time_series_data_prep_and_score_to_file` that apply time series data prep to a file or dataset and make batch predictions with a deployment.
* Added new methods `DataEngineQueryGenerator.prepare_prediction_dataset` and `DataEngineQueryGenerator.prepare_prediction_dataset_from_catalog` that apply time series data prep to a file or catalog dataset and upload the prediction dataset to a project.
* Added a new `max_wait` parameter to the method `Project.create_from_dataset`. Values larger than the default can be specified to avoid timeouts when creating a project from a dataset.
* Added the `Project.create_segmented_project_from_clustering_model` method for creating a segmented modeling project from an existing clustering project and model. Switch to this function if you were previously using ModelPackage for segmented modeling purposes.
* Added the `is_unsupervised_clustering_or_multiclass` method for checking whether clustering or multiclass parameters are used. The check is quick and requires no extra API calls.
* Added the value `PREPARED_FOR_DEPLOYMENT` to the `RECOMMENDED_MODEL_TYPE` enum.
* Added two new methods to the `ImageAugmentationList` class: `ImageAugmentationList.list` and `ImageAugmentationList.update`.
* Added a `format` key to Batch Prediction intake and output settings for S3, GCP, and Azure.
* The method `PredictionExplanations.is_multiclass` now adds an additional API call to check for multiclass target validity, which adds a small delay.
* The `AdvancedOptions` parameter `blend_best_models` now defaults to false.
* The `AdvancedOptions` parameter `consider_blenders_in_recommendation` now defaults to false.
* `DatetimePartitioning` now has the parameter `unsupervised_mode`.
--- title: September 2022 description: Read about DataRobot's new public preview and generally available features released in September, 2022. --- # September 2022 {: #september-2022 } _September 27, 2022_ DataRobot's managed AI Platform deployment for September delivered the following new GA and Public Preview features. See the [this month's release announcements](release/index) as well as a [deployment history](cloud-history/index) for additional past feature announcements. See also: * [**Deprecation notices**](#deprecation-announcements) === "SaaS" ??? abstract "Features grouped by capability" Name | GA | Public Preview ---------- | ---- | --- **Data and integrations** | :~~: | :~~: [Feature cache for Feature Discovery deployments](#feature-cache-for-feature-discovery-deployments) | | ✔ | **Modeling** | :~~: | :~~: [ROC Curve enhancements aid model interpretation](#roc-curve-enhancements-aid-model-interpretation) | ✔ | | [Create Time Series What-if AI Apps](#create-time-series-what-if-ai-apps) | ✔ | | **Time series** | :~~: | :~~: [Accuracy Over Time enhancements](#accuracy-over-time-enhancements) | ✔ | | [Time series clustering now GA](#time-series-clustering-now-GA) | ✔ | | **Predictions and MLOps** | :~~: | :~~: [Drift Over Time chart](#drift-over-time-chart) | ✔ | | [Deployment for time series segmented modeling](#deployment-for-time-series-segmented-modeling) | ✔ | | [Large-scale monitoring with the MLOps library](#large-scale-monitoring-with-the-mlops-library) | ✔ | | [Batch prediction job history for challengers](#batch-prediction-job-history-for-challengers) | ✔ | | [Time series model package prediction intervals](#time-series-model-package-prediction-intervals) | | ✔ | **Documentation changes** | :~~: | :~~: [Documentation change summary](#documentation-change-summary) | ✔ | | **API enhancements** | :~~: | :~~: [Python client v3.0](#python-client-v30) | ✔ | | [Python client v3.0 new features](#python-client-v30-new-features) | ✔ | | [New methods for 
DataRobot projects](#new-methods-for-datarobot-projects) | ✔ | | [Calculate Feature Impact for each backtest](#calculate-feature-impact-for-each-backtest) | ✔ | | === "Self-Managed" ??? abstract "Features grouped by capability" Name | GA | Public Preview ---------- | ---- | --- **Data and integrations** | :~~: | :~~: [Feature cache for Feature Discovery deployments](#feature-cache-for-feature-discovery-deployments) | | ✔ | **Modeling** | :~~: | :~~: [ROC Curve enhancements aid model interpretation](#roc-curve-enhancements-aid-model-interpretation) | ✔ | | [Create Time Series What-if AI Apps](#create-time-series-what-if-ai-apps) | ✔ | | [Create No-Code AI Apps from Feature Discovery projects](#create-no-code-ai-apps-from-feature-discovery-projects) | | ✔ | **Time series** | :~~: | :~~: [Accuracy Over Time enhancements](#accuracy-over-time-enhancements) | ✔ | | [Time series clustering now GA](#time-series-clustering-now-GA) | ✔ | | **Predictions and MLOps** | :~~: | :~~: [Drift Over Time chart](#drift-over-time-chart) | ✔ | | [Deployment for time series segmented modeling](#deployment-for-time-series-segmented-modeling) | ✔ | | [Large-scale monitoring with the MLOps library](#large-scale-monitoring-with-the-mlops-library) | ✔ | | [Batch prediction job history for challengers](#batch-prediction-job-history-for-challengers) | ✔ | | [Time series model package prediction intervals](#time-series-model-package-prediction-intervals) | | ✔ | **Documentation changes** | :~~: | :~~: [Documentation change summary](#documentation-change-summary) | ✔ | | **API enhancements** | :~~: | :~~: [Python client v3.0](#python-client-v30) | ✔ | | [Python client v3.0 new features](#python-client-v30-new-features) | ✔ | | [New methods for DataRobot projects](#new-methods-for-datarobot-projects) | ✔ | | [Calculate Feature Impact for each backtest](#calculate-feature-impact-for-each-backtest) | ✔ | | ### GA {: #ga } #### ROC Curve enhancements aid model interpretation {: 
#roc-curve-enhancements-aid-model-interpretation }

With this release, the [ROC Curve](roc-curve-tab/index) tab introduces several improvements to help increase understanding of model performance at any point on the probability scale. Using the visualization now, you will notice:

* Row and column totals are shown in the Confusion Matrix.
* The Metrics section now displays up to six accuracy metrics.
* You can use **Display Threshold > View Prediction Threshold** to reset the visualization components (graphs and charts) to the model's default prediction threshold.

![](images/roc-pred-threshold.png)

#### Create Time Series What-if AI Apps {: #create-time-series-what-if-ai-apps }

Now generally available, you can create What-if Scenario AI Apps from time series projects. This allows you to launch and easily configure applications in an enhanced visual and interactive interface, and to share your What-if Scenario app with consumers, who can build upon what the app builder has already generated or create their own scenarios on the same prediction files.

![](images/ts-whatif-7.png)

Additionally, you can edit the known in advance features for multiple scenarios at once using the [**Manage Scenarios**](ts-app#bulk-edit-scenarios) feature.

![](images/ts-app-bulk-1.png)

For more information, see the [Time series applications](ts-app#what-if-widget) documentation.

#### Accuracy Over Time enhancements {: #accuracy-over-time-enhancements }

Because multiseries modeling supports up to 1 million series and 1000 forecast distances, DataRobot previously limited the number of series for which accuracy calculations were performed as part of Autopilot. Now, the visualizations that use these calculations can automatically run on a number of series (up to a certain threshold) and then run on additional series, either individually or in bulk.
![](images/aot-multi-2.png)

The visualizations that can leverage this functionality are:

* Accuracy Over Time
* Anomaly Over Time
* Forecast vs. Actual
* Model Comparison

For more information, see the [Accuracy Over Time for multiseries](aot#display-by-series) documentation.

#### Time series clustering now GA {: #time-series-clustering-now-GA }

[Time series clustering](ts-clustering) enables you to easily group similar series across a multiseries dataset from within the DataRobot platform. Use the discovered clusters to get a better understanding of your data, or use them as input to time series [segmented modeling](ts-segmented). The general availability of clustering brings some improvements over the Public Preview version:

* A new [**Series Insights**](series-insights) tab specifically for clustering provides information on series/cluster relationships and details.
* A [cluster buffer](ts-clustering#cluster-discovery) prevents data leakage and ensures that you are not training a clustering model on data that will become the holdout partition in segmentation.

![](images/cluster-segmented-buffer.png)

#### Drift Over Time chart {: #drift-over-time-chart }

On a deployment's [**Data Drift** dashboard](data-drift), the **Drift Over Time** chart visualizes the difference in distribution over time between the training dataset of the deployed model and the datasets used to generate predictions in production. The drift away from the baseline established with the training dataset is measured using the Population Stability Index (PSI).
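PSI compares the binned proportions of a feature in the baseline (training) data against the same bins in production data. A common formulation is shown below as a generic sketch; DataRobot's exact binning and smoothing are internal to the product:

```python
import math

def psi(expected, actual, eps=1e-6):
    """Population Stability Index between two binned distributions.

    expected, actual: per-bin proportions (each list sums to 1).
    eps guards against log(0) when a bin is empty.
    """
    total = 0.0
    for p, q in zip(expected, actual):
        p, q = max(p, eps), max(q, eps)
        total += (q - p) * math.log(q / p)
    return total

# Identical distributions yield a PSI of 0; the more the production
# distribution shifts away from the baseline, the larger the PSI.
baseline = [0.7, 0.3]
production = [0.5, 0.5]
drift = psi(baseline, production)
```

Rules of thumb for how large a PSI indicates meaningful drift vary by use case; the point of the chart is to watch how the value trends over time rather than any single reading.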
As a model continues to make predictions on new data, the change in the PSI over time is visualized for each tracked feature, allowing you to identify [data drift](glossary/index#data-drift) trends:

![](images/rn-drift-over-time.png)

Because data drift can decrease your model's predictive power, determining when a feature started drifting and monitoring how that drift changes (as your model continues to make predictions on new data) can help you estimate the severity of the issue. You can then compare data drift trends across the features in a deployment to identify correlated drift trends between specific features. In addition, the chart can help you identify seasonal effects (significant for time-aware models). This information can help you identify the cause of data drift in your deployed model, including data quality issues, changes in feature composition, or changes in the context of the target variable. The example above shows the PSI consistently increasing over time, indicating worsening data drift for the selected feature.

For more information, see the [Drift Over Time chart](data-drift#drift-over-time-chart) documentation.

#### Deployment for time series segmented modeling {: #deployment-for-time-series-segmented-modeling }

To fully leverage the value of segmented modeling, you can deploy Combined Models like any other time series model. After selecting the champion model for each included project, you can deploy the Combined Model to create a "one-model" deployment for multiple segments; however, the individual segments in the deployed Combined Model still have their own segment champion models running in the deployment behind the scenes. Creating a deployment allows you to use [DataRobot MLOps](mlops/index) for accuracy monitoring, prediction intervals, challenger models, and retraining.

!!! note
    Time series segmented modeling deployments do not support data drift monitoring or prediction explanations.
After you complete the [segmented modeling workflow](ts-segmented#segmented-modeling-workflow) and Autopilot has finished, the **Models** tab contains one model. This model is the completed **Combined Model**. To deploy, click the **Combined Model**, click **Predict** > **Deploy**, and then click **Deploy model**.

![](images/ts-segmented-deploy-2.png)

After deploying a Combined Model, you can change the segment champion for a segment by cloning the deployed Combined Model and modifying the cloned model. This process is automatic and occurs when you attempt to change a segment's champion within a deployed Combined Model. The cloned model you can modify becomes the **Active Combined Model**. This process ensures stability in the deployed model while allowing you to test changes within the same segmented project.

!!! note
    Only one Combined Model on a project's Leaderboard can be the **Active Combined Model** (marked with a badge).

Once a **Combined Model** is deployed, it is labeled **Prediction API Enabled**. To modify this model, click the active and deployed **Combined Model**, and then, in the **Segments** tab, click the segment you want to modify.

![](images/ts-segmented-deploy-3.png)

Next, [reassign the segment champion](ts-segmented#reassign-the-champion-model), and in the dialog box that appears, click **Yes, create new combined model**.

![](images/ts-segmented-deploy-4.png)

On the segment's **Leaderboard**, you can now access and modify the **Active Combined Model**. For more information, see the [Deploy a Combined Model](ts-segmented#deploy-a-combined-model) documentation.

#### Large-scale monitoring with the MLOps library {: #large-scale-monitoring-with-the-mlops-library }

To support large-scale monitoring, the MLOps library provides a way to calculate statistics from raw data on the client side.
Then, instead of reporting raw features and predictions to the DataRobot MLOps service, the client can report anonymized statistics without the feature and prediction data. Reporting prediction data statistics calculated on the client side is the optimal (and highly performant) method compared to reporting raw data, especially at scale (billions of rows of features and predictions). In addition, because client-side aggregation only sends aggregates of feature values, it is suitable for environments where you don't want to disclose the actual feature values.

Previously, this functionality was [released for the Java SDK and MLOps Spark Utils Library](june2022-announce#large-scale-monitoring-with-the-mlops-library). With this release, large-scale monitoring functionality is now available for Python. To use large-scale monitoring in your Python code, replace calls to `report_predictions_data()` with calls to `report_aggregated_predictions_data()`, which has the following signature:

``` python
report_aggregated_predictions_data(
    self,
    features_df,
    predictions,
    class_names,
    deployment_id,
    model_id,
)
```

To enable the large-scale monitoring functionality, you must set one of the feature type settings. These settings provide the dataset's feature types and can be configured programmatically in your code (using setters) or by defining environment variables. For more information, see the [Enable large-scale monitoring](agent-use#enable-large-scale-monitoring) use case.

#### Batch prediction job history for challengers {: #batch-prediction-job-history-for-challengers }

To improve error surfacing and usability for challenger models, you can now access a challenger's prediction job history from the [**Deployments** > **Challengers**](challengers) tab.
After adding one or more challenger models and replaying predictions, click **Job History**:

![](images/challenger-job-history.png)

The [**Deployments** > **Prediction Jobs**](batch-pred-jobs#manage-prediction-jobs) page opens, filtered to display the challenger jobs for the deployment you accessed the job history from. You can also apply this filter directly from the **Prediction Jobs** page:

![](images/rn-challenger-job-filter.png)

For more information, see the [View challenger job history](challengers#view-challenger-job-history) documentation.

#### Documentation change summary {: #documentation-change-summary }

This release brings the following improvements to the in-app and public-facing documentation:

* **Time series docs enhancement**: A section on [advanced modeling](ts-customization) helps those planning to change the default settings in a time series project determine the best window and backtest settings. Additionally, with a slight reorganization of the material, each step in the modeling process now has its own page with stage-specific instructions.

* **Learn more section added**: A Learn more section has been added to the top-level navigation of the user documentation. From there, you can access the DataRobot Glossary, ELI5, and Tutorials. Additionally, the Release section has been moved out of UI Docs and added to the top-level navigation, making it easier to access release materials.

* **Price elasticity use case**: The API user guide now includes a price elasticity of demand use case, which helps you understand the impact that changes in price will have on consumer demand for a given product.
Follow the [workflow in the use case's notebook](elasticity.ipynb) to understand how to identify relationships between price and demand, maximize revenue by properly pricing products, monitor price elasticities for changes in price and demand, and reduce the manual processes used to obtain and update price elasticities.

### Public Preview {: #public-preview }

#### Time series model package prediction intervals {: #time-series-model-package-prediction-intervals }

Now available for public preview, you can enable the computation of a model's time series prediction intervals (from 1 to 100) during model package generation. To run a DataRobot time series model in a remote prediction environment, you download a model package (.mlpkg file) from the model's deployment or the Leaderboard. In both locations, you can now choose to **Compute prediction intervals** during model package generation. You can then run prediction jobs with a [portable prediction server (PPS)](portable-pps) outside DataRobot.

Before you download a model package with prediction intervals from a deployment, ensure that your deployment supports model package downloads. The deployment must have a DataRobot build environment and an *external* prediction environment, which you can verify using the [**Governance Lens**](gov-lens) in the deployment inventory:

![](images/pps-1.png)

To download a model package with prediction intervals from a *deployment*, in the external deployment, use the **Predictions > Portable Predictions** tab:

![](images/pp-ts-deploy-pred-int.png)

To download a model package with prediction intervals from a model on the *Leaderboard*, use the **Predict > Deploy** or **Predict > Portable Predictions** tab.
=== "Deploy tab download"

    ![](images/pp-ts-leaderboard-pred-int.png)

=== "Portable Prediction Server tab download"

    ![](images/pp-ts-leaderboard-pps-pred-int.png)

**Required feature flag:** Enable computation of all Time-Series Intervals for .mlpkg

Public preview [documentation](pp-ts-pred-intervals-mlpkg).

#### Create No-Code AI Apps from Feature Discovery projects {: #create-no-code-ai-apps-from-feature-discovery-projects }

*SaaS users only.* Now available for public preview, you can create No-Code AI Apps from Feature Discovery projects (i.e., projects built with multiple datasets) with feature cache enabled. [Feature cache](safer-ft-cache) instructs DataRobot to source data from multiple datasets and generate new features in advance, storing this information in a "cache" that is then drawn from to make predictions.

**Required feature flags:**

- Enable Application Builder Feature Discovery Support
- Enable Feature Cache for Feature Discovery

Public preview [documentation](app-ft-cache).

#### Feature cache for Feature Discovery deployments {: #feature-cache-for-feature-discovery-deployments }

Now available for public preview, you can schedule feature cache for Feature Discovery deployments, which instructs DataRobot to pre-compute and store features before making predictions. Generating these features in advance makes single-record, low-latency scoring possible for Feature Discovery projects. To enable feature cache, go to the **Settings** tab of a Feature Discovery deployment. Then, turn on the **Feature Cache** toggle and choose a schedule for DataRobot to update cached features.

![](images/ft-cache-2.png)

Once feature cache is enabled and configured in the deployment's settings, DataRobot caches features and stores them in a database. When new predictions are made, the primary dataset is sent to the prediction endpoint, which enriches the data from the cache and returns the prediction response.
The feature cache is then periodically updated based on the specified schedule.

**Required feature flag:** Enable Feature Cache for Feature Discovery

Public preview [documentation](safer-ft-cache).

### API enhancements {: #api-enhancements }

The following is a summary of new API features and enhancements. Go to the [API Documentation home](https://docs.datarobot.com/en/docs/api/index.html){ target=_blank } for more information on each client.

!!! tip
    DataRobot highly recommends updating to the latest API client for Python and R.

#### Python client v3.0 {: #python-client-v30 }

Now generally available, DataRobot has released version 3.0 of the [Python client](https://pypi.org/project/datarobot/){ target=_blank }. This version introduces significant changes to common methods and usage of the client. Many prominent changes are listed below, but **view the [changelog](https://datarobot-public-api-client.readthedocs-hosted.com/page/CHANGES.html){ target=_blank } for a complete list of changes introduced in version 3.0**.

#### Python client v3.0 new features {: #python-client-v30-new-features }

A summary of notable new features in version 3.0:

* Version 3.0 of the Python client no longer supports Python 3.6 and earlier; it requires Python 3.7+.
* The default Autopilot mode for the `project.start_autopilot` method has changed to `AUTOPILOT_MODE.QUICK`.
* Pass a file, file path, or DataFrame to a deployment to easily make batch predictions and return the results as a DataFrame using the new method `Deployment.predict_batch`.
* You can use a new method to retrieve the canonical URI for a project, model, deployment, or dataset:
    * `Project.get_uri`
    * `Model.get_uri`
    * `Deployment.get_uri`
    * `Dataset.get_uri`

#### New methods for DataRobot projects {: #new-methods-for-datarobot-projects }

Review the new methods available for `datarobot.models.Project`:

* `Project.get_options` allows you to retrieve saved modeling options.
* `Project.set_options` saves `AdvancedOptions` values for use in modeling.
* `Project.analyze_and_model` initiates Autopilot or data analysis using data that has been uploaded to DataRobot.
* `Project.get_dataset` retrieves the dataset used to create the project.
* `Project.set_partitioning_method` creates the correct Partition class for a regular project based on input arguments.
* `Project.set_datetime_partitioning` creates the correct Partition class for a time series project.
* `Project.get_top_model` returns the highest-scoring model for a metric of your choice.

#### Calculate Feature Impact for each backtest {: #calculate-feature-impact-for-each-backtest }

Feature Impact provides a transparent overview of a model, especially in a model's compliance documentation. Time-dependent models trained on different backtests and holdout partitions can have different Feature Impact calculations for each backtest. Now generally available, you can calculate Feature Impact for each backtest using DataRobot's REST API, allowing you to inspect model stability over time by comparing Feature Impact scores from different backtests.

## Deprecation announcements {: #deprecation-announcements }

#### API deprecations {: #api-deprecations }

Review the deprecations introduced in version 3.0:

* `Project.set_target` has been removed. Use `Project.analyze_and_model` instead.
* `PredictJob.create` has been removed. Use `Model.request_predictions` instead.
* `Model.get_leaderboard_ui_permalink` has been removed. Use `Model.get_uri` instead.
* `Project.open_leaderboard_browser` has been removed. Use `Project.open_in_browser` instead.
* `ComplianceDocumentation` has been removed. Use `AutomatedDocument` instead.

#### DataRobot Prime models to be deprecated {: #datarobot-prime-models-to-be-deprecated }

[DataRobot Prime](prime/index), a method for creating a downloadable, derived model for use outside of the DataRobot application, will be removed in an upcoming release.
It is being replaced by the new ability to export Python or Java code from Rulefit models using the [Scoring Code](scoring-code/index) capabilities. Rulefit models differ from Prime models only in that they use raw data for their prediction target rather than predictions from a parent model. There is no change in the availability of Java Scoring Code for other blueprint types, and any existing Prime models will continue to function.

#### Automodel functionality to be removed {: #automodel-functionality-to-be-removed }

An upcoming release will bring the removal of the Public Preview "Automodel" functionality. There is no impact to existing projects, but the feature will no longer be accessible from the product.

_All product and company names are trademarks&trade; or registered&reg; trademarks of their respective holders. Use of them does not imply any affiliation with or endorsement by them._
* Frozen thresholds are not supported.
* Blenders that contain monotonic models do not display the MONO label on the Leaderboard for OTV projects.
* When previewing predictions over time, the interval only displays for models that haven't been retrained (for example, it won't show up for models with the **Recommended for Deployment** badge).
* User models are supported from the Repository tab.
* If you configure long backtest durations, DataRobot will still build models but will not run backtests in cases where there is not enough data. In these cases, the backtest score will not be available on the Leaderboard.
* Time zones on date partition columns are ignored. Datasets with multiple time zones may cause issues; the workaround is to convert to a single time zone outside of DataRobot. There is also no support for daylight saving time.
* Dates before 1900 are not supported. If necessary, shift your data forward in time.
* Leap seconds are not currently supported.
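The single-time-zone workaround above can be applied with pandas before uploading the dataset. A minimal sketch (the sample values and column handling are hypothetical, not a DataRobot utility):

```python
import pandas as pd

def normalize_dates(series: pd.Series) -> pd.Series:
    """Convert mixed-offset timestamps to UTC, then drop the tz info,
    so the date partition column uses a single, offset-free time zone."""
    ts = pd.to_datetime(series, utc=True)  # parse and convert everything to UTC
    return ts.dt.tz_localize(None)         # drop the offset entirely

# Two timestamps with different offsets collapse to one consistent zone:
dates = pd.Series(["2021-06-01T12:00:00+02:00", "2021-06-01T12:00:00-05:00"])
print(normalize_dates(dates))
```

After normalizing, every row in the partition column is comparable, which avoids the multiple-time-zone issues noted above.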
# Feature considerations {: #feature-considerations }

The following sections describe considerations to be aware of when working with DataRobot features:

* [Anomaly detection](anomaly-detection#feature-considerations)
* [Clustering](clustering#feature-considerations)
* [Composable ML](cml-consider)
* [Custom models](custom-models/index#feature-considerations)
* [Data requirements](data/index#feature-considerations)
* [Data Quality Assessment](data-quality#feature-considerations)
* [DataRobot Prime](prime/index#feature-considerations) (Disabled)
* [Deployments (model monitoring and management)](deployment/index#feature-considerations)
* [Eureqa models](eureqa#feature-considerations)
* [Feature Discovery](feature-discovery/index#feature-considerations)
* [Location AI](location-ai/index#feature-considerations)
* [Monotonic modeling](monotonic#feature-considerations)
* [Multiclass modeling](multiclass#feature-considerations)
* [Multilabel modeling](multilabel#feature-considerations)
* [Out of time validation (OTV)](otv#feature-considerations)
* [Prediction Explanations (XEMP and SHAP)](pred-explain/index#feature-considerations)
* [Rating Tables](rating-table#feature-considerations)
* [Time series and multiseries](ts-consider)
* [Visual AI considerations](vai-model#feature-considerations)
---
title: Batch Scoring Script
description: The Python batch scoring script is designed to efficiently score large files using the Prediction API. It has been replaced with the Batch Prediction Scripts.
---

# Batch Scoring Script {: #batch-scoring-script }

!!! warning
    The Python batch scoring script has been deprecated and replaced with the [Batch Prediction Scripts](cli-scripts). While the script can still function in some environments, some commands won't work because the legacy Prediction API routes on the prediction servers in the managed AI Platform are disabled.

The Python batch scoring script is designed to efficiently score large files using the Prediction API. The batch scoring script only runs against dedicated prediction workers (for managed AI Platform deployments) or a dedicated prediction cluster (for Self-Managed AI Platform users). It achieves greater speed by splitting a CSV input file into optimally sized batches and submitting them concurrently to the prediction server. Batches can be scored much more quickly than individual rows. The script handles queueing, resource management, and concurrent request management, requiring no user intervention.

Concurrent requests greatly increase the efficiency of the process by using multiple processors to make predictions. However, you should not use a value for `<n>_concurrency` greater than your number of prediction cores. Consult <a target="_blank" href="https://support.datarobot.com">DataRobot Support</a> if you are unsure of how many cores you have.

## Prerequisites {: #prerequisites }

A known bug in Python 2.7.8 and later 2.7.x versions causes SSL connections to fail, so those versions are not supported. This script supports Python 2.7.7, but Python 3.4 and later are recommended for better speed and text decoding. You can use Anaconda 2.2.0 or later to install the `datarobot_batch_scoring` script.
If you do not have access to the Internet for downloading dependencies, <a target="_blank" href="https://support.datarobot.com">DataRobot Support</a> can provide a bundle that includes everything needed to install offline.

## Installation instructions {: #installation-instructions }

<a target="_blank" href="https://pypi.python.org/pypi/datarobot_batch_scoring">Download and install</a> the DataRobot batch scoring package for Python 2 and 3 using the following command:

    pip install -U datarobot_batch_scoring

## Alternative install methods {: #alternative-install-methods }

DataRobot provides two alternative install methods on the project <a target="_blank" href="https://github.com/datarobot/batch-scoring/releases">releases</a> page. ({% include 'includes/github-sign-in.md' %}) These can help when you do not have:

* Internet access
* administrative privileges
* the Python package manager (pip) installed
* the correct version of Python installed (use ``PyInstaller``, option 2 below, only)

In any of the above situations, use:

1. **offlinebundle**: For performing installations in environments where Python 2.7 or Python 3+ is available. Works on Linux, OSX, or Windows. These files have "offlinebundle" in their name on the release page. The install directions are included in the zip or tar file.

2. **PyInstaller**: Using <a target="_blank" href="http://www.pyinstaller.org/">PyInstaller</a>, DataRobot builds a single-file executable that does not depend on Python. It can be installed without administrative privileges. These files on the release page have "executables" in their name, as well as the version and platform (Linux, Windows, or OSX). The install directions are included in the zip or tar file. Note that the PyInstaller builds for Linux work on distros equal to or newer than CentOS 5.

Contact <a target="_blank" href="https://support.datarobot.com">DataRobot Support</a> if you have questions or if you have problems getting a build to work on your system.
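The split-and-submit strategy described earlier — breaking a CSV into batches and scoring them concurrently — can be illustrated with a small sketch. This is not the script's actual implementation; `score_batch` is a hypothetical stand-in for a real Prediction API call:

```python
import concurrent.futures
import itertools

def chunked(rows, batch_size):
    """Split an iterable of CSV rows into fixed-size batches."""
    it = iter(rows)
    while batch := list(itertools.islice(it, batch_size)):
        yield batch

def score_batch(batch):
    # Placeholder for POSTing one batch to the prediction server.
    return [len(row) for row in batch]

def score_concurrently(rows, batch_size=1000, concurrency=4):
    # As noted above, concurrency should not exceed your prediction cores.
    with concurrent.futures.ThreadPoolExecutor(max_workers=concurrency) as pool:
        results = pool.map(score_batch, chunked(rows, batch_size))  # order-preserving
        return [pred for batch_preds in results for pred in batch_preds]

rows = [["a", "b"]] * 2500
print(len(score_concurrently(rows)))  # 2500 predictions, submitted as 3 batches
```

Because `ThreadPoolExecutor.map` preserves input order, predictions come back aligned with the input rows even though batches are scored in parallel.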
## Syntax, examples, and usage notes {: #syntax-examples-and-usage-notes }

For complete and up-to-date scoring script syntax and information, visit DataRobot's <a target="_blank" href="https://github.com/datarobot/batch-scoring">batch-scoring GitHub page</a>. {% include 'includes/github-sign-in.md' %}

### Sample output {: #sample-output }

The `--verbose` output of the script provides information about the progress of the scoring procedure, as shown in the following example. Some particularly informative sections are described below the image.

![](images/dnt-batch-scoring-script-output.png)

* `--host="https://datarobot-xxxxx.datarobot.com"`: Hostname of the prediction API endpoint (the location of the data to use for predictions)
* `'user': 'mike@datarobot.com', 'api_token': 'ABCD1234XYZ7890', ... 'datarobot_key': 'xxxxxxxxxxxxxxxxx', ... 'deployment_id': 'yyyyyyyyyyyyyyyyyy'`: User name and corresponding API key, DataRobot key, and deployment ID
* `batch_scoring v1.16.4`: Script name and version number
* Multiple checks of encoding and dialect with response timing
* `Authorization has succeeded`: Verification that login credentials are valid
* `MainProcess [WARNING] File output.csv exists. Do you want to remove output.csv (Yes/No)> y`: Notification that a file already exists with the specified output name
* `1 responses sent | time elapsed 0.545090913773s`: Time to score the submission
---
title: Python 2 deprecation / migration to Python 3
description: Explains the deprecation of Python 2 support within the DataRobot platform and how to migrate to Python 3.
---

# Python 2 deprecation / migration to Python 3 {: #python-2-deprecation-migration-to-python-3 }

DataRobot is deprecating and removing Python 2, in its entirety, from the platform. Part of the deprecation process includes migrating all user projects to Python 3. Upgrading to Python 3 will improve platform reliability and security. Additionally, it enables the DataRobot development team to modernize the codebase and deliver more innovative features faster and with better quality.

Pay careful attention to the guidance below, as it will likely require actions from you and other members of your organization. If at any point you have questions, reach out to your DataRobot representative; they will gladly assist in any way possible to make the migration a smooth experience. This guide explains the deprecation process and how to migrate to Python 3.

## Users impacted {: #users-impacted }

The Python changes apply to the following users:

* All managed AI Platform (SaaS) users.
* All Self-Managed AI Platform users (except net-new installations starting with Release 7.1 or later; see the [FAQ](#faq) for more details).

## Products impacted {: #products-impacted }

- AutoML
- Time series (AutoTS)
- MLOps

## Actions required and important dates {: #actions-required-and-important-dates }

As milestones approach, specific dates will be communicated, both via email and within this documentation. View the information for your installation in the appropriate tab.

=== "managed AI Platform"

    Consider the dates below:

    **March 2022**

    * Starting March 7th, 2022, new projects created on the managed AI Platform (SaaS) will use Python 3 by default for model building and predictions.
    * Existing projects and models created using Python 2 (created before March 7, 2022) will continue to work as expected.
    * Deployments can start using models from Python 3 projects. Deployments are not tied to a specific Python version; only the underlying models are.

    **April 2022**

    * Existing projects and models using Python 2 will start displaying a deprecation notice in the UI, as well as in the API response, between April 15-22, 2022. To reiterate, these projects and models will continue to work as expected while in the deprecated state; however, this is a good time to start planning and executing the migration steps using the guide below.
    * It is highly recommended that you compute insights and generate the compliance documentation you will require for auditing on these projects prior to project disablement (the milestone below). All data computed within the project will be retained in a read-only state for long-term reference.
    * Identify your active and important projects that will be impacted by deprecation and make a plan to migrate them by July 2022.

    !!! note
        If a formal model audit process is required in your organization, starting this process as soon as possible to avoid future disruptions to any impacted production workloads is strongly recommended.

    **July 2022**

    * Deprecated projects will transition to a "disabled" state and will be placed in read-only mode between July 25-31, 2022.
    * Duplication of projects and downloading artifacts such as Scoring Code, models, model packages, and insights charts will still remain enabled. More details are available in the [Deprecated and disabled functionality](#deprecated-and-disabled-functionality) section.
    * Existing deployments and predictions will continue to operate (uninterrupted) on Python 2 model deployments within MLOps and Prediction Servers; however, no new deployments can be made with Python 2 models. Migrate your Python 2 model deployments before October 2022 using the guide below.
    * Existing or new deployments and predictions on Python 3 models will not experience any impact or limitations.

    **October 2022**

    * Python 2 projects will remain in a "disabled" state indefinitely. Duplication of projects and additional capabilities will still be supported, as mentioned in the [Deprecated and disabled functionality](#deprecated-and-disabled-functionality) section.
    * Python 2 projects and models will no longer be supported. All project data will be retained in a read-only state. You must replace all critical Python 2 projects and all MLOps model deployments with eligible Python 3 models before October 25, 2022.

    Schedule:

    | Timeline | Projects | Deployments | Action Required |
    | -------- | -------- | ----------- | --------------- |
    | March 7, 2022 | Newly created projects and models in these projects use Python 3 by default. | No impact on existing Python 2 model deployments. <br /><br />Existing or new deployments may use Python 2 or Python 3 models (Python 3 preferable). | None |
    | April 15-22, 2022 | Projects using Python 2 to be **deprecated**. No functional impact. | No impact on existing Python 2 model deployments. <br /><br />Existing or new deployments may use Python 2 or Python 3 models (Python 3 preferable). | Migrate necessary Python 2 projects and deployments to Python 3. |
    | July 25-31, 2022 | Projects using Python 2 to be **disabled** and converted to read-only mode. | No impact on existing Python 2 model deployments. <br /><br />New deployments will not allow Python 2 models (Python 3 models only). | Migrate necessary Python 2 deployments to Python 3. |
    | October 25, 2022 | Projects using Python 2 remain in **disabled** and read-only mode indefinitely. | Python 2 model deployments to be **disabled**. | Complete migration in advance of this date. |

=== "Self-Managed AI Platform"

    Consider the dates below:

    **Release 8.0 upgrades, starting March 2022**

    * Identify critical projects that will be impacted and make a plan to migrate them.

    !!! note
        If a formal model audit process is required, starting this process as soon as possible to avoid future disruptions to any impacted production workloads is strongly recommended.

    * The Python Version field is now added to User Activity Monitor reports for App Usage and Predictions Usage in the UI, API, and exported CSV data. This feature is available from DataRobot release 8.0.3 onwards.
    * New projects created in Release 8.x will start using Python 3 by default for model building and predictions.
    * Existing projects and models created using Python 2 will continue to work as expected.
    * Existing projects and models using Python 2 will begin displaying a deprecation notice in the UI, as well as in the API response. To reiterate, these projects and models will continue to work as expected while in the deprecated state; however, it is best to start planning and executing the migration steps using the guide below.

    **Prior to Release 9.0 upgrades, March 2023**

    * Python 2 projects and models will no longer be supported in release 9.x. All project data will be retained in a read-only state. Before upgrading to release 9.x, ensure that all critical Python 2 projects and models are migrated to Python 3 and that any Python 2-based MLOps model deployments have been replaced with eligible Python 3 models.
    * Python 2 projects and models created prior to 9.x releases will be disabled once you upgrade to release 9.x. The projects will transition to read-only and new computations will be disabled.
    * You are encouraged to compute insights and generate any compliance documentation for auditing purposes on these projects prior to the 9.0 upgrade, when these projects will be disabled. All data computed within the project will be retained in a read-only state for long-term reference.

## Deprecated and disabled functionality {: #deprecated-and-disabled-functionality }

Because of outdated dependencies, Python 2 projects will transition first to deprecated and later to disabled.
The following table describes the differences.

| Stage | Impact | Function |
| ----- | ------ | -------- |
| *Deprecated* | No functional impact. A prominent notification informs users that Python 2 projects will not be supported in the future and that action should be taken to migrate as necessary. | Models and deployments continue to function as expected. |
| *Disabled* | Further actions on these projects are prevented. Any pre-computed data can be viewed for reference and comparison while migrating to Python 3 and will be retained long-term for audit purposes. | Project and model data is read-only. Any actions involving compute jobs are disabled (e.g., retraining models, adding new models, computing insights charts, etc.). The REST API and associated clients will similarly prevent these actions. The following functionality will still be enabled: duplicating projects, and downloading Scoring Code, models, model packages, and insights. <br />*Predictions via the public API will no longer function.* You must instead use the [Prediction API](dr-predapi) or [Batch Prediction API](batch-prediction-api/index). |

The following functionality is not affected:

- Custom models
- No-code AI Apps
- AI Catalog
- Data Prep

## Guide for migrating to Python 3 {: #guide-for-migrating-to-python-3 }

Use the following procedures to migrate projects and deployments. For scenarios not covered, contact your DataRobot representative.

### Migrate projects {: #migrate-projects }

Migrate projects and models using the "Duplicate project" action on the **Manage Projects** page. This creates a new project using Python 3 from the same dataset and the same advanced options (for [eligible project](manage-projects#duplicate-a-project) types).

![](images/duplicate-1.png)

Once copied, you must manually recreate models within the project. You can either:

* Re-run Autopilot to build all models.
* Use Manual mode to build select models from the [**Repository**](repository).
After the Python 2 project is migrated, you can delete the project.

### Migrate deployments {: #migrate-deployments }

For deployments based on a Python 2 model, you have several options:

Option | Notes
------ | -----
[Replace the deployed model](deploy-replace) with a Python 3 model after [creating a new project](#migrate-projects). | This is the most straightforward option. While it will require time to build new models and perform the model replacement, it will not impact deployment API predictions because model replacements are seamless.
Set up an [automatic retraining](set-up-auto-retraining) policy for your model to let DataRobot rebuild and replace the model for you automatically, using the schedule and modeling strategies you specify. | <ul><li>Requires an MLOps license</li><li>Review the [considerations](set-up-auto-retraining#retraining-considerations)</li></ul>
Replace the model with a [Custom inference model](custom-inf-model) using the [Java Drop-in](drop-in-environments) environment and the [Scoring Code export](scoring-code/index) for the Python 2 model. | <ul><li>Requires an MLOps license</li><li>Not all model blueprints support Scoring Code</li><li>Best for deployments that do not have low-latency prediction requirements.</li></ul>
Export the model package and use the DataRobot [Portable Prediction Server (PPS)](portable-pps) to serve predictions within your own environment. | <ul><li>Requires an MLOps license</li><li>Requires altering the API integration from the deployments API to a newly hosted endpoint.</li></ul>

## Tips {: #tips }

The following tips will help with the migration process.

1. Use [project tags](manage-projects#tag-a-project) to keep track of which projects need to be migrated (for example, `py2-to-migrate`) and which ones have already been migrated (for example, `py2-migrated`).

    ![](images/python-migrate-tag.png)

2. Compute any insights charts that you may want to refer to in the future prior to June 2022, so that they are precomputed and will later be available as read-only when the project becomes disabled.

## FAQ {: #faq }

??? faq "How can I identify which projects and models use Python 2 and are impacted?"

    For managed AI Platform users, any projects **created before March 7, 2022** are based on Python 2 and will be deprecated. By the end of April 2022, you will see a deprecation alert on the **Models** tab for affected projects, similar to the following:

    ![](images/python-migrate-banner.png)

    On the **Manage Projects** screen, you will see an icon next to deprecated projects:

    ![](images/python-migrate-icon.png)

    Currently, it is not possible to retrieve this information using the REST APIs or to see impacted model deployments in MLOps. Starting with the 8.0.3 release, the Python Version field is added to the User Activity Monitor reports for App Usage and Predictions Usage in the UI, API, and exported CSV data. The data can be filtered on Python 2.7 to see usage. For help producing a report, contact your DataRobot representative.

??? faq "Why is Python 2 being deprecated and removed?"

    Python 2 reached end of life (EOL) in January 2020, and the Python Software Foundation stopped providing patches for bugs and security vulnerabilities. It is no longer supported by the community. DataRobot is removing it from the platform entirely to avoid security risks. The DataRobot platform uses many third-party libraries that have also dropped support for Python 2. Upgrading to newer versions of those libraries requires removing Python 2 support.

??? faq "Why do projects and models using Python 2 need to be migrated?"

    To minimize user impact from this change, DataRobot has made as many areas of the platform compatible as possible. In fact, changes have been rolling out incrementally over the past few years, likely unnoticed.
    However, to avoid potentially significant incompatibilities in model performance and prediction consistency, it became necessary to make a hard break from old models trained under Python 2. As the most trusted AI platform, DataRobot wants to ensure that customers remain in complete control of managing their AI models and own the decision to replace a model.

??? faq "What happens to project data for deprecated projects after they are disabled?"
    So that it can serve as a reference for auditing purposes, all Python 2 project data will be retained and will be accessible within the application in a read-only state. To ensure thorough audit material, it is highly recommended that you compute insights and generate compliance documentation on these projects prior to the "disabled" milestone.

??? faq "Will DataRobot still support older Self-Managed AI Platform releases that use Python 2?"
    Yes. The deprecation of Python 2 does not affect the support policy for Self-Managed AI Platform users on supported enterprise releases. DataRobot will continue to honor the Long Term Support (LTS) commitments for those customers. Beginning with Release 9.0 and going forward, Python 2 will no longer be supported.

??? faq "I’m a relatively new customer who installed Release 7.1 or later in my Self-Managed AI Platform environment. Does this impact me?"
    Possibly. New Self-Managed AI Platform installations since 7.1 using Docker or RPM configurations have been configured to use Python 3 for all projects. In this case, no action is required. However, Hadoop-based installations (Cloudera, Hortonworks) did not have Python 3 enabled for new projects. In these cases, you will need to migrate projects.

    If you have any questions about your install type, contact the IT Admin who performed the install. Ask them to verify that the configuration (`config.yaml`) setting for `PYTHON3_SERVICES` is enabled. Or, reach out to your DataRobot representative for help verifying the configuration.
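For reference, the check described in the FAQ above amounts to confirming a single flag in the cluster configuration. A minimal sketch of the relevant `config.yaml` entry (illustrative only; surrounding keys are omitted and the exact format can vary by installation):

```yaml
# config.yaml (Self-Managed AI Platform) -- illustrative fragment only.
# When enabled, new projects and models use Python 3 services.
PYTHON3_SERVICES: true
```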
python2
--- title: Deprecations and migrations description: Documentation related to deprecated functionality and guides on how to migrate your workflows. --- # Deprecations and migrations {: #deprecation-migration-guides } Deprecation _notices_ are included within the Cloud announcement pages. The pages within _this_ section provide more detailed guidance related to deprecated features. When applicable, a guide that explains the process for migrating your workflows to support the new functionality is provided. * [Python 2](python2) deprecation/migration guide * Deprecated: [Batch scoring script](python-batch-scoring) * Deprecated: [Open source and USER models](oss-user-models)
index
---
title: Open source and User models
description: Explains the deprecation of Open Source and User models.
---

# Open source and User models {: #open-source-and-user-models }

As of November 2022, DataRobot disabled all models containing User/Open source (“user”) tasks. See the [release announcement](june2022-announce#user-open-source-models-deprecated-and-soon-disabled) for full information on identifying these models. Use the [Composable ML](cml/index) functionality to create custom models. There are native solutions for all currently supported open source models, and disabling and replacing the deprecated model types addresses any potential security concerns.

To determine whether your model is of the open source/User type, open the blueprint, where you will see the task name:

![](images/rn-oss-deprecate.png)

Additionally, you can see the task listed in the model description on the Leaderboard.

## Deprecated and disabled functionality {: #deprecated-and-disabled-functionality }

The following table describes the stages of deprecation:

Stage | Impact | Timeline
----- |------- | --------
*Deprecated* | Existing models cannot be deployed, but predictions and insights can be computed. New models cannot be created. | 8/2022 - 11/2022
*Disabled* | Model data is read-only. Any pre-computed data can be viewed for reference and will be retained long-term for audit purposes. Any actions involving compute jobs are disabled (computing insights charts, etc.). | 11/2022 and beyond

=== "Managed AI Platform (SaaS)"

    Consider the dates below:

    **August 2022**

    * Open source and User models, and any blenders that include these models, are **deprecated**.

    **November 2022**

    * Open source and User models, and any blenders that include these models, are **disabled**.

=== "Self-Managed AI Platform"

    Consider the dates below:

    * Open source and User models, and any blenders that include these models, will be **disabled** in release 9.0.
oss-user-models
---
title: AutoML (V8.0)
description: DataRobot Release 8.0 AutoML release notes
---

# AutoML (V8.0) {: #automl-v80 }

_March 14, 2022_

The DataRobot v8.0.0 release includes many new AutoML features and enhancements described in this section. See also the new features described in the [time series (AutoTS)](v8.0.0-ats) and [MLOps](v8.0-mlops) release notes.

Release v8.0 provides updated UI string translations for the following languages:

* Japanese
* French
* Spanish
* Korean

See these important [deprecation](#deprecation-notices) announcements for information about changes to DataRobot's support for older, expiring functionality. This document also describes DataRobot's [fixed](#customer-reported-fixed-issues) issues.

## Data enhancements {: #data-enhancements }

### Active Directory support added for Azure Synapse and SQL {: #active-directory-support-added-for-azure-synapse-and-sql }

DataRobot now supports Microsoft Azure Synapse and Azure SQL as data sources. When [adding a new data connection](data-conn#create-a-new-connection), both tiles will be listed among the available stores.

![](images/rn8-azure-1.png)

When defining the parameters of the connection, you can specify the authentication method as **SqlPassword** or **ActiveDirectoryPassword**. Selecting **ActiveDirectoryPassword** allows you to use your Azure identity instead of credentials defined in the database. For information on Active Directory, see the [client setup requirements](https://docs.microsoft.com/en-us/sql/connect/jdbc/connecting-using-azure-active-directory-authentication?view=sql-server-ver15#client-setup-requirements){ target=_blank }.

![](images/rn8-azure-2.png)

### Exasol JDBC driver supported in DataRobot and batch predictions {: #exasol-jdbc-driver-supported-in-datarobot-and-batch-predictions }

DataRobot now supports the latest version of the Exasol JDBC Driver as a data source. When adding a new data connection, an Exasol tile will be listed among the available stores.
After the data connection is set up, you can create batch prediction jobs that score to and from your Exasol database.

![](images/rn8-exasol-1.png)

### Google Cloud and Azure Storage SDK upgraded for improved reliability {: #google-cloud-and-azure-storage-sdk-upgraded-for-improved-reliability }

Storage is a fundamental part of the DataRobot infrastructure because it is used to store datasets, models, insights, etc. To keep the storage subsystem reliable and performant, DataRobot upgraded the Google Cloud and Azure Storage SDK versions for Self-Managed AI Platform installations.

### Verified and updated list of data sources for batch predictions {: #verified-and-updated-list-of-data-sources-for-batch-predictions }

DataRobot verified existing data sources that support batch predictions and added support for new data sources. See [Data sources supported for batch predictions](batch-prediction-api/index#data-sources-supported-for-batch-predictions) for an up-to-date list.

## Feature Discovery features {: #feature-discovery-features }

### Improvements to Feature Discovery feature derivation process {: #improvements-to-feature-discovery-feature-derivation-process }

DataRobot reduced the likelihood that no features are generated after relationships are defined, which could occur when the feature derivation window (FDW) was so large that the computation became too complex, or when a column was set as both the time index and the join column.

## Modeling features {: #modeling-features }

### Duplicate applications in the App Builder {: #duplicate-applications-in-the-app-builder}

With this release, you can create a copy of an existing application, so new users can leverage the existing work without having to spend the time and effort recreating every aspect of the app’s charts, predictions, scenarios, simulations, and more.
When an application is shared, any changes made by the new user affect the original owner’s application; however, you can now duplicate an application and share the copy&mdash;allowing new users to access pre-existing content without disrupting or changing the work of the original owner’s app. To duplicate an application, go to **Applications > Current Applications**. Click the menu icon ![](images/icon-menu.png) of the app you want to copy and select **Duplicate**. For more information, see [Duplicate applications](current-app#duplicate-applications). ![](images/current-app-9.png) ### Improvements to the no code App Builder {: #improvements-to-the-no-code-app-builder } This release introduces several improvements to the no code App Builder: - In an application’s **Settings**, you can now specify the number of decimal places to display for predictions throughout an application. ![](images/rn8-app-adhoc-1.png) - When making single record predictions, click **Populate averages** to enter the average feature value for each visible field. ![](images/rn8-app-adhoc-2.png) - You can now add an Adjusted Prediction Threshold column to the All Rows widget for binary classification projects. To add this column, go to **Build** mode, select the **All Rows** widget, and click **Manage** on the left. Click the orange arrow next to **Adjusted Prediction Threshold** and click **Save**. ![](images/rn8-app-adhoc-3.png) - Additional date formats are now supported in applications. ### Support for unlimited labels goes GA {: #support-for-unlimited-labels-goes-ga } This release brings enhanced support for multicategorical targets, now allowing any number of labels (“[unlimited multilabel modeling](multilabel)”). Previously, projects were limited to 100 labels. When DataRobot builds multilabel projects, it uses up to 1,000 labels in each multicategorical feature. 
You can either allow the application to trim extraneous labels or you can specify which labels to trim in the [Feature Constraints](feature-con#trim-target-labels) section of advanced options. Additionally, export of labelwise Lift Charts via **Predict → Download** is now enabled. ![](images/rn-trim-target-labels.png) !!! note Availability of multilabel modeling is dependent on your DataRobot package. If it is not enabled for your organization, contact your DataRobot representative for more information. ### Improved blueprint handling in the AI Catalog {: #improved-blueprint-handling-in-the-ai-catalog } The ability to simultaneously train multiple blueprints (“bulk train”) from the AI Catalog has been improved by helping to identify errored blueprints. Now, the **Train multiple blueprints** modal displays a color-coded message that indicates status for the group of blueprints in the training request and the number of affected blueprints. You can hover over an error or warning to display a tooltip containing additional information. ![](images/rn-8-bulk-train.png) Other usability improvements include: - A new bulk delete feature allows you to select multiple blueprints for deletion and confirm, via modal, the specific blueprints to ensure an accidental deletion does not occur. - When selecting blueprints from the AI Catalog, your selections persist as you page through the inventory. From any page, you can apply bulk actions such as training, validating, or deletion. ### Blueprint editor enhancements {: #blueprint-editor-enhancements } This release brings improvements to the blueprint editor used for [composable ML](cml-overview). #### Add and edit blueprint objects Adding and editing blueprint objects is now more intuitive. In past releases, you needed to click a node to access actions for the node. Now, you only need to hover over a node to access the actions. You can now perform actions directly on connectors, as well&mdash;a more intuitive approach. 
Hover over a node to access the actions described below:

![](images/rn-cml-node.png)

| action icon | description |
|---|---|
| ![](images/icon-pencil.png) | Modify a node. |
| ![](images/icon-plus.png) | Add a node. |
| ![](images/rn-icon-diagonal-arrow.png) | Connect nodes. |
| ![](images/icon-trash.png) | Remove a node. Removing nodes removes downstream nodes. |

Hover over a connector to access the actions described below.

![](images/rn-cml-connector.png)

| action icon | description |
|---|---|
| ![](images/icon-plus.png) | Add a node. |
| ![](images/icon-trash.png) | Remove a connector. |

#### Blueprint validation

Blueprint validation has also been enhanced. When you hover over a node that contains warnings (highlighted in yellow), the warning messages display. You can now train on blueprints that contain warnings. To do so, click **Train with warnings**.

![](images/rn-blueprint-editor-warnings.png)

#### Remove data type nodes

You can now remove data type nodes directly.

![](images/rn-blueprint-editor-rm-data-type.png)

In past releases, you needed to clear check boxes in the **Input data available** window to remove data type nodes.

![](images/rn-blueprint-editor-available-data-types.png)

See the documentation on [modifying blueprints](cml-blueprint-edit) for details.

### NLP Fine-Tuner blueprints for multi-modal datasets in any language {: #nlp-fine-tuner-blueprints-for-multi-modal-datasets-in-any-langauge }

Natural language processing (NLP) deals with the interaction between computers and humans using natural language and is essential for every AutoML system. Fine-tuning is a process that takes a model that has already been trained for a given task and makes it perform a second, similar task, as long as the second dataset does not differ drastically from the first. NLP Fine-Tuner blueprints allow you to use models previously trained for NLP and fine-tune them, similar to existing functionality in Visual AI.
Doing so increases accuracy, and lets you adjust models to a specific use case and downstream task. NLP Fine-Tuner blueprints are available in any language for multi-modal datasets, multilabel datasets, and Composable ML.

### Improvements to External Predictions insights {: #improvements-to-external-prediction-insights }

You can now configure up to 100 external prediction column names in the [**External Predictions**](external-preds#set-advanced-options) tab of advanced options.

## Admin enhancements {: #admin-enhancements }

### Python 3 support for Hadoop clusters {: #python-3-support-for-hadoop-clusters }

Following on from our announcement regarding [Python 2 deprecation](python2), support for Python 3 on Hadoop clusters is now available. New installations will use Python 3 for projects and models. Those upgrading from previous releases will see support for both Python 2 and Python 3 side-by-side so that pre-existing projects will continue to function as expected. See the [deprecation notice and Python 3 migration guide](python2) for more information.

The following deprecated features are not supported with Python 3 projects, but Python 2 projects containing these features will work as expected:

* Scaleout models
* Hadoop Scoring

### RHEL 8.5 now supported in on-premise installations {: #rhel-85-now-supported-in-on-premise-installations }

DataRobot now officially supports Red Hat Enterprise Linux 8.4 (RHEL 8.4) and 8.5 (RHEL 8.5) as installation targets. Additionally, CentOS Linux 8 reached End of Life (EOL) on December 31, 2021, and is no longer supported.
​ ### Account and profile settings reorganized {: #account-and-profile-settings-reorganized } ​ To improve the user account management experience, the **Profile** page now includes the following tabs for individual user preferences, including the settings previously located on the **Settings** page: | Tab | Settings | | ---------- | ----------- | | [**Account**](profile-settings#edit-account-and-profile-information) | The original **Profile** page settings. | | [**Security**](profile-settings#configure-security-settings) | The following individual user security settings: <ul><li>Change Your Password</li><li>Two-Factor Authentication</li></ul> | | [**System**](profile-settings#configure-system-settings) | The following individual user system settings: <ul><li>Language</li><li>Theme</li><li>CSV export</li></ul> | | [**Notifications**](profile-settings#configure-notification-settings) | The following individual user notification settings: <ul><li>Mute all email notifications</li><li>Enable email notification when Autopilot has finished</li><li>Enable browser notification when Autopilot has finished</li></ul> | Users with proper access and permissions can [view and manage feature settings on the **Settings** page](user-settings). For all other users, the **Settings** page is deprecated. {% if 'enterprise' in tags %} ### Skip the DataRobot login when SAML is enforced {: #skip-the-datarobot-login-when-saml-is-enforced } If your organization has SAML authentication enforced, you can now bypass the DataRobot login screen, automatically redirecting users to the SAML login page from the application url. To skip the login screen, set the following configuration setting to `TRUE` in your config.yaml: `SKIP_LOGIN_UI_IF_SAML_SSO_IS_ENFORCED`. {% endif %} ## API Enhancements {: #api-enhancements } The following is a summary of API new features and enhancements. 
Go to the [API Documentation home](https://docs.datarobot.com/en/docs/api/index.html){ target=_blank } for more information on each client.

!!! tip
    DataRobot highly recommends updating to the latest API client for Python and R.

### New Features {: #new-features }

API release v2.28.0 introduces new routes for computing and retrieving samples for Image Augmentation Lists:

* `POST /api/v2/imageAugmentationLists/(augmentationId)/samples/`
* `GET /api/v2/imageAugmentationLists/(augmentationId)/samples/`

### Enhancements {: #enhancements }

Version 2.28.0 adds new information indicating whether retrieved cluster insights are up to date:

* `GET /api/v2/projects/(projectId)/models/(modelId)/clusterInsights/`

New properties have been added to a leaderboard item: `bias_mitigation` and `bias_mitigation_parent_lid`.

* `GET /api/v2/projects/(projectId)/models/(modelId)/`

### API deprecation notices {: #api-deprecation-notices }

The `customModelType` parameter is now deprecated in the following routes. It will be removed completely in a later release.

* `POST /api/v2/customModels/`
    * This endpoint only creates custom inference models.
    * To create a Custom Training Task (`customModelType=training`), use the dedicated customTasks endpoint `POST /api/v2/customTasks/`.
* `GET /api/v2/customModels/`
    * This endpoint only lists custom inference models.
    * To list custom training tasks (`customModelType=training`), use the dedicated customTasks endpoint `GET /api/v2/customTasks/`.

The following Image Augmentation Samples routes, which are not related to Image Augmentation Lists, are deprecated and will be removed:

* `POST /api/v2/imageAugmentationSamples/`
    * To create image augmentation samples, create an image augmentation list and generate samples for it using the endpoint `POST /api/v2/imageAugmentationLists/(augmentationId)/samples/`.
* `GET /api/v2/imageAugmentationSamples/(samplesId)/`
    * To retrieve image augmentation samples, use the endpoint `GET /api/v2/imageAugmentationLists/(augmentationId)/samples/`.

## Public preview features {: #public-preview-features }

### Bias mitigation now available for binary classification projects {: #bias-mitigation-now-available-for-binary-classification-projects}

Bias mitigation, a technique for reducing biased behavior in Leaderboard models, is now available as a public preview feature. It works by augmenting blueprints with a pre- or post-processing task that attempts to reduce bias across classes in a protected feature. You can apply mitigation either automatically (as part of Autopilot) or manually (after Autopilot completes). When run automatically, you set mitigation criteria as a part of the Bias and Fairness advanced option settings. Autopilot then applies mitigation to the top three Leaderboard models. Or, once Autopilot completes, you can apply mitigation to any non-blender, unmitigated model available from the Leaderboard. Finally, compare mitigated versus unmitigated models from the Bias vs Accuracy insight.

For more information, see the [documentation](fairness-metrics#set-mitigation-techniques).

![](images/bias-mit-11.png)

## Deprecation notices {: #deprecation-notices }

Note the following to better plan for later migration to new releases.

### TensorFlow blueprints deprecated and soon to be removed {: #tensorflow-blueprints-deprecated-and-soon-to-be-removed }

TensorFlow (TF) blueprints are being deprecated with this release, making them unavailable for building in new projects. They are being replaced by Keras blueprints, which in most cases outperform TF for both speed and accuracy. TF blueprints built as part of an existing project will still function normally. These blueprints are no longer searchable in user blueprints, either new or existing.
### Feature Fit visualization deprecated in favor of Feature Effects {: #feature-fit-visualization-deprecated-in-favor-of-feature-effects }

The **Feature Fit** visualization, available from the Leaderboard under the **Evaluate** tab, is deprecated and will soon be removed. For insight on a feature’s impact on model predictions, use **Understand > Feature Effects** instead.

Both visualizations report a feature’s model-agnostic importance. Feature Fit, calculated during EDA2, charted results based on a feature’s importance score. This score is still available from the **Data** page. Feature Effects ranks features based on the feature impact score and shows how changes to a feature’s value would affect a model’s predictions.

Feature Fit will be removed in on-premise release 9.0.0. For managed AI Platform users, Feature Fit will be removed within the next quarter.

## Customer-reported fixed issues {: #customer-reported-fixed-issues }

The following issues have been fixed since release [7.3.5](v7.3.5-aml).

### Platform {: #platform }

- EP-2285: Testing facts failed to set target for mongo version.
- MODEL-8321: Fixes an internal service error (ISE) when selecting a character level analyzer along with any tokenization method besides `None`.
- VIZAI-3055: Removes the ability to create, via the API, multilabel projects with OTV or time series partitioning, which are not supported for multilabel project types.
- VIZAI-3062: Enables Feature Discovery for multilabel projects.

### Predictions {: #predictions }

- PRED-7153: Fixes an issue with frozen models causing the **Predict** tab on the Leaderboard to not render properly.
- PRED-7191: Fixes an issue with the **Make Predictions** tab on the Leaderboard when attempting to use a derived feature as an optional pass-through column.

_All product and company names are trademarks&trade; or registered&reg; trademarks of their respective holders. Use of them does not imply any affiliation with or endorsement by them_.
v8.0.0-aml
--- title: Time series (V8.0) description: DataRobot Release 8.0 time series release notes --- # Time series (V8.0) {: #time-series-v80 } _March 14, 2022_ The DataRobot v8.0.0 release includes new [time series](#new-time-series-features) features, described below. See also details of Release 8.0.0 in the [AutoML](v8.0.0-aml) and [MLOps](v8.0-mlops) release notes. ## New time series features {: #new-time-series-features } See details of the following new GA feature: * [Time series Predictor support in the AI App Builder](#time-series-predictor-support-in-the-ai-app-builder) See details of the following public preview feature: * [Scoring Code for time series](#scoring-code-for-time-series) ## Generally available features {: #generally-available-features } The following new features are now generally available. ### Time series Predictor support in the AI App Builder {: #time-series-predictor-support-in-the-ai-app-builder } Now generally available, you can build AI-powered Predictor applications for both multi- and single-series projects. In your time series deployment, click the actions menu and select **Create Application**. Once created, upload batch predictions to populate the new Time Series widget, which allows you to navigate between multiple time unit resolutions, view calendar events (if uploaded), compare forecasted vs actual values for new data, and view insights for Prediction Explanations over time. ![](images/ts-app-3.png) For details, see [Time series Predictor applications](create-app#time-series-applications). ## Public preview features {: #public-preview-features } The following features are part of the public preview program. 
## Scoring Code for time series {: #scoring-code-for-time-series } Scoring Code public preview capabilities for time series have expanded with this release, bringing [Scoring Code](scoring-code/index) support for: * [Time series parameters for scoring at the command line](#time-series-parameters-for-cli-scoring) * [Segmented modeling](#scoring-code-for-segmented-modeling) * [Prediction intervals](#prediction-intervals-in-scoring-code) !!! note If you want Scoring Code support for a project using [calendars](ts-adv-opt#calendar-files) and your calendar has only full-day events (such as holidays), ask your platform administrator to enable the *Disable High-Resolution Calendars for Time Series Projects* feature flag for your account. ### Time series parameters for CLI scoring {: #time-series-parameters-for-cli-scoring } DataRobot supports using [scoring at the command line](scoring-cli) for time series deployments. You can now specify the [time series parameters](sc-time-series#time-series-parameters-for-cli-scoring) for forecast point, date format, prediction start and end dates, and prediction intervals. ### Scoring Code for segmented modeling {: #scoring-code-for-segmented-modeling } With [segmented modeling](ts-segmented), you can build individual models for segments of a multiseries project. DataRobot then merges these models into a Combined Model. Now you can [generate Scoring Code for the Combined Model](sc-time-series#scoring-code-for-segmented-modeling-projects). To generate Scoring Code, each segment champion of the Combined Model must have Scoring Code: ![](images/sc-segmented-scoring-code.png) After you deploy the Combined Model, download Scoring Code as normal. ### Prediction intervals in Scoring Code {: #prediction-intervals-in-scoring-code } You can now include [prediction intervals in the downloaded Scoring Code JAR](sc-download-leaderboard) for a time series model. 
To include prediction intervals in your Scoring Code JAR, in your deployment, click **Predictions > Portable Predictions** and select **Scoring Code**. Toggle on **Include prediction intervals**. ![](images/sc-prediction-intervals.png) For more details on the time series Scoring Code features, see [Scoring Code for time series projects](sc-time-series). _All product and company names are trademarks&trade; or registered&reg; trademarks of their respective holders. Use of them does not imply any affiliation with or endorsement by them_.
v8.0.0-ats
---
title: MLOps (V8.0)
description: DataRobot Release 8.0 MLOps release notes
---

# MLOps (V8.0) {: #mlops-v80}

_March 14, 2022_

The DataRobot MLOps v8.0 release includes many new features and capabilities, described below. See also details of Release 8.0 in the [AutoML](v8.0.0-aml) and [time series (AutoTS)](v8.0.0-ats) release notes.

Release v8.0 provides updated UI string translations for the following languages:

* Japanese
* French
* Spanish
* Korean

## New features and enhancements {: #new-features-and-enhancements }

See details of new features below:

**New deployment features**

* [Cancel retraining policies](#cancel-retraining-policies)
* [DataRobot MLOps library and third-party spooler types](#datarobot-mlops-library-and-third-party-spooler-types)
* [Challenger accuracy](#challenger-accuracy)
* [Challenger insights](#challenger-insights)
* [Database integrations removed from integrations tab](#database-integrations-removed-from-integrations-tab)

**New prediction features**

* [Enhancements to Prediction Batch Job Definitions](#enhancements-to-prediction-batch-job-definitions)
* [Leaderboard Scoring Code enhancements](#leaderboard-scoring-code-enhancements)
* [Scoring Code in Snowflake](#scoring-code-in-snowflake)
* [Batch Prediction write support for Presto](#batch-prediction-write-support-for-presto)

<!--**New Model Registry features**-->

**New governance features**

* [Data drift separated into target monitoring and feature tracking](#data-drift-separated-into-target-monitoring-and-feature-tracking)

**Public preview features**

* [MLOps agent event log](#mlops-agent-event-log)
* [Multipart upload for batch prediction API](#multipart-upload-for-batch-prediction-api)

## New deployment features {: #new-deployment-features }

### Cancel retraining policies {: #cancel-retraining-policies }

To manage the automatic retraining of deployed models, you set up [retraining policies](set-up-auto-retraining#set-up-retraining-policies).
The policies can be triggered manually or in response to a schedule, drift status, or accuracy status. Now you can cancel policy runs that are in progress or scheduled. You cannot cancel a run if it has finished successfully, has failed, has a status of "Creating challenger" or "Replacing model," or has already been cancelled.

![](images/rn-retrain-manage-policies.png)

### DataRobot MLOps library and third-party spooler types {: #datarobot-mlops-library-and-third-party-spooler-types }

The `datarobot-mlops` library no longer includes AWS (SQS) and RabbitMQ dependencies by default. If you are using these spooler types, you must install the spooler-specific dependencies. See the documentation on [installing the DataRobot MLOps metrics reporting library](https://pypi.org/project/datarobot-mlops/) for details.

### Challenger accuracy {: #challenger-accuracy }

On a deployment's **Challengers** tab, the **Deployment Challengers** overview now includes an **Accuracy** column for the champion and every challenger. This column reports a model's accuracy score for the selected date range and, for challenger models, a comparison with the champion's accuracy score. You can use the **Accuracy metric** dropdown menu to compare different metrics.

![](images/rn-challenger-accuracy.png)

For more information on challenger accuracy comparison, see [Challenger models overview](challengers#challenger-models-overview).

### Challenger insights {: #challenger-insights }

Now generally available, the **Model Insights** on the **Model Comparison** tab allow you to compare the composition, reliability, and behavior of champion and challenger models using powerful visualizations. Choose two models to go head-to-head to determine if a challenger model outperforms the current champion and should replace the champion model in production.
After you select two models, DataRobot computes the following model comparisons for those models: === "Accuracy" The **Accuracy** list contains two columns to report accuracy metrics for each model. Highlighted numbers represent favorable values. In this example, the champion, **Model 1**, outperforms **Model 2** for most metrics shown: ![](images/challenger-compare-accuracy.png) === "Dual lift" A [dual lift chart](model-compare#dual-lift-chart) is a visualization comparing how two selected models underpredict or overpredict the actual values across the distribution of their predictions. ![](images/challenger-compare-dual-lift.png) === "Lift" A [lift chart](lift-chart) depicts how well a model segments the target population and how capable it is of predicting the target, allowing you to visualize the model's effectiveness. ![](images/challenger-compare-lift.png) === "ROC" !!! info "Availability information" The ROC tab is only available for binary classification projects. An [ROC curve](roc-curve) plots the true-positive rate against the false-positive rate for a given data source. Use the ROC curve to explore classification, performance, and statistics for the models you're comparing. ![](images/challenger-compare-roc.png) === "Predictions Difference" The **Predictions Difference** histogram shows the percentage of predictions that fall within the match threshold you specify in the **Prediction match threshold** field (along with the corresponding numbers of rows). ![](images/challenger-compare-predictions-diff-1.png) The list below the histogram shows the 1000 most divergent predictions (in terms of absolute value). The **Difference** column shows how far apart the predictions are. ![](images/challenger-compare-predictions-diff-2.png) For more information on these challenger insights, see [Challenger model comparisons](challengers#challenger-model-comparisons). 
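As a rough illustration of the **Predictions Difference** comparison described above, the following sketch (not DataRobot code; the function and variable names are hypothetical) computes the share of rows whose champion and challenger predictions fall within a match threshold, and ranks rows by how far apart the two predictions are:

```python
# Illustrative sketch only -- not DataRobot's implementation.
# Assumes two models scored the same rows, producing parallel lists
# of numeric predictions.

def predictions_difference(champion_preds, challenger_preds, match_threshold):
    """Return the percentage of rows whose predictions match within the
    threshold, plus (row_index, abs_difference) pairs sorted from most
    to least divergent."""
    diffs = [abs(a - b) for a, b in zip(champion_preds, challenger_preds)]
    matched = sum(d <= match_threshold for d in diffs)
    pct_matched = 100.0 * matched / len(diffs)
    # The UI lists up to the 1,000 most divergent predictions; here we
    # simply sort all rows by absolute difference, largest first.
    most_divergent = sorted(enumerate(diffs), key=lambda x: x[1], reverse=True)
    return pct_matched, most_divergent

pct, divergent = predictions_difference(
    [0.10, 0.55, 0.90], [0.12, 0.70, 0.91], match_threshold=0.05
)
print(round(pct, 1))   # 66.7: two of the three rows match within 0.05
print(divergent[0])    # row index 1 has the largest absolute difference
```

The histogram in the UI then buckets these absolute differences, while the list below it shows the most divergent rows first.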
### Database integrations removed from Integrations tab {: #database-integrations-removed-from-integrations-tab }

To simplify the prediction database integration process, the **Database** section on the **Settings** > **Integrations** tab is now fully deprecated. This functionality is replaced by the **Predictions** > [**Job Definitions**](batch-pred-jobs) tab. You can still find the [Qlik predictions integration](integration-code-snippets) code snippet, and with the proper permissions the public preview [Tableau Analytics extension](tableau-extension), on the **Integrations** tab. For more information on setting up prediction sources, see [Schedule recurring batch prediction jobs](batch-pred-jobs).

## New prediction features {: #new-prediction-features }

### Enhancements to Prediction Batch Job Definitions {: #enhancements-to-prediction-batch-job-definitions }

![](images/rn-batch-pred-job-def-list.png)

#### Disable a job definition

In previous releases, you could disable a job definition only by editing it and turning off the **Run this job automatically on a schedule** toggle. Now, you can disable the definition by selecting **Disable definition** in the action menu for a job definition. Jobs scheduled from that job definition will cease to run. Select **Enable definition** to resume the jobs.

#### Clone a job definition

You can now create a copy of an existing job definition and update it by selecting **Clone definition** in the action menu for a job definition. Update the fields as needed, and click **Save prediction job definition**. Note that the **Jobs schedule** settings are turned off by default.

#### Prediction source configurations

When you set a data source for a prediction job, DataRobot validates that the data is applicable for the deployed model. DataRobot also displays the user who configured the prediction source, the modification date, and a badge that represents the type of the source (in this case, STATIC).
![](images/rn-prediction-job-description-source-validation.png)

#### Select the default prediction instance

Now when you create a batch job definition, you can use the default prediction instance. The advanced options now include a **Use default prediction instance** toggle:

![](images/rn-prediction-job-default-pred-instance-on.png)

DataRobot checks that the default or previously selected prediction instance is accessible and valid, and if not, displays an error message. If you turn off the toggle, you can select a different prediction instance:

![](images/rn-prediction-job-default-pred-instance-off.png)

#### Snowflake and Synapse connection improvements

The following improvements have been made to the prediction source and destination configurations for Snowflake and Synapse connections.

![](images/rn-prediction-source-snowflake.png)

The **Use external stage** options for Snowflake and Synapse are now optional. Toggling them off updates the connection to use a JDBC adapter directly. You can now switch between a JDBC Snowflake connection setting and a Snowflake top-level connection without losing the connection details. JDBC-Snowflake and JDBC-Synapse connections now display as top-level connections, with the **Use external stage** option toggled off.

#### Batch job filtering

From the **Prediction Jobs** tab, you can now filter by prediction job ID, along with the existing filters: status, job type (based on the method used to generate the job), job start and end time, deployment, job definition ID, and prediction environment.

![](images/rn-prediction-jobs-filter.png)

### Leaderboard Scoring Code enhancements {: #leaderboard-scoring-code-enhancements }

The Leaderboard Scoring Code functionality on the **Portable Predictions** page has been updated so that it is consistent with that of the **Portable Predictions** page for deployments.
The page now includes the option of including [Prediction Explanations](pred-explain/index) in the Scoring Code as well.

![](images/rn-scdl-1.png)

See [Download Scoring Code from the Leaderboard](sc-download-leaderboard) for details.

### Scoring Code in Snowflake {: #scoring-code-in-snowflake }

Now generally available, you can use Scoring Code as a UDF in Snowflake. Bringing Scoring Code inside of the Snowflake database removes the need to extract and load data, resulting in a significant decrease in the time to score large datasets on comparable infrastructure. See how to [generate UDF Scoring Code](snowflake-sc).

### Batch Prediction write support for Presto {: #batch-prediction-write-support-for-presto }

You can now write prediction data to Presto. To do so, set up Presto as a JDBC data connection. In your batch prediction job definition (**Predictions > Job Definitions**), select **JDBC** as the **Prediction destination**:

![](images/rn-batch-pred-destination-jdbc.png)

From the list of connectors, select the Presto connector:

![](images/rn-batch-pred-select-presto.png)

Enter credentials:

![](images/rn-batch-pred-presto-credentials.png)

Select the schema:

![](images/rn-batch-pred-presto-schema.png)

Select the output table or create a new table:

![](images/rn-batch-pred-presto-tables.png)

!!! note
    Presto requires the use of `auto commit: true` for many of the underlying connectors, which can delay writes.

## New governance features {: #new-governance-features }

### Data drift separated into target monitoring and feature tracking {: #data-drift-separated-into-target-monitoring-and-feature-tracking }

To provide more granular control of data drift, accuracy, and fairness monitoring, the **Enable data drift tracking** setting on a deployment's **Settings > Data** tab is now divided into two settings: **Enable target monitoring** and **Enable feature drift tracking**.
![](images/rn-split-data-drift.png)

You need to enable target monitoring to track accuracy ([Accuracy tab](deploy-accuracy)) and fairness ([Bias and Fairness tab](mlops-fairness)). Feature drift tracking must be enabled to monitor for data drift ([Data Drift tab](data-drift)). These settings are enabled by default. If you turn off either setting, you can still view historical data in the visualizations on the corresponding tabs.

## Public preview features {: #public-preview-features }

### MLOps agent event log {: #mlops-agent-event-log }

Now available for public preview, on a deployment's **Service Health** tab, you can view MLOps agent **Management** events (e.g., deployment actions) and **Monitoring** events (e.g., spooler channel events). Using **Monitoring Spooler Channel** error events, you can quickly diagnose and fix [spooler configuration](spooler) issues.

![](images/rn-mlops-agent-events.png)

To view **Monitoring** events, you must provide a `predictionEnvironmentID` in the agent configuration file (`conf/mlops.agent.conf.yaml`). If you haven't already installed and configured the MLOps agent, see the [Installation and configuration](agent) guide. For more information on enabling and reading the MLOps agent event log, see the [documentation](agent-event-log).

### Multipart upload for batch prediction API {: #multipart-upload-for-batch-prediction-api }

Now available for public preview, multipart upload for the batch prediction API allows you to upload scoring data through multiple files to improve file intake for large datasets. The multipart upload process calls for multiple `PUT` requests followed by a `POST` request (`finalizeMultipart`) to finalize the upload manually. This feature adds two new endpoints to the batch prediction API:

| Endpoint | Description |
|----------|-------------|
| `PUT /api/v2/batchPredictions/:id/csvUpload/part/0/` | Upload scoring data in multiple parts to the URL specified by `csvUpload`. Increment `0` by 1 in sequential order for each part of the upload. |
| `POST /api/v2/batchPredictions/:id/csvUpload/finalizeMultipart/` | Finalize the multipart upload process. Make sure each part of the upload has finished before finalizing. |

The feature adds two new intake settings for the local file adapter:

| Property | Type | Default | Description |
|-----------|------|---------|-------------|
| `intakeSettings.multipart` | boolean | `false` | <ul><li>`true`: Requires you to submit multiple files via `PUT` request and finalize the process manually via `POST` request (`finalizeMultipart`).</li><li>`false`: Finalizes intake after one file is submitted via `PUT` request.</li></ul> |
| `intakeSettings.async` | boolean | `true` | <ul><li>`true`: Starts the scoring job when the initial `PUT` request for file intake is made.</li><li>`false`: Postpones the scoring job until the `PUT` request resolves or the `POST` request for `finalizeMultipart` resolves.</li></ul> |

!!! note
    For more information on the batch prediction API and local file intake, see [Batch Prediction API](batch-prediction-api/index) and [Prediction intake options](intake-options#local-file-streaming). For more information on the multipart upload for batch predictions process, see the [public preview documentation](batch-pred-multipart-upload).

_All product and company names are trademarks&trade; or registered&reg; trademarks of their respective holders. Use of them does not imply any affiliation with or endorsement by them_.
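The multipart upload endpoints described in this release follow a fixed request sequence: one `PUT` per part with an incrementing index, then a single finalizing `POST`. The sketch below builds that sequence without making any network calls; the job ID (`5f3a`) is a placeholder for illustration, and this is not an official client:

```python
def multipart_upload_plan(job_id: str, n_parts: int):
    """Build the ordered (HTTP method, path) sequence for a multipart upload.

    One PUT per part with an incrementing part index, then a single
    finalizing POST, per the documented endpoints.
    """
    base = f"/api/v2/batchPredictions/{job_id}/csvUpload"
    plan = [("PUT", f"{base}/part/{i}/") for i in range(n_parts)]
    plan.append(("POST", f"{base}/finalizeMultipart/"))
    return plan

# Intake settings that enable this flow for the local file adapter:
intake_settings = {"type": "localFile", "multipart": True, "async": True}

# "5f3a" is a placeholder job ID, used here for illustration only.
plan = multipart_upload_plan("5f3a", 3)
for method, path in plan:
    print(method, path)
```

Each path in the plan would be issued against the DataRobot host with your own authentication headers; the scoring job's start time depends on the `intakeSettings.async` value described above.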
v8.0-mlops
---
title: Version 8.0.x
description: DataRobot Release 8.0 release announcements index page.
---

# Version 8.0.x {: #version-80x }

Following are the AutoML, time series, and MLOps announcements for the features that make up DataRobot's 8.0.x Self-Managed AI Platform releases, first released _March 14, 2022_.

* [v8.0.x AutoML](v8.0.0-aml)
* [v8.0.x time series](v8.0.0-ats)
* [v8.0.x MLOps](v8.0-mlops)
* [Maintenance release notes](8.0-maintenance/index) for DataRobot v8.0.x.
index
---
title: MLOps (V7.2)
description: DataRobot Release 7.2 MLOps release notes
---

# MLOps (V7.2) {: #mlops-v72 }

_September 13, 2021_

The DataRobot MLOps v7.2 release includes many new features and capabilities, described below. Release v7.2 provides updated UI string translations for the following languages:

* Japanese
* French
* Spanish

## New features and enhancements {: #new-features-and-enhancements }

See details of new features below:

* [New deployment features](#new-deployment-features)
* [New prediction features](#new-prediction-features)
* [New model registry features](#new-model-registry-features)
* [New governance features](#new-public-preview-governance-features)

## New deployment features {: #new-deployment-features }

Release v7.2 introduces the following new deployment features.

### Set an accuracy baseline for remote models {: #set-an-accuracy-baseline-for-remote-models }

Now generally available, you can set an accuracy baseline for models running in remote prediction environments by uploading holdout data. Provide holdout data when [registering](#register-external-model-packages) an external model package and specify the column containing predictions. Once added, you can enable challenger models and target drift for external deployments.

[Accuracy baseline documentation](reg-create#set-an-accuracy-baseline)

### Deployment reports {: #deployment-reports }

Now generally available, you can generate a deployment report on demand, detailing essential information about a deployment's status, such as insights about service health, data drift, and accuracy statistics (among many other details). Additionally, you can create a report schedule that acts as a policy to automatically generate deployment reports based on the defined conditions (frequency, time, and day). When the policy is triggered, DataRobot generates a new report and sends an email notification to those who have access to the deployment.
![](images/dep-report-rn.png)

[Deployment report documentation](deploy-reports)

### MLOps management agent {: #mlops-management-agent }

Now available as a public preview feature, the management agent provides a standard mechanism for automating model deployments to any type of infrastructure. It pairs automated deployment with automated monitoring to ease the burden of managing remote models in production, which is especially important with critical MLOps features such as challengers and retraining. The agent, accessed from the DataRobot application, ships with an assortment of plugins that support custom configuration. Additionally, v7.2 introduces enhancements to better understand model launching and replacement events, and the ability to configure a service account for prediction environments used by the agent.

![](images/mgmt-agent-1.png)

[Management agent documentation](mgmt-agent/index)

### Enhancements to Automated Retraining {: #enhancements-to-automated-retraining }

Automatic retraining has received enhancements to support additional Autopilot and model selection options. You can now run retraining in any of the Autopilot modes and create blenders from top models. Additionally, you can search for only models with SHAP value support during modeling and use feature lists with target leakage removed. These enhancements to modeling options provide an improved modeling and post-deployment experience.

{% if 'enterprise' not in tags %}
Managed AI Platform users can now also use automatic retraining for time series deployments.
{% endif %}

![](images/retrain-6.png)

[Automatic retraining documentation](set-up-auto-retraining)

## New prediction features {: #new-prediction-features }

Release v7.2 introduces the following new prediction features.

### Scheduled batch predictions and job definitions

The **Make Predictions** tab for deployments now allows you to create and schedule JDBC and cloud storage prediction jobs directly from MLOps, without utilizing the API.
Additionally, you can use job definitions as flexible templates for creating batch prediction jobs. Then, store definitions inside DataRobot and run new jobs with a single click, an API call, or automatically via a schedule.

![](images/batch-7.png)

[Batch prediction documentation](batch-pred)

### Scoring Code in Snowflake {: #scoring-code-in-snowflake }

Now generally available, DataRobot Scoring Code supports execution directly inside of Snowflake using Snowflake's new Java UDF functionality. This capability removes the need to extract and load data from Snowflake, resulting in a much faster route to scoring large datasets.

![](images/snowflake-sc-rn.png)

### Portable batch predictions {: #portable-batch-predictions }

The Portable Prediction Server can now be paired with an additional container to orchestrate batch prediction jobs using file storage, JDBC, and cloud storage. You no longer need to manually manage the large-scale batching of predictions while utilizing the Portable Prediction Server. Additionally, large batch prediction jobs can be collocated at or near the data, or in environments behind firewalls without access to the public Internet.

![](images/pps-batch-rn.png)

[Portable batch predictions documentation](portable-batch-predictions)

### BigQuery adapter for batch predictions {: #bigquery-adapter-for-batch-predictions }

Now available as a public preview feature, DataRobot supports BigQuery for batch predictions. You can use the BigQuery REST API to export data from a table into Google Cloud Storage (GCS) as an asynchronous job, score data with the GCS adapter, and bulk update the BigQuery table with a batch loading job.
[BigQuery batch prediction example](pred-examples#end-to-end-scoring-with-bigquery)

### Include Prediction Explanations in Scoring Code {: #include-prediction-explanations-in-scoring-code }

You can now receive Prediction Explanations anywhere you deploy a model: in DataRobot, with the Portable Prediction Server, and now in Java Scoring Code (or when executed in Snowflake). Prediction Explanations provide a quantitative indicator of the effect variables have on the predictions, answering why a given model made a certain prediction. Now available as a public preview feature, you can enable Prediction Explanations on the **Portable Predictions** tab (available on the [Leaderboard](sc-download-leaderboard) or from a [deployment](sc-download-deployment)) when downloading a model via Scoring Code.

![](images/pe-pps-rn.png)

## New model registry features {: #new-model-registry-features }

Release v7.2 introduces the following new model registry features.

### Improved custom model performance testing {: #improved-custom-model-performance-testing }

The custom model testing framework has been enhanced to provide a better experience when configuring, executing, and understanding the history of model tests. Configure a model's memory and replicas to achieve desired prediction SLAs. Individual tests now offer specific insights. For example, the performance check insight displays a table showing the prediction latency timings at different payload sample sizes.

![](images/cmodel-test-rn.png)

[Custom model testing documentation](custom-model-test)

### Model Registry compliance documentation {: #model-registry-compliance-documentation }

Now available as a public preview feature, you can generate Automated Compliance Documentation for models from the **Model Registry**, accelerating any pre-deployment review and sign-off that might be necessary for your organization.
DataRobot automates many critical compliance tasks associated with developing a model and, by doing so, decreases the time-to-deployment in highly regulated industries. For each model, you can generate individualized documentation that provides comprehensive guidance on what constitutes effective model risk management. Then, you can download the report as an editable Microsoft Word document (DOCX). The generated report includes the appropriate level of information and transparency necessitated by regulatory compliance demands.

![](images/reg-comp-2.png)

See how to [generate compliance documentation](reg-compliance) from the Model Registry.

### Add files to a custom model with GitLab {: #add-files-to-a-custom-model-from-gitlab }

If you add a custom model to the Workshop, you can now use GitLab Cloud and GitLab Enterprise repositories to pull artifacts and use them to build custom models. Register and authorize a repository to integrate files with the Custom Model Workshop.

![](images/gitlab-1.png)

* [Add files from GitLab (cloud)](custom-model-repos#gitlab-cloud-repository)
* [Add files from GitLab Enterprise](custom-model-repos#gitlab-enterprise-repository)

## New governance features {: #new-public-preview-governance-features }

Release v7.2 introduces the following governance features.

### Bias and Fairness monitoring for deployments {: #bias-and-fairness-monitoring-for-deployments }

Bias and Fairness in MLOps, now generally available, allows you to monitor the fairness of deployed production models over time and receive notifications when a model starts to behave differently for the protected classes. When viewing the deployment inventory with the Governance lens, the **Fairness** column provides an at-a-glance indication of how each deployment is performing based on the fairness criteria.
![](images/bf-mlops-rn.png)

To investigate a failing test, click a deployment in the inventory list and navigate to the **Fairness** tab&mdash;DataRobot calculates Per-Class Bias and Fairness Over Time for each protected feature, allowing you to understand why a deployed model failed the predefined acceptable bias criteria.

![](images/bf-mlops-2-rn.png)

[GA documentation](mlops-fairness) (as of 7.3)

_All product and company names are trademarks&trade; or registered&reg; trademarks of their respective holders. Use of them does not imply any affiliation with or endorsement by them_.
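The per-class fairness checks described in this section compare each protected class's outcomes against a reference. As a generic illustration of that idea (not DataRobot's implementation, metric names, or thresholds), the sketch below computes proportional parity: each class's favorable-outcome rate relative to the best-scoring class, flagged when it falls below an example 0.8 threshold:

```python
from collections import defaultdict

def proportional_parity(outcomes, classes, threshold=0.8):
    """Per-class favorable-outcome rate relative to the best-scoring class.

    Returns {class: (parity_score, passes)}; a class fails when its parity
    score falls below `threshold` (the four-fifths rule, used here only
    as an example criterion).
    """
    totals, favorable = defaultdict(int), defaultdict(int)
    for outcome, cls in zip(outcomes, classes):
        totals[cls] += 1
        favorable[cls] += int(outcome)
    rates = {cls: favorable[cls] / totals[cls] for cls in totals}
    best = max(rates.values())
    return {cls: (rate / best, rate / best >= threshold)
            for cls, rate in rates.items()}

outcomes = [1, 1, 0, 1, 0, 0, 1, 0]   # 1 = favorable prediction
groups   = ["A", "A", "A", "A", "B", "B", "B", "B"]
scores = proportional_parity(outcomes, groups)
# Class A is the reference (parity 1.0); class B's rate is 0.25 vs. 0.75,
# a parity score of about 0.33, which fails the 0.8 threshold.
```

Tracking such scores per prediction window is what turns a one-off bias check into the over-time monitoring described above.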
v7.2-mlops
---
title: Version 7.2.x
description: DataRobot Release 7.2 release announcements index page.
---

# Version 7.2.x {: #version-72x }

Following are the AutoML, time series, and MLOps announcements for the features that make up DataRobot's 7.2.x releases, first released _September 13, 2021_.

* [v7.2.x AutoML](v7.2.0-aml)
* [v7.2.x time series](v7.2.0-ats)
* [v7.2.x MLOps](v7.2-mlops)
* [Maintenance release notes](7.2-maintenance/index) for DataRobot v7.2.x.
index
---
title: AutoML (V7.2)
description: DataRobot Release 7.2 AutoML release notes
---

# AutoML (V7.2) {: #automl-v72 }

_September 13, 2021_

The DataRobot v7.2.0 release includes many new AutoML features and enhancements, described in this section. See also the new features described in the [time series (AutoTS)](v7.2.0-ats) and [MLOps](v7.2-mlops) release notes. See these important [deprecation](#deprecation-notices) announcements for information about changes to DataRobot's support for older, expiring functionality. This document also describes DataRobot's [fixed](#customer-reported-fixed-issues) issues.

## In the spotlight... {: #in-the-spotlight }

The following features are some of the highlights of Release 7.2:

* [Purpose-built AI applications with the AI App Builder](#purpose-built-ai-applications-with-the-ai-app-builder)
* [Public preview: External prediction insights](#insights-available-for-external-models)
* [Public preview: Bias and Fairness monitoring for deployments](v7.2-mlops#bias-and-fairness-monitoring-for-deployments)

## User interface enhancements {: #user-interface-enhancements }

### New login experience {: #new-login-experience }

This release introduces a redesigned login experience for DataRobot platform users. The new page better reflects the innovation of the company and product, without affecting the existing login workflow.

![](images/rn-new-login-experience.png)

{% if 'enterprise' not in tags %}
### ROC Curve redesign {: #roc-curve-redesign }

The ROC Curve tab has been redesigned to streamline the model evaluation strategies you can perform. Along with the Prediction Distribution graph, ROC curve, confusion matrix, and a summary of metrics, you can now generate profit curves, precision-recall curves, and custom charts in the ROC Curve tab.

![](images/rn-roc-tab-overview.png)

For details, see [ROC Curve](roc-curve-tab/index).
{% endif %}

### New location for tools to share and edit project names {: #new-location-for-tools-to-share-and-edit-project-names }

To improve navigation, this release brings a new home for the project sharing and project name editing tools. While still available from the project control center (**Manage Projects**), you can now more quickly access the tools directly from the project dropdown.

![](images/rn-manage-projects-interface.png)

## Data enhancements {: #data-enhancements }

### New Spark version for improved performance {: #new-spark-version-for-improved-performance }

Release 7.2 upgrades the Spark version used for Feature Discovery and Spark SQL to Spark 3.0. In addition to Spark performance improvements, the upgrade brings improved JDBC compatibility with the AI Catalog (which uses Java 11) and a smaller shippable codebase. DataRobot now supports all drivers that are compatible with Java version 8 or later.

### Connect to Snowflake and Google BigQuery using OAuth {: #connect-to-snowflake-and-google-bigquery-using-oauth }

Snowflake and Google BigQuery users can now set up a data connection using OAuth single sign-on. Once configured, you can read data from production databases to use for model building and predictions. For details, see [Data connection with OAuth](data-conn#data-connection-with-oauth).

## Feature Discovery features {: #feature-discovery-features }

### Feature Discovery Relationship Editor setup guide {: #feature-discovery-relationship-editor-setup-guide }

With Feature Discovery, DataRobot generates new features from multiple datasets so that you don't need to perform feature engineering manually to consolidate multiple datasets. Use the Relationship Editor to join the datasets to prepare for Feature Discovery.

![](images/safer-rel-editor-rel-notes.png)

The Relationship Editor setup guide is a new intermediate screen that displays when you click the **Add datasets** button on the EDA (**Data**) page.
It walks you through the process of specifying prediction points for time-aware features and adding the datasets to be joined for Feature Discovery. For details, see [Create a Feature Discovery project](fd-overview).

### Feature Discovery engineering controls {: #feature-discovery-engineering-controls }

Feature Discovery engineering controls, now publicly available, let you influence how DataRobot conducts feature engineering.

![](images/safer-feature-engineering-controls.png)

You can enable specific controls to use your domain knowledge to guide feature engineering or to improve accuracy. You might want to exclude specific transformations that slow down processing or are difficult to explain to stakeholders. For details, see [Set feature engineering controls](fd-overview#set-feature-engineering-controls).

### Feature Discovery settings enhanced {: #feature-discovery-settings-enhanced }

The **Feature Discovery** tab on the **Data** page provides dataset relationship details, a feature derivation summary, and a feature derivation log. You can now see the number of secondary datasets, explored features, and derived features that resulted from Feature Discovery. Click **Show more** to see which feature engineering controls were used during Feature Discovery and to learn about each.

![](images/safer-fd-tab-feature-eng-controls.png)

For details, see [Define relationships](fd-overview#define-relationships).

### Categorical Statistics feature type {: #categorical-statistics-feature-type }

Categorical Statistics let you explore numeric statistics like sum, max, and average for each category of a categorical feature. In the following example, during Feature Discovery, DataRobot explores Spending numeric statistics for each category of the Product_Type feature:

* Spending(30 days min)
* Spending(30 days min by Product_Type = A)
* Spending(30 days min by Product_Type = B)
* Spending(30 days min by Product_Type = C)
* ..
![](images/rn-fd-cat-stats.png)

Categorical Statistics aggregation is turned off by default. You can enable it on the **Feature Engineering** tab of the Feature Discovery Settings page. For details, see [Categorical Statistics](fd-gen#categorical-statistics).

## Modeling features {: #modeling-features }

### Purpose-built AI applications with the AI App Builder {: #purpose-built-ai-applications-with-the-ai-app-builder }

The [AI App Builder](app-builder/index), available from the **Applications** tab, provides a no-code platform to enable core DataRobot services (making predictions, optimizing outcomes, simulating scenarios, and more) without having to build models and evaluate their performance in DataRobot. Each application starts with a template and data source&mdash;either a deployment or dataset in the **AI Catalog**. However, the App Builder lets you configure additional widgets, custom features, and pages to tailor the application to a specific use case. Once deployed, applications can be easily shared and do not require users to own full DataRobot licenses in order to use them, offering a great solution for broadening your organization's ability to use DataRobot's functionality.

![](images/rn-app-templates.png)

#### Widgets {: #widgets }

Applications are composed of widgets that create visual, interactive, and purpose-driven end-user applications. There are two types of widgets&mdash;chart widgets and header widgets. Chart widgets add visualizations to an application and can be configured to surface important insights in your data and prediction results. Header widgets provide additional filtering options for your application.
![](images/rn-chart-widget.png)

#### What-if and Optimizer widget {: #what-if-and-optimizer-widget }

The **What-if and Optimizer** widget provides two tools for interacting with prediction results:

* **What-if**: A decision-support tool that allows you to create and compare multiple prediction simulations to identify the option that provides the best outcome. You can also make predictions, then change one or more inputs to create a new simulation, and see how those changes affect the target feature.
* **Optimizer**: Identifies the maximum or minimum predicted value for a target by varying the values of a selection of flexible features in the model.

![](images/rn-what-if-optimizer.png)

### Word Cloud blueprints for multiclass projects {: #word-cloud-blueprints-for-multiclass-projects }

An improvement has been made so that all Stochastic Gradient Descent (SGD) blueprints create a Word Cloud if even a single text feature is present in a multiclass project. Previously, there was a specialized SGD blueprint, available from the Repository, that had to be run manually. Access the new visualizations from either the model's **Describe > Word Cloud** or **Insights > Word Cloud** tabs.

![](images/rn-sgd-word-cloud.png)

### New Keras DeepCTR models available in the Repository {: #new-keras-deepctr-models-available-in-the-repository }

To support data scientists with CTR data (categoricals with high cardinality), DataRobot introduces three DeepCTR models, available from the Repository. These models&mdash;neural factorization machine, AutoInt, and deep cross network&mdash;can be particularly useful when building clickthrough rate or recommendation models.

![](images/rn-keras-deepctr-models.png)

### Bias and Fairness improvements {: #bias-and-fairness-improvements }

With this release, DataRobot has upgraded the user experience for calculating Bias and Fairness for your models. The first improvement allows you to enable Bias and Fairness insights after modeling has already started.
Select a model and navigate to **Bias and Fairness > Settings**. Once configured, Bias and Fairness insights are enabled for every model on the Leaderboard.

![](images/rn-bf-settings.png)

The second improvement is the ability to view multiple fairness metrics on the **Per-Class Bias** page. This functionality allows you to view fairness scores for all five fairness metrics using a dropdown menu.

![](images/rn-bf-metrics.png)

For details, see the Bias and Fairness documentation.

### TLS options for Portable Prediction Server {: #tls-options-for-portable-prediction-server }

By default, the Portable Prediction Server (PPS) serves predictions over an insecure listener on port 8080 (clear-text HTTP over TCP). You can now also serve predictions over a secure listener on port 8443 (HTTP over TLS/SSL, or simply HTTPS). When the secure listener is enabled, the insecure listener becomes unavailable. The configuration is accomplished using environment variables, which are described in [the documentation](portable-pps#https-support) along with accompanying examples.

## Public preview features {: #public-preview-features }

{% if 'enterprise' in tags %}
### ROC Curve redesign {: #roc-curve-redesign }

The ROC Curve tab has been redesigned to streamline the model evaluation strategies you can perform. Along with the Prediction Distribution graph, ROC curve, confusion matrix, and a summary of metrics, you can now generate profit curves, precision-recall curves, and custom charts in the ROC Curve tab.

![](images/rn-roc-tab-overview.png)
{% endif %}

### Create feature lists in the Relationship Editor {: #create-feature-lists-in-the-relationship-editor }

The ability to create feature lists in the Feature Discovery Relationship Editor is now available as a public preview feature.

![](images/safer-rel-editor-create-new-feature-list.png)

Once you create your feature list, you can transform the features directly in the Relationship Editor.
![](images/safer-rel-editor-transform.png)

[Public preview documentation](safer-rel-editor-feature-lists)

### Further enhancements to multilabel modeling {: #further-enhancements-to-multilabel-modeling }

In response to user feedback, the multilabel modeling public preview feature introduces several usability improvements, including:

* Addition of the Feature Effects visualization for multilabel projects
* Increased speed of per-label metric computation
* New per-label Word Clouds
* Ability to easily pin labels
* Model packages and access to the Portable Prediction Server for MLOps

![](images/multilabel-21.png)

[GA documentation (as of 7.3)](multilabel)

### Insights available for external models {: #insights-available-for-external-models }

Through the **External Predictions** advanced option tab, you can bring external models into the DataRobot AutoML environment, view them on the Leaderboard, and run a subset of DataRobot's evaluative insights for comparison against DataRobot models. Simply add external model predictions as a new column in your training dataset, identify the predictions and partition columns, and press **Start**. The external model becomes available on the Leaderboard, where you can compare it against DataRobot models, investigate further using select DataRobot visualizations, and (for binary classification projects) explore bias testing.

![](images/ext-predictions-1.png)

[GA documentation (as of 7.3)](external-preds)

## Deprecation notices {: #deprecation-notices }

Note the following to better plan for later migration to new releases.

### Hadoop deployment and scoring deprecated {: #hadoop-deployment-and-scoring-deprecated }

Hadoop deployment and scoring, including the Standalone Scoring Engine (SSE), will be unavailable and fully deprecated (end-of-life) starting with release v7.3 (December 13th, 2021 for Cloud users). Post-deprecation, Hadoop should not be used to generate predictions.
### Enterprise database integrations deprecated {: #enterprise-database-integrations-deprecated }

Enterprise database integrations will be unavailable and fully deprecated (end-of-life) starting with release v7.3 (December 13th, 2021 for Cloud users). Post-deprecation, integrations should not be used to generate predictions with deployments.

### Open source models deprecated {: #open-source-models-deprecated }

Open source models have been deprecated.

## Customer-reported fixed issues {: #customer-reported-fixed-issues }

The following issues have been fixed since release [7.1.3](v7.1.3-aml).

### Platform {: #platform }

* EP-1535: Fixes an issue with the map tile management workflow when MinIO is used as storage for Docker-based installs.
* EP-1495: Sets `PYSPARK_PYTHON` and `PYSPARK_DRIVER_PYTHON` in DataRobot scoring to point to DataRobot's Python.
* UIUX-2520: Fixes an issue with the Insights view when the page is refreshed.
* UIUX-2518: Fixes blueprint task descriptions.
* UIUX-2510: Fixes the business mode model info view.
* UIUX-1950: Fixes an issue related to the add/delete column in beginner mode.
* UIUX-2146: Hides the resource usage summary under each model's **Model Info** tab by default. Enable the user-level flag to display this information.
* UIUX-3207: The confusion matrix now displays an error message if an issue occurred while loading matrix data.
* UIUX-5113: Disables the confusion matrix for multiclass projects that are run with slim-run (no stacked predictions) when the model was trained into validation.

### Time Series {: #time-series }

* TIME-8176: Fixes an issue in which Prediction Explanations failed to compute with new series modelers.
* TIME-8425: Anomaly assessment records are now filtered properly when backtest 0 is specified as the filtering condition.
* TIME-8992: Fixes an issue with custom feature lists for KIA new series modelers.
* TIME-9074: Fixes an issue that caused an error when computing the valid forecast point range due to an incorrect minimum row count required to perform the validation.

_All product and company names are trademarks&trade; or registered&reg; trademarks of their respective holders. Use of them does not imply any affiliation with or endorsement by them_.
v7.2.0-aml
---
title: Time series (V7.2)
description: DataRobot Release 7.2 time series release notes
---

# Time series (V7.2) {: #time-series-v72 }

_September 13, 2021_

The DataRobot v7.2.0 release includes many new [time series](#new-time-series-features) features, described below. See also details of Release 7.2.0 in the [AutoML](v7.2.0-aml) and [MLOps](v7.2-mlops) release notes.

## New time series features {: #new-time-series-features }

See details of the following new GA features:

* [Nowcasting predicts current values](#nowcasting-to-predict-current-values)
* [Partial history blueprints run as part of Autopilot](#partial-history-blueprints-run-in-Autopilot)
* [External model comparison](#external-model-comparison-improvements)
* [Remove redundant features from Feature Impact](#remove-redundant-features-from-feature-impact)
* [New series humility rule](#new-series-humility-rule)

See details of the following new public preview features:

* [Time series data prep tool](#time-series-data-prep-tool)
* [Restore features pruned during derivation](#restore-features-pruned-during-derivation)
* [High-resolution calendars](#high-resolution-calendars)
* [Scoring Code for time series](#scoring-code-for-time-series)

## Generally available features {: #generally-available-features }

The following new features are now generally available.

### Nowcasting to predict current values {: #nowcasting-to-predict-current-values }

Available as a GA feature, nowcasting is a method of time series modeling that predicts the current value of a target based on past and present data&mdash;a forecast window in which the start and end times are 0 (now). In other words, based on the current input values and recent history, what is the target right now? (Forecasting, by contrast, predicts future values based on past and present data.) With release 7.2, DataRobot automatically applies forecast window (FW) settings of [0, 0] for the forecast start and end times when nowcasting is selected at project start.
Additionally, the Feature Derivation Window (FDW) end is set at a single time step prior to the current time step, allowing derivation of additional features for the target without risking target leakage.

![](images/nowcast-1.png)

For details, see [nowcasting](nowcasting).

### Partial history blueprints run in Autopilot {: #partial-history-blueprints-run-in-Autopilot }

Previously available as a public preview feature, the advanced option to allow partial history in predictions has been enhanced for general availability. Now, Autopilot runs approximately 40% fewer models, eliminating those models that produce less accurate results in the event of partial history availability. When enabled, Autopilot supports New Series blueprints as well. In addition, the feature now supports:

* single series projects
* row-based mode models
* classification projects
* projects with “do not derive target” enabled

![](images/rn-time-partial-history.png)

For details, see [time series advanced options](ts-adv-opt#allow-partial-history).

### External model comparison improvements {: #external-model-comparison-improvements }

Support for baseline accuracy comparison allows you to upload predictions from a non-DataRobot time series model and compare the predictions with DataRobot's time series models. With improvements to the public preview version, the feature now provides the ability to specify multiple forecast distances and also provides API support. With existing metrics redesigned to scale to the external baseline (uploaded predictions), you can now get an at-a-glance accuracy measure and comparison from the Leaderboard.

![](images/cyob-1.png)

For details, see [external prediction comparison](cyob).
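The idea of scaling a metric to an external baseline can be illustrated with a minimal sketch: divide the model's error by the uploaded baseline's error on the same rows, so values below 1.0 mean the DataRobot model beats the baseline. This is an illustrative sketch only, not DataRobot's internal formula, and the `baseline_scaled_error` helper and example values are assumptions for demonstration.

```python
# Illustrative sketch: score a model's error relative to an external baseline.
# A ratio below 1.0 means the model outperforms the uploaded baseline
# predictions. Not DataRobot's implementation; for intuition only.

def mae(actuals, preds):
    """Mean absolute error over paired actual/predicted values."""
    return sum(abs(a - p) for a, p in zip(actuals, preds)) / len(actuals)

def baseline_scaled_error(actuals, model_preds, baseline_preds):
    """Ratio of the model's MAE to the baseline's MAE on the same rows."""
    return mae(actuals, model_preds) / mae(actuals, baseline_preds)

actuals = [10.0, 12.0, 11.0, 13.0]
model_preds = [10.5, 11.5, 11.0, 13.5]     # hypothetical DataRobot forecasts
baseline_preds = [11.0, 13.0, 12.0, 11.0]  # hypothetical uploaded predictions

score = baseline_scaled_error(actuals, model_preds, baseline_preds)
print(round(score, 3))  # 0.3, i.e., the model beats the baseline
```

The same ratio idea underlies metrics such as MASE, where a naive forecast serves as the baseline.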
### Remove redundant features from Feature Impact {: #remove-redundant-features-from-feature-impact }

Previously only available in AutoML, DataRobot now performs a feature redundancy check&mdash;and provides an option to remove redundant features&mdash;when calculating Feature Impact for time series projects. This is important because you may want to create one or more feature lists based on the top feature importances for a model. Note that some top-impact naive features may be retained if they are important to the time series project.

![](images/rn-remove-ts-redundant.png)

For details, see the [Feature Impact documentation](feature-impact#remove-redundant-features-time-series).

### New series humility rule {: #new-series-humility-rule }

Now generally available, you can set a humility rule that uses a replacement model from the Model Registry instead of the Leaderboard. This decouples the model from a specific project and allows you to use model packages. Using a backup model from any compatible project provides more flexibility (compatibility means using the same target, date/time partitioning column, feature derivation window, forecast distances, series name, etc.). This feature requires access to MLOps.

![](images/new-series-humility.png)

For details, see [multiseries humility rules](humility-settings#multiseries-humility-rules).

## Public preview features {: #public-preview-features }

The following features are part of the public preview program.

### Time series data prep tool {: #time-series-data-prep-tool }

With version 7.2, the time series data prep tool, available from both the time series Start page and the AI Catalog, brings three substantial improvements:

* Feature aggregation and imputation on text target features, not just numeric and categorical.
* Guardrails to alert you when DataRobot has imputed more than 50% of the target values (which could impact model accuracy).
* Ability to apply data prep transformations to prediction datasets from the Leaderboard.
![](images/tsd-prep-7.png)

[Public preview documentation](ts-data-prep)

### Restore features pruned during derivation {: #restore-features-pruned-during-derivation }

As part of the time series functionality, DataRobot generates derived features and then runs a feature reduction algorithm, removing features it detects as low impact. There may, however, be features that you want included in the generated feature lists or evaluated for feature impact. With this release, once [EDA2](eda-explained#eda2) completes, you can add these features back into your available derived modeling data and create new feature lists that include them.

![](images/restore-pruned-1.png)

[GA documentation (as of release 7.3)](restore-features)

### High-resolution calendars {: #high-resolution-calendars }

You can now derive calendar event-related features at a much more granular, timestamp-based level. Starting from a timestamp instead of the start of day helps to capture the effect of a specified calendar event, such as a promotional sale that lasts from 9:30 AM to 11:30 AM. Additionally, you can now specify durations to further highlight the event specificity. To ensure accuracy, DataRobot provides guardrails to support calendar-derived features based on calendar events that overlap.

![](images/time-cal-cat.png)

[GA documentation (as of 7.3)](ts-adv-opt#calendar-files)

### Scoring Code for time series {: #scoring-code-for-time-series }

Now available for public preview, you can export time series models in a Java-based Scoring Code package. Scoring Code is a portable, low-latency method of utilizing DataRobot models outside of the DataRobot application.
The following blueprints may produce Scoring Code:

* AUTOARIMA with Fixed Error Terms
* ElasticNet Regressor (L2 / Gamma Deviance)
* ElasticNet Regressor (L2 / Poisson Deviance)
* Eureqa Generalized Additive Model
* eXtreme Gradient Boosted Trees Regressor
* eXtreme Gradient Boosted Trees Regressor with Early Stopping
* eXtreme Gradient Boosting on ElasticNet Predictions
* Light Gradient Boosting on ElasticNet Predictions
* Performance Clustered Elastic Net Regressor with Forecast Distance Modeling
* Performance Clustered eXtreme Gradient Boosting on Elastic Net Predictions
* RandomForest Regressor
* Ridge Regressor using Linearly Decaying Weights with Forecast Distance Modeling
* Ridge Regressor with Forecast Distance Modeling
* Vector Autoregressive Model (VAR) with Fixed Error Terms

The following capabilities are currently covered:

* Calendars (non-high resolution)
* Cross-Series
* Zero Inflated / Naive Binary
* Nowcasting (Historical Range Predictions)
* Blind History Gaps

## Time series fixed issues {: #time-series-fixed-issues }

The following issues have been fixed since release [7.1.0](v7.1.0-ats).

* TIME-8176: Fixes an issue in which Prediction Explanations failed to compute with new series modelers.
* TIME-8425: Anomaly assessment records are now filtered properly when backtest 0 is specified as the filtering condition.
* TIME-8992: Fixes an issue with custom feature lists for KIA new series modelers.
* TIME-9074: Fixes an issue that caused an error when computing the valid forecast point range due to an incorrect minimum row count required to perform the validation.

_All product and company names are trademarks&trade; or registered&reg; trademarks of their respective holders. Use of them does not imply any affiliation with or endorsement by them_.
v7.2.0-ats
---
title: MLOps (V7.3)
description: DataRobot Release 7.3 MLOps release notes
---

# MLOps (V7.3) {: #mlops-v73 }

_December 13, 2021_

The DataRobot MLOps v7.3 release includes many new features and capabilities, described below. Release v7.3 provides updated UI string translations for the following languages:

* Japanese
* French
* Spanish
* Korean

## New features and enhancements {: #new-features-and-enhancements }

See details of new features below:

**New deployment features**

* [Automated Retraining](#automated-retraining)
* [New MLOps agent channel: Azure Event Hubs](#new-mlops-agent-channel-azure-event-hubs)
* [mTLS support for MLOps agent](#mtls-support-for-mlops-agent)
* [Public availability of MLOps agent Python libraries](#public-availability-of-mlops-agent-python-libraries)
* [Portable batch predictions scoring support](#portable-batch-predictions-scoring-support)
* [MLOps agent channel dequeuing](#mlops-agent-channel-dequeuing)

**New prediction features**

* [Prediction Explanations in Scoring Code](#prediction-explanations-in-scoring-code)
* [BigQuery adapter for batch predictions](#bigquery-adapter-for-batch-predictions)
* [Oracle write-back in batch predictions](#oracle-write-back-in-batch-predictions)
* [Prediction job enhancements](#prediction-job-enhancements)

**New Model Registry features**

* [Model Registry compliance documentation](#model-registry-compliance-documentation)
* [Add files to a custom model with GitLab](#add-files-to-a-custom-model-from-gitlab)

**New governance features**

* [Fairness monitoring and alerting for deployments](#fairness-monitoring-and-alerting-for-deployments)

**Public preview features**

* [Champion and challenger comparisons](#champion-and-challenger-comparisons)
* [Association ID support in the AI App Builder](#association-id-support-in-the-ai-app-builder)

## New deployment features {: #new-deployment-features }

Release v7.3 introduces the following new deployment features.
### Automated Retraining {: #automated-retraining }

To maintain model performance after deployment, DataRobot provides automatic retraining for deployments, eliminating extensive manual work.

![](images/rn-retraining.png)

To set up automatic retraining, provide a retraining dataset and define up to five retraining policies on each deployment, each consisting of a trigger, a modeling strategy, modeling settings, and a replacement action. When triggered, retraining produces a new model based on these settings and notifies you to consider promoting it. Learn how to set up [automatic retraining](set-up-auto-retraining).

### New MLOps agent channel: Azure Event Hubs {: #new-mlops-agent-channel-azure-event-hubs }

The MLOps agent now supports Microsoft Azure Event Hubs as a channel, in addition to the previously supported channels: File, AWS SQS, Google Pub/Sub, RabbitMQ, and Kafka. To support Azure Event Hubs as a tracking agent spooler type, DataRobot leverages the existing [Kafka spooler type](spooler#kafka). This release also adds the ability to authenticate with Event Hubs using Azure Active Directory. For more details, see the [Azure Event Hubs spooler configuration documentation](spooler#azure-event-hubs).

### mTLS support for MLOps agent {: #mtls-support-for-mlops-agent }

The RabbitMQ MLOps agent channel now supports mutual Transport Layer Security (mTLS) authentication&mdash;ensuring that traffic is secure in both directions between the client and server. With mTLS, the server originating a message and the server receiving it exchange certificates from a mutually trusted certificate authority (CA). See the [RabbitMQ configuration](spooler#rabbitmq) documentation for details on configuring the spooler.

### Public availability of MLOps agent Python libraries {: #public-availability-of-mlops-agent-python-libraries }

You can now download the MLOps agent Python libraries from the public [Python Package Index](https://pypi.org){ target=_blank } site.
Download and install the [DataRobot MLOps metrics reporting library](https://pypi.org/project/datarobot-mlops){ target=_blank } and the [DataRobot MLOps Connected Client](https://pypi.org/project/datarobot-mlops-connected-client){ target=_blank }. These pages include instructions for installing the libraries.

### Portable batch predictions scoring support {: #portable-batch-predictions-scoring-support }

Portable batch predictions (PBP) now support scoring on time series and Visual AI models. For an example illustrating the job definition fields required for PBP time series scoring, see [Time series scoring over Azure Blob](portable-batch-predictions#ts-azure-scoring-with-multi-model-mode-pps). No new fields are required for PBP [Visual AI scoring](vai-predictions).

### MLOps agent channel dequeuing {: #mlops-agent-channel-dequeuing }

You can now configure the MLOps agent to wait until processing is complete before dequeuing a message. The dequeuing operation behaves as follows in the different channels:

* In SQS: Deletes the message.
* In RabbitMQ and Pub/Sub: Acknowledges the message as complete.
* In Kafka and Filesystem: Moves the offset.

This feature ensures that messages are not dropped when there are connectivity issues; even if there are connection errors, the message can be re-sent. To configure the feature, set the `MLOPS_SPOOLER_DEQUEUE_ACK_RECORDS` environment variable to `true`. Enabling this feature is highly recommended. Learn how to enable [the dequeuing feature](spooler#general-configuration).

## New prediction features {: #new-prediction-features }

Release v7.3 introduces the following new prediction features.

### Prediction Explanations in Scoring Code {: #prediction-explanations-in-scoring-code }

Prediction explanations provide a quantitative indicator of the effect variables have on predictions.
You can now receive Prediction Explanations anywhere you deploy a model: in DataRobot, with the Portable Prediction Server, and now in Java Scoring Code (or when executed in Snowflake). Enable Prediction Explanations on the **Portable Predictions** tab when [downloading a model via Scoring Code](sc-download-deployment).

![](images/pe-pps-rn.png)

### BigQuery adapter for batch predictions {: #bigquery-adapter-for-batch-predictions }

Now generally available, DataRobot supports BigQuery for ingest and export of data while scoring with batch predictions. You can use the BigQuery REST API to export data from a table into Google Cloud Storage (GCS) as an asynchronous job, score data with the GCS adapter, and bulk update the BigQuery table with a batch loading job.

[BigQuery batch prediction example](pred-examples#end-to-end-scoring-with-bigquery)

### Oracle write-back in batch predictions {: #oracle-write-back-in-batch-predictions }

Support has been added for Oracle write-back in batch predictions. See the complete list of [data sources supported for batch predictions](batch-prediction-api/index#data-sources-supported-for-batch-predictions). See also the [intake](intake-options) and [output](output-options) adapter documentation.

### Prediction job enhancements {: #prediction-job-enhancements }

Batch prediction jobs have been enhanced as follows:

* When creating a prediction job, you can now select BigQuery as a **Prediction source** and a **Prediction destination** on the **Deployments** page. You can also now select the AI Catalog as a **Prediction source**. Following are the supported prediction source and destination types:

    **Prediction source types**:

    ![](images/rn-prediction-source.png)

    **Prediction destination types**:

    ![](images/rn-prediction-dest.png)

* When setting up JDBC connections as prediction sources and destinations, you can edit existing connections rather than configuring them from scratch.
* You can now save and run a prediction job immediately when you create the job definition rather than locating it on the **Job Definitions** tab and running it from there.

    ![](images/rn-job-schedule-run-immediately.png)

## New model registry features {: #new-model-registry-features }

Release v7.3 introduces the following new model registry features.

### Model Registry compliance documentation {: #model-registry-compliance-documentation }

You can now generate Automated Compliance Documentation for models from the Model Registry, accelerating pre-deployment review and sign-off that might be necessary for your organization.

![](images/reg-compliance-tab.png)

DataRobot automates many critical compliance tasks associated with developing a model and, by doing so, decreases the time-to-deployment in highly regulated industries. For each model, you can generate individualized documentation to provide comprehensive guidance on what constitutes effective model risk management. Then, you can download the report as an editable Microsoft Word document (DOCX). The generated report includes the appropriate level of information and transparency necessitated by regulatory compliance demands.

See how to [generate compliance documentation](reg-compliance) from the Model Registry.

### Add files to a custom model with GitLab {: #add-files-to-a-custom-model-from-gitlab }

If you add a custom model to the Custom Model Workshop, you can now use GitLab Cloud and GitLab Enterprise repositories to pull artifacts and use them to build custom models. Register and authorize a repository to add files to a custom model.

![](images/gitlab-1.png)

* [Add files from GitLab (cloud)](custom-model-repos#gitlab-cloud-repository)
* [Add files from GitLab Enterprise](custom-model-repos#gitlab-enterprise-repository)

## New governance features {: #new-governance-features }

Release v7.3 introduces the following feature.
### Fairness monitoring and alerting for deployments {: #fairness-monitoring-and-alerting-for-deployments }

With MLOps, you can now monitor deployed production models for fairness by configuring tests that make models capable of recognizing, in real time, when protected features in the dataset fail to meet predefined fairness conditions&mdash;triggering an alert so you can investigate bias as soon as it's detected.

![](images/rn-mlops-fairness-bias-1.png)

The Fairness tab for individual models provides two interactive and exportable charts&mdash;Per-Class Bias and Fairness Over Time&mdash;helping you understand why a deployment is failing fairness tests and which features are below the predefined fairness threshold. Per-Class Bias uses the fairness threshold and fairness score of each class to determine if certain classes are experiencing bias in the model's predictive behavior. Fairness Over Time illustrates how the distribution of a protected feature's fairness scores has changed over time.

![](images/rn-mlops-fairness-bias-2.png)

## Public preview features {: #public-preview-features }

### Champion and challenger comparisons {: #champion-and-challenger-comparisons }

Now available as a public preview feature, Champion/Challenger Comparison allows you to compare the composition, reliability, and behavior of models using powerful visualizations. Choose two models to go head-to-head so that you can be sure that the currently deployed champion model is the best model for your purposes.
![](images/rn-challenger-model-comparison-tab.png)

The Model Comparison page allows you to select models for comparison and provides valuable data and visualizations under **Model Insights**:

* The **Accuracy** tab lets you compare metrics:

    ![](images/challenger-compare-accuracy.png)

* The **Dual lift** tab lets you compare how the models over- or under-predict along the distribution of predictions:

    ![](images/challenger-compare-dual-lift.png)

* The **Lift** tab lets you compare how well the models predict the target:

    ![](images/challenger-compare-lift.png)

* The **ROC** tab provides visualizations for comparing classification models:

    ![](images/challenger-compare-roc.png)

* The **Prediction Difference** tab lets you compare predictions of the models on a row-by-row basis:

    ![](images/challenger-compare-diff.png)

If the challenger outperforms the current champion, you can promote the challenger to champion directly from the Model Comparison page.

[Public preview documentation](challengers#challenger-model-comparisons)

### Association ID support in the AI App Builder {: #association-id-support-in-the-ai-app-builder }

Now available for public preview, you can create applications from deployments with an association ID. When the selected deployment has an association ID, the association ID is added as a field to the **Add New Row** widget for single predictions. Data drift and accuracy are tracked for all single and batch predictions made using the application; however, they are not tracked for synthetic predictions made in the **What-If and Optimizer** widget.

For details, see [Association ID support in the AI App Builder](create-app#deployments-with-an-association-id).

_All product and company names are trademarks&trade; or registered&reg; trademarks of their respective holders. Use of them does not imply any affiliation with or endorsement by them_.
v7.3-mlops
---
title: Version 7.3.x
description: DataRobot Release 7.3 release announcements index page.
---

# Version 7.3.x {: #version-73x }

Following are the AutoML, time series, and MLOps announcements for the features that make up DataRobot's 7.3.x releases, first released _December 13, 2021_.

* [v7.3.x AutoML](v7.3.0-aml)
* [v7.3.x time series](v7.3.0-ats)
* [v7.3.x MLOps](v7.3-mlops)
* [Maintenance release notes](7.3-maintenance/index) for DataRobot v7.3.x.
index
---
title: AutoML (V7.3)
description: DataRobot Release 7.3 AutoML release notes
---

# AutoML (V7.3) {: #automl-v73 }

_December 13, 2021_

The DataRobot v7.3.0 release includes many new AutoML features and enhancements described in this section. See also the new features described in the [time series (AutoTS)](v7.3.0-ats) and [MLOps](v7.3-mlops) release notes. Release v7.3 provides updated UI string translations for the following languages:

* Japanese
* French
* Spanish
* Korean

See these important [deprecation](#deprecation-notices) announcements for information about changes to DataRobot's support for older, expiring functionality. This document also describes DataRobot's [fixed](#customer-reported-fixed-issues) issues.

## In the spotlight... {: #in-the-spotlight }

The following features are some of the highlights of Release 7.3:

* [Composable ML adds project linking and bulk training](#composable-ml-adds-project-linking-and-bulk-training-general-improvements)
* [Clustering](#clustering)
* [Multilabel classification](#multilabel-modeling-adds-pairwise-matrix-management)

## User interface enhancements {: #user-interface-enhancements }

### New XEMP Prediction Explanation interface {: #new-xemp-prediction-explanation-interface }

With this release, the XEMP [Prediction Explanations](xemp-pe) visualization has been redesigned to provide cleaner, clearer at-a-glance information about why a model has made a particular prediction. The functionality offers the same insights with an easier, more intuitive interface.

![](images/rn-new-xemp-pe.png)

## Feature Discovery features {: #feature-discovery-features }

### Feature Discovery supports multiple feature derivation windows {: #feature-discovery-supports-multiple-feature-derivation-windows }

In Automated Feature Discovery, you can now configure up to three feature derivation windows (FDW) per dataset. To define additional windows, open the **Time-aware feature engineering editor** and click **Add window**.
Note that each FDW must be unique.

![](images/safer-mult-fdw.png)

For details, see [Define Relationships](fd-overview#define-relationships).

### Feature Discovery Relationship Quality Assessment {: #feature-discovery-relationship-quality-assessment }

Feature Discovery introduces a tool to automatically assess the quality of a relationship configuration&mdash;warning the user of potential problems&mdash;early in the creation process. The Relationship Quality Assessment tool verifies join keys, dataset selection, and time-aware settings before EDA2 begins. Click the **Review configuration** button to trigger the Relationship Quality Assessment. A progress indicator (loading spinner) displays on each dataset and on the review configuration button, which is disabled, to indicate that an assessment is currently running.

![](images/rqa-1.png)

Once the assessment is complete, DataRobot marks all tested datasets. Those with identified issues display a yellow warning icon and those with no identified issues display a green tick. Select a dataset to view a summary of its issues with suggested potential fixes.

![](images/rqa-3.png)

To resolve warnings, click the orange link displayed below each warning&mdash;Review dataset, Review relationship, or Review window settings&mdash;and a pane appears at the top of the relationship editor allowing you to modify relationship configurations. After addressing the warnings, click **Review configuration** to reassess the relationships. For details, see the [Relationship Quality Assessment](fd-overview#relationship-quality-assessment) documentation.

### Feature Discovery improvements {: #feature-discovery-improvements }

Release 7.3 brings the following improvements to the Feature Discovery UI:

* In the Relationship Editor, if the primary dataset is also used as a secondary dataset, the target no longer appears as a suggested join key.
* Making changes to a secondary dataset configuration no longer causes all dataset names to reload.
* Individual dataset import sizes cannot exceed 11GB.
* The default snapshot policy for all snapshotted datasets, including JDBC datasets, is Latest.
* You can now click an FDW displayed on your dataset to open the FDW editor.

## Modeling features {: #modeling-features }

### Composable ML adds project linking and bulk training, general improvements {: #composable-ml-adds-project-linking-and-bulk-training-general-improvements }

Composable ML provides a full-flexibility approach to model building, allowing you to direct your data science and subject matter expertise to the models you build. With Composable ML, you build blueprints that best suit your needs using built-in tasks and custom Python/R code. Then, use your custom blueprint together with other DataRobot capabilities (MLOps, for example) to boost productivity. Release 7.3 adds these important improvements to the feature preview capabilities available earlier:

1. Project linking: Because some blueprints are meant to be used only with a specific project (perhaps they incorporate a step that calls for specific features, for example), DataRobot applies automated project linking. If you then attempt to apply the blueprint to a different project, DataRobot provides a warning that the required columns do not exist in the dataset.

    ![](images/cml-cat-4.png)

2. Bulk training: You can now train user blueprints in bulk for a specific project, filtered based on compatibility with the selected blueprints. (Note that if selected blueprints don't have at least one common target type, DataRobot prevents bulk training.) From the AI Catalog **Blueprints** tab, you can sort blueprints by target type (binary, regression, multiclass, and unsupervised) for easier selection.

    ![](images/cml-cat-11.png)

The feature is generally available for managed AI Platform users and in private preview for Self-Managed AI Platform users (contact your DataRobot representative for enablement information).
More information for [managed AI Platform users](modeling/special-workflows/cml/index).

### Word Cloud support for all linear models {: #word-cloud-support-for-all-linear-models }

Previously only available for a single model and project type, Word Cloud now supports a variety of binary classification, multiclass, and regression models. Additionally, Word Cloud is now available for multimodal datasets (i.e., datasets that mix images, text, categorical, etc.), displaying a word cloud for all text from the data.

![](images/rn-wordcloud.png)

For details, see the [Word Cloud](word-cloud) documentation.

### Clustering {: #clustering }

Clustering, an application of unsupervised learning, lets you explore your data by grouping and identifying natural segments. Use clustering to explore clusters generated from many types of data&mdash;numeric, categorical, text, image, and geospatial data&mdash;independently or combined. In clustering mode, DataRobot captures latent behavior that's not explicitly captured by a column in the dataset. To generate clusters, run in unsupervised Clusters mode:

![](images/unsup-cluster-select.png)

To investigate the clusters generated during modeling, use the Cluster Insights visualization to understand, name, and explain each cluster in a dataset:

![](images/rn-cluster-insights-overview.png)

For details, see the [Clustering](clustering) documentation.

### External predictions now GA {: #external-predictions-now-GA }

Released as a public preview feature in v7.2, the [External Predictions](external-preds) capability allows you to bring external model(s) into the DataRobot AutoML environment for comparison against DataRobot models. Simply add external model predictions as a new column in your training dataset and identify the predictions and partition column. When modeling completes, the external model is available on the Leaderboard.
From there you can compare it against DataRobot models, investigate further using select DataRobot visualizations, and (for binary classification projects) explore bias testing. Additionally, a new public preview enhancement is available for the feature, providing support for multiple (up to 25) prediction columns, with each mapping to a separate "external model."

![](images/ext-predictions-5.png)

### Feature Effects for multiclass projects {: #feature-effects-for-multiclass-projects }

With this release, the Feature Effects visualization is now available for multiclass projects. Using the Select Class dropdown, you can view partial dependence, predicted, and actual values for each class of the target. By default, DataRobot calculates effects for the top 10 impact-ranked features, but a new option lets you calculate effects, individually, for all features.

![](images/rn-multiclass-feature-effects.png)

For details, see [Feature Effects](feature-effects#select-class).

### Configurable sample size for SHAP Feature Impact {: #configurable-sample-size-for-shap-feature-impact }

With this release, you can configure the sample size used for computing Feature Impact in SHAP-based projects. Previously, this capability was only available for permutation-based Feature Impact. Changing the sample size can help, for example, to compute SHAP **Feature Impact** quickly with nearly the same accuracy.

![](images/fi-resample-2.png)

For details, see the [Feature Impact](feature-impact) documentation.

### Unlimited multiclass builds multiclass classifiers for targets with any number of classes {: #unlimited-multiclass-builds-multiclass-classifiers-for-targets-with-any-number-of-classes }

!!! info "Availability information"

    Availability of unlimited classes in multiclass projects is dependent on your DataRobot package. If it is not enabled for your organization, the class limit is set to 100. Contact your DataRobot representative to increase this limit.
This release extends the multiclass project types, adding an unlimited multiclass option. For multiclass projects with more than 1000 classes, DataRobot, by default, keeps the top 999 most frequent classes and aggregates the remainder into a single "other" bucket. Alternatively, you can configure the aggregation parameters to ensure all classes necessary to your project are represented. Additionally, multiclass visualizations are adjusted to suit the larger class display. With unlimited multiclass, you no longer need to prepare data to suit a class limit and maintain several models. You can now deploy a single model to serve predictions against any number of classes.

![](images/rn-unlimited-multiclass.png)

For details, see the [multiclass](multiclass#unlimited-multiclass) documentation.

### Multilabel modeling adds pairwise matrix management {: #multilabel-modeling-adds-pairwise-matrix-management }

!!! info "Availability information"

    Availability of multilabel modeling is dependent on your DataRobot package. If it is not enabled for your organization, contact your DataRobot representative for more information.

Multilabel modeling (modeling in which each row is associated with one, several, or zero labels) is now generally available. In addition, capabilities have been added that allow you to more easily control the pairwise matrix. The matrix, which shows pairwise statistics for pairs of labels and the occurrence percentage of each label in the dataset, now uses a thumbnail matrix to more easily set the display of the main matrix. You can select an area from the thumbnail or manually set rows and/or columns, ensuring the main matrix focuses on labels of interest. To ensure that data is valid, the [data quality assessment checks](data-quality#multicategorical-format-errors) now validate data against the requirements for multicategorical features. A log provides more detailed error information.
![](images/multilabel-30.png)

For details, see the [multilabel](multilabel) documentation.

## API enhancements {: #api-enhancements }

The following is a summary of new API features and enhancements. Reference the [API Documentation home](https://docs.datarobot.com/en/docs/api/index.html){ target=_blank } for more information on each client.

!!! tip

    DataRobot highly recommends updating to the latest API client for Python and R.

### New features {: #new-features }

The following new functionality has been added for API release v2.27.0.

* Retrieve and restore discarded features for time series projects.

#### Compute and retrieve Feature Effects for multiclass models {: #compute-and-retrieve-feature-effects-for-multiclass-models }

* For non-datetime partitioned models:
    * retrieve `GET /api/v2/projects/(projectId)/models/(modelId)/multiclassFeatureEffects/`
    * compute `POST /api/v2/projects/(projectId)/models/(modelId)/multiclassFeatureEffects/`
* For datetime partitioned models:
    * retrieve `GET /api/v2/projects/(projectId)/datetimeModels/(modelId)/multiclassFeatureEffects/`
    * compute `POST /api/v2/projects/(projectId)/datetimeModels/(modelId)/multiclassFeatureEffects/`

#### Custom models conversion functionality {: #custom-models-conversion-functionality }

* create `POST /api/v2/customModels/(customModelId)/versions/(customModelVersionId)/conversions/`
* list `GET /api/v2/customModels/(customModelId)/versions/(customModelVersionId)/conversions/`
* retrieve `GET /api/v2/customModels/(customModelId)/versions/(customModelVersionId)/conversions/(conversionId)/`
* delete `DELETE /api/v2/customModels/(customModelId)/versions/(customModelVersionId)/conversions/(conversionId)/`

### Enhancements {: #enhancements }

All non-multiclass `featureEffects` and `featureFit` retrieve routes now support returning `individual_conditional_expectation` (ICE) plots; a new query parameter, `include_ice_plots`, controls this functionality.
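As a minimal sketch, the new query parameter can be appended to an existing Feature Effects retrieval. The endpoint URL, IDs, and bearer-token header below are placeholder assumptions (confirm the authentication scheme for your cluster); the request is built but not sent:

```python
import urllib.request
from urllib.parse import urlencode

# Placeholder values -- substitute your own endpoint, IDs, and API token.
ENDPOINT = "https://app.datarobot.com/api/v2"
PROJECT_ID = "PROJECT_ID"
MODEL_ID = "MODEL_ID"
API_TOKEN = "API_TOKEN"

# Existing featureEffects retrieve route, opting in to ICE plots via the
# new include_ice_plots query parameter.
url = (
    f"{ENDPOINT}/projects/{PROJECT_ID}/models/{MODEL_ID}/featureEffects/"
    + "?" + urlencode({"include_ice_plots": "true"})
)
req = urllib.request.Request(url, headers={"Authorization": f"Bearer {API_TOKEN}"})
print(req.full_url)

# Sending the request requires network access and a valid token:
# with urllib.request.urlopen(req) as resp:
#     feature_effects = resp.read()
```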
To access this feature, enable the feature flag `Enable ICE Plots on Feature Fit/Feature Effects`. This includes the following routes:

* `GET /api/v2/projects/(projectId)/models/(modelId)/featureEffects/`
* `GET /api/v2/projects/(projectId)/datetimeModels/(modelId)/featureEffects/`
* `GET /api/v2/projects/(projectId)/models/(modelId)/featureFit/`
* `GET /api/v2/projects/(projectId)/datetimeModels/(modelId)/featureFit/`

There are new routes to initialize compliance documentation preprocessing, which is required to generate compliance documentation for custom models:

* Create compliance documentation preprocessing initialization: `POST /api/v2/modelComplianceDocsInitializations/(entityId)/`
* Check compliance documentation preprocessing initialization: `GET /api/v2/modelComplianceDocsInitializations/(entityId)/`

There are also new routes that support multilabel classification project types:

* Retrieve multilabel pairwise statistics: `GET /api/v2/multilabelInsights/(multilabelInsightsKey)/pairwiseStatistics/`
* Retrieve multilabel histograms: `GET /api/v2/multilabelInsights/(multilabelInsightsKey)/histogram/`
* Retrieve multilabel labelwise ROC curves: `GET /api/v2/projects/(projectId)/models/(modelId)/labelwiseRocCurves/(source)/`
* Retrieve multilabel labelwise Lift charts: `GET /api/v2/projects/(projectId)/models/(modelId)/multilabelLiftCharts/(source)/`
* Retrieve manual label selections for multilabel pairwise statistics: `GET /api/v2/multilabelInsights/(multilabelInsightsKey)/pairwiseManualSelections/`
* Save manual label selections for multilabel pairwise statistics: `POST /api/v2/multilabelInsights/(multilabelInsightsKey)/pairwiseManualSelections/`
* Update a manual label selection for multilabel pairwise statistics: `PATCH /api/v2/multilabelInsights/(multilabelInsightsKey)/pairwiseManualSelections/(manualSelectionListId)/`
* Delete a manual label selection for multilabel pairwise statistics: `DELETE /api/v2/multilabelInsights/(multilabelInsightsKey)/pairwiseManualSelections/(manualSelectionListId)/`

## Public preview features {: #public-preview-features }

### Connect to Snowflake using external OAuth {: #connect-to-snowflake-using-external-oauth }

Snowflake users can now set up a Snowflake data connection in DataRobot using an external identity provider (IdP)&mdash;either Okta or Azure Active Directory&mdash;for user authentication through OAuth single sign-on (SSO). For details, see [External OAuth for Snowflake](dc-snowflake#snowflake-external-oauth).

### Fast registration in the AI Catalog {: #fast-registration-in-the-ai-catalog }

You can now quickly register large datasets in the AI Catalog by specifying the first N rows to be used for registration instead of the full dataset&mdash;giving you faster access to data to use for testing and Feature Discovery. In the **AI Catalog**, click **Add to catalog** and select your data source. Fast registration is only available when adding a dataset from a new data connection, an existing data connection, or a URL. Enter information for the data source and select a snapshot policy:

* For a snapshot dataset, DataRobot ingests the specified number of rows. Subsequent consumption of the data, like creating a project with it, uses this dataset with N rows.
* For a dynamic dataset, DataRobot uses the first N rows to compute EDA1. Subsequent consumption of the data, however, always uses the full dataset.

For fast registration, select the partial data upload option and specify the number of rows to ingest.

![](images/fast-reg-2.png)

For details, see [AI Catalog fast registration](catalog#configure-fast-registration).

## Deprecation notices {: #deprecation-notices }

Note the following to better plan for later migration to new releases.
### Local folder option for custom models will be deprecated {: #local-folder-option-for-custom-models-will-be-deprecated }

As of release v8.0 (March 14, 2022 for Cloud users), the ability to use the “Local Folder” option when adding a model via the Deployment inventory will be deprecated. For this release, while the option is still available, the preferred method is to use the Custom Model Workshop. With v8.0, only the workshop option will be available (and will be linked to from the inventory page).

![](images/rn-local-folder.png)

### API deprecation notices {: #api-deprecation-notices }

Note the following to better plan for later migration to new releases.

* Get discarded features information: `GET /api/v2/projects/(projectId)/discardedFeatures/`
* Restore a list of discarded features: `POST /api/v2/projects/(projectId)/modelingFeatures/fromDiscardedFeatures/`

## Customer-reported fixed issues {: #customer-reported-fixed-issues }

The following issues have been fixed since release [7.2.6](v7.2.6-aml).

### Feature Discovery {: #feature-discovery }

* SAFER-4115: Fixes an issue where BigQuery OAuth credentials would not work with Feature Discovery projects.

_All product and company names are trademarks&trade; or registered&reg; trademarks of their respective holders. Use of them does not imply any affiliation with or endorsement by them_.
---
title: Time series (V7.3)
description: DataRobot Release 7.3 time series release notes
---

# Time series (V7.3) {: #time-series-v73 }

_December 13, 2021_

The DataRobot v7.3.0 release includes many new [time series](#new-time-series-features) features, described below. See also details of Release 7.3.0 in the [AutoML](v7.3.0-aml) and [MLOps](v7.3-mlops) release notes.

## New time series features {: #new-time-series-features }

See details of the following new GA features:

* [Segmented modeling for multiseries projects](#segmented-modeling-for-multiseries-projects)
* [Time series data prep tool now GA, adds quality check](#time-series-data-prep-tool-now-ga-adds-quality-check)
* [High-resolution calendars now GA](#high-resolution-calendars-now-ga)
* [Restore features removed by reduction](#restore-features-removed-by-reduction)
* [Text column support for multiseries projects](#text-column-support-for-multiseries-projects)
* [Multiclass confusion matrix now supports backtests](#multiclass-confusion-matrix-now-supports-backtests)
* [Accuracy Over Time performance improvements](#accuracy-over-time-performance-improvements)

See details of the following new public preview features:

* [Scoring Code for time series](#scoring-code-for-time-series)
* [Time series Predictor support in the AI App Builder](#time-series-predictor-support-in-theai-app-builder)

## Generally available features {: #generally-available-features }

The following new features are now generally available.

### Segmented modeling for multiseries projects {: #segmented-modeling-for-multiseries-projects }

No single model can handle extreme data diversity or forecast the complexity of human buying patterns at a detailed level. Complex demand forecasting typically requires deep statistical know-how and an unlimited budget to spend on lengthy development projects and big data architectures.
Prior to the release of multiseries with segmented modeling, you would have to segment your datasets, set up a model factory to model each segment of your data, and then deploy a model for each segment. Now generally available, segmented modeling lets you build projects with up to 100 segments.

![](images/ts-segment-4.png)

A segment is a group of series; each segment runs Autopilot and has its own Leaderboard. DataRobot then selects and prepares a champion model from each segment's Leaderboard and feeds that champion to the project's Combined Model. You can override DataRobot's champion selection&mdash;the Combined Model updates with the new model information and the deployment is updated to reflect the change. With segmented modeling, all segments are represented in the Combined Model, but it requires only a single deployment.

![](images/ts-segment-9.png)

For details, see [multiseries modeling with segmentation](ts-segmented).

### Time series data prep tool now GA, adds quality check {: #time-series-data-prep-tool-now-ga-adds-quality-check }

The time series data prep tool is now generally available. With this release, the tool adds a data quality check that ensures imputed features are not [leaking](glossary/index#target-leakage) the imputed target (only a potential problem for [known in advance (KA) features](glossary/index#known-in-advance-features)). Any features identified as high or moderate risk for imputation leakage are removed from the set of KA features.

![](images/tsd-prep-13.png)

Additionally, when a deployment is created from a model that used a prepped dataset, the model package has the information necessary to apply the transformations to a prediction dataset originating in the AI Catalog.

For details, see the [time series data prep tool](ts-data-prep) documentation.
### High-resolution calendars now GA {: #high-resolution-calendars-now-ga }

With this release, when uploading your own calendar file, you can derive calendar event-related features at a much more granular, timestamp-based level. Additionally, you can specify durations to further highlight event specificity. To ensure accuracy, DataRobot provides guardrails to support calendar-derived features based on calendar events that overlap. See the [calendar file requirements](ts-adv-opt#calendar-file-requirements) for using this feature with [Scoring Code](#scoring-code-for-time-series) (public preview) if your calendar has only full-day events.

For details, see [calendar file information](ts-adv-opt#upload-your-own-calendar-file).

### Restore features removed by reduction {: #restore-features-removed-by-reduction }

The ability to restore derived features back into your modeling data, even if they are low impact, is now generally available. You can then create new feature lists that include them. As an improvement over the public preview version, you can click the index column to re-sort with restored features leading (they are also marked with an icon).

![](images/restore-pruned-8.png)

For full details, see the [feature restoration](restore-features) documentation.

### Text column support for multiseries projects {: #text-column-support-for-multiseries-projects }

In addition to numeric and categorical types, you can now select a text column as the multiseries ID. Previously, all variable types were initially accepted but could cause problems at build time. Now, improved testing ensures only valid types are available for selection.

### Multiclass confusion matrix now supports backtests {: #multiclass-confusion-matrix-now-supports-backtests }

With this release, the Data Selection dropdown in the multiclass confusion matrix allows you to base the display on an individual backtest, all backtests, or the holdout partition (if unlocked).
For details, see the [Confusion Matrix](multiclass#data-selection) documentation.

### Accuracy Over Time performance improvements {: #accuracy-over-time-performance-improvements }

This release brings performance improvements to the [Accuracy Over Time](aot) tab. The chart helps to visualize how predictions change over time, plotting predicted and actual values for selectable backtests, resolutions, and forecast distances. Because of the chart's complexity, the computation is extensive and, depending on dataset size, can require significant internal resources. Now, computation optimization has resulted in faster performance and less load on resources.

For details, see the [Accuracy Over Time](aot) documentation.

## Public preview features {: #public-preview-features }

The following features are part of the public preview program.

### Scoring Code for time series {: #scoring-code-for-time-series }

Scoring Code public preview capabilities for time series have expanded with this release. In addition to the [blueprints and features supported in release 7.2](v7.2.0-ats#scoring-code-for-time-series), this release brings support for Forecast Distance (FD) splits and Weighted Rolling Windows.

!!! note

    If you want Scoring Code support for a project using [calendars](ts-adv-opt#calendar-files), and your calendar has only full-day events (such as holidays), ask your platform administrator to set the *Disable High-Resolution Calendars for Time Series Projects* feature flag for your account.

### Time series Predictor support in the AI App Builder {: #time-series-predictor-support-in-theai-app-builder }

Now available for public preview, you can build AI-powered Predictor applications for both multi- and single-series projects. In your time series deployment, click the actions menu and select **Create Application**.
Once created, upload batch predictions to populate the new Time Series Forecasting widget, which allows you to navigate between multiple time unit resolutions, view calendar events (if uploaded), compare forecasted vs. actual values for new data, and view insights for Prediction Explanations over time.

![](images/ts-app-3.png)

## Time series fixed issues {: #time-series-fixed-issues }

The following customer-reported issues have been fixed since release [7.2.0](v7.2.0-ats):

* TIME-9790: Fixes downsampled training predictions for Forecast Distance split models in non-downsampled time series projects.
* TIME-9425: Fixes an issue that occasionally caused a blank page when accessing smart-sampled OTV projects with custom backtest settings.
* TIME-9796: Fixes a Forecast vs Actual chart crash on series change.

_All product and company names are trademarks&trade; or registered&reg; trademarks of their respective holders. Use of them does not imply any affiliation with or endorsement by them_.
---
title: MLOps (V7.1)
description: DataRobot Release 7.1 MLOps release notes
---

# MLOps (V7.1) {: #mlops-v71}

_June 14, 2021_

The DataRobot MLOps v7.1 release includes many new features and capabilities, described below. Release v7.1 provides updated UI string translations for the following languages:

* Japanese
* French
* Spanish

## Introducing Pricing 5.0 {: #introducing-pricing-50 }

[Pricing 5.0](pricing) is the newest plan available to DataRobot users. With this plan, a number of capabilities supporting DataRobot MLOps are introduced:

* Each user or organization has a set number of active deployments they can have at one time. The limit is displayed in the [Deployment Inventory status tiles](deploy-inventory#live-inventory-updates). Pricing 5.0 users can filter the inventory by active or inactive deployments.
* Users who built models in AutoML can download model packages (.mlpkg files) to use with the [Portable Prediction Server](portable-pps) directly from the model Leaderboard without engaging in the deployment workflow.
* Users who built models in AutoML can download Scoring Code for a model via the model Leaderboard without engaging in the deployment workflow. Previously, downloading Scoring Code made the associated deployment a permanent fixture. Now, these deployments can be deactivated or deleted. Additionally, users can choose to include prediction explanations with their Scoring Code download.

## New features and enhancements {: #new-features-and-enhancements }

See details of [new deployment features](#improved-monitoring-support-for-multiclass-deployments) below:

* Now GA: Improved monitoring support for multiclass deployments
* Automatic actuals feedback for time series deployments
* Now GA: Use challenger models with external deployments

The following new deployment features are currently in [public beta](#deployment-reports).
Contact your DataRobot representative for information on enabling them:

* Deployment reports
* Reset deployment statistics
* The management agent
* Baseline revisions for external models

#### New prediction features {: #new-prediction-features }

See details of [new prediction features](#batch-prediction-cloud-connectors) below:

* Batch prediction cloud connectors

The following new prediction features are currently in [public beta](#scoring-code-in-snowflake). Contact your DataRobot representative for information on enabling them:

* Scoring Code in Snowflake
* Include prediction explanations in Scoring Code
* Batch prediction job definitions and scheduling
* MLOps agent: Kafka
* Portable batch predictions
* Batch prediction Parquet support
* Improved batch predictions for custom models

#### New model registry features {: #new-model-registry-features }

See details of [new model registry features](#upload-environments-as-prebuilt-images) below:

* Upload environments as prebuilt images
* Now GA: Integrate a Bitbucket Server or GitHub Enterprise repository with custom inference models
* Now GA: Custom inference anomaly detection models

#### New governance features {: #new-governance-features }

See details of [new governance features](#feature-lists-added-to-governance-metadata) below:

* Feature lists added to governance metadata

## New deployment features {: #new-deployment-features }

Release v7.1 introduces the following generally available deployment features.

### Improved monitoring support for multiclass deployments {: #improved-monitoring-support-for-multiclass-deployments }

Now generally available, multiclass deployments have additional monitoring support. Multiclass deployments offer [class-based configuration](deploy-accuracy#class-selector) to modify the data displayed on the Accuracy and Data Drift graphs. Use the class selector to display the desired classes for a deployment.
DataRobot provides quick-select shortcuts for classes: the five classes most common in the training data, the five with the lowest accuracy score, and the five with the greatest amount of data drift. Once specified, the charts on the tab (Accuracy or Data Drift) update to display the selected classes.

![](images/multi-dep-5.png)

### Automatic actuals feedback for time series deployments {: #automatic-actuals-feedback-for-time-series-deployments }

Time series deployments that have indicated an [association ID](accuracy-settings#association-id) can enable the automatic submission of actuals, so that you do not need to submit them manually via the UI or API. Once enabled, actuals can be extracted from the data used to generate predictions. As each prediction request is sent, DataRobot can extract an actual value for a given date. This is possible because when you send prediction rows to forecast, historical data is included; this historical data serves as the actual values for the <em>previous</em> prediction request.

### Challenger models now available for external deployments {: #challenger-models-now-available-for-external-deployments }

Deployments in remote prediction environments can use the [**Challengers**](challengers) tab. Remote models can serve as the champion model, and you can compare them to DataRobot and custom model challengers. If you want to replace the champion model, you can promote a custom or DataRobot challenger model and deploy the new champion to your remote prediction environment.

![](images/ext-champ-4.png)

## New public beta deployment features {: #new-public-beta-deployment-features }

Release v7.1 introduces the following public beta deployment features.

### Deployment reports {: #deployment-reports }

You can now generate a deployment report on demand, detailing essential information about a deployment's status, such as insights about service health, data drift, and accuracy statistics (among many other details).
Additionally, you can create a report schedule that acts as a policy to automatically generate deployment reports based on the defined conditions (frequency, time, and day). When the policy is triggered, a new report is generated and DataRobot sends an email notification to those who have access to the deployment.

![](images/dep-report-rn.png)

### Reset deployment statistics {: #reset-deployment-analytics }

Deployments now support the deletion of monitoring data by model or time range. This action is governed by the [approval workflow](dep-admin) to safeguard against accidental deletion. This feature allows you to remove monitoring data sent inadvertently or during the integration testing phase of deploying a model.

![](images/delete-dep-rn.png)

### The management agent {: #the-management-agent }

DataRobot is introducing the management agent, which understands the state of a deployment and can automate the tasks of retrieving artifacts, deploying models, and replacing them externally. The agent is extensible to support a variety of use cases across model formats and prediction environments. Administrators can configure the management agent in their prediction environments to automate the deployment and replacement of models based on user actions within MLOps. It pairs easily with the MLOps agent to automatically monitor models and integrate them with additional MLOps functionality such as challenger models. The management agent is a tool for standardizing and automating model deployment.

![](images/manage-rn.png)

### Baseline revisions for external models {: #baseline-revisions-for-external-models }

Binary classification models deployed to remote environments can now be registered with holdout data, allowing external deployments to calculate additional drift and accuracy baselines previously only available to models built with DataRobot AutoML.
When monitoring accuracy, you can now compare the current accuracy calculation to the baseline at the time the model was trained. Additionally, target drift now supports more detailed drift analysis using prediction values prior to the application of the prediction threshold.

![](images/baseline-rn.png)

## New prediction features {: #new-prediction-features_1 }

Release v7.1 introduces the following generally available prediction features.

### Batch prediction cloud connectors {: #batch-prediction-cloud-connectors }

The [batch prediction API](pred-examples#snowflake-scoring) now supports connectors specific to Snowflake and Azure Synapse for the ingest and export of data while scoring. The use of JDBC to transfer data can be costly in terms of input/output operations per second (IOPS) and expenses for data warehouses. These adapters reduce the load on database engines during prediction scoring by using cloud storage and bulk insert to create a hybrid JDBC-cloud storage solution.

## New public beta prediction features {: #new-public-beta-prediction-features }

Release v7.1 introduces the following public beta prediction features.

### Scoring Code in Snowflake {: #scoring-code-in-snowflake }

DataRobot Scoring Code now supports execution directly inside of Snowflake using Snowflake's new Java UDF functionality. This capability removes the need to extract and load data from Snowflake, resulting in a much faster route to scoring large datasets. The **Portable Predictions** tab for deployments has been tailored to enable this functionality when the deployment is created in a Snowflake prediction environment.

![](images/snowflake-sc-rn.png)

### Include prediction explanations in Scoring Code {: #include-prediction-explanations-in-scoring-code }

You can now receive prediction explanations anywhere you deploy a model: in DataRobot, with the Portable Prediction Server, and now in Java Scoring Code.
Prediction explanations provide a quantitative indicator of the effect variables have on the predictions, answering why a given model made a certain prediction. You can enable prediction explanations on the [**Portable Predictions**](sc-download-deployment) tab when downloading a model via Scoring Code.

![](images/pe-pps-rn.png)

### Batch prediction job definitions and scheduling {: #batch-prediction-job-definitions-and-scheduling }

When making batch predictions for deployments via the **Make Predictions** tab, you can now create and schedule JDBC and cloud storage prediction jobs directly from the deployment without utilizing the API. Additionally, you can view a history of the prediction jobs that ran. Define the name of the job, the prediction source, configurations, and the prediction destination; all specifications are saved for later use.

![](images/batch-pred-sched-rn.png)

### New MLOps agent channel: Kafka {: #new-mlops-agent-channel-kafka }

The MLOps agent now supports Kafka as a channel, in addition to the previously supported channels: File, AWS SQS, Google Pub/Sub, and RabbitMQ. The agent can now be easily deployed to many prediction environments, and Kafka support eliminates the need for additional queuing services.

### Portable batch predictions {: #portable-batch-predictions }

The [Portable Prediction Server](portable-batch-predictions) can now be paired with an additional container to orchestrate batch prediction jobs using file storage, JDBC, and cloud storage. You no longer need to manually manage the large-scale batching of predictions while utilizing the Portable Prediction Server.
Additionally, large batch prediction jobs can be collocated at or near the data, or in environments behind firewalls without access to the public internet.

![](images/pps-batch-rn.png)

### Batch prediction Parquet support {: #batch-prediction-parquet-support }

The batch prediction API has been enhanced to support Parquet-formatted files for both ingest and output. Parquet file format support removes the need to implement an additional conversion step in prediction pipelines.

### Improved batch predictions for custom models {: #improved-batch-predictions-for-custom-models }

For custom model deployments, batch prediction replica configuration enhances performance and stabilizes large prediction jobs.

## New model registry features {: #new-model-registry-features_1 }

Release v7.1 introduces the following generally available model registry features.

### Upload environments as prebuilt images {: #upload-environments-as-prebuilt-images }

You can now upload environments for [custom models](custom-environments#create-a-custom-environment) as prebuilt images. A prebuilt image is a Docker image saved as a tarball in .tar, .gz, or .tgz format. If you provide a prebuilt image, you do not need to provide a context file for the environment (the tarball archive containing the Dockerfile and any other relevant files). Whether you supply a prebuilt image or build an environment with a context file, you can then download the built environment image as a .tar file.

![](images/c-env-2.png)

### GitHub Enterprise and Bitbucket Server integration for custom models {: #github-enterprise-and-bitbucket-server-integration-for-custom-models }

Users can now register [GitHub Enterprise and Bitbucket Server repositories](custom-inf-model#adding-a-github-enterprise-repository) in the Model Registry to pull artifacts into DataRobot and build custom inference models.
Integrating either of these repositories allows you to directly transfer between a governed, code-centric machine learning development environment and a governed MLOps environment. ![](images/remote-2.png) ### Custom inference anomaly detection models {: #custom-inference-anomaly-detection-models } Now generally available, you can create a custom inference model for anomaly detection problems. When creating a custom model, you can indicate "Anomaly Detection" as a target type. Additionally, access the [DRUM template](https://github.com/datarobot/datarobot-user-models/tree/master/model_templates/inference/python3_sklearn_anomaly){ target=_blank } for anomaly detection models. For deployed custom inference anomaly detection models, note that the following functionality is not supported: * Data drift * Accuracy and association IDs * Challenger models * Humility rules * Prediction intervals ![](images/anomaly-custom-rn.png) ## New public beta governance features {: #new-public-beta-governance-features } Release v7.1 introduces the following public beta features. ### Feature lists added to governance metadata {: #feature-lists-added-to-governance-metadata } The Model Registry and deployments have been enhanced to allow you to view a model's feature list and feature importance. You can now access this metadata without navigating back into the original modeling project to understand the full list of features for your model. ![](images/gov-feat-rn.png) _All product and company names are trademarks&trade; or registered&reg; trademarks of their respective holders. Use of them does not imply any affiliation with or endorsement by them_.
v7.1-mlops
--- title: Time series (V7.1) description: DataRobot Release 7.1 time series release notes --- # Time series (V7.1) {: #time-series-v71 } _June 14, 2021_ The DataRobot v7.1.0 release includes many new [time series](#new-time-series-features) features, described below. See also the [AutoML new features](v7.1.0-aml) for details on other capabilities. ## New time series features {: #new-time-series-features } See details of the following new features below: * New badge identifies baseline models in time series projects * Time series calendar improvements * EWMA setting learns more from recent data * Default to asynchronous partitioning for better accuracy * Beta: Data prep improvements * Beta: Cold start and partial history blueprints * Beta: New Series humility rule ### New badge identifies baseline models in time series projects {: #new-badge-identifies-baseline-models-in-time-series-projects } DataRobot now identifies which model is being used as the baseline model for time series projects with a badge on the Leaderboard: ![](images/ts-mase-baseline.png) The baseline model is the model that uses the most recent value that matches the longest periodicity. That is, while a project could have multiple different naive predictions with different periodicity, DataRobot uses the longest naive predictions to compute the MASE score. MASE (Mean Absolute Scaled Error) is a measure of forecast accuracy that compares a model to a naive baseline model. It is one of the many selectable optimization metrics; this release also introduces access to documentation from the metric dropdown. ![](images/rn-mase-baseline.png) ### Time series calendar improvements {: #time-series-calendar-improvements } With this release, DataRobot offers calendar improvements that increase size limits and AI Catalog flexibility. Now, DataRobot supports file sizes up to 10MB for calendars used in time series projects. Additionally, calendars uploaded as a local file are automatically added to the AI Catalog. 
When files are in the catalog, you can view them and use them in a project. Or, you can download them and, for example, edit and re-upload them. This is particularly useful for a generated calendar, as you can generate it in DataRobot, download and customize it, and re-upload it to the AI Catalog. In all cases, you can share any calendar file with other users in your organization. ![](images/ts-cat-cal-2.png) ### EWMA setting learns more from recent data {: #ewma-setting-learns-more-from-recent-data } This release introduces a setting for exponentially weighted moving averages (EWMA), available from the Advanced options link, that applies exponentially weighted moving average operations to features. EWMA places greater weight and significance on the most recent data points, measuring trend direction over time so that more recent values have more influence than older values. ![](images/ewma.png) ### Beta: Data prep improvements {: #beta-data-prep-improvements } Release 7.0 introduced a beta feature for [handling gaps](v7.0.0-ats#beta-time-series-data-prep-tool-addresses-gap-handling-to-allow-time-based-mode-with-irregular-time-steps) in time-based mode, allowing datasets with irregular time steps. This release brings improvements to that feature, including allowing access to the tool from both the start screen and the AI Catalog. ![](images/rn-data-prep-improvments.png) ### Default to asynchronous partitioning for better accuracy {: #default-to-asynchronous-partitioning-for-better-accuracy } Asynchronous partitioning, an optional capability in time series projects since v6.3, is now the default behavior. With it, DataRobot automatically adjusts backtests so that they sufficiently cover important events and are representative of your data. 
Backtests are no longer generated based purely on data length, but are customized to the target so that they highlight and/or include specific regions of interest, for example, seasonal events, holidays, or regular anomalies. In other words, this capability identifies areas that may have insufficient backtest coverage and then improves the data that is sampled for each backtest by automatically adjusting the bounds of the training and validation partitions. The previous ability to manually customize backtest start and end dates is still available. ### Beta: Cold start and partial history blueprints {: #beta-cold-start-and-partial-history-blueprints } "Cold start" is the ability to model on series that were not seen in the training data; partial history refers to prediction datasets with series history that is only partially known (historical rows are partially available within the feature derivation window). While some blueprints are designed to predict on new series given some history, others are not. Previously, this could lead to suboptimal predictions, and DataRobot would error when making predictions for a series for which the full history was not provided. (The full history was needed to derive the features for specific forecast points.) With this release, time series introduces blueprints optimized for cold start and also for partial history modeling. The new blueprints are run as part of Autopilot and are available for multiseries regression projects. Set the **Advanced Option** to include models that support partial history datasets. ![](images/rn-partial.png) ### Beta: "New Series" humility rule {: #beta-new-series-humility-rule } New series humility rules, introduced in v7.0.0, allow you to create rules that trigger on series that were unseen in the training data. With this release, and available as a beta feature, you can now set the humility rule to use a replacement model from the model registry instead of the Leaderboard. 
This decouples the model from a specific project and allows you to use model packages. Using a backup model from any compatible project provides more flexibility (compatibility means using the same target, date/time partitioning column, feature derivation window, forecast distances, series name, etc.). This feature requires access to MLOps. ## Time series fixed issues {: #time-series-fixed-issues } The following issues have been fixed since release 7.0.2. ### Time series {: #time-series } * TIME-7599: Fixes an issue previously causing time series projects using a catalog dataset to fail if the dataset had been deleted after project creation and before the feature derivation process. * TIME-7800: Fixes broken model export for Eureqa GAM models. * TIME-7847: Fixes Prediction Explanation computation for time series bulk predictions launched with the Prediction API. * TIME-8176: Fixes an issue where Prediction Explanations failed to compute in some cases with New Series Modelers. * TIME-8425: The Anomaly assessment records route now filters output properly when backtest 0 is specified as the filtering condition. _All product and company names are trademarks&trade; or registered&reg; trademarks of their respective holders. Use of them does not imply any affiliation with or endorsement by them_.
v7.1.0-ats
--- title: AutoML (V7.1) description: DataRobot Release 7.1 AutoML release notes --- # AutoML (V7.1) {: #automl-v71 } _June 14, 2021_ The DataRobot v7.1.0 release includes many [new UI](#other-new-features) and [API](#api-enhancements) capabilities, described below. See also the [time series new features](v7.1.0-ats) for details on those capabilities. See these important [deprecation](#deprecation-notices) announcements for information about changes to DataRobot's support for older, expiring functionality. This document also describes DataRobot's [fixed](#customer-reported-fixed-issues) issues. Release v7.1.0 provides updated UI string translations for the following languages: * Japanese * French * Spanish ## In the spotlight... {: #in-the-spotlight } The following features are some of the highlights of Release 7.1: * [New documentation experience](#new-documentation-experience) * [Automated AI Reports](#automated-ai-reports) * [Feature Discovery Snowflake integration](#feature-discovery-snowflake-integration) * [Beta: Purpose-built AI applications with the AI App Builder](#beta-purpose-built-ai-applications-with-the-ai-app-builder) ### New documentation experience {: #new-documentation-experience } This release introduces a new documentation experience for DataRobot platform application users. ![](images/rn-new-docs.png) In addition to a new look, the organization has been modified to better reflect the end-to-end modeling workflow. Content has been added&mdash;and moved&mdash;to provide easier access to documentation resources. Specifically: * Content changes: * [DataRobot Data Prep](companion-tools/index) documentation, while also available at the legacy location, can now be accessed from the platform site. * A [DataRobot glossary](glossary/index), which will grow over time, is available. * [API documentation](api/index)&mdash;a Quickstart and REST API, Python client, and R client documentation&mdash;can be launched from within the site. * Zoom images for a closer look. 
When you hover on an image, a magnifier appears. Click once on the image to expand it in-screen; click again to return to the page. ![](images/rn-docs-zoom.png) * Book (left side) and page-specific (right side) tables of contents make navigation easier. ![](images/rn-docs-toc.png) The site will continue to grow, with notebook and tutorial content coming soon. Questions, comments, and suggestions are welcome; send email to [docs@datarobot.com](mailto:docs@datarobot.com). ### Automated AI reports {: #automated-ai-reports } With this release, you can now create and download the DataRobot AI report&mdash;documentation that provides a high-level overview of the model building process. ![](images/ai-report.png) The report summarizes the most important findings of a project, allowing you to present them to stakeholders in an easily consumable format. It provides accuracy insights for the top-performing model, including speed and cross-validation scores. It also captures interpretability insights from the Feature Impact histogram for your top-performing model. Detailed model explanations, performance metrics, and ethics insights generated in the AI Report help you build overall trust in your AI projects and prove value to your key stakeholders. ![](images/gen-ai-report.png) ### Feature Discovery Snowflake integration {: #feature-discovery-snowflake-integration } An integration between DataRobot and Snowflake allows joint users to execute Feature Discovery projects in DataRobot while performing computations in Snowflake, when beneficial. The integration minimizes data movement, making the computation of new model features faster, more accurate, and more cost-effective. DataRobot detects whether all secondary datasets configured for the project are dynamic and are referencing tables in Snowflake. If they are, DataRobot automatically pushes joins and filtering operations into Snowflake, and then loads the smaller result set back into DataRobot. 
Because Feature Discovery is now starting with smaller datasets, DataRobot project runtimes are reduced. ![](images/rn-snowflake.png) ### Beta: Purpose-built AI applications with the AI App Builder {: #beta-purpose-built-ai-applications-with-the-ai-app-builder } The AI App Builder provides a way for users without coding experience to launch, configure, and share AI-powered applications in a visual and interactive interface so that predictions can be easily shared and consumed&mdash;optimizing decision-making for a machine learning use case and delivering increased value from data. Similar to apps deployed from the Application Gallery, each application starts with an application type and data source&mdash;either a deployment or dataset in the **AI Catalog**. However, in the App Builder, you can then configure additional widgets, custom features, and pages to tailor the application to a specific use case. ![](images/rn-app-builder-1.png) In **Applications > Current Applications**, you can view a list of existing applications and click **+ Create Application** to create a new one in the AI App Builder. ![](images/rn-app-builder-2.png) In edit mode, create **pages** to organize your insights, then drag and drop **header** and **chart widgets** to tailor the application to a specific use case. ## New AutoML features and enhancements {: #new-automl-features-and-enhancements } The following lists the new AutoML features in release v7.1.0, described below. 
* Keras-specific training dashboard * Feature Discovery now supports anomaly detection * New blueprints and Advanced Tuning parameters for image datasets * Support for dynamic Spark SQL in Feature Discovery * Tiny BERT pretrained featurizer implementation extends NLP * Model compliance documentation for multiclass projects * Improved Eureqa training behavior * Beta: Feature Discovery Relationship Quality Assessment {% if 'enterprise' in tags %} #### Changes for Administrators {: #changes-for-administrators } * Beta: Custom RBAC roles for users and groups {% endif %} ### Keras-specific training dashboard {: #keras-specific-training-dashboard } The **Training Dashboard** tab provides, for each executed candidate model, visualizations of varying hyperparameters over time. In other words, it helps to understand what happened during model training, and how well the model fits the data. The tab provides visualization in areas of training and test loss, accuracy, learning rate, and momentum across all iterations. Select candidate models to compare, and modify settings to change the displays and aid interpretability. Use the information found in this tab to inform decisions about which parameters to tune in order to improve the final model. From the dashboard you can assess the impact that each parameter has on model performance, and with direct links to the **Advanced Tuning** tab, you can further tune and test the model. ![](images/rn-keras-dash.png) ### Feature Discovery now supports anomaly detection {: #feature-discovery-now-supports-anomaly-detection } Now generally available, you can leverage Feature Discovery's secondary dataset capabilities to detect anomalies. When unsupervised learning (i.e., no target) is enabled, and you add a secondary dataset, Feature Discovery will extract features from the secondary datasets to help detect anomalies, removing the need to script feature engineering outside of DataRobot. 
![](images/rn-ad-fd.png) ### New blueprints and Advanced Tuning parameters for image datasets {: #new-blueprints-and-advanced-tuning-parameters-for-image-datasets } This release makes available fine-tuning of deep learning convolutional neural network (CNN) blueprints with pretrained architectures. Training a CNN on a small dataset greatly affects the deep convolutional network’s ability to generalize, often resulting in overfitting. Fine-tuning is a process that takes a pretrained network model and applies it to a second similar task, while still allowing further customization of the layer's trainable scope and learning rate. Fine-tuning will often improve the performance and accuracy of a CNN when the project is large and drastically different in context from the pretrained dataset. At times it even outperforms the base pretrained CNN featurizer in small datasets that are close in context to the pretrained dataset. From the **Training Dashboard** you can investigate metric scores, learning rates of each layer, and general learning rates of each iteration. ### Support for dynamic Spark SQL in Feature Discovery {: #support-for-dynamic-spark-sql-in-feature-discovery } For Feature Discovery secondary datasets, DataRobot offers the ability to enrich, transform, shape, and blend together snapshotted datasets using Spark SQL queries from within the **AI Catalog**. First introduced as a beta feature, support for dynamic Spark SQL in secondary datasets for Feature Discovery projects is now generally available. This new functionality increases flexibility in performing basic data prep. Authentication requirements remain the same. 
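As an illustration of the kind of data prep such a Spark SQL query expresses, the sketch below runs a join-and-aggregate query that blends two hypothetical snapshotted tables into one secondary dataset. The table and column names are invented, and Python's built-in `sqlite3` stands in only so the SQL is runnable here; in DataRobot you would enter the equivalent Spark SQL in the **AI Catalog**.

```python
import sqlite3

# Hypothetical snapshotted tables standing in for AI Catalog datasets.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customers (customer_id INTEGER, region TEXT);
CREATE TABLE transactions (customer_id INTEGER, amount REAL);
INSERT INTO customers VALUES (1, 'EMEA'), (2, 'APAC');
INSERT INTO transactions VALUES (1, 10.0), (1, 30.0), (2, 5.0);
""")

# Blend the two tables into one enriched secondary dataset:
# total spend and transaction count per customer, joined to the profile.
query = """
SELECT c.customer_id,
       c.region,
       SUM(t.amount) AS total_spend,
       COUNT(*)      AS n_transactions
FROM customers c
JOIN transactions t ON t.customer_id = c.customer_id
GROUP BY c.customer_id, c.region
ORDER BY c.customer_id
"""
rows = conn.execute(query).fetchall()
print(rows)  # [(1, 'EMEA', 40.0, 2), (2, 'APAC', 5.0, 1)]
```

The same join-and-aggregate shape is valid Spark SQL; registering it as a dynamic dataset means the result is recomputed from the source tables rather than snapshotted once.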
### Tiny BERT pretrained featurizer implementation extends NLP {: #tiny-bert-pretrained-featurizer-implementation-extends-nlp } Now generally available: [BERT](https://arxiv.org/abs/1810.04805){ target=_blank } (Bidirectional Encoder Representations from Transformers) is Google's transformer-based de facto standard for natural language processing (NLP) transfer learning. Tiny BERT (or any distilled, smaller version of BERT) is now available with certain blueprints in the DataRobot Repository. These blueprints provide pretrained feature extraction in the NLP field, similar to Visual AI featurizers with no fine-tuning needed. However, for maximum flexibility, DataRobot's implementation offers two additional tunable pooling parameters&mdash;Max Pooling and Average Pooling. Tiny BERT blueprints are available for both UI and API users. ![](images/rn-tiny-bert.png) ### Model compliance documentation for multiclass projects {: #model-compliance-documentation-for-multiclass-projects } The model compliance documentation, which allows users to automatically generate and download documentation that assists with deploying models in highly regulated industries, now supports multiclass models. If available for your organization, you can generate, for each model, individualized documentation to provide comprehensive guidance on what constitutes effective model risk management. Then, you can download the report as an editable Microsoft Word document (.docx). Previously available only for regression and classification, this documentation can now be generated for projects with up to 100 classes. Access the option to build the model compliance documentation from a model’s **Compliance** tab on the project Leaderboard. ![](images/rn-compliance-multiclass.png) ### Improved Eureqa training behavior {: #improved-eureqa-training-behavior } With this release, new Eureqa blueprints have been added that use 250 generations (providing comparable accuracy to 1000 generations) for GAM and classic Eureqa models. 
The addition of these 250-generation blueprints increases the types of projects that will automatically include Eureqa as part of Autopilot (available as part of full, not quick). Other Eureqa generation-count models are still available from the Repository. See the Eureqa documentation for details on when and which models are available for AutoML and time series projects. ### Beta: Feature Discovery Relationship Quality Assessment {: #beta-feature-discovery-relationship-quality-assessment } New in this release as a beta feature, Feature Discovery includes a tool to automatically assess the quality of a relationship configuration&mdash;warning the user of potential problems&mdash;early in the creation process. The tool verifies join keys, dataset selection, and time-aware settings before EDA2 begins: ![](images/rn-fd-quality-assess.png) Click the **Review configuration** button to trigger the Relationship Quality Assessment. A progress indicator (loading spinner) displays on each dataset and on the **Review configuration** button, which is disabled while an assessment is running. Once the assessment is complete, DataRobot marks all tested datasets. Those with identified issues display a yellow warning icon and those with no identified issues display a green tick. ![](images/rn-fd-quality-assess-done.png) Select the dataset to view a summary of the issues with suggested potential fixes. {% if 'enterprise' in tags %} ## New admin features {: #new-admin-features } The following new feature is available to system administrators. ### Custom RBAC roles for users and groups {: #custom-rbac-roles-for-users-and-groups } With this release, DataRobot introduces custom RBAC roles&mdash;allowing admins to control user and group permissions at a more granular level. In **User Settings > User Roles**, admins can now create and define access for new roles when their use case does not align with the default roles in DataRobot. 
![](images/rn-custom-rbac.png) {% endif %} ## API enhancements {: #api-enhancements } The following is a summary of new API features and enhancements. See the [API Support page](https://support.datarobot.com/hc/en-us/articles/215067826-READ-ME-Links-to-DataRobot-API-documentation?flash_digest=b04f17985f45363278b36889b35fe14aaac3dc1f#){ target=_blank } for API documentation on each client. ### New features {: #new-features } The following new functionality has been added for API release v2.25.0. #### Changes to anomaly assessment insights {: #changes-to-anomaly-assessment-insights } Adds the ability to compute, retrieve, and delete anomaly assessment insights for time series unsupervised projects that support calculation of Shapley values. * Initialize an anomaly assessment insight for the specified subset: `POST /api/v2/projects/(projectId)/models/(modelId)/anomalyAssessmentInitialization/` * Get anomaly assessment records, SHAP explanations, predictions preview: `GET /api/v2/projects/(projectId)/anomalyAssessmentRecords/` `GET /api/v2/projects/(projectId)/anomalyAssessmentRecords/(recordId)/explanations/` `GET /api/v2/projects/(projectId)/anomalyAssessmentRecords/(recordId)/predictionsPreview/` * Delete anomaly assessment records: `DELETE /api/v2/projects/(projectId)/anomalyAssessmentRecords/(recordId)/` #### Cascade sharing of underlying entities for Spark datasets {: #cascade-sharing-of-underlying-entities-for-spark-datasets } Adds cascade sharing of underlying entities for Spark datasets by means of the following endpoint: `PATCH /api/v2/datasets/(datasetId)/sharedRoles/` #### Compute and retrieve Anomaly Over Time plots {: #compute-and-retrieve-anomaly-over-time-plots } Adds the ability to compute and retrieve Anomaly Over Time plots for unsupervised date/time partitioned models. 
* Computation of Anomaly Over Time plots: `POST /api/v2/projects/(projectId)/datetimeModels/(modelId)/datetimeTrendPlots/` * Retrieve Anomaly Over Time metadata: `GET /api/v2/projects/(projectId)/datetimeModels/(modelId)/anomalyOverTimePlots/metadata/` * Retrieve Anomaly Over Time plots: `GET /api/v2/projects/(projectId)/datetimeModels/(modelId)/anomalyOverTimePlots/` * Retrieve Anomaly Over Time preview plots: `GET /api/v2/projects/(projectId)/datetimeModels/(modelId)/anomalyOverTimePlots/preview/` ### Enhancements {: #enhancements } The `isZeroInflated` property, computed during EDA, has been added to the following endpoints: * `GET /api/v2/datasets/(datasetId)/allFeaturesDetails/` * `GET /api/v2/datasets/(datasetId)/versions/(datasetVersionId)/allFeaturesDetails/` * `GET /api/v2/projects/(projectId)/features/` * `GET /api/v2/projects/(projectId)/features/(featurename:featureName)/` * `GET /api/v2/projects/(projectId)/modelingFeatures/` * `GET /api/v2/projects/(projectId)/modelingFeatures/(featurename:featureName)/` ### Changes {: #changes } The `intakeSettings` and `outputSettings` of the Cloud Adapters for Batch Predictions (GCP, S3, Azure) for endpoint `POST /api/v2/batchPredictions/` now officially expect an intake and/or output URL field to end with a `/` if it should be interpreted as a directory. Otherwise, it is interpreted as a single file. !!! tip DataRobot highly recommends updating to the latest API client for Python and R. ## Deprecation notices {: #deprecation-notices } Note the following to better plan for later migration to new releases. ### Scaleout models deprecated {: #scaleout-models-deprecated } Scaleout models will be deprecated in a future release and should not be used to train new models. ## Customer-reported fixed issues {: #customer-reported-fixed-issues } The following issues have been fixed since release [7.0.2](v7.0.2-aml). 
### Feature Discovery {: #feature-discovery } * SAFER-3632: Overrides the Spark configuration from an environment variable to customize the app for individual customer deployments and to fix issues with Feature Discovery projects. ### Platform {: #platform } * PLT-3052: Fixes LDAP group mapping for group names that contain special symbols. * EP-872: Allows ingestion of Parquet files via "Enable conversion of binary files inside worker process" when "read_only_containers" is True. * EP-1062: Resolves an issue when connecting to AWS CloudWatch behind an HTTPS proxy due to a Python 3 incompatibility in the Boto library. * EP-1307: Adds commented-out example configurations for cluster installations that include `PYTHON3_SERVICES`, so that they can be appropriately enabled for new installs where supported. ### Predictions {: #predictions } * PRED-5919: Fixes an issue previously causing multiple downloads when making a prediction request for a test dataset on a deployment from the **Deployments** page. * PRED-5976: Updates the configurable feature set to match Self-Managed AI Platform specifications. _All product and company names are trademarks&trade; or registered&reg; trademarks of their respective holders. Use of them does not imply any affiliation with or endorsement by them_.
v7.1.0-aml
--- title: Version 7.1.x description: DataRobot Release 7.1 release announcements index page. --- # Version 7.1.x {: #version-71x } Following are the AutoML, time series, and MLOps announcements for the features that make up DataRobot's 7.1.x releases, first released _June 14, 2021_. * [v7.1.x AutoML](v7.1.0-aml) * [v7.1.x time series](v7.1.0-ats) * [v7.1.x MLOps](v7.1-mlops) * [Maintenance release notes](7.1-maintenance/index) for DataRobot v7.1.x.
index
--- title: MLOps (V9.0) description: DataRobot Release 9.0 MLOps release announcements. --- # MLOps (V9.0) {: #mlops-v90} The following table lists each new feature. {% if 'enterprise' in tags %} Name | GA | Public Preview ---- | --- | -------------- [Create custom metrics](#create-custom-metrics) | ✔ | | [Export deployment data](#export-deployment-data) | ✔ | | [Management agent](#management-agent) | ✔ | | [Monitor deployment data processing](#monitor-deployment-data-processing) | ✔ | | [View deployment logs](#view-deployment-logs) | ✔ | | [Clear deployment statistics](#clear-deployment-statistics) | ✔ | | [Drill down on the Data Drift tab](#drill-down-on-the-data-drift-tab) | ✔ | | [Visualize drift over time](#visualize-drift-over-time) | ✔ | | [Visualize drift for text features as a word cloud](#visualize-drift-for-text-features-as-a-word-cloud) | ✔ | | [Deployment creation workflow redesign](#deployment-creation-workflow-redesign) | ✔ | | [Creation date sort for the Deployments inventory](#creation-date-sort-for-the-deployment-inventory) | ✔ | | [Model and deployment IDs on the Overview tab](#model-and-deployment-ids-on-the-overview-tab) | ✔ | | [Challenger insights for multiclass and external models](#challenger-insights-for-multiclass-and-external-models) | ✔ | | [View batch prediction job history for challengers](#view-batch-prediction-job-history-for-challengers) | ✔ | | [Enable compliance documentation for models without null imputation](#enable-compliance-documentation-for-models-without-null-imputation) | ✔ | | [Environment limit management for custom models](#environment-limit-management-for-custom-models) | ✔ | | [Prediction API cURL scripting code](#prediction-api-curl-scripting-code) | ✔ | | [Python and Java Scoring Code snippets](#python-and-java-scoring-code-snippets) | ✔ | | [Scoring Code for time series projects](#scoring-code-for-time-series-projects) | ✔ | | [Deployment for time series segmented 
modeling](#deployment-for-time-series-segmented-modeling) | ✔ | | [Agent event log](#agent-event-log) | ✔ | | [Large-scale monitoring with the MLOps library](#large-scale-monitoring-with-the-mlops-library) | ✔ | | [Dynamically load required agent spoolers in a Java application](#dynamically-load-required-agent-spoolers-in-a-java-application) | ✔ | | [Apache Kafka environment variables for Azure Event Hubs spoolers](#apache-kafka-environment-variables-for-azure-event-hubs-spoolers) | ✔ | | [MLOps Java library and agent public release](#mlops-java-library-and-agent-public-release) | ✔ | | [Create monitoring job definitions](#create-monitoring-job-definitions) | | ✔ | [Automate deployment and replacement of Scoring Code in Snowflake](#automate-deployment-and-replacement-of-scoring-code-in-snowflake) | | ✔ | [Define runtime parameters for custom models](#define-runtime-parameters-for-custom-models) | | ✔ | [Create custom model proxies for external models](#create-custom-model-proxies-for-external-models) | | ✔ | [GitHub Actions for custom models](#github-actions-for-custom-models) | | ✔ | [Remote repository file browser for custom models and tasks](#remote-repository-file-browser-for-custom-models-and-tasks) | | ✔ | [View service health and accuracy history](#view-service-health-and-accuracy-history) | | ✔ | [Model package artifact creation workflow](#model-package-artifact-creation-workflow) | | ✔ | [Model logs for model packages](#model-logs-for-model-packages) | | ✔ | [Batch predictions for TTS and LSTM models](#batch-predictions-for-tts-and-lstm-models) | | ✔ | [Time series model package prediction intervals](#time-series-model-package-prediction-intervals) | | ✔ | {% else %} Name | GA | Public Preview ---- | --- | -------------- [Create custom metrics](#create-custom-metrics) | ✔ | | [Export deployment data](#export-deployment-data) | ✔ | | [Management agent](#management-agent) | ✔ | | [Monitor deployment data processing](#monitor-deployment-data-processing) | ✔ | 
| [View deployment logs](#view-deployment-logs) | ✔ | | [Clear deployment statistics](#clear-deployment-statistics) | ✔ | | [Drill down on the Data Drift tab](#drill-down-on-the-data-drift-tab) | ✔ | | [Visualize drift over time](#visualize-drift-over-time) | ✔ | | [Visualize drift for text features as a word cloud](#visualize-drift-for-text-features-as-a-word-cloud) | ✔ | | [Deployment creation workflow redesign](#deployment-creation-workflow-redesign) | ✔ | | [Creation date sort for the Deployments inventory](#creation-date-sort-for-the-deployment-inventory) | ✔ | | [Model and deployment IDs on the Overview tab](#model-and-deployment-ids-on-the-overview-tab) | ✔ | | [Challenger insights for multiclass and external models](#challenger-insights-for-multiclass-and-external-models) | ✔ | | [View batch prediction job history for challengers](#view-batch-prediction-job-history-for-challengers) | ✔ | | [Enable compliance documentation for models without null imputation](#enable-compliance-documentation-for-models-without-null-imputation) | ✔ | | [Environment limit management for custom models](#environment-limit-management-for-custom-models) | ✔ | | [Prediction API cURL scripting code](#prediction-api-curl-scripting-code) | ✔ | | [Python and Java Scoring Code snippets](#python-and-java-scoring-code-snippets) | ✔ | | [Scoring Code for time series projects](#scoring-code-for-time-series-projects) | ✔ | | [Deployment for time series segmented modeling](#deployment-for-time-series-segmented-modeling) | ✔ | | [Agent event log](#agent-event-log) | ✔ | | [Large-scale monitoring with the MLOps library](#large-scale-monitoring-with-the-mlops-library) | ✔ | | [Dynamically load required agent spoolers in a Java application](#dynamically-load-required-agent-spoolers-in-a-java-application) | ✔ | | [Apache Kafka environment variables for Azure Event Hubs spoolers](#apache-kafka-environment-variables-for-azure-event-hubs-spoolers) | ✔ | | [MLOps Java library and agent public 
release](#mlops-java-library-and-agent-public-release) | ✔ | | [Create monitoring job definitions](#create-monitoring-job-definitions) | | ✔ | [Automate deployment and replacement of Scoring Code in Snowflake](#automate-deployment-and-replacement-of-scoring-code-in-snowflake) | | ✔ | [Define runtime parameters for custom models](#define-runtime-parameters-for-custom-models) | | ✔ | [GitHub Actions for custom models](#github-actions-for-custom-models) | | ✔ | [Remote repository file browser for custom models and tasks](#remote-repository-file-browser-for-custom-models-and-tasks) | | ✔ | [View service health and accuracy history](#view-service-health-and-accuracy-history) | | ✔ | [Model package artifact creation workflow](#model-package-artifact-creation-workflow) | | ✔ | [Model logs for model packages](#model-logs-for-model-packages) | | ✔ | [Batch predictions for TTS and LSTM models](#batch-predictions-for-tts-and-lstm-models) | | ✔ | [Time series model package prediction intervals](#time-series-model-package-prediction-intervals) | | ✔ | {% endif %} ## GA {: #ga } ### Create custom metrics { #create-custom-metrics } Now generally available, on a deployment's Custom Metrics tab, you can use the data you collect from the [Data Export tab](data-export) (or data calculated through other custom metrics) to compute and monitor up to 25 custom business or performance metrics. After you add a metric and upload data, a configurable dashboard visualizes a metric’s change over time and allows you to monitor and export that information. This feature enables you to implement your organization's specialized metrics to expand on the insights provided by DataRobot's built-in [Service Health](service-health), [Data Drift](data-drift), and [Accuracy](deploy-accuracy) metrics. ![](images/upload-custom-metric-data.png) !!! note The initial release of the custom metrics feature enforces some row count and file size limitations. 
    For details, review the [upload method considerations](custom-metrics#upload-data-to-custom-metrics) in the feature documentation.

For more information, see the [Custom Metrics tab](custom-metrics) documentation.

### Export deployment data {: #export-deployment-data }

Now generally available, on a deployment’s Data Export tab, you can export stored training data, prediction data, and actuals to compute and monitor custom business or performance metrics on the [Custom Metrics tab](custom-metrics) or outside DataRobot. You can export the available deployment data for a specified model and time range. To export deployment data, make sure your deployment stores prediction data, generate data for the required time range, and then view or download that data.

![](images/dep-data-export.png)

!!! note

    The initial release of the deployment data export feature enforces some row count limitations. For details, review the [data considerations](data-export#prediction-data-and-actuals-considerations) in the feature documentation.

For more information, see the [Data Export tab](data-export) documentation.

### Management agent {: #management-agent }

Now generally available, the MLOps management agent provides a standard mechanism for automating model deployments in any type of environment or infrastructure. The management agent supports models trained on DataRobot, or models trained with open source tools on external infrastructure. The agent, accessed from the DataRobot application, ships with an assortment of example plugins that support custom configurations. Use the management agent to automate the deployment and monitoring of models to ensure your machine learning pipeline is healthy and reliable.

This release introduces usability improvements to the management agent, including [deployment status reporting](mgmt-agent-events-status#deployment-status), [deployment relaunch](mgmt-agent-relaunch), and the option to [force the deletion of a management agent deployment](mgmt-agent-delete).

![](images/mgmt-agent-1.png)

For more information on agent installation, configuration, and operation, see the [MLOps management agent](mgmt-agent/index) documentation.

### Monitor deployment data processing {: #monitor-deployment-data-processing }

Now generally available, the Usage tab reports on prediction data processing for the [Data Drift](data-drift) and [Accuracy](deploy-accuracy) tabs. Monitoring a deployed model’s data drift and accuracy is a critical task to ensure that model remains effective; however, it requires processing large amounts of prediction data and can be subject to delays or rate limiting. The information on the Usage tab can help your organization identify these data processing issues.

The Prediction Tracking chart, a bar chart of the prediction processing status over the last 24 hours or 7 days, tracks the number of processed, rate-limited, and missing association ID prediction rows:

![](images/rn-prediction-usage-tracking.png)

On the right side of the page are the processing delays for Predictions Processing (Champion) and Actuals Processing (the delay in actuals processing is for ALL models in the deployment):

![](images/predictions-processing-delay.png)

For more information, see the [Usage tab](deploy-usage) documentation.

### View deployment logs {: #view-deployment-logs }

On the new **MLOps Logs** tab, you can view important deployment events. These events can help diagnose issues with a deployment or provide a record of the actions leading to the current state of the deployment. Each event has a type and a status.
You can filter the event log by event type, event status, or time of occurrence, and you can view more details for an event on the Event Details panel.

To access MLOps logs:

1. On a deployment's **Service Health** page, scroll to the **Recent Activity** section at the bottom of the page.
2. In the **Recent Activity** section, click **MLOps Logs**.
3. Under **MLOps Logs**, configure the log filters.
4. On the left panel, the **MLOps Logs** list displays deployment events with any selected filters applied. For each event, you can view a summary that includes the event name and status icon, the timestamp, and an event message preview.
5. Click the event you want to examine and review the **Event Details** panel on the right.

![](images/mlops-logs-details.png)

For more information, see the Service Health tab’s [View MLOps Logs](service-health#view-mlops-logs) documentation.

### Clear deployment statistics {: #clear-deployment-statistics }

Now generally available, you can clear monitoring data by model version and date range. If your organization has enabled the [deployment approval workflow](dep-admin), approval must be given before any monitoring data can be cleared from the deployment. This feature allows you to remove monitoring data that was sent inadvertently or during the integration testing phase of deploying a model.

From the inventory, choose the deployment for which you want to reset statistics. Click the actions menu and select **Clear statistics**.

![](images/reset-dep-1.png)

Complete the settings in the **Clear Deployment Statistics** window to configure the conditions of the reset.

![](images/reset-dep-2.png)

After fully configuring the settings, click **Clear statistics**. DataRobot clears the monitoring data from the deployment for the indicated date range.

For more information, see the [Clear deployment statistics documentation](actions-menu#clear-deployment-statistics).

### Drill down on the Data Drift tab {: #drill-down-on-the-data-drift-tab }

Now generally available on the [Data Drift tab](data-drift), the new Drill Down visualization tracks the difference in distribution over time between the training dataset of the deployed model and the datasets used to generate predictions in production. The drift away from the baseline established with the training dataset is measured using the Population Stability Index (PSI). As a model continues to make predictions on new data, the change in the drift status over time is visualized as a heat map for each tracked feature. This heat map can help you identify data drift and compare drift across features in a deployment to identify correlated drift trends:

![](images/rn-drill-down-heat-map.png)

In addition, you can select one or more features from the heat map to view a Feature Drift Comparison chart, comparing the change in a feature's data distribution between a reference time period and a comparison time period to visualize drift. This information helps you identify the cause of data drift in your deployed model, including data quality issues, changes in feature composition, or changes in the context of the target variable:

![](images/rn-drill-down-drift-compare.png)

For more information, see the [Drill down on the Data Drift tab](data-drift#drill-down-on-the-data-drift-tab) documentation.

### Visualize drift over time {: #visualize-drift-over-time }

On a deployment’s [**Data Drift** dashboard](data-drift), the **Drift Over Time** chart visualizes the difference in distribution over time between the training dataset of the deployed model and the datasets used to generate predictions in production. The drift away from the baseline established with the training dataset is measured using the Population Stability Index (PSI).
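As background, PSI compares per-bin proportions between a baseline (training) distribution and a scoring distribution. The sketch below shows a conventional formulation for intuition only, assuming pre-binned proportions; it is not DataRobot's exact implementation, which may differ in binning and smoothing:

```python
import math

def psi(expected_props, actual_props, eps=1e-6):
    """Population Stability Index between two binned distributions.

    expected_props / actual_props are per-bin proportions that each sum
    to 1; eps guards against log(0) when a bin is empty.
    """
    total = 0.0
    for expected, actual in zip(expected_props, actual_props):
        expected = max(expected, eps)
        actual = max(actual, eps)
        total += (actual - expected) * math.log(actual / expected)
    return total

baseline = [0.25, 0.25, 0.25, 0.25]
print(round(psi(baseline, baseline), 6))            # identical distributions: 0.0
print(psi(baseline, [0.10, 0.20, 0.30, 0.40]) > 0)  # shifted distribution: True
```

A frequently cited rule of thumb treats PSI below roughly 0.1 as negligible drift and larger values as increasingly significant, though appropriate thresholds depend on the use case.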

As a model continues to make predictions on new data, the change in the PSI over time is visualized for each tracked feature, allowing you to identify [data drift](glossary/index#data-drift) trends:

![](images/rn-drift-over-time.png)

As data drift can decrease your model's predictive power, determining when a feature started drifting and monitoring how that drift changes (as your model continues to make predictions on new data) can help you estimate the severity of the issue. You can then compare data drift trends across the features in a deployment to identify correlated drift trends between specific features. In addition, the chart can help you identify seasonal effects (significant for time-aware models). This information can help you identify the cause of data drift in your deployed model, including data quality issues, changes in feature composition, or changes in the context of the target variable. The example above shows the PSI consistently increasing over time, indicating worsening data drift for the selected feature.

For more information, see the [Drift Over Time chart](data-drift#drift-over-time-chart) documentation.

### Visualize drift for text features as a word cloud {: #visualize-drift-for-text-features-as-a-word-cloud }

The [Feature Details](data-drift#feature-details-chart) chart plots the differences in a feature's data distribution between the training and scoring periods, providing a bar chart to compare the percentage of records a feature value represents in the training data with the percentage of records in the scoring data. For text features, the feature drift bar chart is replaced with a word cloud, visualizing data distributions for each token and revealing how much each individual token contributes to data drift in a feature.

To access the feature drift word cloud for a text feature, open the **Data Drift** tab of a [drift-enabled](data-drift-settings) deployment.
On the **Summary** tab, in the **Feature Details** chart, select a text feature from the dropdown list:

![](images/pp-drift-word-cloud-details.png)

!!! note

    Next to the **Export** button, you can click the settings icon (![](images/icon-gear.png)) and clear the **Display text features as word cloud** check box to disable the feature drift word cloud and view the standard chart:

    ![](images/pp-drift-word-cloud-disable.png)

For more information, see the Feature Details chart’s [Text features](data-drift#text-features) documentation.

### Deployment creation workflow redesign {: #deployment-creation-workflow-redesign }

Now generally available, the redesigned deployment creation workflow provides a better organized and more intuitive interface. Regardless of where you create a new deployment (the Leaderboard, the Model Registry, or the Deployments inventory), you are directed to this new workflow. The new design clearly outlines the capabilities of your current deployment based on the data provided, grouping the settings and capabilities logically and providing immediate confirmation when you enable a capability, or guidance when you’re missing required fields or settings. A new sidebar provides details about the model being used to make predictions for your deployment, in addition to information about the deployment review policy, deployment billing details (depending on your organization settings), and a link to the deployment information documentation.

![](images/deploy-create-settings-1.png)

For more information, see the [Configure a deployment](add-deploy-info) documentation.

### Creation date sort for the Deployments inventory {: #creation-date-sort-for-the-deployment-inventory }

The deployment inventory on the **Deployments** page is now sorted by creation date (from most recent to oldest, as reported in the new **Creation Date** column). You can click a different column title to sort by that metric instead.
A blue arrow appears next to the sort column's header, indicating whether the order is ascending or descending.

![](images/rn-deploy-tab-sort-create-date.png)

!!! note

    When you sort the deployment inventory, your most recent sort selection persists in your local settings until you clear your browser's local storage data. As a result, the deployment inventory is usually sorted by the column you selected last.

For more information, see the [Deployment inventory](deploy-inventory) documentation.

### Model and deployment IDs on the Overview tab {: #model-and-deployment-ids-on-the-overview-tab }

The **Content** section of the **Overview** tab lists a deployment's model and environment-specific information, now including the following IDs:

![](images/rn-deploy-overview-ids.png)

* **Model ID:** Copy the ID number of the deployment's current model.
* **Deployment ID:** Copy the ID number of the current deployment.

In addition, you can find a deployment's model-related events under **History** > **Logs**, including the creation and deployment dates and any model replacement events. From this log, you can copy the **Model ID** of any previously deployed model.

![](images/rn-deploy-log-ids.png)

For more information, see the deployment [Overview tab](dep-overview) documentation.

### Challenger insights for multiclass and external models {: #challenger-insights-for-multiclass-and-external-models }

Now generally available, you can compute challenger model insights for multiclass models and external models.

* Multiclass classification projects only support accuracy comparison.
* External models (regardless of project type) require an external challenger comparison dataset.

To compare an external model challenger, you need to provide a dataset that includes the actuals *and* the prediction results. When you upload the comparison dataset, you can specify a column containing the prediction results.

To add a comparison dataset for an external model challenger, follow the [Generate model comparisons](challengers#generate-model-comparions) process, and on the **Model Comparison** tab, upload your comparison dataset with a **Prediction column** identifier. Make sure the prediction dataset you provide includes the prediction results generated by the external model at the location identified by the **Prediction column**.

![](images/ext-champ-6.png)

For more information, see the [View model comparisons documentation](challengers#view-model-comparisons).

### View batch prediction job history for challengers {: #view-batch-prediction-job-history-for-challengers }

To improve error surfacing and usability for challenger models, you can now access a challenger's prediction job history from the [**Deployments** > **Challengers**](challengers) tab. After adding one or more challenger models and replaying predictions, click **Job History**:

![](images/challenger-job-history.png)

The [**Deployments** > **Prediction Jobs**](batch-pred-jobs#manage-prediction-jobs) page opens and is filtered to display the challenger jobs for the deployment you accessed the job history from. You can also apply this filter directly from the **Prediction Jobs** page:

![](images/rn-challenger-job-filter.png)

For more information, see the [View challenger job history](challengers#view-challenger-job-history) documentation.

### Enable compliance documentation for models without null imputation {: #enable-compliance-documentation-for-models-without-null-imputation }

To generate the Sensitivity Analysis section of the default **Automated Compliance Document** template, your custom model must support null imputation (the imputation of NaN values), or compliance documentation generation will fail. If the custom model doesn't support null imputation, you can use a specialized template to generate compliance documentation.

In the **Report template** drop-down list, select **Automated Compliance Document (for models that do not impute null values)**. This template excludes the Sensitivity Analysis report and is only available for custom models. For more information, see [generating compliance documentation](reg-compliance#generate-compliance-documentation).

![](images/rn-comp-doc-no-imputation.png)

!!! note

    If this template option is not available for your version of DataRobot, you can download the <a href="../../../mlops/deployment/registry/custom-template-for-models-without-null-imputation-regression.json" download>custom template for regression models</a> or the <a href="../../../mlops/deployment/registry/custom-template-for-models-without-null-imputation-binary.json" download>custom template for binary classification models</a>.

### Environment limit management for custom models {: #environment-limit-management-for-custom-models }

The execution environment limit allows administrators to control how many custom model environments a user can add to the [Custom Model Workshop](custom-model-workshop/index). In addition, the execution environment _version_ limit allows administrators to control how many versions a user can add to _each_ of those environments. These limits can be:

1. **Directly applied to the user**: Set in a user's permissions. Overrides the limits set in the group and organization permissions (if the user limit value is lower).
2. **Inherited from a user group**: Set in the permissions of the group a user belongs to. Overrides the limits set in organization permissions (if the user group limit value is lower).
3. **Inherited from an organization**: Set in the permissions of the organization a user belongs to.

If the environment or environment version limits are defined for an organization or a group, the users within that organization or group inherit the defined limits.
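In effect, the precedence rules reduce to taking the lowest limit defined at any applicable level. A minimal sketch of that resolution logic (illustrative only; the function name is hypothetical, not a DataRobot API):

```python
def effective_limit(org=None, group=None, user=None):
    """Resolve the execution environment limit a user is subject to.

    A limit defined at a more specific level overrides a broader one
    when it is lower, so the effective limit is the smallest value
    defined at any level; None means no limit is defined there.
    """
    defined = [v for v in (org, group, user) if v is not None]
    return min(defined) if defined else None

print(effective_limit(org=5, group=4, user=3))  # 3
print(effective_limit(org=5))                   # 5
```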

However, a more specific definition of those limits at a lower level takes precedence. For example, an organization may have the environment limits set to 5, a group to 4, and the user to 3; in this scenario, the final limit for the individual user is 3.

For more information on adding custom model execution environments, see the [Custom model environment](custom-model-environments/index) documentation.

=== "View environment limits"

    Any user can view their environment and environment version limits. On the **Custom Models** > **Environments** tab, next to the **Add new environment** and the **New version** buttons, a badge indicates how many environments (or environment versions) you've added and how many environments (or environment versions) you can add based on the environment limit:

    ![](images/rn-env-ver-limits.png)

    The following status categories are available for this badge:

    Badge | Description
    ------|------------
    ![](images/env-limit-badge.png){: style="height:22px; width:auto;"} | The number of environments (or versions) is less than 75% of the limit.
    ![](images/env-limit-badge-alert.png){: style="height:22px; width:auto;"} | The number of environments (or versions) is equal to or greater than 75% of the limit.
    ![](images/env-limit-badge-warn.png){: style="height:22px; width:auto;"} | The number of environments (or versions) has reached the limit.

=== "Set environment limits (for administrators)"

    With the correct permissions, an administrator can set these limits at a [user](manage-users#manage-execution-environment-limits) or [group](manage-groups#manage-execution-environment-limits) level. For a user or a group, on the **Permissions** tab, click **Platform**, and then click **Admin Controls**. Next, under **Admin Controls**, set either or both of the following settings:

    * **Execution Environments limit**: The maximum number of custom model execution environments users in this group can add.
    * **Execution Environments versions limit**: The maximum number of versions users in this group can add to each custom model execution environment.

    ![](images/execution-env-controls.png)

    For more information, see [Manage user execution environment limits](manage-users#manage-execution-environment-limits) or [Manage group execution environment limits](manage-groups#manage-execution-environment-limits).

    {% if 'enterprise' in tags %}
    For Self-Managed AI Platform installations, you can also see [Manage organization execution environment limits](manage-orgs#manage-execution-environment-limits).
    {% endif %}

### Prediction API cURL scripting code {: #prediction-api-curl-scripting-code }

The Prediction API Scripting Code section on a deployment's **Predictions** > **Prediction API** tab now includes a **cURL** scripting code snippet for **Real-time** predictions. cURL is a command-line tool for transferring data using various network protocols, available by default in most Linux distributions and macOS.

For more information on Prediction API cURL scripting code, see the [Real-time prediction snippets](code-py#real-time-prediction-snippet-settings) documentation.

### Python and Java Scoring Code snippets {: #python-and-java-scoring-code-snippets }

Now generally available, DataRobot allows you to [use Scoring Code via Python and Java](sc-download-leaderboard). Although the underlying Scoring Code is based on Java, DataRobot now provides the [DataRobot Prediction Library](https://pypi.org/project/datarobot-predict/){ target=_blank } to make predictions using various prediction methods supported by DataRobot via a Python API. The library provides a common interface for making predictions, making it easy to swap out any underlying implementation. Access Scoring Code for Python and Java from a model in the Leaderboard or from a deployed model that supports Scoring Code.

![](images/scoring-code-python-rn.png)

### Scoring Code for time series projects {: #scoring-code-for-time-series-projects }

Now generally available, you can export time series models in a Java-based Scoring Code package. [Scoring Code](scoring-code/index) is a portable, low-latency method of utilizing DataRobot models outside the DataRobot application. You can download a model's time series Scoring Code from the following locations:

* [Download from the Leaderboard](sc-download-leaderboard) (**Leaderboard > Predict > Portable Predictions**)
* [Download from the deployment](sc-download-deployment) (**Deployments > Predictions > Portable Predictions**)

=== "Download for segmented modeling projects"

    With [segmented modeling](ts-segmented), you can build individual models for segments of a multiseries project. DataRobot then merges these models into a Combined Model. You can [generate Scoring Code for the resulting Combined Model](sc-time-series#scoring-code-for-segmented-modeling-projects). To generate and download Scoring Code, each segment champion of the Combined Model must have Scoring Code:

    ![](images/sc-segmented-scoring-code.png)

    After you ensure each segment champion of the Combined Model has Scoring Code, you can download the Scoring Code [from the Leaderboard](sc-download-leaderboard) or you can deploy the Combined Model and download the Scoring Code [from the deployment](sc-download-deployment).

=== "Download with prediction intervals"

    You can now [include prediction intervals in the downloaded Scoring Code JAR](sc-time-series#prediction-intervals-in-scoring-code) for a time series model. You can download Scoring Code with prediction intervals [from the Leaderboard](sc-download-leaderboard) or [from a deployment](sc-download-deployment).

    ![](images/sc-prediction-intervals.png)

=== "Score data at the command line"

    You can score data at the command line using the downloaded time series Scoring Code.
    This release introduces efficient batch processing for time series Scoring Code to support scoring larger datasets. For more information, see the [Time series parameters for CLI scoring](sc-time-series#time-series-parameters-for-cli-scoring) documentation.

For more details on time series Scoring Code, see [Scoring Code for time series projects](sc-time-series).

### Deployment for time series segmented modeling {: #deployment-for-time-series-segmented-modeling }

To fully leverage the value of segmented modeling, you can deploy Combined Models like any other time series model. After selecting the champion model for each included project, you can deploy the Combined Model to create a "one-model" deployment for multiple segments; however, the individual segments in the deployed Combined Model still have their own segment champion models running in the deployment behind the scenes. Creating a deployment allows you to use [DataRobot MLOps](mlops/index) for accuracy monitoring, prediction intervals, challenger models, and retraining.

!!! note

    Time series segmented modeling deployments do not support data drift monitoring. For more information, see the [feature considerations](ts-segmented#combined-model-deployment-considerations).

After you complete the [segmented modeling workflow](ts-segmented#segmented-modeling-workflow) and Autopilot has finished, the **Model** tab contains one model. This model is the completed **Combined Model**. To deploy, click the **Combined Model**, click **Predict** > **Deploy**, and then click **Deploy model**.

![](images/ts-segmented-deploy-2.png)

After deploying a Combined Model, you can change the segment champion for a segment by cloning the deployed Combined Model and modifying the cloned model. This process is automatic and occurs when you attempt to change a segment's champion within a deployed Combined Model. The cloned model you can modify becomes the **Active Combined Model**.
This process ensures stability in the deployed model while allowing you to test changes within the same segmented project.

!!! note

    Only one Combined Model on a project's Leaderboard can be the **Active Combined Model** (marked with a badge). Once a **Combined Model** is deployed, it is labeled **Prediction API Enabled**.

To modify this model, click the active and deployed **Combined Model**, and then in the **Segments** tab, click the segment you want to modify.

![](images/ts-segmented-deploy-3.png)

Next, [reassign the segment champion](ts-segmented#reassign-the-champion-model), and in the dialog box that appears, click **Yes, create new combined model**.

![](images/ts-segmented-deploy-4.png)

On the segment's **Leaderboard**, you can now access and modify the **Active Combined Model**.

For more information, see the [Deploy a Combined Model](ts-segmented#deploy-a-combined-model) documentation.

### Agent event log {: #agent-event-log }

Now generally available, on a deployment's **Service Health** tab, under **Recent Activity**, you can view **Management** events (e.g., deployment actions) and **Monitoring** events (e.g., spooler channel and rate limit events). **Monitoring** events can help you quickly diagnose MLOps agent issues. For example, spooler channel error events can help you diagnose and fix [spooler configuration](spooler) issues. The rate limit enforcement events can help you identify if service health stats or data drift and accuracy values aren't updating because you exceeded the API request rate limit.

![](images/rn-mlops-agent-events.png)

To view **Monitoring** events, you must provide a `predictionEnvironmentID` in the agent configuration file (`conf/mlops.agent.conf.yaml`). If you haven't already installed and configured the MLOps agent, see the [Installation and configuration](agent) guide.

For more information on enabling and reading the monitoring agent event log, see the [Agent event log](agent-event-log) documentation.
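For reference, the relevant part of the agent configuration might look like the excerpt below. This is an illustrative sketch only: the surrounding keys reflect a typical agent setup, and key names and casing can vary by agent version, so confirm against the sample configuration shipped with your agent tarball.

``` yaml title="conf/mlops.agent.conf.yaml (excerpt)"
# URL and token the agent uses to reach DataRobot MLOps.
mlopsUrl: "https://app.datarobot.com"
apiToken: "<your API token>"

# Required to report Monitoring events for this agent's deployments.
predictionEnvironmentID: "<your prediction environment ID>"
```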

### Large-scale monitoring with the MLOps library {: #large-scale-monitoring-with-the-mlops-library }

To support large-scale monitoring, the MLOps library provides a way to calculate statistics from raw data on the client side. Then, instead of reporting raw features and predictions to the DataRobot MLOps service, the client can report anonymized statistics without the feature and prediction data. Reporting prediction data statistics calculated on the client side is the optimal method compared to reporting raw data, especially at scale (with billions of rows of features and predictions). In addition, because client-side aggregation only sends aggregates of feature values, it is suitable for environments where you don't want to disclose the actual feature values. Large-scale monitoring functionality is available for the Java Software Development Kit (SDK), the MLOps Spark Utils Library, and Python.

!!! note

    To support the use of challenger models, you must send raw features. For large datasets, you can report a small sample of raw feature and prediction data to support challengers and reporting; then, you can send the remaining data in aggregate format.

For more information, see the [Enable large-scale monitoring](agent-use#enable-large-scale-monitoring) use case.

### Dynamically load required agent spoolers in a Java application {: #dynamically-load-required-agent-spoolers-in-a-java-application }

Dynamically loading third-party Monitoring Agent spoolers in your Java application improves security by removing unused code. This functionality works by loading a separate JAR file for the [Amazon SQS](spooler#amazon-sqs), [RabbitMQ](spooler#rabbitmq), [Google Cloud Pub/Sub](spooler#google-cloud-pubsub), and [Apache Kafka](spooler#apache-kafka) spoolers, as needed. The natively supported file system spooler is still configurable without loading a JAR file. Previously, the `datarobot-mlops` and `mlops-agent` packages included all spooler types by default.

To use a third-party spooler in your MLOps Java application, you must include the required spoolers as dependencies in your POM (Project Object Model) file, along with `datarobot-mlops`:

``` xml title="Dependencies in a POM file"
<properties>
    <mlops.version>8.3.0</mlops.version>
</properties>

<dependency>
    <groupId>com.datarobot</groupId>
    <artifactId>datarobot-mlops</artifactId>
    <version>${mlops.version}</version>
</dependency>

<dependency>
    <groupId>com.datarobot</groupId>
    <artifactId>spooler-sqs</artifactId>
    <version>${mlops.version}</version>
</dependency>
```

The spooler JAR files are included in the [MLOps agent tarball](monitoring-agent/index#mlops-agent-tarball). They are also available individually as downloadable JAR files in the public Maven repository for the [DataRobot MLOps Agent](https://mvnrepository.com/artifact/com.datarobot/mlops-agent){ target=_blank }.

To use a third-party spooler with the executable agent JAR file, add the path to the spooler to the classpath:

``` shell title="Classpath with Kafka spooler"
java ... -cp path/to/mlops-agent-8.3.0.jar:path/to/spooler-kafka-8.3.0.jar com.datarobot.mlops.agent.Agent
```

The `start-agent.sh` script provided as an example automatically performs this task, adding any spooler JAR files found in the `lib` directory to the classpath. If your spooler JAR files are in a different directory, set the `MLOPS_SPOOLER_JAR_PATH` environment variable.

For more information, see the [Dynamically load required spoolers in a Java application](spooler#dynamically-load-required-spoolers-in-a-java-application) documentation.

### Apache Kafka environment variables for Azure Event Hubs spoolers {: #apache-kafka-environment-variables-for-azure-event-hubs-spoolers }

The `MLOPS_KAFKA_CONFIG_LOCATION` environment variable was removed and replaced by new environment variables for Apache Kafka spooler configuration.
These new environment variables eliminate the need for a separate configuration file and simplify support for Azure Event Hubs as a spooler type. For more information on Apache Kafka spooler configuration, see the [Apache Kafka](spooler#apache-kafka) environment variables reference. For more information on leveraging the Apache Kafka spooler type to use a Microsoft Azure Event Hubs spooler, see the [Azure Event Hubs](spooler#azure-event-hubs) spooler configuration reference.

### MLOps Java library and agent public release {: #mlops-java-library-and-agent-public-release }

You can now download the MLOps Java library and agent from the public [Maven Repository](https://mvnrepository.com/){ target=_blank } with a `groupId` of `com.datarobot` and an `artifactId` of `datarobot-mlops` (library) and `mlops-agent` (agent). In addition, you can access the [DataRobot MLOps Library](https://mvnrepository.com/artifact/com.datarobot/datarobot-mlops){ target=_blank } and [DataRobot MLOps Agent](https://mvnrepository.com/artifact/com.datarobot/mlops-agent){ target=_blank } artifacts in the Maven Repository to view all versions and download and install the JAR file.

## Public Preview {: #public-preview }

### Create monitoring job definitions {: #create-monitoring-job-definitions }

Now available as a public preview feature, monitoring job definitions enable DataRobot to monitor deployments running and storing feature data and predictions outside of DataRobot, integrating deployments more closely with external data sources. For example, you can create a monitoring job to connect to Snowflake, fetch raw data from the relevant Snowflake tables, and send the data to DataRobot for monitoring purposes.

This integration extends the functionality of the existing [Prediction API](predictions) routes for `batchPredictionJobDefinitions` and `batchPredictions`, adding the `batch_job_type: monitoring` property. This new property allows you to create monitoring jobs.

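
As a rough illustration of the property described above, a request body for a monitoring job definition might be assembled as follows. This is a minimal sketch: only the `batch_job_type: monitoring` property comes from this announcement; every other field name (the job name, `intakeSettings`, and the table) is a hypothetical placeholder, not the documented schema.

```python
import json

# Illustrative request body for the batchPredictionJobDefinitions route.
# Only "batch_job_type": "monitoring" is documented above; all other
# field names below are hypothetical placeholders.
monitoring_job_definition = {
    "name": "snowflake-monitoring-job",      # placeholder name
    "enabled": True,                         # placeholder flag
    "batch_job_type": "monitoring",          # the new property for monitoring jobs
    "intakeSettings": {                      # placeholder intake structure
        "type": "snowflake",
        "table": "PREDICTION_RESULTS",       # hypothetical Snowflake table
    },
}

# Serialize the definition for an HTTP request body.
body = json.dumps(monitoring_job_definition)
print(body)
```

Consult the Prediction API reference for the authoritative field names before sending a request.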
In addition to the Prediction API, you can create monitoring job definitions through the DataRobot UI. You can then view and manage monitoring job definitions as you would any other job definition.

**Required feature flag**: Monitoring Job Definitions

For more information, see the [Prediction monitoring jobs](pred-monitoring-jobs/index) documentation.

### Automate deployment and replacement of Scoring Code in Snowflake {: #automate-deployment-and-replacement-of-scoring-code-in-snowflake }

Now available as a public preview feature, you can create a DataRobot-managed Snowflake prediction environment to deploy DataRobot Scoring Code in Snowflake. With the [Managed by DataRobot option](pp-snowflake-sc-deploy-replace#create-a-snowflake-prediction-environment) enabled, the model deployed externally to Snowflake has access to MLOps management, including automatic Scoring Code replacement:

![](images/pred-env-settings.png)

Once you've created a Snowflake prediction environment, you can [deploy a Scoring Code-enabled model to that environment from the Model Registry](pp-snowflake-sc-deploy-replace#deploy-a-model-to-the-snowflake-prediction-environment):

![](images/sf-deploy-target.png)

**Required feature flag**: Enable the Automated Deployment and Replacement of Scoring Code in Snowflake

Public preview [documentation](pp-snowflake-sc-deploy-replace).

### Define runtime parameters for custom models {: #define-runtime-parameters-for-custom-models }

Now available as a public preview feature, you can add runtime parameters to a custom model through the model metadata, making your custom model code easier to reuse. To define runtime parameters, you can add the following `runtimeParameterDefinitions` in `model-metadata.yaml`:

Key | Value
---------------|------
`fieldName` | The name of the runtime parameter.
`type` | The data type the runtime parameter contains: `string` or `credential`.
`defaultValue` | (Optional) The default string value for the runtime parameter (the credential type doesn't support default values).
`description` | (Optional) A description of the purpose or contents of the runtime parameter.

When you add a `model-metadata.yaml` file with `runtimeParameterDefinitions` to DataRobot while creating a custom model, the **Runtime Parameters** section appears on the **Assemble** tab for that custom model:

![](images/assemble-tab-runtime-params.png)

**Required feature flag**: Enable the Injection of Runtime Parameters for Custom Models

Public preview [documentation](pp-cus-model-runtime-params).

{% if 'enterprise' in tags %}
### Create custom model proxies for external models {: #create-custom-model-proxies-for-external-models }

Now available as a public preview feature, you can create a custom model as a proxy for an externally hosted model. To create a custom model as a proxy for an external model, you can add a new proxy model to the [Custom Model Workshop](custom-model-workshop/index). A proxy model contains the proxy code you created (in `custom.py`) to connect with your external model, allowing you to use features like [compliance documentation](reg-compliance), [challenger analysis](challengers), and [custom model tests](custom-model-test) with a model running on infrastructure outside of DataRobot. You can also use [custom model runtime parameters](pp-cus-model-runtime-params) with proxy models.

![](images/add-proxy-model.png)

**Required feature flags:** Enable Proxy Models, Enable the Injection of Runtime Parameters for Custom Models

Public preview [documentation](pp-ext-model-proxy).
{% endif %}

### GitHub Actions for custom models {: #github-actions-for-custom-models }

The custom models action manages custom inference models and their associated deployments in DataRobot via GitHub CI/CD workflows. These workflows allow you to create or delete models and deployments and modify settings.

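
A workflow that invokes the action might look like the following sketch. Treat this as an assumption-laden example: the step inputs shown (`api-token`, `webserver`, `branch`, `allow-model-deletion`, `allow-deployment-deletion`) reflect the action's public README at the time of writing and may change, so verify them against the action repository before use.

``` yaml title="Example workflow sketch (verify inputs against the action's README)"
name: Custom models CI/CD
on:
  push:
    branches: [master]
  pull_request:
    branches: [master]

jobs:
  datarobot-custom-models:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 0  # full history so the action can inspect commits

      - name: DataRobot custom models action
        uses: datarobot-oss/custom-models-action@v1
        with:
          api-token: ${{ secrets.DATAROBOT_API_TOKEN }}  # stored as a repository secret
          webserver: https://app.datarobot.com/
          branch: master
          allow-model-deletion: true
          allow-deployment-deletion: true
```
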
Metadata defined in YAML files enables the custom model action's control over models and deployments. Most YAML files for this action can reside in any folder within your custom model's repository. The YAML is searched, collected, and tested against a schema to determine if it contains the entities used in these workflows. For more information, see the [custom-models-action repository](https://github.com/datarobot-oss/custom-models-action){ target=_blank }.

A [quickstart example](custom-model-github-action#github-actions-quickstart), provided in the documentation, uses a [Python Scikit-Learn model template](https://github.com/datarobot/datarobot-user-models/tree/master/model_templates/python3_sklearn){ target=_blank } from the [datarobot-user-models repository](https://github.com/datarobot/datarobot-user-models/tree/master/model_templates){ target=_blank }. For more information, see the [Custom Models Action](https://github.com/marketplace/actions/custom-models-action){ target=_blank }.

After you configure the workflow and create a model and a deployment in DataRobot, you can access the commit information from the model's version info and package info and the deployment overview:

=== "Model version info"

    ![](images/pp-cus-model-github2.png)

=== "Model package info"

    ![](images/pp-cus-model-github4.png)

=== "Deployment overview"

    ![](images/pp-cus-model-github1.png)

**Required feature flag**: Enable Custom Model GitHub CI/CD

For more information, see [GitHub Actions for custom models](custom-model-github-action).

### Remote repository file browser for custom models and tasks {: #remote-repository-file-browser-for-custom-models-and-tasks }

Now available as a public preview feature, you can browse the folders and files in a remote repository to select the files you want to add to a custom model or task.

When you [add a model](custom-inf-model#create-a-new-custom-model) or [add a task](cml-custom-tasks) to the Custom Model Workshop, you can add files to that model or task from a wide range of repositories, including Bitbucket, GitHub, GitHub Enterprise, S3, GitLab, and GitLab Enterprise. After you [add a repository to DataRobot](custom-model-repos#add-a-remote-repository), you can pull files from the repository and include them in the custom model or task.

When you [pull from a remote repository](custom-model-repos#pull-files-from-the-repository), in the **Pull from GitHub repository** dialog box, you can select the checkbox for any files or folders you want to pull into the custom model. In addition, you can click **Select all** to select every file in the repository, or, after you select one or more files, you can click **Deselect all** to clear your selections.

!!! note

    This example uses GitHub; however, the process is the same for each repository type.

![](images/pp-custom-model-repo-browse.png)

**Required feature flag:** Enable File Browser for Pulling Model or Task Files from Remote Repositories

Public Preview [documentation](pp-remote-repo-file-browser).

### View service health and accuracy history {: #view-service-health-and-accuracy-history }

Now available as a public preview feature, when analyzing a deployment's [Service Health](service-health) and [Accuracy](deploy-accuracy), you can view the **History** tab, providing critical information about the performance of current and previously deployed models. This tab improves the usability of service health and accuracy analysis, allowing you to view up to five models in one place and on the same scale, making it easier to directly compare model performance.

=== "Service Health history"

    On a deployment's [**Service Health > History**](pp-deploy-history#service-health-history) tab, you can access visualizations representing the service health history of up to five of the most recently deployed models, including the currently deployed model. This history is available for each metric tracked in a model's service health, helping you identify bottlenecks and assess capacity, which is critical to proper provisioning.

    ![](images/service-health-history-details.png)

=== "Accuracy history"

    On a deployment's [**Accuracy > History**](pp-deploy-history#accuracy-history) tab, you can access visualizations representing the accuracy history of up to five of the most recently deployed models, including the currently deployed model, allowing you to compare their accuracy directly. These accuracy insights are rendered based on the problem type and its associated optimization metrics.

    ![](images/accuracy-history-details.png)

**Required feature flag**: Enable Deployment History

Public preview [documentation](pp-deploy-history).

### Model package artifact creation workflow {: #model-package-artifact-creation-workflow }

Now available as a public preview feature, the improved model package artifact creation workflow provides a clearer and more consistent path to model deployment with visible connections between a model and its associated model packages in the [Model Registry](reg-create). Using this new approach, when you deploy a model, you begin by providing model package details and adding the model package to the Model Registry. After you create the model package and allow the build to complete, you can deploy it by [adding the deployment information](add-deploy-info).

=== "Leaderboard workflow"

    1. From the **Leaderboard**, select the model to use for generating predictions and then click **Predict > Deploy**.

        To follow best practices, DataRobot recommends that you first [prepare the model for deployment](model-rec-process#prepare-a-model-for-deployment). This process runs **Feature Impact**, retrains the model on a reduced feature list, and trains on a higher sample size, followed by the entire sample (latest data for date/time partitioned projects).

    2. On the **Deploy model** tab, provide the required model package information, and then click **Add to Model Registry**.

    3. Allow the model to build. The **Building** status can take a few minutes, depending on the size of the model. A model package must have a **Status** of **Ready** before you can deploy it.

        ![](images/pp-model-artifact-creation-building.png)

    4. In the **Model Packages** list, locate the model package you want to deploy and click **Deploy**.

        ![](images/pp-model-artifact-creation5.png)

    5. Add [deployment information and create the deployment](add-deploy-info).

=== "Model Registry workflow"

    1. Click **Model Registry** > **Model Packages**.

    2. Click the **Actions** menu for the model package you want to deploy, and then click **Deploy**. The **Status** column shows the build status of the model package.

        ![](images/pp-model-artifact-creation6.png)

        If you deploy a model package that has a **Status** of **N/A**, the build process starts:

        ![](images/pp-model-artifact-creation7.png)

    3. Add [deployment information and create the deployment](add-deploy-info).

!!! tip

    You can also open a model package from the Model Registry and deploy it from the **Package Info** tab.

**Required feature flag**: Enable .mlpkg Artifact Creation for Model Packages

Public preview [documentation](pp-model-pkg-artifact-creation).

### Model logs for model packages {: #model-logs-for-model-packages }

A model package's model logs display information about the operations of the underlying model. This information can help you identify and fix errors.

For example, compliance documentation requires DataRobot to execute many jobs, some of which run sequentially and some in parallel. These jobs may fail, and reading the logs can help you identify the cause of the failure (e.g., the Feature Effects job fails because a model does not handle null values).

!!! important

    In the Model Registry, a model package's **Model Logs** tab _only_ reports the operations of the underlying model, not the model package operations (e.g., model package deployment time).

In the **Model Registry**, access a model package, and then click the **Model Logs** tab:

![](images/pp-model-pkg-logs.png)

| | Information | Description |
|-|-------------|-------------|
| ![](images/icon-1.png) | Date / Time | The date and time the model log event was recorded. |
| ![](images/icon-2.png) | Status | The status the log entry reports: <ul><li><span style="color:#3BC169">INFO</span>: Reports a successful operation.</li><li><span style="color:#E74D4D">ERROR</span>: Reports an unsuccessful operation.</li></ul> |
| ![](images/icon-3.png) | Message | The description of the successful operation (INFO), or the reason for the failed operation (ERROR). This information can help you troubleshoot the root cause of the error. |

If you can't locate the log entry for the error you need to fix, it may be an older log entry not shown in the current view. Click **Load older logs** to expand the **Model Logs** view.

![](images/pp-model-pkg-logs-load.png)

!!! tip

    Look for the older log entries at the top of the **Model Logs**; they are added to the top of the existing log history.

**Required feature flag:** Enable Model Logs for Model Packages

Public preview [documentation](pp-model-pkg-logs).

### Batch predictions for TTS and LSTM models {: #batch-predictions-for-tts-and-lstm-models }

Traditional Time Series (TTS) and Long Short-Term Memory (LSTM) models&mdash;sequence models that use autoregressive (AR) and moving average (MA) methods&mdash;are common in time series forecasting.

Both AR and MA models typically require a complete history of past forecasts to make predictions. In contrast, other time series models only require a single row after feature derivation to make predictions. Previously, batch predictions couldn't accept historical data beyond the effective [feature derivation window (FDW)](glossary/index#feature-derivation-window) if the history exceeded the maximum size of each batch, while sequence models required complete historical data beyond the FDW. These requirements made sequence models incompatible with batch predictions.

Enabling this public preview feature removes those limitations to allow batch predictions for TTS and LSTM models. Time series Autopilot still doesn't include TTS or LSTM model blueprints; however, you can access the model blueprints in the model [Repository](repository). To allow batch predictions with TTS and LSTM models, this feature:

* Updates batch predictions to accept historical data up to the maximum batch size (equal to 50MB or approximately a million rows of historical data).
* Updates TTS models to allow refitting on an incomplete history (if the complete history isn't provided). If you don't provide sufficient forecast history at prediction time, you could encounter prediction inconsistencies. For more information on maintaining accuracy in TTS and LSTM models, see the [prediction accuracy considerations](pp-ts-tts-lstm-batch-pred#prediction-accuracy-considerations).

With this feature enabled, you can access the [**Predictions > Make Predictions**](batch-pred) and [**Predictions > Job Definitions**](batch-pred-jobs) tabs of a deployed TTS or LSTM model.

![](images/pp-tts-lstm-batch-pred1.png)

**Required feature flag**: Enable TTS and LSTM Time Series Model Batch Predictions

Public preview [documentation](pp-ts-tts-lstm-batch-pred).

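
To see why a complete history matters for MA-style sequence models, consider this minimal sketch (a generic illustration, not DataRobot code): in a zero-mean MA(1) process, the one-step forecast depends on the latest innovation, which can only be reconstructed by recursing over every past observation, so truncating the history changes the forecast.

```python
def ma1_one_step_forecast(series, theta):
    """One-step forecast for a zero-mean MA(1) process: x_t = e_t + theta * e_{t-1}.

    The next-step forecast is theta * e_t, but the innovation e_t is not
    observed directly; it must be rebuilt recursively from the entire series,
    which is why a truncated history produces a different forecast.
    """
    e = 0.0  # assume the innovation before the first observation is 0
    for x in series:  # the recursion walks over *every* past observation
        e = x - theta * e
    return theta * e


full = ma1_one_step_forecast([1.0, 0.5, 0.25], theta=0.4)
truncated = ma1_one_step_forecast([0.25], theta=0.4)  # only the latest row
print(full, truncated)  # the truncated history yields a different forecast
```

This is the contrast drawn above: a model that needs only the latest derived row can be batched freely, while a sequence model's forecast depends on the whole series.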
### Time series model package prediction intervals {: #time-series-model-package-prediction-intervals }

Now available for public preview, you can enable the computation of a model's time series prediction intervals (from 1 to 100) during model package generation. To run a DataRobot time series model in a remote prediction environment, you download a model package (.mlpkg file) from the model's deployment or the Leaderboard. In both locations, you can now choose to **Compute prediction intervals** during model package generation. You can then run prediction jobs with a [portable prediction server (PPS)](portable-pps) outside DataRobot.

Before you download a model package with prediction intervals from a deployment, ensure that your deployment supports model package downloads. The deployment must have a DataRobot build environment and an *external* prediction environment, which you can verify using the [**Governance Lens**](gov-lens) in the deployment inventory:

![](images/pps-1.png)

To download a model package with prediction intervals from a *deployment*, in the external deployment, you can use the **Predictions > Portable Predictions** tab:

![](images/pp-ts-deploy-pred-int.png)

To download a model package with prediction intervals from a model in the *Leaderboard*, you can use the **Predict > Deploy** or **Predict > Portable Predictions** tab.

=== "Deploy tab download"

    ![](images/pp-ts-leaderboard-pred-int.png)

=== "Portable Prediction Server tab download"

    ![](images/pp-ts-leaderboard-pps-pred-int.png)

**Required feature flag:** Enable computation of all Time-Series Intervals for .mlpkg

Public preview [documentation](pp-ts-pred-intervals-mlpkg).

<hr>

_All product and company names are trademarks&trade; or registered&reg; trademarks of their respective holders. Use of them does not imply any affiliation with or endorsement by them_.

---
title: Version 9.0.x
description: DataRobot Release 9.0 release announcements index page.
---

# Version 9.0.x {: #version-90x }

_March 29, 2023_

View feature announcements for data, AutoML and time series (AutoTS) modeling, MLOps, and platform changes and enhancements for version 9.0.x of DataRobot's Self-Managed AI Platform deployment option.

* [Data](90-data)
* [Time series](90-autots)
* [MLOps](90-mlops)
* [Platform](90-platform)
* [Maintenance release notes](9.0-maintenance/index) for DataRobot v9.0.x.

### In the spotlight {: #in-the-spotlight }

This release brings DataRobot’s new user interface&mdash;Workbench. Workbench is an intuitive, guided, machine learning workflow focused on experimentation and iteration. It lets you easily move from raw data to prepared, partitioned data that’s ready for modeling. Then, as with DataRobot Classic, you can build many models at once and generate value quickly through key insights and predictions.

![](images/wb-uc-2.png)

The Workbench user interface lets you group, organize, and share your modeling assets to better leverage DataRobot for enhanced experimentation. These assets&mdash;datasets, experiments, notebooks, and no-code apps&mdash;are housed within folder-like containers known as Use Cases.

![](images/wb-conceptual.png)

Workbench is a public preview feature, but is on by default and enabled for all users. The notebooks component, also public preview, is not enabled by default (contact your DataRobot representative or administrator for information on enabling them).

---
title: Platform (V9.0)
description: DataRobot Release 9.0 Platform release announcements.
---

# Platform (V9.0) {: #platform-v90 }

The following table lists each new feature.

Name | GA | Public Preview
---------- | ---- | ---
[**Platform enhancements**](#platform-enhancements) | :~~: | :~~:
[DataRobot Notebooks](#public-preview-datarobot-notebooks) | | ✔ |
[**API enhancements**](#api-enhancements) | :~~: | :~~:
[Access DataRobot REST API documentation from docs.datarobot.com](#access-datarobot-rest-api-documentation-from-docs-datarobot-com) | ✔ | |
[Python client v3.0](#python-client-v30) | ✔ | |
[Python client v3.1](#python-client-v31) | ✔ | |
[R client v2.29](#r-client-v229) | | ✔ |
[Calculate Feature Impact for each backtest](#calculate-feature-impact-for-each-backtest) | ✔ | |
[Deprecation announcements](#deprecation-announcements) | :~~: | :~~:

## Platform enhancements {: #platform-enhancements }

* With DataRobot release version 9.0, deployments are now only supported by Kubernetes. Version 9.0 supports OpenShift 4.10 and AWS EKS with K8s v1.23. Older installation options (i.e., Dockerized, RPM, and Hadoop) are no longer supported. If you are not on a supported version of Kubernetes, you will need to use the 8.x versions of DataRobot with Dockerized, RPM, or Hadoop installs.
* Minio will not be packaged with the DataRobot installation. You will need to provide and manage an S3 API-compatible object store to use with DataRobot.
* DataRobot will no longer package a container registry. You will need to provide a docker registry for DataRobot containers.

### Public Preview: DataRobot Notebooks {: #public-preview-datarobot-notebooks }

The DataRobot application now includes an in-browser editor to create and execute notebooks for data science analysis and modeling. Notebooks display computation results in various formats, including text, images, graphs, plots, tables, and more. You can customize output display by using open-source plugins.

Cells can also contain Markdown rich text for commentary and explanation of the coding workflow. As you develop and edit a notebook, DataRobot stores a history of revisions that you can return to at any time.

DataRobot Notebooks offer a dashboard that hosts notebook creation, upload, and management. Individual notebooks have containerized, built-in environments with commonly used machine learning libraries that you can easily set up in a few clicks. Notebook environments seamlessly integrate with DataRobot's API, allowing a robust coding experience supported by keyboard shortcuts for cell functions, in-line documentation, and saved environment variables for secrets management and automatic authentication.

![](images/notebooks-rn.png)

Public preview [documentation](dr-notebooks/index).

## API enhancements {: #api-enhancements }

The following is a summary of API new features and enhancements. Go to the [API Documentation home](https://docs.datarobot.com/en/docs/api/index.html){ target=_blank } for more information on each client.

!!! tip

    DataRobot highly recommends updating to the latest API client for Python and R.

### Access DataRobot REST API documentation from docs.datarobot.com {: #access-datarobot-rest-api-documentation-from-docs-datarobot-com }

DataRobot now offers REST API documentation available [directly from the public documentation hub](https://docs.datarobot.com/en/docs/api/reference/public-api/index.html). Previously, REST API docs were only accessible through the application. Now, you can access information about REST endpoints and parameters in the API reference section of the public documentation site.

### Python client v3.0 {: #python-client-v30 }

Now generally available, DataRobot has released version 3.0 of the [Python client](https://pypi.org/project/datarobot/){ target=_blank }. This version introduces significant changes to common methods and usage of the client.

Many prominent changes are listed below, but **view the [changelog](https://datarobot-public-api-client.readthedocs-hosted.com/page/CHANGES.html){ target=_blank } for a complete list of changes introduced in version 3.0**.

#### Python client v3.0 new features {: #python-client-v30-new-features }

A summary of some new features for version 3.0 is outlined below:

* Version 3.0 of the Python client does not support Python 3.6 and earlier versions. Version 3.0 currently supports Python 3.7+.
* The default Autopilot mode for the `project.start_autopilot` method has changed to `AUTOPILOT_MODE.QUICK`.
* Pass a file, file path, or DataFrame to a deployment to easily make batch predictions and return the results as a DataFrame using the new method `Deployment.predict_batch`.
* You can use a new method to retrieve the canonical URI for a project, model, deployment, or dataset:
    * `Project.get_uri`
    * `Model.get_uri`
    * `Deployment.get_uri`
    * `Dataset.get_uri`

#### New methods for DataRobot projects {: #new-methods-for-datarobot-projects }

Review the new methods available for `datarobot.models.Project`:

* `Project.get_options` allows you to retrieve saved modeling options.
* `Project.set_options` saves `AdvancedOptions` values for use in modeling.
* `Project.analyze_and_model` initiates Autopilot or data analysis using data that has been uploaded to DataRobot.
* `Project.get_dataset` retrieves the dataset used to create the project.
* `Project.set_partitioning_method` creates the correct Partition class for a regular project based on input arguments.
* `Project.set_datetime_partitioning` creates the correct Partition class for a time series project.
* `Project.get_top_model` returns the highest scoring model for a metric of your choice.

### Python client v3.1 {: #python-client-v31 }

The following API enhancements are introduced with version 3.1 of DataRobot's Python client:

* Added new methods `BatchPredictionJob.apply_time_series_data_prep_and_score` and `BatchPredictionJob.apply_time_series_data_prep_and_score_to_file` that apply time series data prep to a file or dataset and make batch predictions with a deployment.
* Added new methods `DataEngineQueryGenerator.prepare_prediction_dataset` and `DataEngineQueryGenerator.prepare_prediction_dataset_from_catalog` that apply time series data prep to a file or catalog dataset and upload the prediction dataset to a project.
* Added a new `max_wait` parameter to the method `Project.create_from_dataset`. You can specify values larger than the default to avoid timeouts when creating a project from a dataset.
* Added a new method, `Project.create_segmented_project_from_clustering_model`, that creates a segmented modeling project from an existing clustering project and model. Switch to this function if you were previously using `ModelPackage` for segmented modeling purposes.
* Added a new method, `is_unsupervised_clustering_or_multiclass`, that checks whether the clustering or multiclass parameters are used without making extra API calls.
* Added the value `PREPARED_FOR_DEPLOYMENT` to the `RECOMMENDED_MODEL_TYPE` enum.
* Added two new methods to the `ImageAugmentationList` class: `ImageAugmentationList.list` and `ImageAugmentationList.update`.
* Added a `format` key to Batch Prediction intake and output settings for S3, GCP, and Azure.
* The method `PredictionExplanations.is_multiclass` now adds an additional API call to check for multiclass target validity, which adds a small delay.
* The `AdvancedOptions` parameter `blend_best_models` now defaults to false.
* The `AdvancedOptions` parameter `consider_blenders_in_recommendation` now defaults to false.
* `DatetimePartitioning` now has the parameter `unsupervised_mode`.

### Public Preview: R client v2.29 {: #r-client-v229 }

Now available for public preview, DataRobot has released [version 2.29 of the R client](https://github.com/datarobot/rsdk/releases){ target=_blank }. This version brings parity between the R client and version 2.29 of the Public API. As a result, it introduces significant changes to common methods and usage of the client. These changes are encapsulated in a new library (in addition to the `datarobot` library): `datarobot.apicore`, which provides auto-generated functions to access the Public API. The `datarobot` package provides a number of API wrapper functions around the `apicore` package to make it easier to use.

Reference the [v2.29 documentation](r-pub-prev/index) for more details on the new R client, including installation instructions, detailed method overviews, and [reference documentation](r-ref).

#### New R Functions {: #new-r-functions }

* Generated API wrapper functions are organized into categories based on their tags from the OpenAPI specification, which were themselves redone for the entire DataRobot Public API in v2.27.
* API wrapper functions use camel-cased argument names to be consistent with the rest of the package.
* Most function names follow a `VerbObject` pattern based on the OpenAPI specification.
* Some function names match "legacy" functions that existed in v2.18 of the R Client if they invoked the same underlying endpoint. For example, the wrapper function is called `GetModel`, not `RetrieveProjectsModels`, since the latter is what was implemented in the R client for the endpoint `/projects/{pId}/models/{mId}`.
* Similarly, these functions use the same arguments as the corresponding "legacy" functions to ensure DataRobot does not break existing code calling those functions.
* The R client (both `datarobot` and `datarobot.apicore` packages) outputs a warning when you attempt to access certain resources (projects, models, deployments, etc.)
that are deprecated or disabled by the DataRobot platform migration to Python 3.
* Added the helper function `EditConfig` that allows you to interactively modify `drconfig.yaml`.
* Added the `DownloadDatasetAsCsv` function to retrieve a dataset as a CSV file using `catalogId`.
* Added the `GetFeatureDiscoveryRelationships` function to get the feature discovery relationships for a project.

#### R enhancements {: #r-enhancements }

* The function `RequestFeatureImpact` now accepts a `rowCount` argument, which will change the sample size used for Feature Impact calculations.
* The internal helper function `ValidateModel` was renamed to `ValidateAndReturnModel` and now works with model classes from the `apicore` package.
* The `quickrun` argument has been removed from the function `SetTarget`. Set `mode = AutopilotMode.Quick` instead.
* The Transferable Models family of functions (`ListTransferableModels`, `GetTransferableModel`, `RequestTransferableModel`, `DownloadTransferableModel`, `UploadTransferableModel`, `UpdateTransferableModel`, `DeleteTransferableModel`) have been removed. The underlying endpoints&mdash;long deprecated&mdash;were removed from the Public API with the removal of the Standalone Scoring Engine (SSE).
* Removed files (code, tests, doc) representing parts of the Public API not present in v2.27-2.29.

### Calculate Feature Impact for each backtest {: #calculate-feature-impact-for-each-backtest }

Feature Impact provides a transparent overview of a model, especially in a model's compliance documentation. Time-dependent models trained on different backtests and holdout partitions can have different Feature Impact calculations for each backtest.

Now generally available, you can calculate Feature Impact for each backtest using DataRobot's REST API, allowing you to inspect model stability over time by comparing Feature Impact scores from different backtests.

## Deprecation announcements {: #deprecation-announcements }

### API deprecations

#### R deprecations {: #r-deprecations }

Review the breaking changes introduced in version 2.29:

* The `quickrun` argument has been removed from the function `SetTarget`. Set `mode = AutopilotMode.Quick` instead.
* The Transferable Models functions have been removed. Note that the underlying endpoints were also removed from the Public API with the removal of the Standalone Scoring Engine (SSE). The affected functions are listed below:
    * `ListTransferableModels`
    * `GetTransferableModel`
    * `RequestTransferableModel`
    * `DownloadTransferableModel`
    * `UploadTransferableModel`
    * `UpdateTransferableModel`
    * `DeleteTransferableModel`

Review the deprecations introduced in version 2.29:

* The Compliance Documentation API is deprecated. Use the Automated Documentation API instead.

#### Python deprecations {: #python-deprecations }

Review the deprecations introduced in version 3.0:

* `Project.set_target` has been removed. Use `Project.analyze_and_model` instead.
* `PredictJob.create` has been removed. Use `Model.request_predictions` instead.
* `Model.get_leaderboard_ui_permalink` has been removed. Use `Model.get_uri` instead.
* `Project.open_leaderboard_browser` has been removed. Use `Project.open_in_browser` instead.
* `ComplianceDocumentation` has been removed. Use `AutomatedDocument` instead.

The following deprecations are introduced in version 3.1:

* Deprecated method `Project.create_from_hdfs`.
* Deprecated method `DatetimePartitioning.generate`.
* Deprecated parameter `in_use` from `ImageAugmentationList.create` as DataRobot will take care of it automatically.
* Deprecated property `Deployment.capabilities` from `Deployment`.
* `ImageAugmentationSample.compute` was removed in v3.1.
You can get the same information with the method `ImageAugmentationList.compute_samples`.
* The `sample_id` parameter is now removed from `ImageAugmentationSample.list`. Please use `auglist_id` instead.

### Hadoop is no longer available {: #hadoop-is-no-longer-available }

Starting with version 9.0, you can only install DataRobot on Kubernetes. Dockerized, RPM, and Hadoop installations will no longer be available. Also, the ability to directly ingest data from HDFS for modeling and prediction is deprecated.
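The Python client removals above follow a common rename-and-deprecate pattern: the old name survives for a release cycle as an alias that warns before delegating to its replacement. A self-contained sketch of that pattern (a toy class for illustration, not the actual `datarobot` package implementation):

```python
import warnings

class Project:
    """Toy stand-in for a client class that renamed set_target -> analyze_and_model."""

    def analyze_and_model(self, target):
        # The replacement method carries the real behavior.
        return f"modeling started for target {target!r}"

    def set_target(self, target):
        # Old name kept for one release cycle as a deprecated alias.
        warnings.warn(
            "Project.set_target is deprecated; use Project.analyze_and_model instead.",
            DeprecationWarning,
            stacklevel=2,
        )
        return self.analyze_and_model(target)

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    result = Project().set_target("churn")

print(result)                       # modeling started for target 'churn'
print(caught[0].category.__name__)  # DeprecationWarning
```

Calling the old name still works but surfaces a `DeprecationWarning`, giving callers time to migrate before the alias is removed in a later version.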
---
title: Modeling (V9.0)
description: DataRobot Release 9.0 modeling release announcements.
---

# Modeling (V9.0) {: #modeling-v90 }

The following table lists each new feature. See also the [deprecation and end-of-life announcements](#deprecation-announcements).

Name | GA | Public Preview
---------- | ---- | ---
[Visual AI Image Embeddings visualization adds new filtering capabilities](#visual-ai-embeddings-visualization-adds-new-filtering-capabilities) | ✔ | |
[Bias Mitigation functionality](#bias-mitigation-functionality) | ✔ | |
[Prediction Explanations for multiclass projects](#prediction-explanations-for-multiclass-projects) | | ✔ |
[Text AI parameters now available via Composable ML](#text-ai-parameters-now-available-via-composable-ai) | | ✔ |
[Text Prediction Explanations](#text-prediction-explanations) | ✔ | |
[NLP Autopilot with better language support](#nlp-autopilot-with-better-language-support) | ✔ | |
[Prediction Explanations for cluster models](#prediction-explanations-for-cluster-models) | | ✔ |
[Composable ML task categories refined](#composable-ml-task-categories-refined) | ✔ | |
[Blueprint toggle allows summary and detailed views from Leaderboard](#blueprint-toggle-allows-summary-and-detailed-views-from-leaderboard) | | ✔ |
[Quick Autopilot mode improvements for faster experimentation](#quick-autopilot-mode-improvements-speed-experimentation) | ✔ | |
[Japanese compliance documentation now generally available, more complete](#japanese-compliance-documentation-now-generally-available-more-complete) | ✔ | |
[Changes to blender model defaults](#changes-to-blender-model-defaults) | ✔ | |
[ROC Curve enhancements aid model interpretation](#roc-curve-enhancements-aid-model-interpretation) | ✔ | |
[Sliced insights show a subpopulation of model data](#sliced-insights-show-a-subpopulation-of-model-data) | | ✔ |
[Create AI Apps from Leaderboard models](#create-ai-apps-from-models-on-the-leaderboard) | ✔ | |
[Add custom logos to No-Code AI Apps](#add-custom-logos-to-no-code-ai-apps) | ✔ | |
[No-Code AI App header enhancements](#no-code-ai-app-header-enhancements) | ✔ | |
[Multiclass support in No-Code AI Apps](#multiclass-support-in-no-code-ai-apps) | ✔ | |
[UI/UX improvements to No-Code AI Apps](#uiux-improvements-to-no-code-ai-apps) | ✔ | |
[Details page added to time series Predictor applications](#details-page-added-to-time-series-predictor-applications) | | ✔ |
[Create Time Series What-if AI Apps](#create-time-series-what-if-ai-apps) | ✔ | |
[Use Cases tab renamed to Value Tracker](#use-cases-tab-renamed-to-value-tracker) | ✔ | |
[Reorder scheduled modeling jobs](#reorder-scheduled-modeling-jobs) | ✔ | |

## GA {: #ga }

### Visual AI Image Embeddings visualization adds new filtering capabilities {: #visual-ai-embeddings-visualization-adds-new-filtering-capabilities }

The **Understand > Image Embeddings** tab helps you visualize predicted results for your AI project. Now, DataRobot calculates predicted values for the images and allows you to filter by those predictions. In addition, for select project types you can modify the prediction threshold (which may change the predicted label) and filter based on the new results. The image below shows all filtering options&mdash;new and existing&mdash;for all supported project types.

![](images/image-embeddings-all.png)

In addition, usability enhancements for clusters make exploring Visual AI results easier. With clustering, images display colored borders to indicate the predicted cluster.

### Bias Mitigation functionality {: #bias-mitigation-functionality }

Bias mitigation is now generally available for binary classification projects. To clarify relationships between the parent model and any child models with mitigation applied, this release adds a table&mdash;**Models with Mitigation Applied**&mdash;accessible from the parent model on the Leaderboard.
Bias Mitigation works by augmenting blueprints with a pre- or post-processing task, causing the blueprint to then attempt to reduce bias across classes in a protected feature. You can apply mitigation either automatically (as part of Autopilot) or manually (after Autopilot completes). When run automatically, you set mitigation criteria as a part of the Bias and Fairness advanced option settings. Autopilot then applies mitigation to the top three Leaderboard models. Or, once Autopilot completes, you can apply mitigation to any non-blender, unmitigated model available from the Leaderboard. Finally, compare mitigated versus unmitigated models from the Bias vs Accuracy insight.

![](images/bias-mit-13.png)

See the [Bias Mitigation](fairness-metrics#set-mitigation-techniques) documentation for more information.

### Text Prediction Explanations {: #text-prediction-explanations }

[Text Prediction Explanations](predex-text) illustrate how individual words (n-grams) in a text feature influence predictions, helping to validate and understand the model and the importance it is placing on words. Previously, DataRobot evaluated the impact of a text feature as a whole, potentially requiring reading the full text for best understanding. With Text Prediction Explanations, which use the standard color bar spectrum of blue (negative) to red (positive) impact, you can easily visualize and understand your text. An option to display unknown n-grams helps to identify, via gray highlight, those n-grams not recognized by the model (most likely because they were not seen during training). Text Prediction Explanations, either XEMP or SHAP, are run by default when text is present in a dataset.

![](images/text-predex-unknown.png)

For more information, see the [Text Prediction Explanations](predex-text) documentation.
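Conceptually, each n-gram carries a signed impact that the visualization renders as a color: red for positive impact, blue for negative, gray for n-grams unknown to the model. A minimal sketch of that mapping (illustrative values only; not DataRobot's XEMP or SHAP computation):

```python
# Signed impact per n-gram; None marks an n-gram unseen during training.
impacts = {"refund": 0.8, "was": 0.0, "slow": -0.6, "xzqv": None}

def bucket(impact):
    """Map a signed impact to the display buckets described above."""
    if impact is None:
        return "unknown (gray)"
    if impact > 0:
        return "positive (red)"
    if impact < 0:
        return "negative (blue)"
    return "neutral"

for ngram, impact in impacts.items():
    print(f"{ngram}: {bucket(impact)}")
```

In the actual insight, the magnitude of the impact also controls the intensity of the highlight.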
### NLP Autopilot with better language support {: #nlp-autopilot-with-better-language-support }

A host of natural language processing (NLP) improvements are now generally available. The most impactful is the application of FastText for language detection at data ingest, which:

* Allows DataRobot to generate the appropriate blueprints with parameters optimized for that language.
* Adapts tokenization to the detected language for better word clouds and interpretability.
* Triggers specific blueprint training heuristics so that accuracy-optimized Advanced Tuning settings are applied.

This feature works with multilingual use cases as well; Autopilot will detect multiple languages and adjust various blueprint settings for the greatest accuracy. The following NLP enhancements are also now generally available:

* New pre-trained BPE tokenizer (which can handle any language).
* Refined Keras blueprints for NLP for improved accuracy and training time.
* Various improvements across other NLP blueprints.
* New Keras blueprints (with the BPE tokenizer) in the Repository.

### Quick Autopilot mode improvements speed experimentation {: #quick-autopilot-mode-improvements-speed-experimentation }

With this release, [Quick](model-ref#quick-autopilot) Autopilot mode now uses a one-stage modeling process to build models and populate the Leaderboard in AutoML projects. In the new version of Quick, all models are trained at a max sample size&mdash;typically 64%. The specific number of Quick models run varies by project and target type. DataRobot selects which models to run based on a variety of criteria, including target and performance metric, but as its name suggests, chooses only models with relatively short training runtimes to support quicker experimentation. Note that to maximize runtime efficiency, DataRobot no longer automatically generates and fits the [DR Reduced Features](feature-lists#automatically-created-feature-lists) list.
(Fitting the reduced list requires retraining models.)

![](images/start-with-options.png)

### Changes to blender model defaults {: #changes-to-blender-model-defaults }

This release brings changes to the default behavior of [blender models](leaderboard-ref#blender-models). A blender (or ensemble) model combines the predictions of two or more models, potentially improving accuracy. DataRobot can automatically create these models at the end of Autopilot when the [**Create blenders from top models** advanced option](additional) is enabled. Previously, the default setting was to create blenders automatically; now, the default is not to build these models.

Additionally, the number of models allowed when creating blenders either automatically or manually has changed. Previously there was no limit on the number of contributory models, and later a three-model maximum; that limit has now been adjusted to allow up to eight models per blender.

Finally, the automatic creation of advanced blenders has been removed. These blenders used a backwards stage-wise process to eliminate models when doing so benefited the blend's cross-validation score:

* Advanced Average (AVG) Blend
* Advanced Generalized Linear Model (GLM) Blend
* Advanced Elastic Net (ENET) Blend

The following blender types are currently in the process of deprecation:

Blender | Deprecation status
--------| -------------------
Random Forest Blend (RF) | Existing RF blenders continue to work; you cannot create new RF blenders.
Light Gradient Boosting Machine Blend (LGBM) | Existing LGBM blenders continue to work; you cannot create new LGBM blenders.
TensorFlow Blend (TF) | Existing TF blenders do not work; you cannot create new TF blenders.

These changes have been made in response to customer feedback. Because blenders can extend build times and cause deployment issues, the changes ensure that these impacts only affect those users needing the capability.
Testing has determined that, in most cases, the accuracy gain does not justify the extended runtimes imposed on Autopilot. For data scientists who need blender capabilities, manual blending is not affected.

### Japanese compliance documentation now generally available, more complete {: #japanese-compliance-documentation-now-generally-available-more-complete }

With this release, [model compliance documentation](compliance/index) is now generally available for users in Japanese. Now, Japanese-language users can generate, for each model, individualized documentation to provide comprehensive guidance on what constitutes effective model risk management and download it as an editable Microsoft Word document. In the public preview version, some sections were untranslated and therefore removed from the report. Now the following previously untranslated sections are translated and available for binary classification and multiclass projects:

* Bias and Fairness
* Lift Chart
* Accuracy

Anomaly detection compliance information is not yet translated and is not included; it is available in English if the information is required. Compliance Reports are a premium feature; contact your DataRobot representative for information on availability.

### ROC Curve enhancements aid model interpretation {: #roc-curve-enhancements-aid-model-interpretation }

With this release, the [ROC Curve](roc-curve-tab/index) tab introduces several improvements to help increase understanding of model performance at any point on the probability scale. Using the visualization now, you will notice:

* Row and column totals are shown in the Confusion Matrix.
* The Metrics section now displays up to six accuracy metrics.
* You can use **Display Threshold > View Prediction Threshold** to reset the visualization components (graphs and charts) to the model's default prediction threshold.
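The new row and column totals are the marginal sums of the confusion matrix: row totals count rows per actual class, column totals count rows per predicted class. A minimal sketch (illustrative counts):

```python
# Confusion matrix for a binary classifier at a fixed threshold:
#                 predicted 0   predicted 1
# actual 0             TN            FP
# actual 1             FN            TP
matrix = [[50, 10],   # actual 0: TN=50, FP=10
          [5, 35]]    # actual 1: FN=5, TP=35

row_totals = [sum(row) for row in matrix]        # counts per actual class
col_totals = [sum(col) for col in zip(*matrix)]  # counts per predicted class
grand_total = sum(row_totals)

print(row_totals)   # [60, 40] -> 60 actual negatives, 40 actual positives
print(col_totals)   # [55, 45] -> 55 predicted negatives, 45 predicted positives
print(grand_total)  # 100
```

Seeing the marginals alongside the cells makes class imbalance and threshold effects easier to read at a glance.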
![](images/roc-pred-threshold.png)

### Create AI Apps from models on the Leaderboard {: #create-ai-apps-from-models-on-the-leaderboard }

You can now create No-Code AI Apps directly from trained models on the Leaderboard. To do so, select the model, click the new **Build app** tab, and select the template that best suits your use case.

![](images/app-leaderboard-2.png)

Then, name the application, select an access type, and click **Create**.

![](images/app-leaderboard-3.png)

The new app appears in the **Build app** tab of the Leaderboard model as well as the **Applications** tab. For more information, see the [documentation](create-app#from-the-leaderboard) for No-Code AI Apps.

### Add custom logos to No-Code AI Apps {: #add-custom-logos-to-no-code-ai-apps }

Now generally available, you can add a custom logo to your No-Code AI Apps, allowing you to keep the branding of the AI App consistent with that of your company before sharing it either externally or internally.

![](images/app-logo-5.png)

To upload a new logo, open the application you want to edit and click **Build**. Under **Settings > Configuration Settings**, click **Browse** and select a new image, or drag-and-drop an image into the **New logo** field.

![](images/app-logo-3.png)

For more information, see the [No-Code AI App](app-settings#add-a-custom-logo) documentation.

### No-Code AI App header enhancements {: #no-code-ai-app-header-enhancements }

This release introduces improvements to the layout and header of [No-Code AI Apps](app-builder/index). Toggle between the tabs below to view the improvements made to the UI when using and editing an application:

=== "Editing an application"

    ![](images/app-build-1.png)

    | &nbsp; | Element | Description |
    |---|---|---|
    | ![](images/icon-1.png) | Pages panel | Allows you to rename, reorder, add, hide, and delete application pages. |
    | ![](images/icon-2.png) | Widget panel | Allows you to add widgets to your application. |
    | ![](images/icon-3.png) | Settings | Modifies general configurations and permissions as well as displays app usage. |
    | ![](images/icon-4.png) | Documentation | Opens the DataRobot documentation for No-Code AI Apps. |
    | ![](images/icon-5.png) | Editing page dropdown | Controls the application page you are currently editing. To view a different page, click the dropdown and select the page you want to edit. Click **Manage pages** to open the **Pages** panel. |
    | ![](images/icon-6.png) | Preview | Previews the application on different devices. |
    | ![](images/icon-7.png) | Go to app / Publish | Opens the end-user application, where you can make new predictions, as well as view prediction results and widget visualizations. After editing an application, this button displays **Publish**, which you must click to apply your changes. |
    | ![](images/icon-8.png) | Widget actions | Moves, hides, edits, and deletes widgets. |

=== "Using an application"

    ![](images/use-app-1.png)

    | &nbsp; | Widget | Description |
    |---|---|---|
    | ![](images/icon-1.png) | Application name | Displays the application name. Click to return to the app's Home page. |
    | ![](images/icon-2.png) | Pages | Navigates between application pages. |
    | ![](images/icon-3.png) | Build | Allows you to edit the application. |
    | ![](images/icon-4.png) | Share | Shares the application with users, groups, or organizations within DataRobot. |
    | ![](images/icon-5.png) | Add new row | Opens the **Create Prediction** page, where you can make single record predictions. |
    | ![](images/icon-6.png) | Add Data | Uploads batch predictions&mdash;from the **AI Catalog** or a local file. |
    | ![](images/icon-7.png) | All rows | Displays a history of predictions. Select a row to view prediction results for that entry. |

### Multiclass support in No-Code AI Apps {: #multiclass-support-in-no-code-ai-apps }

No-Code AI Apps now support multiclass classification deployments across all three template types&mdash;Predictor, Optimizer, and What-If.
This gives users the ability to create applications that solve a broader range of business problems.

### UI/UX improvements to No-Code AI Apps {: #uiux-improvements-to-no-code-ai-apps }

This release introduces the following improvements to No-Code AI Apps:

* An in-app tour has been added to help you set up Optimizer applications. Click the **?** in the upper-right and select **Show Optimizer Guide**.
* Applications now open in Consume mode instead of Build mode.
* In **Consume > Optimization Details**, the What-if and Optimizer widgets have been moved towards the top of the page.
* In Optimizer applications, you previously needed to select a prediction row to calculate an optimization. Now, you can click the **Optimize Row** button in the All Rows widget to calculate and display the optimized prediction without leaving the page.
* In Build mode, widgets no longer display an example.

### Create Time Series What-if AI Apps {: #create-time-series-what-if-ai-apps }

Now generally available, you can create What-if Scenario AI Apps from time series projects. This allows you to launch and easily configure applications in an enhanced visual and interactive interface, as well as share your What-if Scenario app with consumers, who can easily build upon what’s already been generated by the builder or create their own scenarios on the same prediction files.

![](images/ts-whatif-7.png)

Additionally, you can edit the known in advance features for multiple scenarios at once using the [**Manage Scenarios**](ts-app#bulk-edit-scenarios) feature.

![](images/ts-app-bulk-1.png)

For more information, see the [Time series applications](ts-app#what-if-widget) documentation.

### Use Cases tab renamed to Value Tracker {: #use-cases-tab-renamed-to-value-tracker }

With this release, the **Use Cases** tab at the top of the DataRobot UI is now the **Value Tracker**.
While the functionality remains the same, all instances of “use cases” in this feature have been replaced by “value tracker.”

![](images/usecase-valuetrack.png)

See the [Value Tracker documentation](value-tracker) for more information.

### Reorder scheduled modeling jobs {: #reorder-scheduled-modeling-jobs }

You can now change the order of scheduled modeling and prediction jobs in your project’s Worker Queue&mdash;allowing you to run more important jobs sooner. For more information, see the [Worker Queue](worker-queue#reorder-workers) documentation.

## Public Preview {: #public-preview }

### Prediction Explanations for multiclass projects {: #prediction-explanations-for-multiclass-projects }

DataRobot now calculates explanations for each class in an XEMP-based multiclass classification project, both from the Leaderboard and from deployments. With multiclass, you can set the number of classes to compute explanations for, select a mode based on predicted or actual (if using training data) results, or specify a specific set of classes to display:

![](images/mc-predex-3.png)

This capability especially helps with projects that require “humans-in-the-loop” to review multiple options. Previously, comparisons required building several binary classification models and using scripting to evaluate them. When building a multiclass project, Prediction Explanations can help improve models by highlighting, for example, where a model is too accurate (potential leakage?), where residuals are too large (some data could be missing?), or where a model can’t clearly distinguish two classes. See the section on [XEMP Prediction Explanations for Multiclass](xemp-pe#multiclass-prediction-explanations) for more information.
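As a conceptual illustration of setting the number of classes to compute (illustrative only, not the XEMP algorithm itself), explanations would be generated for each of a row's top-k predicted classes:

```python
# Predicted class probabilities for one row of a multiclass project.
probabilities = {"cat": 0.12, "dog": 0.55, "fox": 0.30, "owl": 0.03}

def top_classes(probs, k):
    """Return the k most probable classes, best first."""
    return sorted(probs, key=probs.get, reverse=True)[:k]

# Explanations would then be computed per selected class.
print(top_classes(probabilities, 2))  # ['dog', 'fox']
```

Filtering to a specific set of classes, rather than the top-k, is the same idea with an explicit class list in place of the probability ranking.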
### Text AI parameters now available via Composable ML {: #text-ai-parameters-now-available-via-composable-ai }

The ability to modify certain Text AI preprocessing tasks (Lemmatizer, PosTagging, and Stemming) is moving from the Advanced Tuning tab to blueprint tasks accessible via Composable ML. The new Text AI preprocessing tasks unlock additional pathways to create unique text blueprints. For example, you can now use lemmatization in any text model that supports that preprocessing task instead of being limited to TF-IDF blueprints.

**Required feature flag:** Enable Text AI Composable Vertices

### Prediction Explanations for cluster models {: #prediction-explanations-for-cluster-models }

Now available as public preview, you can use Prediction Explanations with clustering to uncover which factors most contributed to any given row’s cluster assignment. With this insight, you can easily explain clustering model outcomes to stakeholders and identify high-impact factors to help focus their business strategies. Functioning very much like multiclass Prediction Explanations&mdash;but reporting on clusters instead of classes&mdash;cluster explanations are available from both the Leaderboard and deployments when enabled. They are available for all XEMP-based clustering projects and are not available with time series.

**Required feature flag:** Enable Clustering Prediction Explanations

Public preview [documentation](cluster-pe).

![](images/pe-clustering-3.png)

### Composable ML task categories refined {: #composable-ml-task-categories-refined }

In response to feedback and the widespread adoption of Composable ML and [blueprint editing](cml-blueprint-edit), this release brings some refinements to task categorization.
For example, boosting tasks are now available under the specific project/model type:

![](images/cml-blue-14.png)

### Blueprint toggle allows summary and detailed views from Leaderboard {: #blueprint-toggle-allows-summary-and-detailed-views-from-leaderboard }

Blueprints that are viewed from the Leaderboard’s **Blueprint** tab are, by default, a read-only, summarized view, showing only those tasks used in the final model.

![](images/blue-toggle-1.png)

However, the original modeling algorithm often contains many more “branches,” which DataRobot prunes when they are not applicable to the project data and feature list. Now, you can toggle to see a detailed view while in read-only mode. Prior to the introduction of this feature, viewing the full blueprint required entering edit mode of the blueprint editor.

![](images/blue-toggle-2.png)

**Required feature flag:** Enable Blueprint Detailed View Toggle

Public preview [documentation](blueprint-toggle).

### Sliced insights show a subpopulation of model data {: #sliced-insights-show-a-subpopulation-of-model-data }

Now available as public preview, slices allow you to define filters for categorical features, numeric features, or both. Viewing and comparing insights based on segments of a project’s data helps to understand how models perform on different subpopulations. You can also compare a slice against the “global” slice&mdash;all training data (depending on the insight). Configuring a slice allows you to choose a feature and set operators and values to narrow the data returned.

![](images/sliced-insights-7.png)

Sliced insights are available for Lift Chart, ROC Curve, Residual, and Feature Impact visualizations.

**Required feature flag:** Enable Sliced Insights

Public preview [documentation](sliced-insights).
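Conceptually, a slice is a filter applied to the data before an insight is computed, so any metric can be compared between the slice and the global population. A self-contained sketch (illustrative rows, feature names, and metric; not DataRobot's implementation):

```python
# Illustrative scoring rows: (region, loan_amount, actual, predicted probability).
rows = [
    ("east", 5000, 1, 0.9),
    ("east", 12000, 0, 0.2),
    ("west", 7000, 1, 0.4),
    ("west", 20000, 1, 0.8),
    ("west", 3000, 0, 0.3),
]

def accuracy(subset, threshold=0.5):
    """Fraction of rows where the thresholded prediction matches the actual."""
    hits = sum((pred >= threshold) == bool(actual) for _, _, actual, pred in subset)
    return hits / len(subset)

# A slice combines operators and values -- here a categorical and a numeric filter.
slice_rows = [r for r in rows if r[0] == "west" and r[1] > 5000]

print(f"global accuracy: {accuracy(rows):.2f}")        # 0.80
print(f"slice accuracy:  {accuracy(slice_rows):.2f}")  # 0.50
```

A gap like this between the global and sliced metric is exactly the kind of subpopulation behavior the feature is designed to surface.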
### Details page added to time series Predictor applications {: #details-page-added-to-time-series-predictor-applications } In time series Predictor No-Code AI Apps, you can now view prediction information for specific predictions or dates, allowing you to not only see the prediction values, but also compare them to other predictions that were made for the same date. Previously, you could only view values for the prediction, residuals, and actuals, as well as the top three Prediction Explanations. To drill down into the prediction details, click on a prediction in either the **Predictions vs Actuals** or **Prediction Explanations** chart. This opens the Forecast details page, which displays the following information: ![](images/ts-pred-2.png) ![](images/ts-pred-detail-3.png) | | Description | |---|---| | ![](images/icon-1.png) | The average prediction value in the forecast window. | | ![](images/icon-2.png) | Up to 10 Prediction Explanations for each prediction. | | ![](images/icon-3.png) | Segmented analysis for each forecast distance within the forecast window. | | ![](images/icon-4.png) | Prediction Explanations for each forecast distance included in the segmented analysis. | **Required feature flag:** Enable Application Builder Time Series Predictor Details Page Public preview [documentation](ts-app#forecast-details-page). ## Deprecation announcements {: #deprecation-announcements } ### Auto-Tuned Word N-gram Text Modeler blueprints removed from the Leaderboard {: #auto-tuned-word-n-gram-text-modeler-blueprints-removed-from-the-leaderboard } Auto-Tuned Word N-gram Text Modeler blueprints are no longer run as part of Autopilot for binary classification, regression, and multiclass/multimodal projects. The modeler blueprints remain available in the repository. Currently, Light GBM (LGBM) models run these auto-tuned text modelers for each text column, and for each, a new blueprint is added to the Leaderboard. 
However, these Auto-Tuned Word N-gram Text Modelers are not correlated to the original LGBM model (i.e., modifying them does not affect the original LGBM model). Now, Autopilot creates a single, larger blueprint for all Auto-Tuned Word N-gram Text Modeler tasks instead of one for each text column. Note that this change has no backward-compatibility issues; it applies to new projects only.

### DataRobot Prime model creation removed {: #datarobot-prime-model-creation-removed }

The ability to create new [DataRobot Prime](prime/index) models has been removed from the application. This does not affect existing Prime models or deployments. It is being replaced with the new ability to export Python or Java code from RuleFit models using the [Scoring Code](scoring-code/index) capabilities. RuleFit models, which differ from Prime only in that they use raw data for their prediction target rather than predictions from a parent model, support Java/Python source code export. There is no change in the availability of Java Scoring Code for other blueprint types, and any existing Prime models will continue to function.

### User/Open source models disabled in November {: #user-open-source-models-disabled-in-november }

As of November 2022, DataRobot disabled all models containing User/Open source (“user”) tasks. See the [release announcement](june2022-announce#user-open-source-models-deprecated-and-soon-disabled) for full information on identifying these models. Use the [Composable ML](cml/index) functionality to create custom models.
--- title: Time series (V9.0) description: DataRobot Release 9.0 time series release announcements. --- # Time series (V9.0) {: #time-series-v90 } The following table lists each new feature. Name | GA | Public Preview ---------- | ---- | --- [Quick Autopilot improvements now available for time series](#quick-autopilot-improvements-now-available-for-time-series) | ✔ | | [New Leaderboard and Repository filtering options](#new-leaderboard-and-repository-filtering-options) | ✔ | | [Time series clustering](#time-series-clustering) | ✔ | | [Time series 5GB support](#time-series-5gb-support) | ✔ | | [Accuracy Over Time enhancements](#accuracy-over-time-enhancements) | ✔ | | [Native Prophet and Series Performance blueprints in Autopilot](#native-prophet-and-series-performance-blueprints-in-autopilot) | ✔ | | [Project duplication, with settings, for time series projects](#project-duplication-with-settings-for-time-series-projects) | ✔ | | [Scoring Code for time series projects](#scoring-code-for-time-series-projects) | ✔ | | [Autoexpansion of time series input in Prediction API](#autoexpansion-of-time-series-input-in-prediction-api) | ✔ | | [Calculate Feature Impact for each backtest](#calculate-feature-impact-for-each-backtest) | ✔ | | [Support for Manual mode introduced to segmented modeling](#support-for-manual-mode-introduced-to-segmented-modeling) | ✔ | | [Deployment for time series segmented modeling](#deployment-for-time-series-segmented-modeling) | ✔ | | [New metric support for segmented projects](#new-metric-support-for-segmented-projects) | ✔ | [Retraining Combined Models now faster](#retraining-combined-models-now-faster) | ✔ | | [Prediction Explanations for cluster models](#prediction-explanations-for-cluster-models) | | ✔ | [Period Accuracy allows focus on specific periods in training data](#period-accuracy-allows-focus-on-specific-periods-in-training-data) | | ✔ | [Batch predictions for TTS and LSTM models](#batch-predictions-for-tts-and-lstm-models) | | ✔ | 
[Time series model package prediction intervals](#time-series-model-package-prediction-intervals) | | ✔ |

## GA {: #ga }

### Quick Autopilot improvements now available for time series {: #quick-autopilot-improvements-now-available-for-time-series }

With this release, [Quick](multistep-ta) Autopilot has been streamlined for time series projects, speeding experimentation. In the new version of Quick, to maximize runtime efficiency, DataRobot no longer automatically generates and fits the DR Reduced Features list, as fitting requires retraining models. Models are still trained at the maximum sample size for each backtest, defined by the project’s date/time partitioning. The specific number of models run varies by project and target type. See the documentation on the [model recommendation process](model-rec-process) for alternate methods to build a reduced feature list.

### New Leaderboard and Repository filtering options {: #new-leaderboard-and-repository-filtering-options }

With this release, you can limit the Leaderboard or Repository to display models/blueprints matching the selected filters. Leaderboard filters allow you to set options categorized as: sample size&mdash;or for time series projects, training period&mdash;model family, model characteristics, feature list, and more. Repository filtering includes blueprint characteristics, families, and types. The new, enhanced filtering options are centralized in a single modal (one for the Leaderboard and one for the Repository), where previously, the more limited methods for filtering were in separate locations.

![](images/leaderboard-filter-1.png)

See the [Leaderboard reference](leaderboard-ref#use-leaderboard-filters) for more information.

### Time series clustering {: #time-series-clustering }

Clustering, an unsupervised learning technique, can be used to identify natural segments in your data.
DataRobot now allows you to use clustering to discover the segments to be used for [segmented modeling](ts-segmented). This technique enables you to easily group similar series across a multiseries dataset from within the DataRobot platform. Use the discovered clusters to get a better understanding of your data or use them as input to time series [segmented modeling](ts-segmented). ![](images/rn-pp-ts-cluster-segmented-new.png) This workflow builds a clustering model and uses the model to help define the segments for a segmented modeling project. ![](images/rn-pp-ts-cluster-segmented-cluster-tile.png) A new **Use for Segmentation** tab lets you enable the clusters to be used in the segmented modeling project. ![](images/rn-pp-ts-cluster-segmented-use-for-segmentation.png) The clustering model is saved as a model package in the Model Registry, so that you can use it for subsequent segmented modeling projects. Alternatively, you can save the clustering model to the Model Registry explicitly, without creating a segmented modeling project immediately. In this case, you can later create a segmented modeling project using the saved clustering model package. ![](images/rn-pp-ts-cluster-segmented-existing.png) The general availability of clustering brings some improvements: * A new [**Series Insights**](series-insights) tab specifically for clustering provides information on series/cluster relationships and details. * Clarified project setup that removes extraneous feature lists and window setup. * Clustering models, and their resulting segmented models, use a uniform quantity of data for predictions (with the size based on the training size for the original clustering model). * A [cluster buffer](ts-clustering#cluster-discovery) prevents data leakage and ensures that you are not training a clustering model into what will be the holdout partition in segmentation. * A toggle to control the 10% clustering buffer if you aren’t using the result for segmented modeling. 
![](images/cluster-segmented-buffer.png)

For more information, see the [Time series clustering](ts-clustering) documentation.

### Time series 5GB support {: #time-series-5gb-support }

With this deployment, time series projects on the DataRobot managed AI Platform can support datasets up to 5GB. Previously, the limit for time series projects on the cloud was 1GB. For more project- and platform-based information, see the [dataset requirements](file-types#time-series-file-import-sizes) reference.

### Accuracy Over Time enhancements {: #accuracy-over-time-enhancements }

Because multiseries modeling supports up to 1 million series and 1000 forecast distances, DataRobot previously limited the number of series for which accuracy calculations were performed as part of Autopilot. Now, the visualizations that use these calculations can automatically compute a limited number of series (up to a certain threshold) and then compute additional series on demand, either individually or in bulk.

![](images/aot-multi-2.png)

The visualizations that can leverage this functionality are:

* Accuracy Over Time
* Anomaly Over Time
* Forecast vs. Actual
* Model Comparison

For more information, see the [Accuracy Over Time for multiseries](aot#display-by-series) documentation.

### Native Prophet and Series Performance blueprints in Autopilot {: #native-prophet-and-series-performance-blueprints-in-autopilot }

Support for native Prophet, ETS, and TBATS models for single and multiseries time series projects was announced as generally available in the June release. (A detailed model description can be found for each model by accessing the model [blueprint](blueprints).) With this release, these models no longer run as part of Quick Autopilot. DataRobot runs them, as appropriate, during full Autopilot, and they are also available from the model Repository.
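As background for the ETS family named above, its simplest member, simple exponential smoothing, fits in a few lines. This is a generic textbook sketch, not DataRobot's blueprint implementation:

```python
def simple_exponential_smoothing(series, alpha=0.5):
    """One-step-ahead forecast: level_t = alpha*y_t + (1 - alpha)*level_{t-1}."""
    level = series[0]
    for y in series[1:]:
        level = alpha * y + (1 - alpha) * level
    return level  # forecast for the next step

history = [10, 12, 11, 13, 12]
print(simple_exponential_smoothing(history, alpha=0.5))  # 12.0
```

A higher `alpha` weights recent observations more heavily; full ETS blueprints additionally model trend and seasonality components.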
### Project duplication, with settings, for time series projects {: #project-duplication-with-settings-for-time-series-projects }

Now generally available, you can duplicate ("clone") any DataRobot project type, including unsupervised and time-aware projects like time series, OTV, and segmented modeling. Previously, [this capability](manage-projects#duplicate-a-project) was only available for AutoML projects (non-time-aware regression and classification). Duplicating a project provides an option to select the dataset only&mdash;which is faster than re-uploading it&mdash;or a dataset and project settings. For time-aware projects, this means cloning the target, the feature derivation and forecast window values, any selected calendars, known in advance (KA) features, series IDs, and all other time series settings. If you used the [data prep](ts-data-prep#data-prep-for-time-series) tool to address irregular time step issues, cloning uses the modified dataset (the dataset that was used for model building in the parent project). You can access the **Duplicate** option from either the projects dropdown (upper-right corner) or the Manage Project page.

![](images/proj-dup.png)

### Scoring Code for time series projects {: #scoring-code-for-time-series-projects }

Now generally available, you can export time series models in a Java-based Scoring Code package. [Scoring Code](scoring-code/index) is a portable, low-latency method of utilizing DataRobot models outside the DataRobot application. You can download a model's time series Scoring Code from the following locations:

* [Download from the Leaderboard](sc-download-leaderboard) (**Leaderboard > Predict > Portable Predictions**)
* [Download from the deployment](sc-download-deployment) (**Deployments > Predictions > Portable Predictions**)

=== "Download for segmented modeling projects"

    With [segmented modeling](ts-segmented), you can build individual models for segments of a multiseries project. DataRobot then merges these models into a Combined Model. You can [generate Scoring Code for the resulting Combined Model](sc-time-series#scoring-code-for-segmented-modeling-projects). To generate and download Scoring Code, each segment champion of the Combined Model must have Scoring Code:

    ![](images/sc-segmented-scoring-code.png)

    After you ensure each segment champion of the Combined Model has Scoring Code, you can download the Scoring Code [from the Leaderboard](sc-download-leaderboard), or you can deploy the Combined Model and download the Scoring Code [from the deployment](sc-download-deployment).

=== "Download with prediction intervals"

    You can now [include prediction intervals in the downloaded Scoring Code JAR](sc-time-series#prediction-intervals-in-scoring-code) for a time series model. You can download Scoring Code with prediction intervals [from the Leaderboard](sc-download-leaderboard) or [from a deployment](sc-download-deployment).

    ![](images/sc-prediction-intervals.png)

=== "Score data at the command line"

    You can score data at the command line using the downloaded time series Scoring Code. This release introduces efficient batch processing for time series Scoring Code to support scoring larger datasets. For more information, see the [Time series parameters for CLI scoring](sc-time-series#time-series-parameters-for-cli-scoring) documentation.

For more details on time series Scoring Code, see [Scoring Code for time series projects](sc-time-series).

### Autoexpansion of time series input in Prediction API {: #autoexpansion-of-time-series-input-in-prediction-api }

When making predictions with time series models via the API using a forecast point, you can now skip the forecast window in your prediction data. DataRobot generates a forecast point automatically via autoexpansion. Autoexpansion applies automatically if predictions are made for a specific forecast point and not a forecast range.
It also applies if a time series project has a regular time step and does not use Nowcasting.

### Calculate Feature Impact for each backtest {: #calculate-feature-impact-for-each-backtest }

Feature Impact provides a transparent overview of a model, which is especially useful in a model's compliance documentation. Time-dependent models trained on different backtests and holdout partitions can have different Feature Impact calculations for each backtest. Now generally available, you can calculate Feature Impact for each backtest using DataRobot's REST API, allowing you to inspect model stability over time by comparing Feature Impact scores from different backtests.

### Support for Manual mode introduced to segmented modeling {: #support-for-manual-mode-introduced-to-segmented-modeling }

With this release, you can now use Manual mode with [segmented modeling](ts-segmented). Previously, you could only choose Quick or full Autopilot. When using Manual mode with segmented modeling, DataRobot creates individual projects per segment and completes preparation as far as the modeling stage. However, DataRobot does not build models within those projects. It does create the Combined Model (as a placeholder), but does not select a champion. Manual mode gives you full control over which models are trained in each segment and selected as champions, without spending time building models you don't need.

### Deployment for time series segmented modeling {: #deployment-for-time-series-segmented-modeling }

To fully leverage the value of segmented modeling, you can deploy Combined Models like any other time series model. After selecting the champion model for each included project, you can deploy the Combined Model to create a "one-model" deployment for multiple segments; however, the individual segments in the deployed Combined Model still have their own segment champion models running in the deployment behind the scenes.
Creating a deployment allows you to use [DataRobot MLOps](mlops/index) for accuracy monitoring, prediction intervals, challenger models, and retraining.

!!! note

    Time series segmented modeling deployments do not support data drift monitoring. For more information, see the [feature considerations](ts-segmented#combined-model-deployment-considerations).

After you complete the [segmented modeling workflow](ts-segmented#segmented-modeling-workflow) and Autopilot has finished, the **Model** tab contains one model. This model is the completed **Combined Model**. To deploy, click the **Combined Model**, click **Predict** > **Deploy**, and then click **Deploy model**.

![](images/ts-segmented-deploy-2.png)

After deploying a Combined Model, you can change the segment champion for a segment by cloning the deployed Combined Model and modifying the cloned model. This process is automatic and occurs when you attempt to change a segment's champion within a deployed Combined Model. The cloned model you can modify becomes the **Active Combined Model**. This process ensures stability in the deployed model while allowing you to test changes within the same segmented project.

!!! note

    Only one Combined Model on a project's Leaderboard can be the **Active Combined Model** (marked with a badge).

Once a **Combined Model** is deployed, it is labeled **Prediction API Enabled**. To modify this model, click the active and deployed **Combined Model**, and then in the **Segments** tab, click the segment you want to modify.

![](images/ts-segmented-deploy-3.png)

Next, [reassign the segment champion](ts-segmented#reassign-the-champion-model), and in the dialog box that appears, click **Yes, create new combined model**.

![](images/ts-segmented-deploy-4.png)

On the segment's **Leaderboard**, you can now access and modify the **Active Combined Model**. For more information, see the [Deploy a Combined Model](ts-segmented#deploy-a-combined-model) documentation.
#### New metric support for segmented projects {: #new-metric-support-for-segmented-projects }

The Combined Model&mdash;the main umbrella project that acts as a collection point for all segments in a time series [segmented modeling project](ts-segmented)&mdash;introduces support for RMSE-based metrics. In addition to earlier support for MAD, MAE, MAPE, MASE, and SMAPE, segmented projects now also support RMSE, RMSLE, and Theil’s U (weighted and unweighted).

### Retraining Combined Models now faster {: #retraining-combined-models-now-faster }

Now generally available, time series segmented models support retraining on the same feature list and blueprint as the original model without the need to rerun Autopilot or feature reduction. Previously, rerunning Autopilot was the only way to retrain this model type. This support creates parity between retraining a non-segmented time series model and retraining a segmented model. Because retraining leverages the feature reduction computations from the original model, only newly introduced features need to go through that process, saving time and adding flexibility. Note that retraining retrains the champion of a segment; it does not rerun the project and select a new champion.

## Public Preview {: #public-preview }

### Prediction Explanations for cluster models {: #prediction-explanations-for-cluster-models }

Now available as public preview, you can use Prediction Explanations with clustering to uncover which factors most contributed to any given row’s cluster assignment. With this insight, you can easily explain clustering model outcomes to stakeholders and identify high-impact factors to help focus their business strategies. Functioning very much like multiclass Prediction Explanations&mdash;but reporting on clusters instead of classes&mdash;cluster explanations are available from both the Leaderboard and deployments when enabled.
They are available for all XEMP-based clustering projects and are not available with time series.

![](images/pe-clustering-3.png)

**Required feature flag**: Enable Clustering Prediction Explanations

Public preview [documentation](cluster-pe).

### Period Accuracy allows focus on specific periods in training data {: #period-accuracy-allows-focus-on-specific-periods-in-training-data }

Available as public preview for OTV and time series projects, the Period Accuracy insight lets you define periods within your dataset and then compare their metric scores against the metric score of the model as a whole. Periods are defined in a separate CSV file that identifies rows to group based on the project’s date/time feature.

![](images/period-accuracy-2.png)

Once uploaded, and with the insight calculated, DataRobot provides a table of period-based results and an “over time” histogram for each period.

![](images/period-accuracy-1.png)

**Required feature flag**: Period Accuracy Insight

Public preview [documentation](ts-period-accuracy).

### Batch predictions for TTS and LSTM models {: #batch-predictions-for-tts-and-lstm-models }

Traditional Time Series (TTS) and Long Short-Term Memory (LSTM) models&mdash;sequence models that use autoregressive (AR) and moving average (MA) methods&mdash;are common in time series forecasting. Both AR and MA models typically require a complete history of past forecasts to make predictions. In contrast, other time series models only require a single row after feature derivation to make predictions. Previously, batch predictions couldn't accept historical data beyond the effective [feature derivation window (FDW)](glossary/index#feature-derivation-window) if the history exceeded the maximum size of each batch, while sequence models required complete historical data beyond the FDW. These requirements made sequence models incompatible with batch predictions.
Enabling this public preview feature removes those limitations to allow batch predictions for TTS and LSTM models. Time series Autopilot still doesn't include TTS or LSTM model blueprints; however, you can access the model blueprints in the model [Repository](repository). To allow batch predictions with TTS and LSTM models, this feature:

* Updates batch predictions to accept historical data up to the maximum batch size (equal to 50MB, or approximately a million rows of historical data).
* Updates TTS models to allow refitting on an incomplete history (if the complete history isn't provided).

If you don't provide sufficient forecast history at prediction time, you could encounter prediction inconsistencies. For more information on maintaining accuracy in TTS and LSTM models, see the [prediction accuracy considerations](pp-ts-tts-lstm-batch-pred#prediction-accuracy-considerations). With this feature enabled, you can access the [**Predictions > Make Predictions**](batch-pred) and [**Predictions > Job Definitions**](batch-pred-jobs) tabs of a deployed TTS or LSTM model.

![](images/pp-tts-lstm-batch-pred1.png)

**Required feature flag**: Enable TTS and LSTM Time Series Model Batch Predictions

Public preview [documentation](pp-ts-tts-lstm-batch-pred).

### Time series model package prediction intervals {: #time-series-model-package-prediction-intervals }

Now available for public preview, you can enable the computation of a model's time series prediction intervals (from 1 to 100) during model package generation. To run a DataRobot time series model in a remote prediction environment, you download a model package (.mlpkg file) from the model's deployment or the Leaderboard. In both locations, you can now choose to **Compute prediction intervals** during model package generation. You can then run prediction jobs with a [portable prediction server (PPS)](portable-pps) outside DataRobot.
Before you download a model package with prediction intervals from a deployment, ensure that your deployment supports model package downloads. The deployment must have a DataRobot build environment and an *external* prediction environment, which you can verify using the [**Governance Lens**](gov-lens) in the deployment inventory:

![](images/pps-1.png)

To download a model package with prediction intervals from a *deployment*, in the external deployment, use the **Predictions > Portable Predictions** tab:

![](images/pp-ts-deploy-pred-int.png)

To download a model package with prediction intervals from a model in the *Leaderboard*, use the **Predict > Deploy** or **Predict > Portable Predictions** tab.

=== "Deploy tab download"

    ![](images/pp-ts-leaderboard-pred-int.png)

=== "Portable Prediction Server tab download"

    ![](images/pp-ts-leaderboard-pps-pred-int.png)

**Required feature flag**: Enable computation of all Time-Series Intervals for .mlpkg

Public preview [documentation](pp-ts-pred-intervals-mlpkg).
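Conceptually, a prediction interval offsets the point forecast by empirical quantiles of held-out forecast errors. A generic sketch follows (DataRobot's exact interval computation is internal to the model package; the function and data below are only an illustration):

```python
def prediction_interval(point_forecast, residuals, coverage=80):
    """Offset a point forecast by empirical residual quantiles.

    A common, generic approach to interval construction -- not DataRobot's
    exact method. `coverage` is the interval width in percent (1-100).
    """
    tail = (100 - coverage) / 200.0          # e.g., 0.10 per side for 80%
    ordered = sorted(residuals)
    lo = ordered[int(tail * len(ordered))]
    hi = ordered[int((1 - tail) * len(ordered)) - 1]
    return point_forecast + lo, point_forecast + hi

# Hypothetical residuals (actual minus predicted) collected from backtests.
residuals = [-3, -2, -1, -1, 0, 0, 1, 1, 2, 3]
print(prediction_interval(100.0, residuals, coverage=80))  # (98.0, 102.0)
```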
---
title: Data (V9.0)
description: DataRobot Release 9.0 data release announcements.
---

# Data V9.0 {: #data-v90 }

The following table lists each new feature.

Name | GA | Public Preview
---------- | ---- | ---
[Bulk action capabilities added to the AI Catalog](#bulk-action-capabilities-added-to-the-ai-catalog) | ✔ |
[Improved join feature type compatibility in Feature Discovery](#improved-join-feature-type-compatibility-in-feature-discovery) | ✔ |
[Feature Discovery explores Latest features within an FDW by default](#feature-discovery-explores-latest-features-within-an-fdw-by-default) | ✔ |
[Feature Discovery memory improvements](#feature-discovery-memory-improvements) | ✔ |
[Connect to Snowflake using external OAuth](#connect-to-snowflake-using-external-oauth) | ✔ |
[Added support for manual transforms of text features](#added-support-for-manual-transforms-of-text-features) | ✔ |
[Feature cache for Feature Discovery deployments](#feature-cache-for-feature-discovery-deployments) | | ✔
[Speed improvements to the Relationship Quality Assessment](#speed-improvements-to-relationship-quality-assessment) | | ✔
[New data connection UI](#new-data-connection-ui) | | ✔

## GA {: #ga }

### Bulk action capabilities added to the AI Catalog {: #bulk-action-capabilities-added-to-the-ai-catalog }

With this release, you can share, tag, download, and delete multiple AI Catalog assets at once, making working with these assets more efficient. In the AI Catalog, select the box to the left of the asset(s) you want to manage, then select the appropriate action at the top.

![](images/ai-bulk-action.png)

For more information, see the documentation for [managing catalog assets](catalog-asset#bulk-actions-on-datasets).

### Improved join feature type compatibility in Feature Discovery {: #improved-join-feature-type-compatibility-in-feature-discovery }

In Feature Discovery projects, you can now join secondary datasets using columns of different types.
Previously, columns had to be the same type to execute a join. For information on join compatibility, see the [Feature Discovery](fd-overview#set-join-conditions) documentation.

### Feature Discovery explores Latest features within an FDW by default {: #feature-discovery-explores-latest-features-within-an-fdw-by-default }

As part of the Feature Discovery process, DataRobot now defaults to a new setting, _Latest within window_, when performing feature engineering. This new setting explores _Latest_ values within the defined feature derivation window (FDW), as opposed to _Latest_, which generates _Latest_ values by exploring all historical data up until the end point of any defined FDWs. You can change the default settings in **Feature Discovery Settings > Feature Engineering**.

![](images/safer-feature-engineering-controls.png)

For more information, see the [Feature Discovery](fd-overview#feature-engineering-controls) documentation.

### Feature Discovery memory improvements {: #feature-discovery-memory-improvements }

Feature Discovery projects now use less memory, improving overall performance and reducing the risk of error.

### Connect to Snowflake using external OAuth {: #connect-to-snowflake-using-external-oauth }

Now generally available, Snowflake users can set up a Snowflake data connection in DataRobot using an external identity provider (IdP)&mdash;either Okta or Azure Active Directory&mdash;for user authentication through OAuth single sign-on (SSO). For more information, see the [Snowflake External OAuth](dc-snowflake#snowflake-external-oauth) documentation.

### Added support for manual transforms of text features {: #added-support-for-manual-transforms-of-text-features }

With this release, DataRobot now allows manual, user-created variable type transformations from categorical-to-text even when a feature is flagged as having "too many values".
These transformed variables are not included in the Informative Features list, but can be manually added to a feature list for modeling.

## Public Preview {: #public-preview }

### Feature cache for Feature Discovery deployments {: #feature-cache-for-feature-discovery-deployments }

Now available for public preview, you can schedule a feature cache for Feature Discovery deployments, which instructs DataRobot to pre-compute and store features before making predictions. Generating these features in advance makes single-record, low-latency scoring possible for Feature Discovery projects. To enable the feature cache, go to the **Settings** tab of a Feature Discovery deployment. Then, turn on the **Feature Cache** toggle and choose a schedule for DataRobot to update cached features.

![](images/ft-cache-2.png)

Once the feature cache is enabled and configured in the deployment's settings, DataRobot caches features and stores them in a database. When new predictions are made, the primary dataset is sent to the prediction endpoint, which enriches the data from the cache and returns the prediction response. The feature cache is then periodically updated based on the specified schedule.

**Required feature flag**: Enable Feature Cache for Feature Discovery

Public preview [documentation](safer-ft-cache).

### Speed improvements to Relationship Quality Assessment {: #speed-improvements-to-relationship-quality-assessment }

The Relationship Quality Assessment (RQA) allows you to test relationship configurations in Feature Discovery projects by verifying join keys, dataset selection, and time-aware settings&mdash;the results help you spot and fix configuration issues before EDA2 begins. Typically, finding the best configuration for a given use case requires multiple relationship iterations; however, RQA run times were previously too long to make this feasible.
To improve run times, DataRobot now subsamples approximately 10% of the primary dataset, speeding up the computation without impacting the enrichment rate estimation accuracy or the results of the assessment. After the assessment is done, the sampling percentage is included at the top of the report.

![](images/rqa-speed-1.png)

**Required feature flag**: Enable Feature Discovery Relationship Quality Assessment Speedup

Public preview [documentation](rqa-speed).

### New data connection UI {: #new-data-connection-ui }

Now available for public preview, DataRobot introduces improvements to the data connection user interface that simplify the process of adding and configuring data connections from the **AI Catalog > Data Connection** page. Instead of opening multiple windows to set up a data connection, after selecting a data store, you can configure parameters and authenticate credentials in the same window. For each data connection, only the required fields are displayed; however, you can define additional parameters under **Advanced Options** at the bottom of the page.

![](images/connect-ui-5.png)

Using credentials to connect to data sources has also been simplified. Once you enter credentials when configuring a data connection, DataRobot automatically applies these credentials when you create a new AI Catalog dataset from the connection.

**Required feature flag**: Enable New Data Connection UI

Public preview [documentation](new-data-ui).
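Returning to the feature cache described earlier: the pre-compute-then-enrich flow can be sketched generically. Everything below (the field names `customer_id` and `amount`, and the aggregation choices) is a hypothetical illustration, not DataRobot's implementation:

```python
import datetime

class FeatureCache:
    """Toy illustration of the cache-and-enrich flow: features are precomputed
    on a schedule and joined onto incoming rows at prediction time."""

    def __init__(self):
        self._store = {}
        self.last_refresh = None

    def refresh(self, secondary_rows):
        # Scheduled job: aggregate secondary data per entity and store it.
        features = {}
        for row in secondary_rows:
            entity = row["customer_id"]
            features.setdefault(entity, {"txn_count": 0, "txn_total": 0.0})
            features[entity]["txn_count"] += 1
            features[entity]["txn_total"] += row["amount"]
        self._store = features
        self.last_refresh = datetime.datetime.now(datetime.timezone.utc)

    def enrich(self, primary_row):
        # Prediction time: join cached features onto the primary record.
        cached = self._store.get(primary_row["customer_id"], {})
        return {**primary_row, **cached}

cache = FeatureCache()
cache.refresh([
    {"customer_id": "c1", "amount": 20.0},
    {"customer_id": "c1", "amount": 5.0},
])
print(cache.enrich({"customer_id": "c1", "age": 34}))
# {'customer_id': 'c1', 'age': 34, 'txn_count': 2, 'txn_total': 25.0}
```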
---
title: MLOps (V7.0)
description: DataRobot Release 7.0 MLOps release notes
---

# MLOps (V7.0) {: #mlops-v70 }

_March 15, 2021_

The DataRobot MLOps v7.0 release includes many new features and capabilities, described below. Release v7.0 provides updated UI string translations for the following languages:

* Japanese
* French
* Spanish

## New features and enhancements {: #new-features-and-enhancements }

See details of new deployment features below:

* [Configure accuracy metrics for a deployment](#configure-accuracy-metrics-for-a-deployment)
* [Now GA: Download the Portable Prediction Server image](#now-ga-download-the-portable-prediction-server-image)
* [Now GA: Create and manage prediction environments](#now-ga-create-and-manage-prediction-environments)
* [Now GA: Create external time series deployments](#now-ga-create-external-time-series-deployments)
* [Now GA: Time series deployment support for the Make Predictions tab](#now-ga-time-series-deployment-support-for-the-make-predictions-tab)
* [Now GA: Download Scoring Code with MLOps agent integration](#now-ga-download-scoring-code-with-mlops-agent-integration)

The following new deployment features are currently in public beta.
Contact your DataRobot representative for information on enabling them:

* [Challenger models now available for external deployments](#beta-challenger-models-now-available-for-external-deployments)
* [Improved monitoring support for multiclass deployments](#beta-improved-monitoring-support-for-multiclass-deployments)
* [Integrate the Tableau Analytics Extension with DataRobot deployments](#beta-integrate-the-tableau-analytics-extension-with-datarobot-deployments)
* [Feature Discovery deployments support governance workflow to manage secondary datasets](#beta-feature-discovery-deployments-support-governance-workflow-to-manage-secondary-datasets)

#### New model registry features {: #new-model-registry-features }

See details of new model registry features below:

{% if 'enterprise' in tags %}
* [Now GA: Manage custom model resources](#now-ga-manage-custom-model-resources)
{% endif %}
{% if 'enterprise' not in tags %}
* [Custom model deployment logs](#custom-model-deployment-logs)
{% endif %}
* [Now GA: Create unstructured custom inference models](#now-ga-create-unstructured-custom-inference-models)

The following new model registry features are currently in public beta. Contact your DataRobot representative for information on enabling them:

* [Custom models portable prediction support](#beta-custom-models-now-support-portable-predictions)
* [GitHub Enterprise and Bitbucket Server integration for custom models](#beta-github-enterprise-and-bitbucket-server-integration-for-custom-models)
* [Custom inference anomaly detection models](#custom-inference-anomaly-detection-models)

## New deployment features {: #new-deployment-features }

### Configure accuracy metrics for a deployment {: #configure-accuracy-metrics-for-a-deployment }

DataRobot now allows you to configure the accuracy metrics displayed as tiles in the [**Accuracy**](deploy-accuracy) tab for deployments.
You can choose the number of metrics displayed (up to 10), and select from a variety of metrics specific to the modeling project type (regression or binary classification).

![](images/acc-metric-2.png)

### Now GA: Download the Portable Prediction Server image {: #now-ga-download-the-portable-prediction-server-image }

You can now access and download the Portable Prediction Server (PPS) image from the [**Developer Tools**](api-key-mgmt) page. The PPS image is an all-in-one dockerized solution that runs a fully functional Prediction API server with full model monitoring support via the MLOps agent. Download the image and [configure](portable-pps) it to run prediction jobs outside of DataRobot and report prediction statistics back to a DataRobot deployment.

### Now GA: Create and manage prediction environments {: #now-ga-create-and-manage-prediction-environments }

Now generally available, you can manage [prediction environments](pred-env) for deployments running outside of DataRobot. This allows you to fully represent, from within DataRobot, the various external platforms running your models across your organization. Prediction environments support the ability to deploy and monitor a model in an external environment (via the [Portable Prediction Server](portable-pps) or with Scoring Code). Create a prediction environment, add it to DataRobot, and deploy a DataRobot model with the environment specified to establish a deployment scenario for models running outside of DataRobot.

![](images/predenv-rn.png)

### Now GA: Create external time series deployments {: #now-ga-create-external-time-series-deployments }

Now generally available, you can create a time series model, deploy that model external to DataRobot, and report prediction statistics back to DataRobot using the MLOps agent. This allows you to develop a time series model in DataRobot, but also export it in an easily usable form while maintaining DataRobot's deployment monitoring and management functionality.
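The report-back loop for an externally deployed model is: score outside DataRobot, record prediction statistics locally, and let the MLOps agent forward them to the deployment. The JSON record layout below is purely illustrative (the real agent defines its own spool format and configuration), as is the deployment ID:

```python
import json
import pathlib
import tempfile

def record_predictions(spool_path, deployment_id, predictions):
    """Append one monitoring record per scoring call. The record layout here
    is illustrative only -- the real MLOps agent defines its own format."""
    record = {
        "deployment_id": deployment_id,
        "row_count": len(predictions),
        "mean_prediction": sum(predictions) / len(predictions),
    }
    with open(spool_path, "a") as f:
        f.write(json.dumps(record) + "\n")

# Two scoring calls against a hypothetical deployment ID, spooled to disk
# for an agent-like process to pick up and forward later.
spool = pathlib.Path(tempfile.mkdtemp()) / "spool.jsonl"
record_predictions(spool, "abc123", [0.2, 0.4, 0.6])
record_predictions(spool, "abc123", [0.1, 0.9])

records = [json.loads(line) for line in spool.read_text().splitlines()]
print(len(records), records[0]["row_count"])  # 2 3
```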
### Now GA: Time series deployment support for the Make Predictions tab {: #now-ga-time-series-deployment-support-for-the-make-predictions-tab }

Now generally available, you can use the [**Make Predictions**](batch-pred#predictions-for-time-series-deployments) interface to efficiently score datasets with a deployed time series model. The interface shows information about the model’s feature derivation window and forecast rows, ensuring that the data you are trying to score meets the proper requirements. You can also configure the time series options for the dataset you want to score so that you can make predictions for a specific forecast point without having to modify the dataset.

![](images/ts-makepred-2.png)

### Now GA: Download Scoring Code with MLOps agent integration {: #now-ga-download-scoring-code-with-mlops-agent-integration }

Now generally available, you can download the [MLOps agent](agent-sc) packaged with Scoring Code directly from a deployment. This allows you to quickly integrate the agent and report model monitoring statistics back to DataRobot for models running outside of DataRobot. Once configured, the MLOps agent automatically starts at prediction time, reports metrics, and shuts down without additional setup. You can download the Scoring Code and MLOps agent package from a deployment's **Actions** menu.

## New public beta deployment features {: #new-public-beta-deployment-features }

### Beta: Challenger models now available for external deployments {: #beta-challenger-models-now-available-for-external-deployments }

Now available as a public beta feature, deployments in remote prediction environments can use the [**Challengers**](challengers) tab. Remote models can serve as the champion model, and you can compare them to DataRobot and custom model challengers.
If you want to replace the champion model, you can replace it with a custom or DataRobot challenger model and deploy the new champion to your remote prediction environment.

![](images/ext-champ-4.png)

### Beta: Improved monitoring support for multiclass deployments {: #beta-improved-monitoring-support-for-multiclass-deployments }

Now available as a public beta feature, you can deploy multiclass models (including custom models) with improved monitoring capabilities. Multiclass deployments can now report accuracy and data drift statistics with proper configuration. Additionally, multiclass models deployed to remote prediction environments can be monitored by the MLOps agent. Multiclass deployments offer class-based configuration to modify the data displayed on the graphs of the **Accuracy** and **Data Drift** tabs. By default, the graphs display the five most common classes in the training data. These monitoring capabilities are currently limited to ten classes.

![](images/multi-dep-5.png)

### Beta: Integrate the Tableau Analytics Extension with DataRobot deployments {: #beta-integrate-the-tableau-analytics-extension-with-datarobot-deployments }

You can now use the [Tableau Analytics Extension](tableau-extension) to integrate DataRobot predictions into your Tableau project. The extension supports the Tableau Analytics API, which allows its users to create endpoints that can send data to and receive data from Tableau. Using the extension, you can visualize prediction information and update it with information based on prediction responses from DataRobot. Establish a connection between DataRobot and Tableau, access the code snippet from your deployment, and make predictions from Tableau via the DataRobot Prediction API. Additionally, you can configure the code snippet from your deployment to re-map any features.
![](images/tab-ext-4.png) ### Beta: Feature Discovery deployments support governance workflow to manage secondary datasets {: #beta-feature-discovery-deployments-support-governance-workflow-to-manage-secondary-datasets } With this release, you can manage updates to [secondary datasets](fd-predict) in Feature Discovery deployments using the [governance workflow](dep-admin). After an admin sets up the “Secondary dataset configuration changed” [approval policy trigger](deploy-approval) in **User Settings > Approval Policies**, any changes to a secondary dataset will prompt a change request that must go through an approval process. The creator of the change request can view its status under **History** in **Deployments > Overview**, and reviewers will see a pending changes notification requesting that they review the update. ![](images/rn-safer-approval.png) ## New model registry features {: #new-model-registry-features_1 } {% if 'enterprise' in tags %} ### Now GA: Manage custom model resources {: #now-ga-manage-custom-model-resources } Custom model creators and organization admins can [configure the resources](custom-inf-model) allocated for a custom model. Configuring these resources facilitates smooth deployment and minimizes potential environment errors in production. You can determine the maximum amount of memory a model can use when making predictions, as well as set the maximum number of replicas executed in parallel to balance workloads when a custom model is running. ![](images/resource-2.png) {% endif %} {% if 'enterprise' not in tags %} ### Custom model deployment logs {: #custom-model-deployment-logs } When you deploy a custom model, it generates unique [log reports](custom-inf-model#custom-model-deployment-logs) that allow you to debug custom code and troubleshoot prediction request failures from within DataRobot. You can access two types of logs: * Runtime logs are captured from the Docker container running the custom model. 
Use them to troubleshoot failed prediction requests. The logs are cached for 5 minutes after you make a prediction request. * Deployment logs are automatically captured if the custom model fails while deploying. The logs are stored permanently as part of the deployment. ![](images/custom-log-2.png) {% endif %} ### Now GA: Create unstructured custom inference models {: #now-ga-create-unstructured-custom-inference-models } Now generally available, DataRobot supports [unstructured custom models](custom-model-assembly/index) that do not use the conventional regression or binary classification target types. Unstructured models do not need to conform to a specific input/output schema like regression and binary classification models do. They may use any arbitrary data for their inputs and outputs. This allows you to deploy and monitor any type of model with DataRobot, regardless of the target type, and affords you more control over how you read the data from a prediction request and response. ## New public beta model registry features {: #new-public-beta-model-registry-features } ### Beta: Custom Models now support portable predictions {: #beta-custom-models-now-support-portable-predictions } You can now deploy custom models to their own [Portable Prediction Server (PPS)](custom-pps). A downloadable bundle containing the custom model, a custom environment, and the MLOps agent is available to generate and launch a PPS image. Once started, the custom model PPS installation serves predictions via a REST API. The MLOps agent can be configured to report prediction statistics back to a DataRobot deployment for the custom model. ![](images/custom-pps-1.png) ### Beta: GitHub Enterprise and Bitbucket Server integration for custom models {: #beta-github-enterprise-and-bitbucket-server-integration-for-custom-models } Users can now register GitHub Enterprise and Bitbucket Server repositories in the Model Registry to pull artifacts into DataRobot and build custom inference models. 
Integrating either of these repositories allows you to move artifacts directly between a governed, code-centric machine learning development environment and a governed MLOps environment.

![](images/remote-2.png)

### Beta: Custom inference anomaly detection models {: #beta-custom-inference-anomaly-detection-models }

Now available as a public beta feature, you can create a custom inference model for anomaly detection problems. When creating a custom model, you can indicate "Anomaly Detection" as the target type. A [DRUM template](https://github.com/datarobot/datarobot-user-models/tree/master/model_templates/inference/python3_sklearn_anomaly){ target=_blank } is also available for anomaly detection models. For deployed custom inference anomaly detection models, note that the following functionality is not supported:

* Data drift
* Accuracy and association IDs
* Challenger models
* Humility rules
* Prediction intervals

![](images/anomaly-custom-rn.png)

_All product and company names are trademarks&trade; or registered&reg; trademarks of their respective holders. Use of them does not imply any affiliation with or endorsement by them_.
v7.0-mlops
---
title: Time series (V7.0)
description: DataRobot Release 7.0 time series release notes
---

# Time series (V7.0) {: #time-series-v70 }

_March 15, 2021_

The DataRobot v7.0.0 release includes many new [time series](#new-time-series-features) features, described below. See also the [AutoML release notes](v7.0.0-aml) for details on other new features.

## New time series features {: #new-time-series-features }

The new time series features are:

* [Smart and blended clustered blueprints increase accuracy](#smart-and-blended-clustered-blueprints-increase-accuracy-improve-build-time)
* [Compare anomaly detection performance between models from the Model Comparison tab](#now-ga-compare-anomaly-detection-performance-between-models-from-the-model-comparison-tab)
* [Monotonic modeling now available for time series projects](#now-ga-monotonic-modeling-now-available-for-time-series-projects-enabling-desired-outcomes-and-compliance)
* [Create external time series deployments](#now-ga-create-external-time-series-deployments-mlops-required)
* [Time series deployment support for the Make Predictions tab](#now-ga-time-series-deployment-support-for-the-make-predictions-tab-mlops-required)
* [Detailed time series feature derivation documentation now available](#detailed-time-series-feature-derivation-documentation-now-available)
* [Beta: Time series data prep tool addresses gap handling](#beta-time-series-data-prep-tool-addresses-gap-handling-to-allow-time-based-mode-with-irregular-time-steps)
* [Beta: Head-to-head prediction comparison for DataRobot and external time-aware projects](#beta-head-to-head-prediction-comparison-for-datarobot-and-external-time-aware-projects-help-drive-business-decisions)
* [Beta: New methods for modeling on new series improve prediction accuracy](#beta-new-methods-for-modeling-on-new-series-improve-prediction-accuracy-trust)
* [Beta: New series humility rule expands abilities to train](#beta-new-series-humility-rule-expands-abilities-to-train)

### Smart and blended clustered blueprints increase accuracy, improve build time {:
#smart-and-blended-clustered-blueprints-increase-accuracy-improve-build-time }

Now available from the Repository, new clustered blueprints create both a "smart" and blended version of assorted blueprints using different clusters with different feature lists. While DataRobot currently provides performance- and similarity-clustered blueprints, the new clustered blueprint selects cluster/feature list pairs from a selection of models, increasing accuracy and reducing build time. DataRobot now uses an extended feature list (any supported list, including custom lists, and especially effective with the Time Series Informative Feature list) that contains all the columns needed for three feature lists (“no-differencing,” “with differencing (latest),” and “with differencing (season)”). From these three groups, DataRobot constructs three column sets&mdash;basic feature columns, short periodicity features, and long periodicity features. It then divides modeling data according to those column sets and performs a grid search to tune the model.

![](images/rn-clustered-blue.png)

### Now GA: Compare anomaly detection performance between models from the Model Comparison tab {: #now-ga-compare-anomaly-detection-performance-between-models-from-the-model-comparison-tab }

The [**Model Comparison**](model-compare) tab introduces additional improvements for time-aware projects. Now, you can select **Anomaly Over Time** to easily compare anomaly detection models and find where they agree and disagree about discovered anomalies. Drag the anomaly threshold up and down to control what is considered an anomaly, and then set backtest and forecast distances to zero in on the results of most interest.
![](images/comp-anom-det-1.png)

### Now GA: Monotonic modeling now available for time series projects, enabling desired outcomes and compliance {: #now-ga-monotonic-modeling-now-available-for-time-series-projects-enabling-desired-outcomes-and-compliance }

In some regulated industries, you may need to enforce the directional relationship between a feature and the target (for example, higher home values should always lead to higher home insurance rates). DataRobot has supported this capability for AutoML projects, and with this release extends it to time series projects. By training with [monotonic constraints](feature-con) based on business knowledge, you force certain XGBoost models to learn only monotonic (always increasing or always decreasing) relationships between specific features (raw or derived) and the target. For example, you can configure a monotonic relationship between credit card balance and approval risk (e.g., a large negative balance implies higher approval risk). The model does not learn this relationship from the data; rather, the constraint is supplied as a known rule that the model applies when making predictions.

![](images/rn-monotonic-ts.png)

### Now GA: Create external time series deployments (MLOps required) {: #now-ga-create-external-time-series-deployments-mlops-required }

Now generally available, you can create a time series model, deploy that model outside of DataRobot, and report prediction statistics back to DataRobot using the MLOps agent. This allows you to develop a time series model in DataRobot, but also export it in an easily usable form while maintaining DataRobot's deployment monitoring and management functionality.
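In an external deployment, the exported model scores data on your own infrastructure while the MLOps agent reports prediction statistics back to the DataRobot deployment. The sketch below illustrates only the aggregation idea behind that reporting; the function and field names are hypothetical, not the actual agent API:

```python
from datetime import datetime, timezone

def aggregate_prediction_stats(predictions, window_start, window_end):
    """Summarize a batch of external predictions into the kind of payload
    a monitoring agent forwards to a deployment (illustrative schema only)."""
    return {
        "predictionCount": len(predictions),
        "meanPrediction": round(sum(predictions) / len(predictions), 2),
        "windowStart": window_start.isoformat(),
        "windowEnd": window_end.isoformat(),
    }

# Aggregate a day's worth of forecasts before they are shipped back.
stats = aggregate_prediction_stats(
    [102.5, 98.1, 110.0],
    window_start=datetime(2021, 3, 1, tzinfo=timezone.utc),
    window_end=datetime(2021, 3, 2, tzinfo=timezone.utc),
)
print(stats["predictionCount"])  # 3
```

The real agent handles buffering, retries, and authentication; this only shows the shape of the information that flows back to DataRobot.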
### Now GA: Time series deployment support for the Make Predictions tab (MLOps required) {: #now-ga-time-series-deployment-support-for-the-make-predictions-tab-mlops-required }

Now generally available, you can use the [**Make Predictions**](batch-pred#making-predictions-with-time-series-deployments) interface to efficiently score datasets with a deployed time series model. The interface allows you to see information about the model’s feature derivation window and forecast rows, ensuring that the data you are trying to score meets the proper requirements. You can also configure the time series options for the dataset you want to score so that you can make predictions for a specific forecast point without having to modify the dataset.

![](images/ts-makepred-2.png)

### Detailed time series feature derivation documentation now available {: #detailed-time-series-feature-derivation-documentation-now-available }

The in-app platform documentation now includes a more complete view of the time series feature derivation process. The newly added [documentation](feature-eng) clearly articulates the feature derivation process&mdash;the operators used and the feature names created&mdash;that produces the time series modeling dataset.

![](images/fear-2.png)

### Beta: Time series data prep tool addresses gap handling to allow time-based mode with irregular time steps {: #beta-time-series-data-prep-tool-addresses-gap-handling-to-allow-time-based-mode-with-irregular-time-steps }

When a dataset is detected as irregular, DataRobot allows only row-based mode for some time series, and the loss of time-based mode can be significant for series with gaps. The [time series data prep tool](ts-data-prep), available from the AI Catalog, provides a solution to this issue. It allows you to aggregate a dataset to a specified time step and impute the target for any missing rows.
You can modify the dataset using either a selector-based method (**Manual**) or an editable Spark SQL query and then save the new dataset back to the AI Catalog as a Spark asset.

![](images/rn-tsd-prep.png)

### Beta: Head-to-head prediction comparison for DataRobot and external time-aware projects helps drive business decisions {: #beta-head-to-head-prediction-comparison-for-datarobot-and-external-time-aware-projects-help-drive-business-decisions }

With this release, organizations with existing time-aware models outside of the DataRobot application can now create a prediction file from those models and use it for a [baseline accuracy comparison](cyob). This feature introduces three additional "flavors" based on existing metrics (RMSE, MAE, and LogLoss), all of which are scaled to the external baseline (uploaded predictions), to provide an at-a-glance accuracy measure from the Leaderboard. To use the feature, upload the file into DataRobot prior to modeling, apply it through **Advanced options > Time Series**, and select the appropriate metric.

Using standard RMSE:

![](images/cyob-3.png)

Using RMSE scaled to an external baseline:

![](images/cyob-4.png)

### Beta: New methods for modeling on new series improve prediction accuracy, trust {: #beta-new-methods-for-modeling-on-new-series-improve-prediction-accuracy-trust }

With this release, DataRobot introduces “cold start series” modeling&mdash;modeling on a series with insufficient historical data. For example, when performing demand forecasting for all SKUs in a store, you may want to predict sales for an item that has never been sold before (a “cold start” series). Previously, time series models produced an error when making predictions for a series that did not have the full history needed to derive features for a specific forecast point. Now, Autopilot includes new blueprints that support deriving features from partial history or no history at all.
These blueprints use a two-stage approach. In the first stage, the main effect model is built, which works well on averaged derived features. In the second stage, known-in-advance features (if available) are used to account for series effects on zero-history records, while series effects themselves are used for partial-history records. A special column is added to show how many data points (rows) were used in the feature derivation process.

![](images/rn-cold-start.png)

### Beta: New series humility rule expands abilities to train {: #beta-new-series-humility-rule-expands-abilities-to-train }

DataRobot has supported multiseries blueprints that allow predicting on new series&mdash;series that were not trained previously and do not have enough points in the training dataset for accurate predictions. This is useful, for example, in demand forecasting&mdash;when a new product is introduced, you may want initial sales predictions. Now, in conjunction with “cold start modeling” (modeling on a series with insufficient historical data), you can not only predict on new series but also keep accurate predictions for series with a history. This involves new Autopilot blueprints that support feature derivation using partial history or no history at all. With the support in place, you can set up [a humility rule](humility-settings) that:

* triggers off a new series (unseen in training data).
* takes a specified action.
* optionally, returns a custom error message.

![](images/ms-hum-2.png)

## Time series issues fixed in v7.0.0 {: #time-series-issues-fixed-in-v700 }

The following issues have been fixed since release 6.3.4.

* TIME-4983: Fixes an issue where Eureqa models that were cancelled or errored before completion could leave behind unwanted files that caused subsequent models run with the same parameters to fail.
* TIME-6083: Fixes an issue causing the temporal hierarchical model to error on exponential time series projects with the project metric as RMSLE.
The fix was made by removing this model from Autopilot and the Repository. This also removes the hierarchical and two-stage time series models under the same conditions.
* TIME-6776: DeepAR has been disabled for FW=[0, 0] projects.
* TIME-7322: Enables Prediction Explanations for time series "Recommended for Deployment" models.
* XAI-3266: Fixes generation of Shapley-based feature insights and Prediction Explanations for OTV models with time window sampling.

_All product and company names are trademarks&trade; or registered&reg; trademarks of their respective holders. Use of them does not imply any affiliation with or endorsement by them_.
v7.0.0-ats
---
title: AutoML (V7.0)
description: DataRobot Release 7.0 AutoML release notes
---

# AutoML (V7.0) {: #automl-v70 }

_March 15, 2021_

The DataRobot v7.0.0 release includes many [new UI](#other-new-features) and [API](#api-enhancements) capabilities, described below. See also the [time series release notes](v7.0.0-ats) for details on new time series features. See these important [deprecation](#deprecation-notices) announcements for information about changes to DataRobot's support for older, expiring functionality.

Release v7.0.0 provides updated UI string translations for the following languages:

* Japanese
* French
* Spanish

## In the spotlight... {: #in-the-spotlight }

The following features are some of the highlights of Release 7.0:

* [Bias detection and analysis](#now-ga-bias-detection-and-analysis-tools)
* [Visual AI image augmentation](#now-ga-accuracy-boosting-train-time-image-augmentation)

### Now GA: Bias detection and analysis tools {: #now-ga-bias-detection-and-analysis-tools }

**Bias and Fairness** testing, now publicly available, provides methods to calculate fairness for a binary classification model and to identify any biases in the model’s predictive behavior. Before model building, use [**Advanced Options > Bias and Fairness**](fairness-metrics) to define protected features and choose the appropriate fairness metric for your use case. A **Help me choose** questionnaire prompts DataRobot to recommend a metric. Once models are built, **Bias and Fairness** insights help identify bias in a model and visualize the results of root-cause analysis showing why, and from where, the model is learning bias from the training data.

* [**Per-Class Bias**](per-class) uses the fairness threshold and fairness score of each class to determine if certain classes are experiencing bias in the model’s predictive behavior.

![](images/rn-bias-1.png)

* [**Cross-Class Data Disparity**](cross-data) performs root-cause analysis of the model’s bias for the selected classes.
The **Data Disparity vs Feature Importance** chart identifies which features impact bias most; the **Feature details** chart reports where bias exists within the feature.

![](images/rn-bias-2.png)

* [**Cross-Class Accuracy**](cross-acc) helps you understand how the model performs and behaves on a given protected feature/class segment.

![](images/rn-bias-3.png)

### Now GA: Accuracy-boosting train-time image augmentation {: #now-ga-accuracy-boosting-train-time-image-augmentation }

Train-time image augmentation, a feature available for Visual AI projects, boosts accuracy on image datasets, especially those with few rows. More data usually means better accuracy and better generalization, but often you don’t have the resources (time, money, image availability, labeling expertise, etc.) to easily obtain it. With image augmentation you can create new image data from existing images by applying transformations.

![](images/aug-list-preview.png)

You can create image transformations prior to model-building via **Advanced options**. Or, after model building completes, you can continue to tune the image dataset from the Leaderboard's **Evaluate > Advanced Tuning** tab. A new "Image Augmentation" task will appear in image blueprints. Improvements to augmentation, based on Beta feedback, include support for multimodal projects, an increase in the size of augmentation that DataRobot can perform, and an improved UI for previewing augmentation strategies. Also, post-modeling tuning and new augmentation list creation have moved to **Advanced Tuning**.
![](images/vai-ttia-leaderboard-1.png)

## New features and enhancements {: #new-features-and-enhancements }

#### Feature Discovery enhancements {: #feature-discovery-enhancements }

See details of [Feature Discovery enhancements](#new-feature-discovery-features) below:

* [Increased blueprint support of summarized categorical features](#increased-blueprint-support-of-summarized-categorical-features-increases-accuracy-and-leaderboard-diversity)
* [Summarized categorical insights now filter stop words](#summarized-categorical-insights-now-filter-stop-words)
* [Beta: Feature Discovery now available for unsupervised projects](#beta-feature-discovery-now-available-for-unsupervised-projects)
* [Beta: Feature Discovery deployments support governance workflow to manage secondary datasets](#beta-feature-discovery-deployments-support-governance-workflow-to-manage-secondary-datasets-mlops-required)
* [Beta: Support for Spark SQL queries in dynamic datasets now available in Feature Discovery](#beta-support-for-spark-sql-queries-in-dynamic-datasets-now-available-in-feature-discovery-secondary-datasets)

#### Other new features {: #other-new-features }

See details of [other new features](#other-new-features_1) below:

* [Prediction threshold gets a UX upgrade](#prediction-threshold-gets-a-ux-upgrade)
* [Access additional Scoring Code models](#access-additional-scoring-code-models)
* [Developer Tools page now provides access to R and Python clients](#developer-tools-page-now-provides-access-to-r-and-python-clients)
* [Multiclass Feature Impact now supports custom sample sizes](#multiclass-feature-impact-now-supports-custom-sample-sizes)
* [Beta: Multilabel classification capabilities expand classification options](#beta-multilabel-classification-capabilities-expands-classification-options)
* [Beta: New Tiny BERT pretrained featurizer implementation extends NLP](#beta-new-tiny-bert-pretrained-featurizer-implementation-extends-nlp-with-no-fine-tuning-needed)
* [Beta: Scoring Code support
for Keras models](#beta-scoring-code-support-for-keras-models)

#### Changes for Self-Managed Administrators {: #changes-for-administrators }

* [Enhanced SAML SSO provides additional configuration options through the UI](#enhanced-saml-sso-provides-additional-configuration-options-through-the-ui)

## New Feature Discovery features {: #new-feature-discovery-features }

### Increased blueprint support of summarized categorical features increases accuracy and Leaderboard diversity {: #increased-blueprint-support-of-summarized-categorical-features-increases-accuracy-and-leaderboard-diversity }

The [summarized categorical](histogram#summarized-categorical-feature-details) variable type is for features that host a collection of categories (for example, the count of a product by category or department). If your original dataset does not have features of this type, DataRobot creates them (from secondary datasets) as part of the feature discovery process. With this release, DataRobot adds support for this feature type to a wider selection of blueprints, resulting in a greater number of models being run during Autopilot. This addition will be particularly impactful in [Feature Discovery](feature-discovery/index) projects with secondary datasets.

![](images/rn-summ-cat.png)

### Summarized categorical insights now filter stop words {: #summarized-categorical-insights-now-filter-stop-words }

With this release, insights for summarized categorical features now filter out stop words on demand (Category Cloud) and by default (Histogram) for single-token text. Removing stop words&mdash;commonly used terms that can be excluded from searches&mdash;improves interpretability when the words are not informative to the model. By filtering them out, users can focus on the important non-stop words to better understand their data.
![](images/rn-filter-stop.png) ### Beta: Feature Discovery now available for unsupervised projects {: #beta-feature-discovery-now-available-for-unsupervised-projects } Previously, Feature Discovery did not support [unsupervised learning](unsupervised/index) projects. While the option was visible at project start when "No Target" was chosen, the UI returned an error message if you tried to configure Feature Discovery settings. Now available as a beta feature, you can set unsupervised mode, add secondary datasets, define relationships, and start a project. DataRobot will generate secondary features as in a supervised project, while eliminating supervised feature reduction (which requires a target). ![](images/rn-ad-fd.png) ### Beta: Feature Discovery deployments support governance workflow to manage secondary datasets (MLOps required) {: #beta-feature-discovery-deployments-support-governance-workflow-to-manage-secondary-datasets-mlops-required } With this release, you can manage updates to [secondary datasets](fd-predict#use-an-alternate-configuration) in Feature Discovery deployments using the [governance workflow](dep-admin). After an admin sets up the “Secondary dataset configuration changed” [approval policy trigger](deploy-approval) in **User Settings > Approval Policies**, any changes to a secondary dataset will prompt a change request that must go through an approval process. The creator of the change request can view its status under **History** in **Deployments > Overview**, and reviewers will see a notification requesting that they review pending changes. 
![](images/rn-safer-approval.png)

### Beta: Support for Spark SQL queries in dynamic datasets now available in Feature Discovery secondary datasets {: #beta-support-for-spark-sql-queries-in-dynamic-datasets-now-available-in-feature-discovery-secondary-datasets }

DataRobot offers the ability to enrich, transform, shape, and blend together snapshotted (static) datasets using Spark SQL queries from within the **AI Catalog**. This new functionality adds support for dynamic Spark SQL in secondary datasets for [Feature Discovery](feature-discovery/index) projects. When enabled as a beta feature ("Enable Feature Discovery Support of Dynamic Spark SQL"), this new functionality increases flexibility in performing basic data prep. Authentication requirements remain the same.

![](images/rn-dynamic-spark-sql.png)

## Other new features {: #other-new-features_1 }

### Prediction threshold gets a UX upgrade {: #prediction-threshold-gets-a-ux-upgrade }

With this release, DataRobot has upgraded the user experience for setting prediction thresholds on the Leaderboard. First, upgrades to the components on the **ROC Curve**, **Profit Curve**, **Make Predictions**, and **Deploy** tabs make assigning or selecting a suggested prediction threshold easier. Next, there is now a convenient one-click copy between the display threshold and the prediction threshold on the **ROC Curve** and **Profit Curve** tabs. Finally, the selected prediction threshold is now synced across all tabs in a model and for model downloads (such as a model package (.mlpkg) file).

![](images/rn-pred-ux.png)

### Access additional Scoring Code models {: #access-additional-scoring-code-models }

In 7.0, Scoring Code coverage has increased.
The following models have been rewritten to include Scoring Code:

* [SVMR2 (Support Vector Regressor (Radial Kernel))](https://app.datarobot.com/model-docs/tasks/SVMR2-Support-Vector-Regressor-Radial-Kernel-.html){ target=_blank }
* [RULEFITC (RuleFit Classifier)](https://app.datarobot.com/model-docs/tasks/RULEFITC-RuleFit-Classifier.html){ target=_blank }

### Developer Tools page now provides access to R and Python clients {: #developer-tools-page-now-provides-access-to-r-and-python-clients }

New in this release, [**Developer Tools**](api-key-mgmt) now provides quick links to developer documentation. These include links to:

* Current [REST API](http://datarobot.com/apidocs/){ target=_blank }, [Python client API](https://datarobot-public-api-client.readthedocs-hosted.com/en/v2.23.0/){ target=_blank }, and [R client API](https://cran.r-project.org/web/packages/datarobot/index.html){ target=_blank } documentation.
* The [developer portal](http://developers.datarobot.com){ target=_blank }.
* The [GitHub community](https://github.com/datarobot-community){ target=_blank } repositories.

![](images/rn-dev-tools-api.png)

### Multiclass Feature Impact now supports custom sample sizes {: #multiclass-feature-impact-now-supports-custom-sample-sizes }

Multiclass projects can now compute **Feature Impact** using a custom sample size. This addresses inconsistencies in Feature Impact results and makes those results reproducible in a much more consistent way, reducing friction during the model validation process.

### Beta: Multilabel classification capabilities expand classification options {: #beta-multilabel-classification-capabilities-expands-classification-options }

[Multilabel modeling](multilabel), now available as a public beta feature, is a kind of classification task where each data instance (row in a dataset) is associated with none, one, or several labels.
Common uses are for text features with a list of topics (food, Boston, Italian) or images with a list of objects in them (a cat, two dogs, a bear). All the labels for a row build a _label set_ for the row. Multilabel classification then predicts label sets given new observations. While similar to multiclass modeling, multilabel modeling provides more flexibility.

| Data type | Description | Allowed as target? | Project type |
|-----------|-------------|--------------------|--------------|
| Categorical | Single category per row, mutually exclusive | Yes | Multiclass |
| Multicategorical | Multiple categories per row, non-exclusive | Yes | Multilabel |
| Summarized Categorical | Multiple categories per row, multiple instances of each category allowed | No | Multiregression (not yet available) |

![](images/rn-multilabel.png)

### Beta: New Tiny BERT pretrained featurizer implementation extends NLP with no fine-tuning needed {: #beta-new-tiny-bert-pretrained-featurizer-implementation-extends-nlp-with-no-fine-tuning-needed }

[BERT](https://arxiv.org/abs/1810.04805){ target=_blank } (Bidirectional Encoder Representations from Transformers) is Google's transformer-based de facto standard for natural language processing (NLP) transfer learning. Tiny BERT (a distilled, smaller version of BERT) is now available with certain blueprints in the DataRobot Repository. These blueprints provide pretrained feature extraction in the NLP field, similar to [Visual AI featurizers](vai-ref). However, for maximum flexibility, DataRobot's implementation offers two additional tunable pooling parameters&mdash;Max Pooling and Average Pooling. Tiny BERT blueprints are available for both UI and API users.
![](images/rn-tiny-bert.png) ### Beta: Scoring Code support for Keras models {: #beta-scoring-code-support-for-keras-models } Now publicly available, [Keras models](https://app.datarobot.com/model-docs/tasks/KERASC-Keras-Neural-Network-Classifier.html){ target=_blank } have been rewritten to include Scoring Code. ## New admin features {: #new-admin-features } ### Enhanced SAML SSO provides additional configuration options through the UI {: #enhanced-saml-sso-provides-additional-configuration-options-through-the-ui } *Self-Managed only:* [Enhanced SAML SSO](sso-ref), the new SSO configuration option, allows administrators to provision user roles, update user details (first/last name), and determine who has access to DataRobot. Using their existing identity provider (IdP)/SSO solution&mdash;Active Directory, OneLogin, Okta, for example&mdash;users can now seamlessly access DataRobot as long as they are logged in to their organization's IdP system. In addition, enhanced SSO supports flexibility with IdP metadata parameterization, including security parameters, SAML secrets, and user attribute, role, and groups mappings. Enhanced SAML SSO replaces [SAML SSO](manage-access), which will be deprecated in an upcoming release. ![](images/enhanced-sso.png) ## API enhancements {: #api-enhancements } The following is a summary of API enhancements. See the [changelog](http://apidocs.hq.datarobot.com/CHANGES.html#id1){ target=_blank } for more details and fixed issues. See the [API Support page](https://support.datarobot.com/hc/en-us/articles/215067826-READ-ME-Links-to-DataRobot-API-documentation?flash_digest=b04f17985f45363278b36889b35fe14aaac3dc1f#){ target=_blank } for API Documentation on each client. ### New features {: #new-features } * The lists of allowed and forbidden operations over DataStores and DataSources are now provided by new routes. 
* A new field, `canDelete`, has been added to the response of the `GET /api/v2/externalDataSources/` route, which lists all viewable data sources. ### Enhancements {: #enhancements } * Models can be retrained with custom monotonic constraints. * Models can be retrained with cross-validation. * Creating a datetime model using `POST /api/v2/projects/(projectId)/datetimeModels/` without specifying a featurelist will result in using the recommended featurelist for the specified blueprint. If there is no recommended featurelist, the project's default featurelist will be used instead. * The new string field parameter `unsupervisedType` has been added to two endpoints to set the type of unsupervised project as anomaly or clustering when a project is run in `unsupervisedMode`. * A new field, `canUseDatasetData`, indicates whether a user can use dataset data for download, project creation, custom model training, or providing predictions. !!! tip DataRobot highly recommends updating to the latest API clients for Python and R. ## Deprecation notices {: #deprecation-notices } ### Scaleout models deprecated {: #scaleout-models-deprecated } Scaleout models will be deprecated in a future release and should not be used to train new models. ## Customer-reported issues fixed in v7.0.0 {: #customer-reported-issues-fixed-in-v700 } The following issues have been fixed since release v6.3.4. ### Platform {: #platform } * DM-4525: Data connections are now properly listed on the Credentials Management page when the UI language is set to non-English. * DM-4637: Adds a new config setting, `KERBEROS_PEM_ENABLE`, which when set to `True` allows the `kinit` command to use a service ticket using `PKINIT` preauth instead of using a keytab. * DM-4696: The following variables have changed: * The `AZURE_BLOB_STORAGE_CHUNK_SIZE` env variable is configurable (99MB default). * The `AZURE_BLOB_STORAGE_TIMEOUT` env variable is configurable (20-second default).
* EP-506: Fixes an issue with database timeout during index create/update. ### Platform {: #platform_1 } * EP-750: Fixes an issue with systems using external directory services where some DataRobot containers were unable to resolve the `datarobot_user` user. This change introduces the `os_configuration.remote_user_credentials` parameter, which maps the external directory service credentials into DataRobot containers when set to `true`. * EP-795: For third-party tools, the admin interface for RabbitMQ can now include additional headers. * PLT-3052: Fixes LDAP group mapping for groups with special symbols in the name. ### Modeling {: #modeling } * MODEL-5033: Modifies certain Keras Repository blueprints that make use of One Hot Encoding numerics so that they perform NDC before One Hot Encoding. This fix ensures prediction consistency between the Modeling API and Batch API. _All product and company names are trademarks&trade; or registered&reg; trademarks of their respective holders. Use of them does not imply any affiliation with or endorsement by them_.
v7.0.0-aml
--- title: Version 7.0.x description: DataRobot Release 7.0 release announcements index page. --- # Version 7.0.x {: #version-70x } Following are the AutoML, time series, and MLOps announcements for the features that make up DataRobot's 7.0.x releases, first released _March 15, 2021_. * [v7.0.x AutoML](v7.0.0-aml) * [v7.0.x time series](v7.0.0-ats) * [v7.0.x MLOps](v7.0-mlops) * [Maintenance release notes](7.0-maintenance/index) for DataRobot v7.0.x.
index
# Version 8.0.10 {: #version-8010 } _December 19, 2022_ The DataRobot v8.0.10 release includes some fixed issues in the DataRobot Self-Managed AI Platform. See the v8.0.0 release notes for: * [Features introduced in v8.0.0](v8.0.0-aml) In addition to the issues listed below, DataRobot addressed several outstanding security issues. This release/update is part of DataRobot's continual effort to secure the product and platform. ## Issues fixed in v8.0.10 {: #issues-fixed-in-v8010 } The following issues have been fixed since release v8.0.9: ### Enterprise {: #enterprise } - EPS-691: Upgrades Spark from v3.0.1 to v3.2.2. As a result, if you are using BigQuery drivers version 1.2.21.1025 or earlier, you must update them and set up a new data connection with the **Google Bigquery - 2022** driver. Users running Hive on non-supported Hadoop versions will get an exception while uploading predictions from a Hive data source. - EPS-688: Disables host header validation by default. Documentation has been updated to explain how to enable host header validation by configuring the `ALLOWED_HOSTS` setting. ### Platform {: #platform } - ARCH-4023: Updates the HTTP Strict Transport Security (HSTS) configuration settings to a value of 1 year. - DM-8113: Fixes an issue causing Trino data connections to intermittently fail when performing a transaction. - DM-7445: Upgrades the Python library `numpy` to 1.21.6. Users should not experience backward compatibility issues as a result of this change. The `numpy` library handles various numerical transformations related to data processing and preparation in the platform. DataRobot regularly upgrades libraries to improve speed, security, and predictive performance. Note that users who have trained and deployed a model using .xpt or .xport file formats may see predictions change (typically less than 1%).
These changes are due to incremental differences in the treatment of floats between the current and target upgrade versions of the `numpy` library. - PLT-7659: User roles now display in the **Roles** column on the **Users list** page. - COPS-11659: Fixes an issue causing SELinux to deny `logrotate` access to files within the DataRobot `app_logs_dir` (e.g., `/opt/datarobot/logs`). ### Core modeling {: #core-modeling } - MODEL-7313: The following fields are now configurable for Self-Managed AI Platform installations: `FROZEN_THRESHOLD`, `MAX_AUTOPILOT_SIZE`, `EDA_SAMPLE_THRESHOLD`, `CROSS_VALIDATION_THRESHOLD`, and `SLIM_RUN_THRESHOLD`. ### MLOps {: #mlops } - MMM-9717: Fixes the ability for Self-Managed AI Platform users to compute insights for challenger model comparisons. ### Time series {: #time-series } - TIME-12076: Fixes an issue causing prediction intervals differences in the UI and downloaded predictions for time series and multiseries projects. _All product and company names are trademarks&trade; or registered&reg; trademarks of their respective holders. Use of them does not imply any affiliation with or endorsement by them_.
v8.0.10-aml
# Version 8.0.2 {: #version-802 } _April 30, 2022_ The DataRobot v8.0.2 release includes some fixed issues in the DataRobot Self-Managed AI Platform. See the v8.0.0 release notes for: * [Features introduced in v8.0.0](v8.0.0-aml) ## Issues fixed in v8.0.2 {: #issues-fixed-in-v802 } The following issues have been fixed since Enterprise release v8.0.1: ### Platform {: #platform } * DM-5972: List schemas for BigQuery JDBC data connections now only return items from the current catalog asset or project. This fix also speeds up list tables by restricting to the current project if the schema is specified but the catalog asset is not. * PLT-6559: Enhanced global SAML SSO management is now generally available for 8.0.x. ### MLOps {: #mlops } * MMM-9371: Fixes an issue that prevented enabling the "Enable ability to reset deployment statistics" feature flag for Self-Managed AI Platform installations. * RAPTOR-7475: The UID and GID for Custom Inference model containers are now configurable (`LONG_RUNNING_SERVICES_SECURITY_CONTEXT_USER_ID`, `LONG_RUNNING_SERVICES_SECURITY_CONTEXT_GROUP_ID`) and no longer fixed to `1000`. * RAPTOR-7478: Fixes an issue with date column handling in Compliance Docs for Custom Inference Models. Date columns are now excluded from Feature Fit and Feature Effects. ### Time series {: #time-series } * TIME-10937: Fixes an issue preventing users from creating a time series anomaly detection project with datasets larger than 800 MB via the UI. ### Visual AI {: #visual-ai } * VIZAI-3312: Fixes an issue with CSV and PNG exports for external dataset confusion charts. _All product and company names are trademarks&trade; or registered&reg; trademarks of their respective holders. Use of them does not imply any affiliation with or endorsement by them_.
v8.0.2-aml
# Version 8.0.12 {: #version-8012 } _February 27, 2023_ The DataRobot v8.0.12 release includes some fixed issues in the DataRobot Self-Managed AI Platform. See the v8.0.0 release notes for: * [Features introduced in v8.0.0](v8.0.0-aml) In addition to the issues listed below, DataRobot addressed several outstanding security issues. This release/update is part of DataRobot's continual effort to secure the product and platform. ## Issues fixed in v8.0.12 {: #issues-fixed-in-v8012 } The following issues have been fixed since release v8.0.11: ### Core modeling {: #core-modeling } - MODEL-9497: Fixes the Composable ML task parameter validation logic. The submitted task parameter is now validated against both project settings and corresponding allowed values. ### Data management {: #data-management } - DM-6623: Fixes an issue causing the **Grant access** button to not appear during ADLS authentication. ### Model management {: #model-management } - MMM-11977: Adds the parameter `MMM_MODMON_PREDICTION_RESULTS_DELETE_QUERY_BATCH_SIZE` to make batch size configurable for `PredictionResultCleanup`. ### Predictions {: #predictions } - PRED-7826: Fixes an issue where a schema was required for JDBC connections that do not require one. _All product and company names are trademarks&trade; or registered&reg; trademarks of their respective holders. Use of them does not imply any affiliation with or endorsement by them_.
v8.0.12-aml
# Version 8.0.16 {: #version-8016 } _June 26, 2023_ The DataRobot v8.0.16 release includes some fixed issues in the DataRobot Self-Managed AI Platform. See the v8.0.0 release notes for: * [Features introduced in v8.0.0](v8.0.0-aml) In addition to the issues listed below, DataRobot addressed several outstanding security issues. This release/update is part of DataRobot's continual effort to secure the product and platform. ## Issues fixed in v8.0.16 {: #issues-fixed-in-v8016 } The following issues have been fixed since release v8.0.15: ### MLOps {: #mlops } - MMM-13255: Fixes the calculation of "Accuracy" and "Balanced accuracy" metrics on the model management **Accuracy** tab for models with a threshold other than 0.5. ### Platform {: #platform } - PY3-4137: Fixes an issue for Python 2 projects that were not shown as deprecated after the cluster upgrade to 8.0. ### Predictions {: #predictions } - PRED-8856: Fixes an issue when writing predictions using JDBC for time-series segmented models. _All product and company names are trademarks&trade; or registered&reg; trademarks of their respective holders. Use of them does not imply any affiliation with or endorsement by them_.
v8.0.16-aml
# Version 8.0.6 {: #version-806 } _August 29, 2022_ The DataRobot v8.0.6 release includes a [new public preview feature](#project-duplication-with-settings-for-time-series-projects) as well as fixed issues in the DataRobot Self-Managed AI Platform. See the v8.0.0 release notes for: * [Features introduced in v8.0.0](v8.0.0-aml) In addition to the issues listed below, DataRobot addressed several outstanding security issues. This release/update is part of DataRobot's continual effort to secure the product and platform. ## Project duplication, with settings, for time series projects {: #project-duplication-with-settings-for-time-series-projects } Now available for public preview, you can duplicate ("clone") any DataRobot project type, including unsupervised and time-aware projects like time series, OTV, and segmented modeling. Previously, [this capability](manage-projects#duplicate-a-project) was only available for AutoML projects (non-time-aware regression and classification). Duplicating a project provides an option to select the dataset only&mdash;which is faster than re-uploading it&mdash;or a dataset and project settings. For time-aware projects, this means cloning the target, the feature derivation and forecast window values, any selected calendars, known in advance (KA) features, series IDs&mdash;all time series settings. If you used the [data prep](ts-data-prep#data-prep-for-time-series) tool to address irregular time step issues, cloning uses the modified dataset (which is the dataset used for model building in the parent project). You can access the **Duplicate** option from the Projects dropdown (upper right corner) or the Manage Project page.
![](images/proj-dup.png) **Required feature flag:** Enable Cloning Time-Aware and Unsupervised Projects with Project Settings ## Issues fixed in v8.0.6 {: #issues-fixed-in-v806 } The following issues have been fixed since release v8.0.5: ### Platform {: #platform } - DM-6241: Larger S3 datasets are no longer truncated during ingestion when using an S3 connector. ### MLOps {: #mlops } - MMM-10386: Shows API-provided error messages during deployment creation instead of generic messages. - MMM-10443: Fixes an issue preventing the use of externally submitted scoring data for challenger scoring. ### Predictions {: #predictions } - PRED-7685: Fixes an issue where Prediction Explanations could not be enabled for anomaly detection deployments through the UI. ### Time series {: #time-series } - TIME-11451: The EWMA field can now be left blank. - TIME-11579: Requests in eqpy are bumped to >2.20. - TIME-11676: Optimizes Exponential Weighted Moving Average and Standard Deviation Features for faster execution. _All product and company names are trademarks&trade; or registered&reg; trademarks of their respective holders. Use of them does not imply any affiliation with or endorsement by them_.
v8.0.6-aml
# Version 8.0.4 {: #version-804 } _June 29, 2022_ The DataRobot v8.0.4 release includes some fixed issues in the DataRobot Self-Managed AI Platform. See the v8.0.0 release notes for: * [Features introduced in v8.0.0](v8.0.0-aml) ## Issues fixed in v8.0.4 {: #issues-fixed-in-v804 } The following issues have been fixed since release v8.0.3: ### Enterprise {: #enterprise } * EPS-451: The `eks-custom-tasks.yaml`, `eks-mlops.yaml`, and `multi-node-mlops.yaml` example configuration files no longer use `journald` as the `docker_log_driver`. ### Platform {: #platform } * DM-5972: Restricts list schemas in BigQuery JDBC connections to only return items from the current catalog or project, speeding up list tables when the schema is specified and the catalog is not. * DM-6784: The endpoint `GET /externalDataStore/` now returns data connection details with only OAuth fields filtered out. * DM-6789: Ensures that list tables properly restrict listed tables based on the specified project ID for BigQuery data connections. * DM-6791: Fixes an issue causing infinite loading in BigQuery data connection schemas. * PLT-6649: Fixes an issue where the user is returned to their identity provider after logging out if `SKIP_LOGIN_UI_IF_SAML_SSO_IS_ENFORCED=True`, causing the user to be automatically signed back in instead of remaining on the DataRobot login page. ### Predictions {: #predictions } * PRED-7724: Fixes an issue when creating batch prediction jobs without the organization's identification. * PRED-7750: `predictionscompute` and `predictionsccomputerpy2` now successfully authenticate using Kerberos when HDFS is used for file storage. ### Time series {: #time-series } * TIME-11049: Fixes an issue where identified time series properties like stationarity, exponential trend, and seasonality can differ between time series projects using the same dataset when the subset of the longest series in the dataset is larger than 10 series.
_All product and company names are trademarks&trade; or registered&reg; trademarks of their respective holders. Use of them does not imply any affiliation with or endorsement by them_.
v8.0.4-aml
# Version 8.0.14 {: #version-8014 } _April 24, 2023_ The DataRobot v8.0.14 release includes some fixed issues in the DataRobot Self-Managed AI Platform. See the v8.0.0 release notes for: * [Features introduced in v8.0.0](v8.0.0-aml) In addition to the issues listed below, DataRobot addressed several outstanding security issues. This release/update is part of DataRobot's continual effort to secure the product and platform. ## Issues fixed in v8.0.14 {: #issues-fixed-in-v8014 } The following issues have been fixed since release v8.0.13: ### Core modeling {: #core-modeling } - MODEL-11218: Fixes an issue where Keras models tuned in the API with a single layer resulted in an error. ### MLOps {: #mlops } - AGENT-4288: Deletes an unnecessary import line at the beginning of the MLOps agent code snippet that caused an import error. - MMM-12255: Adds `Project ID` and `Project Name` to User Activity Monitor entries for "Deployment Add" events. - MMM-12742: Marks manually triggered retraining policy runs as failed when encountering a credential error. - MMM-12898: Fixes the target histogram for external deployments built inside DataRobot. _All product and company names are trademarks&trade; or registered&reg; trademarks of their respective holders. Use of them does not imply any affiliation with or endorsement by them_.
v8.0.14-aml
# Version 8.0.8 {: #version-808 } _November 7, 2022_ The DataRobot v8.0.8 release includes some fixed issues in the DataRobot Self-Managed AI Platform. See the v8.0.0 release notes for: * [Features introduced in v8.0.0](v8.0.0-aml) In addition to the issues listed below, DataRobot addressed several outstanding security issues. This release/update is part of DataRobot's continual effort to secure the product and platform. Note that DataRobot has been tracking developments in the CVE-2022-42889 (aka Text4Shell) remote code security issue. This vulnerability was not exploitable in DataRobot's software, but relevant version updates have been made in order to remove any concerns users might have. ## Issues fixed in v8.0.8 {: #issues-fixed-in-v808 } The following issues have been fixed since release v8.0.7: ### Data management {: #data-management } * DM-7590: Data engine and worker ingest jobs now only support Python 3. ### Predictions {: #predictions } * CODEGEN-1485: If Scoring Code is enabled but wasn't generated after training a model, an in-app message now displays with the reason DataRobot did not generate Scoring Code. _All product and company names are trademarks&trade; or registered&reg; trademarks of their respective holders. Use of them does not imply any affiliation with or endorsement by them_.
v8.0.8-aml
# Version 8.0.11 {: #version-8011 } _January 23, 2023_ The DataRobot v8.0.11 release includes some fixed issues in the DataRobot Self-Managed AI Platform. See the v8.0.0 release notes for: * [Features introduced in v8.0.0](v8.0.0-aml) In addition to the issues listed below, DataRobot addressed several outstanding security issues. This release/update is part of DataRobot's continual effort to secure the product and platform. ## Issues fixed in v8.0.11 {: #issues-fixed-in-v8011 } The following issues have been fixed since release v8.0.10: ### Platform {: #platform } - PLT-8842: Makes LinkedIn and Kaggle profile fields optional on the User Profile page. ### Time series {: #time-series } - TIME-10673: Ensures that the contained floating point value is always used when scoring time series models. _All product and company names are trademarks&trade; or registered&reg; trademarks of their respective holders. Use of them does not imply any affiliation with or endorsement by them_.
v8.0.11-aml
# Version 8.0.1 {: #version-801 } _March 30, 2022_ The DataRobot v8.0.1 release includes some fixed issues in the DataRobot Self-Managed AI Platform. See the v8.0.0 release notes for: * [Features introduced in v8.0.0](v8.0.0-aml) ## Issues fixed in v8.0.1 {: #issues-fixed-in-v801 } The following issues have been fixed since Enterprise release v8.0.0: ### Time series {: #time-series } * TIME-9875: Fixes an issue causing the feature over time calculation to fail when the feature is categorical and has a unique value. _All product and company names are trademarks&trade; or registered&reg; trademarks of their respective holders. Use of them does not imply any affiliation with or endorsement by them_.
v8.0.1-aml
--- title: V8.0.x maintenance releases description: Maintenance releases for the DataRobot Release 8.0 major release. --- # V8.0.x maintenance releases {: #v80x-maintenance-releases } The following maintenance release notes include some fixed issues in the DataRobot Self-Managed AI Platform. See also the [features introduced in v8.0.0](v8.0/index), first released _March 14, 2022_.

Version | Release date
------- | ------------
[v8.0.16](v8.0.16-aml) | *June 26, 2023*
[v8.0.15](v8.0.15-aml) | *May 22, 2023*
[v8.0.14](v8.0.14-aml) | *April 24, 2023*
[v8.0.13](v8.0.13-aml) | *March 27, 2023*
[v8.0.12](v8.0.12-aml) | *February 27, 2023*
[v8.0.11](v8.0.11-aml) | *January 23, 2023*
[v8.0.10](v8.0.10-aml) | *December 19, 2022*
[v8.0.9](v8.0.9-aml) | *November 14, 2022*
[v8.0.8](v8.0.8-aml) | *November 7, 2022*
[v8.0.7](v8.0.7-aml) | *September 30, 2022*
[v8.0.6](v8.0.6-aml) | *August 29, 2022*
[v8.0.5](v8.0.5-aml) | *July 29, 2022*
[v8.0.4](v8.0.4-aml) | *June 29, 2022*
[v8.0.3](v8.0.3-aml) | *May 19, 2022*
[v8.0.2](v8.0.2-aml) | *April 30, 2022*
[v8.0.1](v8.0.1-aml) | *March 30, 2022*
index
# Version 8.0.3 {: #version-803 } _May 19, 2022_ The DataRobot v8.0.3 release includes some fixed issues in the DataRobot Self-Managed AI Platform. See the v8.0.0 release notes for: * [Features introduced in v8.0.0](v8.0.0-aml) ## Issues fixed in v8.0.3 {: #issues-fixed-in-v803 } The following issues have been fixed since release v8.0.2: ### Platform {: #platform } * EPS-412: The Enterprise Installer example configuration file (`multi-node-ha.yaml`) now includes the `internalnginx` service alongside the `appsinternalapi` service. * PLT-6734: SAML assertions for role and group membership can now be provided in a delimiter-separated list in a single attribute value. * PY3-3953: The project lifecycle transition can now mark Python 2 projects as **Deprecated** during a background process that runs after the upgrade. * PY3-3957: The **Python Version** field is now added to User Activity Monitor reports for App Usage and Predictions Usage in the UI, API, and exported CSV data. * UIUX-9414: The deprecated project banner can now be dismissed. When you click the **Dismiss** button, the related project ID persists in local storage. The banner won't appear again until you clear your browser's local storage. ### Predictions {: #predictions } * PRED-2620: Fixes a prediction performance issue that could increase latency for multiprocessing batch predictions. * PRED-7418: Fixes an issue that blocked JDBC writing due to duplicate columns in a schema if pass-through columns included a feature with the same name as the deployment's association ID. _All product and company names are trademarks&trade; or registered&reg; trademarks of their respective holders. Use of them does not imply any affiliation with or endorsement by them_.
v8.0.3-aml
# Version 8.0.13 {: #version-8013 } _March 27, 2023_ The DataRobot v8.0.13 release includes some fixed issues in the DataRobot Self-Managed AI Platform. See the v8.0.0 release notes for: * [Features introduced in v8.0.0](v8.0.0-aml) In addition to the issues listed below, DataRobot addressed several outstanding security issues. This release/update is part of DataRobot's continual effort to secure the product and platform. ## Issues fixed in v8.0.13 {: #issues-fixed-in-v8013 } The following issues have been fixed since release v8.0.12: ### Model management {: #model-management } - MMM-12257: Enables a model management configuration page for app admins, helping to speed up debugging and configuration changes in on-premise environments. - MMM-12467: Fixes an issue with histogram ordering for targets in external binary classification deployments that are created with holdout predictions. - RAPTOR-9117: Fixes an issue with deploying a custom model without training data to an external prediction environment. ### No-code apps {: #nocode-apps } - NCA-956: Disables redirects in NGINX to prevent overriding the protocol (schema) in 3XX responses from the AppsBuilder API. ### Predictions {: #predictions } - PRED-8594: Fixes an issue where setting the `columnNamesRemapping` property for a batch prediction job definition through a `PATCH` request would fail when using a list as a payload, and would cause the keys to be forcefully snake cased when using a dictionary. ### Platform {: #platform } - BUILD-3598: The `expat` library has been upgraded to version 2.5.0. - DM-8224: `org.postgresql` has been upgraded from 42.4.1 to 42.5.1. - MODEL-10852: Updates `tileserver-gl` and node packages. - PCS-40: Embeds PostgreSQL data upgrade capabilities into the DataRobot installer. Details are available in the DataRobot installation guide.
### Time series {: #time-series } - TIME-12821: Removes a misleading validation that restricted insight computation to a maximum of 1000 forecast distances in absolute numbers. _All product and company names are trademarks&trade; or registered&reg; trademarks of their respective holders. Use of them does not imply any affiliation with or endorsement by them_.
v8.0.13-aml
# Version 8.0.7 {: #version-807 } _September 30, 2022_ The DataRobot v8.0.7 release includes some fixed issues in the DataRobot Self-Managed AI Platform. See the v8.0.0 release notes for: * [Features introduced in v8.0.0](v8.0.0-aml) In addition to the issues listed below, DataRobot addressed several outstanding security issues. This release/update is part of DataRobot's continual effort to secure the product and platform. {% if 'enterprise' in tags %}See also: * [Deprecation notice](#deprecation-announcement) {% endif %} ## Issues fixed in v8.0.7 {: #issues-fixed-in-v807 } The following issues have been fixed since release v8.0.6: ### Data management {: #data-management } - DM-7142 and DM-7143: Upgrades the underlying ElasticSearch engine in the AI Catalog to 7.17.4. ### MLOps {: #mlops } - MMM-10127: Improves the performance of challenger model replay for deployments with a long history. - MMM-10621: Fixes an issue where model packages from the same DataRobot installation would be reported as coming from different installations. - MMM-10797: Fixes an issue causing incorrect computations of thresholded accuracy metrics for MLOps. - RAPTOR-8567: Fixes an issue when running insights on custom inference models without internet access. ### Predictions {: #predictions } - PRED-7854: Retires DataRobot's bundled Spark 2.4.5 in favor of the system-provided Spark on CDH 5 and CDH 6. ### Time series {: #time-series } - TIME-11677: Skips sorting during modeling when loading series in projects with fewer than 1000 series. - TIME-11684: Fixes an issue with Prediction Explanations computation in forecast range predictions from a time series deployment. - TIME-11782: Fixes an issue with feature derivation for multiseries datasets that have a multiseries ID column named `index`.
{% if 'enterprise' in tags %} ## Deprecation announcement {: #deprecation-announcement } ### Custom tasks removed {: #custom-tasks-removed } The initial release of 8.0 included a preview implementation of the custom tasks feature. With this release, DataRobot is removing the preview feature and will replace it in an upcoming release. {% endif %} _All product and company names are trademarks&trade; or registered&reg; trademarks of their respective holders. Use of them does not imply any affiliation with or endorsement by them_.
v8.0.7-aml
# Version 8.0.5 {: #version-805 } _July 29, 2022_ The DataRobot v8.0.5 release includes some fixed issues in the DataRobot Self-Managed AI Platform. See the v8.0.0 release notes for: * [Features introduced in v8.0.0](v8.0.0-aml) In addition to the issues listed below, DataRobot addressed several outstanding security issues. This release/update is part of DataRobot's continual effort to secure the product and platform. ## Issues fixed in v8.0.5 {: #issues-fixed-in-v805 } The following issues have been fixed since release v8.0.4: ### Enterprise {: #enterprise } - EPS-534: Fixes an issue with LDAP and SMTP password loading. ### Predictions {: #predictions } - CODEGEN-1403: Fixes an issue with instantiating multiple Scoring Code blenders in a single classpath. - PRED-7672: Fixes an issue when making batch predictions with a JDBC intake adapter and data warehouse output adapters, like BigQuery, Snowflake, and Synapse. - PRED-7708: Fixes an issue that caused BigQuery write issues when feature names generated from Feature Discovery and feature names with special characters were included in passthrough columns. ### Time series {: #time-series } - TIME-10985: Fixes an issue where time series feature derivation occasionally failed without a proper converged solution during the feature reduction process. - TIME-11312: Fixes an issue causing supervised time series projects to suggest a different set of derived features during the feature reduction process, resulting in inconsistent derived feature sets in different project runs using the same settings. - TIME-11371: Fixes an issue causing unsupervised time series projects to suggest a different set of derived features, resulting in inconsistent derived feature sets in different project runs using the same settings. ### Visual AI {: #visual-ai } - VIZAI-3541: Fixes an issue that caused clustering models to experience increased run times when calculating the silhouette score for larger datasets.
_All product and company names are trademarks&trade; or registered&reg; trademarks of their respective holders. Use of them does not imply any affiliation with or endorsement by them_.
v8.0.5-aml
# Version 8.0.9 {: #version-809 } _November 14, 2022_ The DataRobot v8.0.9 release includes some fixed issues in the DataRobot Self-Managed AI Platform. See the v8.0.0 release notes for: * [Features introduced in v8.0.0](v8.0.0-aml) In addition to the issues listed below, DataRobot addressed several outstanding security issues. This release/update is part of DataRobot's continual effort to secure the product and platform. ## Issues fixed in v8.0.9 {: #issues-fixed-in-v809 } The following issues have been fixed since release v8.0.8: ### MLOps {: #mlops } - MMM-11197: Fixes an issue where Bias and Fairness settings couldn't be configured if an external prediction environment was selected. ### Platform {: #platform } - DM-8040: Fixes an issue in 8.0.8 when uploading Parquet and Avro files to the AI Catalog. ### Predictions {: #predictions } - CODEGEN-1522: Time series Scoring Code now supports the 100th percentile for Prediction Intervals. - PRED-8194: Increases the maximum age of HTTP Strict Transport Security (HSTS) to 1 year on the Portable Prediction Server (PPS). _All product and company names are trademarks&trade; or registered&reg; trademarks of their respective holders. Use of them does not imply any affiliation with or endorsement by them_.
v8.0.9-aml
# Version 8.0.15 {: #version-8015 } _May 22, 2023_ The DataRobot v8.0.15 release includes some fixed issues in the DataRobot Self-Managed AI Platform. See the v8.0.0 release notes for: * [Features introduced in v8.0.0](v8.0.0-aml) In addition to the issues listed below, DataRobot addressed several outstanding security issues. This release/update is part of DataRobot's continual effort to secure the product and platform. ## Issues fixed in v8.0.15 {: #issues-fixed-in-v8015 } The following issues have been fixed since release v8.0.14: ### Data management {: #data-management } - DM-5189: Adds support for listing tables when connecting to the Athena JDBC driver. - DM-9894: Removes the "Method not supported" error message when using the Treasure Data JDBC driver. ### MLOps {: #mlops } - MMM-12924: Adds "Selected Prediction Environment" as an additional field when it differs from the available options. _All product and company names are trademarks&trade; or registered&reg; trademarks of their respective holders. Use of them does not imply any affiliation with or endorsement by them_.
# Version 7.2.5 {: #version-725 }

_November 9, 2021_

The DataRobot v7.2.5 release includes fixed issues in the DataRobot Self-Managed AI Platform. See the v7.2.0 release notes for:

* [Features introduced in v7.2.0](v7.2.0-aml)

## Issues fixed in v7.2.5 {: #issues-fixed-in-v725 }

The following issues have been fixed since Enterprise release v7.2.3:

### Enterprise {: #enterprise }

* EP-1944: Adds support for OpenSSL FIPS.
* EP-1965: Updates Patroni to use md5 as password encryption for backward compatibility.

### Platform {: #platform }

* PLT-4930: Moves the `ENABLE_USERS_PERMA_DELETE_API` feature flag to the Public Preview section in Settings.
* PLT-5079: Fixes an issue in the Azure storage backend that was affecting the speed of prediction data processing for external deployments.

### Trust and Explainability {: #trust-and-explainability }

* TREX-397: Allows Unicode characters to be used in Bias and Fairness target names.

_All product and company names are trademarks&trade; or registered&reg; trademarks of their respective holders. Use of them does not imply any affiliation with or endorsement by them_.
# Version 7.2.7 {: #version-727 }

_December 21, 2021_

The DataRobot v7.2.7 release includes fixed issues in the DataRobot Self-Managed AI Platform. See the v7.2.0 release notes for:

* [Features introduced in v7.2.0](v7.2.0-aml)

## Issues fixed in v7.2.7 {: #issues-fixed-in-v727 }

The following issues have been fixed since Enterprise release v7.2.6:

### Enterprise {: #enterprise }

* EP-2133: Updates the Scoring Code build plugins to avoid the inclusion of vulnerable log4j versions.

### Time Series {: #time-series }

* TIME-9790: Fixes downsampled training predictions for Forecast Distance split models in non-downsampled time series projects.

### Custom Models {: #custom-models }

* RAPTOR-7296: Fixes an issue that caused transformed data to be sent to custom models during prediction explanation initialization.

### MLOps {: #mlops }

* MMM-8649: Fixes an issue with deployment reports failing for clusters running with `PYTHON3_SERVICES` enabled.

_All product and company names are trademarks&trade; or registered&reg; trademarks of their respective holders. Use of them does not imply any affiliation with or endorsement by them_.
# Version 7.2.3 {: #version-723 }

_October 13, 2021_

The DataRobot v7.2.3 release includes fixed issues in the DataRobot Self-Managed AI Platform. See the v7.2.0 release notes for:

* [Features introduced in v7.2.0](v7.2.0-aml)

## Issues fixed in v7.2.3 {: #issues-fixed-in-v723 }

The following issues have been fixed since Enterprise release v7.2.2:

### Trust and explainability {: #trust-and-explainability }

* TREX-296: Fixes an issue with a database migration that could cause some models to lose their word cloud data during the on-premise upgrade process.
* TREX-282: Fixes an issue with a database migration that could cause some models to lose their lift chart data during the on-premise upgrade process.
* TREX-250: Fixes an issue with the compliance documentation rendering for slim run models trained on 100% of the data.

### Time Series {: #time-series }

* TIME-9425: Fixes an issue that caused a blank page when accessing smart-sampled OTV projects with custom backtest settings.

_All product and company names are trademarks&trade; or registered&reg; trademarks of their respective holders. Use of them does not imply any affiliation with or endorsement by them_.