diff --git "a/grounding_data/datarobot_docs_context.csv" "b/grounding_data/datarobot_docs_context.csv" new file mode 100644--- /dev/null +++ "b/grounding_data/datarobot_docs_context.csv" @@ -0,0 +1,88785 @@ +"document","document_file_path" +"--- +title: Build a recipe +description: DataRobot leverages the compute environment and distributed architecture of your data source to quickly perform exploratory data analysis and apply transformations as you build your recipe. + +--- + +# Build a recipe {: #build-a-recipe } + +Building a recipe is the first step in preparing your data. When you start a Wrangle session, DataRobot connects to your data source, pulls a live random sample, and performs exploratory data analysis on that sample. When you add operations to your recipe, the transformation is applied to the sample and the exploratory data insights are recalculated, allowing you to quickly iterate on and profile your data before publishing. + +See the associated [considerations](wb-data-ref/index#wrangle-data) for important additional information. + +!!! warning ""Wrangling requirement"" + To wrangle data, you must [add a dataset using a configured data connection](wb-connect). + +??? note ""Operation behavior"" + When a wrangling recipe is pushed down to the connected cloud data platform, the operations are executed in their environment. To understand how operations behave, refer to the documentation for your data platform: + + - [Snowflake documentation](https://docs.snowflake.com/en/sql-reference-functions){ target=_blank } + - [BigQuery documentation](https://cloud.google.com/bigquery/docs/reference/standard-sql/functions-and-operators){ target=_blank } + + + To view which queries were executed by the cloud data platform during pushdown, open the **AI Catalog** and select the new output dataset. The queries are listed on the **Info** tab. 
+ + ![](images/wb-view-query.png) + +## Configure the live sample {: #configure-the-live-sample } + +By default, DataRobot retrieves 10,000 rows for the live sample; however, you can modify this number in the wrangling settings. Note that the more rows you retrieve, the longer it will take to render the live sample. + +To configure the live sample: + +1. Click **Settings** in the right panel and open **Interactive sample**. + + ![](images/wb-operation-1.png) + +2. Enter the number of rows (under 10,000) you want to include in the live sample and click **Resample**. The live sample updates to display the specified number of rows. + + ![](images/wb-operation-2.png) + +## Analyze the live sample {: #analyze-the-live-sample } + +During data wrangling, DataRobot performs exploratory data analysis on the live sample, generating table- and column-level [summary statistics](histogram){ target=_blank } and [visualizations](histogram#histogram-chart){ target=_blank } that help you profile the dataset and recognize data quality issues as you apply operations. For more information on interacting with the live sample, see the section on [Exploratory Data Insights](wb-data-tab#view-exploratory-data-insights). + +![](images/wb-operation-13.png) + +??? tip ""Speed up live sample"" + To speed up the time it takes to retrieve and render the live sample, use the toggle next to **Show Insights** to hide the feature distribution charts. + +??? faq ""Live sample vs. Exploratory Data Insights on the Datasets tab"" + Although both pages provide similar insights, the live sample lets you specify the number of rows displayed, and it updates each time a transformation is added to your recipe. + +## Add operations {: #add-operations } + +A recipe is composed of operations—transformations that will be applied to the source data to prepare it for modeling.
Note that operations are applied sequentially, so you may need to [reorder the operations](#reorder-operations) in your recipe to achieve the desired result. + +The table below describes the wrangling operations currently available in Workbench: + +Operation | Description +--------- | ----------- +[Join](#join) (public preview) | Join datasets that are accessible via the same connection instance. +[Aggregate](#aggregate) (public preview) | Apply mathematical aggregations to features in your dataset. +[Compute new feature](#compute-a-new-feature) | Create a new feature using scalar subqueries, scalar functions, or window functions. +[Filter row](#filter-row) | Filter the rows in your dataset according to specified value(s) and conditions. +[De-duplicate rows](#de-duplicate-row) | Automatically remove all duplicate rows from your dataset. +[Find and replace](#find-and-replace) | Replace specific feature values in a dataset. +[Rename features](#rename-features) | Change the name of one or more features in your dataset. +[Remove features](#remove-features) | Remove one or more features from your dataset. + + +To add an operation to your recipe: + +1. With **Recipe** selected, click **Add Operation** in the right panel. + + ![](images/wb-operation-12.png) + +2. Select and configure an operation. Then, click **Add to recipe**. + + The live sample updates after DataRobot retrieves a new sample from the data source and applies the operation, allowing you to review the transformation in real time. + +3. Continue adding operations while analyzing their effect on the live sample; when you're done, the [recipe is ready to be published](wb-pub-recipe). + + ![](images/wb-operation-11.png) + +### Join {: #join } + +!!! info ""Public preview"" + The Join operation is off by default. Contact your DataRobot representative or administrator for information on enabling the feature.
+ + Feature flag: Enables Additional Wrangler Operations + +Use the **Join** operation to combine datasets that are accessible via the same connection instance. + +To join a table or dataset: + +1. Click **Join** in the right panel. + + ![](images/wb-join-1.png) + +2. Click **+ Select dataset** to browse and select a dataset from your connection instance. + + ![](images/wb-join-2.png) + +3. Once you've opened and profiled the dataset you want to add, click **Select**. + + ![](images/wb-join-3.png) + +4. Select the appropriate **Join type** from the dropdown. + + - **Inner** returns only rows that have matching values in both datasets, for example, any rows with matching values in the `order_id` column. + - **Left** returns all rows from the left dataset (the original), and only the rows with matching values in the right dataset (joined). + + ![](images/wb-join-5.png) + +5. Select the **Join condition**, which defines how the two datasets are related. In this example, both datasets are related by `order_id`. + + ![](images/wb-join-6.png) + +6. Click **Add to recipe**. + +### Aggregate {: #aggregate } + +!!! info ""Public preview"" + The Aggregate operation is off by default. Contact your DataRobot representative or administrator for information on enabling the feature. + + Feature flag: Enables Additional Wrangler Operations + +Use the **Aggregate** operation to apply the following mathematical aggregations to the dataset (available aggregations vary by feature type): + +- Sum +- Min +- Max +- Avg +- Standard deviation +- Count +- Count distinct +- Most frequent (Snowflake only) + +To add an aggregation: + +1. Click **Aggregate** in the right panel. + + ![](images/wb-aggregate-1.png) + +2. Under **Group by key**, select the feature(s) you want to group your aggregation(s) by. + + ![](images/wb-aggregate-2.png) + +3. Click the field below **Feature to aggregate** and select a feature from the dropdown.
Then, click the field below **Aggregate function** and choose one or more aggregations to apply to the feature. + + ![](images/wb-aggregate-3.png) + +4. (Optional) Click **+ Add feature** to apply aggregations to additional features in this grouping. + +5. Click **Add to recipe**. + + After adding the operation to the recipe, DataRobot renames aggregated features using the original name with the `_AggregationFunction` suffix attached. In this example, the new columns are `age_max` and `age_most_frequent`. + + ![](images/wb-aggregate-4.png) + +### Compute a new feature {: #compute-a-new-feature } + +Use the **Compute new feature** operation to create a new output feature from existing features in your dataset. By applying domain knowledge, you can create features that do a better job of representing your business problem to the model than those in the original dataset. + +To compute a new feature: + +1. Click **Compute new feature** in the right panel. + + ![](images/wb-operation-10.png) + +2. 
Enter a name for the new feature, and under **Expression**, define the feature using scalar subqueries, scalar functions, or window functions for your chosen cloud data platform: + + === ""Snowflake"" + + - [Scalar subqueries](https://docs.snowflake.com/en/user-guide/querying-subqueries#scalar-subqueries){ target=_blank } + - [Scalar functions](https://docs.snowflake.com/en/sql-reference/functions){ target=_blank } + - [Window functions](https://docs.snowflake.com/en/sql-reference/functions-analytic){ target=_blank } + + === ""BigQuery"" + + - [Scalar subqueries](https://cloud.google.com/bigquery/docs/reference/standard-sql/subqueries#scalar_subquery_concepts){ target=_blank } + - [Scalar functions](https://cloud.google.com/bigquery/docs/reference/standard-sql/functions-and-operators){ target=_blank } + - [Window functions](https://cloud.google.com/bigquery/docs/reference/standard-sql/window-function-calls){ target=_blank } + + ![](images/wb-operation-14.png) + + This example uses `REGEXP_SUBSTR` to extract the first number from the `[ - )` range format in the `age` column, and `to_number` to convert the output from a string to a number. + +3. Click **Add to recipe**. + +### Filter row {: #filter-row } + +Use the **Filter row** operation to filter the rows in your dataset according to specified value(s) and conditions. + +To filter rows: + +1. Click **Filter row** in the right panel. + + ![](images/wb-operation-8.png) + +2. Decide if you want to keep the rows that match the defined conditions or exclude them. + +3. Define the filter conditions by choosing the feature you want to filter, the condition type, and the value you want to filter by. DataRobot highlights the selected column. + + ![](images/wb-operation-7.png) + +4. (Optional) Click **Add condition** to define additional filtering criteria. + +5. Click **Add to recipe**.
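The **Compute new feature** example above can be sanity-checked outside the warehouse before you add it to the recipe. The sketch below is a rough Python analogue of the `REGEXP_SUBSTR` plus `to_number` expression; the age bucket labels such as `[25 - 35)` are an assumption for illustration, and the code shows only the expression's intent, not DataRobot's generated SQL:

```python
import re

def first_number(bucket):
    """Rough local analogue of REGEXP_SUBSTR(age, '[0-9]+') followed by TO_NUMBER.

    Returns the first integer found in a bucket label such as "[25 - 35)",
    or None when the label contains no digits. The bucket format is a
    hypothetical example, not taken from the documented sample data.
    """
    match = re.search(r"\d+", bucket)
    return int(match.group()) if match else None

print(first_number("[25 - 35)"))  # 25
print(first_number("[65 - 75)"))  # 65
print(first_number("unknown"))    # None
```

If the local result matches your expectation, a Snowflake expression along the lines of `to_number(REGEXP_SUBSTR(age, '[0-9]+'))` should behave equivalently; confirm the exact function signatures against your platform's reference documentation.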
+ +### De-duplicate row {: #de-duplicate-row } + +Use the **De-duplicate rows** operation to automatically remove all rows with duplicate information from the dataset. + +To de-duplicate rows, click **De-duplicate rows** in the right panel. This operation is immediately added to your recipe and applied to the live sample. + +![](images/wb-operation-15.png) + +### Find and replace {: #find-and-replace } + +Use the **Find and replace** operation to quickly replace specific feature values in a dataset. This is helpful, for example, for fixing typos. + +To find and replace a feature value: + +1. Click **Find and replace** in the right panel. + + ![](images/wb-operation-9.png) + +2. Under **Select feature**, click the dropdown and choose the feature that contains the value you want to replace. DataRobot highlights the selected column. + + ![](images/wb-operation-3.png) + +3. Under **Find**, choose the match criteria—**Exact**, **Partial**, or **Regular Expression**—and enter the feature value you want to replace. Then, under **Replace**, enter the new value. + + ![](images/wb-operation-4.png) + +4. Click **Add to recipe**. + +### Rename features {: #rename-features } + +Use the **Rename features** operation to rename one or more features in the dataset. + +To rename features: + +1. Click **Rename features** in the right panel. + + ![](images/wb-operation-16.png) + + ??? tip ""Rename specific features from the live sample"" + Alternatively, you can click the **More options** icon next to the feature you want to rename. This opens the operation parameters in the right panel with the feature field already filled in. + + ![](images/wb-operation-21.png) + +2. Under **Feature name**, click the dropdown and choose the feature you want to rename. Then, enter the new feature name in the second field. + + ![](images/wb-operation-18.png) + +3. (Optional) Click **Add feature** to rename additional features. + +4. Click **Add to recipe**.
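The three **Find** match criteria in the **Find and replace** operation above differ in what counts as a match. As a rough illustration with hypothetical values (this is not DataRobot's implementation), **Exact** replaces a cell only when the entire value matches, **Partial** replaces matching substrings anywhere in the value, and **Regular Expression** replaces pattern matches:

```python
import re

# Hypothetical feature values with inconsistent spellings of "New York".
values = ["NY", "NYC", "new york", "N.Y."]

def exact(v, find, repl):
    # Exact: replace only when the whole cell equals the search value.
    return repl if v == find else v

def partial(v, find, repl):
    # Partial: replace every occurrence of the substring (case-sensitive).
    return v.replace(find, repl)

def regex(v, pattern, repl):
    # Regular Expression: replace every match of the pattern.
    return re.sub(pattern, repl, v)

print([exact(v, "NY", "New York") for v in values])
# ['New York', 'NYC', 'new york', 'N.Y.']
print([partial(v, "NY", "New York") for v in values])
# ['New York', 'New YorkC', 'new york', 'N.Y.']
print([regex(v, r"^N\.?Y\.?C?$", "New York") for v in values])
# ['New York', 'New York', 'new york', 'New York']
```

Note how a Partial match can corrupt values (`NYC` becomes `New YorkC`), which is why reviewing the updated live sample before clicking **Add to recipe** matters.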
+ +### Remove features {: #remove-features } + +Use the **Remove features** operation to remove features from the dataset. + +To remove features: + +1. Click **Remove features** in the right panel. + + ![](images/wb-operation-19.png) + + ??? tip ""Remove specific features from the live sample"" + Alternatively, you can click the **More options** icon next to the feature you want to remove. This opens the operation parameters in the right panel with the feature field already filled in. + + ![](images/wb-operation-21.png) + +2. Under **Feature name**, click the dropdown and either start typing the feature name or scroll through the list to select the feature(s) you want to remove. Click outside of the dropdown when you're done selecting features. + + ![](images/wb-operation-20.png) + +3. Click **Add to recipe**. + +## Reorder operations {: #reorder-operations } + +All operations in a wrangling recipe are applied sequentially; therefore, the order in which they appear affects the results of the output dataset. + +To move an operation to a new location, click and hold the operation you want to move, and then drag it to a new position. + +![](images/wb-op-reorder.png) + +The live sample updates to reflect the new order. + +## Quit wrangling {: #quit-wrangling } + +At any point, you can click **Quit Wrangling** to end your wrangling session; however, any operations applied to the dataset will be removed.
+ +![](images/wb-operation-quit.png) + +## Next steps {: #next-steps } + +From here, you can: + +- [Publish the recipe to the data source, generating a new output dataset.](wb-pub-recipe) + +## Read more {: #read-more} + +To learn more about the topics discussed on this page, see: + +- [Description of summary statistics and histograms in DataRobot Classic.](histogram){ target=_blank } +","datarobot_english_documentation/datarobot_docs|en|workbench|wb-dataprep|wb-wrangle-data|wb-add-operation.txt" +"--- +title: Workbench overview +description: Understand the components of the DataRobot Workbench interface, including the architecture, some sample workflows, and directory landing page. + +--- + +# Workbench overview {: #workbench-overview } + +{% include 'includes/wb-overview.md' %} +","datarobot_english_documentation/datarobot_docs|en|workbench|wb-getstarted|wb-overview.txt" +"--- +title: Create experiments +description: Describes how to create and manage experiments in the DataRobot Workbench interface. + +--- + +# Create experiments {: #create-experiments } + +Experiments are the individual ""projects"" within a [Use Case](wb-build-usecase). They allow you to vary data, targets, and modeling settings to find the optimal models to solve your business problem. Within each experiment, you have access to its Leaderboard and [model insights](wb-experiment-evaluate#insights), as well as [experiment summary information](wb-experiment-evaluate#view-experiment-info). After selecting a model, you can, from within the experiment: + +* [Make predictions](wb-predict). +* [Create a No-Code AI App](wb-apps/index). +* [Generate a compliance report](wb-experiment-evaluate#compliance-documentation). + +See the associated [FAQ](wb-experiment-ref) for important additional information. + +## Create basic {: #create-basic } + +Follow the steps below to create a new experiment from within a Use Case. + +!!! 
note + You can also start modeling directly from a dataset by clicking the **Start modeling** button. The **Set up new experiment** page opens. From there, the instructions follow the flow described below. + +### Add experiment {: #add-experiment } + +From within a [Use Case](wb-build-usecase), click **Add new** and select **Add experiment**. The **Set up new experiment** page opens, which lists all data previously loaded to the Use Case. + +![](images/wb-exp-1.png) + +### Add data {: #add-data } + +Add data to the experiment, either by [adding new data](wb-add-data/index) (1) or selecting a dataset that has already been loaded to the Use Case (2). + +![](images/wb-exp-2.png) + +Once the data is loaded to the Use Case (as in option 2 above), click to select the dataset you want to use in the experiment. Workbench opens a preview of the data: + +![](images/wb-exp-6.png) + +From here you can: + +| | Option | Description | +|---|---|---| +|
1
| ![](images/wb-exp-3.png) | Click to return to the data listing and choose a different dataset. +|
2
| ![](images/wb-exp-4.png) | Click the icon to proceed and set the target. +|
3
| ![](images/wb-exp-5.png) | Click **Next** to proceed and set the target. + +### Select target {: #select-target} + + +Once you have proceeded to target selection, Workbench prepares the dataset for modeling ([EDA 1](eda-explained#eda1){ target=_blank }). When the process finishes, set the target either by: + +=== ""Hover on feature name"" + + Scroll through the list of features to find your target. If it is not showing, expand the list from the bottom of the display: + + ![](images/wb-exp-7.png) + + Once located, click the entry in the table to use the feature as the target. + + ![](images/wb-exp-8.png) + +=== ""Enter target name"" + + Type the name of the target feature you would like to predict in the entry box. DataRobot lists matching features as you type: + + ![](images/wb-exp-9.png) + +Once a target is entered, Workbench displays a histogram providing information about the target feature's distribution and, in the right pane, a summary of the experiment settings. + +![](images/wb-exp-10.png) + +From here, you are ready to build models with the default settings. Or, you can [modify the default settings](#customize-settings) and then begin. If using the default settings, click **Start modeling** to begin the [Quick mode](model-data#modeling-modes-explained){ target=_blank } Autopilot modeling process. + +## Customize settings {: #customize-settings } + +Changing experiment parameters is a good way to iterate on a Use Case. Before starting to model, you can: + +* [Modify partitioning settings](#modify-partitioning). +* [Change configuration settings](#change-the-configuration). + +Once you have reset any or all of the above, click **Start modeling** to begin the [Quick mode](model-data#modeling-modes-explained){ target=_blank } modeling process. + + +### Modify partitioning {: #modify-partitioning } + +Partitioning describes the method DataRobot uses to “clump” observations (or rows) together for evaluation and model building. 
Workbench defaults to [five-fold](data-partitioning){ target=_blank }, [stratified sampling](partitioning#ratio-preserved-partitioning-stratified){ target=_blank } with a 20% holdout fold. + +!!! info ""Availability information"" + Date/time partitioning for building time-aware projects is off by default. Contact your DataRobot representative or administrator for information on enabling the feature. + + Feature flag: Enable Date/Time Partitioning (OTV) in Workbench + + +To change the partitioning method or validation type: + +1. Click the icon for **Additional settings**, **Next**, or the **Partitioning** field in the summary: + + ![](images/wb-exp-12.png) + +2. If there is a date feature available, your experiment is eligible for [out-of-time validation](otv){ target=_blank } partitioning, which allows DataRobot to build time-aware models. In that case, additional information becomes available in the summary. + + ![](images/wb-exp-16.png) + + +3. Set the fields that you want to change. The fields available depend on the selected partitioning method. + + ![](images/wb-exp-13a.png) + + * [Random](partitioning#random-partitioning-random){ target=_blank } assigns observations (rows) randomly to the training, validation, and holdout sets. + * [Stratified](partitioning#ratio-preserved-partitioning-stratified){ target=_blank } randomly assigns rows to training, validation, and holdout sets, preserving (as closely as possible) the same ratio of values for the prediction target as in the original data. + * [Date/time](otv#advanced-options) assigns rows to backtests chronologically instead of, for example, randomly. This is the only valid partitioning method for time-aware projects. + +=== ""Random or Stratified"" + + ![](images/wb-exp-13.png) + + |   | Field | Description | + |---------|--------|--------------| + |
1
| Validation type | Sets the method used on data to validate models.