Datasets

Modalities: Text
Formats: arrow
Languages: English
Libraries: Datasets
License:
debadeepta committed 7b67bf3 (verified) · 1 parent: 22a9f63

Upload folder using huggingface_hub
This view is limited to 50 files because it contains too many changes. See raw diff.

Files changed (50):
  1. .DS_Store +0 -0
  2. examples/dataset_dict.json +1 -0
  3. examples/holdout/data-00000-of-00001.arrow +3 -0
  4. examples/holdout/dataset_info.json +46 -0
  5. examples/holdout/state.json +13 -0
  6. examples/sample/data-00000-of-00001.arrow +3 -0
  7. examples/sample/dataset_info.json +46 -0
  8. examples/sample/state.json +13 -0
  9. examples/test/data-00000-of-00001.arrow +3 -0
  10. examples/test/dataset_info.json +46 -0
  11. examples/test/state.json +13 -0
  12. examples/train/data-00000-of-00001.arrow +3 -0
  13. examples/train/dataset_info.json +46 -0
  14. examples/train/state.json +13 -0
  15. grounding_data/datarobot_docs_context.csv +0 -0
  16. grounding_data/holdout/datarobot_english_documentation/datarobot_docs/en/api/accelerators/ai-custom-metrics.md +20 -0
  17. grounding_data/holdout/datarobot_english_documentation/datarobot_docs/en/api/accelerators/aws-mlops.md +15 -0
  18. grounding_data/holdout/datarobot_english_documentation/datarobot_docs/en/api/accelerators/cold-start.md +18 -0
  19. grounding_data/holdout/datarobot_english_documentation/datarobot_docs/en/api/accelerators/custom-bp-nb.md +17 -0
  20. grounding_data/holdout/datarobot_english_documentation/datarobot_docs/en/api/accelerators/custom-lift-chart.md +30 -0
  21. grounding_data/holdout/datarobot_english_documentation/datarobot_docs/en/api/accelerators/demand-flow.md +26 -0
  22. grounding_data/holdout/datarobot_english_documentation/datarobot_docs/en/api/accelerators/deploy-sagemaker.md +33 -0
  23. grounding_data/holdout/datarobot_english_documentation/datarobot_docs/en/api/accelerators/df-retrain.md +13 -0
  24. grounding_data/holdout/datarobot_english_documentation/datarobot_docs/en/api/accelerators/enrich-gcp.md +19 -0
  25. grounding_data/holdout/datarobot_english_documentation/datarobot_docs/en/api/accelerators/fire.md +14 -0
  26. grounding_data/holdout/datarobot_english_documentation/datarobot_docs/en/api/accelerators/gramian.md +14 -0
  27. grounding_data/holdout/datarobot_english_documentation/datarobot_docs/en/api/accelerators/image-databricks.md +15 -0
  28. grounding_data/holdout/datarobot_english_documentation/datarobot_docs/en/api/accelerators/index.md +58 -0
  29. grounding_data/holdout/datarobot_english_documentation/datarobot_docs/en/api/accelerators/ml-analysis.md +15 -0
  30. grounding_data/holdout/datarobot_english_documentation/datarobot_docs/en/api/accelerators/ml-aws.md +25 -0
  31. grounding_data/holdout/datarobot_english_documentation/datarobot_docs/en/api/accelerators/ml-azure.md +21 -0
  32. grounding_data/holdout/datarobot_english_documentation/datarobot_docs/en/api/accelerators/ml-churn.md +13 -0
  33. grounding_data/holdout/datarobot_english_documentation/datarobot_docs/en/api/accelerators/ml-databricks.md +23 -0
  34. grounding_data/holdout/datarobot_english_documentation/datarobot_docs/en/api/accelerators/ml-gcp.md +21 -0
  35. grounding_data/holdout/datarobot_english_documentation/datarobot_docs/en/api/accelerators/ml-sap.md +23 -0
  36. grounding_data/holdout/datarobot_english_documentation/datarobot_docs/en/api/accelerators/ml-snowflake.md +20 -0
  37. grounding_data/holdout/datarobot_english_documentation/datarobot_docs/en/api/accelerators/ml-tables.md +24 -0
  38. grounding_data/holdout/datarobot_english_documentation/datarobot_docs/en/api/accelerators/ml-uplift.md +22 -0
  39. grounding_data/holdout/datarobot_english_documentation/datarobot_docs/en/api/accelerators/ml-viz.md +13 -0
  40. grounding_data/holdout/datarobot_english_documentation/datarobot_docs/en/api/accelerators/ml-what-if.md +15 -0
  41. grounding_data/holdout/datarobot_english_documentation/datarobot_docs/en/api/accelerators/mlflow.md +31 -0
  42. grounding_data/holdout/datarobot_english_documentation/datarobot_docs/en/api/accelerators/model-migrate.md +24 -0
  43. grounding_data/holdout/datarobot_english_documentation/datarobot_docs/en/api/accelerators/no-show.md +13 -0
  44. grounding_data/holdout/datarobot_english_documentation/datarobot_docs/en/api/accelerators/opt-grid.md +36 -0
  45. grounding_data/holdout/datarobot_english_documentation/datarobot_docs/en/api/accelerators/pred-products.md +17 -0
  46. grounding_data/holdout/datarobot_english_documentation/datarobot_docs/en/api/accelerators/ready-signal.md +17 -0
  47. grounding_data/holdout/datarobot_english_documentation/datarobot_docs/en/api/accelerators/rec-engine.md +20 -0
  48. grounding_data/holdout/datarobot_english_documentation/datarobot_docs/en/api/accelerators/sc-micro.md +11 -0
  49. grounding_data/holdout/datarobot_english_documentation/datarobot_docs/en/api/accelerators/self-joins.md +19 -0
  50. grounding_data/holdout/datarobot_english_documentation/datarobot_docs/en/api/accelerators/snowpark-data.md +22 -0
.DS_Store ADDED
Binary file (6.15 kB).
 
examples/dataset_dict.json ADDED
@@ -0,0 +1 @@
+ {"splits": ["train", "test", "holdout", "sample"]}
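This manifest is what `datasets.load_from_disk` uses to reconstruct the split layout: each split name maps to a subdirectory holding an Arrow shard plus its metadata. A minimal stdlib sketch of reading it (the commented call assumes the Hugging Face `datasets` package is installed):

```python
import json

# Contents of examples/dataset_dict.json as committed above.
raw = '{"splits": ["train", "test", "holdout", "sample"]}'
splits = json.loads(raw)["splits"]

# Each split directory contains the Arrow shard, dataset_info.json, and state.json.
shards = [f"examples/{s}/data-00000-of-00001.arrow" for s in splits]

# With the datasets library installed, the whole DatasetDict loads in one call:
# from datasets import load_from_disk
# ds = load_from_disk("examples")  # then ds["train"], ds["holdout"], ...
```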
examples/holdout/data-00000-of-00001.arrow ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:01a6c467f7eaf4843dba7a9b39b56ce5bee72a3153d477557ff6cec9962e1031
+ size 8344
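The `.arrow` entries in this diff are Git LFS pointer files, not the Arrow data itself: a spec version line, the object's SHA-256 OID, and its size in bytes. A small sketch of parsing one:

```python
def parse_lfs_pointer(text: str) -> dict:
    # Each pointer line is "key value"; the oid is prefixed with its hash algorithm.
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    algo, digest = fields["oid"].split(":", 1)
    return {"version": fields["version"], "algo": algo,
            "oid": digest, "size": int(fields["size"])}

pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:01a6c467f7eaf4843dba7a9b39b56ce5bee72a3153d477557ff6cec9962e1031
size 8344"""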
examples/holdout/dataset_info.json ADDED
@@ -0,0 +1,46 @@
+ {
+   "builder_name": "csv",
+   "citation": "",
+   "config_name": "default",
+   "dataset_name": "csv",
+   "dataset_size": 75027,
+   "description": "",
+   "download_checksums": {
+     "/private/var/folders/_v/xml_62d513s9g3b3ymvqkcnr0000gp/T/tmp_242oqu_/examples/datarobot_docs_questions.csv": {
+       "num_bytes": 74873,
+       "checksum": null
+     }
+   },
+   "download_size": 74873,
+   "features": {
+     "question": {
+       "dtype": "string",
+       "_type": "Value"
+     },
+     "answer": {
+       "dtype": "string",
+       "_type": "Value"
+     },
+     "id": {
+       "dtype": "int64",
+       "_type": "Value"
+     }
+   },
+   "homepage": "",
+   "license": "",
+   "size_in_bytes": 149900,
+   "splits": {
+     "train": {
+       "name": "train",
+       "num_bytes": 75027,
+       "num_examples": 100,
+       "dataset_name": "csv"
+     }
+   },
+   "version": {
+     "version_str": "0.0.0",
+     "major": 0,
+     "minor": 0,
+     "patch": 0
+   }
+ }
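The `features` block doubles as the dataset schema: string `question` and `answer` columns plus an int64 `id`. A quick sketch of validating a record against it with the standard library; the record itself is hypothetical:

```python
import json

# The features block from dataset_info.json above.
features = json.loads("""{
  "question": {"dtype": "string", "_type": "Value"},
  "answer": {"dtype": "string", "_type": "Value"},
  "id": {"dtype": "int64", "_type": "Value"}
}""")

PY_TYPES = {"string": str, "int64": int}  # minimal dtype -> Python type map

# Hypothetical record shaped like one row of the question-answering dataset.
record = {"question": "What does MLOps monitor?", "answer": "Model health.", "id": 0}
ok = all(isinstance(record[name], PY_TYPES[spec["dtype"]])
         for name, spec in features.items())
```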
examples/holdout/state.json ADDED
@@ -0,0 +1,13 @@
+ {
+   "_data_files": [
+     {
+       "filename": "data-00000-of-00001.arrow"
+     }
+   ],
+   "_fingerprint": "79eaf574fcdfb9e8",
+   "_format_columns": null,
+   "_format_kwargs": {},
+   "_format_type": null,
+   "_output_all_columns": false,
+   "_split": "train"
+ }
examples/sample/data-00000-of-00001.arrow ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6390cf8164e6597fcf082aa3b727b6cf35ead607ff728050c67d478e77bb94a0
+ size 4472
examples/sample/dataset_info.json ADDED
@@ -0,0 +1,46 @@
+ {
+   "builder_name": "csv",
+   "citation": "",
+   "config_name": "default",
+   "dataset_name": "csv",
+   "dataset_size": 75027,
+   "description": "",
+   "download_checksums": {
+     "/private/var/folders/_v/xml_62d513s9g3b3ymvqkcnr0000gp/T/tmp_242oqu_/examples/datarobot_docs_questions.csv": {
+       "num_bytes": 74873,
+       "checksum": null
+     }
+   },
+   "download_size": 74873,
+   "features": {
+     "question": {
+       "dtype": "string",
+       "_type": "Value"
+     },
+     "answer": {
+       "dtype": "string",
+       "_type": "Value"
+     },
+     "id": {
+       "dtype": "int64",
+       "_type": "Value"
+     }
+   },
+   "homepage": "",
+   "license": "",
+   "size_in_bytes": 149900,
+   "splits": {
+     "train": {
+       "name": "train",
+       "num_bytes": 75027,
+       "num_examples": 100,
+       "dataset_name": "csv"
+     }
+   },
+   "version": {
+     "version_str": "0.0.0",
+     "major": 0,
+     "minor": 0,
+     "patch": 0
+   }
+ }
examples/sample/state.json ADDED
@@ -0,0 +1,13 @@
+ {
+   "_data_files": [
+     {
+       "filename": "data-00000-of-00001.arrow"
+     }
+   ],
+   "_fingerprint": "26a9ccc7109b9ea3",
+   "_format_columns": null,
+   "_format_kwargs": {},
+   "_format_type": null,
+   "_output_all_columns": false,
+   "_split": "train"
+ }
examples/test/data-00000-of-00001.arrow ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7dfc68c109dc1cf7e3ae170ba0e73e396c4437d0d8ef2ccb223ea64cd5ad2e15
+ size 61552
examples/test/dataset_info.json ADDED
@@ -0,0 +1,46 @@
+ {
+   "builder_name": "csv",
+   "citation": "",
+   "config_name": "default",
+   "dataset_name": "csv",
+   "dataset_size": 75027,
+   "description": "",
+   "download_checksums": {
+     "/private/var/folders/_v/xml_62d513s9g3b3ymvqkcnr0000gp/T/tmp_242oqu_/examples/datarobot_docs_questions.csv": {
+       "num_bytes": 74873,
+       "checksum": null
+     }
+   },
+   "download_size": 74873,
+   "features": {
+     "question": {
+       "dtype": "string",
+       "_type": "Value"
+     },
+     "answer": {
+       "dtype": "string",
+       "_type": "Value"
+     },
+     "id": {
+       "dtype": "int64",
+       "_type": "Value"
+     }
+   },
+   "homepage": "",
+   "license": "",
+   "size_in_bytes": 149900,
+   "splits": {
+     "train": {
+       "name": "train",
+       "num_bytes": 75027,
+       "num_examples": 100,
+       "dataset_name": "csv"
+     }
+   },
+   "version": {
+     "version_str": "0.0.0",
+     "major": 0,
+     "minor": 0,
+     "patch": 0
+   }
+ }
examples/test/state.json ADDED
@@ -0,0 +1,13 @@
+ {
+   "_data_files": [
+     {
+       "filename": "data-00000-of-00001.arrow"
+     }
+   ],
+   "_fingerprint": "84042e4a2186671b",
+   "_format_columns": null,
+   "_format_kwargs": {},
+   "_format_type": null,
+   "_output_all_columns": false,
+   "_split": "train"
+ }
examples/train/data-00000-of-00001.arrow ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c745398968f69fcebb50ce5e00e6239e196278e63a11d7f7f890a14fe087e8a4
+ size 8336
examples/train/dataset_info.json ADDED
@@ -0,0 +1,46 @@
+ {
+   "builder_name": "csv",
+   "citation": "",
+   "config_name": "default",
+   "dataset_name": "csv",
+   "dataset_size": 75027,
+   "description": "",
+   "download_checksums": {
+     "/private/var/folders/_v/xml_62d513s9g3b3ymvqkcnr0000gp/T/tmp_242oqu_/examples/datarobot_docs_questions.csv": {
+       "num_bytes": 74873,
+       "checksum": null
+     }
+   },
+   "download_size": 74873,
+   "features": {
+     "question": {
+       "dtype": "string",
+       "_type": "Value"
+     },
+     "answer": {
+       "dtype": "string",
+       "_type": "Value"
+     },
+     "id": {
+       "dtype": "int64",
+       "_type": "Value"
+     }
+   },
+   "homepage": "",
+   "license": "",
+   "size_in_bytes": 149900,
+   "splits": {
+     "train": {
+       "name": "train",
+       "num_bytes": 75027,
+       "num_examples": 100,
+       "dataset_name": "csv"
+     }
+   },
+   "version": {
+     "version_str": "0.0.0",
+     "major": 0,
+     "minor": 0,
+     "patch": 0
+   }
+ }
examples/train/state.json ADDED
@@ -0,0 +1,13 @@
+ {
+   "_data_files": [
+     {
+       "filename": "data-00000-of-00001.arrow"
+     }
+   ],
+   "_fingerprint": "81c07ba05d94183f",
+   "_format_columns": null,
+   "_format_kwargs": {},
+   "_format_type": null,
+   "_output_all_columns": false,
+   "_split": "train"
+ }
grounding_data/datarobot_docs_context.csv ADDED
The diff for this file is too large to render. See raw diff
 
grounding_data/holdout/datarobot_english_documentation/datarobot_docs/en/api/accelerators/ai-custom-metrics.md ADDED
@@ -0,0 +1,20 @@
+ ---
+ title: Select models using custom metrics
+ description: This AI Accelerator demonstrates how to leverage DataRobot's Python client to extract predictions, compute custom metrics, and sort your DataRobot models accordingly.
+
+ ---
+
+ # Select models using custom metrics {: #select-models-using-custom-metrics }
+
+ [Access this AI accelerator on GitHub <span style="vertical-align: sub">:material-arrow-right-circle:{.lg }</span>](https://github.com/datarobot-community/ai-accelerators/blob/main/advanced-experimentation/custom_metrics/custom_metrics.ipynb){ .md-button }
+
+ When it comes to evaluating model performance, DataRobot provides many of the standard metrics [out of the box](opt-metric), either on the [Leaderboard](tut-read-leaderboard) or as part of a [model insight](analyze-models/index).
+
+ However, depending on the industry, you may need to sort your DataRobot Leaderboard by a specific metric not natively supported by DataRobot. This AI Accelerator demonstrates how to leverage DataRobot's Python client to extract predictions, compute custom metrics, and sort your DataRobot models accordingly. The topics covered are as follows:
+
+ * Setup: import libraries and connect to DataRobot
+ * Build models with Autopilot
+ * Retrieve predictions and actuals
+ * Sort models by Brier Skill Score (BSS)
+ * Sort models by Rate@Top1%
+ * Sort models by return on investment (ROI)
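Brier Skill Score is one of the metrics this accelerator computes outside the platform. A generic sketch of the metric (not DataRobot's implementation), using the usual convention that the reference forecast always predicts the event base rate:

```python
def brier_score(y_true, y_prob):
    """Mean squared error between binary outcomes and predicted probabilities."""
    return sum((p - y) ** 2 for y, p in zip(y_true, y_prob)) / len(y_true)

def brier_skill_score(y_true, y_prob):
    # Reference forecast: always predict the event base rate (climatology).
    # BSS > 0 means the model beats that naive forecast; 1.0 is a perfect model.
    base_rate = sum(y_true) / len(y_true)
    bs_ref = brier_score(y_true, [base_rate] * len(y_true))
    return 1.0 - brier_score(y_true, y_prob) / bs_ref
```

Sorting the Leaderboard then reduces to computing this score from each model's predictions and actuals and ordering models by it.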
grounding_data/holdout/datarobot_english_documentation/datarobot_docs/en/api/accelerators/aws-mlops.md ADDED
@@ -0,0 +1,15 @@
+ ---
+ title: Monitor AWS SageMaker models with MLOps
+ description: Train and host a SageMaker model that can be monitored in the DataRobot platform.
+
+ ---
+
+ # Monitor AWS SageMaker models with MLOps {: #monitor-aws-sagemaker-models-with-mlops }
+
+ [Access this AI accelerator on GitHub <span style="vertical-align: sub">:material-arrow-right-circle:{.lg }</span>](https://github.com/datarobot-community/ai-accelerators/tree/main/end-to-end/monitor_sagemaker_model_in_DataRobot){ .md-button }
+
+ DataRobot MLOps provides a central hub to deploy, monitor, manage, and govern all your models in production.
+
+ You can deploy models to the production environment of your choice and continuously monitor the health and accuracy of your models, among other metrics.
+
+ AWS SageMaker is a fully managed service that allows data scientists and developers to build, train, and deploy machine learning models. DataRobot MLOps, with its AWS SageMaker integration, provides an end-to-end solution for managing machine learning models at scale: you can monitor the performance of your models in real time and quickly identify and resolve any issues that arise.
grounding_data/holdout/datarobot_english_documentation/datarobot_docs/en/api/accelerators/cold-start.md ADDED
@@ -0,0 +1,18 @@
+ ---
+ title: Cold start demand forecasting workflow
+ description: This accelerator provides a framework to compare several approaches for cold start modeling on series with limited or no history.
+ ---
+
+ # Cold start demand forecasting workflow {: #cold-start-demand-forecasting-workflow}
+
+ [Access this AI accelerator on GitHub <span style="vertical-align: sub">:material-arrow-right-circle:{.lg }</span>](https://github.com/datarobot-community/ai-accelerators/blob/main/end-to-end/Demand_forecasting_cold_start/End_to_end_demand_forecasting_cold_start.ipynb){ .md-button }
+
+ The cold start demand forecasting problem refers to the challenge of predicting future demand for a new product or service when little or no historical sales data is available. This situation typically arises when a company introduces a new product or service to the market, or when a product already sold in other stores is launched in a new store, and there is no past data available for training a machine learning model to predict future demand.
+
+ In traditional demand forecasting, historical sales data is used to train a machine learning model that can predict future demand. However, in the case of a new product, there is no historical data available. This presents a significant challenge because accurate demand forecasting is critical for making informed decisions about inventory, pricing, and marketing strategies.
+
+ This second accelerator in a three-part series on demand forecasting provides the building blocks for a cold start modeling workflow on series with limited or no history, along with a framework to compare several approaches to cold start modeling.
+
+ The previous notebook inspects and handles common data and modeling challenges, identifies common pitfalls in real-life time series data, and provides helper functions to scale experimentation.
+
+ The dataset consists of 50 series (46 SKUs across 22 stores) over a two-year period with varying series history, typical of a business releasing and removing products over time. The test dataset contains 20 additional series with little or no history that were not present in the training dataset.
grounding_data/holdout/datarobot_english_documentation/datarobot_docs/en/api/accelerators/custom-bp-nb.md ADDED
@@ -0,0 +1,17 @@
+ ---
+ title: Create custom blueprints with composable ML
+ description: Customize models on the Leaderboard using the Blueprint Workshop.
+
+ ---
+
+ # Create custom blueprints with composable ML {: #create-custom-blueprints-with-composable-ml}
+
+ [Access this AI accelerator on GitHub <span style="vertical-align: sub">:material-arrow-right-circle:{.lg }</span>](https://github.com/datarobot-community/ai-accelerators/blob/main/advanced-experimentation/custom_blueprints/create_custom_blueprint.ipynb){ .md-button }
+
+ DataRobot's [Composable ML](cml/index) allows you to add pre-defined tasks to a blueprint or to insert your own custom code. You're free to add your data science and subject matter expertise to the models you build.
+
+ This accelerator shows how to customize the models on the Leaderboard via Composable ML's API, the Blueprint Workshop. It covers the following activities:
+
+ * Access the Blueprint Workshop
+ * Define and train a custom blueprint using the tasks provided by DataRobot
+ * Insert custom code in the form of a CatBoost classifier into the blueprint
grounding_data/holdout/datarobot_english_documentation/datarobot_docs/en/api/accelerators/custom-lift-chart.md ADDED
@@ -0,0 +1,30 @@
+ ---
+ title: Customize lift charts
+ description: Leverage popular Python packages with DataRobot's Python client to recreate and augment DataRobot's lift chart visualization.
+
+ ---
+
+ # Customize lift charts {: #customize-lift-charts }
+
+ [Access this AI accelerator on GitHub <span style="vertical-align: sub">:material-arrow-right-circle:{.lg }</span>](https://github.com/datarobot-community/ai-accelerators/tree/main/advanced-experimentation/customizing_lift_charts){ .md-button }
+
+ Ever wanted to plot more than 60 bins in DataRobot's lift chart?
+
+ Ever needed to present this graphic with a specific color palette?
+
+ Ever been required to display more information in the chart for regulatory reasons?
+
+ In this AI Accelerator, leverage popular Python packages with DataRobot's Python client to recreate and augment DataRobot's lift chart visualization. These customizations allow you to:
+
+ * Plot more than 60 bins in DataRobot's lift chart.
+ * Present the lift chart visualization with a specific color palette.
+ * Display more information in the chart.
+
+ The steps demonstrated in the accompanying notebook are:
+
+ 1. Connect to DataRobot
+ 2. Create a DataRobot project
+ 3. Run a single blueprint from the repository
+ 4. Obtain predictions and actuals
+ 5. Recreate DataRobot’s lift chart
+ 6. Add customization to the lift chart
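The recreation step boils down to quantile binning: sort rows by predicted value, split them into equal-sized bins, and average predictions and actuals per bin. A minimal sketch of that idea (a generic illustration, not DataRobot's implementation):

```python
def lift_chart_bins(actuals, preds, n_bins=10):
    # Sort row indices by predicted value, then split into equal-sized bins
    # (the last bin absorbs any remainder rows).
    order = sorted(range(len(preds)), key=lambda i: preds[i])
    size = len(order) // n_bins
    bins = []
    for b in range(n_bins):
        idx = order[b * size:(b + 1) * size] if b < n_bins - 1 else order[b * size:]
        bins.append({
            "mean_predicted": sum(preds[i] for i in idx) / len(idx),
            "mean_actual": sum(actuals[i] for i in idx) / len(idx),
        })
    return bins
```

Because the binning happens client-side, `n_bins` can exceed the 60-bin limit of the built-in chart, and the result can be plotted with any palette or annotations you like.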
grounding_data/holdout/datarobot_english_documentation/datarobot_docs/en/api/accelerators/demand-flow.md ADDED
@@ -0,0 +1,26 @@
+ ---
+ title: End-to-end time series demand forecasting workflow
+ description: Perform large-scale demand forecasting using DataRobot's Python package.
+ ---
+
+ # End-to-end time series demand forecasting workflow {: #end-to-end-time-series-demand-forecasting-workflow}
+
+ [Access this AI accelerator on GitHub <span style="vertical-align: sub">:material-arrow-right-circle:{.lg }</span>](https://github.com/datarobot-community/ai-accelerators/blob/main/end-to-end/End_to_end_demand_forecasting/){ .md-button }
+
+ Demand forecasting models have many common challenges: large quantities of SKUs or series to predict, partial or irregular history for many SKUs, multiple locations with different local or regional demand patterns, and cold-start prediction requests from the business for new products. The list goes on.
+
+ Time series in DataRobot, however, has a diverse range of functionality to help tackle these challenges. For example:
+
+ - Automatic feature engineering and creation of lagged variables across multiple data types, as well as training dataset creation.
+ - Diverse approaches for time series modeling with text data, learning from cross-series interactions, and scaling to hundreds or thousands of series.
+ - Feature generation from an uploaded calendar of events file specific to your business or use case.
+ - Automatic backtesting controls for regular and irregular time series.
+ - Training dataset creation for irregular series via custom aggregations.
+ - Segmented modeling, hierarchical clustering for multi-series models, multimodal modeling, and ensembling.
+ - Periodicity and stationarity detection, and automatic feature list creation with various differencing strategies.
+ - Cold start modeling on series with limited or no history.
+ - Insights for all of the above.
+
+ In this first installment of a three-part series on demand forecasting, this accelerator provides the building blocks for a time series experimentation and production workflow. This notebook provides a framework to inspect and handle common data and modeling challenges, identifies common pitfalls in real-life time series data, and provides helper functions to scale experimentation with the tools mentioned above and more.
+
+ The dataset consists of 50 series (46 SKUs across 22 stores) over a two-year period with varying series history, typical of a business releasing and removing products over time.
grounding_data/holdout/datarobot_english_documentation/datarobot_docs/en/api/accelerators/deploy-sagemaker.md ADDED
@@ -0,0 +1,33 @@
+ ---
+ title: Deploy a model in AWS SageMaker
+ description: Learn how to programmatically build a model with DataRobot and export and host the model in AWS SageMaker.
+
+ ---
+
+ # Deploy a model in AWS SageMaker {: #deploy-a-model-in-aws-sagemaker }
+
+ [Access this AI accelerator on GitHub <span style="vertical-align: sub">:material-arrow-right-circle:{.lg }</span>](https://github.com/datarobot-community/ai-accelerators/tree/main/end-to-end/sagemaker_deployment){ .md-button }
+
+ In this accelerator, you deploy a model that has been built in DataRobot to AWS SageMaker. If you already use SageMaker for hosting models, you can still make use of the powerful features of DataRobot, including AutoML and time series modeling, and integrate DataRobot into your existing deployment processes. Likewise, you can use this workflow to deploy a DataRobot-built model into another type of environment.
+
+ In this accelerator you will follow the manual steps that are [outlined in DataRobot's documentation](sc-sagemaker#use-scoring-code-with-aws-sagemaker), programmatically build a model with DataRobot, and export and host the model in AWS SageMaker. To assist with the setup of AWS services to run the model, this code provisions any extra items that you may not have set up yet.
+
+ Review the lists below of what is created in this AI accelerator.
+
+ ### AWS
+
+ * ECR Repository
+ * S3 Bucket
+ * IAM Role for SageMaker
+ * SageMaker inference model
+ * SageMaker endpoint configuration
+ * SageMaker endpoint (for real-time predictions)
+ * SageMaker batch transform job (for batch predictions)
+
+ ### DataRobot
+
+ * DataRobot AutoML Project
+ * DataRobot AutoML Models
+ * Scoring Code JAR file of the AutoML model
+
+ Once you have run through the code, you will see how to leverage DataRobot's automated machine learning capabilities to train a model, and then use AWS to deploy and host that model in SageMaker.
grounding_data/holdout/datarobot_english_documentation/datarobot_docs/en/api/accelerators/df-retrain.md ADDED
@@ -0,0 +1,13 @@
+ ---
+ title: Demand forecasting and retraining workflow
+ description: Implement retraining policies with DataRobot MLOps demand forecast deployments.
+
+ ---
+
+ # Demand forecasting and retraining workflow {: #demand-forecasting-and-retraining-workflow }
+
+ [Access this AI accelerator on GitHub <span style="vertical-align: sub">:material-arrow-right-circle:{.lg }</span>](https://github.com/datarobot-community/ai-accelerators/blob/main/end-to-end/Demand_forecasting_retraining/End_to_end_demand_forecasting_retraining.ipynb){ .md-button }
+
+ This accelerator demonstrates retraining policies with DataRobot MLOps demand forecast deployments.
+
+ This accelerator is another installment in a series on demand forecasting. The [first accelerator](demand-flow) focuses on handling common data and modeling challenges, identifies common pitfalls in real-life time series data, and provides helper functions to scale experimentation. The [second accelerator](cold-start) provides the building blocks for a cold start modeling workflow on series with limited or no history. They can be used as a starting point to create a model deployment for the app. The [third accelerator](ml-what-if) is a what-if app that lets you adjust the values of certain known-in-advance variables to see how changes in those factors might affect the forecasted demand.
grounding_data/holdout/datarobot_english_documentation/datarobot_docs/en/api/accelerators/enrich-gcp.md ADDED
@@ -0,0 +1,19 @@
+ ---
+ title: Enrich data using the Hyperscaler API
+ description: Call the GCP API and enrich a modeling dataset that predicts customer churn.
+
+ ---
+
+ # Enrich data using the Hyperscaler API {: #enrich-data-using-the-hyperscaler-api }
+
+ [Access this AI accelerator on GitHub <span style="vertical-align: sub">:material-arrow-right-circle:{.lg }</span>](https://github.com/datarobot-community/ai-accelerators/blob/main/advanced-experimentation/gcp_sentiment/GCP_enrich_sentiment.ipynb){ .md-button }
+
+ Many companies are recognizing the value of unstructured data, particularly in the form of text, and are looking for ways to extract insights from it. This data includes emails, social media posts, customer feedback, call transcripts, and more. One of the most powerful tools for analyzing text data is sentiment analysis.
+
+ Sentiment analysis is the process of identifying the emotional tone of a piece of text, such as positive, negative, or neutral. It is a valuable tool to enrich the dataset for building machine learning models. For example, the sentiment expressed through a customer's recent call transcript with customer service could be predictive of the customer's likelihood to churn.
+
+ However, building sentiment analysis models is not an easy task. It requires a significant investment of time, resources, and expertise, especially a large corpus of accurately labeled training data. Most companies do not have the resources or expertise to develop their own sentiment analysis models.
+
+ Fortunately, there are now APIs available that provide sentiment analysis as a service. By using these APIs, companies can save time and money while still gaining the benefits of sentiment analysis. One of the most significant benefits of using hyperscaler APIs for sentiment analysis is their accuracy. The models behind the APIs are trained on large amounts of data, making them highly accurate at identifying emotions and sentiments in text data.
+
+ This accelerator demonstrates how easy it is to call the GCP API to enrich a modeling dataset that predicts customer churn; the sentiment scores retrieved from customers' call transcripts improve model performance.
grounding_data/holdout/datarobot_english_documentation/datarobot_docs/en/api/accelerators/fire.md ADDED
@@ -0,0 +1,14 @@
+ ---
+ title: Feature Reduction with FIRE
+ description: Learn about the benefits of Feature Importance Rank Ensembling (FIRE)&mdash;a method of advanced feature selection that uses a median rank aggregation of feature impacts across several models created during a run of Autopilot.
+ ---
+
+ # Feature Reduction with FIRE {: #feature-reduction-with-fire}
+
+ [Access this AI accelerator on GitHub <span style="vertical-align: sub">:material-arrow-right-circle:{.lg }</span>](https://github.com/datarobot-community/ai-accelerators/blob/main/advanced-experimentation/feature_reduction_with_fire/feature_reduction_with_fire.ipynb){ .md-button }
+
+ You can significantly reduce the number of features in your dataset by leveraging DataRobot's ability to train hundreds of high-quality models in a matter of minutes.
+
+ Feature Importance Rank Ensembling (FIRE) aggregates the rankings of individual features using Feature Impact from several blueprints on the Leaderboard. This approach can provide greater accuracy and robustness over other feature reduction methods.
+
+ This accelerator shows how to apply FIRE to your dataset and dramatically reduce the number of features without impacting the performance of the final model.
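The median-rank aggregation that FIRE describes can be sketched in a few lines (a generic illustration, not DataRobot's implementation): rank features by impact within each model, then order features by their median rank across models.

```python
from statistics import median

def fire_rank(impacts_per_model):
    # impacts_per_model: one {feature: impact score} dict per Leaderboard model.
    ranks = {}
    for impacts in impacts_per_model:
        # Rank features within this model: 1 = highest impact.
        ordered = sorted(impacts, key=impacts.get, reverse=True)
        for rank, feature in enumerate(ordered, start=1):
            ranks.setdefault(feature, []).append(rank)
    # Lower median rank = more consistently impactful across models.
    return sorted(ranks, key=lambda f: median(ranks[f]))
```

Feature reduction then keeps the top-N features of the aggregated ordering and retrains on that reduced feature list.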
grounding_data/holdout/datarobot_english_documentation/datarobot_docs/en/api/accelerators/gramian.md ADDED
@@ -0,0 +1,14 @@
+ ---
+ title: Use Gramian angular fields to improve datasets
+ description: Generate advanced features used for high frequency data use cases.
+ ---
+
+ # Use Gramian angular fields to improve datasets {: #use-gramian-angular-fields-to-improve-datasets }
+
+ [Access this AI accelerator on GitHub <span style="vertical-align: sub">:material-arrow-right-circle:{.lg }</span>](https://github.com/datarobot-community/ai-accelerators/blob/main/game-changer/high_freq_data_to_images/high_frequency_data_classification_using_gramian_angular_fields.ipynb){ .md-button }
+
+ Prerequisites: [PYTS library](https://pyts.readthedocs.io/){ target=_blank }
+
+ Traditional feature engineering methods like time-aware aggregation and spectrograms can have limitations. Spectrograms cannot capture correlations between each segment of the signal and the other segments, and if you try to capture these with tabular aggregates, it becomes a high-dimensionality problem.
+
+ Gramian angular field images of signal data can solve this problem: they encode the signal as a matrix that can be used with computer vision models without the dimensionality limitations.
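The Gramian angular summation field itself is simple to construct by hand (the pyts library provides `GramianAngularField` for production use): rescale the series to [-1, 1], treat each value as the cosine of an angle, and build the matrix of pairwise angle sums. A sketch assuming a non-constant series:

```python
import math

def gramian_angular_field(series):
    # Gramian angular summation field:
    # G[i][j] = cos(phi_i + phi_j) = x_i*x_j - sqrt(1-x_i^2)*sqrt(1-x_j^2),
    # where x is the series rescaled to [-1, 1].
    lo, hi = min(series), max(series)
    x = [2 * (v - lo) / (hi - lo) - 1 for v in series]  # assumes hi > lo

    def root(v):
        return math.sqrt(max(0.0, 1 - v * v))  # clamp tiny float negatives

    return [[xi * xj - root(xi) * root(xj) for xj in x] for xi in x]
```

The resulting square matrix can be saved as an image, which is how the signal becomes an input for Visual AI or other computer vision models.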
grounding_data/holdout/datarobot_english_documentation/datarobot_docs/en/api/accelerators/image-databricks.md ADDED
@@ -0,0 +1,15 @@
+ ---
+ title: Prepare and leverage image data with Databricks
+ description: Import image files using Spark and prepare them into a data frame suitable for ingest into DataRobot.
+
+ ---
+
+ # Prepare and leverage image data with Databricks {: #prepare-and-leverage-image-data-with-databricks }
+
+ [Access this AI accelerator on GitHub <span style="vertical-align: sub">:material-arrow-right-circle:{.lg }</span>](https://github.com/datarobot-community/ai-accelerators/blob/main/advanced-experimentation/image_data_databricks/Image%20Data%20Preparation.ipynb){ .md-button }
+
+ DataRobot's Visual AI allows you to leverage images in your models just like any other type of data. In this accelerator, you will import image files using Spark and prepare them into a data frame suitable for ingest into DataRobot. Then you will leverage DataRobot through code to rapidly train and deploy a powerful multiclass image classifier.
+
+ While there are other methods of ingesting image data into DataRobot, in this notebook you will encode the image data directly into the data frame using base64 encoding. This methodology keeps all of the relevant data in a single data frame, works well in a Databricks environment, and extends to a wide variety of multimodal datasets.
+
+ Dive in to go from Databricks image data to a deployed classifier.
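The base64 step can be sketched in plain Python. The accelerator does this at scale with Spark; here a stand-in temporary file replaces real image bytes read from the Databricks file system.

```python
import base64
import tempfile

def encode_images(paths):
    """Read image files and return one row per image with its contents
    base64-encoded, ready to assemble into a data frame for DataRobot."""
    rows = []
    for path in paths:
        with open(path, "rb") as f:
            rows.append({
                "image_path": path,
                "image": base64.b64encode(f.read()).decode("ascii"),
            })
    return rows

# Demo with a stand-in file; real usage would list actual image paths.
with tempfile.NamedTemporaryFile(suffix=".png", delete=False) as tmp:
    tmp.write(b"\x89PNG fake bytes")
rows = encode_images([tmp.name])
```

Because the encoded string is just another column, the image travels with its tabular features through any join or upload step.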
grounding_data/holdout/datarobot_english_documentation/datarobot_docs/en/api/accelerators/index.md ADDED
@@ -0,0 +1,58 @@
+ ---
+ title: AI accelerators
+ description: Review comprehensive workflows, notebooks, and tutorials that help you find complete examples of common data science and machine learning workflows.
+
+ ---
+
+ # AI accelerators {: #ai-accelerators }
+
+ AI Accelerators are designed to help speed up model experimentation, development, and production using the DataRobot API. They codify and package data science expertise in building and delivering successful machine learning projects into repeatable, code-first workflows and modular building blocks. AI Accelerators are ready right out of the box, work with the notebook of your choice, and can be combined to suit your needs.
+
+ AI accelerators cover a variety of topics, but primarily aim to assist you by:
+
+ * Providing curated templates for workflows that utilize best-in-class data science techniques to help frame your business problem (e.g., customize a data visualization to your liking or rank models by ROI).
+
+ * Getting you started quickly on a new AI or ML project by providing necessary insights, problem-framing, and use cases in notebooks.
+
+ * Fine-tuning your projects and getting the most value from your existing data and infrastructure investments, including third-party integrations.
+
+ Topic | Describes... |
+ ----- | ------ |
+ [Mastering tables in production ML](ml-tables) | Review an AI accelerator that uses a repeatable framework for a production pipeline from multiple tables. |
+ [End-to-end ML workflow with Google Cloud Platform and BigQuery](ml-gcp) | Use Google Colaboratory to source data from BigQuery, build and evaluate a model using DataRobot, and deploy predictions from that model back into BigQuery and GCP. |
+ [End-to-end ML workflow with Databricks](ml-databricks) | Build models in DataRobot with data acquired and prepared in a Spark-backed notebook environment provided by Databricks. |
+ [Deploy a model in AWS SageMaker](deploy-sagemaker) | Learn how to programmatically build a model with DataRobot and export and host the model in AWS SageMaker. |
+ [Track ML experiments with MLFlow](mlflow) | Automate machine learning experimentation using DataRobot, MLFlow, and Papermill. |
+ [End-to-end ML workflow with Snowflake](ml-snowflake) | Work with Snowflake and DataRobot's Python client to import data, build and evaluate models, and deploy a model into production to make new predictions. |
+ [End-to-end ML workflow with AWS](ml-aws) | Work with AWS and DataRobot's Python client to import data, build and evaluate models, and deploy a model into production to make new predictions. |
+ [Customize lift charts](custom-lift-chart) | Leverage popular Python packages with DataRobot's Python client to recreate and augment DataRobot's lift chart visualization. |
+ [Select models using custom metrics](ai-custom-metrics) | Leverage DataRobot's Python client to extract predictions, compute custom metrics, and sort your DataRobot models accordingly. |
+ [End-to-end ML workflow with Azure](ml-azure) | Work with Azure and DataRobot's Python client to import data, build and evaluate models, and deploy a model into production to make new predictions. |
+ [Tune blueprints for preprocessing and model hyperparameters](opt-grid) | Learn how to access, understand, and tune blueprints for both preprocessing and model hyperparameters. |
+ [Fine-tune models with Eureqa](tune-eureqa) | Apply symbolic regression to your dataset in the form of the Eureqa algorithm. |
+ [End-to-end time series demand forecasting workflow](demand-flow) | Perform large-scale demand forecasting using DataRobot's Python package. |
+ [Cold start demand forecasting workflow](cold-start) | Compare several approaches for cold start modeling on series with limited or no history. |
+ [Use Gramian angular fields to improve datasets](gramian) | Generate advanced features used for high frequency data use cases. |
+ [Migrate a model to a new cluster](model-migrate) | Download a deployed model from DataRobot cluster X, upload it to DataRobot cluster Y, and then deploy and make requests from it. |
+ [Feature Reduction with FIRE](fire) | Learn about the benefits of Feature Importance Rank Ensembling (FIRE), a method of advanced feature selection that uses a median rank aggregation of feature impacts across several models created during a run of Autopilot. |
+ [Creating Custom Blueprints with Composable ML](custom-bp-nb) | Customize models on the Leaderboard using the Blueprint Workshop. |
+ [Tackle churn before modeling](ml-churn) | Discover the problem-framing and data management steps required to successfully model for churn, using a B2C retail example and a B2B example based on DataRobot's internal churn model. |
+ [Demand forecasting with the What-if app](ml-what-if) | Adjust known-in-advance variable values to see how changes in those factors affect forecasted demand. |
+ [Build a recommendation engine](rec-engine) | Explore how to use historical user purchase data to create a recommendation model that predicts which products, out of a basket of items, a customer is likely to purchase at a given point in time. |
+ [Prepare and leverage image data with Databricks](image-databricks) | Import image files using Spark and prepare them into a data frame suitable for ingest into DataRobot. |
+ [Gather churn prediction insights with the Streamlit app](streamlit-app) | Use the Streamlit churn predictor app to present the drivers and predictions of your DataRobot model. |
+ [Perform multi-model analysis](ml-analysis) | Use Python functions to aggregate DataRobot model insights into visualizations. |
+ [Enrich data using the Hyperscaler API](enrich-gcp) | Call the GCP API and enrich a modeling dataset that predicts customer churn. |
+ [Use feature engineering and Visual AI with acoustic data](ml-viz) | Generate image features in addition to aggregate numeric features for high frequency data sources. |
+ [Monitor AWS Sagemaker models with MLOps](aws-mlops) | Train and host a SageMaker model that can be monitored in the DataRobot platform. |
+ [Integrate DataRobot and Snowpark by maximizing the data cloud](snowpark-data) | Leverage Snowflake for data storage and Snowpark for deployment, feature engineering, and model scoring with DataRobot. |
+ [Demand forecasting and retraining workflow](df-retrain) | Implement retraining policies with DataRobot MLOps demand forecast deployments. |
+ [Predict factory order quantities for new products](pred-products) | Build a model to improve decisions about initial order quantities using future product details and product sketches. |
+ [End-to-end workflow with SAP HANA](ml-sap) | Learn how to programmatically build a model with DataRobot using SAP HANA as the data source. |
+ [Use self-joins with panel data to improve model accuracy](self-joins) | Explore how to implement self-joins in panel data analysis. |
+ [Predict lumber prices with Ready Signal and time series forecasts](ready-signal) | Use Ready Signal to add external control data, such as census and weather data, to improve time series predictions. |
+ [Netlift modeling workflow](ml-uplift) | Leverage machine learning to find patterns around the types of people for whom marketing campaigns are most effective. |
+ [Create a trading volume profile curve with a time series model factory](ts-factory) | Use a framework to build models that predict how much of the next day's trading volume will happen at each time interval. |
+ [Zero-shot text classification for error analysis](zero-shot) | Use zero-shot text classification with large language models (LLMs), focusing on its application in error analysis of supervised text classification models. |
+ [Deploy Scoring Code as a microservice](sc-micro) | Follow a step-by-step procedure to embed Scoring Code in a microservice and prepare it as a Docker container for deployment on customer infrastructure (self- or hyperscaler-managed Kubernetes). |
+ [No-show appointment forecasting](no-show) | Build a model that identifies patients most likely to miss appointments, along with correlating reasons. |
grounding_data/holdout/datarobot_english_documentation/datarobot_docs/en/api/accelerators/ml-analysis.md ADDED
@@ -0,0 +1,15 @@
+ ---
+ title: Perform multi-model analysis
+ description: Use Python functions to aggregate DataRobot model insights into visualizations.
+
+ ---
+
+ # Perform multi-model analysis {: #perform-multi-model-analysis }
+
+ [Access this AI accelerator on GitHub <span style="vertical-align: sub">:material-arrow-right-circle:{.lg }</span>](https://github.com/datarobot-community/ai-accelerators/blob/main/advanced-experimentation/multi_model_analysis/Multi-Model%20Analysis.ipynb){ .md-button }
+
+ DataRobot is designed to help you experiment with different modeling approaches, data preparation techniques, and problem framings. You can iterate fast with a tight feedback loop to quickly arrive at the best approach.
+
+ Sometimes you may wish to break your use case into multiple models, likely across multiple DataRobot projects. Maybe you want to build a separate model for each country, or one for different periods of the year. In this case, it helps to bring all of your model performances and insights into one chart.
+
+ This accelerator shares several Python functions that take DataRobot insights (specifically model error, feature effects/partial dependence, and feature importance, whether SHAP or permutation-based) and bring them together into one chart, allowing you to understand all of your models in one place and more easily share your findings with stakeholders.
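As a sketch of the aggregation idea, per-project insights can be flattened into long-format rows that any plotting library can chart side by side. The project names and impact scores below are hypothetical stand-ins for values fetched through the DataRobot client.

```python
def combine_feature_impact(impact_by_project):
    """Flatten per-project feature impact into long-format rows,
    normalizing within each project so models are comparable."""
    rows = []
    for project, impacts in impact_by_project.items():
        total = sum(impacts.values()) or 1.0
        for feature, score in impacts.items():
            rows.append({
                "project": project,
                "feature": feature,
                "normalized_impact": score / total,
            })
    return rows

# Hypothetical feature-impact scores from two country-level projects.
rows = combine_feature_impact({
    "US": {"price": 3.0, "promo": 1.0},
    "EU": {"price": 2.0, "promo": 2.0},
})
```

With the rows in this shape, a grouped bar chart by `feature`, colored by `project`, shows all models in one place.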
grounding_data/holdout/datarobot_english_documentation/datarobot_docs/en/api/accelerators/ml-aws.md ADDED
@@ -0,0 +1,25 @@
+ ---
+ title: End-to-end ML workflow with AWS
+ description: Work with AWS and DataRobot's Python client to import data, build and evaluate models, and deploy a model into production to make new predictions.
+
+ ---
+
+ # End-to-end ML workflow with AWS {: #end-to-end-ml-workflow-with-aws}
+
+ Being one of the largest cloud providers in the world, AWS has multiple ways of storing data within its cloud.
+
+ You can use either of two AI accelerators that allow you to source data from S3 or Athena, build and evaluate a model using DataRobot, and send predictions from that model back to S3.
+
+ [Access the AI accelerator for S3 on GitHub <span style="vertical-align: sub">:material-arrow-right-circle:{.lg }</span>](https://github.com/datarobot-community/ai-accelerators/blob/main/end-to-end/AWS_End_to_End/Amazon_S3_End_to_End.ipynb){ .md-button }
+
+ [Access the AI accelerator for AWS Athena on GitHub <span style="vertical-align: sub">:material-arrow-right-circle:{.lg }</span>](https://github.com/datarobot-community/ai-accelerators/blob/main/end-to-end/AWS_End_to_End/AWS_Athena_End_to_End.ipynb){ .md-button }
+
+ Each AI accelerator will perform the following steps to help you integrate DataRobot with your data in AWS:
+
+ * **Import data for training**:
+
+     * In the S3 AI accelerator, you will take data in the Parquet file format, assemble it, and upload it to DataRobot's AI Catalog.
+
+     * In the Athena AI accelerator, you will create a JDBC data source within DataRobot to connect to Athena and then pull data in via a SQL query.
+
+ * **Build and evaluate models**: Using the DataRobot Python API, you will have DataRobot build up to 50 different machine learning models while also evaluating how those models perform on this dataset.
grounding_data/holdout/datarobot_english_documentation/datarobot_docs/en/api/accelerators/ml-azure.md ADDED
@@ -0,0 +1,21 @@
+ ---
+ title: End-to-end modeling workflow with Azure
+ description: Use data stored in Azure to train a collection of models on DataRobot.
+
+ ---
+
+ # End-to-end modeling workflow with Azure {: #end-to-end-modeling-workflow-with-azure}
+
+ [Access this AI accelerator on GitHub <span style="vertical-align: sub">:material-arrow-right-circle:{.lg }</span>](https://github.com/datarobot-community/ai-accelerators/blob/main/end-to-end/Azure_End_to_End.ipynb){ .md-button }
+
+ DataRobot offers an in-depth API that allows you to produce fully automated workflows in your coding environment of choice. This accelerator shows how to enable end-to-end processing of data stored natively in Azure.
+
+ In this notebook you'll see how data stored in Azure can be used to train a collection of models on DataRobot. You'll then deploy a recommended model and use DataRobot's batch prediction API to produce predictions and write them back to the source Azure container.
+
+ This accelerator notebook covers the following activities:
+
+ * Acquire a training dataset from an Azure storage container
+ * Build a new DataRobot project
+ * Deploy a recommended model
+ * Score via DataRobot's batch prediction API
+ * Write results back to the source Azure container
grounding_data/holdout/datarobot_english_documentation/datarobot_docs/en/api/accelerators/ml-churn.md ADDED
@@ -0,0 +1,13 @@
+ ---
+ title: Tackle churn before modeling
+ description: Discover the problem framing and data management steps required to successfully model for churn, using a B2C retail example and a B2B example based on DataRobot's internal churn model.
+
+ ---
+
+ # Tackle churn before modeling {: #tackle-churn-before-modeling }
+
+ [Access this AI accelerator on GitHub <span style="vertical-align: sub">:material-arrow-right-circle:{.lg }</span>](https://github.com/datarobot-community/ai-accelerators/tree/main/game-changer/churn_blog){ target=_blank }
+
+ Customer retention is central to any successful business, and machine learning is frequently proposed as a way of addressing churn. It is tempting to dive right into a churn dataset, but improving outcomes requires correctly framing the problem. Doing so at the start will determine whether the business can take action based on the trained model, and whether your hard work is valuable.
+
+ This accelerator blog will teach the problem framing and data management steps required before modeling begins. It uses two examples to illustrate concepts: a B2C retail example, and a B2B example based on DataRobot's internal churn model.
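As one example of the label-construction step this kind of problem framing leads to, a churn label can be derived from purchase recency. The 90-day window below is a hypothetical, business-specific choice, not a value from the accelerator.

```python
from datetime import date, timedelta

CHURN_WINDOW = timedelta(days=90)  # assumption: business-specific cutoff

def label_churn(purchases, observation_date, window=CHURN_WINDOW):
    """Label 1 (churned) if the customer made no purchase within `window`
    after observation_date, 0 otherwise."""
    future = [d for d in purchases
              if observation_date < d <= observation_date + window]
    return 0 if future else 1

# One customer's purchase history (hypothetical).
history = [date(2023, 1, 5), date(2023, 2, 20), date(2023, 7, 1)]
print(label_churn(history, date(2023, 3, 1)))  # 1: next purchase is >90 days out
```

Choosing the observation date and window deliberately, rather than labeling from whatever data is at hand, is exactly the kind of framing decision the blog argues must precede modeling.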
grounding_data/holdout/datarobot_english_documentation/datarobot_docs/en/api/accelerators/ml-databricks.md ADDED
@@ -0,0 +1,23 @@
+ ---
+ title: End-to-end ML workflow with Databricks
+ description: Build models in DataRobot with data acquired and prepared in a Spark-backed notebook environment provided by Databricks.
+
+ ---
+
+ # End-to-end ML workflow with Databricks {: #end-to-end-ml-workflow-with-databricks }
+
+ [Access this AI accelerator on GitHub <span style="vertical-align: sub">:material-arrow-right-circle:{.lg }</span>](https://github.com/datarobot-community/ai-accelerators/blob/main/end-to-end/Databricks_End_To_End.ipynb){ .md-button }
+
+ DataRobot features an in-depth API that allows data scientists to produce fully automated workflows in their coding environment of choice. This accelerator shows how to pair the power of DataRobot with the Spark-backed notebook environment provided by Databricks.
+
+ In this notebook you'll see how data acquired and prepared in a Databricks notebook can be used to train a collection of models on DataRobot. You'll then deploy a recommended model and use DataRobot's exportable Scoring Code to generate predictions on the Databricks Spark cluster.
+
+ This accelerator notebook covers the following activities:
+
+ * Acquiring a training dataset.
+ * Building a new DataRobot project.
+ * Deploying a recommended model.
+ * Scoring via Spark using DataRobot's exportable Java Scoring Code.
+ * Scoring via DataRobot's Prediction API.
+ * Reporting monitoring data to DataRobot's MLOps agent framework.
+ * Writing results back to a new table.
grounding_data/holdout/datarobot_english_documentation/datarobot_docs/en/api/accelerators/ml-gcp.md ADDED
@@ -0,0 +1,21 @@
+ ---
+ title: End-to-end ML workflow with Google Cloud Platform and BigQuery
+ description: Use Google Colaboratory to source data from BigQuery, build and evaluate a model using DataRobot, and deploy predictions from that model back into BigQuery and GCP.
+
+ ---
+
+ # End-to-end ML workflow with Google Cloud Platform and BigQuery {: #end-to-end-ml-workflow-with-google-cloud-platform-and-bigquery }
+
+ [Access this AI accelerator on GitHub <span style="vertical-align: sub">:material-arrow-right-circle:{.lg }</span>](https://github.com/datarobot-community/ai-accelerators/blob/main/end-to-end/GCP%20DataRobot%20End%20To%20End.ipynb){ .md-button }
+
+ DataRobot can integrate directly into your GCP environment, helping to accelerate your use of machine learning across all of the GCP services.
+
+ In this notebook accelerator, you can use Google Colaboratory or another notebook environment to source data from BigQuery, build and evaluate an ML model using DataRobot, and deploy predictions from that model back into BigQuery and GCP.
+
+ This accelerator covers the following:
+
+ 1. **Prepare data and ensure connectivity:** In the first section of the notebook, you will load a sample dataset to be used for modeling into BigQuery. Once complete, you will connect your BigQuery data with DataRobot.
+
+ 2. **Build and evaluate a model:** Using the DataRobot Python API, you will have DataRobot build close to 50 different machine learning models while also evaluating how those models perform on this dataset.
+
+ 3. **Scoring and hosting:** In the final section, the entire dataset will be scored on the new model with prediction data written back to BigQuery for use in your GCP applications.
grounding_data/holdout/datarobot_english_documentation/datarobot_docs/en/api/accelerators/ml-sap.md ADDED
@@ -0,0 +1,23 @@
+ ---
+ title: End-to-end workflow with SAP HANA
+ description: Learn how to programmatically build a model with DataRobot using SAP HANA as the data source.
+
+ ---
+
+ # End-to-end workflow with SAP HANA {: #end-to-end-workflow-with-sap-hana }
+
+ [Access this AI accelerator on GitHub <span style="vertical-align: sub">:material-arrow-right-circle:{.lg }</span>](https://github.com/datarobot-community/ai-accelerators/blob/main/end-to-end/SAP_End_to_End/SAP_End_to_End.ipynb){ .md-button }
+
+ This accelerator provides instructions on how to use DataRobot's Python client to build a workflow that uses an existing SAP HANA JDBC driver to:
+
+ * Create credentials
+ * Create the training data source
+ * Create the predictions data source
+ * Create a dataset used to train the models
+ * Create a dataset used to make predictions
+ * Create a project
+ * Create a deployment
+ * Make batch and real-time predictions
+ * Show the total predictions made so far
+
+ There is also a playbook at the end of this notebook that describes how to create the back-end SAP HANA database that will provide the required data.
grounding_data/holdout/datarobot_english_documentation/datarobot_docs/en/api/accelerators/ml-snowflake.md ADDED
@@ -0,0 +1,20 @@
+ ---
+ title: End-to-end ML workflow with Snowflake
+ description: Work with Snowflake and DataRobot's Python client to import data, build and evaluate models, and deploy a model into production to make new predictions.
+
+ ---
+
+ # End-to-end ML workflow with Snowflake {: #end-to-end-ml-workflow-with-snowflake }
+
+ [Access this AI accelerator on GitHub <span style="vertical-align: sub">:material-arrow-right-circle:{.lg }</span>](https://github.com/datarobot-community/ai-accelerators/tree/main/end-to-end/Snowflake_End_to_End){ .md-button }
+
+ This AI accelerator walks through how to work with Snowflake (as a data source) and DataRobot's Python client to import data, build and evaluate models, and deploy a model into production to make new predictions. More broadly, the DataRobot API is a critical tool for data scientists to accelerate their machine learning projects with automation while integrating the platform's capabilities into their code-first workflows and coding environments of choice.
+
+ By using this accelerator, you will:
+
+ * Connect to DataRobot.
+ * Import data from Snowflake into DataRobot.
+ * Create a DataRobot project and run Autopilot.
+ * Select and evaluate the top performing model.
+ * Deploy the recommended model with MLOps model monitoring.
+ * Orchestrate scheduled batch predictions that write results back to Snowflake.
grounding_data/holdout/datarobot_english_documentation/datarobot_docs/en/api/accelerators/ml-tables.md ADDED
@@ -0,0 +1,24 @@
+ ---
+ title: Mastering tables in production ML
+ description: Review an AI accelerator that uses a repeatable framework for a production pipeline from multiple tables.
+
+ ---
+
+ # Mastering tables in production ML {: #mastering-tables-in-production-ml }
+
+ [Access this AI accelerator on GitHub <span style="vertical-align: sub">:material-arrow-right-circle:{.lg }</span>](https://github.com/datarobot-community/ai-accelerators/tree/main/advanced-experimentation/AFD){ .md-button }
+
+ We've all been there: data for customer transactions are in one table, but the customer membership history is in another. Or, you have sensor-level data at the sub-second level in one table, machine errors in another table, and production demand in yet another table, all at different time frequencies. Electronic Medical Records (EMRs) are another common instance of this challenge. You have a use case for your business you want to explore, so you build a v0 dataset using simple aggregations from earlier work, perhaps in a feature store. But moving past v0 is hard.
+
+ The reality is, the hypothesis space of relevant features explodes when considering multiple data sources with multiple data types in them. By dynamically exploring the feature space across tables, you minimize the risk of missing signal by feature omission and further reduce the burden of a priori knowledge of all possible relevant features.
+
+ Event-based data is present in every vertical and is becoming more ubiquitous across industries. Building the right features can drastically improve performance. However, understanding which joins and time horizons are best suited to your data is challenging, and exploring them is time-consuming and error-prone.
+
+ In this accelerator, you'll find a repeatable framework for a production pipeline from multiple tables. This code uses Snowflake as a data source, but it can be extended to any supported database. Specifically, the accelerator provides a template to:
+
+ * Build time-aware features across multiple historical time-windows and datasets using DataRobot and multiple tables in Snowflake (or any database).
+ * Build and evaluate multiple feature engineering approaches and algorithms for all data types.
+ * Extract insights and identify the best feature engineering and modeling pipeline.
+ * Test predictions locally.
+ * Deploy the best-performing model and all data preprocessing/feature engineering in a Docker container, and expose a REST API.
+ * Score from Snowflake and write predictions back to Snowflake.
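The first bullet's idea of time-aware features over multiple trailing windows can be sketched in plain Python. The window lengths and column names below are illustrative assumptions; the accelerator generates such features automatically across your Snowflake tables.

```python
from datetime import date, timedelta

def window_features(transactions, as_of, windows=(7, 30, 90)):
    """Aggregate (date, amount) transactions over several trailing
    time windows ending at `as_of`."""
    feats = {}
    for days in windows:
        cutoff = as_of - timedelta(days=days)
        amounts = [a for d, a in transactions if cutoff < d <= as_of]
        feats[f"sum_{days}d"] = sum(amounts)
        feats[f"count_{days}d"] = len(amounts)
    return feats

# One customer's transactions (hypothetical).
txns = [(date(2024, 1, 2), 10.0), (date(2024, 1, 20), 25.0), (date(2024, 3, 1), 5.0)]
print(window_features(txns, date(2024, 3, 15)))
```

Only rows dated at or before `as_of` are aggregated, which is what keeps the features leak-free when the label lives in the future.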
grounding_data/holdout/datarobot_english_documentation/datarobot_docs/en/api/accelerators/ml-uplift.md ADDED
@@ -0,0 +1,22 @@
+ ---
+ title: Netlift modeling workflow
+ description: Leverage machine learning to find patterns around the types of people for whom marketing campaigns are most effective.
+ ---
+
+ # Netlift modeling workflow {: #netlift-modeling-workflow}
+
+ [Access this AI accelerator on GitHub <span style="vertical-align: sub">:material-arrow-right-circle:{.lg }</span>](https://github.com/datarobot-community/ai-accelerators/blob/main/game-changer/uplift_modeling/uplift_modeling.ipynb){ .md-button }
+
+ Uplift modeling, also referred to as "netlift" modeling, is an approach often used in marketing to isolate the impact of a marketing campaign on specific prospective customers' propensity to purchase something. The underlying example in this DataRobot AI Accelerator is exactly that, but more generally this approach can be used to isolate the impact of any "intervention" on the propensity of any positive response. The key challenge in uplift modeling is to isolate the effect of the campaign, because no individual person can be observed both receiving and not receiving the campaign. The accelerator addresses this key challenge, along with other tips and tricks for uplift modeling.
+
+ In many cases, the historical strategy for determining who received a campaign targeted those already likely to purchase the product (or, more generally, produce a favorable response). That approach would suggest a simple trend that receiving the campaign increases the likelihood to purchase, but many other features about the customers may be confounding the isolated impact of the campaign. In fact, it's possible that a campaign that targeted already high-probability buyers actually reduced their probability of purchase. These are the so-called "sleeping dogs" in marketing lingo. From an ROI standpoint, increasing the probability to purchase on one group of prospects from 25% to 50% is just as valuable as increasing that probability on another group from 50% to 75% (assuming the groups are roughly the same size, with the same expected revenue values). So what you're really asking of machine learning models is this: on which prospective customers will the campaign increase the probability of purchase by the greatest amount?
+
+ This accelerator uses a generic dataset where the favorable outcome is binary: whether or not a product was purchased. The "treatment", or campaign, is simple: a single campaign type that was sent randomly to some prospective buyers, though the accelerator also discusses how these methods extrapolate to the common case where there was selection bias in the campaign. Leverage machine learning to find patterns around the types of people for whom the campaign is most effective, controlling for their baseline likelihood to purchase if they don't see a campaign. Uplift use cases require some additional post-processing to extract and evaluate the "uplift score", making this use case an ideal candidate for the DataRobot programmatic API, which seamlessly integrates powerful machine learning with your typical coding pipeline.
+
+ While working through the provided Jupyter notebook, the following concepts and strategies will be reinforced:
+
+ 1. Data formatting tricks to extract the most from your uplift models.
+ 2. How to leverage DataRobot's API to integrate powerful machine learning into your code-first pipelines.
+ 3. How to extract uplift scores from a single, binary classification model.
+ 4. How to evaluate and understand those uplift scores, and their implied ROI.
+ 5. Considerations for cases where your historical training data exhibits selection bias because the campaign was not randomly sent.
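Concept 3, extracting uplift scores from a single binary classifier, can be sketched as scoring each prospect twice with the treatment flag flipped. The model below is a hypothetical stand-in; in practice the probabilities would come from the trained DataRobot model.

```python
def uplift_scores(prospects, predict_proba):
    """Score each prospect twice (treatment flag on, then off) and take the
    difference in predicted purchase probability as the uplift score."""
    return [predict_proba(dict(row, campaign=1)) - predict_proba(dict(row, campaign=0))
            for row in prospects]

# Hypothetical stand-in for a trained binary classifier's P(purchase).
def toy_model(row):
    if row["segment"] == "new":
        return 0.45 if row["campaign"] else 0.30  # persuadable: campaign helps
    return 0.50 if row["campaign"] else 0.60      # "sleeping dog": campaign hurts

scores = uplift_scores([{"segment": "new"}, {"segment": "loyal"}], toy_model)
```

A negative score flags the "sleeping dogs" described above: prospects for whom sending the campaign is expected to lower the purchase probability.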
grounding_data/holdout/datarobot_english_documentation/datarobot_docs/en/api/accelerators/ml-viz.md ADDED
@@ -0,0 +1,13 @@
+ ---
+ title: Use feature engineering and Visual AI with acoustic data
+ description: Generate image features in addition to aggregate numeric features for high frequency data sources.
+
+ ---
+
+ # Use feature engineering and Visual AI with acoustic data {: #use-feature-engineering-and-visual-ai-with-acoustic-data }
+
+ [Access this AI accelerator on GitHub <span style="vertical-align: sub">:material-arrow-right-circle:{.lg }</span>](https://github.com/datarobot-community/ai-accelerators/blob/main/game-changer/high_freq_data_to_images/high_frequency_data_classification_using_spectrograms_n_numerics/high_frequency_classification_spectrograms_n_numerics.ipynb){ .md-button }
+
+ The density of high frequency data presents a challenge for standard machine learning workflows, which lack specialized feature engineering techniques to condense the signal and highlight what makes it unique. DataRobot's multimodal input capability lets you leverage numerics and images simultaneously; for this use case, that means you can include descriptive spectrograms and apply well-established computer vision techniques to complex data.
+
+ This example notebook shows how to generate image features and aggregate numeric features for high frequency data sources. The approach converts audio WAV files from the time domain into the frequency domain to create several types of spectrograms. Statistical numeric features computed from the converted signal add descriptors that aid classification of the audio source.
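A minimal numpy sketch of the spectrogram half of this feature engineering: frame the signal, apply a window, and take the magnitude of a short-time FFT, alongside a couple of aggregate numeric descriptors. The frame and hop sizes below are illustrative, not the accelerator's settings.

```python
import numpy as np

def spectrogram(signal, frame=64, hop=32):
    """Magnitude spectrogram: slide a Hann-windowed frame over the signal
    and take the magnitude of the FFT of each frame."""
    window = np.hanning(frame)
    frames = [signal[i:i + frame] * window
              for i in range(0, len(signal) - frame + 1, hop)]
    return np.abs(np.fft.rfft(np.stack(frames), axis=1))

sig = np.sin(2 * np.pi * 5 * np.linspace(0, 1, 256))  # toy 5 Hz tone
spec = spectrogram(sig)                               # rows: frames, cols: frequency bins
numeric_features = {"rms": float(np.sqrt((sig ** 2).mean())),
                    "peak": float(np.abs(sig).max())}
```

The matrix can be rendered as an image for Visual AI, while the scalar statistics ride along as ordinary numeric columns in the same multimodal dataset.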
grounding_data/holdout/datarobot_english_documentation/datarobot_docs/en/api/accelerators/ml-what-if.md ADDED
@@ -0,0 +1,15 @@
+ ---
+ title: Demand forecasting with the what-if app
+ description: Adjust known in advance values, such as promotions or pricing, to see how changes in those factors might affect forecasted demand.
+
+ ---
+
+ # Demand forecasting with the what-if app {: #demand-forecasting-with-the-what-if-app }
+
+ [Access this AI accelerator on GitHub <span style="vertical-align: sub">:material-arrow-right-circle:{.lg }</span>](https://github.com/datarobot-community/ai-accelerators/tree/main/end-to-end/Demand_forecasting_what_if_app){ .md-button }
+
+ This demand forecasting what-if app allows you to adjust certain known in advance variable values to see how changes in those factors might affect the forecasted demand.
+
+ Some examples of factors that might be adjusted include marketing promotions, pricing, seasonality, or competitor activity. By using the app to explore different scenarios and adjust key inputs, you can make more accurate predictions about future demand and plan accordingly.
+
+ This app is the third installment of a three-part series on demand forecasting. The [first accelerator](https://github.com/datarobot-community/ai-accelerators/tree/main/end-to-end/End_to_end_demand_forecasting){ target=_blank } focuses on handling common data and modeling challenges, identifies common pitfalls in real-life time series data, and provides helper functions to scale experimentation. The [second accelerator](https://github.com/datarobot-community/ai-accelerators/tree/main/end-to-end/Demand_forecasting_cold_start){ target=_blank } provides the building blocks for a cold start modeling workflow on series with limited or no history. They can be used as a starting point to create a model deployment for the app.
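The scenario-comparison idea behind the app can be sketched with a small table of known in advance values. The column names and the `forecast` stand-in below are illustrative assumptions, not the app's actual interface:

```python
# Sketch of the what-if idea: clone a future scoring row and vary a
# known-in-advance input to compare forecasts side by side.
import pandas as pd

base = pd.DataFrame({
    "date": pd.to_datetime(["2024-07-01"]),
    "store": ["S1"],
    "promotion": [0],
    "price": [9.99],
})

# Build what-if scenarios by varying the known-in-advance values
scenarios = pd.concat(
    [base.assign(scenario="baseline"),
     base.assign(scenario="promo_on", promotion=1),
     base.assign(scenario="price_cut", price=8.49)],
    ignore_index=True,
)

def forecast(row):
    # Stand-in for a call to a deployed DataRobot model's prediction API
    return 100 + 25 * row["promotion"] + 10 * (9.99 - row["price"])

scenarios["forecast_demand"] = scenarios.apply(forecast, axis=1)
print(scenarios[["scenario", "forecast_demand"]])
```

In the real app, the `forecast` stand-in is replaced by a request to the model deployment created from the earlier accelerators in the series.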
grounding_data/holdout/datarobot_english_documentation/datarobot_docs/en/api/accelerators/mlflow.md ADDED
@@ -0,0 +1,31 @@
+ ---
+ title: Track ML experiments with MLFlow
+ description: Automate machine learning experimentation using DataRobot, MLFlow, and Papermill.
+
+ ---
+
+ # Track ML experiments with MLFlow {: #track-ml-experiments-with-mlflow }
+
+ [Access this AI accelerator on GitHub <span style="vertical-align: sub">:material-arrow-right-circle:{.lg }</span>](https://github.com/datarobot-community/ai-accelerators/tree/main/advanced-experimentation/MLFLOW){ .md-button }
+
+ Experimentation is a mandatory part of any machine learning developer’s day-to-day work. For time series projects, the number of parameters and settings to tune to achieve the best model is in itself a vast search space.
+
+ Many of the experiments in time series use cases are common and repeatable. Tracking these experiments and logging results is a task that needs streamlining. Manual errors and time limitations may lead to the selection of suboptimal models, leaving better models undiscovered.
+
+ The integration of the DataRobot API, Papermill, and MLFlow automates machine learning experimentation so that it becomes easier, more robust, and easier to share.
+
+ As illustrated below, you will use the [orchestration notebook](https://github.com/datarobot-community/ai-accelerators/blob/main/advanced-experimentation/MLFLOW/orchestration_notebook.ipynb) to design and run the [experiment notebook](https://github.com/datarobot-community/ai-accelerators/blob/main/advanced-experimentation/MLFLOW/experiment_notebook.ipynb), with the permutations of parameters handled automatically by DataRobot. At the end of the experiments, copies of the experiment notebook, with the outputs for each permutation, are available for collaboration and reference.
+
+ ![](images/mlflow.png)
+
+ You can review [the dependencies](https://github.com/datarobot-community/ai-accelerators/blob/main/advanced-experimentation/MLFLOW/requirements.txt) for the accelerator.
+
+ This accelerator covers the following activities:
+
+ * Acquiring a training dataset.
+ * Building a new DataRobot project.
+ * Deploying a recommended model.
+ * Scoring via Spark using DataRobot's exportable Java Scoring Code.
+ * Scoring via DataRobot's Prediction API.
+ * Reporting monitoring data to DataRobot's MLOps agent framework.
+ * Writing results back to a new table.
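The orchestration pattern is, at its core, a loop over parameter permutations. Below is a hedged sketch of that loop; the parameter names and notebook paths are illustrative assumptions, and the Papermill/MLflow calls are shown as comments because they require the notebook file and a tracking server:

```python
# Sketch of the orchestration loop: enumerate parameter permutations and,
# in the real accelerator, execute the experiment notebook for each one
# with papermill while logging the run to MLflow.
from itertools import product

param_grid = {
    "feature_derivation_window": [-14, -28],  # assumed parameter names
    "forecast_distance": [7, 14],
}

permutations = [dict(zip(param_grid, values))
                for values in product(*param_grid.values())]

for i, params in enumerate(permutations):
    out_nb = f"experiment_run_{i}.ipynb"
    # Real run (requires papermill, mlflow, and experiment_notebook.ipynb):
    # import papermill as pm, mlflow
    # with mlflow.start_run(run_name=out_nb):
    #     mlflow.log_params(params)
    #     pm.execute_notebook("experiment_notebook.ipynb", out_nb,
    #                         parameters=params)
    print(out_nb, params)
```

Each output notebook keeps its cell outputs, which is what makes the runs shareable alongside the MLflow metrics.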
grounding_data/holdout/datarobot_english_documentation/datarobot_docs/en/api/accelerators/model-migrate.md ADDED
@@ -0,0 +1,24 @@
+ ---
+ title: Migrate a model to a new cluster
+ description: Download a deployed model from DataRobot cluster X, upload it to DataRobot cluster Y, and then deploy and make requests from it.
+
+ ---
+
+ # Migrate a model to a new cluster {: #migrate-a-model-to-a-new-cluster}
+
+ [Access this AI accelerator on GitHub <span style="vertical-align: sub">:material-arrow-right-circle:{.lg }</span>](https://github.com/datarobot-community/ai-accelerators/blob/main/advanced-experimentation/model_migration/Model_Migration_Example.ipynb){ .md-button }
+
+ Currently under development, an experimental DataRobot API allows administrators to download a deployed model from DataRobot cluster X, upload it to DataRobot cluster Y, and then deploy and make requests from it.
+
+ Note that this notebook will not work using https://app.datarobot.com.
+
+ ### Prerequisites
+
+ * This notebook must be able to write to the model directory, located in the same directory as this accelerator's notebook. For best results, run this notebook from the local file system.
+ * Ensure that the model you choose to migrate is a deployed model.
+ * Provide API keys for both the source and destination clusters.
+ * The Source and Destination users must have the "Enable Experimental API access" feature flag enabled to follow this workflow.
+ * The notebook must have connectivity to the Source and Destination clusters.
+ * DataRobot versions on the clusters must be consistent with the Supported Paths above.
+ * For models on clusters of DataRobot v7.x, you must have SSH access to the App Node of the cluster.
+ * The Source and Destination DataRobot clusters must have the following in the config.yaml:
grounding_data/holdout/datarobot_english_documentation/datarobot_docs/en/api/accelerators/no-show.md ADDED
@@ -0,0 +1,13 @@
+ ---
+ title: No-show appointment forecasting
+ description: How to build a model that identifies patients most likely to miss appointments, with correlating reasons.
+
+ ---
+
+ # No-show appointment forecasting {: #no-show-appointment-forecasting }
+
+ [Access this AI accelerator on GitHub <span style="vertical-align: sub">:material-arrow-right-circle:{.lg }</span>](https://github.com/datarobot-community/ai-accelerators/blob/main/advanced-experimentation/appointment_forecasting/no_show.ipynb){ .md-button }
+
+ Many people are guilty of having canceled a doctor’s appointment. However, although canceling an appointment does not seem too disastrous from the patient’s point of view, no-shows cost outpatient health centers a staggering 14% of anticipated daily revenue (JAOA). Missed appointments lead to lower utilization rates for doctors and nurses, while the overhead costs required to run outpatient centers remain. In addition, patients who miss their appointments risk poorer health outcomes as they are unable to access timely care.
+
+ While outpatient centers employ solutions such as calling patients ahead of time, these high-touch investments are often not prioritized for the patients at the highest risk of no-shows. Low-touch solutions such as automated texts are effective tools for mass reminders but do not offer the personalization necessary for patients at the highest risk of no-shows. This accelerator shows how to identify patients who are likely to miss appointments ("no-shows") and take action to prevent that from happening.
grounding_data/holdout/datarobot_english_documentation/datarobot_docs/en/api/accelerators/opt-grid.md ADDED
@@ -0,0 +1,36 @@
+ ---
+ title: Tune blueprints for preprocessing and model hyperparameters
+ description: Learn how to access, understand, and tune blueprints for both preprocessing and model hyperparameters.
+
+ ---
+
+ # Tune blueprints for preprocessing and model hyperparameters {: #tune-blueprints-for-preprocessing-and-model-hyperparameters}
+
+ [Access this AI accelerator on GitHub <span style="vertical-align: sub">:material-arrow-right-circle:{.lg }</span>](https://github.com/datarobot-community/ai-accelerators/tree/main/advanced-experimentation/Hyperparameter_Optimization){ .md-button }
+
+ In machine learning, hyperparameter tuning is the act of adjusting the "settings" (referred to as hyperparameters) in a machine learning algorithm, whether that's the learning rate for an XGBoost model or the activation function in a neural network. Many methods for doing this exist, with the simplest being a brute-force search over every feasible combination. While this requires little effort, it's extremely time-consuming because each combination requires fitting the machine learning algorithm. To this end, practitioners strive to find more efficient ways to search for the best combination of hyperparameters to use in a given prediction problem. DataRobot employs a proprietary version of pattern search to optimize not only the machine learning algorithm's specific hyperparameters but also the respective data preprocessing needed to fit the algorithm, with the goal of quickly producing high-performance models tailored to your dataset.
+
+ While the approach used at DataRobot is sufficient in most cases, you may want to build upon DataRobot's Autopilot modeling process with custom tuning methods. In this AI Accelerator, you will familiarize yourself with DataRobot's fine-tuning API calls to control DataRobot's pattern search approach, as well as implement a modified brute-force grid search for the text and categorical data pipeline and hyperparameters of an XGBoost model. This accelerator serves as an introductory learning example that other approaches can be built from. Bayesian optimization, for example, leverages a probabilistic model to judiciously sift through the hyperparameter space to converge on an optimal solution, and will be presented next in this accelerator bundle.
+
+ As a best practice, wait until the model is in a near-finished state before searching for the best hyperparameters to use. Specifically, ensure the following have already been finalized:
+
+ - Training data (e.g., data sources)
+ - Model validation method (e.g., group cross-validation, random cross-validation, or backtesting; how the problem is framed influences all subsequent steps, as it changes error minimization)
+ - Feature engineering (particularly, calculations driven by subject matter expertise)
+ - Preprocessing and data transformations (e.g., word or character tokenizers, PCA, embeddings, normalization, etc.)
+ - Algorithm type (e.g., GLM, tree-based, neural net)
+
+ These decisions typically have a larger impact on model performance than adjusting a machine learning algorithm's hyperparameters (especially when using DataRobot, as the automatically chosen hyperparameters are already competitive).
+
+ This AI Accelerator teaches you how to access, understand, and tune blueprints for both preprocessing and model hyperparameters. You'll programmatically work with DataRobot advanced tuning, which you can then adapt to your other projects.
+
+ You'll learn how to:
+
+ * Prepare for tuning a model via the DataRobot API
+ * Load a project and model for tuning
+ * Set the validation type for minimizing errors
+ * Extract model metadata
+ * Get model performance
+ * Review hyperparameters
+ * Run a single advanced tuning session
+ * Implement your own custom gridsearch for single and multiple models to evaluate
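As a sketch of the custom gridsearch idea, the selection logic below runs as-is with a stand-in scoring function; the DataRobot advanced-tuning calls it would wrap are shown only in comments (they need a live project), and the parameter grid is an assumption for illustration:

```python
# Sketch: brute-force gridsearch over assumed XGBoost hyperparameters.
# Each combination would normally be fitted and scored via DataRobot's
# advanced-tuning API; here a toy scoring function stands in.
from itertools import product

grid = {"learning_rate": [0.01, 0.05, 0.1], "n_estimators": [500, 1000]}
combos = [dict(zip(grid, v)) for v in product(*grid.values())]

def validation_score(params):
    # Stand-in for something like:
    #   tune = model.start_advanced_tuning_session()
    #   tune.set_parameter(parameter_name=..., value=...)
    #   job = tune.run()
    # followed by reading the new model's validation metric.
    # Lower is better, mimicking an error metric such as RMSE.
    return abs(params["learning_rate"] - 0.05) + params["n_estimators"] / 1e5

best = min(combos, key=validation_score)
print(best)
```

The accelerator's version replaces the toy function with real tuning jobs and waits on each job before comparing validation metrics.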
grounding_data/holdout/datarobot_english_documentation/datarobot_docs/en/api/accelerators/pred-products.md ADDED
@@ -0,0 +1,17 @@
+ ---
+ title: Predict factory order quantities for new products
+ description: Build a model to improve decisions about initial order quantities using future product details and product sketches.
+
+ ---
+
+ # Predict factory order quantities for new products {: #predict-factory-order-quantities-for-new-products }
+
+ [Access this AI accelerator on GitHub <span style="vertical-align: sub">:material-arrow-right-circle:{.lg }</span>](https://github.com/datarobot-community/ai-accelerators/blob/main/advanced-experimentation/Retail_Industry_Predicting_Factory_Orders_New_Products/Retail%20Industry%20-%20Predicting%20Factory%20Order%20Quantities%20for%20New%20Products.ipynb){ .md-button }
+
+ Retailers face many decisions when launching new products. One key decision is the amount of product to order from the manufacturer.
+
+ Ordering too much wastes working capital and can lead to products being heavily discounted. Ordering too little squanders an opportunity for revenue and may cause customers to purchase other brands.
+
+ Getting initial order quantities right is particularly difficult for luxury products, where first year demand for a new purse, a new belt, or a new shoe can vary by several orders of magnitude based on factors unrelated to the product specifications.
+
+ This notebook illustrates how to build a model to improve decisions about initial order quantities using future product details and product sketches.
grounding_data/holdout/datarobot_english_documentation/datarobot_docs/en/api/accelerators/ready-signal.md ADDED
@@ -0,0 +1,17 @@
+ ---
+ title: Predict lumber prices with Ready Signal and time series forecasts
+ description: Use Ready Signal to add external control data, such as census and weather data, to improve time series predictions.
+
+ ---
+
+ # Predict lumber prices with Ready Signal and time series forecasts {: #predict-lumber-prices-with-ready-signal-and-time-series-forecasts }
+
+ [Access this AI accelerator on GitHub <span style="vertical-align: sub">:material-arrow-right-circle:{.lg }</span>](https://github.com/datarobot-community/ai-accelerators/blob/main/advanced-experimentation/Ready_Signal_TS/DataRobot_RXA.ipynb){ .md-button }
+
+ In this accelerator, you will explore how to bring in external data from Ready Signal to help improve your time series forecasting accuracy.
+
+ Ready Signal is an AI-powered data platform that provides access to over 500 normalized, aggregated, and automatically updated data sources for predictive modeling, experimentation, business intelligence, and other data enrichment needs. The data catalog includes micro/macro-economic indicators, labor statistics, demographics, weather, and more. Its AI recommendation engine and auto feature engineering capabilities make it easy to integrate with existing data pipelines and analytics tooling, accelerating and enhancing how relevant third-party data is leveraged.
+
+ Here, DataRobot provides an example of predicting lumber prices, combined with the most relevant external data automatically identified by Ready Signal based on correlation with the target variable. The workflow can be applied to any time series forecasting project.
+
+ If you are interested in learning more about how Ready Signal and DataRobot together can help your time series project, please reach out to Matt Schaefer (matt.schaefer@readysignal.com) or anyone else in the author list.
grounding_data/holdout/datarobot_english_documentation/datarobot_docs/en/api/accelerators/rec-engine.md ADDED
@@ -0,0 +1,20 @@
+ ---
+ title: Build a recommendation engine
+ description: Use historical purchase data to create a recommendation model that predicts which products, out of a basket of items, a customer is likely to purchase at a given point in time.
+
+ ---
+
+ # Build a recommendation engine {: #build-a-recommendation-engine}
+
+ [Access this AI accelerator on GitHub <span style="vertical-align: sub">:material-arrow-right-circle:{.lg }</span>](https://github.com/datarobot-community/ai-accelerators/blob/main/game-changer/Recommendation%20Engine/Recommendation%20Engine.ipynb){ .md-button }
+
+ The accelerator provided in this notebook trains a model on historical customer purchases in order to make recommendations for future visits. The DataRobot features used in this notebook are multilabel modeling and Feature Discovery. Together, they produce a model that can provide rank-ordered suggestions of content, products, or services that a specific customer might like.
+
+ In the notebook, you will:
+
+ * Analyze the datasets required
+ * Create a multilabel dataset for training
+ * Connect to DataRobot
+ * Configure a feature discovery project
+ * Generate features and models
+ * Generate recommendations for new visits
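The multilabel-dataset step above can be sketched with pandas; the table layout and column names below are assumptions for illustration, not the notebook's exact schema:

```python
# Sketch: turn a long transactions table into a multilabel training
# dataset -- one row per visit, one binary target column per product.
import pandas as pd

transactions = pd.DataFrame({
    "visit_id": [1, 1, 2, 3, 3, 3],
    "product":  ["milk", "bread", "milk", "eggs", "bread", "milk"],
})

# Cross-tabulate visits against products, then clip counts to 0/1 labels
labels = (pd.crosstab(transactions["visit_id"], transactions["product"])
            .clip(upper=1)
            .add_prefix("bought_"))
print(labels)
```

Joined back to visit-level features, a table shaped like `labels` is what a multilabel modeling project trains on.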
grounding_data/holdout/datarobot_english_documentation/datarobot_docs/en/api/accelerators/sc-micro.md ADDED
@@ -0,0 +1,11 @@
+ ---
+ title: Deploy Scoring Code as a microservice
+ description: Follow a step-by-step procedure to embed Scoring Code in a microservice and prepare it as a Docker container for deployment on customer infrastructure (self- or hyperscaler-managed K8s).
+
+ ---
+
+ # Deploy Scoring Code as a microservice {: #deploy-scoring-code-as-a-microservice}
+
+ [Access this AI accelerator on GitHub <span style="vertical-align: sub">:material-arrow-right-circle:{.lg }</span>](https://github.com/datarobot-community/ai-accelerators/tree/main/end-to-end/scoring-code-as-microservice){ .md-button }
+
+ This accelerator guides you through the step-by-step procedure to embed Scoring Code in a microservice and prepare it as a Docker container for deployment on customer infrastructure (self- or hyperscaler-managed Kubernetes). The K8s configuration and deployment on K8s are out of scope. The accelerator also includes an example Maven project with the Java code.
grounding_data/holdout/datarobot_english_documentation/datarobot_docs/en/api/accelerators/self-joins.md ADDED
@@ -0,0 +1,19 @@
+ ---
+ title: Use self-joins with panel data to improve model accuracy
+ description: Explore how to implement self-joins in panel data analysis.
+
+ ---
+
+ # Use self-joins with panel data to improve model accuracy {: #use-self-joins-with-panel-data-to-improve-model-accuracy }
+
+ [Access this AI accelerator on GitHub <span style="vertical-align: sub">:material-arrow-right-circle:{.lg }</span>](https://github.com/datarobot-community/ai-accelerators/blob/main/end-to-end/Demand_forecasting_retraining/End_to_end_demand_forecasting_retraining.ipynb){ .md-button }
+
+ In this accelerator, explore how to implement self-joins in panel data analysis. Regardless of your industry, if you work with panel data, this guide is tailored to help you accelerate feature engineering and extract valuable insights.
+
+ Panel data, with multiple observations for consistent subjects over time, is ubiquitous in various domains. While panel data is often spread across multiple tables, it can also exist in a single dataset with multiple features suitable as panel dimensions. The self-join technique enables automated, time-aware feature engineering with just one dataset, generating hundreds of candidate features from lagged aggregations and statistics. Combining these features within panel dimensions can substantially improve predictive model performance.
+
+ The accelerator focuses on predicting airline take-off delays of 30 minutes or more to illustrate the self-join technique. However, this framework applies broadly across verticals and can easily be adapted to your use case. Using a single dataset, you join it to itself four times across different features and engineer time-based features from each join, using the AI Catalog for data management.
+
+ The accelerator covers data preparation with multiple joins and time horizons, and how to mitigate target leakage with multiple feature lists and time gaps in time-aware joins.
+
+ Panel data analysis unlocks valuable insights into subjects evolving over time, and it is often overlooked when there is a single dataset.
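A minimal pandas sketch of the lagged, time-aware aggregation idea described above (the column names and the one-step gap are illustrative assumptions, not the accelerator's exact features):

```python
# Sketch: derive a lagged delay rate within a panel dimension (airline).
# shift(1) enforces a time gap so the feature never includes the row
# being predicted; rolling(2) is a short lagged aggregation window.
import pandas as pd

df = pd.DataFrame({
    "date": pd.to_datetime(["2024-01-01", "2024-01-02", "2024-01-03"] * 2),
    "airline": ["AA"] * 3 + ["UA"] * 3,
    "delayed": [1, 0, 1, 0, 0, 1],
})
df = df.sort_values(["airline", "date"])

df["delay_rate_prev2"] = (
    df.groupby("airline")["delayed"]
      .transform(lambda s: s.shift(1).rolling(2, min_periods=1).mean())
)
print(df)
```

DataRobot's Feature Discovery automates this pattern at scale, generating many such candidates per join and panel dimension.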
grounding_data/holdout/datarobot_english_documentation/datarobot_docs/en/api/accelerators/snowpark-data.md ADDED
@@ -0,0 +1,22 @@
+ ---
+ title: Integrate DataRobot and Snowpark by maximizing the data cloud
+ description: Use Python functions to aggregate DataRobot model insights into visualizations.
+
+ ---
+
+ # Integrate DataRobot and Snowpark by maximizing the data cloud {: #integrate-datarobot-and-snowpark-by-maximizing-the-data-cloud }
+
+ [Access this AI accelerator on GitHub <span style="vertical-align: sub">:material-arrow-right-circle:{.lg }</span>](https://github.com/datarobot-community/ai-accelerators/blob/main/end-to-end/snowflake_snowpark/Native%20integration%20DataRobot%20and%20Snowflake%20Snowpark-Maximizing%20the%20Data%20Cloud.ipynb){ .md-button }
+
+ If you or your team have tried to develop and productionize machine learning models with Snowflake using Python and Snowpark, but are looking to level up your end-to-end ML lifecycle on the data cloud, this AI Accelerator is for you.
+
+ Depending on your role within the organization, this accelerator can address a number of use cases:
+
+ * Providing technical personnel with a hosted notebook.
+ * Creating an improved developer experience.
+ * Improving monitoring capabilities for models within Snowflake.
+ * Providing guidance and insights for business personnel who want action items: next steps for customers, sales, marketing, and more.
+
+ This notebook shows how DataRobot addresses these exact needs. It is compatible with the Snowflake data science stack and DataRobot 9.0, giving you advantages in speed, accuracy, security, and cost-effectiveness.