doc_id,url,title,text,label,label_id,split
0093065541AA4C3E90E47E3ACE89596155EA1735,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/expressionbuild_functions.html?context=cdpaas&locale=en,Selecting functions (SPSS Modeler),"Selecting functions (SPSS Modeler)
Selecting functions The function list displays all available SPSS Modeler functions and operators. Scroll to select a function from the list, or, for easier searching, use the drop-down list to display a subset of functions or operators. Available functions are grouped into categories for easier searching. Most of these categories are described in the Reference section of the CLEM language description. For more information, see [Functions reference](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_function_ref.htmlclem_function_ref). The other categories are as follows. * General Functions. Contains a selection of some of the most commonly-used functions. * Recently Used. Contains a list of CLEM functions used within the current session. * @ Functions. Contains a list of all the special functions, which have their names preceded by an ""@"" sign. Note: The @DIFF1(FIELD1,FIELD2) and @DIFF2(FIELD1,FIELD2) functions require that the two field types are the same (for example, both Integer or both Long or both Real). * Database Functions. If the flow includes a database connection, this selection lists the functions available from within that database, including user-defined functions (UDFs). For more information, see [Database functions](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/expressionbuild_database_functions.htmlexpressionbuild_database_functions). * Database Aggregates. If the flow includes a database connection, this selection lists the aggregation options available from within that database. These options are available in the Expression Builder of the Aggregate node. * Built-In Aggregates. Contains a list of the possible modes of aggregation that can be used. * Operators. Lists all the operators you can use when building expressions. Operators are also available from the buttons in the center of the dialog box. * All Functions. Contains a complete list of available CLEM functions. Double-click a function to insert it into the expression field at the position of the cursor.",conceptual,0,test
7B8B04B66E56FA847F1ACA3218EB99F3E568EEC7,https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-timeseries.html?context=cdpaas&locale=en,Building a time series experiment,"Building a time series experiment
Building a time series experiment Use AutoAI to create a time series experiment to predict future activity, such as stock prices or temperatures, over a specified date or time range. Time series overview A time series experiment is a method of forecasting that uses historical observations to predict future values. The experiment automatically builds many pipelines using machine learning models, such as random forest regression and Support Vector Machines (SVMs), as well as statistical time series models, such as ARIMA and Holt-Winters. Then, the experiment recommends the best pipeline according to the pipeline performance evaluated on a holdout data set or backtest data sets. Unlike a standard AutoAI experiment, which builds a set of pipelines to completion and then ranks them, a time series experiment evaluates pipelines earlier in the process and only completes and tests the best-performing pipelines. ![AutoAI time series pipeline generation process](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/ts-pipelines.png) For details on the various stages of training and testing a time series experiment, see [Time series implementation details](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-timeseries-details.html). Predicting anomalies in a time series experiment You can configure your time series experiment to predict anomalies (outliers) in your data or predictions. To configure anomaly prediction for your experiment, follow the steps in [Creating a time series anomaly prediction model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-ap.html). Using supporting features to improve predictions When you configure your time series experiment, you can choose to specify supporting features, also known as exogenous features. Supporting features are features that influence or add context to the prediction target. For example, if you are forecasting ice cream sales, daily temperature would be a logical supporting feature that would make the forecast more accurate. Leveraging future values for supporting features If you know the future values for the supporting features, you can leverage those future values when you deploy the model. For example, if you are training a model to forecast future t-shirt sales, you can include promotional discounts as a supporting feature to enhance the prediction. Inputting the future value of the promotion then makes the forecast more accurate. Data requirements These are the current data requirements for training a time series experiment: * The training data must be a single file in CSV format. * The file must contain one or more time series columns and optionally contain a timestamp column. For a list of supported date/time formats, see [AutoAI time series implementation details](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-timeseries-details.html). * If the data source contains a timestamp column, ensure that the data is sampled at uniform frequency. That is, the difference in timestamps of adjacent rows is the same. For example, data can be in increments of 1 minute, 1 hour, or 1 day. The specified timestamp is used to determine the lookback window to improve the model accuracy. Note: If the file size is larger than 1 GB, sort the data in descending order by the timestamp, and only the first 1 GB is used to train the experiment. 
* If the data source does not contain a timestamp column, ensure that the data is sampled at regular intervals and sorted in ascending order according to the sample date/time. That is, the value in the first row is the oldest, and the value in the last row is the most recent. Note: If the file size is larger than 1 GB, truncate the file so it is smaller than 1 GB. * Select what data to use when training the final pipelines. If you choose to include training data only, the generated notebooks will include a cell for retrieving the holdout data used to evaluate each pipeline. Choose data from your project or upload it from your file system or from the asset browser, then click Continue. Click the preview icon ![AutoAI preview data set icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/autoai-preview-icon.png), after the data source name to review your data. Optionally, you can add a second file as holdout data for testing the trained pipelines. Configuring a time series experiment When you configure the details for an experiment, click Yes to Enable time series and complete the experiment details. Field Description Prediction columns The time series columns that you want to predict based on the previous values. You can specify one or more columns to predict. Date/time column The column that indicates the date/time at which the time series values occur. Lookback window A parameter that indicates how many previous time series values are used to predict the current time point. Forecast window The range that you want to predict based on the data in the lookback window. The prediction summary shows you the experiment type and the metric that is selected for optimizing the experiment. Configuring experiment settings To configure more details for your time series experiment, click Experiment settings. General prediction settings On the General panel for prediction settings, you can optionally change the metric used to optimize the experiment or specify the algorithms",how-to,1,test
42E228E8218A4FDEF9F2CA0DB53B5B594A475B88,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/_nodes_TA_intro.html?context=cdpaas&locale=en,About text mining (SPSS Modeler),"About text mining (SPSS Modeler)
About text mining Today, an increasing amount of information is being held in unstructured and semi-structured formats, such as customer e-mails, call center notes, open-ended survey responses, news feeds, web forms, etc. This abundance of information poses a problem to many organizations that ask themselves: How can we collect, explore, and leverage this information? Text mining is the process of analyzing collections of textual materials in order to capture key concepts and themes and uncover hidden relationships and trends without requiring that you know the precise words or terms that authors have used to express those concepts. Although they are quite different, text mining is sometimes confused with information retrieval. While the accurate retrieval and storage of information is an enormous challenge, the extraction and management of quality content, terminology, and relationships contained within the information are crucial and critical processes.",conceptual,0,test
1C20BD9F24D670DD18B6BC28E020FBB23C742682,https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Mdl_Assist/CustomRules.html?context=cdpaas&locale=en,Creating advanced custom constraints with Python in the Decision Optimization Modeling Assistant,"Creating advanced custom constraints with Python in the Decision Optimization Modeling Assistant
Creating advanced custom constraints with Python This Decision Optimization Modeling Assistant example shows you how to create advanced custom constraints that use Python. Procedure To create a new advanced custom constraint: 1. In the Build model view of your open Modeling Assistant model, look at the Suggestions pane. If you have Display by category selected, expand the Others section to locate New custom constraint, and click it to add it to your model. Alternatively, without categories displayed, you can enter, for example, custom in the search field to find the same suggestion and click it to add it to your model.A new custom constraint is added to your model. ![New custom constraint in model, with elements highlighted to be completed by user.](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Mdl_Assist/images/newcustomconstraint.jpg) 2. Click Enter your constraint. Use [brackets] for data, concepts, variables, or parameters and enter the constraint you want to specify. For example, type No [employees] has [onCallDuties] for more than [2] consecutive days and press enter.The specification is displayed with default parameters (parameter1, parameter2, parameter3) for you to customize. These parameters will be passed to the Python function that implements this custom rule. ![Custom constraint expanded to show default parameters and function name.](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Mdl_Assist/images/customconstraintFillParameters.jpg) 3. Edit the default parameters in the specification to give them more meaningful names. For example, change the parameters to employees, on_call_duties, and limit and click enter. 4. Click function name and enter a name for the function. For example, type limitConsecutiveAssignments and click enter.Your function name is added and an Edit Python button appears. ![Custom rule showing customized parameters and Edit Python button.](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Mdl_Assist/images/customconstraintParameters.jpg) 5. Click the Edit Python button.A new window opens showing you Python code that you can edit to implement your custom rule. You can see your customized parameters in the code as follows: ![Python code showing block to be customized](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Mdl_Assist/images/CustomRulePythoncode.jpg) Notice that the code is documented with corresponding data frames and table column names as you have defined in the custom rule. The limit is not documented as this is a numerical value. 6. Optional: You can edit the Python code directly in this window, but you might find it useful to edit and debug your code in a notebook before using it here. In this case, close this window for now and in the Scenario pane, expand the three vertical dots and select Generate a notebook for this scenario that contains the custom rule. Enter a name for this notebook.The notebook is created in your project assets ready for you to edit and debug. Once you have edited, run and debugged it you can copy the code for your custom function back into this Edit Python window in the Modeling Assistant. 7. Edit the Python code in the Modeling Assistant custom rule Edit Python window. 
For example, you can define the rule for consecutive days in Python as follows:

def limitConsecutiveAssignments(self, mdl, employees, on_call_duties, limit):
    global helper_add_labeled_cplex_constraint, helper_get_index_names_for_type, helper_get_column_name_for_property
    print('Adding constraints for the custom rule')
    for employee, duties in employees.associated(on_call_duties):
        # Retrieve Day index from Day label
        duties_day_idx = duties.join(Day)
        for d in Day['index']:
            # One must enforce that there are no occurrences of (limit + 1) consecutive working days
            end = d + limit + 1
            duties_in_win = duties_day_idx[((duties_day_idx['index'] >= d) & (duties_day_idx['index'] <= end)) | (duties_day_idx['index'] <= end - 7)]
            mdl.add_constraint(mdl.sum(duties_in_win.onCallDutyVar) <= limit)

8. Click the Run button to run your model with your custom constraint. When the run is completed, you can see the results in the Explore solution view.",how-to,1,test
A447EC7366D2EB328BCE8E44A73B3A825A9B757B,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/setglobals.html?context=cdpaas&locale=en,Set Globals node (SPSS Modeler),"Set Globals node (SPSS Modeler)
Set Globals node The Set Globals node scans the data and computes summary values that can be used in CLEM expressions. For example, you can use a Set Globals node to compute statistics for a field called age and then use the overall mean of age in CLEM expressions by inserting the function @GLOBAL_MEAN(age).",conceptual,0,test
C5F5ACC006CD6F06BE3266EE98F89FABF4F6FBAF,https://dataplatform.cloud.ibm.com/docs/content/wsd/spss_troubleshooting.html?context=cdpaas&locale=en,Troubleshooting information for SPSS Modeler,"Troubleshooting information for SPSS Modeler
Troubleshooting SPSS Modeler The information in this section provides troubleshooting details for issues you may encounter in SPSS Modeler.",how-to,1,test
51426DCF985B97AF6172727AFCF353A481591560,https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-handler.html?context=cdpaas&locale=en,Create the data handler,"Create the data handler
Create the data handler Each party in a Federated Learning experiment must get a data handler to process their data. You or a data scientist must create the data handler. A data handler is a Python class that loads and transforms data so that all data for the experiment is in a consistent format. About the data handler class The data handler performs the following functions: * Accesses the data that is required to train the model. For example, reads data from a CSV file into a Pandas data frame. * Pre-processes the data so data is in a consistent format across all parties. Some example cases are as follows: * The Date column might be stored as a time epoch or timestamp. * The Country column might be encoded or abbreviated. * The data handler ensures that the data formatting is in agreement. * Optional: feature engineer as needed. The following illustration shows how a data handler is used to process data and make it consumable by the experiment: ![A use case of the data handler unifying data formats](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/fl-data-handler.svg) Data handler template A general data handler template is as follows:

# your import statements
from ibmfl.data.data_handler import DataHandler

class MyDataHandler(DataHandler):
    """"""
    Data handler for your dataset.
    """"""

    def __init__(self, data_config=None):
        super().__init__()
        self.file_name = None
        if data_config is not None:
            # This can be any string field. For example, if your data set is in csv format,
            # <your_data_file_type> can be ""CSV"", "".csv"", ""csv"", ""csv_file"" and more.
            if '<your_data_file_type>' in data_config:
                self.file_name = data_config['<your_data_file_type>']
            # extract other additional parameters from info if any.

        # load and preprocess the training and testing data
        self.load_and_preprocess_data()

        """"""
        Example:
        (self.x_train, self.y_train), (self.x_test, self.y_test) = self.load_dataset()
        """"""

    def load_and_preprocess_data(self):
        """"""
        Loads and pre-processes local datasets, and updates self.x_train, self.y_train, self.x_test, self.y_test.

        Example:
        return (self.x_train, self.y_train), (self.x_test, self.y_test)
        """"""
        pass

    def get_data(self):
        """"""
        Gets the prepared training and testing data.

        :return: ((x_train, y_train), (x_test, y_test))
                 most built-in training modules expect data is returned in this format
        :rtype: tuple

        This function should be as brief as possible. Any pre-processing operations should be
        performed in a separate function and not inside get_data(), especially computationally
        expensive ones.

        Example:
        X, y = load_somedata()
        x_train, x_test, y_train, y_test = train_test_split(X, y, test_size=TEST_SIZE, random_state=RANDOM_STATE)
        return (x_train, y_train), (x_test, y_test)
        """"""
        pass

    def preprocess(self, X, y):
        pass

Parameters * your_data_file_type: This can be any string field. For example, if your data set is in csv format, your_data_file_type can be ""CSV"", "".csv"", ""csv"", ""csv_file"" and more. 
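For reference, the following is a minimal filled-in sketch of the template, assuming the party's data is a local CSV file whose last column is the class label; the 'csv_file' configuration key, the 80/20 split, and the column layout are illustrative assumptions, not part of the documented template.

# Minimal example handler (illustrative assumptions: a 'csv_file' config key and
# a CSV whose last column is the label; adapt both to your own data set).
import pandas as pd
from sklearn.model_selection import train_test_split
from ibmfl.data.data_handler import DataHandler

class CsvDataHandler(DataHandler):
    def __init__(self, data_config=None):
        super().__init__()
        self.file_name = None
        if data_config is not None and 'csv_file' in data_config:
            self.file_name = data_config['csv_file']
        # load and preprocess the training and testing data up front
        self.load_and_preprocess_data()

    def load_and_preprocess_data(self):
        # Read the local file and split features from the label column.
        df = pd.read_csv(self.file_name)
        X, y = df.iloc[:, :-1].values, df.iloc[:, -1].values
        self.x_train, self.x_test, self.y_train, self.y_test = train_test_split(
            X, y, test_size=0.2, random_state=42)

    def get_data(self):
        # Keep this brief; heavy preprocessing belongs in load_and_preprocess_data().
        return (self.x_train, self.y_train), (self.x_test, self.y_test)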
Return a data generator defined by Keras or Tensorflow The following is a code example that needs to be included as part of the get_data function to return a data generator defined by Keras or Tensorflow:

train_gen = ImageDataGenerator(rotation_range=8,
                               width_shift_range=0.08,
                               shear_range=0.3,
                               height_shift_range=0.08,
                               zoom_range=0.08)
train_datagenerator = train_gen.flow(x_train, y_train, batch_size=64)
return train_datagenerator

Data handler examples * [MNIST Keras data handler](https://github.com/IBMDataScience/sample-notebooks/blob/master/Files/mnist_keras_data_handler.py) * [Adult XGBoost data handler](https://github.com/IBMDataScience/sample-notebooks/blob/master/Files/adult_sklearn_data_handler.py) Parent topic:[Creating a Federated Learning experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-start.html)",conceptual,0,test
5DCC543A106EC708FF97817AA0CFDEF8CB89894D,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/ensemblenodeslots.html?context=cdpaas&locale=en,ensemblenode properties,"ensemblenode properties
ensemblenode properties ![Ensemble node icon](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/images/ensemblenodeicon.png)The Ensemble node combines two or more model nuggets to obtain more accurate predictions than can be gained from any one model. ensemblenode properties Table 1. ensemblenode properties ensemblenode properties Data type Property description ensemble_target_field field Specifies the target field for all models used in the ensemble. filter_individual_model_output flag Specifies whether scoring results from individual models should be suppressed. flag_ensemble_method Voting <br>ConfidenceWeightedVoting <br>RawPropensityWeightedVoting <br>AdjustedPropensityWeightedVoting <br>HighestConfidence <br>AverageRawPropensity <br>AverageAdjustedPropensity Specifies the method used to determine the ensemble score. This setting applies only if the selected target is a flag field. set_ensemble_method Voting <br>ConfidenceWeightedVoting <br>HighestConfidence Specifies the method used to determine the ensemble score. This setting applies only if the selected target is a nominal field. flag_voting_tie_selection Random <br>HighestConfidence <br>RawPropensity <br>AdjustedPropensity If a voting method is selected, specifies how ties are resolved. This setting applies only if the selected target is a flag field. set_voting_tie_selection Random <br>HighestConfidence If a voting method is selected, specifies how ties are resolved. This setting applies only if the selected target is a nominal field. calculate_standard_error flag If the target field is continuous, a standard error calculation is run by default to calculate the difference between the measured or estimated values and the true values; and to show how close those estimates matched.",conceptual,0,test
82512A3915BF43DF08D9106027A67D5E059B2719,https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-SPSS-multiple-input.html?context=cdpaas&locale=en,Creating an SPSS Modeler batch job with multiple data sources,"Creating an SPSS Modeler batch job with multiple data sources
Creating an SPSS Modeler batch job with multiple data sources In an SPSS Modeler flow, it's common to have multiple import and export nodes, where multiple import nodes can be fetching data from one or more relational databases. Learn how to use Watson Machine Learning to create an SPSS Modeler batch job with multiple data sources from relational databases. Note:The examples use IBM Db2 and IBM Db2 Warehouse, referred to in examples as dashdb. Connecting to multiple relational databases as input to a batch job The number of import nodes in an SPSS Modeler flow can vary. You might use as many as 60 or 70. However, the number of distinct connections to databases in these cases are just a few, though the table names that are accessed through the connections vary. Rather than specifying the details for every table connection, the approach that is described here focuses on the database connections. Therefore, the batch jobs accept a list of data connections or references by node name that are mapped to connection names in the SPSS Modeler flow's import nodes. For example, assume that if a flow has 30 nodes, only three database connections are used to connect to 30 different tables. In this case, you submit three connections (C1, C2, and C3) to the batch job. C1, C2, and C3 are connection names in the import node of the flow and the node name in the input of the batch job. When a batch job runs, the data reference for a node is provided by mapping the node name with the connection name in the import node. This example illustrates the steps for creating the mapping. The following diagram shows the flow from model creation to job submission: ![SPSS Modeler job with multiple inputs](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/word_SPSS-multiple-input-job.svg) Limitation: The connection reference for a node in a flow is overridden by the reference that is received from the batch job. However, the table name in the import or export node is not overridden. Deployment scenario with example In this example, an SPSS model is built by using 40 import nodes and a single output. The model has the following configuration: * Connections to three databases: 1 Db2 Warehouse (dashDB) and 2 Db2. * The import nodes are read from 40 tables (30 from Db2 Warehouse and 5 each from the Db2 databases). * A single output table is written to a Db2 database. ![SPSS Modeler flow with multiple inputs](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/word_SPSS-multiple-input-job2.svg) Example These steps demonstrate how to create the connections and identify the tables. 1. Create a connection in your project. To run the SPSS Modeler flow, you start in your project and create a connection for each of the three databases your model connects to. You then configure each import node in the flow to point to a table in one of the connected databases. For this example, the database connections in the project are named dashdb_conn, db2_conn1, and db2_conn2. 2. Configure Data Asset to import nodes in your SPSS Modeler flow with connections. Configure each node in the flow to reference one of the three connections you created (dashdb_conn, db2_conn1, and db2_conn2), then specify a table for each node. Note: You can change the name of the connection at the time of the job run. The table names that you select in the flow are referenced when the job runs. You can't overwrite or change them. 3. Save the SPSS model to the Watson Machine Learning repository. 
For this example, it's helpful to provide the input and output schema when you are saving the model. It simplifies the process of identifying each input when you create and submit the batch job in the Watson Studio user interface. Connections that are referenced in the Data Asset nodes of the SPSS Modeler flow must be provided in the node name field of the input schema. To find the node name, double-click the Data Asset import node in your flow to open its properties: ![Data Asset import node name](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/spss-node-name.png) Note: SPSS models that are saved without schemas are still supported for jobs, but you must enter node name fields manually and provide the data asset when you submit the job. This code sample shows how to save the input schema when you save the model (Endpoint: POST /v4/models). { ""name"": ""SPSS Drug Model"", ""label_column"": ""label"", ""type"": ""spss-modeler_18.1"", ""runtime"": { ""href"": ""/v4/runtimes/spss-modeler_18.1"" }, ""space"": { ""href"": ""/v4/spaces/<space_id>"" }, ""schemas"": { ""input"": [ { ""id"": ""dashdb_conn"", ""fields"": [] }, { ""id"": ""db2_conn1 "", ""fields"": [] }, { ""id"": ""db2_conn2"", ""fields"": [] } ], ""output"": [{ ""id"": ""db2_conn2 "", ""fields"": [] }] } } Note: The number of fields in each of these connections doesn't matter. They’re not validated or used. What's important is the number of connections that are used. 4. Create the batch deployment for the SPSS model.",how-to,1,test
E6B5EAD096E68A255C5526ADD4C828534891C090,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/gmm.html?context=cdpaas&locale=en,Gaussian Mixture node (SPSS Modeler),"Gaussian Mixture node (SPSS Modeler)
Gaussian Mixture node A Gaussian Mixture© model is a probabilistic model that assumes all the data points are generated from a mixture of a finite number of Gaussian distributions with unknown parameters. One can think of mixture models as generalizing k-means clustering to incorporate information about the covariance structure of the data as well as the centers of the latent Gaussians.^1^ The Gaussian Mixture node in watsonx.ai exposes the core features and commonly used parameters of the Gaussian Mixture library. The node is implemented in Python. For more information about Gaussian Mixture modeling algorithms and parameters, see [Gaussian Mixture Models](http://scikit-learn.org/stable/modules/mixture.html) and [Gaussian Mixture](https://scikit-learn.org/stable/modules/generated/sklearn.mixture.GaussianMixture.html). ^2^ ^1^ [User Guide.](https://scikit-learn.org/stable/modules/mixture.html)Gaussian mixture models. Web. © 2007 - 2017. scikit-learn developers. ^2^ [Scikit-learn: Machine Learning in Python](http://jmlr.csail.mit.edu/papers/v12/pedregosa11a.html), Pedregosa et al., JMLR 12, pp. 2825-2830, 2011.",conceptual,0,test
2ED4D7860687B2EF6F85FF81B6AF4CFD2C6EA839,https://dataplatform.cloud.ibm.com/docs/content/wsd/tutorials/tut_forecast_flow.html?context=cdpaas&locale=en,Creating the flow (SPSS Modeler),"Creating the flow (SPSS Modeler)
Creating the flow 1. Create a new flow and add a Data Asset node that points to catalog_seasfac.csv. 2. Connect a Type node to the Data Asset node and double-click it to open its properties. 3. Click Read Values. For the men field, set the role to Target. Figure 1. Specifying the target field ![Specifying the target field](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_forecast_fields.png) 4. Set the role for all other fields to None and click Save. 5. Attach a Time Plot graph node to the Type node and double-click it. Figure 2. Plotting the time series ![Plotting the time series](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_forecast_plot.png) 6. For the Plot, add the field men to the Series list. 7. Select Use custom x axis field label and select date. 8. Deselect the Normalize option and click Save. 9. Run the flow.",how-to,1,test
98B447B5AF1CD17524E2BA82FED83B8966DDFEFB,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/evalchartnodeslots.html?context=cdpaas&locale=en,evaluationnode properties,"evaluationnode properties
evaluationnode properties ![Evaluation node icon](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/images/evaluationnodeicon.png)The Evaluation node helps to evaluate and compare predictive models. The evaluation chart shows how well models predict particular outcomes. It sorts records based on the predicted value and confidence of the prediction. It splits the records into groups of equal size (quantiles) and then plots the value of the business criterion for each quantile from highest to lowest. Multiple models are shown as separate lines in the plot. evaluationnode properties Table 1. evaluationnode properties evaluationnode properties Data type Property description chart_type Gains <br>Response <br>Lift <br>Profit <br>ROI <br>ROC inc_baseline flag field_detection_method Metadata <br>Name use_fixed_cost flag cost_value number cost_field string use_fixed_revenue flag revenue_value number revenue_field string use_fixed_weight flag weight_value number weight_field field n_tile Quartiles <br>Quintiles <br>Deciles <br>Vingtiles <br>Percentiles <br>1000-tiles cumulative flag style Line <br>Point point_type Rectangle <br>Dot <br>Triangle <br>Hexagon <br>Plus <br>Pentagon <br>Star <br>BowTie <br>HorizontalDash <br>VerticalDash <br>IronCross <br>Factory <br>House <br>Cathedral <br>OnionDome <br>ConcaveTriangle <br>OblateGlobe <br>CatEye <br>FourSidedPillow <br>RoundRectangle <br>Fan export_data flag data_filename string delimiter string new_line flag inc_field_names flag inc_best_line flag inc_business_rule flag business_rule_condition string plot_score_fields flag score_fields [field1 ... fieldN] target_field field use_hit_condition flag hit_condition string use_score_expression flag score_expression string caption_auto flag split_by_partition boolean If a partition field is used to split records into training, test, and validation samples, use this option to display a separate evaluation chart for each partition. use_profit_criteria boolean Enables profit criteria. use_grid boolean Displays grid lines.",conceptual,0,test
C926DFB3758881E6698F630E496F3817101E4176,https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai_example_binary_classifier.html?context=cdpaas&locale=en,AutoAI tutorial: Build a Binary Classification Model,"AutoAI tutorial: Build a Binary Classification Model
AutoAI tutorial: Build a Binary Classification Model This tutorial guides you through training a model to predict if a customer is likely to buy a tent from an outdoor equipment store. Create an AutoAI experiment to build a model that analyzes your data and selects the best model type and algorithms to produce, train, and optimize pipelines. After you review the pipelines, save one as a model, deploy it, and then test it to get a prediction. Watch this video to see a preview of the steps in this tutorial. Video disclaimer: Some minor steps and graphical elements in this video might differ from your platform. This video provides a visual method to learn the concepts and tasks in this documentation. * Transcript Synchronize transcript with video Time Transcript 00:00 In this video, you will see how to build a binary classification model that assesses the likelihood that a customer of an outdoor equipment company will buy a tent. 00:11 This video uses a data set called ""GoSales"", which you'll find in the Gallery. 00:16 View the data set. 00:20 The feature columns are ""GENDER"", ""AGE"", ""MARITAL_STATUS"", and ""PROFESSION"" and contain the attributes on which the machine learning model will base predictions. 00:31 The label columns are ""IS_TENT"", ""PRODUCT_LINE"", and ""PURCHASE_AMOUNT"" and contain historical outcomes that the models could be trained to predict. 00:44 Add this data set to the ""Machine Learning"" project and then go to the project. 00:56 You'll find the GoSales.csv file with your other data assets. 01:02 Add to the project an ""AutoAI experiment"". 01:08 This project already has the Watson Machine Learning service associated. 01:13 If you haven't done that yet, first, watch the video showing how to run an AutoAI experiment based on a sample. 01:22 Just provide a name for the experiment and then click ""Create"". 01:30 The AutoAI experiment builder displays. 01:33 You first need to load the training data. 01:36 In this case, the data set will be from the project. 01:40 Select the GoSales.csv file from the list. 01:45 AutoAI reads the data set and lists the columns found in the data set. 01:50 Since you want the model to predict the likelihood that a given customer will purchase a tent, select ""IS_TENT"" as the column to predict. 01:59 Now, edit the experiment settings. 02:03 First, look at the settings for the data source. 02:06 If you have a large data set, you can run the experiment on a subsample of rows and you can configure how much of the data will be used for training and how much will be used for evaluation. 02:19 The default is a 90%/10% split, where 10% of the data is reserved for evaluation. 02:27 You can also select which columns from the data set to include when running the experiment. 02:35 On the ""Prediction"" panel, you can select a prediction type. 02:39 In this case, AutoAI analyzed your data and determined that the ""IS_TENT"" column contains true-false information, making this data suitable for a ""Binary classification"" model. 02:52 The positive class is ""TRUE"" and the recommended metric is ""Accuracy"". 03:01 If you'd like, you can choose specific algorithms to consider for this experiment and the number of top algorithms for AutoAI to test, which determines the number of pipelines generated. 03:16 On the ""Runtime"" panel, you can review other details about the experiment. 03:21 In this case, accepting the default settings makes the most sense. 03:25 Now, run the experiment. 
03:28 AutoAI first loads the data set, then splits the data into training data and holdout data. 03:37 Then wait, as the ""Pipeline leaderboard"" fills in to show the generated pipelines using different estimators, such as XGBoost classifier, or enhancements such as hyperparameter optimization and feature engineering, with the pipelines ranked based on the accuracy metric. 03:58 Hyperparameter optimization is a mechanism for automatically exploring a search space for potential hyperparameters, building a series of models and comparing the models using metrics of interest. 04:10 Feature engineering attempts to transform the raw data into the combination of features that best represents the problem to achieve the most accurate prediction. 04:21 Okay, the run has completed. 04:24 By default, you'll see the ""Relationship map"". 04:28 But you can swap views to see the ""Progress map"". 04:32 You may want to start with comparing the pipelines. 04:36 This chart provides metrics for the eight pipelines, viewed by cross validation score or by holdout score. 04:46 You can see the pipelines ranked based on other metrics, such as average precision. 04:55 Back on the ""Experiment summary"" tab, expand a pipeline to view the model evaluation measures and ROC curve. 05:03 During AutoAI training, your data set is split into two parts: training data and holdout data. 05:11 The training data is used by the AutoAI training",how-to,1,test
B193A2795BDEF17A5D204CDD18188A767E2FE7B7,https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tokens.html?context=cdpaas&locale=en,Tokens and tokenization,"Tokens and tokenization
Tokens and tokenization A token is a collection of characters that has semantic meaning for a model. Tokenization is the process of converting the words in your prompt into tokens. You can monitor foundation model token usage in a project on the Environments page on the Resource usage tab. Converting words to tokens and back again Prompt text is converted to tokens before being processed by foundation models. The correlation between words and tokens is complex: * Sometimes a single word is broken into multiple tokens * The same word might be broken into a different number of tokens, depending on context (such as: where the word appears, or surrounding words) * Spaces, newline characters, and punctuation are sometimes included in tokens and sometimes not * The way words are broken into tokens varies from language to language * The way words are broken into tokens varies from model to model For a rough idea, a sentence that has 10 words could be 15 to 20 tokens. The raw output from a model is also tokens. In the Prompt Lab in IBM watsonx.ai, the output tokens from the model are converted to words to be displayed in the prompt editor. Example The following image shows how this sample input might be tokenized: > Tomatoes are one of the most popular plants for vegetable gardens. Tip for success: If you select varieties that are resistant to disease and pests, growing tomatoes can be quite easy. For experienced gardeners looking for a challenge, there are endless heirloom and specialty varieties to cultivate. Tomato plants come in a range of sizes. ![Visualization of tokenization](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/fm-tokenization.png) Notice a few interesting points: * Some words are broken into multiple tokens and some are not * The word ""Tomatoes"" is broken into multiple tokens at the beginning, but later ""tomatoes"" is all one token * Spaces are sometimes included at the beginning of a word-token and sometimes spaces are a token all by themselves * Punctuation marks are tokens Token limits Every model has an upper limit to the number of tokens in the input prompt plus the number of tokens in the generated output from the model (sometimes called context window length, context window, context length, or maximum sequence length.) In the Prompt Lab, an informational message shows how many tokens are used in a given prompt submission and the resulting generated output. In the Prompt Lab, you use the Max tokens parameter to specify an upper limit on the number of output tokens for the model to generate. The maximum number of tokens that are allowed in the output differs by model. For more information, see the Maximum tokens information in [Supported foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-models.html). Parent topic:[Foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-overview.html)",how-to,1,test
FF6C435ADBD62DE03C06CE4F90343D3CD04F9E8F,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/extension_process.html?context=cdpaas&locale=en,Extension Transform node (SPSS Modeler),"Extension Transform node (SPSS Modeler)
Extension Transform node With the Extension Transform node, you can take data from an SPSS Modeler flow and apply transformations to the data using R scripting or Python for Spark scripting. When the data has been modified, it's returned to the flow for further processing, model building, and model scoring. The Extension Transform node makes it possible to transform data using algorithms that are written in R or Python for Spark, and enables you to develop data transformation methods that are tailored to a particular problem. After adding the node to your canvas, double-click the node to open its properties.",conceptual,0,test
B61649DF5425DEA0C1F16942BDE0EEC79B3E4F80,https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/github-integration.html?context=cdpaas&locale=en,Publishing notebooks on GitHub,"Publishing notebooks on GitHub
Publishing notebooks on GitHub To collaborate with stakeholders and other data scientists, you can publish your notebooks in GitHub repositories. You can also use GitHub to back up notebooks for source code management. Watch this video to see how to enable GitHub integration. Video disclaimer: Some minor steps and graphical elements in this video might differ from your platform. This video provides a visual method to learn the concepts and tasks in this documentation. * Transcript Synchronize transcript with video Time Transcript 00:00 This video shows you how to publish notebooks from your Watson Studio project to your GitHub account. 00:07 Navigate to your profile and settings. 00:11 On the ""Integrations"" tab, visit the link to generate a GitHub personal access token. 00:17 Provide a descriptive name for the token and select the repo and gist scopes, then generate the token. 00:29 Copy the token, return to the GitHub integration settings, and paste the token. 00:36 The token is validated when you save it to your profile settings. 00:42 Now, navigate to your projects. 00:44 You enable GitHub integration at the project level on the ""Settings"" tab. 00:50 Simply scroll to the bottom and paste the existing GitHub repository URL. 00:56 You'll find that on the ""Code"" tab in the repo. 01:01 Click ""Update"" to make the connection. 01:05 Now, go to the ""Assets"" tab and open the notebook you want to publish. 01:14 Notice that this notebook has the credentials replaced with X's. 01:19 It's a best practice to remove or replace credentials before publishing to GitHub. 01:24 So, this notebook is ready for publishing. 01:27 You can provide the target path along with a commit message. 01:31 You also have the option to publish content without hidden code, which means that any cells in the notebook that began with the hidden cell comment will not be published. 01:42 When you're, ready click ""Publish"". 01:45 The message tells you that the notebook was published successfully and provides links to the notebook, the repository, and the commit. 01:54 Let's take a look at the commit. 01:57 So, there's the commit, and you can navigate to the repository to see the published notebook. 02:04 Lastly, you can publish as a gist. 02:07 Gists are another way to share your work on GitHub. 02:10 Every gist is a git repository, so it can be forked and cloned. 02:15 There are two types of gists: public and secret. 02:19 If you start out with a secret gist, you can convert it to a public gist later. 02:24 And again, you have the option to remove hidden cells. 02:29 Follow the link to see the published gist. 02:32 So that's the basics of Watson Studio's GitHub integration. 02:37 Find more videos in the Cloud Pak for Data as a Service documentation. Enabling access to GitHub from your account Before you can publish notebooks on GitHub, you must enable your IBM watsonx account to access GitHub. You enable access by creating a personal access token with the required access scope in GitHub and linking the token to your IBM watsonx account. Follow these steps to create a personal access token: 1. Click your avatar in the header, and then click Profile and settings. 2. Go to the Integrations tab and click the GitHub personal access tokens link on the dialog and generate a new token. 3. On the New personal access token page, select repo scope and then click to generate a token. 4. Copy the generated access token and paste it in the GitHub integration dialog window in IBM watsonx. 
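As an optional sanity check that is not part of the documented steps, you can confirm the token authenticates before pasting it into IBM watsonx by calling the GitHub REST API from Python; the token value below is a placeholder.

# Optional: verify a GitHub personal access token before pasting it into IBM watsonx.
# The token string is a placeholder; a 200 response with your login confirms it works.
import requests

token = 'ghp_your_personal_access_token'  # placeholder, never hard-code real tokens
resp = requests.get('https://api.github.com/user',
                    headers={'Authorization': 'token ' + token})
print(resp.status_code, resp.json().get('login'))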
Linking a project to a GitHub repository After you have saved the access token, your project must be connected to an existing GitHub repository. You can only link to one existing GitHub repository from a project. Private repositories are supported. To link a project to an existing GitHub repository, you must have administrator permission to the project. All project collaborators who have administrator or editor permission can publish files to this GitHub repository. However, these users must have permission to access the repository. Granting user permissions to repositories must be done in GitHub. To connect a project to an existing GitHub repository: 1. Select the Manage tab and go to the Services and Integrations page. 2. Click the Third-party integrations tab. 3. Click Connect integration. 4. Enter your generated access token from GitHub. Now you can begin publishing notebooks on GitHub. Note: For information on how to change your Git integration, refer to [Managing your integrations](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/personal-settings.htmlintegrations). Publishing a notebook on GitHub To publish a notebook on GitHub: 1. Open the notebook in edit mode. 2. Click the GitHub integration icon (![Shows the upload icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/upload.png)) and select Publish on GitHub from the opened notebook's action bar. When you enter the name of the file you want",how-to,1,test
57AB3726FA10435D26878C626F61988F7305B9E8,https://dataplatform.cloud.ibm.com/docs/content/dataview/chart_creation_fromgallery.html?context=cdpaas&locale=en,Building a chart from the chart type gallery,"Building a chart from the chart type gallery
Building a chart from the chart type gallery Use the chart type gallery to build charts. The following are the general steps for building a chart from the gallery. 1. In the Chart Type section, select a chart category. A preview version of the selected chart type is shown on the chart canvas. 2. If the canvas already displays a chart, the new chart replaces the chart's axis set and graphic elements. 1. Depending on the selected chart type, the available variables are presented under a number of different headings in the Details pane (for example, Category for bar charts, X-axis and Y-axis for line charts). Select the appropriate variables for the selected chart type. 3. Click the Save visualization to project control to save the visualization to the project. You can also select Create a new asset from the visualization, and provide a visualization asset name, description, and chart name. 4. Click Apply to save the visualization to the project. The new visualization asset is now available under the Assets tab.",how-to,1,test
23060B35041C9ABD00099B1E0B1D83DAFF453C6D,https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-collab-roles.html?context=cdpaas&locale=en,Collaboration roles for governance,"Collaboration roles for governance
Collaboration roles for governance Review the collaboration roles for managing access to governance tools such as inventories, AI use cases, and evaluations. User roles and permissions for governance The permissions that allow you to work with governance artifacts depend on your watsonx roles: * IAM Platform access roles determine your permissions for the IBM Cloud account. At least the Viewer role is required to work with services. * IAM Service access roles determine your permissions within services. * Workspace collaborator roles determine what actions you have permission to perform within workspaces in IBM watsonx. For details, see [Levels of user access roles in IBM watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/roles.html). Roles for governance If you have the IAM Platform Admin role, you can: * Provision watsonx.governance * Create inventory * Create platform assets catalog * Enable external model tracking * Create attachment fact definitions * Customize report templates If you have these workspace roles for an inventory, you can: Governance permissions for inventories Enabled permission Viewer Editor Admin/Owner Create and edit AI use cases ✓ ✓ View AI use cases ✓ ✓ ✓ Add collaborators to an inventory ✓ Delete inventory ✓ Evaluate model deployment ✓ ✓ Add collaborators to a use case ✓ ✓ Generate reports ✓ ✓ ✓ Add attachments to a use case ✓ ✓ Update asset type definitions <br>(For example: model_entry_user, modelfacts_user) ✓ If you have these workspace roles for an AI use case, you can: Governance permissions for AI use cases Enabled permission Editor/Collaborator Admin/Owner Delete AI use cases ✓ Add collaborators to the use case ✓ Edit AI use case ✓ ✓ Edit use case ✓ ✓ Add values to custom facts ✓ ✓ Upload attachments to use case ✓ ✓ If you have these workspace roles for a project or space, you can: Governance permissions for project and space roles Enabled permission Viewer Editor/Collaborator Admin/Owner Track/untrack prompt template ✓ ✓ Upload attachments to use case ✓ ✓ Add values to custom facts ✓ ✓ View AI factsheet ✓ ✓ ✓ Generate report ✓ ✓ ✓ Learn more Parent topic:[Governing assets in AI use cases](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-use-cases.html)",conceptual,0,test
C038BA342A00562BCB7A569E4E2ACB7349C9CEF9,https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-prompt-lab.html?context=cdpaas&locale=en,Quick start: Prompt a foundation model using Prompt Lab,"Quick start: Prompt a foundation model using Prompt Lab
Quick start: Prompt a foundation model using Prompt Lab Take this tutorial to learn how to use the Prompt Lab in watsonx.ai. There are usually multiple ways to prompt a foundation model for a successful result. In the Prompt Lab, you can experiment with prompting different foundation models, explore sample prompts, as well as save and share your best prompts. See [Prompt tips](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-tips.html) to help you successfully prompt most text-generating foundation models. Required services : Watson Studio : Watson Machine Learning Your basic workflow includes these tasks: 1. Open a project. Projects are where you can collaborate with others to work with data. 2. Open the Prompt Lab. The Prompt Lab lets you experiment with prompting different foundation models, explore sample prompts, as well as save and share your best prompts. 3. Type your prompt in the prompt editor. You can type prompts in either freeform or structured mode. 4. Select the model to use. You can submit your prompt to any of the models supported by watsonx.ai. 5. Save your work as a project asset. Saving your work as a project asset makes your work available to collaborators in the current project. Read about prompting a foundation model Foundation models are very large AI models. They have billions of parameters and are trained on terabytes of data. Foundation models can perform a variety of tasks, including text-, code-, or image generation, classification, conversation, and more. Large language models are a subset of foundation models used for text- and code-related tasks. In IBM watsonx.ai, there is a collection of deployed large language models that you can use, as well as tools for experimenting with prompts. [Read more about Prompt Lab](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-lab.html) Watch a video about prompting a foundation model ![Watch Video](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/lc-video.png) Watch this video to preview the steps in this tutorial. There might be slight differences in the user interface shown in the video. The video is intended to be a companion to the written tutorial. This video provides a visual method to learn the concepts and tasks in this documentation. 
Try a tutorial to prompt a foundation model In this tutorial, you will complete these tasks: * [Task 1: Open a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-prompt-lab.html?context=cdpaas&locale=enstep01) * [Task 2: Use the Prompt Lab in Freeform mode](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-prompt-lab.html?context=cdpaas&locale=enstep02) * [Task 3: Use the Prompt Lab in Structured mode](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-prompt-lab.html?context=cdpaas&locale=enstep03) * [Task 4: Use the sample prompts](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-prompt-lab.html?context=cdpaas&locale=enstep04) * [Task 5: Choose a foundation model](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-prompt-lab.html?context=cdpaas&locale=enstep05) * [Task 6: Adjust model parameters](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-prompt-lab.html?context=cdpaas&locale=enstep06) * [Task 7: Save your work](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-prompt-lab.html?context=cdpaas&locale=enstep07) Expand all sections * Tips for completing this tutorial ### Use the video picture-in-picture Tip: Start the video, then as you scroll through the tutorial, the video moves to picture-in-picture mode. Close the video table of contents for the best experience with picture-in-picture. You can use picture-in-picture mode so you can follow the video as you complete the tasks in this tutorial. Click the timestamps for each task to follow along.The following animated image shows how to use the video picture-in-picture and table of contents features: ![How to use picture-in-picture and chapters](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/pip-and-chapters.gif){: width=""560px"" height=""315px"" data-tearsheet=""this""} ### Get help in the community If you need help with this tutorial, you can ask a question or find an answer in the [watsonx.ai Community discussion forum](https://community.ibm.com/community/user/watsonx/communities/community-home/digestviewer?communitykey=81927b7e-9a92-4236-a0e0-018a27c4ad6e){: new_window}. ### Set up your browser windows For the optimal experience completing this tutorial, open Cloud Pak for Data in one browser window, and keep this tutorial page open in another browser window to switch easily between the two applications. Consider arranging the two browser windows side-by-side to make it easier to follow along. ![Side-by-side tutorial and UI](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/tutorial-side-by-side-wx.png){: width=""560px"" height=""315px"" data-tearsheet=""this""} Tip: If you encounter a guided tour while completing this tutorial in the user interface, click Maybe later. [Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-prompt-lab.html?context=cdpaas&locale=envideo-preview) * Task 1: Open a project You need a project to store Prompt Lab assets. Watch a video to see how to create a sandbox project and associate a service. Then follow the steps to verify that you have an existing project or create a sandbox project. This video provides a visual method to learn the concepts and tasks in this documentation. 1. From the watsonx home screen, scroll to the Projects section. 
If you see any projects listed, then skip to [Task 2](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-prompt-lab.html?context=cdpaas&locale=enstep02). If you don't see any projects, then follow these steps to create a project. 1. Click Create a sandbox project. When the project is created, you will see the sandbox project in the Projects section. For more information or to watch a video, see [Creating a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/projects.html){: new_window}. \ ![Checkpoint icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/checkmark--filled-blue.svg){: iih} Check your progress The following image shows the home screen with the sandbox listed in the Projects section. You are now ready to open the Prompt Lab. ![Home screen with sandbox project listed.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/wx-home-screen.png){: width=""100%"" } [Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-prompt-lab.html?context=cdpaas&locale=envideo-preview) * Task 2: Use the Prompt Lab in Freeform mode ![preview",how-to,1,test
0301D6611A36E44C345083F6E2C3BDE58DE59982,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/scripts_and_streams.html?context=cdpaas&locale=en,Types of scripts,"Types of scripts
Types of scripts SPSS Modeler uses three types of scripts: * Flow scripts are stored as a flow property and are therefore saved and loaded with a specific flow. For example, you can write a flow script that automates the process of training and applying a model nugget. You can also specify that whenever a particular flow runs, the script should be run instead of the flow's canvas content. * Standalone scripts aren't associated with any particular flow and are saved in external text files. You might use a standalone script, for example, to manipulate multiple flows together. * SuperNode scripts are stored as a SuperNode flow property. SuperNode scripts are only available in terminal SuperNodes. You might use a SuperNode script to control the execution sequence of the SuperNode contents. For nonterminal (import or process) SuperNodes, you can define properties for the SuperNode or the nodes it contains in your flow script directly.",conceptual,0,test
14F850B810E969CE2646D5641300FB407A6C49C5,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/jython/clementine/python_syntax_strings.html?context=cdpaas&locale=en,Strings,"Strings
Strings A string is an immutable sequence of characters that's treated as a value. Strings support all of the immutable sequence functions and operators that result in a new string. For example, ""abcdef""[1:4] results in the output ""bcd"". In Python, characters are represented by strings of length one. String literals are defined by the use of single or triple quoting. Strings that are defined using single quotes can't span lines, while strings that are defined using triple quotes can. You can enclose a string in single quotes (') or double quotes (""). A string enclosed in one quoting character may contain the other quoting character unescaped, or the same quoting character escaped, that is, preceded by the backslash (\) character.",conceptual,0,test
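The quoting and slicing rules described above are standard Python behavior. A minimal, self-contained sketch (the values are invented for illustration):

```python
s = "abcdef"
print(s[1:4])                # slicing an immutable sequence -> "bcd"

single = 'He said "hello"'   # the other quote character can appear unescaped
escaped = 'It\'s escaped'    # or the same quote character, escaped with a backslash
multi = """Triple-quoted strings
can span lines."""

print(single)
print(escaped)
print(multi)
```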
595BB1738027C777C1EB5A69631587923690ABC4,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_overview_times_and_dates.html?context=cdpaas&locale=en,Working with strings (SPSS Modeler),"Working with strings (SPSS Modeler)
Working with strings There are a number of operations available for strings. * Converting a string to uppercase or lowercase—uppertolower(CHAR). * Removing specified characters, such as ID_ or $ , from a string variable—stripchar(CHAR,STRING). * Determining the length (number of characters) for a string variable—length(STRING). * Checking the alphabetical ordering of string values—alphabefore(STRING1, STRING2). * Removing leading or trailing white space from values—trim(STRING), trim_start(STRING), or trimend(STRING). * Extract the first or last n characters from a string—startstring(LENGTH, STRING) or endstring(LENGTH, STRING). For example, suppose you have a field named item that combines a product name with a four-digit ID code (ACME CAMERA-D109). To create a new field that contains only the four-digit code, specify the following formula in a Derive node: endstring(4, item) * Matching a specific pattern—STRING matches PATTERN. For example, to select persons with ""market"" anywhere in their job title, you could specify the following in a Select node: job_title matches ""market"" * Replacing all instances of a substring within a string—replace(SUBSTRING, NEWSUBSTRING, STRING). For example, to replace all instances of an unsupported character, such as a vertical pipe ( | ), with a semicolon prior to text mining, use the replace function in a Filler node. Under Fill in fields in the node properties, select all fields where the character may occur. For the Replace condition, select Always, and specify the following condition under Replace with. replace('|',';',@FIELD) * Deriving a flag field based on the presence of a specific substring. For example, you could use a string function in a Derive node to generate a separate flag field for each response with an expression such as: hassubstring(museums,""museum_of_design"") See [String functions](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_function_ref_string.htmlclem_function_ref_string) for more information.",conceptual,0,test
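The functions above are CLEM, not Python, but each operation maps closely onto a plain Python string method. A rough, illustrative comparison (the field value is invented):

```python
item = "ACME CAMERA-D109"

print(len(item))                  # length(STRING)
print(item.lower())               # uppertolower(CHAR) analogue
print(item.strip())               # trim(STRING); lstrip()/rstrip() trim one side only
print(item[-4:])                  # endstring(4, item)   -> "D109"
print(item[:4])                   # startstring(4, item) -> "ACME"
print("market" in "Marketing Manager".lower())   # matches / hassubstring analogue -> True
print("a|b|c".replace("|", ";"))                 # replace('|', ';', @FIELD) analogue
```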
451244D4E0CD8A3E96CD15FFAF0F3BDA526CCED2,https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-create.html?context=cdpaas&locale=en,Creating deployment spaces,"Creating deployment spaces
Creating deployment spaces Create a deployment space to store your assets, deploy assets, and manage your deployments. Required permissions: All users in your IBM Cloud account with the Editor IAM platform access role for all IAM enabled services or for Cloud Pak for Data can manage to create deployment spaces. For more information, see [IAM Platform access roles](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/roles.htmlplatform). A deployment space is not associated with a project. You can publish assets from multiple projects to a space. For example, you might have a test space for evaluating deployments, and a production space for deployments you want to deploy in business applications. Follow these steps to create a deployment space: 1. From the navigation menu, select Deployments > New deployment space. Enter a name for your deployment space. 2. Optional: Add a description and tags. 3. Select a storage service to store your space assets. * If you have a [Cloud Object Storage](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/storage-options.html) repository that is associated with your IBM Cloud account, choose a repository from the list to store your space assets. * If you do not have a [Cloud Object Storage](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/storage-options.html) repository that is associated with your IBM Cloud account, you are prompted to create one. 4. Optional: If you want to deploy assets from your space, select a machine learning service instance to associate with your deployment space. To associate a machine learning instance to a space, you must: * Be a space administrator. * Have admin access to the machine learning service instance that you want to associate with the space. For more information, see [Creating a Watson Machine Learning service instance](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-service-instance.html). Tip: If you want to evaluate assets in the space, switch to the Manage tab and associate a Watson OpenScale instance. 5. Optional: Assign the space to a deployment stage. Deployment stages are used for [MLOps](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/modelops-overview.html), to manage access for assets in various stages of the AI lifecycle. They are also used in [governance](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-overview.html), for tracking assets. Choose from: * Development for assets under development. Assets that are tracked for governance are displayed in the Develop stage of their associated use case. * Testing for assets that are being validated. Assets that are tracked for governance are displayed in the Validate stage of their associated use case. * Production for assets in production. Assets that are tracked for governance are displayed in the Operate stage of their associated use case. 6. Optional: Upload space assets, such as [exported project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/export-project.html) or [exported space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-export.html). If the imported space is encrypted, you must enter the password. Tip: If you get an import error, clear your browser cookies and then try again. 7. Click Create. Viewing and managing deployment spaces * To view all deployment spaces that you can access, click Deployments on the navigation menu. 
* To view any of the details about the space after you create it, such as the associated service instance or storage ID, open your deployment space and then click the Manage tab. * Your space assets are stored in a Cloud Object Storage repository. You can access this repository from IBM Cloud. To find the bucket ID, open your deployment space, and click the Manage tab. Learn more To learn more about adding assets to a space and managing them, see [Assets in deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets-all.html). To learn more about creating a space and accessing its details programmatically, see [Notebook on managing spaces](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/e5e78be14e2260ccb4bcf8181d0967e3). To learn more about handling spaces programmatically, see [Python client](https://ibm.github.io/watson-machine-learning-sdk/) or [REST API](https://cloud.ibm.com/apidocs/machine-learning). Parent topic:[Deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-spaces_local.html)",how-to,1,test
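The Python client linked above can also create and list spaces. A hypothetical sketch with the ibm-watson-machine-learning client; the credentials are placeholders, and on IBM Cloud the space metadata usually also needs storage and compute sections that are omitted here (see the client documentation for the exact fields):

```python
from ibm_watson_machine_learning import APIClient

client = APIClient({"url": "https://us-south.ml.cloud.ibm.com", "apikey": "<api-key>"})

meta_props = {
    client.spaces.ConfigurationMetaNames.NAME: "test-space",
    client.spaces.ConfigurationMetaNames.DESCRIPTION: "Space for evaluating deployments",
}
space_details = client.spaces.store(meta_props=meta_props)   # create the space
space_id = client.spaces.get_id(space_details)

client.set.default_space(space_id)   # direct later client calls at this space
client.spaces.list()                 # view the spaces you can access
```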
C122739764B1EC75B64E1B740F493BAD8616A9DB,https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/store-large-objs-in-cos.html?context=cdpaas&locale=en,Adding very large objects to a project's Cloud Object Storage,"Adding very large objects to a project's Cloud Object Storage
Adding very large objects to a project's Cloud Object Storage The amount of data you can load to a project's Cloud Object Storage at any one time depends on where you load the data from. If you are loading the data in the product UI, the limit is 5 GB. To add larger objects to a project's Cloud Object Storage, you can use an API or an FTP client. * [The Cloud Object Storage API](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/store-large-objs-in-cos.html?context=cdpaas&locale=enapi) * An FTP client * [The IBM Cloud Object Storage Python SDK](https://github.com/IBM/ibm-cos-sdk-python) (in case you can't use an FTP client) Load data in multiple parts by using the Cloud Object Storage API With the Cloud Object Storage API, you can load data objects as large as 5 GB in a single PUT, and objects as large as 5 TB by loading the data into object storage as a set of parts which can be loaded independently in any order and in parallel. After all of the parts have been loaded, they are presented as a single object in Cloud Object Storage. You can load files with these formats and mime types in multiple parts: * application/xml * application/pdf * text/plain; charset=utf-8 To load a data object in multiple parts: 1. Initiate a [multipart load](https://cloud.ibm.com/docs/services/cloud-object-storage/basics?topic=cloud-object-storage-store-very-large-objectsinitiate-a-multipart-upload): curl -X ""POST"" ""https://(endpoint)/(bucket-name)/(object-name)?uploads"" -H ""Authorization: bearer (token)"" The values for bucket-name and token are on the project's General page on the Manage tab. Click Manage in IBM Cloud on the Watson Studio for the endpoint value. 1. Load the parts by specifying arbitrary sequential part numbers and an UploadId for the object: curl -X ""PUT"" ""https://(endpoint)/(bucket-name)/(object-name)?partNumber=(sequential-integer)&uploadId=(upload-id)"" -H ""Authorization: bearer (token)"" -H ""Content-Type: (content-type)"" Replacecontent-type with application/xml, application/pdf or text/plain; charset=utf-8. 1. Complete the multipart load: curl -X ""POST"" ""https://(endpoint)/(bucket-name)/(object-name)?uploadId=(upload-id)"" -H ""Authorization: bearer (token)"" -H ""Content-Type: text/plain; charset=utf-8"" -d $'<CompleteMultipartUpload> <Part> <PartNumber>1</PartNumber> <ETag>(etag)</ETag> </Part> <Part> <PartNumber>2</PartNumber> <ETag>(etag)</ETag> </Part> 1. Add your file to the project as an asset. From the Assets page of your project, click the Upload asset to project icon. Then, from the Files pane, click the action menu and select Add as data set. Next steps * [Refining the data](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/refining_data.html) * [Analyzing the data and building models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/data-science.html) Learn more * [Storing very large objects in Cloud Object Storage](https://cloud.ibm.com/docs/services/cloud-object-storage/basics?topic=cloud-object-storage-store-very-large-objectsstore-very-large-objects) * [Using curl to store very large objects](https://cloud.ibm.com/docs/services/cloud-object-storage/cli?topic=cloud-object-storage-using-curl-using-curl-) Parent topic:[Adding connections to projects](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html)",how-to,1,test
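The same three-step flow can be scripted instead of typed as curl commands. A sketch using the requests library against the S3-compatible COS API; the endpoint, bucket, object name, and token are placeholders, and the completion body must list every part and close the CompleteMultipartUpload element:

```python
import requests

endpoint = "s3.us.cloud-object-storage.appdomain.cloud"    # placeholder
bucket, obj = "my-bucket", "big-file.pdf"                  # placeholders
headers = {"Authorization": "bearer <token>", "Content-Type": "application/pdf"}
base = f"https://{endpoint}/{bucket}/{obj}"

# 1. Initiate the multipart upload and pull the UploadId out of the XML response.
r = requests.post(f"{base}?uploads", headers=headers)
upload_id = r.text.split("<UploadId>")[1].split("</UploadId>")[0]

# 2. Upload the parts (each part except the last must be at least 5 MB), keeping the ETags.
etags = []
with open("big-file.pdf", "rb") as f:
    for part_number, chunk in enumerate(iter(lambda: f.read(100 * 1024 * 1024), b""), start=1):
        part = requests.put(f"{base}?partNumber={part_number}&uploadId={upload_id}",
                            headers=headers, data=chunk)
        etags.append((part_number, part.headers["ETag"]))

# 3. Complete the upload: list every part number with its ETag.
parts_xml = "".join(f"<Part><PartNumber>{n}</PartNumber><ETag>{e}</ETag></Part>" for n, e in etags)
requests.post(f"{base}?uploadId={upload_id}", headers=headers,
              data=f"<CompleteMultipartUpload>{parts_xml}</CompleteMultipartUpload>")
```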
00205C92C52FA28DB619EE1F9C8D76FE8564DB88,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/history.html?context=cdpaas&locale=en,History node (SPSS Modeler),"History node (SPSS Modeler)
History node History nodes are most often used for sequential data, such as time series data. They are used to create new fields containing data from fields in previous records. When using a History node, you may want to use data that is presorted by a particular field. You can use a Sort node to do this.",conceptual,0,test
8EA57CA1AE730686E86FC3B2AABD71C9F8EA9823,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/treeASnuggetnodeslots.html?context=cdpaas&locale=en,applytreeas properties,"applytreeas properties
applytreeas properties You can use Tree-AS modeling nodes to generate a Tree-AS model nugget. The scripting name of this model nugget is applytreeas. For more information on scripting the modeling node itself, see [treeas properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/treeASnodeslots.htmltreeASnodeslots). applytreeas properties Table 1. applytreeas properties applytreeas Properties Values Property description calculate_conf flag This property includes confidence calculations in the generated tree. display_rule_id flag Adds a field in the scoring output that indicates the ID for the terminal node to which each record is assigned. enable_sql_generation false, native When using data from a database, SQL code can be pushed back to the database for execution, providing superior performance for many operations.",conceptual,0,test
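In a flow script, properties like these are set through the scripting API. A small illustrative sketch, assuming the flow already contains a Tree-AS model nugget:

```python
import modeler.api

stream = modeler.script.stream()                  # the flow this script belongs to

nugget = stream.findByType("applytreeas", None)   # first Tree-AS nugget in the flow
nugget.setPropertyValue("calculate_conf", True)
nugget.setPropertyValue("display_rule_id", True)
nugget.setPropertyValue("enable_sql_generation", "native")
```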
65FFB2E27EACD57BCADC6C1646EB280212D3B2C2,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/anonymizenodeslots.html?context=cdpaas&locale=en,anonymizenode properties,"anonymizenode properties
anonymizenode properties ![Anonymize node icon](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/images/anonymizenodeicon.png)The Anonymize node transforms the way field names and values are represented downstream, thus disguising the original data. This can be useful if you want to allow other users to build models using sensitive data, such as customer names or other details. anonymizenode properties Table 1. anonymizenode properties anonymizenode properties Data type Property description enable_anonymize flag When set to True, activates anonymization of field values (equivalent to selecting Yes for that field in the Anonymize Values column). use_prefix flag When set to True, a custom prefix will be used if one has been specified. Applies to fields that will be anonymized by the Hash method and is equivalent to choosing the Custom option in the Replace Values settings for that field. prefix string Equivalent to typing a prefix into the text box in the Replace Values settings. The default prefix is the default value if nothing else has been specified. transformation RandomFixed Determines whether the transformation parameters for a field anonymized by the Transform method will be random or fixed. set_random_seed flag When set to True, the specified seed value will be used (if transformation is also set to Random). random_seed integer When set_random_seed is set to True, this is the seed for the random number. scale number When transformation is set to Fixed, this value is used for ""scale by."" The maximum scale value is normally 10 but may be reduced to avoid overflow. translate number When transformation is set to Fixed, this value is used for ""translate."" The maximum translate value is normally 1000 but may be reduced to avoid overflow.",conceptual,0,test
C324305E8F756140B7B96492D73D35BB32794119,https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/mark-sensitive.html?context=cdpaas&locale=en,Marking a project as sensitive,"Marking a project as sensitive
Marking a project as sensitive When you create a project, you can mark the project as sensitive to prevent project collaborators from moving sensitive data out of the project. Marking a project as sensitive prevents collaborators of a project, including administrators, from downloading or exporting data assets, connections, or connected data from a project. These sensitive assets cannot be added to a catalog or promoted to a space either. Project collaborators with Admin or Editor role can export assets like notebooks or models from the project. When users open a project that is marked as sensitive, a notification is displayed stating that no data assets can be downloaded or exported from the project. Restrictions * You cannot mark a project as sensitive after the project is created. * You cannot mark projects that use Git integration as sensitive. Parent topic:[Administering projects](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/admin-project.html)",conceptual,0,test
2828FD5943ABBA08AA260F1080B850C90FC4EFBE,https://dataplatform.cloud.ibm.com/docs/content/wsd/tutorials/tut_reducing.html?context=cdpaas&locale=en,Reducing input data string length (SPSS Modeler),"Reducing input data string length (SPSS Modeler)
Reducing input data string length For binomial logistic regression, and auto classifier models that include a binomial logistic regression model, string fields are limited to a maximum of eight characters. Where strings are more than eight characters, you can recode them using a Reclassify node. This example uses the flow named Reducing Input Data String Length, available in the example project . The data file is drug_long_name.csv. This example focuses on a small part of a flow to show the type of errors that may be generated with overlong strings, and explains how to use the Reclassify node to change the string details to an acceptable length. Although the example uses a binomial Logistic Regression node, it is equally applicable when using the Auto Classifier node to generate a binomial Logistic Regression model.",conceptual,0,test
D778DF3DC8EF2D3AB4EC511B8D20D35778794B93,https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/inaccessible-training-data.html?context=cdpaas&locale=en,Inaccessible training data,"Inaccessible training data
Inaccessible training data ![icon for explainability risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-explainability.svg)Risks associated with outputExplainabilityAmplified Description Without access to the training data, the types of explanations a model can provide are limited and more likely to be incorrect. Why is inaccessible training data a concern for foundation models? Low quality explanations without source data make it difficult for users, model validators, and auditors to understand and trust the model. Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)",conceptual,0,test
98AC4398E3EA902007D99E5BDB0686AEF04A4DAA,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/type_values_collection.html?context=cdpaas&locale=en,Specifying values for collection data (SPSS Modeler),"Specifying values for collection data (SPSS Modeler)
Specifying values for collection data Collection fields display non-geospatial data that's in a list. The only item you can set for the Collection measurement level is the List measure. By default, this measure is set to Typeless, but you can select another value to set the measurement level of the elements within the list. You can choose one of the following options: * Typeless * Continuous * Nominal * Ordinal * Flag",how-to,1,test
DAC8A5E350D74E41C1738F4E2A02258FECF9D20D,https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-manage-data.html?context=cdpaas&locale=en,Managing data for model evaluations in watsonx.governance,"Managing data for model evaluations in watsonx.governance
Managing data for model evaluations in watsonx.governance To enable model evaluations in watsonx.governance, you must prepare your data for logging to generate insights. You must provide your model data to watsonx.governance in a format that it supports to enable model evaluations. watsonx.governance processes your model transactions and logs the data in the watsonx.governance data mart. The data mart is the logging database that stores the data that is used for model evaluations. The following sections describe the different types of data that watsonx.governance logs for model evaluations: Payload data Payload data contains the input and output transactions for your deployment. To configure explainability and fairness and drift evaluations, watsonx.governance must receive payload data from your model that it stores in a payload logging table. The payload logging table contains the feature and prediction columns that exist in your training data and a prediction probability column that contains the model's confidence in the prediction that it provides. The table also includes timestamp and ID columns to identify each scoring request that you send to watsonx.governance as shown in the following example: ![Python SDK sample output of payload logging table](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-ntbok.png) You must send scoring requests to provide watsonx.governance with a log of your model transactions. For more information, see [Managing payload data](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-payload-logging.html). Feedback data Feedback data is labeled data that matches the structure of training data and includes known model outcomes that are compared to your model predictions to measure the accuracy of your model. watsonx.governance uses feedback data to enable you to configure quality evaluations. You must upload feedback data regularly to watsonx.governance to continuously measure the accuracy of your model predictions. For more information, see [Managing feedback data](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-manage-feedback-data.html). Learn more [Sending model transactions](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-send-model-transactions.html)",how-to,1,test
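For orientation, this is the rough shape of one logged transaction as described above; every field name and value here is invented for illustration and will differ for your model:

```python
# Hypothetical payload record: features, prediction, probability, timestamp, and ID.
payload_record = {
    "scoring_id": "0a1b2c3d-0001",
    "scoring_timestamp": "2024-05-01T12:00:00Z",
    "request": {
        "fields": ["age", "income", "tenure_months"],   # feature columns from training data
        "values": [[41, 52000, 18]],
    },
    "response": {
        "fields": ["prediction", "probability"],
        "values": [["no_churn", [0.82, 0.18]]],         # model's confidence in the prediction
    },
}
```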
CCCF5EC3E34E81E3E25FFE29317CDAC2ED1C936D,https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-monitor-accuracy.html?context=cdpaas&locale=en,Configuring quality evaluations in watsonx.governance,"Configuring quality evaluations in watsonx.governance
Configuring quality evaluations in watsonx.governance watsonx.governance quality evaluations measure your foundation model's ability to provide correct outcomes. When you [evaluate prompt templates](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-eval-prompt.html), you can review a summary of quality evaluation results for the text classification task type. The summary displays scores and violations for metrics that are calculated with default settings. To configure quality evaluations with your own settings, you can set a minimum sample size and set threshold values for each metric. The minimum sample size indicates the minimum number of model transaction records that you want to evaluate and the threshold values create alerts when your metric scores violate your thresholds. The metric scores must be higher than the threshold values to avoid violations. Higher metric values indicate better scores. Supported quality metrics When you enable quality evaluations in watsonx.governance, you can generate metrics that help you determine how well your foundation model predicts outcomes. watsonx.governance supports the following quality metrics: * Accuracy - Description: The proportion of correct predictions - Default thresholds: Lower limit = 80% - Problem types: Multiclass classification - Chart values: Last value in the timeframe - Metrics details available: Confusion matrix - Understanding accuracy: Accuracy can mean different things depending on the type of algorithm: - Multi-class classification: Accuracy measures the number of times any class was predicted correctly, normalized by the number of data points. For more details, see [Multi-class classification](https://spark.apache.org/docs/2.1.0/mllib-evaluation-metrics.htmlmulticlass-classification){: external} in the Apache Spark documentation. * Weighted true positive rate - Description: Weighted mean of class TPR with weights equal to class probability - Default thresholds: Lower limit = 80% - Problem type: Multiclass classification - Chart values: Last value in the timeframe - Metrics details available: Confusion matrix - Do the math: The true positive rate is calculated by the following formula: TPR = number of true positives / (number of true positives + number of false negatives) * Weighted false positive rate - Description: Weighted mean of class FPR with weights equal to class probability. For more details, see [Multi-class classification](https://spark.apache.org/docs/2.1.0/mllib-evaluation-metrics.htmlmulticlass-classification){: external} in the Apache Spark documentation. 
- Default thresholds: Lower limit = 80% - Problem type: Multiclass classification - Chart values: Last value in the timeframe - Metrics details available: Confusion matrix - Do the math: The Weighted False Positive Rate is the application of the FPR with weighted data: FPR = number of false positives / (number of false positives + number of true negatives) * Weighted recall - Description: Weighted mean of recall with weights equal to class probability - Default thresholds: Lower limit = 80% - Problem type: Multiclass classification - Chart values: Last value in the timeframe - Metrics details available: Confusion matrix - Do the math: Weighted recall (wR) is defined as the number of true positives (Tp) over the number of true positives plus the number of false negatives (Fn), used with weighted data: Recall = number of true positives / (number of true positives + number of false negatives) * Weighted precision - Description: Weighted mean of precision with weights equal to class probability - Default thresholds: Lower limit = 80% - Problem type: Multiclass classification - Chart values: Last value in the timeframe - Metrics details available: Confusion matrix - Do the math: Precision (P) is defined as the number of true positives (Tp) over the number of true positives plus the number of false positives (Fp): Precision = number of true positives / (number of true positives + number of false positives) * Weighted F1-Measure - Description: Weighted mean of F1-measure with weights equal to class probability - Default thresholds: Lower limit = 80% - Problem type: Multiclass classification - Chart values: Last value in the timeframe - Metrics details available: Confusion matrix - Do the math: The Weighted F1-Measure is the result of using weighted data: F1 = 2 * (precision * recall) / (precision + recall) * Matthews correlation coefficient - Description: Measures the quality of binary and multiclass classifications by accounting for true and false positives and negatives. Balanced measure that can be used even if the classes are different sizes. A correlation coefficient value between -1 and +1. A coefficient of +1 represents a perfect prediction, 0 an average random prediction, and -1 an inverse prediction. - Default thresholds: Lower limit = 80 - Chart values: Last value in the timeframe - Metrics details available: Confusion matrix * Label skew - Description: Measures the asymmetry of label distributions. If skewness is 0, the dataset is perfectly balanced; if it is less than -1 or greater than 1, the distribution is highly skewed; anything in between is moderately skewed. - Default thresholds: - Lower limit = -0.5 - Upper limit = 0.5 - Chart values: Last value in the timeframe Parent topic:[Configuring model evaluations](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-monitors-overview.html)",how-to,1,test
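The same metrics can be reproduced offline with scikit-learn for a quick sanity check. This is not part of watsonx.governance, and sklearn's "weighted" averaging (by class support) may differ slightly from the service's class-probability weighting:

```python
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, matthews_corrcoef)

# Invented multiclass labels for illustration.
y_true = ["cat", "dog", "dog", "bird", "cat", "dog"]
y_pred = ["cat", "dog", "cat", "bird", "cat", "dog"]

print(accuracy_score(y_true, y_pred))                        # proportion of correct predictions
print(recall_score(y_true, y_pred, average="weighted"))      # weighted recall / true positive rate
print(precision_score(y_true, y_pred, average="weighted"))   # weighted precision
print(f1_score(y_true, y_pred, average="weighted"))          # weighted F1-measure
print(matthews_corrcoef(y_true, y_pred))                     # MCC, between -1 and +1
```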
3491F666270894EE4BE071FD4A8551DF94CB9889,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/jython/clementine/python_define_method.html?context=cdpaas&locale=en,Defining class attributes and methods,"Defining class attributes and methods
Defining class attributes and methods Any variable that's bound in a class is a class attribute. Any function defined within a class is a method. Methods receive an instance of the class, conventionally called self, as the first argument. For example, to define some class attributes and methods, you might enter the following script:
class MyClass:
    attr1 = 10                  # class attributes
    attr2 = ""hello""
    def method1(self):
        print MyClass.attr1     # reference the class attribute
    def method2(self):
        print MyClass.attr2     # reference the class attribute
    def method3(self, text):
        self.text = text        # instance attribute
        print text, self.text   # print my argument and my attribute
    method4 = method3           # make an alias for method3
Inside a class, you should qualify all references to class attributes with the class name (for example, MyClass.attr1). All references to instance attributes should be qualified with the self variable (for example, self.text). Outside the class, you should qualify all references to class attributes with the class name (for example, MyClass.attr1) or with an instance of the class (for example, x.attr1, where x is an instance of the class). Outside the class, all references to instance variables should be qualified with an instance of the class (for example, x.text).",how-to,1,test
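A short usage sketch for the class above, showing the qualification rules from outside the class (illustrative only):

```python
x = MyClass()                  # create an instance

print(MyClass.attr1)           # class attribute, qualified with the class name -> 10
print(x.attr2)                 # class attribute, reached through an instance -> "hello"

x.method3("some text")         # sets the instance attribute self.text
print(x.text)                  # instance attribute, qualified with the instance
x.method4("alias call")        # method4 is an alias for method3
```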
2E1F6D5703CE75AF284903C20E5DBDFA1AE706B4,https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Notebooks/solveModel.html?context=cdpaas&locale=en,Decision Optimization notebook tutorial,"Decision Optimization notebook tutorial
Solving and analyzing a model: the diet problem This example shows you how to create and solve a Python-based model by using a sample. Procedure To create and solve a Python-based model by using a sample: 1. Download and extract all the [DO-samples](https://github.com/IBMDecisionOptimization/DO-Samples) on to your computer. You can also download just the diet.zip file from the Model_Builder subfolder for your product and version, but in this case, do not extract it. 2. Open your project or create an empty project. 3. On the Manage tab of your project, select the Services and integrations section and click Associate service. Then select an existing Machine Learning service instance (or create a new one ) and click Associate. When the service is associated, a success message is displayed, and you can then close the Associate service window. 4. Select the Assets tab. 5. Select New asset > Solve optimization problems in the Work with models section. 6. Click Local file in the Solve optimization problems window that opens. 7. Browse to find the Model_Builder folder in your downloaded DO-samples. Select the relevant product and version subfolder. Choose the Diet.zip file and click Open. Alternatively use drag and drop. 8. If you haven't already associated a Machine Learning service with your project, you must first select Add a Machine Learning service to select or create one before you choose a deployment space for your experiment. 9. Click New deployment space, enter a name, and click Create (or select an existing space from the drop-down menu). 10. Click Create.A Decision Optimization model is created with the same name as the sample. 11. In the Prepare data view, you can see the data assets imported.These tables represent the min and max values for nutrients in the diet (diet_nutrients), the nutrients in different foods (diet_food_nutrients), and the price and quantity of specific foods (diet_food). ![Tables of input data in Prepare data view](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/images/Cloudpreparedata2.png) 12. Click Build model in the sidebar to view your model.The Python model minimizes the cost of the food in the diet while satisfying minimum nutrient and calorie requirements. ![Python model for diet problem displayed in Run model view](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/images/newrunmodel3.png) Note also how the inputs (tables in the Prepare data view) and the outputs (in this case the solution table to be displayed in the Explore solution view) are specified in this model. 13. Run the model by clicking the Run button in the Build model view.",how-to,1,test
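Python-based Decision Optimization models of this kind are typically written with the docplex library. A minimal sketch of the diet pattern; the food names, costs, and calorie values here are invented and are not the sample's actual data:

```python
from docplex.mp.model import Model

foods = {"oatmeal": 0.30, "milk": 0.25}        # food -> cost per serving (invented)
calories = {"oatmeal": 110, "milk": 120}       # food -> calories per serving (invented)

mdl = Model(name="diet")
qty = {f: mdl.continuous_var(lb=0, ub=10, name=f) for f in foods}

# Meet a minimum calorie requirement while minimizing total cost.
mdl.add_constraint(mdl.sum(calories[f] * qty[f] for f in foods) >= 2000, "min_calories")
mdl.minimize(mdl.sum(foods[f] * qty[f] for f in foods))

solution = mdl.solve()
if solution:
    solution.display()
```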
D476F3E93D23F52EF1D5079343D92DB793E3AD5E,https://dataplatform.cloud.ibm.com/docs/content/DO/WML_Deployment/OutputDataDefn.html?context=cdpaas&locale=en,Decision Optimization output data definition,"Decision Optimization output data definition
Output data definition When submitting your job, you can define what output data you want and how you collect it (as either inline or referenced data). For more information about output file types and names see [Model input and output data file formats](https://dataplatform.cloud.ibm.com/docs/content/DO/WML_Deployment/ModelIOFileFormats.htmltopic_modelIOFileFormats). Some output data definition examples: * To collect solution.csv output as inline data: ""output_data"": [{ ""id"":""solution.csv"" }] * A regexp can also be used as an identifier. For example, to collect all csv output files as inline data: ""output_data"": [{ ""id"":"".*\.csv"" }] * Similarly for reference data, to collect all csv files in COS/S3 in a job-specific folder, you can combine a regexp with the ${job_id} and ${attachment_name} placeholders: ""output_data_references"": [{ ""id"":"".*\.csv"", ""type"": ""connection_asset"", ""connection"": { ""id"" : <connection_guid> }, ""location"": { ""bucket"": ""XXXXXXXXX"", ""path"": ""${job_id}/${attachment_name}"" } }] For example, if you have a job with identifier <XXXXXXXXX> that generates a solution.csv file, your COS/S3 bucket will contain an XXXXXXXXX/solution.csv file.",how-to,1,test
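If you assemble the job payload in Python before submitting it, the same definitions are just dictionary entries. A sketch with placeholder identifiers (the connection GUID and bucket name are not real values):

```python
# Illustrative fragment of a job payload; input_data is omitted here.
solve_payload = {
    "output_data": [
        {"id": "solution.csv"},      # collect one named file inline
        {"id": r".*\.csv"},          # or every CSV output, matched by regexp
    ],
    "output_data_references": [{
        "id": r".*\.csv",
        "type": "connection_asset",
        "connection": {"id": "<connection_guid>"},
        "location": {"bucket": "<bucket>", "path": "${job_id}/${attachment_name}"},
    }],
}
```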
3ACF4AABD6BE9C3BC0E0A363C3BFFFDD4A37B442,https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-mon.html?context=cdpaas&locale=en,Monitoring the experiment and saving the model,"Monitoring the experiment and saving the model
Monitoring the experiment and saving the model Any party or admin with collaborator access to the experiment can monitor the experiment and save a copy of the model. As the experiment runs, you can check the progress of the experiment. After the training is complete, you can view your results, save and deploy the model, and then test the model with new data. Monitoring the experiment When all parties run the party connector script, the experiment starts training automatically. As the training runs, you can view a dynamic diagram of the training progress. For each round of training, you can view the four stages of a training round: * Sending model: Federated Learning sends the model metrics to each party. * Training: The process of training the data locally. Each party trains to produce a local model that is fused. No data is exchanged between parties. * Receiving models: After training is complete, each party sends its local model to the aggregator. The data is not sent and remains private. * Aggregating: The aggregator combines the models that are sent by each of the remote parties to create an aggregated model. Saving your model When the training is complete, a chart that displays the model accuracy over each round of training is drawn. Hover over the points on the chart for more information on a single point's exact metrics. A Training rounds table shows details for each training round. The table displays the participating parties' average accuracy of their model training for each round. ![Screenshot of View Setup Information](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/fl-display.png) When you are done with the viewing, click Save model to project to save the Federated Learning model to your project. Rerun the experiment You can rerun the experiment as many times as you need in your project. Note:If you encounter errors when rerunning an experiment, see [Troubleshoot](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-troubleshoot.html) for more details. Deploying your model After you save your Federated Learning model, you can deploy and score the model like other machine learning models in a Watson Studio platform. See [Deploying models](https://dataplatform.cloud.ibm.com/docs/content/wsj/wmls/wmls-deploy-overview.html) for more details. Parent topic:[Creating a Federated Learning experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-start.html)",how-to,1,test
EE838EA978F9A0B0265A8D2B35FF2F64D00A1738,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/collection.html?context=cdpaas&locale=en,Collection node (SPSS Modeler),"Collection node (SPSS Modeler)
Collection node Collections are similar to histograms, but collections show the distribution of values for one numeric field relative to the values of another, rather than the occurrence of values for a single field. A collection is useful for illustrating a variable or field whose values change over time. Using 3-D graphing, you can also include a symbolic axis displaying distributions by category. Two-dimensional collections are shown as stacked bar charts, with overlays where used.",conceptual,0,test
36C8AF3BBAFFF1C227CF611D7327AFA8E378D6EC,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/restructure.html?context=cdpaas&locale=en,Restructure node (SPSS Modeler),"Restructure node (SPSS Modeler)
Restructure node With the Restructure node, you can generate multiple fields based on the values of a nominal or flag field. The newly generated fields can contain values from another field or numeric flags (0 and 1). The functionality of this node is similar to that of the Set to Flag node. However, it offers more flexibility by allowing you to create fields of any type (including numeric flags), using the values from another field. You can then perform aggregation or other manipulations with other nodes downstream. (The Set to Flag node lets you aggregate fields in one step, which may be convenient if you are creating flag fields.) Figure 1. Restructure node ![Restructure node](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/restructure_node.png)",conceptual,0,test
7B3616D29E7AC720B73EF3E24C9C807DA05C4DA3,https://dataplatform.cloud.ibm.com/docs/content/dataview/chart_creation_seriesarray.html?context=cdpaas&locale=en,Series array charts,"Series array charts
Series array charts Series array charts include individual sub charts and display the Y-axis for all sub charts in the legend.",conceptual,0,test
A4CEA84825E512A73C509437103B89B6EF363D5B,https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/import-project.html?context=cdpaas&locale=en,Importing a project,"Importing a project
Importing a project You can create a project that is preloaded with assets by importing the project. Requirements A local file of a previously exported project : Importing a project from a local file is a method of copying a project. You can import a project from a file on your local system only if the ZIP file that you select was exported from a IBM watsonx project as a compressed file. You can import only projects that you exported from watsonx.ai. You cannot import a compressed file that was exported from a Cloud Pak for Data as a Service project. : If the exported file that you select to import was encrypted, you must enter the password that was used for encryption to enable decrypting sensitive connection properties. A sample project from Samples : You can create a project [from a project sample](https://dataplatform.cloud.ibm.com/samples?context=wx) to learn how to work with data in tools, such as notebooks to prepare data, analyze data, build and train models, and visualize analysis results. : The sample projects show how to accomplish goals, for example, to load and explore data, to create and train machine learning models for predictive analysis. Each project includes the required assets, such as notebooks, and all the data sets that you need to complete the example use case. Importing a project from a local file or sample To import a project: 1. Click New project on the home page or on your Projects page. 2. Choose whether to create a project based on an exported project file or a sample project. 3. Upload a project file or select a sample project. 4. On the New project screen, add a name and optional description for the project. 5. If the project file that you select to import is encrypted, you must enter the password that was used for encryption to enable decrypting sensitive connection properties. If you enter an incorrect password, the project file imports successfully, but sensitive connection properties are falsely decrypted. 6. Select the Restrict who can be a collaborator checkbox to restrict collaborators to members of your organization. You can't change this setting after you create the project. 7. Choose an existing [object storage service instance](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/storage-options.html) or create a new one. 8. Click Create. You can start adding resources if your project is empty, or begin working with the resources you imported. Learn more * [Administering a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/admin-project.html) * [Exporting project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/export-project.html) Parent topic:[Creating a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/projects.html)",how-to,1,test
316974F0A70EE2199BF6CD912E62BFB53D200F0A,https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build-notebook.html?context=cdpaas&locale=en,Quick start: Build and deploy a machine learning model in a Jupyter notebook,"Quick start: Build and deploy a machine learning model in a Jupyter notebook
Quick start: Build and deploy a machine learning model in a Jupyter notebook You can create, train, and deploy machine learning models with Watson Machine Learning in a Jupyter notebook. Read about the Jupyter notebooks, then watch a video and take a tutorial that’s suitable for intermediate users and requires coding. Required services : Watson Studio : Watson Machine Learning Your basic workflow includes these tasks: 1. Open your sandbox project. Projects are where you can collaborate with others to work with data. 2. Add a notebook to the project. You can create a blank notebook or import a notebook from a file or GitHub repository. 3. Add code and run the notebook. 4. Review the model pipelines and save the desired pipeline as a model. 5. Deploy and test your model. Read about Jupyter notebooks A Jupyter notebook is a web-based environment for interactive computing. If you choose to build a machine learning model in a notebook, you should be comfortable with coding in a Jupyter notebook. You can run small pieces of code that process your data, and then immediately view the results of your computation. Using this tool, you can assemble, test, and run all of the building blocks you need to work with data, save the data to Watson Machine Learning, and deploy the model. [Read more about training models in notebooks](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-samples-overview.html) [Learn about other ways to build models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/data-science.html) Watch a video about creating a model in a Jupyter notebook ![Watch Video](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/lc-video.png) Watch this video to see how to train, deploy, and test a machine learning model in a Jupyter notebook. This video provides a visual method to learn the concepts and tasks in this documentation. Try a tutorial to create a model in a Jupyter notebook In this tutorial, you will complete these tasks: * [Task 1: Open a project.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build-notebook.html?context=cdpaas&locale=enstep01) * [Task 2: Add a notebook to your project.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build-notebook.html?context=cdpaas&locale=enstep02) * [Task 3: Set up the environment.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build-notebook.html?context=cdpaas&locale=enstep03) * [Task 4: Run the notebook:](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build-notebook.html?context=cdpaas&locale=enstep04) * Build and train a model. * Save a pipeline as a model. * Deploy the model. * Test the deployed model. * [Task 5: View and test the deployed model in the deployment space.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build-notebook.html?context=cdpaas&locale=enstep05) * [(Optional) Clean up.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build-notebook.html?context=cdpaas&locale=enstep06) This tutorial will take approximately 30 minutes to complete. Sample data The sample data used in this tutorial is from data that is part of scikit-learn and will be used to train a model to recognize images of hand-written digits, from 0-9. 
Expand all sections * Tips for completing this tutorial ### Use the video picture-in-picture Tip: Start the video, then as you scroll through the tutorial, the video moves to picture-in-picture mode. Close the video table of contents for the best experience with picture-in-picture. You can use picture-in-picture mode so you can follow the video as you complete the tasks in this tutorial. Click the timestamps for each task to follow along.The following animated image shows how to use the video picture-in-picture and table of contents features: ![How to use picture-in-picture and chapters](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/pip-and-chapters.gif){: width=""560px"" height=""315px"" data-tearsheet=""this""} ### Get help in the community If you need help with this tutorial, you can ask a question or find an answer in the [Cloud Pak for Data Community discussion forum](https://community.ibm.com/community/user/cloudpakfordata/communities/community-home/digestviewer?communitykey=c0c16ff2-10ef-4b50-ae4c-57d769937235){: new_window}. ### Set up your browser windows For the optimal experience completing this tutorial, open Cloud Pak for Data in one browser window, and keep this tutorial page open in another browser window to switch easily between the two applications. Consider arranging the two browser windows side-by-side to make it easier to follow along. ![Side-by-side tutorial and UI](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/tutorial-side-by-side.png){: width=""560px"" height=""315px"" data-tearsheet=""this""} Tip: If you encounter a guided tour while completing this tutorial in the user interface, click Maybe later. [Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build-notebook.html?context=cdpaas&locale=envideo-preview) * Task 1: Open a project You need a project to store the data and the AutoAI experiment. You can use your sandbox project or create a project. 1. From the navigation menu ![Navigation menu](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/navigation-menu.svg){: iih}, choose Projects > View all projects 1. Open your sandbox project. If you want to use a new project: 1. Click New project. 1. Select Create an empty project. 1. Enter a name and optional description for the project. 1. Choose an existing [object storage service instance](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/storage-options.html){: new_window} or create a new one. 1. Click Create. 1. When the project opens, click the Manage tab and select the Services and integrations page. ![preview tutorial video](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/lc-video.png) To preview this task, watch the video beginning at 00:07. 1. On the IBM services tab, click Associate service. 1. Select your Watson Machine Learning instance. If you don't have a Watson Machine Learning service instance provisioned yet, follow these steps: 1. Click New service. 1. Select Watson Machine Learning. 1. Click Create. 1. Select the new service instance from the list. 1. Click Associate service. 1. If necessary, click Cancel to",how-to,1,test
14A06DE43E6B08188A7672B5BE8068A572DE5B7C,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/scripting_overview.html?context=cdpaas&locale=en,Scripting and automation,"Scripting and automation
Scripting and automation Scripting in SPSS Modeler is a powerful tool for automating processes in the user interface. Scripts can perform the same types of actions that you perform with a mouse or a keyboard, and you can use them to automate tasks that would be highly repetitive or time consuming to perform manually. You can use scripts to: * Impose a specific order for node executions in a flow. * Set properties for a node as well as perform derivations using a subset of CLEM (Control Language for Expression Manipulation). * Specify an automatic sequence of actions that normally involves user interaction—for example, you can build a model and then test it. * Set up complex processes that require substantial user interaction—for example, cross-validation procedures that require repeated model generation and testing. * Set up processes that manipulate flows—for example, you can take a model training flow, run it, and produce the corresponding model-testing flow automatically.",conceptual,0,test
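A flow script is ordinary Python (Jython) run inside SPSS Modeler. A minimal sketch of the execution idea; the entry points shown are the standard scripting API, but treat the details as illustrative:

```python
import modeler.api

stream = modeler.script.stream()      # the flow this script is attached to

results = []
stream.runAll(results)                # run every terminal node in the flow, in order
print("%d output objects produced" % len(results))
```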
40DEFBE604B3629CAF8855A6D00EC14A0A6C92F3,https://dataplatform.cloud.ibm.com/docs/content/svc-welcome/wml.html?context=cdpaas&locale=en,Watson Machine Learning on IBM watsonx,"Watson Machine Learning on IBM watsonx
Watson Machine Learning on IBM watsonx Watson Machine Learning is part of IBM® watsonx.ai. Watson Machine Learning provides a full range of tools for your team to build, train, and deploy Machine Learning models. You can choose the tool with the level of automation or autonomy that matches your needs. Watson Machine Learning provides the following tools: * AutoAI experiment builder for automatically processing structured data to generate model-candidate pipelines. The best-performing pipelines can be saved as a machine learning model and deployed for scoring. * Deployment spaces give you the tools to view and manage model deployments.",conceptual,0,test
21DB0146B79B8256259507C62876E01ADA143BD6,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/type_levels.html?context=cdpaas&locale=en,Measurement levels (SPSS Modeler),"Measurement levels (SPSS Modeler)
Measurement levels The measure, also referred to as measurement level, describes the usage of data fields in SPSS Modeler. You can specify the Measure in the node properties of an import node or a Type node. For example, you may want to set the measure for an integer field with values of 1 and 0 to Flag. This usually indicates that 1 = True and 0 = False. Storage versus measurement. Note that the measurement level of a field is different from its storage type, which indicates whether data is stored as a string, integer, real number, date, time, or timestamp. While you can modify data types at any point in a flow by using a Type node, storage must be determined at the source when reading data in (although you can subsequently change it using a conversion function). The following measurement levels are available: * Default. Data whose storage type and values are unknown (for example, because they haven't yet been read) are displayed as Default. * Continuous. Used to describe numeric values, such as a range of 0100 or 0.751.25. A continuous value can be an integer, real number, or date/time. * Categorical. Used for string values when an exact number of distinct values is unknown. This is an uninstantiated data type, meaning that all possible information about the storage and usage of the data is not yet known. After data is read, the measurement level will be Flag, Nominal, or Typeless, depending on the maximum number of members for nominal fields specified. * Flag. Used for data with two distinct values that indicate the presence or absence of a trait, such as true and false, Yes and No, or 0 and 1. The values used may vary, but one must always be designated as the ""true"" value, and the other as the ""false"" value. Data may be represented as text, integer, real number, date, time, or timestamp. * Nominal. Used to describe data with multiple distinct values, each treated as a member of a set, such as small/medium/large. Nominal data can have any storage—numeric, string, or date/time. Note that setting the measurement level to Nominal doesn't automatically change the values to string storage. * Ordinal. Used to describe data with multiple distinct values that have an inherent order. For example, salary categories or satisfaction rankings can be typed as ordinal data. The order is defined by the natural sort order of the data elements. For example, 1, 3, 5 is the default sort order for a set of integers, while HIGH, LOW, NORMAL (ascending alphabetically) is the order for a set of strings. The ordinal measurement level enables you to define a set of categorical data as ordinal data for the purposes of visualization, model building, and export to other applications (such as IBM SPSS Statistics) that recognize ordinal data as a distinct type. You can use an ordinal field anywhere that a nominal field can be used. Additionally, fields of any storage type (real, integer, string, date, time, and so on) can be defined as ordinal. * Typeless. Used for data that doesn't conform to any of the Default, Continuous, Categorical, Flag, Nominal, or Ordinal types, for fields with a single value, or for nominal data where the set has more members than the defined maximum. Typeless is also useful for cases in which the measurement level would otherwise be a set with many members (such as an account number). When you select Typeless for a field, the role is automatically set to None, with Record ID as the only alternative. The default maximum size for sets is 250 unique values. * Collection. 
Used to identify non-geospatial data that is recorded in a list. A collection is effectively a list field of zero depth, where the elements in that list have one of the other measurement levels. * Geospatial. Used with the List storage type to identify geospatial data. Lists can be either List of Integer or List of Real fields with a list depth that's between zero and two, inclusive. You can manually specify measurement levels, or you can allow the software to read the data and determine the measurement level based on the values it reads. Alternatively, where you have several continuous data fields that should be treated as categorical data, you can choose an option to convert them. See [Converting continuous data](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/type_convert.html).",conceptual,0,test
9F06DF311976F336CB3164B08D5DA7D6F93419E2,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/neuralnetwork.html?context=cdpaas&locale=en,Neural Net node (SPSS Modeler),"Neural Net node (SPSS Modeler)
Neural Net node A neural network can approximate a wide range of predictive models with minimal demands on model structure and assumption. The form of the relationships is determined during the learning process. If a linear relationship between the target and predictors is appropriate, the results of the neural network should closely approximate those of a traditional linear model. If a nonlinear relationship is more appropriate, the neural network will automatically approximate the ""correct"" model structure. The trade-off for this flexibility is that the neural network is not easily interpretable. If you are trying to explain an underlying process that produces the relationships between the target and predictors, it would be better to use a more traditional statistical model. However, if model interpretability is not important, you can obtain good predictions using a neural network. Field requirements. There must be at least one Target and one Input. Fields set to Both or None are ignored. There are no measurement level restrictions on targets or predictors (inputs). The initial weights assigned to neural networks during model building, and therefore the final models produced, depend on the order of the fields in the data. Watsonx.ai automatically sorts data by field name before presenting it to the neural network for training. This means that explicitly changing the order of the fields in the data upstream will not affect the generated neural net models when a random seed is set in the model builder. However, changing the input field names in a way that changes their sort order will produce different neural network models, even with a random seed set in the model builder. The model quality will not be affected significantly given different sort order of field names.",conceptual,0,test
E61658D2BA7D0D13E5A6008E28670D1B1F6CB7BB,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/jython/clementine/python_hidden_variables.html?context=cdpaas&locale=en,Hidden variables,"Hidden variables
Hidden variables You can hide data by creating private variables. Private variables can be accessed only by the class itself. If you declare names of the form __xxx or __xxx_yyy, that is, with two preceding underscores, the Python parser will automatically add the class name to the declared name, creating hidden variables. For example:

class MyClass:
    __attr = 10   # private class attribute

    def method1(self):
        pass

    def method2(self, p1, p2):
        pass

    def __privateMethod(self, text):
        self.__text = text   # private attribute

Unlike in Java, in Python all references to instance variables must be qualified with self; there's no implied use of this.",conceptual,0,test
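A minimal sketch of what this name mangling means at run time, reusing the MyClass example above (standard Python behavior, shown here only for illustration):

obj = MyClass()
print(obj._MyClass__attr)   # prints 10: the parser rewrote __attr to _MyClass__attr
# print(obj.__attr)         # would raise AttributeError, because the unmangled name is hidden outside the class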
220E465DBC0C22FF06F80DF18B25044DD1EBC787,https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-generate-data.html?context=cdpaas&locale=en,Quick start: Generate synthetic tabular data,"Quick start: Generate synthetic tabular data
Quick start: Generate synthetic tabular data Take this tutorial to learn how to generate synthetic tabular data in IBM watsonx.ai. The benefit of synthetic data is that you can procure the data on demand, customize it to fit your use case, and produce it in large quantities. This tutorial helps you learn how to use the graphical flow editor tool, Synthetic Data Generator, to generate synthetic tabular data based on production data or a custom data schema using visual flows and modeling algorithms. Required services: Watson Studio Your basic workflow includes these tasks: 1. Open a project. Projects are where you can collaborate with others to work with data. 2. Add your data to the project. You can add CSV files or data from a remote data source through a connection. 3. Create and run a synthetic data flow in the project. You use the graphical flow editor tool Synthetic Data Generator to generate synthetic tabular data based on production data or a custom data schema using visual flows and modeling algorithms. 4. Review the synthetic data flow and output. Read about synthetic data Synthetic data is information that has been generated on a computer to augment or replace real data to improve AI models, protect sensitive data, and mitigate bias. Synthetic data helps to mitigate many of the logistical, ethical, and privacy issues that come with training machine learning models on real-world examples. [Read more about Synthetic data](https://dataplatform.cloud.ibm.com/docs/content/wsj/synthetic/synthetic_data_overview_sd.html) Watch a video about generating synthetic tabular data ![Watch Video](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/lc-video.png) Watch this video to preview the steps in this tutorial. There might be slight differences in the user interface shown in the video. The video is intended to be a companion to the written tutorial. This video provides a visual method to learn the concepts and tasks in this documentation. Try a tutorial to generate synthetic tabular data In this tutorial, you will complete these tasks: * [Task 1: Open a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-generate-data.html?context=cdpaas&locale=enstep01) * [Task 2: Add data to your project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-generate-data.html?context=cdpaas&locale=enstep02) * [Task 3: Create a synthetic data flow](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-generate-data.html?context=cdpaas&locale=enstep03) * [Task 4: Review the data flow and output](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-generate-data.html?context=cdpaas&locale=enstep04) Expand all sections * Tips for completing this tutorial ### Use the video picture-in-picture Tip: Start the video, then as you scroll through the tutorial, the video moves to picture-in-picture mode. Close the video table of contents for the best experience with picture-in-picture. You can use picture-in-picture mode so you can follow the video as you complete the tasks in this tutorial.
Click the timestamps for each task to follow along. The following animated image shows how to use the video picture-in-picture and table of contents features: ![How to use picture-in-picture and chapters](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/pip-and-chapters.gif){: width=""560px"" height=""315px"" data-tearsheet=""this""} ### Get help in the community If you need help with this tutorial, you can ask a question or find an answer in the [Cloud Pak for Data Community discussion forum](https://community.ibm.com/community/user/cloudpakfordata/communities/community-home/digestviewer?communitykey=c0c16ff2-10ef-4b50-ae4c-57d769937235){: new_window}. ### Set up your browser windows For the optimal experience completing this tutorial, open Cloud Pak for Data in one browser window, and keep this tutorial page open in another browser window to switch easily between the two applications. Consider arranging the two browser windows side-by-side to make it easier to follow along. ![Side-by-side tutorial and UI](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/tutorial-side-by-side.png){: width=""560px"" height=""315px"" data-tearsheet=""this""} Tip: If you encounter a guided tour while completing this tutorial in the user interface, click Maybe later. [Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-generate-data.html?context=cdpaas&locale=envideo-preview) * Task 1: Open a project You need a project to store the assets. Watch a video to see how to create a sandbox project and associate a service. Then follow the steps to verify that you have an existing project or create a sandbox project. This video provides a visual method to learn the concepts and tasks in this documentation. 1. From the watsonx home screen, scroll to the Projects section. If you see any projects listed, then skip to [Task 2](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-generate-data.html?context=cdpaas&locale=enstep02). If you don't see any projects, then follow these steps to create a project. 1. Click Create a sandbox project. When the project is created, you will see the sandbox project in the Projects section. For more information or to watch a video, see [Creating a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/projects.html){: new_window}. ![Checkpoint icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/checkmark--filled-blue.svg){: iih} Check your progress The following image shows the home screen with the sandbox listed in the Projects section. You are now ready to add data to your project. ![Home screen with sandbox project listed.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/wx-home-screen.png){: width=""100%"" } [Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-generate-data.html?context=cdpaas&locale=envideo-preview) * Task 2: Add data to your project ![preview tutorial video](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/lc-video.png) To preview this task, watch the video beginning at 00:24. The data set used in this tutorial contains typical information that a company gathers about its customers, and is available in the Samples. Follow these steps to find the data set in the Samples and add it to your project: 1.
Access the [Customers data set](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/4bfbe430a82e23821aed0647b506da93){: new_window} in the",how-to,1,test
03FF997603B065D2DF1FBB49934CA8C348765ACF,https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-py-function.html?context=cdpaas&locale=en,Deploying Python functions in Watson Machine Learning,"Deploying Python functions in Watson Machine Learning
Deploying Python functions in Watson Machine Learning You can deploy Python functions in Watson Machine Learning the same way that you can deploy models. Your tools and apps can use the Watson Machine Learning Python client or REST API to send data to your deployed functions the same way that they send data to deployed models. Deploying Python functions gives you the ability to hide details (such as credentials). You can also preprocess data before you pass it to models. Additionally, you can handle errors and include calls to multiple models, all within the deployed function instead of in your application. Sample notebooks for creating and deploying Python functions For examples of how to create and deploy Python functions by using the Watson Machine Learning [Python client library](https://ibm.github.io/watson-machine-learning-sdk/), refer to these sample notebooks: Sample name Framework Techniques demonstrated [Use Python function to recognize hand-written digits](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/1eddc77b3a4340d68f762625d40b64f9) Python Use a function to store a sample model and deploy it. [Predict business for cars](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/61a8b600f1bb183e2c471e7a64299f0e) Hybrid(Tensorflow) Set up an AI definition <br>Prepare the data <br>Create a Keras model by using Tensorflow <br>Deploy and score the model <br>Define, store, and deploy a Python function [Deploy Python function for software specification](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/56825df5322b91daffd39426038808e9) Core Create a Python function <br>Create a web service <br>Score the model The notebooks demonstrate the six steps for creating and deploying a function: 1. Define the function. 2. Authenticate and define a space. 3. Store the function in the repository. 4. Get the software specification. 5. Deploy the stored function. 6. Send data to the function for processing. For links to other sample notebooks that use the Watson Machine Learning [Python client library](https://ibm.github.io/watson-machine-learning-sdk/), refer to [Using Watson Machine Learning in a notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-samples-overview.html). Increasing scalability for a function When you deploy a function from a deployment space or programmatically, a single copy of the function is deployed by default. To increase scalability, you can increase the number of replicas by editing the configuration of the deployment. More replicas allow for a larger volume of scoring requests. The following example uses the Python client API to set the number of replicas to 3. change_meta = { client.deployments.ConfigurationMetaNames.HARDWARE_SPEC: { ""name"":""S"", ""num_nodes"":3} } client.deployments.update(<deployment_id>, change_meta) Learn more * To learn more about defining a deployable Python function, see General requirements for deployable functions section in [Writing and storing deployable Python functions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-py-function-write.html). * You can deploy a function from a deployment space through the user interface. For more information, see [Deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-spaces_local.html). Parent topic:[Managing predictive deployments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-general.html)",how-to,1,test
731B218E6E141E88F850B673227AB3C4DF19392E,https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/prompt-injection.html?context=cdpaas&locale=en,Prompt injection,"Prompt injection
Prompt injection ![icon for robustness risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-robustness.svg)Risks associated with inputInferenceRobustnessNew Description A prompt injection attack forces a model to produce unexpected output due to the structure or information contained in prompts. Why is prompt injection a concern for foundation models? Injection attacks can be used to alter model behavior and benefit the attacker. If not properly controlled, business entities could face fines, reputational harm, and other legal consequences. Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)",conceptual,0,test
BBDEDA771A051A9B1871F9BEC9589D91421E7C0C,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/regression.html?context=cdpaas&locale=en,Regression (SPSS Modeler),"Regression (SPSS Modeler)
Regression node Linear regression is a common statistical technique for classifying records based on the values of numeric input fields. Linear regression fits a straight line or surface that minimizes the discrepancies between predicted and actual output values. Requirements. Only numeric fields can be used in a regression model. You must have exactly one target field (with the role set to Target) and one or more predictors (with the role set to Input). Fields with a role of Both or None are ignored, as are non-numeric fields. (If necessary, non-numeric fields can be recoded using a Derive node.) Strengths. Regression models are relatively simple and give an easily interpreted mathematical formula for generating predictions. Because regression modeling is a long-established statistical procedure, the properties of these models are well understood. Regression models are also typically very fast to train. The Regression node provides methods for automatic field selection to eliminate nonsignificant input fields from the equation. Note: In cases where the target field is categorical rather than a continuous range, such as yes/no or churn/don't churn, logistic regression can be used as an alternative. Logistic regression also provides support for non-numeric inputs, removing the need to recode these fields. See [Logistic node](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/logreg.htmllogreg) for more information.",conceptual,0,test
F8EBEE6125BEE635DD8AF7BF2F6340D58CE99958,https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-eval-prompt-spaces.html?context=cdpaas&locale=en,Evaluating prompt templates in deployment spaces,"Evaluating prompt templates in deployment spaces
Evaluating prompt templates in deployment spaces You can evaluate prompt templates in deployment spaces to measure the performance of foundation model tasks and understand how your model generates responses. With watsonx.governance, you can evaluate prompt templates in deployment spaces to measure how effectively your foundation models generate responses for the following task types: * [Classification](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.htmlclassification) * [Summarization](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.htmlsummarization) * [Generation](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.htmlgeneration) * [Question answering](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.htmlqa) * [Entity extraction](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.htmlextraction) Prompt templates are saved prompt inputs for foundation models. You can evaluate prompt template deployments in pre-production and production spaces. Before you begin You must have access to a watsonx.governance deployment space to evaluate prompt templates. For more information, see [Setting up watsonx.governance](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-setup-wos.html). To run evaluations, you must log in and [switch](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/personal-settings.htmlaccount) to a watsonx account that has watsonx.governance and watsonx.ai instances that are installed and open a deployment space. You must be assigned the Admin or Editor roles for the account to open deployment spaces. In your project, you must also [create and save a prompt template](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-lab.htmlcreating-and-running-a-prompt) and [promote a prompt template to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/prompt-template-deploy.html). You must specify at least one variable when you create prompt templates to enable evaluations. The following sections describe how to evaluate prompt templates in deployment spaces and review your evaluation results: * [Evaluating prompt templates in pre-production spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-eval-prompt-spaces.html?context=cdpaas&locale=enprompt-eval-pre-prod) * [Evaluating prompt templates in production spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-eval-prompt-spaces.html?context=cdpaas&locale=enprompt-eval-prod) Evaluating prompt templates in pre-production spaces Activate evaluation To run prompt template evaluations, you can click Activate on the Evaluations tab when you open a deployment to open the Evaluate prompt template wizard. You can run evaluations only if you are assigned the Admin or Editor roles for your deployment space. ![Run prompt template evaluation](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-activate-prompt-eval.png) Select dimensions The Evaluate prompt template wizard displays the dimensions that are available to evaluate for the task type that is associated with your prompt. You can expand the dimensions to view the list of metrics that are used to evaluate the dimensions that you select. 
![Select dimensions to evaluate](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-select-dimension-preprod-spaces.png) watsonx.governance automatically configures evaluations for each dimension with default settings. To [configure evaluations](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-monitors-overview.html) with different settings, you can select Advanced settings to set minimum sample sizes and threshold values for each metric as shown in the following example: ![Configure evaluations](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-config-eval-settings.png) Select test data You must upload a CSV file that contains test data with reference columns and columns for each prompt variable. When the upload completes, you must also map [prompt variables](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-variables.htmlcreating-prompt-variables) to the associated columns from your test data. ![Select test data to upload](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-select-test-data-preprod-spaces.png) Review and evaluate You can review the selections for the prompt task type, the uploaded test data, and the type of evaluation that runs. You must select Evaluate to run the evaluation. ![Review and evaluate prompt template evaluation settings](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-review-evaluate-preprod-spaces.png) Reviewing evaluation results When your evaluation finishes, you can review a summary of your evaluation results on the Evaluations tab in watsonx.governance to gain insights about your model performance. The summary provides an overview of metric scores and violations of default score thresholds for your prompt template evaluations. To analyze results, you can click the arrow ![navigation arrow](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-nav-arrow.png) next to your prompt template evaluation to view data visualizations of your results over time. You can also analyze results from the model health evaluation that is run by default during prompt template evaluations to understand how efficiently your model processes your data. The Actions menu also provides the following options to help you analyze your results: * Evaluate now: Run evaluation with a different test data set * All evaluations: Display a history of your evaluations to understand how your results change over time. * Configure monitors: Configure evaluation thresholds and sample sizes. * View model information: View details about your model to understand how your deployment environment is set up. ![Analyze prompt template evaluation results](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-review-results-preprod.png) If you [track your prompt templates](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-track-prompt-temp.html), you can review evaluation results to gain insights about your model performance throughout the AI lifecycle. Evaluating prompt templates in production spaces Activate evaluation To run prompt template evaluations, you can click Activate on the Evaluations tab when you open a deployment to open the Evaluate prompt template wizard. You can run evaluations only if you are assigned the Admin or Editor roles for your deployment space. 
![Run prompt template evaluation](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-activate-prompt-eval.png) Select dimensions The Evaluate prompt template wizard displays the dimensions that are available to evaluate for the task type that is associated with your prompt. You can provide a label column name for the reference output that you specify in your feedback data. You can also expand the dimensions to view the list of metrics that are used to evaluate the dimensions that you select. ![Select dimensions to evaluate](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-select-dimensions-pre-prod-spaces.png) watsonx.governance automatically configures evaluations for each dimension with default settings. To [configure evaluations](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-monitors-overview.html) with different settings, you can select Advanced settings to set minimum sample sizes and threshold values for each metric",how-to,1,test
7F2731C1EBB3F492687A336E1369CD6232512118,https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-custom-comp.html?context=cdpaas&locale=en,Creating a custom component for use in the pipeline,"Creating a custom component for use in the pipeline
Creating a custom component for use in the pipeline A custom pipeline component runs a script that you write. You can use custom components to share reusable scripts between pipelines. You create custom components as project assets. You can then use the components in pipelines you create in that project. You can create as many custom components for pipelines as needed. Currently, to create a custom component you must create one programmatically, using a Python function. Creating a component as a project asset To create a custom component, use the Python client to authenticate with IBM Watson Pipelines, code the component, then publish the component to the specified project. After it is available in the project, you can assign it to a node in a pipeline and run it as part of a pipeline flow. This example demonstrates the process of publishing a component that adds two numbers together, then assigning the component to a pipeline node. 1. Publish a function as a component with the latest Python client. Run the following code in a Jupyter Notebook in a project of IBM watsonx.

# Install libraries
!pip install ibm-watson-pipelines

# Authentication
from ibm_watson_pipelines import WatsonPipelines

apikey = ''
project_id = 'your_project_id'

client = WatsonPipelines.from_apikey(apikey)

# Define the function of the component.
# If you define input parameters, users are required to input them in the UI.
def add_two_numbers(a: int, b: int) -> int:
    print('Adding numbers: {} + {}.'.format(a, b))
    return a + b

Other possible functions might be sending a Slack message, or listing directories in a storage volume, and so on.

# Publish the component
client.publish_component(
    name='Add numbers',                              # Appears in UI as the component name
    func=add_two_numbers,
    description='Custom component adding numbers',   # Appears in UI as the component description
    project_id=project_id,
    overwrite=True,                                  # Overwrites an existing component with the same name
)

To generate a new API key: 1. Go to the [IBM Cloud home page](https://cloud.ibm.com/) 2. Click Manage > Access (IAM) 3. Click API keys 4. Click Create 2. Drag the node called Run Pipelines component under Run to the canvas. ![Retrieving the custom component node](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/ml-orch-custom-comp-1.png) 3. Choose the name of the component that you want to use. ![Choosing the actual component function](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/ml-orch-custom-comp-2.png) 4. Connect and run the node as part of a pipeline job. ![Connecting the component](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/ml-orch-custom-comp-3.png) Manage pipeline components Use the Python client to manage your components. Table 1. Manage pipeline components Method Function client.get_components(project_id=project_id) List components from a project client.get_component(project_id=project_id, component_id=component_id) Get a component by ID client.get_component(project_id=project_id, name=component_name) Get a component by name client.publish_component(component name) Publish a new component client.delete_component(project_id=project_id, component_id=component_id) Delete a component by ID Import and export Pipeline components can be imported and exported with pipelines only. Parent topic:[Creating a pipeline](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-create.html)",how-to,1,test
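A brief usage sketch of the management methods listed in Table 1, reusing the client and project_id from the publishing example above (the shape of the returned metadata depends on the client version, so the delete call is shown commented out with a placeholder ID):

components = client.get_components(project_id=project_id)   # list components in the project
print(components)

add_numbers = client.get_component(project_id=project_id, name='Add numbers')   # look up by name
print(add_numbers)

# Delete a component by ID once you know it, for example from the metadata printed above:
# client.delete_component(project_id=project_id, component_id='<component_id>')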
204F36069EE071B185A1BCE8370946A50BDDCDD5,https://dataplatform.cloud.ibm.com/docs/content/wsj/synthetic/import_data_sd.html?context=cdpaas&locale=en,Creating synthetic data from imported data,"Creating synthetic data from imported data
Creating synthetic data from imported data Supported data sources for Synthetic Data Generator. Using Synthetic Data Generator, you can connect to your data no matter where it lives, using either connectors or data files. Data size The Synthetic Data Generator environment can import up to 2.5GB of data. Connectors The following table lists the data sources that you can connect to using Synthetic Data Generator. Connector Read Only Read & Write Notes [Amazon RDS for MySQL](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-azrds-mysql.html) ✓ Replace the data set option isn't supported for this connection. [Amazon RDS for PostgreSQL](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-azrds-postresql.html) ✓ Replace the data set option isn't supported for this connection. [Amazon Redshift](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-redshift.html) ✓ [Amazon S3](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-amazon-s3.html) ✓ [Apache Cassandra](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cassandra.html) ✓ [Apache Derby](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-derby.html) ✓ [Apache HDFS (formerly known as ""Hortonworks HDFS"")](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-hdfs.html) ✓ [Apache Hive](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-hive.html) ✓ [Box](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-box.html) ✓ ✓ [Cloud Object-Storage](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cos.html) ✓ [Cloud Object-Storage (infrastructure)](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cos-infra.html) ✓ [Cloudant](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cloudant.html) ✓ [Cloudera Impala](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cloudera.html) ✓ [Cognos-Analytics](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cognos.html) ✓ [Data Virtualization Manager for z/OS](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-datavirt-z.html) ✓ [Db2](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-db2.html) ✓ [Db2 Big SQL](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-db2-bigsql.html) ✓ [Db2 for i](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-db2i.html) ✓ [Db2 for z/OS](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-db2zos.html) ✓ [Db2 on Cloud](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-db2-cloud.html) ✓ [Db2 Warehouse](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-db2-wh.html) ✓ [Dropbox](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-dropbox.html) ✓ [FTP (remote file system transfer)](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-ftp.html) ✓ [Google BigQuery](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-bigquery.html) ✓ [Google Cloud Storage](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cloud-storage.html) ✓ [Greenplum](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-greenplum.html) ✓ [HTTP](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-http.html) ✓ [IBM Cloud Databases for MySQL](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-compose-mysql.html) ✓ [IBM Cloud Data 
Engine](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-sqlquery.html) ✓ [IBM Cloud Databases for DataStax](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-datastax.html) ✓ [IBM Cloud Databases for MongoDB](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-mongodb.html) ✓ [IBM Cloud Databases for PostgreSQL](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-dbase-postgresql.html) ✓ [IBM Watson Query](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-data-virtual.html) ✓ [Informix](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-informix.html) ✓ [Looker](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-looker.html) ✓ [MariaDB](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-mariadb.html) [Microsoft Azure Blob Storage](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-azureblob.html) ✓ [Microsoft Azure Cosmos DB](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cosmosdb.html) ✓ [Microsoft Azure Data Lake Storage](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-azuredls.html) ✓ [Microsoft Azure File Storage](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-azurefs.html) ✓ [Microsoft Azure SQL Database](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-azure-sql.html) ✓ [Microsoft SQL Server](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-sql-server.html) ✓ SQL pushback isn't supported when Active Directory is enabled. [MongoDB](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-mongodb.html) ✓ [MySQL](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-mysql.html) ✓ [Netezza Performance Server](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-puredata.html) ✓ [OData](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-odata.html) ✓ [Oracle](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-oracle.html) ✓ [Planning Analytics (formerly known as ""IBM TM1"")](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-plananalytics.html) ✓ Only the Replace the data set option is supported. [PostgreSQL](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-postgresql.html) ✓ [Presto](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-presto.html) ✓ [Salesforce.com](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-salesforce.html) ✓ [SAP ASE](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-sap-ase.html) ✓ [SAP IQ](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-sap-iq.html) ✓ [SAP OData](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-sapodata.html) ✓ [Snowflake](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-snowflake.html) ✓ [Tableau](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-tableau.html) ✓ [Teradata](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-teradata.html) ✓ Data files In addition to using data from remote data sources or integrated databases, you can use data from files. You can work with data from the following types of files using Synthetic Data Generator. Connector Read Only Read & Write AVRO ✓ CSV/delimited ✓ Excel (XLS, XLSX) ✓ JSON ✓ ORC Parquet SAS ✓ SAV ✓ SHP XML ✓",conceptual,0,test
839B16AC73C000ECE7BAC7D50BAF6F7E37F2CAD9,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/time_formats_clem_language.html?context=cdpaas&locale=en,Time (SPSS Modeler),"Time (SPSS Modeler)
Time The CLEM language supports the time formats listed in this section. CLEM language time formats Table 1. CLEM language time formats Format Examples HHMMSS 120112, 010101, 221212 HHMM 1223, 0745, 2207 MMSS 5558, 0100 HH:MM:SS 12:01:12, 01:01:01, 22:12:12 HH:MM 12:23, 07:45, 22:07 MM:SS 55:58, 01:00 (H)H:(M)M:(S)S 12:1:12, 1:1:1, 22:12:12 (H)H:(M)M 12:23, 7:45, 22:7 (M)M:(S)S 55:58, 1:0 HH.MM.SS 12.01.12, 01.01.01, 22.12.12 HH.MM 12.23, 07.45, 22.07 MM.SS 55.58, 01.00 (H)H.(M)M.(S)S 12.1.12, 1.1.1, 22.12.12 (H)H.(M)M 12.23, 7.45, 22.7 (M)M.(S)S 55.58, 1.0",conceptual,0,test
653FFEDFAC00F360750F776A3A60F6AAD38ED954,https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-details.html?context=cdpaas&locale=en,Creating batch deployments in Watson Machine Learning,"Creating batch deployments in Watson Machine Learning
Creating batch deployments in Watson Machine Learning A batch deployment processes input data from a file, data connection, or connected data in a storage bucket, and writes the output to a selected destination. Before you begin 1. Save a model to a deployment space. 2. Promote or add the input file for the batch deployment to the space. For more information, see [Deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-spaces_local.html). Supported frameworks Batch deployment is supported for these frameworks and asset types: * Decision Optimization * PMML * Python functions * PyTorch-Onnx * Tensorflow * Scikit-learn * Python scripts * Spark MLlib * SPSS * XGBoost Notes: * You can create batch deployments only of Python functions and models based on the PMML framework programmatically. * Your list of deployment jobs can contain two types of jobs: WML deployment job and WML batch deployment. * When you create a batch deployment (through the UI or programmatically), an extra default deployment job is created of the type WML deployment job. The extra job is a parent job that stores all deployment runs generated for that batch deployment that were triggered by the Watson Machine Learning API. * The standard WML batch deployment type job is created only when you create a deployment from the UI. You cannot create a WML batch deployment type job by using the API. * The limitations of WML deployment job are as follows: * The job cannot be edited. * The job cannot be deleted unless the associated batch deployment is deleted. * The job doesn't allow scheduling. * The job doesn't allow notifications. * The job doesn't allow changing retention settings. For more information, see [Data sources for scoring batch deployments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-data-sources.html). For more information, see [Batch deployment input details by framework](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-input-by-framework.html) Creating a batch deployment To create a batch deployment: 1. From the deployment space, click the name of the saved model that you want to deploy. The model detail page opens. 2. Click New deployment. 3. Choose Batch as the deployment type. 4. Enter a name and an optional description for your deployment. 5. Select a [hardware specification](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-hardware-configs.html). 6. Click Create. When status changes to Deployed, your deployment is created. Note: Additionally, you can create a batch deployment by using any of these interfaces: * Watson Studio user interface, from an Analytics deployment space * Watson Machine Learning Python Client * Watson Machine Learning REST APIs Creating batch deployments programmatically See [Machine learning samples and examples](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-samples-overview.html) for sample notebooks that demonstrate creating batch deployments that use the Watson Machine Learning [REST API](https://cloud.ibm.com/apidocs/machine-learning) and Watson Machine Learning [Python client library](https://ibm.github.io/watson-machine-learning-sdk/). Viewing deployment details Click the name of a deployment to view the details. ![View deployment details](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/batch-details.png) You can view the configuration details such as hardware and software specifications. 
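For the programmatic route mentioned above, a minimal sketch with the Watson Machine Learning Python client might look as follows; it assumes an authenticated APIClient named client with a default deployment space already set, and the model ID, deployment ID, field names, and values are placeholders:

batch_meta = {
    client.deployments.ConfigurationMetaNames.NAME: 'my batch deployment',
    client.deployments.ConfigurationMetaNames.BATCH: {},
    client.deployments.ConfigurationMetaNames.HARDWARE_SPEC: {'name': 'S', 'num_nodes': 1},
}
deployment = client.deployments.create('<model_id>', meta_props=batch_meta)

# Batch scoring runs as a deployment job rather than a synchronous request.
job = client.deployments.create_job('<deployment_id>', meta_props={
    client.deployments.ScoringMetaNames.INPUT_DATA: [
        {'fields': ['AGE', 'INCOME'], 'values': [[35, 50000], [42, 61000]]}
    ],
})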
You can also get the deployment ID, which you can use in API calls from an endpoint. For more information, see [Looking up a deployment endpoint](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-online.html). Learn more * For more information, see [Creating jobs in deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/create-jobs.html). * Refer to [Machine Learning samples and examples](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-samples-overview.html) for links to sample notebooks that demonstrate creating batch deployments that use the Watson Machine Learning [REST API](https://cloud.ibm.com/apidocs/machine-learning-cp) and Watson Machine Learning [Python client library](https://ibm.github.io/watson-machine-learning-sdk/). Parent topic:[Managing predictive deployments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-general.html)",how-to,1,test
536EF493AB96990DE8E237EDB8A97DB989EF15C8,https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-create.html?context=cdpaas&locale=en,Creating a pipeline,"Creating a pipeline
Creating a pipeline Create a pipeline to run an end-to-end scenario to automate all or part of the AI lifecycle. For example, create a pipeline that creates and trains an asset, promotes it to a space, creates a deployment, then scores the model. Watch this video to see how to create and run a sample pipeline. This video provides a visual method to learn the concepts and tasks in this documentation. Overview: Adding a pipeline to a project Follow these steps to add a pipeline to a project: 1. Open a project. 2. Click New task > Automate model lifecycle. 3. Enter a name and an optional description. 4. Click Create to open the canvas. Pipeline access When you use a pipeline to automate a flow, you must have access to all of the elements in the pipeline. Make sure that you create and run pipelines with the proper access to all assets, projects, and spaces used in the pipeline. Related services In addition to access to all elements in a pipeline, you must have the services available to run all assets you add to a pipeline. For example, if you automate a pipeline that trains and deploys a model, you must have the Watson Studio and Watson Machine Learning services. If a required service is missing, the pipeline will not run. This table lists assets that require services in addition to Watson Studio: Asset Required service AutoAI experiment Watson Machine Learning Batch deployment job Watson Machine Learning Online deployment (web service) Watson Machine Learning Overview: Building a pipeline Follow these high-level steps to build and run a pipeline. 1. Drag any node objects onto the canvas. For example, drag a Run notebook job node onto the canvas. 2. Use the action menu for each node to view and select options. 3. Configure a node as required. You are prompted to supply the required input options. For some nodes, you can view or configure output options as well. For examples of configuring nodes, see [Configuring pipeline components](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-config.html). 4. Drag from one node to another to connect and order the pipeline. 5. Optional: Click the Global objects icon ![global objects icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/global-objects-icon.png) in the toolbar to configure runtime options for the pipeline. 6. When the pipeline is complete, click the Run icon on the toolbar to run the pipeline. You can run a trial to test the pipeline, or you can schedule a job when you are confident in the pipeline. Configuring nodes As you add nodes to a pipeline, you must configure them to provide all of the required details. For example, if you add a node to run an AutoAI experiment, you must configure the node to specify the experiment, load the training data, and specify the output file: ![AutoAI node parameters](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/OE-run-autoai-node.png) Connecting nodes When you build a complete pipeline, the nodes must be connected in the order in which they run in the pipeline. To connect nodes, hover over a node and drag a connection to the target node. Disconnected nodes are run in parallel. ![Connecting nodes](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/pipelines_conecting_nodes_gif.gif) Defining pipeline parameters A pipeline parameter defines a global variable for the whole pipeline. 
Use pipeline parameters to specify data from one of these categories: Parameter type Can specify Basic JSON types such as string, integer, or a JSON object CPDPath Resources available within the platform, such as assets, asset containers, connections, notebooks, hardware specs, projects, spaces, or jobs InstanceCRN Storage, machine learning instances, and other services. Other Various configuration types, such as status, timeout length, estimator, or error policies. To specify a pipeline parameter: 1. Click the global objects icon ![global objects icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/global-objects-icon.png) in the toolbar to open the Manage global objects window. 2. Select the Pipeline parameters tab to configure parameters. 3. Click Add pipeline parameter. 4. Specify a name and an optional description. 5. Select a type and provide any required information. 6. Click Add when the definition is complete, and repeat the previous steps until you finish defining the parameters. 7. Close the Manage global objects dialog. The parameters are now available to the pipeline. Next steps [Configure pipeline components](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-config.html) Parent topic:[IBM Watson Pipelines](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-overview.html)",how-to,1,test
EC433541F7F0C2DC7620FF10CF44884F96EF7AA5,https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/add-script-to-notebook.html?context=cdpaas&locale=en,Importing scripts into a notebook,"Importing scripts into a notebook
Importing scripts into a notebook If you want to streamline your notebooks, you can move some of the code from your notebooks into a script that your notebook can import. For example, you can move all helper functions, classes, and visualization code snippets into a script, and the script can be imported by all of the notebooks that share the same runtime. Without all of the extra code, your notebooks can more clearly communicate the results of your analysis. To import a script from your local machine to a notebook and write to the script from the notebook, use one of the following options: * Copy the code from your local script file into a notebook cell. * For Python: At the beginning of this cell, add %%writefile myfile.py to save the code as a Python file to your working directory. Notebooks that use the same runtime can also import this file. The advantage of this method is that the code is available in your notebook, and you can edit and save it as a new Python script at any time. * For R: If you want to save code in a notebook as an R script to the working directory, you can use the writeLines(myfile.R) function. * Save your local script file in Cloud Object Storage and then make the file available to the runtime by adding it to the runtime's local file system. This is only supported for Python. 1. Click the Upload asset to project icon (![Shows the Upload asset to project icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/images/find_data_icon.png)), and then browse the script file or drag it into your notebook sidebar. The script file is added to the Cloud Object Storage bucket associated with your project. 2. Make the script file available to the Python runtime by adding the script to the runtime's local file system: 1. Click the Code snippets icon (![Code snippets icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/code-snippets-icon.png)), and then select Read data. ![Read data](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/code-snippets-read-data.png) 2. Click Select data from project and then select Data asset. 3. From the list of data assets available in your project's COS, select your script and then click Select. ![Select data from project](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/select-data-from-project.png). 4. Click an empty cell in your notebook and then from the Load as menu in the notebook sidebar select Insert StreamingBody object. ![Insert StreamingBody object to notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/read-as-streaming-body.png) 5. Write the contents of the StreamingBody object to a file in the local runtime's file system:

f = open('<myScript>.py', 'wb')
f.write(streaming_body_1.read())
f.close()

This opens a file with write access and calls the write method to write to the file. 6. Import the script: import <myScript> To import the classes to access the methods in a script in your notebook, use the following command: * For Python: from <python file name> import <class name> * For R: source(""./myCustomFunctions.R"") available in base R To source an R script from the web: source_url(""<insert URL here>"") available in devtools Parent topic:[Libraries and scripts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/libraries.html)",how-to,1,test
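As a small, self-contained illustration of the %%writefile option described above (the module name myhelpers and the function greet are made up for this example), run this in one notebook cell:

%%writefile myhelpers.py
def greet(name):
    return 'Hello, ' + name

and then, in a later cell in the same runtime (or in another notebook that shares that runtime), import and use the saved script:

import myhelpers
print(myhelpers.greet('watsonx'))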
C32FE380CF3083B6D85554063B5ACB153FC1C8BE,https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/quickstart-tutorials.html?context=cdpaas&locale=en,Quick start tutorials,"Quick start tutorials
Quick start tutorials Take quick start tutorials to learn how to perform specific tasks, such as refine data or build a model. These tutorials help you quickly learn how to do a specific task or set of related tasks. The quick start tutorials are categorized by task: * [Preparing data](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/quickstart-tutorials.html?context=cdpaas&locale=enprepare) * [Analyzing and visualizing data](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/quickstart-tutorials.html?context=cdpaas&locale=enanalyze) * [Building, deploying, and trusting models](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/quickstart-tutorials.html?context=cdpaas&locale=enbuild) * [Working with foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/quickstart-tutorials.html?context=cdpaas&locale=enprompt) Each tutorial requires one or more service instances. Some services are included in multiple tutorials. The tutorials are grouped by task. You can start with any task. Each of these tutorials provides a description of the tool, a video, the instructions, and additional learning resources. The tags for each tutorial describe the level of expertise ( Beginner , Intermediate , or Advanced ), and the amount of coding required ( No code , Low code , or All code ). After completing these tutorials, see the [Other learning resources](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/quickstart-tutorials.html?context=cdpaas&locale=enresources) section to continue your learning. Preparing data To get started with preparing, transforming, and integrating data, understand the overall workflow, choose a tutorial, and check out other learning resources for working on the platform. Your data preparation workflow has these basic steps: 1. Create a project. 2. If necessary, create the service instance that provides the tool you want to use and associate it with the project. 3. Add data to your project. You can add data files from your local system, data from a remote data source that you connect to, data from a catalog, or sample data. 4. Choose a tool to analyze your data. Each of the tutorials describes a tool. 5. Run or schedule a job to prepare your data. Tutorials for preparing data Each of these tutorials provides a description of the tool, a video, the instructions, and additional learning resources: Tutorial Description Expertise for tutorial [Refine and visualize data with Data Refinery](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-refine.html) Prepare and visualize tabular data with a graphical flow editor. Select operations to manipulate data. <br><br>Beginner<br><br>No code [Generate synthetic tabular data](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-generate-data.html) Generate synthetic tabular data using a graphical flow editor. Select operations to generate data. <br><br>Beginner<br><br>No code Analyzing and visualizing data To get started with analyzing and visualizing data, understand the overall workflow, choose a tutorial, and check out other learning resources for working with other tools. Your analyzing and visualizing data workflow has these basic steps: 1. Create a project. 2. If necessary, create the service instance that provides the tool you want to use and associate it with the project. 3. Add data to your project. 
You can add data files from your local system, data from a remote data source that you connect to, data from a catalog, or sample data. 4. Choose a tool to analyze your data. Each of the tutorials describes a tool. Tutorials for analyzing and visualizing data Each of these tutorials provides a description of the tool, a video, the instructions, and additional learning resources: Tutorial Description Expertise for tutorial [Analyze data in a Jupyter notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-analyze.html) Load data, run, and share a notebook. Understand generated Python code. <br><br>Intermediate<br><br>All code [Refine and visualize data with Data Refinery](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-refine.html) Prepare and visualize tabular data with a graphical flow editor. Select operations to manipulate data. <br><br>Beginner<br><br>No code Building, deploying, and trusting models To get started with building, deploying, and trusting models, understand the overall workflow, choose a tutorial, and check out other learning resources for working on the platform. The model workflow has three main steps: build a model asset, deploy the model, and build trust in the model. ![Overview of model workflow](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/ml-engineer-overview-wx.svg) Tutorials for building, deploying, and trusting models Each tutorial provides a description of the tool, a video, the instructions, and additional learning resources: Tutorial Description Expertise for tutorial [Build and deploy a machine learning model with AutoAI](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build.html) Automatically build model candidates with the AutoAI tool. Build, deploy, and test a model without coding. <br><br>Beginner<br><br>No code [Build and deploy a machine learning model in a notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build-notebook.html) Build a model by updating and running a notebook that uses Python code and the Watson Machine Learning APIs. Build, deploy, and test a scikit-learn model that uses Python code. <br><br>Intermediate<br><br>All code [Build and deploy a machine learning model with SPSS Modeler](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build-spss.html) Build a C5.0 model that uses the SPSS Modeler tool. Drop data and operation nodes on a canvas and select properties. <br><br>Beginner<br><br>No code [Build and deploy a Decision Optimization model](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-do.html) Automatically build scenarios with the Modeling Assistant. Solve and explore scenarios, then deploy and test a model without coding. <br><br>Intermediate<br><br>No code [Automate the lifecycle for a model with pipelines](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-pipeline.html) Create and run a pipeline to automate building and deploying a machine learning model. Drop operation nodes on a canvas and select properties. <br><br>Beginner<br><br>No code Prompting foundation models To get started with prompting foundation models, understand the overall workflow, choose a tutorial, and check",how-to,1,test
39AD64C9004E83507A968C5C0B1C8EF952B3EACE,https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_cos.html?context=cdpaas&locale=en,Setting up IBM Cloud Object Storage for use with IBM watsonx,"Setting up IBM Cloud Object Storage for use with IBM watsonx
Setting up IBM Cloud Object Storage for use with IBM watsonx An IBM Cloud Object Storage service instance is provisioned automatically with a Lite plan when you join IBM watsonx. Workspaces, such as projects, require IBM Cloud Object Storage to store files that are related to assets, including uploaded data files or notebook files. You can also connect to IBM Cloud Object Storage as a data source. See [IBM Cloud Object Storage connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cos.html). Overview of setting up Cloud Object Storage To set up Cloud Object Storage, complete these tasks: 1. [Generate an administrative key](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_cos.html?context=cdpaas&locale=engen-key). 2. [Ensure that Global location is set in each user's profile](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_cos.html?context=cdpaas&locale=englobal). 3. [Provide access to Cloud Object Storage](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_cos.html?context=cdpaas&locale=enaccess). * [Assign roles to enable access](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_cos.html?context=cdpaas&locale=enassign). * [Enable storage delegation](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_cos.html?context=cdpaas&locale=enstor-del). 4. [Optional: Protect sensitive data](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_cos.html?context=cdpaas&locale=enprotect). 5. [Optional: Encrypt your IBM Cloud Object Storage instance with your own key](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_cos.html?context=cdpaas&locale=enbyok). Watch the following video to see how administrators set up Cloud Object Storage for use with Cloud Pak for Data as a Service. This video provides a visual method to learn the concepts and tasks in this documentation. Generate an administrative key You generate an administrative key for Cloud Object Storage by creating an initial test project. The test project can be deleted after its creation. Its sole purpose is to generate the key. To automatically generate the administrative key for your Cloud Object Storage instance: 1. From the IBM watsonx main menu, select Projects > View all projects and then click New project. 2. Specify to create an empty project. 3. Enter a project name, such as ""Test Project"". 4. Select your Cloud Object Storage instance. 5. Click Create. The administrative key is generated. 6. Delete the test project. Ensure that Global location is set for Cloud Object Storage in each user's profile Cloud Object Storage requires the Global location to be configured in each user's profile. The Global location is configured automatically, but it might be changed by mistake. An error occurs when a project is created if the Global location is not enabled in the user's profile. Ask users to check that Global location is enabled. [Check for the Global location in each user's profile](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/personal-settings.html). Provide access to Cloud Object Storage You can provide different levels of access to Cloud Object Storage for people who need to work in IBM watsonx. Using the storage delegation setting on the Cloud Object Storage instance, you can provide quick access to most users to create projects and catalogs. However, another option is to provide targeted access by using IAM roles and access groups. 
Role-based access enacts stricter controls for viewing the Cloud Object Storage instance directly and for creating projects and catalogs. If you decide to provide controlled access with IAM roles and access groups, you must disable storage delegation for the Cloud Object Storage instance. You enable storage delegation for the Cloud Object Storage instance to provide access to nonadministrative users. Users with minimal IAM permissions can create projects and catalogs, which automatically create buckets in the Cloud Object Storage instance. See [Enable storage delegation for nonadministrative users](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_cos.html?context=cdpaas&locale=enstor-del). You provide more controlled access with IAM roles and access groups. For example, the Cloud Object Storage Manager role provides permissions to create projects and spaces together with the corresponding buckets in the Cloud Object Storage instance. It also provides permissions to view all buckets and encryption root keys in the Cloud Object Storage instance, to view the metadata for a bucket and delete buckets, and to perform other administrative tasks that are related to buckets. See [Assign roles to enable access](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_cos.html?context=cdpaas&locale=enassign). No role assignments are needed for collaborators who work with the data in a project or catalog. Users who are given collaborator roles can work in the project or catalog without storage delegation or an IAM role. See [Project collaborator roles and permissions](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/collaborator-permissions.html). Assign roles to enable access The IBM Cloud account owner or administrator assigns appropriate roles to users to provide access to Cloud Object Storage. Storage delegation must be disabled when using role-based access. Rather than assigning each individual user a set of roles, you can create an access group. Access groups expedite role assignments by grouping permissions. For instructions on creating access groups, see [IBM Cloud docs: Setting up access groups](https://cloud.ibm.com/docs/account?topic=account-groups&interface=ui). Enable storage delegation Storage delegation for the Cloud Object Storage instance allows nonadministrative users to create projects, the Platform assets catalog, and the corresponding Cloud Object Storage buckets. Storage delegation provides wide access to Cloud Object Storage and allows users with minimal permissions to create projects. Storage delegation for projects also includes deployment spaces. To enable storage delegation for the Cloud Object Storage instance: 1. From the navigation menu, select Administration > Configurations and settings > Storage delegation. 2. Set storage delegation for Projects to on. 3. Optional. If you want a non-administrative user to",how-to,1,test