doc_id,url,title,text,label,label_id,split
E5EA38444D60150C0FD2EB498BF33793DDE5FED2,https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/localization.html?context=cdpaas&locale=en,Language support for the product and the documentation,"Language support for the product and the documentation
Language support for the product and the documentation IBM watsonx is translated into multiple languages. Supported languages The IBM watsonx user interface is translated into these languages: * Brazilian Portuguese * Simplified Chinese * Traditional Chinese * French * German * Italian * Japanese * Korean * Spanish * Swedish The documentation is automatically translated into these languages: * Brazilian Portuguese * Simplified Chinese * French * German * Italian * Japanese * Korean * Spanish IBM is not responsible for any damages or losses resulting from the use of automatically (machine) translated content. When the translated documentation is not as current as the English content, you see a message and have the option of switching to the English content. Changing languages To change the language for this documentation, scroll to the end of any documentation page, and select a language from the language selector.  To change the language for both the product user interface and this documentation, select a different language for your browser: * In the Google Chrome browser, you can change the language in the advanced settings. * In the Mozilla Firefox browser, you can change the language in the general settings. Learn more * [Browser support](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/browser-support.html) Parent topic:[FAQ](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html)",conceptual,0,train
2B2899A3878E20A4B73B0F11CFC4FD815A81E13F,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/questnuggetnodeslots.html?context=cdpaas&locale=en,applyquestnode properties,"applyquestnode properties
applyquestnode properties You can use QUEST modeling nodes to generate a QUEST model nugget. The scripting name of this model nugget is applyquestnode. For more information on scripting the modeling node itself, see [questnode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/questnodeslots.html). applyquestnode properties Table 1. applyquestnode properties applyquestnode Properties Values Property description sql_generate Never / NoMissingValues / MissingValues / native When using data from a database, SQL code can be pushed back to the database for execution, providing superior performance for many operations. calculate_conf flag display_rule_id flag Adds a field in the scoring output that indicates the ID for the terminal node to which each record is assigned. calculate_raw_propensities flag calculate_adjusted_propensities flag",conceptual,0,train
9E77548AF396E9E9474371705BCFFF55684C5760,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/jython/clementine/python_object_oriented.html?context=cdpaas&locale=en,Object-oriented programming,"Object-oriented programming
Object-oriented programming Object-oriented programming is based on the notion of creating a model of the target problem in your programs. Object-oriented programming reduces programming errors and promotes the reuse of code. Python is an object-oriented language. Objects defined in Python have the following features: * Identity. Each object must be distinct, and this must be testable. The is and is not tests exist for this purpose. * State. Each object must be able to store state. Attributes, such as fields and instance variables, exist for this purpose. * Behavior. Each object must be able to manipulate its state. Methods exist for this purpose. Python includes the following features for supporting object-oriented programming: * Class-based object creation. Classes are templates for the creation of objects. Objects are data structures with associated behavior. * Inheritance with polymorphism. Python supports single and multiple inheritance. All Python instance methods are polymorphic and can be overridden by subclasses. * Encapsulation with data hiding. Python allows attributes to be hidden. When hidden, you can access attributes from outside the class only through methods of the class. Classes implement methods to modify the data.",conceptual,0,train
F839CD35991DF790F17239C9C63BFCAE701F3D65,https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-tips.html?context=cdpaas&locale=en,Tips for writing foundation model prompts: prompt engineering,"Tips for writing foundation model prompts: prompt engineering
Tips for writing foundation model prompts: prompt engineering Part art, part science, prompt engineering is the process of crafting prompt text to best effect for a given model and parameters. When it comes to prompting foundation models, there isn't just one right answer. There are usually multiple ways to prompt a foundation model for a successful result. Use the Prompt Lab to experiment with crafting prompts. * For help using the prompt editor, see [Prompt Lab](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-lab.html). * Try the samples that are available from the Sample prompts tab. * Learn from documented samples. See [Sample prompts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html). As you experiment, remember these tips. The tips in this topic will help you successfully prompt most text-generating foundation models. Tip 1: Always remember that everything is text completion Your prompt is the text you submit for processing by a foundation model. The Prompt Lab in IBM watsonx.ai is not a chatbot interface. For most models, simply asking a question or typing an instruction usually won't yield the best results. That's because the model isn't answering your prompt, the model is appending text to it. This image demonstrates prompt text and generated output: * Prompt text: ""I took my dog "" * Generated output: ""to the park.""  Tip 2: Include all the needed prompt components Effective prompts usually have one or more of the following components: instruction, context, examples, and cue. Instruction An instruction is an imperative statement that tells the model what to do. For example, if you want the model to list ideas for a dog-walking business, your instruction could be: ""List ideas for starting a dog-walking business:"" Context Including background or contextual information in your prompt can nudge the model output in a desired direction. Specifically, (tokenized) words that appear in your prompt text are more likely to be included in the generated output. Examples To indicate the format or shape that you want the model response to be, include one or more pairs of example input and corresponding desired output showing the pattern you want the generated text to follow. (Including one example in your prompt is called one-shot prompting, including two or more examples in your prompt is called few-shot prompting, and when your prompt has no examples, that's called zero-shot prompting.) Note that when you are prompting models that have been fine-tuned, you might not need examples. Cue A cue is text at the end of the prompt that is likely to start the generated output on a desired path. (Remember, as much as it seems like the model is responding to your prompt, the model is really appending text to your prompt or continuing your prompt.) Tip 3: Include descriptive details The more guidance, the better. Experiment with including descriptive phrases related to aspects of your ideal result: content, style, and length. Including these details in your prompt can cause a more creative or more complete result to be generated. For example, you could improve upon the sample instruction given previously: * Original: ""List ideas for starting a dog-walking business"" * Improved: ""List ideas for starting a large, wildly successful dog-walking business"" Example Before In this image, you can see a prompt with the original, simple instruction. This prompt doesn't produce great results.  
After In this image, you can see all the prompt components: instruction (complete with descriptive details), context, example, and cue. This prompt produces a much better result.  You can experiment with this prompt in the Prompt Lab yourself: Model: gpt-neox-20b Decoding: Sampling * Temperature: 0.7 * Top P: 1 * Top K: 50 * Repetition penalty: 1.02 Stopping criteria: * Stop sequence: Two newline characters * Min tokens: 0 * Max tokens: 80 Prompt text: Copy this prompt text and paste it into the freeform prompt editor in Prompt Lab, then click Generate to see a result. With no random seed specified, results will vary each time you submit the prompt. Based on the following industry research, suggest ideas for starting a large, wildly successful dog-walking business. Industry research: The most successful dog-walking businesses cater to owners' needs and desires while also providing great care to the dogs. For example, owners want flexible hours, a shuttle to pick up and drop off dogs at home, and personalized services, such as custom meal and exercise plans. Consider too how social media has permeated our lives. Web-enabled interaction provide images and video that owners will love to share online, which is great advertising for the business. Ideas for starting a lemonade business: - Set up a lemonade stand - Partner with a restaurant - Get a celebrity to endorse the lemonade Ideas for starting a large, wildly successful dog-walking business: Learn",how-to,1,train
F1B21B1232720492424BB07CD73C93DF2B9CD229,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/coxregnodeslots.html?context=cdpaas&locale=en,coxregnode properties,"coxregnode properties
coxregnode properties The Cox regression node enables you to build a survival model for time-to-event data in the presence of censored records. The model produces a survival function that predicts the probability that the event of interest has occurred at a given time (t) for given values of the input variables. coxregnode properties Table 1. coxregnode properties coxregnode Properties Values Property description survival_time field Cox regression models require a single field containing the survival times. target field Cox regression models require a single target field, and one or more input fields. See [Common modeling node properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/modelingnodeslots_common.htmlmodelingnodeslots_common) for more information. method Enter / Stepwise / BackwardsStepwise groups field model_type MainEffects / Custom custom_terms [""BP*Sex"" ""BP*Age""] mode Expert / Simple max_iterations number p_converge 1.0E-4 / 1.0E-5 / 1.0E-6 / 1.0E-7 / 1.0E-8 / 0 l_converge 1.0E-1 / 1.0E-2 / 1.0E-3 / 1.0E-4 / 1.0E-5 / 0 removal_criterion LR / Wald / Conditional probability_entry number probability_removal number output_display EachStep / LastStep ci_enable flag ci_value 90 / 95 / 99 correlation flag display_baseline flag survival flag hazard flag log_minus_log flag one_minus_survival flag separate_line field value number or string If no value is specified for a field, the default option ""Mean"" will be used for that field.",conceptual,0,train
E45EEB80195E54D02A6F6CB7505F1FB73B4D4DAB,https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-plan-options2.html?context=cdpaas&locale=en,Watson OpenScale offering plan options,"Watson OpenScale offering plan options
Watson OpenScale offering plan options Watson OpenScale enables responsible, transparent, and explainable AI. With Watson OpenScale you can: * Evaluate machine learning models for dimensions such as fairness, quality, or drift. * Explore transactions to gain insights about your model. Watson OpenScale legacy offering plans Important: The legacy offering plan for Watson OpenScale is available only in the Frankfurt region. In the Dallas region, the [watsonx.governance plans](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-plan-options.html) are available instead. Watson OpenScale Standard v2 plan Watson OpenScale offers a Standard v2 plan that charges users on a per-model basis. There are no restrictions or limitations on payload data, feedback rows, or explanations under the Standard v2 instance. Regional limitations Watson OpenScale is not available in some regions. See [Regional availability for services and features](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/regional-datactr.html) for more details. Note: The regional availability for every service can also be found in the [IBM watsonx catalog](https://dataplatform.cloud.ibm.com/data/catalog?target=services&context=cpdaas). Quota limits To avoid performance issues and manage resources efficiently, Watson OpenScale sets the following quota limits: Asset Limit DataMart 100 per instance Service providers 100 per instance Integrated systems 100 per instance Subscriptions 100 per service provider Monitor instances 100 per subscription Every asset in Watson OpenScale has a hard limit of 10000 instances of the asset per service instance. PostgreSQL databases for Watson OpenScale You can use a PostgreSQL database for your Watson OpenScale instance. PostgreSQL is a powerful, open source object-relational database that is highly customizable and compliant with many security standards. If your model processes personally identifiable information (PII), use a PostgreSQL database for your model. PostgreSQL is compliant with: * GDPR * HIPAA * PCI-DSS * SOC 1 Type 2 * SOC 2 Type 2 * ISO 27001 * ISO 27017 * ISO 27018 * ISO 27701 Next steps [Managing the Watson OpenScale service](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-provision-launch.html) Parent topic:[watsonx.governance](https://dataplatform.cloud.ibm.com/docs/content/svc-welcome/aiopenscale.html)",conceptual,0,train
9A5011652C8FAD610EF217B82B7F28C8256DCE8B,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/featureselectionnuggetnodeslots.html?context=cdpaas&locale=en,applyfeatureselectionnode properties,"applyfeatureselectionnode properties
applyfeatureselectionnode properties You can use Feature Selection modeling nodes to generate a Feature Selection model nugget. The scripting name of this model nugget is applyfeatureselectionnode. For more information on scripting the modeling node itself, see [featureselectionnode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/featureselectionnodeslots.htmlfeatureselectionnodeslots). applyfeatureselectionnode properties Table 1. applyfeatureselectionnode properties applyfeatureselectionnode Properties Values Property description ranked_values Specifies which ranked fields are checked in the model browser. screened_values Specifies which screened fields are checked in the model browser.",conceptual,0,train
EBB83F528AC02840EFE18510ED95979D2CDA5641,https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-details.html?context=cdpaas&locale=en,AutoAI implementation details,"AutoAI implementation details
AutoAI implementation details AutoAI automatically prepares data, applies algorithms, or estimators, and builds model pipelines that are best suited for your data and use case. The following sections describe some of these technical details that go into generating the pipelines and provide a list of research papers that describe how AutoAI was designed and implemented. * [Preparing the data for training (pre-processing)](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-details.html?context=cdpaas&locale=endata-prep) * [Automated model selection](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-details.html?context=cdpaas&locale=enauto-select) * [Algorithms used for classification models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-details.html?context=cdpaas&locale=enestimators-classification) * [Algorithms used for regression models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-details.html?context=cdpaas&locale=enestimators-regression) * [Metrics by model type](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-details.html?context=cdpaas&locale=enmetric-by-model) * [Data transformations](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-details.html?context=cdpaas&locale=endata-transformations) * [Automated Feature Engineering](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-details.html?context=cdpaas&locale=enfeat-eng) * [Hyperparameter optimization](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-details.html?context=cdpaas&locale=enhyper-opt) * [AutoAI FAQ](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-details.html?context=cdpaas&locale=enautoai-faq) * [Learn more](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-details.html?context=cdpaas&locale=enadd-resource) Preparing the data for training (data pre-processing) During automatic data preparation, or pre-processing, AutoAI analyzes the training data and prepares it for model selection and pipeline generation. Most data sets contain missing values, but machine learning algorithms typically expect no missing values. One exception to this rule is described in [xgboost section 3.4](https://arxiv.org/abs/1603.02754). AutoAI algorithms perform various missing value imputations in your data set by using various techniques, making your data ready for machine learning. In addition, AutoAI detects and categorizes features based on their data types, such as categorical or numerical. It explores encoding and scaling strategies that are based on the feature categorization.
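As a rough sketch of this kind of preparation (not AutoAI's actual implementation), the following scikit-learn example imputes and encodes a small, hypothetical data set: missing numerical values are filled with the column mean and missing categorical values with the most frequent value, which matches the default strategies noted in the pre-processing steps that follow. The column names and values are invented for illustration.
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler
# Hypothetical training data with missing values in both feature classes
data = pd.DataFrame({
    'age': [34, np.nan, 51, 29],
    'income': [42000, 58000, np.nan, 39000],
    'segment': ['gold', 'silver', np.nan, 'gold'],
})
numerical = ['age', 'income']
categorical = ['segment']
# Mean imputation plus scaling for numerical columns; most-frequent imputation
# plus one-hot encoding (ignoring unseen labels) for categorical columns
preprocess = ColumnTransformer([
    ('num', Pipeline([('impute', SimpleImputer(strategy='mean')),
                      ('scale', StandardScaler())]), numerical),
    ('cat', Pipeline([('impute', SimpleImputer(strategy='most_frequent')),
                      ('encode', OneHotEncoder(handle_unknown='ignore'))]), categorical),
])
print(preprocess.fit_transform(data))
The handle_unknown='ignore' option in this sketch is one way to tolerate labels at scoring time that were not seen during training, a situation that AutoAI also handles during pre-processing.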
Data preparation involves these steps: * [Feature column classification](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-details.html?context=cdpaas&locale=encol-classification) * [Feature engineering](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-details.html?context=cdpaas&locale=enfeature-eng) * [Pre-processing (data imputation and encoding)](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-details.html?context=cdpaas&locale=enpre-process) Feature column classification * Detects the types of feature columns and classifies them as categorical or numerical class * Detects various types of missing values (default, user-provided, outliers) Feature engineering * Handles rows for which target values are missing (drop (default) or target imputation) * Drops unique value columns (except datetime and timestamps) * Drops constant value columns Pre-processing (data imputation and encoding) * Applies Sklearn imputation/encoding/scaling strategies (separately on each feature class). For example, the current default method for missing value imputation strategies, which are used in the product are most frequent for categorical variables and mean for numerical variables. * Handles labels of test set that were not seen in training set * HPO feature: Optimizes imputation/encoding/scaling strategies given a data set and algorithm Automatic model selection The second stage in an AutoAI experiment training is automated model selection. The automated model selection algorithm uses the Data Allocation by using Upper Bounds strategy. This approach sequentially allocates small subsets of training data among a large set of algorithms. The goal is to select an algorithm that gives near-optimal accuracy when trained on all data, while also minimizing the cost of misallocated samples. The system currently supports all Scikit-learn algorithms, and the popular XGBoost and LightGBM algorithms. Training and evaluation of models on large data sets is costly. The approach of starting small subsets and allocating incrementally larger ones to models that work well on the data set saves time, without sacrificing performance. Snap machine learning algorithms were added to the system to boost the performance even more. Selecting algorithms for a model Algorithms are selected to match the data and the nature of the model, but they can also balance accuracy and duration of runtime, if the model is configured for that option. For example, Snap ML algorithms are typically faster for training than Scikit-learn algorithms. They are often the preferred algorithms AutoAI selects automatically for cases where training is optimized for a shorter run time and accuracy. You can manually select them if training speed is a priority. For details, see [Snap ML documentation](https://snapml.readthedocs.io/). For a discussion of when SnapML algorithms are useful, see this [blog post on using SnapML algorithms](https://lukasz-cmielowski.medium.com/watson-studio-autoai-python-api-and-covid-19-data-78169beacf36). Algorithms used for classification models These algorithms are the default algorithms that are used for model selection for classification problems. Table 1: Default algorithms for classification Algorithm Description Decision Tree Classifier Maps observations about an item (represented in branches) to conclusions about the item's target value (represented in leaves). Supports both binary and multiclass labels, and both continuous and categorical features. 
Extra Trees Classifier An averaging algorithm based on randomized decision trees. Gradient Boosted Tree Classifier Produces a classification prediction model in the form of an ensemble of decision trees. It supports binary labels and both continuous and categorical features. LGBM Classifier Gradient boosting framework that uses a leaf-wise (horizontal) tree-based learning algorithm. Logistic Regression Analyzes a data set in which one or more independent variables determine one of two outcomes. Only binary logistic regression is supported. Random Forest Classifier Constructs multiple decision trees to produce the label that is a mode of each decision tree. It supports both binary and multiclass labels, and both continuous and categorical features. SnapDecisionTreeClassifier This algorithm provides a decision tree classifier by using the IBM Snap ML library. SnapLogisticRegression This algorithm provides regularized logistic regression by using the IBM Snap ML solver. SnapRandomForestClassifier This algorithm provides a random forest classifier by using the IBM Snap ML library. SnapSVMClassifier This",conceptual,0,train
9555087B12B80060FB337F8974FEA9261174115E,https://dataplatform.cloud.ibm.com/docs/content/wsd/tutorials/tut_condition.html?context=cdpaas&locale=en,Condition monitoring (SPSS Modeler),"Condition monitoring (SPSS Modeler)
Condition monitoring This example concerns monitoring status information from a machine and the problem of recognizing and predicting fault states. The data is created from a fictitious simulation and consists of a number of concatenated series measured over time. Each record is a snapshot report on the machine in terms of the following: * Time. An integer. * Power. An integer. * Temperature. An integer. * Pressure. 0 if normal, 1 for a momentary pressure warning. * Uptime. Time since last serviced. * Status. Normally 0, changes to an error code if an error occurs (101, 202, or 303). * Outcome. The error code that appears in this time series, or 0 if no error occurs. (These codes are available only with the benefit of hindsight.) This example uses the flow named Condition Monitoring, available in the example project . The data files are cond1n.csv and cond2n.csv. For each time series, there's a series of records from a period of normal operation followed by a period leading to the fault, as shown in the following table: Time Power Temperature Pressure Uptime Status Outcome 0 1059 259 0 404 0 0 1 1059 259 0 404 0 0 ... 51 1059 259 0 404 0 0 52 1059 259 0 404 0 0 53 1007 259 0 404 0 303 54 998 259 0 404 0 303 ... 89 839 259 0 404 0 303 90 834 259 0 404 303 303 0 965 251 0 209 0 0 1 965 251 0 209 0 0 ... 51 965 251 0 209 0 0 52 965 251 0 209 0 0 53 938 251 0 209 0 101 54 936 251 0 209 0 101 ... 208 644 251 0 209 0 101 209 640 251 0 209 101 101 The following process is common to most data mining projects: * Examine the data to determine which attributes may be relevant to the prediction or recognition of the states of interest. * Retain those attributes (if already present), or derive and add them to the data, if necessary. * Use the resultant data to train rules and neural nets. * Test the trained systems using independent test data.",conceptual,0,train
FE207218CE0D1148AA57D10ED8848CD7E6FFD87E,https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-xg-tutorial.html?context=cdpaas&locale=en,Federated Learning XGBoost tutorial for UI,"Federated Learning XGBoost tutorial for UI
Federated Learning XGBoost tutorial for UI This tutorial demonstrates the usage of Federated Learning with the goal of training a machine learning model with data from different users without having users share their data. The steps are done in a low code environment with the UI and with an XGBoost framework. In this tutorial you learn to: * [Step 1: Start Federated Learning as the admin](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-xg-tutorial.html?context=cdpaas&locale=enstep-1) * [Before you begin](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-xg-tutorial.html?context=cdpaas&locale=enbefore-you-begin) * [Start the aggregator](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-xg-tutorial.html?context=cdpaas&locale=enstart-the-aggregator) * [Step 2: Train model as a party](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-xg-tutorial.html?context=cdpaas&locale=enstep-2) * [Step 3: Save and deploy the model online](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-xg-tutorial.html?context=cdpaas&locale=enstep-3) * [Step 4: Score the model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-xg-tutorial.html?context=cdpaas&locale=enstep-4) Notes: * This is a step-by-step tutorial for running a UI driven Federated Learning experiment. To see a code sample for an API driven approach, go to [Federated Learning XGBoost samples](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-xg-samples.html). * In this tutorial, admin refers to the user that starts the Federated Learning experiment, and party refers to one or more users who send their model results after the experiment is started by the admin. While the tutorial can be done by the admin and multiple parties, a single user can also complete a full run through as both the admin and the party. For a simpler demonstrative purpose, in the following tutorial only one data set is submitted by one party. For more information on the admin and party, see [Terminology](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-term.html). Step 1: Start Federated Learning In this section, you learn to start the Federated Learning experiment. Before you begin 1. Log in to [IBM Cloud](https://cloud.ibm.com/). If you don't have an account, create one with any email. 2. [Create a Watson Machine Learning service instance](https://cloud.ibm.com/catalog/services/machine-learning) if you do not have it set up in your environment. 3. Log in to [watsonx](https://dataplatform.cloud.ibm.com/home2?context=wx). 4. Use an existing [project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/projects.html) or create a new one. You must have at least admin permission. 5. Associate the Watson Machine Learning service with your project. 1. In your project, click the Manage > Service & integrations. 2. Click Associate service. 3. Select your Watson Machine Learning instance from the list, and click Associate; or click New service if you do not have one to set up an instance.  Start the aggregator 1. Create the Federated learning experiment asset: 1. Click the Assets tab in your project. 1. Click New asset > Train models on distributed data. 2. Type a Name for your experiment and optionally a description. 3. Verify the associated Watson Machine Learning instance under Select a machine learning instance. If you don't see a Watson Machine Learning instance associated, follow these steps: 1. 
Click Associate a Machine Learning Service Instance. 2. Select an existing instance and click Associate, or create a New service. 3. Click Reload to see the associated service.  4. Click Next. 2. Configure the experiment. 1. On the Configure page, select a Hardware specification. 2. Under the Machine learning framework dropdown, select scikit-learn. 3. For the Model type, select XGBoost. 4. For the Fusion method, select XGBoost classification fusion  3. Define the hyperparameters. 1. Set the value for the Rounds field to 5. 2. Accept the default values for the rest of the fields.  3. Click Next. 4. Select remote training systems. 1. Click Add new systems.  2. Give your Remote Training System a name. 3. Under Allowed identities, select the user that will participate in the experiment, and then click Add. You can add as many allowed identities as participants in this Federated Experiment training instance. For this tutorial, choose only yourself. Any allowed identities must be part of the project and have at leastAdmin permission. 4. When you are finished, click Add systems.  5. Return to the Select remote training systems page, verify that your system is selected, and then click Next.  5. Review your settings, and then click Create. 6. Watch the status. Your Federated Learning experiment status is Pending when it starts. When your experiment is ready for parties to connect, the status will change to Setup – Waiting for remote systems. This may take a few minutes. Step 2: Train model as a party 1. Ensure that you are using the same Python version as the admin. Using a different Python version might cause compatibility issues. To see Python versions compatible with different frameworks, see [Frameworks and Python version compatibility](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-frames.htmlfl-py-fmwk). 2. Create a new local directory. 3. Download the Adult data set into the directory with this command: wget https://api.dataplatform.cloud.ibm.com/v2/gallery-assets/entries/5fcc01b02d8f0e50af8972dc8963f98e/data -O adult.csv. 4. Download the data handler by running wget https://raw.githubusercontent.com/IBMDataScience/sample-notebooks/master/Files/adult_sklearn_data_handler.py -O adult_sklearn_data_handler.py. 5. Install Watson Machine Learning. * If you are using Linux, run pip install 'ibm-watson-machine-learning[fl-rt22.2-py3.10]'. * If you are using Mac OS with M-series CPU and Conda, download the [installation script](https://raw.github.ibm.com/WML/federated-learning/master/docs/install_fl_rt22.2_macos.sh?token=AAAXW7VVQZF7LYMTX5VOW7DEDULLE)",how-to,1,train
717B697E0045B5D7DFF6ACC93AD5DEC98E27EBDC,https://dataplatform.cloud.ibm.com/docs/content/wsd/parameters.html?context=cdpaas&locale=en,Flow and SuperNode parameters,"Flow and SuperNode parameters
Flow and SuperNode parameters You can define parameters for use in CLEM expressions and in scripting. They are, in effect, user-defined variables that are saved and persisted with the current flow or SuperNode and can be accessed from the user interface as well as through scripting. If you save a flow, for example, any parameters you set for that flow are also saved. (This distinguishes them from local script variables, which can be used only in the script in which they are declared.) Parameters are often used in scripting to control the behavior of the script, by providing information about fields and values that don't need to be hard coded in the script. You can set flow parameters in a flow script or in a flow's properties (right-click the canvas in your flow and select Flow properties), and they're available to all nodes in the flow. They're displayed in the Parameters list in the Expression Builder. You can also set parameters for SuperNodes, in which case they're visible only to nodes encapsulated within that SuperNode. Tip: For complete details about scripting, see the [Scripting and automation](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/scripting_overview.html) guide.",conceptual,0,train
7BF4B8F1F49406EEC43BE3B7350092F9165B0757,https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/classificationandregression-guides.html?context=cdpaas&locale=en,SPSS predictive analytics classification and regression algorithms in notebooks,"SPSS predictive analytics classification and regression algorithms in notebooks
SPSS predictive analytics classification and regression algorithms in notebooks You can use generalized linear model, linear regression, linear support vector machine, random trees, or CHAID SPSS predictive analytics algorithms in notebooks. Generalized Linear Model The Generalized Linear Model (GLE) is a commonly used analytical algorithm for different types of data. It covers not only widely used statistical models, such as linear regression for normally distributed targets, logistic models for binary or multinomial targets, and log linear models for count data, but also covers many useful statistical models via its very general model formulation. In addition to building the model, Generalized Linear Model provides other useful features such as variable selection, automatic selection of distribution and link function, and model evaluation statistics. This model has options for regularization, such as LASSO, ridge regression, elastic net, etc., and is also capable of handling very wide data. For more details about how to choose distribution and link function, see Distribution and Link Function Combination. Example code 1: This example shows a GLE setting with specified distribution and link function, specified effects, intercept, conducting ROC curve, and printing correlation matrix. This scenario builds a model, then scores the model. Python example: from spss.ml.classificationandregression.generalizedlinear import GeneralizedLinear from spss.ml.classificationandregression.params.effect import Effect gle1 = GeneralizedLinear(). setTargetField(""Work_experience""). setInputFieldList([""Beginning_salary"", ""Sex_of_employee"", ""Educational_level"", ""Minority_classification"", ""Current_salary""]). setEffects([ Effect(fields=[""Beginning_salary""], nestingLevels=[0]), Effect(fields=[""Sex_of_employee""], nestingLevels=[0]), Effect(fields=[""Educational_level""], nestingLevels=[0]), Effect(fields=[""Current_salary""], nestingLevels=[0]), Effect(fields=[""Sex_of_employee"", ""Educational_level""], nestingLevels=[0, 0])]). setIntercept(True). setDistribution(""NORMAL""). setLinkFunction(""LOG""). setAnalysisType(""BOTH""). setConductRocCurve(True) gleModel1 = gle1.fit(data) PMML = gleModel1.toPMML() statXML = gleModel1.statXML() predictions1 = gleModel1.transform(data) predictions1.show() Example code 2: This example shows a GLE setting with unspecified distribution and link function, and variable selection using the forward stepwise method. This scenario uses the forward stepwise method to select distribution, link function and effects, then builds and scores the model. Python example: from spss.ml.classificationandregression.generalizedlinear import GeneralizedLinear from spss.ml.classificationandregression.params.effect import Effect gle2 = GeneralizedLinear(). setTargetField(""Work_experience""). setInputFieldList([""Beginning_salary"", ""Sex_of_employee"", ""Educational_level"", ""Minority_classification"", ""Current_salary""]). setEffects([ Effect(fields=[""Beginning_salary""], nestingLevels=[0]), Effect(fields=[""Sex_of_employee""], nestingLevels=[0]), Effect(fields=[""Educational_level""], nestingLevels=[0]), Effect(fields=[""Current_salary""], nestingLevels=[0])]). setIntercept(True). setDistribution(""UNKNOWN""). setLinkFunction(""UNKNOWN""). setAnalysisType(""BOTH""). setUseVariableSelection(True). 
setVariableSelectionMethod(""FORWARD_STEPWISE"") gleModel2 = gle2.fit(data) PMML = gleModel2.toPMML() statXML = gleModel2.statXML() predictions2 = gleModel2.transform(data) predictions2.show() Example code 3: This example shows a GLE setting with unspecified distribution, specified link function, and variable selection using the LASSO method, with two-way interaction detection and automatic penalty parameter selection. This scenario detects two-way interaction for effects, then uses the LASSO method to select distribution and effects using automatic penalty parameter selection, then builds and scores the model. Python example: from spss.ml.classificationandregression.generalizedlinear import GeneralizedLinear from spss.ml.classificationandregression.params.effect import Effect gle3 = GeneralizedLinear(). setTargetField(""Work_experience""). setInputFieldList([""Beginning_salary"", ""Sex_of_employee"", ""Educational_level"", ""Minority_classification"", ""Current_salary""]). setEffects([ Effect(fields=[""Beginning_salary""], nestingLevels=[0]), Effect(fields=[""Sex_of_employee""], nestingLevels=[0]), Effect(fields=[""Educational_level""], nestingLevels=[0]), Effect(fields=[""Current_salary""], nestingLevels=[0])]). setIntercept(True). setDistribution(""UNKNOWN""). setLinkFunction(""LOG""). setAnalysisType(""BOTH""). setDetectTwoWayInteraction(True). setUseVariableSelection(True). setVariableSelectionMethod(""LASSO""). setUserSpecPenaltyParams(False) gleModel3 = gle3.fit(data) PMML = gleModel3.toPMML() statXML = gleModel3.statXML() predictions3 = gleModel3.transform(data) predictions3.show() Linear Regression The linear regression model analyzes the predictive relationship between a continuous target and one or more predictors which can be continuous or categorical. Features of the linear regression model include automatic interaction effect detection, forward stepwise model selection, diagnostic checking, and unusual category detection based on Estimated Marginal Means (EMMEANS). Example code: Python example: from spss.ml.classificationandregression.linearregression import LinearRegression le = LinearRegression(). setTargetField(""target""). setInputFieldList([""predictor1"", ""predictor2"", ""predictorn""]). setDetectTwoWayInteraction(True). setVarSelectionMethod(""forwardStepwise"") leModel = le.fit(data) predictions = leModel.transform(data) predictions.show() Linear Support Vector Machine The Linear Support Vector Machine (LSVM) provides a supervised learning method that generates input-output mapping functions from a set of labeled training data. The mapping function can be either a classification function or a regression function. LSVM is designed to resolve large-scale problems in terms of the number of records and the number of variables (parameters). Its feature space is the same as the input space of the problem, and it can handle sparse data where the average number of non-zero elements in one record is small. Example code: Python example: from spss.ml.classificationandregression.linearsupportvectormachine import LinearSupportVectorMachine lsvm = LinearSupportVectorMachine(). setTargetField(""BareNuc""). setInputFieldList([""Clump"", ""UnifSize"", ""UnifShape"", ""MargAdh"", ""SingEpiSize"", ""BlandChrom"", ""NormNucl"", ""Mit"", ""Class""]). setPenaltyFunction(""L2"") lsvmModel = lsvm.fit(df) predictions = lsvmModel.transform(data) predictions.show() Random Trees Random Trees is a powerful approach for generating strong (accurate) predictive models. 
It's comparable and sometimes better than other state-of-the-art methods for classification or regression problems. Random Trees is an ensemble model consisting of multiple CART-like trees. Each tree grows on a bootstrap sample which is obtained by sampling the original data cases with replacement. Moreover, during the tree growth, for each node the best split variable is selected from a specified smaller number of variables that are drawn randomly from the full set of variables. Each tree grows to the largest extent possible, and there is no pruning. In scoring, Random Trees combines individual tree scores by majority voting (for classification) or average (for regression). Example code: Python example: from spss.ml.classificationandregression.ensemble.randomtrees import RandomTrees Random Trees requires a ""target"" field and some input fields. If ""target"" is continuous, then regression trees will be generated; otherwise, classification trees will be generated. You can use the SPSS Attribute or Spark ML Attribute to indicate",how-to,1,train
7FEB0313C4AA5133F215A847F2ABAA025E83BB38,https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-evaluate-prompt.html?context=cdpaas&locale=en,Quick start: Evaluate and track a prompt template,"Quick start: Evaluate and track a prompt template
Quick start: Evaluate and track a prompt template Take this tutorial to learn how to evaluate and track a prompt template. You can evaluate prompt templates in projects or deployment spaces to measure the performance of foundation model tasks and understand how your model generates responses. Then, you can track the prompt template in an AI use case to capture and share facts about the asset to help you meet governance and compliance goals. Required services : watsonx.governance Your basic workflow includes these tasks: 1. Open a project that contains the prompt template to evaluate. Projects are where you can collaborate with others to work with assets. 2. Evaluate a prompt template using test data. 3. Review the results on the AI Factsheet. 4. Track the evaluated prompt template in an AI use case. 5. Deploy and test your evaluated prompt template. Read about prompt templates With watsonx.governance, you can evaluate prompt templates in projects to measure how effectively your foundation models generate responses for the following task types: * Classification * Summarization * Generation * Question answering * Entity extraction [Read more about evaluating prompt templates in projects](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-eval-prompt.html) [Read more about evaluating prompt templates in deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-eval-prompt-spaces.html) Watch a video about evaluating and tracking a prompt template  Watch this video to preview the steps in this tutorial. There might be slight differences in the user interface shown in the video. The video is intended to be a companion to the written tutorial. This video provides a visual method to learn the concepts and tasks in this documentation. Try a tutorial to evaluating and tracking a prompt template In this tutorial, you will complete these tasks: * [Task 1: Create a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-evaluate-prompt.html?context=cdpaas&locale=enstep01) * [Task 2: Evaluate the sample prompt template](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-evaluate-prompt.html?context=cdpaas&locale=enstep02) * [Task 3: Create a model inventory and AI use case](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-evaluate-prompt.html?context=cdpaas&locale=enstep03) * [Task 4: Start tracking the prompt template](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-evaluate-prompt.html?context=cdpaas&locale=enstep04) * [Task 5: Create a new project for validation](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-evaluate-prompt.html?context=cdpaas&locale=enstep05) * [Task 6: Validate the prompt template](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-evaluate-prompt.html?context=cdpaas&locale=enstep06) * [Task 7: Deploy the prompt template](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-evaluate-prompt.html?context=cdpaas&locale=enstep07) Expand all sections * Tips for completing this tutorial ### Use the video picture-in-picture Tip: Start the video, then as you scroll through the tutorial, the video moves to picture-in-picture mode. Close the video table of contents for the best experience with picture-in-picture. You can use picture-in-picture mode so you can follow the video as you complete the tasks in this tutorial. 
Click the timestamps for each task to follow along. The following animated image shows how to use the video picture-in-picture and table of contents features: ### Get help in the community If you need help with this tutorial, you can ask a question or find an answer in the [Cloud Pak for Data Community discussion forum](https://community.ibm.com/community/user/cloudpakfordata/communities/community-home/digestviewer?communitykey=c0c16ff2-10ef-4b50-ae4c-57d769937235). ### Set up your browser windows For the optimal experience completing this tutorial, open Cloud Pak for Data in one browser window, and keep this tutorial page open in another browser window to switch easily between the two applications. Consider arranging the two browser windows side-by-side to make it easier to follow along. Tip: If you encounter a guided tour while completing this tutorial in the user interface, click Maybe later. [Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-evaluate-prompt.html?context=cdpaas&locale=envideo-preview) * Task 1: Create a project  To preview this task, watch the video beginning at 00:08. You need a project to store the prompt template and the evaluation. Follow these steps to create a project based on a sample: 1. Access the [Getting started with watsonx governance](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/1b6c8d6e-a45c-4bf1-84ee-8fe9a6daa56d) project in the Samples. 1. Click Create project. 1. Accept the default values for the project name, and click Create. 1. Click View new project when the project is successfully created. 1. Associate a Watson Machine Learning service with the project: 1. When the project opens, click the Manage tab, and select the Services and integrations page. 1. On the IBM services tab, click Associate service. 1. Select your Watson Machine Learning instance. If you don't have a Watson Machine Learning service instance provisioned yet, follow these steps: 1. Click New service. 1. Select Watson Machine Learning. 1. Click Create. 1. Select the new service instance from the list. 1. Click Associate service. 1. If necessary, click Cancel to return to the Services & Integrations page. 1. Click the Assets tab in the project to see the sample assets. For more information or to watch a video, see [Creating a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/projects.html). For more information on associated services, see [Adding associated services](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/assoc-services.html). ### Check your progress The following image shows the project Assets tab. You are now ready to evaluate the sample prompt template in the project. [Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-evaluate-prompt.html?context=cdpaas&locale=envideo-preview) * Task 2: Evaluate the sample prompt template",how-to,1,train
9F27A4650B0B0BF36223937D0CF60E460B66A723,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/jython/clementine/python_syntax_statements.html?context=cdpaas&locale=en,Statement syntax,"Statement syntax
Statement syntax The statement syntax for Python is very simple. In general, each source line is a single statement. Except for expression and assignment statements, each statement is introduced by a keyword name, such as if or for. Blank lines or remark lines can be inserted anywhere between any statements in the code. If there's more than one statement on a line, each statement must be separated by a semicolon (;). Very long statements can continue on more than one line. In this case, the statement that is to continue on to the next line must end with a backslash (\). For example: x = ""A loooooooooooooooooooong string"" + \ ""another looooooooooooooooooong string"" When you enclose a structure by parentheses (()), brackets ([]), or curly braces ({}), the statement can be continued on a new line after any comma, without having to insert a backslash. For example: x = (1, 2, 3, ""hello"", ""goodbye"", 4, 5, 6)",conceptual,0,train
F964EFDA57733A3B39890B30FF22BD5C47EED893,https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_console.html?context=cdpaas&locale=en,Managing IBM watsonx,"Managing IBM watsonx
Managing IBM watsonx As the owner or an administrator of the IBM Cloud account, you can monitor and manage services and the platform. * [Configuring services](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_console.html?context=cdpaas&locale=encore) An IBM Cloud account administrator is a user in the account who was assigned the Administrator role in IBM Cloud for the All Identity and Access enabled services option in IAM. If you're not sure of your roles, see [Determine your roles](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/your-roles.html). You perform some administrative tasks within IBM watsonx, and others in IBM Cloud. Some tasks require steps in both areas, depending on your goals. Configuring services The services that are included in watsonx.ai are Watson Studio and Watson Machine Learning. Task In IBM watsonx? In IBM Cloud? [Manage services in IBM Cloud](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/create-services.htmlmanage) ✓ ✓ [Switch service region](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_console.html?context=cdpaas&locale=enregion) ✓ [Upgrade your IBM Cloud account](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/upgrade.htmlaccount) ✓ ✓ [Upgrade your services](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/upgrade.htmlapp) ✓ [Configure private service endpoints](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/endpoints-vrf.html) ✓ [Remove users](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/admin-removeusers.html) ✓ ✓ [Stop using IBM watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/stopapps.html) ✓ ✓ [Monitor account resource usage](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/monitor-resources.html) ✓ ✓ [View and manage environment runtimes](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/monitor-resources.htmlmonitor-cuh) ✓ [Set up IBM Cloud Object Storage for use with IBM watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_cos.html) ✓ ✓ [Manage users and access](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/setup-access.html) ✓ ✓ [Set resources scope](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/account-settings.html) ✓ [Set type of credentials for connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/account-settings.html) ✓ [Manage IBM Cloud account in IBM Cloud](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/account-settings.html) ✓ [Manage all projects in the account](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/admin-manage-projects.html) ✓ ✓ [Secure IBM watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-overview.html) ✓ ✓ [Set up IBM Cloud App ID (beta)](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/admin-appid.html) ✓ [Delegate encryption keys for IBM Cloud Object Storage](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_cos.htmlbyok) ✓ ✓ Switch service region The platform and services are available in multiple IBM Cloud service regions and you can have services in more than one region. Your projects, catalogs, and data are specific to the region in which they were saved and can be accessed only from your services in that region. If you provision Watson Studio services in both the Dallas and the Frankfurt regions, you can't access projects that you created in the Frankfurt region from the Dallas region. 
To switch your service region: 1. Log in to IBM watsonx. 2. Click the Region Switcher in the home page header. 3. Select the region that contains your services and projects. For wider browsers, you can select the region from the dropdown menu. Learn more * [Watson Studio offering plans](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/ws-plans.html) * [Watson Machine Learning plans and compute usage](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wml-plans.html) * [Roles in the platform](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/roles.html) Parent topic:[Administration](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/administer-accounts.html)",how-to,1,train
B4B2E864E1ABD4EA20845750E9567225BB3F417E,https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-relation-extraction.html?context=cdpaas&locale=en,Relations extraction,"Relations extraction
Relations extraction Watson Natural Language Processing Relations extraction encapsulates algorithms for extracting relations between two entity mentions. For example, in the text Lionel Messi plays for FC Barcelona. a relation extraction model may decide that the entities Lionel Messi and F.C. Barcelona are in a relationship with each other, and the relationship type is works for. Capabilities Use this model to detect relations between discovered entities. The following table lists common relations types that are available out-of-the-box after you have run the entity models. Table 1. Available common relation types between entities Relation Description affiliatedWith Exists between two entities that have an affiliation or are similarly connected. basedIn Exists between an Organization and the place where it is mainly, only, or intrinsically located. bornAt Exists between a Person and the place where they were born. bornOn Exists between a Person and the Date or Time when they were born. clientOf Exists between two entities when one is a direct business client of the other (that is, pays for certain services or products). colleague Exists between two Persons who are part of the same Organization. competitor Exists between two Organizations that are engaged in economic competition. contactOf Relates contact information with an entity. diedAt Exists between a Person and the place at which he, she, or it died. diedOn Exists between a Person and the Date or Time on which he, she, or it died. dissolvedOn Exists between an Organization or URL and the Date or Time when it was dissolved. educatedAt Exists between a Person and the Organization at which he or she is or was educated. employedBy Exists between two entities when one pays the other for certain work or services; monetary reward must be involved. In many circumstances, marking this relation requires world knowledge. foundedOn Exists between an Organization or URL and the Date or Time on which it was founded. founderOf Exists between a Person and a Facility, Organization, or URL that they founded. locatedAt Exists between an entity and its location. managerOf Exists between a Person and another entity such as a Person or Organization that he or she manages as his or her job. memberOf Exists between an entity, such as a Person or Organization, and another entity to which he, she, or it belongs. ownerOf Exists between an entity, such as a Person or Organization, and an entity that he, she, or it owns. The owner does not need to have permanent ownership of the entity for the relation to exist. parentOf Exists between a Person and their children or stepchildren. partner Exists between two Organizations that are engaged in economic cooperation. partOf Exists between a smaller and a larger entity of the same type or related types in which the second entity subsumes the first. If the entities are both events, the first must occur within the time span of the second for the relation to be recognized. partOfMany Exists between smaller and larger entities of the same type or related types in which the second entity, which must be plural, includes the first, which can be singular or plural. populationOf Exists between a place and the number of people located there, or an organization and the number of members or employees it has. measureOf This relation indicates the quantity of an entity or measure (height, weight, etc) of an entity. relative Exists between two Persons who are relatives. 
To identify parents, children, siblings, and spouses, use the parentOf, siblingOf, and spouseOf relations. residesIn Exists between a Person and a place where they live or previously lived. shareholdersOf Exists between a Person or Organization, and an Organization of which the first entity is a shareholder. siblingOf Exists between a Person and their sibling or stepsibling. spokespersonFor Exists between a Person and a Facility, Organization, or Person that he or she represents. spouseOf Exists between two Persons that are spouses. subsidiaryOf Exists between two Organizations when the first is a subsidiary of the second. In [Runtime 22.2](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-relation-extraction.html?context=cdpaas&locale=en#runtime-222), relation extraction is provided as an analysis block, which depends on the Syntax analysis block and an entity mention extraction block. Starting with [Runtime 23.1](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-relation-extraction.html?context=cdpaas&locale=en#runtime-231), relation extraction is provided as a workflow, which is run directly on the input text. Relation extraction in Runtime 23.1 Workflow name relations_transformer-workflow_multilingual_slate.153m.distilled Supported languages The Relations Workflow is available for the following languages. For a list of the language codes and the corresponding language, see [Language codes](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-catalog.html#lang-codes). ar, de, en, es, fr, it, ja, ko, pt Code sample import watson_nlp # Load the workflow model relations_workflow = watson_nlp.load('relations_transformer-workflow_multilingual_slate.153m.distilled') # Run the relation extraction workflow on the input text relations = relations_workflow.run('Anna Smith is an engineer. Anna works at IBM.', language_code=""en"") print(relations.get_relation_pairs_by_type()) Output of the code sample {'employedBy': [(('Anna', 'Person'), ('IBM', 'Organization'))]} Relation extraction in Runtime 22.2 Block name relations_transformer_en_stock Supported languages The Relations",how-to,1,train
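For instance, the dictionary returned by get_relation_pairs_by_type() can be walked to list each extracted pair. The following sketch only assumes the output shape shown in the sample above (relation type mapped to a list of (mention text, entity type) pairs); the loop itself is illustrative and not an additional documented API.

import watson_nlp

# Load the relation extraction workflow (Runtime 23.1)
relations_workflow = watson_nlp.load('relations_transformer-workflow_multilingual_slate.153m.distilled')

# Run the workflow directly on the input text
relations = relations_workflow.run('Anna Smith is an engineer. Anna works at IBM.', language_code='en')

# Assumed output shape, as shown above:
# {'employedBy': [(('Anna', 'Person'), ('IBM', 'Organization'))]}
pairs_by_type = relations.get_relation_pairs_by_type()
for relation_type, pairs in pairs_by_type.items():
    for (head_text, head_type), (tail_text, tail_type) in pairs:
        print(f'{head_text} ({head_type}) --{relation_type}--> {tail_text} ({tail_type})')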
589D9B0A7150AF5485E6F7452EB39D15ADDB35F9,https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/nonconsensual-use.html?context=cdpaas&locale=en,Nonconsensual use,"Nonconsensual use
Nonconsensual use Risks associated with outputMisuseAmplified Description The possibility that a model could be misused to imitate others through video (deepfakes), images, audio, or other modalities without their consent is the risk of nonconsensual use. Why is nonconsensual use a concern for foundation models? Intentionally imitating others for the purposes of deception without their consent is unethical and might be illegal. A model that has this potential must be properly governed. Otherwise, business entities could face fines, reputational harms, and other legal consequences. Example FBI Warning on Deepfakes The FBI recently warned the public of malicious actors creating synthetic, explicit content “for the purposes of harassing victims or sextortion schemes”. They noted that advancements in AI have made this content higher quality, more customizable, and more accessible than ever. Sources: [FBI, June 2023](https://www.ic3.gov/Media/Y2023/PSA230605) Example Deepfakes A deepfake is the creation of an audio or video where the people speaking are created by AI not the actual person. Sources: [CNN, January 2019](https://www.cnn.com/interactive/2019/01/business/pentagons-race-against-deepfakes/) Example Misleading Voicebot Interaction The article cited a case where a deepfake voice was used to scam a CEO out of $243,000. The CEO believed he was on the phone with his boss, the chief executive of his firm’s parent company, when he followed the orders to transfer €220,000 (approximately $243,000) to the bank account of a Hungarian supplier. Sources: [Forbes, September 2019](https://www.forbes.com/sites/jessedamiani/2019/09/03/a-voice-deepfake-was-used-to-scam-a-ceo-out-of-243000/?sh=10432a7d2241) Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)",conceptual,0,train
156F8A58809D3A4D8F80D02481E5ADDE513EDEAA,https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-concept-ext_cloud.html?context=cdpaas&locale=en,Concepts extraction block,"Concepts extraction block
Concepts extraction block The Watson Natural Language Processing Concepts block extracts general DBPedia concepts (concepts drawn from language-specific Wikipedia versions) that are directly referenced or alluded to, but not directly referenced, in the input text. Block name concepts_alchemy__stock Supported languages The Concepts block is available for the following languages. For a list of the language codes and the corresponding language, see [Language codes](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-catalog.htmllang-codes). de, en, es, fr, it, ja, ko, pt Capabilities Use this block to assign concepts from [DBPedia](https://www.dbpedia.org/) (2016 edition). The output types are based on DBPedia. Dependencies on other blocks The following block must run before you can run the Concepts extraction block: * syntax_izumo__stock Code sample import watson_nlp Load Syntax and a Concepts model for English syntax_model = watson_nlp.load('syntax_izumo_en_stock') concepts_model = watson_nlp.load('concepts_alchemy_en_stock') Run the syntax model on the input text syntax_prediction = syntax_model.run('IBM announced new advances in quantum computing') Run the concepts model on the result of syntax concepts = concepts_model.run(syntax_prediction) print(concepts) Output of the code sample: { ""concepts"": [ { ""text"": ""IBM"", ""relevance"": 0.9842190146446228, ""dbpedia_resource"": ""http://dbpedia.org/resource/IBM"" }, { ""text"": ""Quantum_computing"", ""relevance"": 0.9797260165214539, ""dbpedia_resource"": ""http://dbpedia.org/resource/Quantum_computing"" }, { ""text"": ""Computing"", ""relevance"": 0.9080164432525635, ""dbpedia_resource"": ""http://dbpedia.org/resource/Computing"" }, { ""text"": ""Shor's_algorithm"", ""relevance"": 0.7580527067184448, ""dbpedia_resource"": ""http://dbpedia.org/resource/Shor's_algorithm"" }, { ""text"": ""Quantum_dot"", ""relevance"": 0.7069802284240723, ""dbpedia_resource"": ""http://dbpedia.org/resource/Quantum_dot"" }, { ""text"": ""Quantum_algorithm"", ""relevance"": 0.7063655853271484, ""dbpedia_resource"": ""http://dbpedia.org/resource/Quantum_algorithm"" }, { ""text"": ""Qubit"", ""relevance"": 0.7063655853271484, ""dbpedia_resource"": ""http://dbpedia.org/resource/Qubit"" }, { ""text"": ""DNA_computing"", ""relevance"": 0.7044616341590881, ""dbpedia_resource"": ""http://dbpedia.org/resource/DNA_computing"" }, { ""text"": ""Computation"", ""relevance"": 0.7044616341590881, ""dbpedia_resource"": ""http://dbpedia.org/resource/Computation"" }, { ""text"": ""Computer"", ""relevance"": 0.7044616341590881, ""dbpedia_resource"": ""http://dbpedia.org/resource/Computer"" } ], ""producer_id"": { ""name"": ""Alchemy Concepts"", ""version"": ""0.0.1"" } } Parent topic:[Watson Natural Language Processing block catalog](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-catalog.html)",conceptual,0,train
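If you want to work with only the stronger matches, one approach is to filter the returned concepts by their relevance score. The following sketch assumes that the prediction object can be converted to a plain dictionary with to_dict(); if your library version behaves differently, adapt the conversion step accordingly.

import watson_nlp

# Load the Syntax and Concepts models for English, as in the sample above
syntax_model = watson_nlp.load('syntax_izumo_en_stock')
concepts_model = watson_nlp.load('concepts_alchemy_en_stock')

# Run syntax first, then concepts on the syntax result
syntax_prediction = syntax_model.run('IBM announced new advances in quantum computing')
concepts = concepts_model.run(syntax_prediction)

# Assumption: the prediction object converts to a dict with the structure shown above
result = concepts.to_dict()

# Keep only concepts whose relevance is at least 0.9
strong_concepts = [c for c in result['concepts'] if c['relevance'] >= 0.9]
for c in strong_concepts:
    print(c['text'], c['dbpedia_resource'])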
D91044A492D05F87613BBA485CD2FAE1F54764DB,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/filternodeslots.html?context=cdpaas&locale=en,filternode properties,"filternode properties
filternode properties The Filter node filters (discards) fields, renames fields, and maps fields from one import node to another. Using the default_include property. Note that setting the value of the default_include property doesn't automatically include or exclude all fields; it simply determines the default for the current selection. This is functionally equivalent to selecting the Include All Fields option in the Filter node properties. For example, suppose you run the following script: node = modeler.script.stream().create(""filter"", ""Filter"") node.setPropertyValue(""default_include"", False) Include these two fields in the list for f in [""Age"", ""Sex""]: node.setKeyedPropertyValue(""include"", f, True) This will cause the node to pass the fields Age and Sex and discard all others. Now suppose you run the same script again but name two different fields: node = modeler.script.stream().create(""filter"", ""Filter"") node.setPropertyValue(""default_include"", False) Include these two fields in the list for f in [""BP"", ""Na""]: node.setKeyedPropertyValue(""include"", f, True) This will add two more fields to the filter so that a total of four fields are passed (Age, Sex, BP, Na). In other words, resetting the value of default_include to False doesn't automatically reset all fields. Alternatively, if you now change default_include to True, either using a script or in the Filter node dialog box, this would flip the behavior so the four fields listed previously would be discarded rather than included. When in doubt, experimenting with the controls in the Filter node properties may be helpful in understanding this interaction. filternode properties Table 1. filternode properties filternode properties Data type Property description default_include flag Keyed property to specify whether the default behavior is to pass or filter fields: Note that setting this property doesn't automatically include or exclude all fields; it simply determines whether selected fields are included or excluded by default. include flag Keyed property for field inclusion and removal. new_name string",conceptual,0,train
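Read together as one script, the two fragments above behave as follows; the comments restate the cumulative behavior described in the text, and nothing beyond the calls already shown is assumed.

# Create a Filter node and make exclusion the default behavior
node = modeler.script.stream().create("filter", "Filter")
node.setPropertyValue("default_include", False)

# Include these two fields in the list; with default_include False,
# every field not listed is discarded
for f in ["Age", "Sex"]:
    node.setKeyedPropertyValue("include", f, True)

# Naming two more fields later adds to the existing selection rather than
# replacing it, so four fields (Age, Sex, BP, Na) now pass through the node
for f in ["BP", "Na"]:
    node.setKeyedPropertyValue("include", f, True)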
BB659D7B00DB3096C4082BB93C7FDB933738B013,https://dataplatform.cloud.ibm.com/docs/content/wsd/tutorials/tut_drug_scatterplot.html?context=cdpaas&locale=en,Creating a scatterplot (SPSS Modeler),"Creating a scatterplot (SPSS Modeler)
Creating a scatterplot Now let's take a look at what factors might influence Drug, the target variable. As a researcher, you know that the concentrations of sodium and potassium in the blood are important factors. Since these are both numeric values, you can create a scatterplot of sodium versus potassium, using the drug categories as a color overlay. Figure 1. Plot node  1. Place a Plot node on the canvas and connect it to the drug1n.csv Data Asset node. Then double-click the Plot node to edit its properties. 2. Select Na as the X field, K as the Y field, and Drug as the Color (overlay) field. Click Save, then right-click the Plot node and select Run. A plot chart is added to the Outputs pane. The plot clearly shows a threshold above which the correct drug is always drug Y and below which the correct drug is never drug Y. This threshold is a ratio -- the ratio of sodium (Na) to potassium (K). Figure 2. Scatterplot of drug distribution ",how-to,1,train
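If you prefer to script the flow rather than work on the canvas, a rough equivalent of these steps might look like the sketch below. The node type names ('variablefile', 'plot') and the x_field, y_field, and color_field property names are assumptions based on general SPSS Modeler scripting conventions, not steps taken from this tutorial, and the Variable File node stands in for the drug1n.csv Data Asset node.

stream = modeler.script.stream()

# Assumption: a Variable File import node stands in for the drug1n.csv Data Asset node
drug_source = stream.create("variablefile", "drug1n.csv")
drug_source.setPropertyValue("full_filename", "drug1n.csv")

# Create the Plot node and connect it to the data source
plot_node = stream.create("plot", "Na v. K")
stream.link(drug_source, plot_node)

# Assumed property names for the X field, Y field, and color (overlay) field
plot_node.setPropertyValue("x_field", "Na")
plot_node.setPropertyValue("y_field", "K")
plot_node.setPropertyValue("color_field", "Drug")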
C07CD6DF8C92EDD0F2638573BFDCE7BF18AA2EB0,https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Mdl_Assist/advancedMA.html?context=cdpaas&locale=en,Creating constraints and custom decisions with the Decision Optimization Modeling Assistant,"Creating constraints and custom decisions with the Decision Optimization Modeling Assistant
Adding multi-concept constraints and custom decisions: shift assignment This Decision Optimization Modeling Assistant example shows you how to use multi-concept iterations, the associated keyword in constraints, how to define your own custom decisions, and define logical constraints. For illustration, a resource assignment problem, ShiftAssignment, is used and its completed model with data is provided in the DO-samples. Procedure To download and open the sample: 1. Download the ShiftAssignment.zip file from the Model_Builder subfolder in the [DO-samples](https://github.com/IBMDecisionOptimization/DO-Samples). Select the relevant product and version subfolder. 2. Open your project or create an empty project. 3. On the Manage tab of your project, select the Services and integrations section and click Associate service. Then select an existing Machine Learning service instance (or create a new one ) and click Associate. When the service is associated, a success message is displayed, and you can then close the Associate service window. 4. Select the Assets tab. 5. Select New asset > Solve optimization problems in the Work with models section. 6. Click Local file in the Solve optimization problems window that opens. 7. Browse locally to find and choose the ShiftAssignment.zip archive that you downloaded. Click Open. Alternatively use drag and drop. 8. Associate a Machine Learning service instance with your project and reload the page. 9. If you haven't already associated a Machine Learning service with your project, you must first select Add a Machine Learning service to select or create one before you choose a deployment space for your experiment. 10. Click Create.A Decision Optimization model is created with the same name as the sample. 11. Open the scenario pane and select the AssignmentWithOnCallDuties scenario. Using multi-concept iteration Procedure To use multi-concept iteration, follow these steps. 1. Click Build model in the sidebar to view your model formulation.The model formulation shows the intent as being to assign employees to shifts, with its objectives and constraints. 2. Expand the constraint For each Employee-Day combination , number of associated Employee-Shift assignments is less than or equal to 1. Defining custom decisions Procedure To define custom decisions, follow these steps. 1. Click Build model to see the model formulation of the AssignmentWithOnCallDuties Scenario. The custom decision OnCallDuties is used in the second objective. This objective ensures that the number of on-call duties are balanced over Employees. The constraint  ensures that the on-call duty requirements that are listed in the Day table are satisfied. The following steps show you how this custom decision OnCallDuties was defined. 2. Open the Settings pane and notice that the Visualize and edit decisions is set to true (or set it to true if it is set to the default false). This setting adds a Decisions tab to your Add to model window.  Here you can see OnCallDuty is specified as an assignment decision (to assign employees to on-call duties). Its two dimensions are defined with reference to the data tables Day and Employee. This means that your model will also assign on-call duties to employees. The Employee-Shift assignment decision is specified from the original intent. 3. Optional: Enter your own text to describe the OnCallDuty in the [to be documented] field. 4. Optional: To create your own decision in the Decisions tab, click the enter name, type in a name and click enter. 
A new decision (intent) is created with that name with some highlighted fields to be completed by using the drop-down menus. If you, for example, select assignment as the decision type, two dimensions are created. As assignment involves assigning at least one thing to another, at least two dimensions must be defined. Use select a table fields to define the dimensions. Using logical constraints Procedure To use logical constraints: 1. Look at the constraint This constraint ensures that, for each employee and day combination, when no associated assignments exist (for example, the employee is on vacation on that day), that no on-call duties are assigned to that employee on that day. Note the use of the if...then keywords to define this logical constraint. 2. Optional: Add other logical constraints to your model by searching in the suggestions.",how-to,1,train
87D2FF4289EDCBF7FCFA7FC7FD460DEB02ECC71B,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/logregnodeslots.html?context=cdpaas&locale=en,logregnode properties,"logregnode properties
logregnode properties Logistic regression is a statistical technique for classifying records based on values of input fields. It is analogous to linear regression but takes a categorical target field instead of a numeric range. logregnode properties Table 1. logregnode properties logregnode Properties Values Property description target field Logistic regression models require a single target field and one or more input fields. Frequency and weight fields are not used. See [Common modeling node properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/modelingnodeslots_common.htmlmodelingnodeslots_common) for more information. logistic_procedure BinomialMultinomial include_constant flag mode SimpleExpert method EnterStepwiseForwardsBackwardsBackwardsStepwise binomial_method EnterForwardsBackwards model_type MainEffectsFullFactorialCustom When FullFactorial is specified as the model type, stepping methods will not run, even if specified. Instead, Enter will be the method used. If the model type is set to Custom but no custom fields are specified, a main-effects model will be built. custom_terms [[BP Sex][BP][Age]] multinomial_base_category string Specifies how the reference category is determined. binomial_categorical_input string binomial_input_contrast IndicatorSimpleDifferenceHelmertRepeatedPolynomialDeviation Keyed property for categorical input that specifies how the contrast is determined. See the example for usage. binomial_input_category FirstLast Keyed property for categorical input that specifies how the reference category is determined. See the example for usage. scale NoneUserDefinedPearsonDeviance scale_value number all_probabilities flag tolerance 1.0E-51.0E-61.0E-71.0E-81.0E-91.0E-10 min_terms number use_max_terms flag max_terms number entry_criterion ScoreLR removal_criterion LRWald probability_entry number probability_removal number binomial_probability_entry number binomial_probability_removal number requirements HierarchyDiscreteHierarchyAllContainmentNone max_iterations number max_steps number p_converge 1.0E-41.0E-51.0E-61.0E-71.0E-80 l_converge 1.0E-11.0E-21.0E-31.0E-41.0E-50 delta number iteration_history flag history_steps number summary flag likelihood_ratio flag asymptotic_correlation flag goodness_fit flag parameters flag confidence_interval number asymptotic_covariance flag classification_table flag stepwise_summary flag info_criteria flag monotonicity_measures flag binomial_output_display at_each_stepat_last_step binomial_goodness_of_fit flag binomial_parameters flag binomial_iteration_history flag binomial_classification_plots flag binomial_ci_enable flag binomial_ci number binomial_residual outliersall binomial_residual_enable flag binomial_outlier_threshold number binomial_classification_cutoff number binomial_removal_criterion LRWaldConditional calculate_variable_importance flag calculate_raw_propensities flag",conceptual,0,train
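A brief scripting sketch that sets a few of these properties follows. The creation type name 'logreg' is an assumption, and the field names (Drug, BP, Sex, Age) are illustrative only; the property names and values come from the table above.

stream = modeler.script.stream()

# Assumption: 'logreg' is the creation type name for the Logistic node
logreg_node = stream.create("logreg", "Drug prediction")

# Property names and values taken from the table above
logreg_node.setPropertyValue("target", "Drug")
logreg_node.setPropertyValue("logistic_procedure", "Multinomial")
logreg_node.setPropertyValue("mode", "Expert")
logreg_node.setPropertyValue("method", "Stepwise")
logreg_node.setPropertyValue("model_type", "Custom")
logreg_node.setPropertyValue("custom_terms", [["BP", "Sex"], ["BP"], ["Age"]])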
50636405C61E0AF7D2EE0EE31256C4CD0F6C5DED,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/factor.html?context=cdpaas&locale=en,PCA/Factor node (SPSS Modeler),"PCA/Factor node (SPSS Modeler)
PCA/Factor node The PCA/Factor node provides powerful data-reduction techniques to reduce the complexity of your data. Two similar but distinct approaches are provided. * Principal components analysis (PCA) finds linear combinations of the input fields that do the best job of capturing the variance in the entire set of fields, where the components are orthogonal (perpendicular) to each other. PCA focuses on all variance, including both shared and unique variance. * Factor analysis attempts to identify underlying concepts, or factors, that explain the pattern of correlations within a set of observed fields. Factor analysis focuses on shared variance only. Variance that is unique to specific fields is not considered in estimating the model. Several methods of factor analysis are provided by the Factor/PCA node. For both approaches, the goal is to find a small number of derived fields that effectively summarize the information in the original set of fields. Requirements. Only numeric fields can be used in a PCA-Factor model. To estimate a factor analysis or PCA, you need one or more fields with the role set to Input fields. Fields with the role set to Target, Both, or None are ignored, as are non-numeric fields. Strengths. Factor analysis and PCA can effectively reduce the complexity of your data without sacrificing much of the information content. These techniques can help you build more robust models that execute more quickly than would be possible with the raw input fields.",conceptual,0,train
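The idea behind PCA, finding orthogonal linear combinations of the input fields that capture as much variance as possible, can be illustrated in a few lines of NumPy. This sketch is a conceptual illustration only; it is not how the PCA/Factor node is implemented or configured.

import numpy as np

# Toy data: rows are records, columns are numeric input fields
X = np.array([[2.5, 2.4], [0.5, 0.7], [2.2, 2.9], [1.9, 2.2], [3.1, 3.0]])

# Center the fields, then take the SVD of the centered data
X_centered = X - X.mean(axis=0)
U, singular_values, components = np.linalg.svd(X_centered, full_matrices=False)

# Each row of `components` is an orthogonal direction (a principal component);
# the variance each component explains comes from the singular values
explained_variance = singular_values ** 2 / (len(X) - 1)

# The derived fields (component scores) summarize the original fields
scores = X_centered @ components.T
print(explained_variance)
print(scores)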
AAE40F1CC335A650C1EB806E404394DA596FB433,https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/known-issues.html?context=cdpaas&locale=en,Known issues and limitations,"Known issues and limitations
Known issues and limitations The following limitations and known issues apply to watsonx. * [Regional limitations](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/region-lims.html) * [Notebooks](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/known-issues.html?context=cdpaas&locale=ennotebooks) * [Machine learning](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/known-issues.html?context=cdpaas&locale=enwmlissues) * [SPSS Modeler](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/known-issues.html?context=cdpaas&locale=enspssissues) * [Connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/known-issues.html?context=cdpaas&locale=enconnectissues) * [Watson Pipelines](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/known-issues.html?context=cdpaas&locale=enpipeline-issues) * [watsonx.governance](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/known-issues.html?context=cdpaas&locale=enxgov-issues) Notebook issues You might encounter some of these issues when getting started with and using notebooks. Manual installation of some tensor libraries is not supported Some tensor flow libraries are preinstalled, but if you try to install additional tensor flow libraries yourself, you get an error. Connection to notebook kernel is taking longer than expected after running a code cell If you try to reconnect to the kernel and immediately run a code cell (or if the kernel reconnection happened during code execution), the notebook doesn't reconnect to the kernel and no output is displayed for the code cell. You need to manually reconnect to the kernel by clicking Kernel > Reconnect. When the kernel is ready, you can try running the code cell again. Using the predefined sqlContext object in multiple notebooks causes an error You might receive an Apache Spark error if you use the predefined sqlContext object in multiple notebooks. Create a new sqlContext object for each notebook. See [this Stack Overflow explanation](http://stackoverflow.com/questions/38117849/you-must-build-spark-with-hive-export-spark-hive-true/3811811238118112). Connection failed message If your kernel stops, your notebook is no longer automatically saved. To save it, click File > Save manually, and you should get a Notebook saved message in the kernel information area, which appears before the Spark version. If you get a message that the kernel failed, to reconnect your notebook to the kernel click Kernel > Reconnect. If nothing you do restarts the kernel and you can't save the notebook, you can download it to save your changes by clicking File > Download as > Notebook (.ipynb). Then you need to create a new notebook based on your downloaded notebook file. Hyperlinks to notebook sections don't work in preview mode If your notebook contains sections that you link to from an introductory section at the top of the notebook for example, the links to these sections will not work if the notebook was opened in view-only mode in Firefox. However, if you open the notebook in edit mode, these links will work. Can't connect to notebook kernel If you try to run a notebook and you see the message Connecting to Kernel, followed by Connection failed. Reconnecting and finally by a connection failed error message, the reason might be that your firewall is blocking the notebook from running. 
If Watson Studio is installed behind a firewall, you must add the WebSocket connection wss://dataplatform.cloud.ibm.com to the firewall settings. Enabling this WebSocket connection is required when you're using notebooks and RStudio. Insufficient resources available error when opening or editing a notebook If you see the following message when opening or editing a notebook, the environment runtime associated with your notebook has resource issues: Insufficient resources available A runtime instance with the requested configuration can't be started at this time because the required hardware resources aren't available. Try again later or adjust the requested sizes. To find the cause, try checking the status page for IBM Cloud incidents affecting Watson Studio. Additionally, you can open a support case at the IBM Cloud Support portal. Machine learning issues You might encounter some of these issues when working with machine learning tools. Region requirements You can only associate a Watson Machine Learning service instance with your project when the Watson Machine Learning service instance and the Watson Studio instance are located in the same region. Accessing links if you create a service instance while associating a service with a project While you are associating a Watson Machine Learning service to a project, you have the option of creating a new service instance. If you choose to create a new service, the links on the service page might not work. To access the service terms, APIs, and documentation, right click the links to open them in new windows. Federated Learning assets cannot be searched in All assets, search results, or filter results in the new projects UI You cannot search Federated Learning assets from the All assets view, the search results, or the filter results of your project. Workaround: Click the Federated Learning asset to open the tool. Deployment issues * A deployment that is inactive (no scores) for a set time (24 hours for the free plan or 120 hours for a paid plan) is automatically hibernated. When a new scoring request is submitted, the deployment is reactivated and the score request is served. Expect a brief delay of 1 to 60 seconds for the first score request after activation, depending on the model framework. * For some frameworks, such as SPSS modeler, the first score request for a deployed model after hibernation might result in a 504 error. If this happens, submit the",how-to,1,train
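For the sqlContext issue noted above, a minimal per-notebook workaround might look like the following sketch. SQLContext is deprecated in newer Spark versions, so treat this as an illustration and prefer the session-based API where your code allows it.

from pyspark.sql import SparkSession, SQLContext

# Get (or create) this notebook's own Spark session instead of reusing a shared object
spark = SparkSession.builder.getOrCreate()

# If existing code expects a sqlContext object, build a fresh one from this notebook's context
sqlContext = SQLContext(spark.sparkContext)

df = sqlContext.createDataFrame([(1, 'a'), (2, 'b')], ['id', 'value'])
df.show()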
3E24051D290E000441A4FDB326D73BB81505BD05,https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/troubleshoot.html?context=cdpaas&locale=en,Troubleshooting,"Troubleshooting
Troubleshooting If you encounter an issue in IBM watsonx, use the following resources to resolve the problem. * [View IBM Cloud service status](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/service-status.html) * [Troubleshoot connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/troubleshoot-conn.html) * [Troubleshoot Data Refinery](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/ts_index.html) * [Troubleshoot Synthetic Data Generator](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ts_sd.html) * [Troubleshoot IBM Cloud Object Storage](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/troubleshoot-cos.html) * [Troubleshoot Watson Machine Learning](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html) * [Troubleshoot Watson OpenScale](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/wos-troubleshoot.html) * [Troubleshoot Watson Studio on IBM Cloud](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/wscloud-troubleshoot.html) * [Known issues](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/known-issues.html) * [Get help](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-help.html)",how-to,1,train
66E7B1F986535FCE165F0CB5C553A6305339204E,https://dataplatform.cloud.ibm.com/docs/content/dataview/chart_creation_matrixscatter.html?context=cdpaas&locale=en,Scatter matrix charts,"Scatter matrix charts
Scatter matrix charts Scatter plot matrices are a good way to determine whether linear correlations exist between multiple variables.",conceptual,0,train
F140F179614D126E483732933A5CA8DCF0A32876,https://dataplatform.cloud.ibm.com/docs/content/wsd/tutorials/tut_intro_summary.html?context=cdpaas&locale=en,Summary (SPSS Modeler),"Summary (SPSS Modeler)
Summary This example Introduction to Modeling flow demonstrates the basic steps for creating, evaluating, and scoring a model. * The modeling node estimates the model by studying records for which the outcome is known, and creates a model nugget. This is sometimes referred to as training the model. * The model nugget can be added to any flow with the expected fields to score records. By scoring the records for which you already know the outcome (such as existing customers), you can evaluate how well it performs. * After you're satisfied that the model performs acceptably well, you can score new data (such as prospective customers) to predict how they will respond. * The data used to train or estimate the model may be referred to as the analytical or historical data; the scoring data may also be referred to as the operational data.",conceptual,0,train
69EAABE17802ED870302F2D2789B3B476DFDD11F,https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-config-class.html?context=cdpaas&locale=en,Configuring a classification or regression experiment,"Configuring a classification or regression experiment
Configuring a classification or regression experiment AutoAI offers experiment settings that you can use to configure and customize your classification or regression experiments. Experiment settings overview After you upload the experiment data and select your experiment type and what to predict, AutoAI establishes default configurations and metrics for your experiment. You can accept these defaults and proceed with the experiment or click Experiment settings to customize configurations. By customizing configurations, you can precisely control how the experiment builds the candidate model pipelines. Use the following tables as a guide to experiment settings for classification and regression experiments. For details on configuring a time series experiment, see [Building a time series experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-timeseries.html). Prediction settings Most of the prediction settings are on the main General page. Review or update the following settings. Setting Description Prediction type You can change or override the prediction type. For example, if AutoAI only detects two data classes and configures a binary classification experiment but you know that there are three data classes, you can change the type to multiclass. Positive class For binary classification experiments optimized for Precision, Average Precision, Recall, or F1, a positive class is required. Confirm that the Positive Class is correct or the experiment might generate inaccurate results. Optimized metric Change the metric for optimizing and ranking the model candidate pipelines. Optimized algorithm selection Choose how AutoAI selects the algorithms to use for generating the model candidate pipelines. You can optimize for the alorithms with the best score, or optimize for the algorithms with the highest score in the shortest run time. Algorithms to include Select which of the available algorithms to evaluate when the experiment is run. The list of algorithms are based on the selected prediction type. Algorithms to use AutoAI tests the specified algorithms and use the best performers to create model pipelines. Choose how many of the best algorithms to apply. Each algorithm generates 4-5 pipelines, which means that if you select 3 algorithms to use, your experiment results will include 12 - 15 ranked pipelines. More algorithms increase the runtime for the experiment. Data fairness settings Click the Fairness tab to evaluate your experiment for fairness in predicted outcomes. For details on configuring fairness detection, see [Applying fairness testing to AutoAI experiments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-fairness.html). Data source settings The General tab of data source settings provides options for configuring how the experiment consumes and processes the data for training and evaluating the experiment. Setting Description Duplicate rows To accelerate training, you can opt to skip duplicate rows in your training data. Pipeline selection subsample method For a large data set, use a subset of data to train the experiment. This option speeds up results but might affect accuracy. Data imputation Interpolate missing values in your data source. For details on managing data imputation, see [Data imputation in AutoAI experiments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-imputation.html). 
Text feature engineering When enabled, columns that are detected as text are transformed into vectors to better analyze semantic similarity between strings. Enabling this setting might increase run time. For details, see [Creating a text analysis experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-text-analysis.html). Final training data set Select what data to use for training the final pipelines. If you choose to include training data only, the generated notebooks include a cell for retrieving the holdout data that is used to evaluate each pipeline. Outlier handling Choose whether AutoAI excludes outlier values from the target column to improve training accuracy. If enabled, AutoAI uses the interquartile range (IQR) method to detect and exclude outliers from the final training data, whether that is training data only or training plus holdout data. Training and holdout method Training data is used to train the model, and holdout data is withheld from training the model and used to measure the performance of the model. You can either split a singe data source into training and testing (holdout) data, or you can use a second data file specifically for the testing data. If you split your training data, specify the percentages to use for training data and holdout data. You can also specify the number of folds, from the default of three folds to a maximum of 10. Cross validation divides training data into folds, or groups, for testing model performance. Select features to include Select columns from your data source that contain data that supports the prediction column. Excluding extraneous columns can improve run time. Runtime settings Review experiment settings or change the compute resources that are allocated for running the experiment. Next steps [Configure a text analysis experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-text-analysis.html) Parent topic:[Building an AutoAI model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-build.html)",how-to,1,train
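Many of these settings correspond to parameters of the AutoAI Python client as well as the Experiment settings page. The following sketch assumes the ibm_watson_machine_learning client with placeholder credentials; the exact parameter names and enum members can vary by client version, so treat them as assumptions rather than as a reference.

from ibm_watson_machine_learning.experiment import AutoAI

# Placeholder credentials; replace with your own values
wml_credentials = {
    'apikey': '***',
    'url': 'https://us-south.ml.cloud.ibm.com',
}

experiment = AutoAI(wml_credentials, project_id='***')

# Assumed parameter names that mirror the settings described above
pipeline_optimizer = experiment.optimizer(
    name='churn prediction',                         # experiment name
    prediction_type=AutoAI.PredictionType.BINARY,    # prediction type
    prediction_column='CHURN',                       # what to predict
    positive_label='T',                              # positive class for binary metrics
    scoring=AutoAI.Metrics.ROC_AUC_SCORE,            # optimized metric
    holdout_size=0.15,                               # training and holdout split
    max_number_of_estimators=2,                      # how many top algorithms to use
)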
0108F00736882AC35E3C56CD3CE0D91BCB5798A8,https://dataplatform.cloud.ibm.com/docs/content/wsj/spark/time-series-functions.html?context=cdpaas&locale=en,Time series functions,"Time series functions
Time series functions Time series functions are aggregate functions that operate on sequences of data values measured at points in time. The following sections describe some of the time series functions available in different time series packages. Transforms Transforms are functions that are applied on a time series resulting in another time series. The time series library supports various types of transforms, including provided transforms (by using from tspy.functions import transformers) as well as user defined transforms. The following sample shows some provided transforms: Interpolation >>> ts = tspy.time_series([1.0, 2.0, 3.0, 4.0, 5.0, 6.0]) >>> periodicity = 2 >>> interp = interpolators.nearest(0.0) >>> interp_ts = ts.resample(periodicity, interp) >>> interp_ts.print() TimeStamp: 0 Value: 1.0 TimeStamp: 2 Value: 3.0 TimeStamp: 4 Value: 5.0 Fillna >>> shift_ts = ts.shift(2) print(""shifted ts to add nulls"") print(shift_ts) print(""nfilled ts to make nulls 0s"") null_filled_ts = shift_ts.fillna(interpolators.fill(0.0)) print(null_filled_ts) shifted ts to add nulls TimeStamp: 0 Value: null TimeStamp: 1 Value: null TimeStamp: 2 Value: 1.0 TimeStamp: 3 Value: 2.0 TimeStamp: 4 Value: 3.0 TimeStamp: 5 Value: 4.0 filled ts to make nulls 0s TimeStamp: 0 Value: 0.0 TimeStamp: 1 Value: 0.0 TimeStamp: 2 Value: 1.0 TimeStamp: 3 Value: 2.0 TimeStamp: 4 Value: 3.0 TimeStamp: 5 Value: 4.0 Additive White Gaussian Noise (AWGN) >>> noise_ts = ts.transform(transformers.awgn(mean=0.0,sd=.03)) >>> print(noise_ts) TimeStamp: 0 Value: 0.9962378841388397 TimeStamp: 1 Value: 1.9681980879378596 TimeStamp: 2 Value: 3.0289374962174405 TimeStamp: 3 Value: 3.990728648807705 TimeStamp: 4 Value: 4.935338359740761 TimeStamp: 5 Value: 6.03395072999318 Segmentation Segmentation or windowing is the process of splitting a time series into multiple segments. The time series library supports various forms of segmentation and allows creating user-defined segments as well. * Window based segmentation This type of segmentation of a time series is based on user specified segment sizes. The segments can be record based or time based. There are options that allow for creating tumbling as well as sliding window based segments. >>> import tspy >>> ts_orig = tspy.builder() .add(tspy.observation(1,1.0)) .add(tspy.observation(2,2.0)) .add(tspy.observation(6,6.0)) .result().to_time_series() >>> ts_orig timestamp: 1 Value: 1.0 timestamp: 2 Value: 2.0 timestamp: 6 Value: 6.0 >>> ts = ts_orig.segment_by_time(3,1) >>> ts timestamp: 1 Value: original bounds: (1,3) actual bounds: (1,2) observations: [(1,1.0),(2,2.0)] timestamp: 2 Value: original bounds: (2,4) actual bounds: (2,2) observations: [(2,2.0)] timestamp: 3 Value: this segment is empty timestamp: 4 Value: original bounds: (4,6) actual bounds: (6,6) observations: [(6,6.0)] * Anchor based segmentation Anchor based segmentation is a very important type of segmentation that creates a segment by anchoring on a specific lambda, which can be a simple value. An example is looking at events that preceded a 500 error or examining values after observing an anomaly. Variants of anchor based segmentation include providing a range with multiple markers. 
>>> import tspy >>> ts_orig = tspy.time_series([1.0, 2.0, 3.0, 4.0, 5.0]) >>> ts_orig timestamp: 0 Value: 1.0 timestamp: 1 Value: 2.0 timestamp: 2 Value: 3.0 timestamp: 3 Value: 4.0 timestamp: 4 Value: 5.0 >>> ts = ts_orig.segment_by_anchor(lambda x: x % 2 == 0, 1, 2) >>> ts timestamp: 1 Value: original bounds: (0,3) actual bounds: (0,3) observations: [(0,1.0),(1,2.0),(2,3.0),(3,4.0)] timestamp: 3 Value: original bounds: (2,5) actual bounds: (2,4) observations: [(2,3.0),(3,4.0),(4,5.0)] * Segmenters There are several specialized segmenters provided out of the box by importing the segmenters package (using from tspy.functions import segmenters). An example segmenter is one that uses regression to segment a time series: >>> ts = tspy.time_series([1.0,2.0,3.0,4.0,5.0,2.0,1.0,-1.0,50.0,53.0,56.0]) >>> max_error = .5 >>> skip = 1 >>> reg_sts = ts.to_segments(segmenters.regression(max_error,skip,use_relative=True)) >>> reg_sts timestamp: 0 Value: range: (0, 4) outliers: {} timestamp: 5 Value: range: (5, 7) outliers: {} timestamp: 8 Value: range: (8, 10) outliers: {} Reducers A reducer is a function that is applied to the values across a set of time series to produce a single value. The time series reducer functions are similar to the reducer concept used by Hadoop/Spark. This single value can be a collection, but more generally is a single object. An example of a reducer function is averaging the values in a time series. Several reducer functions are supported, including: * Distance reducers Distance reducers are a class of reducers that compute the distance between two time series. The library supports numeric as well as categorical distance functions on sequences. These include time warping distance measurements such as Itakura Parallelogram, Sakoe-Chiba Band, DTW non-constrained and DTW non-time warped contraints. Distribution distances such as Hungarian distance and Earth-Movers distance are also available. For categorical time series distance measurements, you can use Damerau Levenshtein and Jaro-Winkler distance measures. >>> from tspy.functions import >>> ts = tspy.time_series([1.0, 2.0, 3.0, 4.0, 5.0, 6.0]) >>> ts2 = ts.transform(transformers.awgn(sd=.3)) >>> dtw_distance = ts.reduce(ts2,reducers.dtw(lambda obs1, obs2: abs(obs1.value - obs2.value))) >>> print(dtw_distance) 1.8557981638880405 * Math reducers Several convenient math reducers for numeric time series are provided. These include basic ones such as average, sum, standard deviation, and moments. Entropy, kurtosis, FFT and variants of it, various correlations, and histogram are",how-to,1,train
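Pulling several of the transforms above into one runnable example, and assuming the import paths implied by the snippets (interpolators and transformers under tspy.functions), a shift-and-fill pipeline might look like this:

import tspy
from tspy.functions import interpolators, transformers

# Build a simple numeric time series
ts = tspy.time_series([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])

# Shift by two periods, which introduces nulls at the start
shifted = ts.shift(2)

# Fill the nulls with 0.0, as in the Fillna example above
filled = shifted.fillna(interpolators.fill(0.0))

# Add a little white Gaussian noise to the filled series
noisy = filled.transform(transformers.awgn(mean=0.0, sd=0.03))
print(noisy)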
F67E458A29CF154C33221A8889789241725FE5C7,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/jython/clementine/jython_basics.html?context=cdpaas&locale=en,Python and Jython,"Python and Jython
Python and Jython Jython is an implementation of the Python scripting language, which is written in the Java language and integrated with the Java platform. Python is a powerful object-oriented scripting language. Jython is useful because it provides the productivity features of a mature scripting language and, unlike Python, runs in any environment that supports a Java virtual machine (JVM). This means that the Java libraries on the JVM are available to use when you're writing programs. With Jython, you can take advantage of this difference, and use the syntax and most of the features of the Python language. As a scripting language, Python (and its Jython implementation) is easy to learn and efficient to code, and has minimal required structure to create a running program. Code can be entered interactively, that is, one line at a time. Python is an interpreted scripting language; there is no precompile step, as there is in Java. Python programs are simply text files that are interpreted as they're input (after parsing for syntax errors). Simple expressions, like defined values, as well as more complex actions, such as function definitions, are immediately executed and available for use. Any changes that are made to the code can be tested quickly. Script interpretation does, however, have some disadvantages. For example, use of an undefined variable is not a compiler error, so it's detected only if (and when) the statement in which the variable is used is executed. In this case, you can edit and run the program to debug the error. Python sees everything, including all data and code, as an object. You can, therefore, manipulate these objects with lines of code. Some select types, such as numbers and strings, are more conveniently considered as values, not objects; this is supported by Python. There is one null value that's supported. This null value has the reserved name None. For a more in-depth introduction to Python and Jython scripting, and for some example scripts, see [http://www.ibm.com/developerworks/java/tutorials/j-jython1/j-jython1.html](http://www.ibm.com/developerworks/java/tutorials/j-jython1/j-jython1.html) and [http://www.ibm.com/developerworks/java/tutorials/j-jython2/j-jython2.html](http://www.ibm.com/developerworks/java/tutorials/j-jython2/j-jython2.html).",conceptual,0,train
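A small example of the runtime-only error detection and the single null value described above:

def greet(use_nickname):
    if use_nickname:
        return nickname    # 'nickname' is undefined, but this is not flagged until the line runs
    return None            # None is the single null value

print(greet(False))        # prints: None
print(greet(True))         # only now does the undefined name raise a NameError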
F870AF12BC30438B0DAB4FF5365B5279F2F9A93A,https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build-spss.html?context=cdpaas&locale=en,Quick start: Build a model using SPSS Modeler,"Quick start: Build a model using SPSS Modeler
Quick start: Build a model using SPSS Modeler You can create, train, and deploy models using SPSS Modeler. Read about SPSS Modeler, then watch a video and follow a tutorial that’s suitable for beginners and requires no coding. Your basic workflow includes these tasks: 1. Open your sandbox project. Projects are where you can collaborate with others to work with data. 2. Add an SPSS Modeler flow to the project. 3. Configure the nodes on the canvas, and run the flow. 4. Review the model details and save the model. 5. Deploy and test your model. Read about SPSS Modeler With SPSS Modeler flows, you can quickly develop predictive models using business expertise and deploy them into business operations to improve decision making. Designed around the long-established SPSS Modeler client software and the industry-standard CRISP-DM model it uses, the flows interface supports the entire data mining process, from data to better business results. SPSS Modeler offers a variety of modeling methods taken from machine learning, artificial intelligence, and statistics. The methods available on the node palette allow you to derive new information from your data and to develop predictive models. Each method has certain strengths and is best suited for particular types of problems. [Read more about SPSS Modeler](https://dataplatform.cloud.ibm.com/docs/content/wsd/spss-modeler.html) [Learn about other ways to build models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/data-science.html) Watch a video about creating a model using SPSS Modeler  Watch this video to see how to create and run an SPSS Modeler flow to train a machine learning model. This video provides a visual method to learn the concepts and tasks in this documentation. Try a tutorial to create a model using SPSS Modeler In this tutorial, you will complete these tasks: * [Task 1: Open a project.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build-spss.html?context=cdpaas&locale=enstep01) * [Task 2: Add a data set to your project.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build-spss.html?context=cdpaas&locale=enstep02) * [Task 3: Create the SPSS Modeler flow.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build-spss.html?context=cdpaas&locale=enstep03) * [Task 4: Add the nodes to the SPSS Modeler flow.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build-spss.html?context=cdpaas&locale=enstep04) * [Task 5: Run the SPSS Modeler flow and explore the model details.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build-spss.html?context=cdpaas&locale=enstep05) * [Task 6: Evaluate the model.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build-spss.html?context=cdpaas&locale=enstep06) * [Task 7: Deploy and test the model with new data.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build-spss.html?context=cdpaas&locale=enstep07) This tutorial will take approximately 30 minutes to complete. Example data The data set used in this tutorial is from the University of California, Irvine, and is the result of an extensive study based on hospital admissions over a period of time. The model will use three important factors to help predict chronic kidney disease. 
Expand all sections * Tips for completing this tutorial ### Use the video picture-in-picture Tip: Start the video, then as you scroll through the tutorial, the video moves to picture-in-picture mode. Close the video table of contents for the best experience with picture-in-picture. You can use picture-in-picture mode so you can follow the video as you complete the tasks in this tutorial. Click the timestamps for each task to follow along.The following animated image shows how to use the video picture-in-picture and table of contents features: {: width=""560px"" height=""315px"" data-tearsheet=""this""} ### Get help in the community If you need help with this tutorial, you can ask a question or find an answer in the [Cloud Pak for Data Community discussion forum](https://community.ibm.com/community/user/cloudpakfordata/communities/community-home/digestviewer?communitykey=c0c16ff2-10ef-4b50-ae4c-57d769937235){: new_window}. ### Set up your browser windows For the optimal experience completing this tutorial, open Cloud Pak for Data in one browser window, and keep this tutorial page open in another browser window to switch easily between the two applications. Consider arranging the two browser windows side-by-side to make it easier to follow along. {: width=""560px"" height=""315px"" data-tearsheet=""this""} Tip: If you encounter a guided tour while completing this tutorial in the user interface, click Maybe later. [Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build-spss.html?context=cdpaas&locale=envideo-preview) * Task 1: Open a project You need a project to store the SPSS Modeler flow. You can use your sandbox project or create a project. 1. From the navigation menu {: iih}, choose Projects > View all projects 1. Open your sandbox project. If you want to use a new project: 1. Click New project. 1. Select Create an empty project. 1. Enter a name and optional description for the project. 1. Choose an existing [object storage service instance](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/storage-options.html){: new_window} or create a new one. 1. Click Create. For more information or to watch a video, see [Creating a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/projects.html){: new_window}. ### {: iih} Check your progress The following image shows the new project. {: width=""100%"" } [Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build-spss.html?context=cdpaas&locale=envideo-preview) * Task 2: Add the data set to your project  To preview this task, watch the video beginning at 00:13. This tutorial uses a sample data set. Follow these steps to add the sample data set to your project: 1. Access the [UCI ML Repository: Chronic Kidney Disease Data Set](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/a25870b7249ad55605de7a2e59567a7e){: new_window} in the Samples. 1. Click Preview.",how-to,1,train
C1CA39FF2C12CC12697E62A37C7C52A256248AF7,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/questnodeslots.html?context=cdpaas&locale=en,questnode properties,"questnode properties
questnode properties The Quest node provides a binary classification method for building decision trees, designed to reduce the processing time required for large C&R Tree analyses while also reducing the tendency found in classification tree methods to favor inputs that allow more splits. Input fields can be numeric ranges (continuous), but the target field must be categorical. All splits are binary. questnode properties Table 1. questnode properties questnode Properties Values Property description target field Quest models require a single target and one or more input fields. A frequency field can also be specified. See [Common modeling node properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/modelingnodeslots_common.html) for more information. continue_training_existing_model flag objective Standard, Boosting, Bagging, psm psm is used for very large datasets, and requires a server connection. model_output_type Single, InteractiveBuilder use_tree_directives flag tree_directives string use_max_depth Default, Custom max_depth integer Maximum tree depth, from 0 to 1000. Used only if use_max_depth = Custom. prune_tree flag Prune tree to avoid overfitting. use_std_err flag Use maximum difference in risk (in Standard Errors). std_err_multiplier number Maximum difference. max_surrogates number Maximum surrogates. use_percentage flag min_parent_records_pc number min_child_records_pc number min_parent_records_abs number min_child_records_abs number use_costs flag costs structured Structured property. priors Data, Equal, Custom custom_priors structured Structured property. adjust_priors flag trails number Number of component models for boosting or bagging. set_ensemble_method Voting, HighestProbability, HighestMeanProbability Default combining rule for categorical targets. range_ensemble_method Mean, Median Default combining rule for continuous targets. large_boost flag Apply boosting to very large data sets. split_alpha number Significance level for splitting. train_pct number Overfit prevention set. set_random_seed flag Replicate results option. seed number calculate_variable_importance flag calculate_raw_propensities flag calculate_adjusted_propensities flag adjusted_propensity_partition Test, Validation",conceptual,0,train
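As with the other modeling nodes, these properties can be set from a script. In the sketch below, the creation type name 'quest' is an assumption, the target field Drug is illustrative, and the property names and values come from the table above.

stream = modeler.script.stream()

# Assumption: 'quest' is the creation type name for the QUEST modeling node
quest_node = stream.create("quest", "Drug QUEST model")

# Property names and values taken from the table above
quest_node.setPropertyValue("target", "Drug")
quest_node.setPropertyValue("use_max_depth", "Custom")
quest_node.setPropertyValue("max_depth", 5)
quest_node.setPropertyValue("prune_tree", True)
quest_node.setPropertyValue("split_alpha", 0.05)
quest_node.setPropertyValue("calculate_variable_importance", True)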
ABCA967CD96AB805BE518E8A52EF984499C62F6C,https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-tone.html?context=cdpaas&locale=en,Tone classification,"Tone classification
Tone classification The Tone model in the Watson Natural Language Processing classification workflow classifies the tone in the input text. Workflow name ensemble_classification-workflow_en_tone-stock Supported languages * English and French Capabilities The Tone classification model is a pre-trained document classification model for the task of classifying the tone in the input document. The model identifies the tone of the input document and classifies it as: * Excited * Frustrated * Impolite * Polite * Sad * Satisfied * Sympathetic Unlike the Sentiment model, which classifies each individual sentence, the Tone model classifies the entire input document. As such, the Tone model works optimally when the input text to classify is no longer than 1000 characters. If you would like to classify texts longer than 1000 characters, split the text into sentences or paragraphs for example and apply the Tone model on each sentence or paragraph. A document may be classified into multiple categories or into no category. Capabilities of tone classification Capabilities Example Identifies the tone of a document and classifies it ""I'm really happy with how this was handled, thank you!"" --> excited, satisfied Dependencies on other blocks None Code sample import watson_nlp Load the Tone workflow model for English tone_model = watson_nlp.load('ensemble_classification-workflow_en_tone-stock') Run the Tone model tone_result = tone_model.run(""I'm really happy with how this was handled, thank you!"") print(tone_result) Output of the code sample: { ""classes"": [ { ""class_name"": ""excited"", ""confidence"": 0.6896854620082722 }, { ""class_name"": ""satisfied"", ""confidence"": 0.6570277557333078 }, { ""class_name"": ""polite"", ""confidence"": 0.33628806679460566 }, { ""class_name"": ""sympathetic"", ""confidence"": 0.17089694967744093 }, { ""class_name"": ""sad"", ""confidence"": 0.06880583874412932 }, { ""class_name"": ""frustrated"", ""confidence"": 0.010365418217209686 }, { ""class_name"": ""impolite"", ""confidence"": 0.002470793624966174 } ], ""producer_id"": { ""name"": ""Voting based Ensemble"", ""version"": ""0.0.1"" } } Parent topic:[Watson Natural Language Processing task catalog](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-catalog.html)",conceptual,0,train
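Following the guidance above to keep inputs under roughly 1000 characters, the sketch below splits a longer document into paragraphs and classifies each one separately. The paragraph splitting is plain Python and is not part of the workflow API; splitting by sentence would work the same way.

import watson_nlp

# Load the Tone workflow model for English
tone_model = watson_nlp.load('ensemble_classification-workflow_en_tone-stock')

long_document = (
    "I'm really happy with how this was handled, thank you!\n\n"
    "However, the first agent I spoke to was quite rude and unhelpful."
)

# Split on blank lines and classify each paragraph on its own
for paragraph in long_document.split('\n\n'):
    tone_result = tone_model.run(paragraph)
    print(paragraph)
    print(tone_result)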
1EC0AABFA78901776901CB2C57AFF822855B6B5E,https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-hierarchical-cat.html?context=cdpaas&locale=en,Hierarchical text categorization,"Hierarchical text categorization
Hierarchical text categorization The Watson Natural Language Processing Categories block assigns individual nodes within a hierarchical taxonomy to an input document. For example, in the text IBM announces new advances in quantum computing, examples of extracted categories are technology and computing/hardware/computer and technology and computing/operating systems. These categories represent level 3 and level 2 nodes in a hierarchical taxonomy. This block differs from the Classification block in that training starts from a set of seed phrases associated with each node in the taxonomy, and does not require labeled documents. Note that the Hierarchical text categorization block can only be used in a notebook that is started in an environment based on Runtime 22.2 or Runtime 23.1 that includes the Watson Natural Language Processing library. Block name categories_esa_en_stock Supported languages The Categories block is available for the following languages. For a list of the language codes and the corresponding language, see [Language codes](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-catalog.htmllang-codes). de, en Capabilities Use this block to determine the topics of documents on the web by categorizing web pages into a taxonomy of general domain topics, for ad placement and content recommendation. The model was tested on data from news reports and general web pages. For a list of the categories that can be returned, see [Category types](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-returned-categories.html). Dependencies on other blocks The following block must run before you can run the hierarchical categorization block: * syntax_izumo__stock Code sample import watson_nlp Load Syntax and a Categories model for English syntax_model = watson_nlp.load('syntax_izumo_en_stock') categories_model = watson_nlp.load('categories_esa_en_stock') Run the syntax model on the input text syntax_prediction = syntax_model.run('IBM announced new advances in quantum computing') Run the categories model on the result of syntax categories = categories_model.run(syntax_prediction) print(categories) Output of the code sample: { ""categories"": [ { ""labels"": ""technology & computing"", ""computing"" ], ""score"": 0.992489, ""explanation"": ] }, { ""labels"": ""science"", ""physics"" ], ""score"": 0.945449, ""explanation"": ] } ], ""producer_id"": { ""name"": ""ESA Hierarchical Categories"", ""version"": ""1.0.0"" } } Parent topic:[Watson Natural Language Processing task catalog](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-catalog.html)",conceptual,0,train
37DC9376A7FB6EB772D242B85909A023C43C2417,https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-tf2-tutorial.html?context=cdpaas&locale=en,Federated Learning Tensorflow tutorial,"Federated Learning Tensorflow tutorial
Federated Learning Tensorflow tutorial This tutorial demonstrates the usage of Federated Learning with the goal of training a machine learning model with data from different users without having users share their data. The steps are done in a low code environment with the UI and with a Tensorflow framework. Note:This is a step-by-step tutorial for running a UI driven Federated Learning experiment. To see a code sample for an API driven approach, see [Federated Learning Tensorflow samples](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-tf2-samples.html). Tip:In this tutorial, admin refers to the user that starts the Federated Learning experiment, and party refers to one or more users who send their model results after the experiment is started by the admin. While the tutorial can be done by the admin and multiple parties, a single user can also complete a full runthrough as both the admin and the party. For a simpler demonstrative purpose, in the following tutorial only one data set is submitted by one party. For more information on the admin and party, see [Terminology](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-term.html). Watch this short video tutorial of how to create a Federated Learning experiment with Watson Studio. Video disclaimer: Some minor steps and graphical elements in this video might differ from your platform. This video provides a visual method to learn the concepts and tasks in this documentation. In this tutorial you will learn to: * [Step 1: Start Federated Learning as the admin](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-tf2-tutorial.html?context=cdpaas&locale=enstep-1) * [Step 2: Train model as a party](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-tf2-tutorial.html?context=cdpaas&locale=enstep-2) * [Step 3: Save and deploy the model online](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-tf2-tutorial.html?context=cdpaas&locale=enstep-3) Step 1: Start Federated Learning as the admin In this tutorial, you train a Federated Learning experiment with a Tensorflow framework and the MNIST data set. Before you begin 1. Log in to [IBM Cloud](https://cloud.ibm.com/). If you don't have an account, create one with any email. 2. [Create a Watson Machine Learning service instance](https://cloud.ibm.com/catalog/services/machine-learning) if you do not have it set up in your environment. 3. Log in to [watsonx](https://dataplatform.cloud.ibm.com/home2?context=wx). 4. Use an existing [project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/projects.html) or create a new one. You must have at least admin permission. 5. Associate the Watson Machine Learning service with your project. 1. In your project, click the Manage > Service & integrations. 2. Click Associate service. 3. Select your Watson Machine Learning instance from the list, and click Associate; or click New service if you do not have one to set up an instance.  Start the aggregator 1. Create the Federated learning experiment asset: 1. Click the Assets tab in your project. 2. Click New asset > Train models on distributed data. 3. Type a Name for your experiment and optionally a description. 4. Verify the associated Watson Machine Learning instance under Select a machine learning instance. If you don't see a Watson Machine Learning instance associated, follow these steps: 1. Click Associate a Machine Learning Service Instance. 2. 
Select an existing instance and click Associate, or create a New service. 3. Click Reload to see the associated service.  4. Click Next. 2. Configure the experiment. 1. On the Configure page, select a Hardware specification. 2. Under the Machine learning framework dropdown, select Tensorflow 2. 3. Select a Model type. 4. Download the [untrained model](https://github.com/IBMDataScience/sample-notebooks/raw/master/Files/tf_mnist_model.zip). 5. Back in the Federated Learning experiment, click Select under Model specification. 6. Drag the downloaded file named tf_mnist_model.zip onto the Upload file box. Then select runtime-22.2-py3.10 from the Software Specification dropdown. 7. Give your model a name, and then click Add.  8. Click Weighted average for the Fusion method, and click Next.  3. Define the hyperparameters. 1. Accept the default hyperparameters or adjust as needed. 2. When you are finished, click Next. 4. Select remote training systems. 1. Click Add new systems.  2. Give your Remote Training System a name. 3. Under Allowed identities, choose the user that is your party, and then click Add. In this tutorial, you can add a dummy user or yourself, for demonstrative purposes. This user must be added to your project as a collaborator with Editor or higher permissions. Add additional systems by repeating this step for each remote party you intend to use. 4. When you are finished, click Add systems.  5. Return to the Select remote training systems page, verify that your system is selected, and then click Next. 5. Review your settings, and then click Create. 6. Watch the status. Your Federated Learning experiment status is Pending when it starts. When your experiment is ready for parties to connect, the status will change to Setup – Waiting for remote systems. This may take a few minutes. 7. Click View setup information to download the party configuration and the party connector script that can be run on the remote party. 8. Click the",how-to,1,train
163EEB3DBAFF3B01D831F717EEB7487642C93080,https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-troubleshoot.html?context=cdpaas&locale=en,Troubleshooting AutoAI experiments,"Troubleshooting AutoAI experiments
Troubleshooting AutoAI experiments The following list contains the common problems that are known for AutoAI. If your AutoAI experiment fails to run or deploy successfully, review some of these common problems and resolutions. Passing incomplete or outlier input value to deployment can lead to outlier prediction After you deploy your machine learning model, note that providing input data that is markedly different from data that is used to train the model can produce an outlier prediction. When linear regression algorithms such as Ridge and LinearRegression are passed an out of scale input value, the model extrapolates the values and assigns a relatively large weight to it, producing a score that is not in line with conforming data. Time Series pipeline with supporting features fails on retrieval If you train an AutoAI Time Series experiment by using supporting features and you get the error 'Error: name 'tspy_interpolators' is not defined' when the system tries to retrieve the pipeline for predictions, check to make sure your system is running Java 8 or higher. Running a pipeline or experiment notebook fails with a software specification error If supported software specifications for AutoAI experiments change, you might get an error when you run a notebook built with an older software specification, such as an older version of Python. In this case, run the experiment again, then save a new notebook and try again. Resolving an Out of Memory error If you get a memory error when you run a cell from an AutoAI generated notebook, create a notebook runtime with more resources for the AutoAI notebook and execute the cell again. Notebook for an experiment with subsampling can fail generating predictions If you do pipeline refinery to prepare the model, and the experiment uses subsampling of the data during training, you might encounter an “unknown class” error when you run a notebook that is saved from the experiment. The problem stems from an unknown class that is not included in the training data set. The workaround is to use the entire data set for training or re-create the subsampling that is used in the experiment. To subsample the training data (before fit()), provide sample size by number of rows or by fraction of the sample (as done in the experiment). * If number of records was used in subsampling settings, you can increase the value of n. For example: train_df = train_df.sample(n=1000) * If subsampling is represented as a fraction of the data set, increase the value of frac. For example: train_df = train_df.sample(frac=0.4, random_state=experiment_metadata['random_state']) Pipeline creation fails for binary classification AutoAI analyzes a subset of the data to determine the best fit for experiment type. If the sample data in the prediction column contains only two values, AutoAI recommends a binary classification experiment and applies the related algorithms. However, if the full data set contains more than two values in the prediction column the binary classification fails and you get an error that indicates that AutoAI cannot create the pipelines. In this case, manually change the experiment type from binary to either multiclass, for a defined set of values, or regression, for an unspecified set of values. 1. Click the Reconfigure Experiment icon to edit the experiment settings. 2. On the Prediction page of Experiment Settings, change the prediction type to the one that best matches the data in the prediction column. 3. Save the changes and run the experiment again. 
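As a minimal sketch of the subsampling workaround described above, the following assumes you are working in an AutoAI-generated notebook where train_df and experiment_metadata are already defined; the prediction_column key and the pipeline_model variable are placeholders taken from typical generated notebooks and may differ in yours.
# Re-create the experiment subsampling before calling fit() on the pipeline
subsampled_df = train_df.sample(frac=0.4, random_state=experiment_metadata['random_state'])
X = subsampled_df.drop(columns=[experiment_metadata['prediction_column']])
y = subsampled_df[experiment_metadata['prediction_column']]
# pipeline_model is the pipeline object loaded earlier in the generated notebook
pipeline_model.fit(X, y)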
Next steps [AutoAI overview](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-overview.html) Parent topic:[AutoAI overview](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-overview.html)",how-to,1,train
DE79F406DB76B8D50A2B8AB35D4A385983AA5F54,https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/collaborator-permissions.html?context=cdpaas&locale=en,Project collaborator roles and permissions,"Project collaborator roles and permissions
Project collaborator roles and permissions When you add a collaborator to a project, you specify which actions that the user can do by assigning a role. These roles provide these permissions for projects: Action Viewer Editor Admin View all information for data assets ✓ ✓ ✓ View jobs ✓ ✓ ✓ Add and read data assets ✓ ✓ View Data Refinery flows and SPSS Modeler flows ✓ ✓ View all other types of assets ✓ ✓ ✓ Create, add, modify, or delete all types of assets ✓ ✓ Submit inference requests to foundation models, including tuned foundation models ✓ ✓ Run and schedule assets that run in tools and jobs ✓ ✓ Create and modify data asset visualizations ✓ ✓ ✓ Save visualizations to your project ✓ ✓ Create and modify data asset profiles ✓ ✓ Share notebooks ✓ ✓ Promote assets to deployment spaces ✓ ✓ Edit the project readme ✓ ✓ Use project access tokens ✓ ✓ Manage environment templates ✓ ✓ Stop your own environment runtimes ✓ ✓ Export a project to desktop ✓ ✓ Manage project collaborators * ✓ Set up integrations ✓ Manage associated services ✓ Manage project access tokens ✓ Mark project as sensitive ✓ * To add collaborators or change collaborator roles, users with the Admin role in the project must also belong to the project creator's IBM Cloud account. Learn more * [Adding collaborators to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/collaborate.html) * [Determine your roles](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/your-roles.html) Parent topic:[Administering projects](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/admin-project.html)",conceptual,0,train
F837935A2FEFED20E2CAC93656E376F9868CC515,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/smote.html?context=cdpaas&locale=en,SMOTE node (SPSS Modeler),"SMOTE node (SPSS Modeler)
SMOTE node The Synthetic Minority Over-sampling Technique (SMOTE) node provides an over-sampling algorithm to deal with imbalanced data sets. It provides an advanced method for balancing data. The SMOTE node in watsonx.ai is implemented in Python and requires the imbalanced-learn© Python library. For details about the imbalanced-learn library, see [imbalanced-learn documentation](https://imbalanced-learn.org/stable/index.html)^1^. The Modeling tab on the nodes palette contains the SMOTE node and other Python nodes. ^1^Lemaître, Nogueira, Aridas. ""Imbalanced-learn: A Python Toolbox to Tackle the Curse of Imbalanced Datasets in Machine Learning."" Journal of Machine Learning Research, vol. 18, no. 17, 2017, pp. 1-5. (http://jmlr.org/papers/v18/16-365.html)",conceptual,0,train
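The node itself is configured on the SPSS Modeler canvas, but the technique it applies can be illustrated directly with the imbalanced-learn library. The following is a generic sketch of that library, not code generated by the node.
from collections import Counter
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification

# Build a deliberately imbalanced toy data set (roughly 9:1 class ratio)
X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=42)
print('Before:', Counter(y))

# Over-sample the minority class with SMOTE
X_res, y_res = SMOTE(random_state=42).fit_resample(X, y)
print('After:', Counter(y_res))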
56DC9CABDA3980A4D5D41AA5B3E5612E727B289A,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/reordernodeslots.html?context=cdpaas&locale=en,reordernode properties,"reordernode properties
reordernode properties The Field Reorder node defines the natural order used to display fields downstream. This order affects the display of fields in a variety of places, such as tables, lists, and when selecting fields. This operation is useful when working with wide datasets to make fields of interest more visible. reordernode properties Table 1. reordernode properties reordernode properties Data type Property description mode Custom
Auto You can sort values automatically or specify a custom order. sort_by Name
Type
Storage ascending flag start_fields [field1 field2 … fieldn] New fields are inserted after these fields. end_fields [field1 field2 … fieldn] New fields are inserted before these fields.",conceptual,0,train
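A minimal scripting sketch that sets these properties by using the property access methods documented elsewhere in this scripting guide; the type string used to locate the node and the field names are assumptions and may differ in your flow.
stream = modeler.script.stream()
# 'reorder' is assumed here; use the actual type name of the Field Reorder node in your flow
node = stream.findByType('reorder', None)
node.setPropertyValue('mode', 'Custom')
node.setPropertyValue('sort_by', 'Name')
node.setPropertyValue('ascending', True)
# Illustrative field names
node.setPropertyValue('start_fields', ['Age', 'Income'])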
DCE39CA6C888CA6D5CF3F9B9D18D06FD3BD2DFBE,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/kmeansas.html?context=cdpaas&locale=en,K-Means-AS node (SPSS Modeler),"K-Means-AS node (SPSS Modeler)
K-Means-AS node K-Means is one of the most commonly used clustering algorithms. It clusters data points into a predefined number of clusters. The K-Means-AS node in SPSS Modeler is implemented in Spark. See [K-Means Algorithms](https://spark.apache.org/docs/2.2.0/ml-clustering.html) for more details.^1^ Note that the K-Means-AS node performs one-hot encoding automatically for categorical variables. ^1^ ""Clustering."" Apache Spark. MLlib: Main Guide. Web. 3 Oct 2017.",conceptual,0,train
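For reference, this generic PySpark sketch shows the Spark MLlib KMeans estimator that underlies the node; it is not code produced by SPSS Modeler, and the file and column names are illustrative.
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.clustering import KMeans

spark = SparkSession.builder.getOrCreate()
# Hypothetical input file with numeric columns
df = spark.read.csv('customers.csv', header=True, inferSchema=True)

# Assemble numeric inputs into a single feature vector, then cluster into 5 groups
assembler = VectorAssembler(inputCols=['age', 'income'], outputCol='features')
model = KMeans(k=5, seed=1, featuresCol='features').fit(assembler.transform(df))
print(model.clusterCenters())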
484AF9BAF43AC6BCFDFAF7B0D353CCDF119033DF,https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-get-started.html?context=cdpaas&locale=en,Getting started with the Watson Pipelines editor,"Getting started with the Watson Pipelines editor
Getting started with the Watson Pipelines editor The Watson Pipelines editor is a graphical canvas where you can drag and drop nodes that you connect together into a pipeline for automating machine learning model operations. You can open the Pipelines editor by creating a new Pipelines asset or editing an existing Pipelines asset. To create a new asset in your project from the Assets tab, click New asset > Automate model lifecycle. To edit an existing asset, click the pipeline asset name on the Assets tab. The canvas opens with a set of annotated tools for you to use to create a pipeline. The canvas includes the following components:  * The node palette provides nodes that represent various actions for manipulating assets and altering the flow of control in a pipeline. For example, you can add nodes to create assets such as data files, AutoAI experiments, or deployment spaces. You can configure node actions that run based on conditions, such as feeding data into a notebook only if files import successfully. You can also use nodes to run and update assets. As you build your pipeline, you connect the nodes, then configure operations on the nodes to create the pipeline. These pipelines create a dynamic flow that addresses specific stages of the machine learning lifecycle. * The toolbar includes shortcuts to options related to running, editing, and viewing the pipeline. * The parameters pane provides context-sensitive options for configuring the elements of your pipeline. The toolbar  Use the Pipeline editor toolbar to: * Run the pipeline as a trial run or a scheduled job * View the history of pipeline runs * Cut, copy, or paste canvas objects * Delete a selected node * Drop a comment onto the canvas * Configure global objects, such as pipeline parameters or user variables * Manage default settings * Arrange nodes vertically * View last saved timestamp * Zoom in or out * Fit the pipeline to the view * Show or hide global messages Hover over an icon on the toolbar to view the shortcut text. The node palette The node palette provides the objects that you need to create an end-to-end pipeline. Click a top-level node in the palette to see the related nodes. Node category Description Node type Copy Use nodes to copy an asset or file, import assets, or export assets Copy assets
Export assets
Import assets Create Create assets or containers for assets Create AutoAI experiment
Create AutoAI time series experiment
Create batch deployment
Create data asset
Create deployment space
Create online deployment Wait Specify node-level conditions for advancing the pipeline run Wait for all results
Wait for any result
Wait for file Control Specify error handling Loop in parallel
Loop in sequence
Set user variables
Terminate pipeline Update Update the configuration settings for a space, asset, or job. Update AutoAI experiment
Update batch deployment
Update deployment space
Update online deployment Delete Remove a specified asset, job, or space. Delete AutoAI experiment
Delete batch deployment
Delete deployment space
Delete online deployment Run Run an existing or ad hoc job. Run AutoAI experiment
Run Bash script
Run batch deployment
Run Data Refinery job
Run notebook job
Run pipeline job
Run Pipelines component job
Run SPSS Modeler job The parameters pane Double-click a node to edit its configuration options. Depending on the type, a node can define various input and output options or even allow the user to add inputs or outputs dynamically. You can define the source of values in various ways. For example, you can specify that the source of value for ""ML asset"" input for a batch deployment must be the output from a run notebook node. For more information on parameters, see [Configuring pipeline components](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-config.html). Next steps * [Planning a pipeline](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-planning.html) * [Explore the sample pipeline](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-sample.html) * [Create a pipeline](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-create.html) Parent topic:[Watson Pipelines](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-overview.html)",how-to,1,train
895CD261C9F06F272286BCCA3555846FB1ED8AA3,https://dataplatform.cloud.ibm.com/docs/content/wsd/tutorials/tut_autodata_build.html?context=cdpaas&locale=en,Building the flow (SPSS Modeler),"Building the flow (SPSS Modeler)
Building the flow 1. Add a Data Asset node that points to telco.csv. Figure 1. Auto Data Prep example flow  2. Attach a Type node to the Data Asset node. Set the measure for the churn field to Flag, and set the role to Target. Make sure the role for all other fields is set to Input. Figure 2. Setting the measurement level and role  3. Attach a Logistic node to the Type node. 4. In the Logistic node's properties, under MODEL SETTINGS, select the Binomial procedure. For Model Name, select Custom and enter No ADP - churn. Figure 3. Choosing model options  5. Attach an Auto Data Prep node to the Type node. Under OBJECTIVES, leave the default settings in place to analyze and prepare your data by balancing both speed and accuracy. 6. Run the flow to analyze and process your data. Other Auto Data Prep node properties allow you to specify that you want to concentrate more on accuracy, more on the speed of processing, or to fine tune many of the data preparation processing steps. Note: If you want to adjust the node properties and run the flow again in the future, since the model already exists, you must first click Clear Analysis, under OBJECTIVES before running the flow again. Figure 4. Auto Data Prep default objectives  7. Attach a Logistic node to the Auto Data Prep node. 8. In the Logistic node's properties, under MODEL SETTINGS, select the Binomial procedure. For Model Name, select Custom and enter After ADP - churn.",how-to,1,train
95C10FDC6D0C3B142DA650044E1A0581D04EF8E4,https://dataplatform.cloud.ibm.com/docs/content/wsd/tutorials/tut_drug_web.html?context=cdpaas&locale=en,Creating a web chart (SPSS Modeler),"Creating a web chart (SPSS Modeler)
Creating a web chart Since many of the data fields are categorical, you can also try plotting a web chart, which maps associations between different categories. Figure 1. Web node  1. Place a Web node on the canvas and connect it to the drug1n.csv Data Asset node. Then double-click the Web node to edit its properties. 2. Select the fields BP (for blood pressure) and Drug. Click Save, then right-click the Web node and select Run. A web chart is added to the Outputs pane. Figure 2. Web graph of drugs vs. blood pressure  From the plot, it appears that drug Y is associated with all three levels of blood pressure. This is no surprise; you have already determined the situation in which drug Y is best. But if you ignore drug Y and focus on the other drugs, you can see that drugs A and B are also associated with high blood pressure. And drugs C and X are associated with low blood pressure. And normal blood pressure is associated with drug X. At this point, though, you still don't know how to choose between drugs A and B or between drugs C and X, for a given patient. This is where modeling can help.",how-to,1,train
1BB1684259F93D91580690D898140D98F12611ED,https://dataplatform.cloud.ibm.com/docs/content/DO/wml_cpd_home.html?context=cdpaas&locale=en,Deploying Decision Optimization models,"Deploying Decision Optimization models
Decision Optimization When you have created and solved your Decision Optimization models, you can deploy them using Watson Machine Learning. See the [Decision Optimization experiment UI](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/buildingmodels.htmltopic_buildingmodels) for building and solving models. The following sections describe how you can deploy your models.",how-to,1,train
E88EDBB9A31F8B7C70FB3BA48136D9C3CD6767AC,https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/overview-wx.html?context=cdpaas&locale=en,Overview of IBM watsonx as a Service,"Overview of IBM watsonx as a Service
Overview of IBM watsonx as a Service IBM watsonx.ai is a studio of integrated tools for working with generative AI capabilities that are powered by foundation models and for building machine learning models. The IBM watsonx.ai component provides a secure and collaborative environment where you can access your organization's trusted data, automate AI processes, and deliver AI in your applications. The IBM watsonx.governance component provides end-to-end monitoring for machine learning and generative AI models to accelerate responsible, transparent, and explainable AI workflows. Watch this short video that introduces watsonx.ai. Looking for watsonx.data? Go to [IBM watsonx.data documentation](https://cloud.ibm.com/docs/watsonxdata?topic=watsonxdata-getting-started). You can accomplish the following goals with watsonx: * Build machine learning models Build models by using open source frameworks and code-based, automated, or visual data science tools. * Experiment with foundation models Test prompts to generate, classify, summarize, or extract content from your input text. Choose from IBM models or open source models from Hugging Face. * Manage the AI lifecycle Manage and automate the full AI model lifecycle with all the integrated tools and runtimes to train, validate, and deploy AI models. * Govern AI Track and document the detailed history of AI models to help ensure compliance. Watsonx.ai provides these tools for working with data and models: Tools for working with data and models What you can use What you can do Best to use when [Data Refinery](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/refining_data.html) Access and refine data from diverse data source connections.
Materialize the resulting data sets as snapshots in time that might combine, join, or filter data for other data scientists to analyze and explore. You need to visualize the data when you want to shape or cleanse it.
You want to simplify the process of preparing large amounts of raw data for analysis. [Prompt Lab](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-lab.html) Experiment with IBM and open source foundation models by inputting prompts. You want to engineer prompts for your generative AI solution. [Tuning Studio](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-studio.html) Tailor the output that a foundation model returns to better meet your needs. You want to adjust foundation model outputs for use in your generative AI solution. [AutoAI](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-overview.html) Use AutoAI to automatically select algorithms, engineer features, generate pipeline candidates, and train machine learning model pipeline candidates.
Then, evaluate the ranked pipelines and save the best as models.
Deploy the trained models to a space, or export the model training pipeline that you like from AutoAI into a notebook to refine it. You want an advanced and automated way to build a good set of training pipelines and machine learning models quickly.
You want to be able to export the generated pipelines to refine them. [Notebooks and scripts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebooks-and-scripts.html) Prompt foundation models with the Python library.
Use notebooks and scripts to write your own feature engineering, model training, and evaluation code in Python or R. Use training data sets that are available in the project, or connections to data sources such as databases, data lakes, or object storage.
Code with your favorite open source frameworks and libraries. You want to use Python or R coding skills to have full control over the code that you use to work with models. [SPSS Modeler flows](https://dataplatform.cloud.ibm.com/docs/content/wsd/spss-modeler.html) Use SPSS Modeler flows to create your own machine learning model training, evaluation, and scoring flows. Use training data sets that are available in the project, or connections to data sources such as databases, data lakes, or object storage. You want a simple way to explore data and define machine learning model training, evaluation, and scoring flows. [RStudio](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/rstudio-overview.html) Analyze data and build and test machine learning models by working with R in RStudio. You want to use a development environment to work in R. [Decision Optimization](https://dataplatform.cloud.ibm.com/docs/content/DO/DOWS-Cloud_home.html) Prepare data, import models, solve problems and compare scenarios, visualize data, find solutions, produce reports, and save models to deploy with Watson Machine Learning. You need to evaluate millions of possibilities to find the best solution to a prescriptive analytics problem. [Federated learning](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fed-lea.html) Train a common machine learning model that uses distributed data. You need to train a machine learning model without moving, combining, or sharing data that is distributed across multiple locations. [Watson Pipelines](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-overview.html) Use pipelines to create repeatable and scheduled flows that automate notebook, Data Refinery, and machine learning pipelines, from data ingestion to model training, testing, and deployment. You want to automate some or all of the steps in an MLOps flow. [Synthetic Data Generator](https://dataplatform.cloud.ibm.com/docs/content/wsj/synthetic/synthetic_data_overview_sd.html) Generate synthetic tabular data based on production data or a custom data schema using visual flows and modeling algorithms. You want to mask or mimic production data or you want to generate synthetic data from a custom data schema. Watsonx.governance provides these tools for governing models. Tools for governing models What you can use What you can do Best to use when [Factsheets](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-create-use-case.html) View model lifecycle status, general model and deployment details, training",conceptual,0,train
37D9428BD2E4A45CA968DAD59D1005FB5FC4DE9C,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/cart.html?context=cdpaas&locale=en,C&R Tree node (SPSS Modeler),"C&R Tree node (SPSS Modeler)
C&R Tree node The Classification and Regression (C&R) Tree node is a tree-based classification and prediction method. Similar to C5.0, this method uses recursive partitioning to split the training records into segments with similar output field values. The C&R Tree node starts by examining the input fields to find the best split, measured by the reduction in an impurity index that results from the split. The split defines two subgroups, each of which is subsequently split into two more subgroups, and so on, until one of the stopping criteria is triggered. All splits are binary (only two subgroups).",conceptual,0,train
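To make the impurity-reduction idea concrete, here is a small illustrative calculation that uses the Gini index, a common impurity measure for classification trees of this kind; it is a teaching sketch, not SPSS Modeler code.
def gini(labels):
    # Gini impurity: 1 minus the sum of squared class proportions
    n = len(labels)
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

parent = ['good'] * 6 + ['bad'] * 4   # 10 records before the split
left = ['good'] * 5 + ['bad'] * 1     # candidate left subgroup
right = ['good'] * 1 + ['bad'] * 3    # candidate right subgroup

# Impurity reduction = parent impurity minus the size-weighted child impurity
reduction = gini(parent) - (len(left) / len(parent)) * gini(left) - (len(right) / len(parent)) * gini(right)
print(round(reduction, 3))
With these counts the split lowers the impurity from 0.48 to about 0.317, a reduction of roughly 0.163; the node compares such reductions across all candidate splits and keeps the largest.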
0EFC1AA12637C84918CEF9FA5DE5DA424822330C,https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Mdl_Assist/exhousebuild.html?context=cdpaas&locale=en,Decision Optimization Modeling Assistant scheduling tutorial,"Decision Optimization Modeling Assistant scheduling tutorial
Formulating and running a model: house construction scheduling This tutorial shows you how to use the Modeling Assistant to define, formulate and run a model for a house construction scheduling problem. The completed model with data is also provided in the DO-samples, see [Importing Model Builder samples](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/docExamples.htmlExamples__section_modelbuildersamples). In this section: * [Modeling Assistant House construction scheduling tutorial](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Mdl_Assist/exhousebuild.html?context=cdpaas&locale=encogusercase__section_The_problem) * [More about the model view](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Mdl_Assist/exhousebuild.html?context=cdpaas&locale=encogusercase__section_tbl_kdj_t1b) * [Generating a Python notebook from your scenario](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Mdl_Assist/exhousebuild.html?context=cdpaas&locale=encogusercase__section_j2m_xnh_4bb)",how-to,1,train
9DEAC0E5B403BAEDEABE9C76A295651289E6416C,https://dataplatform.cloud.ibm.com/docs/content/wsd/tutorials/tut_intro_evaluate.html?context=cdpaas&locale=en,Evaluating the model (SPSS Modeler),"Evaluating the model (SPSS Modeler)
Evaluating the model We've been browsing the model to understand how scoring works. But to evaluate how accurately it works, we need to score some records and compare the responses predicted by the model to the actual results. We're going to score the same records that were used to estimate the model, allowing us to compare the observed and predicted responses. Figure 1. Attaching the model nugget to output nodes for model evaluation  1. To see the scores or predictions, attach the Table node to the model nugget and then right-click the Table node and select Run. A table will be generated and added to the Outputs panel. Double-click it to open it. The table displays the predicted scores in a field named $R-Credit rating, which was created by the model. We can compare these values to the original Credit rating field that contains the actual responses. By convention, the names of the fields generated during scoring are based on the target field, but with a standard prefix. Prefixes $G and $GE are generated by the Generalized Linear Model, $R is the prefix used for the prediction generated by the CHAID model in this case, $RC is for confidence values, $X is typically generated by using an ensemble, and $XR, $XS, and $XF are used as prefixes in cases where the target field is a Continuous, Categorical, Set, or Flag field, respectively. Different model types use different sets of prefixes. A confidence value is the model's own estimation, on a scale from 0.0 to 1.0, of how accurate each predicted value is. Figure 2. Table showing generated scores and confidence values  As expected, the predicted value matches the actual responses for many records but not all. The reason for this is that each CHAID terminal node has a mix of responses. The prediction matches the most common one, but will be wrong for all the others in that node. (Recall the 18% minority of low-income customers who did not default.) To avoid this, we could continue splitting the tree into smaller and smaller branches, until every node was 100% pure—all Good or Bad with no mixed responses. But such a model would be extremely complicated and would probably not generalize well to other datasets. To find out exactly how many predictions are correct, we could read through the table and tally the number of records where the value of the predicted field $R-Credit rating matches the value of Credit rating. Fortunately, there's a much easier way; we can use an Analysis node, which does this automatically. 2. Connect the model nugget to the Analysis node. 3. Right-click the Analysis node and select Run. An Analysis entry will be added to the Outputs panel. Double-click it to open it. Figure 3. Attaching an Analysis node  The analysis shows that for 1960 out of 2464 records—over 79%—the value predicted by the model matched the actual response. Figure 4. Analysis results comparing observed and predicted responses  This result is limited by the fact that the records being scored are the same ones used to estimate the model. In a real situation, you could use a Partition node to split the data into separate samples for training and evaluation. By using one sample partition to generate the model and another sample to test it, you can get a much better indication of how well it will generalize to other datasets. The Analysis node allows us to test the model against records for which we already know the actual result. The next stage illustrates how we can use the model to score records for which we don't know the outcome. 
For example, this might include people who are not currently customers of the bank, but who are prospective targets for a promotional mailing.",how-to,1,train
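If you export the scored table, the match count that the Analysis node reports can be reproduced outside SPSS Modeler with a few lines of pandas; the file name below is hypothetical, and the field names follow the $R- prefix convention described above.
import pandas as pd

# Hypothetical export of the Table node output
scored = pd.read_csv('scored_credit_ratings.csv')
matches = (scored['$R-Credit rating'] == scored['Credit rating']).sum()
print(matches, 'of', len(scored), 'records match:', round(100 * matches / len(scored), 1), '%')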
A5C7CF086B303923D48F8AD63CF85A6BCCBBE3F5,https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/feature-group.html?context=cdpaas&locale=en,Managing feature groups (beta),"Managing feature groups (beta)
Managing feature groups (beta) Create a feature group to preserve a set of columns of a data asset along with associated metadata for use with Machine Learning models. Required service : You must have these services. - Watson Studio (for projects) Required permissions : To view this page, you can have any role in a project. : To edit or update information on this page, you must have the Editor or Admin role in the project. Workspaces : You can view the asset feature group in these workspaces: : Projects Types of assets : These types of assets can have a feature group: : Tabular: CSV, TSV, Parquet, xls, xslx, avro, text, json files : [Connected data types](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html) that are structured and supported in Watson Studio. Data size : No limit Feature groups (beta) Create a feature group to preserve a set of columns of a particular data asset along with the metadata used for Machine Learning. For example, if you have a set of features for a credit approval model, you can preserve the features used to train the model, as well as some metadata, including which column is used as the prediction target, and which columns are used for bias detection. Feature groups make it simple to preserve the metadata for the features used to train a machine learning model so other data scientists can use the same features. You can see the feature group tab when you preview a particular asset. * [Creating a feature group](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/feature-group.html?context=cdpaas&locale=encreate-featuregrp) * [Editing a feature group](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/feature-group.html?context=cdpaas&locale=enedit-featuregrp) * [Removing features or a feature group](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/feature-group.html?context=cdpaas&locale=enremove-featuregrp) * [Using the Python API for feature groups](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/feature-group.html?context=cdpaas&locale=enapi-featuregrp) Creating a feature group in a project Before you begin If you create a [profile](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/profile.html) for the data asset before creating a feature group you can select profile metadata to add values to the feature. Create a feature group You can select particular columns of data assets to form a feature group. 1. In the project Assets tab, click the name of the relevant asset to open the preview and select the Feature group tab. Here you can create a feature group or view and edit an existing one. An asset can have only one feature group. Click New feature group.  2. Select the columns that you want to be used in the feature group. Select the Name checkbox to include all the columns as features.  Editing a feature group When you have selected the columns of the data asset to be used in the feature group, you can then view each feature and edit it to specify the role it will have in Machine Learning models.  1. Click a feature name and click Edit this feature. A window opens displaying the following tabs: * Details - provide the following information about the feature.  Select a Role to be assigned to the feature: * Input: the feature can be used as input for training a Machine Learning model. * Target: the feature to be used as the prediction target when the data is used to train a Machine Learning model. 
* Identifier: the primary key, such as customer ID, used to identify the input data. Enter a Description, Recipe (any method or formula used to create values for the feature) and any Tags. * Value descriptions  Value descriptions allow you to clarify the meaning of specific values. For example, consider a column ""credit evaluation"" with the values -1, 0 and 1. You can use value descriptions to provide meaning for these values. For example, -1 might mean ""evaluation rejected"". You can enter descriptions for particular values. For numerical values, you can also specify a range. To specify a range of numerical values, enter the following text [n,m] where n is the start and m is the end of the range, surrounded by brackets, and click Add. For example, to describe all age values between 18 and 24 as ""millenials"", enter [18,24] as the value and millenials as the description. If you have a [profile](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/profile.html) defined, the profile values are displayed in the value descriptions list. From here you can select one value or multiple values. * Fairness information  You can define Monitor or Reference groups of values for monitoring bias. The values that are more at risk of biased outcomes can be placed in the Monitor group. These values are then compared to values in the Reference group. To specify a range of numerical values, enter the following text [n,m] where n is the start and m is the end of the range, surrounded by brackets. For example, to monitor all age values between 18 and 35, enter [18,35]. Then select Monitor or Reference and click Add. You can also specify Favorable outcomes. See",how-to,1,train
DA0357B0ADE596E1A23F676F76FF4304B97AEF2B,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/stream_scripttab_javalimits.html?context=cdpaas&locale=en,Jython code size limits,"Jython code size limits
Jython code size limits Jython compiles each script to Java bytecode, which the Java Virtual Machine (JVM) then runs. However, Java imposes a limit on the size of a single bytecode file. So when Jython attempts to load the bytecode, it can cause the JVM to crash. SPSS Modeler is unable to prevent this from happening. Ensure that you write your Jython scripts using good coding practices (such as minimizing duplicated code by using variables or functions to compute common intermediate values). If necessary, you may need to split your code over several source files or define it using modules as these are compiled into separate bytecode files.",conceptual,0,train
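A minimal sketch of the module approach suggested above: keep shared helpers in their own source file so that each compiled unit stays small. The file and function names are illustrative.
# helpers.py -- shared functions compiled to their own bytecode file
def common_intermediate(values):
    return sum(values) / float(len(values))

# main script -- stays short and imports the helpers it needs
import helpers

print(helpers.common_intermediate([1, 2, 3, 4]))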
7BAB40E15D18920009E4168C32265A950A8AFE38,https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/manage-envs-new.html?context=cdpaas&locale=en,Managing compute resources,"Managing compute resources
Managing compute resources If you have the Admin role or Editor in a project, you can perform management tasks for environments. * [Create an environment template](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/create-customize-env-definition.html) * [Customize an environment template](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/customize-envs.html) * [Stop active runtimes](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/manage-envs-new.html?context=cdpaas&locale=enstop-active-runtimes) * [Promote an environment template to a space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/promote-envs.html) * [Track capacity unit consumption of runtimes](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/track-runtime-usage.html) Stop active runtimes You should stop all active runtimes when you no longer need them to prevent consuming extra capacity unit hours (CUHs). Jupyter notebook runtimes are started per user and not per notebook. Stopping a notebook kernel doesn't stop the environment runtime in which the kernel is started because you could have started other notebooks in the same environment. You should only stop a notebook runtime if you are sure that no other notebook kernels are active. Only runtimes that are started for jobs are automatically shut down after the scheduled job has completed. For example, if you schedule to run a notebook once a day for 2 months, the runtime instance will be activated every day for the duration of the scheduled job and deactivated again after the job has finished. Project users with Admin role can stop all runtimes in the project. Users added to the project with Editor role can stop the runtimes they started, but can't stop other project users' runtimes. Users added to the project with the viewer role can't see the runtimes in the project. You can stop runtimes from: * The Environment Runtimes page, which lists all active runtimes across all projects for your account, by clicking Administration > Environment runtimes from the Watson Studio navigation menu. * Under Tool runtimes on the Environments page on the Manage tab of your project, which lists the active runtimes for a specific project. * The Environments page when you click the Notebook Info icon () from the notebook toolbar in the notebook editor. You can stop the runtime under Runtime status. Idle timeouts for: * [Jupyter notebook runtimes](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/manage-envs-new.html?context=cdpaas&locale=encpu) * [Spark runtimes for notebooks and Data Refinery](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/manage-envs-new.html?context=cdpaas&locale=enspark) * [Notebook with GPU runtimes](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/manage-envs-new.html?context=cdpaas&locale=engpu) * [RStudio runtimes](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/manage-envs-new.html?context=cdpaas&locale=enrstudio) Jupyter notebook idle timeout Runtime idle times differ for the Jupyter notebook runtimes depending on your Watson Studio plan. Idle timeout for default CPU runtimes Plan Idle timeout Lite - Idle stop time: 1 hour
- CUH limit: 10 CUHs Professional - Idle stop time: 1 hour
- CUH limit: no limit Standard (Legacy) - Idle stop time: 1 hour
- CUH limit: no limit Enterprise (Legacy) - Idle stop time: 3 hours
- CUH limit: no limit All plans
Free runtime - Idle stop time: 1 hour
- Maximum lifetime: 12 hours Important: A runtime is started per user and not per notebook. Stopping a notebook kernel doesn't stop the environment runtime in which the kernel is started because you could have started other notebooks in the same environment. Only stop a runtime if you are sure that no kernels are active. Spark idle timeout All Spark runtimes, for example for notebooks and Data Refinery, are stopped after 3 hours of inactivity. The Default Data Refinery XS runtime that is used when you refine data in Data Refinery is stopped after an idle time of 1 hour. Spark runtimes that are started when a job is started, for example to run a Data Refinery flow or a notebook, are stopped when the job finishes. GPU idle timeout All GPU runtimes are automatically stopped after 3 hours of inactivity for Enterprise plan users and after 1 hour of inactivity for other paid plan users. RStudio idle timeout An RStudio runtime is stopped after an idle time of 2 hours. During this idle time, you continue to consume CUHs, for which you are billed. Long compute-intensive jobs are hard stopped after 24 hours. Parent topic:[Projects](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/manage-projects.html)",how-to,1,train
445B99372919DE6B2C3E6A7E2C3F4CAAB0BF174C,https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-flow-param.html?context=cdpaas&locale=en,Configuring global objects for Watson Pipelines,"Configuring global objects for Watson Pipelines
Configuring global objects for Watson Pipelines Use global objects to create configurable constants to configure your pipeline at run time. Use parameters or user variables in pipelines to specify values at run time, rather than hardcoding the values. Unlike pipeline parameters, user variables can be dynamically set during the flow. Learn about creating: * [Pipeline parameters](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-flow-param.html?context=cdpaas&locale=enflow) * [Parameter sets](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-flow-param.html?context=cdpaas&locale=enparam-set) * [User variables](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-flow-param.html?context=cdpaas&locale=enuser) Pipeline parameters Use pipeline parameters to specify a value at pipeline runtime. For example, if you want a user to enter a deployment space for pipeline output, use a parameter to prompt for the space name to use when the pipeline runs. Specifying the value of the parameter each time that you run the job helps you use the correct resources. About pipeline parameters: * can be assigned as a node value or assign it for the pipeline job. * can be assigned to any node, and a status indicator alerts you. * can be used for multiple nodes. Defining a pipeline parameter 1. Create a pipeline parameter from the node configuration panel from the toolbar. 2. Enter a name and an optional description. The name must be lower snake case with lowercase letters, numbers, and underscores. For example, lower_snake_case_with_numbers_123 is a valid name. The name must begin with a letter. If the name does not comply, you get a 404 error when you try to run the pipeline. 3. Assign a parameter type. Depending on the parameter type, you might need to provide more details or assign a default value. 4. Click Add to list to save the pipeline parameter. Note: Parameter types Parameter types are categorized as: * Basic: including data types to structure input to a pipeline or options for handling the creation of a duplicate space or asset. * Resource: for selecting a project, catalog, space, or asset. * Instance: for selecting a machine learning instance or a Cloud Object Storage instance. * Other: for specifying details, such as creation mode or error policy. Example of using pipeline types To create a parameter of the type Path: 1. Create a parameter set called MASTER_PARAMETER_SET. 2. Create a parameter called file_path and set the type to Path. 3. Set the value of file_path to mnts/workspace/masterdir. 4. Drag the node Wait for file onto the canvas and set the File location value to MASTER_PARAMETER_SET.file_path. 5. Connect the Wait for file with the Run Bash script node so that the latter node runs after the former. 6. Optional: Test your parameter variable: 1. Add the environment variable parameter to your MASTER_PARAMETER_SET parameter set, for example FILE_PATH. 2. Paste the following command into the Script code of the Run Bash script: echo File: $FILE_PATH cat $FILE_PATH 7. Run the pipeline. The path mnts/workspace/masterdir is in both of the nodes' execution logs to see they passed successfully. Configuring a node with a pipeline parameter When you configure a node with a pipeline parameter, you can choose an existing pipeline parameter or create a new one as part of configuring a node. For example: 1. Create a pipeline parameter called creationmode and save it to the parameter list. 2. 
Configure a Create deployment space node and click it to open the configuration panel. 3. Choose the Pipeline parameter as the input for the Creation mode option. 4. Choose the creationmode pipeline parameter and save the configuration. When you run the flow, the pipeline parameter is assigned when the space is created. Parameter sets A parameter set is a group of related parameters to use in a pipeline. For example, you might create one set of parameters to use in a test environment and another for use in a production environment. Parameter sets can be created as a project asset. Parameter sets created in the project are then available for use in pipelines in that project. Creating a parameter set as a project asset You can create a parameter set as a reusable project asset to use in pipelines. 1. Open an existing project or create a project. 2. Click New task > Collect multiple job parameters with specified values to reuse in jobs from the available tasks. 3. Assign a name for the set, and specify the details for each parameter in the set, including: * Name for the parameter * Data type * Prompt * Default value 4. Optionally create value sets for the parameters in the parameter set. Value sets provide different values for different contexts. For example, you can create a Test value set with values for a test environment, and a production set for production values. 5. Save the parameter set after you create all the parameters. It becomes available for use in pipelines that are created in that project. Adding a parameter set for use in a pipeline To add a parameter set",how-to,1,train
B416F3605ADF246170E1B462EE0F2CFCDF5E591B,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/jython/clementine/python_setting_properties.html?context=cdpaas&locale=en,Setting properties,"Setting properties
Setting properties Nodes, flows, models, and outputs all have properties that can be accessed and, in most cases, set. Properties are typically used to modify the behavior or appearance of the object. The methods that are available for accessing and setting object properties are summarized in the following table. Methods for accessing and setting object properties Table 1. Methods for accessing and setting object properties Method Return type Description p.getPropertyValue(propertyName) Object Returns the value of the named property or None if no such property exists. p.setPropertyValue(propertyName, value) Not applicable Sets the value of the named property. p.setPropertyValues(properties) Not applicable Sets the values of the named properties. Each entry in the properties map consists of a key that represents the property name and the value that should be assigned to that property. p.getKeyedPropertyValue(propertyName, keyName) Object Returns the value of the named property and associated key or None if no such property or key exists. p.setKeyedPropertyValue(propertyName, keyName, value) Not applicable Sets the value of the named property and key. For example, the following script sets the value of a Derive node for a flow:
stream = modeler.script.stream()
node = stream.findByType(""derive"", None)
node.setPropertyValue(""name_extension"", ""new_derive"")
Alternatively, you might want to filter a field from a Filter node. In this case, the value is also keyed on the field name. For example:
stream = modeler.script.stream()
# Locate the filter node ...
node = stream.findByType(""filter"", None)
# ... and filter out the ""Na"" field
node.setKeyedPropertyValue(""include"", ""Na"", False)",conceptual,0,train
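The following short sketch exercises the remaining methods from the table, reading one property back and setting several properties at once with a map; the annotation property name is an assumption used for illustration.
stream = modeler.script.stream()
node = stream.findByType('derive', None)

# Read a single property back
print(node.getPropertyValue('name_extension'))

# Set several properties in one call by passing a map of property names to values
node.setPropertyValues({'name_extension': 'scored', 'annotation': 'set by script'})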
D8BD7C30F776F7218860187F535C6B72D1A8DC74,https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html?context=cdpaas&locale=en,Adding data assets to a deployment space,"Adding data assets to a deployment space
Adding data assets to a deployment space Learn about various ways of adding and promoting data assets to a space and data types that are used in deployments. Data can be: * A data file such as a .csv file * A connection to data that is located in a repository such as a database. * Connected data that is located in a storage bucket. For more information, see [Using data from the Cloud Object Storage service](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html?context=cdpaas&locale=encos-data). Notes: * For definitions of data-related terms, refer to [Asset types and properties](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/assets.html). You can add data to a space in one of these ways: * [Add data and connections to space by using UI](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html?context=cdpaas&locale=enadd-directly) * [Promote a data source, such as a file or a connection from an associated project](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-promote-assets.html) * [Save a data asset to a space programmatically](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html?context=cdpaas&locale=enadd-programmatically) * [Import a space or a project, including data assets, into an existing space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-import-to-space.html). Data added to a space is managed in a similar way to data added to a Watson Studio project. For example: * Adding data to a space creates a new copy of the asset and its attachments within the space, maintaining a reference back to the project asset. If an asset such as a data connection requires access credentials, they persist and are the same whether you are accessing the data from a project or from a space. * Just like with data connection in a project, you can edit data connection details from the space. * Data assets are stored in a space in the same way that they are stored in a project. They use the same file structure for the space as the structure used for the project. Adding data and connections to space by using UI To add data or connections to space by using UI: 1. From the Assets tab of your deployment space, click Import assets. 2. Choose between adding a connected data asset, local file, or connection to a data source: * If you want to add a connected data asset, select Connected data. Choose a connection and click Import. * If you want to add a local file, select Local file > Data asset. Upload your file and click Done. * If you want to add a connection to a data source, select Data access > Connection. Choose a connection and click Import. The data asset displays in the space and is available for use as an input data source in a deployment job. Note:Some types of connections allow for using your personal platform credentials. If you add a connection or connected data that uses your personal platform credentials, tick the Use my platform login credentials checkbox. Adding data to space programmatically If you are using APIs to create, update, or delete Watson Machine Learning assets, make sure that you are using only Watson Machine Learning [API calls](https://cloud.ibm.com/apidocs/machine-learning). 
For an example of how to add assets programmatically, refer to this sample notebook: [Use SPSS and batch deployment with Db2 to predict customer churn](https://github.com/IBM/watson-machine-learning-samples/blob/df8e5122a521638cb37245254fe35d3a18cd3f59/cloud/notebooks/python_sdk/deployments/spss/Use%20SPSS%20and%20batch%20deployment%20with%20DB2%20to%20predict%20customer%20churn.ipynb) Data source reference types in Watson Machine Learning Data source reference types are used in Watson Machine Learning requests to represent input data and results locations. Use data_asset and connection_asset for these types of data sources: * Cloud Object Storage * Db2 * Database data Notes: * For Decision Optimization, the reference type is url. Example data_asset payload {""input_data_references"": [{ ""type"": ""data_asset"", ""connection"": { }, ""location"": { ""href"": ""/v2/assets/?space_id="" } }]} Example connection_asset payload {""input_data_references"": [{ ""type"": ""connection_asset"", ""connection"": { ""id"": """" }, ""location"": { ""bucket"": """", ""file_name"": ""/"" } }]} For more information, see: * Watson Machine Learning [REST API](https://cloud.ibm.com/apidocs/machine-learning) Using data from the Cloud Object Storage service The Cloud Object Storage service can be used with deployment jobs through a connected data asset or a connection asset. To use data from the Cloud Object Storage service: 1. Create a connection to IBM Cloud Object Storage by adding a Connection to your project or space and selecting Cloud Object Storage (infrastructure) or Cloud Object Storage as the connector. Provide the secret key, access key, and login URL. Note: When you are creating a connection to Cloud Object Storage or Cloud Object Storage (Infrastructure), you must specify both access_key and secret_key. If access_key and secret_key are not specified, downloading the data from that connection doesn't work in a batch deployment job. For reference, see [IBM Cloud Object Storage connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cos.html) and [IBM Cloud Object Storage (infrastructure) connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cos-infra.html). 2. Add input and output files to the deployment space as connected data by using the Cloud Object Storage connection that you created. Parent topic:[Assets in deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets-all.html)",how-to,1,train
542F90CA456DCCC3D79DBF6DC9E8A6755B3BA69E,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/jython/clementine/python_stream_execution.html?context=cdpaas&locale=en,Running a flow,"Running a flow
Running a flow The following example runs all executable nodes in the flow, and is the simplest type of flow script: modeler.script.stream().runAll(None) The following example also runs all executable nodes in the flow: stream = modeler.script.stream() stream.runAll(None) In this example, the flow is stored in a variable called stream. Storing the flow in a variable is useful because a script is typically used to modify either the flow or the nodes within a flow. Creating a variable that stores the flow results in a more concise script.",how-to,1,train
1924AE74643C2D9D416204693C9BB84D5212E3B0,https://dataplatform.cloud.ibm.com/docs/content/wsd/tutorials/tut_ta_hotel_build.html?context=cdpaas&locale=en,Building and deploying the model (SPSS Modeler),"Building and deploying the model (SPSS Modeler)
Building and deploying the model 1. When your model is ready, click Generate a model to generate a text nugget. Figure 1. Generate a new model  Figure 2. Build a category model  2. If you instead want to save the Text Analytics Workbench session, click Return to flow and then Save and exit. Figure 3. Saving your session The generated text nugget appears on your flow canvas. Figure 4. Generated text nugget After the category model has been validated and generated in the Text Analytics Workbench, you can deploy it in your flow and score the same data set or score a new one. Figure 5. Example flow with two modes for scoring This example flow illustrates the two modes for scoring: * Categories as fields. With this option, there are just as many output records as there were in the input. However, each record now contains one new field for every category that was selected on the Model tab. For each field, enter a flag value for true and for false, such as True/False or 1/0. In this flow, values are set to 1 and 0 to aggregate results and count the number of positive, negative, mixed (both positive and negative), or no score (no opinion) answers. Figure 6. Model results - categories as fields  * Categories as records. With this option, a new record is created for each category/document pair. Typically, there are more records in the output than there were in the input. Along with the input fields, new fields are also added to the data depending on what kind of model it is. Figure 7. Model results - categories as records  3. You can add a Select node after the DeriveSentiment SuperNode, include Sentiments=Pos, and add a Charts node to gain quick insight about what guests appreciate about the hotel: Figure 8. Chart of positive opinions ",how-to,1,train
337CC5401082DFD6C8C79D49CD97F7BC197C7303,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/glmmnuggetnodeslots.html?context=cdpaas&locale=en,applyglmmnode properties,"applyglmmnode properties
applyglmmnode properties You can use GLMM modeling nodes to generate a GLMM model nugget. The scripting name of this model nugget is applyglmmnode. For more information on scripting the modeling node itself, see [glmmnode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/glmmnodeslots.htmlglmmnodeslots). applyglmmnode properties Table 1. applyglmmnode properties applyglmmnode Properties Values Property description confidence onProbability, onIncrease Basis for computing scoring confidence value: highest predicted probability, or difference between highest and second highest predicted probabilities. score_category_probabilities flag If set to True, produces the predicted probabilities for categorical targets. A field is created for each category. Default is False. max_categories integer Maximum number of categories for which to predict probabilities. Used only if score_category_probabilities is True. score_propensity flag If set to True, produces raw propensity scores (likelihood of ""True"" outcome) for models with flag targets. If partitions are in effect, also produces adjusted propensity scores based on the testing partition. Default is False. enable_sql_generation false, true, native Used to set SQL generation options during flow execution. The options are to push back to the database, or to score within SPSS Modeler.",conceptual,0,train
6068B2555E5014D386397335D0ED56B430082FF7,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/tmwb_dewindow.html?context=cdpaas&locale=en,The Resource editor tab (SPSS Modeler),"The Resource editor tab (SPSS Modeler)
The Resource editor tab Text Analytics rapidly and accurately captures key concepts from text data by using an extraction process. This process relies on linguistic resources to dictate how large amounts of unstructured, textual data should be analyzed and interpreted. You can use the Resource editor tab to view the linguistic resources used in the extraction process. These resources are stored in the form of templates and libraries, which are used to extract concepts, group them under types, discover patterns in the text data, and perform other processing. Text Analytics offers several preconfigured resource templates, and in some languages, you can also use the resources in text analysis packages. Figure 1. Resource editor tab ",conceptual,0,train
074C9BAEB0177E3CF57BAC36E5FCBD13063498A1,https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-create-use-case.html?context=cdpaas&locale=en,Governing assets in AI use cases,"Governing assets in AI use cases
Governing assets in AI use cases Create an AI use case to track and govern AI assets from request through production. Factsheets capture details about the asset for each stage of the AI lifecycle to help you meet governance and compliance goals. To learn about AI use cases, you can follow a tutorial in the Getting started with watsonx.governance sample project. Assets in the sample are prompt templates for a car insurance claim processing use case. The prompts use car insurance claims as input and then use large language models to help insurance agents process the claims. One prompt summarizes claims, another prompt extracts key information such as make and model, and the last prompt generates suggestions for the insurance agent. In Projects, start a new project, then choose to create a project from a sample. The project gallery includes the getting started sample.  When your project is ready, open the Readme for a step-by-step tutorial.  Get started with AI use cases Set up or work with AI use cases: * [Create an inventory](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-inventory-manage.html) for storing AI use cases * [Set up an AI use case](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-use-cases.html) * [Track assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-tracking-overview.html) in an AI use case * [View factsheets](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-factsheet-viewing.html) for tracked assets Parent topic:[Governing AI assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-overview.html)",conceptual,0,train
9F78EEC8E37DB19F2C3220F8E43029B2C5370B5D,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/properties_modelingnodes_container.html?context=cdpaas&locale=en,Modeling node properties,"Modeling node properties
Modeling node properties Refer to this section for a list of available properties for Modeling nodes.",conceptual,0,train
7C4F082004DBA0B946D64AA6C0127041F4622C7B,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/lsvmnodeslots.html?context=cdpaas&locale=en,lsvmnode properties,"lsvmnode properties
lsvmnode properties With the Linear Support Vector Machine (LSVM) node, you can classify data into one of two groups without overfitting. LSVM is linear and works well with wide data sets, such as those with a very large number of records. lsvmnode properties Table 1. lsvmnode properties lsvmnode Properties Values Property description intercept flag Includes the intercept in the model. Default value is True. target_order Ascending, Descending Specifies the sorting order for the categorical target. Ignored for continuous targets. Default is Ascending. precision number Used only if measurement level of target field is Continuous. Specifies the parameter related to the sensitivity of the loss for regression. Minimum is 0 and there is no maximum. Default value is 0.1. exclude_missing_values flag When True, a record is excluded if any single value is missing. The default value is False. penalty_function L1, L2 Specifies the type of penalty function used. The default value is L2. lambda number Penalty (regularization) parameter. calculate_variable_importance flag For models that produce an appropriate measure of importance, this option displays a chart that indicates the relative importance of each predictor in estimating the model. Note that variable importance may take longer to calculate for some models, particularly when working with large datasets, and is off by default for some models as a result. Variable importance is not available for decision list models.",conceptual,0,train
DE0C1913D6D770641762ED518FEFE8FFFC5A1F13,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/logreg.html?context=cdpaas&locale=en,Logistic node (SPSS Modeler),"Logistic node (SPSS Modeler)
Logistic node Logistic regression, also known as nominal regression, is a statistical technique for classifying records based on values of input fields. It is analogous to linear regression but takes a categorical target field instead of a numeric one. Both binomial models (for targets with two discrete categories) and multinomial models (for targets with more than two categories) are supported. Logistic regression works by building a set of equations that relate the input field values to the probabilities associated with each of the output field categories. After the model is generated, you can use it to estimate probabilities for new data. For each record, a probability of membership is computed for each possible output category. The target category with the highest probability is assigned as the predicted output value for that record. Binomial example. A telecommunications provider is concerned about the number of customers it is losing to competitors. Using service usage data, you can create a binomial model to predict which customers are liable to transfer to another provider and customize offers so as to retain as many customers as possible. A binomial model is used because the target has two distinct categories (likely to transfer or not). Note: For binomial models only, string fields are limited to eight characters. If necessary, longer strings can be recoded using a Reclassify node or by using the Anonymize node. Multinomial example. A telecommunications provider has segmented its customer base by service usage patterns, categorizing the customers into four groups. Using demographic data to predict group membership, you can create a multinomial model to classify prospective customers into groups and then customize offers for individual customers. Requirements. One or more input fields and exactly one categorical target field with two or more categories. For a binomial model the target must have a measurement level of Flag. For a multinomial model the target can have a measurement level of Flag, or of Nominal with two or more categories. Fields set to Both or None are ignored. Fields used in the model must have their types fully instantiated. Strengths. Logistic regression models are often quite accurate. They can handle symbolic and numeric input fields. They can give predicted probabilities for all target categories so that a second-best guess can easily be identified. Logistic models are most effective when group membership is a truly categorical field; if group membership is based on values of a continuous range field (for example, high IQ versus low IQ), you should consider using linear regression to take advantage of the richer information offered by the full range of values. Logistic models can also perform automatic field selection, although other approaches such as tree models or Feature Selection might do this more quickly on large datasets. Finally, since logistic models are well understood by many analysts and data miners, they may be used by some as a baseline against which other modeling techniques can be compared. When processing large datasets, you can improve performance noticeably by disabling the likelihood-ratio test, an advanced output option.",conceptual,0,train
B23F48A4757500FEA641245CFFA69CB3B72AE0E8,https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-ap.html?context=cdpaas&locale=en,Creating a time series anomaly prediction (Beta),"Creating a time series anomaly prediction (Beta)
Creating a time series anomaly prediction (Beta) Create a time series anomaly prediction experiment to train a model that can detect anomalies, or unexpected results, when the model predicts results based on new data. Tech preview This is a technology preview and is not yet supported for use in production environments. Detecting anomalies in predictions You can use anomaly prediction to find outliers in model predictions. Consider the following scenarios for training a time series model with anomaly prediction. For example, suppose you have operational metrics from monitoring devices that were collected in the date range of 2022.1.1 through 2022.3.31. You are confident that no anomalies exist in the data for that period, even if the data is unlabeled. You can use a time series anomaly prediction experiment to: * Train model candidate pipelines and auto-select the top-ranked model candidate * Deploy a selected model to predict whether: * A new time point is an anomaly (for example, an online score predicts a time point 2022.4.1 that is outside of the expected range) * A new time range has anomalies (for example, a batch score predicts values of 2022.4.1 to 2022.4.7, outside the expected range) Working with a sample To create an AutoAI Time series experiment with anomaly prediction that uses a sample: 1. Create an AutoAI experiment. 2. Select Samples.  3. Click the tile for Electricity usage anomalies sample data. 4. Follow the prompts to configure and run the experiment.  5. Review the details about the pipelines and explore the visualizations. Configuring a time series experiment with anomaly prediction 1. Load the data for your experiment. Restriction: You can upload only a single data file for an anomaly prediction experiment. If you upload a second data file (for holdout data), the Anomaly prediction option is disabled, and only the Forecast option is available. By default, Anomaly prediction experiments use a subset of the training data for validation. 2. Click Yes to Enable time series. 3. Select Anomaly prediction as the experiment type. 4. Configure the feature columns from the data source that you want to predict based on the previous values. You can specify one or more columns to predict. 5. Select the date/time column. The prediction summary shows you the experiment type and the metric that is selected for optimizing the experiment. Configuring experiment settings To configure more details for your time series experiment, open the Experiment settings pane. Options that do not apply to anomaly prediction experiments are disabled. General prediction settings On the General panel for prediction settings, configure details for training the experiment. Field Description Prediction type View or change the prediction type based on the prediction column for your experiment. For time series experiments, Time series anomaly prediction is selected by default. Note: If you change the prediction type, other prediction settings for your experiment are automatically changed. Optimized metric Choose a metric for optimizing and ranking the pipelines. Optimized algorithm selection Not supported for time series experiments. Algorithms to include Select the [algorithms](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-ap.html?context=cdpaas&locale=enimplementation) that you want your experiment to use to create pipelines. The listed algorithms support anomaly prediction. Pipelines to complete View or change the number of pipelines to generate for your experiment. 
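As an illustration of the single training file that an anomaly prediction experiment expects, the following sketch loads a table that has one date/time column and one or more numeric feature columns to predict. The file name and column names are placeholders, not part of the product.

import pandas as pd

# Placeholder file: operational metrics collected from 2022.1.1 through 2022.3.31,
# assumed to contain no anomalies
df = pd.read_csv('device_metrics.csv', parse_dates=['timestamp'])

print(df.dtypes)   # 'timestamp' is the date/time column to select in the experiment
print(df.head())   # the remaining numeric columns are candidates for the columns to predict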
Time series configuration details On the Time series pane for prediction settings, configure the details for how to train the experiment and generate predictions. Field Description Date/time column View or change the date/time column for the experiment. Lookback window Not supported for anomaly prediction. Forecast window Not supported for anomaly prediction. Configuring data source settings To configure details for your input data, open the Experiment settings panel and select the Data source. General data source settings On the General panel for data source settings, you can choose options for how to use your experiment data. Field Description Duplicate rows Not supported for time series anomaly prediction experiments. Subsample data Not supported for time series anomaly prediction experiments. Text feature engineering Not supported for time series anomaly prediction experiments. Final training data set Anomaly prediction uses a single data source file, which is the final training data set. Supporting features Not supported for time series anomaly prediction experiments. Data imputation Not supported for time series anomaly prediction experiments. Training and holdout data Anomaly prediction does not support a separate holdout file. You can adjust how the data is split between training and holdout data. Note: In some cases, AutoAI can overwrite your holdout settings to ensure the split is valid for the experiment. In this case, you see a notification and the change is noted in the log file. Reviewing the experiment results When you run the experiment, the progress indicator displays the pathways to pipeline creation. Ranked pipelines are listed on the leaderboard. Pipeline score represents how well the pipeline performed for the optimizing metric.",how-to,1,train
91B834E69C2153740973C59CF6B4D66260640342,https://dataplatform.cloud.ibm.com/docs/content/dataview/chart_creation_dendrogram.html?context=cdpaas&locale=en,Dendrogram charts,"Dendrogram charts
Dendrogram charts Dendrogram charts are similar to tree charts and are typically used to illustrate a network structure (for example, a hierarchical structure). Dendrogram charts consist of a root node that is connected to subordinate nodes through edges or branches. The last nodes in the hierarchy are called leaves.",conceptual,0,train
B0DA6CD45BFD0D3A91F0B3C4E7615DE23FE4F350,https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/set-up-ws.html?context=cdpaas&locale=en,Setting up the Watson Studio and Watson Machine Learning services,"Setting up the Watson Studio and Watson Machine Learning services
Setting up the Watson Studio and Watson Machine Learning services The Watson Studio and Watson Machine Learning services are provisioned automatically with a Lite plan when you sign up for IBM watsonx. To set up Watson Studio and Watson Machine Learning for an organization, you upgrade the service plans and allow the node IP addresses access through the firewall. To set up the Watson Studio and Watson Machine Learning services, complete these tasks: 1. [Upgrade the services](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/set-up-ws.html?context=cdpaas&locale=enupgrade). 2. [Allow IP addresses](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/set-up-ws.html?context=cdpaas&locale=ennode-ips). Step 1: Upgrade the services to the appropriate plans Required roles : You must be the IBM Cloud account Owner or Administrator. To upgrade the services: 1. Determine the Watson Studio service plan that you need. The features and compute resources of Watson Studio vary across the service plans. See [Watson Studio service plans](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/ws-plans.html). 2. While logged in to IBM watsonx, from the main menu, click Administration > Services > Service instances. 3. Click the menu next to the Watson Studio service and choose Upgrade service. 4. Choose the plan you want and click Upgrade. 5. Repeat the steps for the Watson Machine Learning service. The resources and number of deployment jobs vary across the Watson Machine Learning service plans. See [Watson Machine Learning service plans](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wml-plans.html). Make sure that object storage is configured to allow these users to create catalogs and projects. See [Setting up IBM Cloud Object Storage for use with IBM watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_cos.htmlcos-delegation). All users in your IBM Cloud account with the Editor IAM platform access role for all IAM enabled services can now create projects and use all the Watson Studio and Watson Machine Learning tools. Step 2: Allow IP addresses for Watson Studio for your region The IP addresses for the Watson Studio nodes in each region must be configured as allowed IP addresses for the IBM Cloud account. When allowing specific IP addresses for Watson Studio, you include the CIDR ranges for the Watson Studio nodes in each region to allow a secure connection through the firewall. Required roles : You must have the Editor or higher IBM Cloud IAM Platform role to allow IP addresses. First look up the CIDR blocks in IBM watsonx, and then enter them into the Access (IAM) > Settings screen in IBM Cloud. Follow these steps: 1. From the IBM watsonx main menu, select Administration > Cloud integrations. 2. Click Firewall configuration to display the IP addresses for the current region. 3. Select Show IP ranges in CIDR notation. 4. Click the icon to copy a CIDR block to the clipboard. 5. Enter the CIDR block of IP addresses into Access (IAM) > Settings > Restrict IP address access > Allowed IP addresses for the IBM Cloud account. 6. Click Save. 7. Repeat for each CIDR block until all are entered. 8. Repeat for each region. For step-by-step instructions, see [IBM Cloud docs: Allowing specific IP addresses](https://cloud.ibm.com/docs/account?topic=account-ips). 
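As an optional check, you can confirm that a given node IP address falls within the CIDR blocks that you allowed by testing it with a short script. The following sketch uses the Python standard library ipaddress module; the CIDR blocks and the IP address are placeholder values, not the actual Watson Studio ranges.

import ipaddress

# Placeholder values: substitute the CIDR blocks copied from the Firewall configuration page
allowed_blocks = ['192.0.2.0/24', '198.51.100.0/22']
node_ip = ipaddress.ip_address('198.51.100.17')

# Report whether the node IP address is covered by any allowed CIDR block
if any(node_ip in ipaddress.ip_network(block) for block in allowed_blocks):
    print('Covered by the allowed CIDR blocks')
else:
    print('Not covered; add the missing CIDR block to the allow list')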
Next steps Finish the remaining steps for [setting up the platform](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/setup-platform.html). Parent topic:[Setting up the platform for administrators](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/setup-platform.html)",how-to,1,train
88BAC0DA2CCB09C93C0013A209147CC5A5DCEE68,https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/admin-apikeys.html?context=cdpaas&locale=en,Managing the user API key,"Managing the user API key
Managing the user API key Certain operations in IBM watsonx require an API key for secure authorization. You can generate and rotate a user API key as needed to help ensure your operations run smoothly. User API key overview Operations running within services in IBM watsonx require credentials for secure authorization. These operations use an API key for authorization. A valid API key is required for many long-running tasks, including the following: * Model training in Watson Machine Learning * Problem solving with Decision Optimization * Data transformation with DataStage flows * Other runtime services (for example, Data Refinery and Pipelines) that accept API key references Both scheduled and ad hoc jobs require an API key for authorization. An API key is used for jobs when: * Creating a job schedule with a predefined key * Updating the API key for a scheduled job * Providing an API key for an ad hoc job User API keys give control to the account owner to secure and renew credentials, thus helping to ensure operations run without interruption. Keys are unique to the IBMid and account. If you change the account you are working in, you must generate a new key. Active and Phased out keys When you create an API key, it is placed in Active state. The Active key is used for authorization for operations in IBM watsonx. When you rotate a key, a new key is created in Active state and the existing key is changed to Phased out state. A Phased out key is not used for authorization and can be deleted. Viewing the current API key Click your avatar and select Profile and settings to open your account profile. Select User API key to view the Active and Phased out keys. Creating an API key If you do not have an API key, you can create a key by clicking Create a key. A new key is created in Active state. The key automatically authorizes operations that require a secure credential. The key is stored in both IBM Cloud and IBM watsonx. You can view the API keys for your IBM Cloud account at [API keys](https://cloud.ibm.com/iam/apikeys). User API Keys take the form cpd-apikey-{username}-{timeStamp}, where username is the IBMid of the account owner and timestamp indicates when the key was created. Rotating an API key If the API key becomes stale or invalid, you can generate a new Active key for use by all operations. To rotate a key, click Rotate. A new key is created to replace the current key. The rotated key is placed in Phased out status. A Phased out key is not available for use. Deleting a phased out API key When you are certain the phased out key is no longer needed for operations, click the minus sign to delete it. Deleting keys might cause running operations to fail. Deleting all API keys Delete all keys (both Active and Phased out) by clicking the trash can. Deleting keys might cause running operations to fail. Learn more * [Creating and managing jobs in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/jobs.html) * [Adding task credentials](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/task-credentials.html) * [Understanding API keys](https://cloud.ibm.com/docs/account?topic=account-manapikey&interface=ui) Parent topic:[Administering your accounts and services](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/administer-accounts.html)",how-to,1,train
4E83416B551F557D5BDA600450E6CCB7742EB51D,https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-fm-notebook.html?context=cdpaas&locale=en,Quick start: Prompt a foundation model with the retrieval-augmented generation pattern,"Quick start: Prompt a foundation model with the retrieval-augmented generation pattern
Quick start: Prompt a foundation model with the retrieval-augmented generation pattern Take this tutorial to learn how to use foundation models in IBM watsonx.ai to generate factually accurate output grounded in information in a knowledge base by applying the retrieval-augmented generation pattern. Foundation models can generate output that is factually inaccurate for a variety of reasons. One way to improve the accuracy of generated output is to provide the needed facts as context in your prompt text. This tutorial uses a sample notebook that applies the retrieval-augmented generation pattern to improve the accuracy of the generated output. Required services : Watson Studio : Watson Machine Learning Your basic workflow includes these tasks: 1. Open a project. Projects are where you can collaborate with others to work with data. 2. Add a notebook to your project. You can create your own notebook, or add a [sample notebook](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=notebook) to your project. 3. Add and edit code, then run the notebook. 4. Review the notebook output. Read about the retrieval-augmented generation pattern You can scale out the technique of including context in your prompts by leveraging information in a knowledge base. The retrieval-augmented generation pattern involves three basic steps: * Search for relevant content in your knowledge base * Pull the most relevant content into your prompt as context * Send the combined prompt text to the model to generate output [Read more about the retrieval-augmented generation pattern](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-rag.html?context=wx) Watch a video about using the retrieval-augmented generation pattern  Watch this video to preview the steps in this tutorial. There might be slight differences in the user interface shown in the video. The video is intended to be a companion to the written tutorial. This video provides a visual method to learn the concepts and tasks in this documentation. Try a tutorial to prompt a foundation model with the retrieval-augmented generation pattern In this tutorial, you will complete these tasks: * [Task 1: Open a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-fm-notebook.html?context=cdpaas&locale=enstep01) * [Task 2: Add a sample notebook to your project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-fm-notebook.html?context=cdpaas&locale=enstep02) * [Task 3: Edit the notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-fm-notebook.html?context=cdpaas&locale=enstep03) * [Task 4: Run the notebook and review the output](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-fm-notebook.html?context=cdpaas&locale=enstep04) * Tips for completing this tutorial ### Use the video picture-in-picture Tip: Start the video, then as you scroll through the tutorial, the video moves to picture-in-picture mode. Close the video table of contents for the best experience with picture-in-picture. You can use picture-in-picture mode so you can follow the video as you complete the tasks in this tutorial. 
Click the timestamps for each task to follow along. The following animated image shows how to use the video picture-in-picture and table of contents features: ### Get help in the community If you need help with this tutorial, you can ask a question or find an answer in the [watsonx.ai Community discussion forum](https://community.ibm.com/community/user/watsonx/communities/community-home/digestviewer?communitykey=81927b7e-9a92-4236-a0e0-018a27c4ad6e). ### Set up your browser windows For the optimal experience completing this tutorial, open Cloud Pak for Data in one browser window, and keep this tutorial page open in another browser window to switch easily between the two applications. Consider arranging the two browser windows side-by-side to make it easier to follow along. Tip: If you encounter a guided tour while completing this tutorial in the user interface, click Maybe later. [Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-fm-notebook.html?context=cdpaas&locale=envideo-preview) * Task 1: Open a project You need a project to store the sample notebook. Watch a video to see how to create a sandbox project and associate a service. Then follow the steps to verify that you have an existing project or create a sandbox project. This video provides a visual method to learn the concepts and tasks in this documentation. Follow the steps to verify that you have an existing project or create a project. 1. From the watsonx home screen, scroll to the Projects section. If you see any projects listed, then skip to [Associate the Watson Machine Learning service](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-fm-notebook.html?context=cdpaas&locale=enassociate). If you don't see any projects, then follow these steps to create a project. 1. Click Create a sandbox project. When the project is created, you will see the sandbox in the Projects section. 1. Open an existing project or the new sandbox project. Associate the Watson Machine Learning service with the project You will use Watson Machine Learning to prompt the foundation model, so follow these steps to associate your Watson Machine Learning service instance with your project. 1. In the project, click the Manage tab. 1. Click the Services & Integrations page. 1. Check if this project has an associated Watson Machine Learning service. If there is no associated service, then follow these steps: 1. Click Associate service. 1. Check the box next to your Watson Machine Learning service instance. 1. Click Associate. 1. If necessary, click Cancel to return",how-to,1,train
09BB38FB6DF4C562A478D6D3DC54D22823F922FB,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/_nodes_record_operations.html?context=cdpaas&locale=en,Record Operations nodes (SPSS Modeler),"Record Operations nodes (SPSS Modeler)
Record Operations Record Operations nodes are useful for making changes to data at the record level. These operations are important during the data understanding and data preparation phases of data mining because they allow you to tailor the data to your particular business need. For example, based on the results of a data audit conducted using the Data Audit node (Outputs palette), you might decide that you would like to merge customer purchase records for the past three months. Using a Merge node, you can merge records based on the values of a key field, such as Customer ID. Or you might discover that a database containing information about web site hits is unmanageable with over one million records. Using a Sample node, you can select a subset of data for use in modeling.",conceptual,0,train
A6D3281CF9382FA606CF60727452A304A5CCDFA5,https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html?context=cdpaas&locale=en,Adding platform connections,"Adding platform connections
Adding platform connections You can add connections to the Platform assets catalog to share them across your organization. All collaborators in the Platform assets catalog can see the connections in the catalog. However, only users with the credentials for the data source can use a platform connection in a project to create a connected data asset. Required permissions : To create a platform connection, you must be a collaborator in the Platform assets catalog with one of these roles: * Editor * Admin If you're not a collaborator in the Platform assets catalog, ask someone who is a collaborator to add you or tell you who has the Admin role in the catalog. You create connections to these types of data sources: * IBM Cloud services * Other cloud services * On-premises databases See [Connectors](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html) for a full list of data sources. Watch this video to see how to add platform connections. Video disclaimer: Some minor steps and graphical elements in this video might differ from your platform. This video provides a visual method to learn the concepts and tasks in this documentation. To create a platform connection: 1. From the main menu, choose Data > Platform connections. 2. Click New connection. 3. Choose a data source. 4. If necessary, enter the connection information required for your data source. Typically, you need to provide information like the host, port number, username, and password. 5. If prompted, specify whether you want to use personal or shared credentials. You cannot change this option after you create the connection. The credentials type for the connection, either Personal or Shared, is set by the account owner on the [Account page](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/account-settings.html). The default setting is Shared. * Personal: With personal credentials, each user must specify their own credentials to access the connection. Each user's credentials are saved but are not shared with any other users. Use personal credentials instead of shared credentials to protect credentials. For example, if you use personal credentials and another user changes the connection properties (such as the hostname or port number), the credentials are invalidated to prevent malicious redirection. * Shared: With shared credentials, all users access the connection with the credentials that you provide. 6. To connect to a database that is not externalized to the internet (for example, behind a firewall), see [Connecting to data behind a firewall](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html). 7. Click Create. The connection appears on the Connections page. You can edit the connection by clicking the connection name. Alternatively, you can create a connection in a project and then publish it to the Platform assets catalog. To publish a connection from a project to the Platform assets catalog: 1. Locate the connection in the project's Assets tab in the Data assets section. 2. From the Actions menu (), select Publish to catalog. 3. Select Platform assets catalog and click Publish. 
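If you prefer to create the project connection programmatically before publishing it to the Platform assets catalog, the following sketch shows one possible approach with the Watson Machine Learning Python client. It is illustrative only: the credentials, project ID, data source name, and connection properties are placeholders, and the package and method names should be confirmed against the current client documentation; publishing to the catalog is then done from the project as described above.

from ibm_watson_machine_learning import APIClient

# Placeholder credentials and project ID
credentials = {'url': 'https://us-south.ml.cloud.ibm.com', 'apikey': '<your IBM Cloud API key>'}
client = APIClient(credentials)
client.set.default_project('<project_id>')

# Look up the data source type for the connector, for example PostgreSQL
datasource_type = client.connections.get_datasource_type_uid_by_name('postgresql')

# Typical connection information: host, port, database, username, and password
meta_props = {
    client.connections.ConfigurationMetaNames.NAME: 'My PostgreSQL connection',
    client.connections.ConfigurationMetaNames.DATASOURCE_TYPE: datasource_type,
    client.connections.ConfigurationMetaNames.PROPERTIES: {
        'host': '<hostname>',
        'port': '5432',
        'database': '<database>',
        'username': '<username>',
        'password': '<password>'
    }
}
connection_details = client.connections.create(meta_props)
print(connection_details)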
Next step * [Add a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html) Learn more * [Connectors](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html) * [Creating the Platform assets catalog](https://dataplatform.cloud.ibm.com/docs/content/wsj/catalog/platform-assets.html) * [Set the credentials for connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/account-settings.htmlset-the-credentials-for-connections) Parent topic:[Preparing data](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/get-data.html)",how-to,1,train
EED66538A3E4854D56210AB1D6AC49016F1E40A2,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/timeser_as_nodeslots_streaming.html?context=cdpaas&locale=en,streamingtimeseries properties,"streamingtimeseries properties
streamingtimeseries properties The Streaming Time Series node builds and scores time series models in one step. streamingtimeseries properties Table 1. streamingtimeseries properties streamingtimeseries properties Values Property description targets field The Streaming TS node forecasts one or more targets, optionally using one or more input fields as predictors. Frequency and weight fields aren't used. See [Common modeling node properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/modelingnodeslots_common.htmlmodelingnodeslots_common) for more information. candidate_inputs [field1 ... fieldN] Input or predictor fields used by the model. use_period flag date_time_field field input_interval NoneUnknownYearQuarterMonthWeekDayHourHour_nonperiodMinuteMinute_nonperiodSecondSecond_nonperiod period_field field period_start_value integer num_days_per_week integer start_day_of_week SundayMondayTuesdayWednesdayThursdayFridaySaturday num_hours_per_day integer start_hour_of_day integer timestamp_increments integer cyclic_increments integer cyclic_periods list output_interval NoneYearQuarterMonthWeekDayHourMinuteSecond is_same_interval flag cross_hour flag aggregate_and_distribute list aggregate_default MeanSumModeMinMax distribute_default MeanSum group_default MeanSumModeMinMax missing_imput Linear_interpSeries_meanK_meanK_medianLinear_trend k_span_points integer use_estimation_period flag estimation_period ObservationsTimes date_estimation list Only available if you use date_time_field. period_estimation list Only available if you use use_period. observations_type LatestEarliest observations_num integer observations_exclude integer method ExpertModelerExsmoothArima expert_modeler_method ExpertModelerExsmoothArima consider_seasonal flag detect_outliers flag expert_outlier_additive flag expert_outlier_innovational flag expert_outlier_level_shift flag expert_outlier_transient flag expert_outlier_seasonal_additive flag expert_outlier_local_trend flag expert_outlier_additive_patch flag consider_newesmodels flag exsmooth_model_type SimpleHoltsLinearTrendBrownsLinearTrendDampedTrendSimpleSeasonalWintersAdditiveWintersMultiplicativeDampedTrendAdditiveDampedTrendMultiplicativeMultiplicativeTrendAdditiveMultiplicativeSeasonalMultiplicativeTrendMultiplicativeMultiplicativeTrend futureValue_type_method Computespecify exsmooth_transformation_type NoneSquareRootNaturalLog arima.p integer arima.d integer arima.q integer arima.sp integer arima.sd integer arima.sq integer arima_transformation_type NoneSquareRootNaturalLog arima_include_constant flag tf_arima.p.fieldname integer For transfer functions. tf_arima.d.fieldname integer For transfer functions. tf_arima.q.fieldname integer For transfer functions. tf_arima.sp.fieldname integer For transfer functions. tf_arima.sd.fieldname integer For transfer functions. tf_arima.sq.fieldname integer For transfer functions. tf_arima.delay.fieldname integer For transfer functions. tf_arima.transformation_type.fieldname NoneSquareRootNaturalLog For transfer functions. arima_detect_outliers flag arima_outlier_additive flag arima_outlier_level_shift flag arima_outlier_innovational flag arima_outlier_transient flag arima_outlier_seasonal_additive flag arima_outlier_local_trend flag arima_outlier_additive_patch flag conf_limit_pct real events fields forecastperiods integer extend_records_into_future flag conf_limits flag noise_res flag max_models_output integer Specify the maximum number of models you want to include in the output. 
Note that if the number of models built exceeds this threshold, the models aren't shown in the output but they're still available for scoring. Default value is 10. Displaying a large number of models may result in poor performance or instability. custom_fields boolean This option tells the node to use the field information specified here instead of that given in any upstream Type node(s). After selecting this option, specify the following fields as required. arima array A list with p, d, q, sp, sd, sq. tf_arima array A list with name, p, q, d, sp, sq, sd, delay and type.",conceptual,0,train
27DB2218237B89F557D3702F4270288E4460E9CB,https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/setup-platform.html?context=cdpaas&locale=en,Setting up the IBM watsonx platform for administrators,"Setting up the IBM watsonx platform for administrators
Setting up the IBM watsonx platform for administrators To set up the watsonx platform for your organization, sign up for IBM watsonx.ai, upgrade to a paid plan, set up the services that you need, and add your users with the appropriate permissions. IBM watsonx.ai on the watsonx platform includes cloud-based services that provide data preparation, data science, and AI modeling capabilities. The watsonx platform is protected by the same powerful security constraints that are available on IBM Cloud. Table 1. Configuration steps for IBM watsonx Task Location Required Role Description [Set up the IBM Cloud account](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/setup-account.html) IBM Cloud Account Owner Set up a paid account. [Manage users and access](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/setup-access.html) IBM Cloud Administrator Invite users to join the account, create user access groups, and assign roles or access groups to users to provide access. [Set up IBM Cloud Object Storage for use with IBM watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_cos.html) IBM Cloud and IBM watsonx Administrator Create a test project to initialize IBM Cloud Object Storage and set the location to Global in each user's profile. [Set up the Watson Studio and Watson Machine Learning services](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/set-up-ws.html) IBM Cloud and IBM watsonx Administrator Upgrade to a paid plan. [Create the Platform assets catalog](https://dataplatform.cloud.ibm.com/docs/content/wsj/catalog/platform-assets.html) IBM watsonx Administrator or Manager role for the Cloud Pak for Data service Add connections to the platform assets catalog for use by collaborators. [Set up watsonx.governance](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-setup-wos.html) IBM Cloud and IBM watsonx Administrator or Editor Create access policies and assign roles to users. [Configure firewall access](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/firewall_ovrvw.html) (if necessary) IBM watsonx and cloud provider firewall configuration Administrator Configure inbound access through a firewall. Optional. [Configure security mechanisms](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-overview.html) IBM Cloud Administrator IBM watsonx has five security levels to ensure that data, application endpoints, and identity are protected. For a list of common security mechanisms, see [Common security mechanisms](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/setup-platform.html?context=cdpaas&locale=ensecurity). Optional. [Connect to data behind a firewall](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html) IBM Cloud Administrator Securely connect to databases that are hosted behind a firewall. Optional. [Configure integrations with other cloud platforms](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/int-cloud.html) IBM Cloud and IBM watsonx Administrator Connect to services on other cloud platforms. Common security mechanisms As an IBM Cloud account owner or administrator, you set up security for the account by providing single sign-on, IAM role-based access control, secure communication, and other security constraints. Following are common security mechanisms for the IBM watsonx platform: * Encrypt your instance with your own key. 
See [Encrypt your IBM Cloud Object Storage instance with your own key](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_cos.htmlbyok). * Use IBM Key Protect to encrypt key data assets in Cloud Object Storage. See [Encrypting at rest data](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-data.htmlencrypting-at-rest-data). * Support single sign-on using SAML federation or Active Directory. See [SSO with Federated IDs](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-account.htmlsso-with-federated-ids). * Configure secure connections to databases that are behind a firewall. See [Connecting to data behind a firewall](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html) * Configure secure communication between services with Service Endpoints. See [Private network service endpoints](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-network.htmlprivate-network-service-endpoints). * Control access at the IP address level. See [Allow specific IP addresses](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-network.htmlallow-specific-ip-addresses). * Require personal credentials when creating connections. The default setting is shared credentials. See [Managing your account settings](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/account-settings.htmlset-the-credentials-for-connections). Learn more * HIPAA readiness is available for some regions and plans. See [HIPAA readiness](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security.htmlhipaa). * See [Security for IBM watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-overview.html) for a complete list of security constraints available in IBM watsonx. * See [Overview of watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/overview-wx.html) to understand the architecture of the platform. Parent topic:[Getting started](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-wdp.html)",how-to,1,train
1D1659B46A454170A597B0450FD99C16EEC5B1AD,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_function_ref_bitwise.html?context=cdpaas&locale=en,Bitwise integer operations (SPSS Modeler),"Bitwise integer operations (SPSS Modeler)
Bitwise integer operations These functions enable integers to be manipulated as bit patterns representing two's-complement values, where bit position N has weight 2^N. Bits are numbered from 0 upward. These operations act as though the sign bit of an integer is extended indefinitely to the left. Thus, everywhere above its most significant bit, a positive integer has 0 bits and a negative integer has 1 bit. CLEM bitwise integer operations Table 1. CLEM bitwise integer operations Function Result Description ~~ INT1 Integer Produces the bitwise complement of the integer INT1. That is, there is a 1 in the result for each bit position for which INT1 has 0. It is always true that ~~ INT = –(INT + 1). INT1 || INT2 Integer The result of this operation is the bitwise ""inclusive or"" of INT1 and INT2. That is, there is a 1 in the result for each bit position for which there is a 1 in either INT1 or INT2 or both. INT1 ||/& INT2 Integer The result of this operation is the bitwise ""exclusive or"" of INT1 and INT2. That is, there is a 1 in the result for each bit position for which there is a 1 in either INT1 or INT2 but not in both. INT1 && INT2 Integer Produces the bitwise ""and"" of the integers INT1 and INT2. That is, there is a 1 in the result for each bit position for which there is a 1 in both INT1 and INT2. INT1 &&~~ INT2 Integer Produces the bitwise ""and"" of INT1 and the bitwise complement of INT2. That is, there is a 1 in the result for each bit position for which there is a 1 in INT1 and a 0 in INT2. This is the same as INT1 && (~~INT2) and is useful for clearing bits of INT1 set in INT2. INT << N Integer Produces the bit pattern of INT1 shifted left by N positions. A negative value for N produces a right shift. INT >> N Integer Produces the bit pattern of INT1 shifted right by N positions. A negative value for N produces a left shift. INT1 &&=_0 INT2 Boolean Equivalent to the Boolean expression INT1 && INT2 /== 0 but is more efficient. INT1 &&/=_0 INT2 Boolean Equivalent to the Boolean expression INT1 && INT2 == 0 but is more efficient. integer_bitcount(INT) Integer Counts the number of 1 or 0 bits in the two's-complement representation of INT. If INT is non-negative, N is the number of 1 bits. If INT is negative, it is the number of 0 bits. Owing to the sign extension, there are an infinite number of 0 bits in a non-negative integer or 1 bits in a negative integer. It is always the case that integer_bitcount(INT) = integer_bitcount(-(INT+1)). integer_leastbit(INT) Integer Returns the bit position N of the least-significant bit set in the integer INT. N is the highest power of 2 by which INT divides exactly. integer_length(INT) Integer Returns the length in bits of INT as a two's-complement integer. That is, N is the smallest integer such that INT < (1 << N) if INT >= 0, and INT >= (–1 << N) if INT < 0. If INT is non-negative, then the representation of INT as an unsigned integer requires a field of at least N bits. Alternatively, a minimum of N+1 bits is required to represent INT as a signed integer, regardless of its sign. testbit(INT, N) Boolean Tests the bit at position N in the integer INT and returns the state of bit N as a Boolean value, which is true for 1 and false for 0.",conceptual,0,train
E73062F1E8466AB5604358A0AD0D66F31C81507C,https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-az3-tempcreds.html?context=cdpaas&locale=en,Setting up temporary credentials or a Role ARN for Amazon S3,"Setting up temporary credentials or a Role ARN for Amazon S3
Setting up temporary credentials or a Role ARN for Amazon S3 Instead of adding another IAM user to your Amazon S3 account, you can grant a user access with temporary security credentials and a Session token. Or, you can create a Role ARN (Amazon Resource Name) and then grant permission to that role to access the account. The trusted user can then use the role. You can assign role policies to the temporary credentials to limit the permissions. For example, you can assign read-only access or access to a particular S3 bucket. Prerequisite: You must be the IAM owner of the Amazon S3 account. You can set up one of the following authentication combinations: * Access key, Secret key, and Session token * Access key, Secret key, Role ARN, Role session name, and optional Duration seconds * Access key, Secret key, Role ARN, Role session name, External ID, and optional Duration seconds Access key, Secret key, and Session token Use the AWS Security Token Service (AWS STS) operations in the AWS API to obtain temporary security credentials. These credentials consist of an Access key, a Secret key, and a Session token that expires within a configurable amount of time. For instructions, see the AWS documentation: [Requesting temporary security credentials](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_request.html). Access key, Secret key, Role ARN, Role session name, and optional Duration seconds If someone else has their own S3 account, you can create a temporary role for that person to access your S3 account. Create the role either with the AWS Management Console or the AWS CLI. See [Creating a role to delegate permissions to an IAM user](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-user.html). The Role ARN is the Amazon Resource Name for the connection's role. The Role session name identifies the session to S3 administrators. For example, your IAM username. The Duration seconds parameter is optional. The minimum is 15 minutes. The maximum is 36 hours, and the default is 1 hour. The duration seconds timer starts every time that the connection is established. You then provide values for the Access key, Secret key, Role ARN, Role session name, and optional Duration seconds to the user who will create the connection. Access key, Secret key, Role ARN, Role session name, External ID, and optional Duration seconds If someone else has their own S3 account, you can create a temporary role for that person to access your S3 account. With this combination, the External ID is a unique string that you specify and that the user must enter for extra security. First, create the role either with the AWS Management Console or the AWS CLI. See [Creating a role to delegate permissions to an IAM user](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-user.html). To create the External ID, see [How to use an external ID when granting access to your AWS resources to a third party](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-user_externalid.html). You then provide the values for the Access key, Secret key, Role ARN, Role session name, External ID, and optional Duration seconds to the user who will create the connection. Learn more [Amazon Resource Names (ARNs)](https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html) Parent topic:[Amazon S3 connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-amazon-s3.html)",how-to,1,train
342AD3ABFEECA87987ED595047CC869E15F148BF,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/tmwb_generate_model.html?context=cdpaas&locale=en,Generating a model nugget (SPSS Modeler),"Generating a model nugget (SPSS Modeler)
Generating a model nugget When you're working in the Text Analytics Workbench, you may want to use the work you've done to generate a category model nugget. A model generated from a Text Analytics Workbench session is a category model nugget. You must first have at least one category before you can generate a category model nugget.",how-to,1,train
FD32E17FF88251CDFC3FA01A1AD8EEBDA98EDA06,https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-access-detailed-info.html?context=cdpaas&locale=en,Accessing asset details,"Accessing asset details
Accessing asset details Display details about an asset and preview data assets in a deployment space. To display details about the asset, click the asset name. For example, click a model name to view details such as the associated software and hardware specifications, the model creation date, and more. Some details, such as the model name, description, and tags, are editable. For data assets, you can also preview the data. Previewing data assets To preview a data asset, click the data asset name. * User's access to the data is based on the API layer. This means that if user's bearer token allows for viewing data, the data preview is displayed. * For tabular data, only a subset of the data is displayed. Also, column names are displayed but their data types are not inferred. * For data in XLS files, only the first worksheet is displayed for preview. * All data from Cloud Object Storage connectors is assumed to be tabular data. MIME types supported for preview: Format Mime types Image image/bmp, image/cmu-raster, image/fif, image/florian, image/g3fax, image/gif, image/ief, image/jpeg, image/jutvision, image/naplps, image/pict, image/png, image/svg+xml, image/vnd.net-fpx, image/vnd.rn-realflash, image/vnd.rn-realpix, image/vnd.wap.wbmp, image/vnd.xiff, image/x-cmu-raster, image/x-dwg, image/x-icon, image/x-jg, image/x-jps, image/x-niff, image/x-pcx, image/x-pict, image/x-portable-anymap, image/x-portable-bitmap, image/x-portable-greymap, image/x-portable-pixmap, image/x-quicktime, image/x-rgb, image/x-tiff, image/x-windows-bmp, image/x-xwindowdump, image/xbm, image/xpm Text application/json, text/asp, text/css, text/csv, text/html, text/mcf, text/pascal, text/plain, text/richtext, text/scriplet, text/tab-separated-values, text/tab-separated-values, text/uri-list, text/vnd.abc, text/vnd.fmi.flexstor, text/vnd.rn-realtext, text/vnd.wap.wml, text/vnd.wap.wmlscript, text/webviewhtml, text/x-asm, text/x-audiosoft-intra, text/x-c, text/x-component, text/x-fortran, text/x-h, text/x-java-source, text/x-la-asf, text/x-m, text/x-pascal, text/x-script, text/x-script.csh, text/x-script.elisp, text/x-script.ksh, text/x-script.lisp, text/x-script.perl, text/x-script.perl-module, text/x-script.python, text/x-script.rexx, text/x-script.tcl, text/x-script.tcsh, text/x-script.zsh, text/x-server-parsed-html, text/x-setext, text/x-sgml, text/x-speech, text/x-uil, text/x-uuencode, text/x-vcalendar, text/xml Tabular data text/csv, application/excel, application/vnd.ms-excel, application/vnd.openxmlformats-officedocument.spreadsheetml.sheet, data from connections Parent topic:[Assets in deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets-all.html)",how-to,1,train
4A7F60F563F15CC32060C5F17CB44699A221AD5E,https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/service-status.html?context=cdpaas&locale=en,IBM Cloud services status,"IBM Cloud services status
IBM Cloud services status If you're having a problem with one of your services, go to the IBM Cloud Status page. The Status page shows unplanned incidents, planned maintenance, announcements, and security bulletin notifications about key events that affect the IBM Cloud platform, infrastructure, and major services. You can find the Status page by logging in to the IBM Cloud console. Click Support from the menu bar, and then click View cloud status from the Support Center. Or, you can access the page directly at [IBM Cloud - Status](https://cloud.ibm.com/status?type=incident&component=ibm-cloud-platform&selected=status). Search for the service to view its status. Learn more [Viewing cloud status](https://cloud.ibm.com/docs/get-support?topic=get-support-viewing-cloud-status)",conceptual,0,train
658967520625FAC8039485004A1E80C32992077E,https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/harmful-code-generation.html?context=cdpaas&locale=en,Harmful code generation,"Harmful code generation
Harmful code generation Risks associated with output Description Models might generate code that causes harm or unintentionally affects other systems. Why is harmful code generation a concern for foundation models? Without human review and testing of generated code, its use might cause unintentional behavior and open new system vulnerabilities. Business entities could face fines, reputational harms, and other legal consequences. Example Undisclosed AI Interaction In their paper, researchers at Stanford University investigated the impact of code-generation tools on code quality and found that programmers tend to include more bugs in their final code when using AI assistants. These bugs could increase the code's security vulnerabilities, yet the programmers believed their code to be more secure. Sources: [Neil Perry, Megha Srivastava, Deepak Kumar, and Dan Boneh. 2023. Do Users Write More Insecure Code with AI Assistants?. In Proceedings of the 2023 ACM SIGSAC Conference on Computer and Communications Security (CCS '23), November 26-30, 2023, Copenhagen, Denmark. ACM, New York, NY, USA, 15 pages.](https://dl.acm.org/doi/10.1145/3576915.3623157) Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)",conceptual,0,train
B6EC6454711B4946DBC663324DC478953723B1DD,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/jython/clementine/python_nodes_and_streams.html?context=cdpaas&locale=en,Creating nodes and modifying flows,"Creating nodes and modifying flows
Creating nodes and modifying flows In some situations, you might want to add new nodes to existing flows. Adding nodes to existing flows typically involves the following tasks: 1. Creating the nodes. 2. Linking the nodes into the existing flow.",how-to,1,train
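A minimal sketch of those two tasks in SPSS Modeler Python scripting follows; it assumes it runs in the Modeler scripting environment, and the node types, names, and coordinates are illustrative assumptions.

    # Hedged sketch: create nodes and link them into the existing flow.
    stream = modeler.script.stream()          # the flow that the script runs against

    # 1. Create the nodes
    typenode = stream.createAt("type", "Type", 200, 100)
    tablenode = stream.createAt("table", "Results", 400, 100)

    # 2. Link the nodes into the existing flow
    sourcenode = stream.findByType("variablefile", None)  # find an existing import node
    stream.link(sourcenode, typenode)
    stream.link(typenode, tablenode)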
2C6B0F77C4CA0CAFB52E1FE3E10800D56015CADF,https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/create-services.html?context=cdpaas&locale=en,Creating and managing IBM Cloud services,"Creating and managing IBM Cloud services
Creating and managing IBM Cloud services You can create IBM Cloud service instances within IBM watsonx from the Services catalog. Prerequisite : You must be [signed up for watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/signup-wx.html). Required permissions : For creating or managing a service instance, you must have Administrator or Editor platform access roles in the IBM Cloud account for IBM watsonx. If you signed up for IBM watsonx with your own IBM Cloud account, you are the owner of the account. Otherwise, you can [check your IBM Cloud account roles and permissions](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/your-roles.html). Creating a service To view the Services catalog, select Administration > Services > Services catalog from the main menu. For a description of each service, see [Services](https://dataplatform.cloud.ibm.com/docs/content/svc-welcome/cloud-services.html). To check which service instances you have, select Administration > Services > Service instances from the main menu. You can filter which services you see by resource group, organization, and region. To create a service: 1. Log in to IBM watsonx. 2. Select Administration > Services > Services catalog from the main menu. 3. Click the service you want to create. 4. Specify the IBM Cloud service region. 5. Select a plan. 6. If necessary, select the resource group or organization. 7. Click Create. Managing services To manage a service: 1. Select Administration > Services > Services instances from the main menu. 2. Click the Action menu next to the service name and select Manage in IBM Cloud. The service page in IBM Cloud opens in a separate browser tab. 3. To change pricing plans, select Plan and choose the desired plan. Learn more * [Associate a service with a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/assoc-services.html) * [Managing the platform](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_console.html) Parent topic:[IBM Cloud services](https://dataplatform.cloud.ibm.com/docs/content/svc-welcome/cloud-services.html)",how-to,1,train
262C45D286C9B8A7EDBA8635E636824F2B043D73,https://dataplatform.cloud.ibm.com/docs/content/wsd/sql_howitworks.html?context=cdpaas&locale=en,SQL optimization (SPSS Modeler),"SQL optimization (SPSS Modeler)
How does SQL pushback work? The initial fragments of a flow leading from the data import nodes are the main targets for SQL generation. When a node is encountered that can't be compiled to SQL, the data is extracted from the database and subsequent processing is performed. During flow preparation and prior to running, the SQL generation process happens as follows: * The software reorders flows to move downstream nodes into the “SQL zone” where it can be proven safe to do so. * Working from the import nodes toward the terminal nodes, SQL expressions are constructed incrementally. This phase stops when a node is encountered that can't be converted to SQL or when the terminal node (for example, a Table node or a Graph node) is converted to SQL. At the end of this phase, each node is labeled with an SQL statement if the node and its predecessors have an SQL equivalent. * Working from the nodes with the most complicated SQL equivalents back toward the import nodes, the SQL is checked for validity. The SQL that was successfully validated is chosen for execution. * Nodes for which all operations have generated SQL are highlighted with a SQL icon next to the node on the flow canvas. Based on the results, you may want to further reorganize your flow where appropriate to take full advantage of database execution.",conceptual,0,train
6B81F2288B810E3FFDD2DE5ACE4E13E3A90E1E10,https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-ap-tutorial.html?context=cdpaas&locale=en,Tutorial: Create a time series anomaly prediction experiment,"Tutorial: Create a time series anomaly prediction experiment
Tutorial: Create a time series anomaly prediction experiment This tutorial guides you through using AutoAI and sample data to train a time series experiment to detect if daily electricity usage values are normal or anomalies (outliers). When you set up the sample experiment, you load data that analyzes daily electricity usage from Industry A to determine whether a value is normal or an anomaly. Then, the experiment generates pipelines that use algorithms to label these predicted values as normal or an anomaly. After generating the pipelines, AutoAI chooses the best performers, and presents them in a leaderboard for you to review. Tech preview This is a technology preview and is not yet supported for use in production environments. Data set overview This tutorial uses the Electricity usage anomalies sample data set from the Watson Studio Gallery. This data set describes the annual electricity usages for Industry A. The first column indicates the electricity usages and the second column indicates the date, which is in a day-by-day format.  Tasks overview In this tutorial, follow these steps to create an anomaly prediction experiment: 1. [Create an AutoAI experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-ap-tutorial.html?context=cdpaas&locale=enstep1) 2. [View the experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-ap-tutorial.html?context=cdpaas&locale=enstep2) 3. [Review experiment results](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-ap-tutorial.html?context=cdpaas&locale=enstep3) 4. [Deploy the trained model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-ap-tutorial.html?context=cdpaas&locale=enstep4) 5. [Test the deployed model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-ap-tutorial.html?context=cdpaas&locale=enstep5) Create an AutoAI experiment Create an AutoAI experiment and add sample data to your experiment. 1. From the navigation menu , click Projects > View all projects. 2. Open an existing project or [create a new project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/projects.html) to store the anomaly prediction experiment. 3. On the Assets tab from within your project, click New asset > Build machine learning models automatically. 4. Click Samples > Electricity usage anomalies sample data, then select Next. The AutoAI experiment name and description are pre-populated by the sample data. 5. If prompted, associate a Watson Machine Learning instance with your AutoAI experiment. 1. Click Associate a Machine Learning service instance and select an instance of Watson Machine Learning. 2. Click Reload to confirm your configuration. 6. Click Create. View the experiment details AutoAI pre-populates the details fields for the sample experiment:  * Type series analysis type: Anomaly prediction predicts whether future values in a series are anomalies (outliers). A prediction of 1 indicates a normal value and a prediction of -1 indicates an anomaly. * Feature column: industry_a_usage is the predicted value and indicates how much electricity Industry A consumes. * Date/Time column: date indicates the time increments for the experiment. For this experiment, there is one prediction value per day. * This experiment is optimized for the model performance metric: Average Precision. Average precision evaluates the performance of object detection and segmentation systems. Click Run experiment to train the model. 
The experiment takes several minutes to complete. Review the experiment results The relationship map shows the transformations that are used to create pipelines. Follow these steps to review experiment results and save the pipeline with the best performance.  1. The leaderboard lists and saves the three best performing pipelines. Click the pipeline name with Rank 1 to review the details of the pipeline. For details on anomaly prediction metrics, see [Creating a time series anomaly prediction experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-ap.html). 2. Select the pipeline with Rank 1 and Save the pipeline as a model. The model name is pre-populated with the default name. 3. Click Create to confirm your pipeline selection. Deploy the trained model Before the trained model can make predictions on external values, you must deploy the model. Follow these steps to promote your trained model to a deployment space. 1. Deploy the model from the Model details page. To access the Model details page, choose one of these options: * From the notification displayed when you save the model, click View in project. * From the project's Assets, select the model’s name in Models. 2. From the Model details page, click Promote to Deployment Space. Then, select or create a deployment space to deploy the model. 3. Select Go to the model in the space after promoting it and click Promote to promote the model. Testing the model After promoting the model to the deployment space, you are ready to test your trained model with new data values. 1. Select New Deployment and create a new deployment with the following fields: 1. Deployment type: Online 2. Name: Electricity usage online deployment 2. Click Create and wait for the status to update to Deployed. 3. After the deployment initializes, click the deployment. Use Test input to manually enter and evaluate values or use JSON input to attach a data set.  4. Click Predict to see whether there are any anomalies in the values. Note: -1 indicates an anomaly; 1 indicates a normal value.",how-to,1,train
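As an alternative to testing in the UI, the following hedged sketch scores the same online deployment with the Watson Machine Learning Python client; the credentials, space and deployment IDs, and the example values are assumptions for illustration.

    # Hedged sketch: score the online deployment programmatically.
    from ibm_watson_machine_learning import APIClient

    client = APIClient({"url": "https://us-south.ml.cloud.ibm.com", "apikey": "<api key>"})
    client.set.default_space("<deployment space id>")

    payload = {"input_data": [{
        "fields": ["date", "industry_a_usage"],
        "values": [["2020-01-05", 1342.5], ["2020-01-06", 9810.0]],  # made-up values
    }]}

    result = client.deployments.score("<deployment id>", payload)
    print(result)  # predictions of 1 (normal) or -1 (anomaly)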
C11E8DEEDBABE64F4789061D10E55AEA415FD51E,https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-delete.html?context=cdpaas&locale=en,Deleting deployment spaces,"Deleting deployment spaces
Deleting deployment spaces Delete existing deployment spaces that you don't require anymore. Important: Before you delete a deployment space, you must delete all the deployments that are associated with it. Only a project admin can delete a deployment space. For more information, see [Deployment space collaborator roles and permissions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/collaborator-permissions-wml.html). To remove a deployment space, follow these steps: 1. From the navigation menu, click Deployments. 2. In the deployments list, click the Spaces tab and find the deployment space that you want to delete. 3. Hover over the deployment space, select the menu icon, and click Delete. 4. In the confirmation dialog box, click Delete. Learn more To learn more about how to clean up a deployment space and delete it programmatically, refer to: * [Notebook on managing machine learning artifacts](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/e5e78be14e2260ccb4bcf8181d093d7b) * [Notebook on managing spaces](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/e5e78be14e2260ccb4bcf8181d0967e3) Parent topic:[Deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-spaces_local.html)",how-to,1,train
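For orientation, the following is a minimal sketch of the programmatic cleanup that the linked notebooks cover in full, using the Watson Machine Learning Python client; the credentials and space ID are placeholders.

    # Hedged sketch: delete the deployments in a space, then delete the space.
    from ibm_watson_machine_learning import APIClient

    client = APIClient({"url": "https://us-south.ml.cloud.ibm.com", "apikey": "<api key>"})
    client.set.default_space("<space id>")

    for deployment in client.deployments.get_details()["resources"]:
        client.deployments.delete(deployment["metadata"]["id"])

    client.spaces.delete("<space id>")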
61F714F5629AD260B0D9776FC53CDA2EAA10DF24,https://dataplatform.cloud.ibm.com/docs/content/dataview/chart_creation_radar.html?context=cdpaas&locale=en,Radar charts,"Radar charts
Radar charts Radar charts compare multiple quantitative variables and are useful for visualizing which variables have similar values, or whether outliers exist among the variables. Radar charts consist of a sequence of spokes, with each spoke representing a single variable. Radar charts are also useful for determining which variables are scoring high or low within a data set.",conceptual,0,train
751ABCAB00F67C93C253EC74D686E2CFCC0062AD,https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/ts_index.html?context=cdpaas&locale=en,Troubleshooting Data Refinery,"Troubleshooting Data Refinery
Troubleshooting Data Refinery Use this information to resolve questions about using Data Refinery. * [Cannot refine data from an Excel data asset](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/ts_index.html?context=cdpaas&locale=endr-excel) * [Data Refinery flow job fails with a large data asset](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/ts_index.html?context=cdpaas&locale=enbigdata-dr) Cannot refine data from an Excel data asset The Data Refinery flow might fail if it cannot read the data. Confirm the format of the Excel file. By default, the first line of the file is treated as the header. You can change this setting in the Flow settings. Go to the Source data sets tab and click the overflow menu next to the data source, and select Edit format. You can also specify the first line property, which designates which row is the first row in the data set to be read. Changing these properties affects how the data is displayed in Data Refinery as well as the Data Refinery job run and flow output. Data Refinery flow job fails with a large data asset If your Data Refinery flow job fails with a large data asset, try these troubleshooting tips to fix the problem: * Instead of using a project data asset as the target of the Data Refinery flow (default), use Cloud storage. For example, IBM Cloud Object Storage, Amazon S3, or Google Cloud Storage. * Select a Spark & R environment for the Data Refinery flow job or create a new Spark & R environment template.",how-to,1,train
BBD1F022A8393101199ABB731534C10BE99CF1E4,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/TextMiningWorkbench.html?context=cdpaas&locale=en,Mining for concepts and categories (SPSS Modeler),"Mining for concepts and categories (SPSS Modeler)
Mining for concepts and categories The Text Mining node uses linguistic and frequency techniques to extract key concepts from the text and create categories with these concepts and other data. Use the node to explore the text data contents or to produce either a concept model nugget or category model nugget. When you run this node, an internal linguistic extraction engine extracts and organizes the concepts, patterns, and categories by using natural language processing methods. Two build modes are available in the Text Mining node's properties: * The Generate directly (concept model nugget) mode automatically produces a concept or category model nugget when you run the node. * The Build interactively (category model nugget) mode is a more hands-on, exploratory approach. You can use this mode to not only extract concepts, create categories, and refine your linguistic resources, but also run text link analysis and explore clusters. This build mode launches the Text Analytics Workbench. You can use the Text Mining node to generate one of two text mining model nuggets: * Concept model nuggets uncover and extract important concepts from your structured or unstructured text data. * Category model nuggets score and assign documents and records to categories, which are made up of the extracted concepts (and patterns). The extracted concepts and patterns and the categories from your model nuggets can all be combined with existing structured data, such as demographics, to yield better and more-focused decisions. For example, if customers frequently list login issues as the primary impediment to completing online account management tasks, you might want to incorporate ""login issues"" into your models.",conceptual,0,train
D2F4F71189D7F5C92DDC2CCB38F2BCE1EFD4BC65,https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-manage-payload-data.html?context=cdpaas&locale=en,Managing payload data for watsonx.governance,"Managing payload data for watsonx.governance
Managing payload data for watsonx.governance You must provide payload data to configure drift v2 and generative AI quality evaluations in watsonx.governance. Payload data contains all of your model transactions. You can log payload data with watsonx.governance to enable evaluations. To log payload data, watsonx.governance must receive scoring requests. Logging payload data When you send a scoring request, watsonx.governance processes your model transactions to enable model evaluations. watsonx.governance scores the data and stores it as records in a payload logging table within the watsonx.governance data mart. The payload logging table contains the following columns when you evaluate prompt templates: * Required columns: * Prompt variable(s): Contains the values for the variables that are created for prompt templates * generated_text: Contains the output that's generated by the foundation model * Optional columns: * input_token_count: Contains the number of tokens in the input text * generated_token_count: Contains the number of tokens in the generated text * prediction_probability: Contains the aggregate value of log probabilities of generated tokens that represent the winning output The table can also include timestamp and ID columns to store your data as scoring records. You can view your payload logging table by accessing the database that you specified for the data mart or by using the [Watson OpenScale Python SDK](https://client-docs.aiopenscale.cloud.ibm.com/html/index.html). Sending payload data If you are using IBM Watson Machine Learning as your machine learning provider, watsonx.governance automatically logs payload data when your model is scored. After you configure evaluations, you can also use a payload logging endpoint to send scoring requests to run on-demand evaluations. For production models, you can also upload payload data with a CSV file to send scoring requests. For more information, see [Sending model transactions](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-send-model-transactions.html). Parent topic:[Managing data for model evaluations in Watson OpenScale](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-manage-data.html)",how-to,1,train
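A hedged sketch of viewing and storing payload records with the ibm-watson-openscale Python SDK follows; the API key, data set ID, and record contents are assumptions for illustration, not values from this topic.

    # Hedged sketch: inspect the payload logging table and log one transaction.
    from ibm_cloud_sdk_core.authenticators import IAMAuthenticator
    from ibm_watson_openscale import APIClient
    from ibm_watson_openscale.supporting_classes.payload_record import PayloadRecord

    wos_client = APIClient(authenticator=IAMAuthenticator(apikey="<api key>"))

    # View records in the payload logging table for a subscription
    wos_client.data_sets.show_records(data_set_id="<payload data set id>")

    # Store one record (prompt variables in, generated_text out); the request and
    # response shapes shown here are illustrative assumptions.
    record = PayloadRecord(
        request={"parameters": {"template_variables": {"text": "example input"}}},
        response={"results": [{"generated_text": "example output"}]},
        response_time=460,
    )
    wos_client.data_sets.store_records(data_set_id="<payload data set id>", request_body=[record])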
2D5B33F1352D8BA7CEF029D1979CCF0D44AAD63E,https://dataplatform.cloud.ibm.com/docs/content/wsd/tutorials/tut_autocont_build.html?context=cdpaas&locale=en,Building the flow (SPSS Modeler),"Building the flow (SPSS Modeler)
Building the flow 1. Add a Data Asset node that points to property_values_train.csv. 2. Add a Type node, and select taxable_value as the target field (Role = Target). Other fields will be used as predictors. Figure 1. Setting the measurement level and role  3. Attach an Auto Numeric node, and select Correlation as the metric used to rank models (under BASICS in the node properties). 4. Set the Number of models to use to 3. This means that the three best models will be built when you run the node. Figure 2. Auto Numeric node BASICS  5. Under EXPERT, leave the default settings in place. The node will estimate a single model for each algorithm, for a total of six models. (Alternatively, you can modify these settings to compare multiple variants for each model type.) Because you set Number of models to use to 3 under BASICS, the node will calculate the accuracy of the six algorithms and build a single model nugget containing the three most accurate. Figure 3. Auto Numeric node EXPERT options  6. Under ENSEMBLE, leave the default settings in place. Since this is a continuous target, the ensemble score is generated by averaging the scores for the individual models.",how-to,1,train
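For comparison, a hedged scripting sketch of the same flow follows; it assumes the SPSS Modeler Python scripting environment, and the import node type and the property names ranking_measure and number_of_models are assumptions based on the Auto Numeric node's scripting properties.

    # Hedged sketch: build the Data Asset > Type > Auto Numeric flow by script.
    stream = modeler.script.stream()

    source = stream.createAt("variablefile", "property_values_train", 100, 100)
    source.setPropertyValue("full_filename", "property_values_train.csv")

    typenode = stream.createAt("type", "Type", 250, 100)
    typenode.setKeyedPropertyValue("direction", "taxable_value", "Target")

    autonumeric = stream.createAt("autonumeric", "Auto Numeric", 400, 100)
    autonumeric.setPropertyValue("ranking_measure", "Correlation")  # rank models by correlation
    autonumeric.setPropertyValue("number_of_models", 3)             # keep the three best models

    stream.link(source, typenode)
    stream.link(typenode, autonumeric)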
C64A69EBC1360788037B11E8B0DC5BB74D913819,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/svmnodeslots.html?context=cdpaas&locale=en,svmnode properties,"svmnode properties
svmnode properties The Support Vector Machine (SVM) node enables you to classify data into one of two groups without overfitting. SVM works well with wide data sets, such as those with a very large number of input fields. svmnode properties Table 1. svmnode properties svmnode Properties Values Property description all_probabilities flag stopping_criteria 1.0E-1
1.0E-2
1.0E-3
1.0E-4
1.0E-5
1.0E-6 Determines when to stop the optimization algorithm. regularization number Also known as the C parameter. precision number Used only if measurement level of target field is Continuous. kernel RBF
Polynomial
Sigmoid
Linear Type of kernel function used for the transformation. RBF is the default. rbf_gamma number Used only if kernel is RBF. gamma number Used only if kernel is Polynomial or Sigmoid. bias number degree number Used only if kernel is Polynomial. calculate_variable_importance flag calculate_raw_propensities flag calculate_adjusted_propensities flag adjusted_propensity_partition Test
Validation",conceptual,0,train
B851271C134A1B282412BD7A667C1C9813B4E8B2,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/TMWBModelApplier.html?context=cdpaas&locale=en,Text Mining model nuggets (SPSS Modeler),"Text Mining model nuggets (SPSS Modeler)
Text Mining model nuggets You can run a Text Mining node to automatically generate a concept model nugget using the Generate directly option in the node settings. Or you can use a more hands-on, exploratory approach using the Build interactively mode to generate category model nuggets from within the Text Analytics Workbench.",conceptual,0,train
35A87CAEDB1F1B6739159B9C7A31CCE7C8978431,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/anomalydetection.html?context=cdpaas&locale=en,Anomaly node (SPSS Modeler),"Anomaly node (SPSS Modeler)
Anomaly node Anomaly detection models are used to identify outliers, or unusual cases, in the data. Unlike other modeling methods that store rules about unusual cases, anomaly detection models store information on what normal behavior looks like. This makes it possible to identify outliers even if they do not conform to any known pattern, and it can be particularly useful in applications, such as fraud detection, where new patterns may constantly be emerging. Anomaly detection is an unsupervised method, which means that it does not require a training dataset containing known cases of fraud to use as a starting point. While traditional methods of identifying outliers generally look at one or two variables at a time, anomaly detection can examine large numbers of fields to identify clusters or peer groups into which similar records fall. Each record can then be compared to others in its peer group to identify possible anomalies. The further away a case is from the normal center, the more likely it is to be unusual. For example, the algorithm might lump records into three distinct clusters and flag those that fall far from the center of any one cluster. Each record is assigned an anomaly index, which is the ratio of the group deviation index to its average over the cluster that the case belongs to. The larger the value of this index, the more deviation the case has than the average. Under the usual circumstance, cases with anomaly index values less than 1 or even 1.5 would not be considered as anomalies, because the deviation is just about the same or a bit more than the average. However, cases with an index value greater than 2 could be good anomaly candidates because the deviation is at least twice the average. Anomaly detection is an exploratory method designed for quick detection of unusual cases or records that should be candidates for further analysis. These should be regarded as suspected anomalies, which, on closer examination, may or may not turn out to be real. You may find that a record is perfectly valid but choose to screen it from the data for purposes of model building. Alternatively, if the algorithm repeatedly turns up false anomalies, this may point to an error or artifact in the data collection process. Note that anomaly detection identifies unusual records or cases through cluster analysis based on the set of fields selected in the model without regard for any specific target (dependent) field and regardless of whether those fields are relevant to the pattern you are trying to predict. For this reason, you may want to use anomaly detection in combination with feature selection or another technique for screening and ranking fields. For example, you can use feature selection to identify the most important fields relative to a specific target and then use anomaly detection to locate the records that are the most unusual with respect to those fields. (An alternative approach would be to build a decision tree model and then examine any misclassified records as potential anomalies. However, this method would be more difficult to replicate or automate on a large scale.) Example. In screening agricultural development grants for possible cases of fraud, anomaly detection can be used to discover deviations from the norm, highlighting those records that are abnormal and worthy of further investigation. You are particularly interested in grant applications that seem to claim too much (or too little) money for the type and size of farm. Requirements. One or more input fields. 
Note that only fields with a role set to Input using a source or Type node can be used as inputs. Target fields (role set to Target or Both) are ignored. Strengths. By flagging cases that do not conform to a known set of rules rather than those that do, Anomaly Detection models can identify unusual cases even when they don't follow previously known patterns. When used in combination with feature selection, anomaly detection makes it possible to screen large amounts of data to identify the records of greatest interest relatively quickly.",conceptual,0,train
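As a generic illustration of the anomaly index described above (not the Anomaly node's implementation), the ratio can be computed as follows; the deviation values are made up.

    # Generic illustration: anomaly index = group deviation index / cluster average deviation.
    def anomaly_index(group_deviation_index, cluster_average_deviation):
        return group_deviation_index / cluster_average_deviation

    # Values near 1 are unremarkable; an index greater than 2 means the record
    # deviates at least twice as much as its cluster's average, a good candidate.
    for deviation in (0.9, 1.4, 2.6):
        index = anomaly_index(deviation, cluster_average_deviation=1.0)
        print(deviation, index, "candidate" if index > 2 else "typical")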
D5863A9857F07023885A810210DFB819AD692ED7,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/factorymodeling_algorithmproperties.html?context=cdpaas&locale=en,Setting algorithm properties,"Setting algorithm properties
Setting algorithm properties For the Auto Classifier, Auto Numeric, and Auto Cluster nodes, you can set properties for specific algorithms used by the node by using the general form: autonode.setKeyedPropertyValue(<algorithm>, <property>, <value>) For example: node.setKeyedPropertyValue(""neuralnetwork"", ""method"", ""MultilayerPerceptron"") Algorithm names for the Auto Classifier node are cart, chaid, quest, c50, logreg, decisionlist, bayesnet, discriminant, svm and knn. Algorithm names for the Auto Numeric node are cart, chaid, neuralnetwork, genlin, svm, regression, linear and knn. Algorithm names for the Auto Cluster node are twostep, k-means, and kohonen. Property names are standard as documented for each algorithm node. Algorithm properties that contain periods or other punctuation must be wrapped in single quotes. For example: node.setKeyedPropertyValue(""logreg"", ""tolerance"", ""1.0E-5"") Multiple values can also be assigned for a property. For example: node.setKeyedPropertyValue(""decisionlist"", ""search_direction"", [""Up"", ""Down""]) To enable or disable the use of a specific algorithm: node.setPropertyValue(""chaid"", True) Note: In cases where certain algorithm options aren't available in the Auto Classifier node, or when only a single value can be specified rather than a range of values, the same limits apply with scripting as when accessing the node in the standard manner.",conceptual,0,train
E0D36A6F5028FC5ED005E87FAF9F65F976E62A37,https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-setup.html?context=cdpaas&locale=en,Set up your system,"Set up your system
Set up your system Before you can use IBM Federated Learning, ensure that you have the required hardware, software, and dependencies. Core requirements by role Each entity that participates in a Federated Learning experiment must meet the requirements for their role. Admin software requirements Designate an admin for the Federated Learning experiment. The admin must have: * Access to the platform with Watson Studio and Watson Machine Learning enabled. You must [create a Watson Machine Learning service instance](https://cloud.ibm.com/catalog/services/machine-learning). * A [project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/projects.html) for assembling the global model. You must [associate the Watson Machine Learning service instance with your project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/assoc-services.html). Party hardware and software requirements Each party must have a system that meets these minimum requirements. Note: Remote parties participating in the same Federated Learning experiment can use different hardware specs and architectures, as long as they each meet the minimum requirement. Supported architectures * x86 64-bit * PPC * Mac M-series * 4 GB memory or greater Supported environments * Linux * Mac OS/Unix * Windows Software dependencies * A supported [Python version and a machine learning framework](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-frames.html). * The Watson Machine Learning Python client. 1. If you are using Linux, run pip install 'ibm-watson-machine-learning[fl-rt22.2-py3.10]'. 2. If you are using Mac OS with M-series CPU and Conda, download the installation script and then run ./install_fl_rt22.2_macos.sh . Network requirements An outbound connection from the remote party to aggregator is required. Parties can use firewalls that restrict internal connections with each other. Data sources requirements Data must comply with these requirements. * Data must be in a directory or storage repository that is accessible to the party that uses them. * Each data source for a federate model must have the same features. IBM Federated Learning supports horizontal federated learning only. * Data must be in a readable format, but the formats can vary by data source. Suggested formats include: * Hive * Excel * CSV * XML * Database Parent topic:[Creating a Federated Learning experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-start.html)",how-to,1,train
93A3A5E1A633EB2AB616759DFB76DC433ABD4D38,https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/wscloud-troubleshoot.html?context=cdpaas&locale=en,Troubleshooting Watson Studio on IBM Cloud,"Troubleshooting Watson Studio on IBM Cloud
Troubleshooting Watson Studio on IBM Cloud You can use the following techniques to work around problems you might encounter with Watson Studio on IBM Cloud. Project limit exceeded Symptoms When you create a project, the following error occurs: The number of projects created by the authenticated user exceeds the designated limit. Possible Causes The number of projects an authenticated user can create per data center (region) is 100. The limit applies only to projects that a user creates. Projects for which the user is listed as a collaborator are not included in this limit. Possible Resolutions Although most customers do not reach this limit, possible resolutions include: * Delete projects. * Any authenticated user can request a project limit increase by contacting [IBM Cloud Support](https://www.ibm.com/cloud/support), provided that an adequate justification is specified. Blank screen when loading Symptoms A blank screen appears when you open Watson Studio. Possible Causes A cached version is loading. Possible Resolutions 1. Clear the browser cache and cookies and re-open Watson Studio. 2. Try a different type of browser. For example, switch from Firefox to Chrome. 3. If the blank screen still occurs, [open a support case](https://cloud.ibm.com/unifiedsupport/supportcenter), generate a .har file, compress it, and upload the compressed har file to the support case.",how-to,1,train
C6B0055426C9E91760F4923ED42BE91D64FCA6C8,https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebooks-and-scripts.html?context=cdpaas&locale=en,Notebooks and scripts,"Notebooks and scripts
Notebooks and scripts You can create, edit and execute Python and R code using Jupyter notebooks and scripts in code editors, for example the notebook editor or an integrated development environment (IDE), like RStudio. Notebooks : A Jupyter notebook is a web-based environment for interactive computing. You can use notebooks to run small pieces of code that process your data, and you can immediately view the results of your computation. Notebooks include all of the building blocks you need to work with data, namely the data, the code computations that process the data, the visualizations of the results, and text and rich media to enhance understanding. Scripts : A script is a file containing a set of commands and comments. The script can be saved and used later to re-execute the saved commands. Unlike in a notebook, the commands in a script can only be executed in a linear fashion. Notebooks Required permissions : Editor or Admin role in a project Tools : Notebook editor Programming languages : Python and R Data format : All types Code support is available for loading and accessing data from project assets for: : Data assets, such as CSV, JSON and .xlsx and .xls files : Database connections and connected data assets See [Data load support](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/data-load-support.html). for the supported file and database types. Data size : 5 GB. If your files are larger, you must load the data in multiple parts. Scripts Required permissions : Editor or Admin role in a project Tools : RStudio Programming languages : R Data format : All types Code support is available for loading and accessing data from project assets for: : Data assets, such as CSV, JSON and .xlsx and .xls files : Database connections and connected data assets See [Data load support](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/data-load-support.html). for the supported file and database types. Data size : 5 GB. If your files are larger, you must load the data in multiple parts. Working in the notebook editor The notebook editor is largely used for interactive, exploratory data analysis programming and data visualization. Only one person can edit a notebook at a time. All other users can access opened notebooks in view mode only, while they are locked. You can use the preinstalled open source libraries that come with the notebook runtime environments, add your own libraries, and benefit from the IBM libraries provided at no extra cost. When your notebooks are ready, you can create jobs to run the notebooks directly from the notebook editor. Your job configurations can use environment variables that are passed to the notebooks with different values when the notebooks run. Working in RStudio RStudio is an integrated development environment for working with R scripts or Shiny apps. Although the RStudio IDE cannot be started in a Spark with R environment runtime, you can use Spark in your R scripts and Shiny apps by accessing Spark kernels programmatically. R scripts and Shiny apps can only be created and used in the RStudio IDE. You can't create jobs for R scripts or R Shiny deployments. 
Learn more * [Quick start: Analyze data in a Jupyter notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-analyze.html) * [RStudio](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/rstudio-overview.html) * [Notebook editor](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-editor.html) Parent topic:[Analyzing data and building models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/data-science.html)",conceptual,0,train
18C44D2A29B576F708BC515CEDE91227B6B4FC4E,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/ts.html?context=cdpaas&locale=en,Time Series node (SPSS Modeler),"Time Series node (SPSS Modeler)
Time Series node The Time Series node can be used with data in either a local or distributed environment. With this node, you can choose to estimate and build exponential smoothing, univariate Autoregressive Integrated Moving Average (ARIMA), or multivariate ARIMA (or transfer function) models for time series, and produce forecasts based on the time series data. Exponential smoothing is a method of forecasting that uses weighted values of previous series observations to predict future values. As such, exponential smoothing is not based on a theoretical understanding of the data. It forecasts one point at a time, adjusting its forecasts as new data come in. The technique is useful for forecasting series that exhibit trend, seasonality, or both. You can choose from various exponential smoothing models that differ in their treatment of trend and seasonality. ARIMA models provide more sophisticated methods for modeling trend and seasonal components than do exponential smoothing models, and, in particular, they allow the added benefit of including independent (predictor) variables in the model. This involves explicitly specifying autoregressive and moving average orders as well as the degree of differencing. You can include predictor variables and define transfer functions for any or all of them, as well as specify automatic detection of outliers or an explicit set of outliers. Note: In practical terms, ARIMA models are most useful if you want to include predictors that might help to explain the behavior of the series that is being forecast, such as the number of catalogs that are mailed or the number of hits to a company web page. Exponential smoothing models describe the behavior of the time series without attempting to understand why it behaves as it does. For example, a series that historically peaks every 12 months will probably continue to do so even if you don't know why. An Expert Modeler option is also available, which attempts to automatically identify and estimate the best-fitting ARIMA or exponential smoothing model for one or more target variables, thus eliminating the need to identify an appropriate model through trial and error. If in doubt, use the Expert Modeler option. If predictor variables are specified, the Expert Modeler selects those variables that have a statistically significant relationship with the dependent series for inclusion in ARIMA models. Model variables are transformed where appropriate using differencing and/or a square root or natural log transformation. By default, the Expert Modeler considers all exponential smoothing models and all ARIMA models and picks the best model among them for each target field. You can, however, limit the Expert Modeler only to pick the best of the exponential smoothing models or only to pick the best of the ARIMA models. You can also specify automatic detection of outliers.",conceptual,0,train
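As a generic illustration of the exponential smoothing idea (not the Time Series node's estimation procedure), each smoothed value blends the newest observation with the previous smoothed value; the series and the smoothing weight are made up.

    # Generic simple exponential smoothing: s_t = alpha * x_t + (1 - alpha) * s_{t-1}
    def simple_exponential_smoothing(series, alpha=0.3):
        smoothed = [series[0]]
        for value in series[1:]:
            smoothed.append(alpha * value + (1 - alpha) * smoothed[-1])
        return smoothed  # the last element serves as the one-step-ahead forecast

    print(simple_exponential_smoothing([112, 118, 132, 129, 121, 135]))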
077AFC6B667F6747FF066182E2F04AF486C13368,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/type_values_flag.html?context=cdpaas&locale=en,Specifying values for a flag (SPSS Modeler),"Specifying values for a flag (SPSS Modeler)
Specifying values for a flag Use flag fields to display data that has two distinct values. The storage types for flags can be string, integer, real number, or date/time. True. Specify a flag value for the field when the condition is met. False. Specify a flag value for the field when the condition is not met. Labels. Specify labels for each value in the flag field. These labels appear in a variety of locations, such as graphs, tables, output, and model browsers.",how-to,1,train
E3EFB6106AE81DB5A8B3379C3EDCF86E31F95AB0,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/jython/clementine/python_create_instance.html?context=cdpaas&locale=en,Creating a class instance,"Creating a class instance
Creating a class instance You can use classes to hold class (or shared) attributes or to create class instances. To create an instance of a class, you call the class as if it were a function. For example, consider the following class: class MyClass: pass Here, the pass statement is used because a statement is required to complete the class, but no action is required programmatically. The following statement creates an instance of the class MyClass: x = MyClass()",how-to,1,train
53019DD52EDB5790460DFF9A02363856B83CAFB7,https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-general.html?context=cdpaas&locale=en,Managing predictive deployments,"Managing predictive deployments
Managing predictive deployments For proper deployment, you must set up a deployment space and then select and configure a specific deployment type. After you deploy assets, you can manage and update them to make sure they perform well and to monitor their accuracy. To be able to deploy assets from a space, you must have a machine learning service instance that is provisioned and associated with that space. For more information, see [Associating a service instance with a space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-service-instance.htmlassociating-instance-with-space). Online and batch deployments provide simple ways to create an online scoring endpoint or do batch scoring with your models. If you want to implement a custom logic: * Create a Python function to use for creating your online endpoint * Write a notebook or script for batch scoring Note: If you create a notebook or a script to perform batch scoring such an asset runs as a platform job, not as a batch deployment. Deployable assets Following is the list of assets that you can deploy from a Watson Machine Learning space, with information on applicable deployment types: List of assets that you can deploy Asset type Batch deployment Online deployment Functions Yes Yes Models Yes Yes Scripts Yes No An R Shiny app is the only asset type that is supported for web app deployments. Notes: * A deployment job is a way of running a batch deployment, or a self-contained asset like a flow in Watson Machine Learning. You can select the input and output for your job and choose to run it manually or on a schedule. For more information, see [Creating a deployment job](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/create-jobs.html). * Notebooks and flows use notebook environments. You can run them in a deployment space, but they are not deployable. For more information, see: * [Creating online deployments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-online.html) * [Creating batch deployments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-details.html) * [Deploying Python functions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-py-function.html) * [Deploying scripts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-script.html) After you deploy assets, you can manage and update them to make sure they perform well and to monitor their accuracy. Some ways to manage or update a deployment are as follows: * [Manage deployment jobs](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-jobs.html). After you create one or more jobs, you can view and manage them from the Jobs tab of your deployment space. * [Update a deployment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-update.html). For example, you can replace a model with a better-performing version without having to create a new deployment. * [Scale a deployment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-scaling.html) to increase availability and throughput by creating replicas of the deployment. * [Delete a deployment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-delete.html) to remove a deployment and free up resources. 
Learn more * [Full list of asset types that can be added to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets-all.html) Parent topic:[Deploying and managing models](https://dataplatform.cloud.ibm.com/docs/content/wsj/wmls/wmls-deploy-overview.html)",how-to,1,train
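A hedged sketch of creating an online deployment for a stored model with the Watson Machine Learning Python client follows; the credentials, space ID, and model ID are placeholders.

    # Hedged sketch: create an online deployment for a model already stored in the space.
    from ibm_watson_machine_learning import APIClient

    client = APIClient({"url": "https://us-south.ml.cloud.ibm.com", "apikey": "<api key>"})
    client.set.default_space("<deployment space id>")

    deployment = client.deployments.create(
        artifact_uid="<model id>",
        meta_props={
            client.deployments.ConfigurationMetaNames.NAME: "My online deployment",
            client.deployments.ConfigurationMetaNames.ONLINE: {},
        },
    )
    print(deployment)  # details include the deployment ID and scoring endpoint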
AAC6535CAB0B4600A9683433FCAB805B2C4EAA53,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/structured_slot_parameters.html?context=cdpaas&locale=en,Structured properties,"Structured properties
Structured properties There are two ways in which scripting uses structured properties for increased clarity when parsing: * To give structure to the names of properties for complex nodes, such as Type, Filter, or Balance nodes. * To provide a format for specifying multiple properties at once.",conceptual,0,train
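A short, hedged sketch of the first use (keyed property names on complex nodes) in SPSS Modeler Python scripting; the field names are illustrative assumptions.

    # Hedged sketch: structured (keyed) properties on Type and Filter nodes.
    stream = modeler.script.stream()

    typenode = stream.findByType("type", None)
    typenode.setKeyedPropertyValue("direction", "Churn", "Target")
    typenode.setKeyedPropertyValue("direction", "Age", "Input")

    filternode = stream.findByType("filter", None)
    filternode.setKeyedPropertyValue("include", "CustomerID", False)   # drop a field
    filternode.setKeyedPropertyValue("new_name", "Region", "region")   # rename a field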
78488A77CB39BDD413DBB7682F1DBE2675B3E3A0,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/jython/clementine/python_create_class.html?context=cdpaas&locale=en,Defining a class,"Defining a class
Defining a class Within a Python class, you can define both variables and methods. Unlike in Java, in Python you can define any number of public classes per source file (or module). Therefore, you can think of a module in Python as similar to a package in Java. In Python, classes are defined using the class statement. The class statement has the following form: class name (superclasses): statement or class name (superclasses): assignment . . function . . When you define a class, you have the option to provide zero or more assignment statements. These create class attributes that are shared by all instances of the class. You can also provide zero or more function definitions. These function definitions create methods. The superclasses list is optional. The class name should be unique in the same scope, that is within a module, function, or class. You can define multiple variables to reference the same class.",how-to,1,train
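A short example of the form described above, with one shared class attribute, a method, and a subclass:

    # One assignment (class attribute), one method, and single inheritance.
    class Animal:
        kingdom = "Animalia"            # class attribute shared by all instances

        def __init__(self, name):       # function definition: becomes a method
            self.name = name

        def describe(self):
            return self.name + " belongs to " + Animal.kingdom

    class Dog(Animal):                  # Animal appears in the superclasses list
        def speak(self):
            return self.name + " says woof"

    rex = Dog("Rex")
    print(rex.describe())
    print(rex.speak())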
FB7F7B9A220C66F7E3407CA9553D974CD4A14402,https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-manage-feedback-data.html?context=cdpaas&locale=en,Managing feedback data for watsonx.governance,"Managing feedback data for watsonx.governance
Managing feedback data for watsonx.governance You must provide feedback data to watsonx.governance to enable you to configure quality and generative AI quality evaluations and determine any changes in your model predictions. When you provide feedback data to watsonx.governance, you can regularly evaluate the accuracy of your model predictions. Feedback logging watsonx.governance stores the feedback data that you provide as records in a feedback logging table. The feedback logging table contains the following columns when you evaluate prompt templates: * Required columns: * Prompt variable(s): Contains the values for the variables that are created for prompt templates * reference_output: Contains the ground truth value * Optional columns: * _original_prediction: Contains the output that's generated by the foundation model Uploading feedback data You can use a feedback logging endpoint to upload data for quality evaluations. You can also upload feedback data with a CSV file. For more information, see [Sending model transactions](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-send-model-transactions.html). Learn more [Sending model transactions](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-send-model-transactions.html) Parent topic:[Managing data for model evaluations in Watson OpenScale](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-manage-data.html)",how-to,1,train
835B998310E6E268F648D4AA28528190EBBB48CA,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/extension_pyspark_examples.html?context=cdpaas&locale=en,Examples (SPSS Modeler),"Examples (SPSS Modeler)
Examples This section provides Python for Spark scripting examples.",conceptual,0,train
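For orientation, a hedged sketch of the general shape these Python for Spark examples take inside an Extension node follows; the column name used in the filter is an assumption.

    # Hedged sketch: read the incoming data as a Spark DataFrame, transform it,
    # and hand the result back to the flow.
    import spss.pyspark.runtime
    from pyspark.sql.functions import col

    ascontext = spss.pyspark.runtime.getContext()
    indf = ascontext.getSparkInputData()

    # Illustrative transformation: keep only rows with a positive "value" column
    outdf = indf.filter(col("value") > 0)

    ascontext.setSparkOutputData(outdf)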
97C26F347FD5A13FBC5B24FC567FCF7ADF8CE0C3,https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-create-model.html?context=cdpaas&locale=en,Creating your own models,"Creating your own models
Creating your own models Certain algorithms in Watson Natural Language Processing can be trained with your own data, for example you can create custom models based on your own data for entity extraction, to classify data, to extract sentiments, and to extract target sentiments. Starting with Runtime 23.1 you can use the new built-in transformer-based IBM foundation model called Slate to create your own models. The Slate model has been trained on a very large data set that was preprocessed to filter hate, bias, and profanity. To create your own classification, entity extraction model, or sentiment model you can fine-tune the Slate model on your own data. To train the model in reasonable time, it's recommended to use GPU-based environments. * [Detecting entities with a custom dictionary](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-entities-dict.html) * [Detecting entities with regular expressions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-entities-regex.html) * [Detecting entities with a custom transformer model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-entities-transformer.html) * [Classifying text with a custom classification model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-classify-text.html) * [Extracting sentiment with a custom transformer model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-extract-sentiment.html) * [Extracting targets sentiment with a custom transformer model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-target-sentiment.html) Language support for custom models You can create custom models and use the following pretrained dictionary and classification models for the shown languages. For a list of the language codes and the corresponding languages, see [Language codes](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-catalog.htmllang-codes). Supported languages for out-of-the-box custom models Custom model Supported language codes Dictionary models af, ar, bs, ca, cs, da, de, el, en, es, fi, fr, he, hi, hr, it, ja, ko, nb, nl, nn, pl, pt, ro, ru, sk, sr, sv, tr, zh_cn, zh_tw (all languages supported in the Syntax part of speech tagging) Regexes af, ar, bs, ca, cs, da, de, el, en, es, fi, fr, he, hi, hr, it, ja, ko, nb, nl, nn, pl, pt, ro, ru, sk, sr, sv, tr, zh_cn, zh_tw (all languages supported in the Syntax part of speech tagging) SVM classification with TFIDF af, ar, ca, cs, da, de, el, en, es, fi, fr, he, hi, hr, it, ja, ko, nb, nl, nn, pl, pt, ro, ru, sk, sr, sv, tr, zh_cn, zh_tw SVM classification with USE ar, de, en, es, fr, it, ja, ko, nl, pl, pt, ru, tr, zh_cn, zh_tw CNN classification with GloVe ar, de, en, es, fr, it, ja, ko, nl, pt, zh_cn BERT Multilingual classification af, ar, ca, cs, da, de, el, en, es, fi, fr, he, hi, hr, it, ja, ko, nb, nl, nn, pl, pt, ro, ru, sk, sr, sv, tr, zh_cn, zh_tw Transformer model af, ar, bs, ca, cs, da, de, el, en, es, fi, fr, he, hi, hr, it, ja, ko, nb, nl, nn, pl, pt, ro, ru, sk, sr, sv, tr, zh_cn, zh_tw Stopword lists ar, de, en, es, fr, it, ja, ko Saving and loading custom models If you want to use your custom model in another notebook, save it as a Data Asset to your project. This way, you can export the model as part of a project export. Use the ibm-watson-studio-lib library to save and load custom models. 
To save a custom model in your notebook as a data asset to export and use in another project: 1. Ensure that you have an access token on the Access control page on the Manage tab of your project. Only project admins can create access tokens. The access token can have viewer or editor access permissions. Only editors can inject the token into a notebook. 2. Add the project token to a notebook by clicking More > Insert project token from the notebook action bar and then running the cell. When you run the inserted hidden code cell, a wslib object is created that you can use for functions in the ibm-watson-studio-lib library. For details on the available ibm-watson-studio-lib functions, see [Using ibm-watson-studio-lib for Python](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ws-lib-python.html). 3. Run the train() method to create a custom dictionary, regular expression, or classification model and assign this custom model to a variable. For example: custom_block = CNN.train(train_stream, embedding_model.embedding, verbose=2) 4. If you want to save a custom dictionary or regular expression model, convert it to an RBRGeneric block. Converting a custom dictionary or regular expression model to an RBRGeneric block is useful if you want to load and execute the model using the [API for Watson Natural Language Processing for Embed](https://www.ibm.com/docs/en/watson-libraries?topic=home-api-reference). To date, Watson Natural Language Processing for Embed supports running dictionary and regular expression models only as RBRGeneric blocks. To convert a model to an RBRGeneric block, run the following commands:
# Create the custom regular expression model
custom_regex_block = watson_nlp.resources.feature_extractor.RBR.train(module_folder, language='en', regexes=regexes)
# Save the model to the local file system
custom_regex_model_path = 'some/path'
custom_regex_block.save(custom_regex_model_path)
# The model is saved in a file ""executor.zip"" in the provided path, in this case ""some/path/executor.zip""
model_path = os.path.join(custom_regex_model_path, 'executor.zip')
# Re-load the model as an RBRGeneric block
custom_block = watson_nlp.blocks.rules.RBRGeneric(watson_nlp.toolkit.rule_utils.RBRExecutor.load(model_path), language='en')
5. Save the model as a Data Asset to your project using ibm-watson-studio-lib: wslib.save_data("""", custom_block.as_bytes(), overwrite=True)
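To use the saved model in another notebook in the same project, load the data asset with ibm-watson-studio-lib and reload it with watson_nlp. The following is a minimal sketch rather than a documented recipe: the asset name my_custom_model is hypothetical, and it assumes that wslib.load_data() returns a bytes-like buffer and that watson_nlp.load() can read a saved model from a local path.
import os
import watson_nlp

# Hypothetical asset name; use the name that you passed to wslib.save_data()
asset_name = 'my_custom_model'

# Read the stored model bytes from the project (uses the wslib object created by the inserted project token cell)
model_bytes = wslib.load_data(asset_name).read()

# Write the bytes to the local file system so that the model can be loaded from a path
os.makedirs('models', exist_ok=True)
local_path = os.path.join('models', asset_name)
with open(local_path, 'wb') as f:
    f.write(model_bytes)

# Reload the custom model; assumes watson_nlp.load() accepts a local file path
custom_block = watson_nlp.load(local_path)
Because the model was saved with as_bytes(), the exact reload call can differ by model type; check the ibm-watson-studio-lib and Watson Natural Language Processing documentation for your runtime.",how-to,1,train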
093BFFCB43C46F1068A59A6B6338C955BF20AABF,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/multiplotnodeslots.html?context=cdpaas&locale=en,multiplotnode properties,"multiplotnode properties
multiplotnode properties The Multiplot node creates a plot that displays multiple Y fields over a single X field. The Y fields are plotted as colored lines; each is equivalent to a Plot node with Style set to Line and X Mode set to Sort. Multiplots are useful when you want to explore the fluctuation of several variables over time. multiplotnode properties Table 1. multiplotnode properties multiplotnode properties Data type Property description x_field field y_fields list panel_field field animation_field field normalize flag use_overlay_expr flag overlay_expression string records_limit number if_over_limit PlotBins / PlotSample / PlotAll x_label_auto flag x_label string y_label_auto flag y_label string use_grid flag graph_background color Standard graph colors are described at the beginning of this section. page_background color Standard graph colors are described at the beginning of this section.
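As an illustration of setting these properties from a script, the following minimal sketch uses the standard SPSS Modeler Python scripting calls stream.createAt() and setPropertyValue(); the field names Date, Sales, and Returns are hypothetical.
stream = modeler.script.stream()

# Create a Multiplot node and position it on the flow canvas
node = stream.createAt('multiplot', 'Sales over time', 200, 100)

# Plot two Y fields over a single X field
node.setPropertyValue('x_field', 'Date')
node.setPropertyValue('y_fields', ['Sales', 'Returns'])

# Sample the data if more than 2000 records would be plotted
node.setPropertyValue('records_limit', 2000)
node.setPropertyValue('if_over_limit', 'PlotSample')
Connect the node to an upstream import node before you run the flow.",conceptual,0,train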
DEB599F49C3E459A08E8BF25304B063B50CAA294,https://dataplatform.cloud.ibm.com/docs/content/DO/WML_Deployment/DeployModelUI-WML.html?context=cdpaas&locale=en,Deploying a Decision Optimization model by using the user interface,"Deploying a Decision Optimization model by using the user interface
Deploying a Decision Optimization model by using the user interface You can save a model for deployment in the Decision Optimization experiment UI and promote it to your Watson Machine Learning deployment space. Procedure To save your model for deployment: 1. In the Decision Optimization experiment UI, either from the Scenario or from the Overview pane, click the menu icon  for the scenario that you want to deploy, and select Save for deployment. 2. Specify a name for your model and add a description, if needed, then click Next. 1. Review the Input and Output schema and select the tables that you want to include in the schema. 2. Review the Run parameters and add, modify, or delete any parameters as necessary. 3. Review the Environment and Model files that are listed in the Review and save window. 4. Click Save. The model is then available in the Models section of your project. To promote your model to your deployment space: 3. View your model in the Models section of your project. You can see a summary with input and output schema. Click Promote to deployment space. 4. In the Promote to space window that opens, check that the Target space field displays the name of your deployment space and click Promote. 5. Click the deployment space link in the message that confirms successful promotion. Your promoted model is displayed in the Assets tab of your deployment space. The information pane shows you the Type, Software specification, description, and any defined tags such as the Python version used. To create a new deployment: 6. From the Assets tab of your deployment space, open your model and click New Deployment. 7. In the Create a deployment window that opens, specify a name for your deployment and select a Hardware specification. Click Create to create the deployment. Your deployment window opens, from which you can later create jobs. Creating and running Decision Optimization jobs You can create and run jobs for your deployed model. Procedure 1. Return to your deployment space by using the navigation path and (if the data pane isn't already open) click the data icon to open the data pane. Upload your input data tables, and solution and kpi output tables here. (You must have output tables defined in your model to be able to see the solution and kpi values.) 2. Open your deployed model by selecting it in the Deployments tab of your deployment space and click New job. 3. Define the details of your job by entering a name and an optional description, and click Next. 4. Configure your job by selecting a hardware specification and clicking Next. You can choose to schedule your job here, or leave the default schedule option off and click Next. You can also optionally choose to turn on notifications or click Next. 5. Choose the data that you want to use in your job by clicking Select the source for each of your input and output tables. Click Next. 6. Review and create your job by clicking Create. When you receive a successful job creation message, you can view the job by opening it from your deployment space. There you can see the run status of your job. 7. Open the run for your job. Your job log opens and you can also view and copy the payload information.",how-to,1,train
CE13AE6812F1E2CA6AD429D4B01AF25F9F398148,https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-overview.html?context=cdpaas&locale=en,Deploying models with Watson Machine Learning,"Deploying models with Watson Machine Learning
Deploying models with Watson Machine Learning Using IBM Watson Machine Learning, you can deploy models, scripts, and functions, manage your deployments, and prepare your assets to put into production to generate predictions and insights. This graphic illustrates a typical process for a machine learning model. After you build and train a machine learning model, use Watson Machine Learning to deploy the model, manage the input data, and put your machine learning assets to use.  IBM Watson Machine Learning architecture and services Watson Machine Learning is a service on IBM Cloud with features for training and deploying machine learning models and neural networks. Built on a scalable, open source platform based on Kubernetes and Docker components, Watson Machine Learning enables you to build, train, deploy, and manage machine learning and deep learning models. Deploying and managing models with Watson Machine Learning Watson Machine Learning supports popular frameworks, including TensorFlow, scikit-learn, and PyTorch, for building and deploying models. For a list of supported frameworks, refer to [Supported frameworks](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/pm_service_supported_frameworks.html). To build and train a model: * Use one of the tools that are listed in [Analyzing data and building models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/data-science.html). * [Import a model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html) that you built and trained outside of Watson Studio. Deployment infrastructure * [Deploy trained models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-general.html) as a web service or for batch processing. * [Deploy Python functions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-py-function.html) to simplify AI solutions. Programming Interfaces * Use the [Python client library](https://ibm.github.io/watson-machine-learning-sdk/) to work with all of your Watson Machine Learning assets in a notebook. * Use the [REST API](https://cloud.ibm.com/apidocs/machine-learning) to call methods from the base URLs for the Watson Machine Learning API endpoints. * When you call the API, use the URL and add the path for each method to form the complete API endpoint for your requests. For details on checking endpoints, refer to [Looking up a deployment endpoint](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-online.html). Parent topic:[Deploying and managing models](https://dataplatform.cloud.ibm.com/docs/content/wsj/wmls/wmls-deploy-overview.html)
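As a concrete illustration of the Programming Interfaces listed above, here is a minimal, hedged sketch of scoring an online deployment with the Watson Machine Learning Python client; the API key, space ID, deployment ID, and field names are placeholders, and the credential format can vary by region and account.
from ibm_watson_machine_learning import APIClient

# Placeholder credentials; supply your own IBM Cloud API key and regional endpoint
wml_credentials = {
    'url': 'https://us-south.ml.cloud.ibm.com',
    'apikey': '<your IBM Cloud API key>'
}
client = APIClient(wml_credentials)

# Work in the deployment space that contains the online deployment
client.set.default_space('<deployment space id>')

# Score the deployment with one sample record; the field names are hypothetical
payload = {'input_data': [{'fields': ['AGE', 'INCOME'], 'values': [[35, 42000.0]]}]}
result = client.deployments.score('<deployment id>', payload)
print(result)
The REST API accepts an equivalent JSON payload at the deployment endpoint that is described in Looking up a deployment endpoint.",how-to,1,train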
A1365CD1E2ACBEE6E9BF025DD493FEB17A0D428F,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/tmwb_advanced_linguistic.html?context=cdpaas&locale=en,Advanced linguistic settings (SPSS Modeler),"Advanced linguistic settings (SPSS Modeler)
Advanced linguistic settings When you build categories, you can select from a number of advanced linguistic category building techniques such as concept inclusion and semantic networks (English text only). These techniques can be used individually or in combination with each other to create categories. Keep in mind that because every dataset is unique, the number of methods and the order in which you apply them may change over time. Since your text mining goals may be different from one set of data to the next, you may need to experiment with the different techniques to see which one produces the best results for the given text data. None of the automatic techniques will perfectly categorize your data; therefore we recommend finding and applying one or more automatic techniques that work well with your data. The following advanced settings are available for the Use linguistic techniques to build categories option in the category settings.",conceptual,0,train
0310B7FB9072E7F7E5D73F5AF90EDE62FAA81286,https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-hardware-configs.html?context=cdpaas&locale=en,Managing hardware configurations,"Managing hardware configurations
Managing hardware configurations When you deploy certain assets in Watson Machine Learning, you can choose the type, size, and power of the hardware configuration that matches your computing needs. Deployment types that require hardware specifications Selecting a hardware specification is available for all [batch deployment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-details.html) types. For [online deployments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-online.html), you can select a specific hardware specification if you're deploying: * Python functions * TensorFlow models * Models with custom software specifications Hardware configurations available for deploying assets * XS: 1x4 = 1 vCPU and 4 GB RAM * S: 2x8 = 2 vCPU and 8 GB RAM * M: 4x16 = 4 vCPU and 16 GB RAM * L: 8x32 = 8 vCPU and 32 GB RAM * XL: 16x64 = 16 vCPU and 64 GB RAM You can use the XS configuration to deploy: * Python functions * Python scripts * R scripts * Models based on custom libraries and custom images For Decision Optimization deployments, you can use these hardware specifications: * S * M * L * XL Learn more * [Monitoring account resource usage](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/monitor-resources.html) Parent topic:[Managing predictive deployments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-general.html)
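As an illustration only, the following hedged sketch shows how one of these configurations (S) can be requested when you create a batch deployment with the Watson Machine Learning Python client; the credentials, space ID, model ID, and deployment name are placeholders.
from ibm_watson_machine_learning import APIClient

# Placeholder credentials; supply your own IBM Cloud API key and regional endpoint
client = APIClient({'url': 'https://us-south.ml.cloud.ibm.com', 'apikey': '<your IBM Cloud API key>'})
client.set.default_space('<deployment space id>')

meta_props = {
    client.deployments.ConfigurationMetaNames.NAME: 'my batch deployment',
    client.deployments.ConfigurationMetaNames.BATCH: {},
    # Request the S configuration (2 vCPU and 8 GB RAM) listed above
    client.deployments.ConfigurationMetaNames.HARDWARE_SPEC: {'name': 'S', 'num_nodes': 1}
}
deployment = client.deployments.create(artifact_uid='<model id>', meta_props=meta_props)
The same hardware specification choices appear in the user interface when you create the deployment.",how-to,1,train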
577964B0C132F5EA793054C3FF67417DDA6511D3,https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-samples-overview.html?context=cdpaas&locale=en,Watson Machine Learning Python client samples and examples,"Watson Machine Learning Python client samples and examples
Watson Machine Learning Python client samples and examples Review and use sample Jupyter Notebooks that use the Watson Machine Learning Python library to demonstrate machine learning features and techniques. Each notebook lists learning goals so you can find the one that best meets your goals. Training and deploying models from notebooks If you choose to build a machine learning model in a notebook, you must be comfortable with coding in a Jupyter Notebook. A Jupyter Notebook is a web-based environment for interactive computing. You can run small pieces of code that process your data, and then immediately view the results of your computation. Using this tool, you can assemble, test, and run all of the building blocks you need to work with data, save the data to Watson Machine Learning, and deploy the model. Learn from sample notebooks Many ways exist to build and train models and then deploy them. Therefore, the best way to learn is to look at annotated samples that step you through the process by using different frameworks. Review representative samples that demonstrate key features. The samples are built by using the V4 version of the Watson Machine Learning [Python client library](https://ibm.github.io/watson-machine-learning-sdk/). Video disclaimer: Some minor steps and graphical elements in the videos might differ from your deployment. Watch this video to learn how to train, deploy, and test a machine learning model in a Jupyter Notebook. This video mirrors the Use scikit-learn to recognize hand-written digits sample found in the Deployment samples table. This video provides a visual method to learn the concepts and tasks in this documentation. Watch this video to learn how to test a model that was created with AutoAI by using the Watson Machine Learning APIs in a Jupyter Notebook. This video provides a visual method to learn the concepts and tasks in this documentation. Helpful variables Use the pre-defined PROJECT_ID environment variable to call the Watson Machine Learning Python client APIs. PROJECT_ID is the GUID of the project where your environment is running. Deployment samples View or run these Jupyter Notebooks to see how techniques are implemented by using various frameworks. Some of the samples rely on trained models, which are also available for you to download from the public repository. Sample name Framework Techniques demonstrated [Use scikit-learn and custom library to predict temperature](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/9365d34eeacef267026a2b75b92bfa2f) Scikit-learn Train a model with custom defined transformer
Persist the custom-defined transformer and the model in Watson Machine Learning repository
Deploy the model by using Watson Machine Learning Service
Perform predictions that use the deployed model [Use PMML to predict iris species](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/8bddf7f7e5d004a009c643750b16f5b4) PMML Deploy and score a PMML model [Use Python function to recognize hand-written digits](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/1eddc77b3a4340d68f762625d40b64f9) Python Use a function to store a sample model, then deploy the sample model. [Use scikit-learn to recognize hand-written digits](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/e20607d75c8473daaade1e77c21717d4) Scikit-learn Train sklearn model
Persist trained model in Watson Machine Learning repository
Deploy model for online scoring by using client library
Score sample records by using client library [Use Spark and batch deployment to predict customer churn](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/e20607d75c8473daaade1e77c21719c1) Spark Load a CSV file into an Apache Spark DataFrame
Explore data
Prepare data for training and evaluation
Create an Apache Spark machine learning pipeline
Train and evaluate a model
Persist a pipeline and model in Watson Machine Learning repository
Explore and visualize prediction result by using the plotly package
Deploy a model for batch scoring by using Watson Machine Learning API [Use Spark and Python to predict Credit Risk](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/e20607d75c8473daaade1e77c2173364) Spark Load a CSV file into an Apache® Spark DataFrame
Explore data
Prepare data for training and evaluation
Persist a pipeline and model in Watson Machine Learning repository from tar.gz files
Deploy a model for online scoring by using Watson Machine Learning API
Score sample data by using the Watson Machine Learning API
Explore and visualize prediction results by using the plotly package [Use SPSS to predict customer churn](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/e20607d75c8473daaade1e77c2175eb9) SPSS Work with the instance
Perform an online deployment of the SPSS model
Score data by using deployed model [Use XGBoost to classify tumors](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/ac820b22cc976f5cf6487260f4c8d9c8) XGBoost Load a CSV file into numpy array
Explore data
Prepare data for training and evaluation
Create an XGBoost machine learning model
Train and evaluate a model
Use cross-validation to optimize the model's hyperparameters
Persist a model in Watson Machine Learning repository
Deploy a model for online scoring
Score sample data [Predict business for cars](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/61a8b600f1bb183e2c471e7a64299f0e) Spark Download an externally trained Keras model with dataset.
Persist an external model in the Watson Machine Learning repository.
Deploy a model for online scoring by using client library.
Score sample records by using client library. [Deploy Python function for software specification](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/56825df5322b91daffd39426038808e9) Core Create a Python function
Create a web service
Score the model [Machine Learning artifact management](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/55ef73c276cd1bf2bae266613d08c0f3) Core Export and import artifacts
Load, deploy, and score externally created models [Use Decision Optimization to plan your diet](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/5502accad754a3c5dcb3a08f531cea5a) Core Create a diet planning model by using Decision Optimization [Use",how-to,1,train
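The Helpful variables note in the samples overview above refers to the pre-defined PROJECT_ID environment variable. As a minimal, hedged sketch (the endpoint and API key are placeholders), it can be read from the notebook environment and passed to the Python client like this:
import os
from ibm_watson_machine_learning import APIClient

# PROJECT_ID is pre-defined in Watson Studio notebook runtimes
project_id = os.environ['PROJECT_ID']

# Placeholder credentials; supply your own IBM Cloud API key and regional endpoint
client = APIClient({'url': 'https://us-south.ml.cloud.ibm.com', 'apikey': '<your IBM Cloud API key>'})
client.set.default_project(project_id)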
277C8CB678CAF766466EDE03C506EB0A822FD400,https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/DOconnections.html?context=cdpaas&locale=en,Supported data sources in Decision Optimization,"Supported data sources in Decision Optimization
Supported data sources in Decision Optimization Decision Optimization supports the following relational and nonrelational data sources on watsonx.ai. * [IBM data sources](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/DOconnections.html?context=cdpaas&locale=enDOConnections__ibm-data-src) * [Third-party data sources](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/DOconnections.html?context=cdpaas&locale=enDOConnections__third-party-data-src)",conceptual,0,train
3EAAFDDADE769D3B0300BE1401BB3D7E68B312DD,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/autonumericnuggetnodeslots.html?context=cdpaas&locale=en,applyautonumericnode properties,"applyautonumericnode properties
applyautonumericnode properties You can use Auto Numeric modeling nodes to generate an Auto Numeric model nugget. The scripting name of this model nugget is applyautonumericnode. For more information on scripting the modeling node itself, see [autonumericnode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/rangepredictornodeslots.htmlrangepredictornodeslots). applyautonumericnode properties Table 1. applyautonumericnode properties applyautonumericnode Properties Values Property description calculate_standard_error flag",conceptual,0,train
0999F59BB8E2E2AB7722D57CDBC051A0984ABE45,https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/data_flows.html?context=cdpaas&locale=en,Managing Data Refinery flows,"Managing Data Refinery flows
Managing Data Refinery flows A Data Refinery flow is an ordered set of steps to cleanse, shape, and enhance data. As you [refine your data](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/refining_data.htmlrefine) by [applying operations](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/gui_operations.html) to a data set, you dynamically build a customized Data Refinery flow that you can modify in real time and save for future use. These are actions that you can do while you refine your data: Working with the Data Refinery flow * [Save a Data Refinery flow](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/data_flows.html?context=cdpaas&locale=ensave) * [Run or schedule a job for Data Refinery flow](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/data_flows.html?context=cdpaas&locale=enjobs) * [Rename a Data Refinery flow](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/data_flows.html?context=cdpaas&locale=enrename) Steps * [Undo or redo a step](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/data_flows.html?context=cdpaas&locale=enundo) * [Edit, duplicate, insert, or delete a step](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/data_flows.html?context=cdpaas&locale=enedit-duplicate) * [View the Data Refinery flow steps in a ""snapshot view""](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/data_flows.html?context=cdpaas&locale=ensnapshot) * [Export the Data Refinery flow data to a CSV file](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/data_flows.html?context=cdpaas&locale=enexport) Working with the data sets * [Change the source of a Data Refinery flow](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/data_flows.html?context=cdpaas&locale=enchange) * [Edit the sample size](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/data_flows.html?context=cdpaas&locale=ensample) * [Edit the source properties ](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/data_flows.html?context=cdpaas&locale=enedit-source) * [Change the target of a Data Refinery flow](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/data_flows.html?context=cdpaas&locale=enoutput) * [Edit the target properties](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/data_flows.html?context=cdpaas&locale=enedit-target) * [Change the name of the Data Refinery flow target](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/data_flows.html?context=cdpaas&locale=enchange-name) Actions on the project page * [Reopen a Data Refinery flow to continue working](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/data_flows.html?context=cdpaas&locale=enreopen) * [Duplicate a Data Refinery flow](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/data_flows.html?context=cdpaas&locale=enclone) * [Delete a Data Refinery flow](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/data_flows.html?context=cdpaas&locale=enremove) * [Promote a Data Refinery flow to a space](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/data_flows.html?context=cdpaas&locale=enpromote) Working with the Data Refinery flow Save a Data Refinery flow Save a Data Refinery flow by clicking the Save Data Refinery flow icon  in the Data Refinery toolbar. Data Refinery flows are saved to the project that you're working in. Save a Data Refinery flow so that you can continue refining a data set later. 
The default output of the Data Refinery flow is saved as a data asset source-file-name_shaped.csv. For example, if the source file is mydata.csv, the default name and output for the Data Refinery flow is mydata_csv_shaped. You can edit the name and add an extension by [changing the target of a Data Refinery flow](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/data_flows.html?context=cdpaas&locale=enoutput). Run or schedule a job for a Data Refinery flow Data Refinery supports large data sets, which can be time-consuming and unwieldy to refine. So that you can work quickly and efficiently, Data Refinery operates on a sample subset of rows in the data set. The sample size is 1 MB or 10,000 rows, whichever comes first. When you run a job for the Data Refinery flow, the entire data set is processed. When you run the job, you select the runtime and you can add a one-time or repeating schedule. In Data Refinery, from the Data Refinery toolbar click the Jobs icon , and then select Save and create a job or Save and view jobs. After you save a Data Refinery flow, you can also create a job for it from the Project page. Go to the Assets tab, select the Data Refinery flow, choose New job from the overflow menu (). You must have the Admin or Editor role to view the job details or to edit or run the job. With the Viewer role for the project, you can view only the job details. For more information about jobs, see [Creating jobs in Data Refinery](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-job-dr.html). Rename a Data Refinery flow On the Data Refinery toolbar, open the Info pane . Or open the Flow settings  and go to the General tab. Steps Undo or redo a step Click the undo () icon or the redo () icon on the toolbar. Edit, duplicate, insert, or delete a step In the Steps pane, click the overflow menu () on the step for the operation that you want to change. Select the action (Edit, Duplicate, Insert step before, Insert step after, or Delete). * If you select Edit, Data Refinery goes into edit mode and either displays the operation to be edited on the command line or in the Operation pane. Apply the edited operation. * If you select Duplicate, the duplicated step is inserted after the selected step. Note:The Duplicate action is not available for the Join or Union operations. Data Refinery updates the Data Refinery flow to reflect the changes and reruns all the operations. View the Data Refinery flow steps in a ""snapshot view"" To see what your data looked like at any point in time, click a previous step to put Data Refinery into snapshot view. For example, if you click Data source, you see what your data looked like before you started refining it. Click any operation step to see what your data looked like after that operation was applied. To leave snapshot view, click Viewing step x of y or click the same step that you selected to get into snapshot view. Export the Data Refinery flow data to a CSV file Click Export ()",how-to,1,train
870BF64E17FEB1BBDAE7B35E9941DB781F26AD6B,https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-pipeline.html?context=cdpaas&locale=en,Quick start: Automate the lifecycle for a model with pipelines,"Quick start: Automate the lifecycle for a model with pipelines
Quick start: Automate the lifecycle for a model with pipelines You can create an end-to-end pipeline to deliver concise, pre-processed, and up-to-date data stored in an external data source. Read about Watson Pipelines, then watch a video and take a tutorial. Required services : Watson Studio : Watson Machine Learning Your basic workflow includes these tasks: 1. Open your sandbox project. Projects are where you can collaborate with others to work with data. 2. Add connections and data to the project. You can add CSV files or data from a remote data source through a connection. 3. Create a pipeline in the project. 4. Add nodes to the pipeline to perform tasks. 5. Run the pipeline and view the results. Read about pipelines The Watson Pipelines editor provides a graphical interface for orchestrating an end-to-end flow of assets from creation through deployment. Assemble and configure a pipeline to create, train, deploy, and update machine learning models and Python scripts. Putting a model into production is a multi-step process. Data must be loaded and processed, models must be trained and tuned before they are deployed and tested. Machine learning models require more observation, evaluation, and updating over time to avoid bias or drift. [Read more about pipelines](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-overview.html) [Learn about other ways to build models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/data-science.html) Watch a video about pipelines  Watch this video to preview the steps in this tutorial. You might notice slight differences in the user interface that is shown in the video. The video is intended to be a companion to the written tutorial. This video provides a visual method to learn the concepts and tasks in this documentation. Try a tutorial to create a model with Pipelines This tutorial guides you through exploring and running an AI pipeline to build and deploy a model. The model predicts if a customer is likely subscribe to a term deposit based on a marketing campaign. In this tutorial, you will complete these tasks: * [Task 1: Open a project.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-pipeline.html?context=cdpaas&locale=enstep01) * [Task 2: Create a deployment space.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-pipeline.html?context=cdpaas&locale=enstep02) * [Task 3: Create the sample pipeline.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-pipeline.html?context=cdpaas&locale=enstep03) * [Task 4: Explore an existing pipeline.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-pipeline.html?context=cdpaas&locale=enstep04) * [Task 5: Run the pipeline.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-pipeline.html?context=cdpaas&locale=enstep05) * [Task 6: View the assets, deployed model, and online deployment.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-pipeline.html?context=cdpaas&locale=enstep06) This tutorial takes approximately 30 minutes to complete. Sample data The sample data that is used in the guided experience is UCI: Bank marketing data used to predict whether a customer enrolls in a marketing promotion.  
Expand all sections * Tips for completing this tutorial ### Use the video picture-in-picture Tip: Start the video, then as you scroll through the tutorial, the video moves to picture-in-picture mode. Close the video table of contents for the best experience with picture-in-picture. You can use picture-in-picture mode so you can follow the video as you complete the tasks in this tutorial. Click the timestamps for each task to follow along.The following animated image shows how to use the video picture-in-picture and table of contents features: {: width=""560px"" height=""315px"" data-tearsheet=""this""} ### Get help in the community If you need help with this tutorial, you can ask a question or find an answer in the [Cloud Pak for Data Community discussion forum](https://community.ibm.com/community/user/cloudpakfordata/communities/community-home/digestviewer?communitykey=c0c16ff2-10ef-4b50-ae4c-57d769937235){: new_window}. ### Set up your browser windows For the optimal experience completing this tutorial, open Cloud Pak for Data in one browser window, and keep this tutorial page open in another browser window to switch easily between the two applications. Consider arranging the two browser windows side-by-side to make it easier to follow along. {: width=""560px"" height=""315px"" data-tearsheet=""this""} Tip: If you encounter a guided tour while completing this tutorial in the user interface, click Maybe later. [Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-pipeline.html?context=cdpaas&locale=envideo-preview) * Task 1: Open a project You need a project to store Prompt Lab assets. Watch a video to see how to create a sandbox project and associate a service. Then follow the steps to verify that you have an existing project or create a sandbox project. This video provides a visual method to learn the concepts and tasks in this documentation. 1. From the watsonx home screen, scroll to the Projects section. If you see any projects listed, then skip to [Task 2](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-pipeline.html?context=cdpaas&locale=enstep02). If you don't see any projects, then follow these steps to create a project. 1. Click Create a sandbox project. When the project is created, you will see the sandbox project in the Projects section. For more information or to watch a video, see [Creating a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/projects.html){: new_window}. \ {: iih} Check your progress The following image shows the home screen with the sandbox listed in the Projects section. You are now ready to open the Prompt Lab. {: width=""100%"" } [Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-pipeline.html?context=cdpaas&locale=envideo-preview) * Task 2: Create a",how-to,1,train
6F51A9033343574AEE2D292CB23F09D542456389,https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-model-tracking.html?context=cdpaas&locale=en,Enabling model tracking with AI factsheets,"Enabling model tracking with AI factsheets
Enabling model tracking with AI factsheets If your organization is using AI Factsheets as part of an AI governance strategy, you can track models after adding them to a space. Tracking a model populates a factsheet in an associated model use case. The model use cases are maintained in a model inventory in a catalog, providing a way for all stakeholders to view the lifecycle details for a machine learning model. From the inventory, collaborators can view the details for a model as it moves through the model lifecycle, including the request, development, deployment, and evaluation of the model. To enable model tracking by using AI Factsheets: 1. From the asset list in your space, click a model name and then click the Model details tab. 2. Click Track this model. 3. Associate the model with an existing model use case in the inventory or create a new use case. 4. Specify the details for the new use case, including specifying a catalog if you have access to more than one, and save to register the model. A link to the model inventory is added to the model details page. 5. Click the link to open the model use case in the inventory. 6. Optional: update the model use case. For example, add tags, supporting documentation, or other details.",how-to,1,train
BF6A65F061558B6AED8A438A887B6474A0FDFFC3,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/report.html?context=cdpaas&locale=en,Report node (SPSS Modeler),"Report node (SPSS Modeler)
Report node You can use the Report node to create formatted reports containing fixed text, data, or other expressions derived from the data. Specify the format of the report by using text templates to define the fixed text and the data output constructions. You can provide custom text formatting using HTML tags in the template and by setting output options. Data values and other conditional output are included in the report using CLEM expressions in the template.",conceptual,0,train
3CF77633A489E42B01086588D6613D65BFD51F7F,https://dataplatform.cloud.ibm.com/docs/content/wsd/tutorials/tut_intro_score.html?context=cdpaas&locale=en,Scoring records (SPSS Modeler),"Scoring records (SPSS Modeler)
Scoring records Earlier, we scored the same records used to estimate the model so we could evaluate how accurate the model was. Now we'll score a different set of records from the ones used to create the model. This is the goal of modeling with a target field: Study records for which you know the outcome, to identify patterns that will allow you to predict outcomes you don't yet know. Figure 1. Attaching new data for scoring  You could update the data asset Import node to point to a different data file, or you could add a new Import node that reads in the data you want to score. Either way, the new dataset must contain the same input fields used by the model (Age, Income level, Education and so on), but not the target field Credit rating. Alternatively, you could add the model nugget to any flow that includes the expected input fields. Whether read from a file or a database, the source type doesn't matter as long as the field names and types match those used by the model.",conceptual,0,train
F37BD72C28F0DAC8D9478ECEABA4F077ABCDE0C9,https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Notebooks/createScenario.html?context=cdpaas&locale=en,Decision Optimization notebook tutorial create new scenario,"Decision Optimization notebook tutorial create new scenario
Create new scenario To solve with different versions of your model or data you can create new scenarios in the Decision Optimization experiment UI. Procedure To create a new scenario: 1. Click the Open scenario pane icon  to open the Scenario panel. 2. Use the Create Scenario drop-down menu to create a new scenario from the current one. 3. Add a name for the duplicate scenario and click Create. 4. Working in your new scenario, in the Prepare data view, open the diet_food data table in full mode. 5. Locate the entry for Hotdog at row 9, and set the qmax value to 0 to exclude hot dog from possible solutions. 6. Switch to the Build model view and run the model again. 7. You can see the impact of your changes on the solution by switching from one scenario to the other.",how-to,1,train
B69246113E589F088E8E1302B32B57720BD27720,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/fields.html?context=cdpaas&locale=en,Fields (SPSS Modeler),"Fields (SPSS Modeler)
Fields Names in CLEM expressions that aren’t names of functions are assumed to be field names. You can write these simply as Power, val27, state_flag, and so on, but if the name begins with a digit or includes non-alphabetic characters, such as spaces (with the exception of the underscore), place the name within single quotation marks (for example, 'Power Increase', '2nd answer', '101', '$P-NextField'). Note: Fields that are quoted but undefined in the data set will be misread as strings.",conceptual,0,train
2D08EDD168FBEE078290F386F7EC3EB1998ADF02,https://dataplatform.cloud.ibm.com/docs/content/wsj/spark/time-reference-system.html?context=cdpaas&locale=en,Time reference system,"Time reference system
Time reference system Time reference system (TRS) is a local, regional, or global system used to identify time. A time reference system defines a specific projection for forward and reverse mapping between a timestamp and its numeric representation. A common example that most users are familiar with is UTC time, which maps a timestamp, for example 1 Jan 2019, 12 midnight (GMT), into a 64-bit integer value (1546300800000), which captures the number of milliseconds that have elapsed since 1 Jan 1970, 12 midnight (GMT). Generally speaking, the timestamp value is better suited for human readability, while the numeric representation is better suited for machine processing. In the time series library, a time series can be associated with a TRS. A TRS is composed of a: * Time tick that captures time granularity, for example 1 minute * Zoned date time that captures a start time, for example 1 Jan 2019, 12 midnight US Eastern Daylight Savings time (EDT). A timestamp is mapped into a numeric representation by computing the number of elapsed time ticks since the start time. A numeric representation is scaled by the granularity and shifted by the start time when it is mapped back to a timestamp. Note that this forward + reverse projection might lead to time loss. For instance, if the true time granularity of a time series is in seconds, then forward and reverse mapping of the time stamps 09:00:01 and 09:00:02 (to be read as hh:mm:ss) to a granularity of one minute would result in the time stamps 09:00:00 and 09:00:00 respectively. In this example, a time series, whose granularity is in seconds, is being mapped to minutes and thus the reverse mapping loses information. However, if the mapped granularity is higher than the granularity of the input time series (more specifically, if the time series granularity is an integral multiple of the mapped granularity) then the forward + reverse projection is guaranteed to be lossless. For example, mapping a time series, whose granularity is in minutes, to seconds and reverse projecting it to minutes would result in lossless reconstruction of the timestamps. Setting TRS When a time series is created, it is associated with a TRS (or None if no TRS is specified). If the TRS is None, then the numeric values cannot be mapped to timestamps. Note that TRS can only be set on a time series at construction time. The reason is that a time series by design is an immutable object. Immutability comes in handy when the library is used in multi-threaded environments or in distributed computing environments such as Apache Spark. While a TRS can be set only at construction time, it can be changed using the with_trs method as described in the next section. with_trs produces a new time series and thus has no impact on immutability. Let us consider a simple time series created from an in-memory list:
values = [1.0, 2.0, 4.0]
x = tspy.time_series(values)
x
This returns:
TimeStamp: 0 Value: 1.0
TimeStamp: 1 Value: 2.0
TimeStamp: 2 Value: 4.0
At construction time, the time series can be associated with a TRS. Associating a TRS with a time series allows its numeric timestamps to be interpreted as per the time tick and offset/timezone. 
The following example shows a time tick of 1 minute and a start time of 1 Jan 2019, 12 midnight (GMT):
zdt = datetime.datetime(2019,1,1,0,0,0,0,tzinfo=datetime.timezone.utc)
x_trs = tspy.time_series(values, granularity=datetime.timedelta(minutes=1), start_time=zdt)
x_trs
This returns:
TimeStamp: 2019-01-01T00:00Z Value: 1.0
TimeStamp: 2019-01-01T00:01Z Value: 2.0
TimeStamp: 2019-01-01T00:02Z Value: 4.0
Here is another example where the numeric timestamps are reinterpreted with a time tick of one hour and the offset/timezone as 1 Jan 2019, 12 midnight US Eastern Daylight Savings time (EDT).
tz_edt = datetime.timezone(datetime.timedelta(hours=-4))  # EDT is UTC-4
zdt = datetime.datetime(2019,1,1,0,0,0,0,tzinfo=tz_edt)
x_trs = tspy.time_series(values, granularity=datetime.timedelta(hours=1), start_time=zdt)
x_trs
This returns:
TimeStamp: 2019-01-01T00:00-04:00 Value: 1.0
TimeStamp: 2019-01-01T00:01-04:00 Value: 2.0
TimeStamp: 2019-01-01T00:02-04:00 Value: 4.0
Note that the timestamps now indicate an offset of -4 hours from GMT (the EDT timezone) and capture the time tick of one hour. Also note that setting a TRS does NOT change the numeric timestamps - it only specifies a way of interpreting numeric timestamps.
x_trs.print(human_readable=False)
This returns:
TimeStamp: 0 Value: 1.0
TimeStamp: 1 Value: 2.0
TimeStamp: 2 Value: 4.0
Changing TRS You can change the TRS associated with a time series by using the with_trs function. Note that this function throws an exception if the input time series is not associated with a TRS (if TRS is None). Using with_trs changes the numeric timestamps. The following code sample shows TRS set at construction time without using with_trs:
# 1546300800 is the epoch time in seconds for 1 Jan 2019, 12 midnight GMT
zdt1 = datetime.datetime(1970,1,1,0,0,0,0,tzinfo=datetime.timezone.utc)
y = tspy.observations.of(tspy.observation(1546300800, 1.0), tspy.observation(1546300860, 2.0), tspy.observation(1546300920, 4.0)).to_time_series(granularity=datetime.timedelta(seconds=1), start_time=zdt1)
y.print()
y.print(human_readable=False)
This returns:
TimeStamp: 2019-01-01T00:00Z Value: 1.0
TimeStamp: 2019-01-01T00:01Z Value: 2.0
TimeStamp: 2019-01-01T00:02Z Value: 4.0
TRS has been set during construction time - no changes to",how-to,1,train
7A9F4CDF362D1F06C3644EDBD634B2A77DDC6005,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/balancenodeslots.html?context=cdpaas&locale=en,balancenode properties,"balancenode properties
balancenode properties  The Balance node corrects imbalances in a dataset, so it conforms to a specified condition. The balancing directive adjusts the proportion of records where a condition is true by the factor specified. balancenode properties Table 1. balancenode properties balancenode properties Data type Property description directives Structured property to balance proportion of field values based on number specified. training_data_only flag Specifies that only training data should be balanced. If no partition field is present in the stream, then this option is ignored. This node property uses the format: [[ number, string ] \ [ number, string] \ ... [number, string ]]. Note: If strings (using double quotation marks) are embedded in the expression, they must be preceded by the escape character "" \ "". The "" \ "" character is also the line continuation character, which you can use to align the arguments for clarity.
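As an illustration of the directives format described above, the following minimal scripting sketch sets balancing directives on a Balance node with the standard setPropertyValue call; the field name Age and the boost factors are hypothetical.
stream = modeler.script.stream()
node = stream.createAt('balance', 'Boost older customers', 200, 100)

# Each directive is a [factor, condition] pair: adjust the proportion of records where the condition is true
node.setPropertyValue('directives', [[1.5, 'Age > 60'], [0.5, 'Age <= 60']])

# Balance only the training partition
node.setPropertyValue('training_data_only', True)
Connect the node downstream of an import node before you run the flow.",conceptual,0,train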
C535650C17CDE010EACBF5B6BF85FD8E593B77D6,https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-do.html?context=cdpaas&locale=en,"Quick start: Build, run, and deploy a Decision Optimization model","Quick start: Build, run, and deploy a Decision Optimization model
Quick start: Build, run, and deploy a Decision Optimization model You can build and run Decision Optimization models to help you make the best decisions to solve business problems based on your objectives. Read about Decision Optimization, then watch a video and take a tutorial that’s suitable for users with some knowledge of prescriptive analytics, but does not require coding. Your basic workflow includes these tasks: 1. Open your sandbox project. Projects are where you can collaborate with others to work with data. 2. Add a Decision Optimization Experiment to the project. You can add compressed files or data from sample files. 3. Associate a Watson Machine Learning Service with the project. 4. Create a deployment space to associate with the project's Watson Machine Learning Service. 5. Review the data, model objectives, and constraints in the Modeling Assistant. 6. Run one or more scenarios to test your model and review the results. 7. Deploy your model. Read about Decision Optimization Decision Optimization can analyze data and create an optimization model (with the Modeling Assistant) based on a business problem. First, an optimization model is derived by converting a business problem into a mathematical formulation that can be understood by the optimization engine. The formulation consists of objectives and constraints that define the model that the final decision is based on. The model, together with your input data, forms a scenario. The optimization engine solves the scenario by applying the objectives and constraints to limit millions of possibilities and provides the best solution. This solution satisfies the model formulation or relaxes certain constraints if the model is infeasible. You can test scenarios using different data, or by modifying the objectives and constraints and re-running them and viewing solutions. Once satisfied you can deploy your model. [Read more about Decision Optimization](https://dataplatform.cloud.ibm.com/docs/content/DO/DOWS-Cloud_home.html) Watch a video about creating a Decision Optimization model  Watch this video to see how to run a sample Decision Optimization experiment to create, solve, and deploy a Decision Optimization model with Watson Studio and Watson Machine Learning. Video disclaimer: Some minor steps and graphical elements in this video might differ from your platform. The user interface is frequently improved. This video provides a visual method to learn the concepts and tasks in this documentation. 
Try a tutorial to create a model that uses Decision Optimization In this tutorial, you will complete these tasks: * [Task 1: Open a project.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-do.html?context=cdpaas&locale=enstep01) * [Task 2: Create a Decision Optimization experiment in the project.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-do.html?context=cdpaas&locale=enstep02) * [Task 3: Build a model and visualize a scenario result.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-do.html?context=cdpaas&locale=enstep03) * [Task 4: Change model objectives and constraints.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-do.html?context=cdpaas&locale=enstep04) * [Task 5: Deploy the model.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-do.html?context=cdpaas&locale=enstep05) * [Task 6: Test the model.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-do.html?context=cdpaas&locale=enstep06) This tutorial will take approximately 30 minutes to complete. Expand all sections * Tips for completing this tutorial ### Use the video picture-in-picture Tip: Start the video, then as you scroll through the tutorial, the video moves to picture-in-picture mode. Close the video table of contents for the best experience with picture-in-picture. You can use picture-in-picture mode so you can follow the video as you complete the tasks in this tutorial. Click the timestamps for each task to follow along.The following animated image shows how to use the video picture-in-picture and table of contents features: {: width=""560px"" height=""315px"" data-tearsheet=""this""} ### Get help in the community If you need help with this tutorial, you can ask a question or find an answer in the [Cloud Pak for Data Community discussion forum](https://community.ibm.com/community/user/cloudpakfordata/communities/community-home/digestviewer?communitykey=c0c16ff2-10ef-4b50-ae4c-57d769937235){: new_window}. ### Set up your browser windows For the optimal experience completing this tutorial, open Cloud Pak for Data in one browser window, and keep this tutorial page open in another browser window to switch easily between the two applications. Consider arranging the two browser windows side-by-side to make it easier to follow along. {: width=""560px"" height=""315px"" data-tearsheet=""this""} Tip: If you encounter a guided tour while completing this tutorial in the user interface, click Maybe later. [Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-do.html?context=cdpaas&locale=envideo-preview) * Task 1: Open a project You need a project to store the data and the AutoAI experiment. You can use your sandbox project or create a project. 1. From the navigation menu {: iih}, choose Projects > View all projects 1. Open your sandbox project. If you want to use a new project: 1. Click New project. 1. Select Create an empty project. 1. Enter a name and optional description for the project. 1. Choose an existing [object storage service instance](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/storage-options.html){: new_window} or create a new one. 1. Click Create. 1. When the project opens, click the Manage tab and select the Services and integrations page. 1. On the IBM services tab, click Associate service. 1. Select your Watson Machine Learning instance. 
If you don't have a Watson Machine Learning service instance provisioned yet, follow these steps:",how-to,1,train
18A7A354C4B46E26DF8304755C8BE954BB922B04,https://dataplatform.cloud.ibm.com/docs/content/wsd/tutorials/tut_drug_browse.html?context=cdpaas&locale=en,Browsing a model (SPSS Modeler),"Browsing a model (SPSS Modeler)
Browsing the model When the C5.0 node runs, its model nugget is added to the flow. To browse the model, right-click the model nugget and choose View Model. The Tree Diagram displays the set of rules generated by the C5.0 node in a tree format. Now you can see the missing pieces of the puzzle. For people with an Na-to-K ratio less than 14.829 and high blood pressure, age determines the choice of drug. For people with low blood pressure, cholesterol level seems to be the best predictor. Figure 1. Tree diagram  You can hover over the nodes in the tree to see more details such as the number of cases for each blood pressure category and the confidence percentage of cases.",conceptual,0,train
9D9C67189BE5D6DB22575CF01A75BD5826B92074,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/autonumeric.html?context=cdpaas&locale=en,Auto Numeric node (SPSS Modeler),"Auto Numeric node (SPSS Modeler)
Auto Numeric node The Auto Numeric node estimates and compares models for continuous numeric range outcomes using a number of different methods, enabling you to try out a variety of approaches in a single modeling run. You can select the algorithms to use, and experiment with multiple combinations of options. For example, you could predict housing values using neural net, linear regression, C&RT, and CHAID models to see which performs best, and you could try out different combinations of stepwise, forward, and backward regression methods. The node explores every possible combination of options, ranks each candidate model based on the measure you specify, and saves the best for use in scoring or further analysis. Example : A municipality wants to more accurately estimate real estate taxes and to adjust values for specific properties as needed without having to inspect every property. Using the Auto Numeric node, the analyst can generate and compare a number of models that predict property values based on building type, neighborhood, size, and other known factors. Requirements : A single target field (with the role set to Target), and at least one input field (with the role set to Input). The target must be a continuous (numeric range) field, such as age or income. Input fields can be continuous or categorical, with the limitation that some inputs may not be appropriate for some model types. For example, C&R Tree models can use categorical string fields as inputs, while linear regression models cannot use these fields and will ignore them if specified. The requirements are the same as when using the individual modeling nodes. For example, a CHAID model works the same whether generated from the CHAID node or the Auto Numeric node. Frequency and weight fields : Frequency and weight are used to give extra importance to some records over others because, for example, the user knows that the build dataset under-represents a section of the parent population (Weight) or because one record represents a number of identical cases (Frequency). If specified, a frequency field can be used by C&R Tree and CHAID algorithms. A weight field can be used by C&RT, CHAID, Regression, and GenLin algorithms. Other model types will ignore these fields and build the models anyway. Frequency and weight fields are used only for model building and are not considered when evaluating or scoring models. Prefixes : If you attach a table node to the nugget for the Auto Numeric Node, there are several new variables in the table with names that begin with a $ prefix. : The names of the fields that are generated during scoring are based on the target field, but with a standard prefix. Different model types use different sets of prefixes. : For example, the prefixes $G, $R, $C are used as the prefix for predictions that are generated by the Generalized Linear model, CHAID model, and C5.0 model, respectively. $X is typically generated by using an ensemble, and $XR, $XS, and $XF are used as prefixes in cases where the target field is a Continuous, Categorical, or Flag field, respectively. : $..E prefixes are used for the prediction confidence of a Continuous target; for example, $XRE is used as a prefix for ensemble Continuous prediction confidence. $GE is the prefix for a single prediction of confidence for a Generalized Linear model.",conceptual,0,train
DE6C4CB72844FC59FD80FC0B26ACC8C94A3BA994,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/cache_nodes.html?context=cdpaas&locale=en,Caching options for nodes (SPSS Modeler),"Caching options for nodes (SPSS Modeler)
Caching options for nodes To optimize the running of flows, you can set up a cache on any nonterminal node. When you set up a cache on a node, the cache is filled with the data that passes through the node the next time you run the data flow. From then on, the data is read from the cache (which is stored temporarily) rather than from the data source. Caching is most useful following a time-consuming operation such as a sort, merge, or aggregation. For example, suppose that you have an import node set to read sales data from a database and an Aggregate node that summarizes sales by location. You can set up a cache on the Aggregate node rather than on the import node because you want the cache to store the aggregated data rather than the entire data set. Note: Caching at import nodes, which simply stores a copy of the original data as it is read into SPSS Modeler, won't improve performance in most circumstances. Nodes with caching enabled are displayed with a special circle-backslash icon. When the data is cached at the node, the icon changes to a check mark. Figure 1. Node with empty cache vs. node with full cache  A circle-backslash icon by node indicates that its cache is empty. When the cache is full, the icon becomes a check mark. If you want to replace the contents of the cache, you must first flush the cache and then re-run the data flow to refill it. In your flow, right-click the node and select .",how-to,1,train
6576530EC5D705B8BF323F6C459C32A87AE3F9A4,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/mlpas.html?context=cdpaas&locale=en,MultiLayerPerceptron-AS node (SPSS Modeler),"MultiLayerPerceptron-AS node (SPSS Modeler)
MultiLayerPerceptron-AS node Multilayer perceptron is a classifier based on the feedforward artificial neural network and consists of multiple layers. Each layer is fully connected to the next layer in the network. See [Multilayer Perceptron Classifier (MLPC)](https://spark.apache.org/docs/latest/ml-classification-regression.htmlmultilayer-perceptron-classifier) for details.^1^ The MultiLayerPerceptron-AS node in watsonx.ai is implemented in Spark. To use this node, you must set up an upstream Type node. The MultiLayerPerceptron-AS node will read input values from the Type node (or from the Types of an upstream import node). ^1^ ""Multilayer perceptron classifier."" Apache Spark. MLlib: Main Guide. Web. 5 Oct 2018.",conceptual,0,train
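Because the node wraps Spark MLlib's MultilayerPerceptronClassifier, a minimal PySpark sketch of the equivalent estimator can help clarify what the node trains under the covers. This is an illustration of the underlying Spark API, not of the node itself; the DataFrame names and layer sizes are assumptions.

from pyspark.ml.classification import MultilayerPerceptronClassifier

# layers: size of the input layer, one hidden layer of 5 units, and 3 output classes
mlp = MultilayerPerceptronClassifier(layers=[4, 5, 3], maxIter=100, seed=1234,
                                     featuresCol="features", labelCol="label")
model = mlp.fit(train_df)              # train_df is an assumed Spark DataFrame with features/label columns
predictions = model.transform(test_df) # score an assumed held-out DataFrame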
F4F623D5A7C8913E227E962BD1F347B36AAB7B51,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/expressions_and_conditions.html?context=cdpaas&locale=en,Expressions and conditions (SPSS Modeler),"Expressions and conditions (SPSS Modeler)
Expressions and conditions CLEM expressions can return a result (used when deriving new values). For example: Weight * 2.2 Age + 1 sqrt(Signal-Echo) Or, they can evaluate true or false (used when selecting on a condition). For example: Drug = ""drugA"" Age < 16 not(PowerFlux) and Power > 2000 You can combine operators and functions arbitrarily in CLEM expressions. For example: sqrt(abs(Signal)) * max(T1, T2) + Baseline Brackets and operator precedence determine the order in which the expression is evaluated. In this example, the order of evaluation is: * abs(Signal) is evaluated, and sqrt is applied to its result * max(T1, T2) is evaluated * The two results are multiplied: the * operator has higher precedence than + * Finally, Baseline is added to the result The descending order of precedence (that is, operations that are performed first to operations that are performed last) is as follows: * Function arguments * Function calls * xx * * / mod div rem * + – * > < >= <= /== == = /= If you want to override precedence, or if you're in any doubt about the order of evaluation, you can use parentheses to make it explicit. For example: sqrt(abs(Signal)) * (max(T1, T2) + Baseline)",conceptual,0,train
03A70C271775C3B15541B86E53E467844EF87296,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/jython/clementine/python_syntax_remarks.html?context=cdpaas&locale=en,Remarks,"Remarks
Remarks Remarks are comments that are introduced by the pound (or hash) sign (#). All text that follows the pound sign on the same line is considered part of the remark and is ignored. A remark can start in any column. The following example demonstrates the use of remarks:
# The HelloWorld application is one of the most simple
print 'Hello World'   # print the Hello World line",conceptual,0,train
CB81643BE8EE3B3DC2F6BCCDB77BD2CEC32C8926,https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/int-google.html?context=cdpaas&locale=en,Integrating with Google Cloud Platform,"Integrating with Google Cloud Platform
Integrating with Google Cloud Platform You can configure an integration with the Google Cloud Platform (GCP) to allow IBM watsonx users to access data sources from GCP. Before proceeding, make sure you have proper permissions. After you configure an integration, you'll see it under Service instances. For example, you'll see a new GCP tab that lists your BigQuery data sets and Storage buckets. To configure an integration with GCP: 1. Log on to the Google Cloud Platform at [https://console.cloud.google.com](https://console.cloud.google.com). 2. Go to IAM & Admin > Service Accounts. 3. Open your project and then click CREATE SERVICE ACCOUNT. Specify a name and description for the new service account and click CREATE. Specify other options as desired and click DONE. Click the actions menu next to the service account and select Create key. For key type, select JSON and then click CREATE. The JSON key file will be downloaded to your machine. Important: Write down your key ID and secret, and store the key file in a secure location. 4. In IBM watsonx, under Administrator > Cloud integrations, go to the GCP tab, enable integration, and then paste the contents from the JSON key file into the text field. Only certain properties from the JSON will be stored, and the private_key property will be encrypted. 5. Go back to Google Cloud Platform and edit the service account you created previously. Add the following roles: 6. Confirm that you can see your GCP services. From the main menu, choose Administration > Services > Service instances. Click the GCP tab to see those services, for example, BigQuery data sets and Storage buckets. Now users who have credentials to your GCP services can [create connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html) to them by selecting them on the Add connection page. Then they can access data from those connections by [creating connected data assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html). Next steps * [Set up a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/projects.html) * [Create connections in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html) Parent topic:",how-to,1,train
EC10AC085BA8A12BA0D8AF2DC66ADFBE759B3183,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/simgen.html?context=cdpaas&locale=en,Sim Gen node (SPSS Modeler),"Sim Gen node (SPSS Modeler)
Sim Gen node The Simulation Generate node provides an easy way to generate simulated data, either without historical data using user specified statistical distributions, or automatically using the distributions obtained from running a Simulation Fitting node on existing historical data. Generating simulated data is useful when you want to evaluate the outcome of a predictive model in the presence of uncertainty in the model inputs.",conceptual,0,train
A3022FF9DB2732F0AB3091884B428763D3879FD2,https://dataplatform.cloud.ibm.com/docs/content/wsd/tutorials/tut_intro_build.html?context=cdpaas&locale=en,Building the flow (SPSS Modeler),"Building the flow (SPSS Modeler)
Building the flow Figure 1. Modeling flow  To build a flow that will create a model, we need at least three elements: * A Data Asset node that reads in data from an external source, in this case a .csv data file * An Import or Type node that specifies field properties, such as measurement level (the type of data that the field contains), and the role of each field as a target or input in modeling * A modeling node that generates a model nugget when the flow runs In this example, we're using a CHAID modeling node. CHAID, or Chi-squared Automatic Interaction Detection, is a classification method that builds decision trees by using a particular type of statistics known as chi-square statistics to work out the best places to make the splits in the decision tree. If measurement levels are specified in the source node, the separate Type node can be eliminated. Functionally, the result is the same. This flow also has Table and Analysis nodes that will be used to view the scoring results after the model nugget has been created and added to the flow. The Data Asset import node reads data in from the sample tree_credit.csv data file. The Type node specifies the measurement level for each field. The measurement level is a category that indicates the type of data in the field. Our source data file uses three different measurement levels: A Continuous field (such as the Age field) contains continuous numeric values, while a Nominal field (such as the Credit rating field) has two or more distinct values, for example Bad, Good, or No credit history. An Ordinal field (such as the Income level field) describes data with multiple distinct values that have an inherent order—in this case Low, Medium and High. Figure 2. Setting the target and input fields with the Type node  For each field, the Type node also specifies a role to indicate the part that each field plays in modeling. The role is set to Target for the field Credit rating, which is the field that indicates whether or not a given customer defaulted on the loan. This is the target, or the field for which we want to predict the value. Role is set to Input for the other fields. Input fields are sometimes known as predictors, or fields whose values are used by the modeling algorithm to predict the value of the target field. The CHAID modeling node generates the model. In the node's properties, under FIELDS, the option Use custom field roles is available. We could select this option and change the field roles, but for this example we'll use the default targets and inputs as specified in the Type node. 1. Double-click the CHAID node (named Creditrating). The node properties are displayed. Figure 3. CHAID modeling node properties  Here there are several options where we could specify the kind of model we want to build. We want a brand-new model, so under OBJECTIVES we'll use the default option Build new model. We also just want a single, standard decision tree model without any enhancements, so we'll also use the default objective option Create a standard model. Figure 4. CHAID modeling node objectives  For this example, we want to keep the tree fairly simple, so we'll limit the tree growth by raising the minimum number of cases for parent and child nodes. 2. Under STOPPING RULES, select Use absolute value. 3. Set Minimum records in parent branch to 400. 4. Set Minimum records in child branch to 200. Figure 5. 
Setting the stopping criteria for decision tree building  We can use all the other default options for this example, so click Save and then click the Run button on the toolbar to create the model. (Alternatively, right-click the CHAID node and choose Run from the context menu.)",how-to,1,train
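For readers who prefer scripting to the canvas, the same setup can be sketched with the SPSS Modeler scripting API. The slot names used for the stopping rules (use_percentage, min_parent_records_abs, min_child_records_abs) are assumptions here, so check the chaidnode properties reference before relying on them; findByType, setPropertyValue, and node.run follow the patterns shown elsewhere in this documentation.

stream = modeler.script.stream()
chaid = stream.findByType("chaid", None)               # locate the CHAID node (named "Creditrating" in this flow)
chaid.setPropertyValue("use_percentage", False)        # equivalent of selecting "Use absolute value" (assumed slot name)
chaid.setPropertyValue("min_parent_records_abs", 400)  # minimum records in parent branch
chaid.setPropertyValue("min_child_records_abs", 200)   # minimum records in child branch
results = []
chaid.run(results)                                     # build the model, collecting the generated nugget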
02D819D225558542A49AB6E43F94FE062A509EA5,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/dataassetexportnodeslots.html?context=cdpaas&locale=en,dataassetexport properties,"dataassetexport properties
dataassetexport properties You can use the Data Asset Export node to write to remote data sources using connections, write to a data file on your local computer, or write data to a project. dataassetexport properties Table 1. dataassetexport properties dataassetexport properties Data type Property description user_settings string Escaped JSON string containing the interaction properties for the connection. Contact IBM for details about available interaction points.
Example:
user_settings: ""{""interactionProperties"":{""write_mode"":""write"",""file_name"":""output.csv"",""file_format"":""csv"",""quote_numerics"":true,""encoding"":""utf-8"",""first_line_header"":true,""include_types"":false}}""
Note that these values will change based on the type of connection you're using.",conceptual,0,train
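As a rough sketch of how this property might be set from a script (the interaction property values are copied from the example above and will differ for other connection types):

stream = modeler.script.stream()
export_node = stream.findByType("dataassetexport", None)   # locate an existing Data Asset Export node
export_node.setPropertyValue("user_settings",
    '{"interactionProperties":{"write_mode":"write","file_name":"output.csv","file_format":"csv",'
    '"quote_numerics":true,"encoding":"utf-8","first_line_header":true,"include_types":false}}')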
5E4D2166BB8C2B95E515591E014E7CA00B87BCA2,https://dataplatform.cloud.ibm.com/docs/content/wsd/tutorials/tut_ta_hotel_iwb.html?context=cdpaas&locale=en,Using the Text Analytics Workbench (SPSS Modeler),"Using the Text Analytics Workbench (SPSS Modeler)
Using the Text Analytics Workbench The Text Analytics Workbench contains the extraction results and the category model contained in the text analytics package.",conceptual,0,train
B7E56BEBF29F9AA59A9ABC9E299F19613E5859DA,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/twostepAS.html?context=cdpaas&locale=en,TwoStep-AS cluster node (SPSS Modeler),"TwoStep-AS cluster node (SPSS Modeler)
TwoStep-AS cluster node TwoStep Cluster is an exploratory tool that is designed to reveal natural groupings (or clusters) within a data set that would otherwise not be apparent. The algorithm that is employed by this procedure has several desirable features that differentiate it from traditional clustering techniques. * Handling of categorical and continuous variables. By assuming variables to be independent, a joint multinomial-normal distribution can be placed on categorical and continuous variables. * Automatic selection of number of clusters. By comparing the values of a model-choice criterion across different clustering solutions, the procedure can automatically determine the optimal number of clusters. * Scalability. By constructing a cluster feature (CF) tree that summarizes the records, the TwoStep algorithm can analyze large data files. For example, retail and consumer product companies regularly apply clustering techniques to information that describes their customers' buying habits, gender, age, income level, and other attributes. These companies tailor their marketing and product development strategies to each consumer group to increase sales and build brand loyalty.",conceptual,0,train
114EBF33612531C5020FD739010049E5126E0E5B,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/xgboostas.html?context=cdpaas&locale=en,XGBoost-AS node (SPSS Modeler),"XGBoost-AS node (SPSS Modeler)
XGBoost-AS node XGBoost© is an advanced implementation of a gradient boosting algorithm. Boosting algorithms iteratively learn weak classifiers and then add them to a final strong classifier. XGBoost is very flexible and provides many parameters that can be overwhelming to most users, so the XGBoost-AS node in Watson Studio exposes the core features and commonly used parameters. The XGBoost-AS node is implemented in Spark. For more information about boosting algorithms, see the [XGBoost Tutorials](http://xgboost.readthedocs.io/en/latest/tutorials/index.html). ^1^ Note that the XGBoost cross-validation function is not supported in Watson Studio. You can use the Partition node for this functionality. Also note that XGBoost in Watson Studio performs one-hot encoding automatically for categorical variables. Notes: * On Mac, version 10.12.3 or higher is required for building XGBoost-AS models. * XGBoost isn't supported on IBM POWER. ^1^ ""XGBoost Tutorials."" Scalable and Flexible Gradient Boosting. Web. © 2015-2016 DMLC.",conceptual,0,train
6E50438308B85E969B79DED22CC5E15F6872EE85,https://dataplatform.cloud.ibm.com/docs/content/wsd/tutorials/tut_autocont.html?context=cdpaas&locale=en,Automated modeling for a continuous target (SPSS Modeler),"Automated modeling for a continuous target (SPSS Modeler)
Automated modeling for a continuous target You can use the Auto Numeric node to automatically create and compare different models for continuous (numeric range) outcomes, such as predicting the taxable value of a property. With a single node, you can estimate and compare a set of candidate models and generate a subset of models for further analysis. The node works in the same manner as the Auto Classifier node, but for continuous rather than flag or nominal targets.",conceptual,0,train
5466D9A71E87BB01000DC957683E9CD3C10AD8BC,https://dataplatform.cloud.ibm.com/docs/content/dataview/chart_creation_bubble.html?context=cdpaas&locale=en,Bubble charts,"Bubble charts
Bubble charts Bubble charts display categories in your groups as nonhierarchical packed circles. The size of each circle (bubble) is proportional to its value. Bubble charts are useful for comparing relationships in your data.",conceptual,0,train
8BC347015FD7CE2AF13B17DE4D287471CB994F38,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/jython/clementine/python_api_intro.html?context=cdpaas&locale=en,The scripting API,"The scripting API
The scripting API The Scripting API provides access to a wide range of SPSS Modeler functionality. All the methods described so far are part of the API and can be accessed implicitly within the script without further imports. However, if you want to reference the API classes, you must import the API explicitly with the following statement: import modeler.api This import statement is required by many of the scripting API examples.",conceptual,0,train
E990E009903E315FA6752E7E82C2634AF4A425B9,https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/DOintro.html?context=cdpaas&locale=en,Ways to use Decision Optimization,"Ways to use Decision Optimization
Ways to use Decision Optimization To build Decision Optimization models, you can create Python notebooks with DOcplex, a native Python API for Decision Optimization, or use the Decision Optimization experiment UI that has more benefits and features.",conceptual,0,train
97B722619AFC616F13BEB20CD7A8FBC29CFF50D1,https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/job-views-projs.html?context=cdpaas&locale=en,Viewing jobs across projects,"Viewing jobs across projects
Viewing jobs across projects You can view the jobs that exist across projects for assets that run in tools, such as notebooks, Data Refinery flows, and SPSS Modeler flows. To view the status of jobs or job runs in projects: 1. From the navigation menu, select Projects > Jobs. 2. Select a view scope: * Jobs with finished runs: all jobs that contain finished runs * Finished runs: all job runs that have finished * Jobs with active runs: all jobs that contain that contain active runs * Active runs: all job runs that are still active 3. Click  from the table toolbar to further narrow down the returned search results for the view scope you selected. The filter options vary depending the view scope selection, for example, for jobs with active runs, you can filter by run state, job type and project, whereas for finished runs by time, run state, whether the runs were started manually or by a schedule, job type, run duration and project. Parent topic:[Jobs](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/jobs.html)",how-to,1,train
9346A72CFCD74DFDA05213A2A321BF9CFB823358,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/apriori.html?context=cdpaas&locale=en,Apriori node (SPSS Modeler),"Apriori node (SPSS Modeler)
Apriori node The Apriori node discovers association rules in your data. Association rules are statements of the form: if antecedent(s) then consequent(s) For example, if a customer purchases a razor and after shave, then that customer will purchase shaving cream with 80% confidence. Apriori extracts a set of rules from the data, pulling out the rules with the highest information content. The Apriori node also discovers association rules in the data. Apriori offers five different methods of selecting rules and uses a sophisticated indexing scheme to efficiently process large data sets. Requirements. To create an Apriori rule set, you need one or more Input fields and one or more Target fields. Input and output fields (those with the role Input, Target, or Both) must be symbolic. Fields with the role None are ignored. Fields types must be fully instantiated before executing the node. Data can be in tabular or transactional format. Strengths. For large problems, Apriori is generally faster to train. It also has no arbitrary limit on the number of rules that can be retained and can handle rules with up to 32 preconditions. Apriori offers five different training methods, allowing more flexibility in matching the data mining method to the problem at hand.",conceptual,0,train
27E7AD16129A9DC8AC8CE2EE79C9B584D441F0DE,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/scripting_accessresults.html?context=cdpaas&locale=en,Accessing flow run results,"Accessing flow run results
Accessing flow run results Many SPSS Modeler nodes produce output objects such as models, charts, and tabular data. Many of these outputs contain useful values that can be used by scripts to guide subsequent runs. These values are grouped into content containers (referred to as simply containers) which can be accessed using tags or IDs that identify each container. The way these values are accessed depends on the format or ""content model"" used by that container. For example, many predictive model outputs use a variant of XML called PMML to represent information about the model such as which fields a decision tree uses at each split, or how the neurons in a neural network are connected and with what strengths. Model outputs that use PMML provide an XML Content Model that can be used to access that information. For example: stream = modeler.script.stream() Assume the flow contains a single C5.0 model builder node and that the datasource, predictors, and targets have already been set up modelbuilder = stream.findByType(""c50"", None) results = [] modelbuilder.run(results) modeloutput = results[0] Now that we have the C5.0 model output object, access the relevant content model cm = modeloutput.getContentModel(""PMML"") The PMML content model is a generic XML-based content model that uses XPath syntax. Use that to find the names of the data fields. The call returns a list of strings match the XPath values dataFieldNames = cm.getStringValues(""/PMML/DataDictionary/DataField"", ""name"") SPSS Modeler supports the following content models in scripting: * Table content model provides access to the simple tabular data represented as rows and columns. * XML content model provides access to content stored in XML format. * JSON content model provides access to content stored in JSON format. * Column statistics content model provides access to summary statistics about a specific field. * Pair-wise column statistics content model provides access to summary statistics between two fields or values between two separate fields. Note that the following nodes don't contain these content models: * Time Series * Discriminant * SLRM * All Extension nodes * All Database Modeling nodes",how-to,1,train
85E9CAC1F581E61092CFF1F6BE38570EE734C115,https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-export.html?context=cdpaas&locale=en,Exporting space assets from deployment spaces,"Exporting space assets from deployment spaces
Exporting space assets from deployment spaces You can export assets from a deployment space so that you can share the space with others or reuse the assets in another space. For a list of assets that you can export from a space, refer to [Assets in a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets-all.html). Exporting space assets from the UI Important: To avoid problems with importing the space, export all dependencies together with the space. For more information, see [Exporting a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/export-project.html). To export space assets from the UI: 1. From your deployment space, click the import and export space () icon. From the list, select Export space. 2. Click New export file. Specify a file name and an optional description. Tip: To encrypt sensitive data in the exported archive, type the password in the Password field. 3. Select the assets that you want to export with the space. 4. Click Create to create the export file. 5. After the space is exported, click the download () icon to save the file. You can reuse this space by choosing Create a space from a file when you create a new space. Learn more * [Importing spaces and projects into existing deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-import-to-space.html). Parent topic:[Deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-spaces_local.html)",how-to,1,train
8109B6380043CE464115025DD32A7A821FD56DB7,https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-tuning-studio.html?context=cdpaas&locale=en,Quick start: Tune a foundation model,"Quick start: Tune a foundation model
Quick start: Tune a foundation model There are a couple of reasons to tune your foundation model. By tuning a model on many labeled examples, you can enhance the model performance compared to prompt engineering alone. By tuning a base model to perform similarly to a bigger model in the same model family, you can reduce costs by deploying that smaller model. Required services : Watson Studio : Watson Machine Learning Your basic workflow includes these tasks: 1. Open a project. Projects are where you can collaborate with others to work with data. 2. Add your data to the project. You can upload data files, or add data from a remote data source through a connection. 3. Create a Tuning experiment in the project. The tuning experiment uses the Tuning Studio experiment builder. 4. Review the results of the experiment and the tuned model. The results include a Loss Function chart and the details of the tuned model. 5. Deploy and test your tuned model. Test your model in the Prompt Lab. Read about tuning a foundation model Prompt tuning adjusts the content of the prompt that is passed to the model. The underlying foundation model and its parameters are not edited. Only the prompt input is altered. You tune a model with the Tuning Studio to guide an AI foundation model to return the output you want. [Read more about Tuning Studio](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-studio.html) Watch a video about tuning a foundation model  Watch this video to preview the steps in this tutorial. There might be slight differences in the user interface that is shown in the video. The video is intended to be a companion to the written tutorial. This video provides a visual method to learn the concepts and tasks in this documentation. Try a tutorial to tune a foundation model In this tutorial, you will complete these tasks: * [Task 1: Open a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-tuning-studio.html?context=cdpaas&locale=enstep01) * [Task 2: Test your base model](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-tuning-studio.html?context=cdpaas&locale=enstep02) * [Task 3: Add your data to the project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-tuning-studio.html?context=cdpaas&locale=enstep03) * [Task 4: Create a Tuning experiment in the project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-tuning-studio.html?context=cdpaas&locale=enstep04) * [Task 5: Configure the Tuning experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-tuning-studio.html?context=cdpaas&locale=enstep05) * [Task 6: Deploy your tuned model to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-tuning-studio.html?context=cdpaas&locale=enstep06) * [Task 7: Test your tuned model](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-tuning-studio.html?context=cdpaas&locale=enstep07) Expand all sections * Tips for completing this tutorial ### Use the video picture-in-picture Tip: Start the video, then as you scroll through the tutorial, the video moves to picture-in-picture mode. Close the video table of contents for the best experience with picture-in-picture. You can use picture-in-picture mode so you can follow the video as you complete the tasks in this tutorial. 
Click the timestamps for each task to follow along. The following animated image shows how to use the video picture-in-picture and table of contents features: ### Get help in the community If you need help with this tutorial, you can ask a question or find an answer in the [Cloud Pak for Data Community discussion forum](https://community.ibm.com/community/user/cloudpakfordata/communities/community-home/digestviewer?communitykey=c0c16ff2-10ef-4b50-ae4c-57d769937235). ### Set up your browser windows For the optimal experience completing this tutorial, open Cloud Pak for Data in one browser window, and keep this tutorial page open in another browser window to switch easily between the two applications. Consider arranging the two browser windows side-by-side to make it easier to follow along. Tip: If you encounter a guided tour while completing this tutorial in the user interface, click Maybe later. [Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-tuning-studio.html?context=cdpaas&locale=envideo-preview) * Task 1: Open a project  To preview this task, watch the video beginning at 00:04. You need a project to store the tuning experiment. Watch a video to see how to create a sandbox project and associate a service. Then follow the steps to verify that you have an existing project or create a sandbox project. This video provides a visual method to learn the concepts and tasks in this documentation. ### Verify an existing project or create a new project 1. From the watsonx home screen, scroll to the Projects section. If you see any projects that are listed, then skip to [Associate the Watson Machine Learning service](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-tuning-studio.html?context=cdpaas&locale=enassociate). If you don't see any projects, then follow these steps to create a project. 1. Click Create a sandbox project. When the project is created, you see the sandbox in the Projects section. 1. Open an existing project or the new sandbox project. ### Associate the Watson Machine Learning service with the project You use Watson Machine Learning to tune the foundation model, so follow these steps to associate your Watson Machine Learning service instance with your project. 1. In the project, click the Manage tab. 1. Click the Services & Integrations page. 1. Check whether this project has an associated Watson Machine Learning service. If there is no associated",how-to,1,train
1B0AB9084C7DD9546BDC2F376B58E32C0ECFEE85,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/extension_build.html?context=cdpaas&locale=en,Extension Model node (SPSS Modeler),"Extension Model node (SPSS Modeler)
Extension Model node With the Extension Model node, you can run R scripts or Python for Spark scripts to build and score models. After adding the node to your canvas, double-click the node to open its properties.",conceptual,0,train
FC8DBF139A485E98914CBB73B8BA684B283AE983,https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-deploy.html?context=cdpaas&locale=en,Deploying a tuned foundation model,"Deploying a tuned foundation model
Deploying a tuned foundation model Deploy a tuned model so you can add it to a business workflow and start to use foundation models in a meaningful way. Before you begin The tuning experiment that you used to tune the foundation model must be finished. For more information, see [Tuning a foundation model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-tune.html). Deploy a tuned model To deploy a tuned model, complete the following steps: 1. From the navigation menu, expand Projects, and then click All projects. 2. Click to open your project. 3. From the Assets tab, click the Experiments asset type. 4. Click to open the tuning experiment for the model you want to deploy. 5. From the Tuned models list, find the completed tuning experiment, and then click New deployment. 6. Name the tuned model. The name of the tuning experiment is used as the tuned model name if you don't change it. The name has a number after it in parentheses, which counts the deployments. The number starts at one and is incremented by one each time you deploy this tuning experiment. 7. Optional: Add a description and tags. 8. In the Target deployment space field, choose a deployment space. The deployment space must be associated with a machine learning instance that is in the same account as the project where the tuned model was created. If you don't have a deployment space, choose Create a new deployment space, and then follow the steps in [Creating deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-create.html). For more information, see [What is a deployment space?](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-deploy.html?context=cdpaas&locale=endeployment-space) 9. In the Deployment serving name field, add a label for the deployment. The serving name is used in the URL for the API endpoint that identifies your deployment. Adding a name is helpful because the human-readable name that you add replaces a long, system-generated ID that is assigned otherwise. The serving name also abstracts the deployment from its service instance details. Applications refer to this name, which allows the underlying service instance to be changed without impacting users. The name can have up to 36 characters. The supported characters are [a-z,0-9,_]. The name must be unique across the IBM Cloud region. You might be prompted to change the serving name if the name you choose is already in use. 10. Tip: Select View deployment in deployment space after creating. Otherwise, you need to take more steps to find your deployed model. 11. Click Deploy. After the tuned model is promoted to the deployment space and deployed, a copy of the tuned model is stored in your project as a model asset. What is a deployment space? When you create a new deployment, a tuned model is promoted to a deployment space, and then deployed. A deployment space is separate from the project where you create the asset. A deployment space is associated with the following services that it uses to deploy assets: * Watson Machine Learning: A product with tools and services you can use to build, train, and deploy machine learning models. This service hosts your tuned model. * IBM Cloud Object Storage: A secure platform for storing structured and unstructured data. Your deployed model asset is stored in a Cloud Object Storage bucket that is associated with your project. 
For more information, see [Deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-spaces_local.html). Testing the deployed model The true test of your tuned model is how it responds to input that follows the patterns it was tuned for. You can test the tuned model from one of the following pages: * Prompt Lab: A tool with an intuitive user interface for prompting foundation models. You can customize the prompt parameters for each input. You can also save the prompt as a notebook so you can interact with it programmatically. * Deployment space: Useful when you want to test your model programmatically. From the API Reference tab, you can find information about the available endpoints and code examples. You can also submit input as text and choose to return the output all at once or as a stream as the output is generated. However, you cannot change the prompt parameters for the input text. To test your tuned model, complete the following steps: 1. From the navigation menu, select Deployments. 2. Click the name of the deployment space where you deployed the tuned model. 3. Click the name of your deployed model. 4. Follow the appropriate steps based on where you want to test the tuned model: * From Prompt Lab: 1. Click Open in Prompt Lab, and then choose the project where you want to work with the model. Prompt Lab opens and the tuned model that you deployed is selected from the Model field. 2. In the Try section, add a prompt to the Input field that follows the prompt pattern that your tuned model is trained to recognize, and then click Generate. For more information about how",how-to,1,train
F0EF147DBC0554F53B331E7B6D5715D0269FFBA8,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/jython/clementine/python_node_reference.html?context=cdpaas&locale=en,Referencing existing nodes,"Referencing existing nodes
Referencing existing nodes A flow is often pre-built with some parameters that must be modified before the flow runs. Modifying these parameters involves the following tasks: 1. Locating the nodes in the relevant flow. 2. Changing the node or flow settings (or both).",conceptual,0,train
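A minimal sketch of those two tasks, using only the scripting calls shown elsewhere in this documentation (the node type and property name below are placeholders, not real slot names):

stream = modeler.script.stream()
node = stream.findByType("type", None)           # task 1: locate an existing node in the flow
node.setPropertyValue("property_name", "value")  # task 2: change one of its settings before running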
784686DA695F28F867BC35C4416CB8D767D58B7A,https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/manage-assets.html?context=cdpaas&locale=en,Managing assets in projects,"Managing assets in projects
Managing assets in projects You can manage assets in a project by adding them, editing them, or deleting them. * [Add data assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/add-data-project.html) * You can add other types of assets by clicking New asset or Import assets on the project's Assets page. * [Edit assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/manage-assets.html?context=cdpaas&locale=eneditassets) * [Download assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/download.html) * [Delete assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/manage-assets.html?context=cdpaas&locale=enremove-asset) * [Search for assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/search-assets.html) Edit assets You can edit the properties of all types of assets, such as the asset name, description, and tags. See [Asset types and properties](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/assets.html). The role you need to edit an asset depends on the asset type. See [Project collaborator roles](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/collaborator-permissions.html). Data assets from files, connected data assets, or imported data assets : - Click the data asset name to open the asset. For some types of data, you can see an [asset preview](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/previews.html). : - To edit the data asset properties, such as its name, tags, and description, click the corresponding edit icon () on the information pane. : - To create or update a [profile](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/profile.html) of relational data, click the Profile tab. : - To cleanse and shape relational data, click Prepare data to open the data asset in Data Refinery. : When you change the name of data assets with file attachments that you uploaded into the project, the file attachments are also renamed. You must update any references to the data asset in code-based assets, like notebooks, to the new data asset name, otherwise, the code-based asset won't run. Connection assets : Click the connection asset name to edit the connection properties, such as the name, description, and connection details. Assets that you create with tools : Click the name of the asset on the Assets page to open it in its tool. On the Assets page of a project, the lock icon () indicates that another collaborator is editing the asset or locked the asset to prevent editing by other users. * Enabled lock: You can unlock the asset if you locked it or if you have the Admin role in the project. * Disabled lock: You can't unlock a locked asset if you didn't lock it and you have the Editor or Viewer role in the project. When you unlock an asset that another collaborator is editing, you take control of the asset. The other collaborator is not notified and any changes made by that collaborator are overwritten by your edits. Delete an asset from a project Required permissions : You must have the Admin or Editor role to delete assets from the project. To delete an asset from a project, choose the Delete or the Remove option from the action menu next to the asset on the project Assets page. When you delete an asset, its associated file, if it has one, is also deleted. However, when you delete a connected data asset, the data in the associated data source is not affected. 
Depending on the type of asset, other related assets might also be deleted. Learn more * [Asset types and properties](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/assets.html) Parent topic:[Projects ](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/manage-projects.html)",how-to,1,train
D669435B8D1C91D913BD24768E52644B95C675AE,https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/unreliable-source-attribution.html?context=cdpaas&locale=en,Unreliable source attribution,"Unreliable source attribution
Unreliable source attribution Risks associated with output | Explainability | Amplified Description Source attribution is the AI system's ability to describe from what training data it generated a portion or all of its output. Since current techniques are based on approximations, these attributions might be incorrect. Why is unreliable source attribution a concern for foundation models? Low-quality explanations make it difficult for users, model validators, and auditors to understand and trust the model. Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)",conceptual,0,train
22B8136F68AC74838B9C2B9EAF3996CCFAA14921,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/transpose.html?context=cdpaas&locale=en,Transpose node (SPSS Modeler),"Transpose node (SPSS Modeler)
Transpose node By default, columns are fields and rows are records or observations. If necessary, you can use a Transpose node to swap the data in rows and columns so that fields become records and records become fields. For example, if you have time series data where each series is a row rather than a column, you can transpose the data prior to analysis.",conceptual,0,train
67B99E436854F015A9DB19C775639BA4BB4D5F9B,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/cplex.html?context=cdpaas&locale=en,CPLEX Optimization node (SPSS Modeler),"CPLEX Optimization node (SPSS Modeler)
CPLEX Optimization node With the CPLEX Optimization node, you can use complex mathematical (CPLEX) based optimization via an Optimization Programming Language (OPL) model file. For more information about CPLEX optimization and OPL, see the [IBM ILOG CPLEX Optimization Studio documentation](https://www.ibm.com/support/knowledgecenter/SSSA5P). When outputting the data generated by the CPLEX Optimization node, you can output the original data from the data sources together as single indexes, or as multiple dimensional indexes of the result. Note: * When running a flow containing a CPLEX Optimization node, the CPLEX library has a limitation of 1000 variables and 1000 constraints.",conceptual,0,train
F3C0AD81BBF56463510440F7F81EB146A6C0015C,https://dataplatform.cloud.ibm.com/docs/content/wsj/spark/time-series-lazy-evaluation.html?context=cdpaas&locale=en,Time series lazy evaluation,"Time series lazy evaluation
Time series lazy evaluation Lazy evaluation is an evaluation strategy that delays the evaluation of an expression until its value is needed. When combined with memoization, the lazy evaluation strategy avoids repeated evaluations and can reduce the running time of certain functions by a significant factor. The time series library uses lazy evaluation to process data. Notionally, an execution graph is constructed on time series data whose evaluation is triggered only when its output is materialized. Assume an object is moving in a one-dimensional space and its location is captured by x(t). You can determine the harsh acceleration/braking (h(t)) of this object by using its velocity (v(t)) and acceleration (a(t)) time series as follows: 1d location timeseries x(t) = input location timeseries velocity - first derivative of x(t) v(t) = x(t) - x(t-1) acceleration - second derivative of x(t) a(t) = v(t) - v(t-1) harsh acceleration/braking using thresholds on acceleration h(t) = +1 if a(t) > threshold_acceleration = -1 if a(t) < threshold_deceleration = 0 otherwise This results in a simple execution graph of the form: x(t) --> v(t) --> a(t) --> h(t) Evaluations are triggered only when an action is performed, such as compute h(5...10), i.e. compute h(5), ..., h(10). The library captures narrow temporal dependencies between time series. In this example, h(5...10) requires a(5...10), which in turn requires v(4...10), which then requires x(3...10). Only the relevant portions of a(t), v(t) and x(t) are evaluated. h(5...10) <-- a(5...10) <-- v(4...10) <-- x(3...10) Furthermore, evaluations are memoized and can thus be reused in subsequent actions on h. For example, when a request for h(7...12) follows a request for h(5...10), the memoized values h(7...10) would be leveraged; further, h(11...12) would be evaluated using a(11...12), v(10...12) and x(9...12), which would in turn leverage v(10) and x(9...10) memoized from the prior computation. In a more general example, you could define a smoothened velocity timeseries as follows: 1d location timeseries x(t) = input location timeseries velocity - first derivative of x(t) v(t) = x(t) - x(t-1) smoothened velocity, where alpha is the smoothing factor and n is a smoothing history v_smooth(t) = (v(t)*1.0 + v(t-1)*alpha + ... + v(t-n)*alpha^n) / (1 + alpha + ... + alpha^n) acceleration - second derivative of x(t) a(t) = v_smooth(t) - v_smooth(t-1) In this example h(l...u) has the following temporal dependency. Evaluation of h(l...u) would strictly adhere to this temporal dependency with memoization. h(l...u) <-- a(l...u) <-- v_smooth(l-1...u) <-- v(l-n-1...u) <-- x(l-n-2...u) An Example The following example shows a Python code snippet that implements harsh acceleration on a simple in-memory time series. The library includes several built-in transforms. In this example, the difference transform is applied twice to the location time series to compute the acceleration time series. A map operation is applied to the acceleration time series using a harsh lambda function, which is defined after the code sample, that maps acceleration to +1 (harsh acceleration), -1 (harsh braking), or 0 (otherwise). The filter operation selects only instances wherein either harsh acceleration or harsh braking is observed. Prior to calling get_values, an execution graph is created, but no computations are performed. On calling get_values(5, 10), the evaluation is performed with memoization on the narrowest possible temporal dependency in the execution graph. 
import tspy from tspy.builders.functions import transformers x = tspy.time_series([1.0, 2.0, 4.0, 7.0, 11.0, 16.0, 22.0, 29.0, 28.0, 30.0, 29.0, 30.0, 30.0]) v = x.transform(transformers.difference()) a = v.transform(transformers.difference()) h = a.map(harsh).filter(lambda h: h != 0) print(h[5, 10]) The harsh lambda is defined as follows: def harsh(a): threshold_acceleration = 2.0 threshold_braking = -2.0 if (a > threshold_acceleration): return +1 elif (a < threshold_braking): return -1 else: return 0 Learn more To use the tspy Python SDK, see the [tspy Python SDK documentation](https://ibm-cloud.github.io/tspy-docs/). Parent topic:[Time series analysis](https://dataplatform.cloud.ibm.com/docs/content/wsj/spark/time-series-lib.html)",how-to,1,train
0A507FF5262BAD7A3FB3F3C478388CFF78949941,https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/py_assetframe.html?context=cdpaas&locale=en,Managing feature groups with assetframe-lib for Python (beta),"Managing feature groups with assetframe-lib for Python (beta)
Managing feature groups with assetframe-lib for Python (beta) You can use the assetframe-lib to create, view and edit feature group information for data assets in Watson Studio notebooks. Feature groups define additional metadata on columns of your data asset that can be used in downstream Machine Learning tasks. See [Managing feature groups](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/feature-group.html) for more information about using feature groups in the UI. Setting up the assetframe-lib and ibm-watson-studio-lib libraries The assetframe-lib library for Python is pre-installed and can be imported directly in a notebook in Watson Studio. However, it relies on the [ibm-watson-studio-lib](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/using-ibm-ws-lib.html) library. The following steps describe how to set up both libraries. To insert the project token to your notebook: 1. Click the More icon on your notebook toolbar and then click Insert project token. If a project token exists, a cell is added to your notebook with the following information: from ibm_watson_studio_lib import access_project_or_space wslib = access_project_or_space({""token"":""""}) is the value of the project token. If you are told in a message that no project token exists, click the link in the message to be redirected to the project's Access Control page where you can create a project token. You must be eligible to create a project token. To create a project token: 1. From the Manage tab, select the Access Control page, and click New access token under Access tokens. 2. Enter a name, select Editor role for the project, and create a token. 3. Go back to your notebook, click the More icon on the notebook toolbar and then click Insert project token. 2. Import assetframe-lib and initialize it with the created ibm-watson-studio-lib instance. 
from assetframe_lib import AssetFrame AssetFrame._wslib = wslib The assetframe-lib functions and methods The assetframe-lib library exposes a set of functions and methods that are grouped in the following way: * [Creating an asset frame](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/py_assetframe.html?context=cdpaas&locale=encreate-assetframe) * [Creating, retrieving and removing features](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/py_assetframe.html?context=cdpaas&locale=encreate-features) * [Specifying feature attributes](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/py_assetframe.html?context=cdpaas&locale=enspecify-featureatt) * [Role](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/py_assetframe.html?context=cdpaas&locale=enrole) * [Description](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/py_assetframe.html?context=cdpaas&locale=endescription) * [Fairness information for favorable and unfavorable outcomes](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/py_assetframe.html?context=cdpaas&locale=enfairnessinfo) * [Fairness information for monitored and reference groups](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/py_assetframe.html?context=cdpaas&locale=enmonitoredreference) * [Value descriptions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/py_assetframe.html?context=cdpaas&locale=envalue-desc) * [Recipe](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/py_assetframe.html?context=cdpaas&locale=enrecipe) * [Tags](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/py_assetframe.html?context=cdpaas&locale=entags) * [Previewing data](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/py_assetframe.html?context=cdpaas&locale=enpreview-data) * [Getting fairness information](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/py_assetframe.html?context=cdpaas&locale=enget-fairness) Creating an asset frame An asset frame is used to define feature group metadata on an existing data asset or on a pandas DataFrame. You can have exactly one feature group for each asset. If you create an asset frame on a pandas DataFrame, you can store the pandas DataFrame along with the feature group metadata as a data asset in your project. You can use one of the following functions to create your asset frame: * AssetFrame.from_data_asset(asset_name, create_default_features=False) This function creates a new asset frame wrapping an existing data asset in your project. If there is already a feature group for this asset, for example created in the user interface, it is read from the asset metadata. Parameters: - asset_name: (Required) The name of a data asset in your project. - create_default_features: (Optional) Creates features for all columns in the data asset. * AssetFrame.from_pandas(name, dataframe, create_default_features=False) This function creates a new asset frame wrapping a pandas DataFrame. Parameters: * name: (Required) The name of the asset frame. This name will be used as the name of the data asset if you store your feature group in your project in a later step. * dataframe: (Required) A pandas DataFrame that you want to store along with feature group information. * create_default_features: (Optional) Create features for all columns in the dataframe. Example of creating a asset frame from a pandas DataFrame: Create an asset frame from a pandas DataFrame and set the name of the asset frame. 
af = AssetFrame.from_pandas(dataframe=credit_risk_df, name=""Credit Risk Training Data"") Creating, retrieving and removing features A feature defines metadata that can be used by downstream Machine Learning tasks. You can create one feature per column in your data set. You can use one of the following functions to create, retrieve or remove columns from your asset frame: * add_feature(column_name, role='Input') This function adds a new feature to your asset frame with the given role. Parameters: * column_name: (Required) The name of the column to create a feature for. * role: (Optional) The role of the feature. It defaults to Input. Valid roles are: * Input: The input for a machine learning model * Target: The target of a prediction model * Identifier: The identifier of a row in your data set. * create_default_features() This function creates features for all columns in your data set. The roles of the features will default to Input. * get_features() This function retrieves all features of the asset frame. * get_feature(column_name) This function retrieves the feature for the given column name. Parameters: * column_name: (Required) The string name of the column to create the feature for. * get_features_by_role(role) This function retrieves all features of the dataframe with the given role. Parameters: * role: (Required) The",how-to,1,train
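Pulling the calls above together, a minimal sketch might look like this (credit_risk_df and the column names are assumptions for illustration; only functions documented on this page are used):

af = AssetFrame.from_pandas(dataframe=credit_risk_df, name="Credit Risk Training Data")
af.add_feature("Risk", role="Target")         # the column the model should predict
af.add_feature("LoanAmount", role="Input")    # a predictor column
input_features = af.get_features_by_role("Input")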
2200315EA9DA921EDFF8A3322417BB211F15B4EB,https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html?context=cdpaas&locale=en,Adding data from a connection to a project,"Adding data from a connection to a project
Adding data from a connection to a project A connected data asset is a pointer to data that is accessed through a connection to an external data source. You create a connected data asset by specifying a connection, any intermediate structures or paths, and a relational table or view, a set of partitioned data files, or a file. When you access a connected data asset, the data is dynamically retrieved from the data source. You can also add a connected folder asset that is accessed through a connection in the same way. See [Add a connected folder asset to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/folder-asset.html). Partitioned data assets have previews and profiles like relational tables. However, you cannot yet shape and cleanse partitioned data assets with the Data Refinery tool. To add a data asset from a connection to a project: 1. From the project page, click the Assets tab, and then click Import assets > Connected data. 2. Select an existing connection asset as the source of the data. If you don't have any connection assets, cancel and go to New asset > Connect to a data source, and [create a connection asset](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html). 3. Select the data you want. You can select multiple connected data assets from the same connection. Click Import. For partitioned data, select the folder that contains the files. If the files are recognized as partitioned data, you see the message This folder contains a partitioned data set. 4. Type a name and description. 5. Click Create. The asset appears on the project Assets page. When you click on the asset name, you can see this information about connected assets: * The asset name and description * The tags for the asset * The name of the person who created the asset * The size of the data * The date when the asset was added to the project * The date when the asset was last modified * A [preview](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/previews.html) of relational data * A [profile](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/profile.html) of relational data Watch this video to see how to create a connection and add connected data to a project. Video disclaimer: Some minor steps and graphical elements in this video might differ from your platform. This video provides a visual method to learn the concepts and tasks in this documentation. * Transcript Synchronize transcript with video Time Transcript 00:00 This video shows you how to set up a connection to a data source and add connected data to a Watson Studio project. 00:08 If you have data stored in a data source, you can set up a connection to that data source from any project. 00:16 From here, you can add different elements to the project. 00:20 In this case, you want to add a connection. 00:24 You can create a new connection to an IBM service, such as IBM Db2 and Cloud Object Storage, or to a service from third parties, such as Amazon, Microsoft or Apache. 00:39 And you can filter the list based on compatible services. 00:45 You can also add a connection that was created at the platform level, which can be used across projects and catalogs. 00:54 Or you can create a connection to one of your provisioned IBM Cloud services. 00:59 In this case, select the provisioned IBM Cloud service for Db2 Warehouse on Cloud. 01:08 If the credentials are not prepopulated, you can get the credentials for the instance from the IBM Cloud service launch page. 
01:17 First, test the connection and then create the connection. 01:25 The new connection now displays in the list of data assets. 01:30 Next, add connected data assets to this project. 01:37 Select the source - in this case, it's the Db2 Warehouse on Cloud connection just created. 01:43 Then select the schema and table. 01:50 You can see that this will add a reference to the data within this connection and include it in the target project. 01:58 Provide a name and a description and click ""Create"". 02:06 The data now displays in the list of data assets. 02:09 Open the data set to get a preview; and from here you can move directly into refining the data. 02:17 Find more videos in the Cloud Pak for Data as a Service documentation. Next steps * [Refine the data](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/refining_data.html) * [Analyze the data or build models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/data-science.html) Learn more * [Connected folder assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/folder-asset.html) * [Connection assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html) Parent topic: [Adding data to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/add-data-project.html)",how-to,1,train
FD48879C34D316981B4F67C2B82C8179E0042F74,https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-credentials.html?context=cdpaas&locale=en,Credentials for prompting foundation models (IBM Cloud API key and IAM token),"Credentials for prompting foundation models (IBM Cloud API key and IAM token)
Credentials for prompting foundation models (IBM Cloud API key and IAM token) To prompt foundation models in IBM watsonx.ai programmatically, you need an IBM Cloud API key and sometimes an IBM Cloud IAM token. IBM Cloud API key To use the [foundation models Python library](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-python-lib.html), you need an IBM Cloud API key. Python pseudo-code my_credentials = { ""url"" : ""https://us-south.ml.cloud.ibm.com"", ""apikey"" : } ... model = Model( ... credentials=my_credentials ... ) You can create this API key by using multiple interfaces. For full instructions, see [Creating an API key](https://cloud.ibm.com/docs/account?topic=account-userapikey&interface=uicreate_user_key) IBM Cloud IAM token When you click the View code button in the Prompt Lab, a curl command is displayed that you can call outside the Prompt Lab to submit the current prompt and parameters to the selected model and get a generated response. In the command, there is a placeholder for an IBM Cloud IAM token. For information about generating that access token, see: [Generating an IBM Cloud IAM token](https://cloud.ibm.com/docs/account?topic=account-iamtoken_from_apikey) Parent topic:[Foundation models Python library](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-python-lib.html)",how-to,1,train
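For reference, here is a minimal Python sketch of the IAM token exchange. It assumes only the standard IBM Cloud IAM endpoint and grant type; the API key value is a placeholder that you replace with your own key. The returned access token can be pasted where the curl command from the Prompt Lab expects an IAM token.

```python
# Minimal sketch: exchange an IBM Cloud API key for an IAM access token.
# The endpoint and grant type are the standard IBM Cloud IAM values; the API
# key below is a placeholder.
import requests

IAM_TOKEN_URL = "https://iam.cloud.ibm.com/identity/token"

def get_iam_token(api_key: str) -> str:
    response = requests.post(
        IAM_TOKEN_URL,
        headers={"Content-Type": "application/x-www-form-urlencoded"},
        data={
            "grant_type": "urn:ibm:params:oauth:grant-type:apikey",
            "apikey": api_key,
        },
    )
    response.raise_for_status()
    return response.json()["access_token"]

# Example usage: paste the returned token into the curl command from Prompt Lab.
# token = get_iam_token("<your IBM Cloud API key>")
```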
A73CA4F67523DBB58FD3521AE9BFF83AEE634607,https://dataplatform.cloud.ibm.com/docs/content/wsd/tutorials/tut_drug_distribution.html?context=cdpaas&locale=en,Creating a distribution chart (SPSS Modeler),"Creating a distribution chart (SPSS Modeler)
Creating a distribution chart During data mining, it is often useful to explore the data by creating visual summaries. Watson Studio offers many different types of charts to choose from, depending on the kind of data you want to summarize. For example, to find out what proportion of the patients responded to each drug, use a Distribution node. Figure 1. Distribution node  1. Under Graphs on the Palette, add a Distribution node to the flow and connect it to the drug1n.csv Data Asset node. Then double-click the node to edit its options. 2. Select Drug as the target field whose distribution you want to show. Then click Save, right-click the Distribution node, and select Run. A distribution chart is added to the Outputs panel. The chart helps you see the shape of the data. It shows that patients responded to drug Y most often and to drugs B and C least often. Alternatively, you can attach and run a Data Audit node for a quick glance at distributions and histograms for all fields at once. The Data Audit node is available under Outputs on the Palette.",how-to,1,train
E334A64775AE571C661CDCC847669F0E20C207FF,https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/videos-wx.html?context=cdpaas&locale=en,Video library,"Video library
Video library Watch short videos for data scientists, data engineers, and data stewards to learn about watsonx. The videos and accompanying tutorials are task-focused and provide hands-on experience by using the tools in watsonx. Note: These videos provide a visual method to learn the concepts and tasks in this documentation. If you are having difficulty viewing any of the videos on this page, visit the [Video playlists](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/videos-wx-docs.html) page. First watch the IBM watsonx.ai overview video.  Select any video from the lists below to watch here. Quick start IBM watsonx.ai overview * Classify text * Summarize large, complex documents * Generate content * Extract text from complex documents Get started * Create a project * Collaborate in projects * Tour the samples collection * Load and analyze public data sets Work with data * Prepare data with Data Refinery * Generate synthetic tabular data * Analyze data in a Jupyter notebook IBM watsonx.governance * Track a model in an AI use case * Evaluate a prompt template * Track a prompt template Work with foundation models * Prompt a foundation model using Prompt Lab * Prompt tips: Get started prompting foundation models * Introduction to the retrieval-augmented generation pattern * Tune a foundation model Build models * Build and deploy a model with AutoAI * Build and deploy a model in a Jupyter notebook * Build and deploy a model with SPSS Modeler * Build and deploy a Decision Optimization model * Create a pipeline to automate the lifecycle for a model",conceptual,0,train
40D279ADA16512E67B7FB78FDAC4ADA9CFE5C645,https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/data-provenance.html?context=cdpaas&locale=en,Data provenance,"Data provenance
Data provenance Risks associated with input | Training and tuning phase | Transparency | Amplified Description Without standardized and established methods for verifying where data came from, there are no guarantees that available data is what it claims to be. Why is data provenance a concern for foundation models? Not all data sources are trustworthy. Data might have been unethically collected, manipulated, or falsified. Using such data can result in undesirable behaviors in the model. Business entities could face fines, reputational harms, and other legal consequences. Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)",conceptual,0,train
3A81B302EE01FDC0AC111CFF3ABFDB96E3A0CDD6,https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-overview.html?context=cdpaas&locale=en,Security for IBM watsonx,"Security for IBM watsonx
Security for IBM watsonx Security mechanisms in IBM watsonx provide protection for data, applications, identity, and resources. You can configure security mechanisms on five levels for IBM Cloud security functions. Security levels in IBM watsonx Security for IBM watsonx is configured on levels to ensure that your data, application endpoints, and identity are protected on any cloud. The security levels are: 1. [Network security](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-network.html) – Network security protects the network infrastructure and the points where your database or applications interact with the cloud. For example, you can protect your network by allowing IP addresses, by connecting securely to databases and third-party clouds, and by securing endpoints. 2. [Enterprise security](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-enterprise.html) – Enterprises are multiple IBM Cloud accounts in a hierarchy. For example, your company might have many teams that require one or more separate accounts for development, testing, and production environments. Or, you can configure an enterprise to isolate workloads in separate accounts to meet compliance guidelines. 3. [Account security](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-account.html) – Account security includes IAM and Access group roles, Service IDs, monitoring, and other security mechanisms that are configured on IBM Cloud for your IBM Cloud account. 4. [Data security](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-data.html) – Data security protects the IBM Cloud Object Storage service instance, provides data encryption for at-rest and in-motion data, and other security mechanisms related to data. 5. [Collaborator security](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-collab.html) – Protect your workspaces by assigning role-based access controls to collaborators in IBM watsonx. IBM watsonx conforms to IBM Cloud security requirements. See [IBM Cloud docs: How do I know that my data is safe?](https://cloud.ibm.com/docs/overview?topic=overview-security). Resiliency IBM watsonx is disaster resistant: * The metadata for your projects and catalogs is stored in a three-node dedicated Cloudant Enterprise cluster that spans multiple geographic locations. * The files that are associated with projects and catalogs are protected by the level of resiliency that is specified by the IBM Cloud Object Storage plan. Compliance See [Keep your data secure and compliant](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security.html). Learn more * [watsonx terms](https://www.ibm.com/support/customer/csol/terms/?id=i126-9640&lc=endetail-document) * [IBM Watson Machine Learning terms](http://www.ibm.com/support/customer/csol/terms/?id=i126-6883) * [IBM Watson Studio terms](https://www.ibm.com/support/customer/csol/terms/?id=i126-7747) * [IBM Cloud Object Storage terms](https://www.ibm.com/software/sla/sladb.nsf/sla/bm-7857-03) * [Managing security and compliance in IBM Cloud](https://cloud.ibm.com/docs/overview?topic=overview-manage-security-compliance) * [Software Product Compatibility Reports: IBM Watson Studio](https://www.ibm.com/software/reports/compatibility/clarity-reports/report/html/softwareReqsForProduct?deliverableId=95E9BEA0B35711E7A9EB066095601ABB). 
* [Software Product Compatibility Reports: IBM Watson Machine Learning service](https://www.ibm.com/software/reports/compatibility/clarity-reports/report/html/softwareReqsForProduct?deliverableId=850D9360405711E5B2E4A36A7B0C4479). Parent topic:[Administering your accounts and services](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/administer-accounts.html)",conceptual,0,train
19BA0BFC40B6212B42F38487F1533BB65647850E,https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html?context=cdpaas&locale=en,Importing models to a deployment space,"Importing models to a deployment space
Importing models to a deployment space Import machine learning models trained outside of IBM Watson Machine Learning so that you can deploy and test the models. Review the model frameworks that are available for importing models. Here, importing a trained model means: 1. Store the trained model in your Watson Machine Learning repository 2. Optional: Deploy the stored model in your Watson Machine Learning service. In this context, repository means a Cloud Object Storage bucket that is associated with a deployment space. For more information, see [Creating deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-create.html). You can import a model in these ways: * [Directly through the UI](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html?context=cdpaas&locale=enui-import) * [By using a path to a file](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html?context=cdpaas&locale=enpath-file-import) * [By using a path to a directory](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html?context=cdpaas&locale=enpath-dir-import) * [Import a model object](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html?context=cdpaas&locale=enobject-import) For more information, see [Importing models by ML framework](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html?context=cdpaas&locale=ensupported-formats). For more information, see [Things to consider when you import models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html?context=cdpaas&locale=enmodel-import-considerations). For an example of how to add a model programmatically by using the Python client, refer to this notebook: * [Use PMML to predict iris species.](https://github.com/IBM/watson-machine-learning-samples/blob/df8e5122a521638cb37245254fe35d3a18cd3f59/cloud/notebooks/python_sdk/deployments/pmml/Use%20PMML%20to%20predict%20iris%20species.ipynb) For an example of how to add a model programmatically by using the REST API, refer to this notebook: * [Use scikit-learn to predict diabetes progression](https://github.com/IBM/watson-machine-learning-samples/blob/be84bcd25d17211f41fb34ec262b418f6cd6c87b/cloud/notebooks/rest_api/curl/deployments/scikit/Use%20scikit-learn%20to%20predict%20diabetes%20progression.ipynb) Available ways to import models, per framework type This table lists the available ways to import models to Watson Machine Learning, per framework type. Import options for models, per framework type Import option Spark MLlib Scikit-learn XGBoost TensorFlow PyTorch [Importing a model object](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html?context=cdpaas&locale=enobject-import) ✓ ✓ ✓ [Importing a model by using a path to a file](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html?context=cdpaas&locale=enpath-file-import) ✓ ✓ ✓ ✓ [Importing a model by using a path to a directory](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html?context=cdpaas&locale=enpath-dir-import) ✓ ✓ ✓ ✓ Adding a model by using UI Note:If you want to import a model in the PMML format, you can directly import the model .xml file. To import a model by using UI: 1. From the Assets tab of your space in Watson Machine Learning, click Import assets. 2. Select Local file and then select Model. 3. Select the model file that you want to import and click Import. 
The importing mechanism automatically selects a matching model type and software specification based on the version string in the .xml file. Importing a model object Note:This import method is supported by a limited number of ML frameworks. For more information, see [Available ways to import models, per framework type](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html?context=cdpaas&locale=ensupported-formats). To import a model object: 1. If your model is located in a remote location, follow [Downloading a model that is stored in a remote location](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html?context=cdpaas&locale=enmodel-download). 2. Store the model object in your Watson Machine Learning repository. For more information, see [Storing model in Watson Machine Learning repository](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html?context=cdpaas&locale=enstore-in-repo). Importing a model by using a path to a file Note:This import method is supported by a limited number of ML frameworks. For more information, see [Available ways to import models, per framework type](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html?context=cdpaas&locale=ensupported-formats). To import a model by using a path to a file: 1. If your model is located in a remote location, follow [Downloading a model that is stored in a remote location](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html?context=cdpaas&locale=enmodel-download) to download it. 2. If your model is located locally, place it in a specific directory: !cp !cd 3. For Scikit-learn, XGBoost, Tensorflow, and PyTorch models, if the downloaded file is not a .tar.gz archive, make an archive: !tar -zcvf .tar.gz The model file must be at the top-level folder of the directory, for example: assets/ variables/ variables/variables.data-00000-of-00001 variables/variables.index 4. Use the path to the saved file to store the model file in your Watson Machine Learning repository. For more information, see [Storing model in Watson Machine Learning repository](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html?context=cdpaas&locale=enstore-in-repo). Importing a model by using a path to a directory Note:This import method is supported by a limited number of ML frameworks. For more information, see [Available ways to import models, per framework type](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html?context=cdpaas&locale=ensupported-formats). To import a model by using a path to a directory: 1. If your model is located in a remote location, refer to [Downloading a model stored in a remote location](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html?context=cdpaas&locale=enmodel-download). 2. If your model is located locally, place it in a specific directory: !cp !cd For scikit-learn, XGBoost, Tensorflow, and PyTorch models, the model file must be at the top-level folder of the directory, for example: assets/ variables/ variables/variables.data-00000-of-00001 variables/variables.index 3. Use the directory path to store the model file in your Watson Machine Learning repository. 
For more information, see [Storing model in Watson Machine Learning repository](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html?context=cdpaas&locale=enstore-in-repo). Downloading a model stored in a remote location Follow this sample code to download your model from a remote location:
import os
from wget import download

target_dir = ''
if not os.path.isdir(target_dir):
    os.mkdir(target_dir)

filename = os.path.join(target_dir, '')
if not os.path.isfile(filename):
    filename = download('', out = target_dir)
Things to consider when you import models To learn more about importing a specific model type, see:",how-to,1,train
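For the "store the model in your Watson Machine Learning repository" step, the following minimal sketch assumes the ibm-watson-machine-learning Python client. The deployment space ID, model name, model type, and software specification name are placeholder values; replace them with the values that match your deployment space and framework.

```python
# Minimal sketch of storing an imported model archive in the Watson Machine
# Learning repository with the ibm-watson-machine-learning Python client.
# The space ID, model name, model type, and software specification name are
# placeholders for illustration only.
from ibm_watson_machine_learning import APIClient

wml_credentials = {
    "url": "https://us-south.ml.cloud.ibm.com",
    "apikey": "<your IBM Cloud API key>",
}
client = APIClient(wml_credentials)
client.set.default_space("<your deployment space ID>")

# Look up a software specification that matches the framework of the model.
software_spec_id = client.software_specifications.get_id_by_name("runtime-22.2-py3.10")

meta_props = {
    client.repository.ModelMetaNames.NAME: "Imported scikit-learn model",
    client.repository.ModelMetaNames.TYPE: "scikit-learn_1.1",
    client.repository.ModelMetaNames.SOFTWARE_SPEC_UID: software_spec_id,
}

# Path to the .tar.gz archive prepared as described in the steps above.
model_details = client.repository.store_model(model="model.tar.gz", meta_props=meta_props)
print(model_details)
```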
538ECAE0B5AA21E499F39C2637764A05BFF7B6B6,https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-manage-reports.html?context=cdpaas&locale=en,Managing and customizing report templates,"Managing and customizing report templates
Managing and customizing report templates If the default report templates that are provided with AI Factsheets do not meet your needs, you can download a default report template, customize it, and upload the new template. Customizing a report template Any user with at least Editor access can create a report from an AI use case that captures all the details of that AI use case. You can use reports for compliance verification, archiving, or other purposes. If the default templates for the reports do not meet the needs of your organization, you can customize the report templates, the branding file, or the default stylesheet. For example, you can replace the IBM logo with your own logo image file. You must have the Admin role for managing inventories to customize report templates. Follow these steps to customize a report template. Downloading a report template To download a report template from the UI: 1. Open the AI use cases settings and click the Report templates tab. If you do not see this tab, you might have insufficient access. 2. In the options menu for a report template, click Download.  3. Open the .ftl file in an editor. 4. Edit the template by using instructions from [Apache FreeMarker](https://freemarker.apache.org/) or the API commands. To download a report template by using APIs: 1. Use the GET endpoint for /v1/aigov/report_templates in the [IBM Watson Data API](https://cloud.ibm.com/apidocs/watson-data-api) to list the available templates. Note the ID for the template that you want to download. 2. Use the GET endpoint /v1/aigov/report_templates/{template_id}/content with the template ID to download the template file. 3. Open the .ftl file in an editor. 4. Edit the template by using instructions from [Apache FreeMarker](https://freemarker.apache.org/) or the API commands. Uploading a template 1. Open the AI use cases settings and click the Report templates tab. If you do not see this tab, you might have insufficient access. 2. Click Add template. 3. Specify a name for the template and an optional description. 4. Choose the type of template: model or model use case. The reports are available for external models and Watson Machine Learning models. 5. Upload the updated FTL file. Restriction:The .ftl file that you upload must not import any other files. Support is not yet available for import statements other than system templates in the .ftl file. The custom template displays in the Report templates section and is available for creating reports. Click Edit or Delete from the action menu for a custom template to update the template details or to remove the template. Parent topic:[Creating and managing inventories](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-inventory-manage.html)",how-to,1,train
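The API path described above can be sketched in Python as follows. This is a minimal illustration of the /v1/aigov/report_templates endpoints: the api.dataplatform.cloud.ibm.com host name and the bearer token variable are assumptions, so adjust both to your environment and generate the IAM token from your IBM Cloud API key.

```python
# Minimal sketch of listing report templates and downloading one .ftl file
# through the IBM Watson Data API endpoints named above. The BASE_URL host is
# an assumed value; authentication uses an IAM bearer token you supply.
import requests

BASE_URL = "https://api.dataplatform.cloud.ibm.com"  # assumed Watson Data API host

def list_report_templates(token: str) -> dict:
    response = requests.get(
        f"{BASE_URL}/v1/aigov/report_templates",
        headers={"Authorization": f"Bearer {token}"},
    )
    response.raise_for_status()
    return response.json()

def download_report_template(token: str, template_id: str, path: str) -> None:
    response = requests.get(
        f"{BASE_URL}/v1/aigov/report_templates/{template_id}/content",
        headers={"Authorization": f"Bearer {token}"},
    )
    response.raise_for_status()
    with open(path, "wb") as f:
        f.write(response.content)  # the downloaded .ftl file, ready to edit
```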
5A6081124D93ACD0A12843F64984257A02BB3871,https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/troubleshoot-conn.html?context=cdpaas&locale=en,Troubleshooting connections,"Troubleshooting connections
Troubleshooting connections Use these solutions to resolve problems that you might encounter with connections. IBM Db2 for z/OS: Error retrieving the schema list when you try to connect to a Db2 for z/OS server When you test the connection to a Db2 for z/OS server and the connection cannot retrieve the schema list, you might receive the following error: CDICC7002E: The assets request failed: CDICO2064E: The metadata for the column TABLE_SCHEM could not be obtained: Sql error: [jcc] [10300] Invalid parameter: Unknown column name TABLE_SCHEM. ERRORCODE=-4460, SQLSTATE=null Workaround: On the Db2 for z/OS server, set the DESCSTAT subsystem parameter to No. For more information, see [DESCRIBE FOR STATIC field (DESCSTAT subsystem parameter)](https://www.ibm.com/docs/SSEPEK_13.0.0/inst/src/tpc/db2z_ipf_descstat.html). Parent topic:[Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html)",how-to,1,train
05275F4EC521878B13AD7DCE825E167B2FC7EF93,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/tmwb_advanced_frequencies.html?context=cdpaas&locale=en,Advanced frequency settings (SPSS Modeler),"Advanced frequency settings (SPSS Modeler)
Advanced frequency settings You can build categories based on a straightforward and mechanical frequency technique. With this technique, you can build one category for each item (type, concept, or pattern) that was found to be higher than a given record or document count. Additionally, you can build a single category for all of the less frequently occurring items. By count, we refer to the number of records or documents containing the extracted concept (and any of its synonyms), type, or pattern in question as opposed to the total number of occurrences in the entire text. Grouping frequently occurring items can yield interesting results, since it may indicate a common or significant response. The technique is very useful on the unused extraction results after other techniques have been applied. Another application is to run this technique immediately after extraction when no other categories exist, edit the results to delete uninteresting categories, and then extend those categories so that they match even more records or documents. Instead of using this technique, you could sort the concepts or concept patterns by descending number of records or documents in the extraction results pane and then drag-and-drop the ones with the most records into the categories pane to create the corresponding categories. The following advanced settings are available for the Use frequencies to build categories option in the category settings. Generate category descriptors at. Select the kind of input for descriptors. * Concepts level. Selecting this option means that concepts or concept patterns frequencies will be used. Concepts will be used if types were selected as input for category building and concept patterns are used, if type patterns were selected. In general, applying this technique to the concept level will produce more specific results, since concepts and concept patterns represent a lower level of measurement. * Types level. Selecting this option means that type or type patterns frequencies will be used. Types will be used if types were selected as input for category building and type patterns are used, if type patterns were selected. By applying this technique to the type level, you can get a quick view of the kind of information given. Minimum record/doc. count for items to have their own category. With this option, you can build categories from frequently occurring items. This option restricts the output to only those categories containing a descriptor that occurred in at least X number of records or documents, where X is the value to enter for this option. Group all remaining items into a category called. Use this option if you want to group all concepts or types occurring infrequently into a single catch-all category with the name of your choice. By default, this category is named Other. Category input. Select the group to which to apply the techniques: * Unused extraction results. This option enables categories to be built from extraction results that aren't used in any existing categories. This minimizes the tendency for records to match multiple categories and limits the number of categories produced. * All extraction results. This option enables categories to be built using any of the extraction results. This is most useful when no or few categories already exist. Resolve duplicate category names by. Select how to handle any new categories or subcategories whose names would be the same as existing categories. 
You can either merge the new ones (and their descriptors) with the existing categories with the same name, or you can choose to skip the creation of any categories if a duplicate name is found in the existing categories.",conceptual,0,train
496C8703EBA5C4C6BCD6D65EE60D3E768F1BF071,https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/int-azure.html?context=cdpaas&locale=en,Integrating with Microsoft Azure,"Integrating with Microsoft Azure
Integrating with Microsoft Azure You can configure an integration with the Microsoft Azure platform to allow IBM watsonx users access to data sources from Microsoft Azure. Before proceeding, make sure you have proper permissions. For example, you'll need permission in your subscription to create an application integration in Azure Active Directory. After you configure an integration, you'll see it under Service instances. You'll see a new Azure tab that lists your instances of Data Lake Storage Gen1 and SQL Database. To configure an integration with Microsoft Azure: 1. Log on to your Microsoft Azure account at [https://portal.azure.com](https://portal.azure.com). 2. Navigate to the Subscriptions panel and copy your subscription ID. 3. In IBM watsonx, go to Administration > Cloud integrations and click the Azure tab. Paste the subscription ID you copied in the previous step into the Subscription ID field. 4. In Microsoft Azure Active Directory, navigate to Manage > App registrations and click New registration to register an application. Give it a name such as IBM integration and select the desired option for supported account types. 5. Copy the Application (client) ID and the Tenant ID and paste them into the appropriate fields on the IBM watsonx Integrations page, as you did with the subscription ID. 6. In Microsoft Azure, navigate to Certificates & secrets > New client secret to create a new secret. Important! * Write down your secret and store it in a safe place. After you leave this page, you won't be able to retrieve the secret again. You'd need to delete the secret and create a new one. * If you ever need to revoke the secret for some reason, you can simply delete it from this page. * Pay attention to the expiration date. When the secret expires, integration will stop working. 7. Copy the secret from Microsoft Azure and paste it into the appropriate field on the Integrations page as you did with the subscription ID and client ID. 8. Configure [firewall access](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/int-azure.html?context=cdpaas&locale=enfirewall). 9. Confirm that you can see your Azure services. From the main menu, choose Administration > Services > Service instances. Click the Azure tab to see those services. Now users who have credentials to your Azure services can [create connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html) to them by selecting them on the Add connection page. Then they can access data from those connections by [creating connected data assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html). Configuring firewall access You must also configure access so IBM watsonx can access data through the firewall. For Microsoft Azure SQL Database firewall: 1. Open the database instance in Microsoft Azure. 2. From the top list of actions, select Set server firewall. 3. Set Deny public network access to No. 4. In a separate tab or window, open IBM watsonx and go to Administration > Cloud integrations. In the Firewall configuration panel, for each firewall IP range, copy the start and end address values into the list of rules in the Microsoft Azure SQL Database firewall. For Microsoft Azure Data Lake Storage Gen1 firewall: 1. Open the Data Lake instance. 2. Go to Settings > Firewall and virtual networks. 3. In a separate tab or window, open IBM watsonx and go to Administration > Cloud integrations. 
In the Firewall configuration panel, for each firewall IP range, copy the start and end address values into the list of rules under Firewall in the Data Lake instance. You can now create connections, preview data from Microsoft Azure data sources, and access Microsoft Azure data in Notebooks, Data Refinery, SPSS Modeler, and other tools in projects and in catalogs. You can see your Microsoft Azure instances under Services > Service instances. Next steps * [Set up a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/projects.html) * [Create connections in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html) Parent topic:[Integrations with other cloud platforms](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/int-cloud.html)",how-to,1,train
1C863B2624AB2712318442337C917143C19E7DDD,https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-job-pipelines.html?context=cdpaas&locale=en,Creating jobs for Pipelines,"Creating jobs for Pipelines
Creating jobs for Pipelines You can create jobs for Pipelines. To create a Pipelines job: 1. Open your Pipelines asset from the project. 2. Click Run pipeline > Create a job. 3. On the Create a job page, you can choose the asset version that you'd like to run. The most recently saved version of the Pipelines is used by default. 4. Give a name and optional description for your job. Click next. 5. Define your IAM API key. The most recently used API key is used by default. If you'd like to use a new API key, click Generate new API key. Click next. 6. You can schedule your job by toggling Schedule off to Schedule to run. You can choose either or both options: * Start on: Choose a date for your scheduled job to run. The time zone is GMT-0400 (Eastern Daylight Time). If you do not choose a start date, the job will never run automatically and must be started manually. * Repeat: You can choose to schedule the repeated frequency (every minute to every month), exclude running the job on certain days, and choose an end date. If you do not choose to repeat the job, it runs one time if a start date is given, or does not run. 7. Review your job settings and click Create. The Pipelines job is listed under Jobs in your project. Learn more * [Viewing jobs across projects](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/job-views-projs.html) Parent topic:[Creating and managing jobs](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/jobs.html)",how-to,1,train
28C4D682B46E9723F538988BB2BDB1EB65618E5E,https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/jobs.html?context=cdpaas&locale=en,Creating and managing jobs in a project,"Creating and managing jobs in a project
Creating and managing jobs in a project You create jobs to run assets or files in tools, such as Data Refinery flows, SPSS Modeler flows, Notebooks, and scripts, in a project. When you create a job you define the properties for the job, such as the name, definition, environment runtime, schedule and notification specifications on different pages. You can run a job immediately or wait for the job to run at the next scheduled interval. Each time a job is started, a job run is created, which you can monitor and use to compare with the job run history of previous runs. You can view detailed information about each job run, job state changes, and job failures in the job run log. How you create a job depends on the asset or file. Job creation options for assets or files Asset or file Create job in tool Create job from the Assets page More information Data Refinery flow ✓ ✓ [Creating jobs in Data Refinery](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-job-dr.html) SPSS Modeler flow ✓ ✓ [Creating jobs in SPSS Modeler](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-job-spss.html) Notebook created in the Notebook editor ✓ ✓ [Creating jobs in the Notebook editor](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-job-nb-editor.html) Pipelines ✓ [Creating jobs for Pipelines](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-job-pipelines.html) Creating jobs from the Assets page You can create a job to run an asset from the project's Assets page. Required permissions : You must have an Editor or Admin role in the project. Restriction:You cannot run a job by using an API key from a [service ID](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-authentication.html). To create jobs for a listed asset from the Assets page of a project: 1. Select the asset from the section for your asset type and choose New job from the menu icon with the lists of options () at the end of the table row. 2. Define the job details by entering a name and a description (optional). 3. If you can select Setting, specify the settings that you want for the job. 4. If you can select Configure, choose an environment runtime for the job. Depending on the asset type, you can optionally configure more settings, for example environment variables or script arguments. To avoid accumulating too many finished job runs and job run artifacts, set how long to retain finished job runs and job run artifacts like logs or notebook results. You can either select the number of days to retain the job runs or the last number of job runs to keep. 5. On the Schedule page, you can optionally add a one-time or repeating schedule. If you select the Repeat option and unit of Minutes with the value of n, the job runs at the start of the hour, and then at every multiple of n. For example, if you specify a value of 11 it will run at 0, 11, 22, 33, 44 and 55 minutes of each hour. If you also select the Start of Schedule option, the job starts to run at the first multiple of n of the hour that occurs after the time that you provide in the Start Time field. For example, if you enter 10:24 for the Start of Time value, and you select Repeat and set the job to repeat every 14 minutes, then your job will run at 10:42, 10:56, 11:00, 11:14. 11:28, 11:42, 11:56, and so on. You can't change the time zone; the schedule uses your web browser's time zone setting. If you exclude certain weekdays, the job might not run as you would expect. 
The reason might be due to a discrepancy between the time zone of the user who creates the schedule, and the time zone of the compute node where the job runs. An API key is generated when you create a scheduled job, and future runs will use this API key. If you didn't create a scheduled job but choose to modify one, an API key is generated for you when you modify the job and future runs will use this API key. 6. (Optional): Select to see notifications for the job. You can select the type of alerts to receive. 7. Review the job settings. Then, create the job and run it immediately, or create the job and run it later. Managing jobs You can view all of the jobs that exist for your project from the project's Jobs page. With Admin or Editor role for the project, you can view and edit the job details. You can run jobs manually and you can delete jobs. With Viewer role for the project, you can only view the job details. You can't run or delete jobs with Viewer role. To view the details of a specific job, click the job. From the job's details page, you can: * View the runs for that job and the status of each run. If a run",how-to,1,train
F1CDB96AD5A56206F662BB3025B93F6D5820242B,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/plotnodeslots.html?context=cdpaas&locale=en,plotnode properties,"plotnode properties
plotnode properties The Plot node shows the relationship between numeric fields. You can create a plot by using points (a scatterplot) or lines. plotnode properties Table 1. plotnode properties plotnode properties Data type Property description x_field field Specifies a custom label for the x axis. Available only for labels. y_field field Specifies a custom label for the y axis. Available only for labels. three_D flag Specifies a custom label for the y axis. Available only for labels in 3-D graphs. z_field field color_field field Overlay field. size_field field shape_field field panel_field field Specifies a nominal or flag field for use in making a separate chart for each category. Charts are paneled together in one output window. animation_field field Specifies a nominal or flag field for illustrating data value categories by creating a series of charts displayed in sequence using animation. transp_field field Specifies a field for illustrating data value categories by using a different level of transparency for each category. Not available for line plots. overlay_type None Smoother Function Specifies whether an overlay function or LOESS smoother is displayed. overlay_expression string Specifies the expression used when overlay_type is set to Function. style Point Line point_type Rectangle Dot Triangle Hexagon Plus Pentagon Star BowTie HorizontalDash VerticalDash IronCross Factory House Cathedral OnionDome ConcaveTriangle OblateGlobe CatEye FourSidedPillow RoundRectangle Fan x_mode Sort Overlay AsRead x_range_mode Automatic UserDefined x_range_min number x_range_max number y_range_mode Automatic UserDefined y_range_min number y_range_max number z_range_mode Automatic UserDefined z_range_min number z_range_max number jitter flag records_limit number if_over_limit PlotBins PlotSample PlotAll x_label_auto flag x_label string y_label_auto flag y_label string z_label_auto flag z_label string use_grid flag graph_background color Standard graph colors are described at the beginning of this section. page_background color Standard graph colors are described at the beginning of this section. use_overlay_expr flag Deprecated in favor of overlay_type.",conceptual,0,train
98AA3E34D14723232D266A85CBB9E2B1816B1AA5,https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-refine.html?context=cdpaas&locale=en,Quick start: Refine data,"Quick start: Refine data
Quick start: Refine data You can save data preparation time by quickly transforming large amounts of raw data into consumable, high-quality information that is ready for analytics. Read about the Data Refinery tool, then watch a video and take a tutorial that’s suitable for beginners and does not require coding. Your basic workflow includes these tasks: 1. Open your sandbox project. Projects are where you can collaborate with others to work with data. 2. Add your data to the project. You can add CSV files or data from a remote data source through a connection. 3. Open the data in Data Refinery. 4. Perform steps using operations to refine the data. 5. Create and run a job to transform the data. Read about Data Refinery Use Data Refinery to cleanse and shape tabular data with a graphical flow editor. You can also use interactive templates to code operations, functions, and logical operators. When you cleanse data, you fix or remove data that is incorrect, incomplete, improperly formatted, or duplicated. When you shape data, you customize it by filtering, sorting, combining or removing columns, and performing operations. You create a Data Refinery flow as a set of ordered operations on data. Data Refinery includes a graphical interface to profile your data to validate it and over 20 customizable charts that give you perspective and insights into your data. When you save the refined data set, you typically load it to a different location than where you read it from. In this way, your source data remains untouched by the refinement process. [Read more about refining data](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/refining_data.html) Watch a video about refining data  Watch this video to see how to refine data. This video provides a visual method to learn the concepts and tasks in this documentation. Try a tutorial to refine data In this tutorial, you will complete these tasks: * [Task 1: Open a project.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-refine.html?context=cdpaas&locale=enstep01) * [Task 2: Open the data set in Data Refinery.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-refine.html?context=cdpaas&locale=enstep02) * [Task 3: Review the data with Profile and Visualizations.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-refine.html?context=cdpaas&locale=enstep03) * [Task 4: Refine the data.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-refine.html?context=cdpaas&locale=enstep04) * [Task 5: Run a job for the Data Refinery flow.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-refine.html?context=cdpaas&locale=enstep05) * [Task 6: Create another data asset from the Data Refinery flow.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-refine.html?context=cdpaas&locale=enstep06) * [Task 7: View the data assets and your Data Refinery flow in your project.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-refine.html?context=cdpaas&locale=enstep07) This tutorial will take approximately 30 minutes to complete. Expand all sections * Tips for completing this tutorial ### Use the video picture-in-picture Tip: Start the video, then as you scroll through the tutorial, the video moves to picture-in-picture mode. Close the video table of contents for the best experience with picture-in-picture. 
You can use picture-in-picture mode so you can follow the video as you complete the tasks in this tutorial. Click the timestamps for each task to follow along.The following animated image shows how to use the video picture-in-picture and table of contents features: {: width=""560px"" height=""315px"" data-tearsheet=""this""} ### Get help in the community If you need help with this tutorial, you can ask a question or find an answer in the [Cloud Pak for Data Community discussion forum](https://community.ibm.com/community/user/cloudpakfordata/communities/community-home/digestviewer?communitykey=c0c16ff2-10ef-4b50-ae4c-57d769937235){: new_window}. ### Set up your browser windows For the optimal experience completing this tutorial, open Cloud Pak for Data in one browser window, and keep this tutorial page open in another browser window to switch easily between the two applications. Consider arranging the two browser windows side-by-side to make it easier to follow along. {: width=""560px"" height=""315px"" data-tearsheet=""this""} Tip: If you encounter a guided tour while completing this tutorial in the user interface, click Maybe later. [Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-refine.html?context=cdpaas&locale=envideo-preview) * Task 1: Open a project You need a project to store the data and the Data Refinery flow. You can use your sandbox project or create a project. 1. From the navigation menu {: iih}, choose Projects > View all projects 1. Open your sandbox project. If you want to use a new project: 1. Click New project. 1. Select Create an empty project. 1. Enter a name and optional description for the project. 1. Choose an existing [object storage service instance](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/storage-options.html){: new_window} or create a new one. 1. Click Create. ### {: iih} Check your progress The following image shows a new, empty project. {: width=""100%"" } For more information or to watch a video, see [Creating a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/projects.html){: new_window}. [Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-refine.html?context=cdpaas&locale=envideo-preview) * Task 2: Open the data set in Data Refinery  To preview this task, watch the video beginning at 00:05. Follow these steps to add a data asset to your project and create a Data Refinery flow. The data set you will use in this tutorial is available in the Samples. 1. Access the [Airline data](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/8fa07e57e69f7d0cb970c86c6ae52d41){: new_window} in the Samples. 1. Click Add",how-to,1,train
6F544922DE2638796837398F7EC15A4AFE6B0781,https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/spss-algorithms.html?context=cdpaas&locale=en,SPSS predictive analytics algorithms,"SPSS predictive analytics algorithms
SPSS predictive analytics algorithms You can use the following SPSS predictive analytics algorithms in your notebooks. Code samples are provided for Python notebooks. Notebooks must run in a Spark with Python environment runtime. To run the algorithms described in this section, you don't need the SPSS Modeler service. * [Data preparation](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/datapreparation-guides.html) * [Classification and regression](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/classificationandregression-guides.html) * [Clustering](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/clustering-guides.html) * [Forecasting](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/forecasting-guides.html) * [Survival analysis](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/survivalanalysis-guides.html) * [Score](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/score-guides.html) Parent topic:[Notebooks and scripts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebooks-and-scripts.html)",conceptual,0,train
2C0EBF0CCB497F41C14A5895EF97C01864BFC3D2,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_function_ref_spatial.html?context=cdpaas&locale=en,Spatial functions (SPSS Modeler),"Spatial functions (SPSS Modeler)
Spatial functions Spatial functions can be used with geospatial data. For example, they allow you to calculate the distances between two points, the area of a polygon, and so on. There can also be situations that require a merge of multiple geospatial data sets that are based on a spatial predicate (within, close to, and so on), which can be done through a merge condition. Notes: * These spatial functions don't apply to three-dimensional data. If you import three-dimensional data into a flow, only the first two dimensions are used by these functions. The z-axis values are ignored. * Geospatial functions aren't supported. CLEM spatial functions Table 1. CLEM spatial functions Function Result Description close_to(SHAPE,SHAPE,NUM) Boolean Tests whether 2 shapes are within a certain DISTANCE of each other. If a projected coordinate system is used, DISTANCE is in meters. If no coordinate system is used, it is an arbitrary unit. crosses(SHAPE,SHAPE) Boolean Tests whether 2 shapes cross each other. This function is suitable for 2 linestring shapes, or 1 linestring and 1 polygon. overlap(SHAPE,SHAPE) Boolean Tests whether there is an intersection between 2 polygons and that the intersection is interior to both shapes. within(SHAPE,SHAPE) Boolean Tests whether the entirety of SHAPE1 is contained within a POLYGON. area(SHAPE) Real Returns the area of the specified POLYGON. If a projected system is used, the function returns meters squared. If no coordinate system is used, it is an arbitrary unit. The shape must be a POLYGON or a MULTIPOLYGON. num_points(SHAPE,LIST) Integer Returns the number of points from a point field (MULTIPOINT) which are contained within the bounds of a POLYGON. SHAPE1 must be a POLYGON or a MULTIPOLYGON. distance(SHAPE,SHAPE) Real Returns the distance between SHAPE1 and SHAPE2. If a projected coordinate system is used, the function returns meters. If no coordinate system is used, it is an arbitrary unit. SHAPE1 and SHAPE2 can be any geo measurement type.",conceptual,0,train
033F114BFF6D5479C2B4BE7C1542A4C778ABA53E,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/jython/clementine/python_add_attributes.html?context=cdpaas&locale=en,Adding attributes to a class instance,"Adding attributes to a class instance
Adding attributes to a class instance Unlike in Java, in Python clients can add attributes to an instance of a class. Only the one instance is changed. For example, to add attributes to an instance x, set new values on that instance: x.attr1 = 1 x.attr2 = 2 . . x.attrN = n",how-to,1,train
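A short, self-contained sketch of this behavior follows: the attribute exists only on the instance it was added to, and other instances of the same class are unaffected. The Counter class and the label attribute are illustrative names only.

```python
# Small sketch showing that an attribute added to one instance does not
# affect other instances of the same class.
class Counter:
    def __init__(self):
        self.value = 0

x = Counter()
y = Counter()

x.label = "first"           # add a new attribute to instance x only

print(hasattr(x, "label"))  # True
print(hasattr(y, "label"))  # False -- y is unchanged
```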
C2DA4BDE14D0A2DA1E0E2D795E7DC7469F422DB9,https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/output-bias.html?context=cdpaas&locale=en,Output bias,"Output bias
Output bias Risks associated with output | Fairness | New Description Generated model content might unfairly represent certain groups or individuals. For example, a large language model might unfairly stigmatize or stereotype specific persons or groups. Why is output bias a concern for foundation models? Bias can harm users of the AI models and magnify existing exclusive behaviors. Business entities can face reputational harms and other consequences. Example Biased Generated Images Lensa AI is a mobile app with generative features trained on Stable Diffusion that can generate “Magic Avatars” based on images users upload of themselves. According to the source report, some users discovered that generated avatars are sexualized and racialized. Sources: [Business Insider, January 2023](https://www.businessinsider.com/lensa-ai-raises-serious-concerns-sexualization-art-theft-data-2023-1) Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)",conceptual,0,train
C3552C5E0F334C8BC3557960821DC5EF931851A1,https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/asset-activities.html?context=cdpaas&locale=en,Activities for assets,"Activities for assets
Activities for assets For some asset types, you can see the activities of each asset in projects. The activities graph shows the history of the events that are performed on the asset for some tools. An event is an action that changes or copies the asset. For example, editing the asset description is an event, but viewing the asset is not an event. Requirements and restrictions You can view the activities of assets under the following circumstances. * Workspaces You can view the asset activities in projects. * Limitations Activities have the following limitations: * Activities graphs are currently available only for Watson Machine Learning models and data assets. * Activities graphs do not appear in Microsoft Internet Explorer 11 browsers. Activities events To view activities for an asset in a project, click the asset name and click . The activities panel shows a timeline of events. Summary information about the asset shows where asset was created, what the last event for it was, and when the last event happened. The first event for each asset is its creation. Activities events can describe actions that are applicable to all asset types or actions that are specific to an asset type: * [General events](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/asset-activities.html?context=cdpaas&locale=engeneral) * [Events specific to Watson Machine Learning models](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/asset-activities.html?context=cdpaas&locale=enwml) * [Events specific to data assets from files and connected data assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/asset-activities.html?context=cdpaas&locale=endata) You can see this type of information about each event: * Where: In which catalog or project the event occurred. * Who: The name of the user who performed the action, unless the action was automated. Automated actions generate events, but don't show usernames. * What: A description of the action. Some events show details about the original and updated values. * When: The date and time of the event. Activities also track relationships between assets. In the activities panel, the creation of a new asset based on the original asset is shown at the top of the list. Click See details to view asset details. General events You can see these general events: * Name updated * Description updated * Tags updated Events specific to Watson Machine Learning models Activities tracking is available for all Watson Machine Learning service plans, however, you wouldn't see events for actions that are not available with your plan. 
In addition to general events, you can see these events that are specific to models: * Model created * Model deployed * Model re-evaluated * Model retrained * Set as active model A model asset shows this information in the Created from field, depending on how it was created: * The name of the associated data asset * The name of the associated connection asset * The project name where it was created Events specific to data assets from files and connected data assets In addition to general events, you can see these events that are specific to data assets from files and connected data assets: * Added to project from a Data Refinery flow * Added to a project from a file * Data classes updated * Schema updated by a Data Refinery flow * Profile created * Profile updated * Profile deleted * Downloaded A data asset shows this information in the Created from field, depending on how it was created: * The name of the Data Refinery flow that created it * Its associated connection name * The project name where it was created or came from Parent topic:[Finding and viewing an asset in a catalog](https://dataplatform.cloud.ibm.com/docs/content/wsj/catalog/view-asset.html)",conceptual,0,train
A8A2D53661EB9EF173F7CC4794096A134123DACA,https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-entity-enhanced.html?context=cdpaas&locale=en,Entity extraction,"Entity extraction
Entity extraction The Watson Natural Language Processing Entity extraction models extract entities from input text. For details, on available extraction types, refer to these sections: * [Machine-learning-based extraction for general entities](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-entity-enhanced.html?context=cdpaas&locale=enmachine-learning-general) * [Machine-learning-based extraction for PII entities](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-entity-enhanced.html?context=cdpaas&locale=enmachine-learning-pii) * [Rule-based extraction for general entities](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-entity-enhanced.html?context=cdpaas&locale=enrule-based-general) * [Rule-based extraction for PII entities](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-entity-enhanced.html?context=cdpaas&locale=enrule-based-pii) Machine-learning-based extraction for general entities The machine-learning-based extraction models are trained on labeled data for the more complex entity types such as person, organization and location. Capabilities The entity models extract entities from the input text. The following types of entities are recognized: * Date * Duration * Facility * Geographic feature * Job title * Location * Measure * Money * Ordinal * Organization * Person * Time Capabilities of machine-learning-based extraction based on an example Capabilities Examples Extracts entities from the input text. IBM's CEO Arvind Krishna is based in the US -> IBMOrganization , CEOJobTitle, Arvind KrishnaPerson, USLocation Available workflows and blocks differ, depending on the runtime used. Blocks and workflows for handling general entities with their corresponding runtimes Block or workflow name Available in runtime entity-mentions_transformer-workflow_multilingual_slate.153m.distilled [Runtime 23.1](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-entity-enhanced.html?context=cdpaas&locale=enruntime-231) entity-mentions_transformer-workflow_multilingual_slate.153m.distilled-cpu [Runtime 23.1](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-entity-enhanced.html?context=cdpaas&locale=enruntime-231) entity-mentions_bert_multi_stock [Runtime 22.2](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-entity-enhanced.html?context=cdpaas&locale=enruntime-222) Machine-learning-based workflows for general entities in Runtime 23.1 Workflow names * entity-mentions_transformer-workflow_multilingual_slate.153m.distilled: this workflow can be used on both CPUs and GPUs. * entity-mentions_transformer-workflow_multilingual_slate.153m.distilled-cpu: this workflow is optimized for CPU-based runtimes. Supported languages Entity extraction is available for the following languages. 
For a list of the language codes and the corresponding language, see [Language codes](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-catalog.html#lang-codes): ar, cs, da, de, en, es, fi, fr, he, hi, it, ja, ko, nb, nl, nn, pt, ro, ru, sk, sv, tr, zh-cn Code sample 
import watson_nlp
# Load the workflow model
entities_workflow = watson_nlp.load('entity-mentions_transformer-workflow_multilingual_slate.153m.distilled')
# Run the entity extraction workflow on the input text
entities = entities_workflow.run(""IBM's CEO Arvind Krishna is based in the US"", language_code=""en"")
print(entities.get_mention_pairs())
Output of the code sample: [('IBM', 'Organization'), ('CEO', 'JobTitle'), ('Arvind Krishna', 'Person'), ('US', 'Location')] Machine-learning-based blocks for general entities in Runtime 22.2 Block names entity-mentions_bert_multi_stock Supported languages Entity extraction is available for the following languages. For a list of the language codes and the corresponding language, see [Language codes](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-catalog.html#lang-codes). ar, cs, da, de, en, es, fi, fr, he, hi, it, ja, ko, nb, nl, nn, pt, ro, ru, sk, sv, tr, zh-cn Dependencies on other blocks The following block must run before you can run the Entity extraction block: * syntax_izumo__stock Code sample 
import watson_nlp
# Load the Syntax model for English, and the multilingual BERT Entity model
syntax_model = watson_nlp.load('syntax_izumo_en_stock')
bert_entity_model = watson_nlp.load('entity-mentions_bert_multi_stock')
# Run the syntax model on the input text
syntax_prediction = syntax_model.run(""IBM's CEO Arvind Krishna is based in the US"")
# Run the entity mention model on the result of the syntax model
bert_entity_mentions = bert_entity_model.run(syntax_prediction)
print(bert_entity_mentions.get_mention_pairs())
Output of the code sample: [('IBM', 'Organization'), ('CEO', 'JobTitle'), ('Arvind Krishna', 'Person'), ('US', 'Location')] Machine-learning-based extraction for PII entities Block names entity-mentions_bilstm_en_pii Blocks for handling Personally Identifiable Information (PII) entities with their corresponding runtimes Block name Available in runtime entity-mentions_bilstm_en_pii Runtime 22.2, Runtime 23.1 The entity-mentions_bilstm_en_pii machine-learning-based extraction model is trained on labeled data for the types person and location. Capabilities The entity-mentions_bilstm_en_pii block recognizes the following types of entities: Entities extracted by the entity-mentions_bilstm_en_pii block Entity type name Description Supported languages Location All geo-political regions, continents, countries, and street names, states, provinces, cities, towns or islands. en Person Any being; living, nonliving, fictional or real. en Dependencies on other blocks The following block must run before you can run the entity-mentions_bilstm_en_pii block: * syntax_izumo_en_stock Code sample 
import os
import watson_nlp
# Load the Syntax model and an Entity Mention BiLSTM model for English
syntax_model = watson_nlp.load('syntax_izumo_en_stock')
entity_model = watson_nlp.load('entity-mentions_bilstm_en_pii')
text = 'Denver is the capital of Colorado. The total estimated government spending in Colorado in fiscal year 2016 was $36.0 billion. IBM office is located in downtown Denver. Michael Hancock is the mayor of Denver.' 
Run the syntax model on the input text syntax_prediction = syntax_model.run(text) Run the entity mention model on the result of the syntax analysis entity_mentions = entity_model.run(syntax_prediction) print(entity_mentions) Output of the code sample: { ""mentions"": [ { ""span"": { ""begin"": 0, ""end"": 6, ""text"": ""Denver"" }, ""type"": ""Location"", ""producer_id"": { ""name"": ""BiLSTM Entity Mentions"", ""version"": ""1.0.0"" }, ""confidence"": 0.6885626912117004, ""mention_type"": ""MENTT_UNSET"", ""mention_class"": ""MENTC_UNSET"", ""role"": """" }, { ""span"": { ""begin"": 25, ""end"": 33, ""text"": ""Colorado"" }, ""type"": ""Location"", ""producer_id"": { ""name"": ""BiLSTM Entity Mentions"", ""version"": ""1.0.0"" }, ""confidence"": 0.8509215116500854, ""mention_type"": ""MENTT_UNSET"", ""mention_class"": ""MENTC_UNSET"", ""role"": """" }, { ""span"": { ""begin"": 78, ""end"": 86, ""text"": ""Colorado"" }, ""type"": ""Location"", ""producer_id"": { ""name"": ""BiLSTM Entity Mentions"", ""version"": ""1.0.0"" }, ""confidence"": 0.9928259253501892, ""mention_type"": ""MENTT_UNSET"", ""mention_class"": ""MENTC_UNSET"", ""role"": """" }, { ""span"": { ""begin"": 151, ""end"": 166, ""text"": ""downtown Denver"" }, ""type"": ""Location"", ""producer_id"": { ""name"": ""BiLSTM Entity Mentions"", ""version"": ""1.0.0"" }, ""confidence"": 0.48378944396972656, ""mention_type"": ""MENTT_UNSET"", ""mention_class"": ""MENTC_UNSET"", ""role"": """" }, { ""span"": { ""begin"": 168, ""end"": 183, ""text"": ""Michael Hancock"" }, ""type"": ""Person"", ""producer_id"": { ""name"": ""BiLSTM Entity Mentions"",",conceptual,0,train
8ED36D5E1CCDFB0139D9D3DB3AEA2B90AE1B405E,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/svm.html?context=cdpaas&locale=en,SVM node (SPSS Modeler),"SVM node (SPSS Modeler)
SVM node The SVM node uses a support vector machine to classify data. SVM is particularly suited for use with wide datasets, that is, those with a large number of predictor fields. You can use the default settings on the node to produce a basic model relatively quickly, or you can use the Expert settings to experiment with different types of SVM models. After the model is built, you can: * Browse the model nugget to display the relative importance of the input fields in building the model. * Append a Table node to the model nugget to view the model output. Example. A medical researcher has obtained a dataset containing characteristics of a number of human cell samples extracted from patients who were believed to be at risk of developing cancer. Analysis of the original data showed that many of the characteristics differed significantly between benign and malignant samples. The researcher wants to develop an SVM model that can use the values of similar cell characteristics in samples from other patients to give an early indication of whether their samples might be benign or malignant.",conceptual,0,train
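The classification idea behind the SVM node can also be sketched outside SPSS Modeler. The following scikit-learn snippet is purely illustrative and is not part of the SVM node documentation; the synthetic data, the field counts, and the parameter values are assumptions chosen only to show a support vector machine handling a wide dataset with many predictor fields.
# Illustrative only: a scikit-learn sketch of the same idea (not SPSS Modeler).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic "wide" data: 200 records, 500 predictor fields
X, y = make_classification(n_samples=200, n_features=500, n_informative=30, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = SVC(kernel='rbf', C=1.0)   # default-style settings, analogous to a basic model
model.fit(X_train, y_train)
print('holdout accuracy:', model.score(X_test, y_test))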
4D299EFFF5B982097A5B9D48EA16041E4820A8BB,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/derive.html?context=cdpaas&locale=en,Derive node (SPSS Modeler),"Derive node (SPSS Modeler)
Derive node One of the most powerful features in watsonx.ai is the ability to modify data values and derive new fields from existing data. During lengthy data mining projects, it is common to perform several derivations, such as extracting a customer ID from a string of Web log data or creating a customer lifetime value based on transaction and demographic data. All of these transformations can be performed, using a variety of field operations nodes. Several nodes provide the ability to derive new fields: * The Derive node modifies data values or creates new fields from one or more existing fields. It creates fields of type formula, flag, nominal, state, count, and conditional. * The Reclassify node transforms one set of categorical values to another. Reclassification is useful for collapsing categories or regrouping data for analysis. * The Binning node automatically creates new nominal (set) fields based on the values of one or more existing continuous (numeric range) fields. For example, you can transform a continuous income field into a new categorical field containing groups of income as deviations from the mean. After you create bins for the new field, you can generate a Derive node based on the cut points. * The Set to Flag node derives multiple flag fields based on the categorical values defined for one or more nominal fields. * The Restructure node converts a nominal or flag field into a group of fields that can be populated with the values of yet another field. For example, given a field named payment type, with values of credit, cash, and debit, three new fields would be created (credit, cash, debit), each of which might contain the value of the actual payment made. Tip: The Control Language for Expression Manipulation (CLEM) is a powerful tool you can use to analyze and manipulate the data used in your flows. For example, you might use CLEM in a node to derive values. For more information, see the [CLEM (legacy) language reference](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_language_reference.html).",conceptual,0,train
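For readers who script their flows, a minimal sketch of creating a Derive node follows. It assumes the standard SPSS Modeler Python scripting API; the derivenode property names (new_name, result_type, formula_expr), the node coordinates, and the CLEM formula are illustrative assumptions, so verify them against the derivenode scripting properties for your release.
# Hypothetical sketch: create a Derive node from a flow script.
# Property names below are assumptions; check the derivenode properties reference.
stream = modeler.script.stream()
source = stream.findByType('variablefile', None)   # assumes an existing import node

derive = stream.createAt('derive', 'Customer value', 200, 100)
derive.setPropertyValue('new_name', 'lifetime_value')
derive.setPropertyValue('result_type', 'Formula')
derive.setPropertyValue('formula_expr', 'monthly_spend * tenure_months')

stream.link(source, derive)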
2904E26946523BB3E78975F68A822F5F2A32B9F5,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_function_ref_trigonometric.html?context=cdpaas&locale=en,Trigonometric functions (SPSS Modeler),"Trigonometric functions (SPSS Modeler)
Trigonometric functions All of the functions in this section either take an angle as an argument or return one as a result. CLEM trigonometric functions Table 1. CLEM trigonometric functions Function Result Description arccos(NUM) Real Computes the arccosine of the specified angle. arccosh(NUM) Real Computes the hyperbolic arccosine of the specified angle. arcsin(NUM) Real Computes the arcsine of the specified angle. arcsinh(NUM) Real Computes the hyperbolic arcsine of the specified angle. arctan(NUM) Real Computes the arctangent of the specified angle. arctan2(NUM_Y, NUM_X) Real Computes the arctangent of NUM_Y / NUM_X and uses the signs of the two numbers to derive quadrant information. The result is a real in the range - pi < ANGLE <= pi (radians) – 180 < ANGLE <= 180 (degrees) arctanh(NUM) Real Computes the hyperbolic arctangent of the specified angle. cos(NUM) Real Computes the cosine of the specified angle. cosh(NUM) Real Computes the hyperbolic cosine of the specified angle. pi Real This constant is the best real approximation to pi. sin(NUM) Real Computes the sine of the specified angle. sinh(NUM) Real Computes the hyperbolic sine of the specified angle. tan(NUM) Real Computes the tangent of the specified angle. tanh(NUM) Real Computes the hyperbolic tangent of the specified angle.",conceptual,0,train
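The quadrant behavior of arctan2 can be illustrated with a quick check. The snippet below uses Python's math.atan2 rather than CLEM, purely to demonstrate the sign convention described in the table.
# Illustration only (Python, not CLEM): atan2 uses the signs of both arguments
# to place the result in the correct quadrant, in the range -pi < angle <= pi.
import math

print(math.atan2(1, 1))    #  0.785... ( 45 degrees, quadrant I)
print(math.atan2(1, -1))   #  2.356... (135 degrees, quadrant II)
print(math.atan2(-1, -1))  # -2.356... (-135 degrees, quadrant III)
print(math.atan2(-1, 1))   # -0.785... (-45 degrees, quadrant IV)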
6F35B89192B6C9A233B859CF66FCC435F3F9E650,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/kmeansnodeslots.html?context=cdpaas&locale=en,kmeansnode properties,"kmeansnode properties
kmeansnode properties The K-Means node clusters the data set into distinct groups (or clusters). The method defines a fixed number of clusters, iteratively assigns records to clusters, and adjusts the cluster centers until further refinement can no longer improve the model. Instead of trying to predict an outcome, k-means uses a process known as unsupervised learning to uncover patterns in the set of input fields. kmeansnode properties Table 1. kmeansnode properties kmeansnode Properties Values Property description inputs [field1 ... fieldN] K-means models perform cluster analysis on a set of input fields but do not use a target field. Weight and frequency fields are not used. See [Common modeling node properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/modelingnodeslots_common.html#modelingnodeslots_common) for more information. num_clusters number gen_distance flag cluster_label String, Number label_prefix string mode Simple, Expert stop_on Default, Custom max_iterations number tolerance number encoding_value number optimize Speed, Memory Specifies whether model building should be optimized for speed or for memory.",conceptual,0,train
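As a usage sketch, the properties in Table 1 can be set from a flow script. The snippet assumes the standard SPSS Modeler Python scripting API and an existing import node; the node coordinates and the chosen property values are illustrative, not recommendations.
# Minimal sketch: build a K-Means node from a flow script and set properties
# listed in the table above. The source node and coordinates are hypothetical.
stream = modeler.script.stream()
source = stream.findByType('variablefile', None)

kmeans = stream.createAt('kmeans', 'K-Means', 300, 100)
kmeans.setPropertyValue('num_clusters', 5)
kmeans.setPropertyValue('mode', 'Expert')
kmeans.setPropertyValue('max_iterations', 100)
kmeans.setPropertyValue('optimize', 'Speed')

stream.link(source, kmeans)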
3AF15CB9E302A9E0D7DE22DE648EF7B3DCA1D865,https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-uni-tutorial.html?context=cdpaas&locale=en,Tutorial: AutoAI univariate time series experiment,"Tutorial: AutoAI univariate time series experiment
Tutorial: AutoAI univariate time series experiment Use sample data to train a univariate (single prediction column) time series experiment that predicts minimum daily temperatures. When you set up the experiment, you load data that tracks daily minimum temperatures for the city of Melbourne, Australia. The experiment will generate a set of pipelines that use algorithms to predict future minimum daily temperatures. After generating the pipelines, AutoAI compares and tests them, chooses the best performers, and presents them in a leaderboard for you to review. Data set overview The [Mini_Daily_Temperatures](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/de4d953f2a766fbc0469723eba0d93ef) data set describes the minimum daily temperatures over 10 years (1981-1990) in the city Melbourne, Australia. The units are in degrees celsius and the data set contains 3650 observations. The source of the data is the Australian Bureau of Meteorology. Details about the data set are described here:  * You will use the Min_Temp column as the prediction column to build pipelines and forecast the future daily minimum temperatures. Before the pipeline training, the date column and Min_Temp column are used together to figure out the appropriate lookback window. * The prediction column forecasts a prediction for the daily minimum temperature on a specified day. * The sample data is structured in rows and columns and saved as a .csv file. Tasks overview In this tutorial, you follow these steps to create a univariate time series experiment: 1. [Create a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-uni-tutorial.html?context=cdpaas&locale=enstep0) 2. [Create an AutoAI experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-uni-tutorial.html?context=cdpaas&locale=enstep1) 3. [Configure the experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-uni-tutorial.html?context=cdpaas&locale=enstep2) 4. [Review experiment results](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-uni-tutorial.html?context=cdpaas&locale=enstep3) 5. [Deploy the trained model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-uni-tutorial.html?context=cdpaas&locale=enstep4) 6. [Test the deployed model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-uni-tutorial.html?context=cdpaas&locale=enstep5) Create a project Follow these steps to download the [Mini_Daily_Temperatures](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/de4d953f2a766fbc0469723eba0d93ef) data set from the Samples and create an empty project: 1. From the navigation menu , click Samples and download a local copy of the [Mini_Daily_Temperatures](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/de4d953f2a766fbc0469723eba0d93ef) data set. 2. From the navigation menu , click Projects > View all projects, then click New Project. 1. Click Create an empty project. 2. Enter a name and optional description for your project. 3. Click Create. Create an AutoAI experiment Follow these steps to create an AutoAI experiment and add sample data to your experiment: 1. On the Assets tab from within your project, click New asset > Build machine learning models automatically. 2. Specify a name and optional description for your experiment, then select Create. 3. 
Select Associate a Machine Learning service instance to create a new service instance or associate an existing instance with your project. Click Reload to confirm your configuration. 4. Click Create. 5. To add the sample data, choose one of these methods: * If you downloaded your file locally, upload the training data file, Daily_Min_Temperatures.csv, by clicking Browse and then following the prompts. * If you already uploaded your file to your project, click Select from project, then select the Data asset tab and choose Daily_Min_Temperatures.csv. Configure the experiment Follow these steps to configure your univariate AutoAI time series experiment: 1. Click Yes for the option to create a Time Series Forecast. 2. Choose Min_Temp as the prediction column. 3. Choose Date as the date/time column. 4. Click Experiment settings to configure the experiment: 1. In the Data source page, select the Time series tab. 2. For this tutorial, accept the default value for Number of backtests (4), Gap length (0 steps), and Holdout length (20 steps). Note: The validation length changes if you change the value of any of the parameters: Number of backtests, Gap length, or Holdout length. 3. Click Cancel to exit from the Experiment settings. 5. Click Run experiment to begin the training. Review experiment results The experiment takes several minutes to complete. As the experiment trains, a visualization shows the transformations that are used to create pipelines. Follow these steps to review experiment results and save the pipeline with the best performance. 1. (Optional): Hover over any node in the visualization to get details on the transformation for a particular pipeline. 2. (Optional): After the pipelines are listed on the leaderboard, click Pipeline comparison to see how they differ. For example: 3. (Optional): When the training completes, the top three best-performing pipelines are saved to the leaderboard. Click View discarded pipelines to review the lowest-performing pipelines. 4. Select the pipeline with Rank 1 and click Save as to create your model. Then, select Create. This action saves the pipeline under the Models section in the Assets tab. Deploy the trained model Before you can use your trained model to make predictions on new data, you must deploy the model. Follow these steps to promote your trained model to",how-to,1,train
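The tutorial text is cut off above. As a rough sketch of the final step, testing the deployed model programmatically, the snippet below uses the ibm_watson_machine_learning Python client; the credentials, space ID, deployment ID, and payload values are placeholders, and the exact input fields for a time series deployment should be checked against your deployment's input schema.
# Hypothetical sketch: score the deployed time series model with the
# ibm_watson_machine_learning Python client. All IDs and values are placeholders.
from ibm_watson_machine_learning import APIClient

wml_credentials = {'url': 'https://us-south.ml.cloud.ibm.com', 'apikey': '***'}
client = APIClient(wml_credentials)
client.set.default_space('<space_id>')

# Supply recent observations of the prediction column as the scoring input
payload = {'input_data': [{'fields': ['Min_Temp'], 'values': [[7.2], [7.8], [8.1]]}]}
result = client.deployments.score('<deployment_id>', payload)
print(result)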
4CD539B8153216F80B26729A35AD4CD04A9C27DB,https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-models.html?context=cdpaas&locale=en,Creating the initial model,"Creating the initial model
Creating the initial model Parties can create and save the initial model before training by following a set of examples. * [Save the Tensorflow model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-models.html?context=cdpaas&locale=en#tf-config) * [Save the Scikit-learn model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-models.html?context=cdpaas&locale=en#sklearn-config) * [Save the Pytorch model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-models.html?context=cdpaas&locale=en#pytorch) Consider the configuration examples that match your model type. Save the Tensorflow model 
import tensorflow as tf
from tensorflow.keras import Model
from tensorflow.keras.layers import Conv2D, Dense, Flatten
import numpy as np
import os

class MyModel(Model):
    def __init__(self):
        super(MyModel, self).__init__()
        self.conv1 = Conv2D(32, 3, activation='relu')
        self.flatten = Flatten()
        self.d1 = Dense(128, activation='relu')
        self.d2 = Dense(10)

    def call(self, x):
        x = self.conv1(x)
        x = self.flatten(x)
        x = self.d1(x)
        return self.d2(x)

# Create an instance of the model
model = MyModel()
loss_object = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
optimizer = tf.keras.optimizers.Adam()
acc = tf.keras.metrics.SparseCategoricalAccuracy(name='accuracy')
model.compile(optimizer=optimizer, loss=loss_object, metrics=[acc])
img_rows, img_cols = 28, 28
input_shape = (None, img_rows, img_cols, 1)
model.compute_output_shape(input_shape=input_shape)

dir = ""./model_architecture""
if not os.path.exists(dir):
    os.makedirs(dir)
model.save(dir)
If you choose Tensorflow as the model framework, you need to save a Keras model in the SavedModel format. A Keras model can be saved in SavedModel format by using tf.keras.Model.save(). To compress your files, run the command zip -r mymodel.zip model_architecture. The contents of your .zip file must contain: 
mymodel.zip
└── model_architecture
    ├── assets
    ├── keras_metadata.pb
    ├── saved_model.pb
    └── variables
        ├── variables.data-00000-of-00001
        └── variables.index
Save the Scikit-learn model * [SKLearn classification](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-models.html?context=cdpaas&locale=en#sk-class) * [SKLearn regression](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-models.html?context=cdpaas&locale=en#sk-reg) * [SKLearn Kmeans](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-models.html?context=cdpaas&locale=en#sk-k) SKLearn classification 
from sklearn.linear_model import SGDClassifier
import numpy as np
import joblib

model = SGDClassifier(loss='log', penalty='l2')
# You must specify the class labels for IBM Federated Learning by using model.classes_.
# Class labels must be contained in a numpy array. In this example, there are 10 classes.
model.classes_ = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
joblib.dump(model, ""./model_architecture.pickle"")
SKLearn regression 
from sklearn.linear_model import SGDRegressor
import pickle

model = SGDRegressor(loss='huber', penalty='l2')
with open(""./model_architecture.pickle"", 'wb') as f:
    pickle.dump(model, f)
SKLearn Kmeans 
from sklearn.cluster import KMeans
import joblib

model = KMeans()
joblib.dump(model, ""./model_architecture.pickle"")
You need to create a .zip file that contains your model in pickle format by running the command zip mymodel.zip model_architecture.pickle. 
The contents of your .zip file must contain: 
mymodel.zip
└── model_architecture.pickle
Save the PyTorch model 
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Flatten(start_dim=1, end_dim=-1),
    nn.Linear(in_features=784, out_features=256, bias=True),
    nn.ReLU(),
    nn.Linear(in_features=256, out_features=256, bias=True),
    nn.ReLU(),
    nn.Linear(in_features=256, out_features=256, bias=True),
    nn.ReLU(),
    nn.Linear(in_features=256, out_features=100, bias=True),
    nn.ReLU(),
    nn.Linear(in_features=100, out_features=50, bias=True),
    nn.ReLU(),
    nn.Linear(in_features=50, out_features=10, bias=True),
    nn.LogSoftmax(dim=1),
).double()
torch.save(model, ""./model_architecture.pt"")
You need to create a .zip file containing your model in pickle format. Run the command zip mymodel.zip model_architecture.pt. The contents of your .zip file should contain: 
mymodel.zip
└── model_architecture.pt
Parent topic:[Creating a Federated Learning experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-start.html)",how-to,1,train
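If you prefer to build the archives from Python instead of the zip command, a minimal standard-library sketch follows; the file and directory names match the examples above, and the alternative archive name for the TensorFlow case is only there to avoid overwriting mymodel.zip.
# Create mymodel.zip containing the saved PyTorch model file, equivalent to
# "zip mymodel.zip model_architecture.pt".
import zipfile

with zipfile.ZipFile('mymodel.zip', 'w', zipfile.ZIP_DEFLATED) as zf:
    zf.write('model_architecture.pt')

# For the TensorFlow SavedModel directory, archive the whole model_architecture folder,
# for example with shutil (creates mymodel_tf.zip in the current directory).
import shutil
shutil.make_archive('mymodel_tf', 'zip', '.', 'model_architecture')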
B08A6B7A0F11FD3AB62A14F44FD4E1A771174C61,https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/run-cuh-deploy-spaces.html?context=cdpaas&locale=en,Compute options for model training and scoring,"Compute options for model training and scoring
Compute options for model training and scoring When you train or score a model or function, you choose the type, size, and power of the hardware configuration that matches your computing needs. * [Default hardware configurations](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/run-cuh-deploy-spaces.html?context=cdpaas&locale=endefault) * [Compute usage](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/run-cuh-deploy-spaces.html?context=cdpaas&locale=encompute) Default hardware configurations Choose the hardware configuration for your Watson Machine Learning asset when you train the asset or when you deploy it. Hardware configurations available for training and deploying assets Capacity type Capacity units per hour Extra small: 1x4 = 1 vCPU and 4 GB RAM 0.5 Small: 2x8 = 2 vCPU and 8 GB RAM 1 Medium: 4x16 = 4 vCPU and 16 GB RAM 2 Large: 8x32 = 8 vCPU and 32 GB RAM 4 Extra large: 16x64 = 16 vCPU and 64 GB RAM 8 Compute usage for Watson Machine Learning assets Deployments and scoring consume compute resources as capacity unit hours (CUH) from the Watson Machine Learning service. To check the total monthly CUH consumption for your Watson Machine Learning services, from the navigation menu, select Administration -> Environment runtimes. Additionally, you can monitor the monthly resource usage in each specific deployment space. To do that, from your deployment space, go to the Manage tab and then select Resource usage. The summary shows CUHs used by deployment type: separately for AutoAI deployments, Federated Learning deployments, batch deployments, and online deployments. Compute usage details The rate of consumed CUHs is determined by the computing requirements of your deployments. It is based on such variables as: * type of deployment * type of framework * complexity of scoring Scaling a deployment to support more concurrent users and requests also increases CUH consumption. As many variables affect resource consumption for a deployment, it is recommended that you run tests on your models and deployments to analyze CUH consumption. The way that online deployments consume capacity units is based on framework. For some frameworks, CUHs are charged for the number of hours that the deployment asset is active in a deployment space. For example, SPSS models in online deployment mode that run for 24 hours a day, seven days a week, consume CUHs and are charged for that period. An active online deployment has no idle time. For other frameworks, CUHs are charged according to scoring duration. Refer to the CUH consumption table for details on how CUH usage is calculated. Compute time is calculated to the millisecond, with a 1-minute minimum for each distinct operation. 
For example: * A training run that takes 12 seconds is billed as 1 minute * A training run that takes 83.555 seconds is billed exactly as calculated CUH consumption by deployment and framework type CUH consumption is calculated by using these formulas: Deployment type Framework CUH calculation Online AutoAI, AI function, SPSS, Scikit-Learn custom libraries, Tensorflow, RShiny Deployment active duration * Number of nodes * CUH rate for capacity type framework Online Spark, PMML, Scikit-Learn, Pytorch, XGBoost Score duration in seconds * Number of nodes * CUH rate for capacity type framework Batch all frameworks Job duration in seconds * Number of nodes * CUH rate for capacity type framework Learn more * [Deploying assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/wmls/wmls-deploy-overview.html) * [Watson Machine Learning service](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wml-plans.html) * [Monitoring account resource usage](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/monitor-resources.html) Parent topic:[Managing predictive deployments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-general.html)",conceptual,0,train
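As a worked example of the batch formula, the following sketch computes CUH for a hypothetical job by using the capacity-unit rates from the hardware configurations table; the duration, node count, and capacity type are illustrative.
# Illustrative CUH calculation for a batch job, based on the formula
# "Job duration in seconds * Number of nodes * CUH rate for capacity type".
# Rates in the table are CUH per hour, so convert seconds to hours.
duration_seconds = 83.555        # example duration from the text
nodes = 1
cuh_rate_per_hour = 2            # Medium: 4 vCPU and 16 GB RAM

billed_seconds = max(duration_seconds, 60)   # 1-minute minimum per distinct operation
cuh = (billed_seconds / 3600) * nodes * cuh_rate_per_hour
print(round(cuh, 4))             # ~0.0464 CUH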
BACAF30043E33912E3D7F174B3F8CF858CB3093A,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_function_ref_sequence.html?context=cdpaas&locale=en,Sequence functions (SPSS Modeler),"Sequence functions (SPSS Modeler)
Sequence functions For some operations, the sequence of events is important. The application allows you to work with the following record sequences: * Sequences and time series * Sequence functions * Record indexing * Averaging, summing, and comparing values * Monitoring change—differentiation * @SINCE * Offset values * Additional sequence facilities For many applications, each record passing through a stream can be considered as an individual case, independent of all others. In such situations, the order of records is usually unimportant. For some classes of problems, however, the record sequence is very important. These are typically time series situations, in which the sequence of records represents an ordered sequence of events or occurrences. Each record represents a snapshot at a particular instant in time; much of the richest information, however, might be contained not in instantaneous values but in the way in which such values are changing and behaving over time. Of course, the relevant parameter may be something other than time. For example, the records could represent analyses performed at distances along a line, but the same principles would apply. Sequence and special functions are immediately recognizable by the following characteristics: * They are all prefixed by @ * Their names are given in uppercase Sequence functions can refer to the record currently being processed by a node, the records that have already passed through a node, and even, in one case, records that have yet to pass through a node. Sequence functions can be mixed freely with other components of CLEM expressions, although some have restrictions on what can be used as their arguments.",conceptual,0,train
C143A9F5185D9303301630D3FC53B604D3DCED2E,https://dataplatform.cloud.ibm.com/docs/content/wsd/tutorials/tut_bandwidth_forecast_build.html?context=cdpaas&locale=en,Creating the flow (SPSS Modeler),"Creating the flow (SPSS Modeler)
Creating the flow 1. Add a Data Asset node that points to broadband_1.csv. 2. To simplify the model, use a Filter node to filter out the Market_6 to Market_85 fields and the MONTH_ and YEAR_ fields. Figure 1. Example flow to show Time Series modeling ",how-to,1,train
F650943069620AA0BD7652DF1ABDCE2C076DE464,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/properties_pythonnodes_container.html?context=cdpaas&locale=en,Python node properties,"Python node properties
Python node properties Refer to this section for a list of available properties for Python nodes.",conceptual,0,train
45A1C384D8D6A730D73357E2BB3216EDBD2F7FF2,https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-py-function-write.html?context=cdpaas&locale=en,Writing deployable Python functions,"Writing deployable Python functions
Writing deployable Python functions Learn how to write a Python function and then store it as an asset that allows for deploying models. For a list of general requirements for deployable functions, refer to [General requirements for deployable functions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-py-function-write.html?context=cdpaas&locale=en#reqs). For information on what happens during a function deployment, refer to [Function deployment process](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-py-function-write.html?context=cdpaas&locale=en#fundepro). General requirements for deployable functions To be deployed successfully, a function must meet these requirements: * The Python function file on import must have the score function object as part of its scope. Refer to [Score function requirements](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-py-function-write.html?context=cdpaas&locale=en#score) * The scoring input payload must meet the requirements that are listed in [Scoring input requirements](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-py-function-write.html?context=cdpaas&locale=en#scoinreq) * The output payload that is expected as the output of score must include the schema of the score_response variable for status code 200. Note that the prediction parameter, with an array of JSON objects as its value, is mandatory in the score output. * When you use the Python client to save a Python function that contains a reference to an outer function, only the code in the scope of the outer function (including its nested functions) is saved. Therefore, the code outside the outer function's scope will not be saved and thus will not be available when you deploy the function. Score function requirements * Two ways to add the score function object exist: * explicitly, by the user * implicitly, by the method that is used to save the Python function as an asset in the Watson Machine Learning repository * The score function must accept a single JSON input parameter. * The score function must return a JSON-serializable object (for example, dictionaries or lists). Scoring input requirements * The scoring input payload must include an array with the name values, as shown in this example schema: {""input_data"": [{""values"": [""Hello world""]}]} Note: - The input_data parameter is mandatory in the payload. - The input_data parameter can also include additional name-value pairs. * The scoring input payload must be passed as the input parameter value for score. This way, you can ensure that the value of the score input parameter is handled accordingly inside the score function. * The scoring input payload must match the input requirements for the concerned Python function. * The scoring input payload must include an array that matches the [Example input data schema](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-py-function-write.html?context=cdpaas&locale=en#exschema). 
Example input data schema {""input_data"": [{""values"": [""Hello world""]}]} Example Python code wml_python_function 
def my_deployable_function():
    def score(payload):
        message_from_input_payload = payload.get(""input_data"")[0].get(""values"")[0]
        response_message = ""Received message - {0}"".format(message_from_input_payload)
        # Score by using the pre-defined model
        score_response = {'predictions': [{'fields': ['Response_message_field'], 'values': [[response_message]]}]}
        return score_response
    return score
score = my_deployable_function()
You can test your function like this: 
input_data = {""input_data"": [{""fields"": [""message""], ""values"": [""Hello world""]}]}
function_result = score(input_data)
print(function_result)
It returns the message ""Received message - Hello world"". Function deployment process The Python code of your Function asset gets loaded as a Python module by the Watson Machine Learning engine by using an import statement. This means that the code will be executed exactly once (when the function is deployed or each time the corresponding pod gets restarted). The score function that is defined by the Function asset is then called in every prediction request. Handling deployable functions Use one of these methods to create a deployable Python function: * [Creating deployable functions through REST API](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-py-function-write.html?context=cdpaas&locale=en#rest) * [Creating deployable functions through the Python client](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-py-function-write.html?context=cdpaas&locale=en#py) Creating deployable functions through REST API For REST APIs, because the Python function is uploaded directly through a file, the file must already contain the score function. Any one-time import that is needed later within the score function can be done within the global scope of the file. When this file is deployed as a Python function, the one-time imports available in the global scope get executed during the deployment and later simply reused with every prediction request. Important: The function archive must be a .gz file. Sample score function file: Score function.py --------------------- 
def score(input_data):
    return {'predictions': [{'values': [['Just a test']]}]}
Sample score function with one-time imports: 
import subprocess
subprocess.check_output('pip install gensim --user', shell=True)
import gensim

def score(input_data):
    return {'predictions': [{'fields': ['gensim_version'], 'values': [[gensim.__version__]]}]}
Creating deployable functions through the Python client To persist a Python function as an asset, the Python client uses the wml_client.repository.store_function method. You can do that in two ways: * [Persisting a function through a file that contains the Python function](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-py-function-write.html?context=cdpaas&locale=en#persfufile) * [Persisting a function through the function object](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-py-function-write.html?context=cdpaas&locale=en#persfunob) Persisting a function through a file that contains the Python function This method is the same as persisting the Python function file through REST APIs (score must be defined in the scope of the Python source file). 
For details, refer to [Creating deployable functions through REST API](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-py-function-write.html?context=cdpaas&locale=enrest). Important:When you are calling the",how-to,1,train
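The topic is truncated above. For orientation, here is a hedged sketch of persisting the function object with wml_client.repository.store_function (named in the text) and creating an online deployment; the metadata property names, the software specification name, and the helper methods are assumptions to verify against the Watson Machine Learning Python client documentation for your release.
# Hypothetical sketch: persist and deploy the function object with the
# ibm_watson_machine_learning client. Meta property names and the software
# specification name are assumptions; check the client documentation.
sw_spec_id = wml_client.software_specifications.get_id_by_name('runtime-23.1-py3.10')

function_meta = {
    wml_client.repository.FunctionMetaNames.NAME: 'my deployable function',
    wml_client.repository.FunctionMetaNames.SOFTWARE_SPEC_ID: sw_spec_id
}
function_details = wml_client.repository.store_function(my_deployable_function, function_meta)
function_id = wml_client.repository.get_function_id(function_details)

deploy_meta = {
    wml_client.deployments.ConfigurationMetaNames.NAME: 'my function deployment',
    wml_client.deployments.ConfigurationMetaNames.ONLINE: {}
}
deployment = wml_client.deployments.create(function_id, deploy_meta)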
C8B4A993CB8642BC87432FCB305EEE744C16A154,https://dataplatform.cloud.ibm.com/docs/content/wsd/migration.html?context=cdpaas&locale=en,Importing a stream (SPSS Modeler),"Importing a stream (SPSS Modeler)
Importing an SPSS Modeler stream You can import a stream ( .str) that was created in SPSS Modeler Subscription or SPSS Modeler client. 1. From your project's Assets tab, click . 2. Select Local file, select the .str file you want to import, and click Create. If the imported stream contains one or more source (import) or export nodes, you'll be prompted to convert the nodes. Watsonx.ai will walk you through the migration process. Watch the following video for an example of this easy process: This video provides a visual method to learn the concepts and tasks in this documentation. Video disclaimer: Some minor steps and graphical elements in this video might differ from your platform. [https://www.ustream.tv/embed/recorded/127732173](https://www.ustream.tv/embed/recorded/127732173) If the stream contains multiple import nodes that use the same data file, then you must first add that file to your project as a data asset before migrating because the conversion can't upload the same file to more than one import node. After adding the data asset to your project, reopen the flow and proceed with the migration using the new data asset. Nodes with the same name will be automatically mapped to project assets. Configure export nodes to export to your project or to a connection. The following export nodes are supported: Table 1. Export nodes that can be migrated Supported SPSS Modeler export nodes Analytic Server Database Flat File Statistics Export Data Collection Excel IBM Cognos Analytics Export TM1 Export SAS XML Export Notes: Keep the following information in mind when migrating nodes. * When migrating export nodes, you're converting node types that don't exist in watsonx.ai. The nodes are converted to Data Asset export nodes or a connection. Due to a current limitation for automatically migrating nodes, only existing project assets or connections can be selected as export targets. These assets will be overwritten during export when the flow runs. * To preserve any type or filter information, when an import node is replaced with Data Asset nodes, they're converted to a SuperNode. * After migration, you can go back later and use the Convert button if you want to migrate a node that you skipped previously. * If the stream you imported uses scripting, you may encounter an error when you run the flow even after completing a migration. This could be due to the flow script containing a reference to an unsupported import or export node. To avoid such errors, you must remove the scripting code that references the unsupported node. * If the stream you're importing contains unsupported data file types, you need to convert them to a supported type (CSV, Excel, or SPSS Statistics .sav). * In some cases, some settings from your original stream may not be restored during migration. For example, if the field delimiter in your original stream was tabs, it may be changed to commas after migration. Settings such as custom SQL also aren't migrated currently. Compare the new migrated flow to your original stream and making adjustments as needed.",how-to,1,train
D21CD926CA1FE170C8C1645CA0EC65AEDDDB4AEF,https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/create-gist.html?context=cdpaas&locale=en,Publishing a notebook as a gist,"Publishing a notebook as a gist
Publishing a notebook as a gist A gist is a simple way to share a notebook or parts of a notebook with other users. Unlike when you publish to a GitHub repository, you don't need to manage your gists; you can edit your gists directly in the browser. All project collaborators, who have administrator or editor permission, can share notebooks or parts of a notebook as gists. The latest saved version of your notebook is published as a gist. Before you can create a gist, you must be logged in to GitHub and have authorized access to gists in GitHub from Watson Studio. See [Publish notebooks on GitHub](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/github-integration.html). If this information is missing, you are prompted for it. To publish a notebook as a gist: 1. Open the notebook in edit mode. 2. Click the GitHub integration icon () and select Publish as gist. Watch this video to see how to enable GitHub integration. Video disclaimer: Some minor steps and graphical elements in this video might differ from your platform. This video provides a visual method to learn the concepts and tasks in this documentation. * Transcript Synchronize transcript with video Time Transcript 00:00 This video shows you how to publish notebooks from your Watson Studio project to your GitHub account. 00:07 Navigate to your profile and settings. 00:11 On the ""Integrations"" tab, visit the link to generate a GitHub personal access token. 00:17 Provide a descriptive name for the token and select the repo and gist scopes, then generate the token. 00:29 Copy the token, return to the GitHub integration settings, and paste the token. 00:36 The token is validated when you save it to your profile settings. 00:42 Now, navigate to your projects. 00:44 You enable GitHub integration at the project level on the ""Settings"" tab. 00:50 Simply scroll to the bottom and paste the existing GitHub repository URL. 00:56 You'll find that on the ""Code"" tab in the repo. 01:01 Click ""Update"" to make the connection. 01:05 Now, go to the ""Assets"" tab and open the notebook you want to publish. 01:14 Notice that this notebook has the credentials replaced with X's. 01:19 It's a best practice to remove or replace credentials before publishing to GitHub. 01:24 So, this notebook is ready for publishing. 01:27 You can provide the target path along with a commit message. 01:31 You also have the option to publish content without hidden code, which means that any cells in the notebook that began with the hidden cell comment will not be published. 01:42 When you're, ready click ""Publish"". 01:45 The message tells you that the notebook was published successfully and provides links to the notebook, the repository, and the commit. 01:54 Let's take a look at the commit. 01:57 So, there's the commit, and you can navigate to the repository to see the published notebook. 02:04 Lastly, you can publish as a gist. 02:07 Gists are another way to share your work on GitHub. 02:10 Every gist is a git repository, so it can be forked and cloned. 02:15 There are two types of gists: public and secret. 02:19 If you start out with a secret gist, you can convert it to a public gist later. 02:24 And again, you have the option to remove hidden cells. 02:29 Follow the link to see the published gist. 02:32 So that's the basics of Watson Studio's GitHub integration. 02:37 Find more videos in the Cloud Pak for Data as a Service documentation. 
Parent topic:[Managing the lifecycle of notebooks and scripts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/manage-nb-lifecycle.html)",how-to,1,train
CB130D4E1AE505CE39CBD49BF9D22359B9EC80AB,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/cartnodeslots.html?context=cdpaas&locale=en,cartnode properties,"cartnode properties
cartnode properties The Classification and Regression (C&R) Tree node generates a decision tree that allows you to predict or classify future observations. The method uses recursive partitioning to split the training records into segments by minimizing the impurity at each step, where a node in the tree is considered ""pure"" if 100% of cases in the node fall into a specific category of the target field. Target and input fields can be numeric ranges or categorical (nominal, ordinal, or flags); all splits are binary (only two subgroups). cartnode properties Table 1. cartnode properties cartnode Properties Values Property description target field C&R Tree models require a single target and one or more input fields. A frequency field can also be specified. See the topic [Common modeling node properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/modelingnodeslots_common.htmlmodelingnodeslots_common) for more information. continue_training_existing_model flag objective StandardBoostingBaggingpsm psm is used for very large datasets, and requires a Server connection. model_output_type SingleInteractiveBuilder use_tree_directives flag tree_directives string Specify directives for growing the tree. Directives can be wrapped in triple quotes to avoid escaping newlines or quotes. Note that directives may be highly sensitive to minor changes in data or modeling options and may not generalize to other datasets. use_max_depth DefaultCustom max_depth integer Maximum tree depth, from 0 to 1000. Used only if use_max_depth = Custom. prune_tree flag Prune tree to avoid overfitting. use_std_err flag Use maximum difference in risk (in Standard Errors). std_err_multiplier number Maximum difference. max_surrogates number Maximum surrogates. use_percentage flag min_parent_records_pc number min_child_records_pc number min_parent_records_abs number min_child_records_abs number use_costs flag costs structured Structured property. priors DataEqualCustom custom_priors structured Structured property. adjust_priors flag trails number Number of component models for boosting or bagging. set_ensemble_method VotingHighestProbabilityHighestMeanProbability Default combining rule for categorical targets. range_ensemble_method MeanMedian Default combining rule for continuous targets. large_boost flag Apply boosting to very large data sets. min_impurity number impurity_measure GiniTwoingOrdered train_pct number Overfit prevention set. set_random_seed flag Replicate results option. seed number calculate_variable_importance flag calculate_raw_propensities flag calculate_adjusted_propensities flag adjusted_propensity_partition TestValidation",conceptual,0,train
6647035446FC3A28586EBABC619D10DB5FE3F4FD,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/merge.html?context=cdpaas&locale=en,Merge node (SPSS Modeler),"Merge node (SPSS Modeler)
Merge node The function of a Merge node is to take multiple input records and create a single output record containing all or some of the input fields. This is a useful operation when you want to merge data from different sources, such as internal customer data and purchased demographic data. You can merge data in the following ways. * Merge by Order concatenates corresponding records from all sources in the order of input until the smallest data source is exhausted. It is important if using this option that you have sorted your data using a Sort node. * Merge using a Key field, such as Customer ID, to specify how to match records from one data source with records from the other(s). Several types of joins are possible, including inner join, full outer join, partial outer join, and anti-join. * Merge by Condition means that you can specify a condition to be satisfied for the merge to take place. You can specify the condition directly in the node, or build the condition using the Expression Builder. * Merge by Ranked Condition is a left sided outer join in which you specify a condition to be satisfied for the merge to take place and a ranking expression which sorts into order from low to high. Most often used to merge geospatial data, you can specify the condition directly in the node, or build the condition using the Expression Builder.",conceptual,0,train
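A keyed merge can also be sketched in a flow script. The snippet below assumes the standard SPSS Modeler Python scripting API; the mergenode property names and values (method, key_fields, join) and the node IDs are assumptions, so verify them against the mergenode scripting properties before use.
# Hypothetical sketch: merge two sources on a key field from a flow script.
# Property names and values are assumptions; check the mergenode properties reference.
stream = modeler.script.stream()
customers = stream.findByID('customer_import_node_id')        # placeholder node IDs
demographics = stream.findByID('demographics_import_node_id')

merge = stream.createAt('merge', 'Merge on Customer ID', 400, 100)
merge.setPropertyValue('method', 'Keys')
merge.setPropertyValue('key_fields', ['Customer ID'])
merge.setPropertyValue('join', 'Inner')

stream.link(customers, merge)
stream.link(demographics, merge)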
AF2AC67B66D3A2DB0D4F2AF2D6743F903F1385D7,https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/install-cust-lib.html?context=cdpaas&locale=en,Installing custom libraries through notebooks,"Installing custom libraries through notebooks
Installing custom libraries through notebooks The preferred way of installing additional Python libraries to use in a notebook is to customize the software configuration of the environment runtime associated with the notebook. You can add the conda or PyPI packages through a customization template when you customize the environment template. See [Customizing environment templates](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/customize-envs.html). However, if you want to install packages from somewhere else or packages you created on your local machine, for example, you can install and import the packages through the notebook. To install packages other than conda or PyPI packages through your notebook: 1. Add the package to your project storage by clicking the Upload asset to project icon, and then browsing to the package file or dragging it into your notebook sidebar. 2. Add a project token to the notebook by clicking More > Insert project token from the notebook action bar. The code that is generated by this action initializes the variable project, which is required to access the library you uploaded to object storage. Example of an inserted project token: 
# @hidden_cell
# The project token is an authorization token that is used to access project resources
# like data sources and connections, and is used by platform APIs.
from project_lib import Project
project = Project(project_id='7c7a9455-1916-4677-a2a9-a61a75942f58', project_access_token='p-9a4c487075063e610471d6816e286e8d0d222141')
pc = project.project_context
If you don't have a token, you need to create one. See [Adding a project token](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/token.html). 3. Install the library: 
# Fetch the library file, for example the tar.gz or whatever installable distribution you created
with open(""xxx-0.1.tar.gz"", ""wb"") as f:
    f.write(project.get_file(""xxx-0.1.tar.gz"").read())
# Install the library
!pip install xxx-0.1.tar.gz
4. Now you can import the library: import xxx Parent topic:[Libraries and scripts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/libraries.html)",how-to,1,train
9A83A33ABB4C6A12A7457D3711C2511EB3982B2C,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_function_ref_string.html?context=cdpaas&locale=en,String functions (SPSS Modeler),"String functions (SPSS Modeler)
String functions With CLEM, you can run operations to compare strings, create strings, or access characters. In CLEM, a string is any sequence of characters between matching double quotation marks (""string quotes""). Characters (CHAR) can be any single alphanumeric character. They're declared in CLEM expressions using single back quotes in the form of , such as z , A , or 2 . Characters that are out-of-bounds or negative indices to a string will result in undefined behavior. Note: Comparisons between strings that do and do not use SQL pushback may generate different results where trailing spaces exist. CLEM string functions Table 1. CLEM string functions Function Result Description allbutfirst(N, STRING) String Returns a string, which is STRING with the first N characters removed. allbutlast(N, STRING) String Returns a string, which is STRING with the last characters removed. alphabefore(STRING1, STRING2) Boolean Used to check the alphabetical ordering of strings. Returns true if STRING1 precedes STRING2. count_substring(STRING, SUBSTRING) Integer Returns the number of times the specified substring occurs within the string. For example, count_substring(""foooo.txt"", ""oo"") returns 3. endstring(LENGTH, STRING) String Extracts the last N characters from the specified string. If the string length is less than or equal to the specified length, then it is unchanged. hasendstring(STRING, SUBSTRING) Integer This function is the same as isendstring(SUBSTRING, STRING). hasmidstring(STRING, SUBSTRING) Integer This function is the same as ismidstring(SUBSTRING, STRING) (embedded substring). hasstartstring(STRING, SUBSTRING) Integer This function is the same as isstartstring(SUBSTRING, STRING). hassubstring(STRING, N, SUBSTRING) Integer This function is the same as issubstring(SUBSTRING, N, STRING), where N defaults to 1. hassubstring(STRING, SUBSTRING) Integer This function is the same as issubstring(SUBSTRING, 1, STRING), where N defaults to 1. isalphacode(CHAR) Boolean Returns a value of true if CHAR is a character in the specified string (often a field name) whose character code is a letter. Otherwise, this function returns a value of 0. For example, isalphacode(produce_num(1)). isendstring(SUBSTRING, STRING) Integer If the string STRING ends with the substring SUBSTRING, then this function returns the integer subscript of SUBSTRING in STRING. Otherwise, this function returns a value of 0. islowercode(CHAR) Boolean Returns a value of true if CHAR is a lowercase letter character for the specified string (often a field name). Otherwise, this function returns a value of 0. For example, both () and islowercode(country_name(2)) are valid expressions. ismidstring(SUBSTRING, STRING) Integer If SUBSTRING is a substring of STRING but does not start on the first character of STRING or end on the last, then this function returns the subscript at which the substring starts. Otherwise, this function returns a value of 0. isnumbercode(CHAR) Boolean Returns a value of true if CHAR for the specified string (often a field name) is a character whose character code is a digit. Otherwise, this function returns a value of 0. For example, isnumbercode(product_id(2)). isstartstring(SUBSTRING, STRING) Integer If the string STRING starts with the substring SUBSTRING, then this function returns the subscript 1. Otherwise, this function returns a value of 0. issubstring(SUBSTRING, N, STRING) Integer Searches the string STRING, starting from its Nth character, for a substring equal to the string SUBSTRING. 
If found, this function returns the integer subscript at which the matching substring begins. Otherwise, this function returns a value of 0. If N is not given, this function defaults to 1. issubstring(SUBSTRING, STRING) Integer Searches the string STRING. If found, this function returns the integer subscript at which the matching substring begins. Otherwise, this function returns a value of 0. issubstring_count(SUBSTRING, N, STRING) Integer Returns the index of the Nth occurrence of SUBSTRING within the specified STRING. If there are fewer than N occurrences of SUBSTRING, 0 is returned. issubstring_lim(SUBSTRING, N, STARTLIM, ENDLIM, STRING) Integer This function is the same as issubstring, but the match is constrained to start on STARTLIM and to end on ENDLIM. The STARTLIM or ENDLIM constraints may be disabled by supplying a value of false for either argument—for example, issubstring_lim(SUBSTRING, N, false, false, STRING) is the same as issubstring. isuppercode(CHAR) Boolean Returns a value of true if CHAR is an uppercase letter character. Otherwise, this function returns a value of 0. For example, both () and isuppercode(country_name(2)) are valid expressions. last(STRING) String Returns the last character CHAR of STRING (which must be at least one character long). length(STRING) Integer Returns the length of the string STRING (that is, the number of characters in it). locchar(CHAR, N, STRING) Integer Used to identify the location of characters in symbolic fields. The function searches the string STRING for the character CHAR, starting the search at the Nth character of STRING. This function returns a value indicating the location (starting at N) where the character is found. If the character is not found, this function returns a value of 0. If the function has an invalid offset (N) (for example, an offset that is beyond the",conceptual,0,train
422554C1DCEBABC93CB859B4A896908DA48A540D,https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-setup-wos.html?context=cdpaas&locale=en,Setting up watsonx.governance,"Setting up watsonx.governance
Setting up watsonx.governance You can set up watsonx.governance to monitor model assets in your IBM watsonx projects or deployment spaces. To set up watsonx.governance, you can manage users and roles for your organization to control access to your projects or deployment spaces. To set up watsonx.governance, complete the following tasks: * [Creating access policies](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-setup-wos.html?context=cdpaas&locale=en#wos-access-policies) * [Managing users and roles](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-setup-wos.html?context=cdpaas&locale=en#wos-users-wx) Creating access policies You can complete the following steps to invite users to an IBM Cloud account that has a watsonx.governance instance installed and assign service access. Required roles: Users must have the Reader, Writer, or higher IBM Cloud IAM Platform roles for service access. Users that are assigned the Writer role or higher can access information across projects and deployment spaces in watsonx.governance. 1. From the IBM Cloud homepage, click Manage > Access (IAM). 2. From the IAM dashboard, click Users and select Invite user. 3. Complete the following fields: * How do you want to assign access?: Access policy. * Which service do you want to assign access to?: watsonx.governance and click Next. * How do you want to scope the access?: Select the scope of access for users and click Next. * If you select Specific resources, select an attribute type and specify a value for each condition that you add. * If you select Service instance in the Attribute type list, specify your instance in the Value field. 4. If you have multiple instances, you must find the data mart ID to specify the instance that you want to assign users access to. You can use one of the following methods to find the data mart ID: * On the Insights dashboard, click a model deployment tile and go to Actions > View model information to find the data mart ID. * On the Insights dashboard, click the navigation menu on a model deployment tile and select Configure monitors. Then, go to the Endpoints tab and find the data mart ID in the Integration details section of the Model information tab. 5. Select the Reader role in the Service access list. 6. Assign access to users. * If you are assigning access to new users, click Add, and then click Invite in the Access summary pane. * If you are assigning access to existing users, click Add, and then click Assign in the Access summary pane. watsonx.governance users and roles You can assign roles to watsonx.governance users to collaborate on model evaluations in [projects](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/collaborate.html#add-collaborators) and [deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/collaborator-permissions-wml.html#adding-collaborators). The following table lists permissions for roles that you can assign for access to evaluations. The Operator and Viewer roles are equivalent. Table 1. Operations by role The first row of the table describes separate roles that you can choose from when creating a user. Each column provides a checkmark in the role category for the capability associated with that role. 
Operations Admin role Editor role Viewer/Operator role Evaluation ✔ ✔ View evaluation result ✔ ✔ ✔ Configure monitoring condition ✔ ✔ View monitoring condition ✔ ✔ ✔ Upload training data CSV file in model risk management ✔ ✔ Parent topic:[Setting up the platform for administrators](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/setup-platform.html)",how-to,1,train
3F3162BCD9976ED764717AA7004D9A755648B465,https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-build.html?context=cdpaas&locale=en,Building an AutoAI model,"Building an AutoAI model
Building an AutoAI model AutoAI automatically prepares data, applies algorithms, and builds model pipelines that are best suited for your data and use case. Learn how to generate the model pipelines that you can save as machine learning models. Follow these steps to upload data and have AutoAI create the best model for your data and use case. 1. [Collect your input data](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-build.html?context=cdpaas&locale=entrain-data) 2. [Open the AutoAI tool](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-build.html?context=cdpaas&locale=enopen-autoai) 3. [Specify details of your model and training data and start AutoAI](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-build.html?context=cdpaas&locale=enmodel-details) 4. [View the results](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-build.html?context=cdpaas&locale=enview-results) Collect your input data Collect and prepare your training data. For details on allowable data sources, see [AutoAI overview](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-overview.html). Note:If you are creating an experiment with a single training data source, you have the option of using a second data source specifically as testing, or holdout, data for validating the pipelines. Open the AutoAI tool For your convenience, your AutoAI model creation uses the default storage that is associated with your project to store your data and to save model results. 1. Open your project. 2. Click the Assets tab. 3. Click New asset > Build machine learning models automatically. Note: After you create an AutoAI asset it displays on the Assets page for your project in the AutoAI experiments section, so you can return to it. Specify details of your experiment 1. Specify a name and description for your experiment. 2. Select a machine learning service instance and click Create. 3. Choose data from your project or upload it from your file system or from the asset browser, then press Continue. Click the preview icon to review your data. (Optional) Add a second file as holdout data for testing the trained pipelines. 4. Choose the Column to predict for the data you want the experiment to predict. * Based on analyzing a subset of the data set, AutoAI selects a default model type: binary classification, multiclass classification, or regression. Binary is selected if the target column has two possible values. Multiclass has a discrete set of 3 or more values. Regression has a continuous numeric variable in the target column. You can optionally override this selection. Note: The limit on values to classify is 200. Creating a classification experiment with many unique values in the prediction column is resource-intensive and affects the experiment's performance and training time. To maintain the quality of the experiment: - AutoAI chooses a default metric for optimizing. For example, the default metric for a binary classification model is Accuracy. - By default, 10% of the training data is held out to test the performance of the model. 5. (Optional): Click Experiment settings to view or customize options for your AutoAI run. For details on experiment settings, see [Configuring a classification or regression experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-config-class.html). 6. Click Run Experiment to begin model pipeline creation. An infographic shows you the creation of pipelines for your data. 
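If you prefer to start the same kind of experiment from a notebook rather than the UI, the sketch below uses the AutoAI experiment class from the ibm-watson-machine-learning Python client. Every identifier shown (the AutoAI class, PredictionType and Metrics values, optimizer parameters, credential keys, and the column name) is an assumption based on that SDK and should be verified against its documentation for your version; treat it as a sketch, not the documented procedure.

# Rough sketch: starting an AutoAI experiment from a notebook with the
# ibm-watson-machine-learning client. All names and parameters below are
# assumptions; verify them against the SDK documentation for your release.
from ibm_watson_machine_learning.experiment import AutoAI

wml_credentials = {
    "url": "https://us-south.ml.cloud.ibm.com",  # region endpoint (placeholder)
    "apikey": "<your-ibm-cloud-api-key>",        # placeholder
}

experiment = AutoAI(wml_credentials, project_id="<your-project-id>")

optimizer = experiment.optimizer(
    name="My AutoAI experiment",
    prediction_type=AutoAI.PredictionType.BINARY,  # binary classification
    prediction_column="response",                  # hypothetical column to predict
    scoring=AutoAI.Metrics.ACCURACY_SCORE,         # accuracy is the default metric for binary
    holdout_size=0.1,                              # 10% of training data held out
)

# Call optimizer.fit(...) with a DataConnection that points to your training
# data asset (see the SDK documentation for the exact parameter names), then
# inspect optimizer.summary() for the ranked pipelines.

Whether you start the run from the UI or from a notebook, pipeline generation proceeds in the same way.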
The duration of this phase depends on the size of your data set. A notification message informs you if the processing time will be brief or require more time. You can work in other parts of the product while the pipelines build.  Hover over nodes in the infographic to explore the factors that pipelines share and their unique properties. You can see the factors that pipelines share and the properties that make a pipeline unique. For a guide to the data in the infographic, click the Legend tab in the information panel. Or, to see a different view of the pipeline creation, click the Experiment details tab of the notification pane, then click Switch views to view the progress map. In either view, click a pipeline node to view the associated pipeline in the leaderboard. View the results When the pipeline generation process completes, you can view the ranked model candidates and evaluate them before you save a pipeline as a model. Next steps * [Build an experiment from sample data](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build.html) * [Configuring experiment settings](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-config-class.html) * [Configure a text analysis experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-text-analysis.html) Video disclaimer: Some minor steps and graphical elements in this video might differ from your platform. * Watch this video to see how to build a binary classification model This video provides a visual method to learn the concepts and tasks in this documentation. Video disclaimer: Some minor steps and graphical elements in this video might differ from your platform. * Watch this video to see how to build a multiclass classification model This video provides a visual method to learn the concepts and tasks in this documentation. Parent topic:[AutoAI overview](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-overview.html)",how-to,1,train
ED7AFE85422B1DB8EAED166840D275DDDB63CAFA,https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/account-settings.html?context=cdpaas&locale=en,Managing your account settings,"Managing your account settings
Managing your account settings From the Account window you can view information about your IBM Cloud account and set the Resource scope, Credentials for connections, and Regional project storage settings for IBM watsonx. * [View account information](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/account-settings.html?context=cdpaas&locale=enview-account-information) * [Set the scope for resources](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/account-settings.html?context=cdpaas&locale=enset-the-scope-for-resources) * [Set the type of credentials for connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/account-settings.html?context=cdpaas&locale=enset-the-credentials-for-connections) * [Set the login session expiration](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/account-settings.html?context=cdpaas&locale=enset-expiration) You must be the IBM Cloud account owner or administrator to manage the account settings. View account information You can see the account name, ID and type. 1. Select Administration > Account and billing > Account to open the account window. 2. If you need to manage your Cloud account, click the Manage in IBM Cloud link to navigate to the Account page on IBM Cloud. Set the scope for resources By default, account users see resources based on membership. You can restrict the resource scope to the current account to control access. By setting the resource scope to the current account, users cannot access resources outside of their account, regardless of membership. The scope applies to projects, catalogs, and spaces. To restrict resources to current account: 1. Select Administration > Account and billing > Account to open the account settings window. 2. Set Resource scope to On. Access is updated immediately to be restricted to the current account. Set the credentials for connections The credentials for connections setting determines the type of credentials users must specify when creating a new connection. This setting applies only when new connections are created; existing connections are not affected. Either personal or shared credentials You can allow users the ability to specify personal or shared credentials when creating a new connection. Radio buttons will appear on the new connection form, allowing the user to select personal or shared. To allow the credential type to be chosen on the new connection form: 1. Select Administration > Account and billing > Account to open the account settings window. 2. Set both Shared credentials and Personal credentials to Enabled. Personal credentials When personal credentials are specified, each user enters their own credentials when creating a new connection or when using a connection to access data. To require personal credentials for all new connections: 1. Select Administration > Account and billing > Account to open the account settings window. 2. Set Personal credentials to Enabled. 3. Set Shared credentials to Disabled. Shared credentials With shared credentials, the credentials that were entered by the creator of the connection are made available to all other users when accessing data with the connection. To require shared credentials for all new connections: 1. Select Administration > Account and billing > Account to open the account settings window. 2. Set Shared credentials to Enabled. 3. Set Personal credentials to Disabled. Set the login session expiration Active and inactive session durations are managed through IBM Cloud. 
You are notified 5 minutes before a session expires. Unless your service supports autosaving, your work is not saved when your session expires. You can change the default durations for active and inactive sessions. For more information on required permissions and duration limits, see [Setting limits for login sessions](https://cloud.ibm.com/docs/account?topic=account-iam-work-sessions&interface=ui). To change the default durations: 1. From the watsonx navigation menu, select Administration > Access (IAM). 2. In IBM Cloud, select Manage > Access (IAM) > Settings. 3. Select the Login session tab. 4. For each expiration time that you want to change, edit the time and click Save. The inactivity duration cannot be longer than the maximum session duration, and the token lifetime cannot be longer than the inactivity duration. IBM Cloud prevents you from entering an invalid combination of settings. Learn more * [Managing all projects in the account](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/admin-manage-projects.html) * [Adding connections to projects](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html) Parent topic:[Managing IBM watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_console.html)",how-to,1,train
C4773EF8B0935E8DE084C1A6285EFE11E2A5F80A,https://dataplatform.cloud.ibm.com/docs/content/wsd/tutorials/tut_autoflag_models.html?context=cdpaas&locale=en,Generating and comparing models (SPSS Modeler),"Generating and comparing models (SPSS Modeler)
Generating and comparing models 1. Attach an Auto Classifier node, open its BUILD OPTIONS properties, and select Overall accuracy as the metric used to rank models. 2. Set the Number of models to use to 3. This means that the three best models will be built when you run the node. Figure 1. Auto Classifier node, build options  Under the EXPERT options, you can choose from many different modeling algorithms. 3. Deselect the Discriminant and SVM model types. (These models take longer to train on this data, so deselecting them will speed up the example. If you don't mind waiting, feel free to leave them selected.) Because you set Number of models to use to 3 under BUILD OPTIONS, the node will calculate the accuracy of the remaining algorithms and generate a single model nugget containing the three most accurate. Figure 2. Auto Classifier node, expert options  4. Under the ENSEMBLE options, select Confidence-weighted voting for the ensemble method. This determines how a single aggregated score is produced for each record. With simple voting, if two out of three models predict yes, then yes wins by a vote of 2 to 1. In the case of confidence-weighted voting, the votes are weighted based on the confidence value for each prediction. Thus, if one model predicts no with a higher confidence than the two yes predictions combined, then no wins. Figure 3. Auto Classifier node, ensemble options  5. Run the flow. After a few minutes, the generated model nugget is built and placed on the canvas, and results are added to the Outputs panel. You can view the model nugget, or save or deploy it in a number of other ways. 6. Right-click the model nugget and select View Model. You'll see details about each of the models created during the run. (In a real situation, in which hundreds of models may be created on a large dataset, this could take many hours.) If you want to explore any of the individual models further, you can click their links in the Estimator column to drill down and browse the individual model results. Figure 4. Auto Classifier results  By default, models are sorted based on overall accuracy, because this was the measure you selected in the Auto Classifier node properties. The XGBoost Tree model ranks best by this measure, but the C5.0 and C&RT models are nearly as accurate. Based on these results, you decide to use all three of these most accurate models. By combining predictions from multiple models, limitations in individual models may be avoided, resulting in a higher overall accuracy. 7. In the USE column, select the three models. Return to the flow. 8. Attach an Analysis output node after the model nugget. Right-click the Analysis node and choose Run to run the flow. Figure 5. Auto Classifier example flow  The aggregated score generated by the ensembled model is shown in a field named $XF-response. When measured against the training data, the predicted value matches the actual response (as recorded in the original response field) with an overall accuracy of 92.77%. While not quite as accurate as the best of the three individual models in this case (92.82% for C5.0), the difference is too small to be meaningful. In general terms, an ensembled model will typically be more likely to perform well when applied to datasets other than the training data. Figure 6. Analysis of the three ensembled models ",how-to,1,train
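If you script flows with the SPSS Modeler Python (Jython) API instead of working on the canvas, the same build options can be set programmatically. The node type name and property names below ("autoclassifier", ranking_measure, number_of_models) are assumptions based on the autoclassifiernode scripting properties, so verify them in the scripting reference before relying on this sketch.

# Sketch: configuring an Auto Classifier node by script instead of on the canvas.
# Node type and property names are assumptions; check the autoclassifiernode
# properties in the SPSS Modeler scripting reference.
import modeler.api

stream = modeler.script.stream()

auto_node = stream.createAt("autoclassifier", "Auto Classifier", 400, 100)
auto_node.setPropertyValue("ranking_measure", "Accuracy")  # rank candidate models by overall accuracy
auto_node.setPropertyValue("number_of_models", 3)          # keep the three best models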
A304B9E82543C150236ECAD30F1594E1B832B8B1,https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/attribute-inference-attack.html?context=cdpaas&locale=en,{{ document.title.text }},"{{ document.title.text }}
Attribute inference attack Risks associated with inputInferencePrivacyAmplified Description An attribute inference attack is used to detect whether certain sensitive features can be inferred about individuals who participated in training a model. These attacks occur when an adversary has some prior knowledge about the training data and uses that knowledge to infer the sensitive data. Why is attribute inference attack a concern for foundation models? With a successful attack, the attacker can gain valuable information such as sensitive personal information or intellectual property. Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)",conceptual,0,train
A7845D8C3E419CEDD06E8C447ADF41E6E3D860C8,https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fed-lea.html?context=cdpaas&locale=en,IBM Federated Learning,"IBM Federated Learning
IBM Federated Learning Federated Learning provides the tools for multiple remote parties to collaboratively train a single machine learning model without sharing data. Each party trains a local model with a private data set. Only the local model is sent to the aggregator to improve the quality of the global model that benefits all parties. Data format Any data format, including but not limited to CSV files, JSON files, and PostgreSQL databases. How Federated Learning works Watch this overview video to learn the basic concepts and elements of a Federated Learning experiment. Learn how you can apply the tools to enhance your company's analytics. This video provides a visual method to learn the concepts and tasks in this documentation. An example of using Federated Learning is an aviation alliance that wants to model how a global pandemic impacts airline delays. Each participating party in the federation can use their data to train a common model without ever moving or sharing their data. They can do so whether the data is in application silos or in any other scenario where regulatory or pragmatic considerations prevent users from sharing data. The resulting model benefits each member of the alliance with improved business insights while lowering risk from data migration and privacy issues. As the following graphic illustrates, parties can be geographically distributed and run on different platforms.  Why use IBM Federated Learning IBM Federated Learning has a wide range of applications across many enterprise industries. Federated Learning: * Enables large volumes of data from many sites to be collected, cleaned, and used for training at enterprise scale, without migration. * Accommodates differences in data format, quality, and constraints. * Complies with data privacy and security requirements while training models with different data sources. 
Learn more * [Federated Learning tutorials and samples](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-demo.html) * [Federated Learning Tensorflow tutorial for UI](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-tf2-tutorial.html) * [Federated Learning Tensorflow samples for API](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-tf2-samples.html) * [Federated Learning XGBoost tutorial for UI](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-xg-tutorial.html) * [Federated Learning XGBoost sample for API](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-xg-samples.html) * [Federated Learning homomorphic encryption sample for API](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-fhe-sample.html) * [Getting started](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-get-started.html) * [Terminology](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-term.html) * [Federated Learning architecture](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-arch.html) * [Frameworks, fusion methods, and Python versions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-frames.html) * [Hyperparameter definitions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-param.html) * [Creating a Federated Learning experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-start.html) * [Set up your system](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-setup.html) * [Creating the initial model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-models.html) * [Create the data handler](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-handler.html) * [Starting the aggregator (Admin)](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-agg.html) * [Connecting to the aggregator (Party)](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-conn.html) * [Monitoring and saving the model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-mon.html) * [Applying encryption](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-homo.html) * [Limitations and troubleshooting](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-troubleshoot.html) Parent topic:[Analyzing data and building models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/data-science.html)",conceptual,0,train
0F58073F0D5B237C3241126E98851A9E0C912792,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/ta_upload_tap-template_TMnode.html?context=cdpaas&locale=en,Uploading a text analysis package (TAP) in a Text Mining node (SPSS Modeler),"Uploading a text analysis package (TAP) in a Text Mining node (SPSS Modeler)
Uploading a custom asset in a Text Mining node You can add a custom text analysis package (TAP) or template directly in the Text Mining node. When your SPSS Modeler flow runs, it uses your custom asset. Procedure 1. If you want to download a TAP, save it locally. 1. Click Text analysis package while in the Text Analytics Workbench. 2. Enter details about the asset, and then click Submit. The text analysis package is saved locally as a .tap file. 2. If you want to download a template, see [Linguistic resources](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/tmwb-linguistic-resource.html#tmwb-templates-intro__DownloadAssetsSteps). 3. Add the TAP or template file to another Text Mining node. 1. In the Text Mining node, click Select resources. 2. Click the Text analysis package or Resource template tab, depending on the asset you want. 3. Click Import, and then browse to or drag and drop your TAP or template. 4. Enter details about the asset, and then click Add. You can now see the uploaded TAP in the list of resources. It is also saved to your project as a project asset. 5. Click OK.",how-to,1,train
E0E5646EA00A170BB595E9E0BBCCB69F702FFC7C,https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/data-science.html?context=cdpaas&locale=en,Analyzing data and working with models,"Analyzing data and working with models
Analyzing data and working with models You can analyze data and build or work with models in projects. The methods that you choose for preparing data or working with models help you determine which tools best fit your needs. Each tool has a specific, primary task. Some tools have capabilities for multiple types of tasks. You can choose a tool based on how much automation you want: * Code editor tools: Use to write code in Python or R, optionally with Spark. * Graphical builder tools: Use menus and drag-and-drop functionality on a builder to visually program. * Automated builder tools: Use to configure automated tasks that require limited user input. Tool to tasks Tool Primary task Tool type Work with data Work with models [Data Refinery](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/refining_data.html) Prepare and visualize data Graphical builder ✓ [Visualizations](https://dataplatform.cloud.ibm.com/docs/content/dataview/idh_idc_cg_help_main.html) Build graphs to visualize data Graphical builder ✓ [Prompt Lab](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-lab.html) Experiment with foundation models and prompts Graphical builder ✓ [Tuning Studio](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-studio.html) Tune a foundation model to return output in a certain style or format Graphical builder ✓ ✓ [Jupyter notebook editor](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-editor.html) Work with data and models in Python or R notebooks Code editor ✓ ✓ [Federated learning](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fed-lea.html) Train models on distributed data Code editor ✓ [RStudio IDE](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/rstudio-overview.html) Work with data and models in R Code editor ✓ ✓ [SPSS Modeler](https://dataplatform.cloud.ibm.com/docs/content/wsd/spss-modeler.html) Build models as a visual flow Graphical builder ✓ ✓ [Decision Optimization](https://dataplatform.cloud.ibm.com/docs/content/DO/DOWS-Cloud_home.html) Solve optimization problems Graphical builder, code editor ✓ ✓ [AutoAI tool](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-overview.html) Build machine learning models automatically Automated builder ✓ ✓ [Pipelines](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-overview.html) Automate model lifecycle Graphical builder ✓ ✓ [Synthetic Data Generator](https://dataplatform.cloud.ibm.com/docs/content/wsj/synthetic/synthetic_data_overview_sd.html) Generate synthetic tabular data Graphical builder ✓ ✓ Learn more * [Foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-overview.html) * [Notebooks and scripts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebooks-and-scripts.html)",conceptual,0,train
BC8E9394D23A5320BFCE0EBE7F208CA18CB6B65C,https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/bias.html?context=cdpaas&locale=en,{{ document.title.text }},"{{ document.title.text }}
Data bias Risks associated with inputTraining and tuning phaseFairnessAmplified Description Historical, representational, and societal biases present in the data used to train and fine tune the model can adversely affect model behavior. Why is data bias a concern for foundation models? Training an AI system on data with bias, such as historical or representational bias, could lead to biased or skewed outputs that may unfairly represent or otherwise discriminate against certain groups or individuals. In addition to negative societal impacts, business entities could face legal consequences or reputational harms from biased model outcomes. Example Healthcare Bias Research on reinforcing disparities in medicine highlights that using data and AI to transform how people receive healthcare is only as strong as the data behind it, meaning use of training data with poor minority representation can lead to growing health inequalities. Sources: [Science, September 2022](https://www.science.org/doi/10.1126/science.abo2788) [Forbes, December 2022](https://www.forbes.com/sites/adigaskell/2022/12/02/minority-patients-often-left-behind-by-health-ai/?sh=31d28a225b41) Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)",conceptual,0,train
4C83F9C21CA1E70077C8004BD26FE5FB0FC947EB,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/append.html?context=cdpaas&locale=en,Append node (SPSS Modeler),"Append node (SPSS Modeler)
Append node You can use Append nodes to concatenate sets of records. Unlike Merge nodes, which join records from different sources together, Append nodes read and pass downstream all of the records from one source until there are no more. Then the records from the next source are read using the same data structure (number of records, number of fields, and so on) as the first, or primary, input. When the primary source has more fields than another input source, the system null string ($null$) will be used for any incomplete values. Append nodes are useful for combining datasets with similar structures but different data. For example, you might have transaction data stored in different files for different time periods, such as a sales data file for March and a separate one for April. Assuming that they have the same structure (the same fields in the same order), the Append node will join them together into one large file, which you can then analyze. Note: To append files, the field measurement levels must be similar. For example, a Nominal field cannot be appended with a field whose measurement level is Continuous.",conceptual,0,train
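For readers who think in code, the behavior of the Append node is similar to concatenating data frames. The pandas sketch below is only an analogue for illustration, not SPSS Modeler code; missing fields in a secondary input end up as NaN, which plays the same role as $null$.

# Illustrative pandas analogue of appending record sets (not SPSS Modeler code).
import pandas as pd

march = pd.DataFrame({"id": [1, 2], "sales": [100, 250], "region": ["EU", "NA"]})
april = pd.DataFrame({"id": [3, 4], "sales": [300, 125]})  # no "region" field

# Records from the primary input are passed first, then the next input;
# fields that are missing from an input are filled with NaN (akin to $null$).
combined = pd.concat([march, april], ignore_index=True)
print(combined)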
9933646421686556C9AE8459EE2E51ED9DAB1C33,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/nodes_cache_disable.html?context=cdpaas&locale=en,Disabling or caching nodes in a flow (SPSS Modeler),"Disabling or caching nodes in a flow (SPSS Modeler)
Disabling or caching nodes in a flow You can disable a node so it's ignored when the flow runs. And you can set up a cache on a node.",how-to,1,train
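Caching can also be controlled from the SPSS Modeler scripting API. The isCacheEnabled() call matches the node filter example later in this document; the findByType() lookup and the setCacheEnabled() setter shown below are assumptions to confirm in the scripting reference.

# Sketch: setting up a cache on a node with SPSS Modeler scripting.
# findByType() and setCacheEnabled() are assumptions; isCacheEnabled() matches
# the usage in the node filter example later in this document.
import modeler.api

stream = modeler.script.stream()
node = stream.findByType("type", None)  # for example, the first Type node in the flow

node.setCacheEnabled(True)              # set up a cache on the node
print(node.isCacheEnabled())            # True once the cache is enabled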
AA213D259727545C26401AD5CFB4916B6EFBD18D,https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/profile.html?context=cdpaas&locale=en,Profiles of data assets,"Profiles of data assets
Profiles of data assets An asset profile includes generated information and statistics about the asset content. You can see the profile on an asset's Profile page. * [Requirements and restrictions](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/profile.html?context=cdpaas&locale=enprereqs) * [Creating a profile](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/profile.html?context=cdpaas&locale=encreate-profile) * [Profile information](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/profile.html?context=cdpaas&locale=enprofile-results) Requirements and restrictions You can view the profile of assets under the following circumstances. Required permissions : To view a data asset's Profile page, you can have any role in a project or catalog. : To create or update a profile, you must have the Admin or Editor role in the project or catalog. Workspaces : You can view the asset profile in projects. Types of assets : These types of assets have a profile: * Data assets from relational or nonrelational databases from a connection to the data sources, except Cloudant * Data assets from partitioned data sets, where a partitioned data set consists of multiple files and is represented by a single folder uploaded from the local file system or from file-based connections to the data sources * Data assets from files uploaded from the local file system or from file-based connections to the data sources, with these formats: * CSV * XLS, XLSM, XLSX (Only the first sheet in a workbook is profiled.) * TSV * Avro * Parquet However, structured data files are not profiled when data assets do not explicitly reference them, such as in these circumstances: * The files are within a connected folder asset. Files that are accessible from a connected folder asset are not treated as assets and are not profiled. * The files are within an archive file. The archive file is referenced by the data asset and the compressed files are not profiled. Creating a profile In projects, you can create a profile for a data asset by clicking Create profile. You can update an existing profile when the data changes. Profiling results When you create or update an asset profile, the columns in the data asset are analyzed. By default, the profile is created based on the first 5,000 rows of data. If the data asset has more than 250 columns, the profile is created based on the first 1,000 rows of data. The profile of a data asset shows information about each column in the data set: * When was the profile created or last updated. * How many columns and rows were analyzed. * The data types for columns and data types distribution. * The data formats for columns and formats distribution. * The percentage of matching, mismatching, or missing data for each column. * The frequency distribution for all values identified in a column. * Statistics about the data for each column: * The number of distinct values indicates how many different values exist in the sampled data for the column. * The percentage of unique values indicates the percentage of distinct values that appear only once in the column. * The minimum, maximum, or mean, and sometimes the standard deviation in that column. Depending on a column’s data format, the statistics vary slightly. For example, statistics for a column of data type integer have minimum, maximum, and mean values and a standard deviation value while statistics for a column of data type string have minimum length, maximum length, and mean length values. 
Parent topic:[Asset types and properties](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/assets.html)",conceptual,0,train
9CAD0018634FF820D32F3FE714194D4BD42C5386,https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/personal-information-in-data.html?context=cdpaas&locale=en,{{ document.title.text }},"{{ document.title.text }}
Personal information in data Risks associated with inputTraining and tuning phasePrivacyTraditional Description Inclusion or presence of personal identifiable information (PII) and sensitive personal information (SPI) in the data used for training or fine tuning the model might result in unwanted disclosure of that information. Why is personal information in data a concern for foundation models? If not properly developed to protect sensitive data, the model might expose personal information in the generated output. Additionally, personal or sensitive data must be reviewed and handled with respect to privacy laws and regulations, as business entities could face fines, reputational harms, and other legal consequences if found in violation. Example Training on Private Information According to the article, Google and its parent company Alphabet were accused in a class-action lawsuit of misusing vast amount of personal information and copyrighted material taken from what is described as hundreds of millions of internet users to train its commercial AI products, which includes Bard, its conversational generative artificial intelligence chatbot. This follows similar lawsuits filed against Meta Platforms, Microsoft, and OpenAI over their alleged misuse of personal data. Sources: [Reuters, July 2023](https://www.reuters.com/legal/litigation/google-hit-with-class-action-lawsuit-over-ai-data-scraping-2023-07-11/) [J.L. v. Alphabet Inc., July 2023](https://fingfx.thomsonreuters.com/gfx/legaldocs/myvmodloqvr/GOOGLE%20AI%20LAWSUIT%20complaint.pdf) Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)",conceptual,0,train
9FD50170823EF108E2CF4EBF083B0085845FC3BE,https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/setup-account.html?context=cdpaas&locale=en,Setting up the IBM Cloud account,"Setting up the IBM Cloud account
Setting up the IBM Cloud account As an IBM Cloud account owner or administrator, you sign up for IBM watsonx.ai and set up payment for services in the IBM Cloud account. These steps describe the typical tasks for an IBM Cloud account owner to set up the account for an organization: 1. [Sign up for watsonx.ai](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/setup-account.html?context=cdpaas&locale=en#sign-up). 2. [Update your IBM Cloud account](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/setup-account.html?context=cdpaas&locale=en#paid-account) to add or update billing information. 3. [(Optional) Configure restrictions for the account](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/setup-account.html?context=cdpaas&locale=en#restrict). Step 1: Sign up for watsonx.ai To sign up for watsonx.ai: 1. Go to [Try IBM watsonx.ai](https://dataplatform.cloud.ibm.com/registration/stepone?context=wx) or [Try watsonx.governance](https://dataplatform.cloud.ibm.com/registration/stepone?context=wx&apps=data_science_experience,watson_machine_learning,cos,aiopenscale&uucid=0cf8ca3f38ace12f&utm_content=WXGWW&regions=us-south). 2. Select the service region. 3. Agree to the terms, Data Use Policy, and Cookie Use. 4. Log in with your IBMid (usually an email address) if you have an existing IBM Cloud account. If you don't have an IBM Cloud account, click Create an IBM Cloud account to create a new account. You must enter a credit card to create a Pay-As-You-Go IBM Cloud account. However, you are not charged until you buy paid service plans. Lite plans for Watson Studio and Watson Machine Learning are automatically provisioned for you. Step 2: Update your IBM Cloud account You can skip this step if your IBM Cloud account has billing information with a Pay-As-You-Go or a subscription plan. You must update your IBM Cloud account in the following circumstances: * You have a Trial account from signing up for watsonx. * You have a Trial account that you [registered through an academic institution](https://ibm.biz/academic). * You have a [Lite account](https://cloud.ibm.com/docs/account?topic=account-accountsliteaccount) that you created before 25 October 2021. * You want to change a Pay-As-You-Go plan to a subscription plan. Setting up a Pay-As-You-Go account You set up a Pay-As-You-Go account by adding a credit card number and billing information. You pay only for billable services that you use, with no long-term contracts or commitments. You can provision paid plans for all services in the IBM Cloud services catalog, including plans in the watsonx services catalog. To set up a Pay-As-You-Go account: 1. From the watsonx navigation menu, select Administration > Account and billing > Account. 2. Click Manage in IBM Cloud. 3. Log in to IBM Cloud. 4. Select Account settings. 5. Click Add credit card and enter your credit card and billing information. 6. Click Create account to submit your information. After your payment information is processed, your account is upgraded and you receive a monthly invoice for billable resource usage or instance fees. Setting up a subscription account With subscriptions, you commit to a minimum spending amount for a certain period and receive a discount on the overall cost. Subscriptions are limited to service plans in the watsonx catalog. Subscription credits are activated using a unique code that you receive by email. To activate the subscription, you apply the subscription code to an account. 
Be careful when selecting the account, because after you apply the subscription to an account, you can't undo it. To set up a watsonx subscription: 1. From the watsonx navigation menu, select Administration > Account and billing > Upgrade service plans. 2. On the Upgrade service plans page, click Contact sales. Complete and submit the form to communicate with IBM Sales that you want to set up a subscription account for watsonx. An associate from IBM Sales will contact you to set up a subscription. When your subscription is ready, you receive an email from IBM containing a unique subscription code. To apply the subscription code to your account: 1. Locate the unique code from the email that you received from IBM. 2. Log in to your IBM Cloud account, and select Manage > Account from the header. Be sure to select the correct account. 3. Select Account settings and locate the Subscription and feature codes section on the page. 4. Click Apply code. 5. Copy and paste the code from the email into the Apply a code field and click Apply. Your subscription account is active and you can upgrade your watsonx.ai services. Step 3: (Optional) Configure restrictions for the account Complete these optional tasks to secure your account: * Restrict the scope of resources that are available in IBM watsonx to the current account. See [Set the scope of resources](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/account-settings.htmlset-the-scope-for-resources). * Restrict access to specific IP addresses to protect the IBM Cloud account from unwanted access from unknown IP addresses. See [Allow specific IP addresses](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-network.htmlallow-specific-ip-addresses). Next steps * [Add users to the account](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/setup-addl-users.html) * [Add more security constraints](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-account.html) Parent topic:[Setting up the platform for administrators](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/setup-platform.html)",how-to,1,train
5A328CF6319859F041C48974E44046BCFCEA3B87,https://dataplatform.cloud.ibm.com/docs/content/wsd/tutorials/tut_screening_flow.html?context=cdpaas&locale=en,Building the flow (SPSS Modeler),"Building the flow (SPSS Modeler)
Building the flow Figure 1. Feature Selection example flow  1. Add a Data Asset node that points to customer_dbase.csv. 2. Add a Type node after the Data Asset node. 3. Double-click the Type node to open its properties, and change the role for response_01 to Target. Change the role to None for the other response fields (response_02 and response_03) and for the customer ID (custid) field. Leave the role set to Input for all other fields. Figure 2. Adding a Type node  4. Click Read Values and then click Save. 5. Add a Feature Selection modeling node after the Type node. The node properties define the rules and criteria used for screening or disqualifying fields. Figure 3. Adding a Feature Selection node  6. Run the flow to generate the Feature Selection model nugget. 7. To look at the results, right-click the model nugget and choose View Model. The results show the fields found to be useful in the prediction, ranked by importance. By examining these fields, you can decide which ones to use in subsequent modeling sessions. 8. To compare results with and without feature selection, add two CHAID modeling nodes to the flow: one connected to the Type node and the other connected to the Feature Selection model nugget, as shown in the example flow at the beginning of this section. 9. Double-click each CHAID node to open its properties. Under Objectives, make sure that Build new model and Create a standard model are selected. For the maximum tree depth, select Custom and set it to 5.",how-to,1,train
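The same flow can also be assembled with the SPSS Modeler Python (Jython) scripting API. The node type names, the keyed direction property, and the role values below are assumptions based on the type node and Feature Selection scripting properties; verify them in the scripting reference before adapting this sketch.

# Sketch: scripting the Type and Feature Selection portion of the flow.
# Node type names and property names are assumptions; check the SPSS Modeler
# scripting reference for the exact identifiers.
import modeler.api

stream = modeler.script.stream()

typenode = stream.createAt("type", "Type", 200, 100)
fsnode = stream.createAt("featureselection", "Feature Selection", 400, 100)
stream.link(typenode, fsnode)

# Set field roles: response_01 is the target; the other response fields and the
# customer ID are excluded from modeling.
typenode.setKeyedPropertyValue("direction", "response_01", "Target")
for field in ["response_02", "response_03", "custid"]:
    typenode.setKeyedPropertyValue("direction", field, "None")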
41AD83283A66CC3C467F70EA638B9C1C6681A160,https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/compare-platforms.html?context=cdpaas&locale=en,Comparison of IBM watsonx as a Service and Cloud Pak for Data as a Service,"Comparison of IBM watsonx as a Service and Cloud Pak for Data as a Service
Comparison of IBM watsonx as a Service and Cloud Pak for Data as a Service IBM watsonx as a Service and Cloud Pak for Data as a Service have similar platform functionality and are compatible in many ways. The watsonx platform provides a subset of the tools and services that are provided by Cloud Pak for Data as a Service. However, watsonx.ai and watsonx.governance on watsonx provide more functionality than the same set of tools on Cloud Pak for Data as a Service. * [Common platform functionality](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/compare-platforms.html?context=cdpaas&locale=enplatform) * [Services on each platform](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/compare-platforms.html?context=cdpaas&locale=enservices) * [Data science and MLOps tools](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/compare-platforms.html?context=cdpaas&locale=entools) * [AI governance tools](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/compare-platforms.html?context=cdpaas&locale=engov) Common platform functionality The following platform functionality is common to both watsonx and Cloud Pak for Data as a Service: * Security, compliance, and isolation * Compute resources for running workloads * Global search for assets across the platform * The Platform assets catalog for sharing connections across the platform * Role-based user management within workspaces * A services catalog for adding services * View compute usage from the Administration menu * Connections to remote data sources * Connection credentials that are personal or shared * Sample assets and projects If you are signed up for both watsonx and Cloud Pak for Data as a Service, you can switch between platforms. See [Switching your platform](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/platform-switcher.html). Services on each platform Both platforms provide services for data science and MLOps and AI governance use cases: * Watson Studio * Watson Machine Learning * Watson OpenScale However, the services for watsonx.ai and watsonx.governance on the watsonx platform include features for working with foundation models and generative AI that are not included in these services on Cloud Pak for Data as a Service. Cloud Pak for Data as a Service also provides services for these use cases: * Data integration * Data governance Data science and AI tools Both platforms provide a common set of data science and AI tools. However, on watsonx, you can also perform foundation model inferencing with the Prompt Lab tool or with a Python library in notebooks. Foundation model inferencing and the Prompt Lab tool are not available on Cloud Pak for Data as a Service. The following table shows which data science and AI tools are available on each platform. Tools on watsonx and Cloud Pak for Data Tool On watsonx? On Cloud Pak for Data? 
[Prompt Lab](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-lab.html) ✓ No [Synthetic Data Generator](https://dataplatform.cloud.ibm.com/docs/content/wsj/synthetic/synthetic_data_overview_sd.html) ✓ No [Data Refinery](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/refining_data.html) ✓ ✓ [Visualizations](https://dataplatform.cloud.ibm.com/docs/content/dataview/idh_idc_cg_help_main.html) ✓ ✓ [Jupyter notebooks](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-editor.html) ✓ ✓ [Federated learning](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fed-lea.html) ✓ ✓ [RStudio IDE](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/rstudio-overview.html) ✓ ✓ [SPSS Modeler](https://dataplatform.cloud.ibm.com/docs/content/wsd/spss-modeler.html) ✓ ✓ [Decision Optimization](https://dataplatform.cloud.ibm.com/docs/content/DO/DOWS-Cloud_home.html) ✓ ✓ [AutoAI tool](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-overview.html) ✓ ✓ [Watson Pipelines](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-overview.html) ✓ ✓ If you are signed up for Cloud Pak for Data as a Service, you can access watsonx and you can move your projects and deployment spaces that meet the requirements from one platform to the other. See [Switching the platform for a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/switch-platform.html) and [Switching the platform for a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/switch-platform-space.html). AI governance tools Both platforms contain the same AI use case inventory and evaluation tools. However, on watsonx, you can track and evaluate generative AI assets and dimensions. See [Comparison of governance solutions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-comparison.html). Learn more * [Switching your platform](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/platform-switcher.html) * [Switching the platform for a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/switch-platform.html) * [Switching the platform for a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/switch-platform-space.html) * [Overview of IBM watsonx as a Service](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/overview-wx.html) * [Foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-overview.html) Parent topic:[Overview of watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/overview-wx.html)",conceptual,0,train
F290D0C61B4A664E303DE559BBC559015FD375F9,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/jython/clementine/python_api_search.html?context=cdpaas&locale=en,Example: Searching for nodes using a custom filter,"Example: Searching for nodes using a custom filter
Example: Searching for nodes using a custom filter The section [Finding nodes](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/jython/clementine/python_node_find.html#python_node_find) includes an example of searching for a node in a flow using the type name of the node as the search criterion. In some situations, a more generic search is required and this can be accomplished by using the NodeFilter class and the flow findAll() method. This type of search involves the following two steps: 1. Creating a new class that extends NodeFilter and that implements a custom version of the accept() method. 2. Calling the flow findAll() method with an instance of this new class. This returns all nodes that meet the criteria defined in the accept() method. The following example shows how to search for nodes in a flow that have the node cache enabled. The returned list of nodes can be used to either flush or disable the caches of these nodes.

import modeler.api

class CacheFilter(modeler.api.NodeFilter):
    # A node filter that accepts nodes with caching enabled
    def accept(this, node):
        return node.isCacheEnabled()

cachingnodes = modeler.script.stream().findAll(CacheFilter(), False)",how-to,1,train
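As the text notes, the returned list can then be used to act on those caches. A minimal follow-on sketch, assuming that setCacheEnabled() and flushCache() exist as counterparts of the isCacheEnabled() getter used above (verify both in the scripting reference for your release):

# Follow-on sketch: act on the nodes returned by the filter search above.
# setCacheEnabled() and flushCache() are assumed counterparts of isCacheEnabled();
# confirm them in the SPSS Modeler scripting reference before relying on them.
for node in cachingnodes:
    node.flushCache()            # write out and clear the node's current cache
    node.setCacheEnabled(False)  # or turn caching off for the node entirely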
B508DA024EE4722C3919C4D1118CF0410713A9C5,https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/add-data-project.html?context=cdpaas&locale=en,Adding data to a project,"Adding data to a project
Adding data to a project After you create a project, the next step is to add data assets to it so that you can work with data. All the collaborators in the project are automatically authorized to access the data in the project. Assets of different types can have the same name. However, you can't add multiple assets of the same type with the same name. You can use the following methods to add data assets to projects: Method When to use [Add local files](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/add-data-project.html?context=cdpaas&locale=en#files) You have data in CSV or similar files on your local system. [Add Samples data sets](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/add-data-project.html?context=cdpaas&locale=en#community) You want to use sample data sets. [Add database connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html) You need to connect to a remote data source. [Add data from a connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html) You need one or more tables or files from a remote data source. [Add connected folder assets from IBM Cloud Object Storage](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/folder-asset.html) You need a folder in IBM Cloud Object Storage that contains a dynamic set of files, such as a news feed. [Convert files in project storage to assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/add-data-project.html?context=cdpaas&locale=en#os) You want to convert files that you created in the project into data assets. Add local files You can add a file from your local system as a data asset in a project. Required permissions : You must have the Editor or Admin role in the project. Restrictions : - The file cannot be empty. : - The file name can't exceed 255 characters. : - The maximum size for files that you can load with the UI is 5 GB. You can [load larger files to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/store-large-objs-in-cos.html) with APIs. Important: You can't add executable files to a project. All other types of files that you add to a project are not checked for malicious code. You must ensure that your files do not contain malware or other types of malicious software that other collaborators might download. To add data files to a project: 1. From your project's Assets page, click the Upload asset to project icon. You can also click the same icon from within a notebook or canvas. 2. In the pane that opens, browse for the files or drag them onto the pane. You must stay on the page until the load is complete. The files are saved in the object storage that is associated with your project and are listed as data assets on the Assets page of your project. 
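If you prefer to add data from code, you can also save files to the project from a notebook and register them as data assets. The sketch below uses the project-lib library for Python; the project ID, access token, file name, and the exact save_data() parameters are assumptions to verify against the project-lib documentation for your environment.

# Sketch: saving a DataFrame from a notebook as a project data asset.
# The ID, token, and file name are placeholders, and the save_data() parameters
# shown are assumptions; check the project-lib documentation before use.
import pandas as pd
from project_lib import Project

project = Project(project_id="<your-project-id>",
                  project_access_token="<your-access-token>")

df = pd.DataFrame({"customer": ["A", "B"], "churned": [0, 1]})

# set_project_asset=True registers the saved file as a data asset in the project.
project.save_data("churn_sample.csv", df.to_csv(index=False),
                  set_project_asset=True, overwrite=True)

Assets added this way appear on the Assets page alongside files that you upload through the UI.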
When you click the data asset name, you can see this information about data assets from files: * The asset name and description * The tags for the asset * The name of the person who created the asset * The size of the data * The date when the asset was added to the project * The date when the asset was last modified * A [preview](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/previews.html) of the data, for CSV, Avro, Parquet, TSV, Microsoft Excel, PDF, text, JSON, and image files * A [profile](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/profile.html) of the data, for CSV, Avro, Parquet, TSV, and Microsoft Excel files You can update the contents of a data asset from a file by adding a file with the same name and format to the project and then choosing to replace the existing data asset. You can remove the data asset by choosing the Delete option from the action menu next to the asset name. Choose the Prepare data option to refine the data with [Data Refinery](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/refining_data.html). Add Samples data sets You can add data sets from Samples to your project: 1. In Samples, find the card for the data set that you want to add. 2. Click the Add to Project icon from the action bar, select the project, and click Add. This video provides a visual method to learn the concepts and tasks in this documentation. Convert files in project storage to assets The storage for the project contains the data assets that you uploaded to the project, but it can also contain other files. For example, you can save a DataFrame in a notebook in the project environment storage. You can convert files in project storage to assets. To convert files in project storage to assets: 1. From the Assets tab of your project, click Import asset. 2. Select Project files. 3. Select the data_asset folder. 4. Select the asset and click Import. Next steps * [Refine the data](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/refining_data.html) * [Analyze the data and work with models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/data-science.html) Learn more * [Downloading data assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/download.html) * [Publishing data assets to a catalog](https://dataplatform.cloud.ibm.com/docs/content/wsj/catalog/publish-asset-project.html) Parent topic:[Preparing data](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/get-data.html)",how-to,1,train
E6A30655CBD3745ACBCBF18E79B4C3979CA6B35B,https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/manage-account.html?context=cdpaas&locale=en,Managing your IBM Cloud account,"Managing your IBM Cloud account
Managing your IBM Cloud account You can manage your IBM Cloud account to view billing and usage, manage account users, and manage services. Required permissions : You must be the IBM Cloud account owner or administrator. To manage your IBM Cloud account, choose Administration > Account and billing > Account > Manage in IBM Cloud from IBM watsonx. Then from the IBM Cloud console, choose an option from the Manage menu. * Account: See [Adding orgs and spaces](https://cloud.ibm.com/docs/account?topic=account-orgsspacesusersorgsspacesusers) and [Managing resource groups](https://cloud.ibm.com/docs/account?topic=account-rgs). * Billing and Usage: See [How you're charged](https://cloud.ibm.com/docs/billing-usage?topic=billing-usage-chargescharges). * Access (IAM): See [Inviting users](https://cloud.ibm.com/docs/account?topic=account-access-getstarted). Learn more * [Activity Tracker events](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/at-events.html) * [Manage your settings](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/personal-settings.html) * [Set up IBM watsonx for your organization](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/setup-platform.html) * [Manage users and access](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/setup-access.html) * [IBM Cloud SAML Federation Guide](https://www.ibm.com/cloud/blog/ibm-cloud-saml-federation-guide) * [Delete your IBM Cloud account](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_console.htmldeletecloud) * [Check the status of IBM Cloud services](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/service-status.html) * [Configure private service endpoints](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/endpoints-vrf.html) Parent topic:[Administration](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/administer-accounts.html)",how-to,1,train
F6CC81E55C6AAD12849A56837F14538576F5A42C,https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/confidential-data-disclosure.html?context=cdpaas&locale=en,{{ document.title.text }},"{{ document.title.text }}
Confidential data disclosure Risks associated with input. Training and tuning phase. Intellectual property. Traditional. Description Models might be trained or fine-tuned using confidential data or the company’s intellectual property, which could result in unwanted disclosure of that information. Why is confidential data disclosure a concern for foundation models? If not developed in accordance with data protection rules and regulations, the model might expose confidential information or IP in the generated output or through an adversarial attack. Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)",conceptual,0,train
D171FCF10D8A1699FD8AC67E44053BBF6405631C,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/tmwb_conceptstab.html?context=cdpaas&locale=en,The Concepts tab (SPSS Modeler),"The Concepts tab (SPSS Modeler)
The Concepts tab In the Text Analytics Workbench, you can use the Concepts tab to create and explore concepts as well as explore and tweak the extraction results. Concepts are the most basic level of extraction results available to use as building blocks, called descriptors, for your categories. Categories are a group of closely related ideas and patterns to which documents and records are assigned through a scoring process. Text mining is an iterative process in which extraction results are reviewed according to the context of the text data, fine-tuned to produce new results, and then reevaluated. Extraction results can be refined by modifying the linguistic resources. To simplify the process of fine-tuning your linguistic resources, you can perform common dictionary tasks directly from the Concepts tab. You can fine-tune other linguistic resources directly from the Resource editor tab. Figure 1. Concepts tab ",conceptual,0,train
A11374B50B49477362FA00BBB32A277776F7E8E2,https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-import-to-space.html?context=cdpaas&locale=en,Importing space and project assets into deployment spaces,"Importing space and project assets into deployment spaces
Importing space and project assets into deployment spaces You can import assets that you export from a deployment space or a project (either a project export or a Git archive) into a new or existing deployment space. This way, you can add assets or update existing assets (for example, replacing a model with its newer version) to use for your deployments. You can import a space or a project export file to [a new deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-import-to-space.html?context=cdpaas&locale=enimport-to-new) or an [existing deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-import-to-space.html?context=cdpaas&locale=enimport-to-existing) to populate the space with assets. Tip: The export file can come from a Git-enabled project and a Watson Studio project. To create the file to export, create a compressed file for the project that contains the assets to import. Then, follow the steps for importing the compressed file into a new or existing space. Importing a space or a project to a new deployment space To import a space or a project when you are creating a new deployment space: 1. Click New deployment space. 2. Enter the details for the space. For more information, see [Creating deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-create.html). 3. In the Upload space assets section, upload the exported compressed file that contains data assets and click Create. The assets from the exported file are added as space assets. Importing a space or a project to an existing deployment space To import a space or a project into an existing space: 1. From your deployment space, click the import and export space icon. From the list, select Import space. 2. Add your compressed file that contains assets from a Watson Studio project or deployment space. Tip: If the space that you are importing is encrypted, enter the password in the Password field. 3. After your asset is imported, click Done. The assets from the exported file are added as space assets. Resolving issues with asset duplication The importing mechanism compares assets that exist in your space with the assets that are being imported. If it encounters an asset with the same name and of the same type: * If the asset type supports revisions, the importing mechanism creates a new revision of the existing asset and fixes the new revision. * If the asset type does not support revisions, the importing mechanism fixes the existing asset. This table describes how import works to resolve cases where assets are duplicated between the import file and the existing space. Scenarios for importing duplicated assets Your space File being imported Result No assets with matching name or type One or more assets with matching name or type All assets are imported. If multiple assets in the import file have the same name, they are imported as duplicate assets in the target space. One asset with matching name or type One asset with matching name or type Matching asset is updated with new version. Other assets are imported normally. One asset with matching name or type More than one asset with matching name or type The first matching asset that is processed is imported as a new version for the existing asset in the space, extra assets with matching name are created as duplicates in the space. Other assets are imported normally. 
Multiple assets with matching name or type One or more assets with matching name or type Assets with matching names fail to import. Other assets are imported normally. Warning: Multiple assets of the same name in an existing space or multiple assets of the same name in an import file are not fully supported scenarios. The import works as described for the scenarios in the table, but you cannot use versioning capabilities specific to the import. Existing deployments get updated differently, depending on deployment type: * If a batch deployment was created by using the previous version of the asset, the next invocation of the batch deployment job will refer to the updated state of the asset. * If an online deployment was created by using the previous version of the asset, the next ""restart"" of the deployment refers to the updated state of the asset. Learn more * To learn about adding other types of assets to a space, refer to [Assets in deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets-all.html). * To learn about exporting assets from a deployment space, refer to [Exporting space assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-export.html). Parent topic:[Deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-spaces_local.html)",how-to,1,train
13D83AF5CCD616F312472FBAB4AC7D7A56D0F41D,https://dataplatform.cloud.ibm.com/docs/content/wsd/tutorials/tut_drug_analysis.html?context=cdpaas&locale=en,Using an Analysis node (SPSS Modeler),"Using an Analysis node (SPSS Modeler)
Using an Analysis node You can assess the accuracy of the model using an Analysis node. From the Palette, under Outputs, place an Analysis node on the canvas and attach it to the C5.0 model nugget. Then right-click the Analysis node and select Run. Figure 1. Analysis node The Analysis node output shows that with this artificial dataset, the model correctly predicted the choice of drug for every record in the dataset. With a real dataset you are unlikely to see 100% accuracy, but you can use the Analysis node to help determine whether the model is acceptably accurate for your particular application. Figure 2. Analysis node output ",how-to,1,train
F975B9964D088181CF34A1341083BC82053812D8,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/values_and_data_types.html?context=cdpaas&locale=en,Values and data types (SPSS Modeler),"Values and data types (SPSS Modeler)
Values and data types CLEM expressions are similar to formulas constructed from values, field names, operators, and functions. The simplest valid CLEM expression is a value or a field name. Examples of valid values are: 3 1.79 'banana' Examples of field names are: Product_ID '$P-NextField' where Product is the name of a field from a market basket data set, '$P-NextField' is the name of a parameter, and the value of the expression is the value of the named field. Typically, field names start with a letter and may also contain digits and underscores (_). You can use names that don't follow these rules if you place the name within quotation marks. CLEM values can be any of the following: * Strings (for example, ""c1"", ""Type 2"", ""a piece of free text"") * Integers (for example, 12, 0, –189) * Real numbers (for example, 12.34, 0.0, –0.0045) * Date/time fields (for example, 05/12/2002, 12/05/2002, 12/05/02) It's also possible to use the following elements: * Character codes (for example, a or 3) * Lists of items (for example, [1 2 3], ['Type 1' 'Type 2']) Character codes and lists don't usually occur as field values. Typically, they're used as arguments of CLEM functions.",conceptual,0,train
8892A757ECB2C4A02806A7B262712FF2E30CE044,https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/OPLmodels.html?context=cdpaas&locale=en,OPL models,"OPL models
OPL models You can build OPL models in the Decision Optimization experiment UI in watsonx.ai. In this section: * [Inputs and Outputs](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/OPLmodels.html?context=cdpaas&locale=entopic_oplmodels__section_oplIO) * [Engine settings](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/OPLmodels.html?context=cdpaas&locale=entopic_oplmodels__engsettings) To create an OPL model in the experiment UI, select OPL in the model selection window. You can also import OPL models from a file or import a scenario .zip file that contains the OPL model and the data. If you import from a file or scenario .zip file, the data must be in .csv format. However, you can import other file formats that you have as project assets into the experiment UI. You can also import data sets including connected data into your project from the model builder in the [Prepare data view](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.htmlModelBuilderInterface__section_preparedata). For more information about the OPL language and engine parameters, see: * [OPL language reference manual](https://www.ibm.com/docs/en/SSSA5P_22.1.1/ilog.odms.ide.help/OPL_Studio/opllangref/topics/opl_langref_modeling_language.html) * [OPL Keywords](https://www.ibm.com/docs/en/SSSA5P_22.1.1/ilog.odms.ide.help/OPL_Studio/opllang_quickref/topics/opl_keywords_top.html) * [A list of CPLEX parameters](https://www.ibm.com/docs/en/SSSA5P_22.1.1/ilog.odms.cplex.help/CPLEX/Parameters/topics/introListTopical.html) * [A list of CPO parameters](https://www.ibm.com/docs/en/SSSA5P_22.1.1/ilog.odms.cpo.help/CP_Optimizer/Parameters/topics/paramcpoptimizer.html)",how-to,1,train
2BB452B4C9E3458BC02A9D392961E9C643E402DE,https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/feature-matrix.html?context=cdpaas&locale=en,Feature differences between watsonx deployments,"Feature differences between watsonx deployments
Feature differences between watsonx deployments IBM watsonx as a Service and watsonx on Cloud Pak for Data software have some differences in features and implementation. IBM watsonx as a Service is a set of IBM Cloud services. Watsonx services on Cloud Pak for Data 4.8 are offered as software that you must install and maintain. Services that are available on both deployments also have differences in features on IBM watsonx as a Service compared to watsonx software on Cloud Pak for Data 4.8. * [Platform differences](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/feature-matrix.html?context=cdpaas&locale=enplatform) * [Common features across services](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/feature-matrix.html?context=cdpaas&locale=encommon) * [Watson Studio](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/feature-matrix.html?context=cdpaas&locale=enws) * [Watson Machine Learning](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/feature-matrix.html?context=cdpaas&locale=enwml) * [Watson OpenScale](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/feature-matrix.html?context=cdpaas&locale=enwos) Platform differences IBM watsonx as a Service and watsonx software on Cloud Pak for Data share a common code base; however, they differ in the following key ways: Platform differences Features As a service Software Software, hardware, and installation IBM watsonx is fully managed by IBM on IBM Cloud. Software updates are automatic. Scaling of compute resources and storage is automatic. You sign up at [https://dataplatform.cloud.ibm.com](https://dataplatform.cloud.ibm.com). You provide and maintain hardware. You install, maintain, and upgrade the software. See [Software requirements](https://www.ibm.com/docs/SSQNUZ_4.8.x/sys-reqs/software-reqs.html). Storage You provision an IBM Cloud Object Storage service instance to provide storage. See [IBM Cloud Object Storage](https://dataplatform.cloud.ibm.com/docs/content/svc-welcome/cloud-object-storage.html). You provide persistent storage on a Red Hat OpenShift cluster. See [Storage requirements](https://www.ibm.com/docs/SSQNUZ_4.8.x/sys-reqs/storage-requirements.html). Compute resources for running workloads Users choose the appropriate runtime for their jobs. Compute usage is billed based on the rate for the runtime environment and the duration of the job. See [Monitor account resource usage](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/monitor-resources.html). You set up the number of Red Hat OpenShift nodes with the appropriate number of vCPUs. See [Hardware requirements](https://www.ibm.com/docs/SSQNUZ_4.8.x/sys-reqs/hardware-reqs.html) and [Monitoring the platform](https://www.ibm.com/docs/SSQNUZ_4.8.x/cpd/admin/platform-management.html). Cost You buy each service that you need at the appropriate plan level. Many services bill for compute and other resource consumption. See each service page in the [IBM Cloud catalog](https://cloud.ibm.com/catalog) or in the services catalog on IBM watsonx, by selecting Administration > Services > Services catalog from the navigation menu. You buy a software license based on the services that you need. See [Cloud Pak for Data](https://cloud.ibm.com/catalog/content/ibm-cp-datacore-6825cc5d-dbf8-4ba2-ad98-690e6f221701-global). Security, compliance, and isolation The data security, network security, security standards compliance, and isolation of IBM watsonx are managed by IBM Cloud. 
You can set up extra security and encryption options. See [Security of IBM watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-overview.html). Red Hat OpenShift Container Platform provides basic security features. Cloud Pak for Data is assessed for various Privacy and Compliance regulations and provides features that you can use in preparation for various privacy and compliance assessments. You are responsible for additional security features, encryption, and network isolation. See [Security considerations](https://www.ibm.com/docs/SSQNUZ_4.8.x/cpd/plan/security.html). Available services Most watsonx services are available in both deployment environments.
See [Services for IBM watsonx](https://dataplatform.cloud.ibm.com/docs/content/svc-welcome/cloud-services.html). Includes many other services for other components and solutions. See [Services for Cloud Pak for Data 4.8](https://www.ibm.com/docs/SSQNUZ_4.8.x/svc-nav/head/services.html). User management You add users and user groups and manage their account roles and permissions with IBM Cloud Identity and Access Management. See [Add users to the account](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/setup-addl-users.html).
You can also set up SAML federation on IBM Cloud. See [IBM Cloud docs: How IBM Cloud IAM works](https://cloud.ibm.com/docs/account?topic=account-iamoverview). You can add users and create user groups from the Administration menu. You can use the Identity and Access Management Service or use your existing SAML SSO or LDAP provider for identity and password management. You can create dynamic, attribute-based user groups. See [User management](https://www.ibm.com/docs/SSQNUZ_4.8.x/cpd/admin/users.html). Common features across services The following features that are provided with the platform are effectively the same for services on IBM watsonx as a Service and watsonx software on Cloud Pak for Data 4.8: * Global search for assets across the platform * The Platform assets catalog for sharing connections across the platform * Role-based user management within collaborative workspaces across the platform * Common infrastructure for assets and workspaces * A services catalog for adding services * View compute usage from the Administration menu The following table describes differences in features across services between IBM watsonx as a Service and watsonx software on Cloud Pak for Data 4.8: Differences in common features across services Feature As a service Software Manage all projects Users with the Manage projects permission from the IAM service access Manager role for the IBM Cloud Pak for Data service can join any project with the Admin role and then manage or delete the project. Users with the Manage projects permission can join any project with the Admin role and then manage or delete the project. Connections to remote data sources Most supported data sources are common to both deployment environments.
See [Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html). See [Supported data sources](https://www.ibm.com/docs/SSQNUZ_4.8.x/cpd/access/data-sources.html). Connection credentials that are personal or shared Connections in projects and catalogs can require personal credentials or allow shared credentials. Shared credentials can be disabled at the account level. Platform connections can require personal credentials or allow shared credentials. Shared credentials can be disabled at the platform",how-to,1,train
44BA508199B214448CB22B7658127E16DD4E7ABF,https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html?context=cdpaas&locale=en,Connecting to data behind a firewall,"Connecting to data behind a firewall
Connecting to data behind a firewall To connect to a database that is not accessible via the internet (for example, behind a firewall), you must set up a secure communication path between your on-premises data source and IBM Cloud. Use a Satellite Connector, a Satellite location, or a Secure Gateway instance for the secure communication path. * [Set up a Satellite Connector](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html?context=cdpaas&locale=ensatctr): Satellite Connector is the replacement for Secure Gateway. Satellite Connector uses lightweight Docker-based communication to create secure and auditable communications from your on-prem, cloud, or Edge environment back to IBM Cloud. Your infrastructure needs only a container host, such as Docker. For more information, see [Satellite Connector overview](https://cloud.ibm.com/docs/satellite?topic=satellite-understand-connectors&interface=ui). * [Set up a Satellite location](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html?context=cdpaas&locale=ensl): A Satellite location provides the same secure communications to IBM Cloud as a Satellite Connector but adds high availability access by default plus the ability to communicate from IBM Cloud to your on-prem location. It supports managed cloud services on premises, such as Managed OpenShift and Managed Databases, supported remotely by IBM Cloud PaaS SRE resources. A Satellite location requires at least three x86 hosts in your infrastructure for the HA control plane. A Satellite location is a superset of the capabilities of the Satellite Connector. If you need only client data communication, set up a Satellite Connector. * [Configure a Secure Gateway](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html?context=cdpaas&locale=engateway): Secure Gateway is IBM Cloud's former solution for communication between on-prem or third-party cloud environments and IBM Cloud. Secure Gateway is now [deprecated by IBM Cloud](https://cloud.ibm.com/docs/SecureGateway?topic=SecureGateway-dep-overview). For a new connection, set up a Satellite Connector instead. Set up a Satellite Connector To set up a Satellite Connector, you create the Connector in your IBM Cloud account. Next, you configure agents to run in your local Docker host platform on-premises. Finally, you create the endpoints for your data source that IBM watsonx uses to access the data source from IBM Cloud. Requirements for a Satellite Connector Required permissions : You must have Administrator access to the Satellite service in IAM access policies to do the steps in IBM Cloud. Required host systems : Minimum one x86 Docker host in your own infrastructure to run the Connector container. See [Minimum requirements](https://cloud.ibm.com/docs/satellite?topic=satellite-understand-connectors&interface=uimin-requirements). Setting up a Satellite Connector Note: Not all connections support Satellite. If the connection supports Satellite, the IBM Cloud Satellite tile will be available in the Private Connectivity section of the Create connection form. Alternatively, you can filter all the connections that support Satellite in the New connection page. 1. Access the Create connector page in IBM Cloud from one of these places: * Log in to the [Connectors](https://cloud.ibm.com/satellite/connectors) page in IBM Cloud. * In IBM watsonx: 1. Go to the project page. Click the Assets tab. 2. Click New asset > Connect to a data source. 3. Select the IBM watsonx connector. 4. 
In the Create connection page, scroll down to the Private connectivity section, and click the IBM Cloud Satellite tile. 5. Click Configure Satellite and then log in to IBM Cloud. 6. Click Create connector. 2. Follow the steps for [Creating a Connector](https://cloud.ibm.com/docs/satellite?topic=satellite-create-connector). 3. Set up the Connector agent containers in your local Docker host environment. For high availability, use three agents per connector that are deployed on separate Docker hosts. It is best to use a separate infrastructure and network connectivity for each agent. Follow the steps for [Running a Connector agent](https://cloud.ibm.com/docs/satellite?topic=satellite-run-agent-locally). The agents will appear in the Active Agents list for the connector. 4. In IBM watsonx, go back to the Create connection page. In the Private connectivity section, click Reload, and then select the Satellite Connector that you created. In the [Satellite Connectors dashboard](https://cloud.ibm.com/satellite/connectors) in IBM Cloud, for each connection that you create, a user endpoint is added in the Satellite Connector. Set up a Satellite location Use the Satellite location feature of IBM Cloud Satellite to securely connect to a Satellite location that you configure for your IBM Cloud account. Requirements for a Satellite location Required permissions : You must be the Admin in the IBM Cloud account to do the tasks in IBM Cloud. Required host systems : You need at least three computers or virtual machines in your own infrastructure to act as Satellite hosts. Confirm the [host system requirements](https://cloud.ibm.com/docs/satellite?topic=satellite-host-reqs). (The IBM Cloud docs instructions for additional features such as Red Hat OpenShift clusters and Kubernetes are not required for a connection in IBM watsonx.) Note: Not all connections support Satellite. If the connection supports Satellite, the IBM Cloud Satellite tile will be available in the Private Connectivity section of the Create connection form. Alternatively, you can filter all the connections that support Satellite in the New connection page. Setting up a Satellite location Configure the Satellite location in IBM Cloud. * [Task 1: Create a Satellite location](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html?context=cdpaas&locale=entask1) * [Task 2: Attach the hosts to the Satellite location](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html?context=cdpaas&locale=entask2) * [Task 3: Assign the hosts to",how-to,1,train
B2CA734AE719BA79AB4B5F877CF044F47090FAEC,https://dataplatform.cloud.ibm.com/docs/content/wsd/tutorials/tut_bandwidth.html?context=cdpaas&locale=en,Forecasting bandwidth utilization (SPSS Modeler),"Forecasting bandwidth utilization (SPSS Modeler)
Forecasting bandwidth utilization An analyst for a national broadband provider is required to produce forecasts of user subscriptions to predict utilization of bandwidth. Forecasts are needed for each of the local markets that make up the national subscriber base. You'll use time series modeling to produce forecasts for the next three months for a number of local markets.",conceptual,0,train
83A5FC83AA65717942A3437217F2114454552144,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/jython/clementine/python_node_create.html?context=cdpaas&locale=en,Creating nodes,"Creating nodes
Creating nodes Flows provide a number of ways to create nodes. These methods are summarized in the following table. Methods for creating nodes Table 1. Methods for creating nodes Method Return type Description s.create(nodeType, name) Node Creates a node of the specified type and adds it to the specified flow. s.createAt(nodeType, name, x, y) Node Creates a node of the specified type and adds it to the specified flow at the specified location. If either x < 0 or y < 0, the location is not set. s.createModelApplier(modelOutput, name) Node Creates a model applier node that's derived from the supplied model output object. For example, you can use the following script to create a new Type node in a flow: stream = modeler.script.stream() # Create a new Type node node = stream.create(""type"", ""My Type"")",how-to,1,train
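The following slightly longer sketch builds on the methods in Table 1. It is illustrative only and assumes it runs as a flow script inside the SPSS Modeler scripting environment, where modeler.script.stream() is available; the node type name "variablefile", the node names, and the canvas coordinates are example values.

```python
# Illustrative sketch: create nodes with the methods from Table 1.
# Assumes this runs as a flow script inside SPSS Modeler.
stream = modeler.script.stream()

# Create a node and let the flow place it automatically
typenode = stream.create("type", "My Type")

# Create an import node at an explicit canvas position (x=96, y=96)
importnode = stream.createAt("variablefile", "My Import", 96, 96)

print "Created", importnode.getName(), "and", typenode.getName()
```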
5FE3DE32EFB5DEA4094DCA22CBC77E24D23EF67A,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_function_ref_blanksnulls.html?context=cdpaas&locale=en,Functions handling blanks and null values (SPSS Modeler),"Functions handling blanks and null values (SPSS Modeler)
Functions handling blanks and null values Using CLEM, you can specify that certain values in a field are to be regarded as ""blanks,"" or missing values. The following functions work with blanks. CLEM blank and null value functions Table 1. CLEM blank and null value functions Function Result Description @BLANK(FIELD) Boolean Returns true for all records whose values are blank according to the blank-handling rules set in an upstream Type node or Import node (Types tab). @LAST_NON_BLANK(FIELD) Any Returns the last value for FIELD that was not blank, as defined in an upstream Import or Type node. If there are no nonblank values for FIELD in the records read so far, $null$ is returned. Note that blank values, also called user-missing values, can be defined separately for each field. @NULL(FIELD) Boolean Returns true if the value of FIELD is the system-missing $null$. Returns false for all other values, including user-defined blanks. If you want to check for both, use @BLANK(FIELD) and @NULL(FIELD). undef Any Used generally in CLEM to enter a $null$ value—for example, to fill blank values with nulls in the Filler node. Blank fields may be ""filled in"" with the Filler node. In both Filler and Derive nodes (multiple mode only), the special CLEM function @FIELD refers to the current field(s) being examined.",conceptual,0,train
4292721E4524AC59FA259576D39665946DB8849D,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/rfmanalysisnodeslots.html?context=cdpaas&locale=en,rfmanalysisnode properties,"rfmanalysisnode properties
rfmanalysisnode properties The Recency, Frequency, Monetary (RFM) Analysis node enables you to determine quantitatively which customers are likely to be the best ones by examining how recently they last purchased from you (recency), how often they purchased (frequency), and how much they spent over all transactions (monetary). rfmanalysisnode properties Table 1. rfmanalysisnode properties rfmanalysisnode properties Data type Property description recency field Specify the recency field. This may be a date, timestamp, or simple number. frequency field Specify the frequency field. monetary field Specify the monetary field. recency_bins integer Specify the number of recency bins to be generated. recency_weight number Specify the weighting to be applied to recency data. The default is 100. frequency_bins integer Specify the number of frequency bins to be generated. frequency_weight number Specify the weighting to be applied to frequency data. The default is 10. monetary_bins integer Specify the number of monetary bins to be generated. monetary_weight number Specify the weighting to be applied to monetary data. The default is 1. tied_values_method Next Current Specify which bin tied value data is to be put in. recalculate_bins Always IfNecessary add_outliers flag Available only if recalculate_bins is set to IfNecessary. If set, records that lie below the lower bin will be added to the lower bin, and records above the highest bin will be added to the highest bin. binned_field Recency Frequency Monetary recency_thresholds value value Available only if recalculate_bins is set to Always. Specify the upper and lower thresholds for the recency bins. The upper threshold of one bin is used as the lower threshold of the next—for example, [10 30 60] would define two bins, the first bin with upper and lower thresholds of 10 and 30, with the second bin thresholds of 30 and 60. frequency_thresholds value value Available only if recalculate_bins is set to Always. monetary_thresholds value value Available only if recalculate_bins is set to Always.",conceptual,0,train
67241853FC2471C6C0719F1B98E40625358B2E19,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/languageidentifier.html?context=cdpaas&locale=en,Reading in source text (SPSS Modeler),"Reading in source text (SPSS Modeler)
Reading in source text You can use the Language Identifier node to identify the natural language of a text field within your source data. The output of this node is a derived field that contains the detected language code.  Data for text mining can be in any of the standard formats that are used by SPSS Modeler flows, including databases or other ""rectangular"" formats that represent data in rows and columns. * To read in text from any of the standard data formats used by SPSS Modeler flows, such as a database with one or more text fields for customer comments, you can use an Import node. * When you're processing large amounts of data, which might include text in several different languages, use the Language Identifier node to identify the language used in a specific field.",conceptual,0,train
E3526B694C68C40EDC206E216B454E63B83F3EBA,https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/previews.html?context=cdpaas&locale=en,Asset contents or previews,"Asset contents or previews
Asset contents or previews In projects and other workspaces, you can see a preview of data assets that contain relational data. * [Requirements and restrictions](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/previews.html?context=cdpaas&locale=enrequire) * [Previews of data assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/previews.html?context=cdpaas&locale=endata) Requirements and restrictions You can view the contents or previews of assets under the following conditions and restrictions. * Workspaces You can view the preview or contents of assets in these workspaces: * Projects * Deployment spaces * Types of assets * Data assets from files * Connected data assets * Models * Notebooks * Required permissions To see the asset contents or preview, these conditions must be true: * You have any collaborator role in the workspace. * Restrictions for data assets Additional requirements apply to connected data assets and data assets from files. See [Requirements for data assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/previews.html?context=cdpaas&locale=enrequire-data). Previews are not available for data assets that were added as managed assets by using the [Watson Data API](https://cloud.ibm.com/apidocs/watson-data-apicreateattachmentnewv2). Previews of data assets The previews of data assets show a view of the data. You can see when the data in the preview was last fetched and refresh the preview data by clicking the refresh icon. * [Requirements for data assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/previews.html?context=cdpaas&locale=enrequire-data) * [Preview information for data assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/previews.html?context=cdpaas&locale=enpreview-info) * [File extensions and mime types of previewed files](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/previews.html?context=cdpaas&locale=enfiles) Requirements for data assets The additional requirements for viewing previews of data assets depend on whether the data is accessed through a connection or from a file. Connected data assets You can see previews of data assets that are accessed through a connection if all these conditions are true: * You have access to the data asset and its associated connection. See [Requirements and restrictions](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/previews.html?context=cdpaas&locale=enrequire). * The data asset contains structured data. Structured data resides in fixed fields within a record or file, for example, relational database data or spreadsheets. * You have credentials for the connection: * For connections with shared credentials, the username in the connection details has access to the object at the data source. * For connections with personal credentials, you must enter your personal credentials when you see a key icon (). This is a one-time step that permanently unlocks the connection for you. See [Adding connections to projects](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html). Data assets from files You can see previews of data assets from files if the following conditions are true: * You have access to the data asset. See [Requirements and restrictions](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/previews.html?context=cdpaas&locale=enrequire). * The file is stored in IBM Cloud Object Storage. 
For preview of text or image files from an IBM Cloud Object Storage connection to work, the connection credentials must include an access key and a secret key. If you’re using an existing Cloud Object Storage connection that doesn’t have these keys, edit the connection asset and add them. See [IBM Cloud Object Storage connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cos.html). * The file type is supported. See [File extensions and mime types of previewed files](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/previews.html?context=cdpaas&locale=enfiles). Preview information for data assets For structured data, the preview displays a limited number of rows and columns: * The number of rows in the preview is limited to 1,000. * The amount of data is limited to 800 KB. The more columns the data asset has, the fewer rows that appear in the preview. Previews show different information for different types of data assets and files. Structured data For structured data, the preview shows column names, data types, and a subset of columns and rows of data. The supported formats of structured data are: Relational data, CSV, TSV, Avro, partitioned data, and Parquet (projects). Assets from file-based connections like Apache Kafka and Apache Cassandra are not supported. Unstructured data Unstructured data files must be stored in IBM Cloud Object Storage to have previews. For these unstructured data files, the preview shows the whole document: Text, JSON, HTML, PDF, images, and Microsoft Excel documents. HTML files are supported in text format. Images stored in IBM Cloud Object Storage support JPG, JPEG, PNG, GIF, BMP, and BMP1. Microsoft Excel document previews show the first sheet. For connected folder assets, the preview shows the files and subfolders, which you can also preview. File extensions and mime types of previewed files These types of files that contain structured data have previews: Structured data files Extension Mime type AVRO CSV text/csv CSV1 application/csv JSON application/json PARQ TSV TXT text/plain XLSX application/vnd.openxmlformats-officedocument.spreadsheetml.sheet XLS application/vnd.ms-excel XLSM application/vnd.ms-excel.sheet.macroEnabled.12 These types of image files have previews: Image files Extension Mime type BMP image/bmp GIF image/gif JPG image/jpeg JPEG image/jpeg PNG image/png These types of document files have previews: Document files Extension Mime type HTML text/html PDF application/pdf TXT text/plain Learn more * [Searching for assets across the platform](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/search-assets.html) * [Profile](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/profile.html) * [Activities](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/asset-activities.html) * [Visualizations](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/visualizations.html) Parent topic:[Asset types and properties](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/assets.html)",conceptual,0,train
5EE63FCC911BA90930D413B58E1310EFE0E24243,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/jython/clementine/python_node_traverse.html?context=cdpaas&locale=en,Traversing through nodes in a flow,"Traversing through nodes in a flow
Traversing through nodes in a flow A common requirement is to identify nodes that are either upstream or downstream of a particular node. The flow provides a number of methods that can be used to identify these nodes. These methods are summarized in the following table. Methods to identify upstream and downstream nodes Table 1. Methods to identify upstream and downstream nodes Method Return type Description s.iterator() Iterator Returns an iterator over the node objects that are contained in the specified flow. If the flow is modified between calls of the next() function, the behavior of the iterator is undefined. s.predecessorAt(node, index) Node Returns the specified immediate predecessor of the supplied node or None if the index is out of bounds. s.predecessorCount(node) int Returns the number of immediate predecessors of the supplied node. s.predecessors(node) List Returns the immediate predecessors of the supplied node. s.successorAt(node, index) Node Returns the specified immediate successor of the supplied node or None if the index is out of bounds. s.successorCount(node) int Returns the number of immediate successors of the supplied node. s.successors(node) List Returns the immediate successors of the supplied node.",conceptual,0,train
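As an illustration of how these methods can be combined, the following sketch walks every node in the current flow and reports its immediate neighbours. It is a minimal example, assuming it runs as a flow script inside SPSS Modeler and that node objects expose getName(); only the methods listed in Table 1 (plus modeler.script.stream()) are used.

```python
# Illustrative sketch: report the immediate neighbours of every node in the flow.
stream = modeler.script.stream()

it = stream.iterator()
while it.hasNext():
    node = it.next()
    print node.getName(), "has", stream.predecessorCount(node), "predecessor(s) and", stream.successorCount(node), "successor(s)"
    for upstream in stream.predecessors(node):
        print "  upstream:", upstream.getName()
    for downstream in stream.successors(node):
        print "  downstream:", downstream.getName()
```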
AE57C56703B39C9097516D1466B70A3DE57AA1C4,https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-run-save.html?context=cdpaas&locale=en,Running a pipeline,"Running a pipeline
Running a pipeline You can run a pipeline in real time to test a flow as you work. When you are satisfied with a pipeline, you can then define a job to run a pipeline with parameters or to run on a schedule. To run a pipeline: 1. Click Run pipeline on the toolbar. 2. Choose an option: * Trial run runs the pipeline without creating a job. Use this to test a pipeline. * Create a job presents you with an interface for configuring and scheduling a job to run the pipeline. You can save and reuse run details, such as pipeline parameters, for a version of your pipeline. * View history compares all of your runs over time. You must make sure requirements are met when you run a pipeline. For example, you might need a deployment space or an API key to run some of your nodes before you can begin. Using a job run name You can optionally specify a job run name when running a pipeline flow or a pipeline job and see the different jobs in the Job details dashboard. Otherwise, you can also assign a local parameter DSJobInvocationId to either a Run pipeline job node or Run DataStage job node. If both the parameter DSJobInvocationId and job name of the node are set, DSJobInvocationId will be used. If neither are set, the default value ""job run"" is used. Notes on running a pipeline * When you run a pipeline from a trial run or a job, click the node output to view the results of a successful run. If the run fails, error messages and logs are provided to help you correct issues. * Errors in the pipeline are flagged with an error badge. Open the node or condition with an error to change or complete the configuration. * View the consolidated logs to review operations or identify issues with the pipeline. Creating a pipeline job The following are all the configuration options for defining a job to run the pipeline. 1. Name your pipeline job and choose a version. 2. Input your IBM API key. 3. (Optional) Schedule your job by toggling the Schedule button. 1. Choose the start date and fine tune your schedule to repeat by any minute, hour, day, week, month. 2. Add exception days to prevent the job from running on certain days. 3. Add a time for terminating the job. 4. (Optional) Enter the pipeline parameters needed for your job, for example assigning a space to a deployment node. To see how to create a pipeline parameter, see Defining pipeline parameters in [Creating a pipeline](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-create.html). 5. (Optional) Choose if you want to be notified of pipeline job status after running. Saving a version of a pipeline You can save a version of a pipeline and revert to it at a later time. For example, if you want to preserve a particular configuration before you make changes, save a version. You can revert the pipeline to a previous version. When you share a pipeline, the latest version is used. To save a version: 1. Click the Versions icon on the toolbar. 2. In the Versions pane, click Save version to create a new version with a version number incremented by 1. When you run the pipeline, you can choose from available saved versions. Note: You cannot delete a saved version. Exporting pipeline assets When you export project or space assets to import them into a deployment space, you can include pipelines in the list of assets you export to a zip file and then import into a project or space. 
Importing a pipeline into a space extends your MLOps capabilities to run jobs for various assets from a space, or to move all jobs from a pre-production to a production space. Note these considerations for working with pipelines in a space: * Pipelines in a space are read-only. You cannot edit the pipeline. You must edit the pipeline from the project, then export the updated pipeline and import it into the space. * Although you cannot edit the pipeline in a space, you can create new jobs to run the pipeline. You can also use parameters to assign values for jobs so you can have different values for each job you configure. * If there is already a pipeline in the space with the same name, the pipeline import will fail. * If there is no pipeline in the space with the same name, a pipeline with version 1 is created in the space. * Any supporting assets or references required to run a pipeline job must also be part of the import package or the job will fail. * If your pipeline contains assets or tools not supported in a space, such as an SPSS modeler job,",how-to,1,train
ADBD308EEB761B4A1516D49F68C880EAF3F08D78,https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-input-spark.html?context=cdpaas&locale=en,Batch deployment input details for Spark models,"Batch deployment input details for Spark models
Batch deployment input details for Spark models Follow these rules when you are specifying input details for batch deployments of Spark models. Data type summary table: Data Description Type Inline File formats N/A Parent topic:[Batch deployment input details by framework](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-input-by-framework.html)",conceptual,0,train
FB9D913B400E9F00E6AA6EFF7A7C8A84F5762DC9,https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-job-nb-editor.html?context=cdpaas&locale=en,Creating jobs in the Notebook editor,"Creating jobs in the Notebook editor
Creating jobs in the Notebook editor You can create a job to run a notebook directly in the Notebook editor. To create a notebook job: 1. In the Notebook editor, click  from the menu bar and select Create a job. 2. Define the job details by entering a name and a description (optional). 3. On the Configure page, select: * A notebook version. The most recently saved version of the notebook is used by default. If no version of the notebook exists, you must create a version by clicking  from the notebook action bar. * A runtime. By default, the job uses the same environment template that was selected for the notebook. * Advanced configuration to add environment variables and select the job run retention settings. * The environment variables that are passed to the notebook when the job is started and affect the execution of the notebook. Each variable declaration must be made for a single variable in the following format VAR_NAME=foo and appear on its own line. For example, to determine which data source to access if the same notebook is used in different jobs, you can set the variable DATA_SOURCE to DATA_SOURCE=jdbc:db2//db2.server.com:1521/testdata in the notebook job that trains a model and to DATA_SOURCE=jdbc:db2//db2.server.com:1521/productiondata in the job where the model runs on real data. In another example, the variables BATCH_SIZE, NUM_CLASSES and EPOCHS that are required for a Keras model can be passed to the same notebook with different values in separate jobs. * Select the job run result output. You can select: * Log & notebook to store the output files of specific runs, the log file, and the resulting notebook. This is the default that is set for all new jobs. Select: * To compare the results of different job runs, not just by viewing the log file. By keeping the output files of specific job runs, you can compare the results of job runs to fine tune your code. For example, by configuring different environment variables when the job is started, you can change the way the code in the notebook behaves and then compare these differences (including graphics) step by step between runs. Note: * The job run retention value is set to 5 by default to avoid creating too many run output files. This means that the last 5 job run output files will be retained. You need to adjust this value if you want to compare more run output files. * You cannot use the results of a specific job run to create a URL to enable ""Sharing by URL"". If you want to use a specific job result run as the source of what is shown via ""Share by URL"", you must create a new job and select Log & updated version. * To view the logs. * Log only to store the log file only. The resulting notebook is discarded. Select: * To view the logs. * Log & updated version to store the log file and update the output cells of the version you used as input to this task. Select: * To view the logs. * To share the result of a job run via ""Share by URL"". * Retention configuration to set how long to retain finished job runs and job run artifacts like logs or notebook results. You can either select the number of days to retain the job runs or the last number of job runs to keep. The retention value is set to 5 by default (the last 5 job run output files are retained). Be mindful when changing the default as too many job run files can quickly use up project storage. 4. On the Schedule page, you can optionally add a one-time or repeating schedule. 
If you define a start day and time without selecting Repeat, the job will run exactly one time at the specified day and time. If you define a start date and time and you select Repeat, the job will run for the first time at the timestamp indicated in the Repeat section. You can't change the time zone; the schedule uses your web browser's time zone setting. If you exclude certain weekdays, the job might not run as you would expect. The reason might be due to a discrepancy between the time zone of the user who creates the schedule, and the time zone of the compute node where the job runs. An API key is generated when you create a scheduled job, and future runs will use this API key. If you didn't create a scheduled job but choose to modify one, an API key is generated for you when you modify the job and future runs will use this API key. 5. Optionally set to see notifications for the",how-to,1,train
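For context, the sketch below shows one way a notebook might consume the environment variables described in the job configuration above (DATA_SOURCE, BATCH_SIZE, NUM_CLASSES, EPOCHS). It assumes that the variables set for the job are exposed to the notebook process as ordinary environment variables; the default values are placeholders for illustration only.

```python
import os

# Illustrative sketch: read job-defined environment variables inside a notebook.
# The variable names match the examples above; the defaults are placeholders.
data_source = os.environ.get("DATA_SOURCE", "jdbc:db2//db2.server.com:1521/testdata")
batch_size = int(os.environ.get("BATCH_SIZE", "32"))
num_classes = int(os.environ.get("NUM_CLASSES", "10"))
epochs = int(os.environ.get("EPOCHS", "5"))

print("Using data source:", data_source)
print("Hyperparameters:", batch_size, num_classes, epochs)
```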
43785386700CF73E37A8F76ADC4EF9FB01EE0AEB,https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-factual-accuracy.html?context=cdpaas&locale=en,Generating accurate output,"Generating accurate output
Generating accurate output Foundation models sometimes generate output that is not factually accurate. If factual accuracy is important for your project, set yourself up for success by learning how and why these models might sometimes get facts wrong and how you can ground generated output in correct facts. Why foundation models get facts wrong Foundation models can get facts wrong for a few reasons: * Pre-training builds word associations, not facts * Pre-training data sets contain out-of-date facts * Pre-training data sets do not contain esoteric or domain-specific facts and jargon * Sampling decoding is more likely to stray from the facts Pre-training builds word associations, not facts During pre-training, a foundation model builds up a vocabulary of words ([tokens](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tokens.html)) encountered in the pre-training data sets. Also during pre-training, statistical relationships between those words become encoded in the model weights. For example, ""Mount Everest"" often appears near ""tallest mountain in the world"" in many articles, books, speeches, and other common pre-training sources. As a result, a pre-trained model will probably correctly complete the prompt ""The tallest mountain in the world is "" with the output ""Mount Everest."" These word associations can make it seem that facts have been encoded into these models too. For very common knowledge and immutable facts, you might have good luck generating factually accurate output using pre-trained foundation models with simple prompts like the tallest-mountain example. However, it is a risky strategy to rely on only pre-trained word associations when using foundation models in applications where accuracy matters. Pre-training data sets contain out-of-date facts Collecting pre-training data sets and performing pre-training runs can take a significant amount of time, sometimes months. If a model was pre-trained on a data set from several years ago, the model vocabulary and word associations encoded in the model weights won't reflect current world events or newly popular themes. For this reason, if you submit the prompt ""The most recent winner of the world cup of football (soccer) is "" to a model pre-trained on information a few years old, the generated output will be out of date. Pre-training data sets do not contain esoteric or domain-specific facts and jargon Common foundation model pre-training data sets, such as [The Pile (Wikipedia)](https://en.wikipedia.org/wiki/The_Pile_%28dataset%29), contain hundreds of millions of documents. Given how famous Mount Everest is, it's reasonable to expect a foundation model to have encoded a relationship between ""tallest mountain in the world"" and ""Mount Everest"". However, if a phenomenon, person, or concept is mentioned in only a handful of articles, chances are slim that a foundation model would have any word associations about that topic encoded in its weights. Prompting a pre-trained model about information that was not in its pre-training data sets is unlikely to produce factually accurate generated output. Sampling decoding is more likely to stray from the facts Decoding is the process a model uses to choose the words (tokens) in the generated output: * Greedy decoding always selects the token with the highest probability * Sampling decoding selects tokens pseudo-randomly from a probability distribution Greedy decoding generates output that is more predictable and more repetitive. 
Sampling decoding is more random, which feels ""creative"". If, based on pre-training data sets, the most likely words to follow ""The tallest mountain is "" are ""Mount Everest"", then greedy decoding could reliably generate that factually correct output, whereas sampling decoding might sometimes generate the name of some other mountain or something that's not even a mountain. How to ground generated output in correct facts Rather than relying on only pre-trained word associations for factual accuracy, provide context in your prompt text. Use context in your prompt text to establish facts When you prompt a foundation model to generate output, the words (tokens) in the generated output are influenced by the words in the model vocabulary and the words in the prompt text. You can use your prompt text to boost factually accurate word associations. Example 1 Here's a prompt to cause a model to complete a sentence declaring your favorite color: My favorite color is Given that only you know what your favorite color is, there's no way the model could reliably generate the correct output. Instead, a color will be selected from colors mentioned in the model's pre-training data: * If greedy decoding is used, whichever color appears most frequently with statements about favorite colors in pre-training content will be selected. * If sampling decoding is used, a color will be selected randomly from colors mentioned most often as favorites in the pre-training content. Example 2 Here's a prompt that includes context to establish the facts: I recently painted my kitchen yellow, which is my favorite color. My favorite color is If you prompt a model with text that includes factually accurate context like this, then the output the model generates",how-to,1,train
20D6B2732BE17C12226F186559FBEA647799F3B8,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/jython/clementine/python_syntax_examples.html?context=cdpaas&locale=en,Examples,"Examples
Examples The print keyword prints the arguments immediately following it. If the statement is followed by a comma, a new line isn't included in the output. For example: print ""This demonstrates the use of a"", print "" comma at the end of a print statement."" This will result in the following output: This demonstrates the use of a comma at the end of a print statement. The for statement iterates through a block of code. For example: mylist1 = [""one"", ""two"", ""three""] for lv in mylist1: print lv continue In this example, three strings are assigned to the list mylist1. The elements of the list are then printed, with one element on each line. This results in the following output: one two three In this example, the iterator lv takes the value of each element in the list mylist1 in turn as the for loop implements the code block for each element. An iterator can be any valid identifier of any length. The if statement is a conditional statement. It evaluates the condition and returns either true or false, depending on the result of the evaluation. For example: mylist1 = [""one"", ""two"", ""three""] for lv in mylist1: if lv == ""two"": print ""The value of lv is "", lv else: print ""The value of lv is not two, but "", lv continue In this example, the value of the iterator lv is evaluated. If the value of lv is two, a different string is returned to the string that's returned if the value of lv is not two. This results in the following output: The value of lv is not two, but one The value of lv is two The value of lv is not two, but three",how-to,1,train
9EE303CB0D99042537564DCDFC134B592BF0A3FE,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/jython/clementine/python_inheritance.html?context=cdpaas&locale=en,Inheritance,"Inheritance
Inheritance The ability to inherit from classes is fundamental to object-oriented programming. Python supports both single and multiple inheritance. Single inheritance means that there can be only one superclass. Multiple inheritance means that there can be more than one superclass. Inheritance is implemented by subclassing other classes. Any number of Python classes can be superclasses. In the Jython implementation of Python, only one Java class can be directly or indirectly inherited from. It's not required for a superclass to be supplied. Any attribute or method in a superclass is also in any subclass and can be used by the class itself, or by any client as long as the attribute or method isn't hidden. Any instance of a subclass can be used wherever an instance of a superclass can be used; this is an example of polymorphism. These features enable reuse and ease of extension.",conceptual,0,train
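As a minimal illustration of these points (the class names are invented for the example), single and multiple inheritance in Python look like this:

class Animal(object):
    def speak(self):
        return "..."

class Dog(Animal):              # single inheritance: one superclass
    def speak(self):            # polymorphism: overrides the superclass method
        return "Woof"

class Loggable(object):
    def log(self):
        return "logged"

class GuideDog(Dog, Loggable):  # multiple inheritance: more than one superclass
    pass

d = GuideDog()
print d.speak()                 # "Woof" - usable wherever an Animal is expected
print d.log()                   # "logged" - inherited from the second superclass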
1F14865C04B28B02EE0760D7099554A916E26926,https://dataplatform.cloud.ibm.com/docs/content/wsj/catalog/platform-assets.html?context=cdpaas&locale=en,Creating the catalog for platform connections,"Creating the catalog for platform connections
Creating the catalog for platform connections You can create a Platform assets catalog to share connections across your organization. Any user who you add as a collaborator to the catalog can see these connections. You can add an unlimited number of collaborators and connection assets to the Platform assets catalog. If you are signed up for both Cloud Pak for Data as a Service and watsonx, you share a single Platform assets catalog between the two platforms. Any connection assets that you add to the catalog on either platform are available in both platforms. However, if you add other types of assets to the Platform assets catalog on Cloud Pak for Data as a Service, you can't access those types of assets on watsonx. Requirements Before you create the Platform assets catalog, understand the required permissions and the requirements for storage and duplicate handling. Required permission : You must have the IAM Administrator role in the IBM Cloud account. : To view your roles, go to Administration > Access (IAM). Then select Roles in the IBM Cloud console. Storage requirement : You must specify the IBM Cloud Object Storage instance configured during [IBM Cloud account setup](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/storage-options.html). If you are not an administrator for the IBM Cloud Object Storage instance, it must be [configured to allow catalog creation](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_cos.html). Duplicate asset handling : Assets are considered duplicates if they have the same asset type and the same name. : Select how to handle duplicate assets: : - Update original assets : - Overwrite original assets : - Allow duplicates (default) : - Preserve original assets and reject duplicates : You can change the duplicate handling preferences at any time on the catalog Settings page. Creating the Platform assets catalog To create the Platform assets catalog: 1. From the main menu, choose Data > Platform connections. 2. Click Create catalog. 3. Select the IBM Cloud Object Storage service. If you don't have an existing service instance, [create an IBM Cloud Object Storage service instance](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_cos.html) and then refresh the page. 4. Click Create. The Platform assets catalog is created in a dedicated storage bucket. Initially, you are the only collaborator in the catalog. 5. Add collaborators to the catalog. Go to the Access control page in the catalog and add collaborators. You assign each user a [role](https://dataplatform.cloud.ibm.com/docs/content/wsj/catalog/platform-assets.html?context=cdpaas&locale=enroles): * Assign the Admin role to at least one other user so that you are not the only person who can add collaborators. * Assign the Editor role to all users who are responsible for adding connections to the catalog. * Assign the Viewer role to the users who need to find connections and use them in projects. You can give all the users access to the Platform assets catalog by assigning the Viewer role to the Public Access group. By default, all users in your account are members of the Public Access group. See [add collaborators](https://dataplatform.cloud.ibm.com/docs/content/wsj/catalog/catalog-collaborators.html). 6. Add connections to the catalog. You can delegate this step to other collaborators who have the Admin or Editor role. See [Add connections to the Platform assets catalog](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html). 
Platform assets catalog collaborator roles The Platform assets catalog roles provide the permissions in the following table. Action Viewer Editor Admin View connections ✓ ✓ ✓ Use connections in projects ✓ ✓ ✓ Use connections in spaces ✓ ✓ ✓ View collaborators ✓ ✓ ✓ Add connections ✓ ✓ Modify connections ✓ ✓ Delete connections ✓ ✓ Add or remove collaborators ✓ Change collaborator roles ✓ Delete the catalog ✓ Parent topic:[Setting up the platform for administrators](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/setup-platform.html)",how-to,1,train
9E71F112F9AF39E61A59914D87689B4B8DB13F50,https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/int-aws.html?context=cdpaas&locale=en,Integrating with AWS,"Integrating with AWS
Integrating with AWS You can configure an integration with the Amazon Web Services (AWS) platform to allow IBM watsonx users access to data sources from AWS. Before proceeding, make sure you have proper permissions. For example, you'll need to be able to create services and credentials in the AWS account. After you configure an integration, you'll see it under Service instances. You'll see a new AWS tab that lists your instances of Redshift and S3. To configure an integration with AWS: 1. Log on to the [AWS Console](https://aws.amazon.com/console/). 2. From the account drop-down at the upper right, select My Security Credentials. 3. Under Access keys (access key ID and secret access key), click Create New Access Key. 4. Copy the key ID and secret. Important: Write down your key ID and secret and store them in a safe place. 5. In IBM watsonx, under Administration > Cloud integrations, go to the AWS tab, enable integration, and then paste the access key ID and access key secret into the appropriate fields. 6. If you need to access Redshift, [configure firewall access](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/firewall-redshift.html). 7. Confirm that you can see your AWS services. From the main menu, choose Administration > Services > Service instances. Click the AWS tab to see those services. Now users who have credentials to your AWS services can [create connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html) to them by selecting them on the Add connection page. Then they can access data from those connections by [creating connected data assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html). Next steps * [Set up a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/projects.html) * [Create connections in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html) Parent topic:[Integrations with other cloud platforms](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/int-cloud.html)",how-to,1,train
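These steps are performed entirely in the AWS and IBM watsonx consoles, but if you want to sanity-check the access key pair before pasting it into the integration page, a short script such as the following (assuming the boto3 package is installed and the placeholder values are replaced) lists the S3 buckets that the key can see:

import boto3

# Placeholders: use the access key ID and secret that you created in the AWS console.
s3 = boto3.client(
    "s3",
    aws_access_key_id="YOUR_ACCESS_KEY_ID",
    aws_secret_access_key="YOUR_SECRET_ACCESS_KEY",
)

# If the credentials are valid, this prints the names of the accessible buckets.
for bucket in s3.list_buckets()["Buckets"]:
    print(bucket["Name"])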
BEDA84E76E7F8FA5594F63E640DC17B4F6CB3E5E,https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/monitor-resources.html?context=cdpaas&locale=en,Monitoring account resource usage,"Monitoring account resource usage
Monitoring account resource usage Some service plans charge for compute usage and other types of resource usage. If you are the IBM Cloud account owner or administrator, you can monitor resource usage to ensure the limits are not exceeded. For Lite plans, you cannot exceed the limits of the plan. You must wait until the start of your next billing month to use resources that are calculated monthly. Alternatively, you can [upgrade to a paid plan](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/upgrade.html). For most paid plans, you pay for the resources that the tools and processes provided by the service consume each month. To see the costs of your plan, log in to IBM Cloud, open your service instance from your IBM Cloud dashboard, and click Plan. * [Capacity unit hours (CUH) for compute usage](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/monitor-resources.html?context=cdpaas&locale=encompute) * [Resource units for foundation model inferencing](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/monitor-resources.html?context=cdpaas&locale=enrus) * [Monitor monthly billing](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/monitor-resources.html?context=cdpaas&locale=enbilling) Capacity unit hours (CUH) for compute usage Many tools consume compute usage that is measured in capacity unit hours (CUH). A capacity unit hour is a specific amount of compute capability with a set cost. How compute usage is calculated Different types of processes and different levels of compute power are billed at different rates of capacity units per hour. For example, the hourly rate for a data profiling process is 6 capacity units. Compute usage for Watson Studio is charged by the minute, with a minimum charge of 10 minutes (0.16 hours). Compute usage for Watson Machine Learning is charged by the minute with a minimum charge of one minute. Compute usage is calculated by adding the minimum number of minutes billed for each process to the number of minutes the process runs beyond the minimum, then multiplying the total by the capacity unit rate for the process. The following table shows examples of how the billed CUH is calculated. Rate Usage time Calculation Total CUH billed 1 CUH/hour 1 hour 1 hour * 1 CUH/hour 1 CUH 2 CUH/hour 45 minutes 0.75 hours * 2 CUH/hour 1.5 CUH 6 CUH/hour 5 minutes 0.16 hours * 6 CUH/hour 0.96 CUH. The minimum charge for Watson Studio applies. 6 CUH/hour 30 minutes 0.5 hours * 6 CUH/hour 3 CUH 6 CUH/hour 1 hour 1 hour * 6 CUH/hour 6 CUH Processes that consume capacity unit hours Some types of processes, such as AutoAI and Federated Learning, have a single compute rate for the runtime. However, with many tools you have a choice of compute resources for the runtime. The notebook editor, Data Refinery, SPSS Modeler, and other tools have different rates that reflect the memory and compute power for the environment. Environments with more memory and compute power consume capacity unit hours at a higher rate. This table shows each process that consumes CUH, where it runs, which service the CUH is billed against, and whether you can choose from more than one environment. Follow the links to view the available CUH rates for each process. Tool or Process Workspace Service that provides CUH Multiple CUH rates? 
[Notebook editor](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-environments.html) Project Watson Studio, Analytics Engine (Spark) Multiple rates [Invoking the machine learning API from a notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-environments.htmlwml) Project Watson Machine Learning Multiple rates [Data Refinery](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/spark-dr-envs.html) Project Watson Studio Multiple rates [SPSS Modeler](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/spss-envs.html) Project Watson Studio Multiple rates [RStudio IDE](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/rstudio-envs.html) Project Watson Studio Multiple rates [AutoAI experiments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/run-autoai.html) Project Watson Machine Learning Multiple rates [Decision Optimization experiments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/run-decisionopt.html) Spaces Watson Machine Learning Multiple rates [Running deployments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/run-cuh-deploy-spaces.html) Spaces Watson Machine Learning Multiple rates [Profiling](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/environments-parent.htmlprofiling) Project Watson Studio One rate [Synthetic Data Generator](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/synthetic-envs.html) Project Watson Studio One rate Monitoring compute usage You can monitor compute usage for all services at the account level. To view the monthly CUH usage for a service, open the service instance from your IBM Cloud dashboard and click Plan. You can also monitor compute usage in a project on the Environments page on the Manage tab. To see the total amount of capacity unit hours that are used and that are remaining for Watson Studio and Watson Machine Learning, look at the Environment Runtimes page. From the navigation menu, select Administration > Environment runtimes. The Environment Runtimes page shows details of the [CUH used by environments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/track-runtime-usage.htmltrack-account). You can calculate the amount of CUH you use for data flows and profiling by subtracting the amount used by environments from the total amount used. Resource units for foundation model inferencing Calling a foundation model to generate output in response to a prompt is known as inferencing. Foundation model inferencing is measured in resource units (RU). Each RU equals 1,000 tokens. A token is a basic unit of text (typically 4 characters or 0.75 words) used in the input or output for a foundation model prompt. For details on tokens, see [Tokens](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tokens.html). Resource unit billing is based on the rate of the foundation model class multiplied by the number of tokens.",how-to,1,train
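The billing arithmetic described above can be summarized in a few lines of Python; the rates and minimum minutes are the examples from this page, and the helper names are illustrative only:

def cuh_billed(rate_per_hour, minutes_used, minimum_minutes):
    # Bill at least the minimum number of minutes, then convert to hours.
    billable_minutes = max(minutes_used, minimum_minutes)
    return (billable_minutes / 60.0) * rate_per_hour

def resource_units(total_tokens):
    # 1 resource unit (RU) = 1,000 input or output tokens.
    return total_tokens / 1000.0

# 45 minutes in a 2 CUH/hour environment -> 0.75 hours * 2 = 1.5 CUH
print(cuh_billed(2, 45, 1))
# 30 minutes in a 6 CUH/hour environment -> 0.5 hours * 6 = 3.0 CUH
print(cuh_billed(6, 30, 10))
# A 5-minute Watson Studio run is billed for the 10-minute minimum
# (about 1 CUH here; the table above rounds 10 minutes to 0.16 hours, giving 0.96 CUH).
print(cuh_billed(6, 5, 10))
# 12,000 tokens -> 12 RU, billed at the rate for the model's class.
print(resource_units(12000))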
450CAAACD51ABDEDAB940CAFB4BC47EBFBCBBA67,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/extension_pyspark_metadata.html?context=cdpaas&locale=en,Data metadata (SPSS Modeler),"Data metadata (SPSS Modeler)
Data metadata This section describes how to set up the data model attributes based on pyspark.sql.StructField.",how-to,1,train
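As a minimal sketch of what such a schema looks like (the metadata keys shown here are illustrative examples, not the exact attribute names that SPSS Modeler expects):

from pyspark.sql.types import StructType, StructField, StringType, DoubleType

# Each StructField carries a name, a data type, a nullable flag, and an
# optional metadata dictionary that can hold data model attributes.
schema = StructType([
    StructField("customer_id", StringType(), False),
    StructField("income", DoubleType(), True,
                metadata={"measure": "range"}),      # illustrative attribute
    StructField("gender", StringType(), True,
                metadata={"measure": "nominal"}),    # illustrative attribute
])

for field in schema.fields:
    print(field.name, field.dataType, field.metadata)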
C4E83640891EA5D02EAE76027D05FDEFE2C4EFFE,https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/personal-settings.html?context=cdpaas&locale=en,Managing your settings,"Managing your settings
Managing your settings You can manage your profile, services, integrations, and notifications while logged in to IBM watsonx. * [Manage your profile](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/personal-settings.html?context=cdpaas&locale=enprofile) * [Manage user API keys](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/admin-apikeys.html) * [Switch accounts](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/personal-settings.html?context=cdpaas&locale=enaccount) * [Manage your services](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_console.html) * [Manage your integrations](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/personal-settings.html?context=cdpaas&locale=enintegrations) * [Manage your notification settings](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/personal-settings.html?context=cdpaas&locale=enbell) * [View and personalize your project summary](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/personal-settings.html?context=cdpaas&locale=enproject-summary) Manage your profile You can manage your profile on the Profile page by clicking your avatar in the banner and then clicking Profile and settings. You can make these changes to your profile: * Add or change your avatar photo. * Change your IBMid or password. Do not change your IBMid (email address) after you register with the IBM watsonx platform. The IBMid (email address) uniquely identifies users in the platform and also authorizes access to various IBM watsonx resources, including projects, spaces, models, and catalogs. If you change your IBMid (email address) in your IBM Cloud profile after you have registered with IBM watsonx, you will lose access to the platform and associated resources. * Set your service locations filters by resource group and location. The filters apply throughout the platform. For example, the Service instances page that you access through the Services menu shows only the filtered services. Ensure you have selected the region where Watson Studio is located, for example, Dallas, as well as the Global location. Global is required to provide access to your IBM Cloud Object Storage instance. * Access your IBM Cloud account. * [Leave IBM watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/stopapps.htmldeactivate). Switch accounts If you are added to a shared IBM Cloud account that is different from your individual account, you can switch your account by selecting a different account from the account list in the menu bar, next to your avatar. Manage your integrations To set up or modify an integration to GitHub: 1. Click your avatar in the banner. 2. Click Profile and settings. 3. Click the Git integrations tab. See [Publish notebooks on GitHub](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/github-integration.html). Manage your notification settings To see your notification settings, click the notification bell icon and then click the settings icon. You can make these changes to your notification settings: * Specify to receive push notifications that appear briefly on screen. If you select Do not disturb, you continue to see notifications on the home page and the number of notifications on the bell. * Specify to receive notifications by email. * Specify for which projects or spaces you receive notifications. View and personalize your project summary Use the Overview page of a project to view a summary of what's happening in your project. 
You can jump back into your most recent work and keep up to date with alerts, tasks, project history, and compute usage. View recent asset activity in the Assets pane on the Overview page, and filter the assets by selecting By you or By all using the dropdown. Selecting By you lists assets edited by you, ordered by most recent at the top. Selecting By all lists assets edited by others and also by you, ordered by most recent at the top. You can use the readme file on the Overview page to document the status or results of the project. The readme file uses standard [Markdown formatting](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/markd-jupyter.html). Collaborators with the Admin or Editor role can edit the readme file. Learn more * [Managing your IBM Cloud account](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/manage-account.html) * [Managing your services](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/create-services.htmlmanage) Parent topic:[Administration](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/administer-accounts.html)",how-to,1,train
1D1783967CBF46A0B75539BADBAA1D601BC9F412,https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-frames.html?context=cdpaas&locale=en,"Frameworks, fusion methods, and Python versions","Frameworks, fusion methods, and Python versions
Frameworks, fusion methods, and Python versions These are the available machine learning model frameworks and model fusion methods for the Federated Learning model. The software spec and frameworks are also compatible with specific Python versions. Frameworks and fusion methods This table lists supported software frameworks for building Federated Learning models. For each framework you can see the supported model types, fusion methods, and hyperparameter options. Table 1. Frameworks and fusion methods Frameworks Model Type Fusion Method Description Hyperparameters TensorFlow
Used to build neural networks.
See [Save the Tensorflow model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-models.htmltf-config). Any Simple Avg Simplest aggregation that is used as a baseline where all parties' model updates are equally weighted. - Rounds
- Termination predicate (Optional)
- Quorum (Optional)
- Max Timeout (Optional) Weighted Avg Weights the average of updates based on the number of each party sample. Use with training data sets of widely differing sizes. - Rounds
- Termination predicate (Optional)
- Quorum (Optional)
- Max Timeout (Optional) Scikit-learn
Used for predictive data analysis.
See [Save the Scikit-learn model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-models.htmlsklearn-config). Classification Simple Avg Simplest aggregation that is used as a baseline where all parties' model updates are equally weighted. - Rounds
- Termination predicate (Optional) Weighted Avg Weights the average of updates based on the number of each party sample. Use with training data sets of widely differing sizes. - Rounds
- Termination predicate (Optional) Regression Simple Avg Simplest aggregation that is used as a baseline where all parties' model updates are equally weighted. - Rounds Weighted Avg Weights the average of updates based on the number of each party sample. Use with training data sets of widely differing sizes. - Rounds XGBoost XGBoost Classification Use to build classification models that use XGBoost. - Learning rate
- Loss
- Rounds
- Number of classes XGBoost Regression Use to build regression models that use XGBoost. - Learning rate
- Rounds
- Loss K-Means/SPAHM Used to train KMeans (unsupervised learning) models when parties have heterogeneous data sets. - Max Iter
- N cluster Pytorch
Used for training neural network models.
See [Save the Pytorch model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-models.htmlpytorch). Any Simple Avg Simplest aggregation that is used as a baseline where all parties' model updates are equally weighted. - Rounds
- Epochs
- Quorum (Optional)
- Max Timeout (Optional) Neural Networks Probabilistic Federated Neural Matching (PFNM) Communication-efficient method for fully connected neural networks when parties have heterogeneous data sets. - Rounds
- Termination accuracy (Optional)
- Epochs
- sigma
- sigma0
- gamma
- iters Software specifications and Python version by framework This table lists the software spec and Python versions available for each framework. Software specifications and Python version by framework Watson Studio frameworks Python version Software Spec Python Client Extras Framework package scikit-learn 3.10 runtime-22.2-py3.10 fl-rt22.2-py3.10 scikit-learn 1.1.1 Tensorflow 3.10 runtime-22.2-py3.10 fl-rt22.2-py3.10 tensorflow 2.9.2 PyTorch 3.10 runtime-22.2-py3.10 fl-rt22.2-py3.10 torch 1.12.1 Learn more [Hyperparameter definitions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-param.html) Parent topic:[IBM Federated Learning](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fed-lea.html)",conceptual,0,train
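If you build parties in a local Python environment, you can quickly confirm that it matches the framework versions in the table above; this check is illustrative and only covers the listed packages:

import sys
print("Python:", sys.version.split()[0])          # the runtime-22.2 specs expect 3.10

try:
    import sklearn, tensorflow, torch
    print("scikit-learn:", sklearn.__version__)   # table lists 1.1.1
    print("tensorflow:", tensorflow.__version__)  # table lists 2.9.2
    print("torch:", torch.__version__)            # table lists 1.12.1
except ImportError as err:
    print("Missing framework package:", err)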
E3B9F33C36E5636808B137CFA4745E39F3B48D62,https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/forecasting-guides.html?context=cdpaas&locale=en,SPSS predictive analytics forecasting using data preparation for time series data in notebooks,"SPSS predictive analytics forecasting using data preparation for time series data in notebooks
SPSS predictive analytics forecasting using data preparation for time series data in notebooks Data preparation for time series data (TSDP) provides the functionality to convert raw time data (in flattened multi-dimensional format, which includes transactional (event) based and column-based data) into regular time series data (in compact row-based format) which is required by the subsequent time series analysis methods. The main job of TSDP is to generate time series in terms of the combination of each unique value in the dimension fields with metric fields. In addition, it sorts the data based on the timestamp, extracts metadata of time variables, transforms time series with another time granularity (interval) by applying an aggregation or distribution function, checks the data quality, and handles missing values if needed. Python example code: from spss.ml.forecasting.timeseriesdatapreparation import TimeSeriesDataPreparation tsdp = TimeSeriesDataPreparation(). setMetricFieldList([""Demand""]). setDateTimeField(""Date""). setEncodeSeriesID(True). setInputTimeInterval(""MONTH""). setOutTimeInterval(""MONTH""). setQualityScoreThreshold(0.0). setConstSeriesThreshold(0.0) tsdpOut = tsdp.transform(data) TimeSeriesDataPreparationConvertor This is the date/time convertor API that's used to provide some functionalities of the date/time convertor inside TSDP for applications to use. There are two use cases for this component: * Compute the time points between a specified start and end time. In this case, the start and end time both occur after the first observation in the previous TSDP's output. * Compute the time points between a start index and end index referring to the last observation in the previous TSDP's output. Temporal causal modeling Temporal causal modeling (TCM) refers to a suite of methods that attempt to discover key temporal relationships in time series data by using a combination of Granger causality and regression algorithms for variable selection. Python example code: from spss.ml.forecasting.timeseriesdatapreparation import TimeSeriesDataPreparation from spss.ml.common.wrapper import LocalContainerManager from spss.ml.forecasting.temporalcausal import TemporalCausal from spss.ml.forecasting.params.predictor import MaxLag, MaxNumberOfPredictor, Predictor from spss.ml.forecasting.params.temporal import FieldNameList, FieldSettings, Forecast, Fit from spss.ml.forecasting.reversetimeseriesdatapreparation import ReverseTimeSeriesDataPreparation tsdp = TimeSeriesDataPreparation().setDimFieldList([""Dimension1"", ""Dimension2""]). setMetricFieldList([""m1"", ""m2"", ""m3"", ""m4""]). setDateTimeField(""date""). setEncodeSeriesID(True). setInputTimeInterval(""MONTH""). setOutTimeInterval(""MONTH"") tsdpOutput = tsdp.transform(changedDF) lcm = LocalContainerManager() lcm.exportContainers(""TSDP"", tsdp.containers) estimator = TemporalCausal(lcm). setInputContainerKeys([""TSDP""]). setTargetPredictorList([Predictor( targetList=[["""", """", """"]], predictorCandidateList=[["""", """", """"]])]). setMaxNumPredictor(MaxNumberOfPredictor(False, 4)). setMaxLag(MaxLag(""SETTING"", 5)). setTolerance(1e-6) tcmModel = estimator.fit(tsdpOutput) transformer = tcmModel.setDataEncoded(True). setCILevel(0.95). setOutTargetValues(False). setTargets(FieldSettings(fieldNameList=FieldNameList(seriesIDList=[[""da1"", ""db1"", ""m1""]]))). setReestimate(False). setForecast(Forecast(outForecast=True, forecastSpan=5, outCI=True)). 
setFit(Fit(outFit=True, outCI=True, outResidual=True)) predictions = transformer.transform(tsdpOutput) rtsdp = ReverseTimeSeriesDataPreparation(lcm). setInputContainerKeys([""TSDP""]). setDeriveFutureIndicatorField(True) rtsdpOutput = rtsdp.transform(predictions) rtsdpOutput.show() Temporal Causal Auto Regressive Model Autoregressive (AR) models are built to compute out-of-sample forecasts for predictor series that aren't target series. These predictor forecasts are then used to compute out-of-sample forecasts for the target series. Model produced by TemporalCausal TemporalCausal exports outputs: * a JSON file that contains TemporalCausal model information * an XML file that contains the multi-series model Python example code: from spss.ml.common.wrapper import LocalContainerManager from spss.ml.forecasting.temporalcausal import TemporalCausal, TemporalCausalAutoRegressiveModel from spss.ml.forecasting.params.predictor import MaxLag, MaxNumberOfPredictor, Predictor from spss.ml.forecasting.params.temporal import FieldNameList, FieldSettingsAr, ForecastAr lcm = LocalContainerManager() arEstimator = TemporalCausal(lcm). setInputContainerKeys([tsdp.uid]). setTargetPredictorList([Predictor( targetList = [[""da1"", ""db1"", ""m2""]], predictorCandidateList = [[""da1"", ""db1"", ""m1""], [""da1"", ""db2"", ""m1""], [""da1"", ""db2"", ""m2""], [""da1"", ""db3"", ""m1""], [""da1"", ""db3"", ""m2""], [""da1"", ""db3"", ""m3""]])]). setMaxNumPredictor(MaxNumberOfPredictor(False, 5)). setMaxLag(MaxLag(""SETTING"", 5)) arEstimator.fit(df) tcmAr = TemporalCausalAutoRegressiveModel(lcm). setInputContainerKeys([arEstimator.uid]). setDataEncoded(True). setOutTargetValues(True). setTargets(FieldSettingsAr(FieldNameList( seriesIDList=[[""da1"", ""db1"", ""m1""], [""da1"", ""db2"", ""m2""], [""da1"", ""db3"", ""m3""]]))). setForecast(ForecastAr(forecastSpan = 5)) scored = tcmAr.transform(df) scored.show() Temporal Causal Outlier Detection One of the advantages of building TCM models is the ability to detect model-based outliers. Outlier detection refers to a capability to identify the time points in the target series with values that stray too far from their expected (fitted) values based on the TCM models. Temporal Causal Root Cause Analysis The root cause analysis refers to a capability to explore the Granger causal graph in order to analyze the key/root values that resulted in the outlier in question. Temporal Causal Scenario Analysis Scenario analysis refers to a capability of the TCM models to ""play-out"" the repercussions of artificially setting the value of a time series. A scenario is the set of forecasts that are performed by substituting the values of a root time series by a vector of substitute values. Temporal Causal Summary TCM Summary selects Top N models based on one model quality measure. There are five model quality measures: Root Mean Squared Error (RMSE), Root Mean Squared Percentage Error (RMSPE), Bayesian Information Criterion (BIC), Akaike Information Criterion (AIC), and R squared (RSQUARE). Both N and the model quality measure can be set by the user. Time Series Exploration Time Series Exploration explores the characteristics of time series data based on some statistics and tests to generate preliminary insights about the time series before modeling. It covers not only analytic methods for expert users (including time series clustering, unit root test, and correlations), but also provides an automatic exploration process based on a simple time series decomposition method for business users. 
Python example code: from spss.ml.forecasting.timeseriesexploration import TimeSeriesExploration tse = TimeSeriesExploration(). setAutoExploration(True). setClustering(True) tseModel = tse.fit(data) predictions = tseModel.transform(data) predictions.show() Reverse Data preparation for time series data Reverse Data preparation for time series data (RTSDP) reverses the TSDP transformation, converting the compact row-based output back into the original data format, as shown by the ReverseTimeSeriesDataPreparation step in the temporal causal modeling example above.",how-to,1,train
8D2B29253C00AE6A20730D0C9AD3284DC0FCABF5,https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/regional-datactr.html?context=cdpaas&locale=en,Regional availability for services and features,"Regional availability for services and features
Regional availability for services and features IBM watsonx is deployed on the IBM Cloud multi-zone region network. The availability of services and features can vary across regional data centers. You can view the regional availability for every service in the [Services catalog](https://dataplatform.cloud.ibm.com/data/catalog?target=services&context=wx). Regional availability of the Watson Studio and Watson Machine Learning services Watsonx.ai includes the Watson Studio and Watson Machine Learning services to provide foundation and machine learning model tools. The Watson Studio and Watson Machine Learning services are available in the following regional data centers: * Dallas (us-south), in Texas US * Frankfurt (eu-de), in Germany Regional availability of foundation models The following table shows the IBM Cloud data centers where each foundation model is available. A checkmark indicates that the model is hosted in the region. Table 1. IBM Cloud data center support Model name Dallas Frankfurt flan-t5-xl-3b ✓ flan-t5-xxl-11b ✓ ✓ flan-ul2-20b ✓ ✓ gpt-neox-20b ✓ ✓ granite-13b-chat-v2 ✓ ✓ granite-13b-chat-v1 ✓ ✓ granite-13b-instruct-v2 ✓ ✓ granite-13b-instruct-v1 ✓ ✓ llama-2-13b-chat ✓ ✓ llama-2-70b-chat ✓ ✓ mpt-7b-instruct2 ✓ ✓ mt0-xxl-13b ✓ ✓ starcoder-15.5b ✓ ✓ Tool and environment limitations for the Frankfurt region Table 2. Frankfurt regional limitations Service Limitation Watson Studio If you need a Spark runtime, you must use the [Spark environment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/jupyter-spark.html) in Watson Studio for the SPSS Modeler and notebook editor tools. Watson Studio [Synthetic Data Generator](https://dataplatform.cloud.ibm.com/docs/content/wsj/synthetic/synthetic_data_overview_sd.html) is not supported. Watson Studio [Tuning Studio](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-studio.html) is not supported. Regional availability of watsonx.governance Watsonx.governance Lite and Essentials plans are available only in the Dallas region. Regional availability of Watson OpenScale Watson OpenScale legacy plans are available only in the Frankfurt region. Regional availability of the Cloud Object Storage service The region for the Cloud Object Storage service is Global. Cloud Object Storage buckets for workspaces are Regional buckets. For more information, see [IBM Cloud docs: Cloud Object Storage endpoints and storage locations](https://cloud.ibm.com/docs/cloud-object-storage?topic=cloud-object-storage-endpoints). Learn more * [IBM Cloud docs: IBM Cloud global data centers](https://www.ibm.com/cloud/data-centers) * [Services in the IBM watsonx catalog](https://dataplatform.cloud.ibm.com/docs/content/svc-welcome/cloud-services.html) Parent topic:[Services and integrations](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/svc-int.html)",conceptual,0,train
27A861059A73E83BC02C633EE194DAC6F8ACE374,https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-input-pytorch.html?context=cdpaas&locale=en,Batch deployment input details for Pytorch models,"Batch deployment input details for Pytorch models
Batch deployment input details for Pytorch models Follow these rules when you are specifying input details for batch deployments of Pytorch models. Data type summary table: Data Description Type inline, data references File formats .zip archive that contains JSON files Data sources Input or output data references: * Local or managed assets from the space * Connected (remote) assets: Cloud Object Storage If you are specifying input/output data references programmatically: * Data source reference type depends on the asset type. Refer to the Data source reference types section in [Adding data assets to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html). * If you deploy Pytorch models with ONNX format, specify the keep_initializers_as_inputs=True flag and set opset_version to 9 (always set opset_version to the most recent version that is supported by the deployment runtime). torch.onnx.export(net, x, 'lin_reg1.onnx', verbose=True, keep_initializers_as_inputs=True, opset_version=9) Note: The environment variables parameter of deployment jobs is not applicable. Parent topic:[Batch deployment input details by framework](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-input-by-framework.html)",conceptual,0,train
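The one-line export above assumes that net and x already exist. A self-contained sketch of the same call, with a toy model standing in for the real one and assuming PyTorch is installed, looks like this:

import torch
import torch.nn as nn

# Toy linear-regression model standing in for the real 'net'.
net = nn.Linear(in_features=3, out_features=1)
# Example input tensor standing in for the real 'x'.
x = torch.randn(1, 3)

torch.onnx.export(
    net, x, "lin_reg1.onnx",
    verbose=True,
    keep_initializers_as_inputs=True,  # required for batch deployment of ONNX models
    opset_version=9,                   # use the most recent opset the runtime supports
)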
2D9ACE87F4859BF7EF8CDF4EBBF8307C51034471,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/linearas.html?context=cdpaas&locale=en,Linear-AS node (SPSS Modeler),"Linear-AS node (SPSS Modeler)
Linear-AS node Linear regression is a common statistical technique for classifying records based on the values of numeric input fields. Linear regression fits a straight line or surface that minimizes the discrepancies between predicted and actual output values. Requirements. Only numeric fields and categorical predictors can be used in a linear regression model. You must have exactly one target field (with the role set to Target) and one or more predictors (with the role set to Input). Fields with a role of Both or None are ignored, as are non-numeric fields. (If necessary, non-numeric fields can be recoded using a Derive node.) Strengths. Linear regression models are relatively simple and give an easily interpreted mathematical formula for generating predictions. Because linear regression is a long-established statistical procedure, the properties of these models are well understood. Linear models are also typically very fast to train. The Linear node provides methods for automatic field selection in order to eliminate non-significant input fields from the equation. Note: In cases where the target field is categorical rather than a continuous range, such as yes/no or churn/don't churn, logistic regression can be used as an alternative. Logistic regression also provides support for non-numeric inputs, removing the need to recode these fields.",conceptual,0,train
67FBC6967ED56285CC4EB1FF12D0E2E23B2F7BD5,https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-service-endpoint.html?context=cdpaas&locale=en,Managing the Watson Machine Learning service endpoint,"Managing the Watson Machine Learning service endpoint
Managing the Watson Machine Learning service endpoint You can use IBM Cloud connectivity options for accessing cloud services securely by using service endpoints. When you provision a Watson Machine Learning service instance, you can choose whether you want to access your service through the public internet, which is the default setting, or over the IBM Cloud private network. For more information, refer to [IBM Cloud service endpoints](https://cloud.ibm.com/docs/account?topic=account-vrf-service-endpoint). You can use the Service provisioning page to choose a default endpoint from the following options: * [Public network](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-service-endpoint.html?context=cdpaas&locale=enpublic_net) * [Private network](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-service-endpoint.html?context=cdpaas&locale=enprivate_net) * Both public and private networks Public network You can use public network endpoints to connect to your Watson Machine Learning service instance on the public network. Your environment needs to have internet access to connect. Private network You can use private network endpoints to connect to your IBM Watson Machine Learning service instance over the IBM Cloud private network. After you configure your Watson Machine Learning service to use private endpoints, the service is not accessible from the public internet. Private URLs for Watson Machine Learning Private URLs for Watson Machine Learning for each region are as follows: * Dallas - [https://private.us-south.ml.cloud.ibm.com](https://private.us-south.ml.cloud.ibm.com) * London - [https://private.eu-gb.ml.cloud.ibm.com](https://private.eu-gb.ml.cloud.ibm.com) * Frankfurt - [https://private.eu-de.ml.cloud.ibm.com](https://private.eu-de.ml.cloud.ibm.com) * Tokyo - [https://private.jp-tok.ml.cloud.ibm.com](https://private.jp-tok.ml.cloud.ibm.com) Using IBM Cloud service endpoints Follow these steps to enable private network endpoints on your clusters: 1. Use [IBM Cloud CLI](https://cloud.ibm.com/docs/cli?topic=cli-getting-started) to enable your account to use IBM Cloud service endpoints. 2. Provision a Watson Machine Learning service instance with private endpoints. Provisioning with service endpoints You can provision a Watson Machine Learning service instance with a service endpoint by using the IBM Cloud UI or the IBM Cloud CLI. Provisioning a service endpoint with IBM Cloud UI To configure the endpoints of your IBM Watson Machine Learning service instance, you can use the Endpoints field on the IBM Cloud catalog page. You can configure a public, private, or mixed network. IBM Cloud CLI If you provision an IBM Watson Machine Learning service instance by using the IBM Cloud CLI, use the command-line option service-endpoints to configure the Watson Machine Learning endpoints. You can specify the value public (the default value), private, or public-and-private: ibmcloud resource service-instance-create pm-20 --service-endpoints For example: ibmcloud resource service-instance-create wml-instance pm-20 standard us-south -p --service-endpoints private or ibmcloud resource service-instance-create wml-instance pm-20 standard us-south --service-endpoints public-and-private Parent topic:[First steps](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-overview.html)",how-to,1,train
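After the instance is provisioned with a private endpoint, clients simply use the private URL in place of the public one. For example, with the ibm-watson-machine-learning Python client (a sketch; the API key value is a placeholder, and the client must run inside the IBM Cloud private network to reach this URL), the Dallas private endpoint would be configured like this:

from ibm_watson_machine_learning import APIClient

wml_credentials = {
    "url": "https://private.us-south.ml.cloud.ibm.com",  # private Dallas endpoint
    "apikey": "YOUR_IBM_CLOUD_API_KEY",                  # placeholder value
}

client = APIClient(wml_credentials)
print(client.version)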
299CEE894DFF422AAC8BF49B53CAC700DE1B172D,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_function_ref_global.html?context=cdpaas&locale=en,Global functions (SPSS Modeler),"Global functions (SPSS Modeler)
Global functions The functions @MEAN, @SUM, @MIN, @MAX, and @SDEV work on, at most, all of the records read up to and including the current one. In some cases, however, it is useful to be able to work out how values in the current record compare with values seen in the entire data set. Using a Set Globals node to generate values across the entire data set, you can access these values in a CLEM expression using the global functions. For example, @GLOBAL_MAX(Age) returns the highest value of Age in the data set, while the expression (Value - @GLOBAL_MEAN(Value)) / @GLOBAL_SDEV(Value) expresses the difference between this record's Value and the global mean as a number of standard deviations. You can use global values only after they have been calculated by a Set Globals node. CLEM global functions Table 1. CLEM global functions Function Result Description @GLOBAL_MAX(FIELD) Number Returns the maximum value for FIELD over the whole data set, as previously generated by a Set Globals node. FIELD must be the name of a numeric, date/time/datetime, or string field. If the corresponding global value has not been set, an error occurs. @GLOBAL_MIN(FIELD) Number Returns the minimum value for FIELD over the whole data set, as previously generated by a Set Globals node. FIELD must be the name of a numeric, date/time/datetime, or string field. If the corresponding global value has not been set, an error occurs. @GLOBAL_SDEV(FIELD) Number Returns the standard deviation of values for FIELD over the whole data set, as previously generated by a Set Globals node. FIELD must be the name of a numeric field. If the corresponding global value has not been set, an error occurs. @GLOBAL_MEAN(FIELD) Number Returns the mean average of values for FIELD over the whole data set, as previously generated by a Set Globals node. FIELD must be the name of a numeric field. If the corresponding global value has not been set, an error occurs. @GLOBAL_SUM(FIELD) Number Returns the sum of values for FIELD over the whole data set, as previously generated by a Set Globals node. FIELD must be the name of a numeric field. If the corresponding global value has not been set, an error occurs.",conceptual,0,train
74706148818BD2ACE30029492DD8AD7D47283EDC,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/userinput.html?context=cdpaas&locale=en,User Input node (SPSS Modeler),"User Input node (SPSS Modeler)
User Input node The User Input node provides an easy way for you to create synthetic data--either from scratch or by altering existing data. This is useful, for example, when you want to create a test dataset for modeling.",conceptual,0,train
E5895BC081EDBF0CD7340015DECD0D0180AAC44A,https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-start.html?context=cdpaas&locale=en,Creating a Federated Learning experiment,"Creating a Federated Learning experiment
Creating a Federated Learning experiment Learn how to create a Federated Learning experiment to train a machine learning model. Watch this short overview video of how to create a Federated Learning experiment. Video disclaimer: Some minor steps and graphical elements in this video might differ from your platform. This video provides a visual method to learn the concepts and tasks in this documentation. Follow these steps to create a Federated Learning experiment: * [Set up your system](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-setup.html) * [Creating the initial model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-models.html) * [Create the data handler](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-handler.html) * [Starting the aggregator (Admin)](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-agg.html) * [Connecting to the aggregator (Party)](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-conn.html) * [Monitoring and saving the model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-mon.html) Parent topic:[IBM Federated Learning](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fed-lea.html)",how-to,1,train
1D46D1240377AEA562F14A560CB9F24DF33EDF88,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/extension_output.html?context=cdpaas&locale=en,Extension Output node (SPSS Modeler),"Extension Output node (SPSS Modeler)
Extension Output node With the Extension Output node, you can run R scripts or Python for Spark scripts to produce output. After adding the node to your canvas, double-click the node to open its properties.",conceptual,0,train
27FCAB0041FEB8B819E329A319B12D2F4167318A,https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-job-spss.html?context=cdpaas&locale=en,Creating SPSS Modeler jobs,"Creating SPSS Modeler jobs
Creating SPSS Modeler jobs You can create a job to run an SPSS Modeler flow. To create an SPSS Modeler job: 1. In SPSS Modeler, click the Create a job icon  from the toolbar and select Create a job. A wizard will appear. Click Next to proceed through each page of the wizard as described here. 2. Define the job details by entering a name and a description (optional). If desired, you can also specify retention settings for the job. Select Job run retention settings to set how long to retain finished job runs and job run artifacts such as logs. You can select one of the following retention methods. Be mindful when changing the default as too many job run files can quickly use up project storage. * By duration (days). Specify the number of days to retain job runs and job artifacts. The retention value is set to 7 days by default (the last 7 days of job runs retained). * By amount. Specify the last number of finished job runs and job artifacts to keep. The retention value is set to 200 jobs by default. 3. On the Flow parameters page, you can set values for flow parameters if any exist for the flow. They are, in effect, user-defined variables that are saved and persisted with the flow. Parameters are often used in scripting to control the behavior of the script by providing information about fields and values that don't need to be hard coded in the script. See [Setting properties for flows](https://dataplatform.cloud.ibm.com/docs/content/wsd/flow_properties.html) for more information. For example, your flow might contain a parameter called age_param that you choose to set to 40 here, and a parameter called bp_param you might set to HIGH. 4. On the Configuration page, you can choose whether the job will run the entire flow or one or more branches of the flow. 5. On the Schedule page, you can optionally add a one-time or repeating schedule. If you define a start day and time without selecting Repeat, the job will run exactly one time at the specified day and time. If you define a start date and time and you select Repeat, the job will run for the first time at the timestamp indicated in the Repeat section. You can't change the time zone; the schedule uses your web browser's time zone setting. If you exclude certain weekdays, the job might not run as you would expect. The reason might be due to a discrepancy between the time zone of the user who creates the schedule, and the time zone of the compute node where the job runs. 6. Optionally turn on notifications for the job. You can select the type of alerts to receive. 7. Review the job settings. Click Save to create the job. The SPSS Modeler job is listed under Jobs in your project. Learn more * [Viewing job details](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/jobs.htmlview-job-details) * [SPSS Modeler documentation](https://dataplatform.cloud.ibm.com/docs/content/wsd/spss-modeler.html) Parent topic: [Jobs](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/jobs.html)",how-to,1,train
448502B5D06CD5BCAA58F569AA43AA2E0394A794,https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=en,Troubleshoot Watson Machine Learning,"Troubleshoot Watson Machine Learning
Troubleshoot Watson Machine Learning Here are the answers to common troubleshooting questions about using IBM Watson Machine Learning. Getting help and support for Watson Machine Learning If you have problems or questions when using Watson Machine Learning, you can get help by searching for information or by asking questions through a forum. You can also open a support ticket. When using the forums to ask a question, tag your question so that it is seen by the Watson Machine Learning development teams. If you have technical questions about Watson Machine Learning, post your question on [Stack Overflow ](http://stackoverflow.com/search?q=machine-learning+ibm-bluemix) and tag your question with ""ibm-bluemix"" and ""machine-learning"". For questions about the service and getting started instructions, use the [IBM developerWorks dW Answers ](https://developer.ibm.com/answers/topics/machine-learning/?smartspace=bluemix) forum. Include the ""machine-learning"" and ""bluemix"" tags. Contents * [Authorization token has not been provided](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=ents_missing_authorization_token) * [Invalid authorization token](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=ents_invalid_authorization_token) * [Authorization token and instance_id which was used in the request are not the same](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=ents_not_matching_authorization_token) * [Authorization token is expired](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=ents_expired_authorization_token) * [Public key needed for authentication is not available](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=ents_missing_public_key) * [Operation timed out after {{timeout}}](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=ents_operation_timeout) * [Unhandled exception of type {{type}} with {{status}}](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=ents_unhandled_exception_with_status) * [Unhandled exception of type {{type}} with {{response}}](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=ents_unhandled_exception_with_response) * [Unhandled exception of type {{type}} with {{json}}](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=ents_unhandled_exception_with_json) * [Unhandled exception of type {{type}} with {{message}}](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=ents_unhandled_exception_with_message) * [Requested object could not be found](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=ents_not_found) * [Underlying database reported too many requests](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=ents_too_many_cloudant_requests) * [The definition of the evaluation is not defined neither in the artifactModelVersion nor in the deployment. 
It needs to be specified at least in one of the places](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=ents_missing_evaluation_definition) * [Data module not found in IBM Federated Learning](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=enfl_data_module_missing) * [Evaluation requires learning configuration specified for the model](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=ents_missing_learning_configuration) * [Evaluation requires spark instance to be provided in X-Spark-Service-Instance header](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=ents_missing_spark_definition_for_evaluation) * [Model does not contain any version](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=ents_missing_latest_model_version) * [Patch operation can only modify existing learning configuration](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=ents_patch_non_existing_learning_configuration) * [Patch operation expects exactly one replace operation](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=ents_patch_multiple_ops) * [The given payload is missing required fields: FIELD or the values of the fields are corrupted](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=ents_invalid_request_payload) * [Provided evaluation method: METHOD is not supported. Supported values: VALUE](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=ents_evaluation_method_not_supported) * [There can be only one active evaluation per model. Request could not be completed because of existing active evaluation: {{url}}](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=ents_active_evaluation_conflict) * [The deployment type {{type}} is not supported](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=ents_not_supported_deployment_type) * [Incorrect input: ({{message}})](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=ents_deserialization_error) * [Insufficient data - metric {{name}} could not be calculated](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=ents_missing_metric) * [For type {{type}} spark instance must be provided in X-Spark-Service-Instance header](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=ents_missing_prediction_spark_definition) * [Action {{action}} has failed with message {{message}}](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=ents_http_client_error) * [Path {{path}} is not allowed. 
Only allowed path for patch stream is /status](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=ents_wrong_stream_patch_path) * [Patch operation is not allowed for instance of type {{$type}}](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=ents_patch_not_supported) * [Data connection {{data}} is invalid for feedback_data_ref](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=ents_invalid_feedback_data_connection) * [Path {{path}} is not allowed. Only allowed path for patch model is /deployed_version/url or /deployed_version/href for V2](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=ents_patch_model_path_not_allowed) * [Parsing failure: {{msg}}](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=ents_parsing_error) * [Runtime environment for selected model: {{env}} is not supported for learning configuration. Supported environments: - [{{supported_envs}}]](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=ents_runtime_env_not_supported) * [Current plan \'{{plan}}\' only allows {{limit}} deployments](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=ents_deployments_plan_limit_reached) * [Database connection definition is not valid ({{code}})](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=ents_sql_error) * [There were problems while connecting underlying {{system}}](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=ents_stream_tcp_error) * [Error extracting X-Spark-Service-Instance header: ({{message}})](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=ents_spark_header_deserialization_error) * [This functionality is forbidden for non beta users](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=ents_not_beta_user) * [{{code}} {{message}}](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=ents_underlying_api_error) * [Rate limit exceeded](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=ents_rate_limit_exceeded) * [Invalid query parameter {{paramName}} value: {{value}}](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=ents_invalid_query_parameter_value) * [Invalid token type: {{type}}](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=ents_invalid_token_type) * [Invalid token format. 
Bearer token format should be used](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=ents_invalid_token_format) * [Input JSON file is missing or invalid: 400](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=enos_invalid_input) * [Authorization token has expired: 401](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=enos_expired_authorization_token) * [Unknown deployment identification:404](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=enos_unkown_depid) * [Internal server error:500](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=enos_internal_error) * [Invalid type for ml_artifact: Pipeline](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=enos_invalid_type_artifact) * [ValueError: Training_data_ref name and connection cannot be None, if Pipeline Artifact is not given.](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=enpipeline_error) Authorization token has not been provided. What's happening The REST API cannot be invoked successfully. Why it's happening An authorization token has not been provided in the Authorization header. How to fix it Pass an authorization token in the Authorization header. Invalid authorization token. What's happening The REST API cannot be invoked successfully. Why it's happening The authorization token that was provided cannot be decoded or parsed. How to fix it Pass a correct authorization token in the Authorization header. Authorization token and instance_id which was used in the request are not the same. What's happening The REST API cannot be invoked successfully. Why it's happening The authorization token that was used was not generated for the service instance against which it was used. How to fix it Pass an authorization token in the Authorization header that corresponds to the service instance that is being used. Authorization token is expired. What's happening The REST API cannot be invoked successfully. Why it's happening The authorization token is expired. How to fix it Pass an unexpired authorization token in the Authorization header. Public key needed for authentication is not available. What's happening The REST API cannot be invoked successfully. Why it's happening This is an internal service issue. How to fix it The issue needs to be fixed by the support team. Operation",how-to,1,train
A10DE0E026BA0CF397108621D5927E16436ACF58,https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/admin-appid-tips.html?context=cdpaas&locale=en,Configuring App ID with your identity provider,"Configuring App ID with your identity provider
Configuring App ID with your identity provider To use App ID for user authentication for IBM watsonx, you configure App ID as a service on IBM Cloud. You configure an identity provider (IdP) such as Azure Active Directory. You then configure App ID and the identity provider to communicate with each other to grant access to authorized users. To configure App ID and your identity provider to work together, follow these steps: * [Configure your identity provider to communicate with IBM Cloud](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/admin-appid-tips.html?context=cdpaas&locale=encfg_idp) * [Configure App ID to communicate with your identify provider](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/admin-appid-tips.html?context=cdpaas&locale=encfg_appid) * [Configure IAM to enable login through your identity provider](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/admin-appid-tips.html?context=cdpaas&locale=encfg_iam) Configuring your identity provider To configure your identity provider to communicate with IBM Cloud, you enter the entityID and Location into your SAML configuration for your identity provider. An overview of the steps for configuring Azure Active Directory is provided as an example. Refer to the documentation for your identity provider for detailed instructions for its platform. The prerequisites for configuring App ID with an identity provider are: * An IBM Cloud account * An App ID instance * An identity provider, for example, Azure Active Directory To configure your identity provider for SAML-based single sign-on: 1. Download the SAML metadata file from App ID to find the values for entityID and Location. These values are entered into the identity provider configuration screen to establish communication with App ID on IBM Cloud. (The corresponding values from the identity provider, plus the primary certificate, are entered in App ID. See [Configuring App ID](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/admin-appid-tips.html?context=cdpaas&locale=encfg_appid)). * In App ID, choose Identity providers > SAML 2.0 federation. * Download the appid-metadata.xml file. * Find the values for entityID and Location. 2. Copy the values for entityID and Location from the SAML metadata file and paste them into the corresponding fields on your identity provider. For Azure Active Directory, the fields are located in Section 1: Basic SAML Configuration in the Enterprise applications configuration screen. App ID value Active Directory field Example entityID Identifier (Entity ID) urn:ibm:cloud:services:appid:value Location Reply URL (Assertion Consumer Service URL) https://us-south.appid.cloud.ibm.com/saml2/v1/value/login-acs 3. In Section 2: Attributes & Claims for Azure Active Directory, you map the username parameter to user.mail to identify the users by their unique email address. IBM watsonx requires that you set username to the user.mail attribute. For other identity providers, a similar field that uniquely identifies users must be mapped to user.mail. Configuring App ID You establish communication between App ID and your identity provider by entering the SAML values from the identity provider into the corresponding App ID fields. An example is provided for configuring App ID to communicate with an Active Directory Enterprise Application. 1. Choose Identity providers > SAML 2.0 federation and complete the Provide metadata from SAML IdP section. 2. 
Download the Base64 certificate from Section 3: SAML Certificates in Active Directory (or your identity provider) and paste it into the Primary certificate field. 3. Copy the values from Section 4: Set up your-enterprise-application in Active Directory into the corresponding fields in Provide metadata from SAML IdP in IBM App ID. App ID field Value from Active Directory Entity ID Azure AD Identifier Sign in URL Login URL Primary certificate Certificate (Base64) 4. Click Test on the App ID page to test that App ID can connect to the identity provider. The happy face response indicates that App ID can communicate with the identity provider.  Configuring IAM You must assign the appropriate role to the users in IBM Cloud IAM and also configure your identity provider in IAM. Users require at least the Viewer role for All Identity and IAM enabled services. Create an identity provider reference in IBM Cloud IAM Create an identity provider reference to connect your external repository to your IBM Cloud account. 1. Navigate to Manage > Access(IAM) > Identity providers. 2. For the type, choose IBM Cloud App ID. 3. Click Create. 4. Enter a name for the identity provider. 5. Select the App ID service instance. 6. Select how to on board users. Static adds users when they log in for the first time. 7. Enable the identity provider for logging in by checking the Enable for account login? box. 8. If you have more than one identity providers, set the identity provider as the default by checking the box. 9. Click Create. Change the App ID login alias A login alias is generated for App ID. Users enter the alias when logging on to IBM Cloud. You can change the default alias string to be easier to remember. 1. Navigate to Manage > Access(IAM) > Identity providers. 2. Select IBM Cloud App ID as the type. 3. Edit the Default IdP URL to make it simpler. For example, https://cloud.ibm.com/authorize/540f5scc241a24a70513961 can be changed to https://cloud.ibm.com/authorize/my-company. Users log in with the alias my-company instead of",how-to,1,train
8CF8260D0474AD73D9878CCD361C83102B724733,https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-config.html?context=cdpaas&locale=en,Configuring pipeline nodes,"Configuring pipeline nodes
Configuring pipeline nodes Configure the nodes of your pipeline to specify inputs and to create outputs as part of your pipeline. Specifying the workspace scope By default, the scope for a pipeline is the project that contains the pipeline. You can explicitly specify a scope other than the default, to locate an asset used in the pipeline. The scope is the project, catalog, or space that contains the asset. From the user interface, you can browse for the scope. Changing the input mode When you are configuring a node, you can specify any resources that include data and notebooks in various ways. Such as directly entering a name or ID, browsing for an asset, or by using the output from a prior node in the pipeline to populate a field. To see what options are available for a field, click the input icon for the field. Depending on the context, options can include: * Select resource: use the asset browser to find an asset such as a data file. * Assign pipeline parameter: assign a value by using a variable configured with a pipeline parameter. For more information, see [Configuring global objects](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-flow-param.html). * Select from another node: use the output from a node earlier in the pipeline as the value for this field. * Enter the expression: enter code to assign values or identify resources. For more information, see [Coding elements](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-expr-builder.html). Pipeline nodes and parameters Configure the following types of pipeline nodes: Copy nodes Use Copy nodes to add assets to your pipeline or to export pipeline assets. * Copy assets Copy selected assets from a project or space to a nonempty space. You can copy these assets to a space: - AutoAI experiment - Code package job - Connection - Data Refinery flow - Data Refinery job - Data asset - Deployment job - Environment - Function - Job - Model - Notebook - Notebook job - Pipelines job - Script - Script job - SPSS Modeler job #### Input parameters |Parameter|Description| |---|---| |Source assets |Browse or search for the source asset to add to the list. You can also specify an asset with a pipeline parameter, with the output of another node, or by entering the asset ID| |Target|Browse or search for the target space| |Copy mode|Choose how to handle a case where the flow tries to copy an asset and one of the same name exists. One of: ignore, fail, overwrite| #### Output parameters |Parameter|Description| |---|---| |Output assets |List of copied assets| * Export assets Export selected assets from the scope, for example, a project or deployment space. The operation exports all the assets by default. You can limit asset selection by building a list of resources to export. #### Input parameters |Parameter|Description| |---|---| |Assets |Choose Scope to export all exportable items or choose List to create a list of specific items to export| |Source project or space |Name of project or space that contains the assets to export| |Exported file |File location for storing the export file| |Creation mode (optional)|Choose how to handle a case where the flow tries to create an asset and one of the same name exists. One of: ignore, fail, overwrite| #### Output parameters |Parameter|Description| |---|---| |Exported file|Path to exported file| Notes: - If you export a project that contains a notebook, the latest version of the notebook is included in the export file. 
If the Pipeline with the Run notebook job node was configured to use a different notebook version other than the latest version, the exported Pipeline is automatically reconfigured to use the latest version when imported. This might produce unexpected results or require some reconfiguration after the import. - If assets are self-contained in the exported project, they are retained when you import a new project. Otherwise, some configuration might be required following an import of exported assets. * Import assets Import assets from a ZIP file that contains exported assets. #### Input parameters |Parameter|Description| |---|---| |Path to import target |Browse or search for the assets to import| |Archive file to import |Specify the path to a ZIP file or archive| Notes: After you import a file, paths and references to the imported assets are updated, following these rules: - References to assets from the exported project or space are updated in the new project or space after the import. - If assets from the exported project refer to external assets (included in a different project), the reference to the external asset will persist after the import. - If the external asset no longer exists, the parameter is replaced with an empty value and you must reconfigure the field to point to a valid asset. Create nodes Configure the nodes for creating assets in your pipeline. * Create AutoAI experiment Use this node to train an [AutoAI classification or",how-to,1,train
99B0C1C962E0642E5B877747ED37E9BB27238664,https://dataplatform.cloud.ibm.com/docs/content/dataview/chart_creation_tsne.html?context=cdpaas&locale=en,t-SNE charts,"t-SNE charts
t-SNE charts T-distributed Stochastic Neighbor Embedding (t-SNE) is a machine learning algorithm for visualization. t-SNE charts model each high-dimensional object by a two- or three-dimensional point in such a way that similar objects are modeled by nearby points and dissimilar objects are modeled by distant points with high probability.",conceptual,0,train
759B6927189FEA6BE3124BF79FA527873CB84EA6,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/ocsvm.html?context=cdpaas&locale=en,One-Class SVM node (SPSS Modeler),"One-Class SVM node (SPSS Modeler)
One-Class SVM node The One-Class SVM© node uses an unsupervised learning algorithm. The node can be used for novelty detection. It detects the soft boundary of a given set of samples so that it can then classify new points as belonging to that set or not. This One-Class SVM modeling node is implemented in Python and requires the scikit-learn© Python library. For details about the scikit-learn library, see [Support Vector Machines](http://scikit-learn.org/stable/modules/svm.htmlsvm-outlier-detection)^1^. The Modeling tab on the palette contains the One-Class SVM node and other Python nodes. Note: One-Class SVM is used for unsupervised outlier and novelty detection. In most cases, we recommend using a known, ""normal"" dataset to build the model so the algorithm can set a correct boundary for the given samples. Parameters for the model – such as nu, gamma, and kernel – impact the result significantly, so you may need to experiment with these options until you find the optimal settings for your situation. ^1^Smola, Schölkopf. ""A Tutorial on Support Vector Regression."" Statistics and Computing Archive, vol. 14, no. 3, August 2004, pp. 199-222. (http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.114.4288)",conceptual,0,train
773F81DD69D3ADBBE1998FF5974CA83347EFFC76,https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/data-privacy-rights.html?context=cdpaas&locale=en,Data privacy rights,"Data privacy rights
Data privacy rights Risks associated with input. Training and tuning phase. Privacy. Amplified. Description In some countries, privacy laws give individuals the right to access, correct, verify, or remove certain types of information that companies hold or process about them. Tracking the usage of an individual’s personal information in training a model and providing appropriate rights to comply with such laws can be a complex endeavor. Why is data privacy rights a concern for foundation models? The identification or improper usage of data could lead to violation of privacy laws. Improper usage or a request for data removal could force organizations to retrain the model, which is expensive. In addition, business entities could face fines, reputational harms, and other legal consequences if they fail to comply with data privacy rules and regulations. Example Right to Be Forgotten (RTBF) As stated in the article, laws in multiple locales, including Europe (GDPR); Canada (CPPA); and Japan (APPI), grant users the right for their personal data to be “forgotten” by technology (Right To Be Forgotten). However, emerging and increasingly popular AI services, such as LLMs, present new challenges for the right to be forgotten (RTBF). According to Data61’s research, the only way for users to identify usage of their personal information in an LLM is “by either inspecting the original training dataset or perhaps prompting the model.” However, training data is either not public or companies do not disclose it, citing safety and other concerns, and guardrails may prevent users from accessing the information via prompting. Due to these barriers, users cannot initiate RTBF procedures and companies deploying LLMs may be unable to meet RTBF laws. Sources: [Zhang et al., Sept 2023](https://arxiv.org/pdf/2307.03941.pdf) Example Lawsuit About LLM Unlearning According to the report, a lawsuit was filed against Google that alleges the use of copyright material and personal information as training data for its AI systems, which includes its Bard chatbot. Opt-out and deletion rights are guaranteed rights for California residents under the CCPA and children in the United States below 13 under the COPPA. The plaintiffs allege that there is no way for Bard to “unlearn” or fully remove all the scraped PI it has been fed. The plaintiffs note that Bard’s privacy notice states that Bard conversations cannot be deleted by the user once they have been reviewed and annotated by the company and may be kept up to 3 years, which plaintiffs allege further contributes to non-compliance with these laws. Sources: [Reuters, July 2023](https://www.reuters.com/legal/litigation/google-hit-with-class-action-lawsuit-over-ai-data-scraping-2023-07-11/) [J.L. v. Alphabet Inc., July 2023](https://fingfx.thomsonreuters.com/gfx/legaldocs/myvmodloqvr/GOOGLE%20AI%20LAWSUIT%20complaint.pdf) Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)",conceptual,0,train
8BE1A39CDBAAA858051954548474DD3E307B20CB,https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-use-cases.html?context=cdpaas&locale=en,Setting up an AI use case,"Setting up an AI use case
Setting up an AI use case Create an AI use case to define a business problem and track the related AI assets through their lifecycle. View details about governed assets or generate reports to help meet governance and compliance goals. Creating AI use cases in an inventory An inventory presents a view of all the AI use cases that you can access that are assigned to that inventory. Use multiple inventories to manage groups of AI use cases. For example, you might create an inventory for governing prompt templates and another for governing machine learning assets. Add collaborators to inventories so they can view or contribute to AI uses cases. Before you begin * Enable watsonx.governance and provision Watson OpenScale. * You must have access to an existing inventory or have sufficient access to [create a new inventory](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-inventory-manage.html). For details on watsonx.governance roles and managing access for governance, see [Collaboration roles for governance](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-collab-roles.html). If you do not have sufficient access to create or contribute to an inventory, contact your administrator. Viewing AI use cases 1. Click AI use cases from the navigation menu to view all existing AI use cases you can access, or click Request a model with an AI use case from the home page. From the primary view, you can search for a specific use case or filter the view to focus on certain use cases. For example, filter the view by Inventory to view all the AI use cases in a particular inventory.  2. Click the name of an AI use case to open it and view the details on these tabs: * Overview shows the essential details for the use case. * Lifecycle shows the assets that are tracked in the use case, which is organized by the phases of the AI lifecycle. * Access lists collaborators for the use case and assigned roles. 3. Click the name of an asset to view the associated factsheet. Generating a report from a use case You can generate reports from use cases or factsheets to share or preserve records. By default, the reports generate these default reports: * Basic report contains the set of facts visible on the Overview and Lifecycle tabs. * Full report contains all facts about the use case and the models, prompt templates, and deployments it contains. The inventory admin can customize reports to include custom branding or to change the fields included in reports. For details, see [Customizing report templates](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-manage-reports.html). To create a report: 1. Open a use case in an inventory. 2. Click the Export report icon to generate a PDF record of the use case. 3. Choose a format option and export the report. Creating an AI use case 1. Click AI use cases from the navigation menu. 2. Click New AI use case. 3. Enter a name and choose an inventory for the use case. If you do not have access to an inventory, you must create one before you can define a use case. See [Managing an inventory for AI use cases](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-inventory-manage.html) for details. 4. Complete optional fields as needed: Option Notes Description Define the business problem and provide any details about the proposed solution. Risk level Assign a risk level that reflects the nature of the business problem and the anticipated solution according to your governance policies. 
For example, assign a risk level of High for a model that processes sensitive personal data. Supporting data Enter links to supporting documents that support or clarify the purpose of the use case Owner For a use case with multiple owners, you can edit ownership Status By default, a new AI use case is assigned a status of default, as it is typically waiting for assets to be added for tracking. You can manually change the status. For example, change to Awaiting development if you do not require any additional review or approval for a requested model. Change to Developed if you already have a model to add to governance. Review the complete list of status options in the following section. Tags Assign or create tags to make your AI use cases easier to find or group. Use case status details Update the status field to provide users of the use case an immediate reflection of the current state. Status Description Ready for use case approval Use case is defined and ready for review Use case approved Use case ready for model or prompt template development Use case rejected Use case not ready for model or prompt development Awaiting development Awaiting delivery of AI asset (model or prompt) Development in progress AI asset (model or prompt) in development Developed Trained model or prompt template added to use case Ready for AI asset validation AI asset ready for testing or evaluation Validation complete AI",how-to,1,train
2757F7F9B9E4975B9E53DA5B4508FF9D7A41A0A4,https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-text-analysis.html?context=cdpaas&locale=en,Creating a text analysis experiment,"Creating a text analysis experiment
Creating a text analysis experiment Use AutoAI's text analysis feature to perform text analysis of your experiments. For example, perform basic sentiment analysis to predict an outcome based on text comments. Note: Text analysis is only available for AutoAI classification and regression experiments. This feature is not available for time series experiments. Text analysis overview When you create an experiment that uses the text analysis feature, the AutoAI process uses the word2vec algorithm to transform the text into vectors, then compares the vectors to establish the impact on the prediction column. The word2vec algorithm takes a corpus of text as input and outputs a set of vectors. By turning text into a numerical representation, it can detect and compare similar words. When trained with enough data, word2vec can make accurate predictions about a word's meaning or relationship to other words. The predictions can be used to analyze text and guess at the meaning in sentiment analysis applications. During the feature engineering phase of the experiment training, 20 features are generated for the text column, by using the word2vec algorithm. Auto-detection of text features is based on analyzing the number of unique values in a column and the number of tokens in a record (minimum number = 3). If the number of unique values is less than number of all values divided by 5, the column is not treated as text. When the experiment completes, you can review the feature engineering results from the pipeline details page. You can also save a pipeline as a notebook, where you can review the transformations and see a visualization of the transformations. Note: When you review the experiment, if you determine that a text column was not detected and processed by the auto-detection, you can specify the text column manually in the experiment settings. In this example, the comments for a fictional car rental company are used to train a model that predicts a satisfaction rating when a new comment is entered. Watch this short video to see this example and then read further details about the text feature below the video. Video disclaimer: Some minor steps and graphical elements in this video might differ from your platform. This video provides a visual method to learn the concepts and tasks in this documentation. * Transcript Synchronize transcript with video Time Transcript 00:00 In this video you'll see how to create an AutoAI experiment to perform sentiment analysis on a text file. 00:09 You can use the text feature engineering to perform text analysis in your experiments. 00:15 For example, perform basic sentiment analysis to predict an outcome based on text comments. 00:22 Start in a project and add an asset to that project, a new AutoAI experiment. 00:29 Just provide a name, description, select a machine learning service, and then create the experiment. 00:38 When the AutoAI experiment builder displays, you can add the data set. 00:43 In this case, the data set is already stored in the project as a data asset. 00:48 Select the asset to add to the experiment. 00:53 Before continuing, preview the data. 00:56 This data set has two columns. 00:59 The first contains the customers' comments and the second contains either 0, for ""Not satisfied"", or 1, for ""Satisfied"". 01:08 This isn't a time series forecast, so select ""No"" for that option. 01:13 Then select the column to predict, which is ""Satisfaction"" in this example. 
01:19 AutoAI determines that the satisfaction column contains two possible values, making it suitable for a binary classification model. 01:28 And the positive class is 1, for ""Satisfied"". 01:32 Open the experiment settings if you'd like to customize the experiment. 01:36 On the data source panel, you'll see some options for the text feature engineering. 01:41 You can automatically select the text columns, or you can exercise more control by manually specifying the columns for text feature engineering. 01:52 You can also select how many vectors to create for each column during text feature engineering. 01:58 A lower number faster and a higher number is more accurate, but slower. 02:03 Now, run the experiment to view the transformations and progress. 02:09 When you create an experiment that uses the text analysis feature, the AutoAI process uses the word2vec algorithm to transform the text into vectors, then compares the vectors to establish the impact on the prediction column. 02:23 During the feature engineering phase of the experiment training, twenty features are generated for the text column using the word2vec algorithm. 02:33 When the experiment completes, you can review the feature engineering results from the pipeline details page. 02:40 On the Features summary panel, you can review the text transformations. 02:45 You can see that AutoAI created several text features by applying the algorithm function to the column elements, along with the feature importance showing",how-to,1,train
BAB82891CA84875B6EEC64974558FC838197C99A,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/twostepnuggetnodeslots.html?context=cdpaas&locale=en,applytwostepnode properties,"applytwostepnode properties
applytwostepnode properties You can use TwoStep modeling nodes to generate a TwoStep model nugget. The scripting name of this model nugget is applytwostepnode. For more information on scripting the modeling node itself, see [twostepnode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/twostepnodeslots.htmltwostepnodeslots). applytwostepnode properties Table 1. applytwostepnode properties applytwostepnode Properties Values Property description enable_sql_generation udf
native When using data from a database, SQL code can be pushed back to the database for execution, providing superior performance for many operations.",conceptual,0,train
11A093CB8F1D24EA066663B3991084A84FC32BF2,https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/create-jobs.html?context=cdpaas&locale=en,Creating jobs in deployment spaces,"Creating jobs in deployment spaces
Creating jobs in deployment spaces A job is a way of running a batch deployment, or a self-contained asset like a script, notebook, code package, or flow in Watson Machine Learning. You can select the input and output for your job and choose to run it manually or on a schedule. From a deployment space, you can create, schedule, run, and manage jobs. Creating a batch deployment job Follow these steps when you are creating a batch deployment job: Important: You must have an existing batch deployment to create a batch job. 1. From the Deployments tab, select your deployment and click New job. The Create a job dialog box opens. 2. In the Define details section, enter your job name, an optional description, and click Next. 3. In the Configure section, select a hardware specification. You can follow these steps to optionally configure environment variables and job run retention settings: * Optional: If you are deploying a Python script, an R script, or a notebook, then you can enter environment variables to pass parameters to the job. Click Environment variables to enter the key - value pair. * Optional: To avoid exhausting resources by retaining all historical job metadata, follow one of these options: * Click By amount to set thresholds for saving a set number of job runs and associated logs. * Click By duration (days) to set thresholds for saving artifacts for a specified number of days. 4. Optional: In the Schedule section, toggle the Schedule off button to schedule a run. You can set a date and time for start of schedule and set a schedule for repetition. Click Next. Note: If you don't specify a schedule, the job runs immediately. 5. Optional: In the Notify section, toggle the Off button to turn on notifications associated with this job. Click Next. Note: You can receive notifications for three types of events: success, warning, and failure. 6. In the Choose data section, provide inline data that corresponds with your model schema. You can provide input in JSON format. Click Next. See [Example JSON payload for inline data](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/create-jobs.html?context=cdpaas&locale=enexample-json). 7. In the Review and create section, verify your job details, and click Create and run. Notes: * Scheduled jobs display on the Jobs tab of the deployment space. * Results of job runs are written to the specified output file and saved as a space asset. * A data asset can be a data source file that you promoted to the space, a connected data source, or tables from databases and files from file-based data sources. * If you exclude certain weekdays in your job schedule, the job might not run as you would expect. The reason is a discrepancy between the time zone of the user who creates the schedule, and the time zone of the main node where the job runs. * When you create or modify a scheduled job, an API key is generated. Future runs use this generated API key. Example JSON payload for inline data { ""deployment"": { ""id"": """" }, ""space_id"": """", ""name"": ""test_v4_inline"", ""scoring"": { ""input_data"": [{ ""fields"": [""AGE"", ""SEX"", ""BP"", ""CHOLESTEROL"", ""NA"", ""K""], ""values"": [[47, ""M"", ""LOW"", ""HIGH"", 0.739, 0.056], [47, ""M"", ""LOW"", ""HIGH"", 0.739, 0.056]] }] } } Queuing and concurrent job executions The maximum number of concurrent jobs for each deployment is handled internally by the deployment service. For batch deployment, by default, two jobs can be run concurrently. 
Any deployment job request for a batch deployment that already has two running jobs is placed in a queue for execution later. When any of the running jobs is completed, the next job in the queue is run. The queue has no size limit. Limitation on using large inline payloads for batch deployments Batch deployment jobs that use large inline payload might get stuck in starting or running state. Tip: If you provide huge payloads to batch deployments, use data references instead of inline. Retention of deployment job metadata Job-related metadata is persisted and can be accessed until the job and its deployment are deleted. Viewing deployment job details When you create or view a batch job, the deployment ID and the job ID are displayed.  * The deployment ID represents the deployment definition, including the hardware and software configurations and related assets. * The job ID represents the details for a job, including input data and an output location and a schedule for running the job. Use these IDs to refer to the job in Watson Machine Learning [REST API](https://cloud.ibm.com/apidocs/machine-learning) requests or in notebooks that use the Watson Machine Learning [Python client library](https://ibm.github.io/watson-machine-learning-sdk/). Parent topic:[Managing predictive deployments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-general.html)",how-to,1,train
96597F608C26E68BFC4BDCA45061400D63793523,https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-data.html?context=cdpaas&locale=en,Data formats for tuning foundation models,"Data formats for tuning foundation models
Data formats for tuning foundation models Prepare a set of prompt examples to use to tune the model. The examples must contain the type of input that the model will need to process at run time and the appropriate output for the model to generate in response. You can add one file as training data. The maximum file size that is allowed is 200 MB. Prompt input-and-output example pairs are sometimes also referred to as samples or records. Follow these guidelines when you create your training data: * Add 100 to 1,000 labeled prompt examples to a file. Between 50 and 10,000 examples are allowed. * Use one of the following formats: * JavaScript Object Notation (JSON) * JSON Lines (JSONL) format * Each example must include one input and output pair. * The language of the training data must be English. * If the input or output text includes quotation marks, escape each quotation mark with a backslash (\). For example, He said, \""Yes.\"". * To represent a carriage return or line break, you can use a backslash followed by n (\n) to represent the new line. For example, ...end of paragraph.\nStart of new paragraph. You can control the number of tokens from the input and output that are used during training. If an input or output example from the training data is longer than the specified limit, it will be truncated. Only the allowed maximum number of tokens will be used by the experiment. For more information, see [Controlling the number of tokens used](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-tune.htmltuning-tokens). How tokens are counted differs by model, which makes the number of tokens difficult to estimate. For language-based foundation models, you can think of 256 tokens as about 130—170 words and 128 tokens as about 65—85 words. To learn more about tokens, see [Tokens and tokenization](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tokens.html). If you are using the model to classify data, follow these extra guidelines: * Try to limit the number of class labels to 10 or fewer. * Include an equal number of examples of each class type. You can use the Prompt Lab to craft examples for the training data. For more information, see [Prompt Lab](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-lab.html). JSON example The following example shows an excerpt from a training data file with labeled prompts for a classification task in JSON format. { [ { ""input"":""Message: When I try to log in, I get an error."", ""output"":""Class name: Problem"" } { ""input"":""Message: Where can I find the plan prices?"", ""output"":""Class name: Question"" } { ""input"":""Message: What is the difference between trial and paygo?"", ""output"":""Class name: Question"" } { ""input"":""Message: The registration page crashed, and now I can't create a new account."", ""output"":""Class name: Problem"" } { ""input"":""Message: What regions are supported?"", ""output"":""Class name: Question"" } { ""input"":""Message: I can't remember my password."", ""output"":""Class name: Problem"" } { ""input"":""Message: I'm having trouble registering for a new account."", ""output"":""Classname: Problem"" } { ""input"":""Message: A teammate shared a service instance with me, but I can't access it. 
What's wrong?"", ""output"":""Class name: Problem"" } { ""input"":""Message: What extra privileges does an administrator have?"", ""output"":""Class name: Question"" } { ""input"":""Message: Can I create a service instance for data in a language other than English?"", ""output"":""Class name: Question"" } ] } JSONL example The following example shows an excerpt from a training data file with labeled prompts for a classification task in JSONL format. {""input"":""Message: When I try to log in, I get an error."",""output"":""Class name: Problem""} {""input"":""Message: Where can I find the plan prices?"",""output"":""Class name: Question""} {""input"":""Message: What is the difference between trial and paygo?"",""output"":""Class name: Question""} {""input"":""Message: The registration page crashed, and now I can't create a new account."",""output"":""Class name: Problem""} {""input"":""Message: What regions are supported?"",""output"":""Class name: Question""} {""input"":""Message: I can't remember my password."",""output"":""Class name: Problem""} {""input"":""Message: I'm having trouble registering for a new account."",""output"":""Classname: Problem""} {""input"":""Message: A teammate shared a service instance with me, but I can't access it. What's wrong?"",""output"":""Class name: Problem""} {""input"":""Message: What extra privileges does an administrator have?"",""output"":""Class name: Question""} {""input"":""Message: Can I create a service instance for data in a language other than English?"",""output"":""Class name: Question""} Parent topic:[Tuning a model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-tune.html)",conceptual,0,train
225192BB81696D14887CC55070A6DFA14B3315F7,https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/asset_browser.html?context=cdpaas&locale=en,Adding data to Data Refinery,"Adding data to Data Refinery
Adding data to Data Refinery After you [create a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/projects.html) and you [create connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html) or you [add data assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/add-data-project.html) to the project, you can then add data to Data Refinery and start prepping that data for analysis. You can add data to Data Refinery in one of several ways: * Select Prepare data from the overflow menu () of a data asset in the All assets list for the project * Preview a data asset in the project and then click Prepare data * Navigate to Data Refinery first and then add data to it Navigate to Data Refinery 1. Access Data Refinery from within a project. Click the Assets tab. 2. Click New asset > Prepare and visualize data. 3. Select the data that you want to work with from Data assets or from Connections. From Data assets: * Select a data file (the selection includes data files that were already shaped with Data Refinery) * Select a connected data asset From Connections: * Select a connection and file * Select a connection, folder, and file * Select a connection, schema, and table or view Data Refinery supports these file types: Avro, CSV, delimited text files, JSON, Microsoft Excel (xls and xlsx formats. First sheet only, except for connections and connected data assets.), Parquet, SAS with the ""sas7bdat"" extension (read only), TSV (read only) Data Refinery operates on a sample subset of rows in the data set. The sample size is 1 MB or 10,000 rows, whichever comes first. However, when you run a job for the Data Refinery flow, the entire data set is processed. If the Data Refinery flow fails with a large data asset, see workarounds in [Troubleshooting Data Refinery](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/ts_index.html). Data connections marked with a key icon () are locked. If you are authorized to access the data source, you are asked to enter your personal credentials the first time you select it. This one-time step permanently unlocks the connection for you. After you have unlocked the connection, the key icon is no longer displayed. See [Adding connections to projects](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html). 4. Click Add to load the data into Data Refinery. Next steps * [Refine your data](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/refining_data.html) * [Validate your data](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/metrics.html) * [Use visualizations to gain insights into your data](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/visualizations.html) Parent topic:[Refining data](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/refining_data.html)",how-to,1,train
AC97CE4D4DF6402240F1A6A67DFB9462BC1FAFAC,https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-driftv2-config.html?context=cdpaas&locale=en,Configuring drift v2 evaluations in watsonx.governance,"Configuring drift v2 evaluations in watsonx.governance
Configuring drift v2 evaluations in watsonx.governance You can configure drift v2 evaluations with watsonx.governance to measure changes in your data over time to ensure consistent outcomes for your model. Use drift v2 evaluations to identify changes in your model output, the accuracy of your predictions, and the distribution of your input data. The following sections describe the steps that you must complete to configure drift v2 evaluations with watsonx.governance: Set sample size watsonx.governance uses sample sizes to understand how to process the number of transactions that are evaluated during evaluations. You must set a minimum sample size to indicate the lowest number of transactions that you want watsonx.governance to evaluate. You can also set a maximum sample size to indicate the maximum number of transactions that you want watsonx.governance to evaluate. Configure baseline data watsonx.governance uses payload records to establish the baseline for drift v2 calculations. You must configure the number of records that you want to calculate as your baseline data. Set drift thresholds You must set threshold values for each metric to enable watsonx.governance to understand how to identify issues with your evaluation results. The values that you set create alerts on the evaluation summary page that appear when metric scores violate your thresholds. You must set the values between the range of 0 to 1. The metric scores must be lower than the threshold values to avoid violations. Supported drift v2 metrics When you enable drift v2 evaluations, you can view a summary of evaluation results with metrics for the type of model that you're evaluating. The following drift v2 metrics are supported by watsonx.governance: * Output drift watsonx.governance calculates output drift by measuring the change in the model confidence distribution. - How it works: watsonx.governance measures how much your model output changes from the time that you train the model. To evaluate prompt templates, watsonx.governance calculates output drift by measuring the change in distribution of prediction probabilities. The prediction probability is calculated by aggregating the log probabilities of the tokens from the model output. When you upload payload data with CSV files, you must include prediction_probability values or output drift cannot be calculated. For regression models, watsonx.governance calculates output drift by measuring the change in distribution of predictions on the training and payload data. For classification models, watsonx.governance calculates output drift for each class probability by measuring the change in distribution for class probabilities on the training and payload data. For multi-classification models, watsonx.governance also aggregates output drift for each class probability by measuring a weighted average. 
- Do the math: watsonx.governance uses the following formulas to calculate output drift: - [Total variation distance](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-driftv2-config.html?context=cdpaas&locale=entotal-variation-distance) - [Overlap coefficient](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-driftv2-config.html?context=cdpaas&locale=enoverlap-coefficient) - Applies to prompt template evaluations: Yes - Task types: - Text summarization - Text classification - Content generation - Entity extraction - Question answering * Model quality drift watsonx.governance calculates model quality drift by comparing the estimated runtime accuracy to the training accuracy to measure the drop in accuracy. - How it works: watsonx.governance builds its own drift detection model that processes your payload data when you configure drift v2 evaluations to predict whether your model generates accurate predictions without the ground truth. The drift detection model uses the input features and class probabilities from your model to create its own input features. - Do the math: watsonx.governance uses the following formula to calculate model quality drift:  watsonx.governance calculates the accuracy of your model as the base_accuracy by measuring the fraction of correctly predicted transactions in your training data. During evaluations, your transactions are scored against the drift detection model to measure the amount of transactions that are likely predicted correctly by your model. These transactions are compared to the total number of transactions that watsonx.governance processes to calculate the predicted_accuracy. If the predicted_accuracy is less than the base_accuracy, watsonx.governance generates a model quality drift score. - Applies to prompt template evaluations: No * Feature drift watsonx.governance calculates feature drift by measuring the change in value distribution for important features. - How it works: watsonx.governance calculates drift for categorical and numeric features by measuring the probability distribution of continuous and discrete values. To identify discrete values for numeric features, watsonx.governance uses a binary logarithm to compare the number of distinct values of each feature to the total number of values of each feature. watsonx.governance uses the following binary logarithm formula to identify discrete numeric features:  If the distinct_values_count is less than the binary logarithm of the total_count, the feature is identified as discrete. - Do the math: watsonx.governance uses the following formulas to calculate feature drift: - [Jensen Shannon distance](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-driftv2-config.html?context=cdpaas&locale=enjensen-shannon-distance) - [Total variation distance](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-driftv2-config.html?context=cdpaas&locale=entotal-variation-distance) - [Overlap coefficient](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-driftv2-config.html?context=cdpaas&locale=enoverlap-coefficient) - Applies to prompt template evaluations: No * Prediction drift Prediction drift measures the change in distribution of the LLM predicted classes. -",how-to,1,train
384EB2033AD74EA7044AFC8BF1DDB06FF392CB08,https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/synthetic-envs.html?context=cdpaas&locale=en,Compute resource options for Synthetic Data Generator in projects,"Compute resource options for Synthetic Data Generator in projects
Compute resource options for Synthetic Data Generator in projects To create data with the Synthetic Data Generator, you must have the Watson Studio and Watson Machine Learning services provisioned. Running a synthetic data flow consumes compute resources from the Watson Studio service. Capacity units per hour for Synthetic Data Generator Capacity type Capacity units per hour 2 vCPU and 8 GB RAM 7 Compute usage in projects Running a synthetic data flow consumes compute resources from the Watson Studio service. You can monitor the total monthly amount of CUH consumption for Watson Studio on the Resource usage page on the Manage tab of your project. Learn more * [Synthetic data](https://dataplatform.cloud.ibm.com/docs/content/wsj/synthetic/synthetic_data_overview_sd.html) * [Watson Machine Learning service](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wml-plans.html) * [Watson Studio service](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/ws-plans.html) * [Monitoring account resource usage](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/monitor-resources.html) Parent topic:[Choosing compute resources for tools](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/environments-parent.html)",conceptual,0,train
78A4D6515FAA2766FEB3A03CA6A378846CF33D83,https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/admin-manage-projects.html?context=cdpaas&locale=en,Managing all projects in the account,"Managing all projects in the account
Managing all projects in the account If you have the required permission, you can view and manage all projects in your IBM Cloud account. You can add yourself to a project so that you can delete it or change its collaborators. Requirements To manage all projects in the account, you must: * Restrict resources to the current account. See steps to [set the scope for resources](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/account-settings.htmlset-the-scope-for-resources). * Have the Manage projects permission that is provided by the IAM Manager role for the IBM Cloud Pak for Data service. Assigning the Manage projects permission To grant the Manage projects permission to a user who is already in your IBM Cloud account: 1. From the navigation menu, choose Administration > Access (IAM) to open the Manage access and users page in your IBM Cloud account. 2. Select the user on the Users page. 3. Click the Access tab and then choose Assign access+. 4. Select Access policy. 5. For Service, choose IBM Cloud Pak for Data. 6. For Service access, select the Manager role. 7. For Platform access, assign the Editor role. 8. Click Add and Assign to assign the policy to the user. Managing projects You can add yourself to a project when you need to delete the project, delete collaborators, or assign the Admin role to a collaborator in the project. To manage projects: * View all active projects on the Projects page in IBM watsonx by clicking the drop-down menu next to the search field and selecting All active projects. * Join any project as Admin by clicking Join as admin in the Your role column. * Filter projects to identify which projects you are not a collaborator in, by clicking the filter icon  and selecting Your role > No membership. For more details on managing projects, see [Administering projects](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/admin-project.html). Learn more * [Administering projects](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/admin-project.html) Parent topic:[Managing IBM watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_console.html)",how-to,1,train
59CDBABC75E7EC8987A3C464F3277923F444A724,https://dataplatform.cloud.ibm.com/docs/content/wsd/tutorials/tut_bandwidth_forecast_targets.html?context=cdpaas&locale=en,Defining the targets (SPSS Modeler),"Defining the targets (SPSS Modeler)
Defining the targets 1. Add a Type node after the Filler node, then double-click the Type node to open its properties. 2. Set the role to None for the DATE_ field. Set the role to Target for all other fields (the Market_n fields plus the Total field). 3. Click Read Values to populate the Values column. Figure 1. Setting the role for fields ",how-to,1,train
05D687FC92FD17804374E20E7F330EDAE142F725,https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-errors.html?context=cdpaas&locale=en,Handling Pipeline errors,"Handling Pipeline errors
Handling Pipeline errors You can specify how to respond to errors in a pipeline globally, with an error policy, and locally, by overriding the policy on the node level. You can also create a custom error-handling response. Setting global error policy The error policy sets the default behavior for errors in a pipeline. You can override this behavior for any node in the pipeline. To set the global error policy: 1. Click the Manage default settings icon on the toolbar. 2. Choose the default response to an error under the Error policy: * Fail pipeline on error stops the flow and initiates an error-handling flow. * Continue pipeline on error tries to continue running the pipeline. Note: Continue pipeline on error affects nodes that use the default error policy and does not affect node-specific error policies. 3. You can optionally create a custom error-handling response for a flow failure. Specifying an error response If you opt for Fail pipeline on error for either the global error policy or for a node-specific policy, you can further specify what happens on failure. For example, if you check the Show icon on nodes that are linked to an error-handling pipeline, an icon flags a node with an error to help debug the flow. Specifying a node-specific error policy You can override the default error policy for any node in the pipeline. 1. Click a node to open the configuration pane. 2. Check the option to Override default error policy with: * Fail pipeline on error * Continue pipeline on error Viewing all node policies To view all node-specific error handling for a pipeline: 1. Click Manage default settings on the toolbar. 2. Click the view all node policies link under Error policy. A list of all nodes in the pipeline show which nodes use the default policy, and which override the default policy. Click a node name to see the policy details. Use the view filter to show: * All error policies: all nodes * Default policy: all nodes that use the default policy * Override default policy: all nodes that override the default policy * Fail pipeline on error: all nodes that stop the flow on error * Continue pipeline on error: all nodes that try to continue the flow on error Running the Fail on error flow If you specify that the flow fails on error, a secondary error handling flow starts when an error is encountered. Adding a custom error response If Create custom error handling response is checked on default settings for error policy, you can add an error handling node to the canvas so you can configure a custom error response. The response applies to all nodes configured to fail when an error occurs. Parent topic:[Creating a pipeline](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-create.html)",how-to,1,train
CE40B0CEF1449476821A1EBD8D0CF339C866D16A,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/properties/applyneuralnetworkslots.html?context=cdpaas&locale=en,applyneuralnetworknode properties,"applyneuralnetworknode properties
applyneuralnetworknode properties You can use Neural Network modeling nodes to generate a Neural Network model nugget. The scripting name of this model nugget is applyneuralnetworknode. For more information on scripting the modeling node itself, see [neuralnetworknode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/properties/neuralnetworkslots.html). applyneuralnetworknode properties Table 1. applyneuralnetworknode properties applyneuralnetworknode Properties Values Property description use_custom_name flag custom_name string confidence onProbability
onIncrease score_category_probabilities flag max_categories number score_propensity flag enable_sql_generation false
true
native When using data from a database, SQL code can be pushed back to the database for execution, providing superior performance for many operations.",conceptual,0,train
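For readers who script the nugget directly, the following is a minimal, hedged sketch of setting a few of the Table 1 properties through SPSS Modeler's Python (Jython) scripting. The findByType type string and the presence of an existing nugget in the flow are assumptions; only the property names and values come from the table above.

# Sketch only: assumes the flow already contains a Neural Network model nugget.
stream = modeler.script.stream()
applynode = stream.findByType("applyneuralnetwork", None)  # assumed type string

applynode.setPropertyValue("confidence", "onProbability")
applynode.setPropertyValue("score_category_probabilities", True)
applynode.setPropertyValue("max_categories", 5)
applynode.setPropertyValue("enable_sql_generation", "true")  # push SQL back to the database where supported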
D0907278CA0EA55B0E0ED9E834810D502A817AF0,https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ts_sd.html?context=cdpaas&locale=en,Troubleshooting Synthetic Data Generator,"Troubleshooting Synthetic Data Generator
Troubleshooting Synthetic Data Generator Use this information to resolve questions about using Synthetic Data Generator. Typeless columns ignored for an Import node When you use an Import node that contains Typeless columns, these columns are ignored by the Mimic node. After you click the Read Values button, the Typeless columns are automatically set to Pass and are not present in the final dataset. Suggested workaround: Add a new column in the Generate node for the missing column(s). Size limit notice The Synthetic Data Generator environment can import up to 2.5 GB of data. Suggested workaround: If you receive a related error message or your data fails to import, reduce the amount of data and try again. Internal error occurred: SCAPI error: The value on row 1,029 is not a valid string For example, previewing a data asset by using an Import node gives the following error: Node: Import WDP Connector Error: CDICO9999E: Internal error occurred: SCAPI error: The value on row 1,029 is not a valid string of the Bit data type for the SecurityDelay column. This is expected behavior. In this particular case, the first 1,000 rows were binary, 0s or 1s. The value at row 1,029 was 3. For most flat files, Synthetic Data Generator reads the first 1,000 records to infer the data type. In this case, Synthetic Data Generator inferred binary values (0 or 1). When Synthetic Data Generator read a value of 3 at row 1,029, it threw an error, as 3 is not a binary value. Suggested workarounds: 1. Users can adjust the Infer_record_count parameter to include more data, for example 2,000 rows or more. 2. Users can update the value in the first 1,000 rows that is causing the error, if this is an error in the data. Error Mimic Data set no available input record. The Mimic node requires the input dataset to have at least one valid record (a record without any missing values). If your dataset is empty, or if the dataset does not contain at least one valid record, clicking Run selection gives the following error message: Node: Mimic Mimic Data set no available input record. Suggested workarounds: 1. Fix your dataset so that there is at least one record (row) that contains a value for every column and then try again. 2. Click Read values from the Import node and run your flow again.  Error: Incorrect number of fields detected in the server data model. or WDP Connector Execution Error Creating a new flow using a .synth file, then doing a migration of the Import node with a newly uploaded file to the project, and then running the flow, gives one or both of the following errors: Error: Incorrect number of fields detected in the server data model. or WDP Connector Execution Error This error is caused by using different data sets (data models) for the create flow and for the migration data. Suggested workaround: Run the Mimic node that creates the Generate node a second time. Error: Valid variable does not exist in metadata Doing a migration of the Import node and then running the flow fails and gives the error: Error: Valid variable does not exist in metadata Suggested workaround: Make sure that in your Import node you have at least one field that is not Typeless. For example, in the screen capture below, the only field in the Import node is Typeless. At least one field that is not Typeless should be added to the Import node to avoid this error. ",how-to,1,train
D1B1E93AD61D2B095BF8A00E9739FCF7D1DC974C,https://dataplatform.cloud.ibm.com/docs/content/wsd/tutorials/tut_drug_build.html?context=cdpaas&locale=en,Building a model (SPSS Modeler),"Building a model (SPSS Modeler)
Building a model By exploring and manipulating the data, you have been able to form some hypotheses. The ratio of sodium to potassium in the blood seems to affect the choice of drug, as does blood pressure. But you cannot fully explain all of the relationships yet. This is where modeling will likely provide some answers. In this case, you will try to fit the data using a rule-building model called C5.0. Since you're using a derived field, Na_to_K, you can filter out the original fields, Na and K, so they're not used twice in the modeling algorithm. You can do this by using a Filter node. 1. Place a Filter node on the canvas and connect it to the Derive node. Figure 1. Filter node  2. Double-click the Filter node to edit its properties. Name it Discard Fields. 3. For Mode, make sure Filter the selected fields is selected. Then select the K and Na fields. Click Save. 4. Place a Type node on the canvas and connect it to the Filter node. With the Type node, you can indicate the types of fields you're using and how they're used to predict the outcomes. Figure 2. Type node  5. Double-click the Type node to edit its properties. Name it Define Types. 6. Set the role for the Drug field to Target, indicating that Drug is the field you want to predict. Leave the role for the other fields set to Input so they'll be used as predictors. Click Save. 7. To estimate the model, place a C5.0 node on the canvas and attach it to the end of the flow. Then click the Run button on the toolbar to run the flow.",how-to,1,train
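If you prefer scripting to working on the canvas, the same flow can be sketched with SPSS Modeler's Python (Jython) scripting. This is a rough sketch under the assumption that the flow already contains the Derive node that creates Na_to_K; node labels and coordinates are illustrative.

# Sketch only: assumes a Derive node labeled "Na_to_K" already exists in the flow.
stream = modeler.script.stream()
derive = stream.findByType("derive", "Na_to_K")

filternode = stream.createAt("filter", "Discard Fields", 200, 100)
filternode.setKeyedPropertyValue("include", "Na", False)  # drop the original fields
filternode.setKeyedPropertyValue("include", "K", False)
stream.link(derive, filternode)

typenode = stream.createAt("type", "Define Types", 300, 100)
typenode.setKeyedPropertyValue("direction", "Drug", "Target")  # Drug is the field to predict
stream.link(filternode, typenode)

c50node = stream.createAt("c50", "Drug", 400, 100)
stream.link(typenode, c50node)

results = []
c50node.run(results)  # builds the C5.0 model; the nugget is returned in results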
EAEF856F725CD9A9605000F3AE98CBE61A9F50F0,https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/extraction-attack.html?context=cdpaas&locale=en,Extraction attack,"Extraction attack
Extraction attack Risks associated with input: Inference, Robustness, Amplified Description An attack that attempts to copy or steal the AI model by appropriately sampling the input space, observing outputs, and building a surrogate model is known as an extraction attack. Why is extraction attack a concern for foundation models? A successful attack mimics the model, enabling the attacker to repurpose it for their own benefit, such as eliminating a competitive advantage or causing reputational harm. Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)",conceptual,0,train
41167E3AD363B416D508B03A300E5ACFAF83F042,https://dataplatform.cloud.ibm.com/docs/content/dataview/chart_creation_evaluation.html?context=cdpaas&locale=en,Evaluation charts,"Evaluation charts
Evaluation charts Evaluation charts are similar to histograms or collection graphs. Evaluation charts show how accurate models are in predicting particular outcomes. They work by sorting records based on the predicted value and confidence of the prediction, splitting the records into groups of equal size (quantiles), and then plotting the value of the criterion for each quantile, from highest to lowest. Multiple models are shown as separate lines in the plot. Outcomes are handled by defining a specific value or range of values as a ""hit"". Hits usually indicate success of some sort (such as a sale to a customer) or an event of interest (such as a specific medical diagnosis). Flag: Output fields are straightforward; hits correspond to true values. Nominal: For nominal output fields, the first value in the set defines a hit. Continuous: For continuous output fields, hits equal values greater than the midpoint of the field's range. Evaluation charts can also be cumulative so that each point equals the value for the corresponding quantile plus all higher quantiles. Cumulative charts usually convey the overall performance of models better, whereas noncumulative charts often excel at indicating particular problem areas for models.",conceptual,0,train
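The quantile logic described above can be reproduced outside the chart for a flag target. The following is a minimal sketch in Python with pandas; the column names and example values are illustrative and are not part of this documentation.

import pandas as pd

# Illustrative data: a flag target where 1 is a "hit", plus model outputs.
df = pd.DataFrame({
    "hit":        [1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 0, 1],
    "predicted":  [1, 0, 1, 1, 1, 0, 1, 0, 1, 0, 0, 1],
    "confidence": [0.9, 0.8, 0.85, 0.7, 0.6, 0.95, 0.75, 0.4, 0.65, 0.55, 0.5, 0.45],
})

# Sort by predicted value and confidence, highest first, as an evaluation chart does.
df = df.sort_values(["predicted", "confidence"], ascending=False).reset_index(drop=True)

# Split the sorted records into groups of equal size (here quartiles).
df["quantile"] = pd.qcut(df.index, q=4, labels=[1, 2, 3, 4])

# Noncumulative: hit rate within each quantile.
per_quantile = df.groupby("quantile", observed=True)["hit"].mean()

# Cumulative: each point includes the corresponding quantile plus all higher quantiles.
cumulative = df["hit"].expanding().mean().groupby(df["quantile"], observed=True).last()

print(per_quantile)
print(cumulative)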
A187344EB767BAC8E4D674651BEDAFA33F70BFA1,https://dataplatform.cloud.ibm.com/docs/content/wsd/tutorials/tut_condition_test.html?context=cdpaas&locale=en,Testing (SPSS Modeler),"Testing (SPSS Modeler)
Testing Both of the generated model nuggets are connected to the Type node. 1. Reposition the nuggets as shown, so the Type node connects to the neural net nugget, which connects to the C5.0 nugget. 2. Attach an Analysis node to the C5.0 nugget. 3. Edit the Data Asset node to use the file cond2n.csv (instead of cond1n.csv), which contains unseen test data. 4. Right-click the Analysis node and select Run. Doing so yields figures reflecting the accuracy of the trained network and rule. Figure 1. Testing the trained network ",how-to,1,train
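In scripting terms, the same test could be sketched with SPSS Modeler's Python (Jython) API. The type strings for the nugget and the Analysis node, and the assumption that the nuggets are already linked as described, are assumptions rather than part of this tutorial.

# Sketch only: assumes the C5.0 nugget is already connected downstream of the neural net nugget.
stream = modeler.script.stream()
c50nugget = stream.findByType("applyc50", None)  # assumed type string for the C5.0 nugget

analysis = stream.createAt("analysis", "Analysis", 500, 100)
stream.link(c50nugget, analysis)

results = []
analysis.run(results)  # runs the branch and produces the accuracy figures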
F7B2DD759B6FC618D53AD49053C24EF8D35105C5,https://dataplatform.cloud.ibm.com/docs/content/wsj/wmls/wmls-deploy-overview.html?context=cdpaas&locale=en,Deploying and managing assets,"Deploying and managing assets
Deploying and managing assets Use Watson Machine Learning to deploy models and solutions so that you can put them into productive use, then monitor the deployed assets for fairness and explainability. You can also automate the AI lifecycle to keep your deployed assets current. Completing the AI lifecycle After you prepare your data and build and then train models or solutions, you complete the AI lifecycle by deploying and monitoring your assets.  Deployment is the final stage of the lifecycle of a model or script, where you run your models and code. Watson Machine Learning provides the tools that you need to deploy an asset, such as a predictive model or a Python function. You can also deploy foundation model assets, such as prompt templates, to put them into production. Following deployment, you can use model management tools to evaluate your models. IBM Watson OpenScale tracks and measures outcomes from your AI models, and helps ensure they remain fair, explainable, and compliant. Watson OpenScale also detects and helps correct the drift in accuracy when an AI model is in production. Finally, you can use IBM Watson Pipelines to manage your ModelOps processes. Create a pipeline that automates parts of the AI lifecycle, such as training and deploying a machine learning model. Next steps * To learn more about how to manage assets in a deployment space, see [Manage assets in a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-spaces_local.html). * To learn more about how to deploy assets from a deployment space, see [Deploy assets from a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-general.html). * To learn more about how to deploy by using the [Python client](https://ibm.github.io/watson-machine-learning-sdk/), see [Sample notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-samples-overview.html).",conceptual,0,train
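As a rough sketch of the Python client route mentioned in the last bullet (the ibm_watson_machine_learning package linked above), the following shows one way to create and score an online deployment. The endpoint URL, API key, space ID, model ID, and field names are placeholders you would replace with your own values.

from ibm_watson_machine_learning import APIClient

# Placeholder credentials and IDs - substitute your own values.
client = APIClient({
    "url": "https://us-south.ml.cloud.ibm.com",
    "apikey": "<your_api_key>",
})
client.set.default_space("<space_id>")

# Deploy a model that is already stored in the deployment space as an online deployment.
deployment = client.deployments.create(
    "<model_id>",
    meta_props={
        client.deployments.ConfigurationMetaNames.NAME: "sample online deployment",
        client.deployments.ConfigurationMetaNames.ONLINE: {},
    },
)

# Score the deployment with a small payload; field names depend on your model.
deployment_id = client.deployments.get_id(deployment)
payload = {"input_data": [{"fields": ["x1", "x2"], "values": [[1.0, 2.0]]}]}
print(client.deployments.score(deployment_id, payload))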
5D1BCA52E974C3F4DE54366A242DF751E73ACBD2,https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/troubleshoot-cos.html?context=cdpaas&locale=en,Troubleshooting Cloud Object Storage for projects,"Troubleshooting Cloud Object Storage for projects
Troubleshooting Cloud Object Storage for projects Use these solutions to resolve issues you might experience when using Cloud Object Storage with projects in IBM watsonx. Many errors that occur when creating projects can be resolved by correctly configuring Cloud Object Storage. For instructions, see [Setting up Cloud Object Storage](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_cos.html). Possible error messages: * [Error retrieving Administrator API key token for your Cloud Object Storage instance](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/troubleshoot-cos.html?context=cdpaas&locale=en#key-token) * [Unable to configure credentials for your project in the selected Cloud Object Storage instance](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/troubleshoot-cos.html?context=cdpaas&locale=en#credentials) * [User login from given IP address is not permitted](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/troubleshoot-cos.html?context=cdpaas&locale=en#restricted-ip) * [Project cannot be created](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/troubleshoot-cos.html?context=cdpaas&locale=en#project-failed) Cannot retrieve API key Symptoms When you create a project, the following error occurs: Error retrieving Administrator API key token for your Cloud Object Storage instance Possible Causes * You have not been assigned the Editor role in the IBM Cloud account. Possible Resolutions The account administrator must complete the following tasks: * Invite users to the IBM Cloud account and assign the Editor role. See [Add non-administrative users to your IBM Cloud account](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/setup-addl-users.html#users). Unable to configure credentials Symptoms When you create a project and associate it with a Cloud Object Storage instance, the following error occurs: Unable to configure credentials for your project in the selected Cloud Object Storage instance. Possible Causes * You have exceeded the access policy limit for the account. * For a Lite account, you have exceeded the 25 GB limit for the Cloud Object Storage instance. Possible Resolutions For exceeding access policies: 1. Verify that you are the owner of the Cloud Object Storage instance or that the owner has granted you Administrator and Manager roles for this service instance. Otherwise, ask your IBM Cloud administrator to fix this problem. 2. Check the total number of access policies to determine whether you have reached a limit. See [IBM Cloud IAM limits](https://cloud.ibm.com/docs/account?topic=account-known-issues#iam_limits) for the limit information. 3. Delete at least 4 unused access policies for the service ID. See [Reducing time and effort managing access](https://cloud.ibm.com/docs/account?topic=account-account_setup#limit-policies) for strategies that you can use to ensure that you don't reach the limit. For exceeding the 25 GB limit for a Lite account: For a Lite account, you have exceeded the 25 GB limit for the Cloud Object Storage instance. Possible resolutions are to upgrade to a billable account, delete stored assets for the current account, or wait until the first of the month when the limit resets. See [Set up a billable account](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/setup-account.html#paid-account). 
Login not permitted from IP address Symptoms When you create or work with a project, the following error occurs: User login from given IP address is not permitted. The user has configured IP address restriction for login. The given IP address 'XX.XXX.XXX.XX' is not contained in the list of allowed IP addresses. Possible Causes Restrict IP address access has been configured to allow specific IP addresses access to Watson Studio. The IP address of the computer you are using is not allowed. Possible Resolutions Add the IP address to the allowed IP addresses, if your security qualifications allow it. See [Allow specific IP addresses](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-network.html#allow-specific-ip-addresses). Project cannot be created Symptoms When you create a project, the following error occurs: Project cannot be created. Possible Causes The Cloud Object Storage instance is not available because the Global location is not enabled for your services. Cloud Object Storage requires the Global location. Possible Resolutions Enable the Global location in your account profile. From your account, click your avatar and select Profile and settings to open your IBM watsonx profile. Under Service Filters > Locations, check the Global location as well as other locations where services are present. See [Manage your profile](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/personal-settings.html#profile).",how-to,1,train
0B35E778B109957EE1CC48FA8E46ED7A1633E380,https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/wos-troubleshoot.html?context=cdpaas&locale=en,Troubleshooting Watson OpenScale,"Troubleshooting Watson OpenScale
Troubleshooting Watson OpenScale You can use the following techniques to work around problems with IBM Watson OpenScale. * [When I use AutoAI, why am I getting an error about mismatched data?](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/wos-troubleshoot.html?context=cdpaas&locale=en#ts-trouble-common-autoai-binary) * [Why am I getting errors during model configuration?](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/wos-troubleshoot.html?context=cdpaas&locale=en#ts-trouble-common-xgboost-wml-model-details) * [Why are my class labels missing when I use XGBoost?](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/wos-troubleshoot.html?context=cdpaas&locale=en#ts-trouble-common-xgboost-multiclass) * [Why are the payload analytics not displaying properly?](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/wos-troubleshoot.html?context=cdpaas&locale=en#ts-trouble-common-payloadfileformat) * [Error: An error occurred while computing feature importance](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/wos-troubleshoot.html?context=cdpaas&locale=en#ts-trouble-wos-equals-sign-explainability) * [Why are some of my active debias records missing?](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/wos-troubleshoot.html?context=cdpaas&locale=en#ts-trouble-common-payloadlogging-1000k-limit) * [Watson OpenScale does not show any available schemas](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/wos-troubleshoot.html?context=cdpaas&locale=en#ts-available-schemas) * [A monitor run fails with an OutOfResources exception error message](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/wos-troubleshoot.html?context=cdpaas&locale=en#ts-resources-exception) When I use AutoAI, why am I getting an error about mismatched data? You receive an error message about mismatched data when using AutoAI for binary classification. Note that AutoAI is only supported in IBM Watson OpenScale for IBM Cloud Pak for Data. For binary classification type, AutoAI automatically sets the data type of the prediction column to boolean. To fix this, implement one of the following solutions: * Change the label column values in the training data to integer values, such as 0 or 1, depending on the outcome. * Change the label column values in the training data to string values, such as A and B. Why am I getting errors during model configuration? The following error messages appear when you are configuring model details: Field feature_fields references column , which is missing in input_schema of the model. Feature not found in input schema. The preceding messages, which appear while you complete the Model details section during configuration, indicate a mismatch between the model input schema and the model training data schema. To fix the issue, you must determine which of the following conditions is causing the error and take corrective action: If you use IBM Watson Machine Learning as your machine learning provider and the model type is XGBoost/scikit-learn, refer to the Machine Learning [Python SDK documentation](https://ibm.github.io/watson-machine-learning-sdk/repository) for important information about how to store the model. To generate the drift detection model, you must use scikit-learn version 0.20.2 in notebooks. For all other cases, you must ensure that the training data column names match the input schema column names. Why are my class labels missing when I use XGBoost? 
Native XGBoost multiclass classification does not return class labels. By default, for binary and multiple class models, the XGBoost framework does not return class labels. For XGBoost binary and multiple class models, you must update the model to return class labels. Why are the payload analytics not displaying properly? Payload analytics does not display properly and the following error message displays: AIQDT0044E Forbidden character "" in column name For proper processing of payload analytics, Watson OpenScale does not support column names with double quotation marks ("") in the payload. This affects both scoring payload and feedback data in CSV and JSON formats. Remove double quotation marks ("") from the column names of the payload file. Error: An error occurred while computing feature importance You receive the following error message during processing: Error: An error occurred while computing feature importance. Having an equals sign (=) in the column name of a dataset causes an issue with explainability. Remove the equals sign (=) from the column name and send the dataset through processing again. Why are some of my active debias records missing? Active debias records do not reach the payload logging table. When you use the active debias API, there is a limit of 1000 records that can be sent at one time for payload logging. To avoid loss of data, you must use the active debias API to score in chunks of 1000 records or fewer. For more information, see [Reviewing debiased transactions](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-insight-timechart.html). Watson OpenScale does not show any available schemas When a user attempts to retrieve schema information for Watson OpenScale, none are available. Checking directly in DB2, without reference to Watson OpenScale, also shows no available schemas for the database userid. Insufficient permissions for the database userid are causing database connection issues for Watson OpenScale. Make sure the database user has the correct permissions needed for Watson OpenScale. A monitor run fails with an OutOfResources exception error message You receive an OutOfResources exception error message. Although there is no longer a limit on the number of rows you can have in the feedback payload, scoring payload, or business payload tables, a 50,000-record limit applies to the number of records you can run through the quality and bias monitors each billing period. After you reach your limit, you must either upgrade to a Standard plan or wait for the next billing period. Missing deployments A deployed model does not show up as a deployment that can be selected to create a subscription. There are different reasons that a deployment does not show up in the list of available deployments.",how-to,1,train
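The 1,000-record limit for active debias payload logging noted above is straightforward to respect by batching. This is a generic Python sketch; score_with_debias is a placeholder for however you call the active debias scoring endpoint in your environment, not a real function name.

def chunked(records, size=1000):
    # Yield successive batches of at most `size` records.
    for start in range(0, len(records), size):
        yield records[start:start + size]

def score_all(records, score_with_debias):
    # `records` is the full scoring payload; each call sends at most 1,000 records.
    results = []
    for batch in chunked(records, size=1000):
        results.extend(score_with_debias(batch))
    return results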
FD903F9A58632DF14BE5C98EEDA32E1FC2F46F4B,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/type_missing.html?context=cdpaas&locale=en,Defining missing values (SPSS Modeler),"Defining missing values (SPSS Modeler)
Defining missing values In the Type node settings, select the desired field in the table and then click the gear icon at the end of its row. Missing values settings are available in the window that appears. Select Define missing values to define missing value handling for this field. Here you can define explicit values to be considered missing for this field, or you can accomplish the same thing with a downstream Filler node.",how-to,1,train
C9769E4047FAF3C5F55B2A7BD5FCCE3E321870E6,https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/trust-calibration.html?context=cdpaas&locale=en,Trust calibration,"Trust calibration
Trust calibration Risks associated with output: Value alignment, New Description Trust calibration presents problems when a person places too little or too much trust in an AI model's guidance, resulting in poor decision making. Why is trust calibration a concern for foundation models? In tasks where humans make choices based on AI-based suggestions, consequences of poor decision making increase with the importance of the decision. Bad decisions can harm users and can lead to financial harm, reputational harm, and other legal consequences for business entities. Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)",conceptual,0,train
A7F2612AD7178C8AFA4C8B7C2F210A10DD7EE5CC,https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html?context=cdpaas&locale=en,Adding connections to projects,"Adding connections to projects
Adding connections to projects You need to create a connection asset for a data source before you can access or load data to or from it. A connection asset contains the information necessary to establish a connection to a data source. Create connections to multiple types of data sources, including IBM Cloud services, other cloud services, on-prem databases, and more. See [Connectors](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html) for the list of data sources. To create a new connection in a project: 1. Go to the project page, and click the Assets tab. 2. Click New asset > Connect to a data source. 3. Choose the kind of connection: * Select New connection (the default) to create a new connection in the project. * Select Platform connections to select a connection that has already been created at the platform level. * Select Deployed services to connect to a data source from a cloud service that is integrated with IBM watsonx. 4. Choose a data source. 5. Enter the connection information that is required for the data source. Typically, you need to provide information like the hostname, port number, username, and password. 6. If prompted, specify whether you want to use personal or shared credentials. You cannot change this option after you create the connection. The credentials type for the connection, either Personal or Shared, is set by the account owner on the [Account page](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/account-settings.html). The default setting is Shared. * Personal: With personal credentials, each user must specify their own credentials to access the connection. Each user's credentials are saved but are not shared with any other users. Use personal credentials instead of shared credentials to protect credentials. For example, if you use personal credentials and another user changes the connection properties (such as the hostname or port number), the credentials are invalidated to prevent malicious redirection. * Shared: With shared credentials, all users access the connection with the credentials that you provide. Shared credentials can potentially be retrieved by a user who has access to the connection asset. Because the credentials are shared, it is difficult to audit access to the connection, to identify the source of data loss, or to identify the source of a security breach. 1. For Private connectivity: To connect to a database that is not externalized to the internet (for example, behind a firewall), see [Securing connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html). 2. If available, click Test connection. 3. Click Create. The connection appears on the Assets page. You can edit the connection by clicking the connection name on the Assets page. 4. Add tables, files, or other types of data from the connection by [creating a connected data asset](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html). Connections with personal credentials are marked with a key icon on the Assets page and are locked. If you are authorized to access the connection, you can unlock it by entering your credentials the first time you select it. This is a one-time step that permanently unlocks the connection for you. After you unlock the connection, the key icon is no longer displayed. Connections with personal credentials are already unlocked if you created the connections yourself. Watch this video to see how to create a connection and add connected data to a project. 
Video disclaimer: Some minor steps and graphical elements in this video might differ from your platform. This video provides a visual method to learn the concepts and tasks in this documentation. * Transcript Synchronize transcript with video Time Transcript 00:00 This video shows you how to set up a connection to a data source and add connected data to a Watson Studio project. 00:08 If you have data stored in a data source, you can set up a connection to that data source from any project. 00:16 From here, you can add different elements to the project. 00:20 In this case, you want to add a connection. 00:24 You can create a new connection to an IBM service, such as IBM Db2 and Cloud Object Storage, or to a service from third parties, such as Amazon, Microsoft or Apache. 00:39 And you can filter the list based on compatible services. 00:45 You can also add a connection that was created at the platform level, which can be used across projects and catalogs. 00:54 Or you can create a connection to one of your provisioned IBM Cloud services. 00:59 In this case, select the provisioned IBM Cloud service for Db2 Warehouse on Cloud. 01:08 If the credentials are not prepopulated, you can get the credentials for the instance from the IBM Cloud service launch page. 01:17 First, test the connection and then create the connection. 01:25 The new connection now displays in the list of data assets. 01:30 Next, add connected data assets to this project. 01:37 Select the source - in this case, it's the Db2 Warehouse",how-to,1,train
23296AAD76933152D5D3E9DD875EBBD3FB7575EA,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clemoverview_container.html?context=cdpaas&locale=en,Building CLEM expressions (SPSS Modeler),"Building CLEM expressions (SPSS Modeler)
Building CLEM (legacy) expressions",conceptual,0,train
971AE69D7D2A527C25F31A6C8D8D64EE68B48519,https://dataplatform.cloud.ibm.com/docs/content/wsj/synthetic/mask_mimic_data_sd.html?context=cdpaas&locale=en,Creating synthetic data from production data,"Creating synthetic data from production data
Creating synthetic data from production data Using the Synthetic Data Generator graphical flow editor tool, you can generate a structured synthetic data set based on your production data. You can import data, anonymize, mimic (to generate synthetic data), export, and review your data. Before you can use mimic and mask to create synthetic data, you need [to create a task](https://dataplatform.cloud.ibm.com/docs/content/wsj/synthetic/synthetic_data_overview_sd.html#create-synthetic). 1. The Generate synthetic tabular data flow window opens. Select the Leverage your existing data use case. Click Next.  2. Select Import data. You can also drag and drop a data file into your project or select data from a project. For more information, see [Importing data](https://dataplatform.cloud.ibm.com/docs/content/wsj/synthetic/import_data_sd.html).  3. After you import your data, you can use the Synthetic Data Generator graphical flow editor tool to anonymize (mask) your production data. You can disguise column names, column values, or both, when working with data that is to be included in a model downstream of the node. For example, you can use bank customer data and hide marital status.  4. You can then use the Synthetic Data Generator tool to mimic your production data. This generates synthetic data, based on your production data, using a set of candidate statistical distributions to modify each column in your data.  5. You can export your synthetic data and review it. For more information, see [Exporting synthetic data](https://dataplatform.cloud.ibm.com/docs/content/wsj/synthetic/export_data_sd.html).  Learn more [Creating synthetic data from a custom data schema](https://dataplatform.cloud.ibm.com/docs/content/wsj/synthetic/generate_data_sd.html)",how-to,1,train
C471B8B14614C985391115EC1ED53E0B56D2E27E,https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/data-poisoning.html?context=cdpaas&locale=en,Data poisoning,"Data poisoning
Data poisoning Risks associated with input: Training and tuning phase, Robustness, Traditional Description Data poisoning is a type of adversarial attack where an adversary or malicious insider injects intentionally corrupted, false, misleading, or incorrect samples into the training or fine-tuning dataset. Why is data poisoning a concern for foundation models? Poisoning data can make the model sensitive to a malicious data pattern and produce the adversary’s desired output. It can create a security risk where adversaries can force model behavior for their own benefit. In addition to producing unintended and potentially malicious results, a model misalignment from data poisoning can result in business entities facing legal consequences or reputational harms. Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)",conceptual,0,train