doc_id,url,title,text,label,label_id,split
F31A520A8C2C1B9C7F80B14EBCD096BB1121D53D,https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-jobs.html?context=cdpaas&locale=en,Managing deployment jobs,"Managing deployment jobs
Managing deployment jobs A job is a way of running a batch deployment, script, or notebook in Watson Machine Learning. You can choose to run a job manually or on a schedule that you specify. After you create one or more jobs, you can view and manage them from the Jobs tab of your deployment space. From the Jobs tab of your space, you can: * See the list of the jobs in your space * View the details of each job. You can change the schedule settings of a job and pick a different environment template. * Monitor job runs * Delete jobs See the following sections for various aspects of job management: * [Creating a job for a batch deployment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-jobs.html?context=cdpaas&locale=encreate-jobs-batch) * [Viewing jobs in a space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-jobs.html?context=cdpaas&locale=enviewing-jobs-in-a-space) * [Managing job metadata retention ](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-jobs.html?context=cdpaas&locale=endelete-jobs) Creating a job for a batch deployment Important: You must have an existing batch deployment to create a batch job. To learn how to create a job for a batch deployment, see [Creating jobs in a batch deployment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/create-jobs.html). Viewing jobs in a space You can view all of the jobs that exist for your deployment space from the Jobs page. You can also delete a job. To view the details of a specific job, click the job. From the job's details page, you can do the following: * View the runs for that job and the status of each run. If a run failed, you can select the run and view the log tail or download the entire log file to help you troubleshoot the run. A failed run might be related to a temporary connection or environment problem. Try running the job again. If the job still fails, you can send the log to Customer Support. * When a job is running, a progress indicator on the information page displays information about relative progress of the run. You can use the progress indicator to monitor a long run. * Edit schedule settings or pick another environment template. * Run the job manually by clicking the run icon from the job action bar. You must deselect the schedule to run the job manually. Managing job metadata retention The Watson Machine Learning plan that is associated with your IBM Cloud account sets limits on the number of running and stored deployments that you can create. If you exceed your limit, you cannot create new deployments until you delete existing deployments or upgrade your plan. For more information, see [Watson Machine Learning plans](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wml-plans.html). Managing metadata retention and deletion programmatically If you are managing a job programmatically by using the Python client or REST API, you can retrieve metadata from the deployment endpoint by using the GET method during the 30 days. To keep the metadata for more or less than 30 days, change the query parameter from the default of retention=30 for the POST method to override the default and preserve the metadata. Note:Changing the value to retention=-1 cancels the auto-delete and preserves the metadata. To delete a job programmatically, specify the query parameter hard_delete=true for the Watson Machine Learning DELETE method to completely remove the job metadata. 
The following example shows how to use the DELETE method: DELETE /ml/v4/deployment_jobs/{JobsID} Learn from samples Refer to [Machine learning samples and examples](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-samples-overview.html) for links to sample notebooks that demonstrate creating batch deployments and jobs by using the Watson Machine Learning [REST API](https://cloud.ibm.com/apidocs/machine-learning) and Watson Machine Learning [Python client library](https://ibm.github.io/watson-machine-learning-sdk/). Parent topic:[Managing predictive deployments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-general.html)",how-to,1,validation
189F970CF3B162E67B98B2A928B36193169E3CAF,https://dataplatform.cloud.ibm.com/docs/content/wsd/dataview.html?context=cdpaas&locale=en,Working with your data (SPSS Modeler),"Working with your data (SPSS Modeler)
Working with your data To see a quick sample of a flow's data, right-click a node and select Preview. To more thoroughly examine your data, use a Charts node to launch the chart builder. With the chart builder, you can use advanced visualizations to explore your data from different perspectives and identify patterns, connections, and relationships within your data. You can also visualize your data with these same charts in a Data Refinery flow. Figure 1. Sample visualizations available for a flow ![Shows four example charts available in Visualizations](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/charts_thumbnail4.png) For more information, see [Visualizing your data](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/visualizations.html).",how-to,1,validation
2C9D0D0309E01FF2EE0D298A16011857DE068038,https://dataplatform.cloud.ibm.com/docs/content/dataview/chart_creation_charttypes.html?context=cdpaas&locale=en,Chart types,"Chart types
Chart types The gallery contains a collection of the most commonly used charts.",conceptual,0,validation
A4E9FAE09BE2F3C0191CBC14A56085B0773A2585,https://dataplatform.cloud.ibm.com/docs/content/wsj/satellite/satellite-connect-s3-bucket.html?context=cdpaas&locale=en,Accessing data in AWS through access points from a notebook,"Accessing data in AWS through access points from a notebook
Accessing data in AWS through access points from a notebook In IBM watsonx you can access data stored in AWS S3 buckets through access points from a notebook. Run the notebook in an environment in IBM watsonx. Create an internet-enabled access point to connect to the S3 bucket. Connecting to AWS S3 data through an internet-enabled access point You can access data in an AWS S3 bucket through an internet-enabled access point in any AWS region. To access S3 data through an internet-enabled access point: 1. Create an access point for your S3 bucket. See [Creating access points](https://docs.aws.amazon.com/AmazonS3/latest/dev/creating-access-points.html). Set the network origin to Internet. 2. After the access point is created, make a note of the Amazon resource name (ARN) for the access point. Example: ARN: arn:aws:s3:us-east-1:675068711478:accesspoint/cust-data-bucket-internet-ap. You will need to enter the ARN in your notebook. Accessing AWS S3 data from your notebook The following sample code snippet shows you how to access AWS data from your notebook by using an access point: import boto3 import pandas as pd use an access key and a secret that has access to the bucket access_key=""..."" secret=""..."" s3_client = boto3.client('s3', aws_access_key_id=access_key, aws_secret_access_key=secret) the Amazon resource name (ARN) of the access point arn = ""..."" the file you want to retrieve fileName=""customers.csv"" response = s3_client.get_object(Bucket=arn, Key=fileName) s3FileStream = response[""Body""] for other file types, change the line below to use the appropriate read_() method from pandas customerDF = pd.read_csv(s3FileStream) Parent topic:[Loading and accessing data in a notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/load-and-access-data.html)",how-to,1,validation
6402316FEBFAD11A582D9C567811003F4BEE596A,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/extension_export.html?context=cdpaas&locale=en,Extension Export node (SPSS Modeler),"Extension Export node (SPSS Modeler)
Extension Export node You can use the Extension Export node to run R scripts or Python for Spark scripts to export data.",conceptual,0,validation
355EA8BD00A0246EACFEF090AF6A6B6F2BD92D4F,https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-extract-sentiment.html?context=cdpaas&locale=en,Extracting sentiment with a custom transformer model,"Extracting sentiment with a custom transformer model
Extracting sentiment with a custom transformer model You can train your own models for sentiment extraction based on the Slate IBM Foundation model. This pretrained model can be fine-tuned for your use case by training it on your specific input data. The Slate IBM Foundation model is available only in Runtime 23.1. Note: Training transformer models is CPU and memory intensive. Depending on the size of your training data, the environment might not be large enough to complete the training. If you run into issues with the notebook kernel during training, create a custom notebook environment with a larger amount of CPU and memory, and use that to run your notebook. Use a GPU-based environment for training and also at inference time, if it is available to you. See [Creating your own environment template](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/create-customize-env-definition.html). * [Input data format for training](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-extract-sentiment.html?context=cdpaas&locale=eninput) * [Loading the pretrained model resources](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-extract-sentiment.html?context=cdpaas&locale=enload) * [Training the model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-extract-sentiment.html?context=cdpaas&locale=entrain) * [Applying the model on new data](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-extract-sentiment.html?context=cdpaas&locale=enapply) Input data format for training You need to provide a training and development data set to the training function. The development data is usually around 10% of the training data. Each training or development sample is represented as a JSON object. It must have a text and a labels field. The text represents the training example text, and the labels field is an array, which contains exactly one label of positive, neutral, or negative. The following is an example of an array with sample training data: [ { ""text"": ""I am happy"", ""labels"": [""positive""] }, { ""text"": ""I am sad"", ""labels"": [""negative""] }, { ""text"": ""The sky is blue"", ""labels"": [""neutral""] } ] The training and development data sets are created as data streams from arrays of JSON objects. To create the data streams, you might use the utility method prepare_data_from_json: import watson_nlp from watson_nlp.toolkit.sentiment_analysis_utils.training import train_util as utils training_data_file = ""train_data.json"" dev_data_file = ""dev_data.json"" train_stream = utils.prepare_data_from_json(training_data_file) dev_stream = utils.prepare_data_from_json(dev_data_file) Loading the pretrained model resources The pretrained Slate IBM Foundation model needs to be loaded before it is passed to the training algorithm. In addition, you need to load the syntax analysis models for the languages that are used in your input texts. 
To load the model: Load the pretrained Slate IBM Foundation model pretrained_model_resource = watson_nlp.load('pretrained-model_slate.153m.distilled_many_transformer_multilingual_uncased') Download relevant syntax analysis models syntax_model_en = watson_nlp.load('syntax_izumo_en_stock') syntax_model_de = watson_nlp.load('syntax_izumo_de_stock') Create a list of all syntax analysis models syntax_models = [syntax_model_en, syntax_model_de] Training the model For all options that are available for configuring sentiment transformer training, enter: help(watson_nlp.workflows.sentiment.AggregatedSentiment.train_transformer) The train_transformer method creates a workflow model, which automatically runs syntax analysis and the trained sentiment classification. In a subsequent step, enable language detection so that the workflow model can run on input text without any prerequisite information. The following is a sample call using the input data and pretrained model from the previous section (Training the model): from watson_nlp.workflows.sentiment import AggregatedSentiment sentiment_model = AggregatedSentiment.train_transformer( train_data_stream = train_stream, dev_data_stream = dev_stream, syntax_model=syntax_models, pretrained_model_resource=pretrained_model_resource, label_list=['negative', 'neutral', 'positive'], learning_rate=2e-5, num_train_epochs=10, combine_approach=""NON_NEUTRAL_MEAN"", keep_model_artifacts=True ) lang_detect_model = watson_nlp.load('lang-detect_izumo_multi_stock') sentiment_model.enable_lang_detect(lang_detect_model) Applying the model on new data After you train the model on a data set, apply the model on new data by using the run() method, as you would use on any of the existing pre-trained blocks. Sample code: input_text = 'new input text' sentiment_predictions = sentiment_model.run(input_text) Parent topic:[Creating your own models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-create-model_cloud.html)",how-to,1,validation
22D15F386DC333BC069EEA8671E895C97956E754,https://dataplatform.cloud.ibm.com/docs/content/wsj/spark/time-series-lib-using.html?context=cdpaas&locale=en,Using the time series library,"Using the time series library
Using the time series library To get started working with the time series library, import the library to your Python notebook or application. Use this command to import the time series library: Import the package import tspy Creating a time series To create a time series and use the library functions, you must decide on the data source. Supported data sources include: * In-memory lists * pandas DataFrames * In-memory collections of observations (using the ObservationCollection construct) * User-defined readers (using the TimeSeriesReader construct) The following example shows ingesting data from an in-memory list: ts = tspy.time_series([5.0, 2.0, 4.0, 6.0, 6.0, 7.0]) ts The output is as follows: TimeStamp: 0 Value: 5.0 TimeStamp: 1 Value: 2.0 TimeStamp: 2 Value: 4.0 TimeStamp: 3 Value: 6.0 TimeStamp: 4 Value: 6.0 TimeStamp: 5 Value: 7.0 You can also operate on many time-series at the same time by using the MultiTimeSeries construct. A MultiTimeSeries is essentially a dictionary of time series, where each time series has its own unique key. The time series are not aligned in time. The MultiTimeSeries construct provides similar methods for transforming and ingesting as the single time series construct: mts = tspy.multi_time_series({ ""ts1"": tspy.time_series([1.0, 2.0, 3.0]), ""ts2"": tspy.time_series([5.0, 2.0, 4.0, 5.0]) }) The output is the following: ts2 time series ------------------------------ TimeStamp: 0 Value: 5.0 TimeStamp: 1 Value: 2.0 TimeStamp: 2 Value: 4.0 TimeStamp: 3 Value: 5.0 ts1 time series ------------------------------ TimeStamp: 0 Value: 1.0 TimeStamp: 1 Value: 2.0 TimeStamp: 2 Value: 3.0 Interpreting time By default, a time series uses a long data type to denote when a given observation was created, which is referred to as a time tick. A time reference system is used for time series with timestamps that are human interpretable. See [Using time reference system](https://dataplatform.cloud.ibm.com/docs/content/wsj/spark/time-reference-system.html). The following example shows how to create a simple time series where each index denotes a day after the start time of 1990-01-01: import datetime granularity = datetime.timedelta(days=1) start_time = datetime.datetime(1990, 1, 1, 0, 0, 0, 0, tzinfo=datetime.timezone.utc) ts = tspy.time_series([5.0, 2.0, 4.0, 6.0, 6.0, 7.0], granularity=granularity, start_time=start_time) ts The output is as follows: TimeStamp: 1990-01-01T00:00Z Value: 5.0 TimeStamp: 1990-01-02T00:00Z Value: 2.0 TimeStamp: 1990-01-03T00:00Z Value: 4.0 TimeStamp: 1990-01-04T00:00Z Value: 6.0 TimeStamp: 1990-01-05T00:00Z Value: 6.0 TimeStamp: 1990-01-06T00:00Z Value: 7.0 Performing simple transformations Transformations are functions which, when given one or more time series, return a new time series. 
For example, to segment a time series into windows where each window is of size=3, sliding by 2 records, you can use the following method: window_ts = ts.segment(3, 2) window_ts The output is as follows: TimeStamp: 0 Value: original bounds: (0,2) actual bounds: (0,2) observations: [(0,5.0),(1,2.0),(2,4.0)] TimeStamp: 2 Value: original bounds: (2,4) actual bounds: (2,4) observations: [(2,4.0),(3,6.0),(4,6.0)] This example shows adding 1 to each value in a time series: add_one_ts = ts.map(lambda x: x + 1) add_one_ts The output is as follows: TimeStamp: 0 Value: 6.0 TimeStamp: 1 Value: 3.0 TimeStamp: 2 Value: 5.0 TimeStamp: 3 Value: 7.0 TimeStamp: 4 Value: 7.0 TimeStamp: 5 Value: 8.0 Or you can temporally left join a time series, for example ts with another time series ts2: ts2 = tspy.time_series([1.0, 2.0, 3.0]) joined_ts = ts.left_join(ts2) joined_ts The output is as follows: TimeStamp: 0 Value: [5.0, 1.0] TimeStamp: 1 Value: [2.0, 2.0] TimeStamp: 2 Value: [4.0, 3.0] TimeStamp: 3 Value: [6.0, null] TimeStamp: 4 Value: [6.0, null] TimeStamp: 5 Value: [7.0, null] Using transformers A rich suite of built-in transformers is provided in the transformers package. Import the package to use the provided transformer functions: from tspy.builders.functions import transformers After you have added the package, you can transform data in a time series by using the transform method. For example, to perform a difference on a time-series: ts_diff = ts.transform(transformers.difference()) Here the output is: TimeStamp: 1 Value: -3.0 TimeStamp: 2 Value: 2.0 TimeStamp: 3 Value: 2.0 TimeStamp: 4 Value: 0.0 TimeStamp: 5 Value: 1.0 Using reducers Similar to the transformers package, you can reduce a time series by using methods provided by the reducers package. You can import the reducers package as follows: from tspy.builders.functions import reducers After you have imported the package, use the reduce method to get the average over a time-series, for example: avg = ts.reduce(reducers.average()) avg This outputs: 5.0 Reducers have a special property that enables them to be used alongside segmentation transformations (hourly sum, avg in the window prior to an error occurring, and others). Because the output of a segmentation + reducer is a time series, the transform method is used. For example, to segment into windows of size 3 and get the average across each window, use: avg_windows_ts = ts.segment(3).transform(reducers.average()) This results in: TimeStamp: 0 Value: 3.6666666666666665 TimeStamp: 1 Value: 4.0 TimeStamp: 2 Value: 5.333333333333333 TimeStamp: 3 Value: 6.333333333333333 Graphing time series Lazy evaluation is used when graphing a time series. When you graph a time series, you can",how-to,1,validation
774FD49C617DAC62F48EB31E08757E0AEC3D1282,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/matrix.html?context=cdpaas&locale=en,Matrix node (SPSS Modeler),"Matrix node (SPSS Modeler)
Matrix node Use the Matrix node to create a table that shows relationships between fields. It is most commonly used to show the relationship between two categorical fields (flag, nominal, or ordinal), but it can also be used to show relationships between continuous (numeric range) fields.",conceptual,0,validation
F495F5206C908FB1A31F18A8AB3CE9465164564C,https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-wdp.html?context=cdpaas&locale=en,Getting started with IBM watsonx as a Service,"Getting started with IBM watsonx as a Service
Getting started with IBM watsonx as a Service You can sign up for IBM watsonx.ai or IBM watsonx.governance and explore the tutorials, resources, and tools to immediately get started working with models or governing models. If you are an administrator, follow the steps to set up watsonx for your organization. Start working To start working: 1. If you haven't already, [sign up](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/signup-wx.html) for watsonx.ai or watsonx.governance. 2. Click a task tile on the watsonx home page and start working. For example, click Experiment with foundation models and build prompts to open the Prompt Lab. Then, choose a sample prompt and start experimenting. Your first project, where you save your work, is created automatically. See [Your sandbox project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/sandbox.html). 3. Explore your resources: * Take a [Quick start tutorial](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/quickstart-tutorials.html) * Click a category in the Samples area of the home page to try out a notebook, a prompt, or a sample project. If you are an existing Cloud Pak for Data as a Service user, you can [switch to watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/platform-switcher.html). Set up the platform as an administrator To set up the watsonx platform for your organization, see [Setting up the platform as an administrator](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/setup-platform.html). Learn about watsonx To understand watsonx, start with these resources: * [Overview of watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/overview-wx.html) * [Video library](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/videos-wx.html) * [Projects](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/manage-projects.html) * [Foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-overview.html) * [Read blogs on Medium and the IBM Community](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-help.htmlcommunity) Other information: * [Get help](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-help.html) * [Browser support](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/browser-support.html) * [Language support](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/localization.html) * [IBM watsonx APIs](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wdp-apis.html)",how-to,1,validation
D174298E1DD7898C08771488715D83FC7A7740AE,https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-pretrained.html?context=cdpaas&locale=en,Working with pre-trained models,"Working with pre-trained models
Working with pre-trained models Watson Natural Language Processing provides pre-trained models in over 20 languages. They are curated by a dedicated team of experts, and evaluated for quality on each specific language. These pre-trained models can be used in production environments without you having to worry about license or intellectual property infringements. Loading and running a model To load a model, you first need to know its name. Model names follow a standard convention encoding the type of model (like classification or entity extraction), type of algorithm (like BERT or SVM), language code, and details of the type system. To find the model that matches your needs, use the task catalog. See [Watson NLP task catalog](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-catalog.html). You can find the expected input for a given block class (for example, the Entity Mentions model) by using help() on the block class run() method: import watson_nlp help(watson_nlp.blocks.keywords.TextRank.run) Watson Natural Language Processing encapsulates natural language functionality through blocks and workflows. Each block or workflow supports functions to: * load(): load a model * run(): run the model on input arguments * train(): train the model on your own data (not all blocks and workflows support training) * save(): save the model that has been trained on your own data Blocks Two types of blocks exist: * [Blocks that operate directly on the input document](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-pretrained.html?context=cdpaas&locale=enoperate-data) * [Blocks that depend on other blocks](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-pretrained.html?context=cdpaas&locale=enoperate-blocks) [Workflows](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-pretrained.html?context=cdpaas&locale=enworkflows) run one or more blocks on the input document, in a pipeline. Blocks that operate directly on the input document An example of a block that operates directly on the input document is the Syntax block, which performs natural language processing operations such as tokenization, lemmatization, part of speech tagging or dependency parsing. Example: running syntax analysis on a text snippet: import watson_nlp Load the syntax model for English syntax_model = watson_nlp.load('syntax_izumo_en_stock') Run the syntax model and print the result syntax_prediction = syntax_model.run('Welcome to IBM!') print(syntax_prediction) Blocks that depend on other blocks Blocks that depend on other blocks cannot be applied on the input document directly. They are applied on the output of one or more preceding blocks. For example, the Keyword Extraction block depends on the Syntax and Noun Phrases block. These blocks can be loaded but can only be run in a particular order on the input document. For example: import watson_nlp text = ""Anna went to school at University of California Santa Cruz. 
Anna joined the university in 2015."" Load Syntax, Noun Phrases and Keywords models for English syntax_model = watson_nlp.load('syntax_izumo_en_stock') noun_phrases_model = watson_nlp.load('noun-phrases_rbr_en_stock') keywords_model = watson_nlp.load('keywords_text-rank_en_stock') Run the Syntax and Noun Phrases models syntax_prediction = syntax_model.run(text, parsers=('token', 'lemma', 'part_of_speech')) noun_phrases = noun_phrases_model.run(text) Run the keywords model keywords = keywords_model.run(syntax_prediction, noun_phrases, limit=2) print(keywords) Workflows Workflows are predefined end-to-end pipelines from a raw document to a final block, where all necessary blocks are chained as part of the workflow pipeline. For instance, the Entity Mentions block offered in Runtime 22.2 requires syntax analysis results, so the end-to-end process would be: input text -> Syntax analysis -> Entity Mentions -> Entity Mentions results. Starting with Runtime 23.1, you can call the Entity Mentions workflow. Refer to this sample: import watson_nlp Load the workflow model mentions_workflow = watson_nlp.load('entity-mentions_transformer-workflow_multilingual_slate.153m.distilled') Run the entity extraction workflow on the input text mentions_workflow.run('IBM announced new advances in quantum computing', language_code=""en"") Parent topic:[Watson Natural Language Processing library](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp.html)",how-to,1,validation
EEC0EB0502DEF7B7ADB112F8D7D4C38E1F6D9170,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_function_ref_numeric.html?context=cdpaas&locale=en,Numeric functions (SPSS Modeler),"Numeric functions (SPSS Modeler)
Numeric functions CLEM contains a number of commonly used numeric functions. CLEM numeric functions Table 1. CLEM numeric functions Function Result Description –NUM Number Used to negate NUM. Returns the corresponding number with the opposite sign. NUM1 + NUM2 Number Returns the sum of NUM1 and NUM2. NUM1 – NUM2 Number Returns the value of NUM2 subtracted from NUM1. NUM1 * NUM2 Number Returns the value of NUM1 multiplied by NUM2. NUM1 / NUM2 Number Returns the value of NUM1 divided by NUM2. INT1 div INT2 Number Used to perform integer division. Returns the value of INT1 divided by INT2. INT1 rem INT2 Number Returns the remainder of INT1 divided by INT2. For example, INT1 – (INT1 div INT2) * INT2. BASE ** POWER Number Returns BASE raised to the power POWER, where either may be any number (except that BASE must not be zero if POWER is zero of any type other than integer 0). If POWER is an integer, the computation is performed by successively multiplying powers of BASE. Thus, if BASE is an integer, the result will be an integer. If POWER is integer 0, the result is always a 1 of the same type as BASE. Otherwise, if POWER is not an integer, the result is computed as exp(POWER * log(BASE)). abs(NUM) Number Returns the absolute value of NUM, which is always a number of the same type. exp(NUM) Real Returns e raised to the power NUM, where e is the base of natural logarithms. fracof(NUM) Real Returns the fractional part of NUM, defined as NUM–intof(NUM). intof(NUM) Integer Truncates its argument to an integer. It returns the integer of the same sign as NUM and with the largest magnitude such that abs(INT) <= abs(NUM). log(NUM) Real Returns the natural (base e) logarithm of NUM, which must not be a zero of any kind. log10(NUM) Real Returns the base 10 logarithm of NUM, which must not be a zero of any kind. This function is defined as log(NUM) / log(10). negate(NUM) Number Used to negate NUM. Returns the corresponding number with the opposite sign. round(NUM) Integer Used to round NUM to an integer by taking intof(NUM+0.5) if NUM is positive or intof(NUM–0.5) if NUM is negative. sign(NUM) Number Used to determine the sign of NUM. This operation returns –1, 0, or 1 if NUM is an integer. If NUM is a real, it returns –1.0, 0.0, or 1.0, depending on whether NUM is negative, zero, or positive. sqrt(NUM) Real Returns the square root of NUM. NUM must be positive. sum_n(LIST) Number Returns the sum of values from a list of numeric fields or null if all of the field values are null. mean_n(LIST) Number Returns the mean value from a list of numeric fields or null if all of the field values are null. sdev_n(LIST) Number Returns the standard deviation from a list of numeric fields or null if all of the field values are null.",conceptual,0,validation
33923FE20855D3EA3850294C0FB447EC3F1B7BDF,https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/buildingmodels.html?context=cdpaas&locale=en,Decision Optimization experiments,"Decision Optimization experiments
Decision Optimization experiments If you use the Decision Optimization experiment UI, you can take advantage of its many features in this user-friendly environment. For example, you can create and solve models, produce reports, compare scenarios and save models ready for deployment with Watson Machine Learning. The Decision Optimization experiment UI facilitates workflow. Here you can: * Select and edit the data relevant for your optimization problem, see [Prepare data view](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.htmlModelBuilderInterface__section_preparedata) * Create, import, edit and solve Python models in the Decision Optimization experiment UI, see [Decision Optimization notebook tutorial](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Notebooks/solveModel.htmltask_mtg_n3q_m1b) * Create, import, edit and solve models expressed in natural language with the Modeling Assistant, see [Modeling Assistant tutorial](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Mdl_Assist/exhousebuild.htmlcogusercase) * Create, import, edit and solve OPL models in the Decision Optimization experiment UI, see [OPL models](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/OPLmodels.htmltopic_oplmodels) * Generate a notebook from your model, work with it as a notebook then reload it as a model, see [Generating a notebook from a scenario](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.htmlModelBuilderInterface__generateNB) and [Overview](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.htmlModelBuilderInterface__section_overview) * Visualize data and solutions, see [Explore solution view](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.htmlModelBuilderInterface__solution) * Investigate and compare solutions for multiple scenarios, see [Scenario pane](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.htmlModelBuilderInterface__scenariopanel) and [Overview](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.htmlModelBuilderInterface__section_overview) * Easily create and share reports with tables, charts and notes using widgets provided in the [Visualization Editor](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/Visualization.htmltopic_visualization) * Save models that are ready for deployment in Watson Machine Learning, see [Scenario pane](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.htmlModelBuilderInterface__scenariopanel) and [Overview](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.htmlModelBuilderInterface__section_overview) See the [Decision Optimization experiment UI comparison table](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/DOintro.htmlDOIntro__comparisontable) for a list of features available with and without the Decision Optimization experiment UI. See [Views and scenarios](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.htmlModelBuilderInterface) for a description of the user interface and scenario management.",conceptual,0,validation
3D9FB046D583A2D0177ECB4DA25EEAEB4FEBCCA9,https://dataplatform.cloud.ibm.com/docs/content/wsd/tutorials/tut_drug.html?context=cdpaas&locale=en,Drug treatment - exploratory graphs (SPSS Modeler),"Drug treatment - exploratory graphs (SPSS Modeler)
Drug treatment - exploratory graphs In this example, imagine you're a medical researcher compiling data for a study. You've collected data about a set of patients, all of whom suffered from the same illness. During their course of treatment, each patient responded to one of five medications. Part of your job is to use data mining to find out which drug might be appropriate for a future patient with the same illness. This example uses the flow named Drug Treatment - Exploratory Graphs, available in the example project . The data file is drug1n.csv. Figure 1. Drug treatment example flow ![Drug treatment example flow](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_data.png) The data fields used in this example are: Data field Description Age Age of patient (number) Sex M or F BP Blood pressure: HIGH, NORMAL, or LOW Cholesterol Blood cholesterol: NORMAL or HIGH Na Blood sodium concentration K Blood potassium concentration Drug Prescription drug to which a patient responded",conceptual,0,validation
4F0098CE544BA8AC594F98AF8DF26B7911399750,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/hdbscannuggetnodeslots.html?context=cdpaas&locale=en,hdbscannugget properties,"hdbscannugget properties
hdbscannugget properties You can use the HDBSCAN node to generate an HDBSCAN model nugget. The scripting name of this model nugget is hdbscannugget. No other properties exist for this model nugget. For more information on scripting the modeling node itself, see [hdbscannode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/hdbscannodeslots.htmlhdbscannodeslots).",conceptual,0,validation
0721692D3F363B864A241FC4644D7D57B2DFF881,https://dataplatform.cloud.ibm.com/docs/content/wsd/tutorials/tut_bandwidth_forecast_dates.html?context=cdpaas&locale=en,Defining the dates (SPSS Modeler),"Defining the dates (SPSS Modeler)
Defining the dates Now you need to change the storage type of the DATE_ field to date format. 1. Attach a Filler node to the Filter node, then double-click the Filler node to open its properties. 2. Add the DATE_ field, set the Replace option to Always, and set the Replace with value to to_date(DATE_). Figure 1. Setting the date storage type ![Setting the date storage type](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_bandwidth_forecast_date1.png)",how-to,1,validation
34FFE04319CE15E4451729B183C35F288A58A1B7,https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/data-usage-rights.html?context=cdpaas&locale=en,Data usage rights,"Data usage rights
Data usage rights ![icon for intellectual property risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-intellectual-property.svg)Risks associated with inputTraining and tuning phaseIntellectual propertyAmplified Description Terms of service, copyright laws, or other rules restrict the ability to use certain data for building models. Why is data usage rights a concern for foundation models? Laws and regulations concerning the use of data to train AI are unsettled and can vary from country to country, which creates challenges in the development of models. If data usage violates rules or restrictions, business entities might face fines, reputational harms, and other legal consequences. Example Text Copyright Infringement Claims According to the source article, bestselling novelists Sarah Silverman, Richard Kadrey, and Christopher Golden have sued Meta and OpenAI for copyright infringement. The article further stated that the authors had alleged the two tech companies had “ingested” text from their books into generative AI software (LLMs) and failed to give them credit or compensation. Sources: [Los Angeles Times, July 2023](https://www.latimes.com/entertainment-arts/books/story/2023-07-10/sarah-silverman-authors-sue-meta-openai-chatgpt-copyright-infringement) Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)",conceptual,0,validation
DE60E212953766B4698982B3B631D1A25A019F2E,https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/using-ibm-ws-lib.html?context=cdpaas&locale=en,Accessing project assets with ibm-watson-studio-lib,"Accessing project assets with ibm-watson-studio-lib
Accessing project assets with ibm-watson-studio-lib The ibm-watson-studio-lib library for Python and R contains a set of functions that help you to interact with IBM Watson Studio projects and project assets. You can think of the library as a programmatic interface to a project. Using the ibm-watson-studio-lib library, you can access project metadata and assets, including files and connections. The library also contains functions that simplify fetching files associated with the project. Next steps * Start using ibm-watson-studio-lib in new notebooks: * [ibm-watson-studio-lib for Python](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ws-lib-python.html) * [ibm-watson-studio-lib for R](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ws-lib-r.html) Parent topic:[Loading and accessing data in a notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/load-and-access-data.html)",conceptual,0,validation
F965BE0F67B8B3C26BE38939A33FA8AB74AEA4CC,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/kohonen.html?context=cdpaas&locale=en,Kohonen node (SPSS Modeler),"Kohonen node (SPSS Modeler)
Kohonen node Kohonen networks are a type of neural network that perform clustering, also known as a knet or a self-organizing map. This type of network can be used to cluster the dataset into distinct groups when you don't know what those groups are at the beginning. Records are grouped so that records within a group or cluster tend to be similar to each other, and records in different groups are dissimilar. The basic units are neurons, and they are organized into two layers: the input layer and the output layer (also called the output map). All of the input neurons are connected to all of the output neurons, and these connections have strengths, or weights, associated with them. During training, each unit competes with all of the others to ""win"" each record. The output map is a two-dimensional grid of neurons, with no connections between the units. Input data is presented to the input layer, and the values are propagated to the output layer. The output neuron with the strongest response is said to be the winner and is the answer for that input. Initially, all weights are random. When a unit wins a record, its weights (along with those of other nearby units, collectively referred to as a neighborhood) are adjusted to better match the pattern of predictor values for that record. All of the input records are shown, and weights are updated accordingly. This process is repeated many times until the changes become very small. As training proceeds, the weights on the grid units are adjusted so that they form a two-dimensional ""map"" of the clusters (hence the term self-organizing map). When the network is fully trained, records that are similar should be close together on the output map, whereas records that are vastly different will be far apart. Unlike most learning methods in watsonx.ai, Kohonen networks do not use a target field. This type of learning, with no target field, is called unsupervised learning. Instead of trying to predict an outcome, Kohonen nets try to uncover patterns in the set of input fields. Usually, a Kohonen net will end up with a few units that summarize many observations (strong units), and several units that don't really correspond to any of the observations (weak units). The strong units (and sometimes other units adjacent to them in the grid) represent probable cluster centers. Another use of Kohonen networks is in dimension reduction. The spatial characteristic of the two-dimensional grid provides a mapping from the k original predictors to two derived features that preserve the similarity relationships in the original predictors. In some cases, this can give you the same kind of benefit as factor analysis or PCA. Note that the method for calculating default size of the output grid is different from older versions of SPSS Modeler. The method will generally produce smaller output layers that are faster to train and generalize better. If you find that you get poor results with the default size, try increasing the size of the output grid on the Expert tab. Requirements. To train a Kohonen net, you need one or more fields with the role set to Input. Fields with the role set to Target, Both, or None are ignored. Strengths. You do not need to have data on group membership to build a Kohonen network model. You don't even need to know the number of groups to look for. Kohonen networks start with a large number of units, and as training progresses, the units gravitate toward the natural clusters in the data. 
You can look at the number of observations captured by each unit in the model nugget to identify the strong units, which can give you a sense of the appropriate number of clusters.",conceptual,0,validation
C52D7D525C33EB8FA5B5ACC8B16243223D78AC68,https://dataplatform.cloud.ibm.com/docs/content/wsj/synthetic/generate_data_sd.html?context=cdpaas&locale=en,Creating synthetic data from a custom data schema,"Creating synthetic data from a custom data schema
Creating synthetic data from a custom data schema Using the Synthetic Data Generator graphical editor flow tool, you can generate a structured synthetic data set based on meta data, automatically or with user-specified statistical distributions. You can define the data within each table column, their distributions, and any correlations. You can then export and review your synthetic data. Before you can use generate to create synthetic data, you need [to create a task](https://dataplatform.cloud.ibm.com/docs/content/wsj/synthetic/synthetic_data_overview_sd.htmlcreate-synthetic). 1. The Generate synthetic tabular data flow window opens. Select use case Create from custom data schema. Click Next. ![Generate synthetic tabular data flow window](https://dataplatform.cloud.ibm.com/docs/content/wsj/synthetic/images/sd-generate-flow.png) 2. Select Generate options. You can use the Synthetic Data Generator graphical editor flow tool to specify the number of rows and add columns. You can define properties and specify fields, storage types, statistical distributions, and distribution parameters. Click Next. ![Generate options](https://dataplatform.cloud.ibm.com/docs/content/wsj/synthetic/images/sd-generate-options.png) 3. Select Export data to select the export file name and type. For more information, see [Exporting data](https://dataplatform.cloud.ibm.com/docs/content/wsj/synthetic/export_data_sd.html). Click Next. ![Export data](https://dataplatform.cloud.ibm.com/docs/content/wsj/synthetic/images/sd-generate-export.png) 4. Select Review to check your selection and make any updates before generating your synthetic data. Click Save and run. ![Review data](https://dataplatform.cloud.ibm.com/docs/content/wsj/synthetic/images/sd-generate-review.png) Learn more [Creating synthetic data from production data](https://dataplatform.cloud.ibm.com/docs/content/wsj/synthetic/mask_mimic_data_sd.html)",how-to,1,validation
315971AE6C6A4EEDE13E9E1449B2A36F548B928F,https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-delete.html?context=cdpaas&locale=en,Deleting a deployment,"Deleting a deployment
Deleting a deployment Delete your deployment when you no longer need it to free up resources. You can delete a deployment from a deployment space, or programmatically, by using the Python client or Watson Machine Learning APIs. Deleting a deployment from a space To remove a deployment: 1. Open the Deployments page of your deployment space. 2. Choose Delete from the action menu for the deployment name. ![Deleting a deployment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/deploy-delete.png) Deleting a deployment by using the Python client Use the following method to delete the deployment. client.deployments.delete(deployment_uid) Returns a SUCCESS message. To check that the deployment was removed, you can list deployments and make sure that the deleted deployment is no longer listed. client.deployments.list() Returns: ---- ---- ----- ------- ------------- GUID NAME STATE CREATED ARTIFACT_TYPE ---- ---- ----- ------- ------------- Deleting a deployment by using the REST API Use the DELETE method for deleting a deployment. DELETE /ml/v4/deployments/{deployment_id} For more information, see [Delete](https://cloud.ibm.com/apidocs/machine-learningdeployments-delete). For example, see the following code snippet: curl --location --request DELETE 'https://us-south.ml.cloud.ibm.com/ml/v4/deployments/:deployment_id?space_id=<string>&version=2020-09-01' Parent topic:[Managing predictive deployments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-general.html)",how-to,1,validation
85F8B4292483C5747AB2436A2D5D5377F1F6CAB9,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/field_information.html?context=cdpaas&locale=en,Viewing or selecting values (SPSS Modeler),"Viewing or selecting values (SPSS Modeler)
Viewing or selecting values You can view field values from the Expression Builder. Note that data must be fully instantiated in an Import or Type node to use this feature, so that storage, types, and values are known. To view values for a field from the Expression Builder, select the required field and then use the Value list or perform a search with the Find in column Value field to find values for the selected field. You can then double-click a value to insert it into the current expression or list. For flag and nominal fields, all defined values are listed. For continuous (numeric range) fields, the minimum and maximum values are displayed.",how-to,1,validation
FBD84CB5A6901DDAF7412396F4C6CC190E1B7328,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/slot_parameters_common.html?context=cdpaas&locale=en,Common node properties,"Common node properties
Common node properties A number of properties are common to all nodes in SPSS Modeler. Common node properties Table 1. Common node properties Property name Data type Property description use_custom_name flag name string Read-only property that reads the name (either auto or custom) for a node on the canvas. custom_name string Specifies a custom name for the node. tooltip string annotation string keywords string Structured slot that specifies a list of keywords associated with the object (for example, [""Keyword1"" ""Keyword2""]). cache_enabled flag node_type source_supernode, process_supernode, terminal_supernode, or all node names as specified for scripting Read-only property used to refer to a node by type. For example, instead of referring to a node only by name, such as real_income, you can also specify the type, such as userinputnode or filternode. SuperNode-specific properties are discussed separately, as with all other nodes. See [SuperNode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/defining_slot_parameters_in_supernodes.htmldefining_slot_parameters_in_supernodes) for more information.",conceptual,0,validation
2D81FCD3E78A5CC7B435198A59522AE6BF8640ED,https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/survivalanalysis-guides.html?context=cdpaas&locale=en,SPSS predictive analytics survival analysis algorithms in notebooks,"SPSS predictive analytics survival analysis algorithms in notebooks
SPSS predictive analytics survival analysis algorithms in notebooks You can use non-parametric distribution fitting, parametric distribution fitting, or parametric regression modeling SPSS predictive analytics algorithms in notebooks. Non-Parametric Distribution Fitting Survival analysis analyzes data where the outcome variable is the time until the occurrence of an event of interest. The distribution of the event times is typically described by a survival function. Non-parametric Distribution Fitting (NPDF) provides an estimate of the survival function without making any assumptions concerning the distribution of the data. NPDF includes Kaplan-Meier estimation, life tables, and specialized extension algorithms to support left censored, interval censored, and recurrent event data. Python example code: from spss.ml.survivalanalysis import NonParametricDistributionFitting from spss.ml.survivalanalysis.params import DefinedStatus, Points, StatusItem npdf = NonParametricDistributionFitting(). setAlgorithm(""KM""). setBeginField(""time""). setStatusField(""status""). setStrataFields([""treatment""]). setGroupFields([""gender""]). setUndefinedStatus(""INTERVALCENSORED""). setDefinedStatus( DefinedStatus( failure=StatusItem(points = Points(""1"")), rightCensored=StatusItem(points = Points(""0"")))). setOutMeanSurvivalTime(True) npdfModel = npdf.fit(df) predictions = npdfModel.transform(data) predictions.show() Parametric Distribution Fitting Survival analysis analyzes data where the outcome variable is the time until the occurrence of an event of interest. The distribution of the event times is typically described by a survival function. Parametric Distribution Fitting (PDF) provides an estimate of the survival function by comparing the functions for several known distributions (exponential, Weibull, log-normal, and log-logistic) to determine which, if any, describes the data best. In addition, the distributions for two or more groups of cases can be compared. Python example code: from spss.ml.survivalanalysis import ParametricDistributionFitting from spss.ml.survivalanalysis.params import DefinedStatus, Points, StatusItem pdf = ParametricDistributionFitting(). setBeginField(""begintime""). setEndField(""endtime""). setStatusField(""status""). setFreqField(""frequency""). setDefinedStatus( DefinedStatus( failure=StatusItem(points=Points(""F"")), rightCensored=StatusItem(points=Points(""R"")), leftCensored=StatusItem(points=Points(""L""))) ). setMedianRankEstimation(""RRY""). setMedianRankObtainMethod(""BetaFDistribution""). setStatusConflictTreatment(""DERIVATION""). setEstimationMethod(""MRR""). setDistribution(""Weibull""). setOutProbDensityFunc(True). setOutCumDistFunc(True). setOutSurvivalFunc(True). setOutRegressionPlot(True). setOutMedianRankRegPlot(True). setComputeGroupComparison(True) pdfModel = pdf.fit(data) predictions = pdfModel.transform(data) predictions.show() Parametric regression modeling Parametric regression modeling (PRM) is a survival analysis technique that incorporates the effects of covariates on the survival times. PRM includes two model types: accelerated failure time and frailty. Accelerated failure time models assume that the relationship of the logarithm of survival time and the covariates is linear. Frailty, or random effects, models are useful for analyzing recurrent events, correlated survival data, or when observations are clustered into groups. 
PRM automatically selects the survival time distribution (exponential, Weibull, log-normal, or log-logistic) that best describes the survival times. Python example code: from spss.ml.survivalanalysis import ParametricRegression from spss.ml.survivalanalysis.params import DefinedStatus, Points, StatusItem prm = ParametricRegression(). setBeginField(""startTime""). setEndField(""endTime""). setStatusField(""status""). setPredictorFields([""age"", ""surgery"", ""transplant""]). setDefinedStatus( DefinedStatus( failure=StatusItem(points=Points(""0.0"")), intervalCensored=StatusItem(points=Points(""1.0"")))) prmModel = prm.fit(data) PMML = prmModel.toPMML() statXML = prmModel.statXML() predictions = prmModel.transform(data) predictions.show() Parent topic:[SPSS predictive analytics algorithms](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/spss-algorithms.html)",how-to,1,validation
909B04011F4C2211D6D945EC82217E3F89A79BD7,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/disable_nodes.html?context=cdpaas&locale=en,Disabling nodes in a flow (SPSS Modeler),"Disabling nodes in a flow (SPSS Modeler)
Disabling nodes in a flow You can disable process nodes that have a single input so that they're ignored when the flow runs. This saves you from having to remove or bypass the node and means you can leave it connected to the remaining nodes. You can still open and edit the node settings; however, any changes will not take effect until you enable the node again. For example, you might use a Filter node to filter several fields, and then build models based on the reduced data set. If you want to also build the same models without fields being filtered, to see if they improve the model results, you can disable the Filter node. When you disable the Filter node, the connections to the modeling nodes pass directly through from the Derive node to the Type node.",how-to,1,validation
CE7976AFE82E2D17EE1FA308570AFA42E0E91667,https://dataplatform.cloud.ibm.com/docs/content/wsd/tutorials/tut_autoflag_build.html?context=cdpaas&locale=en,Building the flow (SPSS Modeler),"Building the flow (SPSS Modeler)
Building the flow 1. Add a Data Asset node that points to pm_customer_train1.csv. 2. Add a Type node, and select response as the target field (Role = Target). Set the measure for this field to Flag. Figure 1. Setting the measurement level and role ![Setting the measurement level and role](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_autoflag_build_target.png) 3. Set the role to None for the following fields: customer_id, campaign, response_date, purchase, purchase_date, product_id, Rowid, and X_random. These fields will be ignored when you are building the model. 4. Click Read Values in the Type node to make sure that values are instantiated. As we saw earlier, our source data includes information about four different campaigns, each targeted to a different type of customer account. These campaigns are coded as integers in the data, so to make it easier to remember which account type each integer represents, let's define labels for each one. Figure 2. Choosing to specify values for a field ![Choosing to specify values for a field](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_autoflag_build_value.png) 5. On the row for the campaign field, click the entry in the Value mode column. 6. Choose Specify from the drop-down. Figure 3. Defining labels for the field values ![Defining labels for the field values](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_autoflag_build_labels.png) 7. Click the Edit icon in the column for the campaign field. Type the labels as shown for each of the four values. 8. Click OK. Now the labels will be displayed in output windows instead of the integers. 9. Attach a Table node to the Type node. 10. Right-click the Table node and select Run. 11. In the Outputs panel, double-click the table output to open it. 12. Click OK to close the output window. Although the data includes information about four different campaigns, you will focus the analysis on one campaign at a time. Since the largest number of records fall under the Premium account campaign (coded campaign=2 in the data), you can use a Select node to include only these records in the flow. Figure 4. Selecting records for a single campaign ![Selecting records for a single campaign](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_autoflag_build_select.png)",how-to,1,validation
A69DA07F8EE0529080646A4B1EAB45C1074AB683,https://dataplatform.cloud.ibm.com/docs/content/wsd/tutorials/tut_bandwidth_forecast_model.html?context=cdpaas&locale=en,Creating the model (SPSS Modeler),"Creating the model (SPSS Modeler)
Creating the model 1. Double-click the Time Series node to open its properties. 2. Under FIELDS, add all 5 of the markets to the Candidate Inputs lists. Also add the Total field to the Targets list. 3. Under BUILD OPTIONS - GENERAL, make sure the Expert Modeler method is selected using all default settings. Doing so enables the Expert Modeler to decide the most appropriate model to use for each time series. Figure 1. Choosing the Expert Modeler method for Time Series ![Choosing the Expert Modeler method for Time Series](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_bandwidth_forecast_expert.png) 4. Save the settings and then run the flow. A Time Series model nugget is generated. Attach it to the Time Series node. 5. Attach a Table node to the Time Series model nugget and run the flow again. Figure 2. Example flow showing Time Series modeling ![Example flow showing Time Series modeling](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_bandwidth_forecast_flow.png) There are now three new rows appended to the end of the original data. These are the rows for the forecast period, in this case January to March 2004. Several new columns are also present now. The $TS- columns are added by the Time Series node. The columns indicate the following for each row (that is, for each interval in the time series data): Column Description $TS-colname The generated model data for each column of the original data. $TSLCI-colname The lower confidence interval value for each column of the generated model data. $TSUCI-colname The upper confidence interval value for each column of the generated model data. $TS-Total The total of the $TS-colname values for this row. $TSLCI-Total The total of the $TSLCI-colname values for this row. $TSUCI-Total The total of the $TSUCI-colname values for this row. The most significant columns for the forecast operation are the $TS-Market_n, $TSLCI-Market_n, and $TSUCI-Market_n columns. In particular, these columns in the last three rows contain the user subscription forecast data and confidence intervals for each of the local markets.",how-to,1,validation
1DD1ED59E93DA4F6576E7EB1E420213AB34DD1DD,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/knn.html?context=cdpaas&locale=en,KNN node (SPSS Modeler),"KNN node (SPSS Modeler)
KNN node Nearest Neighbor Analysis is a method for classifying cases based on their similarity to other cases. In machine learning, it was developed as a way to recognize patterns of data without requiring an exact match to any stored patterns, or cases. Similar cases are near each other and dissimilar cases are distant from each other. Thus, the distance between two cases is a measure of their dissimilarity. Cases that are near each other are said to be ""neighbors."" When a new case (holdout) is presented, its distance from each of the cases in the model is computed. The classifications of the most similar cases – the nearest neighbors – are tallied and the new case is placed into the category that contains the greatest number of nearest neighbors. You can specify the number of nearest neighbors to examine; this value is called k. The pictures show how a new case would be classified using two different values of k. When k = 5, the new case is placed in category 1 because a majority of the nearest neighbors belong to category 1. However, when k = 9, the new case is placed in category 0 because a majority of the nearest neighbors belong to category 0. Nearest neighbor analysis can also be used to compute values for a continuous target. In this situation, the average or median target value of the nearest neighbors is used to obtain the predicted value for the new case.",conceptual,0,validation
035EF4A1D7C465E8A72ACC1C5C98198B4E95068B,https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-conditions.html?context=cdpaas&locale=en,Adding conditions to the pipeline,"Adding conditions to the pipeline
Adding conditions to the pipeline Add conditions to a pipeline to handle various scenarios. Configuring conditions for the pipeline As you create a pipeline, you can specify conditions that must be met before you run the pipeline. For example, you can set a condition that the output from a node must satisfy a particular condition before you proceed with the pipeline execution. To define a condition: 1. Hover over the link between two nodes. 2. Click Add condition. 3. Choose the type of condition: * [Condition Response](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-conditions.html?context=cdpaas&locale=ennode) checks a condition on the status of the previous node. * [Simple condition](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-conditions.html?context=cdpaas&locale=ensimple) is a no-code condition in the form of an if-then statement. * [Advanced condition](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-conditions.html?context=cdpaas&locale=enadvanced) uses expression code, providing the most features and flexibility. 4. Define and save your expression. ![Defining a condition](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/pipelines_adding_condition.gif) When you define your expression, a summary captures the condition and the expected result. For example: If Run AutoAI is Successful, then Create deployment node. When you return to the flow, you see an indicator that you defined a condition. Hover over the icon to edit or delete the condition. ![Viewing a successful condition](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/oflow-condition1.png) Configuring a condition based on node status If you select Condition Response as your condition type, the previous node status must satisfy at least one of these conditions to continue with the flow: * Completed - the node activity is completed without error. * Completed with warnings - the node activity is completed but with warnings. * Completed with errors - the node activity is completed, but with errors. * Failed - the node activity failed to complete. * Cancelled - the previous action or activity was canceled. Configuring a simple condition To configure a simple condition, choose the condition that must be satisfied to continue with the flow. 1. Optional: edit the default name. 2. Depending on the node, choose a variable from the drop-down options. For example, if you are creating a condition based on a Run AutoAI node, you can choose Model metric as the variable to base your condition on. 3. Based on the variable, choose an operator from: Equal to, Not equal to, Greater than, Less than, Greater than or equal to, Less than or equal to. 4. Specify the required value. For example, if you are basing a condition on an AutoAI metric, choose from the list of available metrics. 5. Optional: click the plus icon to add an And (all conditions must be met) or an Or (either condition must be met) to the expression to build a compound conditional statement. 6. Review the summary and save the condition. Configuring an advanced condition Use coding constructs to build a more complex condition. The next node runs when the condition is met. You build the advanced condition by using the expression builder. 1. Optional: edit the default name. 2. Add items from the Expression elements panel to the Expression canvas to build your condition.
You can also type your conditions, and the elements autocomplete as you type. 3. When your expression is complete, review the summary and save the condition. Learn more For more information on using the code editor to build an expression, see: * [Functions used in pipelines Expression Builder](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-expr-builder.html) Parent topic:[Creating a pipeline](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-create.html)",how-to,1,validation
3873A285DCB38EF4B4ED663BFA0DF4047AB7692D,https://dataplatform.cloud.ibm.com/docs/content/dataview/chart_creation_wordcloud.html?context=cdpaas&locale=en,Word cloud charts,"Word cloud charts
Word cloud charts Word cloud charts present data as words, where the size and placement of any individual word is determined by how it is weighted.",conceptual,0,validation
5F398F2A5F6A2E75B9376B755C3ECF4B7F18B149,https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/folder-asset.html?context=cdpaas&locale=en,Adding a connected folder asset to a project,"Adding a connected folder asset to a project
Adding a connected folder asset to a project You can create a connected folder asset based on a path within an IBM Cloud Object Storage system that is accessed through a connection. You can view the files and subfolders that share the path with the connected folder asset. The files that you can view within the connected folder asset are not themselves data assets. For example, you can create a connected folder asset for a path that contains news feeds that are continuously updated. Required permissions : You must have the Admin or Editor role in the project to add a connected folder asset. Watch this video to see how to add a connected folder asset in a project, then follow the steps below the video. Video disclaimer: Some minor steps and graphical elements in this video might differ from your platform. This video provides a visual method to learn the concepts and tasks in this documentation. To add a connected folder asset from a connection to a project: 1. If necessary, [create a connection asset](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html). Include an Access Key and a Secret Key to your IBM Cloud Object Storage connection to enable the downloading of files within the connected folder asset. If you're using an existing IBM Cloud Object Storage connection asset that doesn't have an Access Key and Secret Key, edit the connection asset and add them. 2. Click Import assets > Connected data. 3. Select an existing connection asset as the source of the data. 4. Select the folder you want and click Import. 5. Type a name and description. 6. Click Create. The connected folder asset appears on the project Assets page in the Data assets category. Click the connected folder asset name to view the contents of the connected folder asset. Click the eye (![eye icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/images/visibility-on.svg)) icon next to a file name to view the contents of the files within the folder that have these formats: * CSV * JSON * Parquet You can refine the files within a connected folder asset and then save the result as a data asset. While viewing the connected folder asset, select a file and then click Prepare data. You can view the files within the connected folder asset if the IBM Cloud Object Storage connection asset that's associated with the connected folder asset has an Access Key and a Secret Key (also known as HMAC credentials). For more information about HMAC credentials, see [IBM Cloud Object Storage Service credentials](https://console.bluemix.net/docs/services/cloud-object-storage/iam/service-credentials.htmlservice-credentials). Next steps * [Refining a file within the folder](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/refining_data.html) Parent topic:[Adding data to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/add-data-project.html)",how-to,1,validation
400E9E780D8A149530DF21E38B256B71BDA12D83,https://dataplatform.cloud.ibm.com/docs/content/wsd/tutorials/tut_classify_build.html?context=cdpaas&locale=en,Building the flow (SPSS Modeler),"Building the flow (SPSS Modeler)
Building the flow Figure 1. Example flow to classify customers using multinomial logistic regression ![Example flow to classify customers using multinomial logistic regression](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_classify.png) 1. Add a Data Asset node that points to telco.csv. 2. Add a Type node, double-click it to open its properties, and click Read Values. Make sure all measurement levels are set correctly. For example, most fields with values of 0.0 and 1.0 can be regarded as flags. Figure 2. Measurement levels ![Measurement levels](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_classify_measurement.png) Notice that gender is more correctly considered as a field with a set of two values, instead of a flag, so leave its measurement level as Nominal. 3. Set the role for the custcat field to Target. Leave the role for all other fields set to Input. 4. Since this example focuses on demographics, use a Filter node to include only the relevant fields: region, age, marital, address, income, ed, employ, retire, gender, reside, and custcat. Other fields will be excluded for the purpose of this analysis. To filter them out, in the Filter node properties, click Add Columns and select the fields to exclude. Figure 3. Filtering on demographic fields ![Filtering on demographic fields](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_classify_filter.png) (Alternatively, you could change the role to None for these fields rather than excluding them, or select the fields you want to use in the modeling node.) 5. In the Logistic node properties, under MODEL SETTINGS, select the Stepwise method. Also select Multinomial, Main Effects, and Include constant in equation. Figure 4. Example flow to classify customers using multinomial logistic regression ![Example flow to classify customers using multinomial logistic regression](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_classify_logistic.png) 6. Under EXPERT OPTIONS, select Expert mode, expand the Output section, and select Classification table. Figure 5. Example flow to classify customers using multinomial logistic regression ![Example flow to classify customers using multinomial logistic regression](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_classify_output.png)",how-to,1,validation
C9FB652C433A0A0BC419CBFE4ECC3680252D2FE3,https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/copyright-infringement.html?context=cdpaas&locale=en,Copyright infringement,"Copyright infringement
Copyright infringement ![icon for intellectual property risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-intellectual-property.svg) Risks associated with output. Intellectual property. New. Description Generative AI output that is too similar or identical to existing work risks claims of copyright infringement. Uncertainty and variability around the ownership, copyrightability, and patentability of output generated by AI increases the risk of copyright infringement problems. Why is copyright infringement a concern for foundation models? Laws and regulations concerning the use of content that looks the same as, or closely similar to, other copyrighted data are largely unsettled and can vary from country to country, providing challenges in determining and implementing compliance. Business entities could face fines, reputational harms, and other legal consequences. Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)",conceptual,0,validation
24D2987869B1C8C34EFA1204903A7A8F3E35D459,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/type_values_geo.html?context=cdpaas&locale=en,Specifying values for geospatial data (SPSS Modeler),"Specifying values for geospatial data (SPSS Modeler)
Specifying values for geospatial data Geospatial fields display geospatial data that's in a list. For the Geospatial measurement level, you can use various options to set the measurement level of the elements within the list. Type. Select the measurement sublevel of the geospatial field. The available sublevels are determined by the depth of the list field. The defaults are: Point (zero depth), LineString (depth of one), and Polygon (depth of one). For more information about sublevels, see [Geospatial measurement sublevels](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/type_levels_geo.html). Coordinate system. This option is only available if you changed the measurement level to Geospatial from a non-geospatial level. To apply a coordinate system to your geospatial data, select this option. To use a different coordinate system, click Change.",how-to,1,validation
DECCA51BACC7BE33F484D36177B24C4BD0FE4CFD,https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Notebooks/preparedataIO.html?context=cdpaas&locale=en,Decision Optimization input and output data,"Decision Optimization input and output data
Input and output data You can access the input and output data you defined in the experiment UI by using the following dictionaries. The data that you imported in the Prepare data view in the experiment UI is accessible from the input dictionary. You must define each table by using the syntax inputs['tablename']. For example, here food is an entity that is defined from the table called diet_food: food = inputs['diet_food'] Similarly, to show tables in the Explore solution view of the experiment UI you must specify them using the syntax outputs['tablename']. For example, outputs['solution'] = solution_df defines an output table that is called solution. The entity solution_df in the Python model defines this table. You can find this Diet example in the Model_Builder folder of the [DO-samples](https://github.com/IBMDecisionOptimization/DO-Samples). To import and run (solve) it in the experiment UI, see [Solving and analyzing a model: the diet problem](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Notebooks/solveModel.htmltask_mtg_n3q_m1b).",conceptual,0,validation
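To make the inputs and outputs dictionaries concrete, the following is a minimal sketch of a Decision Optimization Python model that reads the diet_food table from inputs and writes a solution table to outputs. It assumes the docplex library, which is available in Decision Optimization environments, and illustrative column names (name, unit_cost, qmin, qmax); the inputs and outputs dictionaries are provided by the experiment runtime, so the snippet runs only inside a Decision Optimization model.

import pandas as pd
from docplex.mp.model import Model

food = inputs['diet_food']  # table defined in the Prepare data view

mdl = Model(name='diet')
# One decision variable per food item: the number of servings to buy.
qty = {row['name']: mdl.continuous_var(lb=row.get('qmin', 0), ub=row.get('qmax', 10), name=row['name'])
       for _, row in food.iterrows()}
# Minimize the total cost of the selected servings.
mdl.minimize(mdl.sum(qty[row['name']] * row['unit_cost'] for _, row in food.iterrows()))
mdl.solve()

solution_df = pd.DataFrame([{'food': name, 'servings': var.solution_value} for name, var in qty.items()])
outputs['solution'] = solution_df  # appears in the Explore solution view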
C0CC7AE4029730B9846B6A05F4160643D3A8C393,https://dataplatform.cloud.ibm.com/docs/content/wsd/spss-comments.html?context=cdpaas&locale=en,Adding comments and annotations to SPSS Modeler flows,"Adding comments and annotations to SPSS Modeler flows
You may need to describe a flow to others in your organization. To help you do this, you can attach explanatory comments to nodes and model nuggets. Others can then view these comments on-screen, or you might even print out an image of the flow that includes the comments. You can also add notes in the form of text annotations to nodes and model nuggets by means of the Annotations tab in a node's properties. These annotations are visible only when the Annotations tab is open.",how-to,1,validation
CDF460B2BB910F74723297BCB8E940BF370C6FFD,https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-input-scikit.html?context=cdpaas&locale=en,Batch deployment input details for Scikit-learn and XGBoost models,"Batch deployment input details for Scikit-learn and XGBoost models
Batch deployment input details for Scikit-learn and XGBoost models Follow these rules when you are specifying input details for batch deployments of Scikit-learn and XGBoost models. Data type summary table: Data Description Type inline, data references File formats CSV, .zip archive that contains CSV files Data source If you are specifying input/output data references programmatically: * Data source reference type depends on the asset type. Refer to the Data source reference types section in [Adding data assets to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html). Notes: * The environment variables parameter of deployment jobs is not applicable. * For connections of type [Cloud Object Storage](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cos.html) or [Cloud Object Storage (infrastructure)](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cos-infra.html), you must configure Access key and Secret key, also known as [HMAC credentials](https://cloud.ibm.com/docs/cloud-object-storage?topic=cloud-object-storage-uhc-hmac-credentials-main). Parent topic:[Batch deployment input details by framework](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-input-by-framework.html)",conceptual,0,validation
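For orientation, the following is a minimal sketch of submitting inline input data to a batch deployment of a Scikit-learn model with the Watson Machine Learning Python client. The credentials, space ID, deployment ID, and field names are placeholders, and the exact client version that you have installed may vary.

from ibm_watson_machine_learning import APIClient

# Placeholder credentials and IDs; replace with your own values.
client = APIClient({"url": "https://us-south.ml.cloud.ibm.com", "apikey": "<api_key>"})
client.set.default_space("<space_id>")

# Inline input data: fields are the column headers, values are the rows.
job_payload = {
    client.deployments.ScoringMetaNames.INPUT_DATA: [{
        "fields": ["feature_1", "feature_2", "feature_3"],
        "values": [[5.1, 3.5, 1.4], [6.2, 2.9, 4.3]]
    }]
}
job_details = client.deployments.create_job("<batch_deployment_id>", meta_props=job_payload)
print(job_details["metadata"]["id"])  # job ID that you can poll for results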
D5FAFC625D1A1D0793D9521351E9B59A04AF00E9,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/missingvalues_overview.html?context=cdpaas&locale=en,Missing data values (SPSS Modeler),"Missing data values (SPSS Modeler)
Missing data values During the data preparation phase of data mining, you will often want to replace missing values in the data. Missing values are values in the data set that are unknown, uncollected, or incorrectly entered. Usually, such values aren't valid for their fields. For example, the field Sex should contain the values M and F. If you discover the values Y or Z in the field, you can safely assume that such values aren't valid and should therefore be interpreted as blanks. Likewise, a negative value for the field Age is meaningless and should also be interpreted as a blank. Frequently, such obviously wrong values are purposely entered, or fields are left blank, during a questionnaire to indicate a nonresponse. At times, you may want to examine these blanks more closely to determine whether a nonresponse, such as the refusal to give one's age, is a factor in predicting a specific outcome. Some modeling techniques handle missing data better than others. For example, the [C5.0 node](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/c50.html) and the [Apriori node](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/apriori.html) cope well with values that are explicitly declared as ""missing"" in a [Type node](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/type.html). Other modeling techniques have trouble dealing with missing values and experience longer training times, resulting in less-accurate models. There are several types of missing values recognized by SPSS Modeler: * Null or system-missing values. These are nonstring values that have been left blank in the database or source file and have not been specifically defined as ""missing"" in an [Import](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/_nodes_import.html) or Type node. System-missing values are displayed as $null$. Note that empty strings are not considered nulls in SPSS Modeler, although they may be treated as nulls by certain databases. * Empty strings and white space. Empty string values and white space (strings with no visible characters) are treated as distinct from null values. Empty strings are treated as equivalent to white space for most purposes. For example, if you select the option to treat white space as blanks in an Import or Type node, this setting applies to empty strings as well. * Blank or user-defined missing values. These are values such as unknown, 99, or –1 that are explicitly defined in an Import node or Type node as missing. Optionally, you can also choose to treat nulls and white space as blanks, which allows them to be flagged for special treatment and to be excluded from most calculations. For example, you can use the @BLANK function to treat these values, along with other types of missing values, as blanks. Reading in mixed data. Note that when you're reading in fields with numeric storage (either integer, real, time, timestamp, or date), any non-numeric values are set to null or system missing. This is because, unlike some applications, SPSS Modeler doesn't allow mixed storage types within a field. To avoid this, you should read in any fields with mixed data as strings by changing the storage type in the Import node or external application as necessary. Reading empty strings from Oracle. When reading from or writing to an Oracle database, be aware that, unlike SPSS Modeler and unlike most other databases, Oracle treats and stores empty string values as equivalent to null values. 
This means that the same data extracted from an Oracle database may behave differently than when extracted from a file or another database, and the data may return different results.",conceptual,0,validation
2EF8007555BC60CD700BA44ECC0FAFAA024F4BC0,https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/jailbreaking.html?context=cdpaas&locale=en,Jailbreaking,"Jailbreaking
Jailbreaking ![icon for multi-category risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-multi-category.svg) Risks associated with input. Inference. Multi-category. Amplified. Description An attack that attempts to break through the guardrails established in the model is known as jailbreaking. Why is jailbreaking a concern for foundation models? Jailbreaking attacks can be used to alter model behavior and benefit the attacker. If not properly controlled, business entities can face fines, reputational harm, and other legal consequences. Example Bypassing LLM guardrails A [study](https://arxiv.org/abs/2307.15043) from researchers at Carnegie Mellon University, The Center for AI Safety, and the Bosch Center for AI claims to have discovered a simple prompt addendum that allowed the researchers to trick models into answering dangerous or sensitive questions. The addendum is simple enough to be automated and used for a wide range of commercial and open-source products, including ChatGPT, Google Bard, Meta’s LLaMA, Vicuna, Claude, and others. According to the paper, the researchers were able to use the additions to reliably coax forbidden answers for Vicuna (99%), ChatGPT 3.5 and 4.0 (up to 84%), and PaLM-2 (66%). Sources: [SC Magazine, July 2023](https://www.scmagazine.com/news/researchers-find-universal-jailbreak-prompts-for-multiple-ai-chat-models) [The New York Times, July 2023](https://www.nytimes.com/2023/07/27/business/ai-chatgpt-safety-research.html) Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)",conceptual,0,validation
D1AFA9BB4E0475A56190DC8254E004308BEA484D,https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/creating-notebooks.html?context=cdpaas&locale=en,Creating notebooks,"Creating notebooks
Creating notebooks You can add a notebook to your project by using one of these methods: creating a notebook file or copying a sample notebook from the Samples. Required permissions : You must have the Admin or Editor role in the project to create a notebook. Watch this short video to learn the basics of Jupyter notebooks. This video provides a visual method to learn the concepts and tasks in this documentation. Creating a notebook file in the notebook editor To create a notebook file in the notebook editor: 1. From your project, click New asset > Work with data and models in Python or R notebooks. 2. On the New Notebook page, specify the method to use to create your notebook. You can create a blank notebook, upload a notebook file from your file system, or upload a notebook file from a URL: * The notebook file you select to upload must follow these requirements: * The file type must be .ipynb. * The file name must not exceed 255 characters. * The file name must not contain these characters: < > : ” / | ( ) ? * The URL must be a public URL that is shareable and doesn't require authentication. ![Notebook options](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/createnotebook.png) 3. Specify the runtime environment for the language you want to use (Python or R). You can select a provided environment template or an environment template which you created and configured under Templates on the Environments page on the Manage tab of your project. For more information on environments, see [Notebook environments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-environments.html). 4. Click Create Notebook. The notebook opens in edit mode. Note that the time that it takes to create a new notebook or to open an existing one for editing might vary. If no runtime container is available, a container needs to be created and only after it is available, the Jupyter notebook user interface can be loaded. The time it takes to create a container depends on the cluster load and size. Once a runtime container exists, subsequent calls to open notebooks will be significantly faster. The opened notebook is locked by you. For more information, see [Locking and unlocking notebooks](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/creating-notebooks.html?context=cdpaas&locale=enlocking-and-unlocking). 5. Tell the service to trust your notebook content and execute all cells. When a new notebook is opened in edit mode, the notebook is considered to be untrusted by the Jupyter service by default. When you run an untrusted notebook, content deemed untrusted will not be executed. Untrusted content includes any Javascript, HTML or Javascript in Markdown cells or in any output cells that you did not generate. 1. Click Not Trusted in the upper right corner of the notebook. 2. Click Trust to execute all cells. Adding a notebook from the Samples Notebooks from the Samples are based on real-world scenarios and contain many useful examples of computations and visualizations that you can adapt to your analysis needs. To copy a sample notebook: 1. In the main menu, click Samples, then filter for Notebooks to show only notebook cards. 2. Find the card for the sample notebook you want, and click the card. You can view the notebook contents to browse the steps and the code that it contains. 3. To work with a copy of the sample notebook, click Add to project. 4. Choose the project for the notebook, and click Add. 5. 
Optional: Change the name and description for the notebook. 6. Specify the runtime environment. If you created an environment template on the Environments page of your project, it will display in the list of runtimes you can select from. 7. Click Create. The notebook opens in edit mode and is locked by you. Locking the file avoids possible merge conflicts that might be caused by competing changes to the file. To get familiar with the structure of a notebook, see [Parts of a notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/parts-of-a-notebook.html). Locking and unlocking notebooks If you open a notebook in edit mode, this notebook is locked by you. While you hold the lock, only you can make changes to the notebook. All other projects users will see the lock icon on the notebook. Only project administrators are able to unlock a locked notebook and open it in edit mode. When you close the notebook, the lock is released and another user can select to open the notebook in edit mode. Note that you must close the notebook while the runtime environment is still active. The notebook lock can't be released for you if the runtime was stopped or is in idle state. If the notebook lock is not released for you, you can unlock the notebook from the project's Assets page. Locking the file avoids possible merge conflicts that might be caused by competing changes to the file. Finding your notebooks You can find and open notebooks from the",how-to,1,validation
DD88591C39C90F2CF211C3EE3330B7E7939C3472,https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/decision-bias.html?context=cdpaas&locale=en,Decision bias,"Decision bias
Decision bias ![icon for fairness risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-fairness.svg) Risks associated with output. Fairness. New. Description Decision bias occurs when one group is unfairly advantaged over another due to decisions of the model. This bias can result from bias in the training data or as an unintended consequence of how the model was trained. Why is decision bias a concern for foundation models? Bias can harm persons affected by the decisions of the model. Business entities could face fines, reputational harms, and other legal consequences. Example Unfair health risk assignment for black patients A study on racial bias in health algorithms estimated that racial bias reduces the number of black patients identified for extra care by more than half. The study found that bias occurred because the algorithm used health costs as a proxy for health needs. Less money is spent on black patients who have the same level of need, and the algorithm thus falsely concludes that black patients are healthier than equally sick white patients. Sources: [Science, October 2019](https://www.science.org/doi/10.1126/science.aax2342) [American Civil Liberties Union, 2022](https://www.aclu.org/news/privacy-technology/algorithms-in-health-care-may-worsen-medical-racism::text=In%202019%2C%20a%20bombshell%20study,recommended%20for%20the%20same%20care) Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)",conceptual,0,validation
F4A482326D45DC729EB8D1A6735CEFACD7AE5578,https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-online.html?context=cdpaas&locale=en,Creating online deployments in Watson Machine Learning,"Creating online deployments in Watson Machine Learning
Creating online deployments in Watson Machine Learning Create an online (also called Web service) deployment to load a model or Python code when the deployment is created to generate predictions online, in real time. For example, if you create a classification model to test whether a new customer is likely to participate in a sales promotion, you can create an online deployment for the model. Then, you can enter the new customer data to get an immediate prediction. Supported frameworks Online deployment is supported for these frameworks: * PMML * Python Function * PyTorch-Onnx * Tensorflow * Scikit-Learn * Spark MLlib * SPSS * XGBoost You can create an online deployment [from the user interface](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-online.html?context=cdpaas&locale=enonline-interface) or [programmatically](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-online.html?context=cdpaas&locale=enonline-programmatically). To send payload data to an asset that is deployed online, you must know the endpoint URL of the deployment. Examples include, classification of data, or making predictions from the data. For more information, see [Retrieving the deployment endpoint](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-online.html?context=cdpaas&locale=enget-online-endpoint). Additionally, you can: * [Test your online deployment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-online.html?context=cdpaas&locale=entest-online-deployment) * [Access the deployment details](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-online.html?context=cdpaas&locale=enaccess-online-details) Creating an online deployment from the User Interface 1. From the deployment space, click the name of the asset that you want to deploy. The details page opens. 2. Click New deployment. 3. Choose Online as the deployment type. 4. Provide a name and an optional description for the deployment. 5. If you want to specify a name to be used instead of deployment ID, use the Serving name field. * The name must be validated to be unique per IBM cloud region (all names in a specific region share a global namespace). * The name must contain only these characters: [a-z,0-9,_] and must be a maximum 36 characters long. * Serving name works only as part of the prediction URL. In some cases, you must still use the deployment ID. 6. Click Create to create the deployment. Creating an online deployment programmatically Refer to [Machine learning samples and examples](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-samples-overview.html) for links to sample notebooks. These notebooks demonstrate creating online deployments that use the Watson Machine Learning [REST API](https://cloud.ibm.com/apidocs/machine-learning) and Watson Machine Learning [Python client library](https://ibm.github.io/watson-machine-learning-sdk/). Retrieving the online deployment endpoint You can find the endpoint URL of a deployment in these ways: * From the Deployments tab of your space, click your deployment name. A page with deployment details opens. You can find the endpoint there. * Using the Watson Machine Learning Python client: 1. List the deployments by calling the [Python client method](https://ibm.github.io/watson-machine-learning-sdk/core_api.htmlclient.Deployments.list)client.deployments.list() 2. Find the row with your deployment. The deployment endpoint URL is listed in the url column. 
Notes: * If you added Serving name to the deployment, two alternative endpoint URLs show on the screen; one containing the deployment ID, and the other containing your serving name. You can use either one of these URLs with your deployment. * The API Reference tab also shows code snippets in various programming languages that illustrate how to access the deployment. For more information, see [Endpoint URLs](https://cloud.ibm.com/apidocs/machine-learningendpoint-url). Testing your online deployment From the Deployments tab of your space, click your deployment name. A page with deployment details opens. The Test tab provides a place where you can enter data and get a prediction back from the deployed model. If your model has a defined schema, a form shows on screen. In the form, you can enter data in one of these ways: * Enter data directly in the form * Download a CSV template, enter values, and upload the input data * Upload a file that contains input data from your local file system or from the space * Change to the JSON tab and enter your input data as JSON code Regardless of method, the input data must match the schema of the model. Submit the input data and get a score, or prediction, back. Sample deployment code When you submit JSON code as the payload, or input data, for a deployment, your input data must match the schema of the model. The 'fields' must match the column headers for the data, and the 'values' must contain the data, in the same order. Use this format: {""input_data"":[{ ""fields"": [<field1>, <field2>, ...], ""values"": [[<value1>, <value2>, ...]] }]} Refer to this example: {""input_data"":[{ ""fields"": [""PassengerId"",""Pclass"",""Name"",""Sex"",""Age"",""SibSp"",""Parch"",""Ticket"",""Fare"",""Cabin"",""Embarked""], ""values"": [[1,3,""Braund, Mr. Owen Harris"",0,22,1,0,""A/5 21171"",7.25,null,""S""]] }]} Notes: * All strings are enclosed in double quotation marks. The Python notation for dictionaries looks similar, but Python strings in single quotation marks are not accepted in the JSON data. * Missing values can be indicated with null. * You can specify a hardware specification for an online deployment, for example if you are [scaling a deployment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-scaling.html). Preparing payload that matches the schema of an existing model Refer to this sample code:
model_details = client.repository.get_details(""<model_id>"")  # retrieves details, including the schema
columns_in_schema = []
for i in range(0, len(model_details['entity']['input'].get('fields'))):
    columns_in_schema.append(model_details['entity']['input'].get('fields')[i])
X = X[columns_in_schema]
where X is a pandas DataFrame that contains the input data.",how-to,1,validation
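As a programmatic counterpart to the JSON payload example above, the following is a minimal sketch that sends the same Titanic-style payload to an online deployment with the Watson Machine Learning Python client. The credentials, space ID, and deployment ID are placeholders.

from ibm_watson_machine_learning import APIClient

# Placeholder credentials and IDs; replace with your own values.
client = APIClient({"url": "https://us-south.ml.cloud.ibm.com", "apikey": "<api_key>"})
client.set.default_space("<space_id>")

payload = {"input_data": [{
    "fields": ["PassengerId", "Pclass", "Name", "Sex", "Age", "SibSp",
               "Parch", "Ticket", "Fare", "Cabin", "Embarked"],
    # None is serialized as JSON null for the missing Cabin value.
    "values": [[1, 3, "Braund, Mr. Owen Harris", 0, 22, 1, 0, "A/5 21171", 7.25, None, "S"]]
}]}
response = client.deployments.score("<deployment_id>", payload)
print(response["predictions"])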
E777A9C7D0450D572431F168374224179C1AE7C4,https://dataplatform.cloud.ibm.com/docs/content/dataview/chart_creation_multiseries.html?context=cdpaas&locale=en,Multiple series charts,"Multiple series charts
Multiple series charts Multiple series charts are similar to line charts, with the exception that you can chart multiple variables on the Y-axis.",conceptual,0,validation
B3FFE77064106EE619C664233B7B7A9ABA75C30A,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/webnodeslots.html?context=cdpaas&locale=en,webnode properties,"webnode properties
webnode properties ![Web node icon](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/images/webnodeicon.png)The Web node illustrates the strength of the relationship between values of two or more symbolic (categorical) fields. The graph uses lines of various widths to indicate connection strength. You might use a Web node, for example, to explore the relationship between the purchase of a set of items at an e-commerce site. webnode properties Table 1. webnode properties webnode properties Data type Property description use_directed_web flag fields list to_field field from_fields list true_flags_only flag line_values Absolute / OverallPct / PctLarger / PctSmaller strong_links_heavier flag num_links ShowMaximum / ShowLinksAbove / ShowAll max_num_links number links_above number discard_links_min flag links_min_records number discard_links_max flag links_max_records number weak_below number strong_above number link_size_continuous flag web_display Circular / Network / Directed / Grid graph_background color Standard graph colors are described at the beginning of this section. symbol_size number Specifies a symbol size. directed_line_values Absolute / OverallPct / PctTo / PctFrom Specify a threshold type. show_legend boolean You can specify whether the legend is displayed. For plots with a large number of fields, hiding the legend may improve the appearance of the plot. labels_as_nodes boolean You can include the label text within each node rather than displaying adjacent labels. For plots with a small number of fields, this may result in a more readable chart.",conceptual,0,validation
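To illustrate how these properties are used, here is a short sketch in SPSS Modeler Python scripting that creates a Web node and sets a few of the properties from the table. The field names are placeholders, and the snippet assumes the standard Modeler scripting environment where the modeler.script module is provided; scripting support can differ in SPSS Modeler flows on the cloud platform.

# Runs inside the SPSS Modeler scripting environment (Jython), where the
# modeler.script module is provided; field names below are placeholders.
stream = modeler.script.stream()
webnode = stream.createAt("web", "Web", 200, 100)
webnode.setPropertyValue("use_directed_web", False)
webnode.setPropertyValue("fields", ["payment_method", "product_category"])
webnode.setPropertyValue("line_values", "Absolute")
webnode.setPropertyValue("web_display", "Circular")
webnode.setPropertyValue("show_legend", True)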
E5D702E67E93752155510B56A3B2F464E190EBA2,https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html?context=cdpaas&locale=en,Sample foundation model prompts for common tasks,"Sample foundation model prompts for common tasks
Sample foundation model prompts for common tasks Try these samples to learn how different prompts can guide foundation models to do common tasks. How to use this topic Explore the sample prompts in this topic: * Copy and paste the prompt text and input parameter values into the Prompt Lab in IBM watsonx.ai * See what text is generated. * See how different models generate different output. * Change the prompt text and parameters to see how results vary. There is no one right way to prompt foundation models. But patterns have been found, in academia and industry, that work fairly reliably. Use the samples in this topic to build your skills and your intuition about prompt engineering through experimentation. This video provides a visual method to learn the concepts and tasks in this documentation. Video chapters [ 0:11 ] Introduction to prompts and Prompt Lab [ 0:33 ] Key concept: Everything is text completion [ 1:34 ] Useful prompt pattern: Few-shot prompt [ 1:58 ] Stopping criteria: Max tokens, stop sequences [ 3:32 ] Key concept: Fine-tuning [ 4:32 ] Useful prompt pattern: Zero-shot prompt [ 5:32 ] Key concept: Be flexible, try different prompts [ 6:14 ] Next steps: Experiment with sample prompts Samples overview You can find samples that prompt foundation models to generate output that supports the following tasks: * [Classification](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html?context=cdpaas&locale=enclassification) * [Extraction](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html?context=cdpaas&locale=enextraction) * [Generation](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html?context=cdpaas&locale=engeneration) * [Question answering (QA)](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html?context=cdpaas&locale=enqa) * [Summarization](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html?context=cdpaas&locale=ensummarization) * [Code generation and conversion](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html?context=cdpaas&locale=encode) * [Dialogue](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html?context=cdpaas&locale=endialogue) The following table shows the foundation models that are used in task-specific samples. A checkmark indicates that the model is used in a sample for the associated task. Table 1. Models used in samples for certain tasks Model Classification Extraction Generation QA Summarization Coding Dialogue flan-t5-xxl-11b ✓ ✓ flan-ul2-20b ✓ ✓ ✓ gpt-neox-20b ✓ ✓ ✓ granite-13b-chat-v1 ✓ granite-13b-instruct-v1 ✓ ✓ granite-13b-instruct-v2 ✓ ✓ ✓ llama-2 chat ✓ mpt-7b-instruct2 ✓ ✓ mt0-xxl-13b ✓ ✓ starcoder-15.5b ✓ The following table summarizes the available sample prompts. Table 2. 
List of sample prompts Scenario Prompt editor Prompt format Model Decoding Notes [Sample 1a: Classify a message](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html?context=cdpaas&locale=ensample1a) Freeform Zero-shot * mt0-xxl-13b <br>* flan-t5-xxl-11b <br>* flan-ul2-20b Greedy * Uses the class names as stop sequences to stop the model after it prints the class name [Sample 1b: Classify a message](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html?context=cdpaas&locale=ensample1b) Freeform Few-shot * gpt-neox-20b <br>* mpt-7b-instruct Greedy * Uses the class names as stop sequences [Sample 1c: Classify a message](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html?context=cdpaas&locale=ensample1c) Structured Few-shot * gpt-neox-20b <br>* mpt-7b-instruct Greedy * Uses the class names as stop sequences [Sample 2a: Extract details from a complaint](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html?context=cdpaas&locale=ensample2a) Freeform Zero-shot * flan-ul2-20b <br>* granite-13b-instruct-v2 Greedy [Sample 3a: Generate a numbered list on a particular theme](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html?context=cdpaas&locale=ensample3a) Freeform Few-shot * gpt-neox-20b Sampling * Generates formatted output <br>* Uses two newline characters as a stop sequence to stop the model after one list [Sample 3b: Generate a numbered list on a particular theme](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html?context=cdpaas&locale=ensample3b) Structured Few-shot * gpt-neox-20b Sampling * Generates formatted output. <br>* Uses two newline characters as a stop sequence [Sample 3c: Generate a numbered list on a particular theme](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html?context=cdpaas&locale=ensample3c) Freeform Zero-shot * granite-13b-instruct-v1 <br>* granite-13b-instruct-v2 Greedy * Generates formatted output [Sample 4a: Answer a question based on an article](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html?context=cdpaas&locale=ensample4a) Freeform Zero-shot * mt0-xxl-13b <br>* flan-t5-xxl-11b <br>* flan-ul2-20b Greedy * Uses a period ""."" as a stop sequence to cause the model to return only a single sentence [Sample 4b: Answer a question based on an article](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html?context=cdpaas&locale=ensample4b) Structured Zero-shot * mt0-xxl-13b <br>* flan-t5-xxl-11b <br>* flan-ul2-20b Greedy * Uses a period ""."" as a stop sequence <br>* Generates results for multiple inputs at once [Sample 4c: Answer a question based on a document](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html?context=cdpaas&locale=ensample4c) Freeform Zero-shot * granite-13b-instruct-v2 Greedy [Sample 4d: Answer general knowledge questions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html?context=cdpaas&locale=ensample4d) Freeform Zero-shot * granite-13b-instruct-v1 Greedy [Sample 5a: Summarize a meeting transcript](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html?context=cdpaas&locale=ensample5a) Freeform Zero-shot * flan-t5-xxl-11b <br>* flan-ul2-20b <br>* mpt-7b-instruct2 Greedy [Sample 5b: Summarize a meeting 
transcript](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html?context=cdpaas&locale=ensample5b) Freeform Few-shot * gpt-neox-20b Greedy [Sample 5c: Summarize a meeting transcript](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html?context=cdpaas&locale=ensample5c) Structured Few-shot * gpt-neox-20b Greedy * Generates formatted output <br>* Uses two newline characters as a stop sequence to stop the model after one list [Sample 6a: Generate programmatic code from instructions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html?context=cdpaas&locale=ensample6a) Freeform Few-shot * starcoder-15.5b Greedy * Generates programmatic code as output <br>* Uses <end of code> as a stop sequence [Sample 6b: Convert code from one programming language to another](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html?context=cdpaas&locale=ensample6b) Freeform Few-shot * starcoder-15.5b Greedy * Generates programmatic code as output <br>* Uses <end of code> as a stop sequence [Sample 7a: Converse in a dialogue](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html?context=cdpaas&locale=ensample7a) Freeform Custom structure * granite-13b-chat-v1 Greedy * Generates dialogue output like a chatbot <br>* Uses a special token that is named END_KEY as a stop sequence [Sample 7b: Converse in a dialogue](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html?context=cdpaas&locale=ensample7b) Freeform Custom structure * llama-2 chat Greedy * Generates dialogue output like a chatbot <br>* Uses a model-specific prompt format Classification Classification is useful for predicting data in distinct categories. Classifications can be binary, with two classes of data, or multi-class.",conceptual,0,validation
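For readers who want to run these same prompts programmatically rather than in the Prompt Lab, here is a minimal sketch that uses the foundation models interface of the Watson Machine Learning Python library. The class names follow the ibm-watson-machine-learning package that was current when these samples were written; the credentials, project ID, model choice, and prompt text are placeholders, and the parameters mirror the greedy decoding and stop-sequence settings used in the samples above.

from ibm_watson_machine_learning.foundation_models import Model
from ibm_watson_machine_learning.metanames import GenTextParamsMetaNames as GenParams

params = {
    GenParams.DECODING_METHOD: "greedy",   # greedy decoding, as in most samples
    GenParams.MAX_NEW_TOKENS: 50,
    GenParams.STOP_SEQUENCES: ["\n\n"],    # stop after one block of output
}
# Placeholder credentials and project ID; replace with your own values.
model = Model(
    model_id="google/flan-ul2",
    params=params,
    credentials={"url": "https://us-south.ml.cloud.ibm.com", "apikey": "<api_key>"},
    project_id="<project_id>",
)
prompt = "Classify this customer message as either a question or a problem:\n..."
print(model.generate_text(prompt=prompt))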
29A9834843B2D6E7417C09A5385B83BCB13D814C,https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-manage-outdated.html?context=cdpaas&locale=en,Managing outdated software specifications or frameworks,"Managing outdated software specifications or frameworks
Managing outdated software specifications or frameworks Use these guidelines when you are updating assets that refer to outdated software specifications or frameworks. In some cases, asset update is seamless. In other cases, you must retrain or redeploy the assets. For general guidelines, refer to [Migrating assets that refer to discontinued software specifications](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-manage-outdated.html?context=cdpaas&locale=endiscont-soft-spec) or [Migrating assets that refer to discontinued framework versions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-manage-outdated.html?context=cdpaas&locale=endiscont-framewrk). For more information, see the following sections: * [Updating software specifications](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-manage-outdated.html?context=cdpaas&locale=enupdate-soft-specs) * [Updating a machine learning model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-manage-outdated.html?context=cdpaas&locale=enupgrade-model) * [Updating a Python function](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-manage-outdated.html?context=cdpaas&locale=enupgr-function) * [Retraining an SPSS Modeler flow](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-manage-outdated.html?context=cdpaas&locale=enretrain-spss) Managing assets that refer to discontinued software specifications * During migration, assets that refer to the discontinued software specification are mapped to a comparable supported default software specification (only in cases where the model type is still supported). * When you create new deployments of the migrated assets, the updated software specification in the asset metadata is used. * Existing deployments of the migrated assets are updated to use the new software specification. If deployment or scoring fails due to framework or library version incompatibilities, follow the instructions in [Updating software specifications](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-manage-outdated.html?context=cdpaas&locale=enupdate-soft-specs). If the problem persists, follow the steps that are listed in [Updating a machine learning model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-manage-outdated.html?context=cdpaas&locale=enupgrade-model). Migrating assets that refer to discontinued framework versions * During migration, model types are not updated. You must manually update this data. For more information, see [Updating a machine learning model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-manage-outdated.html?context=cdpaas&locale=enupgrade-model). * After migration, the existing deployments are removed and new deployments for the deprecated framework are not allowed. Updating software specifications You can update software specifications from the UI or by using the API. For more information, see the following sections: * [Updating software specifications from the UI](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-manage-outdated.html?context=cdpaas&locale=enupdate-soft-specs-ui) * [Updating software specifications by using the API](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-manage-outdated.html?context=cdpaas&locale=enupdate-soft-specs-api) Updating software specifications from the UI 1. From the deployment space, click the model (make sure it does not have any active deployments). 2.
Click the i symbol to check model details. 3. Use the dropdown list to update the software specification. Refer to the example image: ![Updating software specifications through the UI](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/update-software-spec-via-ui.png) Updating software specifications by using the API You can update a software specification by using the API Patch command: For the path field, type /software_spec. For the value field, use either the ID or the name of the new software specification. Refer to this example: curl -X PATCH '<deployment endpoint url>/ml/v4/models/6f01d512-fe0f-41cd-9a52-1e200c525c84?space_id=f2ddb8ce-7b10-4846-9ab0-62454a449802&project_id=<project_id>&version=<YYYY-MM-DD>' --data-raw '[ { ""op"":""replace"", ""path"":""/software_spec"", ""value"":{ ""id"":""6f01d512-fe0f-41cd-9a52-1e200c525c84"" // or ""name"":""tensorflow_rt22.1-py3.9"" } } ]' For more information, see [Updating an asset by using the Patch API command](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-update.htmlupdate-asset-api). Updating a machine learning model Follow these steps to update a model built with a deprecated framework. Option 1: Save the model with a compatible framework 1. Download the model by using either the Watson Machine Learning [REST API](https://cloud.ibm.com/apidocs/machine-learning) or the Watson Machine Learning [Python client library](https://ibm.github.io/watson-machine-learning-sdk/). The following example shows how to download your model: client.repository.download(<model-id>, filename=""xyz.tar.gz"") 2. Edit model metadata with the model type and version that is supported in the current release. For more information, see [Software specifications and hardware specifications for deployments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/pm_service_supported_frameworks.html). The following example shows how to edit model metadata: model_metadata = { client.repository.ModelMetaNames.NAME: ""example model"", client.repository.ModelMetaNames.DESCRIPTION: ""example description"", client.repository.ModelMetaNames.TYPE: ""<new model type>"", client.repository.ModelMetaNames.SOFTWARE_SPEC_UID: client.software_specifications.get_uid_by_name(""<new software specification name>"") } 3. Save the model to the Watson Machine Learning repository. The following example shows how to save the model to the repository: model_details = client.repository.store_model(model=""xyz.tar.gz"", meta_props=model_metadata) 4. Deploy the model. 5. Score the model to generate predictions. If deployment or scoring fails, the model is not compatible with the new version that was used for saving the model. In this case, use [Option 2](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-manage-outdated.html?context=cdpaas&locale=enretrain-option2). Option 2: Retrain the model with a compatible framework 1. Retrain the model with a model type and version that is supported in the current version. 2. Save the model with the supported model type and version. 3. Deploy and score the model. It is also possible to update a model by using the API. For more information, see [Updating an asset by using the Patch API command](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-update.htmlupdate-asset-api). Updating a Python function Follow these steps to update a Python function built with a deprecated framework. 
Option 1: Save the Python function with a compatible runtime or software specification 1. Download the Python function by using either the Watson Machine Learning [REST API](https://cloud.ibm.com/apidocs/machine-learning) or the Watson Machine Learning [Python client library](https://ibm.github.io/watson-machine-learning-sdk/). 2. Save the Python function with a supported runtime or software specification version. For more information, see [Software specifications and hardware specifications for deployments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/pm_service_supported_frameworks.html). 3. Deploy the Python function. 4. Score the Python function to generate predictions. If your Python function fails during scoring, the function is not compatible with the new runtime or software specification version that was used for saving the Python function. In this case, use [Option 2](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-manage-outdated.html?context=cdpaas&locale=enmodify-option2). Option 2: Modify the function code and save it with a compatible runtime or software specification 1. Modify the",how-to,1,validation
6349E43EA9B4AC5775DB122E0F6C365D5DB810BF,https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/manage-nb-lifecycle.html?context=cdpaas&locale=en,Managing the lifecycle of notebooks and scripts,"Managing the lifecycle of notebooks and scripts
Managing the lifecycle of notebooks and scripts After you have created and tested your notebooks, you can add them to pipelines, publish them to a catalog so that other catalog members can use the notebook in their projects, or share read-only copies outside of Watson Studio so that people who aren't collaborators in your Watson Studio projects can see and use them. R scripts and Shiny apps can't currently be published or shared by using project functionality. You can use any of these methods for notebooks: * [Add notebooks to a pipeline](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-config.html) * [Share a URL on social media](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/share-notebooks.html) * [Publish on GitHub](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/github-integration.html) * [Publish as a gist](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/create-gist.html) * [Publish your notebook to a catalog](https://dataplatform.cloud.ibm.com/docs/content/wsj/catalog/publish-asset-project.html) Before you share or publish a notebook, make sure that you hide any sensitive code, such as credentials, that you don't want others to see. See [Hide sensitive cells in a notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/hide_code.html). Parent topic:[Notebooks and scripts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebooks-and-scripts.html)",how-to,1,validation
EED64F79EBFDD957DEEBEC6261B3A70A248F3D35,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/filter.html?context=cdpaas&locale=en,Filter node (SPSS Modeler),"Filter node (SPSS Modeler)
Filter node You can rename or exclude fields at any point in a flow. For example, as a medical researcher, you may not be concerned about the potassium level (field-level data) of patients (record-level data); therefore, you can filter out the K (potassium) field. This can be done using a separate Filter node or using the Filter tab on an import or output node. The functionality is the same regardless of which node it's accessed from. * From import nodes, you can rename or filter fields as the data is read in. * Using a Filter node, you can rename or filter fields at any point in the flow. * You can use the Filter tab in various nodes to define or edit multiple response sets. * Finally, you can use a Filter node to map fields from one import node to another.",conceptual,0,validation
31A670D6B3F0D7AB4EAD7DAE3795589F161249DE,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/tmwb_tawindow.html?context=cdpaas&locale=en,The Categories tab (SPSS Modeler),"The Categories tab (SPSS Modeler)
The Categories tab In the Text Analytics Workbench, you can use the Categories tab to create and explore categories as well as tweak the extraction results. Extraction results can be refined by modifying the linguistic resources, which you can do directly from the Categories tab. Figure 1. Categories tab ![Categories tab](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tmwb_categoryview.png)",conceptual,0,validation
B8C3B95FC688C347D679F81711781B29578CFC19,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/type_info.html?context=cdpaas&locale=en,Viewing and setting information about types (SPSS Modeler),"Viewing and setting information about types (SPSS Modeler)
Viewing and setting information about types From the Type node, you can specify field metadata and properties that are invaluable to modeling and other work. These properties include: * Specifying a usage type, such as range, set, ordered set, or flag, for each field in your data * Setting options for handling missing values and system nulls * Setting the role of a field for modeling purposes * Specifying values for a field and options used to automatically read values from your data * Specifying value labels",how-to,1,validation
6ACE7C519D2C4FCA9FC0498BCE82F75FFA05CFFD,https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-entities-regex.html?context=cdpaas&locale=en,Detecting entities with regular expressions,"Detecting entities with regular expressions
Detecting entities with regular expressions Similar to detecting entities with dictionaries, you can use regex pattern matches to detect entities. Unlike dictionaries, regular expressions are not provided in files; they are defined in memory as part of a regex configuration. You can use multiple regex configurations during the same extraction. Regexes that you define with Watson Natural Language Processing can use token boundaries. This way, you can ensure that your regular expression matches within one or more tokens. This is a clear advantage over simpler regular expression engines, especially when you work with a language that is not separated by whitespace, such as Chinese. Regular expressions are processed by a dedicated component called Rule-Based Runtime, or RBR for short. Creating regex configurations Begin by creating a module directory inside your notebook. This is a directory inside the notebook file system that is used temporarily to store the files created by the RBR training. This module directory can be the same directory that you created and used for dictionary-based entity extraction. Dictionaries and regular expressions can be used in the same training run. To create the module directory in your notebook, enter the following in a code cell. Note that the module directory can't contain a dash (-). import os import watson_nlp module_folder = ""NLP_RBR_Module_2"" os.makedirs(module_folder, exist_ok=True) A regex configuration is a Python dictionary, with the following attributes: Available attributes in regex configurations with their values, descriptions of use and indication if required or not Attribute Value Description Required name string The name of the regular expression. Matches of the regular expression in the input text are tagged with this name in the output. Yes regexes list (strings of Perl-based regex patterns) Should be non-empty. Multiple regexes can be provided. Yes flags Delimited string of valid flags Flags such as UNICODE or CASE_INSENSITIVE control the matching. Can also be a combination of flags. For the supported flags, see [Pattern (Java Platform SE 8)](https://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html). No (defaults to DOTALL) token_boundary.min int token_boundary indicates whether to match the regular expression only on token boundaries. Specified as a dict object with min and max attributes. No (returns the longest non-overlapping match at each character position in the input text) token_boundary.max int max is an optional attribute for token_boundary and needed when the boundary needs to extend for a range (between min and max tokens). token_boundary.max needs to be >= token_boundary.min No (if token_boundary is specified, the min attribute can be specified alone) groups list (string labels for matching groups) String index in list corresponds to matched group in pattern starting with 1 where 0 index corresponds to entire match. For example: regex: (a)(b) on ab with group: ['full', 'first', 'second'] will yield full: ab, first: a, second: b No (defaults to label match on full match) The regex configurations can be loaded using the following helper methods: * To load a single regex configuration, use watson_nlp.toolkit.RegexConfig.load(<regex configuration>) * To load multiple regex configurations, use watson_nlp.toolkit.RegexConfig.load_all([<regex configuration>, <regex configuration>, ...]) Code sample This sample shows you how to load two different regex configurations. The first configuration detects person names. 
It uses the groups attribute to allow easy access to the full, first, and last name at a later stage. The second configuration detects acronyms as a sequence of all-uppercase characters. By using the token_boundary attribute, it prevents matches in words that contain both uppercase and lowercase characters. from watson_nlp.toolkit.rule_utils import RegexConfig # Load some regex configs, for instance to match first names or acronyms regexes = RegexConfig.load_all([ { 'name': 'full names', 'regexes': ['([A-Z][a-z]*) ([A-Z][a-z]*)'], 'groups': ['full name', 'first name', 'last name'] }, { 'name': 'acronyms', 'regexes': ['([A-Z]+)'], 'groups': ['acronym'], 'token_boundary': { 'min': 1, 'max': 1 } } ]) Training a model that contains regular expressions After you have loaded the regex configurations, create an RBR model using the RBR.train() method. In the method, specify: * The module directory * The language of the text * The regex configurations to use This is the same method that is used to train RBR with dictionary-based extraction. You can pass the dictionary configuration in the same method call. Code sample # Train the RBR model custom_regex_block = watson_nlp.resources.feature_extractor.RBR.train(module_path=module_folder, language='en', regexes=regexes) Applying the model on new data After you have trained the model, apply it on new data using the run() method, as you would with any of the existing pre-trained blocks. Code sample custom_regex_block.run('Bruce Wayne works for NASA') Output of the code sample: {(0, 11): ['regex::full names'], (0, 5): ['regex::full names'], (6, 11): ['regex::full names'], (22, 26): ['regex::acronyms']} To show the matching subgroups or the matched text: import json # Get the raw response including matching groups full_regex_result = custom_regex_block.executor.get_raw_response('Bruce Wayne works for NASA', language='en') print(json.dumps(full_regex_result, indent=2)) Output of the code sample: { ""annotations"": { ""View_full names"": [ { ""label"": ""regex::full names"", ""fullname"": { ""location"": { ""begin"": 0, ""end"": 11 }, ""text"": ""Bruce Wayne"" }, ""firstname"": {",how-to,1,validation
BF75F233FDFFDCA8A25D191E1DF4DF7F51E30823,https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/create-customize-env-definition.html?context=cdpaas&locale=en,Creating environment templates,"Creating environment templates
Creating environment templates You can create custom environment templates if you do not want to use the default environments provided by Watson Studio. Required permissions : To create an environment template, you must have the Admin or Editor role within the project. You can create environment templates for the following types of assets: * Notebooks in the Notebook editor * Notebooks in RStudio * Modeler flows in SPSS Modeler * Data Refinery flows * Jobs that run operational assets, such as Data Refinery flows or notebooks, in a project To create an environment template: 1. On the Manage tab of your project, select the Environments page and click New template under Templates. 2. Enter a name and a description. 3. Select one of the following engine types: * Default: Select for Python, R, and RStudio runtimes for Watson Studio. * Spark: Select for Spark with Python or R runtimes for Watson Studio. * GPU: Select for more computing power to improve model training performance for Watson Studio. 4. Select the hardware configuration from the Hardware configuration drop-down menu. 5. Select the software version if you selected a runtime of ""Default,"" ""Spark,"" or ""GPU."" Where to find your custom environment template Your new environment template is listed under Templates on the Environments page in the Manage tab of your project. From this page, you can: * Check which runtimes are active * Update custom environment templates * Track the number of capacity units per hour that your runtimes have consumed so far * Stop active runtimes Limitations The default environments provided by Watson Studio cannot be edited or modified. Notebook environments (Anaconda Python or R distributions): : - You can't add a software customization to the default Python and R environment templates included in Watson Studio. You can only add a customization to an environment template that you create. : - If you add a software customization using conda, your environment must have at least 2 GB RAM. : - You can't customize an R environment for a notebook by installing R packages directly from CRAN or GitHub. You can check whether the CRAN package that you want is available from conda channels and, if it is, add that package name to the customization list as r-<package-name>. * After you have started a notebook in a Watson Studio environment, you can't create another conda environment from inside that notebook and use it. Watson Studio environments do not behave like a conda environment manager. Spark environments: : - You can't customize the software configuration of a Spark environment template. Next steps * [Customize environment templates for Python or R](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/customize-envs.html) Parent topic:[Managing compute resources](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/manage-envs-new.html)",how-to,1,validation
2B6DC49F4AFDE44DD385AE09CAAB02A3F1DB4259,https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/environments-parent.html?context=cdpaas&locale=en,Choosing compute resources for running tools in projects,"Choosing compute resources for running tools in projects
Choosing compute resources for running tools in projects You use compute resources in projects when you run jobs and most tools. Depending on the tool, you might have a choice of compute resources for its runtime. Compute resources are known as either environment templates or hardware and software specifications. In general, compute resources with larger hardware configurations incur larger usage costs. For these tools, you can choose from multiple runtime configurations: * [Notebook editor](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-environments.html) * [Data Refinery](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/spark-dr-envs.html) * [SPSS Modeler](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/spss-envs.html) * [AutoAI](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/run-autoai.html) * [Decision Optimization experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/run-decisionopt.html) * [RStudio IDE](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/rstudio-envs.html) * [Synthetic Data Generator](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/synthetic-envs.html) * [Tuning Studio](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/run-fm-tuning.html) Prompt Lab does not consume compute resources. Prompt Lab usage is measured by the number of processed tokens. Learn more * [Monitoring account resource usage](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/monitor-resources.html) Parent topic:[Projects ](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/manage-projects.html)",conceptual,0,validation
32AFAFA1C90D43BA1D3330A64491039F63D9FEB5,https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-script.html?context=cdpaas&locale=en,Deploying scripts in Watson Machine Learning,"Deploying scripts in Watson Machine Learning
Deploying scripts in Watson Machine Learning After a script is copied to a deployment space, you can deploy it. Python scripts are the only supported script type. [Batch deployment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-details.html) is the only supported deployment type for a script. * When the script is promoted from a project, your software specification is included. * When you create a deployment job for a script, you must manually override the default environment with the correct environment for your script. For more information, see [Creating a deployment job](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/create-jobs.html). Learn more * To learn more about supported input and output types and setting environment variables, see [Batch deployment details](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-details.html). * To learn more about software specifications, see [Software specifications and hardware specifications for deployments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/pm_service_supported_frameworks.html). Parent topic:[Managing predictive deployments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-general.html)",conceptual,0,validation
FE88457CA86FFE3BE30873156A7A0A4FD12975AF,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/ebuilder_accessing.html?context=cdpaas&locale=en,Accessing the Expression Builder (SPSS Modeler),"Accessing the Expression Builder (SPSS Modeler)
Accessing the Expression Builder The Expression Builder is available in all nodes where CLEM expressions are used, including Select, Balance, Derive, Filler, Analysis, Report, and Table nodes. To open it, double-click the node to open its properties, and then click the calculator button next to the formula field.",how-to,1,validation
14416203D840C788359110B18CFD9CE922DE0D67,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/autoclusternuggetnodeslots.html?context=cdpaas&locale=en,applyautoclusternode properties,"applyautoclusternode properties
applyautoclusternode properties You can use Auto Cluster modeling nodes to generate an Auto Cluster model nugget. The scripting name of this model nugget is applyautoclusternode. No other properties exist for this model nugget. For more information on scripting the modeling node itself, see [autoclusternode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/autoclusternodeslots.htmlautoclusternodeslots).",conceptual,0,validation
C81BEEA067CCC7FED12806F3FF0F20519092F2E4,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/statistics.html?context=cdpaas&locale=en,Statistics (SPSS Modeler),"Statistics (SPSS Modeler)
Statistics node The Statistics node gives you basic summary information about numeric fields. You can get summary statistics for individual fields and correlations between fields.",conceptual,0,validation
49724D4B7690D4B215FE6F1C0A49C8B347F0C9A1,https://dataplatform.cloud.ibm.com/docs/content/dataview/chart_creation_customize.html?context=cdpaas&locale=en,Custom charts,"Custom charts
Custom charts The custom charts option lets you paste or edit JSON code to create the chart that you want.",conceptual,0,validation
6049D5AA5DE41309E6281534A464ABD6898A758C,https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-variables.html?context=cdpaas&locale=en,Building reusable prompts,"Building reusable prompts
Building reusable prompts Prompt engineering to find effective prompts for a model takes time and effort. Stretch the benefits of your work by building prompts that you can reuse and share with others. A great way to add flexibility to a prompt is to add prompt variables. A prompt variable is a placeholder keyword that you include in the static text of your prompt at creation time and replace with text dynamically at run time. Using variables to change prompt text dynamically Variables help you to generalize a prompt so that it can be reused more easily. For example, a prompt for a generative task might contain the following static text: Write a story about a dog. If you replace the text dog with a variable that is named {animal}, you add support for dynamic content to the prompt. Write a story about a {animal}. With the variable {animal}, the text can still be used to prompt the model for a story about a dog. But now it can be reused to ask for a story about a cat, a mouse, or another animal, simply by swapping the value that is specified for the {animal} variable. Creating prompt variables To create a prompt variable, complete the following steps: 1. From the Prompt Lab, review the text in your prompt for words or phrases that, when converted to a variable, will make the prompt easier to reuse. 2. Click the Prompt variables icon (![{#}](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/parameter.svg)) at the start of the page. The Prompt variables panel is displayed where you can add variable name-and-value pairs. 3. Click New variable. 4. Click to add a variable name, tab to the next field, and then add a default value. The variable name can contain alphanumeric characters or an underscore (_), but cannot begin with a number. The default value for the variable is a fallback value; it is used every time that the prompt is submitted, unless someone overwrites the default value by specifying a new value for the variable. 5. Repeat the previous step to add more variables. The following table shows some examples of the types of variables that you might want to add. | Variable name | Default value | |---------------|---------------| | country | Ireland | | city | Boston | | project | Project X | | company | IBM | 6. Replace static text in the prompt with your variables. Select the word or phrase in the prompt that you want to replace, and then click the Prompt variables icon (![{#}](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/parameter.svg)) within the text box to see a list of available variables. Click the variable that you want to use from the list. The variable replaces the selected text. It is formatted with the syntax {variable name}, where the variable name is surrounded by braces. If your static text already contains variables that are formatted with braces, they are ignored unless prompt variables of the same name exist. 7. To specify a value for a variable at run time, open the Prompt variables panel, click Preview, and then add a value for the variable. You can also change the variable value from the edit view of the Prompt variables panel, but the value you specify will become the new default value. When you find a set of prompt static text, prompt variables, and prompt engineering parameters that generates the results you want from a model, save the prompt as a prompt template asset. After you save the prompt template asset, you can reuse the prompt or share it with collaborators in the current project. 
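Conceptually, the way a default value and a run-time value interact is similar to ordinary string templating. The short Python sketch below is only an analogy to clarify that behavior; it is not how Prompt Lab or prompt template assets are implemented.
# Analogy only: Prompt Lab performs this substitution for you when you submit the prompt.
prompt_template = 'Write a story about a {animal}.'
defaults = {'animal': 'dog'}
def render(template, overrides=None):
    # The default value is the fallback; a value supplied at run time overrides it.
    values = {**defaults, **(overrides or {})}
    return template.format(**values)
print(render(prompt_template))                     # Write a story about a dog.
print(render(prompt_template, {'animal': 'cat'}))  # Write a story about a cat.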
For more information, see [Saving prompts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-save.html). Examples of reusing prompts The following examples help illustrate ways that using prompt variables can add versatility to your prompts. * [Thank you note example](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-variables.html?context=cdpaas&locale=enthank-you-example) * [Devil's advocate example](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-variables.html?context=cdpaas&locale=endevil-example) Thank you note example Replace static text in the Thank you note generation built-in sample prompt with variables to make the prompt reusable. To add versatility to a built-in prompt, complete the following steps: 1. From the Prompt Lab, click Sample prompts to list the built-in sample prompts. From the Generation section, click Thank you note generation. The input for the built-in sample prompt is added to the prompt editor and the flan-ul2-20b model is selected. Write a thank you note for attending a workshop. Attendees: interns Topic: codefest, AI Tone: energetic 2. Review the text for words or phrases that make good variable candidates. In this example, if the following words are replaced, the prompt meaning will change: * workshop * interns * codefest * AI * energetic 3. Create a variable to represent each word in the list. Add the current value as the default value for the variable. | Variable name | Value | |---------------|---------------| | event | workshop | | attendees | interns | | topic1 |",how-to,1,validation
AEE1A739F2EA11F815EC571163BA99C9B2A97245,https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/selflearnnuggetnodeslots.html?context=cdpaas&locale=en,applyselflearningnode properties,"applyselflearningnode properties
applyselflearningnode properties You can use Self-Learning Response Model (SLRM) modeling nodes to generate an SLRM model nugget. The scripting name of this model nugget is applyselflearningnode. For more information on scripting the modeling node itself, see [slrmnode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/selflearnnodeslots.htmlselflearnnodeslots). applyselflearningnode properties Table 1. applyselflearningnode properties applyselflearningnode Properties Values Property description max_predictions number randomization number scoring_random_seed number sort ascending <br>descending Specifies whether the offers with the highest or lowest scores will be displayed first. model_reliability flag Takes into account the model reliability option in the node settings. enable_sql_generation false <br>native When using data from a database, SQL code can be pushed back to the database for execution, providing superior performance for many operations.",conceptual,0,validation
30A8256A4972314DA32827A081B7541138B454A9,https://dataplatform.cloud.ibm.com/docs/content/wsj/synthetic/synthetic_data_overview_sd.html?context=cdpaas&locale=en,Creating Synthetic data,"Creating Synthetic data
Creating Synthetic data Use the graphical flow editor tool Synthetic Data Generator to generate synthetic tabular data based on production data or a custom data schema using visual flows and modeling algorithms. You can create synthetic data in two ways: use the Synthetic Data Generator graphical flow editor to mask and mimic production data and then load the result into a different location, or use it to generate synthetic data from a custom data schema by using visual flows and modeling algorithms. This image shows an overview of the Synthetic Data Generator graphical flow editor. ![Synthetic Data Generator overview](https://dataplatform.cloud.ibm.com/docs/content/wsj/synthetic/images/sd-overview.png) Data format Learn more about [Creating synthetic data from imported data](https://dataplatform.cloud.ibm.com/docs/content/wsj/synthetic/import_data_sd.html). Data size : The Synthetic Data Generator environment can import up to 2.5 GB of data. Prerequisites Before you can create synthetic data, you need [to create a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/projects.html). Create synthetic data 1. Access the Synthetic Data Generator tool from within a project. To open the tool and create an asset, click New asset. 2. Select All > Prepare Data > Generate synthetic tabular data from the What do you want to do? window. ![What do you want to do window](https://dataplatform.cloud.ibm.com/docs/content/wsj/synthetic/images/sd-what-do-you-want.png) 3. The Generate synthetic tabular data window opens. Add a name for the asset and a description (optional). Click Create. The flow opens, and it might take a minute to create a new session for the flow. ![Generate synthetic tabular data flow asset](https://dataplatform.cloud.ibm.com/docs/content/wsj/synthetic/images/sd-generate-synthetic-tabular-data-flow-asset.png) 4. The Welcome to Synthetic Data Generator wizard opens. You can choose to get started as a first-time or experienced user. ![Synthetic Data Generator Get started wizard](https://dataplatform.cloud.ibm.com/docs/content/wsj/synthetic/images/sd-wizard.png) 5. If you choose to get started as a first-time user, the Generate synthetic tabular data flow window opens. ![Generate synthetic tabular data flow window](https://dataplatform.cloud.ibm.com/docs/content/wsj/synthetic/images/sd-mimic-mask-flow.png) Learn more * [Creating synthetic data from production data](https://dataplatform.cloud.ibm.com/docs/content/wsj/synthetic/mask_mimic_data_sd.html) * [Creating synthetic data from a custom data schema](https://dataplatform.cloud.ibm.com/docs/content/wsj/synthetic/generate_data_sd.html) * Try the [Generate synthetic tabular data tutorial](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-generate-data.html)",how-to,1,validation