| id | text | source |
|---|---|---|
0cf6075a77a6-1
|
The rules in a given queue apply only to queries running in that queue\. A rule is independent of other rules\.
WLM evaluates metrics every 10 seconds\. If more than one rule is triggered during the same period, WLM initiates the most severe action: abort, then hop, then log\. If the action is hop or abort, the action is logged and the query is evicted from the queue\. If the action is log, the query continues to run in the queue\. WLM initiates only one log action per query per rule\. If the queue contains other rules, those rules remain in effect\. If the action is hop and the query is routed to another queue, the rules for the new queue apply\.
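The severity ordering can be sketched as follows\. This is an illustrative Python model of how one action is chosen when several rules fire in the same evaluation window, not Redshift's internal implementation; the rule names are hypothetical\.
```python
# Illustrative model of WLM picking one action when several rules
# fire in the same 10-second evaluation window.
# Severity ordering: abort > hop > log.
SEVERITY = {"log": 0, "hop": 1, "abort": 2}

def most_severe_action(triggered_rules):
    """Return the single action initiated for a set of triggered rules."""
    if not triggered_rules:
        return None
    return max((r["action"] for r in triggered_rules),
               key=SEVERITY.__getitem__)

# Example: a log rule and a hop rule trigger together -> hop wins.
rules = [{"name": "r_log", "action": "log"},
         {"name": "r_hop", "action": "hop"}]
```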
When all of a rule's predicates are met, WLM writes a row to the [STL\_WLM\_RULE\_ACTION](r_STL_WLM_RULE_ACTION.md) system table\. In addition, Amazon Redshift records query metrics for currently running queries to [STV\_QUERY\_METRICS](r_STV_QUERY_METRICS.md)\. Metrics for completed queries are stored in [STL\_QUERY\_METRICS](r_STL_QUERY_METRICS.md)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/cm-c-wlm-query-monitoring-rules.md
|
c873e665526b-0
|
You create query monitoring rules as part of your WLM configuration, which you define as part of your cluster's parameter group definition\.
You can create rules using the AWS Management Console or programmatically using JSON\.
**Note**
If you choose to create rules programmatically, we strongly recommend using the console to generate the JSON that you include in the parameter group definition\. For more information, see [Creating or Modifying a Query Monitoring Rule Using the Console](https://docs.aws.amazon.com/redshift/latest/mgmt/working-with-parameter-groups.html#parameter-group-modify-qmr-console) and [Configuring Parameter Values Using the AWS CLI](https://docs.aws.amazon.com/redshift/latest/mgmt/working-with-parameter-groups.html#configure-parameters-using-the-cli) in the *Amazon Redshift Cluster Management Guide*\.
To define a query monitoring rule, you specify the following elements:
+ A rule name – Rule names must be unique within the WLM configuration\. Rule names can be up to 32 alphanumeric characters or underscores, and can't contain spaces or quotation marks\. You can have up to 25 rules per queue, and the total limit for all queues is 25 rules\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/cm-c-wlm-query-monitoring-rules.md
|
c873e665526b-1
|
+ One or more predicates – You can have up to three predicates per rule\. If all the predicates for any rule are met, the associated action is triggered\. A predicate is defined by a metric name, an operator \( =, <, or > \), and a value\. An example is `query_cpu_time > 100000`\. For a list of metrics and examples of values for different metrics, see [Query monitoring metrics](#cm-c-wlm-query-monitoring-metrics) following in this section\.
+ An action – If more than one rule is triggered, WLM chooses the rule with the most severe action\. Possible actions, in ascending order of severity, are:
+ Log – Record information about the query in the STL\_WLM\_RULE\_ACTION system table\. Use the Log action when you want to only write a log record\. WLM creates at most one log per query, per rule\. Following a log action, other rules remain in force and WLM continues to monitor the query\.
+ Hop \(only available with manual WLM\) – Log the action and hop the query to the next matching queue\. If there isn't another matching queue, the query is canceled\. QMR hops only [CREATE TABLE AS](https://docs.aws.amazon.com/redshift/latest/dg/r_CREATE_TABLE_AS.html) \(CTAS\) statements and read\-only queries, such as SELECT statements\. For more information, see [WLM query queue hopping](wlm-queue-hopping.md)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/cm-c-wlm-query-monitoring-rules.md
|
c873e665526b-2
|
+ Abort – Log the action and terminate the query\. QMR doesn't abort COPY statements and maintenance operations, such as ANALYZE and VACUUM\.
+ Change priority \(only available with automatic WLM\) – Change the priority of a query\.
To limit the runtime of queries, we recommend creating a query monitoring rule instead of using WLM timeout\. For example, you can set `max_execution_time` to 50,000 milliseconds as shown in the following JSON snippet\.
```
"max_execution_time": 50000
```
But we recommend instead that you define an equivalent query monitoring rule that sets `query_execution_time` to 50 seconds as shown in the following JSON snippet\.
```
"rules": [
  {
    "rule_name": "rule_query_execution",
    "predicate": [
      {
        "metric_name": "query_execution_time",
        "operator": ">",
        "value": 50
      }
    ],
    "action": "abort"
  }
]
```
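If you generate rule JSON like the snippet above programmatically rather than through the console, a small validation pass against the documented limits \(up to 25 rules, rule names of at most 32 alphanumeric or underscore characters, one to three predicates per rule, operators `=`, `<`, or `>`\) can catch mistakes before you update the parameter group\. A minimal Python sketch; the helper name is ours and is not part of any AWS SDK\.
```python
import json
import re

def validate_qmr_rules(rules):
    """Check QMR rules against the limits documented for WLM configurations."""
    if len(rules) > 25:
        raise ValueError("at most 25 rules are allowed")
    for rule in rules:
        if not re.fullmatch(r"[A-Za-z0-9_]{1,32}", rule["rule_name"]):
            raise ValueError("rule names: 1-32 alphanumeric/underscore chars")
        if not 1 <= len(rule["predicate"]) <= 3:
            raise ValueError("each rule needs 1-3 predicates")
        for p in rule["predicate"]:
            if p["operator"] not in ("=", "<", ">"):
                raise ValueError("operator must be =, <, or >")
    return json.dumps({"rules": rules})

doc = validate_qmr_rules([{
    "rule_name": "rule_query_execution",
    "predicate": [{"metric_name": "query_execution_time",
                   "operator": ">", "value": 50}],
    "action": "abort",
}])
```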
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/cm-c-wlm-query-monitoring-rules.md
|
c873e665526b-3
|
For steps to create or modify a query monitoring rule, see [Creating or Modifying a Query Monitoring Rule Using the Console](https://docs.aws.amazon.com/redshift/latest/mgmt/working-with-parameter-groups.html#parameter-group-modify-qmr-console) and [Properties in the wlm\_json\_configuration Parameter](https://docs.aws.amazon.com/redshift/latest/mgmt/workload-mgmt-config.html#wlm-json-config-properties) in the *Amazon Redshift Cluster Management Guide*\.
You can find more information about query monitoring rules in the following topics:
+ [Query monitoring metrics](#cm-c-wlm-query-monitoring-metrics)
+ [Query monitoring rules templates](#cm-c-wlm-query-monitoring-templates)
+ [Creating a Rule Using the Console](https://docs.aws.amazon.com/redshift/latest/mgmt/managing-parameter-groups-console.html#parameter-group-modify-qmr-console)
+ [Configuring Workload Management](https://docs.aws.amazon.com/redshift/latest/mgmt/workload-mgmt-config.html)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/cm-c-wlm-query-monitoring-rules.md
|
c873e665526b-4
|
+ [System tables and views for query monitoring rules](#cm-c-wlm-qmr-tables-and-views)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/cm-c-wlm-query-monitoring-rules.md
|
37009a8369a9-0
|
The following table describes the metrics used in query monitoring rules\. \(These metrics are distinct from the metrics stored in the [STV\_QUERY\_METRICS](r_STV_QUERY_METRICS.md) and [STL\_QUERY\_METRICS](r_STL_QUERY_METRICS.md) system tables\.\)
For a given metric, the performance threshold is tracked either at the query level or the segment level\. For more information about segments and steps, see [Query planning and execution workflow](c-query-planning.md)\.
**Note**
The [WLM timeout](cm-c-defining-query-queues.md#wlm-timeout) parameter is distinct from query monitoring rules\.
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/cm-c-wlm-query-monitoring-rules.html)
**Note**
The hop action is not supported with the `query_queue_time` predicate\. That is, rules defined to hop when a `query_queue_time` predicate is met are ignored\.
Short segment execution times can result in sampling errors with some metrics, such as `io_skew` and `query_cpu_percent`\. To avoid or reduce sampling errors, include segment execution time in your rules\. A good starting point is `segment_execution_time > 10`\.
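A guard such as `segment_execution_time > 10` can be combined with a skew predicate in a single rule\. The following Python sketch emits the corresponding rule JSON; the rule name and threshold values are illustrative, not recommendations\.
```python
import json

# Illustrative rule: only flag high I/O skew once the segment has run
# long enough (> 10 seconds) for the skew measurement to be meaningful.
rule = {
    "rule_name": "rule_io_skew_guarded",
    "predicate": [
        {"metric_name": "io_skew", "operator": ">", "value": 5},
        {"metric_name": "segment_execution_time", "operator": ">", "value": 10},
    ],
    "action": "log",
}
doc = json.dumps({"rules": [rule]})
```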
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/cm-c-wlm-query-monitoring-rules.md
|
37009a8369a9-1
|
The [SVL\_QUERY\_METRICS](r_SVL_QUERY_METRICS.md) view shows the metrics for completed queries\. The [SVL\_QUERY\_METRICS\_SUMMARY](r_SVL_QUERY_METRICS_SUMMARY.md) view shows the maximum values of metrics for completed queries\. Use the values in these views as an aid to determine threshold values for defining query monitoring rules\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/cm-c-wlm-query-monitoring-rules.md
|
8e2bd3a27bae-0
|
When you add a rule using the Amazon Redshift console, you can choose to create a rule from a predefined template\. Amazon Redshift creates a new rule with a set of predicates and populates the predicates with default values\. The default action is log\. You can modify the predicates and action to meet your use case\.
The following table lists available templates\.
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/cm-c-wlm-query-monitoring-rules.html)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/cm-c-wlm-query-monitoring-rules.md
|
93745441717f-0
|
When all of a rule's predicates are met, WLM writes a row to the [STL\_WLM\_RULE\_ACTION](r_STL_WLM_RULE_ACTION.md) system table\. This row contains details for the query that triggered the rule and the resulting action\.
In addition, Amazon Redshift records query metrics in the following system tables and views\.
+ The [STV\_QUERY\_METRICS](r_STV_QUERY_METRICS.md) table displays the metrics for currently running queries\.
+ The [STL\_QUERY\_METRICS](r_STL_QUERY_METRICS.md) table records the metrics for completed queries\.
+ The [SVL\_QUERY\_METRICS](r_SVL_QUERY_METRICS.md) view shows the metrics for completed queries\.
+ The [SVL\_QUERY\_METRICS\_SUMMARY](r_SVL_QUERY_METRICS_SUMMARY.md) view shows the maximum values of metrics for completed queries\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/cm-c-wlm-query-monitoring-rules.md
|
1419ee8b4bf1-0
|
The DATEFORMAT and TIMEFORMAT options in the COPY command take format strings\. These strings can contain datetime separators \(such as '`-`', '`/`', or '`:`'\) and the following "dateparts" and "timeparts"\.
**Note**
If the COPY command doesn't recognize the format of your date or time values, or if your date and time values use formats different from each other, use the `'auto'` argument with the TIMEFORMAT parameter\. The `'auto'` argument recognizes several formats that aren't supported when using a DATEFORMAT and TIMEFORMAT string\.
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_DATEFORMAT_and_TIMEFORMAT_strings.html)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_DATEFORMAT_and_TIMEFORMAT_strings.md
|
1419ee8b4bf1-1
|
The default date format is YYYY\-MM\-DD\. The default time stamp without time zone \(TIMESTAMP\) format is YYYY\-MM\-DD HH:MI:SS\. The default time stamp with time zone \(TIMESTAMPTZ\) format is YYYY\-MM\-DD HH:MI:SSOF, where OF is the offset from UTC \(for example, \-8:00\)\. You can't include a time zone specifier \(TZ, tz, or OF\) in the timeformat\_string\. The seconds \(SS\) field also supports fractional seconds up to a microsecond level of detail\. To load TIMESTAMPTZ data that is in a format different from the default format, specify 'auto'\. For more information, see [Using automatic recognition with DATEFORMAT and TIMEFORMAT](automatic-recognition.md)\.
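The default DATE and TIMESTAMP formats map directly onto common `strptime` patterns, which gives a quick way to sanity\-check input files outside the database\. A Python sketch; this mimics the default formats, it is not how COPY itself parses values\.
```python
from datetime import datetime

# Default Redshift input formats expressed as strptime patterns.
DATE_FMT = "%Y-%m-%d"                 # YYYY-MM-DD
TIMESTAMP_FMT = "%Y-%m-%d %H:%M:%S"   # YYYY-MM-DD HH:MI:SS

d = datetime.strptime("2008-09-26", DATE_FMT)
ts = datetime.strptime("2008-09-26 05:43:12", TIMESTAMP_FMT)
```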
For example, the following DATEFORMAT and TIMEFORMAT strings are valid\.
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_DATEFORMAT_and_TIMEFORMAT_strings.html)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_DATEFORMAT_and_TIMEFORMAT_strings.md
|
ff3bb9665e42-0
|
Synonym of LEN function\.
See [LEN function](r_LEN.md)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_TEXTLEN.md
|
9e3b34647569-0
|
Values \(default in bold\): format specification \(**ISO**, Postgres, SQL, or German\), and year/month/day ordering \(DMY, **MDY**, YMD\)\.
Default: **ISO, MDY**
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_datestyle.md
|
834aa6bb4aab-0
|
Sets the display format for date and time values as well as the rules for interpreting ambiguous date input values\. The string contains two parameters that can be changed separately or together\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_datestyle.md
|
db0e6c0c3e93-0
|
```
show datestyle;
DateStyle
-----------
ISO, MDY
(1 row)
set datestyle to 'SQL,DMY';
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_datestyle.md
|
05a64890b9d0-0
|
If you are a database user, database designer, database developer, or database administrator, the following table will help you find what you're looking for\.
| If you want to \.\.\. | We recommend |
| --- | --- |
| Quickly start using Amazon Redshift | Begin by following the steps in [Amazon Redshift Getting Started](https://docs.aws.amazon.com/redshift/latest/gsg/) to quickly deploy a cluster, connect to a database, and try out some queries\. When you are ready to build your database, load data into tables, and write queries to manipulate data in the data warehouse, return here to the Database Developer Guide\. |
| Learn about the internal architecture of the Amazon Redshift data warehouse\. | The [System and architecture overview](c_redshift_system_overview.md) gives a high\-level overview of Amazon Redshift's internal architecture\. If you want a broader overview of the Amazon Redshift web service, go to the [Amazon Redshift](https://aws.amazon.com/redshift/) product detail page\. |
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c-who-should-use-this-guide.md
|
05a64890b9d0-1
|
| Create databases, tables, users, and other database objects\. | [Getting started using databases](c_intro_to_admin.md) is a quick introduction to the basics of SQL development\. [Amazon Redshift SQL](c_redshift-sql.md) has the syntax and examples for Amazon Redshift SQL commands, functions, and other SQL elements\. [Amazon Redshift best practices for designing tables](c_designing-tables-best-practices.md) provides a summary of our recommendations for choosing sort keys, distribution keys, and compression encodings\. |
| Learn how to design tables for optimum performance\. | [Designing tables](t_Creating_tables.md) details considerations for applying compression to the data in table columns and choosing distribution and sort keys\. |
| Load data\. | [Loading data](t_Loading_data.md) explains the procedures for loading large datasets from Amazon DynamoDB tables or from flat files stored in Amazon S3 buckets\. [Amazon Redshift best practices for loading data](c_loading-data-best-practices.md) provides tips for loading your data quickly and effectively\. |
| Manage users, groups, and database security\. | [Managing database security](r_Database_objects.md) covers database security topics\. |
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c-who-should-use-this-guide.md
|
05a64890b9d0-2
|
| Monitor and optimize system performance\. | The [System tables reference](cm_chap_system-tables.md) details system tables and views that you can query for the status of the database and monitor queries and processes\. You should also consult the [Amazon Redshift Cluster Management Guide](https://docs.aws.amazon.com/redshift/latest/mgmt/) to learn how to use the AWS Management Console to check the system health, monitor metrics, and back up and restore clusters\. |
| Analyze and report information from very large datasets\. | Many popular software vendors are certifying Amazon Redshift with their offerings to enable you to continue to use the tools you use today\. For more information, see the [Amazon Redshift partner page](https://aws.amazon.com/redshift/partners/)\. The [SQL reference](cm_chap_SQLCommandRef.md) has all the details for the SQL expressions, commands, and functions Amazon Redshift supports\. |
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c-who-should-use-this-guide.md
|
9f7e8dd6a3f9-0
|
A database contains one or more named schemas\. Each schema in a database contains tables and other kinds of named objects\. By default, a database has a single schema, which is named PUBLIC\. You can use schemas to group database objects under a common name\. Schemas are similar to file system directories, except that schemas cannot be nested\.
Identical database object names can be used in different schemas in the same database without conflict\. For example, both MY\_SCHEMA and YOUR\_SCHEMA can contain a table named MYTABLE\. Users with the necessary privileges can access objects across multiple schemas in a database\.
By default, an object is created within the first schema in the search path of the database\. For information, see [Search path](#c_Search_path) later in this section\.
Schemas can help with organization and concurrency issues in a multi\-user environment in the following ways:
+ To allow many developers to work in the same database without interfering with each other\.
+ To organize database objects into logical groups to make them more manageable\.
+ To give applications the ability to put their objects into separate schemas so that their names will not collide with the names of objects used by other applications\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_Schemas_and_tables.md
|
17de5608c700-0
|
Any user can create schemas and alter or drop schemas they own\.
You can perform the following actions:
+ To create a schema, use the [CREATE SCHEMA](r_CREATE_SCHEMA.md) command\.
+ To change the owner of a schema, use the [ALTER SCHEMA](r_ALTER_SCHEMA.md) command\.
+ To delete a schema and its objects, use the [DROP SCHEMA](r_DROP_SCHEMA.md) command\.
+ To create a table within a schema, create the table with the format *schema\_name\.table\_name*\.
To view a list of all schemas, query the PG\_NAMESPACE system catalog table:
```
select * from pg_namespace;
```
To view a list of tables that belong to a schema, query the PG\_TABLE\_DEF system catalog table\. For example, the following query returns a list of tables in the PG\_CATALOG schema\.
```
select distinct(tablename) from pg_table_def
where schemaname = 'pg_catalog';
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_Schemas_and_tables.md
|
62447e339c53-0
|
The search path is defined in the search\_path parameter with a comma\-separated list of schema names\. The search path specifies the order in which schemas are searched when an object, such as a table or function, is referenced by a simple name that does not include a schema qualifier\.
If an object is created without specifying a target schema, the object is added to the first schema that is listed in search path\. When objects with identical names exist in different schemas, an object name that does not specify a schema will refer to the first schema in the search path that contains an object with that name\.
To change the default schema for the current session, use the [SET](r_SET.md) command\.
For more information, see the [search\_path](r_search_path.md) description in the Configuration Reference\.
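The resolution order can be modeled as a first\-match scan over the schema list\. An illustrative Python sketch; the schema and table names are hypothetical\.
```python
def resolve(name, search_path, catalog):
    """Return (schema, name) for the first schema on the search path
    that contains an object called name, mimicking search_path lookup."""
    for schema in search_path:
        if name in catalog.get(schema, set()):
            return (schema, name)
    raise LookupError(f"{name} not found on search path {search_path}")

# Both schemas contain MYTABLE; the first schema on the path wins.
catalog = {"my_schema": {"mytable"},
           "your_schema": {"mytable", "othertable"}}
hit = resolve("mytable", ["my_schema", "your_schema"], catalog)
```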
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_Schemas_and_tables.md
|
1ff1bb3acc90-0
|
Schema\-based privileges are determined by the owner of the schema:
+ By default, all users have CREATE and USAGE privileges on the PUBLIC schema of a database\. To disallow users from creating objects in the PUBLIC schema of a database, use the [REVOKE](r_REVOKE.md) command to remove that privilege\.
+ Unless they are granted the USAGE privilege by the object owner, users cannot access any objects in schemas they do not own\.
+ If users have been granted the CREATE privilege to a schema that was created by another user, those users can create objects in that schema\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_Schemas_and_tables.md
|
4c205ec8b1d8-0
|
Amazon Redshift supports several system administration functions\.
**Topics**
+ [CHANGE\_QUERY\_PRIORITY](r_CHANGE_QUERY_PRIORITY.md)
+ [CHANGE\_SESSION\_PRIORITY](r_CHANGE_SESSION_PRIORITY.md)
+ [CHANGE\_USER\_PRIORITY](r_CHANGE_USER_PRIORITY.md)
+ [CURRENT\_SETTING](r_CURRENT_SETTING.md)
+ [PG\_CANCEL\_BACKEND](PG_CANCEL_BACKEND.md)
+ [PG\_TERMINATE\_BACKEND](PG_TERMINATE_BACKEND.md)
+ [SET\_CONFIG](r_SET_CONFIG.md)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_System_administration_functions.md
|
a27cf2f59cb6-0
|
Records internal processing errors generated by the Amazon Redshift database engine\. STL\_ERROR does not record SQL errors or messages\. The information in STL\_ERROR is useful for troubleshooting certain errors\. An AWS support engineer might ask you to provide this information as part of the troubleshooting process\.
This table is visible to all users\. Superusers can see all rows; regular users can see only their own data\. For more information, see [Visibility of data in system tables and views](c_visibility-of-data.md)\.
For a list of error codes that can be generated while loading data with the Copy command, see [Load error reference](r_Load_Error_Reference.md)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_ERROR.md
|
538ae5fd515b-0
|
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_STL_ERROR.html)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_ERROR.md
|
c8d5180ace69-0
|
The following example retrieves the error information from STL\_ERROR\.
```
select process, errcode, linenum as line,
trim(error) as err
from stl_error;
process | errcode | line | err
--------------+---------+------+------------------------------------------------------------------
padbmaster | 8001 | 194 | Path prefix: s3://awssampledb/testnulls/venue.txt*
padbmaster | 8001 | 529 | Listing bucket=awssampledb prefix=tests/category-csv-quotes
padbmaster | 2 | 190 | database "template0" is not currently accepting connections
padbmaster | 32 | 1956 | pq_flush: could not send data to client: Broken pipe
(4 rows)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_ERROR.md
|
13e89c955ef1-0
|
Don't make it a practice to use the maximum column size for convenience\.
Instead, consider the largest values you are likely to store in a VARCHAR column, for example, and size your columns accordingly\. Because Amazon Redshift compresses column data very effectively, creating columns much larger than necessary has minimal impact on the size of data tables\. During processing for complex queries, however, intermediate query results might need to be stored in temporary tables\. Because temporary tables are not compressed, unnecessarily large columns consume excessive memory and temporary disk space, which can affect query performance\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_best-practices-smallest-column-size.md
|
8ad5918d5e50-0
|
ST\_X returns the first coordinate of an input point\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_X-function.md
|
9b3866947543-0
|
```
ST_X(point)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_X-function.md
|
cdb9b69e175a-0
|
*point*
A `POINT` value of data type `GEOMETRY`\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_X-function.md
|
20618be09441-0
|
`DOUBLE PRECISION` value of the first coordinate\.
If *point* is null, then null is returned\.
If *point* is not a `POINT`, then an error is returned\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_X-function.md
|
2d28adad75d0-0
|
The following SQL returns the first coordinate of a point\.
```
SELECT ST_X(ST_Point(1,2));
```
```
st_x
-----------
1.0
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_X-function.md
|
6e1f2094531f-0
|
Some Amazon Redshift queries are distributed and executed on the compute nodes; other queries execute exclusively on the leader node\.
The leader node distributes SQL to the compute nodes when a query references user\-created tables or system tables \(tables with an STL or STV prefix and system views with an SVL or SVV prefix\)\. A query that references only catalog tables \(tables with a PG prefix, such as PG\_TABLE\_DEF\) or that does not reference any tables, runs exclusively on the leader node\.
Some Amazon Redshift SQL functions are supported only on the leader node and are not supported on the compute nodes\. A query that uses a leader\-node function must execute exclusively on the leader node, not on the compute nodes, or it will return an error\.
The documentation for each leader\-node only function includes a note stating that the function will return an error if it references user\-defined tables or Amazon Redshift system tables\.
For more information, see [SQL functions supported on the leader node](c_sql-functions-leader-node.md)\.
The following SQL functions are leader\-node only functions and are not supported on the compute nodes:
System information functions
+ CURRENT\_SCHEMA
+ CURRENT\_SCHEMAS
+ HAS\_DATABASE\_PRIVILEGE
+ HAS\_SCHEMA\_PRIVILEGE
+ HAS\_TABLE\_PRIVILEGE
The following leader\-node only functions are deprecated:
Date functions
+ AGE
+ CURRENT\_TIME
+ CURRENT\_TIMESTAMP
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_SQL_functions_leader_node_only.md
|
6e1f2094531f-1
|
+ LOCALTIME
+ ISFINITE
+ NOW
String functions
+ ASCII
+ GET\_BIT
+ GET\_BYTE
+ SET\_BIT
+ SET\_BYTE
+ TO\_ASCII
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_SQL_functions_leader_node_only.md
|
629d19739486-0
|
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_datetable.html)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_datetable.md
|
cbfb4007fa52-0
|
Values \(default in bold\): **UTC**, time zone
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_timezone_config.md
|
31e23b82f722-0
|
```
SET timezone { TO | = } [ time_zone | DEFAULT ]
SET time zone [ time_zone | DEFAULT ]
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_timezone_config.md
|
cf90ab1803f9-0
|
Sets the time zone for the current session\. The time zone can be the offset from Coordinated Universal Time \(UTC\) or a time zone name\.
**Note**
You can't set the `timezone` configuration parameter by using a cluster parameter group\. The time zone can be set only for the current session by using a SET command\. To set the time zone for all sessions run by a specific database user, use the [ALTER USER](r_ALTER_USER.md) command\. ALTER USER … SET TIMEZONE changes the time zone for subsequent sessions, not for the current session\.
When you set the time zone using the `SET timezone` \(one word\) command with either `TO` or `=`, you can specify *time\_zone* as a time zone name, a POSIX\-style format offset, or an ISO\-8601 format offset, as shown following\.
```
SET timezone { TO | = } time_zone
```
When you set the time zone using the SET time zone command *without* `TO` or `=`, you can specify *time\_zone* using an INTERVAL as well as a time zone name, a POSIX\-style format offset, or an ISO\-8601 format offset, as shown following\.
```
SET time zone time_zone
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_timezone_config.md
|
0b22b52480a4-0
|
Amazon Redshift supports the following time zone formats:
+ Time zone name
+ INTERVAL
+ POSIX\-style time zone specification
+ ISO\-8601 offset
Because time zone abbreviations, such as PST or PDT, are defined as a fixed offset from UTC and don't include daylight savings time rules, the SET command doesn't support time zone abbreviations\.
For more details on time zone formats, see the following\.
**Time zone name** – The full time zone name, such as America/New\_York\. Full time zone names can include daylight savings rules\.
The following are examples of time zone names:
+ Etc/Greenwich
+ America/New\_York
+ CST6CDT
+ GB
**Note**
Many time zone names, such as EST, MST, NZ, and UCT, are also abbreviations\.
To view a list of valid time zone names, run the following command\.
```
select pg_timezone_names();
```
**INTERVAL** – An offset from UTC\. For example, PST is \-8:00 or \-8 hours\.
The following are examples of INTERVAL time zone offsets:
+ \-8:00
+ \-8 hours
+ 30 minutes
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_timezone_config.md
|
0b22b52480a4-1
|
**POSIX\-style format** – A time zone specification in the form *STDoffset* or *STDoffsetDST*, where *STD* is a time zone abbreviation, *offset* is the numeric offset in hours west from UTC, and *DST* is an optional daylight\-savings zone abbreviation\. Daylight savings time is assumed to be one hour ahead of the given offset\.
POSIX\-style time zone formats use positive offsets west of Greenwich, in contrast to the ISO\-8601 convention, which uses positive offsets east of Greenwich\.
The following are examples of POSIX\-style time zones:
+ PST8
+ PST8PDT
+ EST5
+ EST5EDT
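Because POSIX offsets count hours west of UTC, a POSIX zone such as PST8 corresponds to UTC\-8\. The following Python sketch shows that sign flip; it parses only the simple *STDoffset* and *STDoffsetDST* shapes shown above, not offsets with minutes or explicit DST rules\.
```python
import re

def posix_to_utc_offset(spec):
    """Convert a simple POSIX zone spec (e.g. 'PST8PDT') to its
    standard-time UTC offset in hours. POSIX counts west as positive,
    so the UTC offset is the negated number."""
    m = re.fullmatch(r"([A-Z]{3,})(\d+)(?:[A-Z]{3,})?", spec)
    if not m:
        raise ValueError(f"unsupported spec: {spec}")
    return -int(m.group(2))
```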
**Note**
Amazon Redshift doesn't validate POSIX\-style time zone specifications, so it is possible to set the time zone to an invalid value\. For example, the following command doesn't return an error, even though it sets the time zone to an invalid value\.
```
set timezone to 'xxx36';
```
**ISO\-8601 Offset** – The offset from UTC in the form `±[hh]:[mm]`\.
The following are examples of ISO\-8601 offsets:
+ \-8:00
+ \+7:30
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_timezone_config.md
|
ba002c451bfb-0
|
The following example sets the time zone for the current session to New York\.
```
set timezone = 'America/New_York';
```
The following example sets the time zone for the current session to UTC–8 \(PST\)\.
```
set timezone to '-8:00';
```
The following example uses INTERVAL to set the time zone to PST\.
```
set timezone interval '-8 hours';
```
The following example resets the time zone for the current session to the system default time zone \(UTC\)\.
```
set timezone to default;
```
To set the time zone for a database user, use an ALTER USER … SET statement\. The following example sets the time zone for dbuser to New York\. The new value persists for the user for all subsequent sessions\.
```
ALTER USER dbuser SET timezone to 'America/New_York';
```
The NULLIF expression compares two arguments and returns null if the arguments are equal\. If they are not equal, the first argument is returned\. This expression is the inverse of the NVL or COALESCE expression\.
```
NULLIF ( expression1, expression2 )
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_NULLIF_function.md
*expression1, expression2*
The target columns or expressions that are compared\. The return type is the same as the type of the first expression\. The default column name of the NULLIF result is the column name of the first expression\.
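As a quick illustration of these rules \(a sketch; the literal values are arbitrary\), equal arguments yield null and unequal arguments return the first:

```
select nullif('a', 'a') as equal_args,    -- null
       nullif('a', 'b') as unequal_args;  -- 'a'
```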
In the following example, the query returns null when the LISTID and SALESID values match:
```
select nullif(listid,salesid), salesid
from sales where salesid<10 order by 1, 2 desc;
listid | salesid
--------+---------
4 | 2
5 | 4
5 | 3
6 | 5
10 | 9
10 | 8
10 | 7
10 | 6
| 1
(9 rows)
```
You can use NULLIF to ensure that empty strings are always returned as nulls\. In the example below, the NULLIF expression returns either a null value or a string that contains at least one character\.
```
insert into category
values(0,'','Special','Special');
select nullif(catgroup,'') from category
where catdesc='Special';
catgroup
----------
null
(1 row)
```
NULLIF ignores trailing spaces\. If two strings differ only in trailing spaces, NULLIF considers them equal and returns null:
```
create table nulliftest(c1 char(2), c2 char(2));
insert into nulliftest values ('a','a ');
insert into nulliftest values ('b','b');
select nullif(c1,c2) from nulliftest;
c1
------
null
null
(2 rows)
```
**Topics**
+ [Storage and ranges](#r_Character_types-storage-and-ranges)
+ [CHAR or CHARACTER](#r_Character_types-char-or-character)
+ [VARCHAR or CHARACTER VARYING](#r_Character_types-varchar-or-character-varying)
+ [NCHAR and NVARCHAR types](#r_Character_types-nchar-and-nvarchar-types)
+ [TEXT and BPCHAR types](#r_Character_types-text-and-bpchar-types)
+ [Significance of trailing blanks](#r_Character_types-significance-of-trailing-blanks)
+ [Examples with character types](r_Examples_with_character_types.md)
Character data types include CHAR \(character\) and VARCHAR \(character varying\)\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_Character_types.md
CHAR and VARCHAR data types are defined in terms of bytes, not characters\. A CHAR column can only contain single\-byte characters, so a CHAR\(10\) column can contain a string with a maximum length of 10 bytes\. A VARCHAR can contain multibyte characters, up to a maximum of four bytes per character\. For example, a VARCHAR\(12\) column can contain 12 single\-byte characters, 6 two\-byte characters, 4 three\-byte characters, or 3 four\-byte characters\.
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_Character_types.html)
**Note**
The CREATE TABLE syntax supports the MAX keyword for character data types\. For example:
```
create table test(col1 varchar(max));
```
The MAX setting defines the width of the column as 4096 bytes for CHAR or 65535 bytes for VARCHAR\.
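To see the byte\-versus\-character distinction directly, you can compare LEN \(characters\) with OCTET\_LENGTH \(bytes\) for a multibyte string \(a sketch; the sample string is arbitrary\):

```
select len('français') as characters,
       octet_length('français') as bytes;
-- 8 characters but 9 bytes, because 'ç' occupies two bytes in UTF-8
```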
Use a CHAR or CHARACTER column to store fixed\-length strings\. These strings are padded with blanks, so a CHAR\(10\) column always occupies 10 bytes of storage\.
```
char(10)
```
A CHAR column without a length specification results in a CHAR\(1\) column\.
Use a VARCHAR or CHARACTER VARYING column to store variable\-length strings with a fixed limit\. These strings are not padded with blanks, so a VARCHAR\(120\) column consists of a maximum of 120 single\-byte characters, 60 two\-byte characters, 40 three\-byte characters, or 30 four\-byte characters\.
```
varchar(120)
```
If you use the VARCHAR data type without a length specifier in a CREATE TABLE statement, the default length is 256\. If used in an expression, the size of the output is determined using the input expression \(up to 65535\)\.
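For example, the following two column definitions are equivalent \(a minimal sketch; the table names are hypothetical\):

```
create table comments (body varchar);        -- defaults to varchar(256)
create table comments2 (body varchar(256));  -- explicit equivalent
```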
You can create columns with the NCHAR and NVARCHAR types \(also known as NATIONAL CHARACTER and NATIONAL CHARACTER VARYING types\)\. These types are converted to CHAR and VARCHAR types, respectively, and are stored in the specified number of bytes\.
An NCHAR column without a length specification is converted to a CHAR\(1\) column\.
An NVARCHAR column without a length specification is converted to a VARCHAR\(256\) column\.
You can create an Amazon Redshift table with a TEXT column, but it is converted to a VARCHAR\(256\) column that accepts variable\-length values with a maximum of 256 characters\.
You can create an Amazon Redshift column with a BPCHAR \(blank\-padded character\) type, which Amazon Redshift converts to a fixed\-length CHAR\(256\) column\.
Both CHAR and VARCHAR data types store strings up to *n* bytes in length\. An attempt to store a longer string into a column of these types results in an error, unless the extra characters are all spaces \(blanks\), in which case the string is truncated to the maximum length\. If the string is shorter than the maximum length, CHAR values are padded with blanks, but VARCHAR values store the string without blanks\.
Trailing blanks in CHAR values are always semantically insignificant\. They are disregarded when you compare two CHAR values, not included in LENGTH calculations, and removed when you convert a CHAR value to another string type\.
Trailing spaces in VARCHAR and CHAR values are treated as semantically insignificant when values are compared\.
Length calculations return the length of VARCHAR character strings with trailing spaces included in the length\. Trailing blanks are not counted in the length for fixed\-length character strings\.
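A short sketch of these rules \(literal values are arbitrary\): trailing spaces are ignored in comparisons but counted in VARCHAR length calculations\.

```
select 'abc'::varchar(6) = 'abc   '::varchar(6) as equal_compare, -- t
       len('abc   '::varchar(6)) as varchar_len,  -- 6: trailing spaces counted
       len('abc   '::char(6)) as char_len;        -- 3: trailing blanks dropped
```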
The following example creates a table called EVENT\_BACKUP for the EVENT table:
```
create table event_backup as select * from event;
```
The resulting table inherits the distribution and sort keys from the EVENT table\.
```
select "column", type, encoding, distkey, sortkey
from pg_table_def where tablename = 'event_backup';
column | type | encoding | distkey | sortkey
----------+-----------------------------+----------+---------+--------
catid | smallint | none | false | 0
dateid | smallint | none | false | 1
eventid | integer | none | true | 0
eventname | character varying(200) | none | false | 0
starttime | timestamp without time zone | none | false | 0
venueid | smallint | none | false | 0
```
The following command creates a new table called EVENTDISTSORT by selecting four columns from the EVENT table\. The new table is distributed by EVENTID and sorted by EVENTID and DATEID:
```
create table eventdistsort
distkey (1)
sortkey (1,3)
as
select eventid, venueid, dateid, eventname
from event;
```
The result is as follows:
```
select "column", type, encoding, distkey, sortkey
from pg_table_def where tablename = 'eventdistsort';
column | type | encoding | distkey | sortkey
---------+------------------------+----------+---------+-------
eventid | integer | none | t | 1
venueid | smallint | none | f | 0
dateid | smallint | none | f | 2
eventname | character varying(200) | none     | f       | 0
```
You could create exactly the same table by using column names for the distribution and sort keys\. For example:
```
create table eventdistsort1
distkey (eventid)
sortkey (eventid, dateid)
as
select eventid, venueid, dateid, eventname
from event;
```
The following statement applies even distribution to the table but doesn't define an explicit sort key\.
```
create table eventdisteven
diststyle even
as
select eventid, venueid, dateid, eventname
from event;
```
The table doesn't inherit the sort key from the EVENT table \(EVENTID\) because EVEN distribution is specified for the new table\. The new table has no sort key and no distribution key\.
```
select "column", type, encoding, distkey, sortkey
from pg_table_def where tablename = 'eventdisteven';
column | type | encoding | distkey | sortkey
----------+------------------------+----------+---------+---------
eventid | integer | none | f | 0
venueid | smallint | none | f | 0
dateid | smallint | none | f | 0
eventname | character varying(200) | none | f | 0
```
The following statement applies even distribution and defines a sort key:
```
create table eventdistevensort diststyle even sortkey (venueid)
as select eventid, venueid, dateid, eventname from event;
```
The resulting table has a sort key but no distribution key\.
```
select "column", type, encoding, distkey, sortkey
from pg_table_def where tablename = 'eventdistevensort';
column | type | encoding | distkey | sortkey
----------+------------------------+----------+---------+-------
eventid | integer | none | f | 0
venueid | smallint | none | f | 1
dateid | smallint | none | f | 0
eventname | character varying(200) | none | f | 0
```
The following statement redistributes the EVENT table on a different key column from the incoming data, which is sorted on the EVENTID column, and defines no SORTKEY column; therefore the table isn't sorted\.
```
create table venuedistevent distkey(venueid)
as select * from event;
```
The result is as follows:
```
select "column", type, encoding, distkey, sortkey
from pg_table_def where tablename = 'venuedistevent';
column | type | encoding | distkey | sortkey
----------+-----------------------------+----------+---------+-------
eventid | integer | none | f | 0
venueid | smallint | none | t | 0
catid | smallint | none | f | 0
dateid | smallint | none | f | 0
eventname | character varying(200) | none | f | 0
starttime | timestamp without time zone | none | f | 0
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CTAS_examples.md
The *staging table* is a temporary table that holds all of the data that will be used to make changes to the *target table*, including both updates and inserts\.
A merge operation requires a join between the staging table and the target table\. To collocate the joining rows, set the staging table's distribution key to the same column as the target table's distribution key\. For example, if the target table uses a foreign key column as its distribution key, use the same column for the staging table's distribution key\. If you create the staging table by using a [CREATE TABLE LIKE](r_CREATE_TABLE_NEW.md#create-table-like) statement, the staging table will inherit the distribution key from the parent table\. If you use a CREATE TABLE AS statement, the new table does not inherit the distribution key\. For more information, see [Choosing a data distribution style](t_Distributing_data.md)\.
If the distribution key is not the same as the primary key and the distribution key is not updated as part of the merge operation, add a redundant join predicate on the distribution key columns to enable a collocated join\. For example:
```
where target.primarykey = stage.primarykey
and target.distkey = stage.distkey
```
To verify that the query will use a collocated join, run the query with [EXPLAIN](r_EXPLAIN.md) and check for DS\_DIST\_NONE on all of the joins\. For more information, see [Evaluating the query plan](c_data_redistribution.md)\.
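Putting these recommendations together, a merge might look like the following sketch \(table and column names are hypothetical; `custid` is the shared distribution key and `salesid` the primary key\):

```
begin transaction;

-- Update rows that already exist in the target
update target
set qty = stage.qty
from stage
where target.salesid = stage.salesid   -- primary key
  and target.custid = stage.custid;    -- redundant distribution key predicate

-- Insert rows that are new
insert into target
select stage.*
from stage
left join target
  on stage.salesid = target.salesid
 and stage.custid = target.custid
where target.salesid is null;

end transaction;
```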
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/merge-create-staging-table.md
By default, only the master user that you created when you launched the cluster has access to the initial database in the cluster\. To grant other users access, you must create one or more user accounts\. Database user accounts are global across all the databases in a cluster; they do not belong to individual databases\.
Use the CREATE USER command to create a new database user\. When you create a new user, you specify the name of the new user and a password\. A password is required\. It must have between 8 and 64 characters, and it must include at least one uppercase letter, one lowercase letter, and one numeral\.
For example, to create a user named **GUEST** with password **ABCd4321**, issue the following command:
```
create user guest password 'ABCd4321';
```
For information about other command options, see [CREATE USER](r_CREATE_USER.md) in the SQL Command Reference\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/t_adding_redshift_user_cmd.md
The following is a list of Amazon Redshift reserved words\. You can use the reserved words with delimited identifiers \(double quotes\)\.
For more information, see [Names and identifiers](r_names.md)\.
```
AES128
AES256
ALL
ALLOWOVERWRITE
ANALYSE
ANALYZE
AND
ANY
ARRAY
AS
ASC
AUTHORIZATION
AZ64
BACKUP
BETWEEN
BINARY
BLANKSASNULL
BOTH
BYTEDICT
BZIP2
CASE
CAST
CHECK
COLLATE
COLUMN
CONSTRAINT
CREATE
CREDENTIALS
CROSS
CURRENT_DATE
CURRENT_TIME
CURRENT_TIMESTAMP
CURRENT_USER
CURRENT_USER_ID
DEFAULT
DEFERRABLE
DEFLATE
DEFRAG
DELTA
DELTA32K
DESC
DISABLE
DISTINCT
DO
ELSE
EMPTYASNULL
ENABLE
ENCODE
ENCRYPT
ENCRYPTION
END
EXCEPT
EXPLICIT
FALSE
FOR
FOREIGN
FREEZE
FROM
FULL
GLOBALDICT256
GLOBALDICT64K
GRANT
GROUP
GZIP
HAVING
IDENTITY
IGNORE
ILIKE
IN
INITIALLY
INNER
INTERSECT
INTO
IS
ISNULL
JOIN
LANGUAGE
LEADING
LEFT
LIKE
LIMIT
LOCALTIME
LOCALTIMESTAMP
LUN
LUNS
LZO
LZOP
MINUS
MOSTLY13
MOSTLY32
MOSTLY8
NATURAL
NEW
NOT
NOTNULL
NULL
NULLS
OFF
OFFLINE
OFFSET
OID
OLD
ON
ONLY
OPEN
OR
ORDER
OUTER
OVERLAPS
PARALLEL
PARTITION
PERCENT
PERMISSIONS
PLACING
PRIMARY
RAW
READRATIO
RECOVER
REFERENCES
RESPECT
REJECTLOG
RESORT
RESTORE
RIGHT
SELECT
SESSION_USER
SIMILAR
SNAPSHOT
SOME
SYSDATE
SYSTEM
TABLE
TAG
TDES
TEXT255
TEXT32K
THEN
TIMESTAMP
TO
TOP
TRAILING
TRUE
TRUNCATECOLUMNS
UNION
UNIQUE
USER
USING
VERBOSE
WALLET
WHEN
WHERE
WITH
WITHOUT
```
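As noted above, a reserved word becomes a valid identifier when it is enclosed in double quotation marks\. For example \(a sketch; the table and column names are deliberately reserved words\):

```
create table "table" ("user" varchar(50), "order" int);
select "user", "order" from "table";
```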
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_pg_keywords.md
Values \(default in bold\): **0 \(defaults to maximum\)** \- 14400000 MB
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/max_cursor_result_set_size.md
The *max\_cursor\_result\_set\_size* parameter is deprecated\. For more information about cursor result set size, see [Cursor constraints](declare.md#declare-constraints)\.
Groups are collections of users who are all granted whatever privileges are associated with the group\. You can use groups to assign privileges by role\. For example, you can create different groups for sales, administration, and support and give the users in each group the appropriate access to the data they require for their work\. You can grant or revoke privileges at the group level, and those changes will apply to all members of the group, except for superusers\.
To view all user groups, query the PG\_GROUP system catalog table:
```
select * from pg_group;
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_Groups.md
Only a superuser can create, alter, or drop groups\.
You can perform the following actions:
+ To create a group, use the [CREATE GROUP](r_CREATE_GROUP.md) command\.
+ To add users to or remove users from an existing group, use the [ALTER GROUP](r_ALTER_GROUP.md) command\.
+ To delete a group, use the [DROP GROUP](r_DROP_GROUP.md) command\. This command only drops the group, not its member users\.
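A sketch of the full lifecycle \(the group and user names are hypothetical\):

```
create group sales_admin;                  -- create the group
alter group sales_admin add user dbuser;   -- add a member
alter group sales_admin drop user dbuser;  -- remove a member
drop group sales_admin;                    -- drop the group; dbuser remains
```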
**Topics**
+ [Modifying the server configuration](t_Modifying_the_default_settings.md)
+ [analyze\_threshold\_percent](r_analyze_threshold_percent.md)
+ [datestyle](r_datestyle.md)
+ [describe\_field\_name\_in\_uppercase](r_describe_field_name_in_uppercase.md)
+ [enable\_result\_cache\_for\_session](r_enable_result_cache_for_session.md)
+ [enable\_vacuum\_boost](r_enable_vacuum_boost.md)
+ [error\_on\_nondeterministic\_update](r_error_on_nondeterministic_update.md)
+ [extra\_float\_digits](r_extra_float_digits.md)
+ [json\_serialization\_enable](r_json_serialization_enable.md)
+ [json\_serialization\_parse\_nested\_strings](r_json_serialization_parse_nested_strings.md)
+ [max\_concurrency\_scaling\_clusters](r_max_concurrency_scaling_clusters.md)
+ [max\_cursor\_result\_set\_size](max_cursor_result_set_size.md)
+ [query\_group](r_query_group.md)
+ [search\_path](r_search_path.md)
+ [statement\_timeout](r_statement_timeout.md)
+ [stored\_proc\_log\_min\_messages](r_stored_proc_log_min_messages.md)
+ [timezone](r_timezone_config.md)
+ [wlm\_query\_slot\_count](r_wlm_query_slot_count.md)
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/cm_chap_ConfigurationRef.md
Defines the default set of access privileges to be applied to objects that are created in the future by the specified user\. By default, users can change only their own default access privileges\. Only a superuser can specify default privileges for other users\.
You can apply default privileges to users or user groups\. You can set default privileges globally for all objects created in the current database, or for objects created only in the specified schemas\.
Default privileges apply only to new objects\. Running ALTER DEFAULT PRIVILEGES doesn’t change privileges on existing objects\.
For more information about privileges, see [GRANT](r_GRANT.md)\.
To view information about the default privileges for database users, query the [PG\_DEFAULT\_ACL](r_PG_DEFAULT_ACL.md) system catalog table\.
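For example, to review the default privileges currently in effect:

```
select * from pg_default_acl;
```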
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_ALTER_DEFAULT_PRIVILEGES.md
```
ALTER DEFAULT PRIVILEGES
[ FOR USER target_user [, ...] ]
[ IN SCHEMA schema_name [, ...] ]
grant_or_revoke_clause
where grant_or_revoke_clause is one of:
GRANT { { SELECT | INSERT | UPDATE | DELETE | REFERENCES } [,...] | ALL [ PRIVILEGES ] }
ON TABLES
TO { user_name [ WITH GRANT OPTION ]| GROUP group_name | PUBLIC } [, ...]
GRANT { EXECUTE | ALL [ PRIVILEGES ] }
ON FUNCTIONS
TO { user_name [ WITH GRANT OPTION ] | GROUP group_name | PUBLIC } [, ...]
GRANT { EXECUTE | ALL [ PRIVILEGES ] }
ON PROCEDURES
TO { user_name [ WITH GRANT OPTION ] | GROUP group_name | PUBLIC } [, ...]
REVOKE [ GRANT OPTION FOR ] { { SELECT | INSERT | UPDATE | DELETE | REFERENCES } [,...] | ALL [ PRIVILEGES ] }
ON TABLES
FROM user_name [, ...] [ CASCADE | RESTRICT ]
REVOKE { { SELECT | INSERT | UPDATE | DELETE | REFERENCES } [,...] | ALL [ PRIVILEGES ] }
ON TABLES
FROM { GROUP group_name | PUBLIC } [, ...] [ CASCADE | RESTRICT ]
REVOKE [ GRANT OPTION FOR ] { EXECUTE | ALL [ PRIVILEGES ] }
ON FUNCTIONS
FROM user_name [, ...] [ CASCADE | RESTRICT ]
REVOKE { EXECUTE | ALL [ PRIVILEGES ] }
ON FUNCTIONS
FROM { GROUP group_name | PUBLIC } [, ...] [ CASCADE | RESTRICT ]
REVOKE [ GRANT OPTION FOR ] { EXECUTE | ALL [ PRIVILEGES ] }
ON PROCEDURES
FROM user_name [, ...] [ CASCADE | RESTRICT ]
REVOKE { EXECUTE | ALL [ PRIVILEGES ] }
ON PROCEDURES
FROM { GROUP group_name | PUBLIC } [, ...] [ CASCADE | RESTRICT ]
```
FOR USER *target\_user* <a name="default-for-user"></a>
Optional\. The name of the user for which default privileges are defined\. Only a superuser can specify default privileges for other users\. The default value is the current user\.
IN SCHEMA *schema\_name* <a name="default-in-schema"></a>
Optional\. If an IN SCHEMA clause appears, the specified default privileges are applied to new objects created in the specified *schema\_name*\. In this case, the user or user group that is the target of ALTER DEFAULT PRIVILEGES must have CREATE privilege for the specified schema\. Default privileges that are specific to a schema are added to existing global default privileges\. By default, default privileges are applied globally to the entire database\.
GRANT <a name="default-grant"></a>
The set of privileges to grant to the specified users or groups for all new tables, functions, or stored procedures created by the specified user\. You can set the same privileges and options with the GRANT clause that you can with the [GRANT](r_GRANT.md) command\.
WITH GRANT OPTION <a name="default-grant-option"></a>
A clause that indicates that the user receiving the privileges can in turn grant the same privileges to others\. You can't grant WITH GRANT OPTION to a group or to PUBLIC\.
TO *user\_name* \| GROUP *group\_name* <a name="default-to"></a>
The name of the user or user group to which the specified default privileges are applied\.
REVOKE <a name="default-revoke"></a>
The set of privileges to revoke from the specified users or groups for all new tables, functions, or stored procedures created by the specified user\. You can set the same privileges and options with the REVOKE clause that you can with the [REVOKE](r_REVOKE.md) command\.
GRANT OPTION FOR <a name="default-revoke-option"></a>
A clause that revokes only the option to grant a specified privilege to other users and doesn't revoke the privilege itself\. You can't revoke GRANT OPTION from a group or from PUBLIC\.
FROM *user\_name* \| GROUP *group\_name* <a name="default-from"></a>
The name of the user or user group from which the specified privileges are revoked by default\.
Suppose that you want to allow any user in the user group `report_readers` to view all tables created by the user `report_admin`\. In this case, execute the following command as a superuser\.
```
alter default privileges for user report_admin grant select on tables to group report_readers;
```
In the following example, the first command grants SELECT privileges on all new tables you create\.
```
alter default privileges grant select on tables to public;
```
The following example grants INSERT privilege to the `sales_admin` user group for all new tables and views that you create in the `sales` schema\.
```
alter default privileges in schema sales grant insert on tables to group sales_admin;
```
The following example reverses the ALTER DEFAULT PRIVILEGES command in the preceding example\.
```
alter default privileges in schema sales revoke insert on tables from group sales_admin;
```
By default, the PUBLIC user group has EXECUTE permission for all new user\-defined functions\. To revoke `public` EXECUTE permissions for your new functions and then grant EXECUTE permission only to the `dev_test` user group, execute the following commands\.
```
alter default privileges revoke execute on functions from public;
alter default privileges grant execute on functions to group dev_test;
```
Displays the EXPLAIN plan for a query that has been submitted for execution\.
**Note**
System views with the prefix SVCS provide details about queries on both the main and concurrency scaling clusters\. The views are similar to the tables with the prefix STL except that the STL tables provide information only for queries run on the main cluster\.
This table is visible to all users\. Superusers can see all rows; regular users can see only their own data\. For more information, see [Visibility of data in system tables and views](c_visibility-of-data.md)\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVCS_EXPLAIN.md
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_SVCS_EXPLAIN.html)
Consider the following EXPLAIN output for an aggregate join query:
```
explain select avg(datediff(day, listtime, saletime)) as avgwait
from sales, listing where sales.listid = listing.listid;
QUERY PLAN
------------------------------------------------------------------------------
XN Aggregate (cost=6350.30..6350.31 rows=1 width=16)
-> XN Hash Join DS_DIST_NONE (cost=47.08..6340.89 rows=3766 width=16)
Hash Cond: ("outer".listid = "inner".listid)
-> XN Seq Scan on listing (cost=0.00..1924.97 rows=192497 width=12)
-> XN Hash (cost=37.66..37.66 rows=3766 width=12)
-> XN Seq Scan on sales (cost=0.00..37.66 rows=3766 width=12)
(6 rows)
```
If you run this query and its query ID is 10, you can use the SVCS\_EXPLAIN table to see the same kind of information that the EXPLAIN command returns:
```
select query,nodeid,parentid,substring(plannode from 1 for 30),
substring(info from 1 for 20) from svcs_explain
where query=10 order by 1,2;
query| nodeid |parentid| substring | substring
-----+--------+--------+--------------------------------+-------------------
10 | 1 | 0 |XN Aggregate (cost=6717.61..6 |
10 | 2 | 1 | -> XN Merge Join DS_DIST_NO| Merge Cond:("outer"
10 | 3 | 2 | -> XN Seq Scan on lis |
10 | 4 | 2 | -> XN Seq Scan on sal |
(4 rows)
```
Consider the following query:
```
select event.eventid, sum(pricepaid)
from event, sales
where event.eventid=sales.eventid
group by event.eventid order by 2 desc;
eventid | sum
--------+----------
289 | 51846.00
7895 | 51049.00
1602 | 50301.00
851 | 49956.00
7315 | 49823.00
...
```
If this query's ID is 15, the following system table query returns the plan nodes that were executed\. In this case, the order of the nodes is reversed to show the actual order of execution:
```
select query,nodeid,parentid,substring(plannode from 1 for 56)
from svcs_explain where query=15 order by 1, 2 desc;
query|nodeid|parentid| substring
-----+------+--------+--------------------------------------------------------
15 | 8 | 7 | -> XN Seq Scan on eve
15 | 7 | 5 | -> XN Hash(cost=87.98..87.9
15 | 6 | 5 | -> XN Seq Scan on sales(cos
15 | 5 | 4 | -> XN Hash Join DS_DIST_OUTER(cos
15 | 4 | 3 | -> XN HashAggregate(cost=862286577.07..
15 | 3 | 2 | -> XN Sort(cost=1000862287175.47..10008622871
15 | 2 | 1 | -> XN Network(cost=1000862287175.47..1000862287197.
15 | 1 | 0 |XN Merge(cost=1000862287175.47..1000862287197.46 rows=87
(8 rows)
```
The following query retrieves the query IDs for any query plans that contain a window function:
```
select query, trim(plannode) from svcs_explain
where plannode like '%Window%';
query| btrim
-----+------------------------------------------------------------------------
26 | -> XN Window(cost=1000985348268.57..1000985351256.98 rows=170 width=33)
27 | -> XN Window(cost=1000985348268.57..1000985351256.98 rows=170 width=33)
(2 rows)
```
Within the constraints listed in this topic, you can use UDFs anywhere you use the Amazon Redshift built\-in scalar functions\. For more information, see [SQL functions reference](c_SQL_functions.md)\.
Amazon Redshift Python UDFs have the following constraints:
+ Python UDFs cannot access the network or read or write to the file system\.
+ The total size of user\-installed Python libraries cannot exceed 100 MB\.
+ The number of Python UDFs that can run concurrently per cluster is limited to one\-fourth of the total concurrency level for the cluster\. For example, if the cluster is configured with a concurrency of 15, a maximum of three UDFs can run concurrently\. After the limit is reached, UDFs are queued for execution within workload management queues\. SQL UDFs don't have a concurrency limit\. For more information, see [Implementing workload management](cm-c-implementing-workload-management.md)\.
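The concurrency limit in the last constraint is simple integer division; a quick sketch of the arithmetic described above:

```python
def max_concurrent_python_udfs(cluster_concurrency: int) -> int:
    """Python UDF slots are limited to one-fourth of the cluster's
    total WLM concurrency level, rounded down (integer division)."""
    return cluster_concurrency // 4

# The example from the text: a concurrency level of 15 allows
# at most three Python UDFs to run at once.
print(max_concurrent_python_udfs(15))  # 3
```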
ST\_LineFromMultiPoint returns a linestring from an input multipoint geometry\. The order of the points is preserved\. The spatial reference system identifier \(SRID\) of the returned geometry is the same as that of the input geometry\.
```
ST_LineFromMultiPoint(geom)
```
*geom*
A value of data type `GEOMETRY` or an expression that evaluates to a `GEOMETRY` type\. The subtype must be `MULTIPOINT`\.
`GEOMETRY`
If *geom* is null, then null is returned\.
If *geom* isn't a `MULTIPOINT`, then an error is returned\.
The following SQL creates a linestring from a multipoint\.
```
SELECT ST_AsEWKT(ST_LineFromMultiPoint(ST_GeomFromText('MULTIPOINT(0 0,10 0,10 10,5 5,0 5)',4326)));
```
```
st_asewkt
---------------------------------------------
SRID=4326;LINESTRING(0 0,10 0,10 10,5 5,0 5)
```
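The function's contract (point order preserved, SRID carried over) can be mimicked on EWKT strings outside the database. A toy Python sketch, not Redshift's implementation, handling only the simple `SRID=n;MULTIPOINT(x y,...)` form:

```python
def line_from_multipoint_ewkt(ewkt: str) -> str:
    """Turn an EWKT MULTIPOINT into an EWKT LINESTRING, preserving
    point order and the SRID prefix. Illustration only: it does not
    handle the parenthesized per-point form, 3D points, or NULLs."""
    srid, _, body = ewkt.partition(";")
    if not body.startswith("MULTIPOINT("):
        raise ValueError("input must be a MULTIPOINT")
    coords = body[len("MULTIPOINT("):-1]  # strip the wrapper parentheses
    return f"{srid};LINESTRING({coords})"

print(line_from_multipoint_ewkt("SRID=4326;MULTIPOINT(0 0,10 0,10 10,5 5,0 5)"))
# SRID=4326;LINESTRING(0 0,10 0,10 10,5 5,0 5)
```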
Use the STV\_TBL\_TRANS table to find out information about the transient database tables that are currently in memory\.
Transient tables are typically temporary row sets that are used as intermediate results while a query runs\. STV\_TBL\_TRANS differs from [STV\_TBL\_PERM](r_STV_TBL_PERM.md) in that STV\_TBL\_PERM contains information about permanent database tables\.
STV\_TBL\_TRANS is visible only to superusers\. For more information, see [Visibility of data in system tables and views](c_visibility-of-data.md)\.
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_STV_TBL_TRANS.html)
To view transient table information for a query with a query ID of 90, type the following command:
```
select slice, id, rows, size, query_id, ref_cnt
from stv_tbl_trans
where query_id = 90;
```
This query returns the transient table information for query 90, as shown in the following sample output:
```
slice | id | rows | size | query_ | ref_ | from_ | prep_
| | | | id | cnt | suspended | swap
------+----+------+------+--------+------+-----------+-------
1013 | 95 | 0 | 0 | 90 | 4 | 0 | 0
7 | 96 | 0 | 0 | 90 | 4 | 0 | 0
10 | 96 | 0 | 0 | 90 | 4 | 0 | 0
17 | 96 | 0 | 0 | 90 | 4 | 0 | 0
14 | 96 | 0 | 0 | 90 | 4 | 0 | 0
3 | 96 | 0 | 0 | 90 | 4 | 0 | 0
1013 | 99 | 0 | 0 | 90 | 4 | 0 | 0
9 | 96 | 0 | 0 | 90 | 4 | 0 | 0
5 | 96 | 0 | 0 | 90 | 4 | 0 | 0
19 | 96 | 0 | 0 | 90 | 4 | 0 | 0
2 | 96 | 0 | 0 | 90 | 4 | 0 | 0
1013 | 98 | 0 | 0 | 90 | 4 | 0 | 0
13 | 96 | 0 | 0 | 90 | 4 | 0 | 0
1 | 96 | 0 | 0 | 90 | 4 | 0 | 0
1013 | 96 | 0 | 0 | 90 | 4 | 0 | 0
6 | 96 | 0 | 0 | 90 | 4 | 0 | 0
11 | 96 | 0 | 0 | 90 | 4 | 0 | 0
15 | 96 | 0 | 0 | 90 | 4 | 0 | 0
18 | 96 | 0 | 0 | 90 | 4 | 0 | 0
```
In this example, you can see that the query data involves tables 95, 96, 98, and 99\. Because zero bytes are allocated to these tables, this query can run in memory\.
If you need to add a large quantity of data, load the data in sequential blocks according to sort order to eliminate the need to vacuum\.
For example, suppose that you need to load a table with events from January 2017 to December 2017\. Assuming each month is in a single file, load the rows for January, then February, and so on\. Your table is completely sorted when your load completes, and you don't need to run a vacuum\. For more information, see [Use time\-series tables](c_best-practices-time-series-tables.md)\.
When loading very large datasets, the space required to sort might exceed the total available space\. By loading data in smaller blocks, you use much less intermediate sort space during each load\. In addition, loading smaller blocks makes it easier to restart if the COPY fails and is rolled back\.
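One way to drive the monthly loads described above is to generate one COPY statement per month, in sort order. A hypothetical Python sketch; the bucket prefix, table name, and IAM role ARN are placeholders, not values from the guide:

```python
def monthly_copy_commands(year: int) -> list:
    """Build one COPY per month so rows arrive in sort order and no
    post-load VACUUM is needed. All S3/IAM names are hypothetical."""
    commands = []
    for month in range(1, 13):  # January first, matching the sort key
        prefix = f"s3://my-bucket/events/{year}-{month:02d}"
        commands.append(
            f"COPY events FROM '{prefix}' "
            f"IAM_ROLE 'arn:aws:iam::123456789012:role/MyRedshiftRole' CSV;"
        )
    return commands

cmds = monthly_copy_commands(2017)
print(len(cmds))  # 12 sequential loads for the year
print(cmds[0])    # the January load runs first
```

Running the commands strictly in list order is what keeps the table sorted; loading a later month before an earlier one would reintroduce the need to vacuum.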
Displays the EXPLAIN plan for a query that has been submitted for execution\.
This view is visible to all users\. Superusers can see all rows; regular users can see only their own data\. For more information, see [Visibility of data in system tables and views](c_visibility-of-data.md)\.
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_STL_EXPLAIN.html)
Consider the following EXPLAIN output for an aggregate join query:
```
explain select avg(datediff(day, listtime, saletime)) as avgwait
from sales, listing where sales.listid = listing.listid;
QUERY PLAN
------------------------------------------------------------------------------
XN Aggregate (cost=6350.30..6350.31 rows=1 width=16)
-> XN Hash Join DS_DIST_NONE (cost=47.08..6340.89 rows=3766 width=16)
Hash Cond: ("outer".listid = "inner".listid)
-> XN Seq Scan on listing (cost=0.00..1924.97 rows=192497 width=12)
-> XN Hash (cost=37.66..37.66 rows=3766 width=12)
-> XN Seq Scan on sales (cost=0.00..37.66 rows=3766 width=12)
(6 rows)
```
If you run this query and its query ID is 10, you can use the STL\_EXPLAIN table to see the same kind of information that the EXPLAIN command returns:
```
select query,nodeid,parentid,substring(plannode from 1 for 30),
substring(info from 1 for 20) from stl_explain
where query=10 order by 1,2;
query| nodeid |parentid| substring | substring
-----+--------+--------+--------------------------------+-------------------
10 | 1 | 0 |XN Aggregate (cost=6717.61..6 |
10 | 2 | 1 | -> XN Merge Join DS_DIST_NO | Merge Cond:("outer"
10 | 3 | 2 | -> XN Seq Scan on lis |
10 | 4 | 2 | -> XN Seq Scan on sal |
(4 rows)
```
Consider the following query:
```
select event.eventid, sum(pricepaid)
from event, sales
where event.eventid=sales.eventid
group by event.eventid order by 2 desc;
eventid | sum
--------+----------
289 | 51846.00
7895 | 51049.00
1602 | 50301.00
851 | 49956.00
7315 | 49823.00
...
```
If this query's ID is 15, the following system view query returns the plan nodes that were executed\. In this case, the order of the nodes is reversed to show the actual order of execution:
```
select query,nodeid,parentid,substring(plannode from 1 for 56)
from stl_explain where query=15 order by 1, 2 desc;
query|nodeid|parentid| substring
-----+------+--------+--------------------------------------------------------
15 | 8 | 7 | -> XN Seq Scan on eve
15 | 7 | 5 | -> XN Hash(cost=87.98..87.9
15 | 6 | 5 | -> XN Seq Scan on sales(cos
15 | 5 | 4 | -> XN Hash Join DS_DIST_OUTER(cos
15 | 4 | 3 | -> XN HashAggregate(cost=862286577.07..
15 | 3 | 2 | -> XN Sort(cost=1000862287175.47..10008622871
15 | 2 | 1 | -> XN Network(cost=1000862287175.47..1000862287197.
15 | 1 | 0 |XN Merge(cost=1000862287175.47..1000862287197.46 rows=87
(8 rows)
```
The following query retrieves the query IDs for any query plans that contain a window function:
```
select query, trim(plannode) from stl_explain
where plannode like '%Window%';
query| btrim
-----+------------------------------------------------------------------------
26 | -> XN Window(cost=1000985348268.57..1000985351256.98 rows=170 width=33)
27 | -> XN Window(cost=1000985348268.57..1000985351256.98 rows=170 width=33)
(2 rows)
```