| id | text | source |
|---|---|---|
0f17c12490b7-0
|
`BOOLEAN`
If *geom1* or *geom2* is null, then null is returned\.
If *geom1* and *geom2* don't have the same value for the spatial reference system identifier \(SRID\), then an error is returned\.
If *geom1* or *geom2* is a geometry collection, then an error is returned\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_Covers-function.md
|
c18289d33730-0
|
The following SQL checks if the first polygon covers the second polygon\.
```
SELECT ST_Covers(ST_GeomFromText('POLYGON((0 2,1 1,0 -1,0 2))'), ST_GeomFromText('POLYGON((-1 3,2 1,0 -3,-1 3))'));
```
```
st_covers
-----------
false
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_Covers-function.md
|
9cb3087d90a9-0
|
Use the STV\_LOCKS table to view any current updates on tables in the database\.
Amazon Redshift locks tables to prevent two users from updating the same table at the same time\. While the STV\_LOCKS table shows all current table updates, query the [STL\_TR\_CONFLICT](r_STL_TR_CONFLICT.md) table to see a log of lock conflicts\. Use the [SVV\_TRANSACTIONS](r_SVV_TRANSACTIONS.md) view to identify open transactions and lock contention issues\.
STV\_LOCKS is visible only to superusers\. For more information, see [Visibility of data in system tables and views](c_visibility-of-data.md)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STV_LOCKS.md
|
c862c5e6db1b-0
|
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_STV_LOCKS.html)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STV_LOCKS.md
|
a7ce4fb71d90-0
|
To view all locks taking place in current transactions, type the following command:
```
select table_id, last_update, lock_owner, lock_owner_pid from stv_locks;
```
This query returns the following sample output, which displays three locks currently in effect:
```
table_id | last_update | lock_owner | lock_owner_pid
----------+----------------------------+------------+----------------
100004 | 2008-12-23 10:08:48.882319 | 1043 | 5656
100003 | 2008-12-23 10:08:48.779543 | 1043 | 5656
100140 | 2008-12-23 10:08:48.021576 | 1043 | 5656
(3 rows)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STV_LOCKS.md
|
578d717d4a0e-0
|
If you have a table with very few columns but a very large number of rows, the three hidden metadata identity columns \(INSERT\_XID, DELETE\_XID, ROW\_ID\) will consume a disproportionate amount of the disk space for the table\.
In order to optimize compression of the hidden columns, load the table in a single COPY transaction where possible\. If you load the table with multiple separate COPY commands, the INSERT\_XID column will not compress well\. You will need to perform a vacuum operation if you use multiple COPY commands, but it will not improve compression of INSERT\_XID\.
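As an illustration, one way to keep the load in a single COPY transaction is to list all of the files in a manifest, so that INSERT\_XID holds a single value and compresses well\. The following is a minimal sketch; the table, bucket, and role names are hypothetical\.
```
-- One COPY, one transaction: INSERT_XID stays constant and compresses well.
-- Several separate COPY commands would instead write several distinct
-- transaction IDs, which compress poorly.
copy narrow_table
from 's3://my-bucket/narrow_table.manifest'
iam_role 'arn:aws:iam::123456789012:role/MyRedshiftRole'
manifest;
```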
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_load_compression_hidden_cols.md
|
5ea47c7a1156-0
|
CHANGE\_QUERY\_PRIORITY enables superusers to modify the priority of a query that is either running or waiting in workload management \(WLM\)\.
This function enables superusers to immediately change the priority of any query in the system\. Only one query, user, or session can run with the priority `CRITICAL`\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CHANGE_QUERY_PRIORITY.md
|
ca0890994859-0
|
```
CHANGE_QUERY_PRIORITY(query_id, priority)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CHANGE_QUERY_PRIORITY.md
|
20d39daff378-0
|
*query\_id*
The query identifier of the query whose priority is changed\.
*priority*
The new priority to be assigned to the query\. This argument must be a string with the value `CRITICAL`, `HIGHEST`, `HIGH`, `NORMAL`, `LOW`, or `LOWEST`\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CHANGE_QUERY_PRIORITY.md
|
7179a485ebd6-0
|
None
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CHANGE_QUERY_PRIORITY.md
|
d6e1446c31f6-0
|
The following example shows the column `query_priority` in the STV\_WLM\_QUERY\_STATE system table\.
```
select query, service_class, query_priority, state
from stv_wlm_query_state where service_class = 101;
query | service_class | query_priority | state
-------+---------------+----------------------+------------------
1076 | 101 | Lowest | Running
1075 | 101 | Lowest | Running
(2 rows)
```
The following example shows the results of a superuser running the function `change_query_priority` to change the priority to `CRITICAL`\.
```
select change_query_priority(1076, 'Critical');
change_query_priority
--------------------------------------------------
Succeeded to change query priority. Priority changed from Lowest to Critical.
(1 row)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CHANGE_QUERY_PRIORITY.md
|
dcb7c7d4bd02-0
|
The QUOTE\_LITERAL function returns the specified string as a quoted string so that it can be used as a string literal in a SQL statement\. If the input parameter is a number, QUOTE\_LITERAL treats it as a string\. The function appropriately doubles any embedded single quotes and backslashes\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_QUOTE_LITERAL.md
|
7c56b060ff15-0
|
```
QUOTE_LITERAL(string)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_QUOTE_LITERAL.md
|
0a88afb44cde-0
|
*string*
The input parameter is a CHAR or VARCHAR string\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_QUOTE_LITERAL.md
|
cfdf434a2b59-0
|
The QUOTE\_LITERAL function returns a string that is the same data type as the input string \(CHAR or VARCHAR\)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_QUOTE_LITERAL.md
|
fd670b11e350-0
|
The following example returns the CATID column surrounded by quotes\. Note that the ordering now treats this column as a string:
```
select quote_literal(catid), catname
from category
order by 1,2;
quote_literal | catname
--------------+-----------
'1' | MLB
'10' | Jazz
'11' | Classical
'2' | NHL
'3' | NFL
'4' | NBA
'5' | MLS
'6' | Musicals
'7' | Plays
'8' | Opera
'9' | Pop
(11 rows)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_QUOTE_LITERAL.md
|
e7df8b16d9bb-0
|
**Topics**
+ [PG\_ATTRIBUTE\_INFO](r_PG_ATTRIBUTE_INFO.md)
+ [PG\_CLASS\_INFO](r_PG_CLASS_INFO.md)
+ [PG\_DATABASE\_INFO](r_PG_DATABASE_INFO.md)
+ [PG\_DEFAULT\_ACL](r_PG_DEFAULT_ACL.md)
+ [PG\_EXTERNAL\_SCHEMA](r_PG_EXTERNAL_SCHEMA.md)
+ [PG\_LIBRARY](r_PG_LIBRARY.md)
+ [PG\_PROC\_INFO](r_PG_PROC_INFO.md)
+ [PG\_STATISTIC\_INDICATOR](r_PG_STATISTIC_INDICATOR.md)
+ [PG\_TABLE\_DEF](r_PG_TABLE_DEF.md)
+ [Querying the catalog tables](c_join_PG.md)
The system catalogs store schema metadata, such as information about tables and columns\. System catalog tables have a PG prefix\.
The standard PostgreSQL catalog tables are accessible to Amazon Redshift users\. For more information about PostgreSQL system catalogs, see [PostgreSQL system tables](https://www.postgresql.org/docs/8.0/static/catalogs.html#CATALOGS-OVERVIEW)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_intro_catalog_views.md
|
8a1f1b5a4856-0
|
Displays row and block statistics for tables that have been vacuumed\.
The view shows information specific to when each vacuum operation started and finished, and demonstrates the benefits of running the operation\. For information about the requirements for running this command, see the [VACUUM](r_VACUUM_command.md) command description\.
This view is visible only to superusers\. For more information, see [Visibility of data in system tables and views](c_visibility-of-data.md)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_VACUUM.md
|
d775c0d5a2e4-0
|
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_STL_VACUUM.html)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_VACUUM.md
|
b876ced9c705-0
|
The following query reports vacuum statistics for table 108313\. The table was vacuumed following a series of inserts and deletes\.
```
select xid, table_id, status, rows, sortedrows, blocks, eventtime
from stl_vacuum where table_id=108313 order by eventtime;
xid | table_id | status | rows | sortedrows | blocks | eventtime
-------+----------+----------------------+------------+------------+--------+---------------------
14294 | 108313 | Started | 1950266199 | 400043488 | 280887 | 2016-05-19 17:36:01
14294 | 108313 | Finished | 600099388 | 600099388 | 88978 | 2016-05-19 18:26:13
15126 | 108313 | Skipped(sorted>=95%) | 600099388 | 600099388 | 88978 | 2016-05-19 18:26:38
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_VACUUM.md
|
b876ced9c705-1
|
At the start of the VACUUM, the table contained 1,950,266,199 rows stored in 280,887 1 MB blocks\. After the delete phase \(transaction 14294\) completed, the vacuum reclaimed space for the deleted rows\. The ROWS column shows a value of 400,043,488, and the BLOCKS column dropped from 280,887 to 88,978\. The vacuum reclaimed 191,909 blocks \(191\.9 GB\) of disk space\.
In the sort phase \(transaction 15126\), the vacuum was able to skip the table because the rows were inserted in sort key order\.
The following example shows the statistics for a SORT ONLY vacuum on the SALES table \(table 110116 in this example\) after a large INSERT operation:
```
vacuum sort only sales;
select xid, table_id, status, rows, sortedrows, blocks, eventtime
from stl_vacuum order by xid, table_id, eventtime;
xid |table_id| status | rows |sortedrows|blocks| eventtime
----+--------+-----------------+-------+----------+------+--------------------
...
2925| 110116 |Started Sort Only|1379648| 172456 | 132 | 2011-02-24 16:25:21...
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_VACUUM.md
|
b876ced9c705-2
|
```
2925| 110116 |Started Sort Only|1379648| 172456 | 132 | 2011-02-24 16:25:21...
2925| 110116 |Finished |1379648| 1379648 | 132 | 2011-02-24 16:26:28...
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_VACUUM.md
|
eb2b929c8465-0
|
**Topics**
+ [Syntax](#r_conditions-synopsis)
+ [Comparison condition](r_comparison_condition.md)
+ [Logical conditions](r_logical_condition.md)
+ [Pattern\-matching conditions](pattern-matching-conditions.md)
+ [BETWEEN range condition](r_range_condition.md)
+ [Null condition](r_null_condition.md)
+ [EXISTS condition](r_exists_condition.md)
+ [IN condition](r_in_condition.md)
A condition is a statement of one or more expressions and logical operators that evaluates to true, false, or unknown\. Conditions are also sometimes referred to as predicates\.
**Note**
All string comparisons and LIKE pattern matches are case\-sensitive\. For example, 'A' and 'a' do not match\. However, you can do a case\-insensitive pattern match by using the ILIKE predicate\.
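For example, the following sketch contrasts the two predicates using the CATEGORY table from the TICKIT sample data that appears elsewhere in this guide\.
```
-- Case-sensitive: matches only names that begin with an uppercase 'O'.
select catname from category where catname like 'O%';

-- Case-insensitive: also matches 'opera', 'OPERA', and so on.
select catname from category where catname ilike 'o%';
```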
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_conditions.md
|
185158ad4785-0
|
```
comparison_condition
| logical_condition
| range_condition
| pattern_matching_condition
| null_condition
| EXISTS_condition
| IN_condition
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_conditions.md
|
108aaff7e89c-0
|
**To retrieve the Amazon Redshift cluster public key and cluster node IP addresses for your cluster using the console**
1. Access the Amazon Redshift Management Console\.
1. Click the **Clusters** link in the navigation pane\.
1. Select your cluster from the list\.
1. Locate the **SSH Ingestion Settings** group\.
Note the **Cluster Public Key** and **Node IP addresses**\. You will use them in later steps\.
![\[Image NOT FOUND\]](http://docs.aws.amazon.com/redshift/latest/dg/images/copy-from-ssh-console-2.png)
You will use the Private IP addresses in Step 3 to configure the Amazon EC2 host to accept the connection from Amazon Redshift\.
To retrieve the cluster public key and cluster node IP addresses for your cluster using the Amazon Redshift CLI, execute the describe\-clusters command\. For example:
```
aws redshift describe-clusters --cluster-identifier <cluster-identifier>
```
The response will include a ClusterPublicKey value and the list of private and public IP addresses, similar to the following:
```
{
"Clusters": [
{
"VpcSecurityGroups": [],
"ClusterStatus": "available",
"ClusterNodes": [
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/load-from-emr-steps-retrieve-key-and-ips.md
|
108aaff7e89c-1
|
"VpcSecurityGroups": [],
"ClusterStatus": "available",
"ClusterNodes": [
{
"PrivateIPAddress": "10.nnn.nnn.nnn",
"NodeRole": "LEADER",
"PublicIPAddress": "10.nnn.nnn.nnn"
},
{
"PrivateIPAddress": "10.nnn.nnn.nnn",
"NodeRole": "COMPUTE-0",
"PublicIPAddress": "10.nnn.nnn.nnn"
},
{
"PrivateIPAddress": "10.nnn.nnn.nnn",
"NodeRole": "COMPUTE-1",
"PublicIPAddress": "10.nnn.nnn.nnn"
}
],
"AutomatedSnapshotRetentionPeriod": 1,
"PreferredMaintenanceWindow": "wed:05:30-wed:06:00",
"AvailabilityZone": "us-east-1a",
"NodeType": "ds2.xlarge",
"ClusterPublicKey": "ssh-rsa AAAABexamplepublickey...Y3TAl Amazon-Redshift",
...
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/load-from-emr-steps-retrieve-key-and-ips.md
|
108aaff7e89c-2
|
```
...
}
```
To retrieve the cluster public key and cluster node IP addresses for your cluster using the Amazon Redshift API, use the `DescribeClusters` action\. For more information, see [describe\-clusters](https://docs.aws.amazon.com/cli/latest/reference/redshift/describe-clusters.html) in the *Amazon Redshift CLI Guide* or [DescribeClusters](https://docs.aws.amazon.com/redshift/latest/APIReference/API_DescribeClusters.html) in the *Amazon Redshift API Guide*\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/load-from-emr-steps-retrieve-key-and-ips.md
|
836440b0e2be-0
|
Standard window function syntax is as follows\.
```
function (expression) OVER (
[ PARTITION BY expr_list ]
[ ORDER BY order_list [ frame_clause ] ] )
```
Here, *function* is one of the functions described in this section and *expr\_list* is as follows\.
```
expression | column_name [, expr_list ]
```
The *order\_list* is as follows\.
```
expression | column_name [ ASC | DESC ]
[ NULLS FIRST | NULLS LAST ]
[, order_list ]
```
The *frame\_clause* is as follows\.
```
ROWS
{ UNBOUNDED PRECEDING | unsigned_value PRECEDING | CURRENT ROW } |
{BETWEEN
{ UNBOUNDED PRECEDING | unsigned_value { PRECEDING | FOLLOWING } |
CURRENT ROW}
AND
{ UNBOUNDED FOLLOWING | unsigned_value { PRECEDING | FOLLOWING } |
CURRENT ROW }}
```
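Putting the clauses together, the following minimal sketch computes a per\-seller running total over the SALES table from the TICKIT sample data\.
```
-- Running total of quantity sold for each seller, in salesid order.
select salesid, sellerid, qtysold,
       sum(qtysold) over (
           partition by sellerid
           order by salesid
           rows between unbounded preceding and current row) as running_qty
from sales
order by sellerid, salesid;
```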
**Note**
STDDEV\_SAMP and VAR\_SAMP are synonyms for STDDEV and VARIANCE, respectively\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_Window_function_synopsis.md
|
899538cee467-0
|
*function*
For details, see the individual function descriptions\.
OVER
Clause that defines the window specification\. The OVER clause is mandatory for window functions and differentiates window functions from other SQL functions\.
PARTITION BY *expr\_list*
\(Optional\) The PARTITION BY clause subdivides the result set into partitions, much like the GROUP BY clause\. If a partition clause is present, the function is calculated for the rows in each partition\. If no partition clause is specified, a single partition contains the entire table, and the function is computed for that complete table\.
The ranking functions, DENSE\_RANK, NTILE, RANK, and ROW\_NUMBER, require a global comparison of all the rows in the result set\. When a PARTITION BY clause is used, the query optimizer can execute each aggregation in parallel by spreading the workload across multiple slices according to the partitions\. If the PARTITION BY clause is not present, the aggregation step must be executed serially on a single slice, which can have a significant negative impact on performance, especially for large clusters\.
ORDER BY *order\_list*
\(Optional\) The window function is applied to the rows within each partition sorted according to the order specification in ORDER BY\. This ORDER BY clause is distinct from and completely unrelated to an ORDER BY clause in a nonwindow function \(outside of the OVER clause\)\. The ORDER BY clause can be used without the PARTITION BY clause\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_Window_function_synopsis.md
|
899538cee467-1
|
For the ranking functions, the ORDER BY clause identifies the measures for the ranking values\. For aggregation functions, the partitioned rows must be ordered before the aggregate function is computed for each frame\. For more about window function types, see [Window functions](c_Window_functions.md)\.
Column identifiers or expressions that evaluate to column identifiers are required in the order list\. Neither constants nor constant expressions can be used as substitutes for column names\.
NULL values are treated as their own group, sorted and ranked according to the NULLS FIRST or NULLS LAST option\. By default, NULL values are sorted and ranked last in ASC ordering, and sorted and ranked first in DESC ordering\.
If the ORDER BY clause is omitted, the order of the rows is nondeterministic\.
In any parallel system such as Amazon Redshift, when an ORDER BY clause doesn't produce a unique and total ordering of the data, the order of the rows is nondeterministic\. That is, if the ORDER BY expression produces duplicate values \(a partial ordering\), the return order of those rows might vary from one run of Amazon Redshift to the next\. In turn, window functions might return unexpected or inconsistent results\. For more information, see [Unique ordering of data for window functions](r_Examples_order_by_WF.md)\.
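A common remedy, sketched here with the TICKIT SALES table, is to append a unique column to the ORDER BY clause so that the ordering is total\.
```
-- saletime alone can contain duplicates; adding the unique salesid
-- makes the row numbering deterministic from run to run.
select salesid, saletime,
       row_number() over (order by saletime, salesid) as rn
from sales;
```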
*column\_name*
Name of a column to be partitioned by or ordered by\.
ASC \| DESC
Option that defines the sort order for the expression, as follows:
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_Window_function_synopsis.md
|
899538cee467-2
|
ASC \| DESC
Option that defines the sort order for the expression, as follows:
+ ASC: ascending \(for example, low to high for numeric values and 'A' to 'Z' for character strings\)\. If no option is specified, data is sorted in ascending order by default\.
+ DESC: descending \(high to low for numeric values; 'Z' to 'A' for strings\)\.
NULLS FIRST \| NULLS LAST
Option that specifies whether NULLS should be ordered first, before non\-null values, or last, after non\-null values\. By default, NULLS are sorted and ranked last in ASC ordering, and sorted and ranked first in DESC ordering\.
*frame\_clause*
For aggregate functions, the frame clause further refines the set of rows in a function's window when using ORDER BY\. It enables you to include or exclude sets of rows within the ordered result\. The frame clause consists of the ROWS keyword and associated specifiers\.
The frame clause doesn't apply to ranking functions\. Also, the frame clause isn't required when no ORDER BY clause is used in the OVER clause for an aggregate function\. If an ORDER BY clause is used for an aggregate function, an explicit frame clause is required\.
When no ORDER BY clause is specified, the implied frame is unbounded, equivalent to ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING\.
ROWS
This clause defines the window frame by specifying a physical offset from the current row\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_Window_function_synopsis.md
|
899538cee467-3
|
ROWS
This clause defines the window frame by specifying a physical offset from the current row\.
This clause specifies the rows in the current window or partition that the value in the current row is to be combined with\. It uses arguments that specify row position, which can be before or after the current row\. The reference point for all window frames is the current row\. Each row becomes the current row in turn as the window frame slides forward in the partition\.
The frame can be a simple set of rows up to and including the current row\.
```
{UNBOUNDED PRECEDING | offset PRECEDING | CURRENT ROW}
```
Or it can be a set of rows between two boundaries\.
```
BETWEEN
{UNBOUNDED PRECEDING | offset { PRECEDING | FOLLOWING }
| CURRENT ROW}
AND
{UNBOUNDED FOLLOWING | offset { PRECEDING | FOLLOWING }
| CURRENT ROW}
```
UNBOUNDED PRECEDING indicates that the window starts at the first row of the partition; *offset* PRECEDING indicates that the window starts a number of rows equivalent to the value of offset before the current row\. UNBOUNDED PRECEDING is the default\.
CURRENT ROW indicates the window begins or ends at the current row\.
UNBOUNDED FOLLOWING indicates that the window ends at the last row of the partition; *offset* FOLLOWING indicates that the window ends a number of rows equivalent to the value of offset after the current row\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_Window_function_synopsis.md
|
899538cee467-4
|
*offset* identifies a physical number of rows before or after the current row\. In this case, *offset* must be a constant that evaluates to a positive numeric value\. For example, 5 FOLLOWING ends the frame five rows after the current row\.
Where BETWEEN is not specified, the frame is implicitly bounded by the current row\. For example, `ROWS 5 PRECEDING` is equal to `ROWS BETWEEN 5 PRECEDING AND CURRENT ROW`\. Also, `ROWS UNBOUNDED FOLLOWING` is equal to `ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING`\.
You can't specify a frame in which the starting boundary is greater than the ending boundary\. For example, you can't specify any of the following frames\.
```
between 5 following and 5 preceding
between current row and 2 preceding
between 3 following and current row
```
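By contrast, the following sketch shows a valid frame: a centered three\-row moving average \(column names again follow the TICKIT SALES table\)\.
```
-- Average of the previous, current, and next row in saletime order.
select saletime, pricepaid,
       avg(pricepaid) over (
           order by saletime
           rows between 1 preceding and 1 following) as moving_avg
from sales;
```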
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_Window_function_synopsis.md
|
ec92821edbac-0
|
The following illustration provides a high\-level view of the query planning and execution workflow\.
![\[Image NOT FOUND\]](http://docs.aws.amazon.com/redshift/latest/dg/images/07-QueryPlanning.png)
The query planning and execution workflow follow these steps:
1. The leader node receives the query and parses the SQL\.
1. The parser produces an initial query tree that is a logical representation of the original query\. Amazon Redshift then inputs this query tree into the query optimizer\.
1. The optimizer evaluates and, if necessary, rewrites the query to maximize its efficiency\. This process sometimes results in creating multiple related queries to replace a single one\.
1. The optimizer generates a query plan \(or several, if the previous step resulted in multiple queries\) for the execution with the best performance\. The query plan specifies execution options such as join types, join order, aggregation options, and data distribution requirements\.
You can use the [EXPLAIN](r_EXPLAIN.md) command to view the query plan\. The query plan is a fundamental tool for analyzing and tuning complex queries\. For more information, see [Query plan](c-the-query-plan.md)\.
1. The execution engine translates the query plan into *steps*, *segments*, and *streams*:
**Step**
Each step is an individual operation needed during query execution\. Steps can be combined to allow compute nodes to perform a query, join, or other database operation\.
**Segment**
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c-query-planning.md
|
ec92821edbac-1
|
**Segment**
A combination of several steps that can be done by a single process\. A segment is also the smallest compilation unit executable by a compute node slice\. A *slice* is the unit of parallel processing in Amazon Redshift\. The segments in a stream run in parallel\.
**Stream**
A collection of segments to be parceled out over the available compute node slices\.
The execution engine generates compiled code based on steps, segments, and streams\. Compiled code executes faster than interpreted code and uses less compute capacity\. This compiled code is then broadcast to the compute nodes\.
**Note**
When benchmarking your queries, you should always compare the times for the second execution of a query, because the first execution time includes the overhead of compiling the code\. For more information, see [Factors affecting query performance](c-query-performance.md)\.
1. The compute node slices execute the query segments in parallel\. As part of this process, Amazon Redshift takes advantage of optimized network communication, memory, and disk management to pass intermediate results from one query plan step to the next, which also helps to speed query execution\.
Steps 5 and 6 happen once for each stream\. The engine creates the executable segments for one stream and sends them to the compute nodes\. When the segments of that stream are complete, the engine generates the segments for the next stream\. In this way, the engine can analyze what happened in the prior stream \(for example, whether operations were disk\-based\) to influence the generation of segments in the next stream\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c-query-planning.md
|
ec92821edbac-2
|
When the compute nodes are done, they return the query results to the leader node for final processing\. The leader node merges the data into a single result set and addresses any needed sorting or aggregation\. The leader node then returns the results to the client\.
**Note**
The compute nodes might return some data to the leader node during query execution if necessary\. For example, if you have a subquery with a LIMIT clause, the limit is applied on the leader node before data is redistributed across the cluster for further processing\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c-query-planning.md
|
560ec6b0e721-0
|
The CONCAT function concatenates two character strings and returns the resulting string\. To concatenate more than two strings, use nested CONCAT functions\. The concatenation operator \(`||`\) between two strings produces the same results as the CONCAT function\.
**Note**
For both the CONCAT function and the concatenation operator, if one or both strings is null, the result of the concatenation is null\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CONCAT.md
|
077eab161770-0
|
```
CONCAT ( string1, string2 )
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CONCAT.md
|
0484fc0d6f41-0
|
*string1*, *string2*
Both arguments can be fixed\-length or variable\-length character strings or expressions\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CONCAT.md
|
20725e0f9fbb-0
|
CONCAT returns a string\. The data type of the string is the same type as the input arguments\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CONCAT.md
|
f0111ab5e32b-0
|
The following example concatenates two character literals:
```
select concat('December 25, ', '2008');
concat
-------------------
December 25, 2008
(1 row)
```
The following query, using the `||` operator instead of CONCAT, produces the same result:
```
select 'December 25, '||'2008';
?column?
-------------------
December 25, 2008
(1 row)
```
The following example uses two CONCAT functions to concatenate three character strings:
```
select concat('Thursday, ', concat('December 25, ', '2008'));
concat
-----------------------------
Thursday, December 25, 2008
(1 row)
```
To concatenate columns that might contain nulls, use the [NVL expression](r_NVL_function.md)\. The following example uses NVL to return a 0 whenever NULL is encountered\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CONCAT.md
|
f0111ab5e32b-1
|
```
select concat(venuename, concat(' seats ', nvl(venueseats, 0))) as seating
from venue where venuestate = 'NV' or venuestate = 'NC'
order by 1
limit 5;
seating
-----------------------------------
Ballys Hotel seats 0
Bank of America Stadium seats 73298
Bellagio Hotel seats 0
Caesars Palace seats 0
Harrahs Hotel seats 0
(5 rows)
```
The following query concatenates CITY and STATE values from the VENUE table:
```
select concat(venuecity, venuestate)
from venue
where venueseats > 75000
order by venueseats;
concat
-------------------
DenverCO
Kansas CityMO
East RutherfordNJ
LandoverMD
(4 rows)
```
The following query uses nested CONCAT functions\. The query concatenates CITY and STATE values from the VENUE table but delimits the resulting string with a comma and a space:
```
select concat(concat(venuecity,', '),venuestate)
from venue
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CONCAT.md
|
f0111ab5e32b-2
|
```
select concat(concat(venuecity,', '),venuestate)
from venue
where venueseats > 75000
order by venueseats;
concat
---------------------
Denver, CO
Kansas City, MO
East Rutherford, NJ
Landover, MD
(4 rows)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CONCAT.md
|
9b59e1507c5f-0
|
To maintain continuous availability following certain internal events, Amazon Redshift might restart an active session with a new process ID \(PID\)\. When Amazon Redshift restarts a session, STL\_RESTARTED\_SESSIONS records the new PID and the old PID\.
For more information, see the examples following in this section\.
This view is visible to all users\. Superusers can see all rows; regular users can see only their own data\. For more information, see [Visibility of data in system tables and views](c_visibility-of-data.md)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_RESTARTED_SESSIONS.md
|
80b989f522e5-0
|
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_STL_RESTARTED_SESSIONS.html)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_RESTARTED_SESSIONS.md
|
7633d7221859-0
|
The following example joins STL\_RESTARTED\_SESSIONS with STL\_SESSIONS to show user names for sessions that have been restarted\.
```
select process, stl_restarted_sessions.newpid, user_name
from stl_sessions
inner join stl_restarted_sessions on stl_sessions.process = stl_restarted_sessions.oldpid
order by process;
...
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_RESTARTED_SESSIONS.md
|
ce1252ad01a5-0
|
Synonym of SHA1 function\.
See [SHA1 function](SHA1.md)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/SHA.md
|
1bf3740e1cb4-0
|
Analyzes insert execution steps for queries\.
This view is visible to all users\. Superusers can see all rows; regular users can see only their own data\. For more information, see [Visibility of data in system tables and views](c_visibility-of-data.md)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_INSERT.md
|
b85720ca0924-0
|
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_STL_INSERT.html)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_INSERT.md
|
4c7cae03acf8-0
|
The following example returns insert execution steps for the most recent query\.
```
select slice, segment, step, tasknum, rows, tbl
from stl_insert
where query=pg_last_query_id();
```
```
slice | segment | step | tasknum | rows | tbl
-------+---------+------+---------+-------+--------
0 | 2 | 2 | 15 | 24958 | 100548
1 | 2 | 2 | 15 | 25032 | 100548
(2 rows)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_INSERT.md
|
aa824a0af9ae-0
|
IS\_VALID\_JSON validates a JSON string\. The function returns Boolean `true` \(`t`\) if the string is properly formed JSON or `false` \(`f`\) if the string is malformed\. To validate a JSON array, use the [IS\_VALID\_JSON\_ARRAY function](IS_VALID_JSON_ARRAY.md)\.
For more information, see [JSON functions](json-functions.md)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/IS_VALID_JSON.md
|
38e13a680862-0
|
```
is_valid_json('json_string')
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/IS_VALID_JSON.md
|
5cdf80b51da0-0
|
*json\_string*
A string or expression that evaluates to a JSON string\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/IS_VALID_JSON.md
|
3f9e078160fc-0
|
BOOLEAN
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/IS_VALID_JSON.md
|
847d0223b12b-0
|
The following example creates a table and inserts JSON strings for testing\.
```
create table test_json(id int identity(0,1), json_strings varchar);
-- Insert valid JSON strings --
insert into test_json(json_strings) values
('{"a":2}'),
('{"a":{"b":{"c":1}}}'),
('{"a": [1,2,"b"]}');
-- Insert invalid JSON strings --
insert into test_json(json_strings)values
('{{}}'),
('{1:"a"}'),
('[1,2,3]');
```
The following example validates the strings in the preceding example\.
```
select id, json_strings, is_valid_json(json_strings)
from test_json order by id;
id | json_strings | is_valid_json
---+---------------------+--------------
0 | {"a":2} | true
2 | {"a":{"b":{"c":1}}} | true
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/IS_VALID_JSON.md
|
847d0223b12b-1
|
0 | {"a":2} | true
2 | {"a":{"b":{"c":1}}} | true
4 | {"a": [1,2,"b"]} | true
6 | {{}} | false
8 | {1:"a"} | false
10 | [1,2,3] | false
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/IS_VALID_JSON.md
|
1bd8c1036dbc-0
|
Amazon Redshift Advisor offers recommendations about how to optimize your Amazon Redshift cluster to increase performance and save on operating costs\. You can find explanations for each recommendation in the console, as described preceding\. You can find further details on these recommendations in the following sections\.
**Topics**
+ [Compress table data](#cluster-compression-recommendation)
+ [Compress Amazon S3 file objects loaded by COPY](#cluster-compress-s3-recommendation)
+ [Isolate multiple active databases](#isolate-active-dbs-recommendation)
+ [Reallocate workload management \(WLM\) memory](#reallocate-wlm-recommendation)
+ [Skip compression analysis during COPY](#skip-compression-analysis-recommendation)
+ [Split Amazon S3 objects loaded by COPY](#split-s3-objects-recommendation)
+ [Update table statistics](#update-table-statistics-recommendation)
+ [Enable short query acceleration](#enable-sqa-recommendation)
+ [Replace single\-column interleaved sort keys](#single-column-interleaved-sort-recommendation)
+ [Alter distribution keys on tables](#alter-diststyle-distkey-recommendation)
+ [Alter sort keys on tables](#alter-sortkey-recommendation)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/advisor-recommendations.md
|
d021257c64ab-0
|
Amazon Redshift is optimized to reduce your storage footprint and improve query performance by using compression encodings\. When you don't use compression, data consumes additional space and requires additional disk I/O\. Applying compression to large uncompressed columns can have a big impact on your cluster\.
**Analysis**
The compression analysis in Advisor tracks uncompressed storage allocated to permanent user tables\. It reviews storage metadata associated with large uncompressed columns that aren't sort key columns\. Advisor offers a recommendation to rebuild tables with uncompressed columns when the total amount of uncompressed storage exceeds 15 percent of total storage space, or at the following node\-specific thresholds\.
| Cluster Size | Threshold |
| --- | --- |
| DC2\.LARGE | 480 GB |
| DC2\.8XLARGE | 2\.56 TB |
| DS2\.XLARGE | 4 TB |
| DS2\.8XLARGE | 16 TB |
**Recommendation**
Addressing uncompressed storage for a single table is a one\-time optimization that requires the table to be rebuilt\. We recommend that you rebuild any tables that contain uncompressed columns that are both large and frequently accessed\. To identify which tables contain the most uncompressed storage, run the following SQL command as a superuser\.
```
SELECT
ti.schema||'.'||ti."table" tablename,
raw_size.size uncompressed_mb,
ti.size total_mb
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/advisor-recommendations.md
|
d021257c64ab-1
|
```
raw_size.size uncompressed_mb,
ti.size total_mb
FROM svv_table_info ti
LEFT JOIN (
SELECT tbl table_id, COUNT(*) size
FROM stv_blocklist
WHERE (tbl,col) IN (
SELECT attrelid, attnum-1
FROM pg_attribute
WHERE attencodingtype IN (0,128)
AND attnum>0 AND attsortkeyord != 1)
GROUP BY tbl) raw_size USING (table_id)
WHERE raw_size.size IS NOT NULL
ORDER BY raw_size.size DESC;
```
The data returned in the `uncompressed_mb` column represents the total number of uncompressed 1\-MB blocks for all columns in the table\.
When you rebuild the tables, use the `ENCODE` parameter to explicitly set column compression\.
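For example, a rebuilt table might declare encodings explicitly, as in the following sketch \(the table definition and encoding choices are hypothetical\)\.
```
create table events_new (
    eventid   integer      encode raw sortkey,  -- leading sort key column left uncompressed
    eventname varchar(200) encode lzo,
    starttime timestamp    encode az64,
    venueid   smallint     encode az64
);

-- Reload the data into the rebuilt table, for example:
insert into events_new select eventid, eventname, starttime, venueid from event;
```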
**Implementation tips**
+ Leave any columns that are the first column in a compound sort key uncompressed\. The Advisor analysis doesn't count the storage consumed by those columns\.
+ Compressing large columns has a higher impact on performance and storage than compressing small columns\.
+ If you are unsure which compression is best, use the [ANALYZE COMPRESSION](r_ANALYZE_COMPRESSION.md) command to suggest a compression\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/advisor-recommendations.md
|
d021257c64ab-2
|
+ If you are unsure which compression is best, use the [ANALYZE COMPRESSION](r_ANALYZE_COMPRESSION.md) command to suggest a compression\.
+ To generate the data definition language \(DDL\) statements for existing tables, you can use the AWS [Generate Table DDL](https://github.com/awslabs/amazon-redshift-utils/blob/master/src/AdminViews/v_generate_tbl_ddl.sql) utility, found on GitHub\.
+ To simplify the compression suggestions and the process of rebuilding tables, you can use the [Amazon Redshift Column Encoding Utility](https://github.com/awslabs/amazon-redshift-utils/tree/master/src/ColumnEncodingUtility), found on GitHub\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/advisor-recommendations.md
|
d508a2b99f0e-0
|
The COPY command takes advantage of the massively parallel processing \(MPP\) architecture in Amazon Redshift to read and load data in parallel\. It can read files from Amazon S3, DynamoDB tables, and text output from one or more remote hosts\.
When loading large amounts of data, we strongly recommend using the COPY command to load compressed data files from S3\. Compressing large datasets saves time uploading the files to S3\. COPY can also speed up the load process by uncompressing the files as they are read\.
**Analysis**
Long\-running COPY commands that load large uncompressed datasets often have an opportunity for considerable performance improvement\. The Advisor analysis identifies COPY commands that load large uncompressed datasets\. In such a case, Advisor generates a recommendation to implement compression on the source files in S3\.
**Recommendation**
Ensure that each COPY that loads a significant amount of data, or runs for a significant duration, ingests compressed data objects from S3\. You can identify the COPY commands that load large uncompressed datasets from S3 by running the following SQL command as a superuser\.
```
SELECT
wq.userid, query, exec_start_time AS starttime, COUNT(*) num_files,
ROUND(MAX(wq.total_exec_time/1000000.0),2) execution_secs,
ROUND(SUM(transfer_size)/(1024.0*1024.0),2) total_mb,
SUBSTRING(querytxt,1,60) copy_sql
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/advisor-recommendations.md
|
d508a2b99f0e-1
|
```
SUBSTRING(querytxt,1,60) copy_sql
FROM stl_s3client s
JOIN stl_query q USING (query)
JOIN stl_wlm_query wq USING (query)
WHERE s.userid>1 AND http_method = 'GET'
AND POSITION('COPY ANALYZE' IN querytxt) = 0
AND aborted = 0 AND final_state='Completed'
GROUP BY 1, 2, 3, 7
HAVING SUM(transfer_size) = SUM(data_size)
AND SUM(transfer_size)/(1024*1024) >= 5
ORDER BY 6 DESC, 5 DESC;
```
If the staged data remains in S3 after you load it, which is common in data lake architectures, storing this data in a compressed form can reduce your storage costs\.
**Implementation tips**
+ The ideal object size is 1–128 MB after compression\.
+ You can compress files with gzip, lzop, or bzip2 format\.
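For instance, a load of gzip\-compressed objects looks like the following sketch; the bucket and role names are hypothetical\.
```
copy sales
from 's3://my-bucket/tickit/sales/'
iam_role 'arn:aws:iam::123456789012:role/MyRedshiftRole'
gzip
delimiter '|';
```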
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/advisor-recommendations.md
|
7f33a6e683f5-0
|
As a best practice, we recommend isolating databases in Amazon Redshift from one another\. Queries run in a specific database and can't access data from any other database on the cluster\. However, the queries that you run in all databases of a cluster share the same underlying cluster storage space and compute resources\. When a single cluster contains multiple active databases, their workloads are usually unrelated\.
**Analysis**
The Advisor analysis reviews all databases on the cluster for active workloads running at the same time\. If there are active workloads running at the same time, Advisor generates a recommendation to consider migrating databases to separate Amazon Redshift clusters\.
**Recommendation**
Consider moving each actively queried database to a separate dedicated cluster\. Using a separate cluster can reduce resource contention and improve query performance, because you can size each cluster for the storage, cost, and performance needs of each workload\. Also, unrelated workloads often benefit from different workload management configurations\.
To identify which databases are actively used, you can run this SQL command as a superuser\.
```
SELECT database,
COUNT(*) as num_queries,
AVG(DATEDIFF(sec,starttime,endtime)) avg_duration,
MIN(starttime) as oldest_ts,
MAX(endtime) as latest_ts
FROM stl_query
WHERE userid > 1
GROUP BY database;
```
**Implementation tips**
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/advisor-recommendations.md
|
7f33a6e683f5-1
|
```
WHERE userid > 1
GROUP BY database;
```
**Implementation tips**
+ Because a user must connect to each database specifically, and queries can only access a single database, moving databases to separate clusters has minimal impact for users\.
+ One option to move a database is to take the following steps:
1. Temporarily restore a snapshot of the current cluster to a cluster of the same size\.
1. Delete all databases from the new cluster except the target database to be moved\.
1. Resize the cluster to an appropriate node type and count for the database's workload\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/advisor-recommendations.md
|
1dc5fa515b31-0
|
Amazon Redshift routes user queries to queues for processing\. Workload management \(WLM\) defines how those queries are routed to the queues; for more information, see [Implementing manual WLM](cm-c-defining-query-queues.md)\. Amazon Redshift allocates each queue a portion of the cluster's available memory\. A queue's memory is divided among the queue's query slots\.
When a queue is configured with more slots than the workload requires, the memory allocated to these unused slots goes underutilized\. Reducing the configured slots to match the peak workload requirements redistributes the underutilized memory to active slots, and can result in improved query performance\.
**Analysis**
The Advisor analysis reviews workload concurrency requirements to identify query queues with unused slots\. Advisor generates a recommendation to reduce the number of slots in a queue when it finds the following:
+ A queue with slots that are completely inactive throughout the analysis
+ A queue with more than four slots that had at least two inactive slots throughout the analysis
**Recommendation**
Reducing the configured slots to match peak workload requirements redistributes underutilized memory to active slots\. Consider reducing the configured slot count for queues where the slots have never been fully utilized\. To identify these queues, you can compare the peak hourly slot requirements for each queue by running the following SQL command as a superuser\.
```
WITH
generate_dt_series AS (select sysdate - (n * interval '5 second') as dt from (select row_number() over () as n from stl_scan limit 17280)),
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/advisor-recommendations.md
|
1dc5fa515b31-1
|
```
apex AS (
SELECT iq.dt, iq.service_class, iq.num_query_tasks, count(iq.slot_count) as service_class_queries, sum(iq.slot_count) as service_class_slots
FROM
(select gds.dt, wq.service_class, wscc.num_query_tasks, wq.slot_count
FROM stl_wlm_query wq
JOIN stv_wlm_service_class_config wscc ON (wscc.service_class = wq.service_class AND wscc.service_class > 5)
JOIN generate_dt_series gds ON (wq.service_class_start_time <= gds.dt AND wq.service_class_end_time > gds.dt)
WHERE wq.userid > 1 AND wq.service_class > 5) iq
GROUP BY iq.dt, iq.service_class, iq.num_query_tasks),
maxes as (SELECT apex.service_class, trunc(apex.dt) as d, date_part(h,apex.dt) as dt_h, max(service_class_slots) max_service_class_slots
from apex group by apex.service_class, apex.dt, date_part(h,apex.dt))
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/advisor-recommendations.md
|
1dc5fa515b31-2
|
```
from apex group by apex.service_class, apex.dt, date_part(h,apex.dt))
SELECT apex.service_class - 5 AS queue, apex.service_class, apex.num_query_tasks AS max_wlm_concurrency, maxes.d AS day, maxes.dt_h || ':00 - ' || maxes.dt_h || ':59' as hour, MAX(apex.service_class_slots) as max_service_class_slots
FROM apex
JOIN maxes ON (apex.service_class = maxes.service_class AND apex.service_class_slots = maxes.max_service_class_slots)
GROUP BY apex.service_class, apex.num_query_tasks, maxes.d, maxes.dt_h
ORDER BY apex.service_class, maxes.d, maxes.dt_h;
```
The `max_service_class_slots` column represents the maximum number of WLM query slots in the query queue for that hour\. If underutilized queues exist, implement the slot reduction optimization by [modifying a parameter group](https://docs.aws.amazon.com/redshift/latest/mgmt/managing-parameter-groups-console.html#parameter-group-modify), as described in the *Amazon Redshift Cluster Management Guide\.*
**Implementation tips**
+ If your workload is highly variable in volume, make sure that the analysis captured a peak utilization period\. If it didn't, run the preceding SQL repeatedly to monitor peak concurrency requirements\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/advisor-recommendations.md
|
1dc5fa515b31-3
|
+ For more details on interpreting the query results from the preceding SQL code, see the [wlm\_apex\_hourly\.sql script](https://github.com/awslabs/amazon-redshift-utils/blob/master/src/AdminScripts/wlm_apex_hourly.sql) on GitHub\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/advisor-recommendations.md
|
37463e42d4f6-0
|
When you use the COPY command to load data into an empty table whose columns don't have compression encodings declared, Amazon Redshift applies storage compression automatically\. This optimization ensures that data in your cluster is stored efficiently even when loaded by end users\. The analysis required to apply compression can require significant time\.
**Analysis**
The Advisor analysis checks for COPY operations that were delayed by automatic compression analysis\. The analysis determines the compression encodings by sampling the data while it's being loaded\. This sampling is similar to that performed by the [ANALYZE COMPRESSION](r_ANALYZE_COMPRESSION.md) command\.
When you load data as part of a structured process, such as in an overnight extract, transform, load \(ETL\) batch, you can define the compression beforehand\. You can also optimize your table definitions to permanently skip this phase without any negative impacts\.
**Recommendation**
To improve COPY responsiveness by skipping the compression analysis phase, implement either of the following two options:
+ Use the column `ENCODE` parameter when creating any tables that you load using the COPY command\.
+ Disable compression altogether by supplying the `COMPUPDATE OFF` parameter in the COPY command\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/advisor-recommendations.md
|
37463e42d4f6-1
|
+ Disable compression altogether by supplying the `COMPUPDATE OFF` parameter in the COPY command\.
The best solution is generally to use column encoding during table creation, because this approach also maintains the benefit of storing compressed data on disk\. You can use the ANALYZE COMPRESSION command to suggest compression encodings, but you must recreate the table to apply these encodings\. To automate this process, you can use the AWS [ColumnEncodingUtility](https://github.com/awslabs/amazon-redshift-utils/tree/master/src/ColumnEncodingUtility), found on GitHub\.
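The following sketch shows both options; the table, bucket, and role names are hypothetical\.
```
-- Option 1: declare encodings when creating the table, so there is
-- nothing for automatic compression analysis to do.
create table staging_orders (
    orderid    bigint      encode az64,
    order_date date        encode az64,
    status     varchar(16) encode zstd
);

-- Option 2: skip the compression analysis phase explicitly.
copy staging_orders
from 's3://my-bucket/orders/'
iam_role 'arn:aws:iam::123456789012:role/MyRedshiftRole'
compupdate off;
```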
To identify recent COPY operations that triggered automatic compression analysis, run the following SQL command\.
```
WITH xids AS (
SELECT xid FROM stl_query WHERE userid>1 AND aborted=0
AND querytxt = 'analyze compression phase 1' GROUP BY xid
INTERSECT SELECT xid FROM stl_commit_stats WHERE node=-1)
SELECT a.userid, a.query, a.xid, a.starttime, b.complyze_sec,
a.copy_sec, a.copy_sql
FROM (SELECT q.userid, q.query, q.xid, date_trunc('s',q.starttime)
starttime, substring(querytxt,1,100) as copy_sql,
ROUND(datediff(ms,starttime,endtime)::numeric / 1000.0, 2) copy_sec
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/advisor-recommendations.md
|
37463e42d4f6-2
|
```
ROUND(datediff(ms,starttime,endtime)::numeric / 1000.0, 2) copy_sec
FROM stl_query q JOIN xids USING (xid)
WHERE (querytxt ilike 'copy %from%' OR querytxt ilike '% copy %from%')
AND querytxt not like 'COPY ANALYZE %') a
LEFT JOIN (SELECT xid,
ROUND(sum(datediff(ms,starttime,endtime))::numeric / 1000.0,2) complyze_sec
FROM stl_query q JOIN xids USING (xid)
WHERE (querytxt like 'COPY ANALYZE %'
OR querytxt like 'analyze compression phase %')
GROUP BY xid ) b ON a.xid = b.xid
WHERE b.complyze_sec IS NOT NULL ORDER BY a.copy_sql, a.starttime;
```
**Implementation tips**
+ Ensure that all tables of significant size created during your ETL processes \(for example, staging tables and temporary tables\) declare a compression encoding for all columns except the first sort key\.
+ Estimate the expected lifetime size of the table being loaded for each of the COPY commands identified by the SQL command preceding\. If you are confident that the table will remain extremely small, disable compression altogether with the `COMPUPDATE OFF` parameter\. Otherwise, create the table with explicit compression before loading it with the COPY command\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/advisor-recommendations.md
|
eb16749604b6-0
|
The COPY command takes advantage of the massively parallel processing \(MPP\) architecture in Amazon Redshift to read and load data from files on Amazon S3\. The COPY command loads the data in parallel from multiple files, dividing the workload among the nodes in your cluster\. To achieve optimal throughput, we strongly recommend that you divide your data into multiple files to take advantage of parallel processing\.
**Analysis**
The Advisor analysis identifies COPY commands that load large datasets contained in a small number of files staged in S3\. Long\-running COPY commands that load large datasets from a few files often have an opportunity for considerable performance improvement\. When Advisor identifies that these COPY commands are taking a significant amount of time, it creates a recommendation to increase parallelism by splitting the data into additional files in S3\.
**Recommendation**
In this case, we recommend the following actions, listed in priority order:
1. Optimize COPY commands that load fewer files than the number of cluster nodes\.
1. Optimize COPY commands that load fewer files than the number of cluster slices\.
1. Optimize COPY commands where the number of files is not a multiple of the number of cluster slices\.
Certain COPY commands load a significant amount of data or run for a significant duration\. For these commands, we recommend that you load a number of data objects from S3 that is equivalent to a multiple of the number of slices in the cluster\. To identify how many S3 objects each COPY command has loaded, run the following SQL code as a superuser\.
```
SELECT
query, COUNT(*) num_files,
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/advisor-recommendations.md
|
eb16749604b6-1
|
```
SELECT
query, COUNT(*) num_files,
ROUND(MAX(wq.total_exec_time/1000000.0),2) execution_secs,
ROUND(SUM(transfer_size)/(1024.0*1024.0),2) total_mb,
SUBSTRING(querytxt,1,60) copy_sql
FROM stl_s3client s
JOIN stl_query q USING (query)
JOIN stl_wlm_query wq USING (query)
WHERE s.userid>1 AND http_method = 'GET'
AND POSITION('COPY ANALYZE' IN querytxt) = 0
AND aborted = 0 AND final_state='Completed'
GROUP BY query, querytxt
HAVING (SUM(transfer_size)/(1024*1024))/COUNT(*) >= 2
ORDER BY CASE
WHEN COUNT(*) < (SELECT max(node)+1 FROM stv_slices) THEN 1
WHEN COUNT(*) < (SELECT COUNT(*) FROM stv_slices WHERE node=0) THEN 2
ELSE 2+((COUNT(*) % (SELECT COUNT(*) FROM stv_slices))/(SELECT COUNT(*)::DECIMAL FROM stv_slices))
END, (SUM(transfer_size)/(1024.0*1024.0))/COUNT(*) DESC;
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/advisor-recommendations.md
|
eb16749604b6-2
|
```
END, (SUM(transfer_size)/(1024.0*1024.0))/COUNT(*) DESC;
```
**Implementation tips**
+ The number of slices in a node depends on the node size of the cluster\. For more information about the number of slices in the various node types, see [Clusters and Nodes in Amazon Redshift](https://docs.aws.amazon.com/redshift/latest/mgmt/working-with-clusters.html#rs-about-clusters-and-nodes) in the *Amazon Redshift Cluster Management Guide\.*
+ You can load multiple files by specifying a common prefix, or prefix key, for the set, or by explicitly listing the files in a manifest file \(see the sketch after this list\)\. For more information about loading files, see [Splitting your data into multiple files](t_splitting-data-files.md)\.
+ Amazon Redshift doesn't take file size into account when dividing the workload\. Split your load data files so that the files are about equal size, between 1 MB and 1 GB after compression\. For optimum parallelism, the ideal size is between 1 MB and 125 MB after compression\.
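The following sketch shows both loading options; the table, bucket, prefix, manifest, and IAM role names are hypothetical\.
```
-- Load every file that shares the common prefix sales_part_
COPY sales
FROM 's3://my-bucket/load/sales_part_'
IAM_ROLE 'arn:aws:iam::123456789012:role/MyRedshiftRole'
GZIP;

-- Or list the files to load explicitly in a manifest file
COPY sales
FROM 's3://my-bucket/load/sales.manifest'
IAM_ROLE 'arn:aws:iam::123456789012:role/MyRedshiftRole'
MANIFEST;
```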
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/advisor-recommendations.md
|
8cc0d85622a9-0
|
Amazon Redshift uses a cost\-based query optimizer to choose the optimum execution plan for queries\. The cost estimates are based on table statistics gathered using the ANALYZE command\. When statistics are out of date or missing, the database might choose a less efficient plan for query execution, especially for complex queries\. Maintaining current statistics helps complex queries run in the shortest possible time\.
**Analysis**
The Advisor analysis tracks tables whose statistics are out\-of\-date or missing\. It reviews table access metadata associated with complex queries\. If tables that are frequently accessed with complex patterns are missing statistics, Advisor creates a **critical** recommendation to run ANALYZE\. If tables that are frequently accessed with complex patterns have out\-of\-date statistics, Advisor creates a **suggested** recommendation to run ANALYZE\.
**Recommendation**
Whenever table content changes significantly, update statistics with ANALYZE\. We recommend running ANALYZE whenever a significant number of new data rows are loaded into an existing table with COPY or INSERT commands\. We also recommend running ANALYZE whenever a significant number of rows are modified using UPDATE or DELETE commands\. To identify tables with missing or out\-of\-date statistics, run the following SQL command as a superuser\. The results are ordered from largest to smallest table\.
```
SELECT
    ti.schema||'.'||ti."table" tablename,
    ti.size table_size_mb,
    ti.stats_off statistics_accuracy
FROM svv_table_info ti
WHERE ti.stats_off > 5.00
ORDER BY ti.size DESC;
```
**Implementation tips**
The default ANALYZE threshold is 10 percent\. This default means that the ANALYZE command skips a given table if fewer than 10 percent of the table's rows have changed since the last ANALYZE\. As a result, you might choose to issue ANALYZE commands at the end of each ETL process\. Taking this approach means that ANALYZE is often skipped but also ensures that ANALYZE runs when needed\.
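For example, a minimal end\-of\-ETL sketch; the schema, table, and threshold value are hypothetical\.
```
-- Optionally lower the skip threshold for the current session
SET analyze_threshold_percent TO 5;

-- Skipped automatically if fewer than analyze_threshold_percent
-- (default 10 percent) of the rows changed since the last ANALYZE
ANALYZE myschema.sales;
```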
ANALYZE statistics have the most impact for columns that are used in joins \(for example, `JOIN tbl_a ON col_b`\) or as predicates \(for example, `WHERE col_b = 'xyz'`\)\. By default, ANALYZE collects statistics for all columns in the table specified\. If needed, you can reduce the time required to run ANALYZE by running ANALYZE only for the columns where it has the most impact\. You can run the following SQL command to identify columns used as predicates\. You can also let Amazon Redshift choose which columns to analyze by specifying `ANALYZE PREDICATE COLUMNS`\.
```
WITH predicate_column_info as (
SELECT ns.nspname AS schema_name, c.relname AS table_name, a.attnum as col_num, a.attname as col_name,
        CASE
            WHEN 10002 = s.stakind1 THEN array_to_string(stavalues1, '||')
            WHEN 10002 = s.stakind2 THEN array_to_string(stavalues2, '||')
            WHEN 10002 = s.stakind3 THEN array_to_string(stavalues3, '||')
            WHEN 10002 = s.stakind4 THEN array_to_string(stavalues4, '||')
            ELSE NULL::varchar
        END AS pred_ts
   FROM pg_statistic s
   JOIN pg_class c ON c.oid = s.starelid
   JOIN pg_namespace ns ON c.relnamespace = ns.oid
   JOIN pg_attribute a ON c.oid = a.attrelid AND a.attnum = s.staattnum)
SELECT schema_name, table_name, col_num, col_name,
       pred_ts NOT LIKE '2000-01-01%' AS is_predicate,
       CASE WHEN pred_ts NOT LIKE '2000-01-01%' THEN (split_part(pred_ts, '||',1))::timestamp ELSE NULL::timestamp END as first_predicate_use,
       CASE WHEN pred_ts NOT LIKE '%||2000-01-01%' THEN (split_part(pred_ts, '||',2))::timestamp ELSE NULL::timestamp END as last_analyze
FROM predicate_column_info;
```
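As a sketch of the `PREDICATE COLUMNS` alternative mentioned above \(the table name is hypothetical\):
```
-- Collect statistics only for columns Amazon Redshift identifies as predicates
ANALYZE myschema.sales PREDICATE COLUMNS;
```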
For more information, see [Analyzing tables](t_Analyzing_tables.md)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/advisor-recommendations.md
|
d52d064a0cd6-0
|
Short query acceleration \(SQA\) prioritizes selected short\-running queries ahead of longer\-running queries\. SQA executes short\-running queries in a dedicated space, so that SQA queries aren't forced to wait in queues behind longer queries\. SQA only prioritizes queries that are short\-running and are in a user\-defined queue\. With SQA, short\-running queries begin running more quickly and users see results sooner\.
If you enable SQA, you can reduce or eliminate workload management \(WLM\) queues that are dedicated to running short queries\. In addition, long\-running queries don't need to contend with short queries for slots in a queue, so you can configure your WLM queues to use fewer query slots\. When you use lower concurrency, query throughput is increased and overall system performance is improved for most workloads\. For more information, see [Working with short query acceleration](wlm-short-query-acceleration.md)\.
**Analysis**
Advisor checks for workload patterns and reports the number of recent queries where SQA would reduce latency and the daily queue time for SQA\-eligible queries\.
**Recommendation**
Modify the WLM configuration to enable SQA\. Amazon Redshift uses a machine learning algorithm to analyze each eligible query\. Predictions improve as SQA learns from your query patterns\. For more information, see [Configuring Workload Management](https://docs.aws.amazon.com/redshift/latest/mgmt/workload-mgmt-config.html)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/advisor-recommendations.md
|
d52d064a0cd6-1
|
When you enable SQA, WLM sets the maximum run time for short queries to dynamic by default\. We recommend keeping the dynamic setting for SQA maximum run time\.
**Implementation tips**
To check whether SQA is enabled, run the following query\. If the query returns a row, then SQA is enabled\.
```
select * from stv_wlm_service_class_config
where service_class = 14;
```
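To see which recent queries actually ran through SQA, the following sketch builds on the same assumption as the query above, namely that SQA uses WLM service class 14\.
```
-- Most recent queries executed in the SQA queue (service class 14)
SELECT query, service_class_start_time, total_exec_time
FROM stl_wlm_query
WHERE service_class = 14
ORDER BY service_class_start_time DESC
LIMIT 10;
```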
For more information, see [Monitoring SQA](wlm-short-query-acceleration.md#wlm-monitoring-sqa)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/advisor-recommendations.md
|
7b552a670ee2-0
|
Some tables use an interleaved sort key on a single column\. In general, such a table is less efficient and consumes more resources than a table that uses a compound sort key on a single column\.
Interleaved sorting improves performance in certain cases where multiple columns are used by different queries for filtering\. Using an interleaved sort key on a single column is effective only in a particular case\. That case is when queries often filter on CHAR or VARCHAR column values that have a long common prefix in the first 8 bytes\. For example, URL strings are often prefixed with "`https://`"\. For single\-column keys, a compound sort is better than an interleaved sort for any other filtering operations\. A compound sort speeds up joins, GROUP BY and ORDER BY operations, and window functions that use PARTITION BY and ORDER BY on the sorted column\. An interleaved sort doesn't benefit any of those operations\. For more information, see [Choosing sort keys](t_Sorting_data.md)\.
Using compound sort significantly reduces maintenance overhead\. Tables with compound sort keys don't need the expensive VACUUM REINDEX operations that are necessary for interleaved sorts\. In practice, compound sort keys are more effective than interleaved sort keys for the vast majority of Amazon Redshift workloads\.
**Analysis**
Advisor tracks tables that use an interleaved sort key on a single column\.
**Recommendation**
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/advisor-recommendations.md
|
7b552a670ee2-1
|
If a table uses interleaved sorting on a single column, recreate the table to use a compound sort key\. When you create new tables, use a compound sort key for single\-column sorts\. To find interleaved tables that use a single\-column sort key, run the following command\.
```
SELECT schema AS schemaname, "table" AS tablename
FROM svv_table_info
WHERE table_id IN (
SELECT attrelid
FROM pg_attribute
WHERE attrelid IN (
SELECT attrelid
FROM pg_attribute
WHERE attsortkeyord <> 0
GROUP BY attrelid
HAVING MAX(attsortkeyord) = -1
)
AND NOT (atttypid IN (1042, 1043) AND atttypmod > 12)
AND attsortkeyord = -1);
```
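A minimal recreation sketch for one affected table follows; the table and column names are hypothetical\. A CTAS deep copy carries the data over while the new compound sort key takes effect\.
```
-- Rebuild the table with a compound sort key via a deep copy
CREATE TABLE sales_compound
SORTKEY (sale_ts)
AS SELECT * FROM sales_interleaved;

-- Swap the tables once the copy is verified
ALTER TABLE sales_interleaved RENAME TO sales_interleaved_old;
ALTER TABLE sales_compound RENAME TO sales_interleaved;
```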
For additional information about choosing the best sort style, see the AWS Big Data Blog post [Amazon Redshift Engineering's Advanced Table Design Playbook: Compound and Interleaved Sort Keys](https://aws.amazon.com/blogs/big-data/amazon-redshift-engineerings-advanced-table-design-playbook-compound-and-interleaved-sort-keys/)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/advisor-recommendations.md
|
524878160827-0
|
Amazon Redshift distributes table rows throughout the cluster according to the table distribution style\. Tables with KEY distribution require a column as the distribution key \(DISTKEY\)\. A table row is assigned to a node slice of a cluster based on its DISTKEY column value\.
An appropriate DISTKEY places a similar number of rows on each node slice and is frequently referenced in join conditions\. An optimized join occurs when tables are joined on their DISTKEY columns, accelerating query performance\.
**Analysis**
Advisor analyzes your cluster’s workload to identify the most appropriate distribution key for the tables that can significantly benefit from a KEY distribution style\.
**Recommendation**
Advisor provides [ALTER TABLE](r_ALTER_TABLE.md) statements that alter the DISTSTYLE and DISTKEY of a table based on its analysis\. To realize a significant performance benefit, make sure to implement all SQL statements within a recommendation group\.
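Such a recommendation takes roughly the following shape; the table and column here are hypothetical\.
```
-- Convert a table to KEY distribution on a frequently joined column
ALTER TABLE public.orders ALTER DISTSTYLE KEY DISTKEY customer_id;
```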
Redistributing a large table with ALTER TABLE consumes cluster resources and requires temporary table locks at various times\. Implement each recommendation group when other cluster workload is light\. For more details on optimizing table distribution properties, see the [Amazon Redshift Engineering's Advanced Table Design Playbook: Distribution Styles and Distribution Keys](https://aws.amazon.com/blogs/big-data/amazon-redshift-engineerings-advanced-table-design-playbook-distribution-styles-and-distribution-keys/)\.
For more information about ALTER DISTSTYLE and DISTKEY, see [ALTER TABLE](r_ALTER_TABLE.md)\.
**Note**
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/advisor-recommendations.md
|
524878160827-1
|
If you don't see a recommendation, that doesn't necessarily mean that the current distribution styles are the most appropriate\. Advisor doesn't provide recommendations when there isn't enough data or the expected benefit of redistribution is small\.
Advisor recommendations apply to a particular table and don't necessarily apply to a table that contains a column with the same name\. Tables that share a column name can have different characteristics for those columns unless data inside the tables is the same\.
If you see recommendations for staging tables that are created or dropped by ETL jobs, modify your ETL processes to use the Advisor recommended distribution keys\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/advisor-recommendations.md
|
04a985db42fe-0
|
Amazon Redshift sorts table rows according to the table [sort key](t_Sorting_data.md)\. The sorting of table rows is based on the sort key column values\.
Sorting a table on an appropriate sort key can accelerate performance of queries, especially those with range\-restricted predicates, by requiring fewer table blocks to be read from disk\.
**Analysis**
Advisor analyzes your cluster’s workload over several days to identify a beneficial sort key for your tables\.
**Recommendation**
Advisor provides ALTER TABLE statements that alter the sort key of a table based on its analysis\.
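Such a statement takes roughly the following shape; the table and column are hypothetical\.
```
-- Apply a compound sort key recommended by Advisor
ALTER TABLE public.orders ALTER COMPOUND SORTKEY (order_date);
```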
Sorting a large table with ALTER TABLE consumes cluster resources and requires table locks at various times\. Implement each recommendation when a cluster's workload is moderate\. For more details on optimizing table sort key configurations, see the [Amazon Redshift Engineering's Advanced Table Design Playbook: Compound and Interleaved Sort Keys](https://aws.amazon.com/blogs/big-data/amazon-redshift-engineerings-advanced-table-design-playbook-compound-and-interleaved-sort-keys/)\.
For more information about ALTER SORTKEY, see [ALTER TABLE](r_ALTER_TABLE.md)\.
**Note**
If you don't see a recommendation for a table, that doesn't necessarily mean that the current configuration is the best\. Advisor doesn't provide recommendations when there isn't enough data or the expected benefit of sorting is small\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/advisor-recommendations.md
|
04a985db42fe-1
|
Advisor recommendations apply to a particular table and don’t necessarily apply to a table that contains a column with the same name and data type\. Tables that share column names can have different recommendations based on the data in the tables and the workload\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/advisor-recommendations.md
|
07cf42174f47-0
|
Returns one of two values based on whether a specified expression evaluates to NULL or NOT NULL\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_NVL2.md
|
2193fb195de0-0
|
```
NVL2 ( expression, not_null_return_value, null_return_value )
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_NVL2.md
|
25f0647f4ef0-0
|
*expression*
An expression, such as a column name, to be evaluated for null status\.
*not\_null\_return\_value*
The value returned if *expression* evaluates to NOT NULL\. The *not\_null\_return\_value* value must either have the same data type as *expression* or be implicitly convertible to that data type\.
*null\_return\_value*
The value returned if *expression* evaluates to NULL\. The *null\_return\_value* value must either have the same data type as *expression* or be implicitly convertible to that data type\.
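For example, a minimal sketch using the `users` table from the sample database \(the same table as the example later in this topic\):
```
-- Return the email address when it is not null, otherwise the phone number
SELECT nvl2(email, email, phone) FROM users LIMIT 5;
```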
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_NVL2.md
|
0dc9816f9d6f-0
|
The NVL2 return type is determined as follows:
+ If either *not\_null\_return\_value* or *null\_return\_value* is null, the data type of the not\-null expression is returned\.
If both *not\_null\_return\_value* and *null\_return\_value* are not null:
+ If *not\_null\_return\_value* and *null\_return\_value* have the same data type, that data type is returned\.
+ If *not\_null\_return\_value* and *null\_return\_value* have different numeric data types, the smallest compatible numeric data type is returned\.
+ If *not\_null\_return\_value* and *null\_return\_value* have different datetime data types, a timestamp data type is returned\.
+ If *not\_null\_return\_value* and *null\_return\_value* have different character data types, the data type of *not\_null\_return\_value* is returned\.
+ If *not\_null\_return\_value* and *null\_return\_value* have mixed numeric and non\-numeric data types, the data type of *not\_null\_return\_value* is returned\.
**Important**
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_NVL2.md
|
0dc9816f9d6f-1
|
**Important**
In the last two cases where the data type of *not\_null\_return\_value* is returned, *null\_return\_value* is implicitly cast to that data type\. If the data types are incompatible, the function fails\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_NVL2.md
|
82c054725ec5-0
|
[DECODE expression](r_DECODE_expression.md) can be used in a similar way to NVL2 when the *expression* and *search* parameters are both null\. The difference is that for DECODE, the return will have both the value and the data type of the *result* parameter\. In contrast, for NVL2, the return will have the value of either the *not\_null\_return\_value* or *null\_return\_value* parameter, whichever is selected by the function, but will have the data type of *not\_null\_return\_value*\.
For example, assuming column1 is NULL, the following queries will return the same value\. However, the DECODE return value data type will be INTEGER and the NVL2 return value data type will be VARCHAR\.
```
select decode(column1, null, 1234, '2345');
select nvl2(column1, '2345', 1234);
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_NVL2.md
|
dab50094a6c4-0
|
The following example modifies some sample data, then evaluates two fields to provide appropriate contact information for users:
```
update users set email = null where firstname = 'Aphrodite' and lastname = 'Acevedo';

select (firstname + ' ' + lastname) as name,
nvl2(email, email, phone) AS contact_info
from users
where state = 'WA'
and lastname like 'A%'
order by lastname, firstname;

name                 contact_info
--------------------+-------------------------------------------
Aphrodite Acevedo    (906) 632-4407
Caldwell Acevedo     Nunc.sollicitudin@Duisac.ca
Quinn Adams          vel@adipiscingligulaAenean.com
Kamal Aguilar        quis@vulputaterisusa.com
Samson Alexander     hendrerit.neque@indolorFusce.ca
Hall Alford          ac.mattis@vitaediamProin.edu
Lane Allen           et.netus@risusDonec.org
Xander Allison       ac.facilisis.facilisis@Infaucibus.com
Amaya Alvarado       dui.nec.tempus@eudui.edu
Vera Alvarez         at.arcu.Vestibulum@pellentesque.edu
Yetta Anthony        enim.sit@risus.org
Violet Arnold        ad.litora@at.com
August Ashley        consectetuer.euismod@Phasellus.com
Karyn Austin         ipsum.primis.in@Maurisblanditenim.org
Lucas Ayers          at@elitpretiumet.com
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_NVL2.md
|
a7a3122c891a-0
|
Records all WLM\-related errors as they occur\.
This view is visible to all users\. Superusers can see all rows; regular users can see only their own data\. For more information, see [Visibility of data in system tables and views](c_visibility-of-data.md)\.
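For example, a quick look at the most recent errors \(a sketch that assumes ordering by the view's `recordtime` column\):
```
SELECT * FROM stl_wlm_error ORDER BY recordtime DESC LIMIT 10;
```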
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_WLM_ERROR.md
|
a76931bbc437-0
|
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_STL_WLM_ERROR.html)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_WLM_ERROR.md
|
bb9e375a4c32-0
|
By default, all users have permission to create a procedure\. To create a procedure, you must have USAGE permission on the language PL/pgSQL, which is granted to PUBLIC by default\. Only superusers and owners have the permission to call a procedure by default\. Superusers can run REVOKE USAGE on PL/pgSQL from a user if they want to prevent the user from creating a stored procedure\.
To call a procedure, you must be granted EXECUTE permission on the procedure\. By default, EXECUTE permission for new procedures is granted to the procedure owner and superusers\. For more information, see [GRANT](r_GRANT.md)\.
The user creating a procedure is the owner by default\. The owner has CREATE, DROP, and EXECUTE privileges on the procedure by default\. Superusers have all privileges\.
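For example, hypothetical statements that tighten these defaults \(the user and procedure names are placeholders\):
```
-- Prevent a user from creating new stored procedures
REVOKE USAGE ON LANGUAGE plpgsql FROM etl_user;

-- Allow a specific user to call an existing procedure
GRANT EXECUTE ON PROCEDURE myschema.sp_refresh_sales(int) TO report_user;
```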
The SECURITY attribute controls a procedure's privileges to access database objects\. When you create a stored procedure, you can set the SECURITY attribute to either DEFINER or INVOKER\. If you specify SECURITY INVOKER, the procedure uses the privileges of the user invoking the procedure\. If you specify SECURITY DEFINER, the procedure uses the privileges of the owner of the procedure\. INVOKER is the default\.
Because a SECURITY DEFINER procedure runs with the privileges of the user that owns it, take care to ensure that the procedure can't be misused\. To ensure that SECURITY DEFINER procedures can't be misused, do the following:
+ Grant EXECUTE on SECURITY DEFINER procedures to specific users, and not to PUBLIC\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/stored-procedure-security-and-privileges.md
|
bb9e375a4c32-1
|
+ Qualify all database objects that the procedure needs to access with the schema names\. For example, use `myschema.mytable` instead of just `mytable`\.
+ If you can't qualify an object name by its schema, set `search_path` when creating the procedure by using the SET option\. Set `search_path` to exclude any schemas that are writable by untrusted users\. This approach prevents any callers of this procedure from creating objects \(for example, tables or views\) that mask objects intended to be used by the procedure\. For more information about the SET option, see [CREATE PROCEDURE](r_CREATE_PROCEDURE.md)\.
The following example sets `search_path` to `admin` to ensure that the `user_creds` table is accessed from the `admin` schema and not from public or any other schema in the caller's `search_path`\.
```
CREATE OR REPLACE PROCEDURE sp_get_credentials(userid int, o_creds OUT varchar)
AS $$
BEGIN
SELECT creds INTO o_creds
FROM user_creds
WHERE user_id = $1;
END;
$$ LANGUAGE plpgsql
SECURITY DEFINER
-- Set a secure search_path
SET search_path = admin;
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/stored-procedure-security-and-privileges.md
|
4f32fa0ff3f2-0
|
The MAX function returns the maximum value in a set of rows\. DISTINCT or ALL may be used but do not affect the result\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_MAX.md
|
53828e5a31fc-0
|
```
MAX ( [ DISTINCT | ALL ] expression )
```
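For example, to find the highest price paid in the sample SALES table:
```
select max(pricepaid) from sales;
```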
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_MAX.md
|