c554ea8a37eb-0
STV_ACTIVE_CURSORS displays details for currently open cursors. For more information, see [DECLARE](declare.md). STV_ACTIVE_CURSORS is visible to all users, but a user can view only cursors opened by that user; a superuser can view all cursors. For more information, see [Visibility of data in system tables and views](c_visibility-of-data.md).
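For a quick look at the view, you can open a cursor and query STV_ACTIVE_CURSORS in the same session. A minimal sketch, assuming the TICKIT SALES sample table; the cursor name `test_cur` is illustrative:

```
begin;

-- Open a cursor; it stays open until CLOSE or the end of the transaction.
declare test_cur cursor for select * from sales;

-- List the currently open cursors visible to the current user.
select * from stv_active_cursors;

close test_cur;
commit;
```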
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STV_ACTIVE_CURSORS.md
d082ea2f1669-0
[See the AWS documentation website for more details](http://docs.aws.amazon.com/redshift/latest/dg/r_STV_ACTIVE_CURSORS.html)
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STV_ACTIVE_CURSORS.md
dd7cc4e6abfe-0
The SHA2 function uses the SHA2 cryptographic hash function to convert a variable-length string into a character string. The character string is a text representation of the hexadecimal value of the checksum with the specified number of bits.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/SHA2.md
7f586470fac7-0
```
SHA2(string, bits)
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/SHA2.md
28fb97cf6135-0
*string*
A variable-length string.

*bits*
The number of bits in the hash function. Valid values are 0 (same as 256), 224, 256, 384, and 512.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/SHA2.md
8feed8f9eec9-0
The SHA2 function returns a character string that is a text representation of the hexadecimal value of the checksum, or an empty string if the number of bits is invalid.
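As a quick check of that behavior, the following hedged sketch passes an unsupported bit count; per the rule above, the function should return an empty string rather than an error.

```
-- 100 is not a valid bit count (valid: 0, 224, 256, 384, 512),
-- so SHA2 returns an empty string.
select sha2('Amazon Redshift', 100);
```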
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/SHA2.md
fc4d5159c9d4-0
The following example returns the 256-bit value for the word 'Amazon Redshift':

```
select sha2('Amazon Redshift', 256);
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/SHA2.md
abeafcc89701-0
Run a [COPY](r_COPY.md) command to connect to the host and load the data into an Amazon Redshift table. In the COPY command, specify the explicit Amazon S3 object path for the manifest file and include the SSH option. For example:

```
copy sales
from 's3://mybucket/ssh_manifest'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
delimiter '|'
ssh;
```

**Note**
If you use automatic compression, the COPY command performs two data reads, which means it runs the remote command twice. The first read provides a sample for compression analysis; the second read actually loads the data. If running the remote command twice might cause a problem because of potential side effects, disable automatic compression. To disable automatic compression, run the COPY command with the COMPUPDATE option set to OFF. For more information, see [Loading tables with automatic compression](c_Loading_tables_auto_compress.md).
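For instance, a minimal sketch of the same load with automatic compression disabled (reusing the bucket and role from the example above):

```
-- Same SSH load, but skip the compression-analysis read.
copy sales
from 's3://mybucket/ssh_manifest'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
delimiter '|'
ssh
compupdate off;
```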
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/load-from-host-steps-run-copy.md
dfcbecb50e99-0
The SVL_QERROR view is deprecated.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVL_QERROR.md
abd611741a78-0
Returns `true` if the user has the specified privilege for the specified table.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_HAS_TABLE_PRIVILEGE.md
2aa8f9ab887e-0
**Note**
This is a leader-node function. This function returns an error if it references a user-created table, an STL or STV system table, or an SVV or SVL system view. For more information about privileges, see [GRANT](r_GRANT.md).

```
has_table_privilege( [ user, ] table, privilege)
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_HAS_TABLE_PRIVILEGE.md
600116052ec5-0
*user*
Name of the user to check for table privileges. The default is to check the current user.

*table*
Table associated with the privilege.

*privilege*
Privilege to check. Valid values are:
+ SELECT
+ INSERT
+ UPDATE
+ DELETE
+ REFERENCES
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_HAS_TABLE_PRIVILEGE.md
42c8699e05f1-0
Returns a BOOLEAN value (`true` or `false`).
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_HAS_TABLE_PRIVILEGE.md
42adfe8c7d77-0
The following query finds that the GUEST user does not have SELECT privilege on the LISTING table:

```
select has_table_privilege('guest', 'listing', 'select');

has_table_privilege
---------------------
false
(1 row)
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_HAS_TABLE_PRIVILEGE.md
cb6e6651cf14-0
You can use the query plan to get information on the individual operations required to execute a query. Before you work with a query plan, we recommend that you first understand how Amazon Redshift handles processing queries and creating query plans. For more information, see [Query planning and execution workflow](c-query-planning.md).

To create a query plan, run the [EXPLAIN](r_EXPLAIN.md) command followed by the actual query text. The query plan gives you the following information:
+ What operations the execution engine performs, reading the results from bottom to top.
+ What type of step each operation performs.
+ Which tables and columns are used in each operation.
+ How much data is processed in each operation, in terms of number of rows and data width in bytes.
+ The relative cost of the operation. *Cost* is a measure that compares the relative execution times of the steps within a plan. Cost doesn't provide any precise information about actual execution times or memory consumption, nor does it provide a meaningful comparison between execution plans. It does give you an indication of which operations in a query are consuming the most resources.

The EXPLAIN command doesn't actually run the query. It only shows the plan that Amazon Redshift would run under current operating conditions. If you change the schema or data for a table and run [ANALYZE](r_ANALYZE.md) again to update the statistical metadata, the query plan might be different.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c-the-query-plan.md
cb6e6651cf14-1
The query plan output by EXPLAIN is a simplified, high-level view of query execution. It doesn't illustrate the details of parallel query processing. To see detailed information, run the query itself, and then get query summary information from the SVL_QUERY_SUMMARY or SVL_QUERY_REPORT view. For more information about using these views, see [Analyzing the query summary](c-analyzing-the-query-summary.md).

The following example shows the EXPLAIN output for a simple GROUP BY query on the EVENT table:

```
explain select eventname, count(*) from event group by eventname;

                            QUERY PLAN
-------------------------------------------------------------------
XN HashAggregate  (cost=131.97..133.41 rows=576 width=17)
  ->  XN Seq Scan on event  (cost=0.00..87.98 rows=8798 width=17)
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c-the-query-plan.md
cb6e6651cf14-2
EXPLAIN returns the following metrics for each operation:

**Cost**
A relative value that is useful for comparing operations within a plan. Cost consists of two decimal values separated by two periods, for example `cost=131.97..133.41`. The first value, in this case 131.97, provides the relative cost of returning the first row for this operation. The second value, in this case 133.41, provides the relative cost of completing the operation. The costs in the query plan are cumulative as you read up the plan, so the HashAggregate cost in this example (131.97..133.41) includes the cost of the Seq Scan below it (0.00..87.98).

**Rows**
The estimated number of rows to return. In this example, the scan is expected to return 8798 rows. The HashAggregate operator on its own is expected to return 576 rows (after duplicate event names are discarded from the result set). The rows estimate is based on the available statistics generated by the ANALYZE command. If ANALYZE has not been run recently, the estimate is less reliable.

**Width**
The estimated width of the average row, in bytes. In this example, the average row is expected to be 17 bytes wide.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c-the-query-plan.md
e76682ec5490-0
This section briefly describes the operators that you see most often in the EXPLAIN output. For a complete list of operators, see [EXPLAIN](r_EXPLAIN.md) in the SQL Commands section.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c-the-query-plan.md
9de5b8d5e537-0
The sequential scan operator (Seq Scan) indicates a table scan. Seq Scan scans each column in the table sequentially from beginning to end and evaluates query constraints (in the WHERE clause) for every row.
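As a minimal sketch, EXPLAIN on a filtered single-table query shows a Seq Scan with its filter. This assumes the TICKIT SALES table; the plan shown in the comments is illustrative, not reproduced output.

```
explain select salesid from sales where qtysold = 8;

--                  QUERY PLAN (illustrative)
-- ------------------------------------------------------------
-- XN Seq Scan on sales  (cost=0.00..2155.70 rows=431 width=4)
--   Filter: (qtysold = 8)
```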
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c-the-query-plan.md
bc35138b1666-0
Amazon Redshift selects join operators based on the physical design of the tables being joined, the location of the data required for the join, and the specific requirements of the query itself.
+ **Nested Loop**
The least optimal join, a nested loop is used mainly for cross-joins (Cartesian products) and some inequality joins.
+ **Hash Join and Hash**
Typically faster than a nested loop join, a hash join and hash are used for inner joins and left and right outer joins. These operators are used when joining tables where the join columns are not both distribution keys *and* sort keys. The hash operator creates the hash table for the inner table in the join; the hash join operator reads the outer table, hashes the joining column, and finds matches in the inner hash table.
+ **Merge Join**
Typically the fastest join, a merge join is used for inner joins and outer joins. The merge join is not used for full joins. This operator is used when joining tables where the join columns are both distribution keys *and* sort keys, and when less than 20 percent of the joining tables is unsorted. It reads two sorted tables in order and finds the matching rows. To view the percent of unsorted rows, query the [SVV_TABLE_INFO](r_SVV_TABLE_INFO.md) system table.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c-the-query-plan.md
612162479f62-0
The query plan uses the following operators in queries that involve aggregate functions and GROUP BY operations.
+ **Aggregate**
Operator for scalar aggregate functions such as AVG and SUM.
+ **HashAggregate**
Operator for unsorted grouped aggregate functions.
+ **GroupAggregate**
Operator for sorted grouped aggregate functions.
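For example, a scalar aggregate with no GROUP BY plans as a bare Aggregate step (the grouped HashAggregate case is shown in the earlier EVENT example). A minimal sketch assuming the TICKIT SALES table; the plan figures in the comments are illustrative:

```
explain select avg(qtysold) from sales;

--                  QUERY PLAN (illustrative)
-- ----------------------------------------------------------------
-- XN Aggregate  (cost=2155.70..2155.70 rows=1 width=2)
--   ->  XN Seq Scan on sales  (cost=0.00..1724.56 rows=172456 width=2)
```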
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c-the-query-plan.md
243b93e3f9e4-0
The query plan uses the following operators when queries have to sort or merge result sets.
+ **Sort**
Evaluates the ORDER BY clause and other sort operations, such as sorts required by UNION queries and joins, SELECT DISTINCT queries, and window functions.
+ **Merge**
Produces final sorted results according to intermediate sorted results that derive from parallel operations.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c-the-query-plan.md
1902009d3c94-0
The query plan uses the following operators for queries that involve set operations with UNION, INTERSECT, and EXCEPT.
+ **Subquery**
Used to run UNION queries.
+ **Hash Intersect Distinct**
Used to run INTERSECT queries.
+ **SetOp Except**
Used to run EXCEPT (or MINUS) queries.
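To see these operators, run EXPLAIN on a set-operation query. A hedged sketch assuming the TICKIT tables; the exact plan shape depends on your data and statistics:

```
-- UNION typically introduces Subquery steps (plus a Unique step
-- to remove duplicates) in the EXPLAIN output.
explain
select listid from listing
union
select listid from sales;
```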
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c-the-query-plan.md
6677330aeb0a-0
The following operators also appear frequently in EXPLAIN output for routine queries.
+ **Unique**
Eliminates duplicates for SELECT DISTINCT queries and UNION queries.
+ **Limit**
Processes the LIMIT clause.
+ **Window**
Runs window functions.
+ **Result**
Runs scalar functions that do not involve any table access.
+ **Subplan**
Used for certain subqueries.
+ **Network**
Sends intermediate results to the leader node for further processing.
+ **Materialize**
Saves rows for input to nested loop joins and some merge joins.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c-the-query-plan.md
d757545810c0-0
The query optimizer uses different join types to retrieve table data, depending on the structure of the query and the underlying tables. The EXPLAIN output references the join type, the tables used, and the way the table data is distributed across the cluster to describe how the query is processed.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c-the-query-plan.md
a2398bde829e-0
The following examples show the different join types that the query optimizer can use. The join type used in the query plan depends on the physical design of the tables involved.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c-the-query-plan.md
d14c860063a6-0
The following query joins EVENT and CATEGORY on the CATID column. CATID is the distribution and sort key for CATEGORY but not for EVENT. A hash join is performed with EVENT as the outer table and CATEGORY as the inner table. Because CATEGORY is the smaller table, the planner broadcasts a copy of it to the compute nodes during query processing by using DS_BCAST_INNER. The join cost in this example accounts for most of the cumulative cost of the plan.

```
explain select * from category, event where category.catid=event.catid;

                                 QUERY PLAN
-------------------------------------------------------------------------
XN Hash Join DS_BCAST_INNER  (cost=0.14..6600286.07 rows=8798 width=84)
  Hash Cond: ("outer".catid = "inner".catid)
  ->  XN Seq Scan on event  (cost=0.00..87.98 rows=8798 width=35)
  ->  XN Hash  (cost=0.11..0.11 rows=11 width=49)
        ->  XN Seq Scan on category  (cost=0.00..0.11 rows=11 width=49)
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c-the-query-plan.md
d14c860063a6-1
**Note**
Aligned indents for operators in the EXPLAIN output sometimes indicate that those operations do not depend on each other and can start in parallel. In the preceding example, although the scan on the EVENT table and the hash operation are aligned, the EVENT scan must wait until the hash operation has fully completed.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c-the-query-plan.md
d04e6aad7029-0
The following query also uses SELECT *, but it joins SALES and LISTING on the LISTID column, where LISTID has been set as both the distribution and sort key for both tables. A merge join is chosen, and no redistribution of data is required for the join (DS_DIST_NONE).

```
explain select * from sales, listing where sales.listid = listing.listid;

                                 QUERY PLAN
-----------------------------------------------------------------------------
XN Merge Join DS_DIST_NONE  (cost=0.00..6285.93 rows=172456 width=97)
  Merge Cond: ("outer".listid = "inner".listid)
  ->  XN Seq Scan on listing  (cost=0.00..1924.97 rows=192497 width=44)
  ->  XN Seq Scan on sales  (cost=0.00..1724.56 rows=172456 width=53)
```

The following example demonstrates the different types of joins within the same query. As in the previous example, SALES and LISTING are merge joined, but the third table, EVENT, must be hash joined with the results of the merge join. Again, the hash join incurs a broadcast cost.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c-the-query-plan.md
d04e6aad7029-1
```
explain select * from sales, listing, event
where sales.listid = listing.listid and sales.eventid = event.eventid;

                                 QUERY PLAN
----------------------------------------------------------------------------
XN Hash Join DS_BCAST_INNER  (cost=109.98..3871130276.17 rows=172456 width=132)
  Hash Cond: ("outer".eventid = "inner".eventid)
  ->  XN Merge Join DS_DIST_NONE  (cost=0.00..6285.93 rows=172456 width=97)
        Merge Cond: ("outer".listid = "inner".listid)
        ->  XN Seq Scan on listing  (cost=0.00..1924.97 rows=192497 width=44)
        ->  XN Seq Scan on sales  (cost=0.00..1724.56 rows=172456 width=53)
  ->  XN Hash  (cost=87.98..87.98 rows=8798 width=35)
        ->  XN Seq Scan on event  (cost=0.00..87.98 rows=8798 width=35)
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c-the-query-plan.md
273c56559629-0
The following query executes a hash join of the SALES and EVENT tables, followed by aggregation and sort operations to account for the grouped SUM function and the ORDER BY clause. The initial Sort operator runs in parallel on the compute nodes. Then the Network operator sends the results to the leader node, where the Merge operator produces the final sorted results.

```
explain select eventname, sum(pricepaid) from sales, event
where sales.eventid = event.eventid group by eventname order by 2 desc;

                                        QUERY PLAN
---------------------------------------------------------------------------------
XN Merge  (cost=1002815366604.92..1002815366606.36 rows=576 width=27)
  Merge Key: sum(sales.pricepaid)
  ->  XN Network  (cost=1002815366604.92..1002815366606.36 rows=576 width=27)
        Send to leader
        ->  XN Sort  (cost=1002815366604.92..1002815366606.36 rows=576 width=27)
              Sort Key: sum(sales.pricepaid)
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c-the-query-plan.md
273c56559629-1
The plan continues with the aggregation and join steps below the Sort:

```
              ->  XN HashAggregate  (cost=2815366577.07..2815366578.51 rows=576 width=27)
                    ->  XN Hash Join DS_BCAST_INNER  (cost=109.98..2815365714.80 rows=172456 width=27)
                          Hash Cond: ("outer".eventid = "inner".eventid)
                          ->  XN Seq Scan on sales  (cost=0.00..1724.56 rows=172456 width=14)
                          ->  XN Hash  (cost=87.98..87.98 rows=8798 width=21)
                                ->  XN Seq Scan on event  (cost=0.00..87.98 rows=8798 width=21)
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c-the-query-plan.md
3cd3307547d3-0
The EXPLAIN output for joins also specifies a method for how data is moved around a cluster to facilitate the join. This data movement can be either a broadcast or a redistribution. In a broadcast, the data values from one side of a join are copied from each compute node to every other compute node, so that every compute node ends up with a complete copy of the data. In a redistribution, participating data values are sent from their current slice to a new slice (possibly on a different node). Data is typically redistributed to match the distribution key of the other table participating in the join if that distribution key is one of the joining columns. If neither of the tables has distribution keys on one of the joining columns, either both tables are distributed or the inner table is broadcast to every node.

The EXPLAIN output also references inner and outer tables. The inner table is scanned first, and appears nearer the bottom of the query plan. The inner table is the table that is probed for matches. It is usually held in memory, is usually the source table for hashing, and if possible, is the smaller table of the two being joined. The outer table is the source of rows to match against the inner table. It is usually read from disk. The query optimizer chooses the inner and outer table based on database statistics from the latest run of the ANALYZE command. The order of tables in the FROM clause of a query doesn't determine which table is inner and which is outer.

Use the following attributes in query plans to identify how data is moved to facilitate a query:
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c-the-query-plan.md
3cd3307547d3-1
+ **DS_BCAST_INNER**
A copy of the entire inner table is broadcast to all compute nodes.
+ **DS_DIST_ALL_NONE**
No redistribution is required, because the inner table has already been distributed to every node using DISTSTYLE ALL.
+ **DS_DIST_NONE**
No tables are redistributed. Collocated joins are possible because corresponding slices are joined without moving data between nodes.
+ **DS_DIST_INNER**
The inner table is redistributed.
+ **DS_DIST_OUTER**
The outer table is redistributed.
+ **DS_DIST_ALL_INNER**
The entire inner table is redistributed to a single slice because the outer table uses DISTSTYLE ALL.
+ **DS_DIST_BOTH**
Both tables are redistributed.
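To observe one of these attributes directly, you can copy a small table with DISTSTYLE ALL and join against it. This is a hedged sketch: the table name `category_all` is hypothetical, and the expected `DS_DIST_ALL_NONE` label depends on your cluster and statistics.

```
-- Hypothetical copy of CATEGORY distributed to every node.
create table category_all diststyle all as
select * from category;

-- Joining against a DISTSTYLE ALL inner table typically needs no
-- data movement, so the plan should show DS_DIST_ALL_NONE.
explain
select e.eventname, c.catname
from event e
join category_all c on e.catid = c.catid;
```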
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c-the-query-plan.md
0e74a3c34487-0
Splits a string on the specified delimiter and returns the part at the specified position.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/SPLIT_PART.md
09bb54dc93e0-0
``` SPLIT_PART(string, delimiter, part) ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/SPLIT_PART.md
f6237f1e05bb-0
*string*
The string to be split. The string can be CHAR or VARCHAR.

*delimiter*
The delimiter string. If *delimiter* is a literal, enclose it in single quotes.

*part*
Position of the portion to return (counting from 1). Must be an integer greater than 0. If *part* is larger than the number of string portions, SPLIT_PART returns an empty string. If *delimiter* is not found in *string*, then the returned value contains the contents of the specified part, which might be the entire *string* or an empty value.
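The edge cases described above are easy to check interactively. A minimal sketch; the expected results in the comments follow from the rules above:

```
select split_part('2008-03-05', '-', 2);  -- '03': the second part
select split_part('2008-03-05', '/', 1);  -- '2008-03-05': delimiter not found, so part 1 is the whole string
select split_part('2008-03-05', '-', 9);  -- '': part beyond the last portion returns an empty string
```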
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/SPLIT_PART.md
dea63874ac4d-0
A CHAR or VARCHAR string, the same as the *string* parameter.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/SPLIT_PART.md
23dcb93020b8-0
The following example splits the time stamp field LISTTIME into year, month, and day components.

```
select listtime, split_part(listtime,'-',1) as year,
split_part(listtime,'-',2) as month,
split_part(split_part(listtime,'-',3),' ',1) as day
from listing limit 5;

      listtime       | year | month | day
---------------------+------+-------+------
 2008-03-05 12:25:29 | 2008 | 03    | 05
 2008-09-09 08:03:36 | 2008 | 09    | 09
 2008-09-26 05:43:12 | 2008 | 09    | 26
 2008-10-04 02:00:30 | 2008 | 10    | 04
 2008-01-06 08:33:11 | 2008 | 01    | 06
(5 rows)
```

The following example selects the LISTTIME time stamp field and splits it on the `'-'` character to get the month (the second part of the LISTTIME string), then counts the number of entries for each month:

```
select split_part(listtime,'-',2) as month, count(*)
from listing
group by split_part(listtime,'-',2)
order by 1, 2;
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/SPLIT_PART.md
23dcb93020b8-1
```
 month | count
-------+-------
 01    | 18543
 02    | 16620
 03    | 17594
 04    | 16822
 05    | 17618
 06    | 17158
 07    | 17626
 08    | 17881
 09    | 17378
 10    | 17756
 11    | 12912
 12    |  4589
(12 rows)
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/SPLIT_PART.md
438493fa774e-0
If a COPY command is not an option and you require SQL inserts, use a multi-row insert whenever possible. Data compression is inefficient when you add data only one row or a few rows at a time. Multi-row inserts improve performance by batching up a series of inserts. The following example inserts three rows into a four-column table using a single INSERT statement. This is still a small insert, shown simply to illustrate the syntax of a multi-row insert.

```
insert into category_stage values
(default, default, default, default),
(20, default, 'Country', default),
(21, 'Concerts', 'Rock', default);
```

For more details and examples, see [INSERT](r_INSERT_30.md).
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_best-practices-multi-row-inserts.md
2163ca4fd4c8-0
Calculates the median value for the range of values. NULL values in the range are ignored. MEDIAN is an inverse distribution function that assumes a continuous distribution model. MEDIAN is a special case of [PERCENTILE_CONT](r_PERCENTILE_CONT.md)(.5). MEDIAN is a compute-node only function. The function returns an error if the query doesn't reference a user-defined table or Amazon Redshift system table.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_MEDIAN.md
202824165e9b-0
``` MEDIAN ( median_expression ) ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_MEDIAN.md
7ab44313e406-0
*median_expression*
The target column or expression that the function operates on.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_MEDIAN.md
1ee571de23a8-0
The return type is determined by the data type of *median_expression*. The following table shows the return type for each *median_expression* data type. [See the AWS documentation website for more details](http://docs.aws.amazon.com/redshift/latest/dg/r_MEDIAN.html)
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_MEDIAN.md
d3cc3878850f-0
If the *median_expression* argument is a DECIMAL data type defined with the maximum precision of 38 digits, it is possible that MEDIAN will return either an inaccurate result or an error. If the return value of the MEDIAN function exceeds 38 digits, the result is truncated to fit, which causes a loss of precision. If, during interpolation, an intermediate result exceeds the maximum precision, a numeric overflow occurs and the function returns an error. To avoid these conditions, we recommend either using a data type with lower precision or casting the *median_expression* argument to a lower precision.

If a statement includes multiple calls to sort-based aggregate functions (LISTAGG, PERCENTILE_CONT, or MEDIAN), they must all use the same ORDER BY values. Note that MEDIAN applies an implicit order by on the expression value. For example, the following statement returns an error.

```
select top 10 salesid, sum(pricepaid),
percentile_cont(0.6) within group (order by salesid),
median (pricepaid)
from sales group by salesid, pricepaid;

An error occurred when executing the SQL command:
select top 10 salesid, sum(pricepaid),
percentile_cont(0.6) within group (order by salesid),
median (pricepaid)
from sales group by salesid, pricepai...

ERROR: within group ORDER BY clauses for aggregate functions must be the same
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_MEDIAN.md
d3cc3878850f-1
The following statement executes successfully.

```
select top 10 salesid, sum(pricepaid),
percentile_cont(0.6) within group (order by salesid),
median (salesid)
from sales group by salesid, pricepaid;
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_MEDIAN.md
458a4740bfd5-0
The following example shows that MEDIAN produces the same results as PERCENTILE_CONT(0.5).

```
select top 10 distinct sellerid, qtysold,
percentile_cont(0.5) within group (order by qtysold),
median (qtysold)
from sales group by sellerid, qtysold;

 sellerid | qtysold | percentile_cont | median
----------+---------+-----------------+--------
        1 |       1 |             1.0 |    1.0
        2 |       3 |             3.0 |    3.0
        5 |       2 |             2.0 |    2.0
        9 |       4 |             4.0 |    4.0
       12 |       1 |             1.0 |    1.0
       16 |       1 |             1.0 |    1.0
       19 |       2 |             2.0 |    2.0
       19 |       3 |             3.0 |    3.0
       22 |       2 |             2.0 |    2.0
       25 |       2 |             2.0 |    2.0
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_MEDIAN.md
d5ab29cc8b5f-0
The MOD function returns a numeric result that is the remainder of two numeric parameters. The first parameter is divided by the second parameter.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_MOD.md
871af2266a0d-0
```
MOD(number1, number2)
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_MOD.md
71724335995b-0
*number1*
The first input parameter is an INTEGER, SMALLINT, BIGINT, or DECIMAL number. If either parameter is a DECIMAL type, the other parameter must also be a DECIMAL type. If either parameter is an INTEGER, the other parameter can be an INTEGER, SMALLINT, or BIGINT. Both parameters can also be SMALLINT or BIGINT, but one parameter cannot be a SMALLINT if the other is a BIGINT.

*number2*
The second parameter is an INTEGER, SMALLINT, BIGINT, or DECIMAL number. The same data type rules apply to *number2* as to *number1*.
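To illustrate the DECIMAL pairing rule above, a minimal sketch (the literal values are chosen for illustration; the expected remainder follows from ordinary arithmetic):

```
-- Both parameters are DECIMAL, so the call is valid; 10.5 / 3.0 leaves 1.5.
select mod(10.5, 3.0);

-- Mixing DECIMAL with INTEGER (for example, mod(10.5, 3)) violates the
-- rule that a DECIMAL parameter must be paired with another DECIMAL.
```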
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_MOD.md
6bb72f0c736f-0
Valid return types are DECIMAL, INT, SMALLINT, and BIGINT. The return type of the MOD function is the same numeric type as the input parameters, if both input parameters are the same type. If either input parameter is an INTEGER, however, the return type will also be an INTEGER.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_MOD.md
cf0687d2509b-0
The following example returns information for odd-numbered categories in the CATEGORY table:

```
select catid, catname
from category
where mod(catid,2)=1
order by 1,2;

 catid |  catname
-------+-----------
     1 | MLB
     3 | NFL
     5 | MLS
     7 | Plays
     9 | Pop
    11 | Classical
(6 rows)
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_MOD.md
2468d0a600a4-0
ST_Polygon returns a polygon geometry whose outer ring is the input linestring and whose spatial reference system identifier (SRID) is the input SRID value.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_Polygon-function.md
b40e4744fd77-0
``` ST_Polygon(linestring, srid) ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_Polygon-function.md
33f30fc983e4-0
*linestring*
A value of data type `GEOMETRY` or an expression that evaluates to a `GEOMETRY` type. The subtype must be `LINESTRING`. The *linestring* value must be closed.

*srid*
A value of data type `INTEGER` that represents an SRID.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_Polygon-function.md
4e73b8c94035-0
`GEOMETRY` of subtype `POLYGON`. The SRID value of the returned geometry is set to *srid*.

If *linestring* or *srid* is null, then null is returned. If *linestring* is not a linestring, then an error is returned. If *linestring* is not closed, then an error is returned. If *srid* is negative, then an error is returned.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_Polygon-function.md
ff442e02c668-0
The following SQL constructs a polygon with an SRID value.

```
SELECT ST_AsEWKT(ST_Polygon(ST_GeomFromText('LINESTRING(77.29 29.07,77.42 29.26,77.27 29.31,77.29 29.07)'),4356));
```

```
                              st_asewkt
-----------------------------------------------------------------------
 SRID=4356;POLYGON((77.29 29.07,77.42 29.26,77.27 29.31,77.29 29.07))
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_Polygon-function.md
f3c79697b832-0
Use the SVL_S3PARTITION view to get details about Amazon Redshift Spectrum partitions at the segment and node slice level. SVL_S3PARTITION is visible to all users. Superusers can see all rows; regular users can see only their own data. For more information, see [Visibility of data in system tables and views](c_visibility-of-data.md).
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVL_S3PARTITION.md
16db10843976-0
[See the AWS documentation website for more details](http://docs.aws.amazon.com/redshift/latest/dg/r_SVL_S3PARTITION.html)
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVL_S3PARTITION.md
e92719292792-0
The following example gets the partition details for the last query executed.

```
SELECT query, segment,
       MIN(starttime) AS starttime,
       MAX(endtime) AS endtime,
       datediff(ms,MIN(starttime),MAX(endtime)) AS dur_ms,
       MAX(total_partitions) AS total_partitions,
       MAX(qualified_partitions) AS qualified_partitions,
       MAX(assignment) as assignment_type
FROM svl_s3partition
WHERE query=pg_last_query_id()
GROUP BY query, segment;
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVL_S3PARTITION.md
e92719292792-1
```
query | segment |         starttime          |          endtime           | dur_ms | total_partitions | qualified_partitions | assignment_type
------+---------+----------------------------+----------------------------+--------+------------------+----------------------+-----------------
99232 |       0 | 2018-04-17 22:43:50.201515 | 2018-04-17 22:43:54.674595 |   4473 |             2526 |                  334 | p
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVL_S3PARTITION.md
37c273615be9-0
**Topics**
+ [Multibyte characters](#c_Supported_data_types-multi-byte-characters)
+ [Numeric types](r_Numeric_types201.md)
+ [Character types](r_Character_types.md)
+ [Datetime types](r_Datetime_types.md)
+ [Boolean type](r_Boolean_type.md)
+ [Type compatibility and conversion](r_Type_conversion.md)

Each value that Amazon Redshift stores or retrieves has a data type with a fixed set of associated properties. Data types are declared when tables are created. A data type constrains the set of values that a column or argument can contain. The following table lists the data types that you can use in Amazon Redshift tables. [See the AWS documentation website for more details](http://docs.aws.amazon.com/redshift/latest/dg/c_Supported_data_types.html)

**Note**
For information about unsupported data types, such as "char" (notice that char is enclosed in quotation marks), see [Unsupported PostgreSQL data types](c_unsupported-postgresql-datatypes.md).
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_Supported_data_types.md
b45f79a9d97a-0
The VARCHAR data type supports UTF-8 multibyte characters up to a maximum of four bytes. Five-byte or longer characters are not supported. To calculate the size of a VARCHAR column that contains multibyte characters, multiply the number of characters by the number of bytes per character. For example, if a string has four Chinese characters, and each character is three bytes long, then you will need a VARCHAR(12) column to store the string.

VARCHAR does not support the following invalid UTF-8 codepoints:
+ 0xD800 - 0xDFFF (byte sequences: ED A0 80 - ED BF BF)
+ 0xFDD0 - 0xFDEF, 0xFFFE, and 0xFFFF (byte sequences: EF B7 90 - EF B7 AF, EF BF BE, and EF BF BF)

The CHAR data type does not support multibyte characters.
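You can verify the character-versus-byte arithmetic with the LEN and OCTET_LENGTH functions, which count characters and bytes respectively. A minimal sketch; the sample string (four three-byte Chinese characters) is illustrative:

```
-- 4 characters at 3 bytes each is 12 bytes,
-- so the value needs at least a VARCHAR(12) column.
select len('简体中文') as characters,
       octet_length('简体中文') as bytes;
```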
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_Supported_data_types.md
0e8486efc271-0
Return all 11 rows from the CATEGORY table, ordered by the second column, CATGROUP. For results that have the same CATGROUP value, order the CATDESC column values by the length of the character string. Then order by columns CATID and CATNAME.

```
select * from category order by 2, length(catdesc), 1, 3;

 catid | catgroup |  catname  |                 catdesc
-------+----------+-----------+----------------------------------------
    10 | Concerts | Jazz      | All jazz singers and bands
     9 | Concerts | Pop       | All rock and pop music concerts
    11 | Concerts | Classical | All symphony, concerto, and choir conce
     6 | Shows    | Musicals  | Musical theatre
     7 | Shows    | Plays     | All non-musical theatre
     8 | Shows    | Opera     | All opera and light opera
     5 | Sports   | MLS       | Major League Soccer
     1 | Sports   | MLB       | Major League Baseball
     2 | Sports   | NHL       | National Hockey League
     3 | Sports   | NFL       | National Football League
     4 | Sports   | NBA       | National Basketball Association
(11 rows)
```

Return selected columns from the SALES table, ordered by the highest QTYSOLD values. Limit the result to the top 10 rows:
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_Examples_with_ORDER_BY.md
0e8486efc271-1
```
select salesid, qtysold, pricepaid, commission, saletime
from sales
order by qtysold desc, pricepaid, commission, salesid, saletime
limit 10;

 salesid | qtysold | pricepaid | commission |      saletime
---------+---------+-----------+------------+---------------------
   15401 |       8 |    272.00 |      40.80 | 2008-03-18 06:54:56
   61683 |       8 |    296.00 |      44.40 | 2008-11-26 04:00:23
   90528 |       8 |    328.00 |      49.20 | 2008-06-11 02:38:09
   74549 |       8 |    336.00 |      50.40 | 2008-01-19 12:01:21
  130232 |       8 |    352.00 |      52.80 | 2008-05-02 05:52:31
   55243 |       8 |    384.00 |      57.60 | 2008-07-12 02:19:53
   16004 |       8 |    440.00 |      66.00 | 2008-11-04 07:22:31
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_Examples_with_ORDER_BY.md
0e8486efc271-2
```
     489 |       8 |    496.00 |      74.40 | 2008-08-03 05:48:55
    4197 |       8 |    512.00 |      76.80 | 2008-03-23 11:35:33
   16929 |       8 |    568.00 |      85.20 | 2008-12-19 02:59:33
(10 rows)
```

Return a column list and no rows by using LIMIT 0 syntax:

```
select * from venue limit 0;

 venueid | venuename | venuecity | venuestate | venueseats
---------+-----------+-----------+------------+------------
(0 rows)
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_Examples_with_ORDER_BY.md
9ebc48f72ff7-0
[See the AWS documentation website for more details](http://docs.aws.amazon.com/redshift/latest/dg/r_venuetable.html)
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_venuetable.md
b43668ba1613-0
With the Concurrency Scaling feature, you can support virtually unlimited concurrent users and concurrent queries, with consistently fast query performance. When concurrency scaling is enabled, Amazon Redshift automatically adds additional cluster capacity when you need it to process an increase in concurrent read queries. Write operations continue as normal on your main cluster. Users always see the most current data, whether the queries run on the main cluster or on a concurrency scaling cluster.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/concurrency-scaling.md
b43668ba1613-1
You're charged for concurrency scaling clusters only for the time they're in use. For more information about pricing, see [Amazon Redshift pricing](https://aws.amazon.com/redshift/pricing/). You manage which queries are sent to the concurrency scaling cluster by configuring WLM queues. When you enable concurrency scaling for a queue, eligible queries are sent to the concurrency scaling cluster instead of waiting in line.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/concurrency-scaling.md
c010431ef154-0
The Concurrency Scaling feature is available only in the following AWS Regions:
+ US East (N. Virginia) Region (us-east-1)
+ US East (Ohio) Region (us-east-2)
+ US West (N. California) Region (us-west-1)
+ US West (Oregon) Region (us-west-2)
+ Asia Pacific (Mumbai) Region (ap-south-1)
+ Asia Pacific (Seoul) Region (ap-northeast-2)
+ Asia Pacific (Singapore) Region (ap-southeast-1)
+ Asia Pacific (Sydney) Region (ap-southeast-2)
+ Asia Pacific (Tokyo) Region (ap-northeast-1)
+ Canada (Central) Region (ca-central-1)
+ Europe (Frankfurt) Region (eu-central-1)
+ Europe (Ireland) Region (eu-west-1)
+ Europe (London) Region (eu-west-2)
+ Europe (Paris) Region (eu-west-3)
+ South America (São Paulo) Region (sa-east-1)
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/concurrency-scaling.md
0ee1134ad732-0
Queries are routed to the concurrency scaling cluster only when the main cluster meets the following requirements:
+ EC2-VPC platform.
+ Node type must be `dc2.8xlarge`, `ds2.8xlarge`, `dc2.large`, `ds2.xlarge`, `ra3.4xlarge`, or `ra3.16xlarge`.
+ Maximum of 32 compute nodes for clusters with `8xlarge` or `16xlarge` node types. In addition, the main cluster can't have had more than 32 nodes when it was originally created. For example, a cluster that currently has 20 nodes but was originally created with 40 doesn't meet the requirements for concurrency scaling. Conversely, a cluster that currently has 40 nodes but was originally created with 20 does meet them.
+ Not a single-node cluster.

A query must meet all the following criteria to be a candidate for concurrency scaling:
+ The query must be a read-only query.
+ The query doesn't reference tables that use an [interleaved sort key](t_Sorting_data.md#t_Sorting_data-interleaved).
+ The query doesn't reference user-defined temporary tables.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/concurrency-scaling.md
142a049a9b9d-0
You route queries to concurrency scaling clusters by enabling a workload manager (WLM) queue as a concurrency scaling queue. To enable concurrency scaling on a queue, set the **Concurrency Scaling mode** value to **auto**. When the number of queries routed to a concurrency scaling queue exceeds the queue's configured concurrency, eligible queries are sent to the concurrency scaling cluster. When slots become available, queries are run on the main cluster. The number of queues is limited only by the number of queues permitted per cluster. As with any WLM queue, you route queries to a concurrency scaling queue based on user groups or by labeling queries with query group labels. You can also route queries by defining [WLM query monitoring rules](cm-c-wlm-query-monitoring-rules.md). For example, you might route all queries that take longer than 5 seconds to a concurrency scaling queue. The default number of concurrency scaling clusters is one. The number of concurrency scaling clusters that can be used is controlled by [max_concurrency_scaling_clusters](r_max_concurrency_scaling_clusters.md).
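For example, if a concurrency scaling queue matches the query group `report` (a hypothetical label for this sketch), a session can tag its queries so WLM routes them to that queue:

```
-- Tag subsequent queries with the hypothetical 'report' query group
-- so WLM routes them to the matching concurrency scaling queue.
set query_group to 'report';

select eventname, count(*) from event group by eventname;

-- Return to the default routing.
reset query_group;
```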
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/concurrency-scaling.md
11bdede0cca7-0
You can see whether a query is running on the main cluster or a concurrency scaling cluster by viewing the Amazon Redshift console, navigating to **Cluster**, and choosing a cluster. Then choose the **Queries** tab and view the values in the column **Executed on** to determine the cluster where the query ran. To find execution times, query the STL_QUERY table and filter on the `concurrency_scaling_status` column. The following query compares the queue time and execution time for queries run on the concurrency scaling cluster and queries run on the main cluster.

```
SELECT w.service_class AS queue
     , q.concurrency_scaling_status
     , COUNT( * ) AS queries
     , SUM( q.aborted ) AS aborted
     , SUM( ROUND( total_queue_time::NUMERIC / 1000000,2 ) ) AS queue_secs
     , SUM( ROUND( total_exec_time::NUMERIC / 1000000,2 ) )  AS exec_secs
FROM stl_query q
JOIN stl_wlm_query w
USING (userid,query)
WHERE q.userid > 1
AND q.starttime > '2019-01-04 16:38:00'
AND q.endtime < '2019-01-04 17:40:00'
GROUP BY 1,2
ORDER BY 1,2;
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/concurrency-scaling.md
df17debf9d65-0
A set of system views with the prefix SVCS provides details from the system log tables about queries on both the main and concurrency scaling clusters. The following views have similar information as the corresponding STL views or SVL views:
+ [SVCS_ALERT_EVENT_LOG](r_SVCS_ALERT_EVENT_LOG.md)
+ [SVCS_COMPILE](r_SVCS_COMPILE.md)
+ [SVCS_EXPLAIN](r_SVCS_EXPLAIN.md)
+ [SVCS_PLAN_INFO](r_SVCS_PLAN_INFO.md)
+ [SVCS_QUERY_SUMMARY](r_SVCS_QUERY_SUMMARY.md)
+ [SVCS_STREAM_SEGS](r_SVCS_STREAM_SEGS.md)

The following view is specific to concurrency scaling.
+ [SVCS_CONCURRENCY_SCALING_USAGE](r_SVCS_CONCURRENCY_SCALING_USAGE.md)

For more information about concurrency scaling, see the following topics in the *Amazon Redshift Cluster Management Guide*.
+ [Viewing Concurrency Scaling Data](https://docs.aws.amazon.com/redshift/latest/mgmt/performance-metrics-concurrency-scaling.html)
+ [Viewing Cluster Performance During Query Execution](https://docs.aws.amazon.com/redshift/latest/mgmt/performance-metrics-query-cluster.html)
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/concurrency-scaling.md
df17debf9d65-1
+ [Viewing Query Details](https://docs.aws.amazon.com/redshift/latest/mgmt/performance-metrics-query-execution-details.html)
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/concurrency-scaling.md
4814fd364fa1-0
PL/pgSQL statements augment SQL commands with procedural constructs, including looping and conditional expressions, to control logical flow. Most SQL commands can be used, including data manipulation language (DML) such as COPY, UNLOAD, and INSERT, and data definition language (DDL) such as CREATE TABLE. For a comprehensive list of SQL commands, see [SQL commands](c_SQL_commands.md). In addition, the following PL/pgSQL statements are supported by Amazon Redshift.

**Topics**
+ [Assignment](#r_PLpgSQL-assignment)
+ [SELECT INTO](#r_PLpgSQL-select-into)
+ [No-op](#r_PLpgSQL-no-op)
+ [Dynamic SQL](#r_PLpgSQL-dynamic-sql)
+ [Return](#r_PLpgSQL-return)
+ [Conditionals: IF](#r_PLpgSQL-conditionals-if)
+ [Conditionals: CASE](#r_PLpgSQL-conditionals-case)
+ [Loops](#r_PLpgSQL-loops)
+ [Cursors](#r_PLpgSQL-cursors)
+ [RAISE](#r_PLpgSQL-messages-errors)
+ [Transaction control](#r_PLpgSQL-transaction-control)
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_PLpgSQL-statements.md
f68f6b1f23ef-0
The assignment statement assigns a value to a variable. The expression must return a single value.

```
identifier := expression;
```

Using the nonstandard `=` for assignment, instead of `:=`, is also accepted. If the data type of the expression doesn't match the variable's data type, or the variable has a specific size or precision, the result value is implicitly converted. The following shows examples.

```
customer_number := 20;
tip := subtotal * 0.15;
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_PLpgSQL-statements.md
7a0b1239ee7b-0
The SELECT INTO statement assigns the result of multiple columns (but only one row) into a record variable or list of scalar variables.

```
SELECT INTO target select_expressions FROM ...;
```

In the preceding syntax, *target* can be a record variable or a comma-separated list of simple variables and record fields. The *select_expressions* list and the remainder of the command are the same as in regular SQL.

If a variable list is used as *target*, the selected values must exactly match the structure of the target, or a runtime error occurs. When a record variable is the target, it automatically configures itself to the row type of the query result columns.

The INTO clause can appear almost anywhere in the SELECT statement. It usually appears just after the SELECT clause, or just before the FROM clause. That is, it appears just before or just after the *select_expressions* list.

If the query returns zero rows, NULL values are assigned to *target*. If the query returns multiple rows, the first row is assigned to *target* and the rest are discarded. Unless the statement contains an ORDER BY, the first row is not deterministic.

To determine whether the assignment returned at least one row, use the special FOUND variable.

```
SELECT INTO customer_rec * FROM cust WHERE custname = lname;

IF NOT FOUND THEN
  RAISE EXCEPTION 'employee % not found', lname;
END IF;
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_PLpgSQL-statements.md
7a0b1239ee7b-1
To test whether a record result is null, you can use the IS NULL conditional. There is no way to determine whether any additional rows might have been discarded. The following example handles the case where no rows have been returned.

```
CREATE OR REPLACE PROCEDURE select_into_null(return_webpage OUT varchar(256))
AS $$
DECLARE
  customer_rec RECORD;
BEGIN
  SELECT INTO customer_rec * FROM users WHERE user_id=3;
  IF customer_rec.webpage IS NULL THEN
    -- user entered no webpage, return "http://"
    return_webpage = 'http://';
  END IF;
END;
$$ LANGUAGE plpgsql;
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_PLpgSQL-statements.md
576eb547d75a-0
The no-op statement (`NULL;`) is a placeholder statement that does nothing. A no-op statement can indicate that one branch of an IF-THEN-ELSE chain is empty.

```
NULL;
```
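For instance, a minimal sketch of an empty ELSE branch; the variable `v_count` is hypothetical:

```
IF v_count > 0 THEN
  RAISE INFO 'found % rows', v_count;
ELSE
  -- Nothing to do, but the empty branch is made explicit.
  NULL;
END IF;
```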
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_PLpgSQL-statements.md
a7bf4f1884c0-0
To generate dynamic commands that can involve different tables or different data types each time they are run from a PL/pgSQL stored procedure, use the `EXECUTE` statement.

```
EXECUTE command-string [ INTO target ];
```

In the preceding, *command-string* is an expression yielding a string (of type text) that contains the command to be run. This *command-string* value is sent to the SQL engine. No substitution of PL/pgSQL variables is done on the command string. The values of variables must be inserted in the command string as it is constructed.

**Note**
You can't use COMMIT and ROLLBACK statements from within dynamic SQL. For information about using COMMIT and ROLLBACK statements within a stored procedure, see [Managing transactions](stored-procedure-transaction-management.md).

When working with dynamic commands, you often have to handle escaping of single quotation marks. We recommend enclosing fixed text in quotation marks in your function body using dollar quoting. Dynamic values to insert into a constructed query require special handling because they might themselves contain quotation marks. The following example assumes dollar quoting for the function as a whole, so the quotation marks don't need to be doubled.

```
EXECUTE 'UPDATE tbl SET '
    || quote_ident(colname)
    || ' = '
    || quote_literal(newvalue)
    || ' WHERE key = '
    || quote_literal(keyvalue);
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_PLpgSQL-statements.md
a7bf4f1884c0-1
The preceding example shows the functions `quote_ident(text)` and `quote_literal(text)`. This example passes variables that contain column and table identifiers to the `quote_ident` function. It also passes variables that contain literal strings in the constructed command to the `quote_literal` function. Both functions take the appropriate steps to return the input text enclosed in double or single quotation marks respectively, with any embedded special characters properly escaped.

Dollar quoting is only useful for quoting fixed text. Don't write the preceding example in the following format.

```
EXECUTE 'UPDATE tbl SET '
    || quote_ident(colname)
    || ' = $$'
    || newvalue
    || '$$ WHERE key = '
    || quote_literal(keyvalue);
```

Don't do this, because the example breaks if the contents of `newvalue` happen to contain `$$`. The same problem applies to any other dollar-quoting delimiter that you might choose. To safely quote text that is not known in advance, use the `quote_literal` function.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_PLpgSQL-statements.md
b739999371db-0
The RETURN statement returns back to the caller from a stored procedure.

```
RETURN;
```

The following shows an example.

```
CREATE OR REPLACE PROCEDURE return_example(a int)
AS $$
BEGIN
  FOR b in 1..10 LOOP
    IF b < a THEN
      RAISE INFO 'b = %', b;
    ELSE
      RETURN;
    END IF;
  END LOOP;
END;
$$ LANGUAGE plpgsql;
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_PLpgSQL-statements.md
c055c7776763-0
The IF conditional statement can take the following forms in the PL/pgSQL language that Amazon Redshift uses:
+ IF ... THEN

```
IF boolean-expression THEN
  statements
END IF;
```

The following shows an example.

```
IF v_user_id <> 0 THEN
  UPDATE users SET email = v_email WHERE user_id = v_user_id;
END IF;
```
+ IF ... THEN ... ELSE

```
IF boolean-expression THEN
  statements
ELSE
  statements
END IF;
```

The following shows an example.

```
IF parentid IS NULL OR parentid = ''
THEN
  return_name = fullname;
  RETURN;
ELSE
  return_name = hp_true_filename(parentid) || '/' || fullname;
  RETURN;
END IF;
```
+ IF ... THEN ... ELSIF ... THEN ... ELSE
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_PLpgSQL-statements.md
c055c7776763-1
The key word ELSIF can also be spelled ELSEIF.

```
IF boolean-expression THEN
  statements
[ ELSIF boolean-expression THEN
  statements
[ ELSIF boolean-expression THEN
  statements
  ...] ]
[ ELSE
  statements ]
END IF;
```

The following shows an example.

```
IF number = 0 THEN
  result := 'zero';
ELSIF number > 0 THEN
  result := 'positive';
ELSIF number < 0 THEN
  result := 'negative';
ELSE
  -- the only other possibility is that number is null
  result := 'NULL';
END IF;
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_PLpgSQL-statements.md
8fdc0d593aad-0
The CASE conditional statement can take the following forms in the PL/pgSQL language that Amazon Redshift uses:
+ Simple CASE

```
CASE search-expression
  WHEN expression [, expression [ ... ]] THEN
    statements
[ WHEN expression [, expression [ ... ]] THEN
    statements
  ... ]
[ ELSE
    statements ]
END CASE;
```

A simple CASE statement provides conditional execution based on equality of operands. The *search-expression* value is evaluated one time and successively compared to each *expression* in the WHEN clauses. If a match is found, then the corresponding *statements* run, and then control passes to the next statement after END CASE. Subsequent WHEN expressions aren't evaluated. If no match is found, the ELSE *statements* run. However, if ELSE isn't present, then a CASE_NOT_FOUND exception is raised. The following shows an example.

```
CASE x
  WHEN 1, 2 THEN
    msg := 'one or two';
  ELSE
    msg := 'other value than one or two';
END CASE;
```
+ Searched CASE
  ```
  CASE
      WHEN boolean-expression THEN
        statements
    [ WHEN boolean-expression THEN
        statements
      ... ]
    [ ELSE
        statements ]
  END CASE;
  ```

  The searched form of CASE provides conditional execution based on truth of Boolean expressions\. Each WHEN clause's *boolean\-expression* is evaluated in turn, until one is found that yields true\. Then the corresponding *statements* run, and then control passes to the next statement after END CASE\. Subsequent WHEN *expressions* aren't evaluated\. If no true result is found, the ELSE *statements* are run\. However, if ELSE isn't present, then a CASE\_NOT\_FOUND exception is raised\.

  The following shows an example\.

  ```
  CASE
      WHEN x BETWEEN 0 AND 10 THEN
        msg := 'value is between zero and ten';
      WHEN x BETWEEN 11 AND 20 THEN
        msg := 'value is between eleven and twenty';
  END CASE;
  ```
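A self\-contained version of the searched CASE example might look like the following sketch; the procedure name is illustrative\. Because the CASE statement has no ELSE clause, calling the procedure with a value outside the range 0–20 raises a CASE\_NOT\_FOUND exception\.

```
CREATE OR REPLACE PROCEDURE case_example(x int)
LANGUAGE plpgsql
AS $$
DECLARE
  msg varchar(64);
BEGIN
  CASE
      WHEN x BETWEEN 0 AND 10 THEN
        msg := 'value is between zero and ten';
      WHEN x BETWEEN 11 AND 20 THEN
        msg := 'value is between eleven and twenty';
  END CASE;
  RAISE INFO '%', msg;
END;
$$;
```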
Loop statements can take the following forms in the PL/pgSQL language that Amazon Redshift uses:
+ Simple loop

  ```
  [<<label>>]
  LOOP
    statements
  END LOOP [ label ];
  ```

  A simple loop defines an unconditional loop that is repeated indefinitely until terminated by an EXIT or RETURN statement\. The optional label can be used by EXIT and CONTINUE statements within nested loops to specify which loop the EXIT and CONTINUE statements refer to\.

  The following shows an example\.

  ```
  CREATE OR REPLACE PROCEDURE simple_loop()
  LANGUAGE plpgsql
  AS $$
  BEGIN
    <<simple_while>>
    LOOP
      RAISE INFO 'I am raised once';
      EXIT simple_while;
      RAISE INFO 'I am not raised';
    END LOOP;
    RAISE INFO 'I am raised once as well';
  END;
  $$;
  ```
+ Exit loop

  ```
  EXIT [ label ] [ WHEN expression ];
  ```

  If *label* isn't present, the innermost loop is terminated and the statement following the END LOOP runs next\. If *label* is present, it must be the label of the current or some outer level of nested loop or block\. Then, the named loop or block is terminated and control continues with the statement after the loop or block's corresponding END\.
  If WHEN is specified, the loop exit occurs only if *expression* is true\. Otherwise, control passes to the statement after EXIT\.

  You can use EXIT with all types of loops; it isn't limited to use with unconditional loops\.

  When used with a BEGIN block, EXIT passes control to the next statement after the end of the block\. A label must be used for this purpose\. An unlabeled EXIT is never considered to match a BEGIN block\.

  The following shows an example\.

  ```
  CREATE OR REPLACE PROCEDURE simple_loop_when(x int)
  LANGUAGE plpgsql
  AS $$
  DECLARE i INTEGER := 0;
  BEGIN
    <<simple_loop_when>>
    LOOP
      RAISE INFO 'i %', i;
      i := i + 1;
      EXIT simple_loop_when WHEN (i >= x);
    END LOOP;
  END;
  $$;
  ```
+ Continue loop

  ```
  CONTINUE [ label ] [ WHEN expression ];
  ```

  If *label* is not given, the execution jumps to the next iteration of the innermost loop\. That is, all statements remaining in the loop body are skipped\. Control then returns to the loop control expression \(if any\) to determine whether another loop iteration is needed\. If *label* is present, it specifies the label of the loop whose execution is continued\.
  If WHEN is specified, the next iteration of the loop is begun only if *expression* is true\. Otherwise, control passes to the statement after CONTINUE\.

  You can use CONTINUE with all types of loops; it isn't limited to use with unconditional loops\.

  ```
  CONTINUE mylabel;
  ```

  A complete example that uses CONTINUE with a WHEN clause appears after this list\.
+ WHILE loop

  ```
  [<<label>>]
  WHILE expression LOOP
    statements
  END LOOP [ label ];
  ```

  The WHILE statement repeats a sequence of statements so long as the *boolean\-expression* evaluates to true\. The expression is checked just before each entry to the loop body\.

  The following shows an example\.

  ```
  WHILE amount_owed > 0 AND gift_certificate_balance > 0 LOOP
    -- some computations here
  END LOOP;

  WHILE NOT done LOOP
    -- some computations here
  END LOOP;
  ```
+ FOR loop \(integer variant\)

  ```
  [<<label>>]
  FOR name IN [ REVERSE ] expression .. expression LOOP
    statements
  END LOOP [ label ];
  ```
  The FOR loop \(integer variant\) creates a loop that iterates over a range of integer values\. The variable *name* is automatically defined as type integer and exists only inside the loop\. Any existing definition of the variable name is ignored within the loop\.

  The two expressions giving the lower and upper bound of the range are evaluated one time when entering the loop\. If you specify REVERSE, then the step value is subtracted, rather than added, after each iteration\.

  If the lower bound is greater than the upper bound \(or less than, in the REVERSE case\), the loop body doesn't run\. No error is raised\.

  If a label is attached to the FOR loop, then you can reference the integer loop variable with a qualified name, using that label\.

  The following shows an example\.

  ```
  FOR i IN 1..10 LOOP
    -- i will take on the values 1,2,3,4,5,6,7,8,9,10 within the loop
  END LOOP;

  FOR i IN REVERSE 10..1 LOOP
    -- i will take on the values 10,9,8,7,6,5,4,3,2,1 within the loop
  END LOOP;
  ```
+ FOR loop \(result set variant\)
  ```
  [<<label>>]
  FOR target IN query LOOP
    statements
  END LOOP [ label ];
  ```

  The *target* is a record variable or comma\-separated list of scalar variables\. The target is successively assigned each row resulting from the query, and the loop body is run for each row\.

  The FOR loop \(result set variant\) enables a stored procedure to iterate through the results of a query and manipulate that data accordingly\.

  The following shows an example\.

  ```
  CREATE PROCEDURE cs_refresh_reports() AS $$
  DECLARE
    reports RECORD;
  BEGIN
    PERFORM cs_log('Refreshing reports...');
    FOR reports IN SELECT * FROM cs_reports ORDER BY sort_key LOOP
      -- Now "reports" has one record from cs_reports
      PERFORM cs_log('Refreshing report ' || quote_ident(reports.report_name) || ' ...');
      EXECUTE 'TRUNCATE TABLE ' || quote_ident(reports.report_name);
      EXECUTE 'INSERT INTO ' || quote_ident(reports.report_name) || ' ' || reports.report_query;
    END LOOP;
    PERFORM cs_log('Done refreshing reports.');
    RETURN;
  END;
  $$ LANGUAGE plpgsql;
  ```
+ FOR loop with dynamic SQL

  ```
  [<<label>>]
  FOR record_or_row IN EXECUTE text_expression LOOP
    statements
  END LOOP;
  ```

  A FOR loop with dynamic SQL enables a stored procedure to iterate through the results of a dynamic query and manipulate that data accordingly\.

  The following shows an example\.

  ```
  CREATE OR REPLACE PROCEDURE for_loop_dynamic_sql(x int)
  LANGUAGE plpgsql
  AS $$
  DECLARE
    rec RECORD;
    query text;
  BEGIN
    query := 'SELECT * FROM tbl_dynamic_sql LIMIT ' || x;
    FOR rec IN EXECUTE query LOOP
      RAISE INFO 'a %', rec.a;
    END LOOP;
  END;
  $$;
  ```
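The following is the complete CONTINUE example promised in the continue loop item earlier\. It is a minimal sketch \(the procedure name is illustrative\): the WHEN clause skips the rest of the loop body for even values of the counter\.

```
CREATE OR REPLACE PROCEDURE continue_when_example()
LANGUAGE plpgsql
AS $$
BEGIN
  FOR i IN 1..6 LOOP
    -- skip the RAISE statement for even values of i
    CONTINUE WHEN (i % 2 = 0);
    RAISE INFO 'odd value: %', i;
  END LOOP;
END;
$$;
```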
Rather than running a whole query at once, you can set up a cursor\. A *cursor* encapsulates a query and reads the query result a few rows at a time\. One reason for doing this is to avoid memory overrun when the result contains a large number of rows\. Another reason is to return a reference to a cursor that a stored procedure has created, which allows the caller to read the rows\. This approach provides an efficient way to return large row sets from stored procedures\.

To set up a cursor, first you declare a cursor variable\. All access to cursors in PL/pgSQL goes through cursor variables, which are always of the special data type `refcursor`\. A `refcursor` data type simply holds a reference to a cursor\.

You can create a cursor variable by declaring it as a variable of type `refcursor`\. Or, you can use the cursor declaration syntax following\.

```
name CURSOR [ ( arguments ) ] FOR query ;
```

In the preceding, *arguments* \(if specified\) is a comma\-separated list of *name datatype* pairs that each define names to be replaced by parameter values in *query*\. The actual values to substitute for these names are specified later, when the cursor is opened\.

The following shows examples\.
```
DECLARE
  curs1 refcursor;
  curs2 CURSOR FOR SELECT * FROM tenk1;
  curs3 CURSOR (key integer) IS SELECT * FROM tenk1 WHERE unique1 = key;
```

All three of these variables have the data type `refcursor`, but the first can be used with any query\. In contrast, the second has a fully specified query already bound to it, and the last has a parameterized query bound to it\. The `key` value is replaced by an integer parameter value when the cursor is opened\. The variable `curs1` is said to be *unbound* because it is not bound to any particular query\.

Before you can use a cursor to retrieve rows, it must be opened\. PL/pgSQL has three forms of the OPEN statement, of which two use unbound cursor variables and the third uses a bound cursor variable:
+ Open for select: The cursor variable is opened and given the specified query to run\. The cursor can't be open already\. Also, it must have been declared as an unbound cursor \(that is, as a simple `refcursor` variable\)\. The SELECT query is treated in the same way as other SELECT statements in PL/pgSQL\.

  ```
  OPEN cursor_name FOR SELECT ...;
  ```
  The following shows an example\.

  ```
  OPEN curs1 FOR SELECT * FROM foo WHERE key = mykey;
  ```
+ Open for execute: The cursor variable is opened and given the specified query to run\. The cursor can't be open already\. Also, it must have been declared as an unbound cursor \(that is, as a simple `refcursor` variable\)\. The query is specified as a string expression in the same way as in the EXECUTE command\. This approach gives flexibility so the query can vary from one run to the next\.

  ```
  OPEN cursor_name FOR EXECUTE query_string;
  ```

  The following shows an example\.

  ```
  OPEN curs1 FOR EXECUTE 'SELECT * FROM ' || quote_ident($1);
  ```
+ Open a bound cursor: This form of OPEN is used to open a cursor variable whose query was bound to it when it was declared\. The cursor can't be open already\. A list of actual argument value expressions must appear if and only if the cursor was declared to take arguments\. These values are substituted in the query\.

  ```
  OPEN bound_cursor_name [ ( argument_values ) ];
  ```

  The following shows an example\.
  ```
  OPEN curs2;
  OPEN curs3(42);
  ```

After a cursor has been opened, you can work with it by using the statements described following\. These statements don't have to occur in the same stored procedure that opened the cursor\. You can return a `refcursor` value out of a stored procedure and let the caller operate on the cursor\. All portals are implicitly closed at transaction end\. Thus, you can use a `refcursor` value to reference an open cursor only until the end of the transaction\.
+ FETCH retrieves the next row from the cursor into a target\. This target can be a row variable, a record variable, or a comma\-separated list of simple variables, just as with SELECT INTO\. As with SELECT INTO, you can check the special variable FOUND to see whether a row was obtained\.

  ```
  FETCH cursor INTO target;
  ```

  The following shows an example\.

  ```
  FETCH curs1 INTO rowvar;
  ```
+ CLOSE closes the portal underlying an open cursor\. You can use this statement to release resources earlier than end of the transaction\. You can also use this statement to free the cursor variable so that it can be opened again\.

  ```
  CLOSE cursor;
  ```

  The following shows an example\.

  ```
  CLOSE curs1;
  ```
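Putting these statements together, the following is a minimal sketch of a procedure that opens a cursor, fetches rows in a loop, and closes the cursor\. The table name `tbl` and column `a` are assumed placeholders\.

```
CREATE OR REPLACE PROCEDURE cursor_example()
LANGUAGE plpgsql
AS $$
DECLARE
  rec RECORD;
  curs1 CURSOR FOR SELECT * FROM tbl;  -- bound cursor
BEGIN
  OPEN curs1;
  LOOP
    FETCH curs1 INTO rec;
    EXIT WHEN NOT FOUND;  -- FOUND is false when no row was fetched
    RAISE INFO 'a = %', rec.a;
  END LOOP;
  CLOSE curs1;
END;
$$;
```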
Use the RAISE statement to report messages and raise errors\.

```
RAISE level 'format' [, variable [, ...]];
```

Possible levels are NOTICE, INFO, LOG, WARNING, and EXCEPTION\. EXCEPTION raises an error, which normally aborts the current transaction\. The other levels generate only messages of different priority levels\.

Inside the format string, % is replaced by the next optional argument's string representation\. Write %% to emit a literal %\. Currently, optional arguments must be simple variables, not expressions, and the format must be a simple string literal\.

In the following example, the value of `v_job_id` replaces the % in the string\.

```
RAISE NOTICE 'Calling cs_create_job(%)', v_job_id;
```
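The EXCEPTION level raises an error rather than reporting a message\. The following line is an illustrative sketch \(the variable name is assumed\); running it stops execution and normally aborts the current transaction\.

```
RAISE EXCEPTION 'invalid quantity: %', v_quantity;
```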
You can work with transaction control statements in the PL/pgSQL language that Amazon Redshift uses\. For information about using the statements COMMIT, ROLLBACK, and TRUNCATE within a stored procedure, see [Managing transactions](stored-procedure-transaction-management.md)\.
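As a minimal sketch of what this can look like \(the table name is an assumed placeholder; see the linked topic for the exact semantics\), a stored procedure can commit work partway through\.

```
CREATE OR REPLACE PROCEDURE transaction_example()
LANGUAGE plpgsql
AS $$
BEGIN
  INSERT INTO audit_log VALUES ('step 1 complete');
  COMMIT;  -- commits work done so far and begins a new transaction
  INSERT INTO audit_log VALUES ('step 2 complete');
END;
$$;
```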