f589ef24ab76-0
Where possible, use the standard FROM clause OUTER JOIN syntax instead of the (+) operator in the WHERE clause. Queries that contain the (+) operator are subject to the following rules:
+ You can only use the (+) operator in the WHERE clause, and only in reference to columns from tables or views.
+ You can't apply the (+) operator to expressions. However, an expression can contain columns that use the (+) operator. For example, the following join condition returns a syntax error:

  ```
  event.eventid*10(+)=category.catid
  ```

  However, the following join condition is valid:

  ```
  event.eventid(+)*10=category.catid
  ```
+ You can't use the (+) operator in a query block that also contains FROM clause join syntax.
+ If two tables are joined over multiple join conditions, you must use the (+) operator in all or none of these conditions. A join with mixed syntax styles executes as an inner join, without warning.
+ The (+) operator doesn't produce an outer join if you join a table in the outer query with a table that results from an inner query.
+ To use the (+) operator to outer-join a table to itself, you must define table aliases in the FROM clause and reference them in the join condition:
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_WHERE_oracle_outer.md
f589ef24ab76-1
  ```
  select count(*)
  from event a, event b
  where a.eventid(+)=b.catid;

  count
  -------
  8798
  (1 row)
  ```
+ You can't combine a join condition that contains the (+) operator with an OR condition or an IN condition. For example:

  ```
  select count(*) from sales, listing
  where sales.listid(+)=listing.listid or sales.salesid=0;
  ERROR: Outer join operator (+) not allowed in operand of OR or IN.
  ```
+ In a WHERE clause that outer-joins more than two tables, the (+) operator can be applied only once to a given table. In the following example, the SALES table can't be referenced with the (+) operator in two successive joins.

  ```
  select count(*) from sales, listing, event
  where sales.listid(+)=listing.listid and sales.dateid(+)=date.dateid;
  ERROR: A table may be outer joined to at most one other table.
  ```
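For comparison, the self-join at the top of this list can be written with the recommended FROM clause syntax. This is a minimal sketch, assuming the intent is to preserve every row of the second EVENT reference (alias `b`) and null-extend the first (alias `a`):

```
-- equivalent left outer self-join using FROM clause syntax (sketch)
select count(*)
from event b
left outer join event a on a.eventid = b.catid;
```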
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_WHERE_oracle_outer.md
f589ef24ab76-2
+ If the WHERE clause outer-join condition compares a column from TABLE2 with a constant, apply the (+) operator to the column. If you don't include the operator, the outer-joined rows from TABLE1, which contain nulls for the restricted column, are eliminated. See the Examples section below.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_WHERE_oracle_outer.md
657c3cf23669-0
The following join query specifies a left outer join of the SALES and LISTING tables over their LISTID columns:

```
select count(*)
from sales, listing
where sales.listid = listing.listid(+);

count
--------
172456
(1 row)
```

The following equivalent query produces the same result but uses FROM clause join syntax:

```
select count(*)
from sales left outer join listing on sales.listid = listing.listid;

count
--------
172456
(1 row)
```

The SALES table doesn't contain records for all listings in the LISTING table because not all listings result in sales. The following query outer-joins SALES and LISTING and returns rows from LISTING even when the SALES table reports no sales for a given list ID. The PRICE and COMM columns, derived from the SALES table, contain nulls in the result set for those non-matching rows.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_WHERE_oracle_outer.md
657c3cf23669-1
```
select listing.listid, sum(pricepaid) as price, sum(commission) as comm
from listing, sales
where sales.listid(+) = listing.listid and listing.listid between 1 and 5
group by 1
order by 1;

listid | price  | comm
-------+--------+--------
     1 | 728.00 | 109.20
     2 |        |
     3 |        |
     4 |  76.00 |  11.40
     5 | 525.00 |  78.75
(5 rows)
```

Note that when the WHERE clause join operator is used, the order of the tables in the FROM clause doesn't matter.

An example of a more complex outer join condition in the WHERE clause is the case where the condition consists of a comparison between two table columns *and* a comparison with a constant:

```
where category.catid=event.catid(+) and eventid(+)=796;
```

Note that the (+) operator is used in two places: first in the equality comparison between the tables and second in the comparison condition for the EVENTID column. The result of this syntax is the preservation of the outer-joined rows when the restriction on EVENTID is evaluated. If you remove the (+) operator from the EVENTID restriction, the query treats this restriction as a filter, not as part of the outer-join condition. In turn, the outer-joined rows that contain nulls for EVENTID are eliminated from the result set.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_WHERE_oracle_outer.md
657c3cf23669-2
Here is a complete query that illustrates this behavior:

```
select catname, catgroup, eventid
from category, event
where category.catid=event.catid(+) and eventid(+)=796;

catname   | catgroup | eventid
----------+----------+---------
Classical | Concerts |
Jazz      | Concerts |
MLB       | Sports   |
MLS       | Sports   |
Musicals  | Shows    | 796
NBA       | Sports   |
NFL       | Sports   |
NHL       | Sports   |
Opera     | Shows    |
Plays     | Shows    |
Pop       | Concerts |
(11 rows)
```

The equivalent query using FROM clause syntax is as follows:

```
select catname, catgroup, eventid
from category left join event
on category.catid=event.catid and eventid=796;
```

If you remove the second (+) operator from the WHERE clause version of this query, it returns only 1 row (the row where `eventid=796`):

```
select catname, catgroup, eventid
from category, event
where category.catid=event.catid(+) and eventid=796;

catname  | catgroup | eventid
---------+----------+---------
Musicals | Shows    | 796
(1 row)
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_WHERE_oracle_outer.md
1d037adc11c5-0
Returns the query ID of the most recently executed UNLOAD command in the current session. If no UNLOAD commands have been executed in the current session, PG_LAST_UNLOAD_ID returns -1.

The value for PG_LAST_UNLOAD_ID is updated when the UNLOAD command begins the unload process. If the UNLOAD fails because of invalid data, the UNLOAD ID is updated, so you can use the UNLOAD ID for further investigation. If the UNLOAD transaction is rolled back, the UNLOAD ID is not updated.

The UNLOAD ID is not updated if the UNLOAD command fails because of an error that occurs before the unload process begins, such as a syntax error, access error, invalid credentials, or insufficient privileges.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/PG_LAST_UNLOAD_ID.md
055f9a68776e-0
```
PG_LAST_UNLOAD_ID()
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/PG_LAST_UNLOAD_ID.md
fac1cae76bf6-0
Returns an integer.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/PG_LAST_UNLOAD_ID.md
da33a2d8aee5-0
The following query returns the query ID of the latest UNLOAD command in the current session.

```
select PG_LAST_UNLOAD_ID();

PG_LAST_UNLOAD_ID
-------------------
5437
(1 row)
```
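If you want to inspect the statement text behind that ID, one possibility is to join it against the STL_QUERY system table. This is a sketch, not from the original page; it assumes the UNLOAD statement is recorded in STL_QUERY's `querytxt` column:

```
-- look up the SQL text of the most recent UNLOAD in this session (sketch)
select query, trim(querytxt) as sql_text
from stl_query
where query = pg_last_unload_id();
```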
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/PG_LAST_UNLOAD_ID.md
8ca2745152dd-0
Amazon Redshift supports standard data manipulation language (DML) commands (INSERT, UPDATE, and DELETE) that you can use to modify rows in tables. You can also use the TRUNCATE command to do fast bulk deletes.

**Note**
We strongly encourage you to use the [COPY](r_COPY.md) command to load large amounts of data. Using individual INSERT statements to populate a table might be prohibitively slow. Alternatively, if your data already exists in other Amazon Redshift database tables, use INSERT INTO ... SELECT FROM or CREATE TABLE AS to improve performance, as the sketch that follows shows. For information, see [INSERT](r_INSERT_30.md) or [CREATE TABLE AS](r_CREATE_TABLE_AS.md).

If you insert, update, or delete a significant number of rows in a table, relative to the number of rows before the changes, run the ANALYZE and VACUUM commands against the table when you are done. If a number of small changes accumulate over time in your application, you might want to schedule the ANALYZE and VACUUM commands to run at regular intervals. For more information, see [Analyzing tables](t_Analyzing_tables.md) and [Vacuuming tables](t_Reclaiming_storage_space202.md).
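As a hedged illustration of the INSERT INTO ... SELECT FROM and CREATE TABLE AS patterns, the following sketch assumes a hypothetical `sales_staging` table with the same layout as SALES:

```
-- bulk-populate from an existing table; set-based and far faster than row-by-row INSERTs (sketch)
insert into sales
select * from sales_staging;

-- or create and load a new table in a single step
create table sales_copy as
select * from sales_staging;
```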
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/t_Updating_tables_with_DML_commands.md
4c2c6e1fb2f6-0
Synonym of the BEGIN function. See [BEGIN](r_BEGIN.md).
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_START_TRANSACTION.md
ded1c253a123-0
Updates table statistics for use by the query planner.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_ANALYZE.md
8130e3f387e3-0
```
ANALYZE [ VERBOSE ]
[ [ table_name [ ( column_name [, ...] ) ] ]
[ PREDICATE COLUMNS | ALL COLUMNS ]
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_ANALYZE.md
9c7bd75b5c86-0
VERBOSE
A clause that returns progress information messages about the ANALYZE operation. This option is useful when you don't specify a table.

*table_name*
You can analyze specific tables, including temporary tables. You can qualify the table with its schema name. You can optionally specify a *table_name* to analyze a single table. You can't specify more than one *table_name* with a single ANALYZE *table_name* statement. If you don't specify a *table_name* value, all of the tables in the currently connected database are analyzed, including the persistent tables in the system catalog. Amazon Redshift skips analyzing a table if the percentage of rows that have changed since the last ANALYZE is lower than the analyze threshold. For more information, see [Analyze threshold](#r_ANALYZE-threshold).
You don't need to analyze Amazon Redshift system tables (STL and STV tables).

*column_name*
If you specify a *table_name*, you can also specify one or more columns in the table (as a comma-separated list within parentheses). If a column list is specified, only the listed columns are analyzed.

PREDICATE COLUMNS | ALL COLUMNS
Clauses that indicate whether ANALYZE should include only predicate columns. Specify PREDICATE COLUMNS to analyze only columns that have been used as predicates in previous queries or are likely candidates to be used as predicates. Specify ALL COLUMNS to analyze all columns. The default is ALL COLUMNS.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_ANALYZE.md
9c7bd75b5c86-1
A column is included in the set of predicate columns if any of the following is true:
+ The column has been used in a query as a part of a filter, join condition, or group by clause.
+ The column is a distribution key.
+ The column is part of a sort key.

If no columns are marked as predicate columns, for example because the table has not yet been queried, all of the columns are analyzed even when PREDICATE COLUMNS is specified. For more information about predicate columns, see [Analyzing tables](t_Analyzing_tables.md).
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_ANALYZE.md
1a8717f7827a-0
Amazon Redshift automatically runs ANALYZE on tables that you create with the following commands:
+ CREATE TABLE AS
+ CREATE TEMP TABLE AS
+ SELECT INTO

You can't analyze an external table.

You don't need to run the ANALYZE command on these tables when they are first created. If you modify them, you should analyze them in the same way as other tables, as the sketch that follows illustrates.
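A minimal sketch of that lifecycle, assuming a hypothetical `eventcopy` table derived from the EVENT table:

```
-- analyzed automatically at creation time
create table eventcopy as select * from event;

-- after significant modifications, analyze it like any other table
delete from eventcopy where eventid < 1000;
analyze eventcopy;
```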
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_ANALYZE.md
17fda1a9a3b8-0
To reduce processing time and improve overall system performance, Amazon Redshift skips ANALYZE for a table if the percentage of rows that have changed since the last ANALYZE command run is lower than the analyze threshold specified by the [analyze_threshold_percent](r_analyze_threshold_percent.md) parameter. By default, `analyze_threshold_percent` is 10. To change `analyze_threshold_percent` for the current session, execute the [SET](r_SET.md) command. The following example changes `analyze_threshold_percent` to 20 percent.

```
set analyze_threshold_percent to 20;
```

To analyze tables when only a small number of rows have changed, set `analyze_threshold_percent` to an arbitrarily small number. For example, if you set `analyze_threshold_percent` to 0.01, then a table with 100,000,000 rows isn't skipped if at least 10,000 rows have changed.

```
set analyze_threshold_percent to 0.01;
```

If ANALYZE skips a table because it doesn't meet the analyze threshold, Amazon Redshift returns the following message.

```
ANALYZE SKIP
```

To analyze all tables even if no rows have changed, set `analyze_threshold_percent` to 0.

To view the results of ANALYZE operations, query the [STL_ANALYZE](r_STL_ANALYZE.md) system table.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_ANALYZE.md
17fda1a9a3b8-1
For more information about analyzing tables, see [Analyzing tables](t_Analyzing_tables.md).
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_ANALYZE.md
c3ccc242dffc-0
Analyze all of the tables in the TICKIT database and return progress information.

```
analyze verbose;
```

Analyze the LISTING table only.

```
analyze listing;
```

Analyze the VENUEID and VENUENAME columns in the VENUE table.

```
analyze venue(venueid, venuename);
```

Analyze only predicate columns in the VENUE table.

```
analyze venue predicate columns;
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_ANALYZE.md
98a5a3c77d4f-0
JSON_ARRAY_LENGTH returns the number of elements in the outer array of a JSON string. If the *null_if_invalid* argument is set to `true` and the JSON string is invalid, the function returns NULL instead of returning an error.

For more information, see [JSON functions](json-functions.md).
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/JSON_ARRAY_LENGTH.md
da703af580e7-0
```
json_array_length('json_array' [, null_if_invalid ] )
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/JSON_ARRAY_LENGTH.md
4585fd5bc2b5-0
*json_array*
A properly formatted JSON array.

*null_if_invalid*
A Boolean value that specifies whether to return NULL if the input JSON string is invalid instead of returning an error. To return NULL if the JSON is invalid, specify `true` (`t`). To return an error if the JSON is invalid, specify `false` (`f`). The default is `false`.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/JSON_ARRAY_LENGTH.md
d0a422513a45-0
INTEGER
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/JSON_ARRAY_LENGTH.md
55c2bc7cc8a6-0
The following example returns the number of elements in the array:

```
select json_array_length('[11,12,13,{"f1":21,"f2":[25,26]},14]');

json_array_length
-----------------
5
```

The following example returns an error because the JSON is invalid.

```
select json_array_length('[11,12,13,{"f1":21,"f2":[25,26]},14');

An error occurred when executing the SQL command:
select json_array_length('[11,12,13,{"f1":21,"f2":[25,26]},14')
```

The following example sets *null_if_invalid* to *true*, so the statement returns NULL instead of returning an error for invalid JSON.

```
select json_array_length('[11,12,13,{"f1":21,"f2":[25,26]},14',true);

json_array_length
-----------------

```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/JSON_ARRAY_LENGTH.md
2d3c577af981-0
As you test system performance before and after tuning your tables, you will record the following details:
+ Load time
+ Storage use
+ Query performance

The examples in this tutorial are based on using a four-node dw2.large cluster. Your results will be different, even if you use the same cluster configuration. System performance is influenced by many factors, and no two systems will perform exactly the same.

You will record your results using the following benchmarks table.

[See the AWS documentation website for more details](http://docs.aws.amazon.com/redshift/latest/dg/tutorial-tuning-tables-test-performance.html)
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-tuning-tables-test-performance.md
180e25198793-0
1. Note the cumulative load time for all five tables and enter it in the benchmarks table in the Before column. This is the value you noted in the previous step.

1. Record storage use.

   Determine how many 1 MB blocks of disk space are used for each table by querying the STV_BLOCKLIST table and record the results in your benchmarks table.

   ```
   select stv_tbl_perm.name as table, count(*) as mb
   from stv_blocklist, stv_tbl_perm
   where stv_blocklist.tbl = stv_tbl_perm.id
   and stv_blocklist.slice = stv_tbl_perm.slice
   and stv_tbl_perm.name in ('lineorder','part','customer','dwdate','supplier')
   group by stv_tbl_perm.name
   order by 1 asc;
   ```

   Your results should look similar to this:

   ```
   table     | mb
   ----------+------
   customer  | 384
   dwdate    | 160
   lineorder | 51024
   part      | 200
   supplier  | 152
   ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-tuning-tables-test-performance.md
180e25198793-1
1. Test query performance.

   The first time you run a query, Amazon Redshift compiles the code, and then sends compiled code to the compute nodes. When you compare the execution times for queries, you should not use the results for the first time you execute the query. Instead, compare the times for the second execution of each query. For more information, see [Factors affecting query performance](c-query-performance.md).

   **Note**
   To reduce query execution time and improve system performance, Amazon Redshift caches the results of certain types of queries in memory on the leader node. When result caching is enabled, subsequent queries run much faster, which invalidates performance comparisons. To disable result caching for the current session, set the [enable_result_cache_for_session](r_enable_result_cache_for_session.md) parameter to `off`, as shown following.

   ```
   set enable_result_cache_for_session to off;
   ```

   Run the following queries twice to eliminate compile time. Record the second time for each query in the benchmarks table.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-tuning-tables-test-performance.md
180e25198793-2
   ```
   -- Query 1
   -- Restrictions on only one dimension.
   select sum(lo_extendedprice*lo_discount) as revenue
   from lineorder, dwdate
   where lo_orderdate = d_datekey
   and d_year = 1997
   and lo_discount between 1 and 3
   and lo_quantity < 24;

   -- Query 2
   -- Restrictions on two dimensions
   select sum(lo_revenue), d_year, p_brand1
   from lineorder, dwdate, part, supplier
   where lo_orderdate = d_datekey
   and lo_partkey = p_partkey
   and lo_suppkey = s_suppkey
   and p_category = 'MFGR#12'
   and s_region = 'AMERICA'
   group by d_year, p_brand1
   order by d_year, p_brand1;
   ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-tuning-tables-test-performance.md
180e25198793-3
   ```
   -- Query 3
   -- Drill down in time to just one month
   select c_city, s_city, d_year, sum(lo_revenue) as revenue
   from customer, lineorder, supplier, dwdate
   where lo_custkey = c_custkey
   and lo_suppkey = s_suppkey
   and lo_orderdate = d_datekey
   and (c_city='UNITED KI1' or c_city='UNITED KI5')
   and (s_city='UNITED KI1' or s_city='UNITED KI5')
   and d_yearmonth = 'Dec1997'
   group by c_city, s_city, d_year
   order by d_year asc, revenue desc;
   ```

   Your results for the second time will look something like this:

   ```
   SELECT executed successfully

   Execution time: 6.97s
   (Statement 1 of 3 finished)

   SELECT executed successfully

   Execution time: 12.81s
   (Statement 2 of 3 finished)

   SELECT executed successfully

   Execution time: 13.39s
   (Statement 3 of 3 finished)

   Script execution finished
   Total script execution time: 33.17s
   ```

The following benchmarks table shows the example results for the cluster used in this tutorial.

[See the AWS documentation website for more details](http://docs.aws.amazon.com/redshift/latest/dg/tutorial-tuning-tables-test-performance.html)
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-tuning-tables-test-performance.md
fa617b6efca5-0
[Step 3: Select sort keys](tutorial-tuning-tables-sort-keys.md)
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-tuning-tables-test-performance.md
9a73f546a14d-0
For each group in a query, the LISTAGG window function orders the rows for that group according to the ORDER BY expression, then concatenates the values into a single string.

LISTAGG is a compute-node only function. The function returns an error if the query doesn't reference a user-defined table or Amazon Redshift system table.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_WF_LISTAGG.md
b9f3211260e5-0
```
LISTAGG( [DISTINCT] expression [, 'delimiter' ] )
[ WITHIN GROUP (ORDER BY order_list) ]
OVER ( [PARTITION BY partition_expression] )
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_WF_LISTAGG.md
8794c8fedb8f-0
DISTINCT
(Optional) A clause that eliminates duplicate values from the specified expression before concatenating. Trailing spaces are ignored, so the strings `'a'` and `'a '` are treated as duplicates. LISTAGG uses the first value encountered. For more information, see [Significance of trailing blanks](r_Character_types.md#r_Character_types-significance-of-trailing-blanks).

*expression*
Any valid expression (such as a column name) that provides the values to aggregate. NULL values and empty strings are ignored.

*delimiter*
(Optional) The string constant that separates the concatenated values. The default is NULL.

WITHIN GROUP (ORDER BY *order_list*)
(Optional) A clause that specifies the sort order of the aggregated values. Deterministic only if ORDER BY provides unique ordering. The default is to aggregate all rows and return a single value.

OVER
A clause that specifies the window partitioning. The OVER clause cannot contain a window ordering or window frame specification.

PARTITION BY *partition_expression*
(Optional) Sets the range of records for each group in the OVER clause.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_WF_LISTAGG.md
9a49a6b3f71e-0
VARCHAR(MAX). If the result set is larger than the maximum VARCHAR size (64K – 1, or 65535), then LISTAGG returns the following error:

```
Invalid operation: Result size exceeds LISTAGG limit
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_WF_LISTAGG.md
d8c5d336eb6c-0
The following examples use the WINSALES table. For a description of the WINSALES table, see [Overview example for window functions](c_Window_functions.md#r_Window_function_example).

The following example returns a list of seller IDs, ordered by seller ID.

```
select listagg(sellerid)
within group (order by sellerid)
over() from winsales;

listagg
------------
11122333344
...
...
11122333344
11122333344
(11 rows)
```

The following example returns a list of seller IDs for buyer B, ordered by date.

```
select listagg(sellerid)
within group (order by dateid)
over () as seller
from winsales
where buyerid = 'b' ;

seller
---------
3233
3233
3233
3233
(4 rows)
```

The following example returns a comma-separated list of sales dates for buyer B.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_WF_LISTAGG.md
d8c5d336eb6c-1
```
select listagg(dateid,',')
within group (order by sellerid desc,salesid asc)
over () as dates
from winsales
where buyerid = 'b';

dates
-------------------------------------------
2003-08-02,2004-04-18,2004-04-18,2004-02-12
2003-08-02,2004-04-18,2004-04-18,2004-02-12
2003-08-02,2004-04-18,2004-04-18,2004-02-12
2003-08-02,2004-04-18,2004-04-18,2004-02-12
(4 rows)
```

The following example uses DISTINCT to return a list of unique sales dates for buyer B.

```
select listagg(distinct dateid,',')
within group (order by sellerid desc,salesid asc)
over () as dates
from winsales
where buyerid = 'b';

dates
--------------------------------
2003-08-02,2004-04-18,2004-02-12
2003-08-02,2004-04-18,2004-02-12
2003-08-02,2004-04-18,2004-02-12
2003-08-02,2004-04-18,2004-02-12
(4 rows)
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_WF_LISTAGG.md
d8c5d336eb6c-2
The following example returns a comma-separated list of sales IDs for each buyer ID.

```
select buyerid,
listagg(salesid,',')
within group (order by salesid)
over (partition by buyerid) as sales_id
from winsales
order by buyerid;

buyerid | sales_id
--------+------------------------
a       |10005,40001,40005
a       |10005,40001,40005
a       |10005,40001,40005
b       |20001,30001,30004,30003
b       |20001,30001,30004,30003
b       |20001,30001,30004,30003
b       |20001,30001,30004,30003
c       |10001,20002,30007,10006
c       |10001,20002,30007,10006
c       |10001,20002,30007,10006
c       |10001,20002,30007,10006
(11 rows)
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_WF_LISTAGG.md
cd1fd27fbb9d-0
If a user issues a query that is taking too long or is consuming excessive cluster resources, you might need to cancel the query. For example, a user might want to create a list of ticket sellers that includes the seller's name and quantity of tickets sold. The following query selects data from the SALES and USERS tables and joins the two tables by matching SELLERID and USERID in the WHERE clause.

```
select sellerid, firstname, lastname, sum(qtysold)
from sales, users
where sales.sellerid = users.userid
group by sellerid, firstname, lastname
order by 4 desc;
```

**Note**
This is a complex query. For this tutorial, you don't need to worry about how this query is constructed.

The previous query runs in seconds and returns 2,102 rows.

Suppose the user forgets to put in the WHERE clause.

```
select sellerid, firstname, lastname, sum(qtysold)
from sales, users
group by sellerid, firstname, lastname
order by 4 desc;
```

The result set will include all of the rows in the SALES table multiplied by all the rows in the USERS table (49989*3766). This is called a Cartesian join, and it is not recommended. The result is over 188 million rows and takes a long time to run.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/cancel_query.md
cd1fd27fbb9d-1
To cancel a running query, use the CANCEL command with the query's PID.

To find the process ID, query the STV_RECENTS table, as shown in the previous step. The following example shows how you can make the results more readable by using the TRIM function to trim trailing spaces and by showing only the first 20 characters of the query string.

```
select pid, trim(user_name), starttime, substring(query,1,20)
from stv_recents
where status='Running';
```

The result looks something like this:

```
pid   | btrim      | starttime                  | substring
------+------------+----------------------------+----------------------
18764 | masteruser | 2013-03-28 18:39:49.355918 | select sellerid, fir
(1 row)
```

To cancel the query with PID 18764, issue the following command:

```
cancel 18764;
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/cancel_query.md
cd1fd27fbb9d-2
**Note**
The CANCEL command will not abort a transaction. To abort or roll back a transaction, you must use the ABORT or ROLLBACK command. To cancel a query associated with a transaction, first cancel the query, then abort the transaction.

If the query that you canceled is associated with a transaction, use the ABORT or ROLLBACK command to cancel the transaction and discard any changes made to the data:

```
abort;
```

Unless you are signed on as a superuser, you can cancel only your own queries. A superuser can cancel all queries.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/cancel_query.md
3fa787e47018-0
If your query tool does not support running queries concurrently, you will need to start another session to cancel the query. For example, SQLWorkbench, which is the query tool we use in the Amazon Redshift Getting Started, does not support multiple concurrent queries. To start another session using SQLWorkbench, select File, New Window and connect using the same connection parameters. Then you can find the PID and cancel the query.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/cancel_query.md
3d7afb4134a6-0
If your current session has too many queries running concurrently, you might not be able to run the CANCEL command until another query finishes. In that case, you will need to issue the CANCEL command using a different workload management query queue.

Workload management enables you to execute queries in different query queues so that you don't need to wait for another query to complete. The workload manager creates a separate queue, called the Superuser queue, that you can use for troubleshooting. To use the Superuser queue, you must be logged on as a superuser and set the query group to 'superuser' using the SET command. After running your commands, reset the query group using the RESET command.

To cancel a query using the Superuser queue, issue these commands:

```
set query_group to 'superuser';
cancel 18764;
reset query_group;
```

For information about managing query queues, see [Implementing workload management](cm-c-implementing-workload-management.md).
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/cancel_query.md
7c481555cfc5-0
Use the SVCS_S3QUERY_SUMMARY view to get a summary of all Redshift Spectrum queries (S3 queries) that have been run on the system. One segment can perform one external table scan.

**Note**
System views with the prefix SVCS provide details about queries on both the main and concurrency scaling clusters. The views are similar to the views with the prefix SVL except that the SVL views provide information only for queries run on the main cluster.

SVCS_S3QUERY_SUMMARY is visible to all users. Superusers can see all rows; regular users can see only their own data. For more information, see [Visibility of data in system tables and views](c_visibility-of-data.md).

For information about SVL_S3QUERY, see [SVL_S3QUERY](r_SVL_S3QUERY.md).
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVCS_S3QUERY_SUMMARY.md
e4a6174a87ab-0
[See the AWS documentation website for more details](http://docs.aws.amazon.com/redshift/latest/dg/r_SVCS_S3QUERY_SUMMARY.html)
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVCS_S3QUERY_SUMMARY.md
9e2d45df996e-0
The following example gets the scan step details for the last query run.

```
select query, segment, elapsed, s3_scanned_rows, s3_scanned_bytes,
s3query_returned_rows, s3query_returned_bytes, files
from svcs_s3query_summary
where query = pg_last_query_id()
order by query,segment;

query | segment | elapsed | s3_scanned_rows | s3_scanned_bytes | s3query_returned_rows | s3query_returned_bytes | files
------+---------+---------+-----------------+------------------+-----------------------+------------------------+------
 4587 |       2 |   67811 |               0 |                0 |                     0 |                      0 |     0
 4587 |       2 |  591568 |          172462 |         11260097 |                  8513 |                 170260 |     1
 4587 |       2 |  216849 |               0 |                0 |                     0 |                      0 |     0
 4587 |       2 |  216671 |               0 |                0 |                     0 |                      0 |     0
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVCS_S3QUERY_SUMMARY.md
7d0bdd5cdec5-0
Your cluster continues to accrue charges as long as it is running. When you have completed this tutorial, you should return your environment to the previous state by following the steps in [Step 5: Revoke Access and Delete Your Sample Cluster](https://docs.aws.amazon.com/redshift/latest/gsg/rs-gsg-clean-up-tasks.html) in the *Amazon Redshift Getting Started*.

If you want to keep the cluster, but recover the storage used by the SSB tables, execute the following commands.

```
drop table part cascade;
drop table supplier cascade;
drop table customer cascade;
drop table dwdate cascade;
drop table lineorder cascade;
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-tuning-tables-clean-up.md
0a758a2d65c0-0
[Summary](tutorial-tuning-tables-summary.md)
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-tuning-tables-clean-up.md
4389b64db63f-0
Amazon Redshift splits the results of a select statement across a set of files, one or more files per node slice, to simplify parallel reloading of the data. Alternatively, you can specify that [UNLOAD](r_UNLOAD.md) should write the results serially to one or more files by adding the PARALLEL OFF option. You can limit the size of the files in Amazon S3 by specifying the MAXFILESIZE parameter. UNLOAD automatically encrypts data files using Amazon S3 server-side encryption (SSE-S3).

You can use any select statement in the UNLOAD command that Amazon Redshift supports, except for a select that uses a LIMIT clause in the outer select. For example, you can use a select statement that includes specific columns or that uses a where clause to join multiple tables. If your query contains quotes (enclosing literal values, for example), you need to escape them in the query text (`\'`). For more information, see the [SELECT](r_SELECT_synopsis.md) command reference. For more information about using a LIMIT clause, see the [Usage notes](r_UNLOAD.md#unload-usage-notes) for the UNLOAD command.

For example, the following UNLOAD command sends the contents of the VENUE table to the Amazon S3 bucket `s3://mybucket/tickit/unload/`.

```
unload ('select * from venue')
to 's3://mybucket/tickit/unload/venue_'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole';
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/t_Unloading_tables.md
4389b64db63f-1
The file names created by the previous example include the prefix '`venue_`'.

```
venue_0000_part_00
venue_0001_part_00
venue_0002_part_00
venue_0003_part_00
```

By default, UNLOAD writes data in parallel to multiple files, according to the number of slices in the cluster. To write data to a single file, specify PARALLEL OFF. UNLOAD writes the data serially, sorted absolutely according to the ORDER BY clause, if one is used. The maximum size for a data file is 6.2 GB. If the data size is greater than the maximum, UNLOAD creates additional files, up to 6.2 GB each.

The following example writes the contents of VENUE to a single file. Only one file is required because the file size is less than 6.2 GB.

```
unload ('select * from venue')
to 's3://mybucket/tickit/unload/venue_'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
parallel off;
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/t_Unloading_tables.md
4389b64db63f-2
**Note**
The UNLOAD command is designed to use parallel processing. We recommend leaving PARALLEL enabled for most cases, especially if the files will be used to load tables using a COPY command.

Assuming the total data size for VENUE is 5 GB, the following example writes the contents of VENUE to 50 files, each 100 MB in size.

```
unload ('select * from venue')
to 's3://mybucket/tickit/unload/venue_'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
parallel off
maxfilesize 100 mb;
```

If you include a prefix in the Amazon S3 path string, UNLOAD will use that prefix for the file names.

```
unload ('select * from venue')
to 's3://mybucket/tickit/unload/venue_'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole';
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/t_Unloading_tables.md
4389b64db63f-3
You can limit the access users have to your data by using temporary security credentials. Temporary security credentials provide enhanced security because they have short life spans and cannot be reused after they expire. A user who has these temporary security credentials can access your resources only until the credentials expire. For more information, see [Temporary security credentials](copy-usage_notes-access-permissions.md#r_copy-temporary-security-credentials). To unload data using temporary access credentials, use the following syntax:

```
unload ('select * from venue')
to 's3://mybucket/tickit/venue_'
access_key_id '<access-key-id>'
secret_access_key '<secret-access-key>'
session_token '<temporary-token>';
```

**Important**
The temporary security credentials must be valid for the entire duration of the UNLOAD statement. If the temporary security credentials expire during the unload process, the UNLOAD will fail and the transaction will be rolled back. For example, if temporary security credentials expire after 15 minutes and the UNLOAD requires one hour, the UNLOAD will fail before it completes.

You can create a manifest file that lists the unload files by specifying the MANIFEST option in the UNLOAD command. The manifest is a text file in JSON format that explicitly lists the URL of each file that was written to Amazon S3.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/t_Unloading_tables.md
4389b64db63f-4
The following example includes the manifest option.

```
unload ('select * from venue')
to 's3://mybucket/tickit/venue_'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
manifest;
```

The following example shows a manifest for four unload files.

```
{
  "entries": [
    {"url":"s3://mybucket/tickit/venue_0000_part_00"},
    {"url":"s3://mybucket/tickit/venue_0001_part_00"},
    {"url":"s3://mybucket/tickit/venue_0002_part_00"},
    {"url":"s3://mybucket/tickit/venue_0003_part_00"}
  ]
}
```

The manifest file can be used to load the same files by using a COPY with the MANIFEST option. For more information, see [Using a manifest to specify data files](loading-data-files-using-manifest.md).
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/t_Unloading_tables.md
4389b64db63f-5
After you complete an UNLOAD operation, confirm that the data was unloaded correctly by navigating to the Amazon S3 bucket where UNLOAD wrote the files. You will see one or more numbered files per slice, starting with the number zero. If you specified the MANIFEST option, you will also see a file ending with '`manifest`'. For example:

```
mybucket/tickit/venue_0000_part_00
mybucket/tickit/venue_0001_part_00
mybucket/tickit/venue_0002_part_00
mybucket/tickit/venue_0003_part_00
mybucket/tickit/venue_manifest
```

You can programmatically get a list of the files that were written to Amazon S3 by calling an Amazon S3 list operation after the UNLOAD completes; however, depending on how quickly you issue the call, the list might be incomplete because an Amazon S3 list operation is eventually consistent. To get a complete, authoritative list immediately, query STL_UNLOAD_LOG.

The following query returns the pathname for files that were created by an UNLOAD. The [PG_LAST_QUERY_ID](PG_LAST_QUERY_ID.md) function returns the most recent query.

```
select query, substring(path,0,40) as path
from stl_unload_log
where query=2320
order by path;

query |                path
------+--------------------------------------
 2320 | s3://my-bucket/venue0000_part_00
 2320 | s3://my-bucket/venue0001_part_00
 2320 | s3://my-bucket/venue0002_part_00
 2320 | s3://my-bucket/venue0003_part_00
(4 rows)
```
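Rather than hard-coding the query ID (2320 above), you can plug PG_LAST_QUERY_ID directly into the filter. This is a sketch that assumes the UNLOAD was the most recent statement run in the session:

```
-- list the files written by the most recent query in this session (sketch)
select query, substring(path,0,40) as path
from stl_unload_log
where query = pg_last_query_id()
order by path;
```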
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/t_Unloading_tables.md
4389b64db63f-6
If the amount of data is very large, Amazon Redshift might split the files into multiple parts per slice. For example:

```
venue_0000_part_00
venue_0000_part_01
venue_0000_part_02
venue_0001_part_00
venue_0001_part_01
venue_0001_part_02
...
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/t_Unloading_tables.md
4389b64db63f-7
The following UNLOAD command includes a quoted string in the select statement, so the quotes are escaped (`=\'OH\' '`).

```
unload ('select venuename, venuecity from venue where venuestate=\'OH\' ')
to 's3://mybucket/tickit/venue/ '
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole';
```

By default, UNLOAD will fail rather than overwrite existing files in the destination bucket. To overwrite the existing files, including the manifest file, specify the ALLOWOVERWRITE option.

```
unload ('select * from venue')
to 's3://mybucket/venue_pipe_'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
manifest
allowoverwrite;
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/t_Unloading_tables.md
b63b9e3059e9-0
ST_AsGeoJSON returns the GeoJSON representation of an input geometry. For more information about GeoJSON, see [GeoJSON](https://en.wikipedia.org/wiki/GeoJSON) in Wikipedia.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_AsGeoJSON-function.md
c4efc7089951-0
```
ST_AsGeoJSON(geom)
```

```
ST_AsGeoJSON(geom, precision)
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_AsGeoJSON-function.md
9d26b5395d35-0
*geom*
A value of data type `GEOMETRY` or an expression that evaluates to a `GEOMETRY` type.

*precision*
A value of data type `INTEGER`. The coordinates of *geom* are displayed using the specified precision, in the range 1–20. If *precision* is not specified, the default is 15.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_AsGeoJSON-function.md
c155813a9467-0
`VARCHAR`

If *geom* is null, then null is returned.

If *precision* is null, then null is returned.

If the result is larger than a 64-KB `VARCHAR`, then an error is returned.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_AsGeoJSON-function.md
40f8200309a2-0
The following SQL returns the GeoJSON representation of a linestring.

```
SELECT ST_AsGeoJSON(ST_GeomFromText('LINESTRING(3.141592653589793 -6.283185307179586,2.718281828459045 -1.414213562373095)'));
```

```
st_asgeojson
----------------------------------------------------------------------------------------------------------------
{"type":"LineString","coordinates":[[3.14159265358979,-6.28318530717959],[2.71828182845905,-1.41421356237309]]}
```

The following SQL returns the GeoJSON representation of a linestring. The coordinates of the geometries are displayed with six digits of precision.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_AsGeoJSON-function.md
40f8200309a2-1
```
SELECT ST_AsGeoJSON(ST_GeomFromText('LINESTRING(3.141592653589793 -6.283185307179586,2.718281828459045 -1.414213562373095)'), 6);
```

```
st_asgeojson
-----------------------------------------------------------------------------
{"type":"LineString","coordinates":[[3.14159,-6.28319],[2.71828,-1.41421]]}
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_AsGeoJSON-function.md
70adc79e7aa0-0
Before analyzing the query plan, you should be familiar with how to read it. If you are unfamiliar with reading a query plan, we recommend that you read [Query plan](c-the-query-plan.md) before proceeding.

Run the [EXPLAIN](r_EXPLAIN.md) command to get a query plan. To analyze the data provided by the query plan, follow these steps:

1. Identify the steps with the highest cost. Concentrate on optimizing those when proceeding through the remaining steps.

1. Look at the join types:
   + **Nested Loop**: Such joins usually occur because a join condition was omitted. For recommended solutions, see [Nested loop](query-performance-improvement-opportunities.md#nested-loop).
   + **Hash and Hash Join**: Hash joins are used when joining tables where the join columns are not distribution keys and also not sort keys. For recommended solutions, see [Hash join](query-performance-improvement-opportunities.md#hash-join).
   + **Merge Join**: No change is needed.

1. Notice which table is used for the inner join, and which for the outer join. The query engine generally chooses the smaller table for the inner join, and the larger table for the outer join. If such a choice doesn't occur, your statistics are likely out of date. For recommended solutions, see [Table statistics missing or out of date](query-performance-improvement-opportunities.md#table-statistics-missing-or-out-of-date).
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c-analyzing-the-query-plan.md
70adc79e7aa0-1
1. See if there are any high-cost sort operations. If there are, see [Unsorted or missorted rows](query-performance-improvement-opportunities.md#unsorted-or-mis-sorted-rows) for recommended solutions.

1. Look for the following broadcast operators where there are high-cost operations:
   + **DS_BCAST_INNER**: Indicates the table is broadcast to all the compute nodes, which is fine for a small table but not ideal for a larger table.
   + **DS_DIST_ALL_INNER**: Indicates that all of the workload is on a single slice.
   + **DS_DIST_BOTH**: Indicates heavy redistribution.

   For recommended solutions for these situations, see [Suboptimal data distribution](query-performance-improvement-opportunities.md#suboptimal-data-distribution).
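To get a plan to inspect in the first place, a minimal sketch follows; it assumes the TICKIT SALES and EVENT tables used elsewhere in this guide, and the join types and distribution operators described above appear in its output:

```
-- produce a query plan without running the query (sketch)
explain
select eventname, sum(pricepaid)
from sales, event
where sales.eventid = event.eventid
group by eventname;
```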
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c-analyzing-the-query-plan.md
996df350abbb-0
Calculates the median value for the range of values in a window or partition. NULL values in the range are ignored.

MEDIAN is an inverse distribution function that assumes a continuous distribution model.

MEDIAN is a compute-node only function. The function returns an error if the query doesn't reference a user-defined table or Amazon Redshift system table.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_WF_MEDIAN.md
5da2718194d4-0
```
MEDIAN ( median_expression )
OVER ( [ PARTITION BY partition_expression ] )
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_WF_MEDIAN.md
43cba598345a-0
*median_expression*
An expression, such as a column name, that provides the values for which to determine the median. The expression must have either a numeric or datetime data type or be implicitly convertible to one.

OVER
A clause that specifies the window partitioning. The OVER clause cannot contain a window ordering or window frame specification.

PARTITION BY *partition_expression*
Optional. An expression that sets the range of records for each group in the OVER clause.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_WF_MEDIAN.md
90294841d9d3-0
The return type is determined by the data type of *median_expression*. The following table shows the return type for each *median_expression* data type.

[See the AWS documentation website for more details](http://docs.aws.amazon.com/redshift/latest/dg/r_WF_MEDIAN.html)
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_WF_MEDIAN.md
a52fbbf242d9-0
If the *median_expression* argument is a DECIMAL data type defined with the maximum precision of 38 digits, it is possible that MEDIAN will return either an inaccurate result or an error. If the return value of the MEDIAN function exceeds 38 digits, the result is truncated to fit, which causes a loss of precision. If, during interpolation, an intermediate result exceeds the maximum precision, a numeric overflow occurs and the function returns an error. To avoid these conditions, we recommend either using a data type with lower precision or casting the *median_expression* argument to a lower precision.

For example, a SUM function with a DECIMAL argument returns a default precision of 38 digits. The scale of the result is the same as the scale of the argument. So, for example, a SUM of a DECIMAL(5,2) column returns a DECIMAL(38,2) data type.

The following example uses a SUM function in the *median_expression* argument of a MEDIAN function. The data type of the PRICEPAID column is DECIMAL(8,2), so the SUM function returns DECIMAL(38,2).

```
select salesid, sum(pricepaid), median(sum(pricepaid))
over() from sales where salesid < 10 group by salesid;
```

To avoid a potential loss of precision or an overflow error, cast the result to a DECIMAL data type with lower precision, as the following example shows.

```
select salesid, sum(pricepaid), median(sum(pricepaid)::decimal(30,2))
over() from sales where salesid < 10 group by salesid;
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_WF_MEDIAN.md
0b75e9d0509a-0
The following example calculates the median sales quantity for each seller:

```
select sellerid, qty, median(qty)
over (partition by sellerid)
from winsales
order by sellerid;

sellerid | qty | median
---------+-----+-------
       1 |  10 |  10.0
       1 |  10 |  10.0
       1 |  30 |  10.0
       2 |  20 |  20.0
       2 |  20 |  20.0
       3 |  10 |  17.5
       3 |  15 |  17.5
       3 |  20 |  17.5
       3 |  30 |  17.5
       4 |  10 |  25.0
       4 |  40 |  25.0
```

For a description of the WINSALES table, see [Overview example for window functions](c_Window_functions.md#r_Window_function_example).
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_WF_MEDIAN.md
12c08c568499-0
The null condition tests for nulls, that is, when a value is missing or unknown.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_null_condition.md
1bb4b5e4214f-0
```
expression IS [ NOT ] NULL
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_null_condition.md
1dbca097ead7-0
*expression*
Any expression such as a column.

IS NULL
Is true when the expression's value is null and false when it has a value.

IS NOT NULL
Is false when the expression's value is null and true when it has a value.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_null_condition.md
999008ecfde3-0
This example indicates how many times the SALES table contains null in the QTYSOLD field:

```
select count(*) from sales
where qtysold is null;

count
-------
0
(1 row)
```
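For the complementary check, here is a minimal sketch that counts the rows where QTYSOLD does have a value:

```
-- count rows with a non-null quantity (sketch)
select count(*) from sales
where qtysold is not null;
```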
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_null_condition.md
81ecf4916fdc-0
The SQRT function returns the square root of a numeric value.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SQRT.md
c1883176c178-0
```
SQRT (expression)
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SQRT.md
11642327a9f4-0
*expression*
The expression must have an integer, decimal, or floating-point data type.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SQRT.md
a4bf51e4121f-0
SQRT returns a DOUBLE PRECISION number.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SQRT.md
58fe23bedfe2-0
The following example returns the square root for some COMMISSION values from the SALES table. The COMMISSION column is a DECIMAL column.

```
select sqrt(commission)
from sales where salesid < 10 order by salesid;

sqrt
------------------
10.4498803820905
3.37638860322683
7.24568837309472
5.1234753829798
...
```

The following query returns the rounded square root for the same set of COMMISSION values.

```
select salesid, commission, round(sqrt(commission))
from sales where salesid < 10 order by salesid;

salesid | commission | round
--------+------------+-------
      1 |     109.20 |    10
      2 |      11.40 |     3
      3 |      52.50 |     7
      4 |      26.25 |     5
...
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SQRT.md
500fbf58f402-0
Records details about actions resulting from WLM query monitoring rules associated with user-defined queues. For more information, see [WLM query monitoring rules](cm-c-wlm-query-monitoring-rules.md).

This view is visible to all users. Superusers can see all rows; regular users can see only their own data. For more information, see [Visibility of data in system tables and views](c_visibility-of-data.md).
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_WLM_RULE_ACTION.md
4ec3d29276fd-0
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_STL_WLM_RULE_ACTION.html)
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_WLM_RULE_ACTION.md
812be07f5e6e-0
The following example finds queries that were aborted by a query monitoring rule\.

```
select query, rule
from stl_wlm_rule_action 
where action = 'abort'
order by query;
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_WLM_RULE_ACTION.md
0f61691d5e24-0
Use SVV\_TABLES to view tables in local and external catalogs\.

SVV\_TABLES is visible to all users\. Superusers can see all rows; regular users can see only metadata to which they have access\.
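As a sketch of how you might use the view, the following query lists the tables in one schema\. The columns follow the information\_schema\-style layout of SVV\_TABLES, and the schema name `public` is only an illustrative placeholder\.

```
select table_catalog, table_schema, table_name, table_type
from svv_tables
where table_schema = 'public'
order by table_name;
```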
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVV_TABLES.md
4169080b8a79-0
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_SVV_TABLES.html)
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVV_TABLES.md
ba5ecaf372db-0
Because Amazon Redshift is based on PostgreSQL, we previously recommended using the JDBC4 PostgreSQL driver version 8\.4\.703 and psqlODBC version 9\.x drivers\. If you are currently using those drivers, we recommend moving to the new Amazon Redshift–specific drivers\. For more information about drivers and configuring connections, see [JDBC and ODBC Drivers for Amazon Redshift](https://docs.aws.amazon.com/redshift/latest/mgmt/configuring-connections.html#connecting-drivers) in the *Amazon Redshift Cluster Management Guide*\.

To avoid client\-side out\-of\-memory errors when retrieving large data sets using JDBC, you can enable your client to fetch data in batches by setting the JDBC fetch size parameter\. For more information, see [Setting the JDBC fetch size parameter](queries-troubleshooting.md#set-the-JDBC-fetch-size-parameter)\.

Amazon Redshift doesn't recognize the JDBC maxRows parameter\. Instead, specify a [LIMIT](r_ORDER_BY_clause.md#order-by-clause-limit) clause to restrict the result set\. You can also use an [OFFSET](r_ORDER_BY_clause.md#order-by-clause-offset) clause to skip to a specific starting point in the result set\.
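As a sketch of the LIMIT and OFFSET approach, the following query retrieves the third page of 100 rows from the SALES sample table; the page size and starting offset are illustrative\.

```
-- Fetch rows 201-300 instead of relying on the JDBC maxRows parameter.
select salesid, pricepaid
from sales
order by salesid
limit 100 offset 200;
```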
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_redshift-postgres-jdbc.md
a294a65eab48-0
Records errors encountered by a slice while loading a file from Amazon S3\.

Use STL\_S3CLIENT\_ERROR to find details for errors encountered while transferring data from Amazon S3 as part of a COPY command\.

This view is visible to all users\. Superusers can see all rows; regular users can see only their own data\. For more information, see [Visibility of data in system tables and views](c_visibility-of-data.md)\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_S3CLIENT_ERROR.md
9a2c3dea3983-0
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_STL_S3CLIENT_ERROR.html)
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_S3CLIENT_ERROR.md
9414d518e2f5-0
If you see multiple errors with "Connection timed out", you might have a networking issue\. If you're using Enhanced VPC Routing, verify that you have a valid network path between your cluster's VPC and your data resources\. For more information, see [Amazon Redshift Enhanced VPC Routing](https://docs.aws.amazon.com/redshift/latest/mgmt/enhanced-vpc-routing.html)\.
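The following is a minimal sketch for surfacing such timeouts, assuming the error text contains the literal phrase shown above\.

```
select query, sliceid, trim(error) as error
from stl_s3client_error
where error like '%Connection timed out%'
order by query desc;
```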
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_S3CLIENT_ERROR.md
1bffbfa485ad-0
The following query returns the errors from COPY commands executed during the current session\.

```
select query, sliceid, substring(key from 1 for 20) as file,
substring(error from 1 for 35) as error
from stl_s3client_error
where pid = pg_backend_pid()
order by query desc;
```

Result

```
 query  | sliceid |        file        |                error
--------+---------+--------------------+------------------------------------
 362228 |      12 | part.tbl.25.159.gz | transfer closed with 1947655 bytes
 362228 |      24 | part.tbl.15.577.gz | transfer closed with 1881910 bytes
 362228 |       7 | part.tbl.22.600.gz | transfer closed with 700143 bytes r
 362228 |      22 | part.tbl.3.34.gz   | transfer closed with 2334528 bytes
 362228 |      11 | part.tbl.30.274.gz | transfer closed with 699031 bytes r
 362228 |      30 | part.tbl.5.509.gz  | Unknown SSL protocol error in conne
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_S3CLIENT_ERROR.md
1bffbfa485ad-1
 361999 |      10 | part.tbl.23.305.gz | transfer closed with 698959 bytes r
 361999 |      19 | part.tbl.26.582.gz | transfer closed with 1881458 bytes
 361999 |       4 | part.tbl.15.629.gz | transfer closed with 2275907 bytes
 361999 |      20 | part.tbl.6.456.gz  | transfer closed with 692162 bytes r
(10 rows)
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_S3CLIENT_ERROR.md
6d4af176c3c0-0
Records the compile time and location for each segment of each query\.

SVL\_COMPILE is visible to all users\.

For information about SVCS\_COMPILE, see [SVCS\_COMPILE](r_SVCS_COMPILE.md)\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVL_COMPILE.md
3ab62dba01f1-0
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_SVL_COMPILE.html)
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVL_COMPILE.md
e3931ab73301-0
In this example, queries 35878 and 35879 executed the same SQL statement\. The compile column for query 35878 shows `1` for four query segments, which indicates that the segments were compiled\. Query 35879 shows `0` in the compile column for every segment, indicating that the segments did not need to be compiled again\.

```
select userid, xid, pid, query, segment, locus,
datediff(ms, starttime, endtime) as duration, compile
from svl_compile
where query = 35878 or query = 35879
order by query, segment;

 userid |  xid   |  pid  | query | segment | locus | duration | compile
--------+--------+-------+-------+---------+-------+----------+---------
    100 | 112780 | 23028 | 35878 |       0 |     1 |        0 |       0
    100 | 112780 | 23028 | 35878 |       1 |     1 |        0 |       0
    100 | 112780 | 23028 | 35878 |       2 |     1 |        0 |       0
    100 | 112780 | 23028 | 35878 |       3 |     1 |        0 |       0
    100 | 112780 | 23028 | 35878 |       4 |     1 |        0 |       0
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVL_COMPILE.md
e3931ab73301-1
    100 | 112780 | 23028 | 35878 |       5 |     1 |        0 |       0
    100 | 112780 | 23028 | 35878 |       6 |     1 |     1380 |       1
    100 | 112780 | 23028 | 35878 |       7 |     1 |     1085 |       1
    100 | 112780 | 23028 | 35878 |       8 |     1 |     1197 |       1
    100 | 112780 | 23028 | 35878 |       9 |     2 |      905 |       1
    100 | 112782 | 23028 | 35879 |       0 |     1 |        0 |       0
    100 | 112782 | 23028 | 35879 |       1 |     1 |        0 |       0
    100 | 112782 | 23028 | 35879 |       2 |     1 |        0 |       0
    100 | 112782 | 23028 | 35879 |       3 |     1 |        0 |       0
    100 | 112782 | 23028 | 35879 |       4 |     1 |        0 |       0
    100 | 112782 | 23028 | 35879 |       5 |     1 |        0 |       0
    100 | 112782 | 23028 | 35879 |       6 |     1 |        0 |       0
    100 | 112782 | 23028 | 35879 |       7 |     1 |        0 |       0
    100 | 112782 | 23028 | 35879 |       8 |     1 |        0 |       0
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVL_COMPILE.md
e3931ab73301-2
    100 | 112782 | 23028 | 35879 |       9 |     2 |        0 |       0
(20 rows)
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVL_COMPILE.md
ccc6a5e96c9a-0
Deletes all of the rows from a table without doing a table scan: this operation is a faster alternative to an unqualified DELETE operation\. To execute a TRUNCATE command, you must be the owner of the table or a superuser\.

TRUNCATE is much more efficient than DELETE and doesn't require a VACUUM and ANALYZE\. However, be aware that TRUNCATE commits the transaction in which it is run\.
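A minimal sketch of the command, using the CATEGORY table from the sample database as an illustrative target:

```
truncate category;

-- Equivalent form with the optional TABLE keyword:
truncate table category;
```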
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_TRUNCATE.md