| id | text | source |
|---|---|---|
84058a852814-1
|
1. Now run the following query from psql window 2\.
```
set query_group to test;
select avg(l.priceperticket*s.qtysold) from listing l, sales s where l.listid <40000;
```
1. In psql window 1, run the following query to see the query queue that the queries are routed to\.
```
select * from wlm_queue_state_vw;
select * from wlm_query_state_vw;
```
The following are example results\.
![\[Image NOT FOUND\]](http://docs.aws.amazon.com/redshift/latest/dg/images/psql_tutorial_wlm_140.png)
![\[Image NOT FOUND\]](http://docs.aws.amazon.com/redshift/latest/dg/images/psql_tutorial_wlm_150.png)
1. When you’re done, reset the query group\.
```
reset query_group;
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-wlm-routing-queries-to-queues.md
|
c8604569e229-0
|
Manage the default behavior of the load operation for troubleshooting or to reduce load times by specifying the following parameters\.
+ [COMPROWS](#copy-comprows)
+ [COMPUPDATE](#copy-compupdate)
+ [MAXERROR](#copy-maxerror)
+ [NOLOAD](#copy-noload)
+ [STATUPDATE](#copy-statupdate)

<a name="copy-data-load-parameters"></a>Parameters
COMPROWS *numrows* <a name="copy-comprows"></a>
Specifies the number of rows to be used as the sample size for compression analysis\. The analysis is run on rows from each data slice\. For example, if you specify `COMPROWS 1000000` \(1,000,000\) and the system contains four total slices, no more than 250,000 rows for each slice are read and analyzed\.
If COMPROWS isn't specified, the sample size defaults to 100,000 for each slice\. Values of COMPROWS lower than the default of 100,000 rows for each slice are automatically upgraded to the default value\. However, automatic compression will not take place if the amount of data being loaded is insufficient to produce a meaningful sample\.
If the COMPROWS number is greater than the number of rows in the input file, the COPY command still proceeds and runs the compression analysis on all of the available rows\. The accepted range for this argument is a number between 1000 and 2147483647 \(2,147,483,647\)\.
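As a sketch, a COPY command that requests a compression\-analysis sample of 1,000,000 rows might look like the following\. The table name, Amazon S3 path, and IAM role ARN are hypothetical\.
```
copy listing
from 's3://mybucket/data/listing/'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
comprows 1000000;
```
On a cluster with four slices, this samples no more than 250,000 rows for each slice, as described above\.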
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/copy-parameters-data-load.md
|
c8604569e229-1
|
COMPUPDATE \[ PRESET \| \{ ON \| TRUE \} \| \{ OFF \| FALSE \} \] <a name="copy-compupdate"></a>
Controls whether compression encodings are automatically applied during a COPY\.
When COMPUPDATE is PRESET, the COPY command chooses the compression encoding for each column if the target table is empty; even if the columns already have encodings other than RAW\. Currently specified column encodings can be replaced\. Encoding for each column is based on the column data type\. No data is sampled\. Amazon Redshift automatically assigns compression encoding as follows:
+ Columns that are defined as sort keys are assigned RAW compression\.
+ Columns that are defined as BOOLEAN, REAL, or DOUBLE PRECISION data types are assigned RAW compression\.
+ Columns that are defined as SMALLINT, INTEGER, BIGINT, DECIMAL, DATE, TIMESTAMP, or TIMESTAMPTZ are assigned AZ64 compression\.
+ Columns that are defined as CHAR or VARCHAR are assigned LZO compression\.
When COMPUPDATE is omitted, the COPY command chooses the compression encoding for each column only if the target table is empty and you have not specified an encoding \(other than RAW\) for any of the columns\. The encoding for each column is determined by Amazon Redshift\. No data is sampled\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/copy-parameters-data-load.md
|
c8604569e229-2
|
When COMPUPDATE is ON \(or TRUE\), or when COMPUPDATE is specified without an option, the COPY command applies automatic compression if the table is empty, even if the table columns already have encodings other than RAW\. Currently specified column encodings can be replaced\. Encoding for each column is based on an analysis of sample data\. For more information, see [Loading tables with automatic compression](c_Loading_tables_auto_compress.md)\.
When COMPUPDATE is OFF \(or FALSE\), automatic compression is disabled\. Column encodings aren't changed\.
For information about the system table to analyze compression, see [STL\_ANALYZE\_COMPRESSION](r_STL_ANALYZE_COMPRESSION.md)\.
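As a sketch, the PRESET behavior described above can be requested as follows\. The table name, Amazon S3 path, and IAM role ARN are hypothetical\.
```
copy sales
from 's3://mybucket/data/sales/'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
compupdate preset;
```
Substituting `on` or `off` for `preset` selects the other behaviors described above\.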
MAXERROR \[AS\] *error\_count* <a name="copy-maxerror"></a>
If the load returns the *error\_count* number of errors or greater, the load fails\. If the load returns fewer errors, it continues and returns an INFO message that states the number of rows that could not be loaded\. Use this parameter to allow loads to continue when certain rows fail to load into the table because of formatting errors or other inconsistencies in the data\.
Set this value to `0` or `1` if you want the load to fail as soon as the first error occurs\. The AS keyword is optional\. The MAXERROR default value is `0` and the limit is `100000`\.
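For example, a load that continues through formatting errors but fails once 10 errors are reached might be written as the following sketch \(hypothetical table, path, and role\)\.
```
copy event
from 's3://mybucket/data/event/'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
maxerror as 10;
```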
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/copy-parameters-data-load.md
|
c8604569e229-3
|
The actual number of errors reported might be greater than the specified MAXERROR because of the parallel nature of Amazon Redshift\. If any node in the Amazon Redshift cluster detects that MAXERROR has been exceeded, each node reports all of the errors it has encountered\.
NOLOAD <a name="copy-noload"></a>
Checks the validity of the data file without actually loading the data\. Use the NOLOAD parameter to make sure that your data file loads without any errors before running the actual data load\. Running COPY with the NOLOAD parameter is much faster than loading the data because it only parses the files\.
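A validation\-only run might look like the following sketch \(hypothetical table, path, and role\)\. The files are parsed, but no rows are loaded\.
```
copy venue
from 's3://mybucket/data/venue/'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
noload;
```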
STATUPDATE \[ \{ ON \| TRUE \} \| \{ OFF \| FALSE \} \] <a name="copy-statupdate"></a>
Governs automatic computation and refresh of optimizer statistics at the end of a successful COPY command\. By default, if the STATUPDATE parameter isn't used, statistics are updated automatically if the table is initially empty\.
Whenever ingesting data into a nonempty table significantly changes the size of the table, we recommend updating statistics either by running an [ANALYZE](r_ANALYZE.md) command or by using the STATUPDATE ON argument\.
With STATUPDATE ON \(or TRUE\), statistics are updated automatically regardless of whether the table is initially empty\. If STATUPDATE is used, the current user must be either the table owner or a superuser\. If STATUPDATE is not specified, only INSERT permission is required\.
With STATUPDATE OFF \(or FALSE\), statistics are never updated\.
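As a sketch, forcing a statistics refresh at the end of a load looks like the following \(hypothetical table, path, and role\)\.
```
copy sales
from 's3://mybucket/data/sales/'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
statupdate on;
```
Because STATUPDATE ON is specified, the current user must be the table owner or a superuser, as noted above\.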
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/copy-parameters-data-load.md
|
c8604569e229-4
|
With STATUPDATE OFF \(or FALSE\), statistics are never updated\.
For additional information, see [Analyzing tables](t_Analyzing_tables.md)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/copy-parameters-data-load.md
|
9a6a8a3abdd2-0
|
Deletes rows from tables\.
**Note**
The maximum size for a single SQL statement is 16 MB\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_DELETE.md
|
86cbbba38c5e-0
|
```
DELETE [ FROM ] table_name
[ {USING } table_name, ... ]
[ WHERE condition ]
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_DELETE.md
|
648f2435e739-0
|
FROM
The FROM keyword is optional, except when the USING clause is specified\. The statements `delete from event;` and `delete event;` are equivalent operations that remove all of the rows from the EVENT table\.
To delete all the rows from a table, [TRUNCATE](r_TRUNCATE.md) the table\. TRUNCATE is much more efficient than DELETE and doesn't require a VACUUM and ANALYZE\. However, be aware that TRUNCATE commits the transaction in which it is run\.
*table\_name*
A temporary or persistent table\. Only the owner of the table or a user with DELETE privilege on the table may delete rows from the table\.
Consider using the TRUNCATE command for fast unqualified delete operations on large tables; see [TRUNCATE](r_TRUNCATE.md)\.
After deleting a large number of rows from a table:
+ Vacuum the table to reclaim storage space and re\-sort rows\.
+ Analyze the table to update statistics for the query planner\.
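As a sketch, the cleanup steps above after a large delete \(the delete predicate is hypothetical\):
```
delete from event where eventid < 1000;
vacuum event;
analyze event;
```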
USING *table\_name*, \.\.\.
The USING keyword introduces a table list when additional tables are referenced in the WHERE clause condition\. For example, the following statement deletes all of the rows from the EVENT table that satisfy the join condition over the EVENT and SALES tables\. The SALES table must be explicitly named in the USING list:
```
delete from event using sales where event.eventid=sales.eventid;
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_DELETE.md
|
648f2435e739-1
|
```
delete from event using sales where event.eventid=sales.eventid;
```
If you repeat the target table name in the USING clause, the DELETE operation runs a self\-join\. You can use a subquery in the WHERE clause instead of the USING syntax as an alternative way to write the same query\.
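As a sketch of the subquery alternative mentioned above, the earlier USING example can be rewritten with an IN subquery in the WHERE clause:
```
delete from event
where eventid in (select eventid from sales);
```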
WHERE *condition*
Optional clause that limits the deletion of rows to those that match the condition\. For example, the condition can be a restriction on a column, a join condition, or a condition based on the result of a query\. The query can reference tables other than the target of the DELETE command\. For example:
```
delete from t1
where col1 in(select col2 from t2);
```
If no condition is specified, all of the rows in the table are deleted\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_DELETE.md
|
d3c8c16498e0-0
|
Delete all of the rows from the CATEGORY table:
```
delete from category;
```
Delete rows with CATID values between 0 and 9 from the CATEGORY table:
```
delete from category
where catid between 0 and 9;
```
Delete rows from the LISTING table whose SELLERID values don't exist in the SALES table:
```
delete from listing
where listing.sellerid not in(select sales.sellerid from sales);
```
The following two queries both delete one row from the CATEGORY table, based on a join to the EVENT table and an additional restriction on the CATID column:
```
delete from category
using event
where event.catid=category.catid and category.catid=9;
```
```
delete from category
where catid in
(select category.catid from category, event
where category.catid=event.catid and category.catid=9);
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_DELETE.md
|
bfbc102e0d20-0
|
The MIN window function returns the minimum of the input expression values\. The MIN function works with numeric values and ignores NULL values\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_WF_MIN.md
|
6e09e1e45126-0
|
```
MIN ( [ ALL ] expression ) OVER
(
[ PARTITION BY expr_list ]
[ ORDER BY order_list frame_clause ]
)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_WF_MIN.md
|
840a7d3c36e3-0
|
*expression*
The target column or expression that the function operates on\.
ALL
With the argument ALL, the function retains all duplicate values from the expression\. ALL is the default\. DISTINCT is not supported\.
OVER
Specifies the window clauses for the aggregation functions\. The OVER clause distinguishes window aggregation functions from normal set aggregation functions\.
PARTITION BY *expr\_list*
Defines the window for the MIN function in terms of one or more expressions\.
ORDER BY *order\_list*
Sorts the rows within each partition\. If no PARTITION BY is specified, ORDER BY uses the entire table\.
*frame\_clause*
If an ORDER BY clause is used for an aggregate function, an explicit frame clause is required\. The frame clause refines the set of rows in a function's window, including or excluding sets of rows within the ordered result\. The frame clause consists of the ROWS keyword and associated specifiers\. See [Window function syntax summary](r_Window_function_synopsis.md)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_WF_MIN.md
|
c2f6b029545c-0
|
Accepts any data type as input\. Returns the same data type as *expression*\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_WF_MIN.md
|
83e9ce6f201a-0
|
The following example shows the sales ID, quantity, and minimum quantity from the beginning of the data window:
```
select salesid, qty,
min(qty) over
(order by salesid rows unbounded preceding)
from winsales
order by salesid;
salesid | qty | min
---------+-----+-----
10001 | 10 | 10
10005 | 30 | 10
10006 | 10 | 10
20001 | 20 | 10
20002 | 20 | 10
30001 | 10 | 10
30003 | 15 | 10
30004 | 20 | 10
30007 | 30 | 10
40001 | 40 | 10
40005 | 10 | 10
(11 rows)
```
For a description of the WINSALES table, see [Overview example for window functions](c_Window_functions.md#r_Window_function_example)\.
The following example shows the sales ID, quantity, and minimum quantity in a restricted frame:
```
select salesid, qty,
min(qty) over
(order by salesid rows between 2 preceding and 1 preceding) as min
from winsales
order by salesid;
salesid | qty | min
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_WF_MIN.md
|
83e9ce6f201a-1
|
from winsales
order by salesid;
salesid | qty | min
---------+-----+-----
10001 | 10 |
10005 | 30 | 10
10006 | 10 | 10
20001 | 20 | 10
20002 | 20 | 10
30001 | 10 | 20
30003 | 15 | 10
30004 | 20 | 10
30007 | 30 | 15
40001 | 40 | 20
40005 | 10 | 30
(11 rows)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_WF_MIN.md
|
01971dcd9909-0
|
COS is a trigonometric function that returns the cosine of a number\. The input is an angle expressed in radians, and the return value is between \-1 and 1\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_COS.md
|
8d36e30b0cf5-0
|
```
COS(double_precision)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_COS.md
|
e0aa410182c3-0
|
*number*
The input parameter is a double precision number that represents an angle in radians\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_COS.md
|
ac166def419d-0
|
The COS function returns a double precision number\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_COS.md
|
10e1d57abe65-0
|
The following example returns the cosine of 0:
```
select cos(0);
cos
-----
1
(1 row)
```
The following example returns the cosine of PI:
```
select cos(pi());
cos
-----
-1
(1 row)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_COS.md
|
28299c420183-0
|
After you create a table, you can insert rows of data into that table\.
**Note**
The [INSERT](r_INSERT_30.md) command inserts individual rows into a database table\. For standard bulk loads, use the [COPY](r_COPY.md) command\. For more information, see [Use a COPY command to load data](c_best-practices-use-copy.md)\.
For example, to insert a value of `100` into the `testtable` table \(which contains a single column\), issue the following command:
```
insert into testtable values (100);
```
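A single INSERT statement can also supply several rows in one VALUES list, which reduces per\-statement overhead for small loads\. For example:
```
insert into testtable values (101), (102), (103);
```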
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/t_inserting_data_into_table.md
|
5086eef8de23-0
|
To view the distribution style of a table, query the PG\_CLASS\_INFO view or the SVV\_TABLE\_INFO view\.
The RELEFFECTIVEDISTSTYLE column in PG\_CLASS\_INFO indicates the current distribution style for the table\. If the table uses automatic distribution, RELEFFECTIVEDISTSTYLE is 10 or 11, which indicates whether the effective distribution style is AUTO \(ALL\) or AUTO \(EVEN\)\. In that case, the distribution style might initially show AUTO \(ALL\), then change to AUTO \(EVEN\) when the table grows\.
The following table gives the distribution style for each value in the RELEFFECTIVEDISTSTYLE column:
| RELEFFECTIVEDISTSTYLE | Current distribution style |
| --- | --- |
| 0 | EVEN |
| 1 | KEY |
| 8 | ALL |
| 10 | AUTO \(ALL\) |
| 11 | AUTO \(EVEN\) |
The DISTSTYLE column in SVV\_TABLE\_INFO indicates the current distribution style for the table\. If the table uses automatic distribution, DISTSTYLE is AUTO \(ALL\) or AUTO \(EVEN\)\.
The following example creates four tables using the three distribution styles and automatic distribution, then queries SVV\_TABLE\_INFO to view the distribution styles\.
```
create table dist_key (col1 int)
diststyle key distkey (col1);
create table dist_even (col1 int)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/viewing-distribution-styles.md
|
5086eef8de23-1
|
diststyle key distkey (col1);
create table dist_even (col1 int)
diststyle even;
create table dist_all (col1 int)
diststyle all;
create table dist_auto (col1 int);
select "schema", "table", diststyle from SVV_TABLE_INFO
where "table" like 'dist%';
schema | table | diststyle
------------+-----------------+------------
public | dist_key | KEY(col1)
public | dist_even | EVEN
public | dist_all | ALL
public | dist_auto | AUTO(ALL)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/viewing-distribution-styles.md
|
41083141e60e-0
|
Sometimes, users might temporarily need more resources for a particular query\. If so, they can use the wlm\_query\_slot\_count configuration setting to temporarily override the way slots are allocated in a query queue\. *Slots* are units of memory and CPU that are used to process queries\. You might override the slot count when you have occasional queries that take a lot of resources in the cluster, such as when you perform a VACUUM operation in the database\.
You might find that users often need to set wlm\_query\_slot\_count for certain types of queries\. If so, consider adjusting the WLM configuration and giving users a queue that better suits the needs of their queries\. For more information about temporarily overriding the concurrency level by using slot count, see [wlm\_query\_slot\_count](r_wlm_query_slot_count.md)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-wlm-query-slot-count.md
|
8ad093a46b20-0
|
For the purposes of this tutorial, we run the same long\-running SELECT query\. We run it as the `adminwlm` user using wlm\_query\_slot\_count to increase the number of slots available for the query\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-wlm-query-slot-count.md
|
a6a25e5da872-0
|
1. Increase the limit on the query to make sure that you have enough time to query the WLM\_QUERY\_STATE\_VW view and see a result\.
```
set wlm_query_slot_count to 3;
select avg(l.priceperticket*s.qtysold) from listing l, sales s where l.listid <40000;
```
1. Now, query WLM\_QUERY\_STATE\_VW using the masteruser account to see how the query is running\.
```
select * from wlm_query_state_vw;
```
The following is an example result\.
![\[Image NOT FOUND\]](http://docs.aws.amazon.com/redshift/latest/dg/images/psql_tutorial_wlm_170.png)
Notice that the slot count for the query is 3\. This count means that the query is using all three slots in the queue, allocating all of the queue's resources to that query\.
1. Now, run the following query\.
```
select * from WLM_QUEUE_STATE_VW;
```
The following is an example result\.
![\[Image NOT FOUND\]](http://docs.aws.amazon.com/redshift/latest/dg/images/psql_tutorial_wlm_160.png)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-wlm-query-slot-count.md
|
a6a25e5da872-1
|
The wlm\_query\_slot\_count configuration setting is valid for the current session only\. If that session expires, or another user runs a query, the WLM configuration is used\.
1. Reset the slot count and rerun the test\.
```
reset wlm_query_slot_count;
select avg(l.priceperticket*s.qtysold) from listing l, sales s where l.listid <40000;
```
The following are example results\.
![\[Image NOT FOUND\]](http://docs.aws.amazon.com/redshift/latest/dg/images/psql_tutorial_wlm_180.png)
![\[Image NOT FOUND\]](http://docs.aws.amazon.com/redshift/latest/dg/images/psql_tutorial_wlm_190.png)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-wlm-query-slot-count.md
|
0a253a1c0901-0
|
Next, run queries from different sessions\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-wlm-query-slot-count.md
|
8e2c707036de-0
|
1. In psql window 1 and 2, run the following to use the test query group\.
```
set query_group to test;
```
1. In psql window 1, run the following long\-running query\.
```
select avg(l.priceperticket*s.qtysold) from listing l, sales s where l.listid <40000;
```
1. While the long\-running query is still running in psql window 1, run the following in psql window 2\. These commands increase the slot count to use all the slots for the queue and then start running the long\-running query\.
```
set wlm_query_slot_count to 2;
select avg(l.priceperticket*s.qtysold) from listing l, sales s where l.listid <40000;
```
1. Open a third psql window and query the views to see the results\.
```
select * from wlm_queue_state_vw;
select * from wlm_query_state_vw;
```
The following are example results\.
![\[Image NOT FOUND\]](http://docs.aws.amazon.com/redshift/latest/dg/images/psql_tutorial_wlm_200.png)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-wlm-query-slot-count.md
|
8e2c707036de-1
|
![\[Image NOT FOUND\]](http://docs.aws.amazon.com/redshift/latest/dg/images/psql_tutorial_wlm_210.png)
Notice that the first query is using one of the slots allocated to queue 1 to run the query\. In addition, notice that there is one query that is waiting in the queue \(where `queued` is `1` and `state` is `QueuedWaiting`\)\. After the first query completes, the second one begins running\. This execution happens because both queries are routed to the `test` query group, and the second query must wait for enough slots to begin processing\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-wlm-query-slot-count.md
|
00217949662b-0
|
Raw encoding is the default encoding for columns that are designated as sort keys and columns that are defined as BOOLEAN, REAL, or DOUBLE PRECISION data types\. With raw encoding, data is stored in raw, uncompressed form\.
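Raw encoding can also be requested explicitly with the ENCODE keyword in a column definition\. A minimal sketch with a hypothetical table:
```
create table t_raw (col1 int encode raw);
```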
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_Raw_encoding.md
|
133a0e8b06b4-0
|
When you execute a query, the query optimizer redistributes the rows to the compute nodes as needed to perform any joins and aggregations\. The goal in selecting a table distribution style is to minimize the impact of the redistribution step by locating the data where it needs to be before the query is run\. Some suggestions for the best approach follow:
1. **Distribute the fact table and one dimension table on their common columns\.**
Your fact table can have only one distribution key\. Any tables that join on another key aren't collocated with the fact table\. Choose one dimension to collocate based on how frequently it is joined and the size of the joining rows\. Designate both the dimension table's primary key and the fact table's corresponding foreign key as the DISTKEY\.
1. **Choose the largest dimension based on the size of the filtered dataset\.**
Only the rows that are used in the join need to be distributed, so consider the size of the dataset after filtering, not the size of the table\.
1. **Choose a column with high cardinality in the filtered result set\.**
If you distribute a sales table on a date column, for example, you should probably get fairly even data distribution, unless most of your sales are seasonal\. However, if you commonly use a range\-restricted predicate to filter for a narrow date period, most of the filtered rows occur on a limited set of slices and the query workload is skewed\.
1. **Change some dimension tables to use ALL distribution\.**
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_best-practices-best-dist-key.md
|
133a0e8b06b4-1
|
1. **Change some dimension tables to use ALL distribution\.**
If a dimension table cannot be collocated with the fact table or other important joining tables, you can improve query performance significantly by distributing the entire table to all of the nodes\. Using ALL distribution multiplies storage space requirements and increases load times and maintenance operations, so you should weigh all factors before choosing ALL distribution\.
To let Amazon Redshift choose the appropriate distribution style, don't specify DISTSTYLE\.
For more information about choosing distribution styles, see [Tutorial: Tuning table design](tutorial-tuning-tables.md) and [Choosing a data distribution style](t_Distributing_data.md)\.
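As a sketch, a small dimension table distributed to every node as described above might be declared like this \(hypothetical table and columns\):
```
create table dim_region (
  regionid integer not null,
  regionname varchar(32)
)
diststyle all;
```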
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_best-practices-best-dist-key.md
|
3ba598bb26e4-0
|
When you create a table, you can specify one or more columns as the sort key\. Amazon Redshift stores your data on disk in sorted order according to the sort key\. How your data is sorted has an important effect on disk I/O, columnar compression, and query performance\.
In this step, you choose sort keys for the SSB tables based on these best practices:
+ If recent data is queried most frequently, specify the timestamp column as the leading column for the sort key\.
+ If you do frequent range filtering or equality filtering on one column, specify that column as the sort key\.
+ If you frequently join a \(dimension\) table, specify the join column as the sort key\.
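As a sketch of the first two practices, a hypothetical fact table sorted on its date column might be declared as follows:
```
create table lineorder_demo (
  lo_orderdate integer not null,
  lo_quantity integer
)
sortkey (lo_orderdate);
```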
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-tuning-tables-sort-keys.md
|
fb3dfeedaaa9-0
|
1. Evaluate your queries to find timestamp columns that are used to filter the results\.
For example, LINEORDER frequently uses equality filters using `lo_orderdate`\.
```
where lo_orderdate = d_datekey and d_year = 1997
```
1. Look for columns that are used in range filters and equality filters\. For example, LINEORDER also uses `lo_orderdate` for range filtering\.
```
where lo_orderdate = d_datekey and d_year >= 1992 and d_year <= 1997
```
1. Based on the first two best practices, `lo_orderdate` is a good choice for sort key\.
In the tuning table, specify `lo_orderdate` as the sort key for LINEORDER\.
1. The remaining tables are dimensions, so, based on the third best practice, specify their primary keys as sort keys\.
The following tuning table shows the chosen sort keys\. You fill in the Distribution Style column in [Step 4: Select distribution styles](tutorial-tuning-tables-distribution.md)\.
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/tutorial-tuning-tables-sort-keys.html)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-tuning-tables-sort-keys.md
|
54b73bf53c78-0
|
[Step 4: Select distribution styles](tutorial-tuning-tables-distribution.md)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-tuning-tables-sort-keys.md
|
91aee6842507-0
|
The EXTRACT function returns a date part, such as a day, month, or year, from a time stamp value or expression\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_EXTRACT_function.md
|
35143cfd18bc-0
|
```
EXTRACT ( datepart FROM { TIMESTAMP 'literal' | timestamp } )
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_EXTRACT_function.md
|
79c5a3f9bdbf-0
|
*datepart*
For possible values, see [Dateparts for Date or Time Stamp functions](r_Dateparts_for_datetime_functions.md)\.
*literal*
A time stamp value, enclosed in single quotation marks and preceded by the TIMESTAMP keyword\.
*timestamp*
A TIMESTAMP or TIMESTAMPTZ column, or an expression that implicitly converts to a time stamp or time stamp with time zone\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_EXTRACT_function.md
|
5c63fbea008b-0
|
INTEGER if the argument is TIMESTAMP
DOUBLE PRECISION if the argument is TIMESTAMPTZ
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_EXTRACT_function.md
|
4700b852b71a-0
|
Determine the week numbers for sales in which the price paid was $10,000 or more\.
```
select salesid, extract(week from saletime) as weeknum
from sales where pricepaid > 9999 order by 2;
salesid | weeknum
--------+---------
159073 | 6
160318 | 8
161723 | 26
(3 rows)
```
Return the minute value from a literal time stamp value\.
```
select extract(minute from timestamp '2009-09-09 12:08:43');
date_part
-----------
8
(1 row)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_EXTRACT_function.md
|
cb73d411c122-0
|
UNLOAD automatically creates files using Amazon S3 server\-side encryption with AWS\-managed encryption keys \(SSE\-S3\)\. You can also specify server\-side encryption with an AWS Key Management Service key \(SSE\-KMS\) or client\-side encryption with a customer\-managed key \(CSE\-CMK\)\. UNLOAD doesn't support Amazon S3 server\-side encryption using a customer\-supplied key \(SSE\-C\)\. For more information, see [ Protecting data using server\-side encryption](https://docs.aws.amazon.com/AmazonS3/latest/dev/serv-side-encryption.html)\.
To unload to Amazon S3 using server\-side encryption with an AWS KMS key, use the KMS\_KEY\_ID parameter to provide the key ID as shown in the following example\.
```
unload ('select venuename, venuecity from venue')
to 's3://mybucket/encrypted/venue_'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
KMS_KEY_ID '1234abcd-12ab-34cd-56ef-1234567890ab'
encrypted;
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/t_unloading_encrypted_files.md
|
cb73d411c122-1
|
encrypted;
```
If you want to provide your own encryption key, you can create client\-side encrypted data files in Amazon S3 by using the UNLOAD command with the ENCRYPTED option\. UNLOAD uses the same envelope encryption process that Amazon S3 client\-side encryption uses\. You can then use the COPY command with the ENCRYPTED option to load the encrypted files\.
The process works like this:
1. You create a base64\-encoded 256\-bit AES key that you will use as your private encryption key, or *master symmetric key*\.
1. You issue an UNLOAD command that includes your master symmetric key and the ENCRYPTED option\.
1. UNLOAD generates a one\-time\-use symmetric key \(called the *envelope symmetric key*\) and an initialization vector \(IV\), which it uses to encrypt your data\.
1. UNLOAD encrypts the envelope symmetric key using your master symmetric key\.
1. UNLOAD then stores the encrypted data files in Amazon S3 and stores the encrypted envelope key and IV as object metadata with each file\. The encrypted envelope key is stored as object metadata `x-amz-meta-x-amz-key` and the IV is stored as object metadata `x-amz-meta-x-amz-iv`\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/t_unloading_encrypted_files.md
|
cb73d411c122-2
|
For more information about the envelope encryption process, see the [Client\-side data encryption with the AWS SDK for Java and Amazon S3](https://aws.amazon.com/articles/2850096021478074) article\.
To unload encrypted data files, add the master key value to the credentials string and include the ENCRYPTED option\. If you use the MANIFEST option, the manifest file is also encrypted\.
```
unload ('select venuename, venuecity from venue')
to 's3://mybucket/encrypted/venue_'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
master_symmetric_key '<master_key>'
manifest
encrypted;
```
To unload encrypted data files that are GZIP compressed, include the GZIP option along with the master key value and the ENCRYPTED option\.
```
unload ('select venuename, venuecity from venue')
to 's3://mybucket/encrypted/venue_'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
master_symmetric_key '<master_key>'
encrypted gzip;
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/t_unloading_encrypted_files.md
|
cb73d411c122-3
|
master_symmetric_key '<master_key>'
encrypted gzip;
```
To load the encrypted data files, add the MASTER\_SYMMETRIC\_KEY parameter with the same master key value and include the ENCRYPTED option\.
```
copy venue from 's3://mybucket/encrypted/venue_'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
master_symmetric_key '<master_key>'
encrypted;
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/t_unloading_encrypted_files.md
|
4cf7c8e7cd10-0
|
After you have verified that your cluster is up and running, you can create your first database\. This database is where you will actually create tables, load data, and run queries\. A single cluster can host multiple databases\. For example, you can have a TICKIT database and an ORDERS database on the same cluster\.
After you connect to the initial cluster database \(the database you created when you launched the cluster\), you use that database as the base for creating a new database\.
For example, to create a database named **tickit**, issue the following command in your SQL client tool:
```
create database tickit;
```
For this exercise, we'll accept the defaults\. For information about more command options, see [CREATE DATABASE](r_CREATE_DATABASE.md) in the SQL Command Reference\.
After you have created the TICKIT database, you can connect to the new database from your SQL client\. Use the same connection parameters as you used for your current connection, but change the database name to `tickit`\.
You do not need to change the database to complete the remainder of this tutorial\. If you prefer not to connect to the TICKIT database, you can try the rest of the examples in this section using the default database\.
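Reconnecting to the new database can be done from the command line as well; the following psql sketch reuses the earlier connection parameters, with the endpoint, port, and user name as placeholders for your own cluster's values:

```
psql -h examplecluster.abc123xyz789.us-west-2.redshift.amazonaws.com \
     -p 5439 -U awsuser -d tickit
```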
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/t_creating_database.md
|
e984c991d5eb-0
|
Logs information about network activity during execution of query steps that distribute data\. Network traffic is captured by numbers of rows, bytes, and packets that are sent over the network during a given step on a given slice\. The duration of the step is the difference between the logged start and end times\.
To identify distribution steps in a query, look for dist labels in the SVL\_QUERY\_SUMMARY view or run the EXPLAIN command and then look for step attributes that include dist\.
This view is visible to all users\. Superusers can see all rows; regular users can see only their own data\. For more information, see [Visibility of data in system tables and views](c_visibility-of-data.md)\.
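To see how distribution surfaces in a plan, you can run EXPLAIN on a query that joins two tables\. This sketch assumes the TICKIT sample tables; in the plan output, distribution shows up as DS\_DIST\_\* attributes on the join steps:

```
explain
select eventname, sum(pricepaid)
from sales s
join event e on s.eventid = e.eventid
group by eventname;
-- In the plan output, look for attributes such as DS_DIST_NONE,
-- DS_BCAST_INNER, or DS_DIST_BOTH on the join steps.
```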
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_DIST.md
|
5fb279c9c90a-0
|
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_STL_DIST.html)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_DIST.md
|
81afe6d11255-0
|
The following example returns distribution information for queries with one or more packets and duration greater than zero\.
```
select query, slice, step, rows, bytes, packets,
datediff(seconds, starttime, endtime) as duration
from stl_dist
where packets>0 and datediff(seconds, starttime, endtime)>0
order by query
limit 10;
```
```
query | slice | step | rows | bytes | packets | duration
--------+-------+------+--------+---------+---------+-----------
567 | 1 | 4 | 49990 | 6249564 | 707 | 1
630 | 0 | 5 | 8798 | 408404 | 46 | 2
645 | 1 | 4 | 8798 | 408404 | 46 | 1
651 | 1 | 5 | 192497 | 9226320 | 1039 | 6
669 | 1 | 4 | 192497 | 9226320 | 1039 | 4
675 | 1 | 5 | 3766 | 194656 | 22 | 1
696 | 0 | 4 | 3766 | 194656 | 22 | 1
705 | 0 | 4 | 930 | 44400 | 5 | 1
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_DIST.md
|
81afe6d11255-1
|
696 | 0 | 4 | 3766 | 194656 | 22 | 1
705 | 0 | 4 | 930 | 44400 | 5 | 1
111525 | 0 | 3 | 68 | 17408 | 2 | 1
(9 rows)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_DIST.md
|
b79491638aa8-0
|
The POWER function is an exponential function that raises a numeric expression to the power of a second numeric expression\. For example, 2 to the third power is calculated as `power(2,3)`, with a result of 8\.
```
POW | POWER (expression1, expression2)
```
POW and POWER are synonyms\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_POWER.md
|
66eb19dfb1ec-0
|
*expression1*
Numeric expression to be raised\. Must be an integer, decimal, or floating\-point data type\.
*expression2*
Power to raise *expression1*\. Must be an integer, decimal, or floating\-point data type\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_POWER.md
|
b300412b9ecf-0
|
POWER returns a DOUBLE PRECISION number\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_POWER.md
|
4550069cbd81-0
|
In the following example, the POWER function is used to forecast what ticket sales will look like in the next 10 years, based on the number of tickets sold in 2008 \(the result of the subquery\)\. The growth rate is set at 7% per year in this example\.
```
select (select sum(qtysold) from sales, date
where sales.dateid=date.dateid
and year=2008) * pow((1+7::float/100),10) qty2010;
qty2010
------------------
679353.754088594
(1 row)
```
The following example is a variation on the previous example, with the growth rate at 7% per year but the interval set to months \(120 months over 10 years\):
```
select (select sum(qtysold) from sales, date
where sales.dateid=date.dateid
and year=2008) * pow((1+7::float/100/12),120) qty2010;
qty2010
-----------------
694034.54678046
(1 row)
```
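As a sanity check on the arithmetic, the two growth factors can be reproduced outside the database\. This Python sketch assumes the 2008 subquery returns 345,349 tickets, the value implied by the published results:

```python
# Compound growth applied to the assumed 2008 ticket total of 345,349
base = 345349

# 7% annual growth compounded yearly for 10 years
annual = base * (1 + 7 / 100) ** 10

# 7% annual rate compounded monthly over 120 months
monthly = base * (1 + 7 / 100 / 12) ** 120

print(annual, monthly)
```

Note that monthly compounding at the same nominal rate yields a larger result, which matches the difference between the two query outputs\.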
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_POWER.md
|
859215e9939b-0
|
**Topics**
+ [Syntax](#r_SELECT_synopsis-synopsis)
+ [WITH clause](r_WITH_clause.md)
+ [SELECT list](r_SELECT_list.md)
+ [FROM clause](r_FROM_clause30.md)
+ [WHERE clause](r_WHERE_clause.md)
+ [GROUP BY clause](r_GROUP_BY_clause.md)
+ [HAVING clause](r_HAVING_clause.md)
+ [UNION, INTERSECT, and EXCEPT](r_UNION.md)
+ [ORDER BY clause](r_ORDER_BY_clause.md)
+ [Join examples](r_Join_examples.md)
+ [Subquery examples](r_Subquery_examples.md)
+ [Correlated subqueries](r_correlated_subqueries.md)
Returns rows from tables, views, and user\-defined functions\.
**Note**
The maximum size for a single SQL statement is 16 MB\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SELECT_synopsis.md
|
68386a5e0435-0
|
```
[ WITH with_subquery [, ...] ]
SELECT
[ TOP number | [ ALL | DISTINCT ]
* | expression [ AS output_name ] [, ...] ]
[ FROM table_reference [, ...] ]
[ WHERE condition ]
[ GROUP BY expression [, ...] ]
[ HAVING condition ]
[ { UNION [ ALL ] | INTERSECT | EXCEPT | MINUS } query ]
[ ORDER BY expression [ ASC | DESC ] ]
[ LIMIT { number | ALL } ]
[ OFFSET start ]
```
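As an illustration of how several of these clauses compose, the following sketch against the TICKIT sample schema \(column names assumed from the sample database\) uses GROUP BY, HAVING, ORDER BY, and LIMIT together:

```
select eventname, sum(pricepaid) as total_sales
from sales s
join event e on s.eventid = e.eventid
group by eventname
having sum(pricepaid) > 10000
order by total_sales desc
limit 5;
```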
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SELECT_synopsis.md
|
dbf66ca8d11a-0
|
Use a bulk insert operation with a SELECT clause for high\-performance data insertion\.
Use the [INSERT](r_INSERT_30.md) and [CREATE TABLE AS](r_CREATE_TABLE_AS.md) commands when you need to move data or a subset of data from one table into another\.
For example, the following INSERT statement selects all of the rows from the CATEGORY table and inserts them into the CATEGORY\_STAGE table\.
```
insert into category_stage
(select * from category);
```
The following example creates CATEGORY\_STAGE as a copy of CATEGORY and inserts all of the rows in CATEGORY into CATEGORY\_STAGE\.
```
create table category_stage as
select * from category;
```
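To move only a subset of rows rather than a full table, the same patterns take a WHERE clause; for example, assuming the `catgroup` column from the TICKIT CATEGORY table:

```
create table category_stage as
select * from category
where catgroup = 'Sports';
```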
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_best-practices-bulk-inserts.md
|
79ea1d9fd79d-0
|
**Topics**
+ [Are you a first\-time Amazon Redshift user?](c-first-time-user.md)
+ [Are you a database developer?](c-who-should-use-this-guide.md)
+ [Prerequisites](c-dev-guide-prereqs.md)
+ [System and architecture overview](c_redshift_system_overview.md)
This is the *Amazon Redshift Database Developer Guide*\.
Amazon Redshift is an enterprise\-level, petabyte\-scale, fully managed data warehousing service\.
This guide focuses on using Amazon Redshift to create and manage a data warehouse\. If you work with databases as a designer, software developer, or administrator, it gives you the information you need to design, build, query, and maintain your data warehouse\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/welcome.md
|
f216af696e78-0
|
You add the cluster public key to each host's authorized keys file so that the host will recognize Amazon Redshift and accept the SSH connection\.
**To add the Amazon Redshift cluster public key to the host's authorized keys file**
1. Access the host using an SSH connection\.
For information about connecting to an instance using SSH, see [Connect to Your Instance](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-connect-to-instance-linux.html) in the *Amazon EC2 User Guide*\.
1. Copy the Amazon Redshift public key from the console or from the CLI response text\.
1. Copy and paste the contents of the public key into the `/home/<ssh_username>/.ssh/authorized_keys` file on the remote host\. The `<ssh_username>` must match the value for the "username" field in the manifest file\. Include the complete string, including the prefix "`ssh-rsa` " and suffix "`Amazon-Redshift`"\. For example:
```
ssh-rsa AAAACTP3isxgGzVWoIWpbVvRCOzYdVifMrh… uA70BnMHCaMiRdmvsDOedZDOedZ Amazon-Redshift
```
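On the remote host, step 3 amounts to appending the key string to the file\. A minimal sketch, with the key elided and the target path depending on the `<ssh_username>` from the manifest file:

```
# Append the cluster public key to the invoking user's authorized_keys file
echo 'ssh-rsa AAAA...your-cluster-public-key... Amazon-Redshift' \
  >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys   # SSH requires restrictive permissions
```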
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/load-from-host-steps-add-key-to-host.md
|
802701c26d31-0
|
Synonym of the NVL expression\.
See [NVL expression](r_NVL_function.md)\.
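Both names return the first non\-null argument in their list\. A quick sketch:

```
select coalesce(null, 12, 42);
-- returns 12, the first non-null argument
```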
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_COALESCE.md
|
f86bbc4877d5-0
|
Computes the 64\-bit FNV\-1a non\-cryptographic hash function for all basic data types\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_FNV_HASH.md
|
c795000807e2-0
|
```
FNV_HASH(value [, seed])
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_FNV_HASH.md
|
abb344cd4f62-0
|
*value*
The input value to be hashed\. Amazon Redshift uses the binary representation of the value to hash the input value; for instance, INTEGER values are hashed using 4 bytes and BIGINT values are hashed using 8 bytes\. Also, hashing CHAR and VARCHAR inputs does not ignore trailing spaces\.
*seed*
The BIGINT seed of the hash function\. This argument is optional; if it isn't given, Amazon Redshift uses the default FNV seed\. This makes it possible to combine the hashes of multiple columns without any conversions or concatenations\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_FNV_HASH.md
|
dc338537b3d3-0
|
BIGINT
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_FNV_HASH.md
|
d3b53f65a4fb-0
|
The following examples return the FNV hash of a number, the string 'Amazon Redshift', and the concatenation of the two\.
```
select fnv_hash(1);
fnv_hash
----------------------
-5968735742475085980
(1 row)
```
```
select fnv_hash('Amazon Redshift');
fnv_hash
---------------------
7783490368944507294
(1 row)
```
```
select fnv_hash('Amazon Redshift', fnv_hash(1));
fnv_hash
----------------------
-2202602717770968555
(1 row)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_FNV_HASH.md
|
1aa043d410bb-0
|
+ To compute the hash of a table with multiple columns, you can compute the FNV hash of the first column and pass it as a seed to the hash of the second column\. You can then pass the FNV hash of the second column as a seed to the hash of the third column, and so on\.
The following example creates seeds to hash a table with multiple columns\.
```
select fnv_hash(column_3, fnv_hash(column_2, fnv_hash(column_1))) from sample_table;
```
+ The same property can be used to compute the hash of a concatenation of strings\.
```
select fnv_hash('abcd');
fnv_hash
---------------------
-281581062704388899
(1 row)
```
```
select fnv_hash('cd', fnv_hash('ab'));
fnv_hash
---------------------
-281581062704388899
(1 row)
```
+ The hash function uses the type of the input to determine the number of bytes to hash\. Use casting to enforce a specific type, if necessary\.
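The seed\-chaining property above follows from how FNV\-1a consumes input one byte at a time\. The following Python sketch implements generic 64\-bit FNV\-1a; it is not guaranteed to reproduce Amazon Redshift's exact hash values, but it demonstrates that hashing `'cd'` seeded with the hash of `'ab'` equals hashing `'abcd'` directly:

```python
FNV64_PRIME = 0x100000001B3
FNV64_BASIS = 0xCBF29CE484222325  # standard FNV-1a 64-bit offset basis

def fnv1a_64(data: bytes, seed: int = FNV64_BASIS) -> int:
    """64-bit FNV-1a, returning a signed value like a BIGINT."""
    h = seed & 0xFFFFFFFFFFFFFFFF  # accept signed seeds from earlier calls
    for b in data:
        h ^= b
        h = (h * FNV64_PRIME) & 0xFFFFFFFFFFFFFFFF
    # Interpret the unsigned result as a signed 64-bit integer
    return h - (1 << 64) if h >= (1 << 63) else h

whole = fnv1a_64(b"abcd")
chained = fnv1a_64(b"cd", seed=fnv1a_64(b"ab"))
print(whole == chained)
```

Because the seed simply replaces the running hash state, splitting the input at any byte boundary and chaining produces the same result as hashing the whole input at once\.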
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_FNV_HASH.md
|
1aa043d410bb-1
|
+ The hash function uses the type of the input to determine the number of bytes to hash\. Use casting to enforce a specific type, if necessary\.
The following examples use different types of input to produce different results\.
```
select fnv_hash(1::smallint);
fnv_hash
--------------------
589727492704079044
(1 row)
```
```
select fnv_hash(1);
fnv_hash
----------------------
-5968735742475085980
(1 row)
```
```
select fnv_hash(1::bigint);
fnv_hash
----------------------
-8517097267634966620
(1 row)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_FNV_HASH.md
|
7fbf0017b1b3-0
|
Captures the following DDL statements that were run on the system\.
These DDL statements include the following queries and objects:
+ CREATE SCHEMA, TABLE, VIEW
+ DROP SCHEMA, TABLE, VIEW
+ ALTER SCHEMA, TABLE
See also [STL\_QUERYTEXT](r_STL_QUERYTEXT.md), [STL\_UTILITYTEXT](r_STL_UTILITYTEXT.md), and [SVL\_STATEMENTTEXT](r_SVL_STATEMENTTEXT.md)\. These views provide a timeline of the SQL commands that are executed on the system; this history is useful for troubleshooting purposes and for creating an audit trail of all system activities\.
Use the STARTTIME and ENDTIME columns to find out which statements were logged during a given time period\. Long blocks of SQL text are broken into lines 200 characters long; the SEQUENCE column identifies fragments of text that belong to a single statement\.
This view is visible to all users\. Superusers can see all rows; regular users can see only their own data\. For more information, see [Visibility of data in system tables and views](c_visibility-of-data.md)\.
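Putting the STARTTIME and SEQUENCE columns together, a time\-window query might look like the following sketch \(the window boundaries are placeholders\):

```
select xid, starttime, sequence, substring(text, 1, 40) as text
from stl_ddltext
where starttime between '2013-10-23 00:00:00' and '2013-10-23 01:00:00'
order by xid, sequence;
```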
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_DDLTEXT.md
|
bee2ab34f300-0
|
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_STL_DDLTEXT.html)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_DDLTEXT.md
|
bdb0916bc489-0
|
The following query shows the DDL for four CREATE TABLE statements\. The DDL text column is truncated for readability\.
```
select xid, starttime, sequence, substring(text,1,40) as text
from stl_ddltext order by xid desc, sequence;
xid | starttime | sequence | text
------+----------------------------+----------+------------------------------------------
1806 | 2013-10-23 00:11:14.709851 | 0 | CREATE TABLE supplier ( s_suppkey int4 N
1806 | 2013-10-23 00:11:14.709851 | 1 | s_comment varchar(101) NOT NULL )
1805 | 2013-10-23 00:11:14.496153 | 0 | CREATE TABLE region ( r_regionkey int4 N
1804 | 2013-10-23 00:11:14.285986 | 0 | CREATE TABLE partsupp ( ps_partkey int8
1803 | 2013-10-23 00:11:14.056901 | 0 | CREATE TABLE part ( p_partkey int8 NOT N
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_DDLTEXT.md
|
bdb0916bc489-1
|
1803 | 2013-10-23 00:11:14.056901 | 0 | CREATE TABLE part ( p_partkey int8 NOT N
1803 | 2013-10-23 00:11:14.056901 | 1 | ner char(10) NOT NULL , p_retailprice nu
(6 rows)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_DDLTEXT.md
|
567e9ce698b9-0
|
To reconstruct the SQL stored in the `text` column of STL\_DDLTEXT, run a SELECT statement that creates SQL from one or more parts in the `text` column\. Before running the reconstructed SQL, replace any \(`\n`\) special characters with a new line\. The result of the following SELECT statement is rows of reconstructed SQL in the `query_statement` field\.
```
SELECT query, LISTAGG(CASE WHEN LEN(RTRIM(text)) = 0 THEN text ELSE RTRIM(text) END) WITHIN GROUP (ORDER BY sequence) as query_statement, COUNT(*) as row_count
FROM stl_ddltext GROUP BY query ORDER BY query desc;
```
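The note above about replacing \(`\n`\) special characters refers to the literal two\-character backslash\-n sequence stored in the `text` column; client\-side, this is a single string replacement\. A minimal Python sketch, with `raw` standing in for one reconstructed `query_statement` value:

```python
# raw holds a reconstructed query_statement value containing literal "\n"
# two-character sequences (backslash followed by n), not real newlines
raw = r"CREATE TABLE t(a varchar);\nINSERT INTO t VALUES ('x');"

# Replace each literal backslash-n pair with a real newline before running it
runnable_sql = raw.replace("\\n", "\n")
print(runnable_sql)
```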
For example, the following query runs several DDL statements\. The query itself is longer than 200 characters and is stored in several parts in STL\_DDLTEXT\.
```
DROP TABLE IF EXISTS public.t_tx_trunc;
CREATE TABLE public.t_tx_trunc(a varchar);
CREATE OR REPLACE PROCEDURE public.sp_truncate_top_level()
LANGUAGE plpgsql
AS $$
DECLARE
row_cnt int;
BEGIN
INSERT INTO public.t_tx_trunc VALUES ('Insert in SP: Before Truncate 0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000');
select count(*) into row_cnt from public.t_tx_trunc;
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_DDLTEXT.md
|
567e9ce698b9-1
|
select count(*) into row_cnt from public.t_tx_trunc;
RAISE INFO 'sp_truncate_top_level: RowCount after 1st Insert: %', row_cnt;
truncate table public.t_tx_trunc;
select count(*) into row_cnt from public.t_tx_trunc;
RAISE INFO 'sp_truncate_top_level: RowCount After Truncate: %', row_cnt;
INSERT INTO public.t_tx_trunc VALUES ('Insert 1 in SP: After Truncate');
select count(*) into row_cnt from public.t_tx_trunc;
RAISE INFO 'sp_truncate_top_level: RowCount after 2nd Insert: %', row_cnt;
INSERT INTO public.t_tx_trunc VALUES ('Insert 2 in SP: After Truncate');
select count(*) into row_cnt from public.t_tx_trunc;
RAISE INFO 'sp_truncate_top_level: RowCount after 3rd Insert: %', row_cnt;
END
$$;
DROP PROCEDURE sp_truncate_top_level();
DROP TABLE IF EXISTS public.t_tx_trunc;
```
In this example, the query is stored in many parts \(rows\) in the `text` column of STL\_DDLTEXT\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_DDLTEXT.md
|
567e9ce698b9-2
|
In this example, the query is stored in many parts \(rows\) in the `text` column of STL\_DDLTEXT\.
```
select starttime, sequence, text
from stl_ddltext where query=pg_last_query_id() order by starttime, sequence limit 10;
```
```
starttime | sequence | text
----------------------------+----------+-------------------------------------------------------------------------------------------------------------------------
2019-07-23 23:08:15.672457 | 0 | DROP TABLE IF EXISTS public.t_tx_trunc;
2019-07-23 23:08:15.676281 | 0 | CREATE TABLE public.t_tx_trunc(a varchar);
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_DDLTEXT.md
|
567e9ce698b9-3
|
2019-07-23 23:08:15.676281 | 0 | CREATE TABLE public.t_tx_trunc(a varchar);
2019-07-23 23:08:15.727303 | 0 | CREATE OR REPLACE PROCEDURE public.sp_truncate_top_level()\nLANGUAGE plpgsql\nAS $$\nDECLARE\n row_cnt int;\nBEGIN\n INSERT INTO public.t_tx_trunc VALUES ('Insert in SP: Before Truncate 000000000
2019-07-23 23:08:15.727303 | 1 | 0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000');\n select count(*) into row_cnt from public.t
2019-07-23 23:08:15.727303 | 2 | _tx_trunc;\n RAISE INFO 'sp_truncate_top_level: RowCount after 1st Insert: %', row_cnt;\n truncate table public.t_tx_trunc;\n select count(*) into row_cnt from public.t_tx_trunc;\n RAISE INFO 'sp_
2019-07-23 23:08:15.727303 | 3 | truncate_top_level: RowCount After Truncate: %', row_cnt;\n INSERT INTO public.t_tx_trunc VALUES ('Insert 1 in SP: After Truncate');\n select count(*) into row_cnt from public.t_tx_trunc;\n RAISE I
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_DDLTEXT.md
|
567e9ce698b9-4
|
2019-07-23 23:08:15.727303 | 4 | NFO 'sp_truncate_top_level: RowCount after 2nd Insert: %', row_cnt;\n INSERT INTO public.t_tx_trunc VALUES ('Insert 2 in SP: After Truncate');\n select count(*) into row_cnt from public.t_tx_trunc;
2019-07-23 23:08:15.727303 | 5 | \n RAISE INFO 'sp_truncate_top_level: RowCount after 3rd Insert: %', row_cnt;\nEND\n$$;
2019-07-23 23:08:15.76039 | 0 | DROP PROCEDURE sp_truncate_top_level();
2019-07-23 23:08:16.454956 | 0 | DROP TABLE IF EXISTS public.t_tx_trunc;
```
To reconstruct the SQL stored in STL\_DDLTEXT, run the following SQL\.
```
SELECT LISTAGG(CASE WHEN LEN(RTRIM(text)) = 0 THEN text ELSE RTRIM(text) END) WITHIN GROUP (ORDER BY sequence) as query_statement
FROM stl_ddltext GROUP BY xid order by xid;
```
To use the resulting reconstructed SQL in your client, replace any \(`\n`\) special characters with a new line\.
```
query_statement
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_DDLTEXT.md
|
567e9ce698b9-5
|
-------------------------------------------------------------------------------
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_DDLTEXT.md
|
567e9ce698b9-6
|
--------------------------------------------------------------------------------
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_DDLTEXT.md
|
567e9ce698b9-7
|
--------------------------------------------------------------------------------
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_DDLTEXT.md
|
567e9ce698b9-8
|
--------------------------------------------------------------------------------
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_DDLTEXT.md
|
567e9ce698b9-9
|
--------------------------------------------------------------------------------
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_DDLTEXT.md
|
567e9ce698b9-10
|
----------------------------------------------------
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_DDLTEXT.md
|
567e9ce698b9-11
|
DROP TABLE IF EXISTS public.t_tx_trunc;
CREATE TABLE public.t_tx_trunc(a varchar);
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_DDLTEXT.md
|
567e9ce698b9-12
|
CREATE TABLE public.t_tx_trunc(a varchar);
CREATE OR REPLACE PROCEDURE public.sp_truncate_top_level()\nLANGUAGE plpgsql\nAS $$\nDECLARE\n row_cnt int;\nBEGIN\n INSERT INTO public.t_tx_trunc VALUES ('Insert in SP: Before Truncate 0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000');\n select count(*) into row_cnt from public.t_tx_trunc;\n RAISE INFO 'sp_truncate_top_level: RowCount after 1st Insert: %', row_cnt;\n truncate table public.t_tx_trunc;\n select count(*) into row_cnt from public.t_tx_trunc;\n RAISE INFO 'sp_truncate_top_level: RowCount After Truncate: %', row_cnt;\n INSERT INTO public.t_tx_trunc VALUES ('Insert 1 in SP: After Truncate');\n select count(*) into row_cnt from public.t_tx_trunc;\n RAISE INFO 'sp_truncate_top_level: RowCount after 2nd Insert: %', row_cnt;\n INSERT INTO public.t_tx_trunc VALUES ('Insert 2 in SP: After Truncate');\n select count(*) into row_cnt from public.t_tx_trunc;\n RAISE INFO 'sp_truncate_top_level: RowCount after 3rd Insert: %', row_cnt;\nEND\n$$;
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_DDLTEXT.md
|
567e9ce698b9-13
|
DROP PROCEDURE sp_truncate_top_level();
DROP TABLE IF EXISTS public.t_tx_trunc;
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_DDLTEXT.md
|
736f17d7786e-0
|
Returns the characters extracted from a string based on the specified character position for a specified number of characters\.
The character position and number of characters are based on the number of characters, not bytes, so that multi\-byte characters are counted as single characters\. You cannot specify a negative length, but you can specify a negative starting position\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SUBSTRING.md
|
f8492a5d9599-0
|
```
SUBSTRING(string FROM start_position [ FOR number_characters ] )
```
```
SUBSTRING(string, start_position, number_characters )
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SUBSTRING.md
|
555cc02cec94-0
|
*string*
The string to be searched\. Non\-character data types are treated like a string\.
*start\_position*
The position within the string to begin the extraction, starting at 1\. The *start\_position* is based on the number of characters, not bytes, so that multi\-byte characters are counted as single characters\. This number can be negative\.
*number\_characters*
The number of characters to extract \(the length of the substring\)\. The *number\_characters* is based on the number of characters, not bytes, so that multi\-byte characters are counted as single characters\. This number cannot be negative\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SUBSTRING.md
|
d66c732495e6-0
|
VARCHAR
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SUBSTRING.md
|
0ffc56861a70-0
|
The following example returns a four\-character string beginning with the sixth character\.
```
select substring('caterpillar',6,4);
substring
-----------
pill
(1 row)
```
If the *start\_position* \+ *number\_characters* exceeds the length of the *string*, SUBSTRING returns a substring starting from the *start\_position* until the end of the string\. For example:
```
select substring('caterpillar',6,8);
substring
-----------
pillar
(1 row)
```
If the `start_position` is negative or 0, the SUBSTRING function returns a substring beginning at the first character of string with a length of `start_position` \+ `number_characters` \-1\. For example:
```
select substring('caterpillar',-2,6);
substring
-----------
cat
(1 row)
```
If `start_position` \+ `number_characters` \-1 is less than or equal to zero, SUBSTRING returns an empty string\. For example:
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SUBSTRING.md
|
0ffc56861a70-1
|
```
select substring('caterpillar',-5,4);
substring
-----------
(1 row)
```
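The negative\-start rules above can be captured in a few lines\. This Python sketch is a model of the documented behavior \(1\-based positions, and an effective length of *start\_position* \+ *number\_characters* \- 1 when the start is zero or negative\), not Amazon Redshift's implementation:

```python
def substring(s: str, start: int, length: int) -> str:
    """Model of SUBSTRING(s, start, length) per the documented rules."""
    if start <= 0:
        # Result begins at character 1 with length start + length - 1;
        # empty when that effective length is zero or negative.
        eff = start + length - 1
        return s[:eff] if eff > 0 else ""
    # 1-based start position; clipping past the end returns the remainder
    return s[start - 1 : start - 1 + length]

print(substring("caterpillar", 6, 4))   # pill
print(substring("caterpillar", -2, 6))  # cat
```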
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SUBSTRING.md
|
09c316368910-0
|
The following example returns the month from the LISTTIME string in the LISTING table:
```
select listid, listtime,
substring(listtime, 6, 2) as month
from listing
order by 1, 2, 3
limit 10;
listid | listtime | month
--------+---------------------+-------
1 | 2008-01-24 06:43:29 | 01
2 | 2008-03-05 12:25:29 | 03
3 | 2008-11-01 07:35:33 | 11
4 | 2008-05-24 01:18:37 | 05
5 | 2008-05-17 02:29:11 | 05
6 | 2008-08-15 02:08:13 | 08
7 | 2008-11-15 09:38:15 | 11
8 | 2008-11-09 05:07:30 | 11
9 | 2008-09-09 08:03:36 | 09
10 | 2008-06-17 09:44:54 | 06
(10 rows)
```
The following example is the same as above, but uses the FROM\.\.\.FOR option:
```
select listid, listtime,
substring(listtime from 6 for 2) as month
from listing
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SUBSTRING.md
|
09c316368910-1
|
```
select listid, listtime,
substring(listtime from 6 for 2) as month
from listing
order by 1, 2, 3
limit 10;
listid | listtime | month
--------+---------------------+-------
1 | 2008-01-24 06:43:29 | 01
2 | 2008-03-05 12:25:29 | 03
3 | 2008-11-01 07:35:33 | 11
4 | 2008-05-24 01:18:37 | 05
5 | 2008-05-17 02:29:11 | 05
6 | 2008-08-15 02:08:13 | 08
7 | 2008-11-15 09:38:15 | 11
8 | 2008-11-09 05:07:30 | 11
9 | 2008-09-09 08:03:36 | 09
10 | 2008-06-17 09:44:54 | 06
(10 rows)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SUBSTRING.md
|
09c316368910-2
|
10 | 2008-06-17 09:44:54 | 06
(10 rows)
```
You cannot use SUBSTRING to predictably extract the prefix of a string that might contain multi\-byte characters because you need to specify the length of a multi\-byte string based on the number of bytes, not the number of characters\. To extract the beginning segment of a string based on the length in bytes, you can CAST the string as VARCHAR\(*byte\_length*\) to truncate the string, where *byte\_length* is the required length\. The following example extracts the first 5 bytes from the string `'Fourscore and seven'`\.
```
select cast('Fourscore and seven' as varchar(5));
varchar
-------
Fours
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SUBSTRING.md
|
cba25b37f194-0
|
Analyzes execution steps that occur when a DISTINCT function is used in the SELECT list or when duplicates are removed in a UNION or INTERSECT query\.
This view is visible to all users\. Superusers can see all rows; regular users can see only their own data\. For more information, see [Visibility of data in system tables and views](c_visibility-of-data.md)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_UNIQUE.md
|
763f905192ae-0
|
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_STL_UNIQUE.html)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_UNIQUE.md
|
b480b09b2bba-0
|
Suppose you execute the following query:
```
select distinct eventname
from event order by 1;
```
Assuming the ID for the previous query is 6313, the following example shows the number of rows produced by the unique step for each slice in segments 0 and 1\.
```
select query, slice, segment, step, datediff(msec, starttime, endtime) as msec, tasknum, rows
from stl_unique where query = 6313
order by query desc, slice, segment, step;
```
```
query | slice | segment | step | msec | tasknum | rows
-------+-------+---------+------+------+---------+------
6313 | 0 | 0 | 2 | 0 | 22 | 550
6313 | 0 | 1 | 1 | 256 | 20 | 145
6313 | 1 | 0 | 2 | 1 | 23 | 540
6313 | 1 | 1 | 1 | 42 | 21 | 127
6313 | 2 | 0 | 2 | 1 | 22 | 540
6313 | 2 | 1 | 1 | 255 | 20 | 158
6313 | 3 | 0 | 2 | 1 | 23 | 542
6313 | 3 | 1 | 1 | 38 | 21 | 146
(8 rows)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_UNIQUE.md
|
b480b09b2bba-1
|
6313 | 3 | 1 | 1 | 38 | 21 | 146
(8 rows)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_UNIQUE.md
|