1. Choose **Edit workload queues**\.
1. Choose **Add queue** twice to add two queues\. Now there are three queues: **Queue 1**, **Queue 2**, and **Default queue**\.
1. Enter information for each queue as follows:
+ For **Queue 1**, enter **30** for **Memory \(%\)**, **2** for **Concurrency on main**, and **test** for **Query groups**\. Leave the other settings with their default values\.
+ For **Queue 2**, enter **40** for **Memory \(%\)**, **3** for **Concurrency on main**, and **admin** for **User groups**\. Leave the other settings with their default values\.
+ Don't make any changes to the **Default queue**\. WLM assigns unallocated memory to the default queue\.
1. Choose **Save** to save your settings\.
Next, associate the parameter group that has the manual WLM configuration with a cluster\.
**To associate a parameter group that has a manual WLM configuration with a cluster**
1. Sign in to the AWS Management Console and open the Amazon Redshift console at [https://console\.aws\.amazon\.com/redshift/](https://console.aws.amazon.com/redshift/)\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-wlm-modifying-wlm-configuration.md
1. On the navigation menu, choose **CLUSTERS**, then choose **Clusters** to display a list of your clusters\.
1. Choose your cluster, such as `examplecluster`, and for **Actions** choose **Modify**\.
1. In the **Database configuration** section, choose the **wlmtutorial** parameter group that you created for **Parameter groups**\.
1. Choose **Modify cluster** to associate the parameter group\.
The cluster is modified with the changed parameter group\. However, you need to reboot the cluster for the changes to also be applied to the database\.
1. Choose your cluster, and then choose **Reboot cluster** for **Actions**\.
After the cluster is rebooted, its status returns to **Available**\.
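After the reboot, you can optionally confirm from a SQL client that the new queues are in effect\. The following query is a sketch that assumes the column names documented for the STV\_WLM\_SERVICE\_CLASS\_CONFIG system table; service classes 6 and higher correspond to user\-defined WLM queues\.
```
-- List the manual WLM queues and their concurrency slots.
-- Service classes 6 and higher are user-defined queues.
select service_class, num_query_tasks as slots, name
from stv_wlm_service_class_config
where service_class >= 6
order by service_class;
```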
The following sections show how to create a manual WLM configuration and associate it with a cluster\.
In this step, you create a new parameter group to use to configure WLM for this tutorial\.
1. Sign in to the AWS Management Console and open the Amazon Redshift console at [https://console\.aws\.amazon\.com/redshift/](https://console.aws.amazon.com/redshift/)\.
1. In the navigation pane, choose **Workload management**\.
1. Choose **Create parameter group**\.
1. In the **Create Cluster Parameter Group** dialog box, enter `wlmtutorial` for **Parameter group name** and enter `WLM tutorial` for **Description**\. You can leave the **Parameter group family** setting as is\. Then choose **Create**\.
In this step, you modify the default settings of your new parameter group\. You add two new query queues to the WLM configuration and specify different settings for each queue\.
1. On the **Parameter Groups** page of the Amazon Redshift console, choose `wlmtutorial`\. Doing this opens the **Parameters** page for `wlmtutorial`\.
1. Choose **Switch WLM mode**\. On the **WLM settings** page, choose **Manual WLM** and **Save**\.
1. Choose the **Workload Management** tab\. Choose **Add queue** twice to add two new queues to this WLM configuration\. Configure the queues with the following values:
+ For queue 1, enter `2` for **Concurrency on main**, `test` for **Query groups**, and `30` for **Memory \(%\)**\. Leave the other settings with their default values\.
+ For queue 2, enter `3` for **Concurrency on main**, `admin` for **User groups**, and `40` for **Memory \(%\)**\. Leave the other settings with their default values\.
+ Don't make any changes to the **Default queue**\. WLM assigns unallocated memory to the default queue\.
1. Choose **Save**\.
In this step, you open your sample cluster and associate it with the new parameter group\. After you do this, you reboot the cluster so that Amazon Redshift can apply the new settings to the database\.
1. In the navigation pane, choose **Clusters**, and then choose your cluster to open it\. If you are using the same cluster from *Amazon Redshift Getting Started*, your cluster is named `examplecluster`\.
1. On the **Configuration** tab, choose **Modify** for **Cluster**\.
1. In the **Modify Cluster** dialog box, choose `wlmtutorial` for **Cluster Parameter Group**, and then choose **Modify**\.
The statuses shown in the **Cluster Parameter Group** and **Parameter Group Apply Status** change from **in\-sync** to **applying**\.
After the new parameter group is applied to the cluster, the **Cluster Properties** and **Cluster Status** show the new parameter group that you associated with the cluster\. You need to reboot the cluster so that these settings can be applied to the database also\.
1. For **Cluster**, choose **Reboot**\. The status shown in **Cluster Status** changes from **available** to **rebooting**\. After the cluster is rebooted, the status returns to **available**\.
JSON\_EXTRACT\_ARRAY\_ELEMENT\_TEXT returns a JSON array element in the outermost array of a JSON string, using a zero\-based index\. The first element in an array is at position 0\. If the index is negative or out of bounds, JSON\_EXTRACT\_ARRAY\_ELEMENT\_TEXT returns an empty string\. If the *null\_if\_invalid* argument is set to `true` and the JSON string is invalid, the function returns NULL instead of returning an error\.
For more information, see [JSON functions](json-functions.md)\.
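For instance, an out\-of\-bounds index returns an empty string rather than an error \(a short sketch using a literal array\):
```
-- Index 5 is past the end of a three-element array,
-- so the result is an empty string, not an error.
select json_extract_array_element_text('[111,112,113]', 5);
```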
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/JSON_EXTRACT_ARRAY_ELEMENT_TEXT.md
```
json_extract_array_element_text('json string', pos [, null_if_invalid ] )
```
*json\_string*
A properly formatted JSON string\.
*pos*
An integer representing the index of the array element to be returned, using a zero\-based array index\.
*null\_if\_invalid*
A Boolean value that specifies whether to return NULL if the input JSON string is invalid instead of returning an error\. To return NULL if the JSON is invalid, specify `true` \(`t`\)\. To return an error if the JSON is invalid, specify `false` \(`f`\)\. The default is `false`\.
A VARCHAR string representing the JSON array element referenced by *pos*\.
The following example returns the array element at position 2, which is the third element of a zero\-based array index:
```
select json_extract_array_element_text('[111,112,113]', 2);
json_extract_array_element_text
-------------------------------
113
```
The following example returns an error because the JSON is invalid\.
```
select json_extract_array_element_text('["a",["b",1,["c",2,3,null,]]]',1);
An error occurred when executing the SQL command:
select json_extract_array_element_text('["a",["b",1,["c",2,3,null,]]]',1)
```
The following example sets *null\_if\_invalid* to *true*, so the statement returns NULL instead of returning an error for invalid JSON\.
```
select json_extract_array_element_text('["a",["b",1,["c",2,3,null,]]]',1,true);
json_extract_array_element_text
-------------------------------
```
Values \(default in bold\): **1**, 0 to 10
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_max_concurrency_scaling_clusters.md
Sets the maximum number of concurrency scaling clusters allowed when concurrency scaling is enabled\. Increase this value if more concurrency scaling is required\. Decrease this value to reduce the usage of concurrency scaling clusters and the resulting billing charges\.
The maximum number of concurrency scaling clusters is an adjustable quota\. For more information, see [Amazon Redshift quotas](https://docs.aws.amazon.com/redshift/latest/mgmt/amazon-redshift-limits.html#amazon-redshift-limits-quota) in the *Amazon Redshift Cluster Management Guide*\.
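For example, you can check the value currently in effect from a SQL client\. This sketch assumes the SHOW command accepts this parameter name; the value itself is normally changed through the cluster's parameter group\.
```
-- Display the current maximum number of concurrency scaling clusters.
show max_concurrency_scaling_clusters;
```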
The COUNT window function counts the rows defined by the expression\.
The COUNT function has two variations\. COUNT\(\*\) counts all the rows in the target table whether they include nulls or not\. COUNT\(expression\) computes the number of rows with non\-NULL values in a specific column or expression\.
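As a quick illustration of the difference, assume a hypothetical table `t` whose nullable column `col` holds the values 1, NULL, and 3:
```
-- count(*) counts every row; count(col) skips the NULL.
-- For the assumed data, all_rows is 3 and non_null_rows is 2.
select count(*) as all_rows, count(col) as non_null_rows
from t;
```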
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_WF_COUNT.md
```
COUNT ( * | [ ALL ] expression) OVER
(
[ PARTITION BY expr_list ]
[ ORDER BY order_list
frame_clause ]
)
```
*expression*
The target column or expression that the function operates on\.
ALL
With the argument ALL, the function retains all duplicate values from the expression for counting\. ALL is the default\. DISTINCT is not supported\.
OVER
Specifies the window clauses for the aggregation functions\. The OVER clause distinguishes window aggregation functions from normal set aggregation functions\.
PARTITION BY *expr\_list*
Defines the window for the COUNT function in terms of one or more expressions\.
ORDER BY *order\_list*
Sorts the rows within each partition\. If no PARTITION BY is specified, ORDER BY uses the entire table\.
*frame\_clause*
If an ORDER BY clause is used for an aggregate function, an explicit frame clause is required\. The frame clause refines the set of rows in a function's window, including or excluding sets of rows within the ordered result\. The frame clause consists of the ROWS keyword and associated specifiers\. See [Window function syntax summary](r_Window_function_synopsis.md)\.
The COUNT function supports all argument data types\.
The return type supported by the COUNT function is BIGINT\.
The following example shows the sales ID, quantity, and count of all rows from the beginning of the data window:
```
select salesid, qty,
count(*) over (order by salesid rows unbounded preceding) as count
from winsales
order by salesid;
salesid | qty | count
---------+-----+-----
10001 | 10 | 1
10005 | 30 | 2
10006 | 10 | 3
20001 | 20 | 4
20002 | 20 | 5
30001 | 10 | 6
30003 | 15 | 7
30004 | 20 | 8
30007 | 30 | 9
40001 | 40 | 10
40005 | 10 | 11
(11 rows)
```
For a description of the WINSALES table, see [Overview example for window functions](c_Window_functions.md#r_Window_function_example)\.
The following example shows the sales ID, quantity, and count of non\-null rows from the beginning of the data window\. \(In the WINSALES table, the QTY\_SHIPPED column contains some NULLs\.\)
```
select salesid, qty, qty_shipped,
count(qty_shipped)
over (order by salesid rows unbounded preceding) as count
from winsales
order by salesid;
salesid | qty | qty_shipped | count
---------+-----+-------------+-------
10001 | 10 | 10 | 1
10005 | 30 | | 1
10006 | 10 | | 1
20001 | 20 | 20 | 2
20002 | 20 | 20 | 3
30001 | 10 | 10 | 4
30003 | 15 | | 4
30004 | 20 | | 4
30007 | 30 | | 4
40001 | 40 | | 4
40005 | 10 | 10 | 5
(11 rows)
```
Displays a log that records when invalid UTF\-8 characters were replaced by the [COPY](r_COPY.md) command with the ACCEPTINVCHARS option\. A log entry is added to STL\_REPLACEMENTS for each of the first 100 rows on each node slice that required at least one replacement\.
This view is visible to all users\. Superusers can see all rows; regular users can see only their own data\. For more information, see [Visibility of data in system tables and views](c_visibility-of-data.md)\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_REPLACEMENTS.md
The following example returns replacements for the most recent COPY operation\.
```
select query, session, filename, line_number, colname
from stl_replacements
where query = pg_last_copy_id();
query | session | filename | line_number | colname
------+---------+-----------------------------------+-------------+--------
96 | 6314 | s3://mybucket/allusers_pipe.txt | 251 | city
96 | 6314 | s3://mybucket/allusers_pipe.txt | 317 | city
96 | 6314 | s3://mybucket/allusers_pipe.txt | 569 | city
96 | 6314 | s3://mybucket/allusers_pipe.txt | 623 | city
96 | 6314 | s3://mybucket/allusers_pipe.txt | 694 | city
...
```
Returns the natural logarithm of the input parameter\. Synonym of [DLOG1 function](r_DLOG1.md)\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_LN.md
```
LN(expression)
```
*expression*
The target column or expression that the function operates on\.
This function returns an error for some data types if the expression references an Amazon Redshift user\-created table or an Amazon Redshift STL or STV system table\.
Expressions with the following data types produce an error if they reference a user\-created or system table\. Expressions with these data types run exclusively on the leader node:
+ BOOLEAN
+ CHAR
+ DATE
+ DECIMAL or NUMERIC
+ TIMESTAMP
+ VARCHAR
Expressions with the following data types run successfully on user\-created tables and STL or STV system tables:
+ BIGINT
+ DOUBLE PRECISION
+ INTEGER
+ REAL
+ SMALLINT
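A common workaround, sketched here against the sample SALES table, is to cast a restricted type such as DECIMAL to DOUBLE PRECISION so that the expression can run on the compute nodes:
```
-- PRICEPAID is DECIMAL; casting to DOUBLE PRECISION lets LN
-- run against a user-created table.
select salesid, ln(pricepaid::double precision) as ln_price
from sales
order by salesid
limit 5;
```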
The LN function returns the same type as the expression\.
The following example returns the natural logarithm, or base e logarithm, of the number 2\.718281828:
```
select ln(2.718281828);
ln
--------------------
0.9999999998311267
(1 row)
```
Note that the answer is nearly equal to 1\.
This example returns the natural logarithm of the values in the USERID column in the USERS table:
```
select username, ln(userid) from users order by userid limit 10;
username | ln
----------+-------------------
JSG99FHE | 0
PGL08LJI | 0.693147180559945
IFT66TXU | 1.09861228866811
XDZ38RDD | 1.38629436111989
AEB55QTM | 1.6094379124341
NDQ15VBM | 1.79175946922805
OWY35QYB | 1.94591014905531
AZG78YIP | 2.07944154167984
MSD36KVR | 2.19722457733622
WKW41AIW | 2.30258509299405
(10 rows)
```
TO\_TIMESTAMP converts a TIMESTAMP string to TIMESTAMPTZ\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_TO_TIMESTAMP.md
```
to_timestamp ('timestamp', 'format')
```
*timestamp*
A string that represents a time stamp value in the format specified by *format*\.
*format*
The format for the *timestamp* value\. Formats that include a time zone \(**TZ**, **tz**, or **OF**\) are not supported as input\. For valid time stamp formats, see [Datetime format strings](r_FORMAT_strings.md)\.
The return type is TIMESTAMPTZ\.
The following example demonstrates using the TO\_TIMESTAMP function to convert a TIMESTAMP string to a TIMESTAMPTZ\.
```
select sysdate,
to_timestamp (sysdate, 'HH24:MI:SS') as seconds;
timestamp |seconds
-------------------|----------------------
2018-05-17 23:54:51|0001-03-24 18:05:17.0
```
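A more typical call supplies a date\-and\-time string together with a format that matches it\. The following is a sketch; the session time zone determines the offset in the TIMESTAMPTZ result\.
```
-- Convert a full timestamp string using a matching format.
select to_timestamp('2017-05-01 11:30:59', 'YYYY-MM-DD HH24:MI:SS');
```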
You can often significantly improve query performance by using an interleaved sort style, but over time performance might degrade if the distribution of the values in the sort key columns changes\.
When you initially load an empty interleaved table using COPY or CREATE TABLE AS, Amazon Redshift automatically builds the interleaved index\. If you initially load an interleaved table using INSERT, you need to run VACUUM REINDEX afterwards to initialize the interleaved index\.
Over time, as you add rows with new sort key values, performance might degrade if the distribution of the values in the sort key columns changes\. If your new rows fall primarily within the range of existing sort key values, you don’t need to reindex\. Run VACUUM SORT ONLY or VACUUM FULL to restore the sort order\.
The query engine is able to use sort order to efficiently select which data blocks need to be scanned to process a query\. For an interleaved sort, Amazon Redshift analyzes the sort key column values to determine the optimal sort order\. If the distribution of key values changes, or skews, as rows are added, the sort strategy will no longer be optimal, and the performance benefit of sorting will degrade\. To reanalyze the sort key distribution you can run a VACUUM REINDEX\. The reindex operation is time consuming, so to decide whether a table will benefit from a reindex, query the [SVV\_INTERLEAVED\_COLUMNS](r_SVV_INTERLEAVED_COLUMNS.md) view\.
For example, the following query shows details for tables that use interleaved sort keys\.
```
select tbl as tbl_id, stv_tbl_perm.name as table_name,
col, interleaved_skew, last_reindex
from svv_interleaved_columns, stv_tbl_perm
where svv_interleaved_columns.tbl = stv_tbl_perm.id
and interleaved_skew is not null;
tbl_id | table_name | col | interleaved_skew | last_reindex
--------+------------+-----+------------------+--------------------
100048 | customer | 0 | 3.65 | 2015-04-22 22:05:45
100068 | lineorder | 1 | 2.65 | 2015-04-22 22:05:45
100072 | part | 0 | 1.65 | 2015-04-22 22:05:45
100077 | supplier | 1 | 1.00 | 2015-04-22 22:05:45
(4 rows)
```
The value for `interleaved_skew` is a ratio that indicates the amount of skew\. A value of 1 means there is no skew\. If the skew is greater than 1\.4, a VACUUM REINDEX will usually improve performance unless the skew is inherent in the underlying set\.
You can use the date value in `last_reindex` to determine how long it has been since the last reindex\.
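Putting these checks together, the following sketch first filters SVV\_INTERLEAVED\_COLUMNS for tables whose skew suggests a reindex, and then reindexes one such table \(`customer` here is just an example table name\):
```
-- Find interleaved tables whose skew suggests a reindex.
select tbl, col, interleaved_skew
from svv_interleaved_columns
where interleaved_skew > 1.4;

-- Then reindex a table identified above.
vacuum reindex customer;
```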
The following pseudo\-code examples demonstrate how transactions either proceed or wait when they are run concurrently\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_Serializable_isolation_example.md
Transaction 1 copies rows into the LISTING table:
```
begin;
copy listing from ...;
end;
```
Transaction 2 starts concurrently in a separate session and attempts to copy more rows into the LISTING table\. Transaction 2 must wait until transaction 1 releases the write lock on the LISTING table before it can proceed\.
```
begin;
[waits]
copy listing from ...;
end;
```
The same behavior would occur if one or both transactions contained an INSERT command instead of a COPY command\.
Transaction 1 deletes rows from a table:
```
begin;
delete from listing where ...;
end;
```
Transaction 2 starts concurrently and attempts to delete rows from the same table\. It will succeed because it waits for transaction 1 to complete before attempting to delete rows\.
```
begin;
[waits]
delete from listing where ...;
end;
```
The same behavior would occur if one or both transactions contained an UPDATE command to the same table instead of a DELETE command\.
In this example, transaction 1 deletes rows from the USERS table, reloads the table, runs a COUNT\(\*\) query, and then ANALYZE, before committing:
```
begin;
delete one row from USERS table;
copy ;
select count(*) from users;
analyze ;
end;
```
Meanwhile, transaction 2 starts\. This transaction attempts to copy additional rows into the USERS table, analyze the table, and then run the same COUNT\(\*\) query as the first transaction:
```
begin;
[waits]
copy users from ...;
select count(*) from users;
analyze;
end;
```
The second transaction will succeed because it must wait for the first to complete\. Its COUNT query will return the count based on the load it has completed\.
Use SVV\_EXTERNAL\_TABLES to view details for external tables\. For more information, see [CREATE EXTERNAL SCHEMA](r_CREATE_EXTERNAL_SCHEMA.md)\.
SVV\_EXTERNAL\_TABLES is visible to all users\. Superusers can see all rows; regular users can see only metadata to which they have access\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVV_EXTERNAL_TABLES.md
The following example shows details of SVV\_EXTERNAL\_TABLES with a predicate on the external schema used by a federated query\.
```
select schemaname, tablename from svv_external_tables where schemaname = 'apg_tpch';
schemaname | tablename
------------+-----------
apg_tpch | customer
apg_tpch | lineitem
apg_tpch | nation
apg_tpch | orders
apg_tpch | part
apg_tpch | partsupp
apg_tpch | region
apg_tpch | supplier
(8 rows)
```
TO\_CHAR converts a time stamp or numeric expression to a character\-string data format\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_TO_CHAR.md
```
TO_CHAR (timestamp_expression | numeric_expression , 'format')
```
*timestamp\_expression*
An expression that results in a TIMESTAMP or TIMESTAMPTZ type value or a value that can implicitly be coerced to a time stamp\.
*numeric\_expression*
An expression that results in a numeric data type value or a value that can implicitly be coerced to a numeric type\. For more information, see [Numeric types](r_Numeric_types201.md)\. TO\_CHAR inserts a space to the left of the numeral string\.
TO\_CHAR does not support 128\-bit DECIMAL values\.
*format*
The format for the new value\. For valid formats, see [Datetime format strings](r_FORMAT_strings.md) and [Numeric format strings](r_Numeric_formating.md)\.
The return type is VARCHAR\.
The following example converts each STARTTIME value in the EVENT table to a string that consists of hours, minutes, and seconds\.
```
select to_char(starttime, 'HH12:MI:SS')
from event where eventid between 1 and 5
order by eventid;
to_char
----------
02:30:00
08:00:00
02:30:00
02:30:00
07:00:00
(5 rows)
```
The following example converts an entire time stamp value into a different format\.
```
select starttime, to_char(starttime, 'MON-DD-YYYY HH12:MIPM')
from event where eventid=1;
starttime | to_char
---------------------+---------------------
2008-01-25 14:30:00 | JAN-25-2008 02:30PM
(1 row)
```
The following example converts a time stamp literal to a character string\.
```
select to_char(timestamp '2009-12-31 23:15:59','HH24:MI:SS');
to_char
----------
23:15:59
(1 row)
```
The following example converts a number to a character string\.
```
select to_char(-125.8, '999D99S');
to_char
---------
125.80-
(1 row)
```
The following example subtracts the commission from the price paid in the sales table\. The difference is then rounded up and converted to a roman numeral, shown in the to\_char column:
```
select salesid, pricepaid, commission, (pricepaid - commission)
as difference, to_char(pricepaid - commission, 'rn') from sales
group by sales.pricepaid, sales.commission, salesid
order by salesid limit 10;
salesid | pricepaid | commission | difference | to_char
---------+-----------+------------+------------+-----------------
1 | 728.00 | 109.20 | 618.80 | dcxix
2 | 76.00 | 11.40 | 64.60 | lxv
3 | 350.00 | 52.50 | 297.50 | ccxcviii
4 | 175.00 | 26.25 | 148.75 | cxlix
5 | 154.00 | 23.10 | 130.90 | cxxxi
6 | 394.00 | 59.10 | 334.90 | cccxxxv
7 | 788.00 | 118.20 | 669.80 | dclxx
8 | 197.00 | 29.55 | 167.45 | clxvii
9 | 591.00 | 88.65 | 502.35 | dii
10 | 65.00 | 9.75 | 55.25 | lv
(10 rows)
```
The following example adds the currency symbol to the difference values shown in the to\_char column:
```
select salesid, pricepaid, commission, (pricepaid - commission)
as difference, to_char(pricepaid - commission, 'l99999D99') from sales
group by sales.pricepaid, sales.commission, salesid
order by salesid limit 10;
salesid | pricepaid | commission | difference | to_char
--------+-----------+------------+------------+------------
1 | 728.00 | 109.20 | 618.80 | $ 618.80
2 | 76.00 | 11.40 | 64.60 | $ 64.60
3 | 350.00 | 52.50 | 297.50 | $ 297.50
4 | 175.00 | 26.25 | 148.75 | $ 148.75
5 | 154.00 | 23.10 | 130.90 | $ 130.90
6 | 394.00 | 59.10 | 334.90 | $ 334.90
7 | 788.00 | 118.20 | 669.80 | $ 669.80
8 | 197.00 | 29.55 | 167.45 | $ 167.45
9 | 591.00 | 88.65 | 502.35 | $ 502.35
10 | 65.00 | 9.75 | 55.25 | $ 55.25
(10 rows)
```
The following example lists the century in which each sale was made\.
```
select salesid, saletime, to_char(saletime, 'cc') from sales
order by salesid limit 10;
salesid | saletime | to_char
---------+---------------------+---------
1 | 2008-02-18 02:36:48 | 21
2 | 2008-06-06 05:00:16 | 21
3 | 2008-06-06 08:26:17 | 21
4 | 2008-06-09 08:38:52 | 21
5 | 2008-08-31 09:17:02 | 21
6 | 2008-07-16 11:59:24 | 21
7 | 2008-06-26 12:56:06 | 21
8 | 2008-07-10 02:12:36 | 21
9 | 2008-07-22 02:23:17 | 21
10 | 2008-08-06 02:51:55 | 21
(10 rows)
```
The following example converts each STARTTIME value in the EVENT table to a string that consists of hours, minutes, seconds, and time zone\.
```
select to_char(starttime, 'HH12:MI:SS TZ')
from event where eventid between 1 and 5
order by eventid;
to_char
----------
02:30:00 UTC
08:00:00 UTC
02:30:00 UTC
02:30:00 UTC
07:00:00 UTC
(5 rows)
```
An IN condition tests a value for membership in a set of values or in a subquery\.
```
expression [ NOT ] IN (expr_list | table_subquery)
```
*expression*
A numeric, character, or datetime expression that is evaluated against the *expr\_list* or *table\_subquery* and must be compatible with the data type of that list or subquery\.
*expr\_list*
One or more comma\-delimited expressions, or one or more sets of comma\-delimited expressions bounded by parentheses\.
*table\_subquery*
A subquery that evaluates to a table with one or more rows, but is limited to only one column in its select list\.
IN \| NOT IN
IN returns true if the expression is a member of the expression list or query\. NOT IN returns true if the expression is not a member\. IN and NOT IN return NULL, and no rows are returned, in the following cases: if *expression* yields null, or if there are no matching *expr\_list* or *table\_subquery* values and at least one of the comparison rows yields null\.
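The null behavior is easy to see with literal lists\. The following is a sketch; the values are arbitrary\.

```
select 2 in (1, 2, null);   -- true: a list member matches

select 5 in (1, 2, null);   -- NULL, not false: no member matches and one
                            -- comparison row is null, so a WHERE clause
                            -- built on this condition returns no rows
```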
The following conditions are true only for those values listed:
```
qtysold in (2, 4, 5)
date.day in ('Mon', 'Tues')
date.month not in ('Oct', 'Nov', 'Dec')
```
To optimize query performance, an IN list that includes more than 10 values is internally evaluated as a scalar array\. IN lists with fewer than 10 values are evaluated as a series of OR predicates\. This optimization is supported for SMALLINT, INTEGER, BIGINT, REAL, DOUBLE PRECISION, BOOLEAN, CHAR, VARCHAR, DATE, TIMESTAMP, and TIMESTAMPTZ data types\.
Look at the EXPLAIN output for the query to see the effect of this optimization\. For example:
```
explain select * from sales
where salesid in (1,2,3,4,5,6,7,8,9,10,11);
QUERY PLAN
--------------------------------------------------------------------
XN Seq Scan on sales (cost=0.00..6035.96 rows=86228 width=53)
Filter: (salesid = ANY ('{1,2,3,4,5,6,7,8,9,10,11}'::integer[]))
(2 rows)
```
The unsorted region grows when you load large amounts of new data into tables that already contain data or when you do not vacuum tables as part of your routine maintenance operations\. To avoid long\-running vacuum operations, use the following practices:
+ Run vacuum operations on a regular schedule\.
If you load your tables in small increments \(such as daily updates that represent a small percentage of the total number of rows in the table\), running VACUUM regularly will help ensure that individual vacuum operations go quickly\.
+ Run the largest load first\.
If you need to load a new table with multiple COPY operations, run the largest load first\. When you run an initial load into a new or truncated table, all of the data is loaded directly into the sorted region, so no vacuum is required\.
+ Truncate a table instead of deleting all of the rows\.
Deleting rows from a table does not reclaim the space that the rows occupied until you perform a vacuum operation; however, truncating a table empties the table and reclaims the disk space, so no vacuum is required\. Alternatively, drop the table and re\-create it\.
+ Truncate or drop test tables\.
If you are loading a small number of rows into a table for test purposes, don't delete the rows when you are done\. Instead, truncate the table and reload those rows as part of the subsequent production load operation\.
+ Perform a deep copy\.
If a table that uses a compound sort key has a large unsorted region, a deep copy is much faster than a vacuum\. A deep copy recreates and repopulates a table by using a bulk insert, which automatically re\-sorts the table\. The trade\-off is that you cannot make concurrent updates during a deep copy operation, which you can do during a vacuum\. For more information, see [Amazon Redshift best practices for designing queries](c_designing-queries-best-practices.md)\.
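A minimal deep\-copy sketch, assuming a hypothetical `sales_history` table\. CREATE TABLE LIKE inherits the column definitions of the parent table; verify that the sort and distribution properties of the copy match your needs before swapping\.

```
-- recreate the table structure from the existing table
create table sales_history_copy (like sales_history);

-- the bulk insert re-sorts the data as it loads, so no vacuum is needed
insert into sales_history_copy (select * from sales_history);

-- swap the tables
drop table sales_history;
alter table sales_history_copy rename to sales_history;
```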
Analyzes query steps that parse strings into binary values for loading\.
This view is visible to all users\. Superusers can see all rows; regular users can see only their own data\. For more information, see [Visibility of data in system tables and views](c_visibility-of-data.md)\.
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_STL_PARSE.html)
The following example returns all query step results for slice 1 and segment 0 where strings were parsed into binary values\.
```
select query, step, starttime, endtime, tasknum, rows
from stl_parse
where slice=1 and segment=0;
```
```
query | step | starttime | endtime | tasknum | rows
-------+------+---------------------+---------------------+---------+--------
669 | 1 | 2013-08-12 22:35:13 | 2013-08-12 22:35:17 | 32 | 192497
696 | 1 | 2013-08-12 22:35:49 | 2013-08-12 22:35:49 | 32 | 0
525 | 1 | 2013-08-12 22:32:03 | 2013-08-12 22:32:03 | 13 | 49990
585 | 1 | 2013-08-12 22:33:18 | 2013-08-12 22:33:19 | 13 | 202
621 | 1 | 2013-08-12 22:34:03 | 2013-08-12 22:34:03 | 27 | 365
651 | 1 | 2013-08-12 22:34:47 | 2013-08-12 22:34:53 | 35 | 192497
590 | 1 | 2013-08-12 22:33:28 | 2013-08-12 22:33:28 | 19 | 0
599 | 1 | 2013-08-12 22:33:39 | 2013-08-12 22:33:39 | 31 | 11
675 | 1 | 2013-08-12 22:35:26 | 2013-08-12 22:35:27 | 38 | 3766
567 | 1 | 2013-08-12 22:32:47 | 2013-08-12 22:32:48 | 23 | 49990
630 | 1 | 2013-08-12 22:34:17 | 2013-08-12 22:34:17 | 36 | 0
572 | 1 | 2013-08-12 22:33:04 | 2013-08-12 22:33:04 | 29 | 0
645 | 1 | 2013-08-12 22:34:37 | 2013-08-12 22:34:38 | 29 | 8798
604 | 1 | 2013-08-12 22:33:47 | 2013-08-12 22:33:47 | 37 | 0
(14 rows)
```
STV\_CURSOR\_CONFIGURATION displays cursor configuration constraints\. For more information, see [Cursor constraints](declare.md#declare-constraints)\.
STV\_CURSOR\_CONFIGURATION is visible only to superusers\. For more information, see [Visibility of data in system tables and views](c_visibility-of-data.md)\.
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_STV_CURSOR_CONFIGURATION.html)
The COPY command loads data in parallel from Amazon S3, Amazon EMR, Amazon DynamoDB, or multiple data sources on remote hosts\. COPY loads large amounts of data much more efficiently than using INSERT statements, and stores the data more effectively as well\.
For more information about using the COPY command, see [Loading data from Amazon S3](t_Loading-data-from-S3.md) and [Loading data from an Amazon DynamoDB table](t_Loading-data-from-dynamodb.md)\.
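For example, a minimal COPY from Amazon S3 using role\-based access control might look like the following sketch\. The bucket, prefix, and role names are hypothetical\.

```
-- load pipe-delimited files under the given prefix into the venue table,
-- authorizing through an IAM role attached to the cluster
copy venue
from 's3://mybucket/tickit/venue/'
iam_role 'arn:aws:iam::123456789012:role/MyRedshiftRole'
delimiter '|';
```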
The COPY command loads the data in parallel from multiple files, dividing the workload among the nodes in your cluster\. When you load all the data from a single large file, Amazon Redshift is forced to perform a serialized load, which is much slower\. Split your load data files so that the files are about equal size, between 1 MB and 1 GB after compression\. For optimum parallelism, the ideal size is between 1 MB and 125 MB after compression\. The number of files should be a multiple of the number of slices in your cluster\. For more information about how to split your data into files and examples of using COPY to load data, see [Loading data from Amazon S3](t_Loading-data-from-S3.md)\.
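As a sketch, splitting a load file into parts that share a key prefix lets one COPY statement load them all in parallel\. The file and role names below are hypothetical\.

```
-- venue.txt.1 through venue.txt.4 share the key prefix 'venue.txt',
-- so this single COPY loads all four files in parallel
copy venue
from 's3://mybucket/tickit/venue.txt'
iam_role 'arn:aws:iam::123456789012:role/MyRedshiftRole'
delimiter '|';
```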
**Topics**
+ [CATEGORY table](r_categorytable.md)
+ [DATE table](r_datetable.md)
+ [EVENT table](r_eventtable.md)
+ [VENUE table](r_venuetable.md)
+ [USERS table](r_userstable.md)
+ [LISTING table](r_listingtable.md)
+ [SALES table](r_salestable.md)
Most of the examples in the Amazon Redshift documentation use a sample database called TICKIT\. This small database consists of seven tables: two fact tables and five dimensions\. You can load the TICKIT dataset by following the steps in [Step 6: Load sample data from Amazon S3](https://docs.aws.amazon.com/redshift/latest/gsg/rs-gsg-create-sample-db.html) in the Amazon Redshift Getting Started Guide\.
![\[Image NOT FOUND\]](http://docs.aws.amazon.com/redshift/latest/dg/images/tickitdb.png)
This sample database application helps analysts track sales activity for the fictional TICKIT web site, where users buy and sell tickets online for sporting events, shows, and concerts\. In particular, analysts can identify ticket movement over time, success rates for sellers, and the best\-selling events, venues, and seasons\. Analysts can use this information to provide incentives to buyers and sellers who frequent the site, to attract new users, and to drive advertising and promotions\.
For example, the following query finds the top five sellers in San Diego, based on the number of tickets sold in 2008:
```
select sellerid, username, (firstname ||' '|| lastname) as name,
city, sum(qtysold)
from sales, date, users
where sales.sellerid = users.userid
and sales.dateid = date.dateid
and year = 2008
and city = 'San Diego'
group by sellerid, username, name, city
order by 5 desc
limit 5;
sellerid | username | name | city | sum
----------+----------+-------------------+-----------+-----
49977 | JJK84WTE | Julie Hanson | San Diego | 22
19750 | AAS23BDR | Charity Zimmerman | San Diego | 21
29069 | SVL81MEQ | Axel Grant | San Diego | 17
43632 | VAG08HKW | Griffin Dodson | San Diego | 16
36712 | RXT40MKU | Hiram Turner | San Diego | 14
(5 rows)
```
The database used for the examples in this guide contains a small data set; the two fact tables each contain fewer than 200,000 rows, and the dimensions range from 11 rows in the CATEGORY table up to about 50,000 rows in the USERS table\.
In particular, the database examples in this guide demonstrate the key features of Amazon Redshift table design:
+ Data distribution
+ Data sort
+ Columnar compression
Values \(default in bold\): **off \(false\)**, on \(true\)
Specifies whether column names returned by SELECT statements are uppercase or lowercase\. If on, column names are returned in uppercase\. If off, column names are returned in lowercase\. Amazon Redshift stores column names in lowercase regardless of the setting for `describe_field_name_in_uppercase`\.
```
set describe_field_name_in_uppercase to on;
show describe_field_name_in_uppercase;
DESCRIBE_FIELD_NAME_IN_UPPERCASE
--------------------------------
on
```
If your data has a fixed retention period, you can organize your data as a sequence of time\-series tables\. In such a sequence, each table is identical but contains data for different time ranges\.
You can easily remove old data simply by running a DROP TABLE command on the corresponding tables\. This approach is much faster than running a large\-scale DELETE process and saves you from having to run a subsequent VACUUM process to reclaim space\. To hide the fact that the data is stored in different tables, you can create a UNION ALL view\. When you delete old data, simply refine your UNION ALL view to remove the dropped tables\. Similarly, as you load new time periods into new tables, add the new tables to the view\. To signal the optimizer to skip the scan on tables that don't match the query filter, your view definition filters for the date range that corresponds to each table\.
Avoid having too many tables in the UNION ALL view\. Each additional table adds a small processing time to the query\. Tables don't need to use the same time frame\. For example, you might have tables for differing time periods, such as daily, monthly, and yearly\.
If you use time\-series tables with a timestamp column for the sort key, you effectively load your data in sort key order\. Doing this eliminates the need to vacuum to re\-sort the data\. For more information, see [Loading your data in sort key order](vacuum-load-in-sort-key-order.md)\.
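The pattern above can be sketched with hypothetical monthly tables `sales_202301` through `sales_202303`, each holding one month of data with a `saletime` sort key\.

```
-- each branch filters for its table's date range, which signals the
-- optimizer to skip tables that don't match the query filter
create view sales_ts as
select * from sales_202301 where saletime >= '2023-01-01' and saletime < '2023-02-01'
union all
select * from sales_202302 where saletime >= '2023-02-01' and saletime < '2023-03-01'
union all
select * from sales_202303 where saletime >= '2023-03-01' and saletime < '2023-04-01';

-- to retire the oldest month, first redefine the view without that
-- branch, then drop the table
create or replace view sales_ts as
select * from sales_202302 where saletime >= '2023-02-01' and saletime < '2023-03-01'
union all
select * from sales_202303 where saletime >= '2023-03-01' and saletime < '2023-04-01';

drop table sales_202301;
```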
The COPY command needs authorization to access data in another AWS resource, including in Amazon S3, Amazon EMR, Amazon DynamoDB, and Amazon EC2\. You can provide that authorization by referencing an [AWS Identity and Access Management \(IAM\) role](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html) that is attached to your cluster \(*role\-based access control*\) or by providing the access credentials for an IAM user \(*key\-based access control*\)\. For increased security and flexibility, we recommend using IAM role\-based access control\. COPY can also use temporary credentials to limit access to your load data, and you can encrypt your load data on Amazon S3\.
The following topics provide more details and examples of authentication options:
+ [IAM permissions for COPY, UNLOAD, and CREATE LIBRARY](copy-usage_notes-access-permissions.md#copy-usage_notes-iam-permissions)
+ [Role\-based access control](copy-usage_notes-access-permissions.md#copy-usage_notes-access-role-based)
+ [Key\-based access control](copy-usage_notes-access-permissions.md#copy-usage_notes-access-key-based)
Use one of the following to provide authorization for the COPY command:
+ [IAM_ROLE](#copy-iam-role) parameter
+ [ACCESS_KEY_ID and SECRET_ACCESS_KEY](#copy-access-key-id) parameters
+ [CREDENTIALS](#copy-credentials) clause<a name="copy-authorization-parameters-list"></a>
IAM\_ROLE '*iam\-role\-arn*' <a name="copy-iam-role"></a>
The Amazon Resource Name \(ARN\) for an IAM role that your cluster uses for authentication and authorization\. If you specify IAM\_ROLE, you can't use ACCESS\_KEY\_ID and SECRET\_ACCESS\_KEY, SESSION\_TOKEN, or CREDENTIALS\.
The following shows the syntax for the IAM\_ROLE parameter\.
```
IAM_ROLE 'arn:aws:iam::<aws-account-id>:role/<role-name>'
```
For more information, see [Role\-based access control](copy-usage_notes-access-permissions.md#copy-usage_notes-access-role-based)\.
ACCESS\_KEY\_ID '*access\-key\-id*' SECRET\_ACCESS\_KEY '*secret\-access\-key*' <a name="copy-access-key-id"></a>
The access key ID and secret access key for an IAM user that is authorized to access the AWS resources that contain the data\. ACCESS\_KEY\_ID and SECRET\_ACCESS\_KEY must be used together\. Optionally, you can provide temporary access credentials and also specify the [SESSION_TOKEN](#copy-token) parameter\.
The following shows the syntax for the ACCESS\_KEY\_ID and SECRET\_ACCESS\_KEY parameters\.
```
ACCESS_KEY_ID '<access-key-id>'
SECRET_ACCESS_KEY '<secret-access-key>';
```
For more information, see [Key\-based access control](copy-usage_notes-access-permissions.md#copy-usage_notes-access-key-based)\.
If you specify ACCESS\_KEY\_ID and SECRET\_ACCESS\_KEY, you can't use IAM\_ROLE or CREDENTIALS\.
Instead of providing access credentials as plain text, we strongly recommend using role\-based authentication by specifying the IAM\_ROLE parameter\. For more information, see [Role\-based access control](copy-usage_notes-access-permissions.md#copy-usage_notes-access-role-based)\.
SESSION\_TOKEN '*temporary\-token*' <a name="copy-token"></a>
The session token for use with temporary access credentials\. When SESSION\_TOKEN is specified, you must also use ACCESS\_KEY\_ID and SECRET\_ACCESS\_KEY to provide temporary access key credentials\. If you specify SESSION\_TOKEN you can't use IAM\_ROLE or CREDENTIALS\. For more information, see [Temporary security credentials](copy-usage_notes-access-permissions.md#r_copy-temporary-security-credentials) in the IAM User Guide\.
Instead of creating temporary security credentials, we strongly recommend using role\-based authentication\. When you authorize using an IAM role, Amazon Redshift automatically creates temporary user credentials for each session\. For more information, see [Role\-based access control](copy-usage_notes-access-permissions.md#copy-usage_notes-access-role-based)\.
The following shows the syntax for the SESSION\_TOKEN parameter with the ACCESS\_KEY\_ID and SECRET\_ACCESS\_KEY parameters\.
```
ACCESS_KEY_ID '<access-key-id>'
SECRET_ACCESS_KEY '<secret-access-key>'
SESSION_TOKEN '<temporary-token>';
```
If you specify SESSION\_TOKEN you can't use CREDENTIALS or IAM\_ROLE\.
\[WITH\] CREDENTIALS \[AS\] '*credentials\-args*' <a name="copy-credentials"></a>
A clause that indicates the method your cluster will use when accessing other AWS resources that contain data files or manifest files\. You can't use the CREDENTIALS parameter with IAM\_ROLE or ACCESS\_KEY\_ID and SECRET\_ACCESS\_KEY\.
For increased flexibility, we recommend using the [IAM_ROLE](#copy-iam-role) or [ACCESS_KEY_ID and SECRET_ACCESS_KEY](#copy-access-key-id) parameters instead of the CREDENTIALS parameter\.
Optionally, if the [ENCRYPTED](copy-parameters-data-source-s3.md#copy-encrypted) parameter is used, the *credentials\-args* string also provides the encryption key\.
The *credentials\-args* string is case\-sensitive and must not contain spaces\.
The keywords WITH and AS are optional and are ignored\.
You can specify either [role-based access control](copy-usage_notes-access-permissions.md#copy-usage_notes-access-role-based.phrase) or [key-based access control](copy-usage_notes-access-permissions.md#copy-usage_notes-access-key-based.phrase)\. In either case, the IAM role or IAM user must have the permissions required to access the specified AWS resources\. For more information, see [IAM permissions for COPY, UNLOAD, and CREATE LIBRARY](copy-usage_notes-access-permissions.md#copy-usage_notes-iam-permissions)\.
To safeguard your AWS credentials and protect sensitive data, we strongly recommend using role\-based access control\.
To specify role\-based access control, provide the *credentials\-args* string in the following format\.
```
'aws_iam_role=arn:aws:iam::<aws-account-id>:role/<role-name>'
```
To specify key\-based access control, provide the *credentials\-args* in the following format\.
```
'aws_access_key_id=<access-key-id>;aws_secret_access_key=<secret-access-key>'
```
To use temporary token credentials, you must provide the temporary access key ID, the temporary secret access key, and the temporary token\. The *credentials\-args* string is in the following format\.
```
CREDENTIALS
'aws_access_key_id=<temporary-access-key-id>;aws_secret_access_key=<temporary-secret-access-key>;token=<temporary-token>'
```
For more information, see [Temporary security credentials](copy-usage_notes-access-permissions.md#r_copy-temporary-security-credentials)\.
If the [ENCRYPTED](copy-parameters-data-source-s3.md#copy-encrypted) parameter is used, the *credentials\-args* string is in the following format, where *<master\-key>* is the value of the master key that was used to encrypt the files\.
```
CREDENTIALS
'<credentials-args>;master_symmetric_key=<master-key>'
```
For example, the following COPY command uses role\-based access control with an encryption key\.
```
copy customer from 's3://mybucket/mydata'
credentials
'aws_iam_role=arn:aws:iam::<account-id>:role/<role-name>;master_symmetric_key=<master-key>'
```
The following COPY command shows key\-based access control with an encryption key\.
```
copy customer from 's3://mybucket/mydata'
credentials
'aws_access_key_id=<access-key-id>;aws_secret_access_key=<secret-access-key>;master_symmetric_key=<master-key>'
```
The LIKE operator compares a string expression, such as a column name, with a pattern that uses the wildcard characters % \(percent\) and \_ \(underscore\)\. LIKE pattern matching always covers the entire string\. To match a sequence anywhere within a string, the pattern must start and end with a percent sign\.
LIKE is case\-sensitive; ILIKE is case\-insensitive\.
```
expression [ NOT ] LIKE | ILIKE pattern [ ESCAPE 'escape_char' ]
```
*expression*
A valid UTF\-8 character expression, such as a column name\.
LIKE \| ILIKE
LIKE performs a case\-sensitive pattern match\. ILIKE performs a case\-insensitive pattern match for single\-byte UTF\-8 \(ASCII\) characters\. To perform a case\-insensitive pattern match for multibyte characters, use the [LOWER](r_LOWER.md) function on *expression* and *pattern* with a LIKE condition\.
In contrast to comparison predicates, such as = and <>, LIKE and ILIKE predicates do not implicitly ignore trailing spaces\. To ignore trailing spaces, use RTRIM or explicitly cast a CHAR column to VARCHAR\.
*pattern*
A valid UTF\-8 character expression with the pattern to be matched\.
*escape\_char*
A character expression that escapes metacharacters in the pattern\. The default is two backslashes \('\\\\'\)\.
If *pattern* does not contain metacharacters, then the pattern only represents the string itself; in that case LIKE acts the same as the equals operator\.
Either of the character expressions can be CHAR or VARCHAR data types\. If they differ, Amazon Redshift converts *pattern* to the data type of *expression*\.
LIKE supports the following pattern\-matching metacharacters:
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_patternmatching_condition_like.html)
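As a sketch of the LOWER technique mentioned above, the following matches city names case\-insensitively in a way that also works for multibyte characters \(the table and column come from the TICKIT sample database\)\.

```
-- lowercase both the expression and the pattern before comparing
select distinct city from users
where lower(city) like lower('%EA%')
order by city;
```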
The following table shows examples of pattern matching using LIKE:
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_patternmatching_condition_like.html)
The following example finds all cities whose names start with "E":
```
select distinct city from users
where city like 'E%' order by city;
city
---------------
East Hartford
East Lansing
East Rutherford
East St. Louis
Easthampton
Easton
Eatontown
Eau Claire
...
```
The following example finds users whose last name contains "ten" :
```
select distinct lastname from users
where lastname like '%ten%' order by lastname;
lastname
-------------
Christensen
Wooten
...
```
The following example finds cities whose third and fourth characters are "ea"\. The command uses ILIKE to demonstrate case insensitivity:
```
select distinct city from users where city ilike '__EA%' order by city;
city
-------------
Brea
Clearwater
Great Falls
Ocean City
Olean
Wheaton
(6 rows)
```
The following example uses the default escape string \(\\\\\) to search for strings that include "\_":
```
select tablename, "column" from pg_table_def
where "column" like '%start\\_%'
limit 5;
tablename | column
-------------------+---------------
stl_s3client | start_time
stl_tr_conflict | xact_start_ts
stl_undone | undo_start_ts
stl_unload_log | start_time
stl_vacuum_detail | start_row
(5 rows)
```
The following example specifies '^' as the escape character, then uses the escape character to search for strings that include "\_":
```
select tablename, "column" from pg_table_def
where "column" like '%start^_%' escape '^'
limit 5;
tablename | column
-------------------+---------------
stl_s3client | start_time
stl_tr_conflict | xact_start_ts
stl_undone | undo_start_ts
stl_unload_log | start_time
stl_vacuum_detail | start_row
(5 rows)
```
ST\_GeometryN returns a geometry pointed to by the input index of the input geometry, as follows:
+ If the input is a point, linestring, or polygon, then a geometry is returned as is if the index is equal to one \(1\), and null if the index is other than one \(1\)\.
+ If the input is a multipoint, multilinestring, multipolygon, or geometry collection, then the point, linestring, polygon, or geometry at the position specified by the input index is returned\.
The index is one\-based\. The spatial reference system identifier \(SRID\) of the result is the same as that of the input geometry\.
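For example, the following sketch selects the second geometry of a multipoint \(the index is one\-based\), using ST\_GeomFromText to build the input and ST\_AsEWKT to render the result\.

```
-- returns the point at index 2 of the multipoint
select st_asewkt(st_geometryn(st_geomfromtext('multipoint((0 0), (1 1))'), 2));
```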
```
ST_GeometryN(geom, index)
```
*geom*
A value of data type `GEOMETRY` or an expression that evaluates to a `GEOMETRY` type\.
*index*
A value of data type `INTEGER` that represents the position of a one\-based index\.
`GEOMETRY`
If *geom* or *index* is null, then null is returned\.
If *index* is out of range, then an error is returned\.