| id | text | source |
|---|---|---|
35d801a8e723-0
|
*expression*
The target column or expression that the function operates on\. This expression must have an INT, INT2, or INT8 data type\. The function returns an equivalent INT, INT2, or INT8 data type\.
DISTINCT \| ALL
With the argument DISTINCT, the function eliminates all duplicate values for the specified expression before calculating the result\. With the argument ALL, the function retains all duplicate values\. ALL is the default\. For more information, see [DISTINCT support for bit\-wise aggregations](c_bitwise_aggregate_functions.md#distinct-support-for-bit-wise-aggregations)\.
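As a minimal sketch \(assuming a hypothetical table `t` with an INT column `flags`\), the following query compares the two arguments\. Because bit\-wise OR is idempotent, duplicate values don't change the result, so both columns return the same value:
```
select bit_or(flags) as or_all,
       bit_or(distinct flags) as or_distinct
from t;
```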
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_BIT_OR.md
|
e7cd30eff42a-0
|
The following query applies the BIT\_OR function to the LIKES column in a table called USERLIKES and groups the results by the CITY column\.
```
select city, bit_or(likes) from userlikes group by city
order by city;
city | bit_or
--------------+--------
Los Angeles | 127
Sacramento | 255
San Francisco | 255
San Jose | 255
Santa Barbara | 255
(5 rows)
```
For four of the cities listed, all of the event types are liked by at least one user \(`255=11111111`\)\. For Los Angeles, all of the event types except sports are liked by at least one user \(`127=01111111`\)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_BIT_OR.md
|
f34ec7c8b2cb-0
|
SVV views are system views that contain references to STV tables and snapshots for more detailed information\.
**Topics**
+ [SVV\_COLUMNS](r_SVV_COLUMNS.md)
+ [SVV\_DISKUSAGE](r_SVV_DISKUSAGE.md)
+ [SVV\_EXTERNAL\_COLUMNS](r_SVV_EXTERNAL_COLUMNS.md)
+ [SVV\_EXTERNAL\_DATABASES](r_SVV_EXTERNAL_DATABASES.md)
+ [SVV\_EXTERNAL\_PARTITIONS](r_SVV_EXTERNAL_PARTITIONS.md)
+ [SVV\_EXTERNAL\_SCHEMAS](r_SVV_EXTERNAL_SCHEMAS.md)
+ [SVV\_EXTERNAL\_TABLES](r_SVV_EXTERNAL_TABLES.md)
+ [SVV\_INTERLEAVED\_COLUMNS](r_SVV_INTERLEAVED_COLUMNS.md)
+ [SVV\_QUERY\_INFLIGHT](r_SVV_QUERY_INFLIGHT.md)
+ [SVV\_QUERY\_STATE](r_SVV_QUERY_STATE.md)
+ [SVV\_SCHEMA\_QUOTA\_STATE](r_SVV_SCHEMA_QUOTA_STATE.md)
+ [SVV\_TABLES](r_SVV_TABLES.md)
+ [SVV\_TABLE\_INFO](r_SVV_TABLE_INFO.md)
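For example, the following sketch queries SVV\_TABLE\_INFO to list the five largest tables\. The column names are taken from that view; adapt the query to your own tables:
```
select "table", diststyle, size, tbl_rows
from svv_table_info
order by size desc
limit 5;
```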
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/svv_views.md
|
f34ec7c8b2cb-1
|
+ [SVV\_TABLE\_INFO](r_SVV_TABLE_INFO.md)
+ [SVV\_TRANSACTIONS](r_SVV_TRANSACTIONS.md)
+ [SVV\_VACUUM\_PROGRESS](r_SVV_VACUUM_PROGRESS.md)
+ [SVV\_VACUUM\_SUMMARY](r_SVV_VACUUM_SUMMARY.md)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/svv_views.md
|
c5c695ede5dc-0
|
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_eventtable.html)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_eventtable.md
|
0ed446e5236d-0
|
An expression list is a combination of expressions, and can appear in membership and comparison conditions \(WHERE clauses\) and in GROUP BY clauses\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_expression_lists.md
|
a0e2c1253bc8-0
|
```
expression , expression , ... | (expression, expression, ...)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_expression_lists.md
|
0e593c8f10a1-0
|
*expression*
A simple expression that evaluates to a value\. An expression list can contain one or more comma\-separated expressions or one or more sets of comma\-separated expressions\. When there are multiple sets of expressions, each set must contain the same number of expressions, and be separated by parentheses\. The number of expressions in each set must match the number of expressions before the operator in the condition\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_expression_lists.md
|
b268796275ee-0
|
The following are examples of expression lists in conditions:
```
(1, 5, 10)
('THESE', 'ARE', 'STRINGS')
(('one', 'two', 'three'), ('blue', 'yellow', 'green'))
```
The number of expressions in each set must match the number in the first part of the statement:
```
select * from venue
where (venuecity, venuestate) in (('Miami', 'FL'), ('Tampa', 'FL'))
order by venueid;
venueid | venuename | venuecity | venuestate | venueseats
---------+-------------------------+-----------+------------+------------
28 | American Airlines Arena | Miami | FL | 0
54 | St. Pete Times Forum | Tampa | FL | 0
91 | Raymond James Stadium | Tampa | FL | 65647
(3 rows)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_expression_lists.md
|
61efbdb75a82-0
|
The most common s3ServiceException errors are caused by an improperly formatted or incorrect credentials string, by your cluster and your bucket being in different AWS Regions, or by insufficient Amazon S3 privileges\.
This section provides troubleshooting information for each type of error\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/s3serviceexception-error.md
|
948c5cddc194-0
|
If your credentials string was improperly formatted, you will receive the following error message:
```
ERROR: Invalid credentials. Must be of the format: credentials
'aws_access_key_id=<access-key-id>;aws_secret_access_key=<secret-access-key>
[;token=<temporary-session-token>]'
```
Verify that the credentials string does not contain any spaces or line breaks, and is enclosed in single quotes\.
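For reference, the following sketch shows a correctly formatted credentials string in a COPY command\. The table and bucket names are placeholders:
```
copy mytable
from 's3://mybucket/data/mydata'
credentials 'aws_access_key_id=<access-key-id>;aws_secret_access_key=<secret-access-key>';
```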
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/s3serviceexception-error.md
|
4e279ada80f8-0
|
If your access key ID does not exist, you will receive the following error message:
```
[Amazon](500310) Invalid operation: S3ServiceException:The AWS Access Key Id you provided does not exist in our records.
```
This is often a copy and paste error\. Verify that the access key ID was entered correctly\. Also, if you are using temporary session keys, check that the value for `token` is set\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/s3serviceexception-error.md
|
b0e70b5dfccb-0
|
If your secret access key is incorrect, you will receive the following error message:
```
[Amazon](500310) Invalid operation: S3ServiceException:The request signature we calculated does not match the signature you provided.
Check your key and signing method.,Status 403,Error SignatureDoesNotMatch
```
This is often a copy and paste error\. Verify that the secret access key was entered correctly and that it is the correct key for the access key ID\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/s3serviceexception-error.md
|
6baa238edf05-0
|
The Amazon S3 bucket specified in the COPY command must be in the same AWS Region as the cluster\. If your Amazon S3 bucket and your cluster are in different Regions, you will receive an error similar to the following:
```
ERROR: S3ServiceException:The bucket you are attempting to access must be addressed using the specified endpoint.
```
You can create an Amazon S3 bucket in a specific Region either by selecting the Region when you create the bucket by using the Amazon S3 Management Console, or by specifying an endpoint when you create the bucket using the Amazon S3 API or CLI\. For more information, see [Uploading files to Amazon S3](t_uploading-data-to-S3.md)\.
For more information about Amazon S3 regions, see [Accessing a Bucket](https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingBucket.html#access-bucket-intro) in the *Amazon Simple Storage Service Developer Guide*\.
Alternatively, you can specify the Region using the [REGION](copy-parameters-data-source-s3.md#copy-region) option with the COPY command\.
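For example, the following sketch \(with placeholder table, bucket, and role names\) loads from a bucket in the us\-west\-2 Region:
```
copy mytable
from 's3://mybucket/data/mydata'
iam_role 'arn:aws:iam::<aws-account-id>:role/<role-name>'
region 'us-west-2';
```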
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/s3serviceexception-error.md
|
ed5883fdd271-0
|
The user account identified by the credentials must have LIST and GET access to the Amazon S3 bucket\. If the user does not have sufficient privileges, you will receive the following error message:
```
ERROR: S3ServiceException:Access Denied,Status 403,Error AccessDenied
```
For information about managing user access to buckets, see [Access Control](https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingAuthAccess.html) in the *Amazon S3 Developer Guide*\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/s3serviceexception-error.md
|
b74a6282a60d-0
|
The MAX window function returns the maximum of the input expression values\. The MAX function works with numeric values and ignores NULL values\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_WF_MAX.md
|
351b104b170f-0
|
```
MAX ( [ ALL ] expression ) OVER
(
[ PARTITION BY expr_list ]
[ ORDER BY order_list frame_clause ]
)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_WF_MAX.md
|
43eac3f2c0f0-0
|
*expression*
The target column or expression that the function operates on\.
ALL
With the argument ALL, the function retains all duplicate values from the expression\. ALL is the default\. DISTINCT is not supported\.
OVER
A clause that specifies the window clauses for the aggregation functions\. The OVER clause distinguishes window aggregation functions from normal set aggregation functions\.
PARTITION BY *expr\_list*
Defines the window for the MAX function in terms of one or more expressions\.
ORDER BY *order\_list*
Sorts the rows within each partition\. If no PARTITION BY is specified, ORDER BY uses the entire table\.
*frame\_clause*
If an ORDER BY clause is used for an aggregate function, an explicit frame clause is required\. The frame clause refines the set of rows in a function's window, including or excluding sets of rows within the ordered result\. The frame clause consists of the ROWS keyword and associated specifiers\. See [Window function syntax summary](r_Window_function_synopsis.md)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_WF_MAX.md
|
d90c45b67cb0-0
|
Accepts any data type as input\. Returns the same data type as *expression*\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_WF_MAX.md
|
633a3c23aa8d-0
|
The following example shows the sales ID, quantity, and maximum quantity from the beginning of the data window:
```
select salesid, qty,
max(qty) over (order by salesid rows unbounded preceding) as max
from winsales
order by salesid;
salesid | qty | max
---------+-----+-----
10001 | 10 | 10
10005 | 30 | 30
10006 | 10 | 30
20001 | 20 | 30
20002 | 20 | 30
30001 | 10 | 30
30003 | 15 | 30
30004 | 20 | 30
30007 | 30 | 30
40001 | 40 | 40
40005 | 10 | 40
(11 rows)
```
For a description of the WINSALES table, see [Overview example for window functions](c_Window_functions.md#r_Window_function_example)\.
The following example shows the salesid, quantity, and maximum quantity in a restricted frame:
```
select salesid, qty,
max(qty) over (order by salesid rows between 2 preceding and 1 preceding) as max
from winsales
order by salesid;
salesid | qty | max
---------+-----+-----
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_WF_MAX.md
|
633a3c23aa8d-1
|
salesid | qty | max
---------+-----+-----
10001 | 10 |
10005 | 30 | 10
10006 | 10 | 30
20001 | 20 | 30
20002 | 20 | 20
30001 | 10 | 20
30003 | 15 | 20
30004 | 20 | 15
30007 | 30 | 20
40001 | 40 | 30
40005 | 10 | 40
(11 rows)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_WF_MAX.md
|
fda038af640b-0
|
**Topics**
+ [Using a manifest to specify data files](loading-data-files-using-manifest.md)
+ [Loading compressed data files from Amazon S3](t_loading-gzip-compressed-data-files-from-S3.md)
+ [Loading fixed\-width data from Amazon S3](t_loading_fixed_width_data.md)
+ [Loading multibyte data from Amazon S3](t_loading_unicode_data.md)
+ [Loading encrypted data files from Amazon S3](c_loading-encrypted-files.md)
Use the [COPY](r_COPY.md) command to load a table in parallel from data files on Amazon S3\. You can specify the files to be loaded by using an Amazon S3 object prefix or by using a manifest file\.
The syntax to specify the files to be loaded by using a prefix is as follows:
```
copy <table_name> from 's3://<bucket_name>/<object_prefix>'
authorization;
```
The manifest file is a JSON\-formatted file that lists the data files to be loaded\. The syntax to specify the files to be loaded by using a manifest file is as follows:
```
copy <table_name> from 's3://<bucket_name>/<manifest_file>'
authorization
manifest;
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/t_loading-tables-from-s3.md
|
fda038af640b-1
|
authorization
manifest;
```
The table to be loaded must already exist in the database\. For information about creating a table, see [CREATE TABLE](r_CREATE_TABLE_NEW.md) in the SQL Reference\.
The values for *authorization* provide the AWS authorization your cluster needs to access the Amazon S3 objects\. For information about required permissions, see [IAM permissions for COPY, UNLOAD, and CREATE LIBRARY](copy-usage_notes-access-permissions.md#copy-usage_notes-iam-permissions)\. The preferred method for authentication is to specify the IAM\_ROLE parameter and provide the Amazon Resource Name \(ARN\) for an IAM role with the necessary permissions\. Alternatively, you can specify the ACCESS\_KEY\_ID and SECRET\_ACCESS\_KEY parameters and provide the access key ID and secret access key for an authorized IAM user as plain text\. For more information, see [Role\-based access control](copy-usage_notes-access-permissions.md#copy-usage_notes-access-role-based) or [Key\-based access control](copy-usage_notes-access-permissions.md#copy-usage_notes-access-key-based)\.
To authenticate using the IAM\_ROLE parameter, replace *<aws\-account\-id>* and *<role\-name>* as shown in the following syntax\.
```
IAM_ROLE 'arn:aws:iam::<aws-account-id>:role/<role-name>'
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/t_loading-tables-from-s3.md
|
fda038af640b-2
|
```
The following example shows authentication using an IAM role\.
```
copy customer
from 's3://mybucket/mydata'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole';
```
To authenticate using IAM user credentials, replace *<access\-key\-id>* and *<secret\-access\-key>* with an authorized user's access key ID and full secret access key for the ACCESS\_KEY\_ID and SECRET\_ACCESS\_KEY parameters as shown following\.
```
ACCESS_KEY_ID '<access-key-id>'
SECRET_ACCESS_KEY '<secret-access-key>';
```
The following example shows authentication using IAM user credentials\.
```
copy customer
from 's3://mybucket/mydata'
access_key_id '<access-key-id>'
secret_access_key '<secret-access-key>';
```
For more information about other authorization options, see [Authorization parameters](copy-parameters-authorization.md)\.
If you want to validate your data without actually loading the table, use the NOLOAD option with the [COPY](r_COPY.md) command\.
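For example, the following sketch \(using the same placeholder bucket and role as the earlier examples\) checks the files for errors without loading any rows:
```
copy venue
from 's3://mybucket/venue'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
delimiter '|'
noload;
```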
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/t_loading-tables-from-s3.md
|
fda038af640b-3
|
If you want to validate your data without actually loading the table, use the NOLOAD option with the [COPY](r_COPY.md) command\.
The following example shows the first few rows of pipe\-delimited data in a file named `venue.txt`\.
```
1|Toyota Park|Bridgeview|IL|0
2|Columbus Crew Stadium|Columbus|OH|0
3|RFK Stadium|Washington|DC|0
```
Before uploading the file to Amazon S3, split the file into multiple files so that the COPY command can load it using parallel processing\. The number of files should be a multiple of the number of slices in your cluster\. Split your load data files so that the files are about equal size, between 1 MB and 1 GB after compression\. For more information, see [Splitting your data into multiple files](t_splitting-data-files.md)\.
For example, the `venue.txt` file might be split into four files, as follows:
```
venue.txt.1
venue.txt.2
venue.txt.3
venue.txt.4
```
The following COPY command loads the VENUE table using the pipe\-delimited data in the data files with the prefix 'venue' in the Amazon S3 bucket `mybucket`\.
**Note**
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/t_loading-tables-from-s3.md
|
fda038af640b-4
|
**Note**
The Amazon S3 bucket `mybucket` in the following examples does not exist\. For sample COPY commands that use real data in an existing Amazon S3 bucket, see [Step 4: Load sample data](cm-dev-t-load-sample-data.md)\.
```
copy venue from 's3://mybucket/venue'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
delimiter '|';
```
If no Amazon S3 objects with the key prefix 'venue' exist, the load fails\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/t_loading-tables-from-s3.md
|
1a074c2ac156-0
|
Returns `true` if the user has the specified privilege for the specified schema\. For more information about privileges, see [GRANT](r_GRANT.md)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_HAS_SCHEMA_PRIVILEGE.md
|
48a6fb39697d-0
|
**Note**
This is a leader\-node function\. This function returns an error if it references a user\-created table, an STL or STV system table, or an SVV or SVL system view\.
```
has_schema_privilege( [ user, ] schema, privilege)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_HAS_SCHEMA_PRIVILEGE.md
|
621a71c21d08-0
|
*user*
Name of the user to check for schema privileges\. Default is to check the current user\.
*schema*
Schema associated with the privilege\.
*privilege*
Privilege to check\. Valid values are:
+ CREATE
+ USAGE
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_HAS_SCHEMA_PRIVILEGE.md
|
90f903c396b7-0
|
Returns a CHAR or VARCHAR string\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_HAS_SCHEMA_PRIVILEGE.md
|
f7ec8791de83-0
|
The following query confirms that the GUEST user has the CREATE privilege on the PUBLIC schema:
```
select has_schema_privilege('guest', 'public', 'create');
has_schema_privilege
----------------------
true
(1 row)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_HAS_SCHEMA_PRIVILEGE.md
|
9295a9ef0e64-0
|
You can combine the extensions described previously with the usual SQL features\. The following use cases illustrate some common combinations\. These examples help demonstrate how you can use nested data\. They aren't part of the tutorial\.
**Topics**
+ [Ingesting nested data](#ingesting-nested-data)
+ [Aggregating nested data with subqueries](#aggregating-with-subquery)
+ [Joining Amazon Redshift and nested data](#joining-redshift-data)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/nested-data-use-cases.md
|
19df1e5df808-0
|
You can use a `CREATE TABLE AS` statement to ingest data from an external table that contains complex data types\. The following query extracts all customers and their phone numbers from the external table, using `LEFT JOIN`, and stores them in the Amazon Redshift table `CustomerPhones`\.
```
CREATE TABLE CustomerPhones AS
SELECT c.name.given, c.name.family, p AS phone
FROM spectrum.customers c LEFT JOIN c.phones p ON true
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/nested-data-use-cases.md
|
03b9e45fa6be-0
|
You can use a subquery to aggregate nested data\. The following example illustrates this approach\.
```
SELECT c.name.given, c.name.family, (SELECT COUNT(*) FROM c.orders o) AS ordercount
FROM spectrum.customers c
```
The following data is returned\.
```
given | family | ordercount
--------|----------|--------------
Jenny | Doe | 0
John | Smith | 2
Andy | Jones | 1
(3 rows)
```
**Note**
When you aggregate nested data by grouping by the parent row, the most efficient way is the one shown in the previous example\. In that example, the nested rows of `c.orders` are grouped by their parent row `c`\. Alternatively, if you know that `id` is unique for each `customer` and `o.shipdate` is never null, you can aggregate as shown in the following example\. However, this approach generally isn't as efficient as the previous example\.
```
SELECT c.name.given, c.name.family, COUNT(o.shipdate) AS ordercount
FROM spectrum.customers c LEFT JOIN c.orders o ON true
GROUP BY c.id, c.name.given, c.name.family
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/nested-data-use-cases.md
|
03b9e45fa6be-1
|
GROUP BY c.id, c.name.given, c.name.family
```
You can also write the query by using a subquery in the FROM clause that refers to an alias \(`c`\) of the ancestor query and extracts array data\. The following example demonstrates this approach\.
```
SELECT c.name.given, c.name.family, s.count AS ordercount
FROM spectrum.customers c, (SELECT count(*) AS count FROM c.orders o) s
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/nested-data-use-cases.md
|
7b334a127404-0
|
You can also join Amazon Redshift data with nested data in an external table\. For example, suppose that you have the following nested data in Amazon S3\.
```
CREATE EXTERNAL TABLE spectrum.customers2 (
id int,
name struct<given:varchar(20), family:varchar(20)>,
phones array<varchar(20)>,
orders array<struct<shipdate:timestamp, item:int>>
)
```
Suppose also that you have the following table in Amazon Redshift\.
```
CREATE TABLE prices (
id int,
price double precision
)
```
The following query finds the total number and amount of each customer's purchases based on the preceding tables\. The following example is only an illustration\. It only returns data if you have created the tables described previously\.
```
SELECT c.name.given, c.name.family, COUNT(o.shipdate) AS ordercount, SUM(p.price) AS ordersum
FROM spectrum.customers2 c, c.orders o, prices p
WHERE o.item = p.id
GROUP BY c.id, c.name.given, c.name.family
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/nested-data-use-cases.md
|
9e2628396fbc-0
|
After the load operation is complete, query the [STL\_LOAD\_COMMITS](r_STL_LOAD_COMMITS.md) system table to verify that the expected files were loaded\. Execute the COPY command and load verification within the same transaction so that if there is a problem with the load you can roll back the entire transaction\.
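A minimal sketch of that pattern follows; the table, bucket, and role names are placeholders, and you would roll back instead of committing if the verification query shows a problem:
```
begin;
copy mytable
from 's3://mybucket/data/mydata'
iam_role 'arn:aws:iam::<aws-account-id>:role/<role-name>';
select query, trim(filename) as filename, curtime, status
from stl_load_commits
where filename like '%mydata%';
commit;
```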
The following query returns entries for loading the tables in the TICKIT database:
```
select query, trim(filename) as filename, curtime, status
from stl_load_commits
where filename like '%tickit%' order by query;
query | btrim | curtime | status
-------+---------------------------+----------------------------+--------
22475 | tickit/allusers_pipe.txt | 2013-02-08 20:58:23.274186 | 1
22478 | tickit/venue_pipe.txt | 2013-02-08 20:58:25.070604 | 1
22480 | tickit/category_pipe.txt | 2013-02-08 20:58:27.333472 | 1
22482 | tickit/date2008_pipe.txt | 2013-02-08 20:58:28.608305 | 1
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/verifying-that-data-loaded-correctly.md
|
9e2628396fbc-1
|
22482 | tickit/date2008_pipe.txt | 2013-02-08 20:58:28.608305 | 1
22485 | tickit/allevents_pipe.txt | 2013-02-08 20:58:29.99489 | 1
22487 | tickit/listings_pipe.txt | 2013-02-08 20:58:37.632939 | 1
22489 | tickit/sales_tab.txt | 2013-02-08 20:58:37.632939 | 1
(6 rows)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/verifying-that-data-loaded-correctly.md
|
2eb16b07e7f4-0
|
[Tutorial: Tuning table design](tutorial-tuning-tables.md) walks you step by step through the process of choosing sort keys, distribution styles, and compression encodings, and shows you how to compare system performance before and after tuning\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_best-practices-tutorial-tuning-tables.md
|
4eacf4966e35-0
|
**Topics**
+ [Credentials and access permissions](loading-data-access-permissions.md)
+ [Preparing your input data](t_preparing-input-data.md)
+ [Loading data from Amazon S3](t_Loading-data-from-S3.md)
+ [Loading data from Amazon EMR](loading-data-from-emr.md)
+ [Loading data from remote hosts](loading-data-from-remote-hosts.md)
+ [Loading data from an Amazon DynamoDB table](t_Loading-data-from-dynamodb.md)
+ [Verifying that the data loaded correctly](verifying-that-data-loaded-correctly.md)
+ [Validating input data](t_Validating_input_files.md)
+ [Loading tables with automatic compression](c_Loading_tables_auto_compress.md)
+ [Optimizing storage for narrow tables](c_load_compression_hidden_cols.md)
+ [Loading default column values](c_loading_default_values.md)
+ [Troubleshooting data loads](t_Troubleshooting_load_errors.md)
The COPY command leverages the Amazon Redshift massively parallel processing \(MPP\) architecture to read and load data in parallel from files on Amazon S3, from a DynamoDB table, or from text output from one or more remote hosts\.
**Note**
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/t_Loading_tables_with_the_COPY_command.md
|
4eacf4966e35-1
|
**Note**
We strongly recommend using the COPY command to load large amounts of data\. Using individual INSERT statements to populate a table might be prohibitively slow\. Alternatively, if your data already exists in other Amazon Redshift database tables, use INSERT INTO \.\.\. SELECT or CREATE TABLE AS to improve performance\. For information, see [INSERT](r_INSERT_30.md) or [CREATE TABLE AS](r_CREATE_TABLE_AS.md)\.
To load data from another AWS resource, your cluster must have permission to access the resource and perform the necessary actions\.
To grant or revoke privilege to load data into a table using a COPY command, grant or revoke the INSERT privilege\.
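For example, the following sketch grants a hypothetical user `dbuser` the privilege needed to load the `sales` table with COPY:
```
grant insert on table sales to dbuser;
```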
Your data needs to be in the proper format for loading into your Amazon Redshift table\. This section presents guidelines for preparing and verifying your data before the load and for validating a COPY statement before you execute it\.
To protect the information in your files, you can encrypt the data files before you upload them to your Amazon S3 bucket; COPY will decrypt the data as it performs the load\. You can also limit access to your load data by providing temporary security credentials to users\. Temporary security credentials provide enhanced security because they have short life spans and cannot be reused after they expire\.
You can compress the files using gzip, lzop, or bzip2 to save time uploading the files\. COPY can then speed up the load process by uncompressing the files as they are read\.
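For example, the following sketch \(with placeholder names\) loads gzip\-compressed, pipe\-delimited files:
```
copy venue
from 's3://mybucket/venue.txt.gz'
iam_role 'arn:aws:iam::<aws-account-id>:role/<role-name>'
delimiter '|'
gzip;
```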
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/t_Loading_tables_with_the_COPY_command.md
|
4eacf4966e35-2
|
To help keep your data secure in transit within the AWS cloud, Amazon Redshift uses hardware accelerated SSL to communicate with Amazon S3 or Amazon DynamoDB for COPY, UNLOAD, backup, and restore operations\.
When you load your table directly from an Amazon DynamoDB table, you have the option to control the amount of Amazon DynamoDB provisioned throughput you consume\.
You can optionally let COPY analyze your input data and automatically apply optimal compression encodings to your table as part of the load process\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/t_Loading_tables_with_the_COPY_command.md
|
fcf9d433ef78-0
|
Choosing distribution styles is just one aspect of database design\. You should consider distribution styles only within the context of the entire system, balancing distribution with other important factors such as cluster size, compression encoding methods, sort keys, and table constraints\.
Test your system with data that is as close to real data as possible\.
In order to make good choices for distribution styles, you need to understand the query patterns for your Amazon Redshift application\. Identify the most costly queries in your system and base your initial database design on the demands of those queries\. Factors that determine the total cost of a query are how long the query takes to execute, how many computing resources it consumes, how often it is executed, and how disruptive it is to other queries and database operations\.
Identify the tables that are used by the most costly queries, and evaluate their role in query execution\. Consider how the tables are joined and aggregated\.
Use the guidelines in this section to choose a distribution style for each table\. When you have done so, create the tables, load them with data that is as close as possible to real data, and then test the tables for the types of queries that you expect to use\. You can evaluate the query explain plans to identify tuning opportunities\. Compare load times, storage space, and query execution times in order to balance your system's overall requirements\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/t_evaluating_query_patterns.md
|
3cd0cd8e60ae-0
|
Use the SVL\_S3LIST view to get details about Amazon Redshift Spectrum queries at the segment level\.
SVL\_S3LIST is visible to all users\. Superusers can see all rows; regular users can see only their own data\. For more information, see [Visibility of data in system tables and views](c_visibility-of-data.md)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVL_S3LIST.md
|
f11c8a01a110-0
|
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_SVL_S3LIST.html)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVL_S3LIST.md
|
7912889b2a20-0
|
The following example queries SVL\_S3LIST for the last query executed\.
```
select *
from svl_s3list
where query = pg_last_query_id()
order by query,segment;
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVL_S3LIST.md
|
9e9b42e9eaca-0
|
Computes a checksum value for building a hash index\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CHECKSUM.md
|
471e23fa7698-0
|
```
CHECKSUM(expression)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CHECKSUM.md
|
d188a14fa85d-0
|
*expression*
The input expression must be a VARCHAR, INTEGER, or DECIMAL data type\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CHECKSUM.md
|
4c19dbffd03a-0
|
The CHECKSUM function returns an integer\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CHECKSUM.md
|
ae8d98f37a4d-0
|
The following example computes a checksum value for the COMMISSION column:
```
select checksum(commission)
from sales
order by salesid
limit 10;
checksum
----------
10920
1140
5250
2625
2310
5910
11820
2955
8865
975
(10 rows)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CHECKSUM.md
|
0ab2688ab7d9-0
|
Truncates a time stamp and returns a date\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_TRUNC_date.md
|
5d288362235a-0
|
```
TRUNC(timestamp)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_TRUNC_date.md
|
aa6df4e0d4ba-0
|
*timestamp*
A timestamp column or an expression that implicitly converts to a time stamp\.
To return a time stamp value with `00:00:00` as the time, cast the function result to a TIMESTAMP\.
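For example, the following sketch casts the result back to a TIMESTAMP, which yields midnight of the truncated day:
```
select trunc(sysdate)::timestamp;
```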
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_TRUNC_date.md
|
790d177f0432-0
|
DATE
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_TRUNC_date.md
|
11168e41b20b-0
|
Return the date portion from the result of the SYSDATE function \(which returns a time stamp\):
```
select sysdate;
timestamp
----------------------------
2011-07-21 10:32:38.248109
(1 row)
select trunc(sysdate);
trunc
------------
2011-07-21
(1 row)
```
Apply the TRUNC function to a TIMESTAMP column\. The return type is a date\.
```
select trunc(starttime) from event
order by eventid limit 1;
trunc
------------
2008-01-25
(1 row)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_TRUNC_date.md
|
f27a048052f8-0
|
**Topics**
+ [Syntax](#r_CREATE_TABLE_NEW-synopsis)
+ [Parameters](#r_CREATE_TABLE_NEW-parameters)
+ [Usage notes](r_CREATE_TABLE_usage.md)
+ [Examples](r_CREATE_TABLE_examples.md)
Creates a new table in the current database\. The owner of this table is the issuer of the CREATE TABLE command\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CREATE_TABLE_NEW.md
|
d4793a48b074-0
|
```
CREATE [ [LOCAL ] { TEMPORARY | TEMP } ] TABLE
[ IF NOT EXISTS ] table_name
( { column_name data_type [column_attributes] [ column_constraints ]
| table_constraints
| LIKE parent_table [ { INCLUDING | EXCLUDING } DEFAULTS ] }
[, ... ] )
[ BACKUP { YES | NO } ]
[table_attribute]
where column_attributes are:
[ DEFAULT default_expr ]
[ IDENTITY ( seed, step ) ]
[ GENERATED BY DEFAULT AS IDENTITY ( seed, step ) ]
[ ENCODE encoding ]
[ DISTKEY ]
[ SORTKEY ]
and column_constraints are:
[ { NOT NULL | NULL } ]
[ { UNIQUE | PRIMARY KEY } ]
[ REFERENCES reftable [ ( refcolumn ) ] ]
and table_constraints are:
[ UNIQUE ( column_name [, ... ] ) ]
[ PRIMARY KEY ( column_name [, ... ] ) ]
[ FOREIGN KEY (column_name [, ... ] ) REFERENCES reftable [ ( refcolumn ) ] ]
and table_attributes are:
[ DISTSTYLE { AUTO | EVEN | KEY | ALL } ]
[ DISTKEY ( column_name ) ]
[ [COMPOUND | INTERLEAVED ] SORTKEY ( column_name [, ...] ) ]
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CREATE_TABLE_NEW.md
|
03aed3372569-0
|
LOCAL
Optional\. Although this keyword is accepted in the statement, it has no effect in Amazon Redshift\.
TEMPORARY \| TEMP
Keyword that creates a temporary table that is visible only within the current session\. The table is automatically dropped at the end of the session in which it is created\. The temporary table can have the same name as a permanent table\. The temporary table is created in a separate, session\-specific schema\. \(You can't specify a name for this schema\.\) This temporary schema becomes the first schema in the search path, so the temporary table will take precedence over the permanent table unless you qualify the table name with the schema name to access the permanent table\. For more information about schemas and precedence, see [search\_path](r_search_path.md)\.
By default, users have permission to create temporary tables by their automatic membership in the PUBLIC group\. To deny this privilege to a user, revoke the TEMP privilege from the PUBLIC group, and then explicitly grant the TEMP privilege only to specific users or groups of users\.
IF NOT EXISTS
Clause that indicates that if the specified table already exists, the command should make no changes and return a message that the table exists, rather than terminating with an error\. Note that the existing table might be nothing like the one that would have been created; only the table name is used for comparison\.
This clause is useful when scripting, so the script doesn’t fail if CREATE TABLE tries to create a table that already exists\.
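For example, the following sketch \(with a hypothetical staging table\) can be run repeatedly without failing:
```
create table if not exists sales_staging (
  salesid integer,
  qty integer
);
```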
*table\_name*
Name of the table to be created\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CREATE_TABLE_NEW.md
|
03aed3372569-1
|
*table\_name*
Name of the table to be created\.
If you specify a table name that begins with '\#', the table is created as a temporary table\. The following is an example:
```
create table #newtable (id int);
```
The maximum length for the table name is 127 bytes; longer names are truncated to 127 bytes\. You can use UTF\-8 multibyte characters up to a maximum of four bytes\. Amazon Redshift enforces a quota of the number of tables per cluster by node type, including user\-defined temporary tables and temporary tables created by Amazon Redshift during query processing or system maintenance\. Optionally, the table name can be qualified with the database and schema name\. In the following example, the database name is `tickit` , the schema name is `public`, and the table name is `test`\.
```
create table tickit.public.test (c1 int);
```
If the database or schema doesn't exist, the table isn't created, and the statement returns an error\. You can't create tables or views in the system databases `template0`, `template1`, and `padb_harvest`\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CREATE_TABLE_NEW.md
|
03aed3372569-2
|
If a schema name is given, the new table is created in that schema \(assuming the creator has access to the schema\)\. The table name must be a unique name for that schema\. If no schema is specified, the table is created by using the current database schema\. If you are creating a temporary table, you can't specify a schema name, because temporary tables exist in a special schema\.
Multiple temporary tables with the same name can exist at the same time in the same database if they are created in separate sessions because the tables are assigned to different schemas\. For more information about valid names, see [Names and identifiers](r_names.md)\.
*column\_name*
Name of a column to be created in the new table\. The maximum length for the column name is 127 bytes; longer names are truncated to 127 bytes\. You can use UTF\-8 multibyte characters up to a maximum of four bytes\. The maximum number of columns you can define in a single table is 1,600\. For more information about valid names, see [Names and identifiers](r_names.md)\.
If you are creating a "wide table," take care that your list of columns doesn't exceed row\-width boundaries for intermediate results during loads and query processing\. For more information, see [Usage notes](r_CREATE_TABLE_usage.md)\.
*data\_type*
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CREATE_TABLE_NEW.md
|
03aed3372569-3
|
*data\_type*
Data type of the column being created\. For CHAR and VARCHAR columns, you can use the MAX keyword instead of declaring a maximum length\. MAX sets the maximum length to 4,096 bytes for CHAR or 65,535 bytes for VARCHAR\. The maximum size of a GEOMETRY object is 1,048,447 bytes\.
The following [Data types](c_Supported_data_types.md) are supported:
+ SMALLINT \(INT2\)
+ INTEGER \(INT, INT4\)
+ BIGINT \(INT8\)
+ DECIMAL \(NUMERIC\)
+ REAL \(FLOAT4\)
+ DOUBLE PRECISION \(FLOAT8\)
+ BOOLEAN \(BOOL\)
+ CHAR \(CHARACTER\)
+ VARCHAR \(CHARACTER VARYING\)
+ DATE
+ TIMESTAMP
+ TIMESTAMPTZ
+ GEOMETRY
DEFAULT *default\_expr* <a name="create-table-default"></a>
Clause that assigns a default data value for the column\. The data type of *default\_expr* must match the data type of the column\. The DEFAULT value must be a variable\-free expression\. Subqueries, cross\-references to other columns in the current table, and user\-defined functions aren't allowed\.
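For example, in the following sketch \(a hypothetical table\), rows inserted without a value for `status` receive the default `'open'`:
```
create table orders_demo (
  orderid integer,
  status varchar(10) default 'open'
);
```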
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CREATE_TABLE_NEW.md
|
03aed3372569-4
|
The *default\_expr* expression is used in any INSERT operation that doesn't specify a value for the column\. If no default value is specified, the default value for the column is null\.
If a COPY operation with a defined column list omits a column that has a DEFAULT value, the COPY command inserts the value of *default\_expr*\.
IDENTITY\(*seed*, *step*\) <a name="identity-clause"></a>
Clause that specifies that the column is an IDENTITY column\. An IDENTITY column contains unique autogenerated values\. The data type for an IDENTITY column must be either INT or BIGINT\.
When you add rows using an `INSERT` or `INSERT INTO [tablename] VALUES()` statement, these values start with the value specified as *seed* and increment by the number specified as *step*\.
When you load the table using an `INSERT INTO [tablename] SELECT * FROM` or `COPY` statement, the data is loaded in parallel and distributed to the node slices\. To be sure that the identity values are unique, Amazon Redshift skips a number of values when creating the identity values\. Identity values are unique, but the order might not match the order in the source files\.
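For example, the following sketch \(a hypothetical table\) autogenerates `eventid` values starting at 1 and incrementing by 1:
```
create table events_demo (
  eventid bigint identity(1, 1),
  eventname varchar(100)
);
```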
GENERATED BY DEFAULT AS IDENTITY\(*seed*, *step*\) <a name="identity-generated-bydefault-clause"></a>
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CREATE_TABLE_NEW.md
|
03aed3372569-5
|
GENERATED BY DEFAULT AS IDENTITY\(*seed*, *step*\) <a name="identity-generated-bydefault-clause"></a>
Clause that specifies that the column is a default IDENTITY column and enables you to automatically assign a unique value to the column\. The data type for an IDENTITY column must be either INT or BIGINT\. When you add rows without values, these values start with the value specified as *seed* and increment by the number specified as *step*\. For information about how values are generated, see [IDENTITY](#identity-clause) \.
Also, during INSERT, UPDATE, or COPY you can provide a value without EXPLICIT\_IDS\. Amazon Redshift uses that value to insert into the identity column instead of using the system\-generated value\. The value can be a duplicate, a value less than the seed, or a value between step values\. Amazon Redshift doesn't check the uniqueness of values in the column\. Providing a value doesn't affect the next system\-generated value\.
If you require uniqueness in the column, don't add a duplicate value\. Instead, add a unique value that is less than the seed or between step values\.
Keep in mind the following about default identity columns:
+ Default identity columns are NOT NULL\. NULL can't be inserted\.
+ To insert a generated value into a default identity column, use the keyword `DEFAULT`\.
```
INSERT INTO tablename (identity-column-name) VALUES (DEFAULT);
```
+ Overriding values of a default identity column doesn't affect the next generated value\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CREATE_TABLE_NEW.md
|
03aed3372569-6
|
```
+ Overriding values of a default identity column doesn't affect the next generated value\.
+ You can't add a default identity column with the ALTER TABLE ADD COLUMN statement\.
+ You can append a default identity column with the ALTER TABLE APPEND statement\.
ENCODE *encoding*
Compression encoding for a column\. If no compression is selected, Amazon Redshift automatically assigns compression encoding as follows:
+ All columns in temporary tables are assigned RAW compression by default\.
+ Columns that are defined as sort keys are assigned RAW compression\.
+ Columns that are defined as BOOLEAN, REAL, DOUBLE PRECISION, or GEOMETRY data type are assigned RAW compression\.
+ Columns that are defined as SMALLINT, INTEGER, BIGINT, DECIMAL, DATE, TIMESTAMP, or TIMESTAMPTZ are assigned AZ64 compression\.
+ Columns that are defined as CHAR or VARCHAR are assigned LZO compression\.
If you don't want a column to be compressed, explicitly specify RAW encoding\.
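For example, the following sketch \(a hypothetical table\) sets an explicit encoding for one column and disables compression for another:
```
create table encode_demo (
  c1 varchar(100) encode zstd,
  c2 integer encode raw
);
```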
The following [compression encodings](c_Compression_encodings.md#compression-encoding-list) are supported:
+ AZ64
+ BYTEDICT
+ DELTA
+ DELTA32K
+ LZO
+ MOSTLY8
+ MOSTLY16
+ MOSTLY32
+ RAW \(no compression\)
+ RUNLENGTH
+ TEXT255
+ TEXT32K
+ ZSTD
DISTKEY
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CREATE_TABLE_NEW.md
|
03aed3372569-7
|
+ RUNLENGTH
+ TEXT255
+ TEXT32K
+ ZSTD
DISTKEY
Keyword that specifies that the column is the distribution key for the table\. Only one column in a table can be the distribution key\. You can use the DISTKEY keyword after a column name or as part of the table definition by using the DISTKEY \(*column\_name*\) syntax\. Either method has the same effect\. For more information, see the DISTSTYLE parameter later in this topic\.
SORTKEY
Keyword that specifies that the column is the sort key for the table\. When data is loaded into the table, the data is sorted by one or more columns that are designated as sort keys\. You can use the SORTKEY keyword after a column name to specify a single\-column sort key, or you can specify one or more columns as sort key columns for the table by using the SORTKEY \(*column\_name* \[, \.\.\.\]\) syntax\. Only compound sort keys are created with this syntax\.
If you don't specify any sort keys, the table isn't sorted\. You can define a maximum of 400 SORTKEY columns per table\.
NOT NULL \| NULL
NOT NULL specifies that the column isn't allowed to contain null values\. NULL, the default, specifies that the column accepts null values\. IDENTITY columns are declared NOT NULL by default\.
UNIQUE
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CREATE_TABLE_NEW.md
|
03aed3372569-8
|
UNIQUE
Keyword that specifies that the column can contain only unique values\. The behavior of the unique table constraint is the same as that for column constraints, with the additional capability to span multiple columns\. To define a unique table constraint, use the UNIQUE \( *column\_name* \[, \.\.\. \] \) syntax\.
Unique constraints are informational and aren't enforced by the system\.
PRIMARY KEY
Keyword that specifies that the column is the primary key for the table\. Only one column can be defined as the primary key by using a column definition\. To define a table constraint with a multiple\-column primary key, use the PRIMARY KEY \( *column\_name* \[, \.\.\. \] \) syntax\.
Identifying a column as the primary key provides metadata about the design of the schema\. A primary key implies that other tables can rely on this set of columns as a unique identifier for rows\. One primary key can be specified for a table, whether as a column constraint or a table constraint\. The primary key constraint should name a set of columns that is different from other sets of columns named by any unique constraint defined for the same table\.
Primary key constraints are informational only\. They aren't enforced by the system, but they are used by the planner\.
REFERENCES *reftable* \[ \( *refcolumn* \) \]
Clause that specifies a foreign key constraint, which implies that the column must contain only values that match values in the referenced column of some row of the referenced table\. The referenced columns should be the columns of a unique or primary key constraint in the referenced table\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CREATE_TABLE_NEW.md
|
03aed3372569-9
|
Foreign key constraints are informational only\. They aren't enforced by the system, but they are used by the planner\.
LIKE *parent\_table* \[ \{ INCLUDING \| EXCLUDING \} DEFAULTS \] <a name="create-table-like"></a>
A clause that specifies an existing table from which the new table automatically copies column names, data types, and NOT NULL constraints\. The new table and the parent table are decoupled, and any changes made to the parent table aren't applied to the new table\. Default expressions for the copied column definitions are copied only if INCLUDING DEFAULTS is specified\. The default behavior is to exclude default expressions, so that all columns of the new table have null defaults\.
Tables created with the LIKE option don't inherit primary and foreign key constraints\. Distribution style, sort keys, BACKUP, and NULL properties are inherited by LIKE tables, but you can't explicitly set them in the CREATE TABLE \.\.\. LIKE statement\.
BACKUP \{ YES \| NO \} <a name="create-table-backup"></a>
A clause that specifies whether the table should be included in automated and manual cluster snapshots\. For tables, such as staging tables, that don't contain critical data, specify BACKUP NO to save processing time when creating snapshots and restoring from snapshots and to reduce storage space on Amazon Simple Storage Service\. The BACKUP NO setting has no effect on automatic replication of data to other nodes within the cluster, so tables with BACKUP NO specified are restored in the event of a node failure\. The default is BACKUP YES\.
DISTSTYLE \{ AUTO \| EVEN \| KEY \| ALL \}
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CREATE_TABLE_NEW.md
|
03aed3372569-10
|
DISTSTYLE \{ AUTO \| EVEN \| KEY \| ALL \}
Keyword that defines the data distribution style for the whole table\. Amazon Redshift distributes the rows of a table to the compute nodes according to the distribution style specified for the table\. The default is AUTO\.
The distribution style that you select for tables affects the overall performance of your database\. For more information, see [Choosing a data distribution style](t_Distributing_data.md)\. Possible distribution styles are as follows:
+ AUTO: Amazon Redshift assigns an optimal distribution style based on the table data\. For example, if AUTO distribution style is specified, Amazon Redshift initially assigns ALL distribution to a small table, then changes the table to EVEN distribution when the table grows larger\. The change in distribution occurs in the background, in a few seconds\. Amazon Redshift never changes the distribution style from EVEN to ALL\. To view the distribution style applied to a table, query the PG\_CLASS system catalog table\. For more information, see [Viewing distribution styles](viewing-distribution-styles.md)\.
+ EVEN: The data in the table is spread evenly across the nodes in a cluster in a round\-robin distribution\. Row IDs are used to determine the distribution, and roughly the same number of rows are distributed to each node\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CREATE_TABLE_NEW.md
|
03aed3372569-11
|
+ KEY: The data is distributed by the values in the DISTKEY column\. When you set the joining columns of joining tables as distribution keys, the joining rows from both tables are collocated on the compute nodes\. When data is collocated, the optimizer can perform joins more efficiently\. If you specify DISTSTYLE KEY, you must name a DISTKEY column, either for the table or as part of the column definition\. For more information, see the DISTKEY parameter earlier in this topic\.
+ ALL: A copy of the entire table is distributed to every node\. This distribution style ensures that all the rows required for any join are available on every node, but it multiplies storage requirements and increases the load and maintenance times for the table\. ALL distribution can improve execution time when used with certain dimension tables where KEY distribution isn't appropriate, but performance improvements must be weighed against maintenance costs\.
DISTKEY \( *column\_name* \)
Constraint that specifies the column to be used as the distribution key for the table\. You can use the DISTKEY keyword after a column name or as part of the table definition, by using the DISTKEY \(*column\_name*\) syntax\. Either method has the same effect\. For more information, see the DISTSTYLE parameter earlier in this topic\.
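For example, both forms in the following sketch \(hypothetical tables\) distribute rows by `custid`:
```
create table orders_k1 (custid integer distkey, total decimal(10,2));

create table orders_k2 (custid integer, total decimal(10,2))
diststyle key
distkey (custid);
```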
\[ COMPOUND \| INTERLEAVED \] SORTKEY \( *column\_name* \[, \.\.\. \] \)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CREATE_TABLE_NEW.md
|
03aed3372569-12
|
\[ COMPOUND \| INTERLEAVED \] SORTKEY \( *column\_name* \[, \.\.\. \] \)
Specifies one or more sort keys for the table\. When data is loaded into the table, the data is sorted by the columns that are designated as sort keys\. You can use the SORTKEY keyword after a column name to specify a single\-column sort key, or you can specify one or more columns as sort key columns for the table by using the `SORTKEY (column_name [ , ... ] )` syntax\.
You can optionally specify COMPOUND or INTERLEAVED sort style\. The default is COMPOUND\. For more information, see [Choosing sort keys](t_Sorting_data.md)\.
If you don't specify any sort keys, the table isn't sorted by default\. You can define a maximum of 400 COMPOUND SORTKEY columns or 8 INTERLEAVED SORTKEY columns per table\.
COMPOUND
Specifies that the data is sorted using a compound key made up of all of the listed columns, in the order they are listed\. A compound sort key is most useful when a query scans rows according to the order of the sort columns\. The performance benefits of sorting with a compound key decrease when queries rely on secondary sort columns\. You can define a maximum of 400 COMPOUND SORTKEY columns per table\.
INTERLEAVED
Specifies that the data is sorted using an interleaved sort key\. A maximum of eight columns can be specified for an interleaved sort key\.
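For example, the following sketch \(hypothetical tables\) declares each sort style:
```
create table sales_compound (saledate date, qty integer)
compound sortkey (saledate, qty);

create table sales_interleaved (saledate date, qty integer)
interleaved sortkey (saledate, qty);
```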
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CREATE_TABLE_NEW.md
|
03aed3372569-13
|
Specifies that the data is sorted using an interleaved sort key\. A maximum of eight columns can be specified for an interleaved sort key\.
An interleaved sort gives equal weight to each column, or subset of columns, in the sort key, so queries don't depend on the order of the columns in the sort key\. When a query uses one or more secondary sort columns, interleaved sorting significantly improves query performance\. Interleaved sorting carries a small overhead cost for data loading and vacuuming operations\.
Don’t use an interleaved sort key on columns with monotonically increasing attributes, such as identity columns, dates, or timestamps\.
UNIQUE \( *column\_name* \[,\.\.\.\] \)
Constraint that specifies that a group of one or more columns of a table can contain only unique values\. The behavior of the unique table constraint is the same as that for column constraints, with the additional capability to span multiple columns\. In the context of unique constraints, null values aren't considered equal\. Each unique table constraint must name a set of columns that is different from the set of columns named by any other unique or primary key constraint defined for the table\.
Unique constraints are informational and aren't enforced by the system\.
PRIMARY KEY \( *column\_name* \[,\.\.\.\] \)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CREATE_TABLE_NEW.md
|
03aed3372569-14
|
PRIMARY KEY \( *column\_name* \[,\.\.\.\] \)
Constraint that specifies that a column or a number of columns of a table can contain only unique \(nonduplicate\) non\-null values\. Identifying a set of columns as the primary key also provides metadata about the design of the schema\. A primary key implies that other tables can rely on this set of columns as a unique identifier for rows\. One primary key can be specified for a table, whether as a single column constraint or a table constraint\. The primary key constraint should name a set of columns that is different from other sets of columns named by any unique constraint defined for the same table\.
Primary key constraints are informational only\. They aren't enforced by the system, but they are used by the planner\.
FOREIGN KEY \( *column\_name* \[, \.\.\. \] \) REFERENCES *reftable* \[ \( *refcolumn* \) \]
Constraint that specifies a foreign key constraint, which requires that a group of one or more columns of the new table contain only values that match values in the referenced column or columns of some row of the referenced table\. If *refcolumn* is omitted, the primary key of *reftable* is used\. The referenced columns must be the columns of a unique or primary key constraint in the referenced table\.
Foreign key constraints are informational only\. They aren't enforced by the system, but they are used by the planner\.
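As a hedged illustration of the three table constraints, the following sketch uses hypothetical tables \(`orders_demo` and `customers_demo`, where `customers_demo` is assumed to already have a primary key or unique constraint on `customerid`\)\. All three constraints are informational only:
```
create table orders_demo (
    orderid    int,
    customerid int,
    ordernum   varchar(20),
    primary key (orderid),      -- informational; used by the planner
    unique (ordernum),          -- informational; not enforced
    foreign key (customerid)
        references customers_demo (customerid));
```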
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CREATE_TABLE_NEW.md
|
47c891cee5a9-0
|
LAST\_DAY returns the date of the last day of the month that contains *date*\. The return type is always DATE, regardless of the data type of the *date* argument\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_LAST_DAY.md
|
d9f8020cd1e4-0
|
```
LAST_DAY ( { date | timestamp } )
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_LAST_DAY.md
|
8efde837438f-0
|
*date* \| *timestamp*
A date or timestamp column or an expression that implicitly converts to a date or timestamp\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_LAST_DAY.md
|
5a114a074ae7-0
|
DATE
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_LAST_DAY.md
|
052b22abec82-0
|
The following example returns the date of the last day in the current month:
```
select last_day(sysdate);
last_day
------------
2014-01-31
(1 row)
```
The following example returns the number of tickets sold for each of the last 7 days of the month:
```
select datediff(day, saletime, last_day(saletime)) as "Days Remaining", sum(qtysold)
from sales
where datediff(day, saletime, last_day(saletime)) < 7
group by 1
order by 1;
days remaining | sum
----------------+-------
0 | 10140
1 | 11187
2 | 11515
3 | 11217
4 | 11446
5 | 11708
6 | 10988
(7 rows)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_LAST_DAY.md
|
1011c4183fc3-0
|
To create a federated query, you follow this general approach:
1. Set up connectivity from your Amazon Redshift cluster to your Amazon RDS or Aurora PostgreSQL DB instance\.
To do this, make sure that your RDS PostgreSQL or Aurora PostgreSQL DB instance can accept connections from your Amazon Redshift cluster\. We recommend that your Amazon Redshift cluster and Amazon RDS or Aurora PostgreSQL instance be in the same VPC and subnet group\. This way, you can add the security group for the Amazon Redshift cluster to the inbound rules of the security group for your RDS or Aurora PostgreSQL DB instance\.
You can also set up VPC peering or other networking that allows Amazon Redshift to make connections to your RDS or Aurora PostgreSQL instance\. For more information about VPC networking, see [Working with a DB instance in a VPC ](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_VPC.WorkingWithRDSInstanceinaVPC.html) in the *Amazon RDS User Guide*\.
1. Set up secrets in AWS Secrets Manager for your RDS PostgreSQL and Aurora PostgreSQL databases\. Then reference the secrets in AWS Identity and Access Management \(IAM\) access policies and roles\. For more information, see [Creating a secret and an IAM role to use federated queries](federated-create-secret-iam-role.md)\.
**Note**
If your cluster uses enhanced VPC routing, you might need to configure an interface VPC endpoint for AWS Secrets Manager\. This is necessary when the VPC and subnet of your Amazon Redshift cluster don’t have access to the public AWS Secrets Manager endpoint\. When you use a VPC interface endpoint, communication between the Amazon Redshift cluster in your VPC and AWS Secrets Manager is routed privately from your VPC to the endpoint interface\. For more information, see [Creating an interface endpoint](https://docs.aws.amazon.com/vpc/latest/userguide/vpce-interface.html#create-interface-endpoint) in the *Amazon VPC User Guide*\.
1. Apply the IAM role that you previously created to the Amazon Redshift cluster\. For more information, see [Creating a secret and an IAM role to use federated queries](federated-create-secret-iam-role.md)\.
1. Connect to your RDS PostgreSQL and Aurora PostgreSQL databases with an external schema, as sketched after this list\. For more information, see [CREATE EXTERNAL SCHEMA](r_CREATE_EXTERNAL_SCHEMA.md)\. For examples on how to use federated query, see [Example of using a federated query](federated_query_example.md)\.
1. Run your SQL queries referencing the external schema that references your RDS PostgreSQL and Aurora PostgreSQL databases\.
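The following sketch combines the last two steps\. Every identifier in it \(the schema name `apg`, the database and schema names, the endpoint, the IAM role ARN, and the secret ARN\) is a placeholder that you replace with your own values:
```
create external schema apg
from postgres
database 'mydb' schema 'public'
uri 'my-cluster.cluster-abc123.us-west-2.rds.amazonaws.com' port 5432
iam_role 'arn:aws:iam::123456789012:role/my-federated-query-role'
secret_arn 'arn:aws:secretsmanager:us-west-2:123456789012:secret:my-rds-secret-AbC123';

-- Query a remote table through the external schema.
select count(*) from apg.my_remote_table;
```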
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/getting-started-federated.md
|
ed78458a368b-0
|
The RTRIM function trims a specified set of characters from the end of a string\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_RTRIM.md
|
c169962b1cd0-0
|
```
RTRIM( string, trim_chars )
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_RTRIM.md
|
cc571b86f4c7-0
|
*string*
The string column or expression to be trimmed\.
*trim\_chars*
A string column or expression representing the characters to be trimmed from the end of *string*\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_RTRIM.md
|
55a65474e3ff-0
|
A string that is the same data type as the *string* argument\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_RTRIM.md
|
aa7c83bbd944-0
|
The following example trims the characters 'Park' from the end of VENUENAME where present:
```
select venueid, venuename, rtrim(venuename, 'Park')
from venue
order by 1, 2, 3
limit 10;
venueid | venuename | rtrim
--------+----------------------------+-------------------------
1 | Toyota Park | Toyota
2 | Columbus Crew Stadium | Columbus Crew Stadium
3 | RFK Stadium | RFK Stadium
4 | CommunityAmerica Ballpark | CommunityAmerica Ballp
5 | Gillette Stadium | Gillette Stadium
6 | New York Giants Stadium | New York Giants Stadium
7 | BMO Field | BMO Field
8 | The Home Depot Center | The Home Depot Cente
9 | Dick's Sporting Goods Park | Dick's Sporting Goods
10 | Pizza Hut Park | Pizza Hut
(10 rows)
```
Note that RTRIM removes any of the characters `P`, `a`, `r`, or `k` when they appear at the end of a VENUENAME\.
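Because *trim\_chars* is treated as a set of characters rather than a literal suffix, the order of the characters doesn't matter\. The following minimal sketch \(using a literal value rather than sample\-database data\) shows the same behavior:
```
select rtrim('redshift', 'tfi');

rtrim
-------
redsh
```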
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_RTRIM.md
|
a4f70345ac9b-0
|
The following limitations apply to nested data:
+ An array can only contain scalars or `struct` types\. `Array` types can't contain `array` or `map` types\.
+ Redshift Spectrum supports complex data types only as external tables\.
+ Query and subquery result columns must be scalar\.
+ If an `OUTER JOIN` expression refers to a nested table, it can refer only to that table and its nested arrays \(and maps\)\. If an `OUTER JOIN` expression doesn't refer to a nested table, it can refer to any number of non\-nested tables\.
+ If a `FROM` clause in a subquery refers to a nested table, it can't refer to any other table\.
+ If a subquery depends on a nested table that refers to a parent, you can use the parent only in the `FROM` clause\. You can't use the parent in any other clauses, such as a `SELECT` or `WHERE` clause\. For example, the following query doesn't run, because the subquery references the parent `c` in its `SELECT` clause\.
```
SELECT c.name.given
FROM spectrum.customers c
WHERE (SELECT COUNT(c.id) FROM c.phones p WHERE p LIKE '858%') > 1
```
The following query works because the parent `c` is used only in the `FROM` clause of the subquery\.
```
SELECT c.name.given
FROM spectrum.customers c
WHERE (SELECT COUNT(*) FROM c.phones p WHERE p LIKE '858%') > 1
```
+ A subquery that accesses nested data anywhere other than the `FROM` clause must return a single value\. The only exceptions are `(NOT) EXISTS` operators in a `WHERE` clause\.
+ `(NOT) IN` is not supported\.
+ The maximum nesting depth for all nested types is 100\. This restriction applies to all file formats \(Parquet, ORC, Ion, and JSON\)\.
+ Aggregation subqueries that access nested data can only refer to `arrays` and `maps` in their `FROM` clause, not to an external table\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/nested-data-restrictions.md
|
dbc7e1cfc1e9-0
|
Amazon Redshift achieves extremely fast query execution by employing these performance features\.
**Topics**
+ [Massively parallel processing](#massively-parallel-processing)
+ [Columnar data storage](#columnar-data-storage)
+ [Data compression](#data-compression)
+ [Query optimizer](#query-optimizer)
+ [Result caching](#result-caching)
+ [Compiled code](#compiled-code)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_challenges_achieving_high_performance_queries.md
|
90a19a6f4d89-0
|
Massively parallel processing \(MPP\) enables fast execution of the most complex queries operating on large amounts of data\. Multiple compute nodes handle all query processing leading up to final result aggregation, with each core of each node executing the same compiled query segments on portions of the entire data\.
Amazon Redshift distributes the rows of a table to the compute nodes so that the data can be processed in parallel\. By selecting an appropriate distribution key for each table, you can optimize the distribution of data to balance the workload and minimize movement of data from node to node\. For more information, see [Choose the best distribution style](c_best-practices-best-dist-key.md)\.
Loading data from flat files takes advantage of parallel processing by spreading the workload across multiple nodes while simultaneously reading from multiple files\. For more information about how to load data into tables, see [Amazon Redshift best practices for loading data](c_loading-data-best-practices.md)\.
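A minimal sketch of declaring a distribution key follows; the table and columns \(`sales_demo`, `salesid`, `buyerid`, `qtysold`\) are hypothetical\. Rows with the same `buyerid` are stored on the same node slice, so joins on that column avoid cross\-node data movement:
```
create table sales_demo (
    salesid int,
    buyerid int,
    qtysold smallint)
distkey (buyerid);
```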
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_challenges_achieving_high_performance_queries.md
|
85ad641512d4-0
|
Columnar storage for database tables drastically reduces the overall disk I/O requirements and is an important factor in optimizing analytic query performance\. Storing database table information in a columnar fashion reduces the number of disk I/O requests and reduces the amount of data you need to load from disk\. Loading less data into memory enables Amazon Redshift to perform more in\-memory processing when executing queries\. See [Columnar storage](c_columnar_storage_disk_mem_mgmnt.md) for a more detailed explanation\.
When columns are sorted appropriately, the query processor is able to rapidly filter out a large subset of data blocks\. For more information, see [Choose the best sort key](c_best-practices-sort-key.md)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_challenges_achieving_high_performance_queries.md
|
8f581c616f05-0
|
Data compression reduces storage requirements, thereby reducing disk I/O, which improves query performance\. When you execute a query, the compressed data is read into memory, then uncompressed during query execution\. Loading less data into memory enables Amazon Redshift to allocate more memory to analyzing the data\. Because columnar storage stores similar data sequentially, Amazon Redshift is able to apply adaptive compression encodings specifically tied to columnar data types\. The best way to enable data compression on table columns is by allowing Amazon Redshift to apply optimal compression encodings when you load the table with data\. To learn more about using automatic data compression, see [Loading tables with automatic compression](c_Loading_tables_auto_compress.md)\.
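As a minimal sketch of triggering automatic compression on load, the following COPY explicitly sets COMPUPDATE \(automatic compression applies when the target table is empty\)\. The table name, bucket path, and role ARN are placeholders:
```
copy sales_demo
from 's3://my-bucket/data/sales/'
iam_role 'arn:aws:iam::123456789012:role/MyRedshiftRole'
compupdate on;
```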
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_challenges_achieving_high_performance_queries.md
|
546cdfc93def-0
|
The Amazon Redshift query execution engine incorporates a query optimizer that is MPP\-aware and also takes advantage of the columnar\-oriented data storage\. The Amazon Redshift query optimizer implements significant enhancements and extensions for processing complex analytic queries that often include multi\-table joins, subqueries, and aggregation\. To learn more about optimizing queries, see [Tuning query performance](c-optimizing-query-performance.md)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_challenges_achieving_high_performance_queries.md
|
5f68172b578b-0
|
To reduce query execution time and improve system performance, Amazon Redshift caches the results of certain types of queries in memory on the leader node\. When a user submits a query, Amazon Redshift checks the results cache for a valid, cached copy of the query results\. If a match is found in the result cache, Amazon Redshift uses the cached results and doesn't execute the query\. Result caching is transparent to the user\.
Result caching is enabled by default\. To disable result caching for the current session, set the [enable\_result\_cache\_for\_session](r_enable_result_cache_for_session.md) parameter to `off`\.
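For example, a minimal sketch of turning the cache off for the current session:
```
set enable_result_cache_for_session to off;
```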
Amazon Redshift uses cached results for a new query when all of the following are true:
+ The user submitting the query has access privilege to the objects used in the query\.
+ The table or views in the query haven't been modified\.
+ The query doesn't use a function that must be evaluated each time it's run, such as GETDATE\.
+ The query doesn't reference Amazon Redshift Spectrum external tables\.
+ Configuration parameters that might affect query results are unchanged\.
+ The query syntactically matches the cached query\.
To maximize cache effectiveness and efficient use of resources, Amazon Redshift doesn't cache some large query result sets\. Amazon Redshift determines whether to cache query results based on a number of factors\. These factors include the number of entries in the cache and the instance type of your Amazon Redshift cluster\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_challenges_achieving_high_performance_queries.md
|
5f68172b578b-1
|
To determine whether a query used the result cache, query the [SVL\_QLOG](r_SVL_QLOG.md) system view\. If a query used the result cache, the source\_query column returns the query ID of the source query\. If result caching wasn't used, the source\_query column value is NULL\.
The following example shows that queries submitted by userid 104 and userid 102 use the result cache from queries run by userid 100\.
```
select userid, query, elapsed, source_query from svl_qlog
where userid > 1
order by query desc;
userid | query | elapsed | source_query
-------+--------+----------+-------------
104 | 629035 | 27 | 628919
104 | 629034 | 60 | 628900
104 | 629033 | 23 | 628891
102 | 629017 | 1229393 |
102 | 628942 | 28 | 628919
102 | 628941 | 57 | 628900
102 | 628940 | 26 | 628891
100 | 628919 | 84295686 |
100 | 628900 | 87015637 |
100 | 628891 | 58808694 |
```
For details about the queries used to create the results shown in the previous example, see [Step 2: Test system performance to establish a baseline](tutorial-tuning-tables-test-performance.md) in the [Tuning Table Design](tutorial-tuning-tables.md) tutorial\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_challenges_achieving_high_performance_queries.md
|
14012cdce5b5-0
|
The leader node distributes fully optimized compiled code across all of the nodes of a cluster\. Compiling the query eliminates the overhead associated with an interpreter and therefore increases the execution speed, especially for complex queries\. The compiled code is cached and shared across sessions on the same cluster, so subsequent executions of the same query will be faster, often even with different parameters\.
The execution engine compiles different code for the JDBC connection protocol and for the ODBC and psql \(libpq\) connection protocols, so two clients using different protocols each incur the first\-time cost of compiling the code\. Other clients that use the same protocol, however, benefit from sharing the cached code\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_challenges_achieving_high_performance_queries.md
|
555385d6f19a-0
|
**Topics**
+ [Serializable isolation](c_serial_isolation.md)
+ [Write and read\-write operations](c_write_readwrite.md)
+ [Concurrent write examples](r_Serializable_isolation_example.md)
Amazon Redshift allows tables to be read while they are being incrementally loaded or modified\.
In some traditional data warehousing and business intelligence applications, the database is available to users only when the nightly load is complete\. In such cases, no updates are allowed during regular work hours, when analytic queries are run and reports are generated\. However, an increasing number of applications remain live for long periods of the day or even all day, making the notion of a load window obsolete\.
Amazon Redshift supports these types of applications by allowing tables to be read while they are being incrementally loaded or modified\. Queries simply see the latest committed version, or *snapshot*, of the data, rather than waiting for the next version to be committed\. If you want a particular query to wait for a commit from another write operation, you have to schedule it accordingly\.
The following topics describe some of the key concepts and use cases that involve transactions, database snapshots, updates, and concurrent behavior\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_Concurrent_writes.md
|
1a53dc0196e6-0
|
**Topics**
+ [LIKE](r_patternmatching_condition_like.md)
+ [SIMILAR TO](pattern-matching-conditions-similar-to.md)
+ [POSIX operators](pattern-matching-conditions-posix.md)
A pattern\-matching operator searches a string for a pattern specified in the conditional expression and returns true or false depending on whether it finds a match\. Amazon Redshift uses three methods for pattern matching:
+ LIKE expressions
The LIKE operator compares a string expression, such as a column name, with a pattern that uses the wildcard characters `%` \(percent\) and `_` \(underscore\)\. LIKE pattern matching always covers the entire string\. LIKE performs a case\-sensitive match and ILIKE performs a case\-insensitive match\.
+ SIMILAR TO regular expressions
The SIMILAR TO operator matches a string expression with a SQL standard regular expression pattern, which can include a set of pattern\-matching metacharacters that includes the two supported by the LIKE operator\. SIMILAR TO matches the entire string and performs a case\-sensitive match\.
+ POSIX\-style regular expressions
POSIX regular expressions provide a more powerful means for pattern matching than the LIKE and SIMILAR TO operators\. POSIX regular expression patterns can match any portion of the string and perform a case\-sensitive match\.
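The following one\-line sketches \(using literal strings rather than sample\-database data\) illustrate the three methods; `t` in a comment indicates that the expression evaluates to true:
```
select 'abcdefg' like 'abc%';          -- t: % matches the rest of the string
select 'abcdefg' ilike 'ABC%';         -- t: ILIKE is case-insensitive
select 'abcdefg' similar to '%(d|x)%'; -- t: pattern must cover the whole string
select 'abcdefg' ~ 'def';              -- t: POSIX match on any portion
```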
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/pattern-matching-conditions.md
|
1a53dc0196e6-1
|
Regular expression matching, using SIMILAR TO or POSIX operators, is computationally expensive\. We recommend using LIKE whenever possible, especially when processing a very large number of rows\. For example, the following queries are functionally identical, but the query that uses LIKE executes several times faster than the query that uses a regular expression:
```
select count(*) from event where eventname SIMILAR TO '%(Ring|Die)%';
select count(*) from event where eventname LIKE '%Ring%' OR eventname LIKE '%Die%';
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/pattern-matching-conditions.md
|